Revision
+
-1. Log in to KubeSphere as `admin`, move the cursor to ![…](/images/docs/v3.3/…) in the lower-right corner, click **kubectl**, and run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
+1. Log in to KubeSphere as `admin`, move the cursor to ![…](/images/docs/v3.x/…) in the lower-right corner, click **kubectl**, and run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
```bash
kubectl -n kubesphere-system edit cc ks-installer
diff --git a/content/en/docs/v3.3/access-control-and-account-management/external-authentication/set-up-external-authentication.md b/content/en/docs/v3.3/access-control-and-account-management/external-authentication/set-up-external-authentication.md
index a9d3b21e5..b9b843566 100644
--- a/content/en/docs/v3.3/access-control-and-account-management/external-authentication/set-up-external-authentication.md
+++ b/content/en/docs/v3.3/access-control-and-account-management/external-authentication/set-up-external-authentication.md
@@ -18,7 +18,7 @@ You need to deploy a Kubernetes cluster and install KubeSphere in the cluster. F
## Procedure
-1. Log in to KubeSphere as `admin`, move the cursor to ![…](/images/docs/v3.3/…) in the lower-right corner, click **kubectl**, and run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
+1. Log in to KubeSphere as `admin`, move the cursor to ![…](/images/docs/v3.x/…) in the lower-right corner, click **kubectl**, and run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
```bash
kubectl -n kubesphere-system edit cc ks-installer
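The command opens the cluster's `ClusterConfiguration` resource in an editor. A minimal sketch of the `spec.authentication` block it exposes, assuming a typical KubeSphere 3.x install (field names may differ slightly by version):

```yaml
spec:
  authentication:
    jwtSecret: ''                 # generated during installation
    multipleLogin: true
    oauthOptions:
      accessTokenMaxAge: 1h
      accessTokenInactivityTimeout: 30m
      identityProviders: []       # external identity providers are declared here
```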
diff --git a/content/en/docs/v3.3/access-control-and-account-management/external-authentication/use-an-ldap-service.md b/content/en/docs/v3.3/access-control-and-account-management/external-authentication/use-an-ldap-service.md
index 0c6192132..5a64f3131 100644
--- a/content/en/docs/v3.3/access-control-and-account-management/external-authentication/use-an-ldap-service.md
+++ b/content/en/docs/v3.3/access-control-and-account-management/external-authentication/use-an-ldap-service.md
@@ -16,7 +16,7 @@ This document describes how to use an LDAP service as an external identity provi
## Procedure
-1. Log in to KubeSphere as `admin`, move the cursor to ![…](/images/docs/v3.3/…) in the lower-right corner, click **kubectl**, and run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
+1. Log in to KubeSphere as `admin`, move the cursor to ![…](/images/docs/v3.x/…) in the lower-right corner, click **kubectl**, and run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
```bash
kubectl -n kubesphere-system edit cc ks-installer
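For the LDAP case, an `LDAPIdentityProvider` entry goes under `spec.authentication.oauthOptions.identityProviders`. A hedged sketch, with a hypothetical host, DNs, and password:

```yaml
spec:
  authentication:
    oauthOptions:
      identityProviders:
        - name: LDAP
          type: LDAPIdentityProvider
          mappingMethod: auto
          provider:
            host: 192.168.0.2:389                # hypothetical LDAP server address
            managerDN: uid=root,cn=users,dc=example
            managerPassword: <password>          # replace with the manager DN password
            userSearchBase: cn=users,dc=example
            loginAttribute: uid
            mailAttribute: mail
```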
diff --git a/content/en/docs/v3.3/access-control-and-account-management/external-authentication/use-an-oauth2-identity-provider.md b/content/en/docs/v3.3/access-control-and-account-management/external-authentication/use-an-oauth2-identity-provider.md
index e7ae81348..356113ab3 100644
--- a/content/en/docs/v3.3/access-control-and-account-management/external-authentication/use-an-oauth2-identity-provider.md
+++ b/content/en/docs/v3.3/access-control-and-account-management/external-authentication/use-an-oauth2-identity-provider.md
@@ -10,7 +10,7 @@ This document describes how to use an external identity provider based on the OA
The following figure shows the authentication process between KubeSphere and an external OAuth 2.0 identity provider.
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
## Prerequisites
@@ -81,7 +81,7 @@ KubeSphere provides two built-in OAuth 2.0 plugins: [GitHubIdentityProvider](htt
## Integrate an Identity Provider with KubeSphere
-1. Log in to KubeSphere as `admin`, move the cursor to ![…](/images/docs/v3.3/…) in the lower-right corner, click **kubectl**, and run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
+1. Log in to KubeSphere as `admin`, move the cursor to ![…](/images/docs/v3.x/…) in the lower-right corner, click **kubectl**, and run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
```bash
kubectl -n kubesphere-system edit cc ks-installer
@@ -126,5 +126,5 @@ KubeSphere provides two built-in OAuth 2.0 plugins: [GitHubIdentityProvider](htt
6. On the login page of the external identity provider, enter the username and password of a user configured at the identity provider to log in to KubeSphere.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
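Configuring the built-in GitHub plugin follows the same pattern: a `GitHubIdentityProvider` entry under `identityProviders`. A sketch under assumed 3.x field names, with placeholder credentials from your GitHub OAuth app:

```yaml
spec:
  authentication:
    oauthOptions:
      identityProviders:
        - name: github
          type: GitHubIdentityProvider
          mappingMethod: auto
          provider:
            clientID: <your-client-id>           # from the GitHub OAuth app
            clientSecret: <your-client-secret>
            redirectURL: http://<console-address>/oauth/redirect/github
```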
diff --git a/content/en/docs/v3.3/access-control-and-account-management/multi-tenancy-in-kubesphere.md b/content/en/docs/v3.3/access-control-and-account-management/multi-tenancy-in-kubesphere.md
index fbe355bf9..5c66bef59 100644
--- a/content/en/docs/v3.3/access-control-and-account-management/multi-tenancy-in-kubesphere.md
+++ b/content/en/docs/v3.3/access-control-and-account-management/multi-tenancy-in-kubesphere.md
@@ -24,7 +24,7 @@ The isolation of physical resources includes nodes and networks, while it also r
To solve the issues above, KubeSphere provides a multi-tenant management solution based on Kubernetes.
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
In KubeSphere, the [workspace](../../workspace-administration/what-is-workspace/) is the smallest tenant unit. A workspace enables users to share resources across clusters and projects. Workspace members can create projects in an authorized cluster and invite other members to cooperate in the same project.
@@ -54,4 +54,4 @@ KubeSphere also provides [auditing logs](../../pluggable-components/auditing-log
For a complete authentication and authorization chain in KubeSphere, see the following diagram. KubeSphere has expanded RBAC rules using the Open Policy Agent (OPA). The KubeSphere team looks to integrate [Gatekeeper](https://github.com/open-policy-agent/gatekeeper) to provide more security management policies.
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
diff --git a/content/en/docs/v3.3/application-store/_index.md b/content/en/docs/v3.3/application-store/_index.md
index 348390088..8c0f1387f 100644
--- a/content/en/docs/v3.3/application-store/_index.md
+++ b/content/en/docs/v3.3/application-store/_index.md
@@ -7,7 +7,7 @@ layout: "second"
linkTitle: "App Store"
weight: 14000
-icon: "/images/docs/v3.3/docs.svg"
+icon: "/images/docs/v3.x/docs.svg"
---
diff --git a/content/en/docs/v3.3/application-store/app-lifecycle-management.md b/content/en/docs/v3.3/application-store/app-lifecycle-management.md
index 7d2cd0003..4499d09f4 100644
--- a/content/en/docs/v3.3/application-store/app-lifecycle-management.md
+++ b/content/en/docs/v3.3/application-store/app-lifecycle-management.md
@@ -132,7 +132,7 @@ After the app is approved, `isv` can release the Redis application to the App St
`app-reviewer` can create multiple categories for different types of applications based on their function and usage. It is similar to setting tags, and categories can be used in the App Store as filters, such as Big Data, Middleware, and IoT.
-1. Log in to KubeSphere as `app-reviewer`. To create a category, go to the **App Store Management** page and click ![…](/images/docs/v3.3/…) in **App Categories**.
+1. Log in to KubeSphere as `app-reviewer`. To create a category, go to the **App Store Management** page and click ![…](/images/docs/v3.x/…) in **App Categories**.
2. Set a name and icon for the category in the dialog, then click **OK**. For Redis, you can enter `Database` for the field **Name**.
diff --git a/content/en/docs/v3.3/application-store/built-in-apps/deploy-chaos-mesh.md b/content/en/docs/v3.3/application-store/built-in-apps/deploy-chaos-mesh.md
index 9e10be832..2041c2eb0 100644
--- a/content/en/docs/v3.3/application-store/built-in-apps/deploy-chaos-mesh.md
+++ b/content/en/docs/v3.3/application-store/built-in-apps/deploy-chaos-mesh.md
@@ -8,7 +8,7 @@ linkTitle: "Deploy Chaos Mesh on KubeSphere"
[Chaos Mesh](https://github.com/chaos-mesh/chaos-mesh) is a cloud-native Chaos Engineering platform that orchestrates chaos in Kubernetes environments. With Chaos Mesh, you can test your system's resilience and robustness on Kubernetes by injecting various types of faults into Pods, network, file system, and even the kernel.
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
## Enable App Store on KubeSphere
@@ -22,34 +22,34 @@ linkTitle: "Deploy Chaos Mesh on KubeSphere"
1. Log in to KubeSphere as `project-regular`, search for **chaos-mesh** in the **App Store**, and click the search result to open the app.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
2. On the **App Information** page, click **Install** in the upper-right corner.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
3. On the **App Settings** page, set the application **Name**, **Location** (your namespace), and **App Version**, and then click **Next** in the upper-right corner.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
4. Configure the `values.yaml` file as needed, or click **Install** to use the default configuration.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
5. Wait for the deployment to finish. Upon completion, Chaos Mesh will be shown as **Running** in KubeSphere.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
### Step 2: Visit Chaos Dashboard
1. On the **Resource Status** page, copy the **NodePort** of `chaos-dashboard`.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
2. Access the Chaos Dashboard by entering `${NodeIP}:${NODEPORT}` in your browser. Refer to [Manage User Permissions](https://chaos-mesh.org/docs/manage-user-permissions/) to generate a token and log in to Chaos Dashboard.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
### Step 3: Create a chaos experiment
@@ -63,20 +63,20 @@ curl -sSL https://mirrors.chaos-mesh.org/latest/web-show/deploy.sh | bash
1. From your web browser, visit `${NodeIP}:8081` to access the **Web Show** application.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
2. Log in to Chaos Dashboard to create a chaos experiment. To observe the effect of network latency on the application, we set the **Target** to "Network Attack" to simulate a network delay scenario.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
The **Scope** of the experiment is set to `app: web-show`.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
3. Start the chaos experiment by submitting it.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
Now, you should be able to visit **Web Show** to observe experiment results:
-![…](/images/docs/v3.3/…)
\ No newline at end of file
+![…](/images/docs/v3.x/…)
\ No newline at end of file
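The delay experiment created in the dashboard corresponds to a `NetworkChaos` resource. A rough YAML equivalent of the scenario above (the latency and duration values are illustrative):

```yaml
apiVersion: chaos-mesh.org/v1alpha1
kind: NetworkChaos
metadata:
  name: web-show-network-delay
spec:
  action: delay                 # inject network latency
  mode: one                     # pick one Pod matching the selector
  selector:
    labelSelectors:
      app: web-show             # the Scope set in the dashboard
  delay:
    latency: 10ms
  duration: 30s
```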
diff --git a/content/en/docs/v3.3/application-store/built-in-apps/harbor-app.md b/content/en/docs/v3.3/application-store/built-in-apps/harbor-app.md
index 39e0f9c1f..97db9778a 100644
--- a/content/en/docs/v3.3/application-store/built-in-apps/harbor-app.md
+++ b/content/en/docs/v3.3/application-store/built-in-apps/harbor-app.md
@@ -48,7 +48,7 @@ This tutorial walks you through an example of deploying [Harbor](https://goharbo
1. Based on the field `expose.type` you set in the configuration file, the access method may be different. As this example uses `nodePort` to access Harbor, visit `http://…`.
-Click ![…](/images/docs/v3.3/…) on the right of a project gateway to select an operation from the drop-down menu:
+Click ![…](/images/docs/v3.x/…) on the right of a project gateway to select an operation from the drop-down menu:
- **Edit**: Edit configurations of the project gateway.
- **Disable**: Disable the project gateway.
diff --git a/content/en/docs/v3.3/cluster-administration/cluster-wide-alerting-and-notification/alerting-policy.md b/content/en/docs/v3.3/cluster-administration/cluster-wide-alerting-and-notification/alerting-policy.md
index 33d379370..97e843ac6 100644
--- a/content/en/docs/v3.3/cluster-administration/cluster-wide-alerting-and-notification/alerting-policy.md
+++ b/content/en/docs/v3.3/cluster-administration/cluster-wide-alerting-and-notification/alerting-policy.md
@@ -48,7 +48,7 @@ KubeSphere also has built-in policies which will trigger alerts if conditions de
## Edit an Alerting Policy
-To edit an alerting policy after it is created, on the **Alerting Policies** page, click ![…](/images/docs/v3.3/…) on the right of the alerting policy.
+To edit an alerting policy after it is created, on the **Alerting Policies** page, click ![…](/images/docs/v3.x/…) on the right of the alerting policy.
1. Click **Edit** from the drop-down list and edit the alerting policy following the same steps as when you created it. Click **OK** on the **Message Settings** page to save it.
@@ -62,8 +62,8 @@ Under **Monitoring**, the **Alert Monitoring** chart shows the actual usage or a
{{< notice note >}}
-You can click ![…](/images/docs/v3.3/…) on the top navigation bar.
+1. Log in to the Nexus console as `admin` and click ![…](/images/docs/v3.x/…) on the top navigation bar.
2. Go to the **Repositories** page and you can see that Nexus provides three types of repositories.
@@ -37,9 +37,9 @@ This tutorial demonstrates how to use Nexus in pipelines on KubeSphere.
2. In your own GitHub repository of **learn-pipeline-java**, click the file `pom.xml` in the root directory.
-3. Click ![…](/images/docs/v3.3/…) to edit the file. For example, change the value of `spec.replicas` to `3`.
+3. Click ![…](/images/docs/v3.x/…) to edit the file. For example, change the value of `spec.replicas` to `3`.
4. Click **Commit changes** at the bottom of the page.
diff --git a/content/en/docs/v3.3/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md b/content/en/docs/v3.3/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md
index fcdb34cec..e97bcc9af 100644
--- a/content/en/docs/v3.3/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md
+++ b/content/en/docs/v3.3/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md
@@ -89,16 +89,16 @@ The following briefly introduces the CI and CI & CD pipeline templates.
- CI pipeline template
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
The CI pipeline template contains two stages. The **clone code** stage checks out code and the **build & push** stage builds an image and pushes it to Docker Hub. You need to create credentials for your code repository and your Docker Hub registry in advance, and then set the URL of your repository and these credentials in corresponding steps. After you finish editing, the pipeline is ready to run.
- CI & CD pipeline template
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
The CI & CD pipeline template contains six stages. For more information about each stage, refer to [Create a Pipeline Using a Jenkinsfile](../create-a-pipeline-using-jenkinsfile/#pipeline-overview), where you can find similar stages and the descriptions. You need to create credentials for your code repository, your Docker Hub registry, and the kubeconfig of your cluster in advance, and then set the URL of your repository and these credentials in corresponding steps. After you finish editing, the pipeline is ready to run.
\ No newline at end of file
diff --git a/content/en/docs/v3.3/faq/_index.md b/content/en/docs/v3.3/faq/_index.md
index 753d10890..624319ca4 100644
--- a/content/en/docs/v3.3/faq/_index.md
+++ b/content/en/docs/v3.3/faq/_index.md
@@ -6,7 +6,7 @@ layout: "second"
linkTitle: "FAQ"
weight: 16000
-icon: "/images/docs/v3.3/docs.svg"
+icon: "/images/docs/v3.x/docs.svg"
---
This chapter answers and summarizes the questions users ask most frequently about KubeSphere. You can find these questions and answers in their respective sections which are grouped based on KubeSphere functions.
diff --git a/content/en/docs/v3.3/faq/access-control/add-kubernetes-namespace-to-kubesphere-workspace.md b/content/en/docs/v3.3/faq/access-control/add-kubernetes-namespace-to-kubesphere-workspace.md
index e887f6b72..f192366b2 100644
--- a/content/en/docs/v3.3/faq/access-control/add-kubernetes-namespace-to-kubesphere-workspace.md
+++ b/content/en/docs/v3.3/faq/access-control/add-kubernetes-namespace-to-kubesphere-workspace.md
@@ -30,7 +30,7 @@ For more information about creating a Kubernetes namespace, see [Namespaces Walk
1. Log in to the KubeSphere console as `admin` and go to the **Cluster Management** page. Click **Projects**, and you can see all your projects running on the current cluster, including the one just created.
-2. The namespace created through kubectl does not belong to any workspace. Click ![…](/images/docs/v3.3/…) on the right and select **Assign Workspace**.
+2. The namespace created through kubectl does not belong to any workspace. Click ![…](/images/docs/v3.x/…) on the right and select **Assign Workspace**.
3. In the dialog that appears, select a **Workspace** and a **Project Administrator** for the project and click **OK**.
diff --git a/content/en/docs/v3.3/faq/access-control/cannot-login.md b/content/en/docs/v3.3/faq/access-control/cannot-login.md
index ba0d8bd39..357c1a1b8 100644
--- a/content/en/docs/v3.3/faq/access-control/cannot-login.md
+++ b/content/en/docs/v3.3/faq/access-control/cannot-login.md
@@ -14,7 +14,7 @@ Here are some of the frequently asked questions about user login failure.
You may see an image below when the login fails. To find out the reason and solve the issue, perform the following steps:
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
1. Execute the following command to check the status of the user.
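The command is not shown in this hunk; a likely form of the check, assuming the `users.iam.kubesphere.io` CRD of a standard KubeSphere install:

```bash
# Inspect the user object; the state indicates whether the user is Active or blocked.
kubectl get users.iam.kubesphere.io <username> -o jsonpath='{.status.state}'
```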
@@ -86,7 +86,7 @@ kubectl -n kubesphere-system get deploy ks-controller-manager -o jsonpath='{.spe
## Wrong Username or Password
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
Run the following command to verify that the username and the password are correct.
diff --git a/content/en/docs/v3.3/faq/console/change-console-language.md b/content/en/docs/v3.3/faq/console/change-console-language.md
index 808641417..9e1e2d055 100644
--- a/content/en/docs/v3.3/faq/console/change-console-language.md
+++ b/content/en/docs/v3.3/faq/console/change-console-language.md
@@ -22,4 +22,4 @@ You have installed KubeSphere.
3. On the **Basic Information** page, select a desired language from the **Language** drop-down list.
-4. Click ![…](/images/docs/v3.3/…) on the right of `ks-installer` and select **Edit YAML**.
+4. Click ![…](/images/docs/v3.x/…) on the right of `ks-installer` and select **Edit YAML**.
5. Scroll down to the bottom of the file, add `telemetry_enabled: false`, and then click **OK**.
diff --git a/content/en/docs/v3.3/installing-on-kubernetes/_index.md b/content/en/docs/v3.3/installing-on-kubernetes/_index.md
index f83d3d1f3..6d079ef2d 100644
--- a/content/en/docs/v3.3/installing-on-kubernetes/_index.md
+++ b/content/en/docs/v3.3/installing-on-kubernetes/_index.md
@@ -6,7 +6,7 @@ layout: "second"
linkTitle: "Installing on Kubernetes"
weight: 4000
-icon: "/images/docs/v3.3/docs.svg"
+icon: "/images/docs/v3.x/docs.svg"
---
This chapter demonstrates how to deploy KubeSphere on existing Kubernetes clusters hosted on cloud or on-premises. As a highly flexible solution to container orchestration, KubeSphere can be deployed across various Kubernetes engines.
@@ -15,14 +15,14 @@ This chapter demonstrates how to deploy KubeSphere on existing Kubernetes cluste
Below you will find some of the most viewed and helpful pages in this chapter. It is highly recommended that you refer to them first.
-{{< popularPage icon="/images/docs/v3.3/brand-icons/gke.jpg" title="Deploy KubeSphere on GKE" description="Provision KubeSphere on existing Kubernetes clusters on GKE." link="../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-gke/" >}}
+{{< popularPage icon="/images/docs/v3.x/brand-icons/gke.jpg" title="Deploy KubeSphere on GKE" description="Provision KubeSphere on existing Kubernetes clusters on GKE." link="../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-gke/" >}}
-{{< popularPage icon="/images/docs/v3.3/bitmap.jpg" title="Deploy KubeSphere on AWS EKS" description="Provision KubeSphere on existing Kubernetes clusters on EKS." link="../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks/" >}}
+{{< popularPage icon="/images/docs/v3.x/bitmap.jpg" title="Deploy KubeSphere on AWS EKS" description="Provision KubeSphere on existing Kubernetes clusters on EKS." link="../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks/" >}}
-{{< popularPage icon="/images/docs/v3.3/brand-icons/aks.jpg" title="Deploy KubeSphere on AKS" description="Provision KubeSphere on existing Kubernetes clusters on AKS." link="../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks/" >}}
+{{< popularPage icon="/images/docs/v3.x/brand-icons/aks.jpg" title="Deploy KubeSphere on AKS" description="Provision KubeSphere on existing Kubernetes clusters on AKS." link="../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks/" >}}
-{{< popularPage icon="/images/docs/v3.3/brand-icons/huawei.svg" title="Deploy KubeSphere on CCE" description="Provision KubeSphere on existing Kubernetes clusters on Huawei CCE." link="../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce/" >}}
+{{< popularPage icon="/images/docs/v3.x/brand-icons/huawei.svg" title="Deploy KubeSphere on CCE" description="Provision KubeSphere on existing Kubernetes clusters on Huawei CCE." link="../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce/" >}}
-{{< popularPage icon="/images/docs/v3.3/brand-icons/oracle.jpg" title="Deploy KubeSphere on Oracle OKE" description="Provision KubeSphere on existing Kubernetes clusters on OKE." link="../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-oke/" >}}
+{{< popularPage icon="/images/docs/v3.x/brand-icons/oracle.jpg" title="Deploy KubeSphere on Oracle OKE" description="Provision KubeSphere on existing Kubernetes clusters on OKE." link="../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-oke/" >}}
-{{< popularPage icon="/images/docs/v3.3/brand-icons/digital-ocean.jpg" title="Deploy KubeSphere on DO" description="Provision KubeSphere on existing Kubernetes clusters on DigitalOcean." link="../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do/" >}}
+{{< popularPage icon="/images/docs/v3.x/brand-icons/digital-ocean.jpg" title="Deploy KubeSphere on DO" description="Provision KubeSphere on existing Kubernetes clusters on DigitalOcean." link="../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do/" >}}
diff --git a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md
index 9e12fa160..b85bf9c35 100644
--- a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md
+++ b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md
@@ -16,11 +16,11 @@ Azure can help you implement infrastructure as code by providing resource deploy
You don't have to install Azure CLI on your machine as Azure provides a web-based terminal. Click the Cloud Shell button on the menu bar at the upper-right corner in Azure portal.
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
Select **Bash** Shell.
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
### Create a Resource Group
@@ -62,15 +62,15 @@ aks-nodepool1-23754246-vmss000000 Ready agent 38m v1.16.13
After you execute all the commands above, you can see there are 2 Resource Groups created in Azure Portal.
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
The Azure Kubernetes Service cluster itself will be placed in `KubeSphereRG`.
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
All other resources, such as VMs, load balancers, and virtual networks, will be placed in `MC_KubeSphereRG_KuberSphereCluster_westus`.
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
## Deploy KubeSphere on AKS
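The deployment itself uses the standard ks-installer manifests; the version tag below is an assumption (v3.3.2) — use the release that matches these docs:

```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml
```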
diff --git a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md
index 9eeb34aa2..79673ff9b 100644
--- a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md
+++ b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md
@@ -6,7 +6,7 @@ description: 'Learn how to deploy KubeSphere on DigitalOcean.'
weight: 4230
---
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
This guide walks you through the steps of deploying KubeSphere on [DigitalOcean Kubernetes](https://www.digitalocean.com/products/kubernetes/).
@@ -14,7 +14,7 @@ This guide walks you through the steps of deploying KubeSphere on [DigitalOcean
A Kubernetes cluster in DO is a prerequisite for installing KubeSphere. Go to your [DO account](https://cloud.digitalocean.com/) and refer to the image below to create a cluster from the navigation menu.
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
You need to select:
@@ -24,7 +24,7 @@ You need to select:
4. Cluster capacity (for example, 2 standard nodes with 2 vCPUs and 4GB of RAM each)
5. A name for the cluster (for example, *kubesphere-3*)
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
{{< notice note >}}
@@ -36,7 +36,7 @@ You need to select:
When the cluster is ready, you can download the config file for kubectl.
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
## Install KubeSphere on DOKS
@@ -82,23 +82,23 @@ Now that KubeSphere is installed, you can access the web console of KubeSphere b
- Go to the Kubernetes Dashboard provided by DigitalOcean.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
- Select the **kubesphere-system** namespace.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
- In **Services** under **Service**, edit the service **ks-console**.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
- Change the type from `NodePort` to `LoadBalancer`. Save the file when you finish.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
- Access the KubeSphere web console using the endpoint generated by DO.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
{{< notice tip >}}
diff --git a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks.md b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks.md
index 4cab20ec0..95a9078cc 100644
--- a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks.md
+++ b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks.md
@@ -17,15 +17,15 @@ pip3 install awscli --upgrade --user
```
Check the installation with `aws --version`.
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
## Prepare an EKS Cluster
1. A standard Kubernetes cluster in AWS is a prerequisite for installing KubeSphere. Go to the navigation menu and refer to the image below to create a cluster.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
2. On the **Configure cluster** page, fill in the following fields:
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
- Name: A unique name for your cluster.
@@ -40,7 +40,7 @@ Check the installation with `aws --version`.
- Tags (Optional): Add any tags to your cluster. For more information, see [Tagging your Amazon EKS resources](https://docs.aws.amazon.com/eks/latest/userguide/eks-using-tags.html).
3. Select **Next**. On the **Specify networking** page, select values for the following fields:
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
- VPC: The VPC that you created previously in [Create your Amazon EKS cluster VPC](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#vpc-create). You can find the name of your VPC in the drop-down list.
@@ -49,7 +49,7 @@ Check the installation with `aws --version`.
- Security groups: The SecurityGroups value from the AWS CloudFormation output that you generated with [Create your Amazon EKS cluster VPC](https://docs.aws.amazon.com/eks/latest/userguide/getting-started-console.html#vpc-create). This security group has ControlPlaneSecurityGroup in the drop-down name.
- For **Cluster endpoint access**, choose one of the following options:
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
- Public: Enables only public access to your cluster's Kubernetes API server endpoint. Kubernetes API requests that originate from outside of your cluster's VPC use the public endpoint. By default, access is allowed from any source IP address. You can optionally restrict access to one or more CIDR ranges such as 192.168.0.0/16, for example, by selecting **Advanced settings** and then selecting **Add source**.
@@ -62,20 +62,20 @@ Check the installation with `aws --version`.
- Public and private: Enables public and private access.
4. Select **Next**. On the **Configure logging** page, you can optionally choose which log types that you want to enable. By default, each log type is **Disabled**. For more information, see [Amazon EKS control plane logging](https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html).
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
5. Select **Next**. On the **Review and create** page, review the information that you entered or selected on the previous pages. Select **Edit** if you need to make changes to any of your selections. Once you're satisfied with your settings, select **Create**. The **Status** field shows **CREATING** until the cluster provisioning process completes.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
- For more information about the previous options, see [Modifying cluster endpoint access](https://docs.aws.amazon.com/eks/latest/userguide/cluster-endpoint.html#modify-endpoint-access).
When your cluster provisioning is complete (usually between 10 and 15 minutes), save the API server endpoint and Certificate authority values. These are used in your kubectl configuration.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
6. Create **Node Group** and define 3 nodes in this cluster.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
7. Configure the node group.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
{{< notice note >}}
@@ -166,10 +166,10 @@ Now that KubeSphere is installed, you can access the web console of KubeSphere b
```
- Edit the configuration of the service **ks-console** by executing `kubectl edit svc ks-console -n kubesphere-system` and change `type` from `NodePort` to `LoadBalancer`. Save the file when you finish.
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
- Run `kubectl get svc -n kubesphere-system` and get your external IP.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
- Access the web console of KubeSphere using the external IP generated by EKS.
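A non-interactive equivalent of the edit above, sketched as a one-liner:

```bash
# Switch ks-console to a LoadBalancer service, then read the external address.
kubectl -n kubesphere-system patch svc ks-console -p '{"spec":{"type":"LoadBalancer"}}'
kubectl get svc -n kubesphere-system
```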
diff --git a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce.md b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce.md
index 81b5d9f17..53bee466e 100644
--- a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce.md
+++ b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce.md
@@ -23,7 +23,7 @@ First, create a Kubernetes cluster based on the requirements below.
- Go to **Resource Management** > **Cluster Management** > **Basic Information** > **Network**, and bind `Public apiserver`.
- Select **kubectl** on the right column, go to **Download kubectl configuration file**, and click **Click here to download**, then you will get a public key for kubectl.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
After you get the configuration file for kubectl, use kubectl command line to verify the connection to the cluster.
@@ -83,7 +83,7 @@ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3
Go to **Workload** > **Pod**, and check the running status of the Pods in the `kubesphere-system` namespace to follow the minimal deployment of KubeSphere. Check the `ks-console-xxxx` Pod to confirm that the KubeSphere console is available.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
### Expose KubeSphere Console
@@ -91,11 +91,11 @@ Check the running status of Pods in `kubesphere-system` namespace and make sure
Go to **Resource Management** > **Network** and choose the `ks-console` service. It is suggested that you choose `LoadBalancer` (a public IP is required). The configuration is shown below.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
The default settings are fine for the other options. You can also adjust them based on your needs.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
After you set LoadBalancer for KubeSphere console, you can visit it via the given address. Go to KubeSphere login page and use the default account (username `admin` and password `P@88w0rd`) to log in.
diff --git a/content/en/docs/v3.3/installing-on-kubernetes/introduction/overview.md b/content/en/docs/v3.3/installing-on-kubernetes/introduction/overview.md
index 4e93b072d..513fc2043 100644
--- a/content/en/docs/v3.3/installing-on-kubernetes/introduction/overview.md
+++ b/content/en/docs/v3.3/installing-on-kubernetes/introduction/overview.md
@@ -6,7 +6,7 @@ linkTitle: "Overview"
weight: 4110
---
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
As part of KubeSphere's commitment to provide a plug-and-play architecture for users, it can be easily installed on existing Kubernetes clusters. More specifically, KubeSphere can be deployed on Kubernetes either hosted on clouds (for example, AWS EKS, QingCloud QKE and Google GKE) or on-premises. This is because KubeSphere does not hack Kubernetes itself. It only interacts with the Kubernetes API to manage Kubernetes cluster resources. In other words, KubeSphere can be installed on any native Kubernetes cluster and Kubernetes distribution.
@@ -48,7 +48,7 @@ After you make sure your existing Kubernetes cluster meets all the requirements,
4. Make sure port 30880 is opened in security groups and access the web console through the NodePort (`IP:30880`) with the default account and password (`admin/P@88w0rd`).
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
## Enable Pluggable Components (Optional)
diff --git a/content/en/docs/v3.3/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md b/content/en/docs/v3.3/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md
index af93374bb..2e3dfa4f6 100644
--- a/content/en/docs/v3.3/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md
+++ b/content/en/docs/v3.3/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md
@@ -30,7 +30,7 @@ You can use Harbor or any other private image registries. This tutorial uses Doc
2. Make sure you specify a domain name in the field `Common Name` when you are generating your own certificate. For instance, the field is set to `dockerhub.kubekey.local` in this example.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
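A typical way to generate such a self-signed certificate (paths and validity period are illustrative; answer `dockerhub.kubekey.local` at the Common Name prompt):

```bash
mkdir -p certs
openssl req \
  -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key \
  -x509 -days 36500 -out certs/domain.crt
```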
### Start the Docker registry
diff --git a/content/en/docs/v3.3/installing-on-linux/_index.md b/content/en/docs/v3.3/installing-on-linux/_index.md
index f9a72d257..42bd83214 100644
--- a/content/en/docs/v3.3/installing-on-linux/_index.md
+++ b/content/en/docs/v3.3/installing-on-linux/_index.md
@@ -6,7 +6,7 @@ layout: "second"
linkTitle: "Installing on Linux"
weight: 3000
-icon: "/images/docs/v3.3/docs.svg"
+icon: "/images/docs/v3.x/docs.svg"
---
This chapter demonstrates how to use KubeKey to provision a production-ready Kubernetes and KubeSphere cluster on Linux in different environments. You can also use KubeKey to easily scale out and in your cluster and set various storage classes based on your needs.
@@ -14,4 +14,4 @@ This chapter demonstrates how to use KubeKey to provision a production-ready Kub
Below you will find some of the most viewed and helpful pages in this chapter. It is highly recommended that you refer to them first.
-{{< popularPage icon="/images/docs/v3.3/qingcloud-2.svg" title="Deploy KubeSphere on QingCloud" description="Provision an HA KubeSphere cluster on QingCloud." link="../installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms/" >}}
+{{< popularPage icon="/images/docs/v3.x/qingcloud-2.svg" title="Deploy KubeSphere on QingCloud" description="Provision an HA KubeSphere cluster on QingCloud." link="../installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms/" >}}
diff --git a/content/en/docs/v3.3/installing-on-linux/cluster-operation/add-edge-nodes.md b/content/en/docs/v3.3/installing-on-linux/cluster-operation/add-edge-nodes.md
index 08d6cd7da..32e6d1338 100644
--- a/content/en/docs/v3.3/installing-on-linux/cluster-operation/add-edge-nodes.md
+++ b/content/en/docs/v3.3/installing-on-linux/cluster-operation/add-edge-nodes.md
@@ -8,7 +8,7 @@ weight: 3630
KubeSphere leverages [KubeEdge](https://kubeedge.io/en/) to extend native containerized application orchestration capabilities to hosts at the edge. With separate cloud and edge core modules, KubeEdge provides a complete edge computing solution, though installation can be complex and difficult.
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
{{< notice note >}}
@@ -129,7 +129,7 @@ Perform the following steps to configure [EdgeMesh](https://kubeedge.io/en/docs/
3. Click **Add**. In the dialog that appears, set a node name and enter an internal IP address of your edge node. Click **Validate** to continue.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
{{< notice note >}}
@@ -140,7 +140,7 @@ Perform the following steps to configure [EdgeMesh](https://kubeedge.io/en/docs/
4. Copy the command automatically created under **Edge Node Configuration Command** and run it on your edge node.
- ![…](/images/docs/v3.3/…)
+ ![…](/images/docs/v3.x/…)
{{< notice note >}}
@@ -166,7 +166,7 @@ To collect monitoring information on edge node, you need to enable `metrics_serv
3. In the search bar on the right pane, enter `clusterconfiguration`, and click the result to go to its details page.
-4. Click ![…](/images/docs/v3.3/…) on the right of the member cluster, and click **Update KubeConfig**.
+2. On the **Cluster Management** page, click ![…](/images/docs/v3.x/…) on the right of the member cluster, and click **Update KubeConfig**.
3. In the **Update KubeConfig** dialog box that is displayed, enter the new kubeconfig, and click **Update**.
diff --git a/content/en/docs/v3.3/multicluster-management/import-cloud-hosted-k8s/import-aliyun-ack.md b/content/en/docs/v3.3/multicluster-management/import-cloud-hosted-k8s/import-aliyun-ack.md
index abac113c6..f740d70f4 100644
--- a/content/en/docs/v3.3/multicluster-management/import-cloud-hosted-k8s/import-aliyun-ack.md
+++ b/content/en/docs/v3.3/multicluster-management/import-cloud-hosted-k8s/import-aliyun-ack.md
@@ -33,7 +33,7 @@ This tutorial demonstrates how to import an Alibaba Cloud Kubernetes (ACK) clust
3. Go to **CRDs**, enter `ClusterConfiguration` in the search bar, and then press **Enter** on your keyboard. Click **ClusterConfiguration** to go to its detail page.
-4. Click ![…](/images/docs/v3.3/…) on the right and then select **Edit YAML** to edit `ks-installer`.
+4. Click ![…](/images/docs/v3.x/…) on the right and then select **Edit YAML** to edit `ks-installer`.
5. In the YAML file of `ks-installer`, change the value of `jwtSecret` to the corresponding value shown above and set the value of `clusterRole` to `member`. Click **Update** to save your changes.
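The relevant fragment of the edited `ClusterConfiguration`, sketched with a placeholder secret (it must match the `jwtSecret` of the host cluster):

```yaml
spec:
  authentication:
    jwtSecret: <host-cluster-jwtSecret>   # placeholder; copy the value from the host cluster
  multicluster:
    clusterRole: member
```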
@@ -57,7 +57,7 @@ This tutorial demonstrates how to import an Alibaba Cloud Kubernetes (ACK) clust
Log in to the web console of Alibaba Cloud. Go to **Clusters** under **Container Service - Kubernetes**, click your cluster to go to its detail page, and then select the **Connection Information** tab. You can see the kubeconfig file under the **Public Access** tab. Copy the contents of the kubeconfig file.
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
### Step 3: Import the ACK member cluster
diff --git a/content/en/docs/v3.3/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md b/content/en/docs/v3.3/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md
index c1dc96bf9..02b407333 100644
--- a/content/en/docs/v3.3/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md
+++ b/content/en/docs/v3.3/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md
@@ -37,7 +37,7 @@ You need to deploy KubeSphere on your EKS cluster first. For more information ab
3. Go to **CRDs**, enter `ClusterConfiguration` in the search bar, and then press **Enter** on your keyboard. Click **ClusterConfiguration** to go to its detail page.
-4. Click ![…](/images/docs/v3.3/…) on the right and then select **Edit YAML** to edit `ks-installer`.
+4. Click ![…](/images/docs/v3.x/…) on the right and then select **Edit YAML** to edit `ks-installer`.
5. In the YAML file of `ks-installer`, change the value of `jwtSecret` to the corresponding value shown above and set the value of `clusterRole` to `member`. Click **Update** to save your changes.
diff --git a/content/en/docs/v3.3/multicluster-management/import-cloud-hosted-k8s/import-gke.md b/content/en/docs/v3.3/multicluster-management/import-cloud-hosted-k8s/import-gke.md
index cae855811..c09be54cc 100644
--- a/content/en/docs/v3.3/multicluster-management/import-cloud-hosted-k8s/import-gke.md
+++ b/content/en/docs/v3.3/multicluster-management/import-cloud-hosted-k8s/import-gke.md
@@ -37,7 +37,7 @@ You need to deploy KubeSphere on your GKE cluster first. For more information ab
3. Go to **CRDs**, enter `ClusterConfiguration` in the search bar, and then press **Enter** on your keyboard. Click **ClusterConfiguration** to go to its detail page.
-4. Click ![…](/images/docs/v3.3/…) on the right and then select **Edit YAML** to edit `ks-installer`.
+4. Click ![…](/images/docs/v3.x/…) on the right and then select **Edit YAML** to edit `ks-installer`.
5. In the YAML file of `ks-installer`, change the value of `jwtSecret` to the corresponding value shown above and set the value of `clusterRole` to `member`.
diff --git a/content/en/docs/v3.3/multicluster-management/introduction/kubefed-in-kubesphere.md b/content/en/docs/v3.3/multicluster-management/introduction/kubefed-in-kubesphere.md
index d687f98ec..4564770b5 100644
--- a/content/en/docs/v3.3/multicluster-management/introduction/kubefed-in-kubesphere.md
+++ b/content/en/docs/v3.3/multicluster-management/introduction/kubefed-in-kubesphere.md
@@ -16,7 +16,7 @@ There can only be one host cluster while multiple member clusters can exist at t
If you are using on-premises Kubernetes clusters built through kubeadm, install KubeSphere on your Kubernetes clusters by referring to [Air-gapped Installation on Kubernetes](../../../installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped/), and then enable KubeSphere multi-cluster management through direct connection or agent connection.
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
## Vendor Agnostic
diff --git a/content/en/docs/v3.3/multicluster-management/introduction/overview.md b/content/en/docs/v3.3/multicluster-management/introduction/overview.md
index 8568c836e..beb6d19b2 100644
--- a/content/en/docs/v3.3/multicluster-management/introduction/overview.md
+++ b/content/en/docs/v3.3/multicluster-management/introduction/overview.md
@@ -12,4 +12,4 @@ The most common use cases of multi-cluster management include service traffic lo
KubeSphere is developed to address multi-cluster and multi-cloud management challenges, including the scenarios mentioned above. It provides users with a unified control plane to distribute applications and its replicas to multiple clusters from public cloud to on-premises environments. KubeSphere also boasts rich observability across multiple clusters including centralized monitoring, logging, events, and auditing logs.
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
diff --git a/content/en/docs/v3.3/multicluster-management/unbind-cluster.md b/content/en/docs/v3.3/multicluster-management/unbind-cluster.md
index e6dc92b65..3d28fa476 100644
--- a/content/en/docs/v3.3/multicluster-management/unbind-cluster.md
+++ b/content/en/docs/v3.3/multicluster-management/unbind-cluster.md
@@ -21,7 +21,7 @@ You can remove a cluster by using either of the following methods:
1. Click **Platform** in the upper-left corner and select **Cluster Management**.
-2. In the **Member Clusters** area, click ![…](/images/docs/v3.3/…) on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click ![…](/images/docs/v3.x/…) on the right of `ks-installer` and select **Edit YAML**.
4. In this YAML file, navigate to `alerting` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
@@ -89,7 +89,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{< notice note >}}
-You can find the web kubectl tool by clicking ![…](/images/docs/v3.3/…) in the lower-right corner of the console.
+You can find the web kubectl tool by clicking ![…](/images/docs/v3.x/…) in the lower-right corner of the console.
{{</ notice >}}
## Verify the Installation of the Component
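The usual verification is to follow the installer logs until the component becomes ready; a sketch of the standard command:

```bash
# Stream ks-installer logs; installation is done when the "Welcome to KubeSphere" banner appears.
kubectl logs -n kubesphere-system \
  $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
```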
diff --git a/content/en/docs/v3.3/pluggable-components/app-store.md b/content/en/docs/v3.3/pluggable-components/app-store.md
index 09c41607f..fcf03b76a 100644
--- a/content/en/docs/v3.3/pluggable-components/app-store.md
+++ b/content/en/docs/v3.3/pluggable-components/app-store.md
@@ -80,7 +80,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{</ notice >}}
-3. In **Custom Resources**, click ![…](/images/docs/v3.3/…) on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click ![…](/images/docs/v3.x/…) on the right of `ks-installer` and select **Edit YAML**.
4. In this YAML file, search for `openpitrix` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
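In v3.3 the App Store switch lives under `openpitrix.store`; the edit sketched in YAML:

```yaml
openpitrix:
  store:
    enabled: true   # was false; enables the App Store (OpenPitrix)
```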
@@ -98,7 +98,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{< notice note >}}
-You can find the web kubectl tool by clicking ![…](/images/docs/v3.3/…) in the lower-right corner of the console.
+You can find the web kubectl tool by clicking ![…](/images/docs/v3.x/…) in the lower-right corner of the console.
{{</ notice >}}
diff --git a/content/en/docs/v3.3/pluggable-components/auditing-logs.md b/content/en/docs/v3.3/pluggable-components/auditing-logs.md
index 36f691433..76d1344c1 100644
--- a/content/en/docs/v3.3/pluggable-components/auditing-logs.md
+++ b/content/en/docs/v3.3/pluggable-components/auditing-logs.md
@@ -106,7 +106,7 @@ By default, ks-installer will install Elasticsearch internally if Auditing is en
A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes objects.
{{</ notice >}}
-3. In **Custom Resources**, click ![…](/images/docs/v3.3/…) on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click ![…](/images/docs/v3.x/…) on the right of `ks-installer` and select **Edit YAML**.
4. In this YAML file, navigate to `auditing` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
@@ -139,7 +139,7 @@ By default, Elasticsearch will be installed internally if Auditing is enabled. F
{{< notice note >}}
-You can find the web kubectl tool by clicking ![…](/images/docs/v3.3/…) in the lower-right corner of the console.
+You can find the web kubectl tool by clicking ![…](/images/docs/v3.x/…) in the lower-right corner of the console.
{{</ notice >}}
## Verify the Installation of the Component
diff --git a/content/en/docs/v3.3/pluggable-components/devops.md b/content/en/docs/v3.3/pluggable-components/devops.md
index a090184a1..ac85c81e4 100644
--- a/content/en/docs/v3.3/pluggable-components/devops.md
+++ b/content/en/docs/v3.3/pluggable-components/devops.md
@@ -78,7 +78,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{</ notice >}}
-3. In **Custom Resources**, click ![…](/images/docs/v3.3/…) on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click ![…](/images/docs/v3.x/…) on the right of `ks-installer` and select **Edit YAML**.
4. In this YAML file, search for `devops` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
@@ -95,7 +95,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{< notice note >}}
-You can find the web kubectl tool by clicking ![…](/images/docs/v3.3/…) in the lower-right corner of the console.
+You can find the web kubectl tool by clicking ![…](/images/docs/v3.x/…) in the lower-right corner of the console.
{{</ notice >}}
diff --git a/content/en/docs/v3.3/pluggable-components/events.md b/content/en/docs/v3.3/pluggable-components/events.md
index f4454d145..a483743df 100644
--- a/content/en/docs/v3.3/pluggable-components/events.md
+++ b/content/en/docs/v3.3/pluggable-components/events.md
@@ -110,7 +110,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{</ notice >}}
-3. In **Custom Resources**, click ![…](/images/docs/v3.3/…) on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click ![…](/images/docs/v3.x/…) on the right of `ks-installer` and select **Edit YAML**.
4. In this YAML file, navigate to `events` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
@@ -144,7 +144,7 @@ By default, Elasticsearch will be installed internally if Events is enabled. For
{{< notice note >}}
-You can find the web kubectl tool by clicking ![…](/images/docs/v3.3/…) in the lower-right corner of the console.
+You can find the web kubectl tool by clicking ![…](/images/docs/v3.x/…) in the lower-right corner of the console.
{{</ notice >}}
diff --git a/content/en/docs/v3.3/pluggable-components/kubeedge.md b/content/en/docs/v3.3/pluggable-components/kubeedge.md
index a45841309..037e7ac1b 100644
--- a/content/en/docs/v3.3/pluggable-components/kubeedge.md
+++ b/content/en/docs/v3.3/pluggable-components/kubeedge.md
@@ -12,7 +12,7 @@ KubeEdge has components running in two separate places - cloud and edge nodes. T
After you enable KubeEdge, you can [add edge nodes to your cluster](../../installing-on-linux/cluster-operation/add-edge-nodes/) and deploy workloads on them.
-![…](/images/docs/v3.3/…)
+![…](/images/docs/v3.x/…)
## Enable KubeEdge Before Installation
@@ -110,7 +110,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes objects.
{{</ notice >}}
-3. In **Custom Resources**, click ![…](/images/docs/v3.3/…) on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click ![…](/images/docs/v3.x/…) on the right of `ks-installer` and select **Edit YAML**.
4. In this YAML file, navigate to `edgeruntime` and `kubeedge`, and change the value of `enabled` from `false` to `true` to enable all KubeEdge components. Click **OK**.
@@ -143,7 +143,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{< notice note >}}
-You can find the web kubectl tool by clicking ![…](/images/docs/v3.3/…) in the lower-right corner of the console.
+You can find the web kubectl tool by clicking ![…](/images/docs/v3.x/…) in the lower-right corner of the console.
{{</ notice >}}
## Verify the Installation of the Component
diff --git a/content/en/docs/v3.3/pluggable-components/logging.md b/content/en/docs/v3.3/pluggable-components/logging.md
index bbe764c7e..324b23bf6 100644
--- a/content/en/docs/v3.3/pluggable-components/logging.md
+++ b/content/en/docs/v3.3/pluggable-components/logging.md
@@ -120,7 +120,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{</ notice >}}
-3. In **Custom Resources**, click ![…](/images/docs/v3.3/…) on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click ![…](/images/docs/v3.x/…) on the right of `ks-installer` and select **Edit YAML**.
4. In this YAML file, navigate to `logging` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
@@ -157,7 +157,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{< notice note >}}
-You can find the web kubectl tool by clicking ![…](/images/docs/v3.3/…) in the lower-right corner of the console.
+You can find the web kubectl tool by clicking ![…](/images/docs/v3.x/…) in the lower-right corner of the console.
{{</ notice >}}
diff --git a/content/en/docs/v3.3/pluggable-components/metrics-server.md b/content/en/docs/v3.3/pluggable-components/metrics-server.md
index e82801df1..d14e39361 100644
--- a/content/en/docs/v3.3/pluggable-components/metrics-server.md
+++ b/content/en/docs/v3.3/pluggable-components/metrics-server.md
@@ -77,7 +77,7 @@ If you install KubeSphere on some cloud hosted Kubernetes engines, it is probabl
A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes objects.
{{</ notice >}}
-3. In **Custom Resources**, click ![…](/images/docs/v3.3/…) on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click ![…](/images/docs/v3.x/…) on the right of `ks-installer` and select **Edit YAML**.
4. In this YAML file, navigate to `metrics_server` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
@@ -94,7 +94,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{< notice note >}}
-You can find the web kubectl tool by clicking ![…](/images/docs/v3.3/…) in the lower-right corner of the console.
+You can find the web kubectl tool by clicking ![…](/images/docs/v3.x/…) in the lower-right corner of the console.
{{</ notice >}}
## Verify the Installation of the Component
diff --git a/content/en/docs/v3.3/pluggable-components/network-policy.md b/content/en/docs/v3.3/pluggable-components/network-policy.md
index 437190c87..006fac5cd 100644
--- a/content/en/docs/v3.3/pluggable-components/network-policy.md
+++ b/content/en/docs/v3.3/pluggable-components/network-policy.md
@@ -83,7 +83,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes objects.
{{</ notice >}}
-3. In **Custom Resources**, click ![…](/images/docs/v3.3/…) on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click ![…](/images/docs/v3.x/…) on the right of `ks-installer` and select **Edit YAML**.
4. In this YAML file, navigate to `network.networkpolicy` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
@@ -101,7 +101,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{< notice note >}}
-You can find the web kubectl tool by clicking ![…](/images/docs/v3.3/…) in the lower-right corner of the console.
+You can find the web kubectl tool by clicking ![…](/images/docs/v3.x/…) in the lower-right corner of the console.
{{</ notice >}}
## Verify the Installation of the Component
diff --git a/content/en/docs/v3.3/pluggable-components/pod-ip-pools.md b/content/en/docs/v3.3/pluggable-components/pod-ip-pools.md
index b8df7f4aa..cc76243e2 100644
--- a/content/en/docs/v3.3/pluggable-components/pod-ip-pools.md
+++ b/content/en/docs/v3.3/pluggable-components/pod-ip-pools.md
@@ -75,7 +75,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes objects.
{{</ notice >}}
-3. In **Custom Resources**, click ![…](/images/docs/v3.3/…) on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click ![…](/images/docs/v3.x/…) on the right of `ks-installer` and select **Edit YAML**.
4. In this YAML file, navigate to `network` and change `network.ippool.type` to `calico`. After you finish, click **OK** in the lower-right corner to save the configuration.
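Sketched in YAML, the Pod IP pools switch:

```yaml
network:
  ippool:
    type: calico   # was none; enables Pod IP pool management
```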
@@ -93,7 +93,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{< notice note >}}
-You can find the web kubectl tool by clicking ![…](/images/docs/v3.3/…) in the lower-right corner of the console.
+You can find the web kubectl tool by clicking ![…](/images/docs/v3.x/…) in the lower-right corner of the console.
{{</ notice >}}
## Verify the Installation of the Component
diff --git a/content/en/docs/v3.3/pluggable-components/service-mesh.md b/content/en/docs/v3.3/pluggable-components/service-mesh.md
index 0b6685a4d..8cab80ede 100644
--- a/content/en/docs/v3.3/pluggable-components/service-mesh.md
+++ b/content/en/docs/v3.3/pluggable-components/service-mesh.md
@@ -93,7 +93,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes objects.
{{</ notice >}}
-3. In **Custom Resources**, click ![…](/images/docs/v3.3/…) on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click ![…](/images/docs/v3.x/…) on the right of `ks-installer` and select **Edit YAML**.
4. In this YAML file, navigate to `servicemesh` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
@@ -118,7 +118,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{< notice note >}}
-You can find the web kubectl tool by clicking ![…](/images/docs/v3.3/…) in the lower-right corner of the console.
+You can find the web kubectl tool by clicking ![…](/images/docs/v3.x/…) in the lower-right corner of the console.
{{</ notice >}}
## Verify the Installation of the Component
diff --git a/content/en/docs/v3.3/pluggable-components/service-topology.md b/content/en/docs/v3.3/pluggable-components/service-topology.md
index 1dadea0bb..2e4949f60 100644
--- a/content/en/docs/v3.3/pluggable-components/service-topology.md
+++ b/content/en/docs/v3.3/pluggable-components/service-topology.md
@@ -75,7 +75,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes objects.
{{</ notice >}}
-3. In **Custom Resources**, click ![…](/images/docs/v3.3/…) on the right of `ks-installer` and select **Edit YAML**.
+3. In **Custom Resources**, click ![…](/images/docs/v3.x/…) on the right of `ks-installer` and select **Edit YAML**.
4. In this YAML file, navigate to `network` and change `network.topology.type` to `weave-scope`. After you finish, click **OK** in the lower-right corner to save the configuration.
@@ -93,7 +93,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{< notice note >}}
-You can find the web kubectl tool by clicking ![…](/images/docs/v3.3/…) in the lower-right corner of the console.
+You can find the web kubectl tool by clicking ![…](/images/docs/v3.x/…) in the lower-right corner of the console.
{{</ notice >}}
## Verify the Installation of the Component
diff --git a/content/en/docs/v3.3/project-administration/_index.md b/content/en/docs/v3.3/project-administration/_index.md
index a8c3c7e20..4964f300c 100644
--- a/content/en/docs/v3.3/project-administration/_index.md
+++ b/content/en/docs/v3.3/project-administration/_index.md
@@ -6,7 +6,7 @@ layout: "second"
linkTitle: "Project Administration"
weight: 13000
-icon: "/images/docs/v3.3/docs.svg"
+icon: "/images/docs/v3.x/docs.svg"
---
diff --git a/content/en/docs/v3.3/project-administration/disk-log-collection.md b/content/en/docs/v3.3/project-administration/disk-log-collection.md
index 634317a22..1b9f2a295 100644
--- a/content/en/docs/v3.3/project-administration/disk-log-collection.md
+++ b/content/en/docs/v3.3/project-administration/disk-log-collection.md
@@ -19,7 +19,7 @@ This tutorial demonstrates how to collect logs for an example app.
1. Log in to the web console of KubeSphere as `project-admin` and go to your project.
-2. From the left navigation bar, click **Log Collection** in **Project Settings**, and then click ![…](/images/docs/v3.3/…) to enable the feature.
+2. From the left navigation bar, click **Log Collection** in **Project Settings**, and then click ![…](/images/docs/v3.x/…) to enable the feature.
## Create a Deployment
@@ -51,7 +51,7 @@ This tutorial demonstrates how to collect logs for an example app.
{{</ notice >}}
-6. On the **Storage Settings** tab, click ![…](/images/docs/v3.3/…) on the right.
+4. Newly created roles will be listed in **Project Roles**. To edit an existing role, click ![…](/images/docs/v3.x/…) on the right.
## Invite a New Member
1. Navigate to **Project Members** under **Project Settings**, and click **Invite**.
-2. Invite a user to the project by clicking ![…](/images/docs/v3.3/…) on the right of the user and assigning a role.
+2. Invite a user to the project by clicking ![…](/images/docs/v3.x/…) on the right of the user and assigning a role.
3. After you add the user to the project, click **OK**. In **Project Members**, you can see the user in the list.
-4. To edit the role of an existing user or remove the user from the project, click
on the right and select the corresponding operation.
+4. To edit the role of an existing user or remove the user from the project, click
on the right and select the corresponding operation.
diff --git a/content/en/docs/v3.3/project-user-guide/_index.md b/content/en/docs/v3.3/project-user-guide/_index.md
index 7dc50ce3b..f73099b6b 100644
--- a/content/en/docs/v3.3/project-user-guide/_index.md
+++ b/content/en/docs/v3.3/project-user-guide/_index.md
@@ -6,7 +6,7 @@ layout: "second"
linkTitle: "Project User Guide"
weight: 10000
-icon: "/images/docs/v3.3/docs.svg"
+icon: "/images/docs/v3.x/docs.svg"
---
In KubeSphere, project users with necessary permissions are able to perform a series of tasks, such as creating different kinds of workloads, configuring volumes, Secrets, and ConfigMaps, setting various release strategies, monitoring app metrics, and creating alerting policies. As KubeSphere features great flexibility and compatibility without any code hacking into native Kubernetes, it is very convenient for users to get started with any feature required for their testing, development and production environments.
\ No newline at end of file
diff --git a/content/en/docs/v3.3/project-user-guide/alerting/alerting-policy.md b/content/en/docs/v3.3/project-user-guide/alerting/alerting-policy.md
index d46cea3bd..2610a09a4 100644
--- a/content/en/docs/v3.3/project-user-guide/alerting/alerting-policy.md
+++ b/content/en/docs/v3.3/project-user-guide/alerting/alerting-policy.md
@@ -47,7 +47,7 @@ KubeSphere provides alerting policies for nodes and workloads. This tutorial dem
## Edit an Alerting Policy
-To edit an alerting policy after it is created, on the **Alerting Policies** page, click
on the right.
+To edit an alerting policy after it is created, on the **Alerting Policies** page, click
on the right.
1. Click **Edit** from the drop-down menu and edit the alerting policy following the same steps as you create it. Click **OK** on the **Message Settings** page to save it.
diff --git a/content/en/docs/v3.3/project-user-guide/application-workloads/container-image-settings.md b/content/en/docs/v3.3/project-user-guide/application-workloads/container-image-settings.md
index f0a78fcae..10cad3968 100644
--- a/content/en/docs/v3.3/project-user-guide/application-workloads/container-image-settings.md
+++ b/content/en/docs/v3.3/project-user-guide/application-workloads/container-image-settings.md
@@ -18,7 +18,7 @@ You can enable **Edit YAML** in the upper-right corner to see corresponding valu
### Pod Replicas
-Set the number of replicated Pods by clicking
on the right and click
on the right and click
on the right and select the options from the menu to modify a DaemonSet.
+1. After a DaemonSet is created, it will be displayed in the list. You can click
on the right and select the options from the menu to modify a DaemonSet.
- **Edit Information**: View and edit the basic information.
- **Edit YAML**: View, upload, download, or update the YAML file.
@@ -122,9 +122,9 @@ Click the **Metadata** tab to view the labels and annotations of the DaemonSet.
2. Click the drop-down menu in the upper-right corner to customize the time range and sampling interval.
-3. Click
/
in the upper-right corner to start/stop automatic data refreshing.
+3. Click
/
in the upper-right corner to start/stop automatic data refreshing.
-4. Click
in the upper-right corner to manually refresh the data.
+4. Click
in the upper-right corner to manually refresh the data.
### Environment variables
diff --git a/content/en/docs/v3.3/project-user-guide/application-workloads/deployments.md b/content/en/docs/v3.3/project-user-guide/application-workloads/deployments.md
index 062a03ec7..5888ced36 100644
--- a/content/en/docs/v3.3/project-user-guide/application-workloads/deployments.md
+++ b/content/en/docs/v3.3/project-user-guide/application-workloads/deployments.md
@@ -27,7 +27,7 @@ Specify a name for the Deployment (for example, `demo-deployment`), select a pro
### Step 3: Set a Pod
-1. Before you set an image, define the number of replicated Pods in **Pod Replicas** by clicking
on the right and select options from the menu to modify your Deployment.
+1. After a Deployment is created, it will be displayed in the list. You can click
on the right and select options from the menu to modify your Deployment.
- **Edit Information**: View and edit the basic information.
- **Edit YAML**: View, upload, download, or update the YAML file.
@@ -104,7 +104,7 @@ You can set a policy for node scheduling and add metadata in this section. When
4. Click the **Resource Status** tab to view the port and Pod information of the Deployment.
- - **Replica Status**: Click
/
in the upper-right corner to start/stop automatic data refreshing.
+3. Click
/
in the upper-right corner to start/stop automatic data refreshing.
-4. Click
in the upper-right corner to manually refresh the data.
+4. Click
in the upper-right corner to manually refresh the data.
### Environment variables
diff --git a/content/en/docs/v3.3/project-user-guide/application-workloads/horizontal-pod-autoscaling.md b/content/en/docs/v3.3/project-user-guide/application-workloads/horizontal-pod-autoscaling.md
index 663444f5c..8b56eb900 100755
--- a/content/en/docs/v3.3/project-user-guide/application-workloads/horizontal-pod-autoscaling.md
+++ b/content/en/docs/v3.3/project-user-guide/application-workloads/horizontal-pod-autoscaling.md
@@ -83,7 +83,7 @@ This section uses a Deployment that sends requests to the HPA Service to verify
1. After the load generator Deployment is created, go to **Workloads** in **Application Workloads** on the left navigation bar and click the HPA Deployment (for example, hpa-v1) on the right. The number of Pods displayed on the page automatically increases to meet the resource usage target.
-2. Choose **Workloads** in **Application Workloads** on the left navigation bar, click
on the right of the load generator Deployment (for example, load-generator-v1), and choose **Delete** from the drop-down list. After the load-generator Deployment is deleted, check the status of the HPA Deployment again. The number of Pods decreases to the minimum.
+2. Choose **Workloads** in **Application Workloads** on the left navigation bar, click
on the right of the load generator Deployment (for example, load-generator-v1), and choose **Delete** from the drop-down list. After the load-generator Deployment is deleted, check the status of the HPA Deployment again. The number of Pods decreases to the minimum.
{{< notice note >}}
@@ -99,6 +99,6 @@ You can repeat steps in [Configure HPA](#configure-hpa) to edit the HPA configur
1. Choose **Workloads** in **Application Workloads** on the left navigation bar and click the HPA Deployment (for example, hpa-v1) on the right.
-2. Click
on the right of **Autoscaling** and choose **Cancel** from the drop-down list.
+2. Click
on the right of **Autoscaling** and choose **Cancel** from the drop-down list.
diff --git a/content/en/docs/v3.3/project-user-guide/application-workloads/jobs.md b/content/en/docs/v3.3/project-user-guide/application-workloads/jobs.md
index cbfcf136f..95a12240e 100644
--- a/content/en/docs/v3.3/project-user-guide/application-workloads/jobs.md
+++ b/content/en/docs/v3.3/project-user-guide/application-workloads/jobs.md
@@ -115,7 +115,7 @@ You can set the values in this step or click **Next** to use the default values.
You can rerun the Job if it fails and the reason for failure is displayed under **Message**.
{{</ notice >}}
-3. In **Resource Status**, you can inspect the Pod status. Two Pods were created each time as **Parallel Pods** was set to 2. Click
on the right and click
on the right and click
to refresh the execution records.
+2. Click
to refresh the execution records.
### Resource status
1. Click the **Resource Status** tab to view the Pods of the Job.
-2. Click
to refresh the Pod information, and click
/
to display/hide the containers in each Pod.
+2. Click
to refresh the Pod information, and click
/
to display/hide the containers in each Pod.
### Metadata
diff --git a/content/en/docs/v3.3/project-user-guide/application-workloads/services.md b/content/en/docs/v3.3/project-user-guide/application-workloads/services.md
index 5c19fafc0..da9a7280d 100644
--- a/content/en/docs/v3.3/project-user-guide/application-workloads/services.md
+++ b/content/en/docs/v3.3/project-user-guide/application-workloads/services.md
@@ -159,7 +159,7 @@ This value is specified by `.spec.type`. If you select **LoadBalancer**, you nee
### Details page
-1. After a Service is created, you can click
on the right to further edit it, such as its metadata (excluding **Name**), YAML, port, and Internet access.
+1. After a Service is created, you can click
on the right to further edit it, such as its metadata (excluding **Name**), YAML, port, and Internet access.
- **Edit Information**: View and edit the basic information.
- **Edit YAML**: View, upload, download, or update the YAML file.
@@ -179,7 +179,7 @@ This value is specified by `.spec.type`. If you select **LoadBalancer**, you nee
1. Click the **Resource Status** tab to view information about the Service ports, workloads, and Pods.
-2. In the **Pods** area, click
to refresh the Pod information, and click
/
to display/hide the containers in each Pod.
+2. In the **Pods** area, click
to refresh the Pod information, and click
/
to display/hide the containers in each Pod.
### Metadata
diff --git a/content/en/docs/v3.3/project-user-guide/application-workloads/statefulsets.md b/content/en/docs/v3.3/project-user-guide/application-workloads/statefulsets.md
index d5b160d06..4af61d6fa 100644
--- a/content/en/docs/v3.3/project-user-guide/application-workloads/statefulsets.md
+++ b/content/en/docs/v3.3/project-user-guide/application-workloads/statefulsets.md
@@ -39,7 +39,7 @@ Specify a name for the StatefulSet (for example, `demo-stateful`), select a proj
### Step 3: Set a Pod
-1. Before you set an image, define the number of replicated Pods in **Pod Replicas** by clicking
on the right to select options from the menu to modify your StatefulSet.
+1. After a StatefulSet is created, it will be displayed in the list. You can click
on the right to select options from the menu to modify your StatefulSet.
- **Edit Information**: View and edit the basic information.
- **Edit YAML**: View, upload, download, or update the YAML file.
@@ -112,7 +112,7 @@ You can set a policy for node scheduling and add StatefulSet metadata in this se
4. Click the **Resource Status** tab to view the port and Pod information of a StatefulSet.
- - **Replica Status**: Click
/
in the upper-right corner to start/stop automatic data refreshing.
+3. Click
/
in the upper-right corner to start/stop automatic data refreshing.
-4. Click
in the upper-right corner to manually refresh the data.
+4. Click
in the upper-right corner to manually refresh the data.
### Environment variables
diff --git a/content/en/docs/v3.3/project-user-guide/configuration/configmaps.md b/content/en/docs/v3.3/project-user-guide/configuration/configmaps.md
index c50682ecf..bd9fd7c13 100644
--- a/content/en/docs/v3.3/project-user-guide/configuration/configmaps.md
+++ b/content/en/docs/v3.3/project-user-guide/configuration/configmaps.md
@@ -48,7 +48,7 @@ You can see the ConfigMap manifest file in YAML format by enabling **Edit YAML**
## View ConfigMap Details
-1. After a ConfigMap is created, it is displayed on the **ConfigMaps** page. You can click
on the right and select the operation below from the drop-down list.
+1. After a ConfigMap is created, it is displayed on the **ConfigMaps** page. You can click
on the right and select the operation below from the drop-down list.
- **Edit Information**: View and edit the basic information.
- **Edit YAML**: View, upload, download, or update the YAML file.
diff --git a/content/en/docs/v3.3/project-user-guide/configuration/image-registry.md b/content/en/docs/v3.3/project-user-guide/configuration/image-registry.md
index 0cce22b71..3491c1273 100644
--- a/content/en/docs/v3.3/project-user-guide/configuration/image-registry.md
+++ b/content/en/docs/v3.3/project-user-guide/configuration/image-registry.md
@@ -101,4 +101,4 @@ When you set images, you can select the private image registry if the Secret of
If you use YAML to create a workload and need to use a private image registry, you need to manually add `kubesphere.io/imagepullsecrets` to `annotations` in your local YAML file, and enter the key-value pair in JSON format, where `key` must be the name of the container, and `value` must be the name of the secret, as shown in the following sample.
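A minimal sketch of such an annotation (the container name `container-1` and Secret name `my-registry-secret` are placeholder examples):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
  annotations:
    kubesphere.io/imagepullsecrets: '{"container-1": "my-registry-secret"}'
```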
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/content/en/docs/v3.3/project-user-guide/configuration/secrets.md b/content/en/docs/v3.3/project-user-guide/configuration/secrets.md
index 16e9dfd05..d7606edc0 100644
--- a/content/en/docs/v3.3/project-user-guide/configuration/secrets.md
+++ b/content/en/docs/v3.3/project-user-guide/configuration/secrets.md
@@ -58,7 +58,7 @@ You can see the Secret's manifest file in YAML format by enabling **Edit YAML**
## Check Secret Details
-1. After a Secret is created, it will be displayed in the list. You can click
on the right and select the operation from the menu to modify it.
+1. After a Secret is created, it will be displayed in the list. You can click
on the right and select the operation from the menu to modify it.
- **Edit Information**: View and edit the basic information.
- **Edit YAML**: View, upload, download, or update the YAML file.
@@ -69,7 +69,7 @@ You can see the Secret's manifest file in YAML format by enabling **Edit YAML**
{{< notice note >}}
-As mentioned above, KubeSphere automatically converts the value of a key into its corresponding base64 character value. To see the actual decoded value, click
to drag and drop an item into the target group. To add a new group, click **Add Monitoring Group**. If you want to change the place of a group, hover over a group and click
or
arrow on the right.
+To group monitoring items, you can click
to drag and drop an item into the target group. To add a new group, click **Add Monitoring Group**. If you want to change the place of a group, hover over a group and click
or
arrow on the right.
{{< notice note >}}
diff --git a/content/en/docs/v3.3/project-user-guide/custom-application-monitoring/visualization/querying.md b/content/en/docs/v3.3/project-user-guide/custom-application-monitoring/visualization/querying.md
index c9f6f40d4..57deac2e6 100644
--- a/content/en/docs/v3.3/project-user-guide/custom-application-monitoring/visualization/querying.md
+++ b/content/en/docs/v3.3/project-user-guide/custom-application-monitoring/visualization/querying.md
@@ -8,6 +8,6 @@ weight: 10817
In the query editor, enter PromQL expressions in **Monitoring Metrics** to process and fetch metrics. To learn how to write PromQL, read [Query Examples](https://prometheus.io/docs/prometheus/latest/querying/examples/).
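For example, a typical expression that returns per-Pod CPU usage in a project (the namespace value is illustrative):

```text
rate(container_cpu_usage_seconds_total{namespace="demo"}[5m])
```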
-
+
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/content/en/docs/v3.3/project-user-guide/grayscale-release/blue-green-deployment.md b/content/en/docs/v3.3/project-user-guide/grayscale-release/blue-green-deployment.md
index 4f5abe9a4..2ee90bf74 100644
--- a/content/en/docs/v3.3/project-user-guide/grayscale-release/blue-green-deployment.md
+++ b/content/en/docs/v3.3/project-user-guide/grayscale-release/blue-green-deployment.md
@@ -9,7 +9,7 @@ weight: 10520
The blue-green release provides a zero-downtime deployment, which means the new version can be deployed with the old one preserved. At any time, only one of the versions is active serving all the traffic, while the other one remains idle. If there is a problem with the running version, you can quickly roll back to the old one.
-
+
## Prerequisites
diff --git a/content/en/docs/v3.3/project-user-guide/grayscale-release/canary-release.md b/content/en/docs/v3.3/project-user-guide/grayscale-release/canary-release.md
index 0aaa85250..1a124fc60 100644
--- a/content/en/docs/v3.3/project-user-guide/grayscale-release/canary-release.md
+++ b/content/en/docs/v3.3/project-user-guide/grayscale-release/canary-release.md
@@ -10,7 +10,7 @@ On the back of [Istio](https://istio.io/), KubeSphere provides users with necess
This method serves as an efficient way to test performance and reliability of a service. It can help detect potential problems in the actual environment while not affecting the overall system stability.
-
+
## Prerequisites
@@ -110,7 +110,7 @@ If everything runs smoothly, you can bring all the traffic to the new version.
1. In **Release Jobs**, click the canary release job.
-2. In the displayed dialog box, click
on the right of **reviews v2** and select **Take Over**. It means 100% of the traffic will be sent to the new version (v2).
+2. In the displayed dialog box, click
on the right of **reviews v2** and select **Take Over**. It means 100% of the traffic will be sent to the new version (v2).
{{< notice note >}}
If anything goes wrong with the new version, you can roll back to the previous version v1 anytime.
diff --git a/content/en/docs/v3.3/project-user-guide/image-builder/binary-to-image.md b/content/en/docs/v3.3/project-user-guide/image-builder/binary-to-image.md
index 63b377a2b..17d509615 100644
--- a/content/en/docs/v3.3/project-user-guide/image-builder/binary-to-image.md
+++ b/content/en/docs/v3.3/project-user-guide/image-builder/binary-to-image.md
@@ -33,7 +33,7 @@ For demonstration and testing purposes, here are some example artifacts you can
The steps below show how to upload an artifact, build an image and release it to Kubernetes by creating a Service in a B2I workflow.
-
+
### Step 1: Create a Docker Hub Secret
@@ -77,7 +77,7 @@ You must create a Docker Hub Secret so that the Docker image created through B2I
1. Wait for a while and you can see the status of the image builder has reached **Successful**.
-2. Click this image to go to its details page. Under **Job Records**, click
on the right of a record to see building logs. You can see `Build completed successfully` at the end of the log if everything runs normally.
+2. Click this image to go to its details page. Under **Job Records**, click
on the right of a record to see building logs. You can see `Build completed successfully` at the end of the log if everything runs normally.
3. Go back to the **Services**, **Deployments**, and **Jobs** page, and you can see the corresponding Service, Deployment, and Job of the image have been all created successfully.
@@ -99,7 +99,7 @@ You must create a Docker Hub Secret so that the Docker image created through B2I
The example above implements the entire workflow of B2I by creating a Service. Alternatively, you can use the Image Builder directly to build an image based on an artifact, although this method will not publish the image to Kubernetes.
-
+
{{< notice note >}}
@@ -133,7 +133,7 @@ Make sure you have created a Secret for Docker Hub. For more information, see [C
1. Wait for a while and you can see the status of the image builder has reached **Successful**.
-2. Click this image builder to go to its details page. Under **Job Records**, click
on the right of a record to see building logs. You can see `Build completed successfully` at the end of the log if everything runs normally.
+2. Click this image builder to go to its details page. Under **Job Records**, click
on the right of a record to see building logs. You can see `Build completed successfully` at the end of the log if everything runs normally.
3. Go to the **Jobs** page, and you can see the corresponding Job of the image has been created successfully.
diff --git a/content/en/docs/v3.3/project-user-guide/image-builder/s2i-introduction.md b/content/en/docs/v3.3/project-user-guide/image-builder/s2i-introduction.md
index 329670856..36f9d731e 100644
--- a/content/en/docs/v3.3/project-user-guide/image-builder/s2i-introduction.md
+++ b/content/en/docs/v3.3/project-user-guide/image-builder/s2i-introduction.md
@@ -16,7 +16,7 @@ For more information about how to use S2I in KubeSphere, refer to [Source to Ima
For interpreted languages like Python and Ruby, the build-time and runtime environments for an application are typically the same. For example, a Ruby-based Image Builder usually contains Bundler, Rake, Apache, GCC, and other packages needed to set up a runtime environment. The following diagram describes the build workflow.
-
+
### How S2I works
@@ -28,7 +28,7 @@ S2I performs the following steps:
See the S2I workflow chart as below.
-
+
### Runtime Image
@@ -36,4 +36,4 @@ For compiled languages like Go, C, C++, or Java, the dependencies necessary for
See the building workflow as below.
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/content/en/docs/v3.3/project-user-guide/image-builder/source-to-image.md b/content/en/docs/v3.3/project-user-guide/image-builder/source-to-image.md
index 859c7c0d1..1851cd615 100644
--- a/content/en/docs/v3.3/project-user-guide/image-builder/source-to-image.md
+++ b/content/en/docs/v3.3/project-user-guide/image-builder/source-to-image.md
@@ -10,7 +10,7 @@ Source-to-Image (S2I) is a toolkit and workflow for building reproducible contai
This tutorial demonstrates how to use S2I to import source code of a Java sample project into KubeSphere by creating a Service. Based on the source code, the KubeSphere Image Builder will create a Docker image, push it to a target repository and publish it to Kubernetes.
-
+
## Prerequisites
@@ -89,7 +89,7 @@ You do not need to create the GitHub Secret if your forked repository is open to
1. Wait for a while and you can see the status of the image builder has reached **Successful**.
-2. Click this image builder to go to its details page. Under **Job Records**, click
on the right of a record to see building logs. You can see `Build completed successfully` at the end of the log if everything runs normally.
+2. Click this image builder to go to its details page. Under **Job Records**, click
on the right of a record to see building logs. You can see `Build completed successfully` at the end of the log if everything runs normally.
3. Go back to the **Services**, **Deployments**, and **Jobs** page, and you can see the corresponding Service, Deployment, and Job of the image have been all created successfully.
diff --git a/content/en/docs/v3.3/project-user-guide/storage/volumes.md b/content/en/docs/v3.3/project-user-guide/storage/volumes.md
index 3ec63ef55..30c7a8710 100644
--- a/content/en/docs/v3.3/project-user-guide/storage/volumes.md
+++ b/content/en/docs/v3.3/project-user-guide/storage/volumes.md
@@ -215,7 +215,7 @@ For more information about PVC monitoring, see [Research on Volume Monitoring](h
-2. Click
in the lower-right corner and select **Metering and Billing**.
+1. Log in to the KubeSphere console as `admin`, click
in the lower-right corner and select **Metering and Billing**.
2. Click **View Consumption** in the **Cluster Resource Consumption** section.
@@ -56,7 +56,7 @@ KubeSphere metering helps you track resource consumption within a given cluster
**Workspace (Project) Resource Consumption** contains resource usage information of workspaces (and projects included), such as CPU, memory and storage.
-1. Log in to the KubeSphere console as `admin`, click
in the lower-right corner and select **Metering and Billing**.
+1. Log in to the KubeSphere console as `admin`, click
in the lower-right corner and select **Metering and Billing**.
2. Click **View Consumption** in the **Workspace (Project) Resource Consumption** section.
diff --git a/content/en/docs/v3.3/toolbox/web-kubectl.md b/content/en/docs/v3.3/toolbox/web-kubectl.md
index 54a51b1f6..fd87e369c 100644
--- a/content/en/docs/v3.3/toolbox/web-kubectl.md
+++ b/content/en/docs/v3.3/toolbox/web-kubectl.md
@@ -24,7 +24,7 @@ This tutorial demonstrates how to use web kubectl to operate on and manage clust
kubectl get pvc --all-namespaces
```
- 
+ 
4. Use the following syntax to run kubectl commands from your terminal window:
diff --git a/content/en/docs/v3.3/upgrade/_index.md b/content/en/docs/v3.3/upgrade/_index.md
index a88033ba0..ba77a8d3a 100644
--- a/content/en/docs/v3.3/upgrade/_index.md
+++ b/content/en/docs/v3.3/upgrade/_index.md
@@ -7,7 +7,7 @@ linkTitle: "Upgrade"
weight: 7000
-icon: "/images/docs/v3.3/docs.svg"
+icon: "/images/docs/v3.x/docs.svg"
---
diff --git a/content/en/docs/v3.3/workspace-administration/_index.md b/content/en/docs/v3.3/workspace-administration/_index.md
index 2024f8313..301a75c9f 100644
--- a/content/en/docs/v3.3/workspace-administration/_index.md
+++ b/content/en/docs/v3.3/workspace-administration/_index.md
@@ -7,7 +7,7 @@ linkTitle: "Workspace Administration and User Guide"
weight: 9000
-icon: "/images/docs/v3.3/docs.svg"
+icon: "/images/docs/v3.x/docs.svg"
---
diff --git a/content/en/docs/v3.3/workspace-administration/department-management.md b/content/en/docs/v3.3/workspace-administration/department-management.md
index 1201d50db..0b802d034 100644
--- a/content/en/docs/v3.3/workspace-administration/department-management.md
+++ b/content/en/docs/v3.3/workspace-administration/department-management.md
@@ -42,7 +42,7 @@ A department in a workspace is a logical unit used for permission control. You c
1. On the **Departments** page, select a department in the department tree on the left and click **Not Assigned** on the right.
-2. In the user list, click
on the right of a user, and click **OK** for the displayed message to assign the user to the department.
+2. In the user list, click
on the right of a user, and click **OK** for the displayed message to assign the user to the department.
{{< notice note >}}
@@ -54,7 +54,7 @@ A department in a workspace is a logical unit used for permission control. You c
## Remove a User from a Department
1. On the **Departments** page, select a department in the department tree on the left and click **Assigned** on the right.
-2. In the assigned user list, click
on the right of a user, enter the username in the displayed dialog box, and click **OK** to remove the user.
+2. In the assigned user list, click
on the right of a user, enter the username in the displayed dialog box, and click **OK** to remove the user.
## Delete and Edit a Department
@@ -62,7 +62,7 @@ A department in a workspace is a logical unit used for permission control. You c
2. In the **Set Departments** dialog box, on the left, click the upper level of the department to be edited or deleted.
-3. Click
on the right of the department to edit it.
+3. Click
on the right of the department to edit it.
{{< notice note >}}
@@ -70,7 +70,7 @@ A department in a workspace is a logical unit used for permission control. You c
{{</ notice >}}
-4. Click
on the right of the department, enter the department name in the displayed dialog box, and click **OK** to delete the department.
+4. Click
on the right of the department, enter the department name in the displayed dialog box, and click **OK** to delete the department.
{{< notice note >}}
diff --git a/content/en/docs/v3.3/workspace-administration/role-and-member-management.md b/content/en/docs/v3.3/workspace-administration/role-and-member-management.md
index eeffc5f4c..8bbe3884a 100644
--- a/content/en/docs/v3.3/workspace-administration/role-and-member-management.md
+++ b/content/en/docs/v3.3/workspace-administration/role-and-member-management.md
@@ -49,13 +49,13 @@ To view the permissions that a role contains:
{{</ notice >}}
-4. Newly-created roles will be listed in **Workspace Roles**. To edit the information or permissions, or delete an existing role, click
on the right.
+4. Newly-created roles will be listed in **Workspace Roles**. To edit the information or permissions, or delete an existing role, click
on the right.
## Invite a New Member
1. Navigate to **Workspace Members** under **Workspace Settings**, and click **Invite**.
-2. Invite a user to the workspace by clicking
on the right of it and assign a role to it.
+2. Invite a user to the workspace by clicking
on the right of it and assign a role to it.
3. After you add the user to the workspace, click **OK**. In **Workspace Members**, you can see the user in the list.
-4. To edit the role of an existing user or remove the user from the workspace, click
on the right and select the corresponding operation.
\ No newline at end of file
+4. To edit the role of an existing user or remove the user from the workspace, click
on the right and select the corresponding operation.
\ No newline at end of file
diff --git a/content/en/docs/v3.3/workspace-administration/workspace-quotas.md b/content/en/docs/v3.3/workspace-administration/workspace-quotas.md
index 8a40dec53..3a67ccb30 100644
--- a/content/en/docs/v3.3/workspace-administration/workspace-quotas.md
+++ b/content/en/docs/v3.3/workspace-administration/workspace-quotas.md
@@ -24,7 +24,7 @@ You have an available workspace and a user (`ws-manager`). The user must have th
3. The **Workspace Quotas** page lists all the available clusters assigned to the workspace and their respective requests and limits of CPU and memory. Click **Edit Quotas** on the right of a cluster.
-4. In the displayed dialog box, you can see that KubeSphere does not set any requests or limits for the workspace by default. To set requests and limits to control CPU and memory resources, move
in the lower-right corner, click **kubectl**, and run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
+
+ ```bash
+ kubectl -n kubesphere-system edit cc ks-installer
+ ```
+
+2. Add the following fields under `spec.authentication.jwtSecret`.
+
+ *Example of using [Google Identity Platform](https://developers.google.com/identity/protocols/oauth2/openid-connect)*:
+
+ ```yaml
+ spec:
+ authentication:
+ jwtSecret: ''
+ authenticateRateLimiterMaxTries: 10
+ authenticateRateLimiterDuration: 10m0s
+ oauthOptions:
+ accessTokenMaxAge: 1h
+ accessTokenInactivityTimeout: 30m
+ identityProviders:
+ - name: google
+ type: OIDCIdentityProvider
+ mappingMethod: auto
+ provider:
+ clientID: '********'
+ clientSecret: '********'
+ issuer: https://accounts.google.com
+ redirectURL: 'https://ks-console/oauth/redirect/google'
+ ```
+
+ The parameters are described as follows:
+
+ | Parameter | Description |
+ | -------------------- | ------------------------------------------------------------ |
+ | clientID | The OAuth2 client ID. |
+ | clientSecret | The OAuth2 client secret. |
+ | redirectURL | The redirected URL to ks-console in the following format: `https://
in the lower-right corner, click **kubectl**, and run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
+
+ ```bash
+ kubectl -n kubesphere-system edit cc ks-installer
+ ```
+
+2. Add the following fields under `spec.authentication.jwtSecret`.
+
+ Example:
+
+ ```yaml
+ spec:
+ authentication:
+ jwtSecret: ''
+ authenticateRateLimiterMaxTries: 10
+ authenticateRateLimiterDuration: 10m0s
+ loginHistoryRetentionPeriod: 168h
+ maximumClockSkew: 10s
+ multipleLogin: true
+ oauthOptions:
+ accessTokenMaxAge: 1h
+ accessTokenInactivityTimeout: 30m
+ identityProviders:
+ - name: LDAP
+ type: LDAPIdentityProvider
+ mappingMethod: auto
+ provider:
+ host: 192.168.0.2:389
+ managerDN: uid=root,cn=users,dc=nas
+ managerPassword: ********
+ userSearchBase: cn=users,dc=nas
+ loginAttribute: uid
+ mailAttribute: mail
+ ```
+
+ The fields are described as follows:
+
+ * `jwtSecret`: Secret used to sign user tokens. In a multi-cluster environment, all clusters must [use the same Secret](../../../multicluster-management/enable-multicluster/direct-connection/#prepare-a-member-cluster).
+ * `authenticateRateLimiterMaxTries`: Maximum number of consecutive login failures allowed during a period specified by `authenticateRateLimiterDuration`. If the number of consecutive login failures of a user reaches the limit, the user will be blocked.
+ * `authenticateRateLimiterDuration`: Period during which `authenticateRateLimiterMaxTries` applies.
+ * `loginHistoryRetentionPeriod`: Retention period of login records. Outdated login records are automatically deleted.
+ * `maximumClockSkew`: Maximum clock skew for time-sensitive operations such as token expiration validation. The default value is `10s`.
+ * `multipleLogin`: Whether multiple users are allowed to log in from different locations. The default value is `true`.
+ * `oauthOptions`: OAuth settings.
+ * `accessTokenMaxAge`: Access token lifetime. For member clusters in a multi-cluster environment, the default value is `0h`, which means access tokens never expire. For other clusters, the default value is `2h`.
+ * `accessTokenInactivityTimeout`: Access token inactivity timeout period. An access token becomes invalid after it is idle for a period specified by this field. After an access token times out, the user needs to obtain a new access token to regain access.
+ * `identityProviders`: Identity providers.
+ * `name`: Identity provider name.
+ * `type`: Identity provider type.
+ * `mappingMethod`: Account mapping method. The value can be `auto` or `lookup`.
+ * If the value is `auto` (default), you need to specify a new username. KubeSphere automatically creates a user according to the username and maps the user to a third-party account.
+ * If the value is `lookup`, you need to perform step 3 to manually map an existing KubeSphere user to a third-party account.
+ * `provider`: Identity provider information. Fields in this section vary according to the identity provider type.
+
+3. If `mappingMethod` is set to `lookup`, run the following command and add the labels to map a KubeSphere user to a third-party account. Skip this step if `mappingMethod` is set to `auto`.
+
+ ```bash
+ kubectl edit user
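+   # (Sketch) For example: kubectl edit user <username>
+   # Then, under metadata.labels, add labels that map the KubeSphere user to
+   # the third-party account. The label keys below are assumptions shown for
+   # illustration only:
+   #   iam.kubesphere.io/identify-provider: <identity provider name>
+   #   iam.kubesphere.io/origin-uid: <third-party account ID>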
in the lower-right corner, click **kubectl**, and run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
+
+ ```bash
+ kubectl -n kubesphere-system edit cc ks-installer
+ ```
+
+ Example:
+
+ ```yaml
+ spec:
+ authentication:
+ jwtSecret: ''
+ maximumClockSkew: 10s
+ multipleLogin: true
+ oauthOptions:
+ accessTokenMaxAge: 1h
+ accessTokenInactivityTimeout: 30m
+ identityProviders:
+ - name: LDAP
+ type: LDAPIdentityProvider
+ mappingMethod: auto
+ provider:
+ host: 192.168.0.2:389
+ managerDN: uid=root,cn=users,dc=nas
+ managerPassword: ********
+ userSearchBase: cn=users,dc=nas
+ loginAttribute: uid
+ mailAttribute: mail
+ ```
+
+2. Configure fields other than `oauthOptions:identityProviders` in the `spec:authentication` section. For details, see [Set Up External Authentication](../set-up-external-authentication/).
+
+3. Configure fields in the `oauthOptions:identityProviders` section. A sketch for sanity-checking the LDAP values follows the list below.
+
+ * `name`: User-defined LDAP service name.
+ * `type`: To use an LDAP service as an identity provider, you must set the value to `LDAPIdentityProvider`.
+ * `mappingMethod`: Account mapping method. The value can be `auto` or `lookup`.
+ * If the value is `auto` (default), you need to specify a new username. KubeSphere automatically creates a user according to the username and maps the user to an LDAP user.
+ * If the value is `lookup`, you need to perform step 4 to manually map an existing KubeSphere user to an LDAP user.
+ * `provider`:
+ * `host`: Address and port number of the LDAP service.
+ * `managerDN`: DN used to bind to the LDAP directory.
+ * `managerPassword`: Password corresponding to `managerDN`.
+ * `userSearchBase`: User search base. Set the value to the DN of the directory level below which all LDAP users can be found.
+ * `loginAttribute`: Attribute that identifies LDAP users.
+ * `mailAttribute`: Attribute that identifies email addresses of LDAP users.
+
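+   The following sketch sanity-checks these values against the LDAP service before saving. It assumes the `ldapsearch` CLI is available and reuses the example values above; `<username>` is a placeholder:
+
+   ```bash
+   ldapsearch -x -H ldap://192.168.0.2:389 \
+     -D "uid=root,cn=users,dc=nas" -w '********' \
+     -b "cn=users,dc=nas" "(uid=<username>)"
+   ```
+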
+4. If `mappingMethod` is set to `lookup`, run the following command and add the labels to map a KubeSphere user to an LDAP user. Skip this step if `mappingMethod` is set to `auto`.
+
+ ```bash
+ kubectl edit user
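+   # (Sketch) For example: kubectl edit user <username>
+   # Then, under metadata.labels, add labels that map the KubeSphere user to
+   # the LDAP user; the label keys below are assumptions shown for
+   # illustration only:
+   #   iam.kubesphere.io/identify-provider: <LDAP service name>
+   #   iam.kubesphere.io/origin-uid: <LDAP user ID>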
in the lower-right corner, click **kubectl**, and run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
+
+ ```bash
+ kubectl -n kubesphere-system edit cc ks-installer
+ ```
+
+2. Configure fields other than `oauthOptions:identityProviders` in the `spec:authentication` section. For details, see [Set Up External Authentication](../set-up-external-authentication/).
+
+3. Configure fields in the `oauthOptions:identityProviders` section according to the identity provider plugin you have developed.
+
+ The following is a configuration example that uses GitHub as an external identity provider. For details, see the [official GitHub documentation](https://docs.github.com/en/developers/apps/building-oauth-apps) and the [source code of the GitHubIdentityProvider](https://github.com/kubesphere/kubesphere/blob/release-3.1/pkg/apiserver/authentication/identityprovider/github/github.go) plugin.
+
+ ```yaml
+ spec:
+ authentication:
+ jwtSecret: ''
+ authenticateRateLimiterMaxTries: 10
+ authenticateRateLimiterDuration: 10m0s
+ oauthOptions:
+ accessTokenMaxAge: 1h
+ accessTokenInactivityTimeout: 30m
+ identityProviders:
+ - name: github
+ type: GitHubIdentityProvider
+ mappingMethod: auto
+ provider:
+ clientID: '******'
+ clientSecret: '******'
+ redirectURL: 'https://ks-console/oauth/redirect/github'
+ ```
+
+ Similarly, you can also use Alibaba Cloud IDaaS as an external identity provider. For details, see the official [Alibaba IDaaS documentation](https://www.alibabacloud.com/help/product/111120.htm?spm=a3c0i.14898238.2766395700.1.62081da1NlxYV0) and the [source code of the AliyunIDaasProvider](https://github.com/kubesphere/kubesphere/blob/release-3.1/pkg/apiserver/authentication/identityprovider/aliyunidaas/idaas.go) plugin.
+
+4. After the fields are configured, save your changes, and wait until the restart of ks-installer is complete.
+
+ {{< notice note >}}
+
+ The KubeSphere web console is unavailable during the restart of ks-installer. Please wait until the restart is complete.
+
+ {{</ notice >}}
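+
+   To watch the restart progress, you can tail the installer logs (the standard log-watching command from the KubeSphere installation guides):
+
+   ```bash
+   kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+   ```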
+
+5. Go to the KubeSphere login page, click **Log In with XXX** (for example, **Log In with GitHub**).
+
+6. On the login page of the external identity provider, enter the username and password of a user configured at the identity provider to log in to KubeSphere.
+
+ 
+
diff --git a/content/en/docs/v3.4/access-control-and-account-management/multi-tenancy-in-kubesphere.md b/content/en/docs/v3.4/access-control-and-account-management/multi-tenancy-in-kubesphere.md
new file mode 100644
index 000000000..422fcf6de
--- /dev/null
+++ b/content/en/docs/v3.4/access-control-and-account-management/multi-tenancy-in-kubesphere.md
@@ -0,0 +1,57 @@
+---
+title: "Kubernetes Multi-tenancy in KubeSphere"
+keywords: "Kubernetes, KubeSphere, multi-tenancy"
+description: "Understand the multi-tenant architecture in KubeSphere."
+linkTitle: "Multi-tenancy in KubeSphere"
+weight: 12100
+---
+
+Kubernetes helps you orchestrate applications and schedule containers, greatly improving resource utilization. However, enterprises and individuals face various challenges in resource sharing and security as they use Kubernetes, which differs from how they managed and maintained clusters in the past.
+
+The first and foremost challenge is how to define multi-tenancy in an enterprise and the security boundary of tenants. [The discussion about multi-tenancy](https://docs.google.com/document/d/1fj3yzmeU2eU8ZNBCUJG97dk_wC7228-e_MmdcmTNrZY) has never stopped in the Kubernetes community, but there is still no definite answer to how a multi-tenant system should be structured.
+
+## Challenges in Kubernetes Multi-tenancy
+
+Multi-tenancy is a common software architecture. Resources in a multi-tenant environment are shared by multiple users, also known as "tenants", with their respective data isolated from each other. The administrator of a multi-tenant Kubernetes cluster must minimize the damage that a compromised or malicious tenant can do to others and make sure resources are fairly allocated.
+
+No matter how an enterprise multi-tenant system is structured, it always comes with the following two building blocks: logical resource isolation and physical resource isolation.
+
+Logically, resource isolation mainly entails API access control and tenant-based permission control. [Role-based access control (RBAC)](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in Kubernetes and namespaces provide logical isolation. Nevertheless, they are not applicable in most enterprise environments. Tenants in an enterprise often need to manage resources across multiple namespaces or even clusters. In addition, a multi-tenant system must be able to provide auditing logs for isolated tenants based on their behavior and event queries.
+
+The isolation of physical resources covers nodes and networks, and it also relates to container runtime security. For example, you can create [NetworkPolicy](../../pluggable-components/network-policy/) resources to control traffic flow and use PodSecurityPolicy objects to control container behavior. [Kata Containers](https://katacontainers.io/) provides a more secure container runtime.
+
+## Kubernetes Multi-tenancy in KubeSphere
+
+To solve the issues above, KubeSphere provides a multi-tenant management solution based on Kubernetes.
+
+
+
+In KubeSphere, the [workspace](../../workspace-administration/what-is-workspace/) is the smallest tenant unit. A workspace enables users to share resources across clusters and projects. Workspace members can create projects in an authorized cluster and invite other members to cooperate in the same project.
+
+A **user** is the instance of a KubeSphere account. Users can be appointed as platform administrators to manage clusters or added to workspaces to cooperate in projects.
+
+Multi-level access control and resource quota limits underlie resource isolation in KubeSphere. They decide how the multi-tenant architecture is built and administered.
+
+### Logical isolation
+
+Similar to Kubernetes, KubeSphere uses RBAC to manage permissions granted to users, thus logically implementing resource isolation.
+
+The access control in KubeSphere is divided into three levels: platform, workspace and project. You use roles to control what permissions users have at different levels for different resources.
+
+1. [Platform roles](/docs/v3.4/quick-start/create-workspace-and-project/): Control what permissions platform users have for platform resources, such as clusters, workspaces and platform members.
+2. [Workspace roles](/docs/v3.4/workspace-administration/role-and-member-management/): Control what permissions workspace members have for workspace resources, such as projects (i.e. namespaces) and DevOps projects.
+3. [Project roles](/docs/v3.4/project-administration/role-and-member-management/): Control what permissions project members have for project resources, such as workloads and pipelines.
+
+### Network isolation
+
+Apart from logically isolating resources, KubeSphere also allows you to set [network isolation policies](../../pluggable-components/network-policy/) for workspaces and projects.
+
+### Auditing
+
+KubeSphere also provides [auditing logs](../../pluggable-components/auditing-logs/) for users.
+
+### Authentication and authorization
+
+For a complete authentication and authorization chain in KubeSphere, see the following diagram. KubeSphere has expanded RBAC rules using the Open Policy Agent (OPA). The KubeSphere team looks to integrate [Gatekeeper](https://github.com/open-policy-agent/gatekeeper) to provide more security management policies.
+
+
diff --git a/content/en/docs/v3.4/application-store/_index.md b/content/en/docs/v3.4/application-store/_index.md
new file mode 100644
index 000000000..8c0f1387f
--- /dev/null
+++ b/content/en/docs/v3.4/application-store/_index.md
@@ -0,0 +1,16 @@
+---
+title: "App Store"
+description: "Getting started with the App Store of KubeSphere"
+layout: "second"
+
+
+linkTitle: "App Store"
+weight: 14000
+
+icon: "/images/docs/v3.x/docs.svg"
+
+---
+
+The KubeSphere App Store, powered by [OpenPitrix](https://github.com/openpitrix/openpitrix), an open-source platform that manages apps across clouds, provides users with enterprise-ready containerized solutions. You can upload your own apps through app templates or add app repositories, from which tenants can choose the apps they want.
+
+The App Store features a highly productive integrated system for application lifecycle management, allowing users to quickly upload, release, deploy, upgrade and remove apps in ways that best suit them. This is how KubeSphere empowers developers to spend less time setting up and more time developing.
diff --git a/content/en/docs/v3.4/application-store/app-developer-guide/_index.md b/content/en/docs/v3.4/application-store/app-developer-guide/_index.md
new file mode 100644
index 000000000..3d1da2629
--- /dev/null
+++ b/content/en/docs/v3.4/application-store/app-developer-guide/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Application Developer Guide"
+weight: 14400
+
+_build:
+ render: false
+---
diff --git a/content/en/docs/v3.4/application-store/app-developer-guide/helm-developer-guide.md b/content/en/docs/v3.4/application-store/app-developer-guide/helm-developer-guide.md
new file mode 100644
index 000000000..b7dc2f393
--- /dev/null
+++ b/content/en/docs/v3.4/application-store/app-developer-guide/helm-developer-guide.md
@@ -0,0 +1,157 @@
+---
+title: "Helm Developer Guide"
+keywords: 'Kubernetes, KubeSphere, helm, development'
+description: 'Develop your own Helm-based app.'
+linkTitle: "Helm Developer Guide"
+weight: 14410
+---
+
+You can upload the Helm chart of an app to KubeSphere so that tenants with necessary permissions can deploy it. This tutorial demonstrates how to prepare Helm charts using NGINX as an example.
+
+## Install Helm
+
+If you have already installed KubeSphere, then Helm is deployed in your environment. Otherwise, refer to the [Helm documentation](https://helm.sh/docs/intro/install/) to install Helm first.
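+
+For example, one common method from the Helm documentation is the installer script:
+
+```bash
+curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
+chmod 700 get_helm.sh
+./get_helm.sh
+```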
+
+## Create a Local Repository
+
+Execute the following commands to create a repository on your machine.
+
+```bash
+mkdir helm-repo
+```
+
+```bash
+cd helm-repo
+```
+
+## Create an App
+
+Use `helm create` to create a folder named `nginx`, which automatically creates YAML templates and directories for your app. Generally, it is not recommended to change the names of files and directories in the top-level directory.
+
+```bash
+$ helm create nginx
+$ tree nginx/
+nginx/
+├── charts
+├── Chart.yaml
+├── templates
+│ ├── deployment.yaml
+│ ├── _helpers.tpl
+│ ├── ingress.yaml
+│ ├── NOTES.txt
+│ └── service.yaml
+└── values.yaml
+```
+
+`Chart.yaml` is used to define the basic information of the chart, including name, API, and app version. For more information, see [Chart.yaml File](../helm-specification/#chartyaml-file).
+
+An example of the `Chart.yaml` file:
+
+```yaml
+apiVersion: v1
+appVersion: "1.0"
+description: A Helm chart for Kubernetes
+name: nginx
+version: 0.1.0
+```
+
+When you deploy Helm-based apps to Kubernetes, you can edit the `values.yaml` file on the KubeSphere console directly.
+
+An example of the `values.yaml` file:
+
+```yaml
+# Default values for test.
+# This is a YAML-formatted file.
+# Declare variables to be passed into your templates.
+
+replicaCount: 1
+
+image:
+ repository: nginx
+ tag: stable
+ pullPolicy: IfNotPresent
+
+nameOverride: ""
+fullnameOverride: ""
+
+service:
+ type: ClusterIP
+ port: 80
+
+ingress:
+ enabled: false
+ annotations: {}
+ # kubernetes.io/ingress.class: nginx
+ # kubernetes.io/tls-acme: "true"
+ path: /
+ hosts:
+ - chart-example.local
+ tls: []
+ # - secretName: chart-example-tls
+ # hosts:
+ # - chart-example.local
+
+resources: {}
+ # We usually recommend not to specify default resources and to leave this as a conscious
+ # choice for the user. This also increases chances charts run on environments with little
+ # resources, such as Minikube. If you do want to specify resources, uncomment the following
+ # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
+ # limits:
+ # cpu: 100m
+ # memory: 128Mi
+ # requests:
+ # cpu: 100m
+ # memory: 128Mi
+
+nodeSelector: {}
+
+tolerations: []
+
+affinity: {}
+```
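+
+Outside the console, the same defaults can also be overridden at install time. A sketch using standard Helm flags (the release name `my-nginx` is an arbitrary example):
+
+```bash
+helm install my-nginx ./nginx --set replicaCount=2 --set service.type=NodePort
+```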
+
+Refer to [Helm Specifications](../helm-specification/) to edit files in the `nginx` folder and save them when you finish editing.
+
+## Create an Index File (Optional)
+
+To add a repository with an HTTP or HTTPS URL in KubeSphere, you need to upload an `index.yaml` file to the object storage in advance. Use Helm to create the index file by executing the following command in the parent directory of `nginx`.
+
+```bash
+helm repo index .
+```
+
+```bash
+$ ls
+index.yaml nginx
+```
+
+{{< notice note >}}
+
+- If the repository URL is S3-styled, an index file will be created automatically in the object storage when you add apps to the repository.
+
+- For more information about how to add repositories to KubeSphere, see [Import a Helm Repository](../../../workspace-administration/app-repository/import-helm-repository/).
+
+{{</ notice >}}
+
+## Package the Chart
+
+Go to the parent directory of `nginx` and execute the following command to package your chart, which creates a `.tgz` package.
+
+```bash
+helm package nginx
+```
+
+```bash
+$ ls
+nginx nginx-0.1.0.tgz
+```
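+
+Before uploading, you can optionally validate the chart locally. Both commands below are standard Helm subcommands:
+
+```bash
+helm lint nginx       # check the chart for possible issues
+helm template nginx   # render the templates locally to inspect the output
+```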
+
+## Upload Your App
+
+Now that you have your Helm-based app ready, you can upload it to KubeSphere and test it on the platform.
+
+## See Also
+
+[Helm Specifications](../helm-specification/)
+
+[Import a Helm Repository](../../../workspace-administration/app-repository/import-helm-repository/)
diff --git a/content/en/docs/v3.4/application-store/app-developer-guide/helm-specification.md b/content/en/docs/v3.4/application-store/app-developer-guide/helm-specification.md
new file mode 100644
index 000000000..ab16d028a
--- /dev/null
+++ b/content/en/docs/v3.4/application-store/app-developer-guide/helm-specification.md
@@ -0,0 +1,130 @@
+---
+title: "Helm Specifications"
+keywords: 'Kubernetes, KubeSphere, Helm, specifications'
+description: 'Understand the chart structure and specifications.'
+linkTitle: "Helm Specifications"
+weight: 14420
+---
+
+Helm charts serve as a packaging format. A chart is a collection of files that describe a related set of Kubernetes resources. For more information, see the [Helm documentation](https://helm.sh/docs/topics/charts/).
+
+## Structure
+
+All related files of a chart are stored in a directory, which generally contains:
+
+```text
+chartname/
+ Chart.yaml # A YAML file containing basic information about the chart, such as version and name.
+ LICENSE # (Optional) A plain text file containing the license for the chart.
+ README.md # (Optional) The description of the app and how-to guide.
+ values.yaml # The default configuration values for this chart.
+ values.schema.json # (Optional) A JSON Schema for imposing a structure on the values.yaml file.
+ charts/ # A directory containing any charts upon which this chart depends.
+ crds/ # Custom Resource Definitions.
+ templates/ # A directory of templates that will generate valid Kubernetes configuration files with corresponding values provided.
+ templates/NOTES.txt # (Optional) A plain text file with usage notes.
+```
+
+## Chart.yaml File
+
+You must provide the `Chart.yaml` file for a chart. Here is an example of the file with explanations for each field.
+
+```yaml
+apiVersion: (Required) The chart API version.
+name: (Required) The name of the chart.
+version: (Required) The version, following the SemVer 2 standard.
+kubeVersion: (Optional) The compatible Kubernetes version, following the SemVer 2 standard.
+description: (Optional) A single-sentence description of the app.
+type: (Optional) The type of the chart.
+keywords:
+ - (Optional) A list of keywords about the app.
+home: (Optional) The URL of the app.
+sources:
+ - (Optional) A list of URLs to source code for this app.
+dependencies: (Optional) A list of the chart requirements.
+ - name: The name of the chart, such as nginx.
+ version: The version of the chart, such as "1.2.3".
+ repository: The repository URL ("https://example.com/charts") or alias ("@repo-name").
+ condition: (Optional) A yaml path that resolves to a boolean, used for enabling/disabling charts (for example, subchart1.enabled).
+ tags: (Optional)
+ - Tags can be used to group charts for enabling/disabling together.
+ import-values: (Optional)
+ - ImportValues holds the mapping of source values to parent key to be imported. Each item can be a string or pair of child/parent sublist items.
+ alias: (Optional) Alias to be used for the chart. It is useful when you have to add the same chart multiple times.
+maintainers: (Optional)
+ - name: (Required) The maintainer name.
+ email: (Optional) The maintainer email.
+ url: (Optional) A URL for the maintainer.
+icon: (Optional) A URL to an SVG or PNG image to be used as an icon.
+appVersion: (Optional) The app version. This needn't be SemVer.
+deprecated: (Optional, boolean) Whether this chart is deprecated.
+annotations:
+ example: (Optional) A list of annotations keyed by name.
+```
+
+{{< notice note >}}
+
+- The field `dependencies` is used to define chart dependencies which were located in a separate file `requirements.yaml` for `v1` charts. For more information, see [Chart Dependencies](https://helm.sh/docs/topics/charts/#chart-dependencies).
+- The field `type` is used to define the type of chart. Allowed values are `application` and `library`. For more information, see [Chart Types](https://helm.sh/docs/topics/charts/#chart-types).
+
+{{</ notice >}}
+
+## Values.yaml and Templates
+
+Written in the [Go template language](https://golang.org/pkg/text/template/), Helm chart templates are stored in the `templates` folder of a chart. There are two ways to provide values for the templates:
+
+1. Make a `values.yaml` file inside of a chart with default values that can be referenced.
+2. Make a YAML file that contains necessary values and use the file through the command line with `helm install`, as shown in the sketch below.
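+
+A sketch of the second approach, using the standard `-f`/`--values` flag (the file and release names are arbitrary examples):
+
+```bash
+helm install my-release ./chartname -f custom-values.yaml
+```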
+
+Here is an example of the template in the `templates` folder.
+
+```yaml
+apiVersion: v1
+kind: ReplicationController
+metadata:
+ name: deis-database
+ namespace: deis
+ labels:
+ app.kubernetes.io/managed-by: deis
+spec:
+ replicas: 1
+ selector:
+ app.kubernetes.io/name: deis-database
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: deis-database
+ spec:
+ serviceAccount: deis-database
+ containers:
+ - name: deis-database
+ image: {{.Values.imageRegistry}}/postgres:{{.Values.dockerTag}}
+ imagePullPolicy: {{.Values.pullPolicy}}
+ ports:
+ - containerPort: 5432
+ env:
+ - name: DATABASE_STORAGE
+ value: {{default "minio" .Values.storage}}
+```
+
+The above example defines a ReplicationController template in Kubernetes. There are some values referenced in it which are defined in `values.yaml`.
+
+- `imageRegistry`: The Docker image registry.
+- `dockerTag`: The Docker image tag.
+- `pullPolicy`: The image pulling policy.
+- `storage`: The storage backend. It defaults to `minio`.
+
+An example `values.yaml` file:
+
+```text
+imageRegistry: "quay.io/deis"
+dockerTag: "latest"
+pullPolicy: "Always"
+storage: "s3"
+```
+
+## Reference
+
+[Helm Documentation](https://helm.sh/docs/)
+
+[Charts](https://helm.sh/docs/topics/charts/)
\ No newline at end of file
diff --git a/content/en/docs/v3.4/application-store/app-lifecycle-management.md b/content/en/docs/v3.4/application-store/app-lifecycle-management.md
new file mode 100644
index 000000000..4499d09f4
--- /dev/null
+++ b/content/en/docs/v3.4/application-store/app-lifecycle-management.md
@@ -0,0 +1,220 @@
+---
+title: "Kubernetes Application Lifecycle Management"
+keywords: 'Kubernetes, KubeSphere, app-store'
+description: 'Manage your app across the entire lifecycle, including submission, review, test, release, upgrade and removal.'
+linkTitle: 'Application Lifecycle Management'
+weight: 14100
+---
+
+KubeSphere integrates [OpenPitrix](https://github.com/openpitrix/openpitrix), an open-source multi-cloud application management platform, to set up the App Store, managing Kubernetes applications throughout their entire lifecycle. The App Store supports two kinds of application deployment:
+
+- **Template-Based Apps** provide a way for developers and independent software vendors (ISVs) to share applications with users in a workspace. You can also import third-party app repositories within a workspace.
+- **Composed Apps** help users quickly build a complete application by composing multiple microservices. KubeSphere allows users to select existing services or create new ones to build a composed app on the one-stop console.
+
+Using [Redis](https://redis.io/) as an example application, this tutorial demonstrates how to manage the Kubernetes app throughout the entire lifecycle, including submission, review, test, release, upgrade and removal.
+
+## Prerequisites
+
+- You need to enable the [KubeSphere App Store (OpenPitrix)](../../pluggable-components/app-store/).
+- You need to create a workspace, a project and a user (`project-regular`). For more information, see [Create Workspaces, Projects, Users and Roles](../../quick-start/create-workspace-and-project/).
+
+## Hands-on Lab
+
+### Step 1: Create a customized role and two users
+
+You need to create two users first, one for ISVs (`isv`) and the other (`app-reviewer`) for app technical reviewers.
+
+1. Log in to the KubeSphere console with the user `admin`. Click **Platform** in the upper-left corner and select **Access Control**. In **Platform Roles**, click **Create**.
+
+2. Set a name for the role, such as `app-review`, and click **Edit Permissions**.
+
+3. In **App Management**, choose **App Template Management** and **App Template Viewing** in the permission list, and then click **OK**.
+
+ {{< notice note >}}
+
+ The user who is granted the role `app-review` has the permission to view the App Store on the platform and manage apps, including review and removal.
+
+   {{</ notice >}}
+
+4. Now that the role is ready, you need to create a user and grant the role `app-review` to it. In **Users**, click **Create**. Provide the required information (set **Username** to `app-reviewer`) and click **OK**.
+
+5. Similarly, create another user `isv`, and grant the role of `platform-regular` to it.
+
+6. Invite both users created above to an existing workspace such as `demo-workspace`, and grant them the role of `workspace-admin`.
+
+### Step 2: Upload and submit an application
+
+1. Log in to KubeSphere as `isv` and go to your workspace. You need to upload the example app Redis to this workspace so that it can be used later. First, download the app [Redis 11.3.4](https://github.com/kubesphere/tutorial/raw/master/tutorial%205%20-%20app-store/redis-11.3.4.tgz) and click **Upload Template** in **App Templates**.
+
+ {{< notice note >}}
+
+ In this example, a new version of Redis will be uploaded later to demonstrate the upgrade feature.
+
+   {{</ notice >}}
+
+2. In the dialog that appears, click **Upload Helm Chart** to upload the chart file. Click **OK** to continue.
+
+3. Basic information of the app displays under **App Information**. To upload an icon for the app, click **Upload Icon**. You can also skip it and click **OK** directly.
+
+ {{< notice note >}}
+
+ The maximum accepted resolution of the app icon is 96 x 96 pixels.
+
+   {{</ notice >}}
+
+4. The app displays in the template list with the status **Developing** after it is successfully uploaded, which means this app is under development. The uploaded app is visible to all members in the same workspace.
+
+5. Go to the detail page of the app template by clicking Redis from the list. You can edit the basic information of this app by clicking **Edit**.
+
+6. You can customize the app's basic information by specifying the fields in the pop-up window.
+
+7. Click **OK** to save your changes, then you can test this application by deploying it to Kubernetes. Click the draft version to expand the menu and click **Install**.
+
+ {{< notice note >}}
+
+ If you don't want to test the app, you can submit it for review directly. However, it is recommended that you test your app deployment and function first before you submit it for review, especially in a production environment. This helps you detect any problems in advance and accelerate the review process.
+
+   {{</ notice >}}
+
+8. Select the cluster and project to which you want to deploy the app, set up different configurations for the app, and then click **Install**.
+
+ {{< notice note >}}
+
+ Some apps can be deployed with all configurations set in a form. You can use the toggle switch to see its YAML file, which contains all parameters you need to specify in the form.
+
+   {{</ notice >}}
+
+9. Wait for a few minutes, then switch to the tab **App Instances**. You will find that Redis has been deployed successfully.
+
+10. After you test the app with no issues found, you can click **Submit for Release** to submit this application for release.
+
+    {{< notice note >}}
+
+    The version number must start with a number and contain decimal points.
+
+    {{</ notice >}}
+
+11. After the app is submitted, the app status will change to **Submitted**. Now app reviewers can release it.
+
+### Step 3: Release the application
+
+1. Log out of KubeSphere and log back in as `app-reviewer`. Click **Platform** in the upper-left corner and select **App Store Management**. On the **App Release** page, the app submitted in the previous step displays under the tab **Unreleased**.
+
+2. To release this app, click it to inspect the app information, introduction, chart file and update logs from the pop-up window.
+
+3. The reviewer needs to decide whether the app meets the release criteria on the App Store. Click **Pass** to approve it or **Reject** to deny an app submission.
+
+### Step 4: Release the application to the App Store
+
+After the app is approved, `isv` can release the Redis application to the App Store, allowing all users on the platform to find and deploy this application.
+
+1. Log out of KubeSphere and log back in as `isv`. Go to your workspace and click Redis on the **Template-Based Apps** page. On its details page, expand the version menu, then click **Release to Store**. In the pop-up prompt, click **OK** to confirm.
+
+2. Under **App Release**, you can see the app status. **Activated** means it is available in the App Store.
+
+3. Click **View in Store** to go to its **Versions** page in the App Store. Alternatively, click **App Store** in the upper-left corner, and you can also see the app.
+
+ {{< notice note >}}
+
+ You may see two Redis apps in the App Store, one of which is a built-in app in KubeSphere. Note that a newly-released app displays at the beginning of the list in the App Store.
+
+   {{</ notice >}}
+
+4. Now, users in the workspace can install Redis from the App Store. To install the app to Kubernetes, click the app to go to its **App Information** page, and click **Install**.
+
+ {{< notice note >}}
+
+ If you have trouble installing an application and the **Status** column shows **Failed**, you can hover your cursor over the **Failed** icon to see the error message.
+
+   {{</ notice >}}
+
+### Step 5: Create an application category
+
+`app-reviewer` can create multiple categories for different types of applications based on their function and usage. Similar to tags, categories can be used as filters in the App Store, such as Big Data, Middleware, and IoT.
+
+1. Log in to KubeSphere as `app-reviewer`. To create a category, go to the **App Store Management** page and click
in **App Categories**.
+
+2. Set a name and icon for the category in the dialog, then click **OK**. For Redis, you can enter `Database` for the field **Name**.
+
+ {{< notice note >}}
+
+ Usually, an app reviewer creates necessary categories in advance and ISVs select the category in which an app appears before submitting it for review. A newly-created category has no app in it.
+
+   {{</ notice >}}
+
+3. As the category is created, you can assign the category to your app. In **Uncategorized**, select Redis and click **Change Category**.
+
+4. In the dialog, select the category (**Database**) from the drop-down list and click **OK**.
+
+5. The app displays in the category as expected.
+
+### Step 6: Add a new version
+
+To allow workspace users to upgrade apps, you need to add new app versions to KubeSphere first. Follow the steps below to add a new version for the example app.
+
+1. Log in to KubeSphere as `isv` again and navigate to **Template-Based Apps**. Click the app Redis in the list.
+
+2. Download [Redis 12.0.0](https://github.com/kubesphere/tutorial/raw/master/tutorial%205%20-%20app-store/redis-12.0.0.tgz), which is a new version of Redis for demonstration in this tutorial. On the tab **Versions**, click **New Version** on the right to upload the package you just downloaded.
+
+3. Click **Upload Helm Chart** and click **OK** after it is uploaded.
+
+4. The new app version displays in the version list. You can click it to expand the menu and test the new version. You can also submit it for review and release it to the App Store, following the same steps shown above.
+
+### Step 7: Upgrade an application
+
+After a new version is released to the App Store, all users can upgrade this application to the new version.
+
+{{< notice note >}}
+
+To follow the steps below, you must first deploy an old version of the app. In this example, Redis 11.3.4 was already deployed in the project `demo-project` and its new version 12.0.0 was released to the App Store.
+
+{{</ notice >}}
+
+1. Log in to KubeSphere as `project-regular`, navigate to the **Apps** page of the project, and click the app to upgrade.
+
+2. Click **More** and select **Edit Settings** from the drop-down list.
+
+3. In the window that appears, you can see the YAML file of application configurations. Select the new version from the drop-down list on the right. You can customize the YAML file of the new version. In this tutorial, click **Update** to use the default configurations directly.
+
+ {{< notice note >}}
+
+   To customize the current application configurations through the YAML file, select the same version from the drop-down list on the right as the one on the left.
+
+   {{</ notice >}}
+
+4. On the **Apps** page, you can see that the app is being upgraded. The status will change to **Running** when the upgrade finishes.
+
+### Step 8: Suspend an application
+
+You can choose to remove an app entirely from the App Store or suspend a specific app version.
+
+1. Log in to KubeSphere as `app-reviewer`. Click **Platform** in the upper-left corner and select **App Store Management**. On the **App Store** page, click Redis.
+
+2. On the detail page, click **Suspend App**, and then click **OK** in the dialog to confirm the operation and remove the app from the App Store.
+
+ {{< notice note >}}
+
+ Removing an app from the App Store does not affect tenants who are using the app.
+
+   {{</ notice >}}
+
+3. To make the app available in the App Store again, click **Activate App**.
+
+4. To suspend a specific app version, expand the version menu and click **Suspend Version**. In the dialog that appears, click **OK** to confirm.
+
+ {{< notice note >}}
+
+ After an app version is suspended, this version is not available in the App Store. Suspending an app version does not affect tenants who are using this version.
+
+   {{</ notice >}}
+
+5. To make the app version available in the App Store again, click **Activate Version**.
+
+
+
+
+
+
+
+
+
diff --git a/content/en/docs/v3.4/application-store/built-in-apps/_index.md b/content/en/docs/v3.4/application-store/built-in-apps/_index.md
new file mode 100644
index 000000000..2ee1bc0ca
--- /dev/null
+++ b/content/en/docs/v3.4/application-store/built-in-apps/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Built-in Applications"
+weight: 14200
+
+_build:
+ render: false
+---
diff --git a/content/en/docs/v3.4/application-store/built-in-apps/deploy-chaos-mesh.md b/content/en/docs/v3.4/application-store/built-in-apps/deploy-chaos-mesh.md
new file mode 100644
index 000000000..2041c2eb0
--- /dev/null
+++ b/content/en/docs/v3.4/application-store/built-in-apps/deploy-chaos-mesh.md
@@ -0,0 +1,82 @@
+---
+title: 'Deploy Chaos Mesh on KubeSphere'
+tag: 'KubeSphere, Kubernetes, Applications, Chaos Engineering, Chaos experiments, Chaos Mesh'
+keywords: 'Chaos Mesh, Kubernetes, Helm, KubeSphere'
+description: 'Learn how to deploy Chaos Mesh on KubeSphere and start running chaos experiments.'
+linkTitle: "Deploy Chaos Mesh on KubeSphere"
+---
+
+[Chaos Mesh](https://github.com/chaos-mesh/chaos-mesh) is a cloud-native Chaos Engineering platform that orchestrates chaos in Kubernetes environments. With Chaos Mesh, you can test your system's resilience and robustness on Kubernetes by injecting various types of faults into Pods, network, file system, and even the kernel.
+
+
+
+## Enable App Store on KubeSphere
+
+1. Make sure you have installed and enabled the [KubeSphere App Store](../../../pluggable-components/app-store/).
+
+2. You need to create a workspace, a project, and a user (`project-regular`) for this tutorial. The user needs to be a platform regular user and must be invited to the project with the `operator` role. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Chaos experiments with Chaos Mesh
+
+### Step 1: Deploy Chaos Mesh
+
+1. Log in to KubeSphere as `project-regular`, search for **chaos-mesh** in the **App Store**, and click the search result to open the app.
+
+ 
+
+2. In the **App Information** page, click **Install** on the upper right corner.
+
+ 
+
+3. In the **App Settings** page, set the application **Name**, **Location** (as your Namespace), and **App Version**, and then click **Next** on the upper right corner.
+
+ 
+
+4. Configure the `values.yaml` file as needed, or click **Install** to use the default configuration.
+
+ 
+
+5. Wait for the deployment to finish. Upon completion, Chaos Mesh will be shown as **Running** in KubeSphere.
+
+ 
+
+
+### Step 2: Visit Chaos Dashboard
+
+1. In the **Resource Status** page, copy the **NodePort** of `chaos-dashboard`.
+
+ 
+
+2. Access the Chaos Dashboard by entering `${NodeIP}:${NODEPORT}` in your browser. Refer to [Manage User Permissions](https://chaos-mesh.org/docs/manage-user-permissions/) to generate a token and log in to the Chaos Dashboard.
+
+ 
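+
+   Alternatively, you can look up the NodePort from the command line. A sketch, assuming Chaos Mesh was installed into the `chaos-mesh` namespace with the default Service name:
+
+   ```bash
+   # Print the NodePort of the chaos-dashboard Service
+   kubectl -n chaos-mesh get svc chaos-dashboard -o jsonpath='{.spec.ports[0].nodePort}'
+   ```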
+
+### Step 3: Create a chaos experiment
+
+Before creating a chaos experiment, you should identify and deploy your experiment target, for example, to test how an application behaves under network latency. Here, we use the demo application `web-show` as the target to be tested, and the test goal is to observe the network latency of the system. You can deploy `web-show` with the following command:
+
+```bash
+curl -sSL https://mirrors.chaos-mesh.org/latest/web-show/deploy.sh | bash
+```
+
+> Note: The network latency from the `web-show` application Pod to the kube-system Pod can be observed directly.
+
+1. From your web browser, visit `${NodeIP}:8081` to access the **Web Show** application.
+
+ 
+
+2. Log in to Chaos Dashboard to create a chaos experiment. To observe the effect of network latency on the application, set the **Target** to **Network Attack** to simulate a network delay scenario.
+
+ 
+
+ The **Scope** of the experiment is set to `app: web-show`.
+
+ 
+
+3. Start the chaos experiment by submitting it.
+
+ 
+
+Now, you should be able to visit **Web Show** to observe experiment results:
+
+
\ No newline at end of file
diff --git a/content/en/docs/v3.4/application-store/built-in-apps/etcd-app.md b/content/en/docs/v3.4/application-store/built-in-apps/etcd-app.md
new file mode 100644
index 000000000..f34455ffa
--- /dev/null
+++ b/content/en/docs/v3.4/application-store/built-in-apps/etcd-app.md
@@ -0,0 +1,58 @@
+---
+title: "Deploy etcd on KubeSphere"
+keywords: 'Kubernetes, KubeSphere, etcd, app-store'
+description: 'Learn how to deploy etcd from the App Store of KubeSphere and access its service.'
+linkTitle: "Deploy etcd on KubeSphere"
+weight: 14210
+---
+
+Written in Go, [etcd](https://etcd.io/) is a distributed key-value store to store data that needs to be accessed by a distributed system or cluster of machines. In Kubernetes, it is the backend for service discovery and stores cluster states and configurations.
+
+This tutorial walks you through an example of deploying etcd from the App Store of KubeSphere.
+
+## Prerequisites
+
+- Please make sure you [enable the OpenPitrix system](../../../pluggable-components/app-store/).
+- You need to create a workspace, a project, and a user (`project-regular`) for this tutorial. The user needs to be a platform regular user and must be invited to the project with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Hands-on Lab
+
+### Step 1: Deploy etcd from the App Store
+
+1. On the **Overview** page of the project `demo-project`, click **App Store** in the upper-left corner.
+
+2. Find etcd and click **Install** on the **App Information** page.
+
+3. Set a name and select an app version. Make sure etcd is deployed in `demo-project` and click **Next**.
+
+4. On the **App Settings** page, specify the size of the persistent volume for etcd and click **Install**.
+
+ {{< notice note >}}
+
+ To specify more values for etcd, use the toggle switch to see the app's manifest in YAML format and edit its configurations.
+
+   {{</ notice >}}
+
+5. In **Template-Based Apps** of the **Apps** page, wait until etcd is up and running.
+
+### Step 2: Access the etcd service
+
+After the app is deployed, you can use etcdctl, a command-line tool for interacting with the etcd server, to access etcd on the KubeSphere console directly.
+
+1. Navigate to **StatefulSets** in **Workloads**, and click the service name of etcd.
+
+2. Under **Pods**, expand the menu to see container details, and then click the **Terminal** icon.
+
+3. In the terminal, you can read and write data directly. For example, run the following two commands in order.
+
+ ```bash
+ etcdctl set /name kubesphere
+ ```
+
+ ```bash
+ etcdctl get /name
+ ```
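+
+   `etcdctl set` and `etcdctl get` shown above use the v2-style syntax. If the deployed image defaults to the etcd v3 API (an assumption that depends on the app version), the equivalents are:
+
+   ```bash
+   # v3 API: write and read the key
+   etcdctl put /name kubesphere
+   etcdctl get /name
+   ```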
+
+4. For clients within the KubeSphere cluster, the etcd service can be accessed through `
on the right of a project gateway to select an operation from the drop-down menu:
+
+- **Edit**: Edit configurations of the project gateway.
+- **Disable**: Disable the project gateway.
+
+{{< notice note >}}
+
+If a project gateway exists prior to the creation of a cluster gateway, the project gateway address may switch between the cluster gateway address and the project gateway address. It is recommended that you use either the cluster gateway or the project gateway, but not both.
+
+{{</ notice >}}
+
+For more information about how to create project gateways, see [Project Gateway](../../../project-administration/project-gateway/).
\ No newline at end of file
diff --git a/content/en/docs/v3.4/cluster-administration/cluster-settings/cluster-visibility-and-authorization.md b/content/en/docs/v3.4/cluster-administration/cluster-settings/cluster-visibility-and-authorization.md
new file mode 100644
index 000000000..e95527e5b
--- /dev/null
+++ b/content/en/docs/v3.4/cluster-administration/cluster-settings/cluster-visibility-and-authorization.md
@@ -0,0 +1,53 @@
+---
+title: "Cluster Visibility and Authorization"
+keywords: "Cluster Visibility, Cluster Management"
+description: "Learn how to set up cluster visibility and authorization."
+linkTitle: "Cluster Visibility and Authorization"
+weight: 8610
+---
+
+In KubeSphere, you can allocate a cluster to multiple workspaces through authorization so that workspace resources can all run on the cluster. At the same time, a workspace can also be associated with multiple clusters. Workspace users with necessary permissions can create multi-cluster projects using clusters allocated to the workspace.
+
+This guide demonstrates how to set cluster visibility.
+
+## Prerequisites
+
+- You need to enable the [multi-cluster feature](../../../multicluster-management/).
+- You need to have a workspace and a user that has the permission to create workspaces, such as `ws-manager`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Set Cluster Visibility
+
+### Select available clusters when you create a workspace
+
+1. Log in to KubeSphere with a user that has the permission to create a workspace, such as `ws-manager`.
+
+2. Click **Platform** in the upper-left corner and select **Access Control**. In **Workspaces** from the navigation bar, click **Create**.
+
+3. Provide the basic information for the workspace and click **Next**.
+
+4. On the **Cluster Settings** page, you can see a list of available clusters. Select the clusters that you want to allocate to the workspace and click **Create**.
+
+5. After the workspace is created, workspace members with necessary permissions can create resources that run on the associated cluster.
+
+ {{< notice warning >}}
+
+Avoid creating resources on the host cluster to prevent excessive loads, which can decrease stability across clusters.
+
+{{</ notice >}}
+
+### Set cluster visibility after a workspace is created
+
+After a workspace is created, you can allocate additional clusters to the workspace through authorization or unbind a cluster from the workspace. Follow the steps below to adjust the visibility of a cluster.
+
+1. Log in to KubeSphere with a user that has the permission to manage clusters, such as `admin`.
+
+2. Click **Platform** in the upper-left corner and select **Cluster Management**. Select a cluster from the list to view cluster information.
+
+3. In **Cluster Settings** from the navigation bar, select **Cluster Visibility**.
+
+4. You can see the list of authorized workspaces, which means the current cluster is available to resources in all these workspaces.
+
+5. Click **Edit Visibility** to set the cluster visibility. You can select new workspaces that will be able to use the cluster or unbind it from a workspace.
+
+### Make a cluster public
+
+You can check **Set as Public Cluster** so that all platform users can access the cluster and create and schedule resources on it.
diff --git a/content/en/docs/v3.4/cluster-administration/cluster-settings/log-collections/_index.md b/content/en/docs/v3.4/cluster-administration/cluster-settings/log-collections/_index.md
new file mode 100644
index 000000000..275de2bb0
--- /dev/null
+++ b/content/en/docs/v3.4/cluster-administration/cluster-settings/log-collections/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Log Receivers"
+weight: 8620
+
+_build:
+ render: false
+---
\ No newline at end of file
diff --git a/content/en/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-es-as-receiver.md b/content/en/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-es-as-receiver.md
new file mode 100644
index 000000000..1b43c5c85
--- /dev/null
+++ b/content/en/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-es-as-receiver.md
@@ -0,0 +1,35 @@
+---
+title: "Add Elasticsearch as a Receiver"
+keywords: 'Kubernetes, log, elasticsearch, pod, container, fluentbit, output'
+description: 'Learn how to add Elasticsearch to receive container logs, resource events, or audit logs.'
+linkTitle: "Add Elasticsearch as a Receiver"
+weight: 8622
+---
+You can use Elasticsearch, Kafka, and Fluentd as log receivers in KubeSphere. This tutorial demonstrates how to add an Elasticsearch receiver.
+
+## Prerequisites
+
+- You need a user with a role that includes the **Cluster Management** permission. For example, you can log in to the console as `admin` directly or create a new role with the permission and assign it to a user.
+
+- Before adding a log receiver, you need to enable any of the `logging`, `events`, or `auditing` components. For more information, see [Enable Pluggable Components](../../../../pluggable-components/). `logging` is enabled as an example in this tutorial.
+
+## Add Elasticsearch as a Receiver
+
+1. Log in to KubeSphere as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+ {{< notice note >}}
+
+If you have enabled the [multi-cluster feature](../../../../multicluster-management/), you can select a specific cluster.
+
+{{</ notice >}}
+
+2. On the navigation pane on the left, click **Cluster Settings** > **Log Receivers**.
+
+3. Click **Add Log Receiver** and choose **Elasticsearch**.
+
+4. Provide the Elasticsearch service address and port number.
+
+5. Elasticsearch appears in the receiver list on the **Log Receivers** page with the status **Collecting**.
+
+6. To verify whether Elasticsearch is receiving logs sent from Fluent Bit, click **Log Search** in the **Toolbox** in the lower-right corner and search logs on the console. For more information, read [Log Query](../../../../toolbox/log-query/).
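+
+7. Optionally, query Elasticsearch directly to confirm that log indices are being created. This is a sketch assuming the service is reachable at `<es-host>:9200` without authentication; KubeSphere log indices typically use the `ks-logstash-log` prefix:
+
+   ```bash
+   # List indices to confirm that logs are arriving
+   curl "http://<es-host>:9200/_cat/indices?v"
+   ```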
+
diff --git a/content/en/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-fluentd-as-receiver.md b/content/en/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-fluentd-as-receiver.md
new file mode 100644
index 000000000..b674da974
--- /dev/null
+++ b/content/en/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-fluentd-as-receiver.md
@@ -0,0 +1,154 @@
+---
+title: "Add Fluentd as a Receiver"
+keywords: 'Kubernetes, log, fluentd, pod, container, fluentbit, output'
+description: 'Learn how to add Fluentd to receive logs, events or audit logs.'
+linkTitle: "Add Fluentd as a Receiver"
+weight: 8624
+---
+You can use Elasticsearch, Kafka, and Fluentd as log receivers in KubeSphere. This tutorial demonstrates:
+
+- How to deploy Fluentd as a Deployment and create the corresponding Service and ConfigMap.
+- How to add Fluentd as a log receiver to receive logs sent from Fluent Bit and then output them to stdout.
+- How to verify whether Fluentd receives logs successfully.
+
+## Prerequisites
+
+- You need a user with a role that includes the **Cluster Management** permission. For example, you can log in to the console as `admin` directly or create a new role with the permission and assign it to a user.
+
+- Before adding a log receiver, you need to enable any of the `logging`, `events`, or `auditing` components. For more information, see [Enable Pluggable Components](../../../../pluggable-components/). `logging` is enabled as an example in this tutorial.
+
+## Step 1: Deploy Fluentd as a Deployment
+
+Usually, Fluentd is deployed as a DaemonSet in Kubernetes to collect container logs on each node. KubeSphere instead uses Fluent Bit as the node-level collector because of its low memory footprint, while Fluentd features numerous output plugins. Hence, KubeSphere deploys Fluentd as a Deployment to forward the logs it receives from Fluent Bit to more destinations such as S3, MongoDB, Cassandra, MySQL, syslog, and Splunk.
+
+Run the following commands:
+
+{{< notice note >}}
+
+- The following commands create the Fluentd Deployment, Service, and ConfigMap in the `default` namespace, and add a filter to the Fluentd ConfigMap to exclude logs from the `default` namespace so that Fluent Bit and Fluentd do not collect each other's logs in a loop.
+- Change the namespace if you want to deploy Fluentd into a different namespace.
+
+{{</ notice >}}
+
+```yaml
+cat <
on the right of the alerting policy.
+
+1. Click **Edit** from the drop-down list and edit the alerting policy following the same steps as when you created it. Click **OK** on the **Message Settings** page to save it.
+
+2. Click **Delete** from the drop-down list to delete an alerting policy.
+
+## View an Alerting Policy
+
+Click the name of an alerting policy on the **Alerting Policies** page to see its detailed information, including the alerting rule and alerting history. You can also see the rule expression, which is based on the template you used when creating the alerting policy.
+
+Under **Monitoring**, the **Alert Monitoring** chart shows the actual usage or amount of resources over time. **Alerting Message** displays the customized message you set in notifications.
+
+{{< notice note >}}
+
+You can click
on the top navigation bar.
+
+2. Go to the **Repositories** page and you can see that Nexus provides three types of repositories:
+
+ - `proxy`: the proxy for a remote repository to download and store resources on Nexus as cache.
+ - `hosted`: the repository storing artifacts on Nexus.
+ - `group`: a group of configured Nexus repositories.
+
+3. You can click a repository to view its details. For example, click **maven-public** to go to its details page, and you can see its **URL**.
+
+### Step 2: Modify `pom.xml` in your GitHub repository
+
+1. Log in to GitHub. Fork [the example repository](https://github.com/devops-ws/learn-pipeline-java) to your own GitHub account.
+
+2. In your own GitHub repository of **learn-pipeline-java**, click the file `pom.xml` in the root directory.
+
+3. Click
+
+| Code Repository | Parameter |
+| --- | --- |
+| GitHub | Credential: Select the credential of the code repository. |
+| GitLab | |
+| Bitbucket | |
+| Git | |
+
+| Parameter | Description |
+| --- | --- |
+| Revision | The commit ID, branch, or tag of the repository. For example, master, v1.2.0, 0a1b2c3, or HEAD. |
+| Manifest File Path | The manifest file path. For example, config/default. |
+
+| Parameter | Description |
+| --- | --- |
+| Prune resources | If checked, deletes resources that are no longer defined in Git. By default and as a safety mechanism, auto sync does not delete resources. |
+| Self-heal | If checked, forces the state defined in Git onto the cluster when a deviation in the cluster is detected. By default, changes made to the live cluster do not trigger auto sync. |
+
+| Parameter | Description |
+| --- | --- |
+| Prune resources | If checked, deletes resources that are no longer defined in Git. By default and as a safety mechanism, manual sync does not delete resources but marks them as out-of-sync. |
+| Dry run | Previews the apply without affecting the cluster. |
+| Apply only | If checked, skips pre/post sync hooks and just runs `kubectl apply` for application resources. |
+| Force | If checked, uses `kubectl apply --force` to sync resources. |
+
+| Parameter | Description |
+| --- | --- |
+| Skip schema validation | Disables kubectl validation; `--validate=false` is added when `kubectl apply` runs. |
+| Auto create project | Automatically creates projects for application resources if the projects do not exist. |
+| Prune last | Prunes resources as a final, implicit wave of the sync operation, after other resources have been deployed and become healthy. |
+| Selective sync | Syncs only out-of-sync resources. |
+
+| Parameter | Description |
+| --- | --- |
+| foreground | Deletes dependent resources first, and then deletes the owner resource. |
+| background | Deletes the owner resource immediately, and then deletes the dependent resources in the background. |
+| orphan | Deletes the owner resource and leaves the dependent resources orphaned. |
+
+| Item | Description |
+| --- | --- |
+| Name | Name of the continuous deployment. |
+| Health Status | Health status of the continuous deployment. |
+| Sync Status | Synchronization status of the continuous deployment. |
+| Deployment Location | Cluster and project where resources are deployed. |
+| Update Time | Time when resources are updated. |
to edit the file. For example, change the value of `spec.replicas` to `3`.
+
+4. Click **Commit changes** at the bottom of the page.
+
+### Check the webhook deliveries
+
+1. On the **Webhooks** page of your own repository, click the webhook.
+
+2. Click **Recent Deliveries** and click a specific delivery record to view its details.
+
+### Check the pipeline
+
+1. Log in to the KubeSphere web console as `project-regular`. Go to your DevOps project and click the pipeline.
+
+2. On the **Run Records** tab, check that a new run is triggered by the pull request submitted to the `sonarqube` branch of the remote repository.
+
+3. Go to the **Pods** page of the project `kubesphere-sample-dev` and check the status of the three Pods. If all three Pods are running, the pipeline works properly.
+
+
+
diff --git a/content/en/docs/v3.4/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md b/content/en/docs/v3.4/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md
new file mode 100644
index 000000000..1155c3bbf
--- /dev/null
+++ b/content/en/docs/v3.4/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md
@@ -0,0 +1,104 @@
+---
+title: "Use Pipeline Templates"
+keywords: 'KubeSphere, Kubernetes, Jenkins, Graphical Pipelines, Pipeline Templates'
+description: 'Understand how to use pipeline templates on KubeSphere.'
+linkTitle: "Use Pipeline Templates"
+weight: 11213
+---
+
+KubeSphere offers a graphical editing panel where the stages and steps of a Jenkins pipeline can be defined through interactive operations. KubeSphere 3.4 provides built-in pipeline templates, such as Node.js, Maven, and Golang, to help users quickly create pipelines. Additionally, KubeSphere 3.4 also supports customization of pipeline templates to meet diversified needs of enterprises.
+
+This section describes how to use pipeline templates on KubeSphere.
+## Prerequisites
+
+- You have a workspace, a DevOps project and a user (`project-regular`) invited to the DevOps project with the `operator` role. If they are not ready yet, please refer to [Create Workspaces, Projects, Users and Roles](../../../../quick-start/create-workspace-and-project/).
+
+- You need to [enable the KubeSphere DevOps system](../../../../pluggable-components/devops/).
+
+- You need to [create a pipeline](../../../how-to-use/pipelines/create-a-pipeline-using-graphical-editing-panel/).
+
+## Use a Built-in Pipeline Template
+
+The following takes Node.js as an example to show how to use a built-in pipeline template. The steps for the Maven and Golang pipeline templates are similar.
+
+
+1. Log in to the KubeSphere console as `project-regular`. In the navigation pane on the left, click **DevOps Projects**.
+
+2. On the **DevOps Projects** page, click the DevOps project you created.
+
+3. In the navigation pane on the left, click **Pipelines**.
+
+4. On the pipeline list on the right, click the created pipeline to go to its details page.
+
+5. On the right pane, click **Edit Pipeline**.
+
+6. On the **Create Pipeline** dialog box, click **Node.js**, and then click **Next**.
+
+
+7. On the **Parameter Settings** tab, set the parameters based on the actual situation, and then click **Create**.
+
+ | Parameter | Meaning |
+ | ----------- | ------------------------- |
+ | GitURL | URL of the project repository to clone |
+ | GitRevision | Revision to check out from |
+ | NodeDockerImage | Docker image version of Node.js |
+ | InstallScript | Shell script for installing dependencies |
+ | TestScript | Shell script for testing |
+ | BuildScript | Shell script for building a project |
+ | ArtifactsPath | Path where the artifacts reside |
+
+8. On the left pane, the system has preset several steps, and you can add more steps and parallel stages.
+
+9. Click a specific step. On the right pane, you can perform the following operations:
+ - Change the stage name.
+ - Delete a stage.
+ - Set the agent type.
+ - Add conditions.
+ - Edit or delete a task.
+ - Add steps or nested steps.
+
+ {{< notice note >}}
+
+ You can also customize the stages and steps in the pipeline templates based on your needs. For more information about how to use the graphical editing panel, refer to [Create a Pipeline Using Graphical Editing Panels](../create-a-pipeline-using-graphical-editing-panel/).
+   {{</ notice >}}
+
+10. In the **Agent** area on the left, select an agent type, and click **OK**. The default value is **kubernetes**.
+
+ The following table explains the agent types.
+
+
+ | Agent Type | Description |
+ | --------------- | ------------------------- |
+ | any | Uses the default base pod template to create a Jenkins agent to run pipelines. |
+   | node | Uses a pod template with the specified label to create a Jenkins agent to run pipelines. Available labels include base, java, nodejs, maven, go, and more. |
+   | kubernetes | Uses a YAML file to customize a standard Kubernetes pod template to create a Jenkins agent to run pipelines. |
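+
+   For example, the `kubernetes` agent type accepts a standard pod template in YAML. A minimal sketch (the container name and image are hypothetical):
+
+   ```yaml
+   # Minimal pod template sketch for the kubernetes agent type
+   spec:
+     containers:
+       - name: nodejs            # container used by the pipeline steps
+         image: node:16          # hypothetical build image
+         command: ["sleep", "infinity"]
+   ```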
+
+11. On the pipeline details page, you can view the created pipeline template. Click **Run** to run the pipeline.
+
+## Legacy Built-in Pipeline Templates
+
+In earlier versions, KubeSphere also provides the CI and CI & CD pipeline templates. However, as the two templates are hardly customizable, you are advised to use the Node.js, Maven, or Golang pipeline template, or directly customize a template based on your needs.
+
+The following briefly introduces the CI and CI & CD pipeline templates.
+
+- CI pipeline template
+
+ 
+
+ 
+
+ The CI pipeline template contains two stages. The **clone code** stage checks out code and the **build & push** stage builds an image and pushes it to Docker Hub. You need to create credentials for your code repository and your Docker Hub registry in advance, and then set the URL of your repository and these credentials in corresponding steps. After you finish editing, the pipeline is ready to run.
+
+- CI & CD pipeline template
+
+ 
+
+ 
+
+ The CI & CD pipeline template contains six stages. For more information about each stage, refer to [Create a Pipeline Using a Jenkinsfile](../create-a-pipeline-using-jenkinsfile/#pipeline-overview), where you can find similar stages and the descriptions. You need to create credentials for your code repository, your Docker Hub registry, and the kubeconfig of your cluster in advance, and then set the URL of your repository and these credentials in corresponding steps. After you finish editing, the pipeline is ready to run.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/faq/_index.md b/content/en/docs/v3.4/faq/_index.md
new file mode 100644
index 000000000..624319ca4
--- /dev/null
+++ b/content/en/docs/v3.4/faq/_index.md
@@ -0,0 +1,12 @@
+---
+title: "FAQ"
+description: "FAQ is designed to answer and summarize the questions users ask most frequently about KubeSphere."
+layout: "second"
+
+linkTitle: "FAQ"
+weight: 16000
+
+icon: "/images/docs/v3.x/docs.svg"
+---
+
+This chapter answers and summarizes the questions users ask most frequently about KubeSphere. You can find these questions and answers in their respective sections, which are grouped by KubeSphere function.
diff --git a/content/en/docs/v3.4/faq/access-control/_index.md b/content/en/docs/v3.4/faq/access-control/_index.md
new file mode 100644
index 000000000..e36af958d
--- /dev/null
+++ b/content/en/docs/v3.4/faq/access-control/_index.md
@@ -0,0 +1,7 @@
+---
+title: "Access Control and Account Management FAQ"
+keywords: 'Kubernetes, KubeSphere, account, access control'
+description: 'FAQ about access control and account management'
+layout: "second"
+weight: 16400
+---
diff --git a/content/en/docs/v3.4/faq/access-control/add-kubernetes-namespace-to-kubesphere-workspace.md b/content/en/docs/v3.4/faq/access-control/add-kubernetes-namespace-to-kubesphere-workspace.md
new file mode 100644
index 000000000..f192366b2
--- /dev/null
+++ b/content/en/docs/v3.4/faq/access-control/add-kubernetes-namespace-to-kubesphere-workspace.md
@@ -0,0 +1,38 @@
+---
+title: "Add Existing Kubernetes Namespaces to a KubeSphere Workspace"
+keywords: "namespace, project, KubeSphere, Kubernetes"
+description: "Add your existing Kubernetes namespaces to a KubeSphere workspace."
+linkTitle: "Add existing Kubernetes namespaces to a KubeSphere Workspace"
+weight: 16430
+---
+
+A Kubernetes namespace corresponds to a KubeSphere project. If you create a namespace outside the KubeSphere console, the namespace does not directly appear in any workspace, but cluster administrators can still see it on the **Cluster Management** page. You can also place the namespace into a workspace.
+
+This tutorial demonstrates how to add an existing Kubernetes namespace to a KubeSphere workspace.
+
+## Prerequisites
+
+- You need a user with a role that includes the **Cluster Management** permission. For example, you can log in to the console as `admin` directly or create a new role with the permission and assign it to a user.
+
+- You have an available workspace so that the namespace can be assigned to it. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Create a Kubernetes Namespace
+
+Create an example Kubernetes namespace first so that you can add it to a workspace later. Execute the following command:
+
+```bash
+kubectl create ns demo-namespace
+```
+
+For more information about creating a Kubernetes namespace, see [Namespaces Walkthrough](https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/).
+
+## Add the Namespace to a KubeSphere Workspace
+
+1. Log in to the KubeSphere console as `admin` and go to the **Cluster Management** page. Click **Projects**, and you can see all your projects running on the current cluster, including the one just created.
+
+2. The namespace created through kubectl does not belong to any workspace. Click
on the right and select **Assign Workspace**.
+
+3. In the dialog that appears, select a **Workspace** and a **Project Administrator** for the project and click **OK**.
+
+4. Go to your workspace and you can see the project on the **Projects** page.
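+
+Under the hood, the workspace binding is recorded as a label on the namespace. As a sketch (the label key follows KubeSphere conventions and is stated here as an assumption), you can inspect or set it with kubectl:
+
+```bash
+# Inspect the labels after the assignment
+kubectl get ns demo-namespace --show-labels
+
+# The console operation roughly corresponds to setting a label like this
+kubectl label ns demo-namespace kubesphere.io/workspace=demo-workspace
+```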
+
diff --git a/content/en/docs/v3.4/faq/access-control/cannot-login.md b/content/en/docs/v3.4/faq/access-control/cannot-login.md
new file mode 100644
index 000000000..0a9026298
--- /dev/null
+++ b/content/en/docs/v3.4/faq/access-control/cannot-login.md
@@ -0,0 +1,141 @@
+---
+title: "User Login Failure"
+keywords: "login failure, user is not active, KubeSphere, Kubernetes"
+description: "How to solve the issue of login failure"
+linkTitle: "User Login Failure"
+weight: 16440
+---
+
+KubeSphere automatically creates a default user (`admin/P@88w0rd`) when it is installed. A user cannot log in if its status is not **Active** or if the password is incorrect.
+
+Here are some of the frequently asked questions about user login failure.
+
+## User Not Active
+
+You may see a prompt similar to the image below when login fails. To find out the reason and solve the issue, perform the following steps:
+
+
+
+1. Execute the following command to check the status of the user.
+
+ ```bash
+ $ kubectl get users
+ NAME EMAIL STATUS
+ admin admin@kubesphere.io Active
+ ```
+
+2. Verify that `ks-controller-manager` is running and check whether its logs contain exceptions:
+
+ ```bash
+ kubectl -n kubesphere-system logs -l app=ks-controller-manager
+ ```
+
+Here are some possible reasons for this issue.
+
+### Admission webhooks malfunction in Kubernetes 1.19
+
+Kubernetes 1.19 is built with Go 1.15, which requires the certificates used by admission webhooks to contain SANs instead of relying on the legacy Common Name field. This causes the `ks-controller-manager` admission webhook to fail.
+
+Related error logs:
+
+```bash
+Internal error occurred: failed calling webhook "validating-user.kubesphere.io": Post "https://ks-controller-manager.kubesphere-system.svc:443/validate-email-iam-kubesphere-io-v1alpha2-user?timeout=30s": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0
+```
+
+For more information about the issue and solution, see this [GitHub issue](https://github.com/kubesphere/kubesphere/issues/2928).
+
+### ks-controller-manager malfunctions
+
+`ks-controller-manager` relies on two stateful Services: OpenLDAP and Jenkins. When OpenLDAP or Jenkins goes down, `ks-controller-manager` will stay in the `reconcile` state.
+
+Execute the following commands to verify that OpenLDAP and Jenkins are running normally.
+
+```
+kubectl -n kubesphere-devops-system get po | grep -v Running
+kubectl -n kubesphere-system get po | grep -v Running
+kubectl -n kubesphere-system logs -l app=openldap
+```
+
+Related error logs:
+
+```bash
+failed to connect to ldap service, please check ldap status, error: factory is not able to fill the pool: LDAP Result Code 200 \"Network Error\": dial tcp: lookup openldap.kubesphere-system.svc on 169.254.25.10:53: no such host
+```
+
+```bash
+Internal error occurred: failed calling webhook “validating-user.kubesphere.io”: Post https://ks-controller-manager.kubesphere-system.svc:443/validate-email-iam-kubesphere-io-v1alpha2-user?timeout=4s: context deadline exceeded
+```
+
+#### Solution
+
+You need to restore OpenLDAP and Jenkins and make sure they have a good network connection, and then restart `ks-controller-manager`:
+
+```
+kubectl -n kubesphere-system rollout restart deploy ks-controller-manager
+```
+
+### Wrong code branch used
+
+If you used an incorrect version of ks-installer, the versions of different components may not match after the installation. Execute the following commands to check version consistency. Note that the correct image tag is `v3.4.0`.
+
+```
+kubectl -n kubesphere-system get deploy ks-installer -o jsonpath='{.spec.template.spec.containers[0].image}'
+kubectl -n kubesphere-system get deploy ks-apiserver -o jsonpath='{.spec.template.spec.containers[0].image}'
+kubectl -n kubesphere-system get deploy ks-controller-manager -o jsonpath='{.spec.template.spec.containers[0].image}'
+```
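+
+If the versions are consistent, all three commands print images with the same tag, for example (registry prefixes may differ):
+
+```
+kubesphere/ks-installer:v3.4.0
+kubesphere/ks-apiserver:v3.4.0
+kubesphere/ks-controller-manager:v3.4.0
+```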
+
+## Wrong Username or Password
+
+
+
+Run the following command to verify that the username and the password are correct.
+
+```
+curl -u
on the right of `ks-installer` and select **Edit YAML**.
+
+5. Scroll down to the bottom of the file, add `telemetry_enabled: false`, and then click **OK**.
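+
+   A sketch of the relevant part of the YAML after the change (other fields omitted; the exact nesting under `spec` is an assumption):
+
+   ```yaml
+   spec:
+     # ...other fields omitted...
+     telemetry_enabled: false
+   ```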
+
+
+{{< notice note >}}
+
+If you want to enable Telemetry again, you can update `ks-installer` by deleting `telemetry_enabled: false` or changing it to `telemetry_enabled: true`.
+
+{{</ notice >}}
diff --git a/content/en/docs/v3.4/faq/multi-cluster-management/_index.md b/content/en/docs/v3.4/faq/multi-cluster-management/_index.md
new file mode 100644
index 000000000..05c8c18b9
--- /dev/null
+++ b/content/en/docs/v3.4/faq/multi-cluster-management/_index.md
@@ -0,0 +1,7 @@
+---
+title: "Multi-cluster Management"
+keywords: 'Kubernetes, KubeSphere, Multi-cluster Management, Host Cluster, Member Cluster'
+description: 'FAQ about multi-cluster management in KubeSphere'
+layout: "second"
+weight: 16700
+---
diff --git a/content/en/docs/v3.4/faq/multi-cluster-management/host-cluster-access-member-cluster.md b/content/en/docs/v3.4/faq/multi-cluster-management/host-cluster-access-member-cluster.md
new file mode 100644
index 000000000..bd93b4c8b
--- /dev/null
+++ b/content/en/docs/v3.4/faq/multi-cluster-management/host-cluster-access-member-cluster.md
@@ -0,0 +1,71 @@
+---
+title: "Restore the Host Cluster Access to A Member Cluster"
+keywords: "Kubernetes, KubeSphere, Multi-cluster, Host Cluster, Member Cluster"
+description: "Learn how to restore the Host Cluster access to a Member Cluster."
+linkTitle: "Restore the Host Cluster Access to A Member Cluster"
+weight: 16720
+---
+
+KubeSphere features [multi-cluster management](../../../multicluster-management/introduction/kubefed-in-kubesphere/), and tenants with necessary permissions (usually cluster administrators) can access the central control plane from the Host Cluster to manage all the Member Clusters. It is highly recommended that you manage resources across clusters through the Host Cluster.
+
+This tutorial demonstrates how to restore the Host Cluster access to a Member Cluster.
+
+## Possible Error Message
+
+If you can't access a Member Cluster from the central control plane and your browser keeps redirecting you to the login page of KubeSphere, run the following command on that Member Cluster to get the logs of the ks-apiserver.
+
+```
+kubectl -n kubesphere-system logs ks-apiserver-7c9c9456bd-qv6bs
+```
+
+{{< notice note >}}
+
+`ks-apiserver-7c9c9456bd-qv6bs` is the name of the ks-apiserver Pod on that Member Cluster. Make sure you use the name of your own Pod.
+
+{{</ notice >}}
+
+You will probably see the following error message:
+
+```
+E0305 03:46:42.105625 1 token.go:65] token not found in cache
+E0305 03:46:42.105725 1 jwt_token.go:45] token not found in cache
+E0305 03:46:42.105759 1 authentication.go:60] Unable to authenticate the request due to error: token not found in cache
+E0305 03:46:52.045964 1 token.go:65] token not found in cache
+E0305 03:46:52.045992 1 jwt_token.go:45] token not found in cache
+E0305 03:46:52.046004 1 authentication.go:60] Unable to authenticate the request due to error: token not found in cache
+E0305 03:47:34.502726 1 token.go:65] token not found in cache
+E0305 03:47:34.502751 1 jwt_token.go:45] token not found in cache
+E0305 03:47:34.502764 1 authentication.go:60] Unable to authenticate the request due to error: token not found in cache
+```
+
+## Solution
+
+### Step 1: Verify the jwtSecret
+
+Run the following command on your Host Cluster and Member Cluster respectively to confirm whether their jwtSecrets are identical.
+
+```
+kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret
+```
+
+### Step 2: Modify `accessTokenMaxAge`
+
+After confirming that the jwtSecrets are identical, run the following command on that Member Cluster to get the value of `accessTokenMaxAge`.
+
+```
+kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep accessTokenMaxAge
+```
+
+If the value is not `0`, run the following command to modify the value of `accessTokenMaxAge`.
+
+```
+kubectl -n kubesphere-system edit cm kubesphere-config -o yaml
+```
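+
+In the editor, the field sits under the `authentication` section of the embedded `kubesphere.yaml` (a sketch under that assumption; surrounding fields omitted):
+
+```yaml
+authentication:
+  # ...
+  accessTokenMaxAge: 0
+```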
+
+After you modify the value of `accessTokenMaxAge` to `0`, run the following command to restart ks-apiserver.
+
+```
+kubectl -n kubesphere-system rollout restart deploy ks-apiserver
+```
+
+Now, you can access that Member Cluster from the central control plane again.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/faq/multi-cluster-management/manage-multi-cluster.md b/content/en/docs/v3.4/faq/multi-cluster-management/manage-multi-cluster.md
new file mode 100644
index 000000000..5bb132e42
--- /dev/null
+++ b/content/en/docs/v3.4/faq/multi-cluster-management/manage-multi-cluster.md
@@ -0,0 +1,61 @@
+---
+title: "Manage a Multi-cluster Environment on KubeSphere"
+keywords: 'Kubernetes, KubeSphere, federation, multicluster, hybrid-cloud'
+description: 'Understand how to manage a multi-cluster environment on KubeSphere.'
+linkTitle: "Manage a Multi-cluster Environment on KubeSphere"
+weight: 16710
+---
+
+KubeSphere provides an easy-to-use multi-cluster feature to help you [build your multi-cluster environment on KubeSphere](../../../multicluster-management/). This guide illustrates how to manage a multi-cluster environment on KubeSphere.
+
+## Prerequisites
+
+- Make sure your Kubernetes clusters are installed with KubeSphere before you use them as your Host Cluster and Member Clusters.
+
+- Make sure the cluster role is set correctly on your Host Cluster and Member Clusters respectively, and the `jwtSecret` is the same between them.
+
+- It is recommended that a Member Cluster be in a clean environment where no resources have been created before it is imported to the Host Cluster.
+
+
+## Manage your KubeSphere Multi-cluster Environment
+
+Once you build a multi-cluster environment on KubeSphere, you can manage it through the central control plane from your Host Cluster. When creating resources, you can select a specific cluster, but avoid the Host Cluster to prevent overload. It is not recommended to log in to the KubeSphere web console of your Member Clusters to create resources on them, as some resources (for example, workspaces) won't be synchronized to your Host Cluster for management.
+
+### Resource Management
+
+It is not recommended that you change a Host Cluster to a Member Cluster or the other way round. If a Member Cluster has been imported to a Host Cluster before, you have to use the same cluster name when importing it to a new Host Cluster after unbinding it from the previous Host Cluster.
+
+If you want to import the Member Cluster to a new Host Cluster while retaining existing projects, you can follow the steps as below.
+
+1. Run the following command on the Member Cluster to unbind the projects to be retained from your workspace.
+
+ ```bash
+   kubectl label ns
+   ```
+
+| Parameter | Description |
+| --- | --- |
+| `kubernetes` | |
+| `version` | The Kubernetes version to be installed. If you do not specify a Kubernetes version, {{< contentLink "docs/installing-on-linux/introduction/kubekey" "KubeKey" >}} v3.0.7 will install Kubernetes v1.23.10 by default. For more information, see {{< contentLink "docs/installing-on-linux/introduction/kubekey/#support-matrix" "Support Matrix" >}}. |
+| `imageRepo` | The Docker Hub repository where images will be downloaded. |
+| `clusterName` | The Kubernetes cluster name. |
+| `masqueradeAll`* | `masqueradeAll` tells kube-proxy to SNAT everything if using the pure iptables proxy mode. It defaults to `false`. |
+| `maxPods`* | The maximum number of Pods that can run on this kubelet. It defaults to `110`. |
+| `nodeCidrMaskSize`* | The mask size for the node CIDR in your cluster. It defaults to `24`. |
+| `proxyMode`* | The proxy mode to use. It defaults to `ipvs`. |
+| `network` | |
+| `plugin` | The CNI plugin to use. KubeKey installs Calico by default, while you can also specify Flannel. Note that some features can only be used when Calico is adopted as the CNI plugin, such as Pod IP Pools. |
+| `calico.ipipMode`* | The IPIP mode to use for the IPv4 pool created at startup. If it is set to a value other than `Never`, `vxlanMode` should be set to `Never`. Allowed values are `Always`, `CrossSubnet`, and `Never`. It defaults to `Always`. |
+| `calico.vxlanMode`* | The VXLAN mode to use for the IPv4 pool created at startup. If it is set to a value other than `Never`, `ipipMode` should be set to `Never`. Allowed values are `Always`, `CrossSubnet`, and `Never`. It defaults to `Never`. |
+| `calico.vethMTU`* | The maximum transmission unit (MTU) setting determines the largest packet size that can be transmitted through your network. It defaults to `1440`. |
+| `kubePodsCIDR` | A valid CIDR block for your Kubernetes Pod subnet. It should not overlap with your node subnet and your Kubernetes Services subnet. |
+| `kubeServiceCIDR` | A valid CIDR block for your Kubernetes Services. It should not overlap with your node subnet and your Kubernetes Pod subnet. |
+| `registry` | |
+| `registryMirrors` | Configure a Docker registry mirror to speed up downloads. For more information, see {{< contentLink "https://docs.docker.com/registry/recipes/mirror/#configure-the-docker-daemon" "Configure the Docker daemon" >}}. |
+| `insecureRegistries` | Set an address of an insecure image registry. For more information, see {{< contentLink "https://docs.docker.com/registry/insecure/" "Test an insecure registry" >}}. |
+| `privateRegistry`* | Configure a private image registry for air-gapped installation (for example, a Docker local registry or Harbor). For more information, see {{< contentLink "docs/v3.4/installing-on-linux/introduction/air-gapped-installation/" "Air-gapped Installation on Linux" >}}. |
on the right of the member cluster, and click **Update KubeConfig**.
+
+3. In the **Update KubeConfig** dialog box that is displayed, enter the new kubeconfig, and click **Update**.
+
+
+
diff --git a/content/en/docs/v3.4/multicluster-management/import-cloud-hosted-k8s/_index.md b/content/en/docs/v3.4/multicluster-management/import-cloud-hosted-k8s/_index.md
new file mode 100644
index 000000000..92ba09b39
--- /dev/null
+++ b/content/en/docs/v3.4/multicluster-management/import-cloud-hosted-k8s/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Import Cloud-hosted Kubernetes Clusters"
+weight: 5300
+
+_build:
+ render: false
+---
diff --git a/content/en/docs/v3.4/multicluster-management/import-cloud-hosted-k8s/import-aliyun-ack.md b/content/en/docs/v3.4/multicluster-management/import-cloud-hosted-k8s/import-aliyun-ack.md
new file mode 100644
index 000000000..f740d70f4
--- /dev/null
+++ b/content/en/docs/v3.4/multicluster-management/import-cloud-hosted-k8s/import-aliyun-ack.md
@@ -0,0 +1,70 @@
+---
+title: "Import an Alibaba Cloud Kubernetes (ACK) Cluster"
+keywords: 'Kubernetes, KubeSphere, multicluster, ACK'
+description: 'Learn how to import an Alibaba Cloud Kubernetes cluster.'
+linkTitle: "Import an Alibaba Cloud Kubernetes (ACK) Cluster"
+weight: 5310
+---
+
+This tutorial demonstrates how to import an Alibaba Cloud Kubernetes (ACK) cluster through the [direct connection](../../../multicluster-management/enable-multicluster/direct-connection/) method. If you want to use the agent connection method, refer to [Agent Connection](../../../multicluster-management/enable-multicluster/agent-connection/).
+
+## Prerequisites
+
+- You have a Kubernetes cluster with KubeSphere installed, and prepared this cluster as the host cluster. For more information about how to prepare a host cluster, refer to [Prepare a host cluster](../../../multicluster-management/enable-multicluster/direct-connection/#prepare-a-host-cluster).
+- You have an ACK cluster with KubeSphere installed to be used as the member cluster.
+
+## Import an ACK Cluster
+
+### Step 1: Prepare the ACK Member Cluster
+
+1. In order to manage the member cluster from the host cluster, you need to make `jwtSecret` the same between them. Therefore, get it first by executing the following command on your host cluster.
+
+ ```bash
+ kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret
+ ```
+
+ The output is similar to the following:
+
+ ```yaml
+ jwtSecret: "QVguGh7qnURywHn2od9IiOX6X8f8wK8g"
+ ```
+
+2. Log in to the KubeSphere console of the ACK cluster as `admin`. Click **Platform** in the upper-left corner and then select **Cluster Management**.
+
+3. Go to **CRDs**, enter `ClusterConfiguration` in the search bar, and then press **Enter** on your keyboard. Click **ClusterConfiguration** to go to its detail page.
+
+4. Click the icon on the right and then select **Edit YAML** to edit `ks-installer`.
+
+5. In the YAML file of `ks-installer`, change the value of `jwtSecret` to the corresponding value shown above and set the value of `clusterRole` to `member`. Click **Update** to save your changes.
+
+ ```yaml
+ authentication:
+ jwtSecret: QVguGh7qnURywHn2od9IiOX6X8f8wK8g
+ ```
+
+ ```yaml
+ multicluster:
+ clusterRole: member
+ ```
+
+ {{< notice note >}}
+
+ Make sure you use the value of your own `jwtSecret`. You need to wait for a while so that the changes can take effect.
+
+ {{</ notice >}}
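+
+ To confirm that the change has taken effect, you can rerun the check from step 1 on the member cluster. A minimal sketch; the ConfigMap name and namespace are the defaults used throughout this document:
+
+ ```bash
+ # Run on the ACK member cluster; the output should show the host cluster's jwtSecret
+ kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep jwtSecret
+ ```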
+
+### Step 2: Get the kubeconfig file
+
+Log in to the web console of Alibaba Cloud. Go to **Clusters** under **Container Service - Kubernetes**, click your cluster to go to its detail page, and then select the **Connection Information** tab. You can see the kubeconfig file under the **Public Access** tab. Copy the contents of the kubeconfig file.
+
+
+
+### Step 3: Import the ACK member cluster
+
+1. Log in to the KubeSphere console on your host cluster as `admin`. Click **Platform** in the upper-left corner and then select **Cluster Management**. On the **Cluster Management** page, click **Add Cluster**.
+
+2. Enter the basic information based on your needs and click **Next**.
+
+3. In **Connection Method**, select **Direct connection**. Fill in the kubeconfig file of the ACK member cluster and then click **Create**.
+
+4. Wait for cluster initialization to finish.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md b/content/en/docs/v3.4/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md
new file mode 100644
index 000000000..02b407333
--- /dev/null
+++ b/content/en/docs/v3.4/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md
@@ -0,0 +1,171 @@
+---
+title: "Import an AWS EKS Cluster"
+keywords: 'Kubernetes, KubeSphere, multicluster, Amazon EKS'
+description: 'Learn how to import an Amazon Elastic Kubernetes Service cluster.'
+titleLink: "Import an AWS EKS Cluster"
+weight: 5320
+---
+
+This tutorial demonstrates how to import an AWS EKS cluster through the [direct connection](../../../multicluster-management/enable-multicluster/direct-connection/) method. If you want to use the agent connection method, refer to [Agent Connection](../../../multicluster-management/enable-multicluster/agent-connection/).
+
+## Prerequisites
+
+- You have a Kubernetes cluster with KubeSphere installed, and prepared this cluster as the host cluster. For more information about how to prepare a host cluster, refer to [Prepare a host cluster](../../../multicluster-management/enable-multicluster/direct-connection/#prepare-a-host-cluster).
+- You have an EKS cluster to be used as the member cluster.
+
+## Import an EKS Cluster
+
+### Step 1: Deploy KubeSphere on your EKS cluster
+
+You need to deploy KubeSphere on your EKS cluster first. For more information about how to deploy KubeSphere on EKS, refer to [Deploy KubeSphere on AWS EKS](../../../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks/#install-kubesphere-on-eks).
+
+### Step 2: Prepare the EKS member cluster
+
+1. In order to manage the member cluster from the host cluster, you need to make `jwtSecret` the same between them. Therefore, get it first by executing the following command on your host cluster.
+
+ ```bash
+ kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret
+ ```
+
+ The output is similar to the following:
+
+ ```yaml
+ jwtSecret: "QVguGh7qnURywHn2od9IiOX6X8f8wK8g"
+ ```
+
+2. Log in to the KubeSphere console of the EKS cluster as `admin`. Click **Platform** in the upper-left corner and then select **Cluster Management**.
+
+3. Go to **CRDs**, enter `ClusterConfiguration` in the search bar, and then press **Enter** on your keyboard. Click **ClusterConfiguration** to go to its detail page.
+
+4. Click the icon on the right and then select **Edit YAML** to edit `ks-installer`.
+
+5. In the YAML file of `ks-installer`, change the value of `jwtSecret` to the corresponding value shown above and set the value of `clusterRole` to `member`. Click **Update** to save your changes.
+
+ ```yaml
+ authentication:
+ jwtSecret: QVguGh7qnURywHn2od9IiOX6X8f8wK8g
+ ```
+
+ ```yaml
+ multicluster:
+ clusterRole: member
+ ```
+
+ {{< notice note >}}
+
+ Make sure you use the value of your own `jwtSecret`. You need to wait for a while so that the changes can take effect.
+
+ {{</ notice >}}
+
+### Step 3: Create a new kubeconfig file
+
+1. [Amazon EKS](https://docs.aws.amazon.com/eks/index.html) doesn’t provide a built-in kubeconfig file as a standard kubeadm cluster does. Nevertheless, you can create a kubeconfig file by referring to this [document](https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html). The generated kubeconfig file will look like the following:
+
+ ```yaml
+ apiVersion: v1
+ clusters:
+ - cluster:
+ server:
+ ```
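+
+ The AWS CLI can also generate this file for you. A minimal sketch, assuming the AWS CLI is installed and configured with credentials for the account that owns the cluster (replace the region and cluster name with your own):
+
+ ```bash
+ # Writes or merges an entry for the EKS cluster into ~/.kube/config
+ aws eks update-kubeconfig --region us-west-2 --name my-eks-cluster
+ ```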
+4. Click the icon on the right and then select **Edit YAML** to edit `ks-installer`.
+
+5. In the YAML file of `ks-installer`, change the value of `jwtSecret` to the corresponding value shown above and set the value of `clusterRole` to `member`.
+
+ ```yaml
+ authentication:
+ jwtSecret: QVguGh7qnURywHn2od9IiOX6X8f8wK8g
+ ```
+
+ ```yaml
+ multicluster:
+ clusterRole: member
+ ```
+
+ {{< notice note >}}
+
+ Make sure you use the value of your own `jwtSecret`. You need to wait for a while so that the changes can take effect.
+
+ {{</ notice >}}
+
+### Step 3: Create a new kubeconfig file
+
+1. Run the following commands on your GKE Cloud Shell Terminal:
+
+ ```bash
+ TOKEN=$(kubectl -n kubesphere-system get secret $(kubectl -n kubesphere-system get sa kubesphere -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d)
+ kubectl config set-credentials kubesphere --token=${TOKEN}
+ kubectl config set-context --current --user=kubesphere
+ ```
+
+2. Retrieve the new kubeconfig file by running the following command:
+
+ ```bash
+ cat ~/.kube/config
+ ```
+
+ The output is similar to the following:
+
+ ```yaml
+ apiVersion: v1
+ clusters:
+ - cluster:
+ certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLekNDQWhPZ0F3SUJBZ0lSQUtPRUlDeFhyWEdSbjVQS0dlRXNkYzR3RFFZSktvWklodmNOQVFFTEJRQXcKTHpFdE1Dc0dBMVVFQXhNa1pqVTBNVFpoTlRVdFpEZzFZaTAwWkdZNUxXSTVNR1V0TkdNeE0yRTBPR1ZpWW1VMwpNQjRYRFRJeE1ETXhNVEl5TXpBMU0xb1hEVEkyTURNeE1ESXpNekExTTFvd0x6RXRNQ3NHQTFVRUF4TWtaalUwCk1UWmhOVFV0WkRnMVlpMDBaR1k1TFdJNU1HVXROR014TTJFME9HVmlZbVUzTUlJQklqQU5CZ2txaGtpRzl3MEIKQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBdkVHVGtKRjZLVEl3QktlbXNYd3dPSnhtU3RrMDlKdXh4Z1grM0dTMwpoeThVQm5RWEo1d3VIZmFGNHNWcDFzdGZEV2JOZitESHNxaC9MV3RxQk5iSlNCU1ppTC96V3V5OUZNeFZMS2czCjVLdnNnM2drdUpVaFVuK0tMUUFPdTNUWHFaZ2tTejE1SzFOSU9qYm1HZGVWSm5KQTd6NTF2ZkJTTStzQWhGWTgKejJPUHo4aCtqTlJseDAvV0UzTHZEUUMvSkV4WnRCRGFuVFU0anpHMHR2NGk1OVVQN2lWbnlwRHk0dkFkWm5mbgowZncwVnplUXJqT2JuQjdYQTZuUFhseXZubzErclRqakFIMUdtU053c1IwcDRzcEViZ0lXQTNhMmJzeUN5dEJsCjVOdmJKZkVpSTFoTmFOZ3hoSDJNenlOUWVhYXZVa29MdDdPN0xqYzVFWlo4cFFJREFRQUJvMEl3UURBT0JnTlYKSFE4QkFmOEVCQU1DQWdRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVUVyVkJrc3MydGV0Qgp6ZWhoRi92bGdVMlJiM2N3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUdEZVBVa3I1bDB2OTlyMHZsKy9WZjYrCitBanVNNFoyOURtVXFHVC80OHBaR1RoaDlsZDQxUGZKNjl4eXFvME1wUlIyYmJuTTRCL2NVT1VlTE5VMlV4VWUKSGRlYk1oQUp4Qy9Uaks2SHpmeExkTVdzbzVSeVAydWZEOFZob2ZaQnlBVWczajdrTFgyRGNPd1lzNXNrenZ0LwpuVUlhQURLaXhtcFlSSWJ6MUxjQmVHbWROZ21iZ0hTa3MrYUxUTE5NdDhDQTBnSExhMER6ODhYR1psSi80VmJzCjNaWVVXMVExY01IUHd5NnAwV2kwQkpQeXNaV3hZdFJyV3JFWUhZNVZIanZhUG90S3J4Y2NQMUlrNGJzVU1ZZ0wKaTdSaHlYdmJHc0pKK1lNc3hmalU5bm5XYVhLdXM5ZHl0WG1kRGw1R0hNU3VOeTdKYjIwcU5RQkxhWHFkVmY0PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
+ server: https://130.211.231.87
+ name: gke_grand-icon-307205_us-central1-c_cluster-3
+ contexts:
+ - context:
+ cluster: gke_grand-icon-307205_us-central1-c_cluster-3
+ user: gke_grand-icon-307205_us-central1-c_cluster-3
+ name: gke_grand-icon-307205_us-central1-c_cluster-3
+ current-context: gke_grand-icon-307205_us-central1-c_cluster-3
+ kind: Config
+ preferences: {}
+ users:
+ - name: gke_grand-icon-307205_us-central1-c_cluster-3
+ user:
+ auth-provider:
+ config:
+ cmd-args: config config-helper --format=json
+ cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
+ expiry-key: '{.credential.token_expiry}'
+ token-key: '{.credential.access_token}'
+ name: gcp
+ - name: kubesphere
+ user:
+ token: eyJhbGciOiJSUzI1NiIsImtpZCI6InNjOFpIb3RrY3U3bGNRSV9NWV8tSlJzUHJ4Y2xnMDZpY3hhc1BoVy0xTGsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlc3BoZXJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlc3BoZXJlLXRva2VuLXpocmJ3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imt1YmVzcGhlcmUiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMGFmZGI1Ny01MTBkLTRjZDgtYTAwYS1hNDQzYTViNGM0M2MiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXNwaGVyZS1zeXN0ZW06a3ViZXNwaGVyZSJ9.ic6LaS5rEQ4tXt_lwp7U_C8rioweP-ZdDjlIZq91GOw9d6s5htqSMQfTeVlwTl2Bv04w3M3_pCkvRzMD0lHg3mkhhhP_4VU0LIo4XeYWKvWRoPR2kymLyskAB2Khg29qIPh5ipsOmGL9VOzD52O2eLtt_c6tn-vUDmI_Zw985zH3DHwUYhppGM8uNovHawr8nwZoem27XtxqyBkqXGDD38WANizyvnPBI845YqfYPY5PINPYc9bQBFfgCovqMZajwwhcvPqS6IpG1Qv8TX2lpuJIK0LLjiKaHoATGvHLHdAZxe_zgAC2cT_9Ars3HIN4vzaSX0f-xP--AcRgKVSY9g
+ ```
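+
+ Optionally, confirm that the `kubesphere` user entry authenticates before importing. A minimal sketch, run from the same Cloud Shell (`--user` selects the kubeconfig user created above):
+
+ ```bash
+ # Should list nodes using the kubesphere ServiceAccount token
+ kubectl --user=kubesphere get nodes
+ ```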
+
+### Step 4: Import the GKE member cluster
+
+1. Log in to the KubeSphere console on your host cluster as `admin`. Click **Platform** in the upper-left corner and then select **Cluster Management**. On the **Cluster Management** page, click **Add Cluster**.
+
+2. Enter the basic information based on your needs and click **Next**.
+
+3. In **Connection Method**, select **Direct connection**. Fill in the new kubeconfig file of the GKE member cluster and then click **Create**.
+
+4. Wait for cluster initialization to finish.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/multicluster-management/introduction/_index.md b/content/en/docs/v3.4/multicluster-management/introduction/_index.md
new file mode 100644
index 000000000..0b97cbae9
--- /dev/null
+++ b/content/en/docs/v3.4/multicluster-management/introduction/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Introduction"
+weight: 5100
+
+_build:
+ render: false
+---
diff --git a/content/en/docs/v3.4/multicluster-management/introduction/kubefed-in-kubesphere.md b/content/en/docs/v3.4/multicluster-management/introduction/kubefed-in-kubesphere.md
new file mode 100644
index 000000000..4564770b5
--- /dev/null
+++ b/content/en/docs/v3.4/multicluster-management/introduction/kubefed-in-kubesphere.md
@@ -0,0 +1,49 @@
+---
+title: "KubeSphere Federation"
+keywords: "Kubernetes, KubeSphere, federation, multicluster, hybrid-cloud"
+description: "Understand the fundamental concept of Kubernetes federation in KubeSphere, including member clusters and host clusters."
+linkTitle: "KubeSphere Federation"
+weight: 5120
+---
+
+The multi-cluster feature relates to the network connection among multiple clusters. Therefore, it is important to understand the topological relations of clusters.
+
+## How the Multi-cluster Architecture Works
+
+Before you use the central control plane of KubeSphere to manage multiple clusters, you need to create a **host** cluster. The host cluster is essentially a KubeSphere cluster with the multi-cluster feature enabled, and it provides the control plane for the unified management of **member** clusters, which are common KubeSphere clusters without a central control plane. Tenants with the necessary permissions (usually cluster administrators) can access the control plane on the host cluster to manage all member clusters, for example, to view and edit resources on them. Conversely, if you access the web console of any member cluster separately, you cannot see resources on other clusters.
+
+There can be only one host cluster, while multiple member clusters can exist at the same time. In a multi-cluster architecture, the network between the host cluster and member clusters can be [connected directly](../../enable-multicluster/direct-connection/) or [through an agent](../../enable-multicluster/agent-connection/). Member clusters do not need network connectivity with each other and can sit in completely isolated environments.
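+
+To see which role a given cluster plays, you can query the `ks-installer` ClusterConfiguration directly. A minimal sketch, assuming the default resource layout (`cc` is the short name for `ClusterConfiguration`):
+
+```bash
+# Prints "host" on the host cluster and "member" on a member cluster
+kubectl -n kubesphere-system get cc ks-installer -o jsonpath='{.spec.multicluster.clusterRole}'
+```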
+
+If you are using on-premises Kubernetes clusters built through kubeadm, install KubeSphere on your Kubernetes clusters by referring to [Air-gapped Installation on Kubernetes](../../../installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped/), and then enable KubeSphere multi-cluster management through direct connection or agent connection.
+
+
+
+## Vendor Agnostic
+
+KubeSphere features a powerful, inclusive central control plane so that you can manage any KubeSphere clusters in a unified way regardless of deployment environments or cloud providers.
+
+## Resource Requirements
+
+Before you enable multi-cluster management, make sure you have enough resources in your environment.
+
+| Namespace | kube-federation-system | kubesphere-system |
+| -------------- | ---------------------- | ----------------- |
+| Sub-component | 2 x controller-manager | tower |
+| CPU Request | 100 m | 100 m |
+| CPU Limit | 500 m | 500 m |
+| Memory Request | 64 MiB | 128 MiB |
+| Memory Limit | 512 MiB | 256 MiB |
+| Installation | Optional | Optional |
+
+{{< notice note >}}
+
+- The request and limit of CPU and memory resources all refer to a single replica.
+- After the multi-cluster feature is enabled, tower and controller-manager will be installed on the host cluster. If you use [agent connection](../../../multicluster-management/enable-multicluster/agent-connection/), only tower is needed for member clusters. If you use [direct connection](../../../multicluster-management/enable-multicluster/direct-connection/), no additional component is needed for member clusters.
+
+{{</ notice >}}
+
+## Use the App Store in a Multi-cluster Architecture
+
+Different from other components in KubeSphere, the [KubeSphere App Store](../../../pluggable-components/app-store/) serves as a global application pool for all clusters, including the host cluster and member clusters. You only need to enable the App Store on the host cluster, and you can then use App Store functions on member clusters directly (regardless of whether the App Store is enabled on them), such as [app templates](../../../project-user-guide/application/app-template/) and [app repositories](../../../workspace-administration/app-repository/import-helm-repository/).
+
+However, if you only enable the App Store on member clusters without enabling it on the host cluster, you will not be able to use the App Store on any cluster in the multi-cluster architecture.
diff --git a/content/en/docs/v3.4/multicluster-management/introduction/overview.md b/content/en/docs/v3.4/multicluster-management/introduction/overview.md
new file mode 100644
index 000000000..beb6d19b2
--- /dev/null
+++ b/content/en/docs/v3.4/multicluster-management/introduction/overview.md
@@ -0,0 +1,15 @@
+---
+title: "Kubernetes Multi-Cluster Management — Overview"
+keywords: 'Kubernetes, KubeSphere, multicluster, hybrid-cloud'
+description: 'Gain a basic understanding of multi-cluster management, such as its common use cases, and the benefits that KubeSphere can bring with its multi-cluster feature.'
+linkTitle: "Overview"
+weight: 5110
+---
+
+Today, it is common for organizations to run and manage multiple Kubernetes clusters across different cloud providers or infrastructures. As each Kubernetes cluster is a relatively self-contained unit, the upstream community has been exploring and developing multi-cluster management solutions. Kubernetes Cluster Federation ([KubeFed](https://github.com/kubernetes-sigs/kubefed) for short) is one possible approach among others.
+
+The most common use cases of multi-cluster management include service traffic load balancing, development and production isolation, decoupling of data processing and data storage, cross-cloud backup and disaster recovery, flexible allocation of computing resources, low latency access with cross-region services, and vendor lock-in avoidance.
+
+KubeSphere is developed to address multi-cluster and multi-cloud management challenges, including the scenarios mentioned above. It provides users with a unified control plane to distribute applications and their replicas to multiple clusters, from public clouds to on-premises environments. KubeSphere also boasts rich observability across multiple clusters, including centralized monitoring, logging, events, and auditing logs.
+
+
diff --git a/content/en/docs/v3.4/multicluster-management/unbind-cluster.md b/content/en/docs/v3.4/multicluster-management/unbind-cluster.md
new file mode 100644
index 000000000..3d28fa476
--- /dev/null
+++ b/content/en/docs/v3.4/multicluster-management/unbind-cluster.md
@@ -0,0 +1,61 @@
+---
+title: "Remove a Member Cluster"
+keywords: 'Kubernetes, KubeSphere, multicluster, hybrid-cloud'
+description: 'Learn how to remove a member cluster from your cluster pool in KubeSphere.'
+linkTitle: "Remove a Member Cluster"
+weight: 5500
+---
+
+This tutorial demonstrates how to remove a member cluster on the KubeSphere web console.
+
+## Prerequisites
+
+- You have enabled multi-cluster management.
+- You need a user with a role that includes the **Cluster Management** permission. For example, you can log in to the console as `admin` directly, or create a new role with the permission and assign it to a user.
+
+## Remove a Cluster
+
+You can remove a cluster by using either of the following methods:
+
+**Method 1**
+
+1. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+2. In the **Member Clusters** area, click the icon on the right of the member cluster.
+
+3. In **Custom Resources**, click the icon on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, navigate to `alerting` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ alerting:
+ enabled: true # Change "false" to "true".
+ ```
+
+5. You can use the web kubectl to check the installation process by executing the following command:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+You can find the web kubectl tool by clicking the icon in the lower-right corner of the console.
+ {{</ notice >}}
+
+## Verify the Installation of the Component
+
+If you can see **Alerting Messages** and **Alerting Policies** on the **Cluster Management** page, the installation is successful, because these two items are displayed only after the component is installed.
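+
+If you prefer the command line, you can also check the Pods directly. A minimal sketch, assuming the default layout in which alerting components such as Thanos Ruler run in the `kubesphere-monitoring-system` namespace:
+
+```bash
+kubectl get pod -n kubesphere-monitoring-system
+```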
+
+
+
diff --git a/content/en/docs/v3.4/pluggable-components/app-store.md b/content/en/docs/v3.4/pluggable-components/app-store.md
new file mode 100644
index 000000000..088da39c8
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/app-store.md
@@ -0,0 +1,120 @@
+---
+title: "KubeSphere App Store"
+keywords: "Kubernetes, KubeSphere, app-store, OpenPitrix"
+description: "Learn how to enable the KubeSphere App Store to share data and apps internally and set industry standards of delivery process externally."
+linkTitle: "KubeSphere App Store"
+weight: 6200
+---
+
+As an open-source and app-centric container platform, KubeSphere provides users with a Helm-based App Store for application lifecycle management on the back of [OpenPitrix](https://github.com/openpitrix/openpitrix), an open-source web-based system to package, deploy and manage different types of apps. The KubeSphere App Store allows ISVs, developers, and users to upload, test, install, and release apps with just several clicks in a one-stop shop.
+
+Internally, the KubeSphere App Store can serve as a place for different teams to share data, middleware, and office applications. Externally, it helps set industry standards for application building and delivery. After you enable this feature, you can add more apps with app templates.
+
+For more information, see [App Store](../../application-store/).
+
+## Enable the App Store Before Installation
+
+### Installing on Linux
+
+When you implement multi-node installation of KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
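+
+If you have not generated the configuration file yet, KubeKey can create a default one for you. A minimal sketch (the version flag is an assumption based on this document's v3.4 context):
+
+```bash
+./kk create config --with-kubesphere v3.4.0
+```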
+
+1. In the tutorial of [Installing KubeSphere on Linux](../../installing-on-linux/introduction/multioverview/), you create a default file `config-sample.yaml`. Modify the file by running the following command:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the App Store in this mode (for example, for testing purposes), refer to [the following section](#enable-the-app-store-after-installation) to see how the App Store can be installed after installation.
+ {{</ notice >}}
+
+2. In this file, search for `openpitrix` and change `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ openpitrix:
+ store:
+ enabled: true # Change "false" to "true".
+ ```
+
+3. Create a cluster using the configuration file:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### Installing on Kubernetes
+
+As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the KubeSphere App Store first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) file.
+
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) and edit it.
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. In this local `cluster-configuration.yaml` file, search for `openpitrix` and enable the App Store by changing `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ openpitrix:
+ store:
+ enabled: true # Change "false" to "true".
+ ```
+
+3. Run the following commands to start installation:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+## Enable the App Store After Installation
+
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+ {{< notice info >}}
+
+A Custom Resource Definition (CRD) allows users to create a new type of resources without adding another API server. They can use these resources like any other native Kubernetes objects.
+
+{{</ notice >}}
+
+3. In **Custom Resources**, click the icon on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, search for `openpitrix` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ openpitrix:
+ store:
+ enabled: true # Change "false" to "true".
+ ```
+
+5. Use the web kubectl to check the installation process by running the following command:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+You can find the web kubectl tool by clicking the icon in the lower-right corner of the console.
+
+{{</ notice >}}
+
+## Verify the Installation of the Component
+
+After you log in to the console, if you can see **App Store** in the upper-left corner and apps in it, it means the installation is successful.
+
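+You can also verify the component from the command line. A minimal sketch, assuming the default installation in which App Store (OpenPitrix) components run in the `openpitrix-system` namespace:
+
+```bash
+kubectl get pod -n openpitrix-system
+```
+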
+{{< notice note >}}
+
+- You can even access the App Store without logging in to the console by visiting `
+
+{{</ notice >}}
+
+3. In **Custom Resources**, click the icon on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, navigate to `auditing` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ auditing:
+ enabled: true # Change "false" to "true".
+ ```
+
+ {{< notice note >}}
+By default, Elasticsearch will be installed internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in this YAML file if you want to enable Auditing, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one.
+ {{</ notice >}}
+
+ ```yaml
+ es: # Storage backend for logging, tracing, events and auditing.
+ elasticsearchMasterReplicas: 1 # The total number of master nodes. Even numbers are not allowed.
+ elasticsearchDataReplicas: 1 # The total number of data nodes.
+ elasticsearchMasterVolumeSize: 4Gi # The volume size of Elasticsearch master nodes.
+ elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
+ logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
+ elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
+ externalElasticsearchHost: "" # The host of the external Elasticsearch.
+ externalElasticsearchPort: "" # The port of the external Elasticsearch.
+ ```
+
+ {{< notice note >}}
+You can find the web kubectl tool by clicking the icon in the lower-right corner of the console.
+ {{</ notice >}}
+
+## Verify the Installation of the Component
+
+{{< tabs >}}
+
+{{< tab "Verify the component on the dashboard" >}}
+
+Verify that you can use the **Audit Log Search** function from the **Toolbox** in the lower-right corner.
+
+{{</ tab >}}
+
+{{< tab "Verify the component through kubectl" >}}
+
+Execute the following command to check the status of Pods:
+
+```bash
+kubectl get pod -n kubesphere-logging-system
+```
+
+The output may look as follows if the component runs successfully:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+elasticsearch-logging-curator-elasticsearch-curator-159872n9g9g 0/1 Completed 0 2d10h
+elasticsearch-logging-curator-elasticsearch-curator-159880tzb7x 0/1 Completed 0 34h
+elasticsearch-logging-curator-elasticsearch-curator-1598898q8w7 0/1 Completed 0 10h
+elasticsearch-logging-data-0 1/1 Running 1 2d20h
+elasticsearch-logging-data-1 1/1 Running 1 2d20h
+elasticsearch-logging-discovery-0 1/1 Running 1 2d20h
+fluent-bit-6v5fs 1/1 Running 1 2d20h
+fluentbit-operator-5bf7687b88-44mhq 1/1 Running 1 2d20h
+kube-auditing-operator-7574bd6f96-p4jvv 1/1 Running 1 2d20h
+kube-auditing-webhook-deploy-6dfb46bb6c-hkhmx 1/1 Running 1 2d20h
+kube-auditing-webhook-deploy-6dfb46bb6c-jp77q 1/1 Running 1 2d20h
+```
+
+{{</ tab >}}
+
+{{</ tabs >}}
diff --git a/content/en/docs/v3.4/pluggable-components/devops.md b/content/en/docs/v3.4/pluggable-components/devops.md
new file mode 100644
index 000000000..a7b27b86d
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/devops.md
@@ -0,0 +1,130 @@
+---
+title: "KubeSphere DevOps System"
+keywords: "Kubernetes, Jenkins, KubeSphere, DevOps, cicd"
+description: "Learn how to enable DevOps to further free your developers and let them focus on code writing."
+linkTitle: "KubeSphere DevOps System"
+weight: 6300
+---
+
+The KubeSphere DevOps System is designed for CI/CD workflows in Kubernetes. Based on [Jenkins](https://jenkins.io/), it provides one-stop solutions to help both development and Ops teams build, test and publish apps to Kubernetes in a straightforward way. It also features plugin management, [Binary-to-Image (B2I)](../../project-user-guide/image-builder/binary-to-image/), [Source-to-Image (S2I)](../../project-user-guide/image-builder/source-to-image/), code dependency caching, code quality analysis, pipeline logging, and more.
+
+The DevOps System offers an automated environment for users, as apps can be automatically released to the same platform. It is also compatible with third-party private image registries (for example, Harbor) and code repositories (for example, GitLab, GitHub, SVN, or BitBucket). As such, it creates an excellent user experience by providing comprehensive, visualized CI/CD pipelines, which are extremely useful in air-gapped environments.
+
+For more information, see [DevOps User Guide](../../devops-user-guide/).
+
+## Enable DevOps Before Installation
+
+### Installing on Linux
+
+When you implement multi-node installation of KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](../../installing-on-linux/introduction/multioverview/), you create a default file `config-sample.yaml`. Modify the file by running the following command:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable DevOps in this mode (for example, for testing purposes), refer to [the following section](#enable-devops-after-installation) to see how DevOps can be installed after installation.
+ {{</ notice >}}
+
+2. In this file, search for `devops` and change `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ devops:
+ enabled: true # Change "false" to "true".
+ ```
+
+3. Create a cluster using the configuration file:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### Installing on Kubernetes
+
+As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere DevOps first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) file.
+
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) and edit it.
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. In this local `cluster-configuration.yaml` file, search for `devops` and enable DevOps by changing `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ devops:
+ enabled: true # Change "false" to "true".
+ ```
+
+3. Run the following commands to start installation:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+## Enable DevOps After Installation
+
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+ {{< notice info >}}
+
+A Custom Resource Definition (CRD) allows users to create a new type of resources without adding another API server. They can use these resources like any other native Kubernetes objects.
+
+{{</ notice >}}
+
+3. In **Custom Resources**, click the icon on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, search for `devops` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ devops:
+ enabled: true # Change "false" to "true".
+ ```
+
+5. Use the web kubectl to check the installation process by running the following command:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+You can find the web kubectl tool by clicking the icon in the lower-right corner of the console.
+
+{{</ notice >}}
+
+## Verify the Installation of the Component
+
+{{< tabs >}}
+
+{{< tab "Verify the component on the dashboard" >}}
+
+Go to **System Components** and check that all components on the **DevOps** tab page are in the **Healthy** state.
+
+{{</ tab >}}
+
+{{< tab "Verify the component through kubectl" >}}
+
+Run the following command to check the status of Pods:
+
+```bash
+kubectl get pod -n kubesphere-devops-system
+```
+
+The output may look as follows if the component runs successfully:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+devops-jenkins-5cbbfbb975-hjnll 1/1 Running 0 40m
+s2ioperator-0 1/1 Running 0 41m
+```
+
+{{</ tab >}}
+
+{{</ tabs >}}
diff --git a/content/en/docs/v3.4/pluggable-components/events.md b/content/en/docs/v3.4/pluggable-components/events.md
new file mode 100644
index 000000000..470d6ebbd
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/events.md
@@ -0,0 +1,191 @@
+---
+title: "KubeSphere Events"
+keywords: "Kubernetes, events, KubeSphere, k8s-events"
+description: "Learn how to enable Events to keep track of everything that is happening on the platform."
+linkTitle: "KubeSphere Events"
+weight: 6500
+---
+
+KubeSphere events allow users to keep track of what is happening inside a cluster, such as node scheduling status and image pulling results. Events are accurately recorded with the specific reason, status, and message displayed in the web console. To query events, users can quickly launch the web Toolkit and enter related information in the search bar, with different filters (e.g., keyword and project) available. Events can also be archived to third-party tools, such as Elasticsearch, Kafka, or Fluentd.
+
+For more information, see [Event Query](../../toolbox/events-query/).
+
+## Enable Events Before Installation
+
+### Installing on Linux
+
+When you implement multi-node installation of KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](../../installing-on-linux/introduction/multioverview/), you create a default file `config-sample.yaml`. Modify the file by executing the following command:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Events in this mode (for example, for testing purposes), refer to [the following section](#enable-events-after-installation) to see how Events can be installed after installation.
+
+{{</ notice >}}
+
+2. In this file, navigate to `events` and change `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ events:
+ enabled: true # Change "false" to "true".
+ ```
+
+ {{< notice note >}}
+By default, KubeKey will install Elasticsearch internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Events, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one.
+ {{</ notice >}}
+
+ ```yaml
+ es: # Storage backend for logging, tracing, events and auditing.
+ elasticsearchMasterReplicas: 1 # The total number of master nodes. Even numbers are not allowed.
+ elasticsearchDataReplicas: 1 # The total number of data nodes.
+ elasticsearchMasterVolumeSize: 4Gi # The volume size of Elasticsearch master nodes.
+ elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
+ logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
+ elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
+ externalElasticsearchHost: "" # The host of the external Elasticsearch.
+ externalElasticsearchPort: "" # The port of the external Elasticsearch.
+ ```
+
+3. In **Custom Resources**, click the icon on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, navigate to `events` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ events:
+ enabled: true # Change "false" to "true".
+ ```
+
+ {{< notice note >}}
+
+By default, Elasticsearch will be installed internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in this YAML file if you want to enable Events, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one.
+ {{</ notice >}}
+
+ ```yaml
+ es: # Storage backend for logging, tracing, events and auditing.
+ elasticsearchMasterReplicas: 1 # The total number of master nodes. Even numbers are not allowed.
+ elasticsearchDataReplicas: 1 # The total number of data nodes.
+ elasticsearchMasterVolumeSize: 4Gi # The volume size of Elasticsearch master nodes.
+ elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
+ logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
+ elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
+ externalElasticsearchHost: "" # The host of the external Elasticsearch.
+ externalElasticsearchPort: "" # The port of the external Elasticsearch.
+ ```
+
+ {{< notice note >}}
+You can find the web kubectl tool by clicking the icon in the lower-right corner of the console.
+{{</ notice >}}
+
+## Verify the Installation of the Component
+
+{{< tabs >}}
+
+{{< tab "Verify the component on the dashboard" >}}
+
+Verify that you can use the **Resource Event Search** function from the **Toolbox** in the lower-right corner.
+
+{{</ tab >}}
+
+{{< tab "Verify the component through kubectl" >}}
+
+Execute the following command to check the status of Pods:
+
+```bash
+kubectl get pod -n kubesphere-logging-system
+```
+
+The output may look as follows if the component runs successfully:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+elasticsearch-logging-data-0 1/1 Running 0 155m
+elasticsearch-logging-data-1 1/1 Running 0 154m
+elasticsearch-logging-discovery-0 1/1 Running 0 155m
+fluent-bit-bsw6p 1/1 Running 0 108m
+fluent-bit-smb65 1/1 Running 0 108m
+fluent-bit-zdz8b 1/1 Running 0 108m
+fluentbit-operator-9b69495b-bbx54 1/1 Running 0 109m
+ks-events-exporter-5cb959c74b-gx4hw 2/2 Running 0 7m55s
+ks-events-operator-7d46fcccc9-4mdzv 1/1 Running 0 8m
+ks-events-ruler-8445457946-cl529 2/2 Running 0 7m55s
+ks-events-ruler-8445457946-gzlm9 2/2 Running 0 7m55s
+logsidecar-injector-deploy-667c6c9579-cs4t6 2/2 Running 0 106m
+logsidecar-injector-deploy-667c6c9579-klnmf 2/2 Running 0 106m
+```
+
+{{</ tab >}}
+
+{{</ tabs >}}
+
diff --git a/content/en/docs/v3.4/pluggable-components/kubeedge.md b/content/en/docs/v3.4/pluggable-components/kubeedge.md
new file mode 100644
index 000000000..9f593a43e
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/kubeedge.md
@@ -0,0 +1,184 @@
+---
+title: "KubeEdge"
+keywords: "Kubernetes, KubeSphere, Kubeedge"
+description: "Learn how to enable KubeEdge to add edge nodes to your cluster."
+linkTitle: "KubeEdge"
+weight: 6930
+---
+
+[KubeEdge](https://kubeedge.io/en/) is an open-source system for extending native containerized application orchestration capabilities to hosts at the edge. It supports multiple edge protocols and looks to provide unified management of cloud and edge applications and resources.
+
+KubeEdge has components running in two separate places - cloud and edge nodes. The components running on the cloud, collectively known as CloudCore, include Controllers and Cloud Hub. Cloud Hub serves as the gateway for the requests sent by edge nodes while Controllers function as orchestrators. The components running on edge nodes, collectively known as EdgeCore, include EdgeHub, EdgeMesh, MetadataManager, and DeviceTwin. For more information, see [the KubeEdge website](https://kubeedge.io/en/).
+
+After you enable KubeEdge, you can [add edge nodes to your cluster](../../installing-on-linux/cluster-operation/add-edge-nodes/) and deploy workloads on them.
+
+
+
+## Enable KubeEdge Before Installation
+
+### Installing on Linux
+
+When you implement multi-node installation of KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](../../installing-on-linux/introduction/multioverview/), you create a default file `config-sample.yaml`. Modify the file by executing the following command:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable KubeEdge in this mode (for example, for testing purposes), refer to [the following section](#enable-kubeedge-after-installation) to see how KubeEdge can be installed after installation.
+ {{</ notice >}}
+
+2. In this file, navigate to `edgeruntime` and `kubeedge`, and change the value of `enabled` from `false` to `true` to enable all KubeEdge components. Save the file after you finish.
+
+ ```yaml
+ edgeruntime: # Add edge nodes to your cluster and deploy workloads on edge nodes.
+ enabled: false
+ kubeedge: # kubeedge configurations
+ enabled: false
+ cloudCore:
+ cloudHub:
+ advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
+ - "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
+ service:
+ cloudhubNodePort: "30000"
+ cloudhubQuicNodePort: "30001"
+ cloudhubHttpsNodePort: "30002"
+ cloudstreamNodePort: "30003"
+ tunnelNodePort: "30004"
+ # resources: {}
+ # hostNetWork: false
+ ```
+
+3. Set the value of `kubeedge.cloudCore.cloudHub.advertiseAddress` to the public IP address of your cluster or an IP address that can be accessed by edge nodes. Save the file when you finish editing.
+
+4. Create a cluster using the configuration file:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
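+
+ After the cluster is up, you can confirm the NodePorts CloudCore exposes. A minimal sketch; the Service name `cloudcore` is an assumption based on the default deployment:
+
+ ```bash
+ # Shows the ports configured above (30000-30004 by default)
+ kubectl -n kubeedge get svc cloudcore
+ ```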
+
+### Installing on Kubernetes
+
+As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeEdge first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) file.
+
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) and edit it.
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. In this local `cluster-configuration.yaml` file, navigate to `edgeruntime` and `kubeedge`, and change the value of `enabled` from `false` to `true` to enable all KubeEdge components.
+
+ ```yaml
+ edgeruntime: # Add edge nodes to your cluster and deploy workloads on edge nodes.
+ enabled: false
+ kubeedge: # kubeedge configurations
+ enabled: false
+ cloudCore:
+ cloudHub:
+ advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
+ - "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
+ service:
+ cloudhubNodePort: "30000"
+ cloudhubQuicNodePort: "30001"
+ cloudhubHttpsNodePort: "30002"
+ cloudstreamNodePort: "30003"
+ tunnelNodePort: "30004"
+ # resources: {}
+ # hostNetWork: false
+ ```
+
+3. Set the value of `kubeedge.cloudCore.cloudHub.advertiseAddress` to the public IP address of your cluster or an IP address that can be accessed by edge nodes.
+
+4. Save the file and execute the following commands to start installation:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+## Enable KubeEdge After Installation
+
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+ {{< notice info >}}
+A Custom Resource Definition (CRD) allows users to create a new type of resources without adding another API server. They can use these resources like any other native Kubernetes objects.
+ {{</ notice >}}
+
+3. In **Custom Resources**, click the icon on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, navigate to `edgeruntime` and `kubeedge`, and change the value of `enabled` from `false` to `true` to enable all KubeEdge components.
+
+ ```yaml
+ edgeruntime: # Add edge nodes to your cluster and deploy workloads on edge nodes.
+ enabled: false
+ kubeedge: # kubeedge configurations
+ enabled: false
+ cloudCore:
+ cloudHub:
+ advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
+ - "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
+ service:
+ cloudhubNodePort: "30000"
+ cloudhubQuicNodePort: "30001"
+ cloudhubHttpsNodePort: "30002"
+ cloudstreamNodePort: "30003"
+ tunnelNodePort: "30004"
+ # resources: {}
+ # hostNetWork: false
+ ```
+
+5. Set the value of `kubeedge.cloudCore.cloudHub.advertiseAddress` to the public IP address of your cluster or an IP address that can be accessed by edge nodes. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+6. You can use the web kubectl to check the installation process by executing the following command:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+You can find the web kubectl tool by clicking the icon in the lower-right corner of the console.
+ {{</ notice >}}
+
+## Verify the Installation of the Component
+
+{{< tabs >}}
+
+{{< tab "Verify the component on the dashboard" >}}
+
+On the **Cluster Management** page, verify that the **Edge Nodes** module has appeared under **Nodes**.
+
+{{</ tab >}}
+
+{{< tab "Verify the component through kubectl" >}}
+
+Execute the following command to check the status of Pods:
+
+```bash
+kubectl get pod -n kubeedge
+```
+
+The output may look as follows if the component runs successfully:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+cloudcore-5f994c9dfd-r4gpq 1/1 Running 0 5h13m
+edge-watcher-controller-manager-bdfb8bdb5-xqfbk 2/2 Running 0 5h13m
+iptables-hphgf 1/1 Running 0 5h13m
+```
+
+{{</ tab >}}
+
+{{</ tabs >}}
+
+{{< notice note >}}
+
+CloudCore may malfunction (`CrashLoopBackOff`) if `kubeedge.cloudCore.cloudHub.advertiseAddress` was not set when you enabled KubeEdge. In this case, run `kubectl -n kubeedge edit cm cloudcore` to add the public IP address of your cluster or an IP address that can be accessed by edge nodes.
+
+{{</ notice >}}
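+
+A quick way to check whether CloudCore is affected is sketched below; `<cloudcore-pod-name>` is a placeholder for the Pod name printed by the first command:
+
+```bash
+# Look for CrashLoopBackOff on the CloudCore Pod
+kubectl -n kubeedge get pod | grep cloudcore
+# Fix the advertise address, then delete the Pod so it restarts with the new setting
+kubectl -n kubeedge edit cm cloudcore
+kubectl -n kubeedge delete pod <cloudcore-pod-name>
+```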
diff --git a/content/en/docs/v3.4/pluggable-components/logging.md b/content/en/docs/v3.4/pluggable-components/logging.md
new file mode 100644
index 000000000..1df17c602
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/logging.md
@@ -0,0 +1,199 @@
+---
+title: "KubeSphere Logging System"
+keywords: "Kubernetes, Elasticsearch, KubeSphere, Logging, logs"
+description: "Learn how to enable Logging to leverage the tenant-based system for log collection, query and management."
+linkTitle: "KubeSphere Logging System"
+weight: 6400
+---
+
+KubeSphere provides a powerful, holistic, and easy-to-use logging system for log collection, query, and management. It covers logs at varied levels, including tenants, infrastructure resources, and applications. Users can search logs from different dimensions, such as project, workload, Pod and keyword. Compared with Kibana, the tenant-based logging system of KubeSphere features better isolation and security among tenants as tenants can only view their own logs. Apart from KubeSphere's own logging system, the container platform also allows users to add third-party log collectors, such as Elasticsearch, Kafka, and Fluentd.
+
+For more information, see [Log Query](../../toolbox/log-query/).
+
+## Enable Logging Before Installation
+
+### Installing on Linux
+
+When you install KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](../../installing-on-linux/introduction/multioverview/), you create a default file `config-sample.yaml`. Modify the file by executing the following command:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+
+- If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Logging in this mode (for example, for testing purposes), refer to [the following section](#enable-logging-after-installation) to see how Logging can be installed after installation.
+
+- If you adopt [Multi-node Installation](../../installing-on-linux/introduction/multioverview/) and use symbolic links for the Docker root directory, make sure all nodes follow exactly the same symbolic links. Logging agents are deployed as DaemonSets on nodes, so any discrepancy in container log paths may cause collection failures on those nodes.
+
+{{</ notice >}}
+
+2. In this file, navigate to `logging` and change `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ logging:
+ enabled: true # Change "false" to "true".
+ containerruntime: docker
+ ```
+
+ {{< notice info >}}To use containerd as the container runtime, change the value of the field `containerruntime` to `containerd`. If you upgraded to KubeSphere 3.4 from earlier versions, you have to manually add the field `containerruntime` under `logging` when enabling KubeSphere Logging system.
+
+ {{</ notice >}}
+
+ {{< notice note >}}By default, KubeKey will install Elasticsearch internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Logging, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one.
+ {{</ notice >}}
+
+ ```yaml
+ es: # Storage backend for logging, tracing, events and auditing.
+ elasticsearchMasterReplicas: 1 # The total number of master nodes. Even numbers are not allowed.
+ elasticsearchDataReplicas: 1 # The total number of data nodes.
+ elasticsearchMasterVolumeSize: 4Gi # The volume size of Elasticsearch master nodes.
+ elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
+ logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
+ elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
+ externalElasticsearchHost: "" # The host of the external Elasticsearch.
+ externalElasticsearchPort: "" # The port of the external Elasticsearch.
+ ```
+
+3. In **Custom Resources**, click the icon on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, navigate to `logging` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ logging:
+ enabled: true # Change "false" to "true".
+ containerruntime: docker
+ ```
+
+ {{< notice info >}}To use containerd as the container runtime, change the value of the field `.logging.containerruntime` to `containerd`. If you upgraded to KubeSphere 3.4 from earlier versions, you have to manually add the field `containerruntime` under `logging` when enabling KubeSphere Logging system.
+
+ {{</ notice >}}
+
+ {{< notice note >}}By default, Elasticsearch will be installed internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in this YAML file if you want to enable Logging, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one.
+ {{</ notice >}}
+
+ ```yaml
+ es: # Storage backend for logging, tracing, events and auditing.
+ elasticsearchMasterReplicas: 1 # The total number of master nodes. Even numbers are not allowed.
+ elasticsearchDataReplicas: 1 # The total number of data nodes.
+ elasticsearchMasterVolumeSize: 4Gi # The volume size of Elasticsearch master nodes.
+ elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
+ logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
+ elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
+ externalElasticsearchHost: "" # The host of the external Elasticsearch.
+ externalElasticsearchPort: "" # The port of the external Elasticsearch.
+ ```
+
+ {{< notice note >}}
+You can find the web kubectl tool by clicking the icon in the lower-right corner of the console.
+{{</ notice >}}
+
+## Verify the Installation of the Component
+
+{{< tabs >}}
+
+{{< tab "Verify the component on the dashboard" >}}
+
+Go to **System Components** and check that all components on the **Logging** tab page are in the **Healthy** state.
+
+{{</ tab >}}
+
+{{< tab "Verify the component through kubectl" >}}
+
+Execute the following command to check the status of Pods:
+
+```bash
+kubectl get pod -n kubesphere-logging-system
+```
+
+The output may look as follows if the component runs successfully:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+elasticsearch-logging-data-0 1/1 Running 0 87m
+elasticsearch-logging-data-1 1/1 Running 0 85m
+elasticsearch-logging-discovery-0 1/1 Running 0 87m
+fluent-bit-bsw6p 1/1 Running 0 40m
+fluent-bit-smb65 1/1 Running 0 40m
+fluent-bit-zdz8b 1/1 Running 0 40m
+fluentbit-operator-9b69495b-bbx54 1/1 Running 0 40m
+logsidecar-injector-deploy-667c6c9579-cs4t6 2/2 Running 0 38m
+logsidecar-injector-deploy-667c6c9579-klnmf 2/2 Running 0 38m
+```
+
+{{</ tab >}}
+
+{{</ tabs >}}
diff --git a/content/en/docs/v3.4/pluggable-components/metrics-server.md b/content/en/docs/v3.4/pluggable-components/metrics-server.md
new file mode 100644
index 000000000..4f89f0a23
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/metrics-server.md
@@ -0,0 +1,113 @@
+---
+title: "Metrics Server"
+keywords: "Kubernetes, KubeSphere, Metrics Server"
+description: "Learn how to enable Metrics Server to use HPA to autoscale a Deployment."
+linkTitle: "Metrics Server"
+weight: 6910
+---
+
+KubeSphere supports Horizontal Pod Autoscalers (HPA) for [Deployments](../../project-user-guide/application-workloads/deployments/). In KubeSphere, the Metrics Server controls whether the HPA is enabled. You use an HPA object to autoscale a Deployment based on different types of metrics, such as CPU and memory utilization, as well as the minimum and maximum number of replicas. In this way, an HPA helps to make sure your application runs smoothly and consistently in different situations.
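+
+As a quick illustration of what the Metrics Server enables, the following sketch autoscales a hypothetical Deployment named `demo` once the component is installed (the name and thresholds are illustrative):
+
+```bash
+# Scale between 2 and 10 replicas, targeting 80% average CPU utilization
+kubectl autoscale deployment demo --cpu-percent=80 --min=2 --max=10
+# Inspect the resulting HPA object
+kubectl get hpa
+```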
+
+## Enable the Metrics Server Before Installation
+
+### Installing on Linux
+
+When you implement multi-node installation of KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](../../installing-on-linux/introduction/multioverview/), you create a default file `config-sample.yaml`. Modify the file by executing the following command:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the Metrics Server in this mode (for example, for testing purposes), refer to [the following section](#enable-the-metrics-server-after-installation) to see how the Metrics Server can be installed after installation.
+ {{</ notice >}}
+
+2. In this file, navigate to `metrics_server` and change `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ metrics_server:
+ enabled: true # Change "false" to "true".
+ ```
+
+3. Create a cluster using the configuration file:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### Installing on Kubernetes
+
+As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the Metrics Server first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) file.
+
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) and edit it.
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. In this local `cluster-configuration.yaml` file, navigate to `metrics_server` and enable it by changing `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ metrics_server:
+ enabled: true # Change "false" to "true".
+ ```
+
+3. Execute the following commands to start installation:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+ {{< notice note >}}
+
+If you install KubeSphere on some cloud-hosted Kubernetes engines, the Metrics Server may already be installed in your environment. In this case, it is not recommended that you enable it in `cluster-configuration.yaml` as it may cause conflicts during installation.
+ {{</ notice >}}
+
+## Enable the Metrics Server After Installation
+
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+ {{< notice info >}}
+A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes objects.
+ {{</ notice >}}
+
+3. In **Custom Resources**, click the icon on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, navigate to `metrics_server` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ metrics_server:
+ enabled: true # Change "false" to "true".
+ ```
+
+5. You can use the web kubectl to check the installation process by executing the following command:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+You can find the web kubectl tool in the lower-right corner of the console.
+ {{</ notice >}}
+
+## Verify the Installation of the Component
+
+Execute the following command to verify that the Pod of Metrics Server is up and running.
+
+```bash
+kubectl get pod -n kube-system
+```
+
+If the Metrics Server is successfully installed, your cluster may return the following output (excluding irrelevant Pods):
+
+```bash
+NAME READY STATUS RESTARTS AGE
+metrics-server-6c767c9f94-hfsb7 1/1 Running 0 9m38s
+```
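+
+As an additional check, you can query the metrics API directly. The `kubectl top` commands only return data once the Metrics Server is serving metrics, so a successful response confirms the component works end to end:
+
+```bash
+# Both commands fail unless the Metrics Server is up and aggregating metrics.
+kubectl top nodes
+kubectl top pods -n kube-system
+```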
\ No newline at end of file
diff --git a/content/en/docs/v3.4/pluggable-components/network-policy.md b/content/en/docs/v3.4/pluggable-components/network-policy.md
new file mode 100644
index 000000000..3eb4fdd5e
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/network-policy.md
@@ -0,0 +1,109 @@
+---
+title: "Network Policies"
+keywords: "Kubernetes, KubeSphere, NetworkPolicy"
+description: "Learn how to enable Network Policies to control traffic flow at the IP address or port level."
+linkTitle: "Network Policies"
+weight: 6900
+---
+
+Starting from v3.0.0, users can configure network policies of native Kubernetes in KubeSphere. Network Policies are an application-centric construct, enabling you to specify how a Pod is allowed to communicate with various network entities over the network. With network policies, users can achieve network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
+
+{{< notice note >}}
+
+- Please make sure that the CNI network plugin used by the cluster supports Network Policies before you enable the feature. There are a number of CNI network plugins that support Network Policies, including Calico, Cilium, Kube-router, Romana, and Weave Net.
+- It is recommended that you use [Calico](https://www.projectcalico.org/) as the CNI plugin if you plan to enable Network Policies.
+
+{{</ notice >}}
+
+For more information, see [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/).
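+
+To illustrate what the feature controls, the following is a minimal sketch of a default-deny policy applied with kubectl; the namespace `demo` is an assumption for illustration:
+
+```bash
+# Denies all ingress traffic to every Pod in the "demo" namespace.
+# Effective only if the CNI plugin (for example, Calico) enforces network policies.
+cat <<EOF | kubectl apply -f -
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-ingress
+  namespace: demo
+spec:
+  podSelector: {}
+  policyTypes:
+    - Ingress
+EOF
+```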
+
+## Enable the Network Policy Before Installation
+
+### Installing on Linux
+
+When you implement multi-node installation of KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](../../installing-on-linux/introduction/multioverview/), you create a default file `config-sample.yaml`. Modify the file by executing the following command:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the Network Policy in this mode (for example, for testing purposes), refer to [the following section](#enable-the-network-policy-after-installation) to see how the Network Policy can be installed after installation.
+ {{</ notice >}}
+
+2. In this file, navigate to `network.networkpolicy` and change `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ network:
+ networkpolicy:
+ enabled: true # Change "false" to "true".
+ ```
+
+3. Create a cluster using the configuration file:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### Installing on Kubernetes
+
+As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the Network Policy first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) file.
+
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) and edit it.
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. In this local `cluster-configuration.yaml` file, navigate to `network.networkpolicy` and enable it by changing `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ network:
+ networkpolicy:
+ enabled: true # Change "false" to "true".
+ ```
+
+3. Execute the following commands to start installation:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+## Enable the Network Policy After Installation
+
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+ {{< notice info >}}
+A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes objects.
+ {{</ notice >}}
+
+3. In **Custom Resources**, click the icon on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, navigate to `network.networkpolicy` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ network:
+ networkpolicy:
+ enabled: true # Change "false" to "true".
+ ```
+
+5. You can use the web kubectl to check the installation process by executing the following command:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+You can find the web kubectl tool in the lower-right corner of the console.
+ {{</ notice >}}
+
+## Verify the Installation of the Component
+
+If you can see the **Network Policies** module in **Network**, it means the installation is successful as this part won't display until you install the component.
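+
+You can also run a quick sanity check from the command line to confirm that the cluster accepts NetworkPolicy objects:
+
+```bash
+# Lists network policies across all namespaces; an empty list without errors is fine.
+kubectl get networkpolicies --all-namespaces
+```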
\ No newline at end of file
diff --git a/content/en/docs/v3.4/pluggable-components/overview.md b/content/en/docs/v3.4/pluggable-components/overview.md
new file mode 100644
index 000000000..04f63b922
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/overview.md
@@ -0,0 +1,98 @@
+---
+title: "Enable Pluggable Components — Overview"
+keywords: "Kubernetes, KubeSphere, pluggable-components, overview"
+description: "Develop a basic understanding of key components in KubeSphere, including features and resource consumption."
+linkTitle: "Overview"
+weight: 6100
+---
+
+KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable which means you can enable them either before or after installation. By default, KubeSphere will be deployed with a minimal installation if you do not enable them.
+
+Different pluggable components are deployed in different namespaces. You can enable any of them based on your needs. It is highly recommended that you install these pluggable components to discover the full-stack features and capabilities provided by KubeSphere.
+
+For more information about how to enable each component, see respective tutorials in this chapter.
+
+## Resource Requirements
+
+Before you enable pluggable components, make sure you have enough resources in your environment based on the tables below. Otherwise, components may crash due to a lack of resources.
+
+{{< notice note >}}
+
+The following request and limit of CPU and memory resources are required by a single replica.
+
+{{</ notice >}}
+
+### KubeSphere App Store
+
+| Namespace | openpitrix-system |
+| -------------- | ------------------------------------------------------------ |
+| CPU Request | 0.3 core |
+| CPU Limit | None |
+| Memory Request | 300 MiB |
+| Memory Limit | None |
+| Installation | Optional |
+| Notes | Provide an App Store with application lifecycle management. The installation is recommended. |
+
+### KubeSphere DevOps System
+
+| Namespace | kubesphere-devops-system | kubesphere-devops-system |
+| -------------- | ------------------------------------------------------------ | ------------------------------------------------------- |
+| Pattern | All-in-One installation | Multi-node installation |
+| CPU Request | 34 m | 0.47 core |
+| CPU Limit | None | None |
+| Memory Request | 2.69 G | 8.6 G |
+| Memory Limit | None | None |
+| Installation | Optional | Optional |
+| Notes | Provide one-stop DevOps solutions with Jenkins pipelines and B2I & S2I. | The memory of one of the nodes must be larger than 8 G. |
+
+### KubeSphere Monitoring System
+
+| Namespace | kubesphere-monitoring-system | kubesphere-monitoring-system | kubesphere-monitoring-system |
+| -------------- | ------------------------------------------------------------ | ---------------------------- | ---------------------------- |
+| Sub-component | 2 x Prometheus | 3 x Alertmanager | Notification Manager |
+| CPU Request | 100 m | 10 m | 100 m |
+| CPU Limit | 4 cores | None | 500 m |
+| Memory Request | 400 MiB | 30 MiB | 20 MiB |
+| Memory Limit | 8 GiB | None | 1 GiB |
+| Installation | Required | Required | Required |
+| Notes | The memory consumption of Prometheus depends on the cluster size. 8 GiB is sufficient for a cluster with 200 nodes/16,000 Pods. | - | - |
+
+{{< notice note >}}
+
+The KubeSphere monitoring system is not a pluggable component. It is installed by default. The resource request and limit of it are also listed on this page for your reference as it is closely related to other components such as logging.
+
+{{</ notice >}}
+
+### KubeSphere Logging System
+
+| Namespace | kubesphere-logging-system | kubesphere-logging-system | kubesphere-logging-system | kubesphere-logging-system |
+| -------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
+| Sub-component | 3 x Elasticsearch | fluent bit | kube-events | kube-auditing |
+| CPU Request | 50 m | 20 m | 90 m | 20 m |
+| CPU Limit | 1 core | 200 m | 900 m | 200 m |
+| Memory Request | 2 G | 50 MiB | 120 MiB | 50 MiB |
+| Memory Limit | None | 100 MiB | 1200 MiB | 100 MiB |
+| Installation | Optional | Required | Optional | Optional |
+| Notes | An optional component for log data storage. The internal Elasticsearch is not recommended for the production environment. | The log collection agent. It is a required component after you enable logging. | Collecting, filtering, exporting and alerting of Kubernetes events. | Collecting, filtering and alerting of Kubernetes and KubeSphere auditing logs. |
+
+### KubeSphere Alerting and Notification
+
+| Namespace | kubesphere-alerting-system |
+| -------------- | ------------------------------------------------------------ |
+| CPU Request | 0.08 core |
+| CPU Limit | None |
+| Memory Request | 80 M |
+| Memory Limit | None |
+| Installation | Optional |
+| Notes | Alerting and Notification need to be enabled at the same time. |
+
+### KubeSphere Service Mesh
+
+| Namespace | istio-system |
+| -------------- | ------------------------------------------------------------ |
+| CPU Request | 1 core |
+| CPU Limit | None |
+| Memory Request | 3.5 G |
+| Memory Limit | None |
+| Installation | Optional |
+| Notes | Support grayscale release strategies, traffic topology, traffic management and distributed tracing. |
diff --git a/content/en/docs/v3.4/pluggable-components/pod-ip-pools.md b/content/en/docs/v3.4/pluggable-components/pod-ip-pools.md
new file mode 100644
index 000000000..b2916ef1b
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/pod-ip-pools.md
@@ -0,0 +1,104 @@
+---
+title: "Pod IP Pools"
+keywords: "Kubernetes, KubeSphere, Pod, IP pools"
+description: "Learn how to enable Pod IP Pools to assign a specific Pod IP pool to your Pods."
+linkTitle: "Pod IP Pools"
+weight: 6920
+---
+
+A Pod IP pool is used to manage the Pod network address space, and the address spaces of different Pod IP pools cannot overlap. When you create a workload, you can select a specific Pod IP pool, so that created Pods will be assigned IP addresses from this pool.
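+
+Under the hood, this feature is backed by Calico IP pools. As a hedged sketch, a workload can also request a specific pool through a Pod annotation interpreted by the Calico CNI plugin; the names `demo-deployment` and `test-pool` are assumptions for illustration:
+
+```bash
+# Assumes a Pod IP pool named "test-pool" already exists in the cluster.
+kubectl patch deployment demo-deployment --type merge -p \
+  '{"spec":{"template":{"metadata":{"annotations":{"cni.projectcalico.org/ipv4pools":"[\"test-pool\"]"}}}}}'
+```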
+
+## Enable Pod IP Pools Before Installation
+
+### Installing on Linux
+
+When you implement multi-node installation of KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](../../installing-on-linux/introduction/multioverview/), you create a default file `config-sample.yaml`. Modify the file by executing the following command:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Pod IP Pools in this mode (for example, for testing purposes), refer to [the following section](#enable-pod-ip-pools-after-installation) to see how Pod IP pools can be installed after installation.
+ {{</ notice >}}
+
+2. In this file, navigate to `network.ippool.type` and change `none` to `calico`. Save the file after you finish.
+
+ ```yaml
+ network:
+ ippool:
+ type: calico # Change "none" to "calico".
+ ```
+
+3. Create a cluster using the configuration file:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### Installing on Kubernetes
+
+As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable Pod IP Pools first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) file.
+
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) and edit it.
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. In this local `cluster-configuration.yaml` file, navigate to `network.ippool.type` and enable it by changing `none` to `calico`. Save the file after you finish.
+
+ ```yaml
+ network:
+ ippool:
+ type: calico # Change "none" to "calico".
+ ```
+
+3. Execute the following commands to start installation:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+
+## Enable Pod IP Pools After Installation
+
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+ {{< notice info >}}
+A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes objects.
+ {{</ notice >}}
+
+3. In **Custom Resources**, click the icon on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, navigate to `network` and change `network.ippool.type` to `calico`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ network:
+ ippool:
+ type: calico # Change "none" to "calico".
+ ```
+
+5. You can use the web kubectl to check the installation process by executing the following command:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+You can find the web kubectl tool in the lower-right corner of the console.
+ {{</ notice >}}
+
+## Verify the Installation of the Component
+
+On the **Cluster Management** page, verify that you can see the **Pod IP Pools** module under **Network**.
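+
+Alternatively, you can list the IP pool objects from the command line. This is a hedged check that assumes the KubeSphere `IPPool` CRD (`ippools.network.kubesphere.io`) is installed:
+
+```bash
+# Lists the Pod IP pools managed by KubeSphere.
+kubectl get ippools.network.kubesphere.io
+```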
+
+
+
diff --git a/content/en/docs/v3.4/pluggable-components/service-mesh.md b/content/en/docs/v3.4/pluggable-components/service-mesh.md
new file mode 100644
index 000000000..a8e0e6f36
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/service-mesh.md
@@ -0,0 +1,157 @@
+---
+title: "KubeSphere Service Mesh"
+keywords: "Kubernetes, Istio, KubeSphere, service-mesh, microservices"
+description: "Learn how to enable KubeSphere Service Mesh to use different traffic management strategies for microservices governance."
+linkTitle: "KubeSphere Service Mesh"
+weight: 6800
+---
+
+On the basis of [Istio](https://istio.io/), KubeSphere Service Mesh visualizes microservices governance and traffic management. It features a powerful toolkit including **circuit breaking, blue-green deployment, canary release, traffic mirroring, tracing, observability, and traffic control**. Developers can easily get started with KubeSphere Service Mesh without any code hacking, which greatly reduces the learning curve of Istio. All features of KubeSphere Service Mesh are designed to meet users' business needs.
+
+For more information, see [Grayscale Release](../../project-user-guide/grayscale-release/overview/).
+
+## Enable KubeSphere Service Mesh Before Installation
+
+### Installing on Linux
+
+When you implement multi-node installation of KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](../../installing-on-linux/introduction/multioverview/), you create a default file `config-sample.yaml`. Modify the file by executing the following command:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable KubeSphere Service Mesh in this mode (for example, for testing purposes), refer to [the following section](#enable-kubesphere-service-mesh-after-installation) to see how KubeSphere Service Mesh can be installed after installation.
+ {{</ notice >}}
+
+2. In this file, navigate to `servicemesh` and change `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ servicemesh:
+   enabled: true # Change "false" to "true".
+   istio: # Customizing the istio installation configuration, refer to https://istio.io/latest/docs/setup/additional-setup/customize-installation/
+     components:
+       ingressGateways:
+       - name: istio-ingressgateway # Used to expose a service outside of the service mesh using an Istio Gateway. The value is false by default.
+         enabled: false
+       cni:
+         enabled: false # When the value is true, it identifies user application pods with sidecars requiring traffic redirection, and sets this up in the network setup phase of the Kubernetes pod lifecycle.
+ ```
+
+ {{< notice note >}}
+ - For more information about how to access services after enabling the Ingress Gateway, please refer to [Ingress Gateway](https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/).
+ - For more information about the Istio CNI plugin, please refer to [Install Istio with the Istio CNI plugin](https://istio.io/latest/docs/setup/additional-setup/cni/).
+ {{</ notice >}}
+
+3. Run the following command to create a cluster using the configuration file:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### Installing on Kubernetes
+
+As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere Service Mesh first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) file.
+
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) and edit it.
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. In this local `cluster-configuration.yaml` file, navigate to `servicemesh` and enable it by changing `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ servicemesh:
+   enabled: true # Change "false" to "true".
+   istio: # Customizing the istio installation configuration, refer to https://istio.io/latest/docs/setup/additional-setup/customize-installation/
+     components:
+       ingressGateways:
+       - name: istio-ingressgateway # Used to expose a service outside of the service mesh using an Istio Gateway. The value is false by default.
+         enabled: false
+       cni:
+         enabled: false # When the value is true, it identifies user application pods with sidecars requiring traffic redirection, and sets this up in the network setup phase of the Kubernetes pod lifecycle.
+ ```
+
+3. Run the following commands to start installation:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+## Enable KubeSphere Service Mesh After Installation
+
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+ {{< notice info >}}
+A Custom Resource Definition (CRD) allows users to create a new type of resources without adding another API server. They can use these resources like any other native Kubernetes objects.
+ {{</ notice >}}
+
+3. In **Custom Resources**, click the icon on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, navigate to `servicemesh` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ servicemesh:
+   enabled: true # Change "false" to "true".
+   istio: # Customizing the istio installation configuration, refer to https://istio.io/latest/docs/setup/additional-setup/customize-installation/
+     components:
+       ingressGateways:
+       - name: istio-ingressgateway # Used to expose a service outside of the service mesh using an Istio Gateway. The value is false by default.
+         enabled: false
+       cni:
+         enabled: false # When the value is true, it identifies user application pods with sidecars requiring traffic redirection, and sets this up in the network setup phase of the Kubernetes pod lifecycle.
+ ```
+
+5. Run the following command in kubectl to check the installation process:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+You can find the web kubectl tool in the lower-right corner of the console.
+ {{</ notice >}}
+
+## Verify the Installation of the Component
+
+{{< tabs >}}
+
+{{< tab "Verify the component on the dashboard" >}}
+
+Go to **System Components** and check whether all components on the **Istio** tab page are in the **Healthy** state. If yes, the component is successfully installed.
+
+{{</ tab >}}
+
+{{< tab "Verify the component through kubectl" >}}
+
+Run the following command to check the status of Pods:
+
+```bash
+kubectl get pod -n istio-system
+```
+
+The following is an example of the output if the component runs successfully:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+istio-ingressgateway-78dbc5fbfd-f4cwt 1/1 Running 0 9m5s
+istiod-1-6-10-7db56f875b-mbj5p 1/1 Running 0 10m
+jaeger-collector-76bf54b467-k8blr 1/1 Running 0 6m48s
+jaeger-operator-7559f9d455-89hqm 1/1 Running 0 7m
+jaeger-query-b478c5655-4lzrn 2/2 Running 0 6m48s
+kiali-f9f7d6f9f-gfsfl 1/1 Running 0 4m1s
+kiali-operator-7d5dc9d766-qpkb6 1/1 Running 0 6m53s
+```
+
+{{</ tab >}}
+
+{{</ tabs >}}
diff --git a/content/en/docs/v3.4/pluggable-components/service-topology.md b/content/en/docs/v3.4/pluggable-components/service-topology.md
new file mode 100644
index 000000000..fe5836b74
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/service-topology.md
@@ -0,0 +1,130 @@
+---
+title: "Service Topology"
+keywords: "Kubernetes, KubeSphere, Services, Topology"
+description: "Learn how to enable Service Topology to view contextual details of your Pods based on Weave Scope."
+linkTitle: "Service Topology"
+weight: 6915
+---
+
+You can enable Service Topology to integrate [Weave Scope](https://www.weave.works/oss/scope/), a visualization and monitoring tool for Docker and Kubernetes. Weave Scope uses established APIs to collect information to build a topology of your apps and containers. The service topology is displayed in your project, providing you with visual representations of connections based on traffic.
+
+## Enable Service Topology Before Installation
+
+### Installing on Linux
+
+When you implement multi-node installation of KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](../../installing-on-linux/introduction/multioverview/), you create a default file `config-sample.yaml`. Modify the file by executing the following command:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Service Topology in this mode (for example, for testing purposes), refer to [the following section](#enable-service-topology-after-installation) to see how Service Topology can be installed after installation.
+ {{</ notice >}}
+
+2. In this file, navigate to `network.topology.type` and change `none` to `weave-scope`. Save the file after you finish.
+
+ ```yaml
+ network:
+ topology:
+ type: weave-scope # Change "none" to "weave-scope".
+ ```
+
+3. Create a cluster using the configuration file:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### Installing on Kubernetes
+
+As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable Service Topology first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) file.
+
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) and edit it.
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. In this local `cluster-configuration.yaml` file, navigate to `network.topology.type` and enable it by changing `none` to `weave-scope`. Save the file after you finish.
+
+ ```yaml
+ network:
+ topology:
+ type: weave-scope # Change "none" to "weave-scope".
+ ```
+
+3. Execute the following commands to start installation:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+
+## Enable Service Topology After Installation
+
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+ {{< notice info >}}
+A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes objects.
+ {{</ notice >}}
+
+3. In **Custom Resources**, click the icon on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, navigate to `network` and change `network.topology.type` to `weave-scope`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ network:
+ topology:
+ type: weave-scope # Change "none" to "weave-scope".
+ ```
+
+5. You can use the web kubectl to check the installation process by executing the following command:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+You can find the web kubectl tool in the lower-right corner of the console.
+ {{</ notice >}}
+
+## Verify the Installation of the Component
+
+{{< tabs >}}
+
+{{< tab "Verify the component on the dashboard" >}}
+
+Go to one of your projects, navigate to **Services** under **Application Workloads**, and you can see a topology of your Services on the **Service Topology** tab page.
+
+{{</ tab >}}
+
+{{< tab "Verify the component through kubectl" >}}
+
+Execute the following command to check the status of Pods:
+
+```bash
+kubectl get pod -n weave
+```
+
+The output may look as follows if the component runs successfully:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+weave-scope-agent-48cjp 1/1 Running 0 3m1s
+weave-scope-agent-9jb4g 1/1 Running 0 3m1s
+weave-scope-agent-ql5cf 1/1 Running 0 3m1s
+weave-scope-app-5b76897b6f-8bsls 1/1 Running 0 3m1s
+weave-scope-cluster-agent-8d9b8c464-5zlpp 1/1 Running 0 3m1s
+```
+
+{{</ tab >}}
+
+{{</ tabs >}}
\ No newline at end of file
diff --git a/content/en/docs/v3.4/pluggable-components/uninstall-pluggable-components.md b/content/en/docs/v3.4/pluggable-components/uninstall-pluggable-components.md
new file mode 100644
index 000000000..81a0f5035
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/uninstall-pluggable-components.md
@@ -0,0 +1,205 @@
+---
+title: "Uninstall Pluggable Components"
+keywords: "Installer, uninstall, KubeSphere, Kubernetes"
+description: "Learn how to uninstall each pluggable component in KubeSphere."
+linkTitle: "Uninstall Pluggable Components"
+weight: 6940
+---
+
+After you [enable the pluggable components of KubeSphere](../../pluggable-components/), you can also uninstall them by performing the following steps. Please back up any necessary data before you uninstall these components.
+
+## Prerequisites
+
+You have to change the value of the field `enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration` before you uninstall any pluggable components except Service Topology and Pod IP Pools.
+
+Use either of the following methods to change the value of the field `enabled`:
+
+- Run the following command to edit `ks-installer`:
+
+ ```bash
+ kubectl -n kubesphere-system edit clusterconfiguration ks-installer
+ ```
+
+- Log in to the KubeSphere web console as `admin`, click **Platform** in the upper-left corner and select **Cluster Management**, and then go to **CRDs** to search for `ClusterConfiguration`. For more information, see [Enable Pluggable Components](../../../pluggable-components/).
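+
+If you prefer a non-interactive change, you can also patch the field directly with `kubectl patch`, the same mechanism used by the uninstallation steps below. This is a sketch; the JSON path targets the App Store and should be replaced with the path of the component you are disabling:
+
+```bash
+# Sets spec.openpitrix.store.enabled to false in ks-installer.
+kubectl -n kubesphere-system patch cc ks-installer --type=json \
+  -p='[{"op": "replace", "path": "/spec/openpitrix/store/enabled", "value": false}]'
+```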
+
+{{< notice note >}}
+
+After the value is changed, you need to wait until the updating process is complete before you continue with any further operations.
+
+{{</ notice >}}
+
+## Uninstall KubeSphere App Store
+
+Change the value of `openpitrix.store.enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+## Uninstall KubeSphere DevOps
+
+1. To uninstall DevOps:
+
+ ```bash
+ helm uninstall -n kubesphere-devops-system devops
+ kubectl patch -n kubesphere-system cc ks-installer --type=json -p='[{"op": "remove", "path": "/status/devops"}]'
+ kubectl patch -n kubesphere-system cc ks-installer --type=json -p='[{"op": "replace", "path": "/spec/devops/enabled", "value": false}]'
+ ```
+2. To delete DevOps resources:
+
+ ```bash
+ # Remove all resources related with DevOps
+ for devops_crd in $(kubectl get crd -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | grep "devops.kubesphere.io"); do
+ for ns in $(kubectl get ns -ojsonpath='{.items..metadata.name}'); do
+ for devops_res in $(kubectl get $devops_crd -n $ns -oname); do
+ kubectl patch $devops_res -n $ns -p '{"metadata":{"finalizers":[]}}' --type=merge
+ done
+ done
+ done
+ # Remove all DevOps CRDs
+ kubectl get crd -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | grep "devops.kubesphere.io" | xargs -I crd_name kubectl delete crd crd_name
+ # Remove DevOps namespace
+ kubectl delete namespace kubesphere-devops-system
+ ```
+
+
+## Uninstall KubeSphere Logging
+
+1. Change the value of `logging.enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+2. To disable only log collection:
+
+ ```bash
+ kubectl delete inputs.logging.kubesphere.io -n kubesphere-logging-system tail
+ ```
+
+ {{< notice note >}}
+
+ After running this command, you can still view the container's recent logs provided by Kubernetes by default. However, the container history logs will be cleared and you cannot browse them any more.
+
+ {{</ notice >}}
+
+3. To uninstall the Logging system, including Elasticsearch:
+
+ ```bash
+ kubectl delete crd fluentbitconfigs.logging.kubesphere.io
+ kubectl delete crd fluentbits.logging.kubesphere.io
+ kubectl delete crd inputs.logging.kubesphere.io
+ kubectl delete crd outputs.logging.kubesphere.io
+ kubectl delete crd parsers.logging.kubesphere.io
+ kubectl delete deployments.apps -n kubesphere-logging-system fluentbit-operator
+ helm uninstall elasticsearch-logging --namespace kubesphere-logging-system
+ ```
+
+ {{< notice warning >}}
+
+ This operation may cause anomalies in Auditing, Events, and Service Mesh.
+
+ {{</ notice >}}
+
+4. Run the following command:
+
+ ```bash
+ kubectl delete deployment logsidecar-injector-deploy -n kubesphere-logging-system
+ kubectl delete ns kubesphere-logging-system
+ ```
+
+## Uninstall KubeSphere Events
+
+1. Change the value of `events.enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+2. Run the following command:
+
+ ```bash
+ helm delete ks-events -n kubesphere-logging-system
+ ```
+
+## Uninstall KubeSphere Alerting
+
+1. Change the value of `alerting.enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+2. Run the following command:
+
+ ```bash
+ kubectl -n kubesphere-monitoring-system delete thanosruler kubesphere
+ ```
+
+ {{< notice note >}}
+
+ Notification is installed in KubeSphere 3.4 by default, so you do not need to uninstall it.
+
+ {{</ notice >}}
+
+
+## Uninstall KubeSphere Auditing
+
+1. Change the value of `auditing.enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+2. Run the following commands:
+
+ ```bash
+ helm uninstall kube-auditing -n kubesphere-logging-system
+ kubectl delete crd rules.auditing.kubesphere.io
+ kubectl delete crd webhooks.auditing.kubesphere.io
+ ```
+
+## Uninstall KubeSphere Service Mesh
+
+1. Change the value of `servicemesh.enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+2. Run the following commands:
+
+ ```bash
+ curl -L https://istio.io/downloadIstio | sh -
+ istioctl x uninstall --purge
+
+ kubectl -n istio-system delete kiali kiali
+ helm -n istio-system delete kiali-operator
+
+ kubectl -n istio-system delete jaeger jaeger
+ helm -n istio-system delete jaeger-operator
+ ```
+
+## Uninstall Network Policies
+
+For the component NetworkPolicy, disabling it does not require uninstalling the component as its controller is now inside `ks-controller-manager`. If you want to remove it from the KubeSphere console, change the value of `network.networkpolicy.enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+## Uninstall Metrics Server
+
+1. Change the value of `metrics_server.enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+2. Run the following commands:
+
+ ```bash
+ kubectl delete apiservice v1beta1.metrics.k8s.io
+ kubectl -n kube-system delete service metrics-server
+ kubectl -n kube-system delete deployment metrics-server
+ ```
+
+## Uninstall Service Topology
+
+1. Change the value of `network.topology.type` from `weave-scope` to `none` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+2. Run the following command:
+
+ ```bash
+ kubectl delete ns weave
+ ```
+
+## Uninstall Pod IP Pools
+
+Change the value of `network.ippool.type` from `calico` to `none` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+## Uninstall KubeEdge
+
+1. Change the value of `kubeedge.enabled` and `edgeruntime.enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+2. Run the following commands:
+
+ ```bash
+ helm uninstall kubeedge -n kubeedge
+ kubectl delete ns kubeedge
+ ```
+
+ {{< notice note >}}
+
+ After uninstallation, you will not be able to add edge nodes to your cluster.
+
+ {{</ notice >}}
+
diff --git a/content/en/docs/v3.4/project-administration/_index.md b/content/en/docs/v3.4/project-administration/_index.md
new file mode 100644
index 000000000..4964f300c
--- /dev/null
+++ b/content/en/docs/v3.4/project-administration/_index.md
@@ -0,0 +1,13 @@
+---
+title: "Project Administration"
+description: "Help you to better manage KubeSphere projects"
+layout: "second"
+
+linkTitle: "Project Administration"
+weight: 13000
+
+icon: "/images/docs/v3.x/docs.svg"
+
+---
+
+A KubeSphere project is a Kubernetes namespace. There are two types of projects: single-cluster projects and multi-cluster projects. The former is a regular Kubernetes namespace, while the latter is a federated namespace across multiple clusters. As a project administrator, you are responsible for project creation, limit range settings, network isolation configuration, and more.
diff --git a/content/en/docs/v3.4/project-administration/container-limit-ranges.md b/content/en/docs/v3.4/project-administration/container-limit-ranges.md
new file mode 100644
index 000000000..8fa82fa9d
--- /dev/null
+++ b/content/en/docs/v3.4/project-administration/container-limit-ranges.md
@@ -0,0 +1,47 @@
+---
+title: "Container Limit Ranges"
+keywords: 'Kubernetes, KubeSphere, resource, quotas, limits, requests, limit ranges, containers'
+description: 'Learn how to set default container limit ranges in a project.'
+linkTitle: "Container Limit Ranges"
+weight: 13400
+---
+
+A container can use as much CPU and memory as set by [the resource quota for a project](../../workspace-administration/project-quotas/). At the same time, KubeSphere uses requests and limits to control resource (for example, CPU and memory) usage for a container, also known as [LimitRanges](https://kubernetes.io/docs/concepts/policy/limit-range/) in Kubernetes. Requests make sure the container can get the resources it needs, as they are specifically guaranteed and reserved. On the contrary, limits ensure that a container can never use resources above a certain value.
+
+When you create a workload, such as a Deployment, you configure resource [Kubernetes requests and limits](https://kubesphere.io/blogs/understand-requests-and-limits-in-kubernetes/) for the container. To make these request and limit fields pre-populated with values, you can set default limit ranges.
+
+This tutorial demonstrates how to set default limit ranges for containers in a project.
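+
+On the Kubernetes side, the console settings described below correspond to a `LimitRange` object in the project namespace. The following is a minimal sketch, with the namespace `demo-project` and all values chosen for illustration:
+
+```bash
+# Assumes the project namespace "demo-project" exists.
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: LimitRange
+metadata:
+  name: default-container-limits
+  namespace: demo-project
+spec:
+  limits:
+    - type: Container
+      defaultRequest: # Pre-populates "requests" for new containers.
+        cpu: 100m
+        memory: 128Mi
+      default: # Pre-populates "limits" for new containers.
+        cpu: 500m
+        memory: 512Mi
+EOF
+```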
+
+## Prerequisites
+
+You have an available workspace, a project and a user (`project-admin`). The user must have the `admin` role at the project level. For more information, see [Create Workspaces, Projects, Users and Roles](../../quick-start/create-workspace-and-project/).
+
+## Set Default Limit Ranges
+
+1. Log in to the console as `project-admin` and go to a project. On the **Overview** page, you can see default limit ranges remain unset if the project is newly created. Click **Edit Quotas** next to **Default Container Quotas Not Set** to configure limit ranges.
+
+2. In the dialog that appears, you can see that KubeSphere does not set any requests or limits by default. To set requests and limits to control CPU and memory resources, use the slider to move to a desired value or enter numbers directly. Leaving a field blank means you do not set any requests or limits.
+
+ {{< notice note >}}
+
+ The limit can never be lower than the request.
+
+ {{</ notice >}}
+
+3. Click **OK** to finish setting limit ranges.
+
+4. Go to **Basic Information** in **Project Settings**, and you can see default limit ranges for containers in a project.
+
+5. To change default limit ranges, click **Edit Project** on the **Basic Information** page and select **Edit Default Container Quotas**.
+
+6. Change limit ranges directly in the dialog and click **OK**.
+
+7. When you create a workload, requests and limits of the container will be pre-populated with values.
+ {{< notice note >}}
+ For more information, see **Resource Request** in [Container Image Settings](../../project-user-guide/application-workloads/container-image-settings/).
+
+ {{</ notice >}}
+
+## See Also
+
+[Project Quotas](../../workspace-administration/project-quotas/)
diff --git a/content/en/docs/v3.4/project-administration/disk-log-collection.md b/content/en/docs/v3.4/project-administration/disk-log-collection.md
new file mode 100644
index 000000000..1b9f2a295
--- /dev/null
+++ b/content/en/docs/v3.4/project-administration/disk-log-collection.md
@@ -0,0 +1,75 @@
+---
+title: "Log Collection"
+keywords: 'KubeSphere, Kubernetes, project, disk, log, collection'
+description: 'Enable log collection so that you can collect, manage, and analyze logs in a unified way.'
+linkTitle: "Log Collection"
+weight: 13600
+---
+
+KubeSphere supports multiple log collection methods so that Ops teams can collect, manage, and analyze logs in a unified and flexible way.
+
+This tutorial demonstrates how to collect logs for an example app.
+
+## Prerequisites
+
+- You need to create a workspace, a project and a user (`project-admin`). The user must be invited to the project with the role of `admin` at the project level. For more information, see [Create Workspaces, Projects, Users and Roles](../../quick-start/create-workspace-and-project/).
+- You need to enable [the KubeSphere Logging System](../../pluggable-components/logging/).
+
+## Enable Log Collection
+
+1. Log in to the web console of KubeSphere as `project-admin` and go to your project.
+
+2. From the left navigation bar, click **Log Collection** in **Project Settings**, and then click the toggle to enable the feature.
+
+## Create a Deployment
+
+1. From the left navigation bar, select **Workloads** in **Application Workloads**. Under the **Deployments** tab, click **Create**.
+
+2. In the dialog that appears, set a name for the Deployment (for example, `demo-deployment`) and click **Next**.
+
+3. Under **Containers**, click **Add Container**.
+
+4. Enter `alpine` in the search bar to use the image (tag: `latest`) as an example.
+
+5. Scroll down to **Start Command** and select the checkbox. Enter the following values for **Command** and **Parameters** respectively, click **√**, and then click **Next**.
+
+ **Command**
+
+ ```bash
+ /bin/sh
+ ```
+
+ **Parameters**
+
+ ```bash
+ -c,if [ ! -d /data/log ];then mkdir -p /data/log;fi; while true; do date >> /data/log/app-test.log; sleep 30;done
+ ```
+
+ {{< notice note >}}
+
+ The command and parameters above mean that the date information will be exported to `app-test.log` in `/data/log` every 30 seconds.
+
+ {{</ notice >}}
+
+6. On the **Storage Settings** tab, click the icon on the right.
+
+| Built-in Roles | Description |
+| --- | --- |
+| `viewer` | Project viewer who can view all resources in the project. |
+| `operator` | Project operator who can manage resources other than users and roles in the project. |
+| `admin` | Project administrator who has full control over all resources in the project. |
+
+## Invite a New Member
+
+1. Navigate to **Project Members** under **Project Settings**, and click **Invite**.
+
+2. Invite a user to the project by clicking the icon on the right of it and assigning a role to it.
+
+3. After you add the user to the project, click **OK**. In **Project Members**, you can see the user in the list.
+
+4. To edit the role of an existing user or remove the user from the project, click the icon on the right and select the corresponding operation.
diff --git a/content/en/docs/v3.4/project-user-guide/_index.md b/content/en/docs/v3.4/project-user-guide/_index.md
new file mode 100644
index 000000000..f73099b6b
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/_index.md
@@ -0,0 +1,12 @@
+---
+title: "Project User Guide"
+description: "Help you to better manage resources in a KubeSphere project"
+layout: "second"
+
+linkTitle: "Project User Guide"
+weight: 10000
+
+icon: "/images/docs/v3.x/docs.svg"
+---
+
+In KubeSphere, project users with necessary permissions are able to perform a series of tasks, such as creating different kinds of workloads, configuring volumes, Secrets, and ConfigMaps, setting various release strategies, monitoring app metrics, and creating alerting policies. As KubeSphere features great flexibility and compatibility without any code hacking into native Kubernetes, it is very convenient for users to get started with any feature required for their testing, development and production environments.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/project-user-guide/alerting/_index.md b/content/en/docs/v3.4/project-user-guide/alerting/_index.md
new file mode 100644
index 000000000..1b4523bc0
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/alerting/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Alerting"
+weight: 10700
+
+_build:
+ render: false
+---
diff --git a/content/en/docs/v3.4/project-user-guide/alerting/alerting-message.md b/content/en/docs/v3.4/project-user-guide/alerting/alerting-message.md
new file mode 100644
index 000000000..507563542
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/alerting/alerting-message.md
@@ -0,0 +1,27 @@
+---
+title: "Alerting Messages (Workload Level)"
+keywords: 'KubeSphere, Kubernetes, Workload, Alerting, Message, Notification'
+description: 'Learn how to view alerting messages for workloads.'
+linkTitle: "Alerting Messages (Workload Level)"
+weight: 10720
+---
+
+Alerting messages record detailed information of alerts triggered based on the alerting policy defined. This tutorial demonstrates how to view alerting messages at the workload level.
+
+## Prerequisites
+
+- You have enabled [KubeSphere Alerting](../../../pluggable-components/alerting/).
+- You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+- You have created a workload-level alerting policy and an alert has been triggered. For more information, refer to [Alerting Policies (Workload Level)](../alerting-policy/).
+
+## View Alerting Messages
+
+1. Log in to the console as `project-regular`, go to your project, and go to **Alerting Messages** under **Monitoring & Alerting**.
+
+2. On the **Alerting Messages** page, you can see all alerting messages in the list. The first column displays the summary and message you have defined in the notification of the alert. To view details of an alerting message, click the name of the alerting policy and click the **Alerting History** tab on the displayed page.
+
+3. On the **Alerting History** tab, you can see alert severity, monitoring targets, and activation time.
+
+## View Notifications
+
+If you also want to receive alert notifications (for example, email and Slack messages), you need to configure [a notification channel](../../../cluster-administration/platform-settings/notification-management/configure-email/) first.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/project-user-guide/alerting/alerting-policy.md b/content/en/docs/v3.4/project-user-guide/alerting/alerting-policy.md
new file mode 100644
index 000000000..2610a09a4
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/alerting/alerting-policy.md
@@ -0,0 +1,60 @@
+---
+title: "Alerting Policies (Workload Level)"
+keywords: 'KubeSphere, Kubernetes, Workload, Alerting, Policy, Notification'
+description: 'Learn how to set alerting policies for workloads.'
+linkTitle: "Alerting Policies (Workload Level)"
+weight: 10710
+---
+
+KubeSphere provides alerting policies for nodes and workloads. This tutorial demonstrates how to create alerting policies for workloads in a project. See [Alerting Policy (Node Level)](../../../cluster-administration/cluster-wide-alerting-and-notification/alerting-policy/) to learn how to configure alerting policies for nodes.
+
+## Prerequisites
+
+- You have enabled [KubeSphere Alerting](../../../pluggable-components/alerting/).
+- To receive alert notifications, you must configure a [notification channel](../../../cluster-administration/platform-settings/notification-management/configure-email/) beforehand.
+- You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+- You have workloads in this project. If they are not ready, see [Deploy and Access Bookinfo](../../../quick-start/deploy-bookinfo-to-k8s/) to create a sample app.
+
+## Create an Alerting Policy
+
+1. Log in to the console as `project-regular` and go to your project. Go to **Alerting Policies** under **Monitoring & Alerting**, then click **Create**.
+
+2. In the displayed dialog box, provide the basic information as follows. Click **Next** to continue.
+
+ - **Name**. A concise and clear name as its unique identifier, such as `alert-demo`.
+ - **Alias**. Help you distinguish alerting policies better.
+ - **Description**. A brief introduction to the alerting policy.
+ - **Threshold Duration (min)**. The status of the alerting policy becomes `Firing` when the duration of the condition configured in the alerting rule reaches the threshold.
+ - **Severity**. Allowed values include **Warning**, **Error** and **Critical**, providing an indication of how serious an alert is.
+
+3. On the **Rule Settings** tab, you can use the rule template or create a custom rule. To use the template, fill in the following fields.
+
+ - **Resource Type**. Select the resource type you want to monitor, such as **Deployment**, **StatefulSet**, and **DaemonSet**.
+ - **Monitoring Targets**. Depending on the resource type you select, the target can be different. You cannot see any target if you do not have any workload in the project.
+ - **Alerting Rule**. Define a rule for the alerting policy. These rules are based on Prometheus expressions and an alert will be triggered when conditions are met. You can monitor objects such as CPU and memory.
+
+ {{< notice note >}}
+
+ You can create a custom rule with PromQL by entering an expression in the **Monitoring Metrics** field (autocompletion supported). For more information, see [Querying Prometheus](https://prometheus.io/docs/prometheus/latest/querying/basics/).
+
+ {{</ notice >}}
+
+ Click **Next** to continue.
+
+4. On the **Message Settings** tab, enter the alert summary and message to be included in your notification, then click **Create**.
+
+5. An alerting policy will be **Inactive** when just created. If conditions in the rule expression are met, it reaches **Pending** first, and then turns to **Firing** if the conditions continue to be met within the given time range.
+
+## Edit an Alerting Policy
+
+To edit an alerting policy after it is created, on the **Alerting Policies** page, click the icon on the right.
+
+1. Click **Edit** from the drop-down menu and edit the alerting policy following the same steps as when you created it. Click **OK** on the **Message Settings** page to save it.
+
+2. Click **Delete** from the drop-down menu to delete an alerting policy.
+
+## View an Alerting Policy
+
+Click an alerting policy on the **Alerting Policies** page to see its details, including alerting rules and alerting history. You can also see the rule expression, which is based on the template you use when creating the alerting policy.
+
+Under **Alert Monitoring**, the **Alert Monitoring** chart shows the actual usage or amount of resources over time. **Alerting Message** displays the customized message you set in notifications.
diff --git a/content/en/docs/v3.4/project-user-guide/application-workloads/_index.md b/content/en/docs/v3.4/project-user-guide/application-workloads/_index.md
new file mode 100644
index 000000000..d73a9f85a
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application-workloads/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Application Workloads"
+weight: 10200
+
+_build:
+ render: false
+---
diff --git a/content/en/docs/v3.4/project-user-guide/application-workloads/container-image-settings.md b/content/en/docs/v3.4/project-user-guide/application-workloads/container-image-settings.md
new file mode 100644
index 000000000..10cad3968
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application-workloads/container-image-settings.md
@@ -0,0 +1,268 @@
+---
+title: "Pod Settings"
+keywords: 'KubeSphere, Kubernetes, image, workload, setting, container'
+description: 'Learn different properties on the dashboard in detail as you set Pods for your workload.'
+linkTitle: "Pod Settings"
+weight: 10280
+---
+
+When you create Deployments, StatefulSets, or DaemonSets, you need to specify a Pod. KubeSphere provides various options to customize workload configurations, such as health check probes, environment variables, and start commands. This page explains the different properties in **Pod Settings** in detail.
+
+{{< notice tip >}}
+
+You can enable **Edit YAML** in the upper-right corner to see corresponding values in the manifest file (YAML format) of properties on the dashboard.
+
+{{</ notice >}}
+
+## Pod Settings
+
+### Pod Replicas
+
+Set the number of replicated Pods by clicking the up or down icon on the right.
+
+1. Click the icon on the right of a DaemonSet and select an option from the menu to modify it.
+
+ - **Edit Information**: View and edit the basic information.
+ - **Edit YAML**: View, upload, download, or update the YAML file.
+ - **Re-create**: Re-create the DaemonSet.
+ - **Delete**: Delete the DaemonSet.
+
+2. Click the name of the DaemonSet and you can go to its details page.
+
+3. Click **More** to display the operations you can perform on this DaemonSet.
+
+ - **Roll Back**: Select the revision to roll back.
+ - **Edit Settings**: Configure update strategies, containers and volumes.
+ - **Edit YAML**: View, upload, download, or update the YAML file.
+ - **Re-create**: Re-create this DaemonSet.
+ - **Delete**: Delete the DaemonSet, and return to the DaemonSet list page.
+
+4. Click the **Resource Status** tab to view the port and Pod information of a DaemonSet.
+
+ - **Replica Status**: You cannot change the number of Pod replicas for a DaemonSet.
+ - **Pods**
+
+ - The Pod list provides detailed information of the Pod (status, node, Pod IP and resource usage).
+ - You can view the container information by clicking a Pod item.
+ - Click the container log icon to view output logs of the container.
+ - You can view the Pod details page by clicking the Pod name.
+
+### Revision records
+
+After the resource template of a workload is changed, a new revision record is generated and Pods are rescheduled for a version update. The latest 10 revisions are saved by default, and you can redeploy the workload based on any recorded revision.
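+
+If you prefer the command line, the same history is available through `kubectl`; a quick sketch (the namespace and DaemonSet name are placeholders):
+
+```bash
+# List the saved revisions of a DaemonSet
+kubectl -n demo-project rollout history daemonset demo-daemonset
+
+# Roll back to a specific revision
+kubectl -n demo-project rollout undo daemonset demo-daemonset --to-revision=2
+```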
+
+### Metadata
+
+Click the **Metadata** tab to view the labels and annotations of the DaemonSet.
+
+### Monitoring
+
+1. Click the **Monitoring** tab to view the CPU usage, memory usage, outbound traffic, and inbound traffic of the DaemonSet.
+
+2. Click the drop-down menu in the upper-right corner to customize the time range and sampling interval.
+
+3. Click the start or stop icon in the upper-right corner to start or stop automatic data refreshing.
+
+4. Click the refresh icon in the upper-right corner to manually refresh the data.
+
+### Environment variables
+
+Click the **Environment Variables** tab to view the environment variables of the DaemonSet.
+
+### Events
+
+Click the **Events** tab to view the events of the DaemonSet.
+
+
diff --git a/content/en/docs/v3.4/project-user-guide/application-workloads/deployments.md b/content/en/docs/v3.4/project-user-guide/application-workloads/deployments.md
new file mode 100644
index 000000000..5888ced36
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application-workloads/deployments.md
@@ -0,0 +1,139 @@
+---
+title: "Deployments"
+keywords: 'KubeSphere, Kubernetes, Deployments, workload'
+description: 'Learn basic concepts of Deployments and how to create Deployments in KubeSphere.'
+linkTitle: "Deployments"
+
+weight: 10210
+---
+
+A Deployment controller provides declarative updates for Pods and ReplicaSets. You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. As a Deployment runs a number of replicas of your application, it automatically replaces instances that go down or malfunction. This is how Deployments make sure app instances are available to handle user requests.
+
+For more information, see the [official documentation of Kubernetes](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/).
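+
+If you enable **Edit YAML** when creating a Deployment, you will see a manifest similar to the following minimal sketch (the names and image are examples):
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: demo-deployment
+spec:
+  replicas: 3                # number of replicated Pods
+  selector:
+    matchLabels:
+      app: demo
+  template:
+    metadata:
+      labels:
+        app: demo
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.25
+```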
+
+## Prerequisites
+
+You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Create a Deployment
+
+### Step 1: Open the dashboard
+
+Log in to the console as `project-regular`. Go to **Application Workloads** of a project, select **Workloads**, and click **Create** under the tab **Deployments**.
+
+### Step 2: Enter basic information
+
+Specify a name for the Deployment (for example, `demo-deployment`), select a project, and click **Next**.
+
+### Step 3: Set a Pod
+
+1. Before you set an image, define the number of replicated Pods in **Pod Replicas** by clicking the up or down icon.
+
+1. Click the icon on the right of a Deployment and select options from the menu to modify it.
+
+ - **Edit Information**: View and edit the basic information.
+ - **Edit YAML**: View, upload, download, or update the YAML file.
+ - **Re-create**: Re-create the Deployment.
+ - **Delete**: Delete the Deployment.
+
+2. Click the name of the Deployment and you can go to its details page.
+
+3. Click **More** to display the operations you can perform on this Deployment.
+
+ - **Roll Back**: Select the revision to roll back.
+ - **Edit Autoscaling**: Autoscale the replicas according to CPU and memory usage. If both CPU and memory are specified, replicas are added or deleted if any of the conditions is met.
+ - **Edit Settings**: Configure update strategies, containers and volumes.
+ - **Edit YAML**: View, upload, download, or update the YAML file.
+ - **Re-create**: Re-create this Deployment.
+ - **Delete**: Delete the Deployment, and return to the Deployment list page.
+
+4. Click the **Resource Status** tab to view the port and Pod information of the Deployment.
+
+   - **Replica Status**: View and adjust the number of Pod replicas.
+
+3. Click the start or stop icon in the upper-right corner to start or stop automatic data refreshing.
+
+4. Click the refresh icon in the upper-right corner to manually refresh the data.
+
+### Environment variables
+
+Click the **Environment Variables** tab to view the environment variables of the Deployment.
+
+### Events
+
+Click the **Events** tab to view the events of the Deployment.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/project-user-guide/application-workloads/horizontal-pod-autoscaling.md b/content/en/docs/v3.4/project-user-guide/application-workloads/horizontal-pod-autoscaling.md
new file mode 100755
index 000000000..809541228
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application-workloads/horizontal-pod-autoscaling.md
@@ -0,0 +1,104 @@
+---
+title: "Kubernetes HPA (Horizontal Pod Autoscaling) on KubeSphere"
+keywords: "Horizontal, Pod, Autoscaling, Autoscaler"
+description: "How to configure Kubernetes Horizontal Pod Autoscaling on KubeSphere."
+weight: 10290
+
+---
+
+This document describes how to configure Horizontal Pod Autoscaling (HPA) on KubeSphere.
+
+The Kubernetes HPA feature automatically adjusts the number of Pods to maintain average resource usage (CPU and memory) of Pods around preset values. For details about how HPA functions, see the [official Kubernetes document](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/).
+
+This document uses HPA based on CPU usage as an example. Operations for HPA based on memory usage are similar.
+
+## Prerequisites
+
+- You need to [enable the Metrics Server](../../../pluggable-components/metrics-server/).
+- You need to create a workspace, a project and a user (for example, `project-regular`). `project-regular` must be invited to the project and assigned the `operator` role. For more information, see [Create Workspaces, Projects, Users and Roles](/docs/v3.4/quick-start/create-workspace-and-project/).
+
+## Create a Service
+
+1. Log in to the KubeSphere web console as `project-regular` and go to your project.
+
+2. Choose **Services** in **Application Workloads** on the left navigation bar and click **Create** on the right.
+
+3. In the **Create Service** dialog box, click **Stateless Service**.
+
+4. Set the Service name (for example, `hpa`) and click **Next**.
+
+5. Click **Add Container**, set **Image** to `mirrorgooglecontainers/hpa-example` and click **Use Default Ports**.
+
+6. Set the CPU request (for example, 0.15 cores) for each container, click **√**, and click **Next**.
+
+ {{< notice note >}}
+
+ * To use HPA based on CPU usage, you must set the CPU request for each container, which is the minimum CPU resource reserved for each container (for details, see the [official Kubernetes document](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/)). The HPA feature compares the average Pod CPU usage with a target percentage of the average Pod CPU request.
+ * For HPA based on memory usage, you do not need to configure the memory request.
+
+   {{</ notice >}}
+
+7. Click **Next** on the **Storage Settings** tab and click **Create** on the **Advanced Settings** tab.
+
+## Configure Kubernetes HPA
+
+1. Select **Deployments** in **Workloads** on the left navigation bar and click the HPA Deployment (for example, hpa-v1) on the right.
+
+2. Click **More** and select **Edit Autoscaling** from the drop-down menu.
+
+3. In the **Horizontal Pod Autoscaling** dialog box, configure the HPA parameters and click **OK**.
+
+ * **Target CPU Usage (%)**: Target percentage of the average Pod CPU request.
+ * **Target Memory Usage (MiB)**: Target average Pod memory usage in MiB.
+ * **Minimum Replicas**: Minimum number of Pods.
+ * **Maximum Replicas**: Maximum number of Pods.
+
+ In this example, **Target CPU Usage (%)** is set to `60`, **Minimum Replicas** is set to `1`, and **Maximum Replicas** is set to `10`.
+
+ {{< notice note >}}
+
+ Ensure that the cluster can provide sufficient resources for all Pods when the number of Pods reaches the maximum. Otherwise, the creation of some Pods will fail.
+
+   {{</ notice >}}
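+
+   These settings roughly correspond to a HorizontalPodAutoscaler object. A sketch of an equivalent manifest, assuming the Deployment is named `hpa-v1` (older Kubernetes versions may require `autoscaling/v2beta2`):
+
+   ```yaml
+   apiVersion: autoscaling/v2
+   kind: HorizontalPodAutoscaler
+   metadata:
+     name: hpa-v1
+   spec:
+     scaleTargetRef:
+       apiVersion: apps/v1
+       kind: Deployment
+       name: hpa-v1
+     minReplicas: 1
+     maxReplicas: 10
+     metrics:
+     - type: Resource
+       resource:
+         name: cpu
+         target:
+           type: Utilization
+           averageUtilization: 60   # target percentage of the CPU request
+   ```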
+
+## Verify HPA
+
+This section uses a Deployment that sends requests to the HPA Service to verify that HPA automatically adjusts the number of Pods to meet the resource usage target.
+
+### Create a load generator Deployment
+
+1. Select **Workloads** in **Application Workloads** on the left navigation bar and click **Create** on the right.
+
+2. In the **Create Deployment** dialog box, set the Deployment name (for example, `load-generator`) and click **Next**.
+
+3. Click **Add Container** and set **Image** to `busybox`.
+
+4. Scroll down in the dialog box, select **Start Command**, and set **Command** to `sh,-c` and **Parameters** to `while true; do wget -q -O- http://hpa; done` so that the container continuously sends requests to the `hpa` Service.
+
+### Delete the load generator Deployment
+
+Click the icon on the right of the load generator Deployment (for example, load-generator-v1), and choose **Delete** from the drop-down list. After the load-generator Deployment is deleted, check the status of the HPA Deployment again. The number of Pods decreases to the minimum.
+
+{{< notice note >}}
+
+The system may require a few minutes to adjust the number of Pods and collect data.
+
+{{</ notice >}}
+
+## Edit HPA Configuration
+
+You can repeat the steps in [Configure Kubernetes HPA](#configure-kubernetes-hpa) to edit the HPA configuration.
+
+## Cancel HPA
+
+1. Choose **Workloads** in **Application Workloads** on the left navigation bar and click the HPA Deployment (for example, hpa-v1) on the right.
+
+2. Click the icon on the right of **Autoscaling** and choose **Cancel** from the drop-down list.
+
+
diff --git a/content/en/docs/v3.4/project-user-guide/application-workloads/jobs.md b/content/en/docs/v3.4/project-user-guide/application-workloads/jobs.md
new file mode 100644
index 000000000..95a12240e
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application-workloads/jobs.md
@@ -0,0 +1,162 @@
+---
+title: "Jobs"
+keywords: "KubeSphere, Kubernetes, Docker, Jobs"
+description: "Learn basic concepts of Jobs and how to create Jobs on KubeSphere."
+linkTitle: "Jobs"
+
+weight: 10250
+---
+
+A Job creates one or more Pods and ensures that a specified number of them successfully terminates. As Pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the task (namely, Job) is complete. Deleting a Job will clean up the Pods it created.
+
+A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example, due to a node hardware failure or a node reboot). You can also use a Job to run multiple Pods in parallel.
+
+The following example demonstrates specific steps of creating a Job (computing π to 2000 decimal places) on KubeSphere.
+
+## Prerequisites
+
+You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Create a Job
+
+### Step 1: Open the dashboard
+
+Log in to the console as `project-regular`. Go to **Jobs** under **Application Workloads** and click **Create**.
+
+### Step 2: Enter basic information
+
+Enter the basic information. The following describes the parameters:
+
+- **Name**: The name of the Job, which is also the unique identifier.
+- **Alias**: The alias name of the Job, making resources easier to identify.
+- **Description**: The description of the Job, which gives a brief introduction of the Job.
+
+### Step 3: Strategy settings (optional)
+
+You can set the values in this step or click **Next** to use the default values. Refer to the table below for detailed explanations of each field.
+
+| Name | Definition | Description |
+| ----------------------- | ---------------------------- | ------------------------------------------------------------ |
+| Maximum Retries | `spec.backoffLimit` | It specifies the maximum number of retries before this Job is marked as failed. It defaults to 6. |
+| Complete Pods | `spec.completions` | It specifies the desired number of successfully finished Pods the Job should be run with. Setting it to nil means that the success of any Pod signals the success of all Pods, and allows parallelism to have any positive value. Setting it to 1 means that parallelism is limited to 1 and the success of that Pod signals the success of the Job. For more information, see [Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/). |
+| Parallel Pods | `spec.parallelism` | It specifies the maximum desired number of Pods the Job should run at any given time. The actual number of Pods running in a steady state will be less than this number when the work left to do is less than max parallelism ((`.spec.completions - .status.successful`) < `.spec.parallelism`). For more information, see [Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/). |
+| Maximum Duration (s) | `spec.activeDeadlineSeconds` | It specifies the duration in seconds relative to the startTime that the Job may be active before the system tries to terminate it; the value must be a positive integer. |
+
+### Step 4: Set a Pod
+
+1. Select **Re-create Pod** for **Restart Policy**. You can only specify **Re-create Pod** or **Restart container** for **Restart Policy** when the Job is not completed:
+
+ - If **Restart Policy** is set to **Re-create Pod**, the Job creates a new Pod when the Pod fails, and the failed Pod does not disappear.
+
+ - If **Restart Policy** is set to **Restart container**, the Job will internally restart the container when the Pod fails, instead of creating a new Pod.
+
+2. Click **Add Container** which directs you to the **Add Container** page. Enter `perl` in the image search box and press **Enter**.
+
+3. On the same page, scroll down to **Start Command**. Enter the following command, which computes pi to 2,000 decimal places and prints it, in the box. Click **√** in the lower-right corner and select **Next** to continue.
+
+ ```bash
+ perl,-Mbignum=bpi,-wle,print bpi(2000)
+ ```
+
+   {{< notice note >}}For more information about setting images, see [Pod Settings](../container-image-settings/).{{</ notice >}}
+
+### Step 5: Inspect the Job manifest (optional)
+
+1. Enable **Edit YAML** in the upper-right corner to display the manifest file of the Job. You can see all the values are set based on what you specified in the previous steps.
+
+ ```yaml
+ apiVersion: batch/v1
+ kind: Job
+ metadata:
+ namespace: demo-project
+ labels:
+ app: job-test-1
+ name: job-test-1
+ annotations:
+ kubesphere.io/alias-name: Test
+ kubesphere.io/description: A job test
+ spec:
+ template:
+ metadata:
+ labels:
+ app: job-test-1
+ spec:
+ containers:
+ - name: container-4rwiyb
+ imagePullPolicy: IfNotPresent
+ image: perl
+ command:
+ - perl
+ - '-Mbignum=bpi'
+ - '-wle'
+ - print bpi(2000)
+ restartPolicy: Never
+ serviceAccount: default
+ initContainers: []
+ volumes: []
+ imagePullSecrets: null
+ backoffLimit: 5
+ completions: 4
+ parallelism: 2
+ activeDeadlineSeconds: 300
+ ```
+
+2. You can make adjustments in the manifest directly and click **Create**, or disable **Edit YAML** to go back to the **Create** page.
+
+   {{< notice note >}}You can skip **Storage Settings** and **Advanced Settings** for this tutorial. For more information, see [Mount volumes](../deployments/#step-4-mount-volumes) and [Configure advanced settings](../deployments/#step-5-configure-advanced-settings).{{</ notice >}}
+
+### Step 6: Check the result
+
+1. In the final step of **Advanced Settings**, click **Create** to finish. A new item will be added to the Job list if the creation is successful.
+
+2. Click this Job and go to **Job Records** where you can see the information of each execution record. There are four completed Pods since **Completions** was set to `4` in Step 3.
+
+ {{< notice tip >}}
+If the Job fails, you can rerun it, and the reason for the failure is displayed under **Message**.
+   {{</ notice >}}
+
+3. In **Resource Status**, you can inspect the Pod status. Two Pods were created each time as **Parallel Pods** was set to 2. Click the icon on the right, and click the refresh icon to refresh the execution records.
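+
+If you prefer the command line, you can also watch the Job with `kubectl` (namespace and names taken from the sample manifest above):
+
+```bash
+kubectl -n demo-project get job job-test-1                 # COMPLETIONS should reach 4/4
+kubectl -n demo-project get pods -l app=job-test-1         # Pods created by the Job
+kubectl -n demo-project logs -l app=job-test-1 --tail=1    # last line of pi output from each Pod
+```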
+
+### Resource status
+
+1. Click the **Resource Status** tab to view the Pods of the Job.
+
+2. Click the refresh icon to refresh the Pod information, and click the expand or collapse icon to display or hide the containers in each Pod.
+
+### Metadata
+
+Click the **Metadata** tab to view the labels and annotations of the Job.
+
+### Environment variables
+
+Click the **Environment Variables** tab to view the environment variables of the Job.
+
+### Events
+
+Click the **Events** tab to view the events of the Job.
diff --git a/content/en/docs/v3.4/project-user-guide/application-workloads/routes.md b/content/en/docs/v3.4/project-user-guide/application-workloads/routes.md
new file mode 100644
index 000000000..9ec11b125
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application-workloads/routes.md
@@ -0,0 +1,133 @@
+---
+title: "Routes"
+keywords: "KubeSphere, Kubernetes, Route, Ingress"
+description: "Learn basic concepts of Routes (i.e. Ingress) and how to create Routes in KubeSphere."
+weight: 10270
+---
+
+This document describes how to create, use, and edit a Route on KubeSphere.
+
+A Route on KubeSphere is the same as an [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress) on Kubernetes. You can use a Route and a single IP address to aggregate and expose multiple Services.
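+
+Because a Route is stored as a standard Ingress object, its manifest follows the usual Ingress format. A minimal sketch, assuming a Service named `demo-service` listening on port `8080` and a placeholder domain:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: demo-route
+spec:
+  rules:
+  - host: demo.example.com        # placeholder domain name
+    http:
+      paths:
+      - path: /
+        pathType: Prefix
+        backend:
+          service:
+            name: demo-service    # the Service to expose
+            port:
+              number: 8080
+```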
+
+## Prerequisites
+
+- You need to create a workspace, a project and two users (for example, `project-admin` and `project-regular`). In the project, the role of `project-admin` must be `admin` and that of `project-regular` must be `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](/docs/v3.4/quick-start/create-workspace-and-project/).
+- If the Route is to be accessed in HTTPS mode, you need to [create a Secret](/docs/v3.4/project-user-guide/configuration/secrets/) that contains the `tls.crt` (TLS certificate) and `tls.key` (TLS private key) keys used for encryption.
+- You need to [create at least one Service](/docs/v3.4/project-user-guide/application-workloads/services/). This document uses a demo Service as an example, which returns the Pod name to external requests.
+
+## Configure the Route Access Method
+
+1. Log in to the KubeSphere web console as `project-admin` and go to your project.
+
+2. Select **Gateway Settings** in **Project Settings** on the left navigation bar and click **Enable Gateway** on the right.
+
+3. In the displayed dialog box, set **Access Mode** to **NodePort** or **LoadBalancer**, and click **OK**.
+
+ {{< notice note >}}
+
+ If **Access Mode** is set to **LoadBalancer**, you may need to enable the load balancer plugin in your environment according to the plugin user guide.
+
+   {{</ notice >}}
+
+## Create a Route
+
+### Step 1: Configure basic information
+
+1. Log out of the KubeSphere web console, log back in as `project-regular`, and go to the same project.
+
+2. Choose **Routes** in **Application Workloads** on the left navigation bar and click **Create** on the right.
+
+3. On the **Basic Information** tab, configure the basic information about the Route and click **Next**.
+ * **Name**: Name of the Route, which is used as a unique identifier.
+ * **Alias**: Alias of the Route.
+ * **Description**: Description of the Route.
+
+### Step 2: Configure routing rules
+
+1. On the **Routing Rules** tab, click **Add Routing Rule**.
+
+2. Select a mode, configure routing rules, click **√**, and click **Next**.
+
+   * **Auto Generate**: KubeSphere automatically generates a domain name for the Route.
+
+1. Click the icon on the right of a Service to further edit it, such as its metadata (excluding **Name**), YAML, port, and Internet access.
+
+ - **Edit Information**: View and edit the basic information.
+ - **Edit YAML**: View, upload, download, or update the YAML file.
+ - **Edit Service**: View the access type and set selectors and ports.
+ - **Edit External Access**: Edit external access method for the Service.
+ - **Delete**: When you delete a Service, associated resources will be displayed. If you check them, they will be deleted together with the Service.
+
+2. Click the name of the Service and you can go to its details page.
+
+ - Click **More** to expand the drop-down menu which is the same as the one in the Service list.
+ - The Pod list provides detailed information of the Pod (status, node, Pod IP and resource usage).
+ - You can view the container information by clicking a Pod item.
+ - Click the container log icon to view output logs of the container.
+ - You can view the Pod details page by clicking the Pod name.
+
+### Resource status
+
+1. Click the **Resource Status** tab to view information about the Service ports, workloads, and Pods.
+
+2. In the **Pods** area, click the refresh icon to refresh the Pod information, and click the expand or collapse icon to display or hide the containers in each Pod.
+
+### Metadata
+
+Click the **Metadata** tab to view the labels and annotations of the Service.
+
+### Events
+
+Click the **Events** tab to view the events of the Service.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/project-user-guide/application-workloads/statefulsets.md b/content/en/docs/v3.4/project-user-guide/application-workloads/statefulsets.md
new file mode 100644
index 000000000..4af61d6fa
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application-workloads/statefulsets.md
@@ -0,0 +1,148 @@
+---
+title: "Kubernetes StatefulSet in KubeSphere"
+keywords: 'KubeSphere, Kubernetes, StatefulSets, Dashboard, Service'
+description: 'Learn basic concepts of StatefulSets and how to create StatefulSets on KubeSphere.'
+linkTitle: "StatefulSets"
+weight: 10220
+---
+
+As a workload API object, a Kubernetes StatefulSet is used to manage stateful applications. It manages the deployment and scaling of a set of Pods, and guarantees the ordering and uniqueness of these Pods.
+
+Like a Deployment, a StatefulSet manages Pods that are based on an identical container specification. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. These Pods are created from the same specification, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
+
+If you want to use storage volumes to provide persistence for your workload, you can use a StatefulSet as part of the solution. Although individual Pods in a StatefulSet are susceptible to failure, the persistent Pod identifiers make it easier to match existing volumes to the new Pods that replace any that have failed.
+
+StatefulSets are valuable for applications that require one or more of the following.
+
+- Stable, unique network identifiers.
+- Stable, persistent storage.
+- Ordered, graceful deployment and scaling.
+- Ordered, automated rolling updates.
+
+For more information, see the [official documentation of Kubernetes](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/).
+
+## Prerequisites
+
+You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Create a Kubernetes StatefulSet
+
+In KubeSphere, a **Headless** service is also created when you create a StatefulSet. You can find the headless service in [Services](../services/) under **Application Workloads** in a project.
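+
+A minimal sketch of the pair that is created, assuming the StatefulSet is named `demo-stateful` (the image and port are examples):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: demo-stateful-headless
+spec:
+  clusterIP: None                 # headless: DNS resolves directly to Pod addresses
+  selector:
+    app: demo-stateful
+  ports:
+  - port: 80
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: demo-stateful
+spec:
+  serviceName: demo-stateful-headless
+  replicas: 2
+  selector:
+    matchLabels:
+      app: demo-stateful
+  template:
+    metadata:
+      labels:
+        app: demo-stateful
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.25
+```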
+
+### Step 1: Open the dashboard
+
+Log in to the console as `project-regular`. Go to **Application Workloads** of a project, select **Workloads**, and click **Create** under the **StatefulSets** tab.
+
+### Step 2: Enter basic information
+
+Specify a name for the StatefulSet (for example, `demo-stateful`), select a project, and click **Next**.
+
+### Step 3: Set a Pod
+
+1. Before you set an image, define the number of replicated Pods in **Pod Replicas** by clicking the up or down icon.
+
+1. Click the icon on the right of a StatefulSet and select options from the menu to modify it.
+
+ - **Edit Information**: View and edit the basic information.
+ - **Edit YAML**: View, upload, download, or update the YAML file.
+ - **Re-create**: Re-create the StatefulSet.
+ - **Delete**: Delete the StatefulSet.
+
+2. Click the name of the StatefulSet and you can go to its details page.
+
+3. Click **More** to display the operations you can perform on this StatefulSet.
+
+ - **Roll Back**: Select the revision to roll back.
+ - **Edit Service**: Set the port to expose the container image and the service port.
+ - **Edit Settings**: Configure update strategies, containers and volumes.
+ - **Edit YAML**: View, upload, download, or update the YAML file.
+ - **Re-create**: Re-create this StatefulSet.
+ - **Delete**: Delete the StatefulSet, and return to the StatefulSet list page.
+
+4. Click the **Resource Status** tab to view the port and Pod information of a StatefulSet.
+
+   - **Replica Status**: View and adjust the number of Pod replicas.
+
+3. Click the start or stop icon in the upper-right corner to start or stop automatic data refreshing.
+
+4. Click the refresh icon in the upper-right corner to manually refresh the data.
+
+### Environment variables
+
+Click the **Environment Variables** tab to view the environment variables of the StatefulSet.
+
+### Events
+
+Click the **Events** tab to view the events of the StatefulSet.
+
diff --git a/content/en/docs/v3.4/project-user-guide/application/_index.md b/content/en/docs/v3.4/project-user-guide/application/_index.md
new file mode 100644
index 000000000..7e0d6b2b6
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Applications"
+weight: 10100
+
+_build:
+ render: false
+---
diff --git a/content/en/docs/v3.4/project-user-guide/application/app-template.md b/content/en/docs/v3.4/project-user-guide/application/app-template.md
new file mode 100644
index 000000000..30958f0bc
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application/app-template.md
@@ -0,0 +1,33 @@
+---
+title: "App Templates"
+keywords: 'Kubernetes, Chart, Helm, KubeSphere, Application Template, Repository'
+description: 'Understand the concept of app templates and how they can help to deploy applications within enterprises.'
+linkTitle: "App Templates"
+weight: 10110
+---
+
+An app template serves as a way for users to upload, deliver, and manage apps. Generally, an app is composed of one or more Kubernetes workloads (for example, [Deployments](../../../project-user-guide/application-workloads/deployments/), [StatefulSets](../../../project-user-guide/application-workloads/statefulsets/) and [DaemonSets](../../../project-user-guide/application-workloads/daemonsets/)) and [Services](../../../project-user-guide/application-workloads/services/) based on how it functions and communicates with the external environment. Apps that are uploaded as app templates are built based on a [Helm](https://helm.sh/) package.
+
+## How App Templates Work
+
+You can deliver Helm charts to the public repository of KubeSphere or import a private app repository to offer app templates.
+
+The public repository, also known as the App Store on KubeSphere, is accessible to every tenant in a workspace. After [uploading the Helm chart of an app](../../../workspace-administration/upload-helm-based-application/), you can deploy your app to test its functions and submit it for review. Ultimately, you have the option to release it to the App Store after it is approved. For more information, see [Application Lifecycle Management](../../../application-store/app-lifecycle-management/).
+
+For a private repository, only users with required permissions are allowed to [add private repositories](../../../workspace-administration/app-repository/import-helm-repository/) in a workspace. Generally, a private repository is built on an object storage service, such as MinIO. After they are imported to KubeSphere, these private repositories serve as application pools that provide app templates.
+
+{{< notice note >}}
+
+Individual apps that are [uploaded as Helm charts](../../../workspace-administration/upload-helm-based-application/) to KubeSphere are displayed in the App Store together with built-in apps after they are approved and released. In addition, when you select app templates from private app repositories, you can also see **Current workspace** in the list, which stores these individual apps uploaded as Helm charts.
+
+{{</ notice >}}
+
+KubeSphere deploys app repository services based on [OpenPitrix](https://github.com/openpitrix/openpitrix) as a [pluggable component](../../../pluggable-components/app-store/).
+
+## Why App Templates
+
+App templates enable users to deploy and manage apps in a visualized way. Internally, they play an important role as shared resources (for example, databases, middleware, and operating systems) created by enterprises for coordination and cooperation within teams. Externally, app templates set industry standards for application building and delivery. Users can take advantage of app templates in different scenarios to meet their own needs through one-click deployment.
+
+In addition, as OpenPitrix is integrated into KubeSphere to provide application management across the entire lifecycle, the platform allows ISVs, developers, and regular users to all participate in the process. Backed by the multi-tenant system of KubeSphere, each tenant is only responsible for their own part, such as app uploading, app review, release, testing, and version management. Ultimately, enterprises can build their own App Store and enrich their application pools with their customized standards. As such, apps can also be delivered in a standardized fashion.
+
+For more information about how to use app templates, see [Deploy Apps from App Templates](../deploy-app-from-template/).
\ No newline at end of file
diff --git a/content/en/docs/v3.4/project-user-guide/application/compose-app.md b/content/en/docs/v3.4/project-user-guide/application/compose-app.md
new file mode 100644
index 000000000..5a7e7bb27
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application/compose-app.md
@@ -0,0 +1,96 @@
+---
+title: "Create a Microservices-based App"
+keywords: 'KubeSphere, Kubernetes, service mesh, microservices'
+description: 'Learn how to compose a microservice-based application from scratch.'
+linkTitle: "Create a Microservices-based App"
+weight: 10140
+---
+
+With each microservice handling a single part of the app's functionality, an app can be divided into different components. These components have their own responsibilities and limitations, independent of each other. In KubeSphere, this kind of app is called a **Composed App**, which can be built from newly created or existing Services.
+
+This tutorial demonstrates how to create the microservices-based app Bookinfo, which is composed of four Services, and how to set a custom domain name to access the app.
+
+## Prerequisites
+
+- You need to create a workspace, a project, and a user (`project-regular`) for this tutorial. The user needs to be invited to the project with the `operator` role. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+- `project-admin` needs to [set the project gateway](../../../project-administration/project-gateway/) so that `project-regular` can define a domain name when creating the app.
+
+## Create Microservices that Compose an App
+
+1. Log in to the web console of KubeSphere and navigate to **Apps** in **Application Workloads** of your project. On the **Composed Apps** tab, click **Create**.
+
+2. Set a name for the app (for example, `bookinfo`) and click **Next**.
+
+3. On the **Service Settings** page, you need to create microservices that compose the app. Click **Create Service** and select **Stateless Service**.
+
+4. Set a name for the Service (for example, `productpage`) and click **Next**.
+
+ {{< notice note >}}
+
+ You can create a Service on the dashboard directly or enable **Edit YAML** in the upper-right corner to edit the YAML file.
+
+   {{</ notice >}}
+
+5. Click **Add Container** under **Containers** and enter `kubesphere/examples-bookinfo-productpage-v1:1.13.0` in the search box to use the Docker Hub image.
+
+ {{< notice note >}}
+
+   You must press **Enter** on your keyboard after you enter the image name.
+
+   {{</ notice >}}
+
+6. Click **Use Default Ports**. For more information about image settings, see [Pod Settings](../../../project-user-guide/application-workloads/container-image-settings/). Click **√** in the lower-right corner and **Next** to continue.
+
+7. On the **Storage Settings** page, [add a volume](../../../project-user-guide/storage/volumes/) or click **Next** to continue.
+
+8. Click **Create** on the **Advanced Settings** page.
+
+9. Similarly, add the other three microservices for the app. Here is the image information:
+
+ | Service | Name | Image |
+ | --------- | --------- | ------------------------------------------------ |
+ | Stateless | `details` | `kubesphere/examples-bookinfo-details-v1:1.13.0` |
+ | Stateless | `reviews` | `kubesphere/examples-bookinfo-reviews-v1:1.13.0` |
+ | Stateless | `ratings` | `kubesphere/examples-bookinfo-ratings-v1:1.13.0` |
+
+10. When you finish adding microservices, click **Next**.
+
+11. On the **Route Settings** page, click **Add Routing Rule**. On the **Specify Domain** tab, set a domain name for your app (for example, `demo.bookinfo`) and select `HTTP` in the **Protocol** field. For `Paths`, select the Service `productpage` and port `9080`. Click **OK** to continue.
+
+ {{< notice note >}}
+
+The button **Add Routing Rule** is not visible if the project gateway is not set.
+
+{{</ notice >}}
+
+12. You can add more rules or click **Create** to finish the process.
+
+13. Wait for your app to reach the **Ready** status.
+
+
+## Access the App
+
+1. As you set a domain name for the app, you need to add an entry in the hosts (`/etc/hosts`) file. For example, add the IP address and hostname as below:
+
+ ```txt
+ 192.168.0.9 demo.bookinfo
+ ```
+
+ {{< notice note >}}
+
+ You must add your **own** IP address and hostname.
+
+   {{</ notice >}}
+
+2. In **Composed Apps**, click the app you just created.
+
+3. In **Resource Status**, click **Access Service** under **Routes** to access the app.
+
+ {{< notice note >}}
+
+ Make sure you open the port in your security group.
+
+   {{</ notice >}}
+
+4. Click **Normal user** and **Test user** respectively to see other **Services**.
+
diff --git a/content/en/docs/v3.4/project-user-guide/application/deploy-app-from-appstore.md b/content/en/docs/v3.4/project-user-guide/application/deploy-app-from-appstore.md
new file mode 100644
index 000000000..f9613b89c
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application/deploy-app-from-appstore.md
@@ -0,0 +1,62 @@
+---
+title: "Deploy Apps from the App Store"
+keywords: 'Kubernetes, Chart, Helm, KubeSphere, Application, App Store'
+description: 'Learn how to deploy an application from the App Store.'
+linkTitle: "Deploy Apps from the App Store"
+weight: 10130
+---
+
+The [App Store](../../../application-store/) is also the public app repository on the platform, which means every tenant on the platform can view the applications in the Store regardless of which workspace they belong to. The App Store contains 16 featured enterprise-ready containerized apps as well as apps released by tenants from different workspaces on the platform. Any authenticated user can deploy applications from the Store. This is different from private app repositories, which are only accessible to tenants in the workspace where the repositories are imported.
+
+This tutorial demonstrates how to quickly deploy [NGINX](https://www.nginx.com/) from the KubeSphere App Store powered by [OpenPitrix](https://github.com/openpitrix/openpitrix) and access its service through a NodePort.
+
+## Prerequisites
+
+- You have enabled [OpenPitrix (App Store)](../../../pluggable-components/app-store/).
+- You need to create a workspace, a project, and a user (`project-regular`) for this tutorial. The user must be invited to the project and granted the `operator` role. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Hands-on Lab
+
+### Step 1: Deploy NGINX from the App Store
+
+1. Log in to the web console of KubeSphere as `project-regular` and click **App Store** in the upper-left corner.
+
+ {{< notice note >}}
+
+ You can also go to **Apps** under **Application Workloads** in your project, click **Create**, and select **From App Store** to go to the App Store.
+
+   {{</ notice >}}
+
+2. Search for NGINX, click it, and click **Install** on the **App Information** page. Make sure you click **Agree** in the displayed **Deployment Agreement** dialog box.
+
+3. Set a name and select an app version, confirm the location where NGINX will be deployed, and click **Next**.
+
+4. In **App Settings**, specify the number of replicas to deploy for the app and enable Ingress based on your needs. When you finish, click **Install**.
+
+ {{< notice note >}}
+
+ To specify more values for NGINX, use the toggle to see the app’s manifest in YAML format and edit its configurations.
+
+   {{</ notice >}}
+
+5. Wait until NGINX is up and running.
+
+### Step 2: Access NGINX
+
+To access NGINX outside the cluster, you need to expose the app through a NodePort first.
+
+1. Go to **Services** in the created project and click the service name of NGINX.
+
+2. On the Service details page, click **More** and select **Edit External Access** from the drop-down menu.
+
+3. Select **NodePort** for **Access Method** and click **OK**. For more information, see [Project Gateway](../../../project-administration/project-gateway/).
+
+4. Under **Ports**, view the exposed port.
+
+5. Access NGINX through `http://<node IP>:<node port>`.
+
+1. Click the icon on the right of a ConfigMap and select one of the following operations from the drop-down list.
+
+ - **Edit Information**: View and edit the basic information.
+ - **Edit YAML**: View, upload, download, or update the YAML file.
+ - **Edit Settings**: Modify the key-value pair of the ConfigMap.
+ - **Delete**: Delete the ConfigMap.
+
+2. Click the name of the ConfigMap to go to its details page. Under the tab **Data**, you can see all the key-value pairs you have added for the ConfigMap.
+
+3. Click **More** to display the operations you can perform on this ConfigMap.
+
+ - **Edit YAML**: View, upload, download, or update the YAML file.
+ - **Edit Settings**: Modify the key-value pair of the ConfigMap.
+ - **Delete**: Delete the ConfigMap, and return to the list page.
+
+4. Click **Edit Information** to view and edit the basic information.
+
+
+## Use a ConfigMap
+
+When you create workloads, [Services](../../../project-user-guide/application-workloads/services/), [Jobs](../../../project-user-guide/application-workloads/jobs/) or [CronJobs](../../../project-user-guide/application-workloads/cronjobs/), you may need to add environment variables for containers. On the **Add Container** page, check **Environment Variables** and click **From configmap** to use a ConfigMap from the list.
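+
+In the manifest, such a reference becomes a `configMapKeyRef` entry. A minimal sketch, assuming a ConfigMap named `demo-configmap` with a key `demo.key`:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: configmap-env-demo
+spec:
+  containers:
+  - name: demo-container
+    image: busybox
+    command: ["sh", "-c", "echo $DEMO_KEY && sleep 3600"]
+    env:
+    - name: DEMO_KEY              # environment variable exposed to the container
+      valueFrom:
+        configMapKeyRef:
+          name: demo-configmap    # assumed ConfigMap name
+          key: demo.key           # assumed key in the ConfigMap
+```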
diff --git a/content/en/docs/v3.4/project-user-guide/configuration/image-registry.md b/content/en/docs/v3.4/project-user-guide/configuration/image-registry.md
new file mode 100644
index 000000000..3491c1273
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/configuration/image-registry.md
@@ -0,0 +1,104 @@
+---
+title: "Image Registries"
+keywords: 'KubeSphere, Kubernetes, docker, Secrets'
+description: 'Learn how to create an image registry on KubeSphere.'
+linkTitle: "Image Registries"
+weight: 10430
+---
+
+A Docker image is a read-only template that can be used to deploy container services. Each image has a unique identifier (for example, image name:tag). For example, an image can contain a complete package of an Ubuntu operating system environment with only Apache and a few applications installed. An image registry is used to store and distribute Docker images.
+
+This tutorial demonstrates how to create Secrets for different image registries.
+
+## Prerequisites
+
+You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Create a Secret
+
+When you create workloads, [Services](../../../project-user-guide/application-workloads/services/), [Jobs](../../../project-user-guide/application-workloads/jobs/), or [CronJobs](../../../project-user-guide/application-workloads/cronjobs/), you can select images from your private registry in addition to the public registry. To use images from your private registry, you must create a Secret for it so that the registry can be integrated into KubeSphere.
+
+### Step 1: Open the dashboard
+
+Log in to the web console of KubeSphere as `project-regular`. Go to **Configuration** of a project, select **Secrets** and click **Create**.
+
+### Step 2: Enter basic information
+
+Specify a name for the Secret (for example, `demo-registry-secret`) and click **Next** to continue.
+
+{{< notice tip >}}
+
+You can see the Secret's manifest file in YAML format by enabling **Edit YAML** in the upper-right corner. KubeSphere allows you to edit the manifest file directly to create a Secret. Alternatively, you can follow the steps below to create a Secret via the dashboard.
+
+{{</ notice >}}
+
+### Step 3: Specify image registry information
+
+Select **Image registry information** for **Type**. To use images from your private registry as you create application workloads, you need to specify the following fields.
+
+- **Registry Address**. The address of the image registry that stores images for you to use when creating application workloads.
+- **Username**. The account name you use to log in to the registry.
+- **Password**. The password you use to log in to the registry.
+- **Email** (optional). Your email address.
+
+#### Add the Docker Hub registry
+
+1. Before you add your image registry in [Docker Hub](https://hub.docker.com/), make sure you have an available Docker Hub account. On the **Secret Settings** page, enter `docker.io` for **Registry Address** and enter your Docker ID and password for **Username** and **Password**. Click **Validate** to check whether the address is available.
+
+2. Click **Create**. Later, the Secret is displayed on the **Secrets** page. For more information about how to edit the Secret after you create it, see [Check Secret Details](../../../project-user-guide/configuration/secrets/#check-secret-details).
+
+#### Add the Harbor image registry
+
+[Harbor](https://goharbor.io/) is an open-source trusted cloud-native registry project that stores, signs, and scans content. Harbor extends the open-source Docker Distribution by adding the functionalities usually required by users such as security, identity and management. Harbor uses HTTP and HTTPS to serve registry requests.
+
+**HTTP**
+
+1. You need to modify the Docker configuration for all nodes within the cluster. For example, if there is an external Harbor registry whose address is `http://192.168.0.99`, you need to add the flag `--insecure-registry=192.168.0.99` to `/etc/systemd/system/docker.service.d/docker-options.conf`:
+
+ ```bash
+ [Service]
+ Environment="DOCKER_OPTS=--registry-mirror=https://registry.docker-cn.com --insecure-registry=10.233.0.0/18 --data-root=/var/lib/docker --log-opt max-size=50m --log-opt max-file=5 \
+ --insecure-registry=192.168.0.99"
+ ```
+
+ {{< notice note >}}
+
+ - Replace the image registry address with your own registry address.
+
+ - `Environment` represents [dockerd options](https://docs.docker.com/engine/reference/commandline/dockerd/).
+
+ - `--insecure-registry` is required by the Docker daemon for the communication with an insecure registry. Refer to [Docker documentation](https://docs.docker.com/engine/reference/commandline/dockerd/#insecure-registries) for its syntax.
+
+   {{</ notice >}}
+
+2. After that, reload the configuration file and restart Docker:
+
+ ```bash
+ sudo systemctl daemon-reload
+ ```
+
+ ```bash
+ sudo systemctl restart docker
+ ```
+
+3. Go back to the **Data Settings** page and select **Image registry information** for **Type**. Enter your Harbor IP address for **Registry Address** and enter the username and password.
+
+ {{< notice note >}}
+
+ If you want to use the domain name instead of the IP address with Harbor, you may need to configure the CoreDNS and nodelocaldns within the cluster.
+
+   {{</ notice >}}
+
+4. Click **Create**. Later, the Secret is displayed on the **Secrets** page. For more information about how to edit the Secret after you create it, see [Check Secret Details](../../../project-user-guide/configuration/secrets/#check-secret-details).
+
+**HTTPS**
+
+For the integration of the HTTPS-based Harbor registry, refer to [Harbor Documentation](https://goharbor.io/docs/1.10/install-config/configure-https/). Make sure you use `docker login` to connect to your Harbor registry.
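+
+For example, assuming your Harbor registry is served at `harbor.example.com` (a placeholder domain):
+
+```bash
+# Log in with your Harbor username; you will be prompted for the password
+docker login harbor.example.com -u <username>
+```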
+
+## Use an Image Registry
+
+When you set images, you can select a private image registry if its Secret has been created in advance. For example, click the arrow on the **Add Container** page to expand the registry list when you create a [Deployment](../../../project-user-guide/application-workloads/deployments/). After you choose the image registry, enter the image name and tag to use the image.
+
+If you use YAML to create a workload and need to use a private image registry, you need to manually add `kubesphere.io/imagepullsecrets` to `annotations` in your local YAML file, and enter the key-value pair in JSON format, where `key` must be the name of the container, and `value` must be the name of the secret, as shown in the following sample.
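+
+A minimal sketch of such a manifest follows; the container name, Secret name, and image address are examples:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: demo-deployment
+  annotations:
+    kubesphere.io/imagepullsecrets: '{"container-demo": "demo-registry-secret"}'
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: demo
+  template:
+    metadata:
+      labels:
+        app: demo
+    spec:
+      containers:
+      - name: container-demo                       # must match the key in the annotation
+        image: 192.168.0.99/library/nginx:latest   # image from the private registry
+```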
+
+
\ No newline at end of file
diff --git a/content/en/docs/v3.4/project-user-guide/configuration/secrets.md b/content/en/docs/v3.4/project-user-guide/configuration/secrets.md
new file mode 100644
index 000000000..d7606edc0
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/configuration/secrets.md
@@ -0,0 +1,121 @@
+---
+title: "Kubernetes Secrets in KubeSphere"
+keywords: 'KubeSphere, Kubernetes, Secrets'
+description: 'Learn how to create a Secret on KubeSphere.'
+linkTitle: "Secrets"
+weight: 10410
+---
+
+A Kubernetes [Secret](https://kubernetes.io/docs/concepts/configuration/secret/) is used to store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. To use a Secret, a Pod needs to reference it in one of [the following ways](https://kubernetes.io/docs/concepts/configuration/secret/#overview-of-secrets).
+
+- As a file in a volume mounted and consumed by containerized applications running in a Pod.
+- As environment variables used by containers in a Pod.
+- As image registry credentials when images are pulled for the Pod by the kubelet.
+
+This tutorial demonstrates how to create a Secret in KubeSphere.
+
+## Prerequisites
+
+You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Create a Kubernetes Secret
+
+### Step 1: Open the dashboard
+
+Log in to the console as `project-regular`. Go to **Configuration** of a project, select **Secrets** and click **Create**.
+
+### Step 2: Enter basic information
+
+Specify a name for the Secret (for example, `demo-secret`) and click **Next** to continue.
+
+{{< notice tip >}}
+
+You can see the Secret's manifest file in YAML format by enabling **Edit YAML** in the upper-right corner. KubeSphere allows you to edit the manifest file directly to create a Secret. Alternatively, you can follow the steps below to create a Secret via the dashboard.
+
+{{</ notice >}}
+
+### Step 3: Set a Secret
+
+1. Under the tab **Data Settings**, you must select a Secret type. In KubeSphere, you can create the following Kubernetes Secret types, indicated by the `type` field.
+
+ {{< notice note >}}
+
+ For all Secret types, values for all keys under the field `data` in the manifest must be base64-encoded strings. After you specify values on the KubeSphere dashboard, KubeSphere converts them into corresponding base64 character values in the YAML file. For example, if you enter `password` and `hello123` for **Key** and **Value** respectively on the **Edit Data** page when you create the default type of Secret, the actual value displaying in the YAML file is `aGVsbG8xMjM=` (namely, `hello123` in base64 format), automatically created by KubeSphere.
+
+   {{</ notice >}}
+
+ - **Default**. The type of [Opaque](https://kubernetes.io/docs/concepts/configuration/secret/#opaque-secrets) in Kubernetes, which is also the default Secret type in Kubernetes. You can create arbitrary user-defined data for this type of Secret. Click **Add Data** to add key-value pairs for it.
+
+ - **TLS information**. The type of [kubernetes.io/tls](https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets) in Kubernetes, which is used to store a certificate and its associated key that are typically used for TLS, such as TLS termination of Ingress resources. You must specify **Credential** and **Private Key** for it, indicated by `tls.crt` and `tls.key` in the YAML file respectively.
+
+ - **Image registry information**. The type of [kubernetes.io/dockerconfigjson](https://kubernetes.io/docs/concepts/configuration/secret/#docker-config-secrets) in Kubernetes, which is used to store the credentials for accessing a Docker registry for images. For more information, see [Image Registries](../image-registry/).
+
+ - **Username and password**. The type of [kubernetes.io/basic-auth](https://kubernetes.io/docs/concepts/configuration/secret/#basic-authentication-secret) in Kubernetes, which is used to store credentials needed for basic authentication. You must specify **Username** and **Password** for it, indicated by `username` and `password` in the YAML file respectively.
+
+2. For this tutorial, select the default type of Secret. Click **Add Data** and enter the **Key** (`MYSQL_ROOT_PASSWORD`) and **Value** (`123456`) to specify a Secret for MySQL.
+
+3. Click **√** in the lower-right corner to confirm. You can continue to add key-value pairs to the Secret or click **Create** to finish the creation. For more information about how to use the Secret, see [Compose and Deploy WordPress](../../../quick-start/wordpress-deployment/#task-3-create-an-application).
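+
+To verify the conversion described in the note above, you can reproduce it with the `base64` tool on any Linux host:
+
+```bash
+echo -n hello123 | base64          # prints aGVsbG8xMjM=
+echo -n aGVsbG8xMjM= | base64 -d   # prints hello123
+```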
+
+## Check Secret Details
+
+1. After a Secret is created, it will be displayed in the list. You can click the icon on the right and select an operation from the menu to modify it.
+
+ - **Edit Information**: View and edit the basic information.
+ - **Edit YAML**: View, upload, download, or update the YAML file.
+ - **Edit Settings**: Modify the key-value pair of the Secret.
+ - **Delete**: Delete the Secret.
+
+2. Click the name of the Secret and you can go to its details page. Under the tab **Data**, you can see all the key-value pairs you have added for the Secret.
+
+ {{< notice note >}}
+
+As mentioned above, KubeSphere automatically converts the value of a key into its corresponding base64 character value. To see the actual decoded value, click the eye icon.
+
+Drag and drop an item into the target group. To add a new group, click **Add Monitoring Group**. If you want to change the position of a group, hover over it and click the up or down arrow on the right.
+
+{{< notice note >}}
+
+The position of a group on the right is consistent with the position of its charts in the middle. In other words, if you change the order of groups, the positions of their respective charts change accordingly.
+
+{{</ notice >}}
+
+## Dashboard Templates
+
+Find and share dashboard templates in [Monitoring Dashboards Gallery](https://github.com/kubesphere/monitoring-dashboard/tree/master/contrib/gallery). It is a place for KubeSphere community users to contribute their masterpieces.
diff --git a/content/en/docs/v3.4/project-user-guide/custom-application-monitoring/visualization/panel.md b/content/en/docs/v3.4/project-user-guide/custom-application-monitoring/visualization/panel.md
new file mode 100644
index 000000000..1dd9703d8
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/custom-application-monitoring/visualization/panel.md
@@ -0,0 +1,34 @@
+---
+title: "Charts"
+keywords: 'monitoring, Prometheus, Prometheus Operator'
+description: 'Explore dashboard properties and chart metrics.'
+linkTitle: "Charts"
+weight: 10816
+---
+
+KubeSphere currently supports two kinds of charts: text charts and graphs.
+
+## Text Chart
+
+A text chart is preferable for displaying a single metric value. The editing window for the text chart is composed of two parts. The upper part displays the real-time metric value, and the lower part is for editing. You can enter a PromQL expression to fetch a single metric value.
+
+- **Chart Name**: The name of the text chart.
+- **Unit**: The metric data unit.
+- **Decimal Places**: The number of decimal places to display. It accepts an integer.
+- **Monitoring Metric**: Specify a monitoring metric from the drop-down list of available Prometheus metrics.
+
+## Graph Chart
+
+A graph chart is preferable for displaying multiple metric values. The editing window for the graph is composed of three parts. The upper part displays real-time metric values. The left part is for setting the graph theme. The right part is for editing metrics and chart descriptions.
+
+- **Chart Types**: Supports basic charts and bar charts.
+- **Graph Types**: Supports basic charts and stacked charts.
+- **Chart Colors**: Change line colors.
+- **Chart Name**: The name of the chart.
+- **Description**: The chart description.
+- **Add**: Add a new query editor.
+- **Metric Name**: Legend for the line. It supports variables. For example, `{{pod}}` means using the value of the Prometheus metric label `pod` to name this line.
+- **Interval**: The step value between two data points.
+- **Monitoring Metric**: A list of available Prometheus metrics.
+- **Unit**: The metric data unit.
+- **Decimal Places**: The number of decimal places to display. It accepts an integer.
diff --git a/content/en/docs/v3.4/project-user-guide/custom-application-monitoring/visualization/querying.md b/content/en/docs/v3.4/project-user-guide/custom-application-monitoring/visualization/querying.md
new file mode 100644
index 000000000..57deac2e6
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/custom-application-monitoring/visualization/querying.md
@@ -0,0 +1,13 @@
+---
+title: "Querying"
+keywords: 'monitoring, Prometheus, Prometheus Operator, querying'
+description: 'Learn how to specify monitoring metrics.'
+linkTitle: "Querying"
+weight: 10817
+---
+
+In the query editor, enter PromQL expressions in **Monitoring Metrics** to process and fetch metrics. To learn how to write PromQL, read [Query Examples](https://prometheus.io/docs/prometheus/latest/querying/examples/).
+
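+As a quick aid, here is a hedged example of trying out a PromQL expression against the built-in Prometheus service before pasting it into **Monitoring Metrics**. The Service name `prometheus-k8s` and namespace `kubesphere-monitoring-system` are the defaults of the KubeSphere monitoring stack, and `demo-project` is a placeholder namespace:
+
+```bash
+# Forward the built-in Prometheus service to localhost (default names assumed).
+kubectl -n kubesphere-monitoring-system port-forward svc/prometheus-k8s 9090:9090 &
+
+# Query per-Pod CPU usage over the last 5 minutes; adjust the namespace label.
+curl -sG 'http://127.0.0.1:9090/api/v1/query' \
+  --data-urlencode 'query=sum(rate(container_cpu_usage_seconds_total{namespace="demo-project"}[5m])) by (pod)'
+```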
+
+
+
\ No newline at end of file
diff --git a/content/en/docs/v3.4/project-user-guide/grayscale-release/_index.md b/content/en/docs/v3.4/project-user-guide/grayscale-release/_index.md
new file mode 100644
index 000000000..f86106d9d
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/grayscale-release/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Grayscale Release"
+weight: 10500
+
+_build:
+ render: false
+---
diff --git a/content/en/docs/v3.4/project-user-guide/grayscale-release/blue-green-deployment.md b/content/en/docs/v3.4/project-user-guide/grayscale-release/blue-green-deployment.md
new file mode 100644
index 000000000..2ee90bf74
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/grayscale-release/blue-green-deployment.md
@@ -0,0 +1,74 @@
+---
+title: "Kubernetes Blue-Green Deployment on Kubesphere"
+keywords: 'KubeSphere, Kubernetes, Service Mesh, Istio, Grayscale Release, Blue-Green deployment'
+description: 'Learn how to release a blue-green deployment on KubeSphere.'
+linkTitle: "Blue-Green Deployment with Kubernetes"
+weight: 10520
+---
+
+
+The blue-green release provides zero-downtime deployment, which means the new version can be deployed while the old one is preserved. At any time, only one of the versions is active and serves all the traffic, while the other remains idle. If a problem occurs with the new version, you can quickly roll back to the old one.
+
+
+
+
+## Prerequisites
+
+- You need to enable [KubeSphere Service Mesh](../../../pluggable-components/service-mesh/).
+- You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+- You need to enable **Application Governance** and have an available app so that you can implement the blue-green deployment for it. The sample app used in this tutorial is Bookinfo. For more information, see [Deploy Bookinfo and Manage Traffic](../../../quick-start/deploy-bookinfo-to-k8s/).
+
+## Create a Blue-Green Deployment Job
+
+1. Log in to KubeSphere as `project-regular` and go to **Grayscale Release**. Under **Release Modes**, click **Create** on the right of **Blue-Green Deployment**.
+
+2. Set a name for it and click **Next**.
+
+3. On the **Service Settings** tab, select your app from the drop-down list and the Service for which you want to implement the blue-green deployment. If you also use the sample app Bookinfo, select **reviews** and click **Next**.
+
+4. On the **New Version Settings** tab, add another version (for example, `kubesphere/examples-bookinfo-reviews-v2:1.16.2`) and click **Next**.
+
+5. On the **Strategy Settings** tab, to allow the app version `v2` to take over all the traffic, select **Take Over** and click **Create**.
+
+6. The blue-green deployment job created is displayed under the **Release Jobs** tab. Click it to view details.
+
+7. Wait for a while and you can see all the traffic go to the version `v2`.
+
+8. The new **Deployment** is created as well.
+
+9. You can get the VirtualService to identify the weight by running the following command:
+
+ ```bash
+ kubectl -n demo-project get virtualservice -o yaml
+ ```
+
+ {{< notice note >}}
+
+ - When you run the command above, replace `demo-project` with your own project (namely, namespace) name.
+ - If you want to run the command from the web kubectl on the KubeSphere console, you need to use the user `admin`.
+
+    {{</ notice >}}
+
+10. Expected output:
+
+ ```yaml
+ ...
+ spec:
+ hosts:
+ - reviews
+ http:
+ - route:
+ - destination:
+ host: reviews
+ port:
+ number: 9080
+ subset: v2
+ weight: 100
+ ...
+ ```
+
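+The `subset: v2` in the output above refers to a subset defined in an Istio DestinationRule that the release job maintains. As a quick sanity check (assuming the same project as above), you can inspect it as follows:
+
+```bash
+# List the DestinationRules that define the v1/v2 subsets of the Service.
+kubectl -n demo-project get destinationrule -o yaml
+```
+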
+## Take a Job Offline
+
+After you implement the blue-green deployment and the result meets your expectations, you can take the job offline and remove the version `v1` by clicking **Delete**.
+
+
diff --git a/content/en/docs/v3.4/project-user-guide/grayscale-release/canary-release.md b/content/en/docs/v3.4/project-user-guide/grayscale-release/canary-release.md
new file mode 100644
index 000000000..1a124fc60
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/grayscale-release/canary-release.md
@@ -0,0 +1,120 @@
+---
+title: "Canary Release"
+keywords: 'KubeSphere, Kubernetes, Canary Release, Istio, Service Mesh'
+description: 'Learn how to deploy a canary service on KubeSphere.'
+linkTitle: "Canary Release"
+weight: 10530
+---
+
+Backed by [Istio](https://istio.io/), KubeSphere provides users with the control needed to deploy canary services. In a canary release, you introduce a new version of a service and test it by sending a small percentage of traffic to it. At the same time, the old version handles the rest of the traffic. If everything goes well, you can gradually increase the traffic sent to the new version while phasing out the old version. If any issues occur, KubeSphere allows you to roll back to the previous version by adjusting the traffic percentage.
+
+This method is an efficient way to test the performance and reliability of a service. It helps detect potential problems in the actual environment without affecting overall system stability.
+
+
+
+## Prerequisites
+
+- You need to enable [KubeSphere Service Mesh](../../../pluggable-components/service-mesh/).
+- You need to enable [KubeSphere Logging](../../../pluggable-components/logging/) so that you can use the Tracing feature.
+- You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+- You need to enable **Application Governance** and have an available app so that you can implement the canary release for it. The sample app used in this tutorial is Bookinfo. For more information, see [Deploy and Access Bookinfo](../../../quick-start/deploy-bookinfo-to-k8s/).
+
+## Step 1: Create a Canary Release Job
+
+1. Log in to KubeSphere as `project-regular` and navigate to **Grayscale Release**. Under **Release Modes**, click **Create** on the right of **Canary Release**.
+
+2. Set a name for it and click **Next**.
+
+3. On the **Service Settings** tab, select your app from the drop-down list and the Service for which you want to implement the canary release. If you also use the sample app Bookinfo, select **reviews** and click **Next**.
+
+4. On the **New Version Settings** tab, add another version of it (for example, `kubesphere/examples-bookinfo-reviews-v2:1.16.2`; change `v1` to `v2`) and click **Next**.
+
+5. You can send traffic to the two versions (`v1` and `v2`) either by a specific percentage or by request content such as `Http Header`, `Cookie`, and `URI`. Select **Specify Traffic Distribution** and move the slider to change the percentage of traffic sent to each version (for example, set 50% for each). When you finish, click **Create**.
+
+## Step 2: Verify the Canary Release
+
+Now that you have two available app versions, access the app to verify the canary release.
+
+1. Visit the Bookinfo website and refresh your browser repeatedly. You can see the **Book Reviews** section switch between v1 and v2 at a rate of about 50%.
+
+2. The created canary release job is displayed under the tab **Release Jobs**. Click it to view details.
+
+3. You can see that half of the traffic goes to each version.
+
+4. The new Deployment is created as well.
+
+5. You can get the VirtualService to identify the weights by running the following command:
+
+ ```bash
+ kubectl -n demo-project get virtualservice -o yaml
+ ```
+
+ {{< notice note >}}
+
+ - When you execute the command above, replace `demo-project` with your own project (namely, namespace) name.
+ - If you want to execute the command from the web kubectl on the KubeSphere console, you need to use the user `admin`.
+
+    {{</ notice >}}
+
+6. Expected output:
+
+    ```yaml
+ ...
+ spec:
+ hosts:
+ - reviews
+ http:
+ - route:
+ - destination:
+ host: reviews
+ port:
+ number: 9080
+ subset: v1
+ weight: 50
+ - destination:
+ host: reviews
+ port:
+ number: 9080
+ subset: v2
+ weight: 50
+ ...
+ ```
+
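+To confirm that both versions are actually running behind the Service, you can list the Pods together with their `version` labels; the `app=reviews` selector below is the one used by the Bookinfo sample:
+
+```bash
+# Show reviews Pods with an extra column for the version label (v1/v2).
+kubectl -n demo-project get pods -l app=reviews -L version
+```
+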
+## Step 3: View Network Topology
+
+1. Run the following command on the machine where KubeSphere runs to generate real traffic, simulating access to Bookinfo every 0.5 seconds.
+
+ ```bash
+ watch -n 0.5 "curl http://productpage.demo-project.192.168.0.2.nip.io:32277/productpage?u=normal"
+ ```
+
+    {{< notice note >}}
+
+    Make sure you replace the hostname and port number in the above command with your own.
+
+    {{</ notice >}}
+
+2. In **Traffic Monitoring**, you can see the communication, dependencies, health, and performance of different microservices.
+
+3. Click a component (for example, **reviews**) to see traffic monitoring information on the right, displaying real-time data of **Traffic**, **Success rate**, and **Duration**.
+
+## Step 4: View Tracing Details
+
+KubeSphere provides the distributed tracing feature based on [Jaeger](https://www.jaegertracing.io/), which is used to monitor and troubleshoot microservices-based distributed applications.
+
+1. On the **Tracing** tab, you can see all phases and internal calls of requests, as well as the time spent in each phase.
+
+2. Click any item, and you can even drill down to see request details and where this request is being processed (which machine or container).
+
+## Step 5: Take Over All Traffic
+
+If everything runs smoothly, you can bring all the traffic to the new version.
+
+1. In **Release Jobs**, click the canary release job.
+
+2. In the displayed dialog box, click
on the right of **reviews v2** and select **Take Over**. It means 100% of the traffic will be sent to the new version (v2).
+
+ {{< notice note >}}
+ If anything goes wrong with the new version, you can roll back to the previous version v1 anytime.
+    {{</ notice >}}
+
+3. Access Bookinfo again and refresh the browser several times. You can find that it only shows the result of **reviews v2** (i.e. ratings with black stars).
+
diff --git a/content/en/docs/v3.4/project-user-guide/grayscale-release/overview.md b/content/en/docs/v3.4/project-user-guide/grayscale-release/overview.md
new file mode 100644
index 000000000..cf48a86f9
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/grayscale-release/overview.md
@@ -0,0 +1,39 @@
+---
+title: "Grayscale Release — Overview"
+keywords: 'Kubernetes, KubeSphere, grayscale release, overview, service mesh'
+description: 'Understand the basic concept of grayscale release.'
+linkTitle: "Overview"
+weight: 10510
+---
+
+Modern, cloud-native applications are often composed of a group of independently deployable components, also known as microservices. In a microservices architecture, developers can adjust individual components flexibly without affecting the rest of the network of services, each of which performs a specific function. Such a network of microservices making up an application is also called a **service mesh**.
+
+A KubeSphere service mesh, built on the open-source project [Istio](https://istio.io/), controls how different parts of an app interact with one another. Among other features, grayscale release strategies give users an important way to test and release new app versions without affecting the communication among microservices.
+
+## Grayscale Release Strategies
+
+A grayscale release in KubeSphere ensures a smooth transition as you upgrade your apps to a new version. The specific strategy adopted may differ, but the ultimate goal is the same: identify potential problems in advance without affecting your apps running in the production environment. This not only minimizes the risks of a version upgrade but also tests the performance of new app builds.
+
+KubeSphere provides users with three grayscale release strategies.
+
+### [Blue-Green Deployment](../blue-green-deployment/)
+
+A blue-green deployment provides an efficient method of releasing new versions with zero downtime and no outages, as it creates an identical standby environment where the new app version runs. With this approach, KubeSphere routes all the traffic to one version or the other; only one environment is live at any given time. In the case of any issues with the new build, it allows you to immediately roll back to the previous version.
+
+### [Canary Release](../canary-release/)
+
+A canary deployment reduces the risk of version upgrades to a minimum as it slowly rolls out changes to a small subset of users. More specifically, you have the option to expose a new app version to a portion of production traffic, which you define on the highly responsive dashboard. Besides, KubeSphere gives you a visualized view of real-time traffic as it monitors requests after you implement a canary deployment. During the process, you can analyze the behavior of the new app version and choose to gradually increase the percentage of traffic sent to it. Once you are confident in the build, you can route all the traffic to it.
+
+### [Traffic Mirroring](../traffic-mirroring/)
+
+Traffic mirroring copies live production traffic and sends it to a mirrored service. By default, KubeSphere mirrors all the traffic while you can also manually define the percentage of traffic to be mirrored by specifying a value. Common use cases include:
+
+- Test new app versions. You can compare the real-time output of mirrored traffic and production traffic.
+- Test clusters. You can use production traffic of instances for cluster testing.
+- Test databases. You can use an empty database to store and load data.
+
+{{< notice note >}}
+
+The current KubeSphere version does not support grayscale release strategies for multi-cluster apps.
+
+{{</ notice >}}
diff --git a/content/en/docs/v3.4/project-user-guide/grayscale-release/traffic-mirroring.md b/content/en/docs/v3.4/project-user-guide/grayscale-release/traffic-mirroring.md
new file mode 100644
index 000000000..7d7568fd4
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/grayscale-release/traffic-mirroring.md
@@ -0,0 +1,81 @@
+---
+title: "Traffic Mirroring"
+keywords: 'KubeSphere, Kubernetes, Traffic Mirroring, Istio'
+description: 'Learn how to conduct a traffic mirroring job on KubeSphere.'
+linkTitle: "Traffic Mirroring"
+weight: 10540
+---
+
+Traffic mirroring, also called shadowing, is a powerful, risk-free method of testing your app versions, as it sends a copy of live traffic to the service being mirrored. In effect, you run a parallel setup for acceptance testing so that problems can be detected in advance. As mirrored traffic happens out of band of the critical request path for the primary service, your end users are not affected during the whole process.
+
+## Prerequisites
+
+- You need to enable [KubeSphere Service Mesh](../../../pluggable-components/service-mesh/).
+- You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+- You need to enable **Application Governance** and have an available app so that you can mirror the traffic of it. The sample app used in this tutorial is Bookinfo. For more information, see [Deploy Bookinfo and Manage Traffic](../../../quick-start/deploy-bookinfo-to-k8s/).
+
+## Create a Traffic Mirroring Job
+
+1. Log in to KubeSphere as `project-regular` and go to **Grayscale Release**. Under **Release Modes**, click **Create** on the right of **Traffic Mirroring**.
+
+2. Set a name for it and click **Next**.
+
+3. On the **Service Settings** tab, select your app from the drop-down list and the Service whose traffic you want to mirror. If you also use the sample app Bookinfo, select **reviews** and click **Next**.
+
+4. On the **New Version Settings** tab, add another version of it (for example, `kubesphere/examples-bookinfo-reviews-v2:1.16.2`; change `v1` to `v2`) and click **Next**.
+
+5. On the **Strategy Settings** tab, click **Create**.
+
+6. The traffic mirroring job created is displayed under the **Release Jobs** tab. Click it to view details.
+
+7. You can see the traffic is being mirrored to `v2` with real-time traffic displayed in the line chart.
+
+8. The new **Deployment** is created as well.
+
+9. You can get the virtual service to view `mirror` and `weight` by running the following command:
+
+ ```bash
+ kubectl -n demo-project get virtualservice -o yaml
+ ```
+
+ {{< notice note >}}
+
+ - When you run the command above, replace `demo-project` with your own project (namely, namespace) name.
+ - If you want to run the command from the web kubectl on the KubeSphere console, you need to use the user `admin`.
+
+    {{</ notice >}}
+
+10. Expected output:
+
+    ```yaml
+ ...
+ spec:
+ hosts:
+ - reviews
+ http:
+ - route:
+ - destination:
+ host: reviews
+ port:
+ number: 9080
+ subset: v1
+ weight: 100
+ mirror:
+ host: reviews
+ port:
+ number: 9080
+ subset: v2
+ ...
+ ```
+
+ This route rule sends 100% of the traffic to `v1`. The `mirror` field specifies that you want to mirror to the service `reviews v2`. When traffic gets mirrored, the requests are sent to the mirrored service with their Host/Authority headers appended with `-shadow`. For example, `cluster-1` becomes `cluster-1-shadow`.
+
+ {{< notice note >}}
+
+These requests are mirrored as “fire and forget”, which means that the responses are discarded. You can specify the `weight` field to mirror a fraction of the traffic, instead of mirroring all requests. If this field is absent, for compatibility with older versions, all traffic will be mirrored. For more information, see [Mirroring](https://istio.io/v1.5/pt-br/docs/tasks/traffic-management/mirroring/).
+
+{{</ notice >}}
+
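+As a hedged sanity check, you can tail the sidecar logs of the `v2` workload to watch mirrored requests arrive; this assumes the Bookinfo labels and the standard Istio sidecar container name `istio-proxy`:
+
+```bash
+# Follow access logs of the mirrored version; mirrored requests carry a
+# Host/Authority header ending in -shadow.
+kubectl -n demo-project logs -l app=reviews,version=v2 -c istio-proxy --tail=20 -f
+```
+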
+## Take a Job Offline
+
+You can remove the traffic mirroring job by clicking **Delete**, which does not affect the current app version.
diff --git a/content/en/docs/v3.4/project-user-guide/image-builder/_index.md b/content/en/docs/v3.4/project-user-guide/image-builder/_index.md
new file mode 100644
index 000000000..d10a9e339
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/image-builder/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Image Builder"
+weight: 10600
+
+_build:
+ render: false
+---
diff --git a/content/en/docs/v3.4/project-user-guide/image-builder/binary-to-image.md b/content/en/docs/v3.4/project-user-guide/image-builder/binary-to-image.md
new file mode 100644
index 000000000..17d509615
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/image-builder/binary-to-image.md
@@ -0,0 +1,141 @@
+---
+title: "Binary to Image: Publish an Artifact to Kubernetes"
+keywords: "KubeSphere, Kubernetes, Docker, B2I, Binary-to-Image"
+description: "Use B2I to import an artifact and push it to a target repository."
+linkTitle: "Binary to Image: Publish an Artifact to Kubernetes"
+weight: 10620
+---
+
+Binary-to-Image (B2I) is a toolkit and workflow for building reproducible container images from artifacts such as Jar, War, and binary packages. More specifically, you upload an artifact and specify a target repository such as Docker Hub or Harbor where you want to push the image. If everything runs successfully, your image is pushed to the target repository, and your application is automatically deployed to Kubernetes if you create a Service in the workflow.
+
+In a B2I workflow, you do not need to write any Dockerfile. This not only reduces learning costs but also improves release efficiency, allowing users to focus more on their business.
+
+This tutorial demonstrates two different ways to build an image based on an artifact in a B2I workflow. Ultimately, the image will be released to Docker Hub.
+
+For demonstration and testing purposes, here are some example artifacts you can use to implement the B2I workflow:
+
+| Artifact Package | GitHub Repository |
+| ------------------------------------------------------------ | ------------------------------------------------------------ |
+| [b2i-war-java8.war](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-war-java8.war) | [spring-mvc-showcase](https://github.com/spring-projects/spring-mvc-showcase) |
+| [b2i-war-java11.war](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-war-java11.war) | [springmvc5](https://github.com/kubesphere/s2i-java-container/tree/master/tomcat/examples/springmvc5) |
+| [b2i-binary](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-binary) | [devops-go-sample](https://github.com/runzexia/devops-go-sample) |
+| [b2i-jar-java11.jar](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-jar-java11.jar) | [java-maven-example](https://github.com/kubesphere/s2i-java-container/tree/master/java/examples/maven) |
+| [b2i-jar-java8.jar](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-jar-java8.jar) | [devops-maven-sample](https://github.com/kubesphere/devops-maven-sample) |
+
+## Prerequisites
+
+- You have enabled the [KubeSphere DevOps System](../../../pluggable-components/devops/).
+- You need to create a [Docker Hub](https://www.dockerhub.com/) account. GitLab and Harbor are also supported.
+- You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+- Set a CI dedicated node for building images. This is not mandatory but recommended for the development and production environment as it caches dependencies and reduces build time. For more information, see [Set a CI Node for Dependency Caching](../../../devops-user-guide/how-to-use/devops-settings/set-ci-node/).
+
+## Create a Service Using Binary-to-Image (B2I)
+
+The steps below show how to upload an artifact, build an image and release it to Kubernetes by creating a Service in a B2I workflow.
+
+
+
+### Step 1: Create a Docker Hub Secret
+
+You must create a Docker Hub Secret so that the Docker image created through B2I can be pushed to Docker Hub. Log in to KubeSphere as `project-regular`, go to your project, and create a Secret for Docker Hub. For more information, see [Create the Most Common Secrets](../../../project-user-guide/configuration/secrets/#create-the-most-common-secrets).
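+
+If you prefer the command line, a Docker registry Secret can also be created with kubectl. The following is only a minimal sketch; the Secret name `dockerhub-id`, the namespace `demo-project`, and the credentials are placeholders:
+
+```bash
+kubectl -n demo-project create secret docker-registry dockerhub-id \
+  --docker-server=https://index.docker.io/v1/ \
+  --docker-username=<your-username> \
+  --docker-password=<your-password>
+```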
+
+### Step 2: Create a Service
+
+1. In the same project, navigate to **Services** under **Application Workloads** and click **Create**.
+
+2. Scroll down to **Create Service from Artifact** and select **WAR**. This tutorial uses the [spring-mvc-showcase](https://github.com/spring-projects/spring-mvc-showcase) project as a sample and uploads a war artifact to KubeSphere. Set a name, such as `b2i-war-java8`, and click **Next**.
+
+3. On the **Build Settings** page, provide the following information accordingly and click **Next**.
+
+ **Service Type**: Select **Stateless Service** for this example. For more information about different Services, see [Service Type](../../../project-user-guide/application-workloads/services/#service-type).
+
+ **Artifact File**: Upload the war artifact ([b2i-war-java8](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-war-java8.war)).
+
+ **Build Environment**: Select **kubesphere/tomcat85-java8-centos7:v2.1.0**.
+
+ **Image Name**: Enter `
on the right of a record to see building logs. You can see `Build completed successfully` at the end of the log if everything runs normally.
+
+3. Go back to the **Services**, **Deployments**, and **Jobs** page, and you can see the corresponding Service, Deployment, and Job of the image have been all created successfully.
+
+4. In your Docker Hub repository, you can see that KubeSphere has pushed the image to the repository with the expected tag.
+
+### Step 4: Access the B2I Service
+
+1. On the **Services** page, click the B2I Service to go to its details page, where you can see the port number has been exposed.
+
+2. Access the Service at `http://
on the right of a record to see building logs. You can see `Build completed successfully` at the end of the log if everything runs normally.
+
+3. Go to the **Jobs** page, and you can see the corresponding Job of the image has been created successfully.
+
+4. In your Docker Hub repository, you can see that KubeSphere has pushed the image to the repository with the expected tag.
+
diff --git a/content/en/docs/v3.4/project-user-guide/image-builder/s2i-and-b2i-webhooks.md b/content/en/docs/v3.4/project-user-guide/image-builder/s2i-and-b2i-webhooks.md
new file mode 100644
index 000000000..3b89fba20
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/image-builder/s2i-and-b2i-webhooks.md
@@ -0,0 +1,81 @@
+---
+title: "Configure S2I and B2I Webhooks"
+keywords: 'KubeSphere, Kubernetes, S2I, Source-to-Image, B2I, Binary-to-Image, Webhook'
+description: 'Learn how to configure S2I and B2I webhooks.'
+linkTitle: "Configure S2I and B2I Webhooks"
+weight: 10650
+---
+
+KubeSphere provides Source-to-Image (S2I) and Binary-to-Image (B2I) features to automate image building and pushing and application deployment. In KubeSphere v3.1.x and later versions, you can configure S2I and B2I webhooks so that your Image Builder can be automatically triggered when there is any relevant activity in your code repository.
+
+This tutorial demonstrates how to configure S2I and B2I webhooks.
+
+## Prerequisites
+
+- You need to enable the [KubeSphere DevOps System](../../../pluggable-components/devops/).
+- You need to create a workspace, a project (`demo-project`) and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+- You need to create an S2I Image Builder and a B2I Image Builder. For more information, refer to [Source to Image: Publish an App without a Dockerfile](../source-to-image/) and [Binary to Image: Publish an Artifact to Kubernetes](../binary-to-image/).
+
+## Configure an S2I Webhook
+
+### Step 1: Expose the S2I trigger Service
+
+1. Log in to the KubeSphere web console as `admin`. Click **Platform** in the upper-left corner and then select **Cluster Management**.
+
+2. In **Services** under **Application Workloads**, select **kubesphere-devops-system** from the drop-down list and click **s2ioperator-trigger-service** to go to its details page.
+
+3. Click **More** and select **Edit External Access**.
+
+4. In the displayed dialog box, select **NodePort** from the drop-down list for **Access Method** and then click **OK**.
+
+ {{< notice note >}}
+
+ This tutorial selects **NodePort** for demonstration purposes. You can also select **LoadBalancer** based on your needs.
+
+    {{</ notice >}}
+
+5. You can view the **NodePort** on the details page. It is going to be included in the S2I webhook URL.
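+
+    Alternatively, you can read the NodePort from the command line; the Service name and namespace below are the ones used in this tutorial:
+
+    ```bash
+    kubectl -n kubesphere-devops-system get svc s2ioperator-trigger-service \
+      -o jsonpath='{.spec.ports[0].nodePort}'
+    ```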
+
+### Step 2: Configure an S2I webhook
+
+1. Log out of KubeSphere and log back in as `project-regular`. Go to `demo-project`.
+
+2. In **Image Builders**, click the S2I Image Builder to go to its details page.
+
+3. You can see an auto-generated link shown in **Remote Trigger**. Copy `/s2itrigger/v1alpha1/general/namespaces/demo-project/s2ibuilders/felixnoo-s2i-sample-latest-zhd/` as it is going to be included in the S2I webhook URL.
+
+4. Log in to your GitHub account and go to the source code repository used for the S2I Image Builder. Go to **Webhooks** under **Settings** and then click **Add webhook**.
+
+5. In **Payload URL**, enter `http://
on the right of a record to see building logs. You can see `Build completed successfully` at the end of the log if everything runs normally.
+
+3. Go back to the **Services**, **Deployments**, and **Jobs** page, and you can see the corresponding Service, Deployment, and Job of the image have been all created successfully.
+
+4. In your Docker Hub repository, you can see that KubeSphere has pushed the image to the repository with the expected tag.
+
+### Step 5: Access the S2I Service
+
+1. On the **Services** page, click the S2I Service to go to its details page.
+
+2. To access the Service, you can either use the endpoint with the `curl` command or visit the exposed address in your browser.
+
+| Parameter | Description |
+| --- | --- |
+| Name | Name of the PV. It is specified by the field `.metadata.name` in the manifest file of the PV. |
+| Status | Current status of the PV. It is specified by the field `.status.phase` in the manifest file of the PV, including **Available**, **Bound**, **Released**, and **Failed**. |
+| Capacity | Capacity of the PV. It is specified by the field `.spec.capacity.storage` in the manifest file of the PV. |
+| Access Mode | Access mode of the PV. It is specified by the field `.spec.accessModes` in the manifest file of the PV, including **ReadWriteOnce**, **ReadOnlyMany**, and **ReadWriteMany**. |
+| Reclaim Policy | Reclaim policy of the PV. It is specified by the field `.spec.persistentVolumeReclaimPolicy` in the manifest file of the PV, including **Retain**, **Delete**, and **Recycle**. |
+| Creation Time | Time when the PV was created. |
+
+| OS | Minimum Requirements |
+| --- | --- |
+| Ubuntu 16.04, 18.04, 20.04, 22.04 | 2 CPU cores, 4 GB memory, and 40 GB disk space |
+| Debian Buster, Stretch | 2 CPU cores, 4 GB memory, and 40 GB disk space |
+| CentOS 7.x | 2 CPU cores, 4 GB memory, and 40 GB disk space |
+| Red Hat Enterprise Linux 7 | 2 CPU cores, 4 GB memory, and 40 GB disk space |
+| SUSE Linux Enterprise Server 15/openSUSE Leap 15.2 | 2 CPU cores, 4 GB memory, and 40 GB disk space |
+
+| Supported Container Runtime | Version |
+| --- | --- |
+| Docker | 19.3.8+ |
+| containerd | Latest |
+| CRI-O (experimental, not fully tested) | Latest |
+| iSula (experimental, not fully tested) | Latest |
+
+| Dependency | Kubernetes Version ≥ 1.18 | Kubernetes Version < 1.18 |
+| --- | --- | --- |
+| `socat` | Required | Optional but recommended |
+| `conntrack` | Required | Optional but recommended |
+| `ebtables` | Optional but recommended | Optional but recommended |
+| `ipset` | Optional but recommended | Optional but recommended |
+
+| Built-in Roles | Description |
+| --- | --- |
+| `platform-self-provisioner` | Create workspaces and become the admin of the created workspaces. |
+| `platform-regular` | Has no access to any resources before joining a workspace or cluster. |
+| `platform-admin` | Manage all resources on the platform. |
+
+| User | Assigned Platform Role | User Permissions |
+| --- | --- | --- |
+| `ws-admin` | `platform-regular` | Manage all resources in a workspace after being invited to the workspace (this user is used to invite new members to a workspace in this example). |
+| `project-admin` | `platform-regular` | Create and manage projects and DevOps projects, and invite new members to the projects. |
+| `project-regular` | `platform-regular` | `project-regular` will be invited to a project or DevOps project by `project-admin`. This user will be used to create workloads, pipelines, and other resources in a specified project. |
+
+| User | Assigned Workspace Role | Role Permissions |
+| --- | --- | --- |
+| `ws-admin` | `demo-workspace-admin` | Manage all resources under the workspace (use this user to invite new members to the workspace). |
+| `project-admin` | `demo-workspace-self-provisioner` | Create and manage projects and DevOps projects, and invite new members to join the projects. |
+| `project-regular` | `demo-workspace-viewer` | `project-regular` will be invited by `project-admin` to join a project or DevOps project. The user can be used to create workloads, pipelines, etc. |
+
+| Parameter | Description |
+| --- | --- |
+| Cluster | Cluster where the operation happens. It is enabled if the multi-cluster feature is turned on. |
+| Project | Project where the operation happens. It supports exact query and fuzzy query. |
+| Workspace | Workspace where the operation happens. It supports exact query and fuzzy query. |
+| Resource Type | Type of the resource associated with the request. It supports fuzzy query. |
+| Resource Name | Name of the resource associated with the request. It supports fuzzy query. |
+| Verb | Kubernetes verb associated with the request. For non-resource requests, this is the lower-case HTTP method. It supports exact query. |
+| Status Code | HTTP response code. It supports exact query. |
+| Operation Account | User who calls this request. It supports exact query and fuzzy query. |
+| Source IP | IP address from where the request originated and any intermediate proxies. It supports fuzzy query. |
+| Time Range | Time when the request reaches the apiserver. |
+
+| Parameter | Description |
+| --- | --- |
+| `retentionDay` | `retentionDay` determines the date range displayed on the **Metering and Billing** page. The value of this parameter must be the same as the value of `retention` in Prometheus. |
+| `currencyUnit` | The currency displayed on the **Metering and Billing** page. Currently allowed values are `CNY` (Renminbi) and `USD` (US dollars). If you specify other currencies, the console displays cost in USD by default. |
+| `cpuCorePerHour` | The unit price of CPU per core/hour. |
+| `memPerGigabytesPerHour` | The unit price of memory per GB/hour. |
+| `ingressNetworkTrafficPerMegabytesPerHour` | The unit price of ingress traffic per MB/hour. |
+| `egressNetworkTrafficPerMegabytesPerHour` | The unit price of egress traffic per MB/hour. |
+| `pvcPerGigabytesPerHour` | The unit price of PVC per GB/hour. Note that KubeSphere calculates the total cost of volumes based on the storage capacity PVCs request, regardless of the actual storage in use. |
+1. Hover over the **Toolbox** in the lower-right corner and select **Metering and Billing**.
+
+2. Click **View Consumption** in the **Cluster Resource Consumption** section.
+
+3. On the left side of the dashboard, you can see a cluster list containing your host cluster and all member clusters if you have enabled [multi-cluster management](../../../multicluster-management/). There is only one cluster called `default` in the list if it is not enabled.
+
+ On the right side, there are three parts showing resource consumption in different ways.
+
+    | Module | Description |
+    | --- | --- |
+    | Overview | Displays a consumption overview of different resources in a cluster since its creation. You can also see the billing information if you have set prices for these resources in the ConfigMap `kubesphere-config`. |
+    | Consumption by Yesterday | Displays the total resource consumption by yesterday. You can also customize the time range and interval to see data within a specific period. |
+    | Current Resources Included | Displays the consumption of resources included in the selected target object (in this case, all nodes in the selected cluster) over the last hour. |
+1. Hover over the **Toolbox** in the lower-right corner and select **Metering and Billing**.
+
+2. Click **View Consumption** in the **Workspace (Project) Resource Consumption** section.
+
+3. On the left side of the dashboard, you can see a list containing all the workspaces in the current cluster. The right part displays detailed consumption information in the selected workspace, the layout of which is basically the same as that of a cluster.
+
+ {{< notice note >}}
+
+ In a multi-cluster architecture, you cannot see the metering and billing information of a workspace if it does not have any available cluster assigned to it. For more information, see [Cluster Visibility and Authorization](../../../cluster-administration/cluster-settings/cluster-visibility-and-authorization/).
+
+    {{</ notice >}}
+
+4. Click a workspace on the left and dive deeper into a project or workload (for example, Deployment and StatefulSet) to see detailed consumption information.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/toolbox/web-kubectl.md b/content/en/docs/v3.4/toolbox/web-kubectl.md
new file mode 100644
index 000000000..fd87e369c
--- /dev/null
+++ b/content/en/docs/v3.4/toolbox/web-kubectl.md
@@ -0,0 +1,44 @@
+---
+title: "Web Kubectl"
+keywords: 'KubeSphere, Kubernetes, kubectl, cli'
+description: 'The web kubectl tool is integrated into KubeSphere to provide consistent user experiences for Kubernetes users.'
+linkTitle: "Web Kubectl"
+weight: 15500
+---
+
+The Kubernetes command-line tool, kubectl, allows you to run commands on Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, view logs, and more.
+
+KubeSphere provides web kubectl on the console for user convenience. By default, in the current version, only users granted the `platform-admin` role (such as the default user `admin`) have permission to use web kubectl to operate and manage cluster resources.
+
+This tutorial demonstrates how to use web kubectl to operate on and manage cluster resources.
+
+## Use Web Kubectl
+
+1. Log in to KubeSphere with a user granted the `platform-admin` role, hover over the **Toolbox** in the lower-right corner and select **Kubectl**.
+
+2. You can see the kubectl interface in the pop-up window. If you have enabled the multi-cluster feature, you need to select the target cluster first from the drop-down list in the upper-right corner. This drop-down list is not visible if the multi-cluster feature is not enabled.
+
+3. Enter kubectl commands in the command-line tool to query and manage Kubernetes cluster resources. For example, execute the following command to query the status of all PVCs in the cluster.
+
+ ```bash
+ kubectl get pvc --all-namespaces
+ ```
+
+ 
+
+4. Use the following syntax to run kubectl commands from your terminal window:
+
+ ```bash
+ kubectl [command] [TYPE] [NAME] [flags]
+ ```
+
+ {{< notice note >}}
+
+- Where `command`, `TYPE`, `NAME`, and `flags` are:
+ - `command`: Specifies the operation that you want to perform on one or more resources, such as `create`, `get`, `describe` and `delete`.
+ - `TYPE`: Specifies the [resource type](https://kubernetes.io/docs/reference/kubectl/overview/#resource-types). Resource types are case-insensitive and you can specify the singular, plural, or abbreviated forms.
+ - `NAME`: Specifies the name of the resource. Names are case-sensitive. If the name is omitted, details for all resources are displayed, such as `kubectl get pods`.
+ - `flags`: Specifies optional flags. For example, you can use the `-s` or `--server` flags to specify the address and port of the Kubernetes API server.
+- If you need help, run `kubectl help` from the terminal window or refer to the [Kubernetes kubectl CLI documentation](https://kubernetes.io/docs/reference/kubectl/overview/).
+
+    {{</ notice >}}
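+
+    For example, the following command combines a `command` (`describe`), a `TYPE` (`deployment`), a `NAME`, and the `-n` flag; the Deployment and namespace names are placeholders:
+
+    ```bash
+    kubectl describe deployment nginx -n demo-project
+    ```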
diff --git a/content/en/docs/v3.4/upgrade/_index.md b/content/en/docs/v3.4/upgrade/_index.md
new file mode 100644
index 000000000..1ecc197ed
--- /dev/null
+++ b/content/en/docs/v3.4/upgrade/_index.md
@@ -0,0 +1,14 @@
+---
+title: "Upgrade"
+description: "Upgrade KubeSphere and Kubernetes"
+layout: "second"
+
+linkTitle: "Upgrade"
+
+weight: 7000
+
+icon: "/images/docs/v3.x/docs.svg"
+
+---
+
+This chapter demonstrates how cluster operators can upgrade KubeSphere to 3.4.0.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/upgrade/air-gapped-upgrade-with-ks-installer.md b/content/en/docs/v3.4/upgrade/air-gapped-upgrade-with-ks-installer.md
new file mode 100644
index 000000000..81af1de27
--- /dev/null
+++ b/content/en/docs/v3.4/upgrade/air-gapped-upgrade-with-ks-installer.md
@@ -0,0 +1,182 @@
+---
+title: "Air-Gapped Upgrade with ks-installer"
+keywords: "Air-Gapped, upgrade, kubesphere, 3.4"
+description: "Use ks-installer and offline package to upgrade KubeSphere."
+linkTitle: "Air-Gapped Upgrade with ks-installer"
+weight: 7500
+---
+
+ks-installer is recommended for users whose Kubernetes clusters were not set up by [KubeKey](../../installing-on-linux/introduction/kubekey/) but are hosted by cloud vendors or self-provisioned. This tutorial is for **upgrading KubeSphere only**. Cluster operators are responsible for upgrading Kubernetes beforehand.
+
+
+## Prerequisites
+
+- You need to have a KubeSphere cluster running v3.2.x. If your KubeSphere version is v3.1.x or earlier, upgrade to v3.2.x first.
+- Read [Release Notes for 3.4.0](../../../v3.4/release/release-v340/) carefully.
+- Back up any important component beforehand.
+- A Docker registry. You need to have a Harbor or other Docker registries. For more information, see [Prepare a Private Image Registry](../../installing-on-linux/introduction/air-gapped-installation/#step-2-prepare-a-private-image-registry).
+- Supported Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, v1.22.x*, v1.23.x*, v1.24.x*, v1.25.x*, and v1.26.x*. For Kubernetes versions marked with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+
+## Major Updates
+
+In KubeSphere 3.4.0, some changes have been made to built-in roles and to the permissions of custom roles. Therefore, before you upgrade KubeSphere to 3.4.0, please note the following:
+
+ - Change of built-in roles: Platform-level built-in roles `users-manager` and `workspace-manager` are removed. If an existing user has been bound to `users-manager` or `workspace-manager`, its role will be changed to `platform-regular` after the upgrade is completed. Role `platform-self-provisioner` is added. For more information about built-in roles, refer to [Create a user](../../quick-start/create-workspace-and-project).
+
+ - Some permissions of custom roles are removed:
+ - Removed permissions of platform-level custom roles: user management, role management, and workspace management.
+ - Removed permissions of workspace-level custom roles: user management, role management, and user group management.
+ - Removed permissions of namespace-level custom roles: user management and role management.
+ - After you upgrade KubeSphere to 3.4.0, custom roles will be retained, but removed permissions of the custom roles will be revoked.
+
+## Step 1: Prepare Installation Images
+
+As you upgrade KubeSphere in an air-gapped environment, you need to prepare an image package containing all the necessary images in advance.
+
+1. Download the image list file `images-list.txt` from a machine that has access to the Internet by running the following command:
+
+ ```bash
+ curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/images-list.txt
+ ```
+
+ {{< notice note >}}
+
+ This file lists images under `##+modulename` based on different modules. You can add your own images to this file following the same rule. To view the complete file, see [Appendix](../../installing-on-linux/introduction/air-gapped-installation/#image-list-of-kubesphere-v310).
+
+    {{</ notice >}}
+
+2. Download `offline-installation-tool.sh`.
+
+ ```bash
+ curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/offline-installation-tool.sh
+ ```
+
+3. Make the `.sh` file executable.
+
+ ```bash
+ chmod +x offline-installation-tool.sh
+ ```
+
+4. You can execute the command `./offline-installation-tool.sh -h` to see how to use the script:
+
+ ```bash
+ root@master:/home/ubuntu# ./offline-installation-tool.sh -h
+ Usage:
+
+ ./offline-installation-tool.sh [-l IMAGES-LIST] [-d IMAGES-DIR] [-r PRIVATE-REGISTRY] [-v KUBERNETES-VERSION ]
+
+ Description:
+ -b : save kubernetes' binaries.
+ -d IMAGES-DIR : the dir of files (tar.gz) which generated by `docker save`. default: ./kubesphere-images
+ -l IMAGES-LIST : text file with list of images.
+ -r PRIVATE-REGISTRY : target private registry:port.
+ -s : save model will be applied. Pull the images in the IMAGES-LIST and save images as a tar.gz file.
+ -v KUBERNETES-VERSION : download kubernetes' binaries. default: v1.17.9
+ -h : usage message
+ ```
+
+5. Pull images in `offline-installation-tool.sh`.
+
+ ```bash
+ ./offline-installation-tool.sh -s -l images-list.txt -d ./kubesphere-images
+ ```
+
+ {{< notice note >}}
+
+    You can choose to pull images as needed. For example, you can delete `##k8s-images` and the related images under it in `images-list.txt` if you already have a Kubernetes cluster.
+
+    {{</ notice >}}
+
+## Step 2: Push Images to Your Private Registry
+
+Transfer your packaged image file to your local machine and execute the following command to push it to the registry.
+
+```bash
+./offline-installation-tool.sh -l images-list.txt -d ./kubesphere-images -r dockerhub.kubekey.local
+```
+
+{{< notice note >}}
+
+The domain name is `dockerhub.kubekey.local` in the command. Make sure you use your **own registry address**.
+
+{{</ notice >}}
+
+## Step 3: Download ks-installer
+
+Similar to installing KubeSphere on an existing Kubernetes cluster in an online environment, you also need to download `kubesphere-installer.yaml`.
+
+1. Execute the following command to download ks-installer and transfer it to your machine that serves as the taskbox for installation.
+
+ ```bash
+ curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml
+ ```
+
+2. Verify that you have specified your private image registry in `spec.local_registry` in `cluster-configuration.yaml`. Note that if your existing cluster was installed in an air-gapped environment, you may already have this field specified. Otherwise, run the following command to edit `cluster-configuration.yaml` of your existing KubeSphere v3.2.x cluster and add the private image registry:
+
+    ```bash
+    kubectl edit cc ks-installer -n kubesphere-system
+    ```
+
+    For example, if `dockerhub.kubekey.local` is the registry address as in this tutorial, use it as the value of `.spec.local_registry` as shown below:
+
+ ```yaml
+ spec:
+ persistence:
+ storageClass: ""
+ authentication:
+ jwtSecret: ""
+ local_registry: dockerhub.kubekey.local # Add this line manually; make sure you use your own registry address.
+ ```
+
+3. Save `cluster-configuration.yaml` after you finish editing it. Then, replace the `ks-installer` image address with your **own registry address** by running the following command:
+
+ ```bash
+    sed -i "s#^\s*image: kubesphere.*/ks-installer:.*#        image: dockerhub.kubekey.local/kubesphere/ks-installer:v3.4.0#" kubesphere-installer.yaml
+ ```
+
+ {{< notice warning >}}
+
+ `dockerhub.kubekey.local` is the registry address in the command. Make sure you use your own registry address.
+
+    {{</ notice >}}
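+
+    Optionally, you can confirm that the image line now points at your own registry:
+
+    ```bash
+    grep "image:" kubesphere-installer.yaml
+    ```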
+
+## Step 4: Upgrade KubeSphere
+
+Execute the following command after you make sure that all steps above are completed.
+
+```bash
+kubectl apply -f kubesphere-installer.yaml
+```
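+
+You can follow the installer logs to monitor the upgrade progress; the label selector below is the one the ks-installer Deployment ships with:
+
+```bash
+kubectl logs -n kubesphere-system \
+  $(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}') -f
+```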
+
+## Step 5: Verify Installation
+
+When the installation finishes, you can see the content as follows:
+
+```bash
+#####################################################
+### Welcome to KubeSphere! ###
+#####################################################
+
+Console: http://192.168.0.2:30880
+Account: admin
+Password: P@88w0rd
+
+NOTES:
+ 1. After you log into the console, please check the
+ monitoring status of service components in
+ the "Cluster Management". If any service is not
+ ready, please wait patiently until all components
+ are up and running.
+ 2. Please change the default password after login.
+
+#####################################################
+https://kubesphere.io 20xx-xx-xx xx:xx:xx
+#####################################################
+```
+
+Now, you will be able to access the web console of KubeSphere through `http://{IP}:30880` with the default account and password `admin/P@88w0rd`.
+
+{{< notice note >}}
+
+To access the console, make sure port 30880 is opened in your security group.
+
+{{</ notice >}}
diff --git a/content/en/docs/v3.4/upgrade/air-gapped-upgrade-with-kubekey.md b/content/en/docs/v3.4/upgrade/air-gapped-upgrade-with-kubekey.md
new file mode 100644
index 000000000..10f4bf06b
--- /dev/null
+++ b/content/en/docs/v3.4/upgrade/air-gapped-upgrade-with-kubekey.md
@@ -0,0 +1,349 @@
+---
+title: "Air-Gapped Upgrade with KubeKey"
+keywords: "Air-Gapped, kubernetes, upgrade, kubesphere, 3.4.0"
+description: "Use the offline package to upgrade Kubernetes and KubeSphere."
+linkTitle: "Air-Gapped Upgrade with KubeKey"
+weight: 7400
+---
+Air-gapped upgrade with KubeKey is recommended for users whose KubeSphere and Kubernetes were both deployed by [KubeKey](../../installing-on-linux/introduction/kubekey/). If your Kubernetes cluster was provisioned by yourself or cloud providers, refer to [Air-gapped Upgrade with ks-installer](../air-gapped-upgrade-with-ks-installer/).
+
+## Prerequisites
+
+- You need to have a KubeSphere cluster running v3.2.x. If your KubeSphere version is v3.1.x or earlier, upgrade to v3.2.x first.
+- Your Kubernetes version must be v1.20.x, v1.21.x, v1.22.x*, v1.23.x*, v1.24.x*, v1.25.x*, or v1.26.x*. For Kubernetes versions marked with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+- Read [Release Notes for 3.4.0](../../../v3.4/release/release-v340/) carefully.
+- Back up any important component beforehand.
+- A Docker registry. You need to have a Harbor or other Docker registries.
+- Make sure every node can push and pull images from the Docker Registry.
+
+## Major Updates
+
+In KubeSphere 3.4.0, some changes have been made to built-in roles and to the permissions of custom roles. Therefore, before you upgrade KubeSphere to 3.4.0, please note the following:
+
+ - Change of built-in roles: Platform-level built-in roles `users-manager` and `workspace-manager` are removed. If an existing user has been bound to `users-manager` or `workspace-manager`, its role will be changed to `platform-regular` after the upgrade is completed. Role `platform-self-provisioner` is added. For more information about built-in roles, refer to [Create a user](../../quick-start/create-workspace-and-project).
+
+ - Some permissions of custom roles are removed:
+ - Removed permissions of platform-level custom roles: user management, role management, and workspace management.
+ - Removed permissions of workspace-level custom roles: user management, role management, and user group management.
+ - Removed permissions of namespace-level custom roles: user management and role management.
+ - After you upgrade KubeSphere to 3.4.0, custom roles will be retained, but removed permissions of the custom roles will be revoked.
+
+## Upgrade KubeSphere and Kubernetes
+
+Upgrading steps are different for single-node clusters (all-in-one) and multi-node clusters.
+
+{{< notice info >}}
+
+KubeKey upgrades Kubernetes from one MINOR version to the next MINOR version until the target version. For example, you may see the upgrading process going from 1.16 to 1.17 and to 1.18, instead of directly jumping to 1.18 from 1.16.
+
+{{</ notice >}}
+
+
+### System Requirements
+
+| Systems | Minimum Requirements (Each node) |
+| --------------------------------------------------------------- | ------------------------------------------- |
+| **Ubuntu** *16.04, 18.04, 20.04* | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
+| **Debian** *Buster, Stretch* | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
+| **CentOS** *7.x* | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
+| **Red Hat Enterprise Linux** *7* | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
+| **SUSE Linux Enterprise Server** *15* **/openSUSE Leap** *15.2* | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
+
+{{< notice note >}}
+
+[KubeKey](https://github.com/kubesphere/kubekey) uses `/var/lib/docker` as the default directory where all Docker related files, including images, are stored. It is recommended you add additional storage volumes with at least **100G** mounted to `/var/lib/docker` and `/mnt/registry` respectively. See [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
+
+{{</ notice >}}
+
+
+### Step 1: Download KubeKey
+
+1. Run the following commands to download KubeKey.
+
+ {{< tabs >}}
+
+ {{< tab "Good network connections to GitHub/Googleapis" >}}
+
+ Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
+
+ ```bash
+ curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+ ```
+
+    {{</ tab >}}
+
+ {{< tab "Poor network connections to GitHub/Googleapis" >}}
+
+ Run the following command first to make sure you download KubeKey from the correct zone.
+
+ ```bash
+ export KKZONE=cn
+ ```
+
+ Run the following command to download KubeKey:
+
+ ```bash
+ curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+ ```
+    {{</ tab >}}
+
+    {{</ tabs >}}
+
+2. After you uncompress the file, execute the following command to make `kk` executable:
+
+ ```bash
+ chmod +x kk
+ ```
+
+### Step 2: Prepare installation images
+
+As you upgrade KubeSphere and Kubernetes in an air-gapped environment, you need to prepare an image package containing all the necessary images and download the Kubernetes binary file in advance.
+
+1. Download the image list file `images-list.txt` from a machine that has access to the Internet by running the following command:
+
+ ```bash
+ curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/images-list.txt
+ ```
+
+ {{< notice note >}}
+
+ This file lists images under `##+modulename` based on different modules. You can add your own images to this file following the same rule.
+
+    {{</ notice >}}
+
+2. Download `offline-installation-tool.sh`.
+
+ ```bash
+ curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/offline-installation-tool.sh
+ ```
+
+3. Make the `.sh` file executable.
+
+ ```bash
+ chmod +x offline-installation-tool.sh
+ ```
+
+4. You can execute the command `./offline-installation-tool.sh -h` to see how to use the script:
+
+ ```bash
+ root@master:/home/ubuntu# ./offline-installation-tool.sh -h
+ Usage:
+
+ ./offline-installation-tool.sh [-l IMAGES-LIST] [-d IMAGES-DIR] [-r PRIVATE-REGISTRY] [-v KUBERNETES-VERSION ]
+
+ Description:
+ -b : save kubernetes' binaries.
+ -d IMAGES-DIR : the dir of files (tar.gz) which generated by `docker save`. default: /home/ubuntu/kubesphere-images
+ -l IMAGES-LIST : text file with list of images.
+ -r PRIVATE-REGISTRY : target private registry:port.
+ -s : save model will be applied. Pull the images in the IMAGES-LIST and save images as a tar.gz file.
+ -v KUBERNETES-VERSION : download kubernetes' binaries. default: v1.17.9
+ -h : usage message
+ ```
+
+5. Download the Kubernetes binary file.
+
+ ```bash
+ ./offline-installation-tool.sh -b -v v1.22.12
+ ```
+
+ If you cannot access the object storage service of Google, run the following command instead to add the environment variable to change the source.
+
+ ```bash
+ export KKZONE=cn;./offline-installation-tool.sh -b -v v1.22.12
+ ```
+
+ {{< notice note >}}
+
+    - You can change the Kubernetes version downloaded based on your needs. Recommended Kubernetes versions for KubeSphere 3.4 are v1.20.x, v1.21.x, v1.22.x*, v1.23.x*, v1.24.x*, v1.25.x*, and v1.26.x*. For Kubernetes versions marked with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../installing-on-linux/introduction/kubekey/#support-matrix).
+
+    - After you run the script, a folder `kubekey` is automatically created. Note that this folder and `kk` must be placed in the same directory when you create the cluster later.
+
+    {{</ notice >}}
+
+6. Pull images in `offline-installation-tool.sh`.
+
+ ```bash
+ ./offline-installation-tool.sh -s -l images-list.txt -d ./kubesphere-images
+ ```
+
+ {{< notice note >}}
+
+    You can choose to pull images as needed. For example, you can delete `##k8s-images` and the related images under it in `images-list.txt` if you already have a Kubernetes cluster.
+
+    {{</ notice >}}
+
+### Step 3: Push images to your private registry
+
+Transfer your packaged image file to your local machine and execute the following command to push it to the registry.
+
+```bash
+./offline-installation-tool.sh -l images-list.txt -d ./kubesphere-images -r dockerhub.kubekey.local
+```
+
+ {{< notice note >}}
+
+ The domain name is `dockerhub.kubekey.local` in the command. Make sure you use your **own registry address**.
+
+ {{</ notice >}}
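+
+If you want to confirm that the images actually landed in the registry, you can query the standard Docker Registry HTTP API. A quick check, assuming your registry is compatible with the Registry v2 API and reachable at `dockerhub.kubekey.local`:
+
+```bash
+# List the repositories known to the private registry (adjust the address to your own).
+curl -sk https://dockerhub.kubekey.local/v2/_catalog
+```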
+
+### Air-gapped upgrade for all-in-one clusters
+
+#### Example machines
+
+| Host Name | IP          | Role                 | Port | URL                     |
+| --------- | ----------- | -------------------- | ---- | ----------------------- |
+| master | 192.168.1.1 | Docker registry | 5000 | http://192.168.1.1:5000 |
+| master | 192.168.1.1 | master, etcd, worker | | |
+
+#### Versions
+
+| | Kubernetes | KubeSphere |
+| ------ | ---------- | ---------- |
+| Before | v1.18.6 | v3.2.x |
+| After  | v1.22.12   | v3.4.x     |
+
+#### Upgrade a cluster
+
+In this example, KubeSphere is installed on a single node, and you need to specify a configuration file to add host information. Besides, for air-gapped installation, pay special attention to `.spec.registry.privateRegistry`, which must be set to **your own registry address**. For more information, see the following sections.
+
+#### Create an example configuration file
+
+Execute the following command to generate an example configuration file for installation:
+
+```bash
+./kk create config [--with-kubernetes version] [--with-kubesphere version] [(-f | --file) path]
+```
+
+For example:
+
+```bash
+./kk create config --with-kubernetes v1.22.12 --with-kubesphere v3.4.0 -f config-sample.yaml
+```
+
+{{< notice note >}}
+
+Make sure the Kubernetes version is the one you downloaded.
+
+{{</ notice >}}
+
+#### Edit the configuration file
+
+Edit the configuration file `config-sample.yaml`. Here is [an example for your reference](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md).
+
+{{< notice warning >}}
+
+For air-gapped installation, you must specify `privateRegistry`, which is `dockerhub.kubekey.local` in this example.
+
+{{</ notice >}}
+
+Set `hosts` of your `config-sample.yaml` file:
+
+```yaml
+ hosts:
+ - {name: ks.master, address: 192.168.1.1, internalAddress: 192.168.1.1, user: root, password: Qcloud@123}
+ roleGroups:
+ etcd:
+ - ks.master
+ control-plane:
+ - ks.master
+ worker:
+ - ks.master
+```
+
+Set `privateRegistry` of your `config-sample.yaml` file:
+```yaml
+ registry:
+ registryMirrors: []
+ insecureRegistries: []
+ privateRegistry: dockerhub.kubekey.local
+```
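+
+If `dockerhub.kubekey.local` cannot be resolved through DNS in your environment, every node must be able to resolve the registry domain locally. For example (the IP below is the example registry address; replace it with your own):
+
+```bash
+# Map the registry domain to the registry host on every node.
+echo "192.168.1.1 dockerhub.kubekey.local" >> /etc/hosts
+```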
+
+#### Upgrade your single-node cluster to KubeSphere 3.4 and Kubernetes v1.22.12
+
+```bash
+./kk upgrade -f config-sample.yaml
+```
+
+To upgrade Kubernetes to a specific version, explicitly provide the version after the flag `--with-kubernetes`. Available versions are v1.20.x, v1.21.x, v1.22.x\*, v1.23.x\*, v1.24.x\*, v1.25.x\*, and v1.26.x\*. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
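+
+After the upgrade starts, you can follow the progress of the KubeSphere components with the installer logs:
+
+```bash
+kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+```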
+
+### Air-gapped upgrade for multi-node clusters
+
+#### Example machines
+
+| Host Name | IP          | Role            | Port | URL                     |
+| --------- | ----------- | --------------- | ---- | ----------------------- |
+| master | 192.168.1.1 | Docker registry | 5000 | http://192.168.1.1:5000 |
+| master | 192.168.1.1 | master, etcd | | |
+| slave1 | 192.168.1.2 | worker | | |
+| slave2    | 192.168.1.3 | worker          |      |                         |
+
+
+#### Versions
+
+| | Kubernetes | KubeSphere |
+| ------ | ---------- | ---------- |
+| Before | v1.18.6 | v3.2.x |
+| After  | v1.22.12   | v3.4.x     |
+
+#### Upgrade a cluster
+
+In this example, KubeSphere is installed on multiple nodes, so you need to specify a configuration file to add host information. Besides, for air-gapped installation, pay special attention to `.spec.registry.privateRegistry`, which must be set to **your own registry address**. For more information, see the following sections.
+
+#### Create an example configuration file
+
+Execute the following command to generate an example configuration file for installation:
+
+```bash
+./kk create config [--with-kubernetes version] [--with-kubesphere version] [(-f | --file) path]
+```
+
+For example:
+
+```bash
+./kk create config --with-kubernetes v1.22.12 --with-kubesphere v3.4.0 -f config-sample.yaml
+```
+
+{{< notice note >}}
+
+Make sure the Kubernetes version is the one you downloaded.
+
+{{</ notice >}}
+
+#### Edit the configuration file
+
+Edit the configuration file `config-sample.yaml`. Here is [an example for your reference](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md).
+
+{{< notice warning >}}
+
+For air-gapped installation, you must specify `privateRegistry`, which is `dockerhub.kubekey.local` in this example.
+
+{{</ notice >}}
+
+Set `hosts` of your `config-sample.yaml` file:
+
+```yaml
+ hosts:
+ - {name: ks.master, address: 192.168.1.1, internalAddress: 192.168.1.1, user: root, password: Qcloud@123}
+ - {name: ks.slave1, address: 192.168.1.2, internalAddress: 192.168.1.2, user: root, privateKeyPath: "/root/.ssh/kp-qingcloud"}
+ - {name: ks.slave2, address: 192.168.1.3, internalAddress: 192.168.1.3, user: root, privateKeyPath: "/root/.ssh/kp-qingcloud"}
+ roleGroups:
+ etcd:
+ - ks.master
+ control-plane:
+ - ks.master
+ worker:
+ - ks.slave1
+ - ks.slave2
+```
+Set `privateRegistry` of your `config-sample.yaml` file:
+```yaml
+ registry:
+ registryMirrors: []
+ insecureRegistries: []
+ privateRegistry: dockerhub.kubekey.local
+```
+
+#### Upgrade your multi-node cluster to KubeSphere 3.4 and Kubernetes v1.22.12
+
+```bash
+./kk upgrade -f config-sample.yaml
+```
+
+To upgrade Kubernetes to a specific version, explicitly provide the version after the flag `--with-kubernetes`. Available versions are v1.20.x, v1.21.x, v1.22.x\*, v1.23.x\*, v1.24.x\*, v1.25.x\*, and v1.26.x\*. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
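+
+Once `kk upgrade` finishes, a quick sanity check is to confirm that every node reports the target kubelet version:
+
+```bash
+# All nodes should show v1.22.12 in the VERSION column after the upgrade.
+kubectl get nodes -o wide
+```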
diff --git a/content/en/docs/v3.4/upgrade/overview.md b/content/en/docs/v3.4/upgrade/overview.md
new file mode 100644
index 000000000..5e937a2dd
--- /dev/null
+++ b/content/en/docs/v3.4/upgrade/overview.md
@@ -0,0 +1,28 @@
+---
+title: "Upgrade — Overview"
+keywords: "Kubernetes, upgrade, KubeSphere, 3.4, upgrade"
+description: "Understand what you need to pay attention to before the upgrade, such as versions, and upgrade tools."
+linkTitle: "Overview"
+weight: 7100
+---
+
+## Make Your Upgrade Plan
+
+KubeSphere 3.4 is compatible with Kubernetes v1.20.x, v1.21.x, v1.22.x\*, v1.23.x\*, v1.24.x\*, v1.25.x\*, and v1.26.x\*:
+
+- Before you upgrade your cluster to KubeSphere 3.4, you need to have a KubeSphere cluster running v3.2.x.
+- You can choose to only upgrade KubeSphere to 3.4 or upgrade Kubernetes (to a higher version) and KubeSphere (to 3.4) at the same time.
+- For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+
+## Before the Upgrade
+
+{{< notice warning >}}
+
+- Run a simulated upgrade in a testing environment first. After the upgrade succeeds in the testing environment and all applications run normally, upgrade the cluster in your production environment.
+- During the upgrade, applications may be briefly interrupted (especially single-replica Pods), so schedule the upgrade for an appropriate maintenance window.
+- In production, it is recommended to back up etcd and stateful applications before the upgrade. You can use [Velero](https://velero.io/) to back up and migrate Kubernetes resources and persistent volumes; see the sketch after this note.
+
+{{</ notice >}}
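+
+A minimal Velero sketch, assuming Velero is already installed in the cluster and configured with an object storage backend (the backup name and namespace below are illustrative):
+
+```bash
+# Back up the resources and volumes of a namespace before upgrading.
+velero backup create pre-upgrade-backup --include-namespaces demo-app
+
+# Check that the backup completed successfully.
+velero backup describe pre-upgrade-backup
+```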
+
+## Upgrade Tool
+
+Depending on how your existing cluster was set up, you can use KubeKey or ks-installer to upgrade your cluster. It is recommended that you [use KubeKey to upgrade your cluster](../upgrade-with-kubekey/) if it was created by KubeKey. Otherwise, [use ks-installer to upgrade your cluster](../upgrade-with-ks-installer/).
\ No newline at end of file
diff --git a/content/en/docs/v3.4/upgrade/upgrade-with-ks-installer.md b/content/en/docs/v3.4/upgrade/upgrade-with-ks-installer.md
new file mode 100644
index 000000000..f66373a5b
--- /dev/null
+++ b/content/en/docs/v3.4/upgrade/upgrade-with-ks-installer.md
@@ -0,0 +1,41 @@
+---
+title: "Upgrade with ks-installer"
+keywords: "Kubernetes, upgrade, KubeSphere, v3.4.0"
+description: "Use ks-installer to upgrade KubeSphere."
+linkTitle: "Upgrade with ks-installer"
+weight: 7300
+---
+
+ks-installer is recommended for users whose Kubernetes clusters were not set up by [KubeKey](../../installing-on-linux/introduction/kubekey/) but are hosted by cloud vendors or self-built. This tutorial is for **upgrading KubeSphere only**. Cluster operators are responsible for upgrading Kubernetes beforehand.
+
+## Prerequisites
+
+- You need to have a KubeSphere cluster running v3.2.x. If your KubeSphere version is v3.1.x or earlier, upgrade to v3.2.x first.
+- Read [Release Notes for 3.4.0](../../../v3.4/release/release-v340/) carefully.
+- Back up any important component beforehand.
+- Supported Kubernetes versions of KubeSphere 3.4: v1.20.x, v1.21.x, v1.22.x\*, v1.23.x\*, and v1.24.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+
+## Major Updates
+
+In KubeSphere 3.4.0, some changes have been made to built-in roles and to the permissions of custom roles. Therefore, before you upgrade KubeSphere to 3.4.0, please note the following:
+
+ - Change of built-in roles: Platform-level built-in roles `users-manager` and `workspace-manager` are removed. If an existing user has been bound to `users-manager` or `workspace-manager`, its role will be changed to `platform-regular` after the upgrade is completed. Role `platform-self-provisioner` is added. For more information about built-in roles, refer to [Create a user](../../quick-start/create-workspace-and-project).
+
+ - Some permissions of custom roles are removed:
+ - Removed permissions of platform-level custom roles: user management, role management, and workspace management.
+ - Removed permissions of workspace-level custom roles: user management, role management, and user group management.
+ - Removed permissions of namespace-level custom roles: user management and role management.
+ - After you upgrade KubeSphere to 3.4.0, custom roles will be retained, but removed permissions of the custom roles will be revoked.
+
+## Apply ks-installer
+
+Run the following command to upgrade your cluster.
+
+```bash
+kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml --force
+```
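+
+You can then watch the installer roll out and follow its logs, for example:
+
+```bash
+# Wait for the installer Deployment created by the manifest above to become ready.
+kubectl -n kubesphere-system rollout status deploy/ks-installer
+
+# Follow the installation logs.
+kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+```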
+
+## Enable Pluggable Components
+
+You can [enable new pluggable components](../../pluggable-components/overview/) of KubeSphere 3.4 after the upgrade to explore more features of the container platform.
+
diff --git a/content/en/docs/v3.4/upgrade/upgrade-with-kubekey.md b/content/en/docs/v3.4/upgrade/upgrade-with-kubekey.md
new file mode 100644
index 000000000..3de055d1b
--- /dev/null
+++ b/content/en/docs/v3.4/upgrade/upgrade-with-kubekey.md
@@ -0,0 +1,146 @@
+---
+title: "Upgrade with KubeKey"
+keywords: "Kubernetes, upgrade, KubeSphere, 3.4, KubeKey"
+description: "Use KubeKey to upgrade Kubernetes and KubeSphere."
+linkTitle: "Upgrade with KubeKey"
+weight: 7200
+---
+KubeKey is recommended for users whose KubeSphere and Kubernetes were both installed by [KubeKey](../../installing-on-linux/introduction/kubekey/). If your Kubernetes cluster was provisioned by yourself or cloud providers, refer to [Upgrade with ks-installer](../upgrade-with-ks-installer/).
+
+This tutorial demonstrates how to upgrade your cluster using KubeKey.
+
+## Prerequisites
+
+- You need to have a KubeSphere cluster running v3.2.x. If your KubeSphere version is v3.1.x or earlier, upgrade to v3.2.x first.
+- Read [Release Notes for 3.4.0](../../../v3.4/release/release-v340/) carefully.
+- Back up any important component beforehand.
+- Make your upgrade plan. Two scenarios are provided in this document for [all-in-one clusters](#all-in-one-cluster) and [multi-node clusters](#multi-node-cluster) respectively.
+
+## Major Updates
+
+In KubeSphere 3.4.0, some changes have been made to built-in roles and to the permissions of custom roles. Therefore, before you upgrade KubeSphere to 3.4.0, please note the following:
+
+ - Change of built-in roles: Platform-level built-in roles `users-manager` and `workspace-manager` are removed. If an existing user has been bound to `users-manager` or `workspace-manager`, its role will be changed to `platform-regular` after the upgrade is completed. Role `platform-self-provisioner` is added. For more information about built-in roles, refer to [Create a user](../../quick-start/create-workspace-and-project).
+
+ - Some permissions of custom roles are removed:
+ - Removed permissions of platform-level custom roles: user management, role management, and workspace management.
+ - Removed permissions of workspace-level custom roles: user management, role management, and user group management.
+ - Removed permissions of namespace-level custom roles: user management and role management.
+ - After you upgrade KubeSphere to 3.4.0, custom roles will be retained, but removed permissions of the custom roles will be revoked.
+
+## Download KubeKey
+
+Follow the steps below to download KubeKey before you upgrade your cluster.
+
+{{< tabs >}}
+
+{{< tab "Good network connections to GitHub/Googleapis" >}}
+
+Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
+
+```bash
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+```
+
+{{</ tab >}}
+
+{{< tab "Poor network connections to GitHub/Googleapis" >}}
+
+Run the following command first to make sure you download KubeKey from the correct zone.
+
+```bash
+export KKZONE=cn
+```
+
+Run the following command to download KubeKey:
+
+```bash
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+```
+
+{{< notice note >}}
+
+After you download KubeKey, if you transfer it to a new machine also with poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.
+
+{{</ notice >}}
+
+{{</ tab >}}
+
+{{</ tabs >}}
+
+{{< notice note >}}
+
+The commands above download KubeKey v3.0.7. You can change the version number in the command to download a specific version.
+
+{{</ notice >}}
+
+Make `kk` executable:
+
+```bash
+chmod +x kk
+```
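+
+A quick way to confirm that the binary works and is the release you expect:
+
+```bash
+./kk version
+```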
+
+## Upgrade KubeSphere and Kubernetes
+
+Upgrading steps are different for single-node clusters (all-in-one) and multi-node clusters.
+
+{{< notice info >}}
+
+When upgrading Kubernetes, KubeKey upgrades one MINOR version at a time until it reaches the target version. For example, you may see the process go from 1.16 to 1.17 and then to 1.18, instead of jumping directly from 1.16 to 1.18.
+
+{{</ notice >}}
+
+### All-in-one cluster
+
+Run the following command to use KubeKey to upgrade your single-node cluster to KubeSphere 3.4 and Kubernetes v1.22.12:
+
+```bash
+./kk upgrade --with-kubernetes v1.22.12 --with-kubesphere v3.4.0
+```
+
+To upgrade Kubernetes to a specific version, explicitly provide the version after the flag `--with-kubernetes`. Available versions are v1.20.x, v1.21.x, v1.22.x\*, v1.23.x\*, and v1.24.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+
+### Multi-node cluster
+
+#### Step 1: Generate a configuration file using KubeKey
+
+Run the following command to create a configuration file `sample.yaml` that describes your cluster:
+
+```bash
+./kk create config --from-cluster
+```
+
+{{< notice note >}}
+
+This command assumes that your kubeconfig is located in `~/.kube/config`. You can change the path with the `--kubeconfig` flag.
+
+{{</ notice >}}
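+
+For instance, if your kubeconfig is stored elsewhere, pass the flag explicitly (the path below is a placeholder):
+
+```bash
+./kk create config --from-cluster --kubeconfig /path/to/your/kubeconfig
+```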
+
+#### Step 2: Edit the configuration file template
+
+Edit `sample.yaml` based on your cluster configuration. Make sure you replace the following fields correctly.
+
+- `hosts`: The basic information of your hosts (hostname and IP address) and how to connect to them using SSH.
+- `roleGroups.etcd`: Your etcd nodes.
+- `controlPlaneEndpoint`: Your load balancer address (optional).
+- `registry`: Your image registry information (optional).
+
+{{< notice note >}}
+
+For more information, see [Edit the configuration file](../../installing-on-linux/introduction/multioverview/#2-edit-the-configuration-file) or refer to the `Cluster` section of [the complete configuration file](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md).
+
+{{</ notice >}}
+
+#### Step 3: Upgrade your cluster
+
+The following command upgrades your cluster to KubeSphere 3.4 and Kubernetes v1.22.12:
+
+```bash
+./kk upgrade --with-kubernetes v1.22.12 --with-kubesphere v3.4.0 -f sample.yaml
+```
+
+To upgrade Kubernetes to a specific version, explicitly provide the version after the flag `--with-kubernetes`. Available versions are v1.20.x, v1.21.x, v1.22.x\*, v1.23.x\*, and v1.24.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+
+{{< notice note >}}
+
+To use new features of KubeSphere 3.4, you may need to enable some pluggable components after the upgrade.
+
+{{</ notice >}}
\ No newline at end of file
diff --git a/content/en/docs/v3.4/upgrade/what-changed.md b/content/en/docs/v3.4/upgrade/what-changed.md
new file mode 100644
index 000000000..52a90aa6c
--- /dev/null
+++ b/content/en/docs/v3.4/upgrade/what-changed.md
@@ -0,0 +1,11 @@
+---
+title: "Changes after Upgrade"
+keywords: "Kubernetes, upgrade, KubeSphere, 3.4"
+description: "Understand what will be changed after the upgrade."
+linkTitle: "Changes after Upgrade"
+weight: 7600
+---
+
+This section covers changes to existing settings from previous versions after the upgrade. If you want to know all the new features and enhancements in KubeSphere 3.4, see [Release Notes for 3.4.0](../../../v3.4/release/release-v340/).
+
+
diff --git a/content/en/docs/v3.4/workspace-administration/_index.md b/content/en/docs/v3.4/workspace-administration/_index.md
new file mode 100644
index 000000000..301a75c9f
--- /dev/null
+++ b/content/en/docs/v3.4/workspace-administration/_index.md
@@ -0,0 +1,16 @@
+---
+title: "Workspace Administration and User Guide"
+description: "This chapter helps you to better manage KubeSphere workspaces."
+layout: "second"
+
+linkTitle: "Workspace Administration and User Guide"
+
+weight: 9000
+
+icon: "/images/docs/v3.x/docs.svg"
+
+---
+
+KubeSphere tenants work in a workspace to manage projects and apps. Among other responsibilities, workspace administrators manage app repositories. Tenants with the necessary permissions can deploy and use app templates from app repositories. They can also leverage individual app templates that are uploaded and released to the App Store. In addition, administrators control whether a workspace's network is isolated from other workspaces.
+
+This chapter demonstrates how workspace administrators and tenants work at the workspace level.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/workspace-administration/app-repository/_index.md b/content/en/docs/v3.4/workspace-administration/app-repository/_index.md
new file mode 100644
index 000000000..656e5cfaf
--- /dev/null
+++ b/content/en/docs/v3.4/workspace-administration/app-repository/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "App Repositories"
+weight: 9300
+
+_build:
+ render: false
+---
diff --git a/content/en/docs/v3.4/workspace-administration/app-repository/import-helm-repository.md b/content/en/docs/v3.4/workspace-administration/app-repository/import-helm-repository.md
new file mode 100644
index 000000000..4e9a017f1
--- /dev/null
+++ b/content/en/docs/v3.4/workspace-administration/app-repository/import-helm-repository.md
@@ -0,0 +1,52 @@
+---
+title: "Import a Helm Repository"
+keywords: "Kubernetes, Helm, KubeSphere, Application"
+description: "Import a Helm repository to KubeSphere to provide app templates for tenants in a workspace."
+linkTitle: "Import a Helm Repository"
+weight: 9310
+---
+
+KubeSphere builds app repositories that allow users to use Kubernetes applications based on Helm charts. App repositories are powered by [OpenPitrix](https://github.com/openpitrix/openpitrix), an open source platform for cross-cloud application management sponsored by QingCloud. In an app repository, every application serves as a base package library. To deploy and manage an app from an app repository, you need to create the repository in advance.
+
+To create a repository, you use an HTTP/HTTPS server or object storage solutions to store packages. More specifically, an app repository relies on external storage independent of OpenPitrix, such as [MinIO](https://min.io/) object storage, [QingStor object storage](https://github.com/qingstor), and [AWS object storage](https://aws.amazon.com/what-is-cloud-object-storage/). These object storage services are used to store configuration packages and index files created by developers. After a repository is registered, the configuration packages are automatically indexed as deployable applications.
+
+This tutorial demonstrates how to add an app repository to KubeSphere.
+
+## Prerequisites
+
+- You need to enable the [KubeSphere App Store (OpenPitrix)](../../../pluggable-components/app-store/).
+- You need to have an app repository. Refer to [the official documentation of Helm](https://v2.helm.sh/docs/developing_charts/#the-chart-repository-guide) to create repositories or [upload your own apps to the public repository of KubeSphere](../upload-app-to-public-repository/). Alternatively, use the example repository in the steps below, which is only for demonstration purposes.
+- You need to create a workspace and a user (`ws-admin`). The user must be granted the role of `workspace-admin` in the workspace. For more information, refer to [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Add an App Repository
+
+1. Log in to the web console of KubeSphere as `ws-admin`. In your workspace, go to **App Repositories** under **App Management**, and then click **Add**.
+
+2. In the dialog that appears, specify an app repository name and add your repository URL. For example, enter `https://charts.kubesphere.io/main`.
+
+ - **Name**: Set a simple and clear name for the repository, which is easy for users to identify.
+ - **URL**: Follow the RFC 3986 specification with the following three protocols supported:
+ - S3: The URL is S3-styled, such as `s3.
on the right of a user, and click **OK** for the displayed message to assign the user to the department.
+
+ {{< notice note >}}
+
+ * If permissions provided by the department overlap with existing permissions of the user, new permissions are added to the user. Existing permissions of the user are not affected.
+ * Users assigned to a department can perform operations according to the workspace role, project roles, and DevOps project roles associated with the department without being invited to the workspace, projects, and DevOps projects.
+
+   {{</ notice >}}
+
+## Remove a User from a Department
+
+1. On the **Departments** page, select a department in the department tree on the left and click **Assigned** on the right.
+2. In the assigned user list, click
on the right of a user, enter the username in the displayed dialog box, and click **OK** to remove the user.
+
+## Delete and Edit a Department
+
+1. On the **Departments** page, click **Set Departments**.
+
+2. In the **Set Departments** dialog box, on the left, click the upper level of the department to be edited or deleted.
+
+3. Click
on the right of the department to edit it.
+
+ {{< notice note >}}
+
+ For details, see [Create a Department](#create-a-department).
+
+   {{</ notice >}}
+
+4. Click
on the right of the department, enter the department name in the displayed dialog box, and click **OK** to delete the department.
+
+ {{< notice note >}}
+
+ * If a department contains sub-departments, the sub-departments will also be deleted.
+ * After a department is deleted, the associated roles will be unbound from the users.
+
+   {{</ notice >}}
\ No newline at end of file
diff --git a/content/en/docs/v3.4/workspace-administration/project-quotas.md b/content/en/docs/v3.4/workspace-administration/project-quotas.md
new file mode 100644
index 000000000..ad59de15f
--- /dev/null
+++ b/content/en/docs/v3.4/workspace-administration/project-quotas.md
@@ -0,0 +1,56 @@
+---
+title: "Project Quotas"
+keywords: 'KubeSphere, Kubernetes, projects, quotas, resources, requests, limits'
+description: 'Set requests and limits to control resource usage in a project.'
+linkTitle: "Project Quotas"
+weight: 9600
+---
+
+KubeSphere uses [Kubernetes requests and limits](https://kubesphere.io/blogs/understand-requests-and-limits-in-kubernetes/) to control resource (for example, CPU and memory) usage in a project, also known as [resource quotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) in Kubernetes. Requests make sure a project can get the resources it needs, as they are specifically guaranteed and reserved. In contrast, limits ensure that a project can never use resources above a certain value.
+
+Besides CPU and memory, you can also set resource quotas for other objects separately such as Pods, [Deployments](../../project-user-guide/application-workloads/deployments/), [Jobs](../../project-user-guide/application-workloads/jobs/), [Services](../../project-user-guide/application-workloads/services/), and [ConfigMaps](../../project-user-guide/configuration/configmaps/) in a project.
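+
+Under the hood, these settings correspond to a Kubernetes `ResourceQuota` object in the project's namespace. A sketch of what an equivalent object could look like (all values are illustrative):
+
+```yaml
+apiVersion: v1
+kind: ResourceQuota
+metadata:
+  name: quota-example
+  namespace: demo-project
+spec:
+  hard:
+    requests.cpu: "1"              # total CPU reserved for the project
+    requests.memory: 1Gi
+    limits.cpu: "2"                # hard ceiling on CPU usage
+    limits.memory: 2Gi
+    count/deployments.apps: "10"   # object count quota for Deployments
+    count/configmaps: "20"
+```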
+
+This tutorial demonstrates how to configure quotas for a project.
+
+## Prerequisites
+
+You have an available workspace, a project and a user (`ws-admin`). The user must have the `admin` role at the workspace level. For more information, see [Create Workspaces, Projects, Users and Roles](../../quick-start/create-workspace-and-project/).
+
+{{< notice note >}}
+
+If you use the user `project-admin` (a user with the `admin` role at the project level), you can also set project quotas for a new project (that is, a project whose quotas remain unset). However, `project-admin` cannot change project quotas once they are set. Generally, it is the responsibility of `ws-admin` to set limits and requests for a project. `project-admin` is responsible for [setting limit ranges](../../project-administration/container-limit-ranges/) for containers in a project.
+
+{{</ notice >}}
+
+## Set Project Quotas
+
+1. Log in to the console as `ws-admin` and go to a project. On the **Overview** page, you can see project quotas remain unset if the project is newly created. Click **Edit Quotas** to configure quotas.
+
+2. In the displayed dialog box, you can see that KubeSphere does not set any requests or limits for a project by default. To set requests and
+limits to control CPU and memory resources, drag the slider to a desired value or enter numbers directly. Leaving a field blank means no requests or limits are set.
+
+ {{< notice note >}}
+
+ The limit can never be lower than the request.
+
+   {{</ notice >}}
+
+3. To set quotas for other resources, click **Add** under **Project Resource Quotas**, and then select a resource or enter a resource name and set a quota.
+
+4. Click **OK** to finish setting quotas.
+
+5. Go to **Basic Information** in **Project Settings**, and you can see all resource quotas for the project.
+
+6. To change project quotas, click **Edit Project** on the **Basic Information** page and select **Edit Project Quotas**.
+
+ {{< notice note >}}
+
+   For [a multi-cluster project](../../project-administration/project-and-multicluster-project/#multi-cluster-projects), the option **Edit Project Quotas** is not displayed in the **Manage Project** drop-down menu. To set quotas for a multi-cluster project, go to **Project Quotas** under **Project Settings** and click **Edit Quotas**. Note that because a multi-cluster project runs across clusters, you can set resource quotas on different clusters separately.
+
+   {{</ notice >}}
+
+7. Change project quotas in the dialog that appears and click **OK**.
+
+## See Also
+
+[Container Limit Ranges](../../project-administration/container-limit-ranges/)
diff --git a/content/en/docs/v3.4/workspace-administration/role-and-member-management.md b/content/en/docs/v3.4/workspace-administration/role-and-member-management.md
new file mode 100644
index 000000000..8bbe3884a
--- /dev/null
+++ b/content/en/docs/v3.4/workspace-administration/role-and-member-management.md
@@ -0,0 +1,61 @@
+---
+title: "Workspace Role and Member Management"
+keywords: "Kubernetes, workspace, KubeSphere, multitenancy"
+description: "Customize a workspace role and grant it to tenants."
+linkTitle: "Workspace Role and Member Management"
+weight: 9400
+---
+
+This tutorial demonstrates how to manage roles and members in a workspace.
+
+## Prerequisites
+
+At least one workspace has been created, such as `demo-workspace`. Besides, you need a user of the `workspace-admin` role (for example, `ws-admin`) at the workspace level. For more information, see [Create Workspaces, Projects, Users and Roles](../../quick-start/create-workspace-and-project/).
+
+{{< notice note >}}
+
+The actual role name follows a naming convention: `workspace name-role name`. For example, for a workspace named `demo-workspace`, the actual role name of the role `admin` is `demo-workspace-admin`.
+
+{{</ notice >}}
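+
+If you prefer the command line, you can observe this naming convention directly on the cluster. A sketch, assuming the KubeSphere IAM CRDs are available:
+
+```bash
+# Roles of the demo workspace all carry the "demo-workspace-" prefix.
+kubectl get workspaceroles.iam.kubesphere.io | grep demo-workspace
+```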
+
+## Built-in Roles
+
+In **Workspace Roles**, there are four available built-in roles. Built-in roles are created automatically by KubeSphere when a workspace is created and they cannot be edited or deleted. You can only view permissions included in a built-in role or assign it to a user.
+
+| Built-in Roles | Description |
+| ------------------ | ------------------------------------------------------------ |
+| `workspace-viewer` | Workspace viewer who can view all resources in the workspace. |
+| `workspace-self-provisioner` | Workspace regular member who can view workspace settings, manage app templates, and create projects and DevOps projects. |
+| `workspace-regular` | Workspace regular member who can view workspace settings. |
+| `workspace-admin` | Workspace administrator who has full control over all resources in the workspace. |
+
+To view the permissions that a role contains:
+
+1. Log in to the console as `ws-admin`. In **Workspace Roles**, click a role (for example, `workspace-admin`) and you can see role details.
+
+2. Click the **Authorized Users** tab to see all the users that are granted the role.
+
+## Create a Workspace Role
+
+1. Navigate to **Workspace Roles** under **Workspace Settings**.
+
+2. In **Workspace Roles**, click **Create** and set a role **Name** (for example, `demo-project-admin`). Click **Edit Permissions** to continue.
+
+3. In the pop-up window, permissions are categorized into different **Modules**. In this example, click **Project Management** and select **Project Creation**, **Project Management**, and **Project Viewing** for this role. Click **OK** to finish creating the role.
+
+ {{< notice note >}}
+
+ **Depends on** means the major permission (the one listed after **Depends on**) needs to be selected first so that the affiliated permission can be assigned.
+
+   {{</ notice >}}
+
+4. Newly-created roles will be listed in **Workspace Roles**. To edit the information or permissions, or delete an existing role, click
on the right.
+
+## Invite a New Member
+
+1. Navigate to **Workspace Members** under **Workspace Settings**, and click **Invite**.
+2. Invite a user to the workspace by clicking
 on the right of the user and assigning a role to the user.
+
+3. After you add the user to the workspace, click **OK**. In **Workspace Members**, you can see the user in the list.
+
+4. To edit the role of an existing user or remove the user from the workspace, click
on the right and select the corresponding operation.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/workspace-administration/upload-helm-based-application.md b/content/en/docs/v3.4/workspace-administration/upload-helm-based-application.md
new file mode 100644
index 000000000..1a2236f91
--- /dev/null
+++ b/content/en/docs/v3.4/workspace-administration/upload-helm-based-application.md
@@ -0,0 +1,38 @@
+---
+title: "Upload Helm-based Applications"
+keywords: "Kubernetes, Helm, KubeSphere, OpenPitrix, Application"
+description: "Learn how to upload a Helm-based application as an app template to your workspace."
+linkTitle: "Upload Helm-based Applications"
+weight: 9200
+---
+
+KubeSphere provides full lifecycle management for applications. Among other things, workspace administrators can upload or create new app templates and test them quickly. Furthermore, they can publish well-tested apps to the [App Store](../../application-store/) so that other users can deploy them with one click. To develop app templates, workspace administrators need to upload packaged [Helm charts](https://helm.sh/) to KubeSphere first.
+
+This tutorial demonstrates how to develop an app template by uploading a packaged Helm chart.
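+
+If you are building the chart yourself rather than downloading the example, the Helm CLI produces the `.tgz` package that KubeSphere expects. A minimal sketch (the chart name is illustrative):
+
+```bash
+# Scaffold a chart skeleton, then package it into nginx-0.1.0.tgz for upload.
+helm create nginx
+helm package nginx
+```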
+
+## Prerequisites
+
+- You need to enable the [KubeSphere App Store (OpenPitrix)](../../pluggable-components/app-store/).
+- You need to create a workspace and a user (`project-admin`). The user must be invited to the workspace with the role of `workspace-self-provisioner`. For more information, refer to [Create Workspaces, Projects, Users and Roles](../../quick-start/create-workspace-and-project/).
+
+## Hands-on Lab
+
+1. Log in to KubeSphere as `project-admin`. In your workspace, go to **App Templates** under **App Management**, and click **Create**.
+
+2. In the dialog that appears, click **Upload**. You can upload your own Helm chart or download the [Nginx chart](/files/application-templates/nginx-0.1.0.tgz) and use it as an example for the following steps.
+
+3. After the package is uploaded, click **OK** to continue.
+
+4. You can view the basic information of the app under **App Information**. To upload an icon for the app, click **Upload Icon**. You can also skip it and click **OK** directly.
+
+   {{< notice note >}}
+
+   Maximum accepted resolution of the app icon: 96 x 96 pixels.
+
+   {{</ notice >}}
+
+5. The app appears in the template list with the status **Developing** after it is successfully uploaded, which means the app is under development. The uploaded app is visible to all members in the same workspace.
+
+6. Click the app and the page opens with the **Versions** tab selected. Click the draft version to expand the menu, where you can see options including **Delete**, **Install**, and **Submit for Release**.
+
+7. For more information about how to release your app to the App Store, refer to [Application Lifecycle Management](../../application-store/app-lifecycle-management/#step-2-upload-and-submit-application).
diff --git a/content/en/docs/v3.4/workspace-administration/what-is-workspace.md b/content/en/docs/v3.4/workspace-administration/what-is-workspace.md
new file mode 100644
index 000000000..98e650db7
--- /dev/null
+++ b/content/en/docs/v3.4/workspace-administration/what-is-workspace.md
@@ -0,0 +1,83 @@
+---
+title: "Workspace Overview"
+keywords: "Kubernetes, KubeSphere, workspace"
+description: "Understand the concept of workspaces in KubeSphere and learn how to create and delete a workspace."
+
+linkTitle: "Workspace Overview"
+weight: 9100
+---
+
+A workspace is a logical unit to organize your [projects](../../project-administration/) and [DevOps projects](../../devops-user-guide/) and manage [app templates](../upload-helm-based-application/) and app repositories. It is the place for you to control resource access and share resources within your team in a secure way.
+
+It is a best practice to create a new workspace for tenants (excluding cluster administrators). The same tenant can work in multiple workspaces, and a workspace allows multiple tenants to access it in different ways.
+
+This tutorial demonstrates how to create and delete a workspace.
+
+## Prerequisites
+
+You have a user granted the role of `workspaces-manager`, such as `ws-manager` in [Create Workspaces, Projects, Users and Roles](../../quick-start/create-workspace-and-project/).
+
+## Create a Workspace
+
+1. Log in to the web console of KubeSphere as `ws-manager`. Click **Platform** in the upper-left corner, and then select **Access Control**. On the **Workspaces** page, click **Create**.
+
+
+2. For a single-node cluster, on the **Basic Information** page, specify a name for the workspace and select an administrator from the drop-down list. Click **Create**.
+
+ - **Name**: Set a name for the workspace which serves as a unique identifier.
+ - **Alias**: An alias name for the workspace.
+ - **Administrator**: User that administers the workspace.
+ - **Description**: A brief introduction of the workspace.
+
+   For a multi-node cluster, after the basic information about the workspace is set, click **Next** to continue. On the **Cluster Settings** page, select the clusters to be used in the workspace, and then click **Create**.
+
+3. The workspace is displayed in the workspace list after it is created.
+
+4. Click the workspace and you can see resource status of the workspace on the **Overview** page.
+
+## Delete a Workspace
+
+In KubeSphere, you use a workspace to group and manage different projects, which means the lifecycle of a project is dependent on the workspace. More specifically, all the projects and related resources in a workspace will be deleted if the workspace is deleted.
+
+Before you delete a workspace, decide whether you want to unbind some key projects.
+
+### Unbind projects before deletion
+
+To delete a workspace while preserving some projects in it, run the following command first:
+
+```bash
+kubectl label ns
, click **kubectl**, and then run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
+1. Log in to KubeSphere as `admin` and move the cursor to the lower-right corner
, click **kubectl**, and then run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
```bash
kubectl -n kubesphere-system edit cc ks-installer
diff --git a/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/oidc-identity-provider.md b/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/oidc-identity-provider.md
index fa144bc98..65353f753 100644
--- a/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/oidc-identity-provider.md
+++ b/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/oidc-identity-provider.md
@@ -17,7 +17,7 @@ weight: 12221
## Steps
-1. Log in to KubeSphere as `admin` and move the cursor to the lower-right corner
, click **kubectl**, and then run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
+1. Log in to KubeSphere as `admin` and move the cursor to the lower-right corner
, click **kubectl**, and then run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
```bash
kubectl -n kubesphere-system edit cc ks-installer
diff --git a/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/set-up-external-authentication.md b/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/set-up-external-authentication.md
index bc880d62b..948fbebd1 100644
--- a/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/set-up-external-authentication.md
+++ b/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/set-up-external-authentication.md
@@ -18,7 +18,7 @@ KubeSphere provides a built-in OAuth service. Users log in by obtaining OAuth access
## Steps
-1. Log in to KubeSphere as `admin` and move the cursor to the lower-right corner
, click **kubectl**, and then run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
+1. Log in to KubeSphere as `admin` and move the cursor to the lower-right corner
, click **kubectl**, and then run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
```bash
kubectl -n kubesphere-system edit cc ks-installer
diff --git a/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/use-an-ldap-service.md b/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/use-an-ldap-service.md
index a488de9f2..738be3efc 100644
--- a/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/use-an-ldap-service.md
+++ b/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/use-an-ldap-service.md
@@ -16,7 +16,7 @@ weight: 12220
## Steps
-1. Log in to KubeSphere as `admin` and move the cursor to the lower-right corner
, click **kubectl**, and then run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
+1. Log in to KubeSphere as `admin` and move the cursor to the lower-right corner
, click **kubectl**, and then run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
```bash
kubectl -n kubesphere-system edit cc ks-installer
diff --git a/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/use-an-oauth2-identity-provider.md b/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/use-an-oauth2-identity-provider.md
index 41f8e443f..20d3f47f5 100644
--- a/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/use-an-oauth2-identity-provider.md
+++ b/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/use-an-oauth2-identity-provider.md
@@ -10,7 +10,7 @@ weight: 12230
The following figure shows the authentication process between KubeSphere and an external OAuth 2.0 identity provider.
-
+
## Prerequisites
@@ -81,7 +81,7 @@ KubeSphere provides two built-in OAuth 2.0 plugins: GitHub's [GitHubIdentit
## Integrate an Identity Provider
-1. Log in to KubeSphere as `admin` and move the cursor to the lower-right corner
, click **kubectl**, and then run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
+1. Log in to KubeSphere as `admin` and move the cursor to the lower-right corner
, click **kubectl**, and then run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
```bash
kubectl -n kubesphere-system edit cc ks-installer
@@ -126,5 +126,5 @@ KubeSphere provides two built-in OAuth 2.0 plugins: GitHub's [GitHubIdentit
6. On the login page of the external identity provider, enter the username and password configured for the identity provider to log in to KubeSphere.
- 
+ 
diff --git a/content/zh/docs/v3.3/access-control-and-account-management/multi-tenancy-in-kubesphere.md b/content/zh/docs/v3.3/access-control-and-account-management/multi-tenancy-in-kubesphere.md
index a82d14e8e..75fd6fb26 100644
--- a/content/zh/docs/v3.3/access-control-and-account-management/multi-tenancy-in-kubesphere.md
+++ b/content/zh/docs/v3.3/access-control-and-account-management/multi-tenancy-in-kubesphere.md
@@ -24,7 +24,7 @@ Kubernetes solves the problems of application orchestration and container scheduling, greatly improving resource
To solve the above problems, KubeSphere provides a multi-tenant management solution based on Kubernetes.
-
+
In KubeSphere, a [workspace](../../workspace-administration/what-is-workspace/) is the smallest tenant unit. A workspace provides the ability to share resources across clusters and projects (namespaces in Kubernetes). Members of a workspace can create projects in authorized clusters and collaborate on projects by invitation and authorization.
@@ -54,4 +54,4 @@ KubeSphere also provides user-oriented [auditing logs](../../pluggable-components/
The complete KubeSphere authentication and authorization chain is shown in the figure below. Kubernetes RBAC rules can be extended through OPA. The KubeSphere team plans to integrate [Gatekeeper](https://github.com/open-policy-agent/gatekeeper) to support richer security control policies.
-
+
diff --git a/content/zh/docs/v3.3/application-store/_index.md b/content/zh/docs/v3.3/application-store/_index.md
index 26fc4d589..4ce4d34d0 100644
--- a/content/zh/docs/v3.3/application-store/_index.md
+++ b/content/zh/docs/v3.3/application-store/_index.md
@@ -7,7 +7,7 @@ layout: "second"
linkTitle: "应用商店"
weight: 14000
-icon: "/images/docs/v3.3/docs.svg"
+icon: "/images/docs/v3.x/docs.svg"
---
diff --git a/content/zh/docs/v3.3/application-store/app-lifecycle-management.md b/content/zh/docs/v3.3/application-store/app-lifecycle-management.md
index 020b0780c..0089397d4 100644
--- a/content/zh/docs/v3.3/application-store/app-lifecycle-management.md
+++ b/content/zh/docs/v3.3/application-store/app-lifecycle-management.md
@@ -139,7 +139,7 @@ KubeSphere integrates [OpenPitrix](https://github.com/openpitrix/openpitrix) (
`reviewer` can create multiple categories based on the features and purposes of different types of applications. Similar to setting labels, categories can be used as filters in the App Store, such as Big Data, Middleware, and IoT.
-1. Log in to KubeSphere as `reviewer`. To create a category, go to the **App Store Management** page and click the icon on the **App Categories** page
.
+1. Log in to KubeSphere as `reviewer`. To create a category, go to the **App Store Management** page and click the icon on the **App Categories** page
.
2. Set a category name and icon in the displayed dialog box, and then click **OK**. For Redis, you can set **Category Name** to `Database`.
diff --git a/content/zh/docs/v3.3/application-store/built-in-apps/chaos-mesh-app.md b/content/zh/docs/v3.3/application-store/built-in-apps/chaos-mesh-app.md
index 8cdb60fa4..6393486d0 100644
--- a/content/zh/docs/v3.3/application-store/built-in-apps/chaos-mesh-app.md
+++ b/content/zh/docs/v3.3/application-store/built-in-apps/chaos-mesh-app.md
@@ -7,7 +7,7 @@ linkTitle: "Deploy Chaos Mesh"
[Chaos Mesh](https://github.com/chaos-mesh/chaos-mesh) is an open-source cloud-native chaos engineering platform that provides rich fault simulation types and powerful fault scenario orchestration capabilities, helping users simulate exceptions that may occur in the real world during development, testing, and production, and discover potential problems in the system.
-
+
This tutorial demonstrates how to deploy Chaos Mesh on KubeSphere to run chaos experiments.
@@ -23,38 +23,38 @@ linkTitle: "Deploy Chaos Mesh"
1. Log in as `project-regular`, search for `chaos-mesh` in the App Store, and click the search result to enter the app.
- 
+ 
2. On the app information page, click **Install** in the upper-right corner.
- 
+ 
3. On the app settings page, you can set the app **Name** (a unique name is generated randomly by default), select the installation **Location** (the corresponding namespace) and **Version**, and then click **Next** in the upper-right corner.
- 
+ 
4. Edit the `values.yaml` file as needed, or click **Install** directly to use the default configuration.
- 
+ 
5. Wait until Chaos Mesh is up and running.
- 
+ 
6. Go to **Application Workloads**, and you can see the three Deployments created by Chaos Mesh.
- 
+ 
### Step 2: Access Chaos Mesh
1. Go to the Services page under **Application Workloads** and copy the **NodePort** of chaos-dashboard.
- 
+ 
2. You can access the Chaos Dashboard at `${NodeIP}:${NODEPORT}`. Refer to [Manage User Permissions](https://chaos-mesh.org/zh/docs/manage-user-permissions/) to generate a token and log in to the Chaos Dashboard.
- 
+ 
### Step 3: Create a chaos experiment
@@ -72,22 +72,22 @@ linkTitle: "Deploy Chaos Mesh"
2. Access the **web-show** application. In your web browser, go to `${NodeIP}:8081`.
- 
+ 
3. Log in to the Chaos Dashboard to create a chaos experiment. To better observe the effect of the experiment, only a single standalone experiment is created here. Select **Network Attack** as the experiment type to simulate a network delay scenario:
- 
+ 
Set the experiment scope to the web-show application:
- 
+ 
4. After submitting the chaos experiment, check its status:
- 
+ 
5. Access the web-show application to observe the experiment results:
- 
+ 
For more details, see the [Chaos Mesh documentation](https://chaos-mesh.org/zh/docs/).
\ No newline at end of file
diff --git a/content/zh/docs/v3.3/application-store/built-in-apps/harbor-app.md b/content/zh/docs/v3.3/application-store/built-in-apps/harbor-app.md
index eae277b2f..12b57767f 100644
--- a/content/zh/docs/v3.3/application-store/built-in-apps/harbor-app.md
+++ b/content/zh/docs/v3.3/application-store/built-in-apps/harbor-app.md
@@ -49,7 +49,7 @@ weight: 14220
1. The access method may differ depending on the `expose.type` field in the configuration file. This example uses `nodePort` to access Harbor, so go to `http://nodeIP:30002` as set in the previous steps.
- 
+ 
{{< notice note >}}
@@ -59,7 +59,7 @@ weight: 14220
2. Log in to Harbor with the default account and password (`admin/Harbor12345`). The password is defined by the `harborAdminPassword` field in the configuration file.
- 
+ 
## FAQ
diff --git a/content/zh/docs/v3.3/application-store/built-in-apps/jh-gitlab.md b/content/zh/docs/v3.3/application-store/built-in-apps/jh-gitlab.md
index 3e83bdc93..017ac0856 100644
--- a/content/zh/docs/v3.3/application-store/built-in-apps/jh-gitlab.md
+++ b/content/zh/docs/v3.3/application-store/built-in-apps/jh-gitlab.md
@@ -20,47 +20,47 @@ linkTitle: "Deploy 极狐GitLab"
1. Create a `Workspace`:
-
+
2. Create a `Project`
-
+
3. Under `App` in `Application Workload` in the left navigation pane, create an `App`:
-
+
4. On the installation options page that appears, select **From App Store**:
-
+
5. Search for **jh** in the `App Store`, and the **jh-gitlab** app appears:
-
+
6. Click the jh-gitlab app and click `install` on the page that appears to start the installation. Fill in the basic information in the form, and then click `next`:
-
+
7. Next, fill in the app settings as needed (that is, the content of the values.yaml file; for details, see the [极狐GitLab Helm chart documentation](https://jihulab.com/gitlab-cn/charts/gitlab/-/blob/main-jh/values.yaml)).
-
+
8. Then click `install` to start the installation. The whole process takes a while. Finally, you can see the successfully installed 极狐GitLab application under the `App` option of `Application Workload`:
-
+
9. For debugging, you can use the KubeSphere toolbox (the hammer icon in the red box in the lower-right corner of the figure below) to view the Kubernetes resources corresponding to the installed 极狐GitLab instance:
-
+
10. The `Pod` and `Ingress` content is as follows:
-
+
11. Use `gitlab.jihu-xiaomage.cn` (set the access domain name according to your own needs) to access the installed 极狐GitLab instance:
-
+
You can now start your DevOps journey with the 极狐GitLab instance.
diff --git a/content/zh/docs/v3.3/application-store/built-in-apps/minio-app.md b/content/zh/docs/v3.3/application-store/built-in-apps/minio-app.md
index 9ef720444..2a56fa57d 100644
--- a/content/zh/docs/v3.3/application-store/built-in-apps/minio-app.md
+++ b/content/zh/docs/v3.3/application-store/built-in-apps/minio-app.md
@@ -45,9 +45,9 @@ weight: 14240
6. Via `
, and select an operation from the drop-down menu:
+Click the icon on the right of the project gateway
, and select an operation from the drop-down menu:
- **Edit**: Edit the configuration of the project gateway.
- **Disable**: Disable the project gateway.
diff --git a/content/zh/docs/v3.3/cluster-administration/cluster-wide-alerting-and-notification/alerting-policy.md b/content/zh/docs/v3.3/cluster-administration/cluster-wide-alerting-and-notification/alerting-policy.md
index 84e44c2d3..8b3e4332d 100644
--- a/content/zh/docs/v3.3/cluster-administration/cluster-wide-alerting-and-notification/alerting-policy.md
+++ b/content/zh/docs/v3.3/cluster-administration/cluster-wide-alerting-and-notification/alerting-policy.md
@@ -49,7 +49,7 @@ KubeSphere also has built-in policies. Once the conditions defined for these policies are met,
## Edit Alerting Policies
-To edit an alerting policy after it is created, click the icon on the right on the **Alerting Policies** page
.
+To edit an alerting policy after it is created, click the icon on the right on the **Alerting Policies** page
.
1. Click **Edit** in the drop-down menu and edit the alerting policy following the same steps as when creating it. Click **OK** on the **Message Settings** page to save your changes.
@@ -63,8 +63,8 @@ KubeSphere also has built-in policies. Once the conditions defined for these policies are met,
{{< notice note >}}
-You can click the icon in the upper-right corner
, and then select **Add Sub-department**.
+1. On the **Contacts** page, under the **Organization** tab, click the icon on the right of **Test** (this tutorial uses the `Test` department as an example)
, and then select **Add Sub-department**.
2. In the displayed dialog box, enter a department name (for example, `Test Group 2`), and then click **OK**.
3. After the department is created, you can click **Add Member**, **Batch Import**, or **Move In from Other Departments** on the right to add members. After adding a member, click the member to go to the details page and view the account.
-4. You can click the icon on the right of `Test Group 2`
to view its department ID.
+4. You can click the icon on the right of `Test Group 2`
to view its department ID.
5. Click the **Tags** tab, and then click **Add Tag** to create a tag. If the management UI has no **Tags** tab, click the plus icon to create a tag.
@@ -72,7 +72,7 @@ weight: 8724
- The operators **Exists** and **Does Not Exist** check whether a label exists, and no label value is required.
{{</ notice >}}
- You can click **Add** to add multiple notification conditions, or click the icon on the right of a notification condition
.
+1. Log in to the Nexus console with the `admin` account, and then click the icon in the top navigation bar
.
2. Go to the **Repositories** page, and you can see that Nexus provides three repository types.
@@ -44,9 +44,9 @@ weight: 11450
2. In your **learn-pipline-java** GitHub repository, click the `pom.xml` file in the root directory.
-3. In the file, click the icon
to edit the file. For example, change the value of `spec.replicas` to `3`.
+3. Click the icon
to edit the file. For example, change the value of `spec.replicas` to `3`.
4. Click **Commit changes** at the bottom of the page.
diff --git a/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md b/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md
index 4d7d6b9a2..2093e6083 100644
--- a/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md
+++ b/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md
@@ -77,16 +77,16 @@ KubeSphere provides a graphical editing panel where you can interactively define Jenki
- CI pipeline template
- 
+ 
- 
+ 
The CI pipeline template contains two stages. The **clone code** stage checks out the code, and the **build & push** stage builds an image and pushes it to Docker Hub. You need to create credentials for the code repository and the Docker Hub repository in advance, and then set the repository URLs and credentials in the corresponding steps. After editing, the pipeline is ready to run.
- CI & CD pipeline template
- 
+ 
- 
+ 
The CI & CD pipeline template contains six stages. For more information about each stage, refer to [Create a Pipeline Using a Jenkinsfile](../create-a-pipeline-using-jenkinsfile/#流水线概述), where you can find similar stages and descriptions. You need to create credentials for the code repository, the Docker Hub repository, and the cluster kubeconfig in advance, and then set the repository URLs and credentials in the corresponding steps. After editing, the pipeline is ready to run.
diff --git a/content/zh/docs/v3.3/faq/_index.md b/content/zh/docs/v3.3/faq/_index.md
index af2e47209..10e4c912b 100644
--- a/content/zh/docs/v3.3/faq/_index.md
+++ b/content/zh/docs/v3.3/faq/_index.md
@@ -6,7 +6,7 @@ layout: "second"
linkTitle: "常见问题"
weight: 16000
-icon: "/images/docs/v3.3/docs.svg"
+icon: "/images/docs/v3.x/docs.svg"
---
This chapter summarizes and answers the most frequently asked questions about KubeSphere. Questions are categorized by KubeSphere features, and you can find relevant questions and answers in the corresponding sections.
diff --git a/content/zh/docs/v3.3/faq/access-control/add-kubernetes-namespace-to-kubesphere-workspace.md b/content/zh/docs/v3.3/faq/access-control/add-kubernetes-namespace-to-kubesphere-workspace.md
index d51d22ad7..009cc4fdb 100644
--- a/content/zh/docs/v3.3/faq/access-control/add-kubernetes-namespace-to-kubesphere-workspace.md
+++ b/content/zh/docs/v3.3/faq/access-control/add-kubernetes-namespace-to-kubesphere-workspace.md
@@ -30,7 +30,7 @@ kubectl create ns demo-namespace
1. Log in to the KubeSphere console as `admin` and go to the **Cluster Management** page. Click **Projects** to view all projects running in the current cluster, including the one just created.
-2. A namespace created through kubectl does not belong to any workspace. Click the icon on the right
, and select **Assign Workspace**.
+2. A namespace created through kubectl does not belong to any workspace. Click the icon on the right
, and select **Assign Workspace**.
3. In the displayed dialog box, select a **Workspace** and a **Project Administrator** for the project, and then click **OK**.
diff --git a/content/zh/docs/v3.3/faq/access-control/cannot-login.md b/content/zh/docs/v3.3/faq/access-control/cannot-login.md
index 266a96f96..6bb556d28 100644
--- a/content/zh/docs/v3.3/faq/access-control/cannot-login.md
+++ b/content/zh/docs/v3.3/faq/access-control/cannot-login.md
@@ -14,7 +14,7 @@ KubeSphere automatically creates the default user (`admin/P@88w0rd`) during installation. A wrong password
When login fails, you may see the following prompts. Troubleshoot and resolve the problem with the following steps:
-
+
1. Run the following command to check the user status:
@@ -88,7 +88,7 @@ kubectl -n kubesphere-system get deploy ks-controller-manager -o jsonpath='{.spe
## Incorrect username or password
-
+
Run the following command to check whether the user password is correct:
diff --git a/content/zh/docs/v3.3/faq/console/change-console-language.md b/content/zh/docs/v3.3/faq/console/change-console-language.md
index 62f653412..c0b485b5b 100644
--- a/content/zh/docs/v3.3/faq/console/change-console-language.md
+++ b/content/zh/docs/v3.3/faq/console/change-console-language.md
@@ -22,4 +22,4 @@ The KubeSphere web console currently supports four languages: Simplified Chinese, Traditional Chinese
3. On the **Basic Information** page, select the desired language from the **Language** drop-down list.
-4. Click the icon
, and select **Edit YAML**.
+4. Click the icon on the right of `ks-installer`
, and select **Edit YAML**.
5. Add the `telemetry_enabled: false` field at the end of the file and click **OK**.
diff --git a/content/zh/docs/v3.3/installing-on-kubernetes/_index.md b/content/zh/docs/v3.3/installing-on-kubernetes/_index.md
index 543d6aa84..ddae63bfd 100644
--- a/content/zh/docs/v3.3/installing-on-kubernetes/_index.md
+++ b/content/zh/docs/v3.3/installing-on-kubernetes/_index.md
@@ -6,7 +6,7 @@ layout: "second"
linkTitle: "在 Kubernetes 上安装 KubeSphere"
weight: 4000
-icon: "/images/docs/v3.3/docs.svg"
+icon: "/images/docs/v3.x/docs.svg"
---
This chapter demonstrates how to deploy KubeSphere on existing Kubernetes clusters hosted in the cloud or on premises. KubeSphere provides a highly flexible solution for container orchestration and can be deployed on a variety of Kubernetes engines and services.
@@ -15,4 +15,4 @@ icon: "/images/docs/v3.3/docs.svg"
In the following sections, you will find some of the most popular pages. It is highly recommended that you refer to them first.
-{{< popularPage icon="/images/docs/v3.3/bitmap.jpg" title="Install KubeSphere on AWS EKS" description="Provision KubeSphere on an existing Kubernetes cluster on EKS." link="../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks/" >}}
+{{< popularPage icon="/images/docs/v3.x/bitmap.jpg" title="Install KubeSphere on AWS EKS" description="Provision KubeSphere on an existing Kubernetes cluster on EKS." link="../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks/" >}}
diff --git a/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-ks-on-tencent-tke.md b/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-ks-on-tencent-tke.md
index 388185f36..a12e95844 100644
--- a/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-ks-on-tencent-tke.md
+++ b/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-ks-on-tencent-tke.md
@@ -23,7 +23,7 @@ weight: 4270
- After the cluster is created, go to `Container Service` > `Cluster`, select the newly created cluster, and in the `Basic Information` panel, enable `Public Network Access` under `Cluster APIServer Information`.
- Then click `Download` in the `kubeconfig` list item below to obtain the publicly available kubectl certificate.
-
+
- After obtaining the kubectl configuration file, you can verify the cluster connection with the kubectl command-line tool:
@@ -91,7 +91,7 @@ kubectl apply -f cluster-configuration.yaml
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
```
-
+
### Access the KubeSphere console
@@ -101,7 +101,7 @@ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app
- In `Container Service` > `Cluster`, select the created cluster, and in the `Node Management` > `Nodes` panel, check the `Public IP` of any node (by default, a public IP is bound to each node free of charge during cluster installation).
-
+
- Because NodePort is enabled by default during installation with port 30880, enter `<Public IP>:30880` in your browser and log in to the console with the default account (username `admin`, password `P@88w0rd`).
@@ -109,15 +109,15 @@ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app
- In `Container Service` > `Cluster`, select the created cluster, and in the `Services and Routes` > `Service` panel, click `Update Access Method` in the `ks-console` row.
-
+
- For `Service Access Method`, select `Provide Public Network Access`; for `Port Mapping`, fill in the desired port number for `Service Port`, and click `Update Access Method`.
-
+
- You will then see the LoadBalancer public IP on the page:
-
+
- In your browser, enter `
, and select **Edit YAML** to edit `ks-installer`.
+4. Click the icon on the right
, and select **Edit YAML** to edit `ks-installer`.
5. In the YAML file of `ks-installer`, change the value of `jwtSecret` to the corresponding value shown above and set the value of `clusterRole` to `member`. Click **Update** to save your changes.
@@ -57,7 +57,7 @@ weight: 5310
Log in to the Alibaba Cloud console. Go to **Clusters** under **Container Service - Kubernetes**, click your cluster to access its details page, and then select the **Connection Information** tab. You can see the kubeconfig file under the **Public Network Access** tab. Copy the content of the kubeconfig file.
-
+
### Step 3: Import the ACK member cluster
diff --git a/content/zh/docs/v3.3/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md b/content/zh/docs/v3.3/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md
index 671298e9b..06bb2d5da 100644
--- a/content/zh/docs/v3.3/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md
+++ b/content/zh/docs/v3.3/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md
@@ -37,7 +37,7 @@ weight: 5320
3. Go to **CRDs**, enter `ClusterConfiguration` in the search bar, and press **Enter** on your keyboard. Click **ClusterConfiguration** to access its details page.
-4. Click the icon on the right
, and select **Edit YAML** to edit `ks-installer`.
+4. Click the icon on the right
, and select **Edit YAML** to edit `ks-installer`.
5. In the YAML file of `ks-installer`, change the value of `jwtSecret` to the corresponding value shown above and change the value of `clusterRole` to `member`. Click **Update** to save your changes.
diff --git a/content/zh/docs/v3.3/multicluster-management/import-cloud-hosted-k8s/import-gke.md b/content/zh/docs/v3.3/multicluster-management/import-cloud-hosted-k8s/import-gke.md
index a58c41d85..a5408b428 100644
--- a/content/zh/docs/v3.3/multicluster-management/import-cloud-hosted-k8s/import-gke.md
+++ b/content/zh/docs/v3.3/multicluster-management/import-cloud-hosted-k8s/import-gke.md
@@ -37,7 +37,7 @@ weight: 5330
3. Go to **CRDs**, enter `ClusterConfiguration` in the search bar, and press **Enter** on your keyboard. Click **ClusterConfiguration** to access its details page.
-4. Click the icon on the right
, and select **Edit YAML** to edit `ks-installer`.
+4. Click the icon on the right
, and select **Edit YAML** to edit `ks-installer`.
5. In the YAML file of `ks-installer`, change the value of `jwtSecret` to the corresponding value shown above and change the value of `clusterRole` to `member`.
diff --git a/content/zh/docs/v3.3/multicluster-management/introduction/kubefed-in-kubesphere.md b/content/zh/docs/v3.3/multicluster-management/introduction/kubefed-in-kubesphere.md
index 57b21b6b5..0ad3917e3 100644
--- a/content/zh/docs/v3.3/multicluster-management/introduction/kubefed-in-kubesphere.md
+++ b/content/zh/docs/v3.3/multicluster-management/introduction/kubefed-in-kubesphere.md
@@ -16,7 +16,7 @@ weight: 5120
如果您是使用通过 kubeadm 搭建的自建 Kubernetes 集群,请参阅[离线安装](../../../installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped/)在您的 Kubernetes 集群上安装 KubeSphere,然后通过直接连接或者代理连接来启用 KubeSphere 多集群管理功能。
-
+
## 厂商无锁定
diff --git a/content/zh/docs/v3.3/multicluster-management/introduction/overview.md b/content/zh/docs/v3.3/multicluster-management/introduction/overview.md
index 2f23eaf32..b54f7a833 100644
--- a/content/zh/docs/v3.3/multicluster-management/introduction/overview.md
+++ b/content/zh/docs/v3.3/multicluster-management/introduction/overview.md
@@ -12,4 +12,4 @@ weight: 5110
开发 KubeSphere 旨在解决多集群和多云管理(包括上述使用场景)的难题,为用户提供统一的控制平面,将应用程序及其副本分发到位于公有云和本地环境的多个集群。KubeSphere 还拥有跨多个集群的丰富可观测性,包括集中监控、日志系统、事件和审计日志等。
-
+
diff --git a/content/zh/docs/v3.3/multicluster-management/unbind-cluster.md b/content/zh/docs/v3.3/multicluster-management/unbind-cluster.md
index 04275cad1..086cf2402 100644
--- a/content/zh/docs/v3.3/multicluster-management/unbind-cluster.md
+++ b/content/zh/docs/v3.3/multicluster-management/unbind-cluster.md
@@ -21,7 +21,7 @@ weight: 5500
1. 点击左上角的**平台管理**,选择**集群管理**。
-2. 在**成员集群**区域,点击要从中央控制平面移除的集群右侧的
,选择**编辑 YAML**。
+3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,选择**编辑 YAML**。
4. 在该 YAML 文件中,搜寻到 `alerting`,将 `enabled` 的 `false` 更改为 `true`。完成后,点击右下角的**确定**,保存配置。
@@ -89,7 +89,7 @@ weight: 6600
{{< notice note >}}
-您可以通过点击控制台右下角的
找到 kubectl 工具。
+您可以通过点击控制台右下角的
找到 kubectl 工具。
{{</ notice >}}
## 验证组件的安装
diff --git a/content/zh/docs/v3.3/pluggable-components/app-store.md b/content/zh/docs/v3.3/pluggable-components/app-store.md
index c02f8eeb1..69ce93206 100644
--- a/content/zh/docs/v3.3/pluggable-components/app-store.md
+++ b/content/zh/docs/v3.3/pluggable-components/app-store.md
@@ -78,7 +78,7 @@ weight: 6200
定制资源定义(CRD)允许用户在不增加额外 API 服务器的情况下创建一种新的资源类型,用户可以像使用其他 Kubernetes 原生对象一样使用这些定制资源。
{{</ notice >}}
-3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,选择**编辑 YAML**。
+3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,选择**编辑 YAML**。
4. 在该 YAML 文件中,搜索 `openpitrix`,将 `enabled` 的 `false` 改为 `true`。完成后,点击右下角的**确定**,保存配置。
@@ -96,7 +96,7 @@ weight: 6200
{{< notice note >}}
-您可以通过点击控制台右下角的
找到 kubectl 工具。
+您可以通过点击控制台右下角的
找到 kubectl 工具。
{{</ notice >}}
## 验证组件的安装
diff --git a/content/zh/docs/v3.3/pluggable-components/auditing-logs.md b/content/zh/docs/v3.3/pluggable-components/auditing-logs.md
index 4a7bb9fd8..3aab30eeb 100644
--- a/content/zh/docs/v3.3/pluggable-components/auditing-logs.md
+++ b/content/zh/docs/v3.3/pluggable-components/auditing-logs.md
@@ -106,7 +106,7 @@ KubeSphere 审计日志系统提供了一套与安全相关并按时间顺序排
定制资源定义 (CRD) 允许用户在不新增 API 服务器的情况下创建一种新的资源类型,用户可以像使用其他 Kubernetes 原生对象一样使用这些定制资源。
{{</ notice >}}
-3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,选择**编辑 YAML**。
+3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,选择**编辑 YAML**。
4. 在该 YAML 文件中,搜索 `auditing`,将 `enabled` 的 `false` 改为 `true`。完成后,点击右下角的**确定**,保存配置。
@@ -139,7 +139,7 @@ KubeSphere 审计日志系统提供了一套与安全相关并按时间顺序排
{{< notice note >}}
-您可以点击控制台右下角的
找到 kubectl 工具。
+您可以点击控制台右下角的
找到 kubectl 工具。
{{</ notice >}}
## 验证组件的安装
diff --git a/content/zh/docs/v3.3/pluggable-components/devops.md b/content/zh/docs/v3.3/pluggable-components/devops.md
index 2bb3a7c35..9a5dcbfea 100644
--- a/content/zh/docs/v3.3/pluggable-components/devops.md
+++ b/content/zh/docs/v3.3/pluggable-components/devops.md
@@ -76,7 +76,7 @@ DevOps 系统为用户提供了一个自动化的环境,应用可以自动发
定制资源定义(CRD)允许用户在不新增 API 服务器的情况下创建一种新的资源类型,用户可以像使用其他 Kubernetes 原生对象一样使用这些定制资源。
{{</ notice >}}
-3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,选择**编辑 YAML**。
+3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,选择**编辑 YAML**。
4. 在该 YAML 文件中,搜索 `devops`,将 `enabled` 的 `false` 改为 `true`。完成后,点击右下角的**确定**,保存配置。
@@ -93,7 +93,7 @@ DevOps 系统为用户提供了一个自动化的环境,应用可以自动发
{{< notice note >}}
-您可以点击控制台右下角的
找到 kubectl 工具。
+您可以点击控制台右下角的
找到 kubectl 工具。
{{</ notice >}}
## 验证组件的安装
diff --git a/content/zh/docs/v3.3/pluggable-components/events.md b/content/zh/docs/v3.3/pluggable-components/events.md
index de647fd54..25405342a 100644
--- a/content/zh/docs/v3.3/pluggable-components/events.md
+++ b/content/zh/docs/v3.3/pluggable-components/events.md
@@ -110,7 +110,7 @@ KubeSphere 事件系统使用户能够跟踪集群内部发生的事件,例如
{{</ notice >}}
-3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,选择**编辑 YAML**。
+3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,选择**编辑 YAML**。
4. 在该配置文件中,搜索 `events`,将 `enabled` 的 `false` 改为 `true`。完成后,点击右下角的**确定**,保存配置。
@@ -144,7 +144,7 @@ KubeSphere 事件系统使用户能够跟踪集群内部发生的事件,例如
{{< notice note >}}
-您可以通过点击控制台右下角的
找到 kubectl 工具。
+您可以通过点击控制台右下角的
找到 kubectl 工具。
{{</ notice >}}
diff --git a/content/zh/docs/v3.3/pluggable-components/kubeedge.md b/content/zh/docs/v3.3/pluggable-components/kubeedge.md
index 6e19a6be5..ba6d45d6f 100644
--- a/content/zh/docs/v3.3/pluggable-components/kubeedge.md
+++ b/content/zh/docs/v3.3/pluggable-components/kubeedge.md
@@ -12,7 +12,7 @@ KubeEdge 的组件在两个单独的位置运行——云上和边缘节点上
启用 KubeEdge 后,您可以[为集群添加边缘节点](../../installing-on-linux/cluster-operation/add-edge-nodes/)并在这些节点上部署工作负载。
-
+
## 安装前启用 KubeEdge
@@ -111,7 +111,7 @@ KubeEdge 的组件在两个单独的位置运行——云上和边缘节点上
定制资源定义(CRD)允许用户在不新增 API 服务器的情况下创建一种新的资源类型,用户可以像使用其他 Kubernetes 原生对象一样使用这些定制资源。
{{</ notice >}}
-3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,然后选择**编辑 YAML**。
+3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,然后选择**编辑 YAML**。
4. 在该配置文件中,搜索 `edgeruntime` 和 `kubeedge`,然后将它们 `enabled` 值从 `false` 更改为 `true` 以便开启所有 KubeEdge 组件。完成后保存文件。
@@ -144,7 +144,7 @@ KubeEdge 的组件在两个单独的位置运行——云上和边缘节点上
{{< notice note >}}
-您可以通过点击控制台右下角的
来找到 kubectl 工具。
+您可以通过点击控制台右下角的
来找到 kubectl 工具。
{{</ notice >}}
## 验证组件的安装
diff --git a/content/zh/docs/v3.3/pluggable-components/logging.md b/content/zh/docs/v3.3/pluggable-components/logging.md
index a3fef71e0..d4e9e076a 100644
--- a/content/zh/docs/v3.3/pluggable-components/logging.md
+++ b/content/zh/docs/v3.3/pluggable-components/logging.md
@@ -120,7 +120,7 @@ KubeSphere 为日志收集、查询和管理提供了一个强大的、全面的
{{</ notice >}}
-3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,选择**编辑 YAML**。
+3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,选择**编辑 YAML**。
4. 在该 YAML 文件中,搜索 `logging`,将 `enabled` 的 `false` 改为 `true`。完成后,点击右下角的**确定**以保存配置。
@@ -157,7 +157,7 @@ KubeSphere 为日志收集、查询和管理提供了一个强大的、全面的
{{< notice note >}}
-您可以通过点击控制台右下角的
找到 kubectl 工具。
+您可以通过点击控制台右下角的
找到 kubectl 工具。
{{</ notice >}}
diff --git a/content/zh/docs/v3.3/pluggable-components/metrics-server.md b/content/zh/docs/v3.3/pluggable-components/metrics-server.md
index f804ab4ae..f1bb77648 100644
--- a/content/zh/docs/v3.3/pluggable-components/metrics-server.md
+++ b/content/zh/docs/v3.3/pluggable-components/metrics-server.md
@@ -78,7 +78,7 @@ KubeSphere 支持用于[部署](../../project-user-guide/application-workloads/d
{{</ notice >}}
-3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,选择**编辑 YAML**。
+3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,选择**编辑 YAML**。
4. 在该 YAML 文件中,搜索 `metrics_server`,将 `enabled` 的 `false` 改为 `true`。完成后,点击右下角的**确定**以保存配置。
@@ -95,7 +95,7 @@ KubeSphere 支持用于[部署](../../project-user-guide/application-workloads/d
{{< notice note >}}
-可以通过点击控制台右下角的
找到 kubectl 工具。
+可以通过点击控制台右下角的
找到 kubectl 工具。
{{</ notice >}}
## 验证组件的安装
diff --git a/content/zh/docs/v3.3/pluggable-components/network-policy.md b/content/zh/docs/v3.3/pluggable-components/network-policy.md
index 29c0d7552..3d0a46538 100644
--- a/content/zh/docs/v3.3/pluggable-components/network-policy.md
+++ b/content/zh/docs/v3.3/pluggable-components/network-policy.md
@@ -83,7 +83,7 @@ weight: 6900
定制资源定义(CRD)允许用户在不新增 API 服务器的情况下创建一种新的资源类型,用户可以像使用其他 Kubernetes 原生对象一样使用这些定制资源。
{{</ notice >}}
-3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,选择**编辑 YAML**。
+3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,选择**编辑 YAML**。
4. 在该 YAML 文件中,搜寻到 `network.networkpolicy`,将 `enabled` 的 `false` 改为 `true`。完成后,点击右下角的**确定**,保存配置。
@@ -101,7 +101,7 @@ weight: 6900
{{< notice note >}}
-您可以通过点击控制台右下角的
找到 kubectl 工具。
+您可以通过点击控制台右下角的
找到 kubectl 工具。
{{</ notice >}}
## 验证组件的安装
diff --git a/content/zh/docs/v3.3/pluggable-components/pod-ip-pools.md b/content/zh/docs/v3.3/pluggable-components/pod-ip-pools.md
index eb2520400..b494bbcab 100644
--- a/content/zh/docs/v3.3/pluggable-components/pod-ip-pools.md
+++ b/content/zh/docs/v3.3/pluggable-components/pod-ip-pools.md
@@ -76,7 +76,7 @@ weight: 6920
定制资源定义(CRD)允许用户在不新增 API 服务器的情况下创建一种新的资源类型,用户可以像使用其他 Kubernetes 原生对象一样使用这些定制资源。
{{</ notice >}}
-3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,然后选择**编辑 YAML**。
+3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,然后选择**编辑 YAML**。
4. 在该配置文件中,搜寻到 `network`,将 `network.ippool.type` 更改为 `calico`。完成后,点击右下角的**确定**保存配置。
@@ -94,7 +94,7 @@ weight: 6920
{{< notice note >}}
-您可以通过点击控制台右下角的
来找到 kubectl 工具。
+您可以通过点击控制台右下角的
来找到 kubectl 工具。
{{</ notice >}}
## 验证组件的安装
diff --git a/content/zh/docs/v3.3/pluggable-components/service-mesh.md b/content/zh/docs/v3.3/pluggable-components/service-mesh.md
index 476304a2f..6be7da94a 100644
--- a/content/zh/docs/v3.3/pluggable-components/service-mesh.md
+++ b/content/zh/docs/v3.3/pluggable-components/service-mesh.md
@@ -93,7 +93,7 @@ KubeSphere 服务网格基于 [Istio](https://istio.io/),将微服务治理和
定制资源定义(CRD)允许用户在不新增 API 服务器的情况下创建一种新的资源类型,用户可以像使用其他 Kubernetes 原生对象一样使用这些定制资源。
{{</ notice >}}
-3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,选择**编辑 YAML**。
+3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,选择**编辑 YAML**。
4. 在该配置文件中,搜索 `servicemesh`,并将 `enabled` 的 `false` 改为 `true`。完成后,点击右下角的**确定**,保存配置。
@@ -118,7 +118,7 @@ KubeSphere 服务网格基于 [Istio](https://istio.io/),将微服务治理和
{{< notice note >}}
-您可以通过点击控制台右下角的
找到 kubectl 工具。
+您可以通过点击控制台右下角的
找到 kubectl 工具。
{{</ notice >}}
## 验证组件的安装
diff --git a/content/zh/docs/v3.3/pluggable-components/service-topology.md b/content/zh/docs/v3.3/pluggable-components/service-topology.md
index 83bf661b2..585341106 100644
--- a/content/zh/docs/v3.3/pluggable-components/service-topology.md
+++ b/content/zh/docs/v3.3/pluggable-components/service-topology.md
@@ -76,7 +76,7 @@ weight: 6915
定制资源定义(CRD)允许用户在不新增 API 服务器的情况下创建一种新的资源类型,用户可以像使用其他 Kubernetes 原生对象一样使用这些定制资源。
{{</ notice >}}
-3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,然后选择**编辑 YAML**。
+3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,然后选择**编辑 YAML**。
4. 在该配置文件中,搜寻到 `network`,将 `network.topology.type` 更改为 `weave-scope`。完成后,点击右下角的**确定**保存配置。
@@ -94,7 +94,7 @@ weight: 6915
{{< notice note >}}
-您可以通过点击控制台右下角的
来找到 kubectl 工具。
+您可以通过点击控制台右下角的
来找到 kubectl 工具。
{{</ notice >}}
## 验证组件的安装
diff --git a/content/zh/docs/v3.3/project-administration/_index.md b/content/zh/docs/v3.3/project-administration/_index.md
index 0c7518ed4..b81449813 100644
--- a/content/zh/docs/v3.3/project-administration/_index.md
+++ b/content/zh/docs/v3.3/project-administration/_index.md
@@ -6,7 +6,7 @@ layout: "second"
linkTitle: "项目管理"
weight: 13000
-icon: "/images/docs/v3.3/docs.svg"
+icon: "/images/docs/v3.x/docs.svg"
---
diff --git a/content/zh/docs/v3.3/project-administration/disk-log-collection.md b/content/zh/docs/v3.3/project-administration/disk-log-collection.md
index 63578fba7..c22e5da64 100644
--- a/content/zh/docs/v3.3/project-administration/disk-log-collection.md
+++ b/content/zh/docs/v3.3/project-administration/disk-log-collection.md
@@ -20,7 +20,7 @@ KubeSphere 支持多种日志收集方式,使运维团队能够以灵活统一
1. 以 `project-admin` 身份登录 KubeSphere 的 Web 控制台,进入项目。
-2. 在左侧导航栏中,选择**项目设置**中的**日志收集**,点击
以启用该功能。
+2. 在左侧导航栏中,选择**项目设置**中的**日志收集**,点击
以启用该功能。
## 创建部署
@@ -53,7 +53,7 @@ KubeSphere 支持多种日志收集方式,使运维团队能够以灵活统一
{{</ notice >}}
-6. 在**存储设置**选项卡下,切换
以编辑该角色。
+4. 新创建的角色将在**项目角色**中列出,点击右侧的
以编辑该角色。
## 邀请新成员
1. 转到**项目设置**下的**项目成员**,点击**邀请**。
-2. 点击右侧的
以邀请一名成员加入项目,并为其分配一个角色。
+2. 点击右侧的
以邀请一名成员加入项目,并为其分配一个角色。
3. 将成员加入项目后,点击**确定**。您可以在**项目成员**列表中查看新邀请的成员。
-4. 若要编辑现有成员的角色或将其从项目中移除,点击右侧的
并选择对应的操作。
+4. 若要编辑现有成员的角色或将其从项目中移除,点击右侧的
并选择对应的操作。
diff --git a/content/zh/docs/v3.3/project-user-guide/_index.md b/content/zh/docs/v3.3/project-user-guide/_index.md
index b868e4620..eb46927a5 100644
--- a/content/zh/docs/v3.3/project-user-guide/_index.md
+++ b/content/zh/docs/v3.3/project-user-guide/_index.md
@@ -6,7 +6,7 @@ layout: "second"
linkTitle: "项目用户指南"
weight: 10000
-icon: "/images/docs/v3.3/docs.svg"
+icon: "/images/docs/v3.x/docs.svg"
---
在 KubeSphere 中,具有必要权限的项目用户能够执行一系列任务,例如创建各种工作负载,配置卷、密钥和 ConfigMap,设置各种发布策略,监控应用程序指标以及创建告警策略。由于 KubeSphere 具有极大的灵活性和兼容性,无需将任何代码植入到原生 Kubernetes 中,因此用户可以在测试、开发和生产环境快速上手 KubeSphere 的各种功能。
\ No newline at end of file
diff --git a/content/zh/docs/v3.3/project-user-guide/alerting/alerting-policy.md b/content/zh/docs/v3.3/project-user-guide/alerting/alerting-policy.md
index 62390f8ac..b3ca47cf9 100644
--- a/content/zh/docs/v3.3/project-user-guide/alerting/alerting-policy.md
+++ b/content/zh/docs/v3.3/project-user-guide/alerting/alerting-policy.md
@@ -47,7 +47,7 @@ KubeSphere 支持针对节点和工作负载的告警策略。本教程演示如
## 编辑告警策略
-若要在创建后编辑告警策略,点击**告警策略**页面右侧的
。
+若要在创建后编辑告警策略,点击**告警策略**页面右侧的
。
1. 点击下拉菜单中的**编辑**,按照创建时相同的步骤来编辑告警策略。点击**消息设置**页面的**确定**保存更改。
diff --git a/content/zh/docs/v3.3/project-user-guide/application-workloads/container-image-settings.md b/content/zh/docs/v3.3/project-user-guide/application-workloads/container-image-settings.md
index 9bbd013ce..2453b7edd 100644
--- a/content/zh/docs/v3.3/project-user-guide/application-workloads/container-image-settings.md
+++ b/content/zh/docs/v3.3/project-user-guide/application-workloads/container-image-settings.md
@@ -18,7 +18,7 @@ weight: 10280
### 容器组副本数量
-点击
,然后点击
,然后点击
,在弹出菜单中选择操作,修改您的守护进程集。
+1. 守护进程集创建后会显示在列表中。您可以点击右边的
,在弹出菜单中选择操作,修改您的守护进程集。
- **编辑信息**:查看并编辑基本信息。
- **编辑 YAML**:查看、上传、下载或者更新 YAML 文件。
@@ -123,9 +123,9 @@ weight: 10230
2. 点击右上角的下拉菜单以自定义时间范围和采样间隔。
-3. 点击右上角的
/
以开始或停止自动刷新数据。
+3. 点击右上角的
/
以开始或停止自动刷新数据。
-4. 点击右上角的
以手动刷新数据。
+4. 点击右上角的
以手动刷新数据。
### 环境变量
diff --git a/content/zh/docs/v3.3/project-user-guide/application-workloads/deployments.md b/content/zh/docs/v3.3/project-user-guide/application-workloads/deployments.md
index beb4e12d7..1c01bd41c 100644
--- a/content/zh/docs/v3.3/project-user-guide/application-workloads/deployments.md
+++ b/content/zh/docs/v3.3/project-user-guide/application-workloads/deployments.md
@@ -27,7 +27,7 @@ weight: 10210
### 步骤 3:设置容器组
-1. 设置镜像前,请点击**容器组副本数量**中的
,在弹出菜单中选择操作,修改您的部署。
+1. 部署创建后会显示在列表中。您可以点击右边的
,在弹出菜单中选择操作,修改您的部署。
- **编辑信息**:查看并编辑基本信息。
- **编辑 YAML**:查看、上传、下载或者更新 YAML 文件。
@@ -104,7 +104,7 @@ weight: 10210
4. 点击**资源状态**选项卡,查看该部署的端口和容器组信息。
- - **副本运行状态**:点击
/
以开始或停止数据自动刷新。
+3. 点击右上角的
/
以开始或停止数据自动刷新。
-4. 点击右上角的
以手动刷新数据。
+4. 点击右上角的
以手动刷新数据。
### 环境变量
diff --git a/content/zh/docs/v3.3/project-user-guide/application-workloads/horizontal-pod-autoscaling.md b/content/zh/docs/v3.3/project-user-guide/application-workloads/horizontal-pod-autoscaling.md
index 2f88f3a3d..f9b2abdd1 100755
--- a/content/zh/docs/v3.3/project-user-guide/application-workloads/horizontal-pod-autoscaling.md
+++ b/content/zh/docs/v3.3/project-user-guide/application-workloads/horizontal-pod-autoscaling.md
@@ -83,7 +83,7 @@ HPA 功能会自动调整容器组的数量,将容器组的平均资源使用
1. 负载生成器部署创建好后,在左侧导航栏中选择**应用负载**下的**工作负载**,然后点击右侧的 HPA 部署(例如,hpa-v1)。页面中显示的容器组的数量会自动增加以满足资源使用目标。
-2. 在左侧导航栏选择**应用负载**中的**工作负载**,点击负载生成器部署(例如,load-generator-v1)右侧的
,从下拉菜单中选择**删除**。负载生成器部署删除后,再次检查 HPA 部署的状态。容器组的数量会减少到最小值。
+2. 在左侧导航栏选择**应用负载**中的**工作负载**,点击负载生成器部署(例如,load-generator-v1)右侧的
,从下拉菜单中选择**删除**。负载生成器部署删除后,再次检查 HPA 部署的状态。容器组的数量会减少到最小值。
{{< notice note >}}
@@ -99,5 +99,5 @@ HPA 功能会自动调整容器组的数量,将容器组的平均资源使用
1. 在左侧导航栏选择**应用负载**中的**工作负载**,点击右侧的 HPA 部署(例如,hpa-v1)。
-2. 点击**自动伸缩**右侧的
,从下拉菜单中选择**取消**。
+2. 点击**自动伸缩**右侧的
,从下拉菜单中选择**取消**。
diff --git a/content/zh/docs/v3.3/project-user-guide/application-workloads/jobs.md b/content/zh/docs/v3.3/project-user-guide/application-workloads/jobs.md
index 15028da8d..2f98be74f 100644
--- a/content/zh/docs/v3.3/project-user-guide/application-workloads/jobs.md
+++ b/content/zh/docs/v3.3/project-user-guide/application-workloads/jobs.md
@@ -115,7 +115,7 @@ weight: 10250
{{< notice tip >}}如果任务失败,您可以重新运行该任务,失败原因显示在**消息**下。{{</ notice >}}
-3. 在**资源状态**中,您可以查看容器组状态。先前将**并行容器组数量**设置为 2,因此每次会创建两个容器组。点击右侧的
,然后点击
,然后点击
刷新执行记录。
+2. 点击
刷新执行记录。
### 资源状态
1. 点击**资源状态**选项卡查看任务的容器组。
-2. 点击
刷新容器组信息,点击
/
显示或隐藏每个容器组中的容器。
+2. 点击
刷新容器组信息,点击
/
显示或隐藏每个容器组中的容器。
### 元数据
diff --git a/content/zh/docs/v3.3/project-user-guide/application-workloads/services.md b/content/zh/docs/v3.3/project-user-guide/application-workloads/services.md
index 71f37690a..14a846fad 100644
--- a/content/zh/docs/v3.3/project-user-guide/application-workloads/services.md
+++ b/content/zh/docs/v3.3/project-user-guide/application-workloads/services.md
@@ -159,7 +159,7 @@ KubeSphere 提供三种创建服务的基本方法:**无状态服务**、**有
### 详情页面
-1. 创建服务后,您可以点击右侧的
进一步编辑它,例如元数据(**名称**无法编辑)、配置文件、端口以及外部访问。
+1. 创建服务后,您可以点击右侧的
进一步编辑它,例如元数据(**名称**无法编辑)、配置文件、端口以及外部访问。
- **编辑信息**:查看和编辑基本信息。
- **编辑 YAML**:查看、上传、下载或者更新 YAML 文件。
@@ -179,7 +179,7 @@ KubeSphere 提供三种创建服务的基本方法:**无状态服务**、**有
1. 点击**资源状态**选项卡以查看服务端口、工作负载和容器组信息。
-2. 在**容器组**区域,点击
以刷新容器组信息,点击
/
以显示或隐藏每个容器组中的容器。
+2. 在**容器组**区域,点击
以刷新容器组信息,点击
/
以显示或隐藏每个容器组中的容器。
### 元数据
diff --git a/content/zh/docs/v3.3/project-user-guide/application-workloads/statefulsets.md b/content/zh/docs/v3.3/project-user-guide/application-workloads/statefulsets.md
index ec90a5d6b..f1375e6db 100644
--- a/content/zh/docs/v3.3/project-user-guide/application-workloads/statefulsets.md
+++ b/content/zh/docs/v3.3/project-user-guide/application-workloads/statefulsets.md
@@ -40,7 +40,7 @@ weight: 10220
### 步骤 3:设置容器组
-1. 设置镜像前,请点击**容器组副本数量**中的
,在弹出菜单中选择操作,修改您的有状态副本集。
+1. 有状态副本集创建后会显示在列表中。您可以点击右边的
,在弹出菜单中选择操作,修改您的有状态副本集。
- **编辑信息**:查看并编辑基本信息。
- **编辑 YAML**:查看、上传、下载或者更新 YAML 文件。
@@ -113,7 +113,7 @@ weight: 10220
4. 点击**资源状态**选项卡,查看该有状态副本集的端口和容器组信息。
- - **副本运行状态**:点击
/
以开始或停止自动刷新数据。
+3. 点击右上角的
/
以开始或停止自动刷新数据。
-4. 点击右上角的
以手动刷新数据。
+4. 点击右上角的
以手动刷新数据。
### 环境变量
diff --git a/content/zh/docs/v3.3/project-user-guide/configuration/configmaps.md b/content/zh/docs/v3.3/project-user-guide/configuration/configmaps.md
index 768824948..916172dba 100644
--- a/content/zh/docs/v3.3/project-user-guide/configuration/configmaps.md
+++ b/content/zh/docs/v3.3/project-user-guide/configuration/configmaps.md
@@ -47,7 +47,7 @@ Kubernetes [配置字典(ConfigMap)](https://kubernetes.io/docs/concepts/con
## 查看配置字典详情
-1. 配置字典创建后会显示在**配置字典**页面。您可以点击右侧的
,并从下拉菜单中选择操作来修改配置字典。
+1. 配置字典创建后会显示在**配置字典**页面。您可以点击右侧的
,并从下拉菜单中选择操作来修改配置字典。
- **编辑**:查看和编辑基本信息。
- **编辑 YAML**:查看、上传、下载或更新 YAML 文件。
diff --git a/content/zh/docs/v3.3/project-user-guide/configuration/image-registry.md b/content/zh/docs/v3.3/project-user-guide/configuration/image-registry.md
index 5bbd26352..e2a4bb3ed 100644
--- a/content/zh/docs/v3.3/project-user-guide/configuration/image-registry.md
+++ b/content/zh/docs/v3.3/project-user-guide/configuration/image-registry.md
@@ -101,4 +101,4 @@ Docker 镜像是一个只读的模板,可用于部署容器服务。每个镜
如果您使用 YAML 文件创建工作负载且需要使用私有镜像仓库,需要在本地 YAML 文件中手动添加 `kubesphere.io/imagepullsecrets` 字段,并且取值是 JSON 格式的字符串(其中 `key` 为容器名称,`value` 为保密字典名),以保证 `imagepullsecrets` 字段不被丢失,如下示例图所示。
-
+
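例如,下面是一个示意性的片段(假设该字段添加在工作负载的 `metadata.annotations` 下,容器名与保密字典名仅为示例):

```yaml
metadata:
  annotations:
    # key 为容器名称,value 为保密字典名
    kubesphere.io/imagepullsecrets: '{"container-1": "my-registry-secret"}'
```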
diff --git a/content/zh/docs/v3.3/project-user-guide/configuration/secrets.md b/content/zh/docs/v3.3/project-user-guide/configuration/secrets.md
index 45641802f..231a710eb 100644
--- a/content/zh/docs/v3.3/project-user-guide/configuration/secrets.md
+++ b/content/zh/docs/v3.3/project-user-guide/configuration/secrets.md
@@ -58,7 +58,7 @@ Kubernetes [保密字典 (Secret)](https://kubernetes.io/zh/docs/concepts/config
## 查看保密字典详情
-1. 保密字典创建后会显示在如图所示的列表中。您可以点击右边的
,并从下拉菜单中选择操作来修改保密字典。
+1. 保密字典创建后会显示在如图所示的列表中。您可以点击右边的
,并从下拉菜单中选择操作来修改保密字典。
- **编辑信息**:查看和编辑基本信息。
- **编辑 YAML**:查看、上传、下载或更新 YAML 文件。
@@ -69,7 +69,7 @@ Kubernetes [保密字典 (Secret)](https://kubernetes.io/zh/docs/concepts/config
{{< notice note >}}
-如上文所述,KubeSphere 自动将键值对的值转换成对应的 base64 编码。您可以点击右边的
将右侧的项目拖放至目标组。若要添加新的分组,点击**添加监控组**。如果您想修改监控组的位置,请将鼠标悬停至监控组上并点击右侧的
或
。
+若要将监控项分组,您可以点击
将右侧的项目拖放至目标组。若要添加新的分组,点击**添加监控组**。如果您想修改监控组的位置,请将鼠标悬停至监控组上并点击右侧的
或
。
{{< notice note >}}
diff --git a/content/zh/docs/v3.3/project-user-guide/custom-application-monitoring/visualization/querying.md b/content/zh/docs/v3.3/project-user-guide/custom-application-monitoring/visualization/querying.md
index aed99b5e0..048450b00 100644
--- a/content/zh/docs/v3.3/project-user-guide/custom-application-monitoring/visualization/querying.md
+++ b/content/zh/docs/v3.3/project-user-guide/custom-application-monitoring/visualization/querying.md
@@ -8,6 +8,6 @@ weight: 10817
在查询编辑器中,在**监控指标**中输入 PromQL 表达式以处理和获取指标。若要了解如何编写 PromQL,请参阅 [Query Examples](https://prometheus.io/docs/prometheus/latest/querying/examples/)。
-
+
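例如,下面是一个示意性的 PromQL 表达式(指标与标签仅为示例),用于查询项目内各容器组最近 5 分钟的 CPU 使用率:

```text
sum(rate(container_cpu_usage_seconds_total{namespace="demo-project"}[5m])) by (pod)
```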
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/content/zh/docs/v3.3/project-user-guide/grayscale-release/blue-green-deployment.md b/content/zh/docs/v3.3/project-user-guide/grayscale-release/blue-green-deployment.md
index 3b027f3bf..4d13f5ad8 100644
--- a/content/zh/docs/v3.3/project-user-guide/grayscale-release/blue-green-deployment.md
+++ b/content/zh/docs/v3.3/project-user-guide/grayscale-release/blue-green-deployment.md
@@ -10,7 +10,7 @@ weight: 10520
蓝绿发布提供零宕机部署,即在保留旧版本的同时部署新版本。在任何时候,只有其中一个版本处于活跃状态,接收所有流量,另一个版本保持空闲状态。如果运行出现问题,您可以快速回滚到旧版本。
-
+
## 准备工作
diff --git a/content/zh/docs/v3.3/project-user-guide/grayscale-release/canary-release.md b/content/zh/docs/v3.3/project-user-guide/grayscale-release/canary-release.md
index f69777572..2ff8069d0 100644
--- a/content/zh/docs/v3.3/project-user-guide/grayscale-release/canary-release.md
+++ b/content/zh/docs/v3.3/project-user-guide/grayscale-release/canary-release.md
@@ -10,7 +10,7 @@ KubeSphere 基于 [Istio](https://istio.io/) 向用户提供部署金丝雀服
该方法能够高效地测试服务性能和可靠性,有助于在实际环境中发现潜在问题,同时不影响系统整体稳定性。
-
+
## 视频演示
@@ -116,7 +116,7 @@ KubeSphere 提供基于 [Jaeger](https://www.jaegertracing.io/) 的分布式追
1. 在**任务状态**中,点击金丝雀发布任务。
-2. 在弹出的对话框中,点击 **reviews v2** 右侧的
,选择**接管**。这代表 100% 的流量将会被发送到新版本 (v2)。
+2. 在弹出的对话框中,点击 **reviews v2** 右侧的
,选择**接管**。这代表 100% 的流量将会被发送到新版本 (v2)。
{{< notice note >}}
如果新版本出现任何问题,可以随时回滚到之前的 v1 版本。
diff --git a/content/zh/docs/v3.3/project-user-guide/image-builder/binary-to-image.md b/content/zh/docs/v3.3/project-user-guide/image-builder/binary-to-image.md
index d08c5daf6..042274ab9 100644
--- a/content/zh/docs/v3.3/project-user-guide/image-builder/binary-to-image.md
+++ b/content/zh/docs/v3.3/project-user-guide/image-builder/binary-to-image.md
@@ -39,7 +39,7 @@ Binary-to-Image (B2I) 是一个工具箱和工作流,用于从二进制可执
下图中的步骤展示了如何在 B2I 工作流中通过创建服务来上传制品、构建镜像并将其发布至 Kubernetes。
-
+
### 步骤 1:创建 Docker Hub 保密字典
@@ -83,7 +83,7 @@ Binary-to-Image (B2I) 是一个工具箱和工作流,用于从二进制可执
1. 稍等片刻,您可以看到镜像构建器状态变为**成功**。
-2. 点击该镜像前往其详情页面。在**任务记录**下,点击记录右侧的
查看构建日志。如果一切运行正常,您可以在日志末尾看到 `Build completed successfully`。
+2. 点击该镜像前往其详情页面。在**任务记录**下,点击记录右侧的
查看构建日志。如果一切运行正常,您可以在日志末尾看到 `Build completed successfully`。
3. 回到**服务**、**部署**和**任务**页面,您可以看到该镜像相应的服务、部署和任务都已成功创建。
@@ -105,7 +105,7 @@ Binary-to-Image (B2I) 是一个工具箱和工作流,用于从二进制可执
前述示例通过创建服务来实现整个 B2I 工作流。此外,您也可以直接使用镜像构建器基于制品构建镜像,但这个方式不会将镜像发布至 Kubernetes。
-
+
{{< notice note >}}
@@ -139,7 +139,7 @@ Binary-to-Image (B2I) 是一个工具箱和工作流,用于从二进制可执
1. 稍等片刻,您可以看到镜像构建器状态变为**成功**。
-2. 点击该镜像构建器前往其详情页面。在**任务记录**下,点击记录右侧的
查看构建日志。如果一切运行正常,您可以在日志末尾看到 `Build completed successfully`。
+2. 点击该镜像构建器前往其详情页面。在**任务记录**下,点击记录右侧的
查看构建日志。如果一切运行正常,您可以在日志末尾看到 `Build completed successfully`。
3. 前往**任务**页面,您可以看到该镜像相应的任务已成功创建。
diff --git a/content/zh/docs/v3.3/project-user-guide/image-builder/s2i-introduction.md b/content/zh/docs/v3.3/project-user-guide/image-builder/s2i-introduction.md
index 142e758b9..b4070502e 100644
--- a/content/zh/docs/v3.3/project-user-guide/image-builder/s2i-introduction.md
+++ b/content/zh/docs/v3.3/project-user-guide/image-builder/s2i-introduction.md
@@ -16,7 +16,7 @@ Source-to-Image (S2I) 是一个将源代码构建成镜像的自动化工具。S
对于 Python 和 Ruby 等解释型语言,程序的构建环境和运行时环境通常是相同的。例如,基于 Ruby 的镜像构建器通常包含 Bundler、Rake、Apache、GCC 以及其他构建运行时环境所需的安装包。构建的工作流程如下图所示:
-
+
### S2I 工作原理
@@ -28,7 +28,7 @@ S2I 执行以下步骤:
S2I 流程图如下:
-
+
### 运行时镜像
@@ -36,4 +36,4 @@ S2I 流程图如下:
构建的工作流程如下:
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/content/zh/docs/v3.3/project-user-guide/image-builder/source-to-image.md b/content/zh/docs/v3.3/project-user-guide/image-builder/source-to-image.md
index 5f6bb5155..61f0bbffd 100644
--- a/content/zh/docs/v3.3/project-user-guide/image-builder/source-to-image.md
+++ b/content/zh/docs/v3.3/project-user-guide/image-builder/source-to-image.md
@@ -10,7 +10,7 @@ Source-to-Image (S2I) 是一个工具箱和工作流,用于从源代码构建
本教程演示如何通过创建服务 (Service) 使用 S2I 将 Java 示例项目的源代码导入 KubeSphere。KubeSphere Image Builder 将基于源代码创建 Docker 镜像,将其推送至目标仓库,并发布至 Kubernetes。
-
+
## 视频演示
@@ -95,7 +95,7 @@ Source-to-Image (S2I) 是一个工具箱和工作流,用于从源代码构建
1. 稍等片刻,您可以看到镜像构建器状态变为**成功**。
-2. 点击该镜像构建器前往其详情页面。在**任务记录**下,点击记录右侧的
查看构建日志。如果一切运行正常,您可以在日志末尾看到 `Build completed successfully`。
+2. 点击该镜像构建器前往其详情页面。在**任务记录**下,点击记录右侧的
查看构建日志。如果一切运行正常,您可以在日志末尾看到 `Build completed successfully`。
3. 回到**服务**、**部署**和**任务**页面,您可以看到该镜像相应的服务、部署和任务都已成功创建。
diff --git a/content/zh/docs/v3.3/quick-start/_index.md b/content/zh/docs/v3.3/quick-start/_index.md
index 547c3cc4b..1eb4de06a 100644
--- a/content/zh/docs/v3.3/quick-start/_index.md
+++ b/content/zh/docs/v3.3/quick-start/_index.md
@@ -7,7 +7,7 @@ linkTitle: "快速入门"
weight: 2000
-icon: "/images/docs/v3.3/docs.svg"
+icon: "/images/docs/v3.x/docs.svg"
---
diff --git a/content/zh/docs/v3.3/quick-start/create-workspace-and-project.md b/content/zh/docs/v3.3/quick-start/create-workspace-and-project.md
index acfbbd8f4..6b962ace1 100644
--- a/content/zh/docs/v3.3/quick-start/create-workspace-and-project.md
+++ b/content/zh/docs/v3.3/quick-start/create-workspace-and-project.md
@@ -103,7 +103,7 @@ KubeSphere 的多租户系统分**三个**层级,即集群、企业空间和
{{< notice note >}}
- 您可以点击用户名称后的
以编辑角色、编辑角色权限或删除该角色。
+5. 在**平台角色**页面,可以点击所创建角色的名称查看角色详情,点击
以编辑角色、编辑角色权限或删除该角色。
6. 在**用户**页面,可以在创建帐户或编辑现有帐户时为帐户分配该角色。
diff --git a/content/zh/docs/v3.3/quick-start/deploy-bookinfo-to-k8s.md b/content/zh/docs/v3.3/quick-start/deploy-bookinfo-to-k8s.md
index d45d32432..b6984daf4 100644
--- a/content/zh/docs/v3.3/quick-start/deploy-bookinfo-to-k8s.md
+++ b/content/zh/docs/v3.3/quick-start/deploy-bookinfo-to-k8s.md
@@ -41,7 +41,7 @@ Bookinfo 应用由以下四个独立的微服务组成,其中 **reviews** 微
这个应用的端到端架构如下所示。有关更多详细信息,请参见 [Bookinfo 应用](https://istio.io/latest/zh/docs/examples/bookinfo/)。
-
+
## 动手实验
@@ -53,7 +53,7 @@ Bookinfo 应用由以下四个独立的微服务组成,其中 **reviews** 微
{{< notice note >}}
-KubeSphere 会自动创建主机名。若要更改主机名,请将鼠标悬停在默认路由规则上,然后点击
,然后选择**资源消费统计**。
+1. 使用 `admin` 用户登录 KubeSphere Web 控制台,点击右下角的
,然后选择**资源消费统计**。
2. 在**集群资源消费情况**一栏,点击**查看消费**。
@@ -58,7 +58,7 @@ KubeSphere 计量功能帮助您在不同层级追踪集群或企业空间中的
**企业空间(项目)资源消费情况**包含企业空间(包括项目)的资源使用情况,如 CPU、内存、存储等。
-1. 使用 `admin` 用户登录 KubeSphere Web 控制台,点击右下角的
,对出现的提示消息点击**确定**,以将用户分配到该部门。
+2. 在用户列表中,点击用户右侧的
,对出现的提示消息点击**确定**,以将用户分配到该部门。
{{< notice note >}}
@@ -54,7 +54,7 @@ weight: 9800
## 从部门中移除用户
1. 在**部门**页面,选择左侧部门树中的一个部门,然后点击右侧的**已分配**。
-2. 在已分配用户列表中,点击用户右侧的
,在出现的对话框中输入相应的用户名,然后点击**确定**来移除用户。
+2. 在已分配用户列表中,点击用户右侧的
,在出现的对话框中输入相应的用户名,然后点击**确定**来移除用户。
## 删除和编辑部门
@@ -62,7 +62,7 @@ weight: 9800
2. 在**设置部门**对话框的左侧,点击需要编辑或删除部门的上级部门。
-3. 点击部门右侧的
进行编辑。
+3. 点击部门右侧的
进行编辑。
{{< notice note >}}
@@ -70,7 +70,7 @@ weight: 9800
{{</ notice >}}
-4. 点击部门右侧的
,在出现的对话框中输入相应的部门名称,然后点击**确定**来删除该部门。
+4. 点击部门右侧的
,在出现的对话框中输入相应的部门名称,然后点击**确定**来删除该部门。
{{< notice note >}}
diff --git a/content/zh/docs/v3.3/workspace-administration/role-and-member-management.md b/content/zh/docs/v3.3/workspace-administration/role-and-member-management.md
index 1623f4e28..5bca697cf 100644
--- a/content/zh/docs/v3.3/workspace-administration/role-and-member-management.md
+++ b/content/zh/docs/v3.3/workspace-administration/role-and-member-management.md
@@ -49,15 +49,15 @@ weight: 9400
{{</ notice >}}
-4. 新创建的角色将在**企业空间角色**中列出,点击右侧的
以编辑该角色的信息、权限,或删除该角色。
+4. 新创建的角色将在**企业空间角色**中列出,点击右侧的
以编辑该角色的信息、权限,或删除该角色。
## 邀请新成员
1. 转到**企业空间设置**下**企业空间成员**,点击**邀请**。
-2. 点击右侧的
以邀请一名成员加入企业空间,并为其分配一个角色。
+2. 点击右侧的
以邀请一名成员加入企业空间,并为其分配一个角色。
3. 将成员加入企业空间后,点击**确定**。您可以在**企业空间成员**列表中查看新邀请的成员。
-4. 若要编辑现有成员的角色或将其从企业空间中移除,点击右侧的
并选择对应的操作。
\ No newline at end of file
+4. 若要编辑现有成员的角色或将其从企业空间中移除,点击右侧的
并选择对应的操作。
\ No newline at end of file
diff --git a/content/zh/docs/v3.3/workspace-administration/workspace-quotas.md b/content/zh/docs/v3.3/workspace-administration/workspace-quotas.md
index e9041cae2..39436cf86 100644
--- a/content/zh/docs/v3.3/workspace-administration/workspace-quotas.md
+++ b/content/zh/docs/v3.3/workspace-administration/workspace-quotas.md
@@ -24,7 +24,7 @@ weight: 9700
3. **企业空间配额**页面列有分配到该企业空间的全部可用集群,以及各集群的 CPU 限额、CPU 需求、内存限额和内存需求。
-4. 在列表右侧点击**编辑配额**即可查看企业空间配额信息。默认情况下,KubeSphere 不为企业空间设置任何资源预留或资源限制。如需设置资源预留或资源限制来管理 CPU 和内存资源,您可以移动
,点击 **kubectl**,然后执行以下命令来编辑 CRD `ClusterConfiguration` 中的 `ks-installer`:
+
+ ```bash
+ kubectl -n kubesphere-system edit cc ks-installer
+ ```
+
+2. 在 `spec.authentication.jwtSecret` 字段下添加以下字段。
+
+ ```yaml
+ spec:
+ authentication:
+ jwtSecret: ''
+ authenticateRateLimiterMaxTries: 10
+ authenticateRateLimiterDuration: 10m0s
+ oauthOptions:
+ accessTokenMaxAge: 1h
+ accessTokenInactivityTimeout: 30m
+ identityProviders:
+ - name: cas
+ type: CASIdentityProvider
+ mappingMethod: auto
+ provider:
+ redirectURL: "https://ks-console:30880/oauth/redirect/cas"
+ casServerURL: "https://cas.example.org/cas"
+ insecureSkipVerify: true
+ ```
+
+ 字段描述如下:
+
+ | 参数 | 描述 |
+ | -------------------- | ------------------------------------------------------------ |
+ | redirectURL | 重定向到 ks-console 的 URL,格式为:`https://<域名>/oauth/redirect/<身份提供者名称>`。URL 中的 `<身份提供者名称>` 对应 `oauthOptions:identityProviders:name` 的值。 |
+ | casServerURL | CAS 认证服务的 URL 地址。 |
+ | insecureSkipVerify | 关闭 TLS 证书验证。 |
+
+
diff --git a/content/zh/docs/v3.4/access-control-and-account-management/external-authentication/oidc-identity-provider.md b/content/zh/docs/v3.4/access-control-and-account-management/external-authentication/oidc-identity-provider.md
new file mode 100644
index 000000000..65353f753
--- /dev/null
+++ b/content/zh/docs/v3.4/access-control-and-account-management/external-authentication/oidc-identity-provider.md
@@ -0,0 +1,64 @@
+---
+title: "OIDC 身份提供者"
+keywords: "OIDC, 身份提供者"
+description: "如何使用外部 OIDC 身份提供者。"
+
+linkTitle: "OIDC 身份提供者"
+weight: 12221
+---
+
+## OIDC 身份提供者
+
+[OpenID Connect](https://openid.net/connect/) 是一种基于 OAuth 2.0 系列规范的可互操作的身份认证协议。它使用简单的 REST/JSON 消息流,设计目标是“让简单的事情变得简单,让复杂的事情成为可能”。开发人员可以很容易地将其与 Keycloak、Okta、Dex、Auth0、Gluu、Casdoor 等身份提供者集成。
+
+## 准备工作
+
+您需要部署一个 Kubernetes 集群,并在集群中安装 KubeSphere。有关详细信息,请参阅[在 Linux 上安装](../../../installing-on-linux/)和[在 Kubernetes 上安装](../../../installing-on-kubernetes/)。
+
+## 步骤
+
+1. 以 `admin` 身份登录 KubeSphere,将光标移动到右下角
,点击 **kubectl**,然后执行以下命令来编辑 CRD `ClusterConfiguration` 中的 `ks-installer`:
+
+ ```bash
+ kubectl -n kubesphere-system edit cc ks-installer
+ ```
+
+2. 在 `spec.authentication.jwtSecret` 字段下添加以下字段。
+
+ *使用 [Google Identity Platform](https://developers.google.com/identity/protocols/oauth2/openid-connect) 的示例*:
+
+ ```yaml
+ spec:
+ authentication:
+ jwtSecret: ''
+ authenticateRateLimiterMaxTries: 10
+ authenticateRateLimiterDuration: 10m0s
+ oauthOptions:
+ accessTokenMaxAge: 1h
+ accessTokenInactivityTimeout: 30m
+ identityProviders:
+ - name: google
+ type: OIDCIdentityProvider
+ mappingMethod: auto
+ provider:
+ clientID: '********'
+ clientSecret: '********'
+ issuer: https://accounts.google.com
+ redirectURL: 'https://ks-console/oauth/redirect/google'
+ ```
+
+ 字段描述如下:
+
+ | 参数 | 描述 |
+ | -------------------- | ------------------------------------------------------------ |
+ | clientID | 客户端 ID。 |
+ | clientSecret | 客户端密码。 |
+ | redirectURL | 重定向到 ks-console 的 URL,格式为:`https://<域名>/oauth/redirect/<身份提供者名称>`。URL 中的 `<身份提供者名称>` 对应 `oauthOptions:identityProviders:name` 的值。 |
+ | issuer | 定义客户端如何动态发现有关 OpenID 提供者的信息。 |
+ | preferredUsernameKey | 可配置的键,包含首选用户名声明。此参数为可选参数。 |
+ | emailKey | 可配置的键,包含电子邮件声明。此参数为可选参数。 |
+ | getUserInfo | 使用 userinfo 端点获取令牌的附加声明。非常适用于上游返回 “thin” ID 令牌的场景。此参数为可选参数。 |
+ | insecureSkipVerify | 关闭 TLS 证书验证。 |
+
+
+
diff --git a/content/zh/docs/v3.4/access-control-and-account-management/external-authentication/set-up-external-authentication.md b/content/zh/docs/v3.4/access-control-and-account-management/external-authentication/set-up-external-authentication.md
new file mode 100644
index 000000000..948fbebd1
--- /dev/null
+++ b/content/zh/docs/v3.4/access-control-and-account-management/external-authentication/set-up-external-authentication.md
@@ -0,0 +1,112 @@
+---
+title: "设置外部身份验证"
+keywords: "LDAP, 外部, 第三方, 身份验证"
+description: "如何在 KubeSphere 上设置外部身份验证。"
+
+linkTitle: "设置外部身份验证"
+weight: 12210
+---
+
+本文档描述了如何在 KubeSphere 上使用外部身份提供者,例如 LDAP 服务或 Active Directory 服务。
+
+KubeSphere 提供了一个内置的 OAuth 服务。用户通过获取 OAuth 访问令牌以对 API 进行身份验证。作为 KubeSphere 管理员,您可以编辑 CRD `ClusterConfiguration` 中的 `ks-installer` 来配置 OAuth 并指定身份提供者。
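+
+下面是一个获取访问令牌的示意性示例(端点与参数可能因版本而异,请以实际环境为准):
+
+```bash
+# 以密码方式向内置 OAuth 服务请求访问令牌(示意)
+curl -s -X POST http://<ks-console 地址>/oauth/token \
+  -H 'Content-Type: application/x-www-form-urlencoded' \
+  -d 'grant_type=password&username=admin&password=<密码>&client_id=kubesphere&client_secret=kubesphere'
+```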
+
+## 准备工作
+
+您需要部署一个 Kubernetes 集群,并在集群中安装 KubeSphere。有关详细信息,请参阅[在 Linux 上安装](../../../installing-on-linux/)和[在 Kubernetes 上安装](../../../installing-on-kubernetes/)。
+
+
+## 步骤
+
+1. 以 `admin` 身份登录 KubeSphere,将光标移动到右下角
,点击 **kubectl**,然后执行以下命令来编辑 CRD `ClusterConfiguration` 中的 `ks-installer`:
+
+ ```bash
+ kubectl -n kubesphere-system edit cc ks-installer
+ ```
+
+2. 在 `spec.authentication.jwtSecret` 字段下添加以下字段。
+
+ 示例:
+
+ ```yaml
+ spec:
+ authentication:
+ jwtSecret: ''
+ authenticateRateLimiterMaxTries: 10
+ authenticateRateLimiterDuration: 10m0s
+ loginHistoryRetentionPeriod: 168h
+ maximumClockSkew: 10s
+ multipleLogin: true
+ oauthOptions:
+ accessTokenMaxAge: 1h
+ accessTokenInactivityTimeout: 30m
+ identityProviders:
+ - name: LDAP
+ type: LDAPIdentityProvider
+ mappingMethod: auto
+ provider:
+ host: 192.168.0.2:389
+ managerDN: uid=root,cn=users,dc=nas
+ managerPassword: ********
+ userSearchBase: cn=users,dc=nas
+ loginAttribute: uid
+ mailAttribute: mail
+ ```
+
+ 字段描述如下:
+
+ * `jwtSecret`:签发用户令牌的密钥。在多集群环境下,所有的集群必须[使用相同的密钥](../../../multicluster-management/enable-multicluster/direct-connection/#prepare-a-member-cluster)(查看当前密钥的命令见列表后的示例)。
+ * `authenticateRateLimiterMaxTries`:`authenticateRateLimiterDuration` 指定的时间段内允许的最大连续登录失败次数。如果用户连续登录失败次数达到限制,则该用户将被封禁。
+ * `authenticateRateLimiterDuration`:`authenticateRateLimiterMaxTries` 适用的时间段。
+ * `loginHistoryRetentionPeriod`:用户登录记录保留期限,过期的登录记录将被自动删除。
+ * `maximumClockSkew`:时间敏感操作(例如验证用户令牌的过期时间)的最大时钟偏差,默认值为10秒。
+ * `multipleLogin`:是否允许多个用户同时从不同位置登录,默认值为 `true`。
+ * `oauthOptions`:
+ * `accessTokenMaxAge`:访问令牌有效期。对于多集群环境中的成员集群,默认值为 `0h`,这意味着访问令牌永不过期。对于其他集群,默认值为 `2h`。
+ * `accessTokenInactivityTimeout`:令牌空闲超时时间。该值表示令牌过期后,刷新用户令牌最大的间隔时间,如果不在此时间窗口内刷新用户身份令牌,用户将需要重新登录以获得访问权。
+ * `identityProviders`:
+ * `name`:身份提供者的名称。
+ * `type`:身份提供者的类型。
+ * `mappingMethod`:帐户映射方式,值可以是 `auto` 或者 `lookup`。
+ * 如果值为 `auto`(默认),需要指定新的用户名。通过第三方帐户登录时,KubeSphere 会根据用户名自动创建关联帐户。
+ * 如果值为 `lookup`,需要执行步骤 3 以手动关联第三方帐户与 KubeSphere 帐户。
+ * `provider`:身份提供者信息。此部分中的字段根据身份提供者的类型而异。
+
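+如需查看集群当前的 `jwtSecret`(见上文列表中的说明),可以执行以下命令(示意):
+
+```bash
+kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep jwtSecret
+```
+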
+3. 如果 `mappingMethod` 设置为 `lookup`,可以运行以下命令并添加标签来进行帐户关联。如果 `mappingMethod` 是 `auto` 可以跳过这个部分。
+
+ ```bash
+ kubectl edit user
,点击 **kubectl**,然后执行以下命令来编辑 CRD `ClusterConfiguration` 中的 `ks-installer`:
+
+ ```bash
+ kubectl -n kubesphere-system edit cc ks-installer
+ ```
+
+ 示例:
+
+ ```yaml
+ spec:
+ authentication:
+ jwtSecret: ''
+ maximumClockSkew: 10s
+ multipleLogin: true
+ oauthOptions:
+ accessTokenMaxAge: 1h
+ accessTokenInactivityTimeout: 30m
+ identityProviders:
+ - name: LDAP
+ type: LDAPIdentityProvider
+ mappingMethod: auto
+ provider:
+ host: 192.168.0.2:389
+ managerDN: uid=root,cn=users,dc=nas
+ managerPassword: ********
+ userSearchBase: cn=users,dc=nas
+ loginAttribute: uid
+ mailAttribute: mail
+ ```
+
+2. 有关 `spec:authentication` 部分中 `oauthOptions:identityProviders` 以外字段的配置,请参阅[设置外部身份认证](../set-up-external-authentication/)。
+
+3. 在 `oauthOptions:identityProviders` 部分配置以下字段(配置完成后,可参考列表后的 ldapsearch 示例验证这些参数)。
+
+ * `name`: 用户定义的 LDAP 服务名称。
+ * `type`: 必须将该值设置为 `LDAPIdentityProvider` 才能将 LDAP 服务用作身份提供者。
+ * `mappingMethod`: 帐户映射方式,值可以是 `auto` 或者 `lookup`。
+ * 如果值为 `auto`(默认),需要指定新的用户名。KubeSphere 根据用户名自动创建并关联 LDAP 用户。
+ * 如果值为 `lookup`,需要执行步骤 4 以手动关联现有 KubeSphere 用户和 LDAP 用户。
+ * `provider`:
+ * `host`: LDAP 服务的地址和端口号。
+ * `managerDN`: 用于绑定到 LDAP 目录的 DN 。
+ * `managerPassword`: `managerDN` 对应的密码。
+ * `userSearchBase`: 用户搜索基。设置为所有 LDAP 用户所在目录级别的 DN 。
+ * `loginAttribute`: 标识 LDAP 用户的属性。
+ * `mailAttribute`: 标识 LDAP 用户的电子邮件地址的属性。
+
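+在应用上述配置前,可以先使用 ldapsearch(假设环境中已安装 OpenLDAP 客户端工具)验证这些参数,示例如下(地址与 DN 取自上文示例):
+
+```bash
+# 使用 managerDN 绑定,在 userSearchBase 下按 loginAttribute 查找用户
+ldapsearch -x -H ldap://192.168.0.2:389 \
+  -D "uid=root,cn=users,dc=nas" -w '<managerPassword>' \
+  -b "cn=users,dc=nas" "(uid=<用户名>)" uid mail
+```
+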
+4. 如果 `mappingMethod` 设置为 `lookup`,可以运行以下命令并添加标签来进行帐户关联。如果 `mappingMethod` 是 `auto` 可以跳过这个部分。
+
+ ```bash
+ kubectl edit user
,点击 **kubectl**,然后执行以下命令来编辑 CRD `ClusterConfiguration` 中的 `ks-installer`:
+
+ ```bash
+ kubectl -n kubesphere-system edit cc ks-installer
+ ```
+
+2. 有关 `spec:authentication` 部分中 `oauthOptions:identityProviders` 以外字段的配置,请参阅[设置外部身份认证](../set-up-external-authentication/)。
+
+3. 根据开发的身份提供者插件来配置 `oauthOptions:identityProviders` 中的字段。
+
+ 以下是使用 GitHub 作为外部身份提供者的配置示例。详情请参阅 [GitHub 官方文档](https://docs.github.com/en/developers/apps/building-oauth-apps)和 [GitHubIdentityProvider 源代码](https://github.com/kubesphere/kubesphere/blob/release-3.1/pkg/apiserver/authentication/identityprovider/github/github.go) 。
+
+ ```yaml
+ spec:
+ authentication:
+ jwtSecret: ''
+ authenticateRateLimiterMaxTries: 10
+ authenticateRateLimiterDuration: 10m0s
+ oauthOptions:
+ accessTokenMaxAge: 1h
+ accessTokenInactivityTimeout: 30m
+ identityProviders:
+ - name: github
+ type: GitHubIdentityProvider
+ mappingMethod: auto
+ provider:
+ clientID: '******'
+ clientSecret: '******'
+ redirectURL: 'https://ks-console/oauth/redirect/github'
+ ```
+
+ 同样,您也可以使用阿里云 IDaaS 作为外部身份提供者。详情请参阅[阿里云 IDaaS 文档](https://www.alibabacloud.com/help/product/111120.htm?spm=a3c0i.14898238.2766395700.1.62081da1NlxYV0)和 [AliyunIDaasProvider 源代码](https://github.com/kubesphere/kubesphere/blob/release-3.1/pkg/apiserver/authentication/identityprovider/aliyunidaas/idaas.go)。
+
+4. 字段配置完成后,保存修改,然后等待 ks-installer 完成重启。
+
+ {{< notice note >}}
+
+ KubeSphere Web 控制台在 ks-installer 重新启动期间不可用。请等待重启完成。
+
+ {{</ notice >}}
+
+5. 进入 KubeSphere 登录界面,点击 **Log In with XXX** (例如,**Log In with GitHub**)。
+
+6. 在外部身份提供者的登录界面,输入身份提供者配置的用户名和密码,登录 KubeSphere 。
+
+ 
+
diff --git a/content/zh/docs/v3.4/access-control-and-account-management/multi-tenancy-in-kubesphere.md b/content/zh/docs/v3.4/access-control-and-account-management/multi-tenancy-in-kubesphere.md
new file mode 100644
index 000000000..75fd6fb26
--- /dev/null
+++ b/content/zh/docs/v3.4/access-control-and-account-management/multi-tenancy-in-kubesphere.md
@@ -0,0 +1,57 @@
+---
+title: "KubeSphere 中的多租户"
+keywords: "Kubernetes, KubeSphere, 多租户"
+description: "理解 KubeSphere 中的多租户架构。"
+linkTitle: "KubeSphere 中的多租户"
+weight: 12100
+---
+
+Kubernetes 解决了应用编排、容器调度的难题,极大地提高了资源的利用率。有别于传统的集群运维方式,在使用 Kubernetes 的过程中,企业和个人用户在资源共享和安全性方面均面临着诸多挑战。
+
+首当其冲的就是企业环境中多租户形态该如何定义,租户的安全边界该如何划分。Kubernetes 社区[关于多租户的讨论](https://docs.google.com/document/d/1fj3yzmeU2eU8ZNBCUJG97dk_wC7228-e_MmdcmTNrZY)从未停歇,但到目前为止最终的形态尚无定论。
+
+## Kubernetes 多租户面临的挑战
+
+多租户是一种常见的软件架构,简单概括就是在多用户环境下实现资源共享,并保证各用户间数据的隔离性。在多租户集群环境中,集群管理员需要最大程度地避免恶意租户对其他租户的攻击,公平地分配集群资源。
+
+无论企业的多租户形态如何,多租户都无法避免以下两个层面的问题:逻辑层面的资源隔离;物理资源的隔离。
+
+逻辑层面的资源隔离主要包括 API 的访问控制,针对用户的权限控制。Kubernetes 中的 [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) 和命名空间 (namespace) 提供了基本的逻辑隔离能力,但在大部分企业环境中并不适用。企业中的租户往往需要跨多个命名空间甚至是多个集群进行资源管理。除此之外,针对用户的行为审计、租户隔离的日志、事件查询也是不可或缺的能力。
+
+物理资源的隔离主要包括节点、网络的隔离,当然也包括容器运行时安全。您可以通过 [NetworkPolicy](../../pluggable-components/network-policy/) 对网络进行划分,通过 PodSecurityPolicy 限制容器的行为,[Kata Containers](https://katacontainers.io/) 也提供了更安全的容器运行时。
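+
+例如,下面是一个最小的 NetworkPolicy 示意(命名空间与名称仅为示例),默认拒绝该命名空间内的所有入站流量:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-ingress
+  namespace: demo-project
+spec:
+  podSelector: {}      # 选中命名空间内的所有 Pod
+  policyTypes:
+    - Ingress          # 未定义任何 ingress 规则,即拒绝所有入站流量
+```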
+
+## KubeSphere 中的多租户
+
+为了解决上述问题,KubeSphere 提供了基于 Kubernetes 的多租户管理方案。
+
+
+
+在 KubeSphere 中[企业空间](../../workspace-administration/what-is-workspace/)是最小的租户单元,企业空间提供了跨集群、跨项目(即 Kubernetes 中的命名空间)共享资源的能力。企业空间中的成员可以在授权集群中创建项目,并通过邀请授权的方式参与项目协同。
+
+**用户**是 KubeSphere 的帐户实例,可以被设置为平台层面的管理员参与集群的管理,也可以被添加到企业空间中参与项目协同。
+
+多级的权限控制和资源配额限制是 KubeSphere 中资源隔离的基础,奠定了多租户最基本的形态。
+
+### 逻辑隔离
+
+与 Kubernetes 相同,KubeSphere 通过 RBAC 对用户的权限加以控制,实现逻辑层面的资源隔离。
+
+KubeSphere 中的权限控制分为平台、企业空间、项目三个层级,通过角色来控制用户在不同层级的资源访问权限。
+
+1. [平台角色](../../quick-start/create-workspace-and-project/):主要控制用户对平台资源的访问权限,如集群的管理、企业空间的管理、平台用户的管理等。
+2. [企业空间角色](../../workspace-administration/role-and-member-management/):主要控制企业空间成员在企业空间下的资源访问权限,如企业空间下项目、DevOps 项目的管理等。
+3. [项目角色](../../project-administration/role-and-member-management/):主要控制项目下资源的访问权限,如工作负载的管理、流水线的管理等。
+
+### 网络隔离
+
+除了逻辑层面的资源隔离,KubeSphere 中还可以针对企业空间和项目设置[网络隔离策略](../../pluggable-components/network-policy/)。
+
+### 操作审计
+
+KubeSphere 还提供了针对用户的[操作审计](../../pluggable-components/auditing-logs/)。
+
+### 认证鉴权
+
+KubeSphere 完整的认证鉴权链路如下图所示,可以通过 OPA 拓展 Kubernetes 的 RBAC 规则。KubeSphere 团队计划集成 [Gatekeeper](https://github.com/open-policy-agent/gatekeeper) 以支持更为丰富的安全管控策略。
+
+
diff --git a/content/zh/docs/v3.4/application-store/_index.md b/content/zh/docs/v3.4/application-store/_index.md
new file mode 100644
index 000000000..4ce4d34d0
--- /dev/null
+++ b/content/zh/docs/v3.4/application-store/_index.md
@@ -0,0 +1,16 @@
+---
+title: "应用商店"
+description: "上手 KubeSphere 应用商店"
+layout: "second"
+
+
+linkTitle: "应用商店"
+weight: 14000
+
+icon: "/images/docs/v3.x/docs.svg"
+
+---
+
+KubeSphere 应用商店基于 [OpenPitrix](https://github.com/openpitrix/openpitrix) (一个跨云管理应用的开源平台)为用户提供企业就绪的容器化解决方案。您可以通过应用模板上传自己的应用,或者添加应用仓库作为应用工具,供租户选择他们想要的应用。
+
+应用商店为应用生命周期管理提供了一个高效的集成系统,用户可以用最合适的方式快速上传、发布、部署、升级和下架应用。因此,开发者借助 KubeSphere 就能减少花在设置上的时间,更多地专注于开发。
diff --git a/content/zh/docs/v3.4/application-store/app-developer-guide/_index.md b/content/zh/docs/v3.4/application-store/app-developer-guide/_index.md
new file mode 100644
index 000000000..cb4e2189f
--- /dev/null
+++ b/content/zh/docs/v3.4/application-store/app-developer-guide/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "应用开发者指南"
+weight: 14400
+
+_build:
+ render: false
+---
diff --git a/content/zh/docs/v3.4/application-store/app-developer-guide/helm-developer-guide.md b/content/zh/docs/v3.4/application-store/app-developer-guide/helm-developer-guide.md
new file mode 100644
index 000000000..3b2a72436
--- /dev/null
+++ b/content/zh/docs/v3.4/application-store/app-developer-guide/helm-developer-guide.md
@@ -0,0 +1,158 @@
+---
+title: "Helm 开发者指南"
+keywords: 'Kubernetes, KubeSphere, Helm, 开发'
+description: '开发基于 Helm 的应用。'
+linkTitle: "Helm 开发者指南"
+weight: 14410
+---
+
+您可以上传应用的 Helm Chart 至 KubeSphere,以便具有必要权限的租户能够进行部署。本教程以 NGINX 为示例演示如何准备 Helm Chart。
+
+## 安装 Helm
+
+如果您已经安装 KubeSphere,那么您的环境中已部署 Helm。如果未安装,请先参考 [Helm 文档](https://helm.sh/docs/intro/install/)安装 Helm。
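+
+例如,可以使用官方安装脚本(示意,具体安装方式请以 Helm 文档为准):
+
+```bash
+curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
+```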
+
+## 创建本地仓库
+
+执行以下命令在您的机器上创建仓库。
+
+```bash
+mkdir helm-repo
+```
+
+```bash
+cd helm-repo
+```
+
+## 创建应用
+
+使用 `helm create` 创建一个名为 `nginx` 的文件夹,它会自动为您的应用创建 YAML 模板和目录。一般情况下,不建议修改顶层目录中的文件名和目录名。
+
+```bash
+$ helm create nginx
+$ tree nginx/
+nginx/
+├── charts
+├── Chart.yaml
+├── templates
+│ ├── deployment.yaml
+│ ├── _helpers.tpl
+│ ├── ingress.yaml
+│ ├── NOTES.txt
+│ └── service.yaml
+└── values.yaml
+```
+
+`Chart.yaml` 用于定义 Chart 的基本信息,包括名称、API 和应用版本。有关更多信息,请参见 [Chart.yaml 文件](../helm-specification/#chartyaml-文件)。
+
+该 `Chart.yaml` 文件的示例:
+
+```yaml
+apiVersion: v1
+appVersion: "1.0"
+description: A Helm chart for Kubernetes
+name: nginx
+version: 0.1.0
+```
+
+当您向 Kubernetes 部署基于 Helm 的应用时,可以直接在 KubeSphere 控制台上编辑 `values.yaml` 文件。
+
+该 `values.yaml` 文件的示例:
+
+```yaml
+# 默认值仅供测试使用。
+# 此文件为 YAML 格式。
+# 对要传入您的模板的变量进行声明。
+
+replicaCount: 1
+
+image:
+ repository: nginx
+ tag: stable
+ pullPolicy: IfNotPresent
+
+nameOverride: ""
+fullnameOverride: ""
+
+service:
+ type: ClusterIP
+ port: 80
+
+ingress:
+ enabled: false
+ annotations: {}
+ # kubernetes.io/ingress.class: nginx
+ # kubernetes.io/tls-acme: "true"
+ path: /
+ hosts:
+ - chart-example.local
+ tls: []
+ # - secretName: chart-example-tls
+ # hosts:
+ # - chart-example.local
+
+resources: {}
+ # 通常不建议对默认资源进行指定,用户可以去主动选择是否指定。
+ # 这也有助于 Chart 在资源较少的环境上运行,例如 Minikube。
+ # 如果您要指定资源,请将下面几行内容取消注释,
+ # 按需调整,并删除 'resources:' 后面的大括号。
+ # limits:
+ # cpu: 100m
+ # memory: 128Mi
+ # requests:
+ # cpu: 100m
+ # memory: 128Mi
+
+nodeSelector: {}
+
+tolerations: []
+
+affinity: {}
+```
+
+请参考 [Helm 规范](../helm-specification/)对 `nginx` 文件夹中的文件进行编辑,完成编辑后进行保存。
+
+## 创建索引文件(可选)
+
+要在 KubeSphere 中使用 HTTP 或 HTTPS URL 添加仓库,您需要事先向对象存储上传一个 `index.yaml` 文件。在 `nginx` 的上一级目录中使用 Helm 执行以下命令,创建索引文件。
+
+```bash
+helm repo index .
+```
+
+```bash
+$ ls
+index.yaml nginx
+```
+
+{{< notice note >}}
+
+- 如果仓库 URL 是 S3 格式,您向仓库添加应用时会自动在对象存储中创建索引文件。
+
+- 有关何如向 KubeSphere 添加仓库的更多信息,请参见[导入 Helm 仓库](../../../workspace-administration/app-repository/import-helm-repository/)。
+
+{{</ notice >}}
+
+## 打包 Chart
+
+前往 `nginx` 的上一级目录,执行以下命令打包您的 Chart,这会创建一个 .tgz 包。
+
+```bash
+helm package nginx
+```
+
+```bash
+$ ls
+nginx nginx-0.1.0.tgz
+```
+
+## 上传您的应用
+
+现在您已经准备好了基于 Helm 的应用,您可以将它上传至 KubeSphere 并在平台上进行测试。
+
+## 另请参见
+
+[Helm 规范](../helm-specification/)
+
+[导入 Helm 仓库](../../../workspace-administration/app-repository/import-helm-repository/)
+
diff --git a/content/zh/docs/v3.4/application-store/app-developer-guide/helm-specification.md b/content/zh/docs/v3.4/application-store/app-developer-guide/helm-specification.md
new file mode 100644
index 000000000..c33f28596
--- /dev/null
+++ b/content/zh/docs/v3.4/application-store/app-developer-guide/helm-specification.md
@@ -0,0 +1,131 @@
+---
+title: "Helm 规范"
+keywords: 'Kubernetes, KubeSphere, Helm, 规范'
+description: '了解 Chart 结构和规范。'
+linkTitle: "Helm 规范"
+weight: 14420
+---
+
+Helm Chart 是一种打包格式。Chart 是一个描述一组 Kubernetes 相关资源的文件集合。有关更多信息,请参见 [Helm 文档](https://helm.sh/zh/docs/topics/charts/)。
+
+## 结构
+
+Chart 的所有相关文件都存储在一个目录中,该目录通常包含:
+
+```text
+chartname/
+ Chart.yaml # 包含 Chart 基本信息(例如版本和名称)的 YAML 文件。
+ LICENSE # (可选)包含 Chart 许可证的纯文本文件。
+ README.md # (可选)应用说明和使用指南。
+ values.yaml # 该 Chart 的默认配置值。
+ values.schema.json # (可选)向 values.yaml 文件添加结构的 JSON Schema。
+ charts/ # 一个目录,包含该 Chart 所依赖的任意 Chart。
+ crds/ # 定制资源定义。
+ templates/ # 模板的目录,若提供相应值便可以生成有效的 Kubernetes 配置文件。
+ templates/NOTES.txt # (可选)包含使用说明的纯文本文件。
+```
+
+## Chart.yaml 文件
+
+您必须为 Chart 提供 `chart.yaml` 文件。下面是一个示例文件,每个字段都有说明。
+
+```yaml
+apiVersion: (必需)Chart API 版本。
+name: (必需)Chart 名称。
+version: (必需)版本,遵循 SemVer 2 标准。
+kubeVersion: (可选)兼容的 Kubernetes 版本,遵循 SemVer 2 标准。
+description: (可选)对应用的一句话说明。
+type: (可选)Chart 的类型。
+keywords:
+ - (可选)关于应用的关键字列表。
+home: (可选)应用的 URL。
+sources:
+ - (可选)应用源代码的 URL 列表。
+dependencies: (可选)Chart 必要条件的列表。
+ - name: Chart 的名称,例如 nginx。
+ version: Chart 的版本,例如 "1.2.3"。
+ repository: 仓库 URL ("https://example.com/charts") 或别名 ("@repo-name")。
+ condition: (可选)解析为布尔值的 YAML 路径,用于启用/禁用 Chart (例如 subchart1.enabled)。
+ tags: (可选)
+ - 用于将 Chart 分组,一同启用/禁用。
+ import-values: (可选)
+ - ImportValues 保存源值到待导入父键的映射。每一项可以是字符串或者一对子/父子列表项。
+ alias: (可选)Chart 要使用的别名。当您要多次添加同一个 Chart 时,它会很有用。
+maintainers: (可选)
+ - name: (必需)维护者姓名。
+ email: (可选)维护者电子邮件。
+ url: (可选)维护者 URL。
+icon: (可选)要用作图标的 SVG 或 PNG 图片的 URL。
+appVersion: (可选)应用版本。不需要是 SemVer。
+deprecated: (可选,布尔值)该 Chart 是否已被弃用。
+annotations:
+ example: (可选)按名称输入的注解列表。
+```
+
+{{< notice note >}}
+
+- `dependencies` 字段用于定义 Chart 依赖项,`v1` Chart 的依赖项都位于单独文件 `requirements.yaml` 中。有关更多信息,请参见 [Chart 依赖项](https://helm.sh/zh/docs/topics/charts/#chart-dependency)。
+- `type` 字段用于定义 Chart 的类型。允许的值有 `application` 和 `library`。有关更多信息,请参见 [Chart 类型](https://helm.sh/zh/docs/topics/charts/#chart-types)。
+
+{{</ notice >}}
+
+## Values.yaml 和模板
+
+Helm Chart 模板采用 [Go 模板语言](https://golang.org/pkg/text/template/)编写并存储在 Chart 的 `templates` 文件夹。有两种方式可以为模板提供值:
+
+1. 在 Chart 中创建一个包含可供引用的默认值的 `values.yaml` 文件。
+2. 创建一个包含必要值的 YAML 文件,在命令行使用 `helm install` 命令时指定该文件,示例见下文。
+
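+下面是第二种方式的一个简单示例(发布名称与文件名仅为示意):
+
+```bash
+# 使用自定义 values 文件安装本地 Chart
+helm install my-nginx ./nginx -f custom-values.yaml
+```
+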
+下面是 `templates` 文件夹中模板的示例。
+
+```yaml
+apiVersion: v1
+kind: ReplicationController
+metadata:
+ name: deis-database
+ namespace: deis
+ labels:
+ app.kubernetes.io/managed-by: deis
+spec:
+ replicas: 1
+ selector:
+ app.kubernetes.io/name: deis-database
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: deis-database
+ spec:
+ serviceAccount: deis-database
+ containers:
+ - name: deis-database
+ image: {{.Values.imageRegistry}}/postgres:{{.Values.dockerTag}}
+ imagePullPolicy: {{.Values.pullPolicy}}
+ ports:
+ - containerPort: 5432
+ env:
+ - name: DATABASE_STORAGE
+ value: {{default "minio" .Values.storage}}
+```
+
+上述示例在 Kubernetes 中定义 ReplicationController 模板,其中引用的一些值已在 `values.yaml` 文件中进行定义。
+
+- `imageRegistry`:Docker 镜像仓库。
+- `dockerTag`:Docker 镜像标签 (tag)。
+- `pullPolicy`:镜像拉取策略。
+- `storage`:存储后端,默认为 `minio`。
+
+下面是 `values.yaml` 文件的示例:
+
+```text
+imageRegistry: "quay.io/deis"
+dockerTag: "latest"
+pullPolicy: "Always"
+storage: "s3"
+```
+
+## 参考
+
+[Helm 文档](https://helm.sh/zh/docs/)
+
+[Chart](https://helm.sh/zh/docs/topics/charts/)
+
diff --git a/content/zh/docs/v3.4/application-store/app-lifecycle-management.md b/content/zh/docs/v3.4/application-store/app-lifecycle-management.md
new file mode 100644
index 000000000..0089397d4
--- /dev/null
+++ b/content/zh/docs/v3.4/application-store/app-lifecycle-management.md
@@ -0,0 +1,230 @@
+---
+title: "应用程序生命周期管理"
+keywords: 'Kubernetes, KubeSphere, 应用商店'
+description: '您可以跨整个生命周期管理应用,包括提交、审核、测试、发布、升级和下架。'
+linkTitle: '应用程序生命周期管理'
+weight: 14100
+---
+
+KubeSphere 集成了 [OpenPitrix](https://github.com/openpitrix/openpitrix)(一个跨云管理应用程序的开源平台)来构建应用商店,管理应用程序的整个生命周期。应用商店支持两种应用程序部署方式:
+
+- **应用模板**:这种方式让开发者和独立软件供应商 (ISV) 能够与企业空间中的用户共享应用程序。您也可以在企业空间中导入第三方应用仓库。
+- **自制应用**:这种方式帮助用户使用多个微服务来快速构建一个完整的应用程序。KubeSphere 让用户可以选择现有服务或者创建新的服务,用于在一站式控制台上创建自制应用。
+
+本教程使用 [Redis](https://redis.io/) 作为示例应用程序,演示如何进行应用全生命周期管理,包括提交、审核、测试、发布、升级和下架。
+
+## 视频演示
+
+
+
+## 准备工作
+
+- 您需要启用 [KubeSphere 应用商店 (OpenPitrix)](../../pluggable-components/app-store/)。
+- 您需要创建一个企业空间、一个项目以及一个用户 (`project-regular`)。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../quick-start/create-workspace-and-project/)。
+
+## 动手实验
+
+### 步骤一:创建自定义角色和帐户
+
+首先,您需要创建两个帐户,一个是 ISV 的帐户 (`isv`),另一个是应用技术审核员的帐户 (`reviewer`)。
+
+1. 使用 `admin` 帐户登录 KubeSphere 控制台。点击左上角的**平台管理**,选择**访问控制**。转到**平台角色**,点击**创建**。
+
+2. 为角色设置一个名称,例如 `app-review`,然后点击**编辑权限**。
+
+3. 转到**应用管理**,选择权限列表中的**应用商店管理**和**应用商店查看**,然后点击**确定**。
+
+ {{< notice note >}}
+
+ 被授予 `app-review` 角色的用户能够查看平台上的应用商店并管理应用,包括审核和下架应用。
+
+ {{</ notice >}}
+
+4. 创建角色后,您需要创建一个用户,并授予 `app-review` 角色。转到**用户**,点击**创建**。输入必需的信息,然后点击**确定**。
+
+5. 再创建另一个用户 `isv`,把 `platform-regular` 角色授予它。
+
+6. 邀请上面创建好的两个帐户进入现有的企业空间,例如 `demo-workspace`,并授予它们 `workspace-admin` 角色。
+
+### 步骤二:上传和提交应用程序
+
+1. 以 `isv` 身份登录控制台,转到您的企业空间。您需要上传示例应用 Redis 至该企业空间,供后续使用。首先,下载应用 [Redis 11.3.4](https://github.com/kubesphere/tutorial/raw/master/tutorial%205%20-%20app-store/redis-11.3.4.tgz),然后转到**应用模板**,点击**上传模板**。
+
+ {{< notice note >}}
+
+ 在本示例中,稍后会上传新版本的 Redis 来演示升级功能。
+
+ {{</ notice >}}
+
+2. 在弹出的对话框中,点击**上传 Helm Chart** 上传 Chart 文件。点击**确定**继续。
+
+3. **应用信息**下显示了应用的基本信息。要上传应用的图标,点击**上传图标**。您也可以跳过上传图标,直接点击**确定**。
+
+ {{< notice note >}}
+
+ 应用图标支持的最大分辨率为 96 × 96 像素。
+
+ {{</ notice >}}
+
+4. 成功上传后,模板列表中会列出应用,状态为**开发中**,意味着该应用正在开发中。上传的应用对同一企业空间下的所有成员均可见。
+
+5. 点击列表中的 Redis 进入应用模板详情页面。您可以点击**编辑**来编辑该应用的基本信息。
+
+6. 您可以通过在弹出窗口中指定字段来自定义应用的基本信息。
+
+7. 点击**确定**保存更改,然后您可以通过将其部署到 Kubernetes 来测试该应用程序。点击待提交版本展开菜单,选择**安装**。
+
+ {{< notice note >}}
+
+ 如果您不想测试应用,可以直接提交审核。但是,建议您先测试应用部署和功能,再提交审核,尤其是在生产环境中。这会帮助您提前发现问题,加快审核过程。
+
+ {{</ notice >}}
+
+8. 选择要部署应用的集群和项目,为应用设置不同的配置,然后点击**安装**。
+
+ {{< notice note >}}
+
+ 有些应用可以在表单中设置所有配置后进行部署。您可以使用拨动开关查看它的 YAML 文件,文件中包含了需要在表单中指定的所有参数。
+
+ {{</ notice >}}
+
+9. 稍等几分钟,切换到**应用实例**选项卡。您会看到 Redis 已经部署成功。
+
+10. 测试应用并且没有发现问题后,便可以点击**提交发布**,提交该应用程序进行发布。
+
+ {{< notice note >}}
+
+版本号必须以数字开头并包含小数点。
+
+{{</ notice >}}
+
+11. 应用提交后,它的状态会变成**已提交**。现在,应用审核员便可以进行审核。
+
+
+### 步骤三:发布应用程序
+
+1. 登出控制台,然后以 `reviewer` 身份重新登录 KubeSphere。点击左上角的**平台管理**,选择**应用商店管理**。在**应用发布**页面,上一步中提交的应用会显示在**待发布**选项卡下。
+
+2. 点击该应用进行审核,在弹出窗口中查看应用信息、介绍、配置文件和更新日志。
+
+3. 审核员的职责是决定该应用是否符合发布至应用商店的标准。点击**通过**来批准,或者点击**拒绝**来拒绝提交的应用。
+
+### 步骤四:发布应用程序至应用商店
+
+应用获批后,`isv` 便可以将 Redis 应用程序发布至应用商店,让平台上的所有用户都能找到并部署该应用程序。
+
+1. 登出控制台,然后以 `isv` 身份重新登录 KubeSphere。转到您的企业空间,点击**应用模板**页面上的 Redis。在详情页面上展开版本菜单,然后点击**发布到商店**。在弹出的提示框中,点击**确定**以确认操作。
+
+2. 在**应用发布**下,您可以查看应用状态。**已上架**意味着它在应用商店中可用。
+
+3. 点击**在商店查看**转到应用商店的**应用信息**页面,或者点击左上角的**应用商店**也可以查看该应用。
+
+ {{< notice note >}}
+
+ 您可能会在应用商店看到两个 Redis 应用,其中一个是 KubeSphere 中的内置应用。请注意,新发布的应用会显示在应用商店列表的开头。
+
+ {{</ notice >}}
+
+4. 现在,企业空间中的用户可以从应用商店中部署 Redis。要将应用部署至 Kubernetes,请点击应用转到**应用信息**页面,然后点击**安装**。
+
+ {{< notice note >}}
+
+ 如果您在部署应用时遇到问题,**状态**栏显示为**失败**,您可以将光标移至**失败**图标上方查看错误信息。
+
+ {{</ notice >}}
+
+### 步骤五:创建应用分类
+
+`reviewer` 可以根据不同类型应用程序的功能和用途创建多个分类。这类似于设置标签,可以在应用商店中将分类用作筛选器,例如大数据、中间件和物联网等。
+
+1. 以 `reviewer` 身份登录 KubeSphere。要创建分类,请转到**应用商店管理**页面,再点击**应用分类**页面中的
。
+
+2. 在弹出的对话框中设置分类名称和图标,然后点击**确定**。对于 Redis,您可以将**分类名称**设置为 `Database`。
+
+ {{< notice note >}}
+
+ 通常,应用审核员会提前创建必要的分类,ISV 会选择应用所属的分类,然后提交审核。新创建的分类中没有应用。
+
+ {{</ notice >}}
+
+3. 创建好分类后,您可以给您的应用分配分类。在**未分类**中选择 Redis,点击**调整分类**。
+
+4. 在弹出对话框的下拉列表中选择分类 (**Database**) 然后点击**确定**。
+
+5. 该应用便会显示在对应分类中。
+
+
+### 步骤六:添加新版本
+
+要让企业空间用户能够更新应用,您需要先向 KubeSphere 添加新的应用版本。按照下列步骤为示例应用添加新版本。
+
+1. 再次以 `isv` 身份登录 KubeSphere,点击**应用模板**,点击列表中的 Redis 应用。
+
+2. 下载 [Redis 12.0.0](https://github.com/kubesphere/tutorial/raw/master/tutorial%205%20-%20app-store/redis-12.0.0.tgz),这是 Redis 的一个新版本,本教程用它来演示。在**版本**选项卡中点击右侧的**上传新版本**,上传您刚刚下载的文件包。
+
+3. 点击**上传 Helm Chart**,上传完成后点击**确定**。
+
+4. 新的应用版本会显示在版本列表中。您可以通过点击来展开菜单并测试新的版本。另外,您也可以提交审核并发布至应用商店,操作步骤和上面说明的一样。
+
+
+### 步骤七:升级
+
+新版本发布至应用商店后,所有用户都可以升级该应用程序至新版本。
+
+{{< notice note >}}
+
+要完成下列步骤,您必须先部署应用的一个旧版本。本示例中,Redis 11.3.4 已经部署至项目 `demo-project`,它的新版本 12.0.0 也已经发布至应用商店。
+
+{{</ notice >}}
+
+1. 以 `project-regular` 身份登录 KubeSphere,进入项目的**应用**页面,点击要升级的应用。
+
+2. 点击**更多操作**,在下拉菜单中选择**编辑设置**。
+
+3. 在弹出窗口中,您可以查看应用配置 YAML 文件。在右侧的下拉列表中选择新版本,您可以自定义新版本的 YAML 文件。在本教程中,点击**更新**,直接使用默认配置。
+
+ {{< notice note >}}
+
+ 您可以在右侧的下拉列表中选择与左侧相同的版本,通过 YAML 文件自定义当前应用的配置。
+
+ {{</ notice >}}
+
+4. 在**应用**页面,您会看到应用正在升级中。升级完成后,应用状态会变成**运行中**。
+
+
+### 步骤八:下架应用程序
+
+您可以选择将应用完全从应用商店下架,或者下架某个特定版本。
+
+1. 以 `reviewer` 身份登录 KubeSphere。点击左上角的**平台管理**,选择**应用商店管理**。在**应用商店**页面,点击 Redis。
+
+2. 在详情页面,点击**下架应用**,在弹出的对话框中选择**确定**,确认将应用从应用商店下架的操作。
+
+ {{< notice note >}}
+
+ 将应用从应用商店下架不影响正在使用该应用的租户。
+
+ {{</ notice >}}
+
+3. 要让应用再次在应用商店可用,点击**上架应用**。
+
+4. 要下架应用的特定版本,展开版本菜单,点击**下架版本**。在弹出的对话框中,点击**确定**以确认操作。
+
+ {{< notice note >}}
+
+ 下架应用版本后,该版本在应用商店将不可用。下架应用版本不影响正在使用该版本的租户。
+
+ {{</ notice >}}
+
+5. 要让应用版本再次在应用商店可用,点击**上架版本**。
+
+
+
+
+
+
+
+
+
diff --git a/content/zh/docs/v3.4/application-store/built-in-apps/_index.md b/content/zh/docs/v3.4/application-store/built-in-apps/_index.md
new file mode 100644
index 000000000..49cf0ae27
--- /dev/null
+++ b/content/zh/docs/v3.4/application-store/built-in-apps/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "内置应用"
+weight: 14200
+
+_build:
+ render: false
+---
diff --git a/content/zh/docs/v3.4/application-store/built-in-apps/chaos-mesh-app.md b/content/zh/docs/v3.4/application-store/built-in-apps/chaos-mesh-app.md
new file mode 100644
index 000000000..6393486d0
--- /dev/null
+++ b/content/zh/docs/v3.4/application-store/built-in-apps/chaos-mesh-app.md
@@ -0,0 +1,93 @@
+---
+title: "在 KubeSphere 中部署 Chaos Mesh"
+keywords: 'KubeSphere, Kubernetes, Chaos Mesh, Chaos Engineering'
+description: '了解如何在 KubeSphere 中部署 Chaos Mesh 并进行混沌实验。'
+linkTitle: "部署 Chaos Mesh"
+---
+
+[Chaos Mesh](https://github.com/chaos-mesh/chaos-mesh) 是一个开源的云原生混沌工程平台,提供丰富的故障模拟类型,具有强大的故障场景编排能力,方便用户在开发测试中以及生产环境中模拟现实世界中可能出现的各类异常,帮助用户发现系统潜在的问题。
+
+
+
+本教程演示了如何在 KubeSphere 上部署 Chaos Mesh 进行混沌实验。
+
+## **准备工作**
+
+* 部署 [KubeSphere 应用商店](../../../pluggable-components/app-store/)。
+* 您需要为本教程创建一个企业空间、一个项目和两个帐户(ws-admin 和 project-regular)。帐户 ws-admin 必须在企业空间中被赋予 workspace-admin 角色,帐户 project-regular 必须被邀请至项目中赋予 operator 角色。若还未创建好,请参考[创建企业空间、项目、用户和角色](https://kubesphere.io/zh/docs/quick-start/create-workspace-and-project/)。
+
+
+## **开始混沌实验**
+
+### 步骤 1:部署 Chaos Mesh
+
+1. 使用 `project-regular` 身份登录,在应用商店中搜索 `chaos-mesh`,点击搜索结果进入应用。
+
+ 
+
+
+2. 进入应用信息页后,点击右上角**安装**按钮。
+
+ 
+
+3. 进入应用设置页面,设置应用**名称**(默认会随机生成一个唯一的名称),选择安装的**位置**(对应的 Namespace)和**版本**,然后点击右上角的**下一步**。
+
+ 
+
+4. 根据实际需要编辑 `values.yaml` 文件,也可以直接点击**安装**使用默认配置。
+
+ 
+
+5. 等待 Chaos Mesh 开始正常运行。
+
+ 
+
+6. 访问**应用负载**, 可以看到 Chaos Mesh 创建的三个部署。
+
+ 
+
+### 步骤 2:访问 Chaos Mesh
+
+1. 前往**应用负载**下服务页面,复制 chaos-dashboard 的 **NodePort**。
+
+ 
+
+2. 您可以通过 `${NodeIP}:${NODEPORT}` 访问 Chaos Dashboard,并参考[管理用户权限](https://chaos-mesh.org/zh/docs/manage-user-permissions/)文档生成 Token,登录 Chaos Dashboard。
+
+ 
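+
+   也可以直接使用 kubectl 查询该服务的 NodePort(命名空间以实际安装位置为准,示意):
+
+   ```bash
+   kubectl -n <命名空间> get svc chaos-dashboard
+   ```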
+
+### 步骤 3:创建混沌实验
+
+1. 在开始混沌实验之前,需要先确定并部署实验目标,例如测试某应用在网络延迟下的工作状态。本文使用 demo 应用 `web-show` 作为测试目标,用于观测系统的网络延迟。您可以使用以下命令部署 `web-show`:
+
+ ```bash
+ curl -sSL https://mirrors.chaos-mesh.org/latest/web-show/deploy.sh | bash
+ ```
+
+ {{< notice note >}}
+
+ web-show 应用页面上可以直接观察到自身到 kube-system 命名空间下 Pod 的网络延迟。
+
+ {{</ notice >}}
+
+2. 在浏览器中访问 `${NodeIP}:8081`,打开 **web-show** 应用。
+
+ 
+
+3. 登录 Chaos Dashboard 创建混沌实验。为了更好地观察实验效果,这里只创建一个独立的混沌实验,类型选择**网络攻击**,用于模拟网络延迟的场景:
+
+ 
+
+ 实验范围设置为 web-show 应用:
+
+ 
+
+4. 提交混沌实验后,查看实验状态:
+
+ 
+
+5. 访问 web-show 应用观察实验结果:
+
+ 
+
+更多详情参考 [Chaos Mesh 使用文档](https://chaos-mesh.org/zh/docs/)。
\ No newline at end of file
diff --git a/content/zh/docs/v3.4/application-store/built-in-apps/etcd-app.md b/content/zh/docs/v3.4/application-store/built-in-apps/etcd-app.md
new file mode 100644
index 000000000..ca74dfa5a
--- /dev/null
+++ b/content/zh/docs/v3.4/application-store/built-in-apps/etcd-app.md
@@ -0,0 +1,60 @@
+---
+title: "在 KubeSphere 中部署 etcd"
+keywords: 'Kubernetes, KubeSphere, etcd, 应用商店'
+description: '了解如何从 KubeSphere 应用商店中部署 etcd 并访问服务。'
+linkTitle: "在 KubeSphere 中部署 etcd"
+weight: 14210
+---
+
+[etcd](https://etcd.io/) 是一个采用 Go 语言编写的分布式键值存储库,用来存储供分布式系统或机器集群访问的数据。在 Kubernetes 中,etcd 是服务发现的后端,存储集群状态和配置。
+
+本教程演示如何从 KubeSphere 应用商店部署 etcd。
+
+## 准备工作
+
+- 请确保[已启用 OpenPitrix 系统](../../../pluggable-components/app-store/)。
+- 您需要创建一个企业空间、一个项目和一个用户帐户 (`project-regular`) 供本教程操作使用。该帐户需要是平台普通用户,并邀请至项目中赋予 `operator` 角色作为项目操作员。本教程中,请以 `project-regular` 身份登录控制台,在企业空间 `demo-workspace` 中的 `demo-project` 项目中进行操作。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/)。
+
+## 动手实验
+
+### 步骤 1:从应用商店中部署 etcd
+
+1. 在 `demo-project` 项目的**概览**页面,点击左上角的**应用商店**。
+
+2. 找到 etcd,点击**应用信息**页面上的**安装**。
+
+3. 设置名称并选择应用版本。请确保将 etcd 部署在 `demo-project` 中,点击**下一步**。
+
+4. 在**应用设置**页面,指定 etcd 的持久卷大小,点击**安装**。
+
+ {{< notice note >}}
+
+   要指定 etcd 的更多值,请使用右上角的**编辑 YAML** 查看 YAML 格式的应用清单文件,并编辑其配置。
+
+   {{</ notice >}}
+
+5. 在**应用**页面的**基于模板的应用**选项卡下,稍等片刻待 etcd 启动并运行。
+
+
+### 步骤 2:访问 etcd 服务
+
+应用部署后,您可以在 KubeSphere 控制台上使用 etcdctl 命令行工具与 etcd 服务器进行交互,直接访问 etcd。
+
+1. 在**工作负载**的**有状态副本集**选项卡中,点击 etcd 的服务名称。
+
+2. 在**容器组**下,展开菜单查看容器详情,然后点击**终端**图标。
+
+3. 在终端中,您可以直接读写数据。例如,分别执行以下两个命令。
+
+ ```bash
+ etcdctl set /name kubesphere
+ ```
+
+ ```bash
+ etcdctl get /name
+ ```
+
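+    若命令执行成功,输出通常类似如下(此处假设该应用默认使用 etcd v2 API,与上述 set/get 命令一致;输出仅为示例):
+
+    ```bash
+    $ etcdctl set /name kubesphere
+    kubesphere
+    $ etcdctl get /name
+    kubesphere
+    ```
+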
+4. KubeSphere 集群内的客户端可以通过 `
,从下拉菜单中选择操作:
+
+- **编辑**:编辑项目网关的配置。
+- **关闭**:关闭项目网关。
+
+{{< notice note >}}
+
+如果在创建集群网关之前存在项目网关,则项目网关地址可能会在集群网关地址和项目网关地址之间切换。建议您只使用集群网关或项目网关。
+
+{{</ notice >}}
+
+关于如何创建项目网关的更多信息,请参见[项目网关](../../../project-administration/project-gateway/)。
+
diff --git a/content/zh/docs/v3.4/cluster-administration/cluster-settings/cluster-visibility-and-authorization.md b/content/zh/docs/v3.4/cluster-administration/cluster-settings/cluster-visibility-and-authorization.md
new file mode 100644
index 000000000..0b5c6b61c
--- /dev/null
+++ b/content/zh/docs/v3.4/cluster-administration/cluster-settings/cluster-visibility-and-authorization.md
@@ -0,0 +1,53 @@
+---
+title: "集群可见性和授权"
+keywords: "集群可见性, 集群管理"
+description: "了解如何设置集群可见性和授权。"
+linkTitle: "集群可见性和授权"
+weight: 8610
+---
+
+在 KubeSphere 中,您可以通过授权将一个集群分配给多个企业空间,让企业空间资源都可以在该集群上运行。同时,一个企业空间也可以关联多个集群。拥有必要权限的企业空间用户可以使用分配给该企业空间的集群来创建多集群项目。
+
+本指南演示如何设置集群可见性。
+
+## 准备工作
+* 您需要启用[多集群功能](../../../multicluster-management/)。
+* 您需要有一个企业空间和一个拥有创建企业空间权限的帐户,例如 `ws-manager`。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/)。
+
+## 设置集群可见性
+
+### 在创建企业空间时选择可用集群
+
+1. 使用拥有创建企业空间权限的用户登录 KubeSphere,例如 `ws-manager`。
+
+2. 点击左上角的**平台管理**,选择**访问控制**。在左侧导航栏选择**企业空间**,然后点击**创建**。
+
+3. 输入企业空间的基本信息,点击**下一步**。
+
+4. 在**集群设置**页面,您可以看到可用的集群列表,选择要分配给企业空间的集群并点击**创建**。
+
+5. 创建企业空间后,拥有必要权限的企业空间成员可以创建资源,在关联集群上运行。
+
+ {{< notice warning >}}
+
+尽量不要在主集群上创建资源,避免负载过高导致多集群稳定性下降。
+
+{{</ notice >}}
+
+### 在创建企业空间后设置集群可见性
+
+创建企业空间后,您可以通过授权向该企业空间分配其他集群,或者将集群从企业空间中解绑。按照以下步骤调整集群可见性。
+
+1. 使用拥有集群管理权限的帐户登录 KubeSphere,例如 `admin`。
+
+2. 点击左上角的**平台管理**,选择**集群管理**。从列表中选择一个集群查看集群信息。
+
+3. 在左侧导航栏找到**集群设置**,选择**集群可见性**。
+
+4. 您可以看到已授权企业空间的列表,这意味着所有这些企业空间中的资源都能使用当前集群。
+
+5. 点击**编辑可见性**设置集群可见性。您可以选择让新的企业空间使用该集群,或者将该集群从企业空间解绑。
+
+### 将集群设置为公开集群
+
+您可以打开**设置为公开集群**,以便平台用户访问该集群,并在该集群上创建和调度资源。
diff --git a/content/zh/docs/v3.4/cluster-administration/cluster-settings/log-collections/_index.md b/content/zh/docs/v3.4/cluster-administration/cluster-settings/log-collections/_index.md
new file mode 100644
index 000000000..bce4fe493
--- /dev/null
+++ b/content/zh/docs/v3.4/cluster-administration/cluster-settings/log-collections/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "日志接收器"
+weight: 8620
+
+_build:
+ render: false
+---
\ No newline at end of file
diff --git a/content/zh/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-es-as-receiver.md b/content/zh/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-es-as-receiver.md
new file mode 100644
index 000000000..70a1807f8
--- /dev/null
+++ b/content/zh/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-es-as-receiver.md
@@ -0,0 +1,34 @@
+---
+title: "添加 Elasticsearch 作为接收器"
+keywords: 'Kubernetes, 日志, Elasticsearch, Pod, 容器, Fluentbit, 输出'
+description: '了解如何添加 Elasticsearch 来接收容器日志、资源事件或审计日志。'
+linkTitle: "添加 Elasticsearch 作为接收器"
+weight: 8622
+---
+您可以在 KubeSphere 中使用 Elasticsearch、Kafka 和 Fluentd 日志接收器。本教程演示如何添加 Elasticsearch 接收器。
+
+## 准备工作
+
+- 您需要一个被授予**集群管理**权限的用户。例如,您可以直接用 `admin` 用户登录控制台,或创建一个具有**集群管理**权限的角色然后将此角色授予一个用户。
+- 添加日志接收器前,您需要启用组件 `logging`、`events` 或 `auditing`。有关更多信息,请参见[启用可插拔组件](../../../../pluggable-components/)。本教程启用 `logging` 作为示例。
+
+## 添加 Elasticsearch 作为接收器
+
+1. 以 `admin` 身份登录 KubeSphere 的 Web 控制台。点击左上角的**平台管理**,然后选择**集群管理**。
+
+ {{< notice note >}}
+
+如果您启用了[多集群功能](../../../../multicluster-management/),您可以选择一个集群。
+
+{{</ notice >}}
+
+2. 在左侧导航栏,选择**集群设置**下的**日志接收器**。
+
+3. 点击**添加日志接收器**并选择 **Elasticsearch**。
+
+4. 提供 Elasticsearch 服务地址和端口信息。
+
+5. Elasticsearch 会显示在**日志接收器**页面的接收器列表中,状态为**收集中**。
+
+6. 若要验证 Elasticsearch 是否从 Fluent Bit 接收日志,从右下角的**工具箱**中点击**日志查询**,在控制台中搜索日志。有关更多信息,请参阅[日志查询](../../../../toolbox/log-query/)。
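+
+此外,您也可以通过 kubectl 检查日志接收器对应的 Output 自定义资源是否已创建(以下命令假设日志组件部署在默认的 `kubesphere-logging-system` 命名空间,资源组名 `logging.kubesphere.io` 以实际环境为准):
+
+```bash
+kubectl -n kubesphere-logging-system get outputs.logging.kubesphere.io
+```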
+
diff --git a/content/zh/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-fluentd-as-receiver.md b/content/zh/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-fluentd-as-receiver.md
new file mode 100644
index 000000000..dc90d4e52
--- /dev/null
+++ b/content/zh/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-fluentd-as-receiver.md
@@ -0,0 +1,154 @@
+---
+title: "添加 Fluentd 作为接收器"
+keywords: 'Kubernetes, 日志, Fluentd, 容器组, 容器, Fluentbit, 输出'
+description: '了解如何添加 Fluentd 来接收容器日志、资源事件或审计日志。'
+linkTitle: "添加 Fluentd 作为接收器"
+weight: 8624
+---
+您可以在 KubeSphere 中使用 Elasticsearch、Kafka 和 Fluentd 日志接收器。本教程演示:
+
+- 创建 Fluentd 部署以及对应的服务(Service)和配置字典(ConfigMap)。
+- 添加 Fluentd 作为日志接收器以接收来自 Fluent Bit 的日志,并输出为标准输出。
+- 验证 Fluentd 能否成功接收日志。
+
+## 准备工作
+
+- 您需要一个被授予**集群管理**权限的用户。例如,您可以直接用 `admin` 用户登录控制台,或创建一个具有**集群管理**权限的角色然后将此角色授予一个用户。
+
+- 添加日志接收器前,您需要启用组件 `logging`、`events` 或 `auditing`。有关更多信息,请参见[启用可插拔组件](../../../../pluggable-components/)。本教程启用 `logging` 作为示例。
+
+## 步骤 1:创建 Fluentd 部署
+
+由于内存消耗低,KubeSphere 选择 Fluent Bit,并以守护进程集(DaemonSet)的形式将其部署在每个节点上收集容器日志。此外,Fluentd 支持众多插件,因此 Fluentd 会以部署(Deployment)的形式在 KubeSphere 中创建,将从 Fluent Bit 接收到的日志发送到多个目标,例如 S3、MongoDB、Cassandra、MySQL、syslog 和 Splunk 等。
+
+执行以下命令:
+
+{{< notice note >}}
+
+- 以下命令将在默认命名空间 `default` 中创建 Fluentd 部署、服务和配置字典,并为该 Fluentd 配置字典添加 `filter` 以排除 `default` 命名空间中的日志,避免 Fluent Bit 和 Fluentd 重复日志收集。
+- 如果您想要将 Fluentd 部署至其他命名空间,请修改以下命令中的命名空间名称。
+
+{{</ notice >}}
+
+```yaml
+cat <
+```
。
+
+1. 点击下拉菜单中的**编辑**,根据与创建时相同的步骤来编辑告警策略。点击**消息设置**页面的**确定**保存更改。
+
+2. 点击下拉菜单中的**删除**以删除告警策略。
+
+## 查看告警策略
+
+在**告警策略**页面,点击一个告警策略的名称查看其详情,包括告警规则和告警历史。您还可以看到创建告警策略时基于所使用模板的告警规则表达式。
+
+在**监控**下,**告警监控**图显示一段时间内的实际资源使用情况或使用量。**告警消息**显示您在通知中设置的自定义消息。
+
+{{< notice note >}}
+
+您可以点击右上角的
,然后选择**添加子部门**。
+
+2. 在弹出对话框中,输入部门名称(例如`测试二组`),然后点击**确定**。
+
+3. 创建部门后,您可以点击右侧的**添加成员**、**批量导入**或**从其他部门移入**来添加成员。添加成员后,点击该成员进入详情页面,查看其帐号。
+
+4. 您可以点击`测试二组`右侧的
来查看其部门 ID。
+
+5. 点击**标签**选项卡,然后点击**添加标签**来创建标签。若管理界面无**标签**选项卡,请点击加号图标来创建标签。
+
+6. 在弹出对话框中,输入标签名称,例如`组长`。您可以按需指定**可使用人**,点击**确定**完成操作。
+
+7. 创建标签后,您可以点击右侧的**添加部门/成员**或**批量导入**来添加部门或成员。点击**标签详情**进入详情页面,可以查看此标签的 ID。
+
+8. 要查看企业 ID,请点击**我的企业**,在**企业信息**页面查看 ID。
+
+### 步骤 3:在 KubeSphere 控制台配置企业微信通知
+
+您必须在 KubeSphere 控制台提供企业微信的相关 ID 和凭证,以便 KubeSphere 将通知发送至您的企业微信。
+
+1. 使用具有 `platform-admin` 角色的用户(例如,`admin`)登录 KubeSphere Web 控制台。
+
+2. 点击左上角的**平台管理**,选择**平台设置**。
+
+3. 前往**通知管理**下的**通知配置**,选择**企业微信**。
+
+4. 在**服务器设置**下的**企业 ID**、**应用 AgentId** 以及**应用 Secret** 中分别输入您的企业 ID、应用 AgentId 以及应用 Secret。
+
+5. 在**接收设置**中,从下拉列表中选择**用户 ID**、**部门 ID** 或者**标签 ID**,输入对应 ID 后点击**添加**。您可以添加多个 ID。
+
+6. 勾选**通知条件**左侧的复选框即可设置通知条件。
+
+ - **标签**:告警策略的名称、级别或监控目标。您可以选择一个标签或者自定义标签。
+   - **操作符**:标签与值的匹配关系,包括**包含值**、**不包含值**、**存在**和**不存在**。
+ - **值**:标签对应的值。
+ {{< notice note >}}
+ - 操作符**包含值**和**不包含值**需要添加一个或多个标签值。使用回车分隔多个值。
+ - 操作符**存在**和**不存在**判断某个标签是否存在,无需设置标签值。
+   {{</ notice >}}
+
+ 您可以点击**添加**来添加多个通知条件,或点击通知条件右侧的
。
+
+2. 转到**仓库**页面,您可以看到 Nexus 提供了三种仓库类型。
+
+ - `proxy`:远程仓库代理,用于下载资源并将其作为缓存存储在 Nexus 上。
+
+ - `hosted`:在 Nexus 上存储制品的仓库。
+
+ - `group`:一组已配置好的 Nexus 仓库。
+
+3. 点击仓库查看它的详细信息。例如:点击 **maven-public** 进入详情页面,并查看它的 **URL**。
+
+### 步骤 2:在 GitHub 仓库修改 `pom.xml`
+
+1. 登录 GitHub,Fork [示例仓库](https://github.com/devops-ws/learn-pipeline-java)到您的 GitHub 帐户。
+
+2. 在您的 **learn-pipeline-java** GitHub 仓库中,点击根目录下的文件 `pom.xml`。
+
+3. 在文件中点击
+
+| 代码仓库 | 参数 |
+| --- | --- |
+| GitHub | 凭证:选择访问代码仓库的凭证。 |
+| GitLab | |
+| Bitbucket | |
+| Git | |
+
+| 参数 | 描述 |
+| --- | --- |
+| 修订版本 | Git 仓库中的 commit ID、分支或标签。例如,master、v1.2.0、0a1b2c3 或 HEAD。 |
+| 清单文件路径 | 设置清单文件路径。例如,config/default。 |
+
+| 参数 | 描述 |
+| --- | --- |
+| 清理资源 | 如果勾选,自动同步时会删除 Git 仓库中不存在的资源。不勾选时,自动同步触发时不会删除集群中的资源。 |
+| 自纠正 | 如果勾选,当检测到 Git 仓库中定义的状态与部署资源有偏差时,将强制应用 Git 仓库中的定义。不勾选时,对部署资源做更改时不会触发自动同步。 |
+
+| 参数 | 描述 |
+| --- | --- |
+| 清理资源 | 如果勾选,同步会删除 Git 仓库中不存在的资源。不勾选时,同步不会删除集群中的资源,而是会显示 out-of-sync。 |
+| 模拟运行 | 模拟同步,不影响最终部署资源。 |
+| 仅执行 Apply | 如果勾选,同步应用资源时会跳过 pre/post 钩子,仅执行 kubectl apply。 |
+| 强制 Apply | 如果勾选,同步时会执行 kubectl apply --force。 |
+
+| 参数 | 描述 |
+| --- | --- |
+| 跳过规范校验 | 跳过 kubectl 验证。执行 kubectl apply 时,增加 --validate=false 标识。 |
+| 自动创建项目 | 在项目不存在的情况下自动为应用程序资源创建项目。 |
+| 最后清理 | 同步操作时,其他资源都完成部署且处于健康状态后,再清理资源。 |
+| 选择性同步 | 仅同步 out-of-sync 状态的资源。 |
+
+| 参数 | 描述 |
+| --- | --- |
+| foreground | 先删除依赖资源,再删除主资源。 |
+| background | 先删除主资源,再删除依赖资源。 |
+| orphan | 删除主资源,留下依赖资源成为孤儿。 |
+
+| 参数 | 描述信息 |
+| --- | --- |
+| 名称 | 持续部署的名称。 |
+| 健康状态 | 持续部署的健康状态。 |
+| 同步状态 | 持续部署的同步状态。 |
+| 部署位置 | 资源部署的集群和项目。 |
+| 更新时间 | 资源更新的时间。 |
以编辑文件。例如,将 `spec.replicas` 的值更改为 `3`。
+
+4. 在页面底部点击 **Commit changes**。
+
+### 检查 webhook 交付
+
+1. 在您仓库的 **Webhooks** 页面,点击 webhook。
+
+2. 点击 **Recent Deliveries**,然后点击一个具体交付记录查看详情。
+
+### 检查流水线
+
+1. 使用 `project-regular` 帐户登录 KubeSphere Web 控制台。转到 DevOps 项目,点击该流水线。
+
+2. 在**运行记录**选项卡,检查提交到远程仓库 `sonarqube` 分支的拉取请求是否触发了新的运行。
+
+3. 转到 `kubesphere-sample-dev` 项目的**容器组**页面,检查 3 个容器组的状态。如果 3 个容器组均为**运行中**状态,表示流水线运行正常。
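+
+    也可以通过 kubectl 快速查看容器组状态(以下命令假设项目名为 `kubesphere-sample-dev`):
+
+    ```bash
+    kubectl -n kubesphere-sample-dev get pods
+    ```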
+
+
+
diff --git a/content/zh/docs/v3.4/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md b/content/zh/docs/v3.4/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md
new file mode 100644
index 000000000..035060869
--- /dev/null
+++ b/content/zh/docs/v3.4/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md
@@ -0,0 +1,92 @@
+---
+title: "使用流水线模板"
+keywords: 'KubeSphere, Kubernetes, Jenkins, 图形化流水线, 流水线模板'
+description: '了解如何在 KubeSphere 上使用流水线模板。'
+linkTitle: "使用流水线模板"
+weight: 11213
+---
+
+KubeSphere 提供图形编辑面板,您可以通过交互式操作定义 Jenkins 流水线的阶段和步骤。KubeSphere 3.4 中提供了内置流水线模板,如 Node.js、Maven 以及 Golang,使用户能够快速创建对应模板的流水线。同时,KubeSphere 3.4 还支持自定义流水线模板,以满足企业不同的需求。
+
+本文档演示如何在 KubeSphere 上使用流水线模板。
+
+## 准备工作
+
+- 您需要有一个企业空间、一个 DevOps 项目和一个用户 (`project-regular`),并已邀请此帐户至 DevOps 项目中且授予 `operator` 角色。如果尚未准备好,请参考[创建企业空间、项目、用户和角色](../../../../quick-start/create-workspace-and-project/)。
+
+- 您需要启用 [KubeSphere DevOps 系统](../../../../pluggable-components/devops/)。
+
+- 您需要[创建流水线](../../../how-to-use/pipelines/create-a-pipeline-using-graphical-editing-panel/)。
+
+## 使用内置流水线模板
+
+下面以 Node.js 为例演示如何使用内置流水线模板。Maven 以及 Golang 流水线模板的使用方法与此类似,可参考本部分内容。
+
+1. 以 `project-regular` 用户登录 KubeSphere 控制台,在左侧导航树,点击 **DevOps 项目**。
+
+2. 在右侧的 **DevOps 项目**页面,点击您创建的 DevOps 项目。
+
+3. 在左侧的导航树,点击**流水线**。
+
+4. 在右侧的**流水线**页面,点击已创建的流水线。
+
+5. 在右侧的**任务状态**页签,点击**编辑流水线**。
+
+
+6. 在**创建流水线**对话框,点击 **Node.js**,然后点击**下一步**。
+
+7. 在**参数设置**页签,按照实际情况设置以下参数,点击**创建**。
+
+ | 参数 | 参数解释 |
+ | ----------- | ------------------------- |
+ | GitURL | 需要克隆的项目仓库的地址。 |
+ | GitRevision | 需要检出的分支。 |
+ | NodeDockerImage | Node.js 的 Docker 镜像版本。 |
+ | InstallScript | 安装依赖项的 Shell 脚本。 |
+ | TestScript | 项目测试的 Shell 脚本。 |
+   | BuildScript | 构建项目的 Shell 脚本。 |
+ | ArtifactsPath | 归档文件所在的路径。 |
+
+8. 在左侧的可视化编辑页面,系统默认已添加一系列步骤,您可以添加步骤或并行阶段。
+
+9. 点击指定阶段,在页面右侧,您可以执行以下操作:
+ - 修改阶段名称。
+ - 删除阶段。
+ - 设置代理类型。
+ - 添加条件。
+ - 编辑或删除某一任务。
+ - 添加步骤或嵌套步骤。
+
+ {{< notice note >}}
+
+ 您还可以按需在流水线模板中自定义步骤和阶段。有关如何使用图形编辑面板的更多信息,请参考[使用图形编辑面板创建流水线](../create-a-pipeline-using-graphical-editing-panel/)。
+
+   {{</ notice >}}
+
+10. 在右侧的**代理**区域,选择代理类型,默认值为 **kubernetes**,点击**确定**。
+
+ | 代理类型 | 说明 |
+ | ----------- | ------------------------- |
+ | any | 调用默认的 base pod 模板创建 Jenkins agent 运行流水线。 |
+ | node | 调用指定类型的 pod 模板创建 Jenkins agent 运行流水线,可配置的 label 标签为 base、java、nodejs、maven、go 等。 |
+ | kubernetes | 通过 yaml 文件自定义标准的 kubernetes pod 模板运行 agent 执行流水线任务。 |
+
+11. 在弹出的页面,您可以查看已创建的流水线模板详情,点击**运行**即可运行该流水线。
+
+在之前的版本中,KubeSphere 还提供了 CI 以及 CI & CD 流水线模板,但是由于这两个模板难以满足定制化需求,因此建议您采用其他内置模板或直接自定义模板。下面分别介绍这两个模板。
+
+- CI 流水线模板
+
+ 
+
+ 
+
+ CI 流水线模板包含两个阶段。**clone code** 阶段用于检出代码,**build & push** 阶段用于构建镜像并将镜像推送至 Docker Hub。您需要预先为代码仓库和 Docker Hub 仓库创建凭证,然后在相应的步骤中设置仓库的 URL 以及凭证。完成编辑后,流水线即可开始运行。
+
+- CI & CD 流水线模板
+
+ 
+
+ 
+
+ CI & CD 流水线模板包含六个阶段。有关每个阶段的更多信息,请参考[使用 Jenkinsfile 创建流水线](../create-a-pipeline-using-jenkinsfile/#流水线概述),您可以在该文档中找到相似的阶段及描述。您需要预先为代码仓库、Docker Hub 仓库和集群的 kubeconfig 创建凭证,然后在相应的步骤中设置仓库的 URL 以及凭证。完成编辑后,流水线即可开始运行。
diff --git a/content/zh/docs/v3.4/faq/_index.md b/content/zh/docs/v3.4/faq/_index.md
new file mode 100644
index 000000000..10e4c912b
--- /dev/null
+++ b/content/zh/docs/v3.4/faq/_index.md
@@ -0,0 +1,12 @@
+---
+title: "常见问题"
+description: "FAQ is designed to answer and summarize the questions users ask most frequently about KubeSphere."
+layout: "second"
+
+linkTitle: "常见问题"
+weight: 16000
+
+icon: "/images/docs/v3.x/docs.svg"
+---
+
+本章节总结并回答了有关 KubeSphere 最常见的问题,问题根据 KubeSphere 的功能进行分类,您可以在对应部分找到有关的问题和答案。
diff --git a/content/zh/docs/v3.4/faq/access-control/_index.md b/content/zh/docs/v3.4/faq/access-control/_index.md
new file mode 100644
index 000000000..95af6334a
--- /dev/null
+++ b/content/zh/docs/v3.4/faq/access-control/_index.md
@@ -0,0 +1,7 @@
+---
+title: "访问控制和帐户管理"
+keywords: 'Kubernetes, KubeSphere, 帐户, 访问控制'
+description: '关于访问控制和帐户管理的常见问题'
+layout: "second"
+weight: 16400
+---
diff --git a/content/zh/docs/v3.4/faq/access-control/add-kubernetes-namespace-to-kubesphere-workspace.md b/content/zh/docs/v3.4/faq/access-control/add-kubernetes-namespace-to-kubesphere-workspace.md
new file mode 100644
index 000000000..009cc4fdb
--- /dev/null
+++ b/content/zh/docs/v3.4/faq/access-control/add-kubernetes-namespace-to-kubesphere-workspace.md
@@ -0,0 +1,38 @@
+---
+title: "添加现有 Kubernetes 命名空间至 KubeSphere 企业空间"
+keywords: "命名空间, 项目, KubeSphere, Kubernetes"
+description: "将您现有 Kubernetes 集群中的命名空间添加至 KubeSphere 的企业空间。"
+linkTitle: "添加现有 Kubernetes 命名空间至 KubeSphere 企业空间"
+weight: 16430
+---
+
+Kubernetes 命名空间即 KubeSphere 项目。如果您不是在 KubeSphere 控制台创建命名空间对象,则该命名空间不会直接在企业空间中显示。不过,集群管理员依然可以在**集群管理**页面查看该命名空间。同时,您也可以将该命名空间添加至企业空间。
+
+本教程演示如何添加现有 Kubernetes 命名空间至 KubeSphere 企业空间。
+
+## 准备工作
+
+- 您需要有一个具有**集群管理**权限的用户。例如,您可以直接以 `admin` 身份登录控制台,或者创建一个具有该权限的新角色并将其分配至一个用户。
+
+- 您需要有一个可用的企业空间,以便将命名空间分配至该企业空间。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/)。
+
+## 创建 Kubernetes 命名空间
+
+首先,创建一个示例 Kubernetes 命名空间,以便稍后将其添加至企业空间。执行以下命令:
+
+```bash
+kubectl create ns demo-namespace
+```
+
+有关创建 Kubernetes 命名空间的更多信息,请参见[命名空间演练](https://kubernetes.io/zh/docs/tasks/administer-cluster/namespaces-walkthrough/)。
+
+## 添加命名空间至 KubeSphere 企业空间
+
+1. 以 `admin` 身份登录 KubeSphere 控制台,转到**集群管理**页面。点击**项目**,您可以查看在当前集群中运行的所有项目,包括刚刚创建的项目。
+
+2. 通过 kubectl 创建的命名空间不属于任何企业空间。请点击右侧的
,选择**分配企业空间**。
+
+3. 在弹出的对话框中,为该项目选择一个**企业空间**和**项目管理员**,然后点击**确定**。
+
+4. 转到您的企业空间,可以在**项目**页面看到该项目已显示。
+
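+您也可以通过 kubectl 验证分配结果。以下为一个最简示例,假设 KubeSphere 使用 `kubesphere.io/workspace` 标签将命名空间关联到企业空间(标签名以实际环境为准):
+
+```bash
+# 查看命名空间标签,确认其已带有企业空间标签
+kubectl get ns demo-namespace --show-labels
+```
+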
diff --git a/content/zh/docs/v3.4/faq/access-control/cannot-login.md b/content/zh/docs/v3.4/faq/access-control/cannot-login.md
new file mode 100644
index 000000000..f3283db16
--- /dev/null
+++ b/content/zh/docs/v3.4/faq/access-control/cannot-login.md
@@ -0,0 +1,143 @@
+---
+title: "用户无法登录"
+keywords: "无法登录, 用户不活跃, KubeSphere, Kubernetes"
+description: "如何解决无法登录的问题"
+linkTitle: "用户无法登录"
+weight: 16440
+---
+
+KubeSphere 安装时会自动创建默认用户 (`admin/P@88w0rd`),密码错误或者用户状态不是**活跃**会导致无法登录。
+
+下面是用户无法登录时,一些常见的问题:
+
+## Account Not Active
+
+登录失败时,您可能看到以下提示。请根据以下步骤排查并解决问题:
+
+
+
+1. 执行以下命令检查用户状态:
+
+ ```bash
+ $ kubectl get users
+ NAME EMAIL STATUS
+ admin admin@kubesphere.io Active
+ ```
+
+2. 检查 `ks-controller-manager` 是否正常运行,是否有异常日志:
+
+ ```bash
+ kubectl -n kubesphere-system logs -l app=ks-controller-manager
+ ```
+
+以下是导致此问题的可能原因。
+
+### Kubernetes 1.19 中的 admission webhook 无法正常工作
+
+Kubernetes 1.19 使用 Golang 1.15 编译,要求更新 admission webhook 所使用的证书。该问题导致 `ks-controller` 的 admission webhook 无法正常使用。
+
+相关错误日志:
+
+```bash
+Internal error occurred: failed calling webhook "validating-user.kubesphere.io": Post "https://ks-controller-manager.kubesphere-system.svc:443/validate-email-iam-kubesphere-io-v1alpha2-user?timeout=30s": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0
+```
+
+有关该问题和解决方式的更多信息,请参见[此 GitHub Issue](https://github.com/kubesphere/kubesphere/issues/2928)。
+
+### ks-controller-manager 无法正常工作
+
+`ks-controller-manager` 依赖 openldap、Jenkins 这两个有状态服务,当 openldap 或 Jenkins 无法正常运行时会导致 `ks-controller-manager` 一直处于 `reconcile` 状态。
+
+可以通过以下命令检查 openldap 和 Jenkins 服务是否正常:
+
+```bash
+kubectl -n kubesphere-devops-system get po | grep -v Running
+kubectl -n kubesphere-system get po | grep -v Running
+kubectl -n kubesphere-system logs -l app=openldap
+```
+
+相关错误日志:
+
+```bash
+failed to connect to ldap service, please check ldap status, error: factory is not able to fill the pool: LDAP Result Code 200 \"Network Error\": dial tcp: lookup openldap.kubesphere-system.svc on 169.254.25.10:53: no such host
+```
+
+```bash
+Internal error occurred: failed calling webhook "validating-user.kubesphere.io": Post https://ks-controller-manager.kubesphere-system.svc:443/validate-email-iam-kubesphere-io-v1alpha2-user?timeout=4s: context deadline exceeded
+```
+
+**解决方式**
+
+您需要先恢复 openldap、Jenkins 这两个服务并保证网络的连通性,然后重启 `ks-controller-manager`:
+
+```bash
+kubectl -n kubesphere-system rollout restart deploy ks-controller-manager
+```
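+
+可通过以下命令等待重启完成(`kubectl rollout status` 会在部署就绪后返回):
+
+```bash
+kubectl -n kubesphere-system rollout status deploy ks-controller-manager
+```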
+
+### 使用了错误的代码分支
+
+如果您使用了错误的 ks-installer 版本,会导致安装之后各组件版本不匹配。
+
+通过以下命令检查各组件版本是否一致,正确的镜像 tag 应为 v3.4.0:
+
+```bash
+kubectl -n kubesphere-system get deploy ks-installer -o jsonpath='{.spec.template.spec.containers[0].image}'
+kubectl -n kubesphere-system get deploy ks-apiserver -o jsonpath='{.spec.template.spec.containers[0].image}'
+kubectl -n kubesphere-system get deploy ks-controller-manager -o jsonpath='{.spec.template.spec.containers[0].image}'
+```
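+
+三条命令输出的镜像 tag 应当一致,例如(示例输出,镜像仓库前缀以实际环境为准):
+
+```bash
+kubesphere/ks-installer:v3.4.0
+kubesphere/ks-apiserver:v3.4.0
+kubesphere/ks-controller-manager:v3.4.0
+```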
+
+## 用户名或密码错误
+
+
+
+通过以下命令检查用户密码是否正确:
+
+```bash
+curl -u
,并选择**编辑 YAML**。
+
+5. 在文件末尾添加 `telemetry_enabled: false` 字段,点击**确定**。
+
+
+{{< notice note >}}
+
+如需重新启用 Telemetry,请删除 `telemetry_enabled: false` 字段或将其更改为 `telemetry_enabled: true`,并更新 `ks-installer`。
+
+{{</ notice >}}
diff --git a/content/zh/docs/v3.4/faq/multi-cluster-management/_index.md b/content/zh/docs/v3.4/faq/multi-cluster-management/_index.md
new file mode 100644
index 000000000..57f23b873
--- /dev/null
+++ b/content/zh/docs/v3.4/faq/multi-cluster-management/_index.md
@@ -0,0 +1,7 @@
+---
+title: "多集群管理"
+keywords: 'Kubernetes, KubeSphere, 多集群管理, 主集群, 成员集群'
+description: 'KubeSphere 多集群管理常见问题'
+layout: "second"
+weight: 16700
+---
diff --git a/content/zh/docs/v3.4/faq/multi-cluster-management/host-cluster-access-member-cluster.md b/content/zh/docs/v3.4/faq/multi-cluster-management/host-cluster-access-member-cluster.md
new file mode 100644
index 000000000..8f66b661b
--- /dev/null
+++ b/content/zh/docs/v3.4/faq/multi-cluster-management/host-cluster-access-member-cluster.md
@@ -0,0 +1,71 @@
+---
+title: "恢复主集群对成员集群的访问权限"
+keywords: "Kubernetes, KubeSphere, 多集群, 主集群, 成员集群"
+description: "了解如何恢复主集群对成员集群的访问。"
+linkTitle: "恢复主集群对成员集群的访问权限"
+weight: 16720
+---
+
+[多集群管理](../../../multicluster-management/introduction/kubefed-in-kubesphere/)是 KubeSphere 的一大特色,拥有必要权限的租户(通常是集群管理员)能够从主集群访问中央控制平面,以管理全部成员集群。强烈建议您通过主集群管理整个集群的资源。
+
+本教程演示如何恢复主集群对成员集群的访问权限。
+
+## 可能出现的错误信息
+
+如果您无法从中央控制平面访问成员集群,并且浏览器一直将您重新定向到 KubeSphere 的登录页面,请在该成员集群上运行以下命令来获取 ks-apiserver 的日志。
+
+```bash
+kubectl -n kubesphere-system logs ks-apiserver-7c9c9456bd-qv6bs
+```
+
+{{< notice note >}}
+
+`ks-apiserver-7c9c9456bd-qv6bs` 指的是该成员集群上的容器组 ID。请确保您使用自己的容器组 ID。
+
+{{</ notice >}}
+
+您可能会看到以下错误信息:
+
+```
+E0305 03:46:42.105625 1 token.go:65] token not found in cache
+E0305 03:46:42.105725 1 jwt_token.go:45] token not found in cache
+E0305 03:46:42.105759 1 authentication.go:60] Unable to authenticate the request due to error: token not found in cache
+E0305 03:46:52.045964 1 token.go:65] token not found in cache
+E0305 03:46:52.045992 1 jwt_token.go:45] token not found in cache
+E0305 03:46:52.046004 1 authentication.go:60] Unable to authenticate the request due to error: token not found in cache
+E0305 03:47:34.502726 1 token.go:65] token not found in cache
+E0305 03:47:34.502751 1 jwt_token.go:45] token not found in cache
+E0305 03:47:34.502764 1 authentication.go:60] Unable to authenticate the request due to error: token not found in cache
+```
+
+## 解决方案
+
+### 步骤 1:验证 jwtSecret
+
+分别在主集群和成员集群上运行以下命令,确认它们的 jwtSecret 是否相同。
+
+```bash
+kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret
+```
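+
+主集群和成员集群的输出应当一致,类似如下(仅为示例值):
+
+```bash
+jwtSecret: "QVguGh7qnURywHn2od9IiOX6X8f8wK8g"
+```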
+
+### 步骤 2:更改 `accessTokenMaxAge`
+
+请确保主集群和成员集群的 jwtSecret 相同,然后在该成员集群上运行以下命令获取 `accessTokenMaxAge` 的值。
+
+```bash
+kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep accessTokenMaxAge
+```
+
+如果该值不为 `0`,请运行以下命令更改 `accessTokenMaxAge` 的值。
+
+```bash
+kubectl -n kubesphere-system edit cm kubesphere-config -o yaml
+```
+
+将 `accessTokenMaxAge` 的值更改为 `0` 之后,运行以下命令重启 ks-apiserver。
+
+```bash
+kubectl -n kubesphere-system rollout restart deploy ks-apiserver
+```
+
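+重启完成后,可再次执行以下命令确认 `accessTokenMaxAge` 已更新为 `0`:
+
+```bash
+kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep accessTokenMaxAge
+```
+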
+现在,您可以再次从中央控制平面访问该成员集群。
\ No newline at end of file
diff --git a/content/zh/docs/v3.4/faq/multi-cluster-management/manage-multi-cluster.md b/content/zh/docs/v3.4/faq/multi-cluster-management/manage-multi-cluster.md
new file mode 100644
index 000000000..d50fc8330
--- /dev/null
+++ b/content/zh/docs/v3.4/faq/multi-cluster-management/manage-multi-cluster.md
@@ -0,0 +1,60 @@
+---
+title: "在 KubeSphere 上管理多集群环境"
+keywords: 'Kubernetes,KubeSphere,联邦,多集群,混合云'
+description: '理解如何在 KubeSphere 上管理多集群环境。'
+linkTitle: "在 KubeSphere 上管理多集群环境"
+weight: 16710
+
+---
+
+KubeSphere 提供了易于使用的多集群功能,帮助您[在 KubeSphere 上构建多集群环境](../../../multicluster-management/)。本指南说明如何在 KubeSphere 上管理多集群环境。
+
+## 准备工作
+
+- 请确保您的 Kubernetes 集群在用作主集群和成员集群之前已安装 KubeSphere。
+- 请确保主集群和成员集群分别设置了正确的集群角色,并且在主集群和成员集群上的 `jwtSecret` 也相同。
+- 建议成员集群在导入主集群之前是干净环境,即没有创建任何资源。
+
+
+## 管理 KubeSphere 多集群环境
+
+当您在 KubeSphere 上创建多集群环境之后,您可以通过主集群的中央控制平面管理该环境。在创建资源的时候,您可以选择一个特定的集群,但是需要避免您的主集群过载。不建议您登录成员集群的 KubeSphere Web 控制台去创建资源,因为部分资源(例如:企业空间)将不会同步到您的主集群进行管理。
+
+### 资源管理
+
+不建议您将主集群转换为成员集群,或将成员集群转换为主集群。如果一个成员集群曾被导入某个主集群,在将其从先前的主集群解绑后,再导入新的主集群时必须使用相同的集群名称。
+
+如果您想在将成员集群导入新的主集群时保留现有项目,请按照以下步骤进行操作。
+
+1. 在成员集群上运行以下命令将需要保留的项目从企业空间解绑。
+
+ ```bash
+    kubectl label ns 
+    ```
+
+| 参数 | 描述 |
+| --- | --- |
+| kubernetes | |
+| version | Kubernetes 安装版本。如未指定 Kubernetes 版本,{{< contentLink "docs/installing-on-linux/introduction/kubekey" "KubeKey" >}} v3.0.7 默认安装 Kubernetes v1.23.10。有关更多信息,请参阅{{< contentLink "docs/installing-on-linux/introduction/kubekey/#support-matrix" "支持矩阵" >}}。 |
+| imageRepo | 用于下载镜像的 Docker Hub 仓库。 |
+| clusterName | Kubernetes 集群名称。 |
+| masqueradeAll* | 如果使用纯 iptables 代理模式,masqueradeAll 即让 kube-proxy 对所有流量进行源地址转换 (SNAT)。默认值为 false。 |
+| maxPods* | Kubelet 可运行 Pod 的最大数量,默认值为 110。 |
+| nodeCidrMaskSize* | 集群中节点 CIDR 的掩码大小,默认值为 24。 |
+| proxyMode* | 使用的代理模式,默认为 ipvs。 |
+| network | |
+| plugin | 是否使用 CNI 插件。KubeKey 默认安装 Calico,您也可以指定为 Flannel。请注意,只有使用 Calico 作为 CNI 插件时,才能使用某些功能,例如 Pod IP 池。 |
+| calico.ipipMode* | 用于集群启动时创建 IPv4 池的 IPIP 模式。如果设置为除 Never 以外的值,则参数 vxlanMode 应设置为 Never。此参数允许设置为 Always、CrossSubnet 和 Never。默认值为 Always。 |
+| calico.vxlanMode* | 用于集群启动时创建 IPv4 池的 VXLAN 模式。如果该值不设为 Never,则参数 ipipMode 应设为 Never。此参数允许设置为 Always、CrossSubnet 和 Never。默认值为 Never。 |
+| calico.vethMTU* | 最大传输单元 (MTU) 设置可以通过网络传输的最大数据包大小。默认值为 1440。 |
+| kubePodsCIDR | Kubernetes Pod 子网的有效 CIDR 块。CIDR 块不应与您的节点子网和 Kubernetes 服务子网重叠。 |
+| kubeServiceCIDR | Kubernetes 服务的有效 CIDR 块。CIDR 块不应与您的节点子网和 Kubernetes Pod 子网重叠。 |
+| registry | |
+| registryMirrors | 配置 Docker 仓库镜像以加速下载。有关详细信息,请参阅{{< contentLink "https://docs.docker.com/registry/recipes/mirror/#configure-the-docker-daemon" "配置 Docker 守护进程" >}}。 |
+| insecureRegistries | 设置不安全镜像仓库的地址。有关详细信息,请参阅{{< contentLink "https://docs.docker.com/registry/insecure/" "测试不安全仓库" >}}。 |
+| privateRegistry* | 配置私有镜像仓库,用于离线安装(例如,Docker 本地仓库或 Harbor)。有关详细信息,请参阅{{< contentLink "docs/v3.4/installing-on-linux/introduction/air-gapped-installation/" "离线安装" >}}。 |
,选择**编辑 YAML** 来编辑 `ks-installer`。
+
+5. 在 `ks-installer` 的 YAML 文件中,将 `jwtSecret` 的值修改为如上所示的相应值,将 `clusterRole` 的值设置为 `member`。点击**更新**保存更改。
+
+ ```yaml
+ authentication:
+ jwtSecret: QVguGh7qnURywHn2od9IiOX6X8f8wK8g
+ ```
+
+ ```yaml
+ multicluster:
+ clusterRole: member
+ ```
+
+ {{< notice note >}}
+
+ 请确保您使用自己的 `jwtSecret`。您需要等待一段时间使更改生效。
+
+  {{</ notice >}}
+
+### 步骤 2:获取 kubeconfig 文件
+
+登录阿里云的控制台。访问**容器服务 - Kubernetes** 下的**集群**,点击您的集群访问其详情页,然后选择**连接信息**选项卡。您可以看到**公网访问**选项卡下的 kubeconfig 文件。复制 kubeconfig 文件的内容。
+
+
+
+### 步骤 3:导入 ACK 成员集群
+
+1. 以 `admin` 身份登录主集群的 KubeSphere Web 控制台。点击左上角的**平台管理**,选择**集群管理**。在**集群管理**页面,点击**添加集群**。
+
+2. 按需填写基本信息,然后点击**下一步**。
+
+3. **连接方式**选择**直接连接 Kubernetes 集群**。填写 ACK 的 kubeconfig,然后点击**创建**。
+
+4. 等待集群初始化完成。
diff --git a/content/zh/docs/v3.4/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md b/content/zh/docs/v3.4/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md
new file mode 100644
index 000000000..b49b5b0c1
--- /dev/null
+++ b/content/zh/docs/v3.4/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md
@@ -0,0 +1,190 @@
+---
+title: "导入 AWS EKS 集群"
+keywords: 'Kubernetes, KubeSphere, 多集群, Amazon EKS'
+description: '了解如何导入 Amazon Elastic Kubernetes 服务集群。'
+linkTitle: "导入 AWS EKS 集群"
+weight: 5320
+---
+
+本教程演示如何使用[直接连接](../../enable-multicluster/direct-connection)方法将 AWS EKS 集群导入 KubeSphere。如果您想使用代理连接方法,请参考[代理连接](../../../multicluster-management/enable-multicluster/agent-connection/)。
+
+## 准备工作
+
+- 您需要准备一个已安装 KubeSphere 的 Kubernetes 集群,并将其设置为主集群。有关如何准备主集群的更多信息,请参考[准备主集群](../../../multicluster-management/enable-multicluster/direct-connection/#准备-host-集群)。
+- 您需要准备一个 EKS 集群,用作成员集群。
+
+## 导入 EKS 集群
+
+### 步骤 1:在 EKS 集群上部署 KubeSphere
+
+您需要首先在 EKS 集群上部署 KubeSphere。有关如何在 EKS 上部署 KubeSphere 的更多信息,请参考[在 AWS EKS 上部署 KubeSphere](../../../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks/#在-eks-上安装-kubesphere)。
+
+### 步骤 2:准备 EKS 成员集群
+
+1. 为了通过主集群进行管理,您需要确保主集群和成员集群的 `jwtSecret` 相同。首先,在主集群上执行以下命令获取 `jwtSecret`:
+
+ ```bash
+ kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret
+ ```
+
+ 输出类似如下:
+
+ ```yaml
+ jwtSecret: "QVguGh7qnURywHn2od9IiOX6X8f8wK8g"
+ ```
+
+2. 以 `admin` 身份登录 EKS 集群的 KubeSphere Web 控制台。点击左上角的**平台管理**,选择**集群管理**。
+
+3. 访问**定制资源定义**,在搜索栏输入 `ClusterConfiguration`,然后按下键盘上的**回车键**。点击 **ClusterConfiguration** 访问其详情页。
+
+4. 点击右侧的
,选择**编辑 YAML** 来编辑 `ks-installer`。
+
+5. 在 `ks-installer` 的 YAML 文件中,将 `jwtSecret` 的值改为如上所示的相应值,将 `clusterRole` 的值改为 `member`。点击**更新**保存更改。
+
+ ```yaml
+ authentication:
+ jwtSecret: QVguGh7qnURywHn2od9IiOX6X8f8wK8g
+ ```
+
+ ```yaml
+ multicluster:
+ clusterRole: member
+ ```
+
+ {{< notice note >}}
+
+ 请确保使用您自己的 `jwtSecret`。您需要等待一段时间使更改生效。
+
+  {{</ notice >}}
+
+### 步骤 3:创建新的 kubeconfig 文件
+
+1. [Amazon EKS](https://docs.aws.amazon.com/zh_cn/eks/index.html) 不像标准的 kubeadm 集群那样提供内置的 kubeconfig 文件。但您可以参考此[文档](https://docs.aws.amazon.com/zh_cn/eks/latest/userguide/create-kubeconfig.html)创建 kubeconfig 文件。生成的 kubeconfig 文件类似如下:
+
+ ```yaml
+ apiVersion: v1
+ clusters:
+ - cluster:
+ server:
,选择**编辑 YAML** 来编辑 `ks-installer`。
+
+5. 在 `ks-installer` 的 YAML 文件中,将 `jwtSecret` 的值改为如上所示的相应值,将 `clusterRole` 的值改为 `member`。
+
+ ```yaml
+ authentication:
+ jwtSecret: QVguGh7qnURywHn2od9IiOX6X8f8wK8g
+ ```
+
+ ```yaml
+ multicluster:
+ clusterRole: member
+ ```
+
+ {{< notice note >}}
+
+ 请确保使用自己的 `jwtSecret`。您需要等待一段时间使更改生效。
+
+  {{</ notice >}}
+
+### 步骤 3:创建新的 kubeconfig 文件
+
+1. 在 GKE Cloud Shell 终端运行以下命令:
+
+ ```bash
+ TOKEN=$(kubectl -n kubesphere-system get secret $(kubectl -n kubesphere-system get sa kubesphere -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d)
+ kubectl config set-credentials kubesphere --token=${TOKEN}
+ kubectl config set-context --current --user=kubesphere
+ ```
+
+2. 运行以下命令获取新的 kubeconfig 文件:
+
+ ```bash
+ cat ~/.kube/config
+ ```
+
+ 输出类似如下:
+
+ ```yaml
+ apiVersion: v1
+ clusters:
+ - cluster:
+ certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLekNDQWhPZ0F3SUJBZ0lSQUtPRUlDeFhyWEdSbjVQS0dlRXNkYzR3RFFZSktvWklodmNOQVFFTEJRQXcKTHpFdE1Dc0dBMVVFQXhNa1pqVTBNVFpoTlRVdFpEZzFZaTAwWkdZNUxXSTVNR1V0TkdNeE0yRTBPR1ZpWW1VMwpNQjRYRFRJeE1ETXhNVEl5TXpBMU0xb1hEVEkyTURNeE1ESXpNekExTTFvd0x6RXRNQ3NHQTFVRUF4TWtaalUwCk1UWmhOVFV0WkRnMVlpMDBaR1k1TFdJNU1HVXROR014TTJFME9HVmlZbVUzTUlJQklqQU5CZ2txaGtpRzl3MEIKQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBdkVHVGtKRjZLVEl3QktlbXNYd3dPSnhtU3RrMDlKdXh4Z1grM0dTMwpoeThVQm5RWEo1d3VIZmFGNHNWcDFzdGZEV2JOZitESHNxaC9MV3RxQk5iSlNCU1ppTC96V3V5OUZNeFZMS2czCjVLdnNnM2drdUpVaFVuK0tMUUFPdTNUWHFaZ2tTejE1SzFOSU9qYm1HZGVWSm5KQTd6NTF2ZkJTTStzQWhGWTgKejJPUHo4aCtqTlJseDAvV0UzTHZEUUMvSkV4WnRCRGFuVFU0anpHMHR2NGk1OVVQN2lWbnlwRHk0dkFkWm5mbgowZncwVnplUXJqT2JuQjdYQTZuUFhseXZubzErclRqakFIMUdtU053c1IwcDRzcEViZ0lXQTNhMmJzeUN5dEJsCjVOdmJKZkVpSTFoTmFOZ3hoSDJNenlOUWVhYXZVa29MdDdPN0xqYzVFWlo4cFFJREFRQUJvMEl3UURBT0JnTlYKSFE4QkFmOEVCQU1DQWdRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVUVyVkJrc3MydGV0Qgp6ZWhoRi92bGdVMlJiM2N3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUdEZVBVa3I1bDB2OTlyMHZsKy9WZjYrCitBanVNNFoyOURtVXFHVC80OHBaR1RoaDlsZDQxUGZKNjl4eXFvME1wUlIyYmJuTTRCL2NVT1VlTE5VMlV4VWUKSGRlYk1oQUp4Qy9Uaks2SHpmeExkTVdzbzVSeVAydWZEOFZob2ZaQnlBVWczajdrTFgyRGNPd1lzNXNrenZ0LwpuVUlhQURLaXhtcFlSSWJ6MUxjQmVHbWROZ21iZ0hTa3MrYUxUTE5NdDhDQTBnSExhMER6ODhYR1psSi80VmJzCjNaWVVXMVExY01IUHd5NnAwV2kwQkpQeXNaV3hZdFJyV3JFWUhZNVZIanZhUG90S3J4Y2NQMUlrNGJzVU1ZZ0wKaTdSaHlYdmJHc0pKK1lNc3hmalU5bm5XYVhLdXM5ZHl0WG1kRGw1R0hNU3VOeTdKYjIwcU5RQkxhWHFkVmY0PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
+ server: https://130.211.231.87
+ name: gke_grand-icon-307205_us-central1-c_cluster-3
+ contexts:
+ - context:
+ cluster: gke_grand-icon-307205_us-central1-c_cluster-3
+ user: gke_grand-icon-307205_us-central1-c_cluster-3
+ name: gke_grand-icon-307205_us-central1-c_cluster-3
+ current-context: gke_grand-icon-307205_us-central1-c_cluster-3
+ kind: Config
+ preferences: {}
+ users:
+ - name: gke_grand-icon-307205_us-central1-c_cluster-3
+ user:
+ auth-provider:
+ config:
+ cmd-args: config config-helper --format=json
+ cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
+ expiry-key: '{.credential.token_expiry}'
+ token-key: '{.credential.access_token}'
+ name: gcp
+ - name: kubesphere
+ user:
+ token: eyJhbGciOiJSUzI1NiIsImtpZCI6InNjOFpIb3RrY3U3bGNRSV9NWV8tSlJzUHJ4Y2xnMDZpY3hhc1BoVy0xTGsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlc3BoZXJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlc3BoZXJlLXRva2VuLXpocmJ3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imt1YmVzcGhlcmUiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMGFmZGI1Ny01MTBkLTRjZDgtYTAwYS1hNDQzYTViNGM0M2MiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXNwaGVyZS1zeXN0ZW06a3ViZXNwaGVyZSJ9.ic6LaS5rEQ4tXt_lwp7U_C8rioweP-ZdDjlIZq91GOw9d6s5htqSMQfTeVlwTl2Bv04w3M3_pCkvRzMD0lHg3mkhhhP_4VU0LIo4XeYWKvWRoPR2kymLyskAB2Khg29qIPh5ipsOmGL9VOzD52O2eLtt_c6tn-vUDmI_Zw985zH3DHwUYhppGM8uNovHawr8nwZoem27XtxqyBkqXGDD38WANizyvnPBI845YqfYPY5PINPYc9bQBFfgCovqMZajwwhcvPqS6IpG1Qv8TX2lpuJIK0LLjiKaHoATGvHLHdAZxe_zgAC2cT_9Ars3HIN4vzaSX0f-xP--AcRgKVSY9g
+ ```
+
+### 步骤 4:导入 GKE 成员集群
+
+1. 以 `admin` 身份登录主集群的 KubeSphere Web 控制台。点击左上角的**平台管理**,选择**集群管理**。在**集群管理**页面,点击**添加集群**。
+
+2. 按需输入基本信息,然后点击**下一步**。
+
+3. **连接方式**选择**直接连接 Kubernetes 集群**。填写 GKE 的新 kubeconfig,然后点击**创建**。
+
+4. 等待集群初始化完成。
diff --git a/content/zh/docs/v3.4/multicluster-management/introduction/_index.md b/content/zh/docs/v3.4/multicluster-management/introduction/_index.md
new file mode 100644
index 000000000..08a69aacd
--- /dev/null
+++ b/content/zh/docs/v3.4/multicluster-management/introduction/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "介绍"
+weight: 5100
+
+_build:
+ render: false
+---
diff --git a/content/zh/docs/v3.4/multicluster-management/introduction/kubefed-in-kubesphere.md b/content/zh/docs/v3.4/multicluster-management/introduction/kubefed-in-kubesphere.md
new file mode 100644
index 000000000..0ad3917e3
--- /dev/null
+++ b/content/zh/docs/v3.4/multicluster-management/introduction/kubefed-in-kubesphere.md
@@ -0,0 +1,49 @@
+---
+title: "KubeSphere 联邦"
+keywords: 'Kubernetes, KubeSphere, 联邦, 多集群, 混合云'
+description: '了解 KubeSphere 中的 Kubernetes 联邦的基本概念,包括成员集群和主集群。'
+linkTitle: "KubeSphere 联邦"
+weight: 5120
+---
+
+多集群功能与多个集群之间的网络连接有关。因此,了解集群的拓扑关系很重要。
+
+## 多集群架构如何运作
+
+在使用 KubeSphere 的中央控制平面管理多个集群之前,您需要创建一个主集群。主集群实际上是一个启用了多集群功能的 KubeSphere 集群,您可以使用它提供的中央控制平面统一管理成员集群。成员集群是没有中央控制平面的普通 KubeSphere 集群。也就是说,拥有必要权限的租户(通常是集群管理员)能够通过主集群访问中央控制平面,管理所有成员集群,例如查看和编辑成员集群上的资源。反过来,如果您单独访问任意成员集群的 Web 控制台,您将无法查看其他集群的任何资源。
+
+只能有一个主集群存在,而多个成员集群可以同时存在。在多集群架构中,主集群和成员集群之间的网络可以[直接连接](../../enable-multicluster/direct-connection/),或者通过[代理连接](../../enable-multicluster/agent-connection/)。成员集群之间的网络可以设置在完全隔离的环境中。
+
+如果您使用通过 kubeadm 搭建的自建 Kubernetes 集群,请参阅[离线安装](../../../installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped/)在您的 Kubernetes 集群上安装 KubeSphere,然后通过直接连接或者代理连接来启用 KubeSphere 多集群管理功能。
+
+
+
+## 厂商无锁定
+
+KubeSphere 拥有功能强大的中央控制平面,您可以统一纳管部署在任意环境或云厂商上的 KubeSphere 集群。
+
+## 资源要求
+
+启用多集群管理前,请确保您的环境中有足够的资源。
+
+| 命名空间 | kube-federation-system | kubesphere-system |
+| -------- | ---------------------- | ----------------- |
+| 子组件 | 2 x controller-manager | Tower |
+| CPU 请求 | 100 m | 100 m |
+| CPU 限制 | 500 m | 500 m |
+| 内存请求 | 64 MiB | 128 MiB |
+| 内存限制 | 512 MiB | 256 MiB |
+| 安装 | 可选 | 可选 |
+
+{{< notice note >}}
+
+- CPU 和内存的资源请求和限制均指单个副本的要求。
+- 多集群功能启用后,主集群上会安装 Tower 和 controller-manager。如果您使用[代理连接](../../../multicluster-management/enable-multicluster/agent-connection/),成员集群仅需要 Tower。如果您使用[直接连接](../../../multicluster-management/enable-multicluster/direct-connection/),成员集群无需额外组件。
+
+{{</ notice >}}
+
+## 在多集群架构中使用应用商店
+
+与 KubeSphere 中的其他组件不同,[KubeSphere 应用商店](../../../pluggable-components/app-store/)是所有集群(包括主集群和成员集群)的全局应用程序池。您只需要在主集群上启用应用商店,便可以直接在成员集群上使用应用商店的相关功能(无论成员集群是否启用应用商店),例如[应用模板](../../../project-user-guide/application/app-template/)和[应用仓库](../../../workspace-administration/app-repository/import-helm-repository/)。
+
+但是,如果只在成员集群上启用应用商店而没有在主集群上启用,您将无法在多集群架构中的任何集群上使用应用商店。
\ No newline at end of file
diff --git a/content/zh/docs/v3.4/multicluster-management/introduction/overview.md b/content/zh/docs/v3.4/multicluster-management/introduction/overview.md
new file mode 100644
index 000000000..b54f7a833
--- /dev/null
+++ b/content/zh/docs/v3.4/multicluster-management/introduction/overview.md
@@ -0,0 +1,15 @@
+---
+title: "概述"
+keywords: 'Kubernetes, KubeSphere, 多集群, 混合云'
+description: '对多集群管理有个基本的了解,例如多集群管理的常见用例,以及 KubeSphere 可以通过多集群功能带来的好处。'
+linkTitle: "概述"
+weight: 5110
+---
+
+如今,各种组织跨不同的云厂商或者在不同的基础设施上运行和管理多个 Kubernetes 集群的做法非常普遍。由于每个 Kubernetes 集群都是一个相对独立的单元,上游社区正在艰难地研究和开发多集群管理解决方案。即便如此,Kubernetes 集群联邦(Kubernetes Cluster Federation,简称 [KubeFed](https://github.com/kubernetes-sigs/kubefed))可能是其中一种可行的方法。
+
+多集群管理最常见的使用场景包括服务流量负载均衡、隔离开发和生产环境、解耦数据处理和数据存储、跨云备份和灾难恢复、灵活分配计算资源、跨区域服务的低延迟访问以及避免厂商锁定等。
+
+开发 KubeSphere 旨在解决多集群和多云管理(包括上述使用场景)的难题,为用户提供统一的控制平面,将应用程序及其副本分发到位于公有云和本地环境的多个集群。KubeSphere 还拥有跨多个集群的丰富可观测性,包括集中监控、日志系统、事件和审计日志等。
+
+
diff --git a/content/zh/docs/v3.4/multicluster-management/unbind-cluster.md b/content/zh/docs/v3.4/multicluster-management/unbind-cluster.md
new file mode 100644
index 000000000..086cf2402
--- /dev/null
+++ b/content/zh/docs/v3.4/multicluster-management/unbind-cluster.md
@@ -0,0 +1,61 @@
+---
+title: "移除成员集群"
+keywords: 'Kubernetes, KubeSphere, 多集群, 混合云'
+description: '了解如何从 KubeSphere 的集群池中移除成员集群。'
+linkTitle: "移除成员集群"
+weight: 5500
+---
+
+本教程演示如何在 KubeSphere 控制台移除成员集群。
+
+## 准备工作
+
+- 您已经启用多集群管理。
+- 您需要有一个拥有**集群管理**权限的用户。例如,您可以直接以 `admin` 身份登录控制台,或者创建一个拥有该权限的新角色并将其授予一个用户。
+
+## 移除成员集群
+
+您可以使用以下任一方法移除成员集群:
+
+**方法 1**
+
+1. 点击左上角的**平台管理**,选择**集群管理**。
+
+2. 在**成员集群**区域,点击要从中央控制平面移除的集群右侧的
,选择**编辑 YAML**。
+
+4. 在该 YAML 文件中,搜索 `alerting`,将 `enabled` 的 `false` 更改为 `true`。完成后,点击右下角的**确定**,保存配置。
+
+ ```yaml
+ alerting:
+ enabled: true # 将“false”更改为“true”。
+ ```
+
+5. 在 kubectl 中执行以下命令检查安装过程:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+您可以通过点击控制台右下角的
找到 kubectl 工具。
+    {{</ notice >}}
+
+## 验证组件的安装
+
+如果您可以在**集群管理**页面看到**告警消息**和**告警策略**,则说明安装成功。
\ No newline at end of file
diff --git a/content/zh/docs/v3.4/pluggable-components/app-store.md b/content/zh/docs/v3.4/pluggable-components/app-store.md
new file mode 100644
index 000000000..9f957fd51
--- /dev/null
+++ b/content/zh/docs/v3.4/pluggable-components/app-store.md
@@ -0,0 +1,118 @@
+---
+title: "KubeSphere 应用商店"
+keywords: "Kubernetes, KubeSphere, App Store, OpenPitrix"
+description: "了解如何启用应用商店,一个可以在内部实现数据和应用共享、并制定应用交付流程的行业标准的组件。"
+linkTitle: "KubeSphere 应用商店"
+weight: 6200
+---
+
+作为一个开源的、以应用为中心的容器平台,KubeSphere 在 [OpenPitrix](https://github.com/openpitrix/openpitrix) 的基础上,为用户提供了一个基于 Helm 的应用商店,用于应用生命周期管理。OpenPitrix 是一个开源的 Web 平台,用于打包、部署和管理不同类型的应用。KubeSphere 应用商店让 ISV、开发者和用户能够在一站式服务中只需点击几下就可以上传、测试、安装和发布应用。
+
+对内,KubeSphere 应用商店可以作为不同团队共享数据、中间件和办公应用的场所。对外,有利于设立构建和交付的行业标准。启用该功能后,您可以通过应用模板添加更多应用。
+
+有关更多信息,请参阅[应用商店](../../application-store/)。
+
+## 在安装前启用应用商店
+
+### 在 Linux 上安装
+
+当您在 Linux 上安装多节点 KubeSphere 时,首先需要创建一个配置文件,该文件列出了所有 KubeSphere 组件。
+
+1. [在 Linux 上安装 KubeSphere](../../installing-on-linux/introduction/multioverview/) 时,您需要创建一个默认文件 `config-sample.yaml`,通过执行以下命令修改该文件:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+如果您采用 [All-in-One 安装](../../quick-start/all-in-one-on-linux/),则不需要创建 `config-sample.yaml` 文件,因为可以直接创建集群。一般来说,All-in-One 模式是为那些刚接触 KubeSphere 并希望熟悉系统的用户而准备的。如果您想在这个模式下启用应用商店(比如用于测试),请参考[下面的部分](#在安装后启用应用商店),查看如何在安装后启用应用商店。
+    {{</ notice >}}
+
+2. 在该文件中,搜索 `openpitrix`,并将 `enabled` 的 `false` 改为 `true`,完成后保存文件。
+
+ ```yaml
+ openpitrix:
+ store:
+ enabled: true # 将“false”更改为“true”。
+ ```
+
+3. 执行以下命令使用该配置文件创建集群:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### 在 Kubernetes 上安装
+
+当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) 文件中启用应用商店。
+
+1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) 文件,执行以下命令打开并编辑该文件。
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. 在 `cluster-configuration.yaml` 文件中,搜索 `openpitrix`,并将 `enabled` 的 `false` 改为 `true`。完成后保存文件。
+
+ ```yaml
+ openpitrix:
+ store:
+ enabled: true # 将“false”更改为“true”。
+ ```
+
+3. 执行以下命令开始安装:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+## 在安装后启用应用商店
+
+1. 使用 `admin` 用户登录控制台,点击左上角的**平台管理**,选择**集群管理**。
+
+2. 点击**定制资源定义**,在搜索栏中输入 `clusterconfiguration`,点击结果查看其详细页面。
+
+ {{< notice info >}}
+定制资源定义(CRD)允许用户在不增加额外 API 服务器的情况下创建一种新的资源类型,用户可以像使用其他 Kubernetes 原生对象一样使用这些定制资源。
+    {{</ notice >}}
+
+3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,选择**编辑 YAML**。
+
+4. 在该 YAML 文件中,搜索 `openpitrix`,将 `enabled` 的 `false` 改为 `true`。完成后,点击右下角的**确定**,保存配置。
+
+ ```yaml
+ openpitrix:
+ store:
+ enabled: true # 将“false”更改为“true”。
+ ```
+
+5. 在 kubectl 中执行以下命令检查安装过程:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+您可以通过点击控制台右下角的
找到 kubectl 工具。
+    {{</ notice >}}
+
+## 验证组件的安装
+
+在您登录控制台后,如果您能看到页面左上角的**应用商店**以及其中的应用,则说明安装成功。
+
+{{< notice note >}}
+
+- 您可以在不登录控制台的情况下直接访问 `<节点 IP 地址>:30880/apps` 进入应用商店。
+- KubeSphere 3.2.x 中的应用商店启用后,**OpenPitrix** 页签不会显示在**系统组件**页面。
+
+{{</ notice >}}
+
+## 在多集群架构中使用应用商店
+
+[在多集群架构中](../../multicluster-management/introduction/kubefed-in-kubesphere/),一个主集群管理所有成员集群。与 KubeSphere 中的其他组件不同,应用商店是所有集群(包括主集群和成员集群)的全局应用程序池。您只需要在主集群上启用应用商店,便可以直接在成员集群上使用应用商店的相关功能(无论成员集群是否启用应用商店),例如[应用模板](../../project-user-guide/application/app-template/)和[应用仓库](../../workspace-administration/app-repository/import-helm-repository/)。
+
+但是,如果只在成员集群上启用应用商店而没有在主集群上启用,您将无法在多集群架构中的任何集群上使用应用商店。
+
diff --git a/content/zh/docs/v3.4/pluggable-components/auditing-logs.md b/content/zh/docs/v3.4/pluggable-components/auditing-logs.md
new file mode 100644
index 000000000..faba6dd67
--- /dev/null
+++ b/content/zh/docs/v3.4/pluggable-components/auditing-logs.md
@@ -0,0 +1,182 @@
+---
+title: "KubeSphere 审计日志"
+keywords: "Kubernetes, 审计, KubeSphere, 日志"
+description: "了解如何启用审计来记录平台事件和活动。"
+linkTitle: "KubeSphere 审计日志"
+weight: 6700
+---
+
+KubeSphere 审计日志系统提供了一套与安全相关并按时间顺序排列的记录,按顺序记录了与单个用户、管理人员或系统其他组件相关的活动。对 KubeSphere 的每个请求都会生成一个事件,然后写入 Webhook,并根据一定的规则进行处理。
+
+有关更多信息,请参见[审计日志查询](../../toolbox/auditing/auditing-query/)。
+
+## 在安装前启用审计日志
+
+### 在 Linux 上安装
+
+当您在 Linux 上安装多节点 KubeSphere 时,需要创建一个配置文件,该文件列出了所有 KubeSphere 组件。
+
+1. [在 Linux 上安装 KubeSphere](../../installing-on-linux/introduction/multioverview/) 时,您需要创建一个默认文件 `config-sample.yaml`。执行以下命令修改该文件:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+如果您采用 [All-in-One 安装](../../quick-start/all-in-one-on-linux/),则不需要创建 `config-sample.yaml` 文件,因为可以直接创建集群。一般来说,All-in-One 模式是为那些刚接触 KubeSphere 并希望熟悉系统的用户而准备的,如果您想在该模式下启用审计日志(例如用于测试),请参考[下面的部分](#在安装后启用审计日志),查看如何在安装后启用审计功能。
+    {{</ notice >}}
+
+2. 在该文件中,搜索 `auditing`,并将 `enabled` 的 `false` 改为 `true`。完成后保存文件。
+
+ ```yaml
+ auditing:
+ enabled: true # 将“false”更改为“true”。
+ ```
+
+ {{< notice note >}}
+默认情况下,如果启用了审计功能,KubeKey 将安装内置 Elasticsearch。对于生产环境,如果您想启用审计功能,强烈建议在 `config-sample.yaml` 中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。在安装前提供以下信息后,KubeKey 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。
+    {{</ notice >}}
+
+ ```yaml
+ es: # Storage backend for logging, tracing, events and auditing.
+ elasticsearchMasterReplicas: 1 # The total number of master nodes. Even numbers are not allowed.
+ elasticsearchDataReplicas: 1 # The total number of data nodes.
+ elasticsearchMasterVolumeSize: 4Gi # The volume size of Elasticsearch master nodes.
+ elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
+ logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
+ elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-
,选择**编辑 YAML**。
+
+4. 在该 YAML 文件中,搜索 `auditing`,将 `enabled` 的 `false` 改为 `true`。完成后,点击右下角的**确定**,保存配置。
+
+ ```yaml
+ auditing:
+ enabled: true # 将“false”更改为“true”。
+ ```
+
+ {{< notice note >}}
+默认情况下,如果启用了审计功能,将安装内置 Elasticsearch。对于生产环境,如果您想启用审计功能,强烈建议在该 YAML 文件中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。提供以下信息后,KubeSphere 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。
+    {{</ notice >}}
+
+ ```yaml
+ es: # Storage backend for logging, tracing, events and auditing.
+ elasticsearchMasterReplicas: 1 # The total number of master nodes. Even numbers are not allowed.
+ elasticsearchDataReplicas: 1 # The total number of data nodes.
+ elasticsearchMasterVolumeSize: 4Gi # The volume size of Elasticsearch master nodes.
+ elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
+ logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
+ elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-
找到 kubectl 工具。
+    {{</ notice >}}
+
+## 验证组件的安装
+
+{{< tabs >}}
+
+{{< tab "在仪表板中验证组件的安装" >}}
+
+验证您可以使用右下角**工具箱**中的**审计日志查询**功能。
+
+{{</ tab >}}
+
+{{< tab "通过 kubectl 验证组件的安装" >}}
+
+执行以下命令来检查容器组的状态:
+
+```bash
+kubectl get pod -n kubesphere-logging-system
+```
+
+如果组件运行成功,输出结果如下:
+
+```yaml
+NAME READY STATUS RESTARTS AGE
+elasticsearch-logging-curator-elasticsearch-curator-159872n9g9g 0/1 Completed 0 2d10h
+elasticsearch-logging-curator-elasticsearch-curator-159880tzb7x 0/1 Completed 0 34h
+elasticsearch-logging-curator-elasticsearch-curator-1598898q8w7 0/1 Completed 0 10h
+elasticsearch-logging-data-0 1/1 Running 1 2d20h
+elasticsearch-logging-data-1 1/1 Running 1 2d20h
+elasticsearch-logging-discovery-0 1/1 Running 1 2d20h
+fluent-bit-6v5fs 1/1 Running 1 2d20h
+fluentbit-operator-5bf7687b88-44mhq 1/1 Running 1 2d20h
+kube-auditing-operator-7574bd6f96-p4jvv 1/1 Running 1 2d20h
+kube-auditing-webhook-deploy-6dfb46bb6c-hkhmx 1/1 Running 1 2d20h
+kube-auditing-webhook-deploy-6dfb46bb6c-jp77q 1/1 Running 1 2d20h
+```
+
+{{</ tab >}}
+
+{{</ tabs >}}
diff --git a/content/zh/docs/v3.4/pluggable-components/devops.md b/content/zh/docs/v3.4/pluggable-components/devops.md
new file mode 100644
index 000000000..88de35858
--- /dev/null
+++ b/content/zh/docs/v3.4/pluggable-components/devops.md
@@ -0,0 +1,127 @@
+---
+title: "KubeSphere DevOps 系统"
+keywords: "Kubernetes, Jenkins, KubeSphere, DevOps, cicd"
+description: "了解如何启用 DevOps 系统来进一步解放您的开发人员,让他们专注于代码编写。"
+linkTitle: "KubeSphere DevOps"
+weight: 6300
+---
+
+基于 [Jenkins](https://jenkins.io/) 的 KubeSphere DevOps 系统是专为 Kubernetes 中的 CI/CD 工作流设计的,它提供了一站式的解决方案,帮助开发和运维团队用非常简单的方式构建、测试和发布应用到 Kubernetes。它还具有插件管理、[Binary-to-Image (B2I)](../../project-user-guide/image-builder/binary-to-image/)、[Source-to-Image (S2I)](../../project-user-guide/image-builder/source-to-image/)、代码依赖缓存、代码质量分析、流水线日志等功能。
+
+DevOps 系统为用户提供了一个自动化的环境,应用可以自动发布到同一个平台。它还兼容第三方私有镜像仓库(如 Harbor)和代码库(如 GitLab/GitHub/SVN/BitBucket)。它为用户提供了全面的、可视化的 CI/CD 流水线,打造了极佳的用户体验,而且这种兼容性强的流水线能力在离线环境中非常有用。
+
+有关更多信息,请参见 [DevOps 用户指南](../../devops-user-guide/)。
+
+## 在安装前启用 DevOps
+
+### 在 Linux 上安装
+
+当您在 Linux 上安装多节点 KubeSphere 时,首先需要创建一个配置文件,该文件列出了所有 KubeSphere 组件。
+
+1. [在 Linux 上安装 KubeSphere](../../installing-on-linux/introduction/multioverview/) 时,您需要创建一个默认文件 `config-sample.yaml`,通过执行以下命令修改该文件:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+如果您采用 [All-in-one 安装](../../quick-start/all-in-one-on-linux/),则不需要创建 `config-sample.yaml` 文件,因为可以直接创建集群。一般来说,All-in-one 模式是为那些刚接触 KubeSphere 并希望熟悉系统的用户而准备的,如果您想在这个模式下启用 DevOps(比如用于测试),请参考[下面的部分](#在安装后启用-devops),查看如何在安装后启用 DevOps。
+    {{</ notice >}}
+
+2. 在该文件中,搜索 `devops`,并将 `enabled` 的 `false` 改为 `true`,完成后保存文件。
+
+ ```yaml
+ devops:
+ enabled: true # 将“false”更改为“true”。
+ ```
+
+3. 执行以下命令使用该配置文件创建集群:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### 在 Kubernetes 上安装
+
+当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) 文件中启用 DevOps。
+
+1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) 文件,执行以下命令打开并编辑该文件:
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. 在 `cluster-configuration.yaml` 文件中,搜索 `devops`,并将 `enabled` 的 `false` 改为 `true`。完成后保存文件。
+
+ ```yaml
+ devops:
+ enabled: true # 将“false”更改为“true”。
+ ```
+
+3. 执行以下命令开始安装:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+## 在安装后启用 DevOps
+
+1. 以 `admin` 用户登录控制台,点击左上角的**平台管理**,选择**集群管理**。
+
+2. 点击**定制资源定义**,在搜索栏中输入 `clusterconfiguration`,点击搜索结果查看其详细页面。
+
+ {{< notice info >}}
+定制资源定义(CRD)允许用户在不新增 API 服务器的情况下创建一种新的资源类型,用户可以像使用其他 Kubernetes 原生对象一样使用这些定制资源。
+    {{</ notice >}}
+
+3. 在**自定义资源**中,点击 `ks-installer` 右侧的
,选择**编辑 YAML**。
+
+4. 在该 YAML 文件中,搜索 `devops`,将 `enabled` 的 `false` 改为 `true`。完成后,点击右下角的**确定**,保存配置。
+
+ ```yaml
+ devops:
+ enabled: true # 将“false”更改为“true”。
+ ```
+
+5. 在 kubectl 中执行以下命令检查安装过程:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+您可以点击控制台右下角的
找到 kubectl 工具。
+    {{</ notice >}}
+
+## 验证组件的安装
+
+{{< tabs >}}
+
+{{< tab "在仪表板中验证组件的安装" >}}
+
+进入**系统组件**,检查 **DevOps** 标签页中的所有组件是否都处于**健康**状态。如果是,则组件安装成功。
+
+{{</ tab >}}
+
+{{< tab "通过 kubectl 验证组件的安装" >}}
+
+执行以下命令来检查容器组的状态:
+
+```bash
+kubectl get pod -n kubesphere-devops-system
+```
+
+如果组件运行成功,输出结果如下:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+devops-jenkins-5cbbfbb975-hjnll 1/1 Running 0 40m
+s2ioperator-0 1/1 Running 0 41m
+```
+
+{{</ tab >}}
+
+{{</ tabs >}}
diff --git a/content/zh/docs/v3.4/pluggable-components/events.md b/content/zh/docs/v3.4/pluggable-components/events.md
new file mode 100644
index 000000000..aa372397d
--- /dev/null
+++ b/content/zh/docs/v3.4/pluggable-components/events.md
@@ -0,0 +1,191 @@
+---
+title: "KubeSphere 事件系统"
+keywords: "Kubernetes, 事件, KubeSphere, k8s-events"
+description: "了解如何启用 KubeSphere 事件模块来跟踪平台上发生的所有事件。"
+linkTitle: "KubeSphere 事件系统"
+weight: 6500
+---
+
+KubeSphere 事件系统使用户能够跟踪集群内部发生的事件,例如节点调度状态和镜像拉取结果。这些事件会被准确记录下来,并在 Web 控制台中显示具体的原因、状态和信息。要查询事件,用户可以快速启动 Web 工具箱,在搜索栏中输入相关信息,并有不同的过滤器(如关键字和项目)可供选择。事件也可以归档到第三方工具,例如 Elasticsearch、Kafka 或 Fluentd。
+
+有关更多信息,请参见[事件查询](../../toolbox/events-query/)。
+
+## 在安装前启用事件系统
+
+### 在 Linux 上安装
+
+当您在 Linux 上安装多节点 KubeSphere 时,需要创建一个配置文件,该文件列出了所有 KubeSphere 组件。
+
+1. [在 Linux 上安装 KubeSphere](../../installing-on-linux/introduction/multioverview/) 时,您需要创建一个默认文件 `config-sample.yaml`。执行以下命令修改该文件:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+
+如果您采用 [All-in-One 安装](../../quick-start/all-in-one-on-linux/),则不需要创建 `config-sample.yaml` 文件,因为可以直接创建集群。一般来说,All-in-One 模式是为那些刚接触 KubeSphere 并希望熟悉系统的用户而准备的。如果您想在该模式下启用事件系统(例如用于测试),请参考[下面的部分](#在安装后启用事件系统),查看[如何在安装后启用](#在安装后启用事件系统)事件系统。
+
+{{</ notice >}}
+
+2. 在该文件中,搜索 `events`,并将 `enabled` 的 `false` 改为 `true`。完成后保存文件。
+
+ ```yaml
+ events:
+ enabled: true # 将“false”更改为“true”。
+ ```
+
+ {{< notice note >}}
+默认情况下,如果启用了事件系统,KubeKey 将安装内置 Elasticsearch。对于生产环境,如果您想启用事件系统,强烈建议在 `config-sample.yaml` 中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。在安装前提供以下信息后,KubeKey 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。
+    {{</ notice >}}
+
+ ```yaml
+ es: # Storage backend for logging, tracing, events and auditing.
+ elasticsearchMasterReplicas: 1 # The total number of master nodes. Even numbers are not allowed.
+ elasticsearchDataReplicas: 1 # The total number of data nodes.
+ elasticsearchMasterVolumeSize: 4Gi # The volume size of Elasticsearch master nodes.
+ elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
+ logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
+ elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-
,选择**编辑 YAML**。
+
+4. 在该配置文件中,搜索 `events`,将 `enabled` 的 `false` 改为 `true`。完成后,点击右下角的**确定**,保存配置。
+
+ ```yaml
+ events:
+ enabled: true # 将“false”更改为“true”。
+ ```
+
+ {{< notice note >}}
+
+默认情况下,如果启用了事件系统,将会安装内置 Elasticsearch。对于生产环境,如果您想启用事件系统,强烈建议在该 YAML 文件中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。在文件中提供以下信息后,KubeSphere 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。
+    {{</ notice >}}
+
+ ```yaml
+ es: # Storage backend for logging, tracing, events and auditing.
+ elasticsearchMasterReplicas: 1 # The total number of master nodes. Even numbers are not allowed.
+ elasticsearchDataReplicas: 1 # The total number of data nodes.
+ elasticsearchMasterVolumeSize: 4Gi # The volume size of Elasticsearch master nodes.
+ elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
+ logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
+ elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-
找到 kubectl 工具。
+
+{{</ notice >}}
+
+## 验证组件的安装
+
+{{< tabs >}}
+
+{{< tab "在仪表板中验证组件的安装" >}}
+
+验证您可以使用右下角**工具箱**中的**资源事件查询**功能。
+
+{{</ tab >}}
+
+{{< tab "通过 kubectl 验证组件的安装" >}}
+
+执行以下命令来检查容器组的状态:
+
+```bash
+kubectl get pod -n kubesphere-logging-system
+```
+
+如果组件运行成功,输出结果如下:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+elasticsearch-logging-data-0 1/1 Running 0 155m
+elasticsearch-logging-data-1 1/1 Running 0 154m
+elasticsearch-logging-discovery-0 1/1 Running 0 155m
+fluent-bit-bsw6p 1/1 Running 0 108m
+fluent-bit-smb65 1/1 Running 0 108m
+fluent-bit-zdz8b 1/1 Running 0 108m
+fluentbit-operator-9b69495b-bbx54 1/1 Running 0 109m
+ks-events-exporter-5cb959c74b-gx4hw 2/2 Running 0 7m55s
+ks-events-operator-7d46fcccc9-4mdzv 1/1 Running 0 8m
+ks-events-ruler-8445457946-cl529 2/2 Running 0 7m55s
+ks-events-ruler-8445457946-gzlm9 2/2 Running 0 7m55s
+logsidecar-injector-deploy-667c6c9579-cs4t6 2/2 Running 0 106m
+logsidecar-injector-deploy-667c6c9579-klnmf 2/2 Running 0 106m
+```
+
+{{</ tab >}}
+
+{{</ tabs >}}
+
diff --git a/content/zh/docs/v3.4/pluggable-components/kubeedge.md b/content/zh/docs/v3.4/pluggable-components/kubeedge.md
new file mode 100644
index 000000000..b18095d20
--- /dev/null
+++ b/content/zh/docs/v3.4/pluggable-components/kubeedge.md
@@ -0,0 +1,185 @@
+---
+title: "KubeEdge"
+keywords: "Kubernetes, KubeSphere, KubeEdge"
+description: "了解如何启用 KubeEdge 为您的集群添加边缘节点。"
+linkTitle: "KubeEdge"
+weight: 6930
+---
+
+[KubeEdge](https://kubeedge.io/zh/) 是一个开源系统,用于将容器化应用程序编排功能扩展到边缘的主机。KubeEdge 支持多个边缘协议,旨在对部署于云端和边端的应用程序与资源等进行统一管理。
+
+KubeEdge 的组件在两个单独的位置运行——云上和边缘节点上。在云上运行的组件统称为 CloudCore,包括 Controller 和 Cloud Hub。Cloud Hub 作为接收边缘节点发送请求的网关,Controller 则作为编排器。在边缘节点上运行的组件统称为 EdgeCore,包括 EdgeHub、EdgeMesh、MetadataManager 和 DeviceTwin。有关更多信息,请参见 [KubeEdge 网站](https://kubeedge.io/zh/)。
+
+启用 KubeEdge 后,您可以[为集群添加边缘节点](../../installing-on-linux/cluster-operation/add-edge-nodes/)并在这些节点上部署工作负载。
+
+
+
+## 安装前启用 KubeEdge
+
+### 在 Linux 上安装
+
+在 Linux 上多节点安装 KubeSphere 时,您需要创建一个配置文件,该文件会列出所有 KubeSphere 组件。
+
+1. [在 Linux 上安装 KubeSphere](../../installing-on-linux/introduction/multioverview/) 时,您需要创建一个默认文件 `config-sample.yaml`。执行以下命令修改该文件:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+ 如果您采用 [All-in-one 安装](../../quick-start/all-in-one-on-linux/),则不需要创建 `config-sample.yaml` 文件,因为可以直接创建集群。一般来说,All-in-one 模式针对那些刚接触 KubeSphere 并希望熟悉系统的用户。如果您想在该模式下启用 KubeEdge(比如用于测试),请参考[下面的部分](#在安装后启用-kubeedge),查看如何在安装后启用 KubeEdge。
+
+    {{</ notice >}}
+
+2. 在该文件中,搜索 `edgeruntime` 和 `kubeedge`,然后将它们的 `enabled` 值从 `false` 更改为 `true` 以便开启所有 KubeEdge 组件。完成后保存文件。
+
+ ```yaml
+ edgeruntime: # Add edge nodes to your cluster and deploy workloads on edge nodes.
+ enabled: false
+ kubeedge: # kubeedge configurations
+ enabled: false
+ cloudCore:
+ cloudHub:
+ advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
+ - "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
+ service:
+ cloudhubNodePort: "30000"
+ cloudhubQuicNodePort: "30001"
+ cloudhubHttpsNodePort: "30002"
+ cloudstreamNodePort: "30003"
+ tunnelNodePort: "30004"
+ # resources: {}
+ # hostNetWork: false
+ ```
+
+3. 将 `kubeedge.cloudCore.cloudHub.advertiseAddress` 的值设置为集群的公共 IP 地址或边缘节点可以访问的 IP 地址。编辑完成后保存文件。
+
+4. 使用该配置文件创建一个集群:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### 在 Kubernetes 上安装
+
+当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) 文件中启用 KubeEdge。
+
+1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) 文件并进行编辑。
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. 在本地 `cluster-configuration.yaml` 文件中,搜索 `edgeruntime` 和 `kubeedge`,然后将它们的 `enabled` 值从 `false` 更改为 `true` 以便开启所有 KubeEdge 组件。完成后保存文件。
+
+ ```yaml
+ edgeruntime: # Add edge nodes to your cluster and deploy workloads on edge nodes.
+ enabled: false
+ kubeedge: # kubeedge configurations
+ enabled: false
+ cloudCore:
+ cloudHub:
+ advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
+ - "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
+ service:
+ cloudhubNodePort: "30000"
+ cloudhubQuicNodePort: "30001"
+ cloudhubHttpsNodePort: "30002"
+ cloudstreamNodePort: "30003"
+ tunnelNodePort: "30004"
+ # resources: {}
+ # hostNetWork: false
+ ```
+
+3. 将 `kubeedge.cloudCore.cloudHub.advertiseAddress` 的值设置为集群的公共 IP 地址或边缘节点可以访问的 IP 地址。
+
+4. 执行以下命令开始安装:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+## 在安装后启用 KubeEdge
+
+1. 使用 `admin` 用户登录控制台。点击左上角的**平台管理**,然后选择**集群管理**。
+
+2. 点击**定制资源定义**,然后在搜索栏中输入 `clusterconfiguration`。点击搜索结果查看其详情页。
+
+ {{< notice info >}}
+定制资源定义(CRD)允许用户在不新增 API 服务器的情况下创建一种新的资源类型,用户可以像使用其他 Kubernetes 原生对象一样使用这些定制资源。
+ {{</ notice >}}
+
+3. 在**自定义资源**中,点击 `ks-installer` 右侧的图标,然后选择**编辑 YAML**。
+
+4. 在该配置文件中,搜索 `edgeruntime` 和 `kubeedge`,然后将它们的 `enabled` 值从 `false` 更改为 `true` 以开启所有 KubeEdge 组件。完成后保存文件。
+
+ ```yaml
+ edgeruntime: # Add edge nodes to your cluster and deploy workloads on edge nodes.
+ enabled: false
+ kubeedge: # kubeedge configurations
+ enabled: false
+ cloudCore:
+ cloudHub:
+ advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
+ - "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
+ service:
+ cloudhubNodePort: "30000"
+ cloudhubQuicNodePort: "30001"
+ cloudhubHttpsNodePort: "30002"
+ cloudstreamNodePort: "30003"
+ tunnelNodePort: "30004"
+ # resources: {}
+ # hostNetWork: false
+ ```
+
+5. 将 `kubeedge.cloudCore.cloudHub.advertiseAddress` 的值设置为集群的公共 IP 地址或边缘节点可以访问的 IP 地址。完成后,点击右下角的**确定**保存配置。
+
+6. 在 kubectl 中执行以下命令检查安装过程:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+您可以通过点击控制台右下角的图标来找到 kubectl 工具。
+ {{</ notice >}}
+
+## 验证组件的安装
+
+{{< tabs >}}
+
+{{< tab "在仪表板中验证组件的安装" >}}
+
+在**集群管理**页面,您可以看到**节点**下出现**边缘节点**板块。
+
+{{</ tab >}}
+
+{{< tab "通过 Kubectl 验证组件的安装" >}}
+
+执行以下命令来检查容器组的状态:
+
+```bash
+kubectl get pod -n kubeedge
+```
+
+如果组件运行成功,输出结果可能如下:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+cloudcore-5f994c9dfd-r4gpq 1/1 Running 0 5h13m
+edge-watcher-controller-manager-bdfb8bdb5-xqfbk 2/2 Running 0 5h13m
+iptables-hphgf 1/1 Running 0 5h13m
+```
+
+{{</ tab >}}
+
+{{</ tabs >}}
+
+{{< notice note >}}
+
+如果您在启用 KubeEdge 时未设置 `kubeedge.cloudCore.cloudHub.advertiseAddress`,则 CloudCore 无法正常运行 (`CrashLoopBackOff`)。在这种情况下,请运行 `kubectl -n kubeedge edit cm cloudcore` 添加集群的公共 IP 地址或边缘节点可以访问的 IP 地址。
+
+{{</ notice >}}
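+
+补充完地址后,通常还需要重启 CloudCore 才能使配置生效。以下是一个示意性的操作草例(假设 CloudCore 以名为 `cloudcore` 的 Deployment 部署在 `kubeedge` 命名空间中):
+
+```bash
+# 编辑 cloudcore 的 ConfigMap,补充 advertiseAddress
+kubectl -n kubeedge edit cm cloudcore
+
+# 重启 CloudCore 以加载新配置
+kubectl -n kubeedge rollout restart deployment cloudcore
+```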
diff --git a/content/zh/docs/v3.4/pluggable-components/logging.md b/content/zh/docs/v3.4/pluggable-components/logging.md
new file mode 100644
index 000000000..991b44461
--- /dev/null
+++ b/content/zh/docs/v3.4/pluggable-components/logging.md
@@ -0,0 +1,199 @@
+---
+title: "KubeSphere 日志系统"
+keywords: "Kubernetes, Elasticsearch, KubeSphere, 日志系统, 日志"
+description: "了解如何启用日志,利用基于租户的系统进行日志收集、查询和管理。"
+linkTitle: "KubeSphere 日志系统"
+weight: 6400
+---
+
+KubeSphere 为日志收集、查询和管理提供了一个强大的、全面的、易于使用的日志系统。它涵盖了不同层级的日志,包括租户、基础设施资源和应用。用户可以从项目、工作负载、容器组和关键字等不同维度对日志进行搜索。与 Kibana 相比,在 KubeSphere 基于租户的日志系统中,每个租户只能查看自己的日志,从而在租户之间提供更好的隔离性和安全性。除了 KubeSphere 自身的日志系统,该容器平台还允许用户添加第三方日志收集器,如 Elasticsearch、Kafka 和 Fluentd。
+
+有关更多信息,请参见[日志查询](../../toolbox/log-query/)。
+
+## 在安装前启用日志系统
+
+### 在 Linux 上安装
+
+当您在 Linux 上安装 KubeSphere 时,首先需要创建一个配置文件,该文件列出了所有 KubeSphere 组件。
+
+1. [在 Linux 上安装 KubeSphere](../../installing-on-linux/introduction/multioverview/) 时,您需要创建一个默认文件 `config-sample.yaml`。通过执行以下命令修改该文件:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+
+- 如果您采用 [All-in-one 安装](../../quick-start/all-in-one-on-linux/),则不需要创建 `config-sample.yaml` 文件,因为可以直接创建集群。一般来说,All-in-one 模式是为那些刚接触 KubeSphere 并希望熟悉系统的用户而准备的。如果您想在这个模式下启用日志系统(比如用于测试),请参考[下面的部分](#在安装后启用日志系统),查看如何在安装后启用日志系统。
+
+- 如果您采用[多节点安装](../../installing-on-linux/introduction/multioverview/),并且使用符号链接作为 Docker 根目录,请确保所有节点遵循完全相同的符号链接。日志代理以守护进程集的形式部署到节点上,容器日志路径的任何差异都可能导致该节点上的日志收集失败。
+
+{{</ notice >}}
+
+2. 在该文件中,搜索 `logging`,并将 `enabled` 的 `false` 改为 `true`。完成后保存文件。
+
+ ```yaml
+ logging:
+ enabled: true # 将“false”更改为“true”。
+ containerruntime: docker
+ ```
+
+ {{< notice info >}}若使用 containerd 作为容器运行时,请将 `containerruntime` 字段的值更改为 `containerd`。如果您从低版本升级至 KubeSphere 3.4,则启用 KubeSphere 日志系统时必须在 `logging` 字段下手动添加 `containerruntime` 字段。
+
+ {{</ notice >}}
+
+ {{< notice note >}}默认情况下,如果启用了日志系统,KubeKey 将安装内置 Elasticsearch。对于生产环境,如果您想启用日志系统,强烈建议在 `config-sample.yaml` 中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。在安装前提供以下信息后,KubeKey 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。
+ {{</ notice >}}
+
+ ```yaml
+ es: # Storage backend for logging, tracing, events and auditing.
+ elasticsearchMasterReplicas: 1 # The total number of master nodes. Even numbers are not allowed.
+ elasticsearchDataReplicas: 1 # The total number of data nodes.
+ elasticsearchMasterVolumeSize: 4Gi # The volume size of Elasticsearch master nodes.
+ elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
+ logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
+   elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
+   basicAuth:
+     enabled: false
+     username: ""
+     password: ""
+   externalElasticsearchHost: "" # The host of the external Elasticsearch.
+   externalElasticsearchPort: "" # The port of the external Elasticsearch.
+   ```
+
+3. 使用该配置文件创建集群,执行的命令与其他组件相同:`./kk create cluster -f config-sample.yaml`。
+
+### 在 Kubernetes 上安装
+
+当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) 文件中启用日志系统:下载并编辑该文件,搜索 `logging`,将 `enabled` 的 `false` 改为 `true` 并保存,然后依次对 [kubesphere-installer.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml) 和本地的 `cluster-configuration.yaml` 执行 `kubectl apply -f` 开始安装。
+
+## 在安装后启用日志系统
+
+1. 以 `admin` 用户登录控制台。点击左上角的**平台管理**,然后选择**集群管理**。
+
+2. 点击**定制资源定义**,在搜索栏中输入 `clusterconfiguration`。点击搜索结果查看其详情页。
+
+3. 在**自定义资源**中,点击 `ks-installer` 右侧的图标,选择**编辑 YAML**。
+
+4. 在该 YAML 文件中,搜索 `logging`,将 `enabled` 的 `false` 改为 `true`。完成后,点击右下角的**确定**以保存配置。
+
+ ```yaml
+ logging:
+ enabled: true # 将“false”更改为“true”。
+ containerruntime: docker
+ ```
+
+ {{< notice info >}}若使用 containerd 作为容器运行时,请将 `.logging.containerruntime` 字段的值更改为 `containerd`。如果您从低版本升级至 KubeSphere 3.4,则启用 KubeSphere 日志系统时必须在 `logging` 字段下手动添加 `containerruntime` 字段。
+
+ {{</ notice >}}
+
+ {{< notice note >}}默认情况下,如果启用了日志系统,将会安装内置 Elasticsearch。对于生产环境,如果您想启用日志系统,强烈建议在该 YAML 文件中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。在文件中提供以下信息后,KubeSphere 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。
+ {{</ notice >}}
+
+ ```yaml
+ es: # Storage backend for logging, tracing, events and auditing.
+ elasticsearchMasterReplicas: 1 # The total number of master nodes. Even numbers are not allowed.
+ elasticsearchDataReplicas: 1 # The total number of data nodes.
+ elasticsearchMasterVolumeSize: 4Gi # The volume size of Elasticsearch master nodes.
+ elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
+ logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
+   elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
+   basicAuth:
+     enabled: false
+     username: ""
+     password: ""
+   externalElasticsearchHost: "" # The host of the external Elasticsearch.
+   externalElasticsearchPort: "" # The port of the external Elasticsearch.
+   ```
+
+5. 在 kubectl 中执行以下命令检查安装过程:`kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f`。
+
+{{< notice note >}}
+
+您可以通过点击控制台右下角的图标找到 kubectl 工具。
+
+{{</ notice >}}
+
+## 验证组件的安装
+
+{{< tabs >}}
+
+{{< tab "在仪表板中验证组件的安装" >}}
+
+进入**系统组件**,检查**日志**标签页中的所有组件是否都处于**健康**状态。如果是,组件安装成功。
+
+{{</ tab >}}
+
+{{< tab "通过 kubectl 验证组件的安装" >}}
+
+执行以下命令来检查容器组的状态:
+
+```bash
+kubectl get pod -n kubesphere-logging-system
+```
+
+如果组件运行成功,输出结果如下:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+elasticsearch-logging-data-0 1/1 Running 0 87m
+elasticsearch-logging-data-1 1/1 Running 0 85m
+elasticsearch-logging-discovery-0 1/1 Running 0 87m
+fluent-bit-bsw6p 1/1 Running 0 40m
+fluent-bit-smb65 1/1 Running 0 40m
+fluent-bit-zdz8b 1/1 Running 0 40m
+fluentbit-operator-9b69495b-bbx54 1/1 Running 0 40m
+logsidecar-injector-deploy-667c6c9579-cs4t6 2/2 Running 0 38m
+logsidecar-injector-deploy-667c6c9579-klnmf 2/2 Running 0 38m
+```
+
+{{</ tab >}}
+
+{{</ tabs >}}
diff --git a/content/zh/docs/v3.4/pluggable-components/metrics-server.md b/content/zh/docs/v3.4/pluggable-components/metrics-server.md
new file mode 100644
index 000000000..29e613a16
--- /dev/null
+++ b/content/zh/docs/v3.4/pluggable-components/metrics-server.md
@@ -0,0 +1,114 @@
+---
+title: "Metrics Server"
+keywords: "Kubernetes, KubeSphere, Metrics Server"
+description: "了解如何启用 Metrics Server 以使用 HPA 对部署进行自动伸缩。"
+linkTitle: "Metrics Server"
+weight: 6910
+---
+
+KubeSphere 支持用于[部署](../../project-user-guide/application-workloads/deployments/)的容器组(Pod)弹性伸缩程序 (HPA)。在 KubeSphere 中,Metrics Server 控制着 HPA 是否启用。您可以根据不同类型的指标(例如 CPU 和内存使用率,以及最小和最大副本数),使用 HPA 对象对部署 (Deployment) 自动伸缩。通过这种方式,HPA 可以帮助确保您的应用程序在不同情况下都能平稳、一致地运行。
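+
+作为示意,启用 Metrics Server 后,也可以直接使用原生 kubectl 为某个部署创建 HPA(示例中的部署名 `demo` 为假设值):
+
+```bash
+# 基于 CPU 使用率(目标 60%)在 1 到 10 个副本之间自动伸缩
+kubectl autoscale deployment demo --cpu-percent=60 --min=1 --max=10
+
+# 查看 HPA 状态
+kubectl get hpa
+```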
+
+## 在安装前启用 Metrics Server
+
+### 在 Linux 上安装
+
+当您在 Linux 上安装多节点 KubeSphere 时,首先需要创建一个配置文件,该文件列出了所有 KubeSphere 组件。
+
+1. [在 Linux 上安装 KubeSphere](../../installing-on-linux/introduction/multioverview/) 时,您需要创建一个默认文件 `config-sample.yaml`,通过执行以下命令修改该文件:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+ 如果您采用 [All-in-One 安装](../../quick-start/all-in-one-on-linux/),则不需要创建 `config-sample.yaml` 文件,因为可以直接创建集群。一般来说,All-in-One 模式是为那些刚接触 KubeSphere 并希望熟悉系统的用户而准备的。如果您想在这个模式下启用 Metrics Server(比如用于测试),请参考[下面的部分](#在安装后启用-metrics-server),查看如何在安装后启用 Metrics Server。
+ {{</ notice >}}
+
+2. 在该文件中,搜索 `metrics_server`,并将 `enabled` 的 `false` 改为 `true`。完成后保存文件。
+
+ ```yaml
+ metrics_server:
+ enabled: true # 将“false”更改为“true”。
+ ```
+
+3. 使用该配置文件创建集群:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### 在 Kubernetes 上安装
+
+当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) 文件中启用 Metrics Server 组件。
+
+1. 下载文件 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml),并打开文件进行编辑。
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. 在 `cluster-configuration.yaml` 中,搜索 `metrics_server`,并将 `enabled` 的 `false` 改为 `true`。完成后保存文件。
+
+ ```yaml
+ metrics_server:
+ enabled: true # 将“false”更改为“true”。
+ ```
+
+3. 执行以下命令以开始安装:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+ {{< notice note >}}
+
+如果您在某些云托管的 Kubernetes 引擎上安装 KubeSphere,那么很可能您的环境中已经安装了 Metrics Server。在这种情况下,不建议您在 `cluster-configuration.yaml` 中启用 Metrics Server,因为这可能会在安装过程中引起冲突。
+
+{{</ notice >}}
+
+## 在安装后启用 Metrics Server
+
+1. 以 `admin` 用户登录控制台。点击左上角**平台管理**,选择**集群管理**。
+
+2. 点击**定制资源定义**,在搜索栏中输入 `clusterconfiguration`。点击搜索结果查看详情页。
+
+ {{< notice info >}}
+
+定制资源定义(CRD)允许用户在不增加额外 API 服务器的情况下创建一种新的资源类型,用户可以像使用其他 Kubernetes 原生对象一样使用这些定制资源。
+
+ {{</ notice >}}
+
+3. 在**自定义资源**中,点击 `ks-installer` 右侧的图标,选择**编辑 YAML**。
+
+4. 在该 YAML 文件中,搜索 `metrics_server`,将 `enabled` 的 `false` 改为 `true`。完成后,点击右下角的**确定**以保存配置。
+
+ ```yaml
+ metrics_server:
+ enabled: true # 将“false”更改为“true”。
+ ```
+
+5. 在 kubectl 中执行以下命令检查安装过程:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+可以通过点击控制台右下角的图标找到 kubectl 工具。
+ {{</ notice >}}
+
+## 验证组件的安装
+
+执行以下命令以验证 Metrics Server 的容器组是否正常运行:
+
+```bash
+kubectl get pod -n kube-system
+```
+
+如果 Metrics Server 安装成功,那么集群可能会返回以下输出(不包括无关容器组):
+
+```bash
+NAME READY STATUS RESTARTS AGE
+metrics-server-6c767c9f94-hfsb7 1/1 Running 0 9m38s
+```
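+
+除检查容器组外,也可以通过指标 API 做一次快速验证(示例命令,输出因集群而异):
+
+```bash
+# 若 Metrics Server 工作正常,以下命令会返回节点与容器组的资源用量
+kubectl top nodes
+kubectl top pods -A
+```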
\ No newline at end of file
diff --git a/content/zh/docs/v3.4/pluggable-components/network-policy.md b/content/zh/docs/v3.4/pluggable-components/network-policy.md
new file mode 100644
index 000000000..b2dfb6c8f
--- /dev/null
+++ b/content/zh/docs/v3.4/pluggable-components/network-policy.md
@@ -0,0 +1,109 @@
+---
+title: "网络策略"
+keywords: "Kubernetes, KubeSphere, NetworkPolicy"
+description: "了解如何启用网络策略来控制 IP 地址或端口级别的流量。"
+linkTitle: "网络策略"
+weight: 6900
+---
+
+从 3.0.0 版本开始,用户可以在 KubeSphere 中配置原生 Kubernetes 的网络策略。网络策略是一种以应用为中心的结构,使您能够指定如何允许容器组通过网络与各种网络实体进行通信。通过网络策略,用户可以在同一集群内实现网络隔离,这意味着可以在某些实例(容器组)之间设置防火墙。
+
+{{< notice note >}}
+
+- 在启用之前,请确保集群使用的 CNI 网络插件支持网络策略。支持网络策略的 CNI 网络插件有很多,包括 Calico、Cilium、Kube-router、Romana 和 Weave Net 等。
+- 建议您在启用网络策略之前,使用 [Calico](https://www.projectcalico.org/) 作为 CNI 插件。
+
+{{</ notice >}}
+
+有关更多信息,请参见[网络策略](https://kubernetes.io/zh/docs/concepts/services-networking/network-policies/)。
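+
+作为参考,下面是一个最小的 NetworkPolicy 清单草例:它拒绝所有进入所选容器组的流量(命名空间与名称均为示例值):
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-ingress
+  namespace: demo
+spec:
+  podSelector: {}      # 选中命名空间内的全部容器组
+  policyTypes:
+  - Ingress            # 未定义任何 ingress 规则,即拒绝所有入站流量
+```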
+
+## 在安装前启用网络策略
+
+### 在 Linux 上安装
+
+当您在 Linux 上安装多节点 KubeSphere 时,需要创建一个配置文件,该文件列出了所有 KubeSphere 组件。
+
+1. [在 Linux 上安装 KubeSphere](../../installing-on-linux/introduction/multioverview/) 时,您需要创建一个默认文件 `config-sample.yaml`。执行以下命令修改该文件:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+如果您采用 [All-in-One 安装](../../quick-start/all-in-one-on-linux/),则不需要创建 `config-sample.yaml` 文件,因为可以直接创建集群。一般来说,All-in-One 模式是为那些刚接触 KubeSphere 并希望熟悉系统的用户而准备的。如果您想在该模式下启用网络策略(例如用于测试),可以参考[下面的部分](#在安装后启用网络策略),查看如何在安装后启用网络策略。
+ {{</ notice >}}
+
+2. 在该文件中,搜索 `network.networkpolicy`,并将 `enabled` 的 `false` 改为 `true`。完成后保存文件。
+
+ ```yaml
+ network:
+ networkpolicy:
+ enabled: true # 将“false”更改为“true”。
+ ```
+
+3. 使用配置文件创建一个集群:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### 在 Kubernetes 上安装
+
+当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) 文件中启用网络策略。
+
+1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) 文件,然后打开并开始编辑。
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. 在本地 `cluster-configuration.yaml` 文件中,搜索 `network.networkpolicy`,并将 `enabled` 的 `false` 改为 `true`。完成后保存文件。
+
+ ```yaml
+ network:
+ networkpolicy:
+ enabled: true # 将“false”更改为“true”。
+ ```
+
+3. 执行以下命令开始安装:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+## 在安装后启用网络策略
+
+1. 以 `admin` 身份登录控制台。点击左上角的**平台管理**,选择**集群管理**。
+
+2. 点击**定制资源定义**,在搜索栏中输入 `clusterconfiguration`。点击结果查看其详细页面。
+
+ {{< notice info >}}
+定制资源定义(CRD)允许用户在不新增 API 服务器的情况下创建一种新的资源类型,用户可以像使用其他 Kubernetes 原生对象一样使用这些定制资源。
+ {{</ notice >}}
+
+3. 在**自定义资源**中,点击 `ks-installer` 右侧的图标,选择**编辑 YAML**。
+
+4. 在该 YAML 文件中,搜索 `network.networkpolicy`,将 `enabled` 的 `false` 改为 `true`。完成后,点击右下角的**确定**,保存配置。
+
+ ```yaml
+ network:
+ networkpolicy:
+ enabled: true # 将“false”更改为“true”。
+ ```
+
+5. 在 kubectl 中执行以下命令检查安装过程:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+您可以通过点击控制台右下角的图标找到 kubectl 工具。
+ {{</ notice >}}
+
+## 验证组件的安装
+
+如果您能在**网络**中看到**网络策略**,说明安装成功。
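+
+若想从命令行确认,也可以列出集群中已有的 NetworkPolicy 资源(原生 Kubernetes 命令;若尚未创建任何策略,列表为空属于正常现象):
+
+```bash
+kubectl get networkpolicies --all-namespaces
+```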
diff --git a/content/zh/docs/v3.4/pluggable-components/overview.md b/content/zh/docs/v3.4/pluggable-components/overview.md
new file mode 100644
index 000000000..044a17ac6
--- /dev/null
+++ b/content/zh/docs/v3.4/pluggable-components/overview.md
@@ -0,0 +1,98 @@
+---
+title: "概述"
+keywords: "Kubernetes, KubeSphere, 可插拔组件, 概述"
+description: "了解 KubeSphere 中的关键组件以及对应的资源消耗。"
+linkTitle: "概述"
+weight: 6100
+---
+
+从 2.1.0 版本开始,KubeSphere 解耦了一些核心功能组件。这些组件设计成了可插拔式,您可以在安装之前或之后启用它们。如果您不启用它们,KubeSphere 会默认以最小化进行安装部署。
+
+不同的可插拔组件部署在不同的命名空间中。您可以根据需求启用任意组件。强烈建议您安装这些可插拔组件来深度体验 KubeSphere 提供的全栈特性和功能。
+
+有关如何启用每个组件的更多信息,请参见本章的各个教程。
+
+## 资源要求
+
+在您启用可插拔组件之前,请确保您的环境中有足够的资源,具体参见下表。否则,可能会因为缺乏资源导致组件崩溃。
+
+{{< notice note >}}
+
+CPU 和内存的资源请求和限制均指单个副本的要求。
+
+{{</ notice >}}
+
+### KubeSphere 应用商店
+
+| 命名空间 | openpitrix-system |
+| -------- | ---------------------------------------- |
+| CPU 请求 | 0.3 核 |
+| CPU 限制 | 无 |
+| 内存请求 | 300 MiB |
+| 内存限制 | 无 |
+| 安装 | 可选 |
+| 备注 | 该组件可用于管理应用生命周期。建议安装。 |
+
+### KubeSphere DevOps 系统
+
+| 命名空间 | kubesphere-devops-system | kubesphere-devops-system |
+| -------- | ------------------------------------------------------------ | -------------------------------- |
+| 安装模式 | All-in-One 安装 | 多节点安装 |
+| CPU 请求 | 34 m | 0.47 核 |
+| CPU 限制 | 无 | 无 |
+| 内存请求 | 2.69 G | 8.6 G |
+| 内存限制 | 无 | 无 |
+| 安装 | 可选 | 可选 |
+| 备注 | 提供一站式 DevOps 解决方案,包括 Jenkins 流水线、B2I 和 S2I。 | 其中一个节点的内存必须大于 8 G。 |
+
+### KubeSphere 监控系统
+
+| 命名空间 | kubesphere-monitoring-system | kubesphere-monitoring-system | kubesphere-monitoring-system |
+| -------- | ------------------------------------------------------------ | ---------------------------- | ---------------------------- |
+| 子组件 | 2 x Prometheus | 3 x Alertmanager | Notification Manager |
+| CPU 请求 | 100 m | 10 m | 100 m |
+| CPU 限制 | 4 core | 无 | 500 m |
+| 内存请求 | 400 MiB | 30 MiB | 20 MiB |
+| 内存限制 | 8 GiB | | 1 GiB |
+| 安装 | 必需 | 必需 | 必需 |
+| 备注 | Prometheus 的内存消耗取决于集群大小。8 GiB 可满足 200 个节点/16,000 个容器组的集群规模。 | | |
+
+{{< notice note >}}
+
+KubeSphere 监控系统不是可插拔组件,会默认安装。它与其他组件(例如日志系统)紧密关联,因此将其资源请求和限制也列在本页中,供您参考。
+
+{{</ notice >}}
+
+### KubeSphere 日志系统
+
+| 命名空间 | kubesphere-logging-system | kubesphere-logging-system | kubesphere-logging-system | kubesphere-logging-system |
+| -------- | ------------------------------------------------------------ | -------------------------------------------- | --------------------------------------- | --------------------------------------------------- |
+| 子组件 | 3 x Elasticsearch | fluent bit | kube-events | kube-auditing |
+| CPU 请求 | 50 m | 20 m | 90 m | 20 m |
+| CPU 限制 | 1 core | 200 m | 900 m | 200 m |
+| 内存请求 | 2 G | 50 MiB | 120 MiB | 50 MiB |
+| 内存限制 | 无 | 100 MiB | 1200 MiB | 100 MiB |
+| 安装 | 可选 | 必需 | 可选 | 可选 |
+| 备注 | 可选组件,用于存储日志数据。不建议在生产环境中使用内置 Elasticsearch。 | 日志收集代理。启用日志系统后,它是必需组件。 | Kubernetes 事件收集、过滤、导出和告警。 | Kubernetes 和 KubeSphere 审计日志收集、过滤和告警。 |
+
+### KubeSphere 告警和通知
+
+| 命名空间 | kubesphere-alerting-system |
+| -------- | -------------------------- |
+| CPU 请求 | 0.08 core |
+| CPU 限制 | 无 |
+| 内存请求 | 80 M |
+| 内存限制 | 无 |
+| 安装 | 可选 |
+| 备注 | 告警和通知需要同时启用。 |
+
+### KubeSphere 服务网格
+
+| 命名空间 | istio-system |
+| -------- | ------------------------------------------------------ |
+| CPU 请求 | 1 core |
+| CPU 限制 | 无 |
+| 内存请求 | 3.5 G |
+| 内存限制 | 无 |
+| 安装 | 可选 |
+| 备注 | 支持灰度发布策略、流量拓扑、流量管理和分布式链路追踪。 |
diff --git a/content/zh/docs/v3.4/pluggable-components/pod-ip-pools.md b/content/zh/docs/v3.4/pluggable-components/pod-ip-pools.md
new file mode 100644
index 000000000..8ad693845
--- /dev/null
+++ b/content/zh/docs/v3.4/pluggable-components/pod-ip-pools.md
@@ -0,0 +1,102 @@
+---
+title: "容器组 IP 池"
+keywords: "Kubernetes, KubeSphere, 容器组, IP 池"
+description: "了解如何启用容器组 IP 池,为您的容器组分配一个特定的容器组 IP 池。"
+linkTitle: "容器组 IP 池"
+weight: 6920
+---
+
+容器组 IP 池用于规划容器组网络地址空间,每个容器组 IP 池之间的地址空间不能重叠。创建工作负载时,可选择特定的容器组 IP 池,这样创建出的容器组将从该容器组 IP 池中分配 IP 地址。
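+
+启用后,除了在控制台上选择 IP 池,也可以通过 Calico 的容器组注解为工作负载指定 IP 池。下面是一个示意片段(`demo-pool` 为假设的 IP 池名称):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: demo
+  annotations:
+    cni.projectcalico.org/ipv4pools: '["demo-pool"]'  # 指定从该 IP 池分配地址
+spec:
+  containers:
+  - name: nginx
+    image: nginx
+```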
+
+## 安装前启用容器组 IP 池
+
+### 在 Linux 上安装
+
+在 Linux 上多节点安装 KubeSphere 时,您需要创建一个配置文件,该文件会列出所有 KubeSphere 组件。
+
+1. [在 Linux 上安装 KubeSphere](../../installing-on-linux/introduction/multioverview/) 时,您需要创建一个默认文件 `config-sample.yaml`。执行以下命令修改该文件:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+ 如果您采用 [All-in-one 安装](../../quick-start/all-in-one-on-linux/),则不需要创建 `config-sample.yaml` 文件,因为可以直接创建集群。一般来说,All-in-one 模式针对那些刚接触 KubeSphere 并希望熟悉系统的用户。如果您想在该模式下启用容器组 IP 池(比如用于测试),请参考[下面的部分](#在安装后启用容器组-ip-池),查看如何在安装后启用容器组 IP 池。
+
+ {{</ notice >}}
+
+2. 在该文件中,搜索 `network.ippool.type`,然后将 `none` 更改为 `calico`。完成后保存文件。
+
+ ```yaml
+ network:
+ ippool:
+ type: calico # 将“none”更改为“calico”。
+ ```
+
+3. 使用该配置文件创建一个集群:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### 在 Kubernetes 上安装
+
+当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) 文件中启用容器组 IP 池。
+
+1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) 文件并进行编辑。
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. 在本地 `cluster-configuration.yaml` 文件中,搜索 `network.ippool.type`,将 `none` 更改为 `calico` 以启用容器组 IP 池。完成后保存文件。
+
+ ```yaml
+ network:
+ ippool:
+ type: calico # 将“none”更改为“calico”。
+ ```
+
+3. 执行以下命令开始安装。
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+
+## 在安装后启用容器组 IP 池
+
+1. 使用 `admin` 用户登录控制台。点击左上角的**平台管理**,然后选择**集群管理**。
+
+2. 点击**定制资源定义**,然后在搜索栏中输入 `clusterconfiguration`。点击搜索结果查看其详情页。
+
+ {{< notice info >}}
+定制资源定义(CRD)允许用户在不新增 API 服务器的情况下创建一种新的资源类型,用户可以像使用其他 Kubernetes 原生对象一样使用这些定制资源。
+ {{</ notice >}}
+
+3. 在**自定义资源**中,点击 `ks-installer` 右侧的图标,然后选择**编辑 YAML**。
+
+4. 在该配置文件中,搜索 `network`,将 `network.ippool.type` 更改为 `calico`。完成后,点击右下角的**确定**保存配置。
+
+ ```yaml
+ network:
+ ippool:
+ type: calico # 将“none”更改为“calico”。
+ ```
+
+5. 在 kubectl 中执行以下命令检查安装过程:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+您可以通过点击控制台右下角的图标来找到 kubectl 工具。
+ {{</ notice >}}
+
+## 验证组件的安装
+
+在**集群管理**页面,您可以在**网络**下看到**容器组 IP 池**。
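+
+也可以从命令行查看已创建的 IP 池。以下命令为示意,假设集群中已存在 KubeSphere 的 `ippools.network.kubesphere.io` 定制资源:
+
+```bash
+kubectl get ippools.network.kubesphere.io
+```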
diff --git a/content/zh/docs/v3.4/pluggable-components/service-mesh.md b/content/zh/docs/v3.4/pluggable-components/service-mesh.md
new file mode 100644
index 000000000..fd3b61c63
--- /dev/null
+++ b/content/zh/docs/v3.4/pluggable-components/service-mesh.md
@@ -0,0 +1,157 @@
+---
+title: "KubeSphere 服务网格"
+keywords: "Kubernetes, Istio, KubeSphere, 服务网格, 微服务"
+description: "了解如何启用服务网格,从而提供不同的流量管理策略进行微服务治理。"
+linkTitle: "KubeSphere 服务网格"
+weight: 6800
+---
+
+KubeSphere 服务网格基于 [Istio](https://istio.io/),将微服务治理和流量管理可视化。它拥有强大的工具包,包括**熔断机制、蓝绿部署、金丝雀发布、流量镜像、链路追踪、可观测性和流量控制**等。KubeSphere 服务网格支持代码无侵入的微服务治理,帮助开发者快速上手,并极大降低了 Istio 的学习成本。KubeSphere 服务网格的所有功能都旨在满足用户的业务需求。
+
+有关更多信息,请参见[灰度发布](../../project-user-guide/grayscale-release/overview/)。
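+
+例如,灰度发布底层依赖 Istio 的流量拆分能力。下面是一个按权重路由的 VirtualService 草例(服务名 `reviews` 与子集 `v1`/`v2` 均为假设值,子集还需配套的 DestinationRule):
+
+```yaml
+apiVersion: networking.istio.io/v1beta1
+kind: VirtualService
+metadata:
+  name: reviews
+spec:
+  hosts:
+  - reviews
+  http:
+  - route:
+    - destination:
+        host: reviews
+        subset: v1
+      weight: 90   # 90% 流量走 v1
+    - destination:
+        host: reviews
+        subset: v2
+      weight: 10   # 10% 流量走 v2(金丝雀)
+```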
+
+## 在安装前启用服务网格
+
+### 在 Linux 上安装
+
+当您在 Linux 上安装多节点 KubeSphere 时,需要创建一个配置文件,该文件列出了所有 KubeSphere 组件。
+
+1. [在 Linux 上安装 KubeSphere](../../installing-on-linux/introduction/multioverview/) 时,您需要创建一个默认文件 `config-sample.yaml`。执行以下命令修改该文件:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+如果您采用 [All-in-One 安装](../../quick-start/all-in-one-on-linux/),则不需要创建 `config-sample.yaml` 文件,因为可以直接创建集群。一般来说,All-in-One 模式是为那些刚接触 KubeSphere 并希望熟悉系统的用户而准备的。如果您想在该模式下启用服务网格(例如用于测试),请参考[下面的部分](#在安装后启用服务网格),查看如何在安装后启用服务网格。
+ {{</ notice >}}
+
+2. 在该文件中,搜索 `servicemesh`,并将 `enabled` 的 `false` 改为 `true`。完成后保存文件。
+
+ ```yaml
+ servicemesh:
+ enabled: true # 将“false”更改为“true”。
+ istio: # Customizing the istio installation configuration, refer to https://istio.io/latest/docs/setup/additional-setup/customize-installation/
+ components:
+ ingressGateways:
+ - name: istio-ingressgateway # 将服务暴露至服务网格之外。默认不开启。
+ enabled: false
+ cni:
+ enabled: false # 启用后,会在 Kubernetes pod 生命周期的网络设置阶段完成 Istio 网格的 pod 流量转发设置工作。
+ ```
+
+ {{< notice note >}}
+ - 关于开启 Ingress Gateway 后如何访问服务,请参阅 [Ingress Gateway](https://istio.io/latest/zh/docs/tasks/traffic-management/ingress/ingress-control/)。
+ - 更多关于 Istio CNI 插件的信息,请参阅[安装 Istio CNI 插件](https://istio.io/latest/zh/docs/setup/additional-setup/cni/)。
+ {{</ notice >}}
+
+3. 执行以下命令使用该配置文件创建集群:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### 在 Kubernetes 上安装
+
+当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) 文件中启用服务网格。
+
+1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) 文件,执行以下命令打开并编辑该文件:
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. 在 `cluster-configuration.yaml` 文件中,搜索 `servicemesh`,并将 `enabled` 的 `false` 改为 `true`。完成后保存文件。
+
+ ```yaml
+ servicemesh:
+ enabled: true # 将“false”更改为“true”。
+ istio: # Customizing the istio installation configuration, refer to https://istio.io/latest/docs/setup/additional-setup/customize-installation/
+ components:
+ ingressGateways:
+ - name: istio-ingressgateway # 将服务暴露至服务网格之外。默认不开启。
+ enabled: false
+ cni:
+ enabled: false # 启用后,会在 Kubernetes pod 生命周期的网络设置阶段完成 Istio 网格的 pod 流量转发设置工作。
+ ```
+
+3. 执行以下命令开始安装:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+## 在安装后启用服务网格
+
+1. 以 `admin` 用户登录控制台。点击左上角的**平台管理**,选择**集群管理**。
+
+2. 点击**定制资源定义**,在搜索栏中输入 `clusterconfiguration`。点击结果查看其详情页。
+
+ {{< notice info >}}
+定制资源定义(CRD)允许用户在不新增 API 服务器的情况下创建一种新的资源类型,用户可以像使用其他 Kubernetes 原生对象一样使用这些定制资源。
+ {{</ notice >}}
+
+3. 在**自定义资源**中,点击 `ks-installer` 右侧的图标,选择**编辑 YAML**。
+
+4. 在该配置文件中,搜索 `servicemesh`,并将 `enabled` 的 `false` 改为 `true`。完成后,点击右下角的**确定**,保存配置。
+
+ ```yaml
+ servicemesh:
+ enabled: true # 将“false”更改为“true”。
+ istio: # Customizing the istio installation configuration, refer to https://istio.io/latest/docs/setup/additional-setup/customize-installation/
+ components:
+ ingressGateways:
+ - name: istio-ingressgateway # 将服务暴露至服务网格之外。默认不开启。
+ enabled: false
+ cni:
+ enabled: false # 启用后,会在 Kubernetes pod 生命周期的网络设置阶段完成 Istio 网格的 pod 流量转发设置工作。
+ ```
+
+5. 在 kubectl 中执行以下命令检查安装过程:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+您可以通过点击控制台右下角的图标找到 kubectl 工具。
+ {{</ notice >}}
+
+## 验证组件的安装
+
+{{< tabs >}}
+
+{{< tab "在仪表板中验证组件的安装" >}}
+
+进入**系统组件**,检查 **Istio** 标签页中的所有组件是否都处于**健康**状态。如果是,组件安装成功。
+
+{{</ tab >}}
+
+{{< tab "通过 kubectl 验证组件的安装" >}}
+
+执行以下命令检查容器组的状态:
+
+```bash
+kubectl get pod -n istio-system
+```
+
+如果组件运行成功,输出结果可能如下:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+istio-ingressgateway-78dbc5fbfd-f4cwt 1/1 Running 0 9m5s
+istiod-1-6-10-7db56f875b-mbj5p 1/1 Running 0 10m
+jaeger-collector-76bf54b467-k8blr 1/1 Running 0 6m48s
+jaeger-operator-7559f9d455-89hqm 1/1 Running 0 7m
+jaeger-query-b478c5655-4lzrn 2/2 Running 0 6m48s
+kiali-f9f7d6f9f-gfsfl 1/1 Running 0 4m1s
+kiali-operator-7d5dc9d766-qpkb6 1/1 Running 0 6m53s
+```
+
+{{</ tab >}}
+
+{{</ tabs >}}
diff --git a/content/zh/docs/v3.4/pluggable-components/service-topology.md b/content/zh/docs/v3.4/pluggable-components/service-topology.md
new file mode 100644
index 000000000..5f922f424
--- /dev/null
+++ b/content/zh/docs/v3.4/pluggable-components/service-topology.md
@@ -0,0 +1,131 @@
+---
+title: "服务拓扑图"
+keywords: "Kubernetes, KubeSphere, 服务, 拓扑图"
+description: "了解如何启用服务拓扑图,以基于 Weave Scope 查看容器组的上下文详情。"
+linkTitle: "服务拓扑图"
+weight: 6915
+---
+
+您可以启用服务拓扑图以集成 [Weave Scope](https://www.weave.works/oss/scope/)(Docker 和 Kubernetes 的可视化和监控工具)。Weave Scope 使用既定的 API 收集信息,为应用和容器构建拓扑图。服务拓扑图显示在您的项目中,将服务之间的连接关系可视化。
+
+## 安装前启用服务拓扑图
+
+### 在 Linux 上安装
+
+在 Linux 上多节点安装 KubeSphere 时,您需要创建一个配置文件,该文件会列出所有 KubeSphere 组件。
+
+1. [在 Linux 上安装 KubeSphere](../../installing-on-linux/introduction/multioverview/) 时,您需要创建一个默认文件 `config-sample.yaml`。执行以下命令修改该文件:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+ 如果您采用 [All-in-one 安装](../../quick-start/all-in-one-on-linux/),则不需要创建 `config-sample.yaml` 文件,因为可以直接创建集群。一般来说,All-in-one 模式针对那些刚接触 KubeSphere 并希望熟悉系统的用户。如果您想在该模式下启用服务拓扑图(比如用于测试),请参考[下面的部分](#在安装后启用服务拓扑图),查看如何在安装后启用服务拓扑图。
+
+ {{</ notice >}}
+
+2. 在该文件中,搜索 `network.topology.type`,并将 `none` 改为 `weave-scope`。完成后保存文件。
+
+ ```yaml
+ network:
+ topology:
+ type: weave-scope # 将“none”更改为“weave-scope”。
+ ```
+
+3. 执行以下命令使用该配置文件创建集群:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### 在 Kubernetes 上安装
+
+当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) 文件中启用服务拓扑图。
+
+1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/cluster-configuration.yaml) 文件并进行编辑。
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. 在 `cluster-configuration.yaml` 文件中,搜索 `network.topology.type`,将 `none` 更改为 `weave-scope` 以启用服务拓扑图。完成后保存文件。
+
+ ```yaml
+ network:
+ topology:
+ type: weave-scope # 将“none”更改为“weave-scope”。
+ ```
+
+3. 执行以下命令开始安装:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.4.0/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+
+## 在安装后启用服务拓扑图
+
+1. 以 `admin` 用户登录控制台。点击左上角的**平台管理**,然后选择**集群管理**。
+
+2. 点击**定制资源定义**,然后在搜索栏中输入 `clusterconfiguration`。点击搜索结果查看其详情页。
+
+ {{< notice info >}}
+定制资源定义(CRD)允许用户在不新增 API 服务器的情况下创建一种新的资源类型,用户可以像使用其他 Kubernetes 原生对象一样使用这些定制资源。
+ {{</ notice >}}
+
+3. 在**自定义资源**中,点击 `ks-installer` 右侧的图标,然后选择**编辑 YAML**。
+
+4. 在该配置文件中,搜索 `network`,将 `network.topology.type` 更改为 `weave-scope`。完成后,点击右下角的**确定**保存配置。
+
+ ```yaml
+ network:
+ topology:
+ type: weave-scope # 将“none”更改为“weave-scope”。
+ ```
+
+5. 在 kubectl 中执行以下命令检查安装过程:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+您可以通过点击控制台右下角的图标来找到 kubectl 工具。
+ {{</ notice >}}
+
+## 验证组件的安装
+
+{{< tabs >}}
+
+{{< tab "在仪表板中验证组件的安装" >}}
+
+进入一个项目中,导航到**应用负载**下的**服务**,即可看到**服务拓扑**页签下**服务**的拓扑图。
+
+{{</ tab >}}
+
+{{< tab "通过 kubectl 验证组件的安装" >}}
+
+执行以下命令来检查容器组的状态:
+
+```bash
+kubectl get pod -n weave
+```
+
+如果组件运行成功,输出结果可能如下:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+weave-scope-agent-48cjp 1/1 Running 0 3m1s
+weave-scope-agent-9jb4g 1/1 Running 0 3m1s
+weave-scope-agent-ql5cf 1/1 Running 0 3m1s
+weave-scope-app-5b76897b6f-8bsls 1/1 Running 0 3m1s
+weave-scope-cluster-agent-8d9b8c464-5zlpp 1/1 Running 0 3m1s
+```
+
+{{</ tab >}}
+
+{{</ tabs >}}
\ No newline at end of file
diff --git a/content/zh/docs/v3.4/pluggable-components/uninstall-pluggable-components.md b/content/zh/docs/v3.4/pluggable-components/uninstall-pluggable-components.md
new file mode 100644
index 000000000..5e8de6718
--- /dev/null
+++ b/content/zh/docs/v3.4/pluggable-components/uninstall-pluggable-components.md
@@ -0,0 +1,204 @@
+---
+title: "卸载可插拔组件"
+keywords: "Installer, uninstall, KubeSphere, Kubernetes"
+description: "学习如何在 KubeSphere 上卸载所有可插拔组件。"
+linkTitle: "卸载可插拔组件"
+weight: 6940
+---
+
+[启用 KubeSphere 可插拔组件之后](../../pluggable-components/),还可以根据以下步骤卸载它们。请在卸载这些组件之前,备份所有重要数据。
+
+## 准备工作
+
+在卸载除服务拓扑图和容器组 IP 池之外的可插拔组件之前,必须将 CRD `ClusterConfiguration` 中 `ks-installer` 里相应组件的 `enabled` 字段的值从 `true` 改为 `false`。
+
+使用下列任一方法更改 `enabled` 字段的值:
+
+- 运行以下命令编辑 `ks-installer`:
+
+```bash
+kubectl -n kubesphere-system edit clusterconfiguration ks-installer
+```
+
+- 使用 `admin` 身份登录 KubeSphere Web 控制台,左上角点击**平台管理**,选择**集群管理**,在**定制资源定义**中搜索 `ClusterConfiguration`。有关更多信息,请参见[启用可插拔组件](../../pluggable-components/)。
+
+{{< notice note >}}
+
+更改值之后,需要等待配置更新完成,然后继续进行后续操作。
+
+{{</ notice >}}
+
+## 卸载 KubeSphere 应用商店
+
+将 CRD `ClusterConfiguration` 配置文件中 `ks-installer` 参数的 `openpitrix.store.enabled` 字段的值从 `true` 改为 `false`。
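+
+如果更希望以非交互方式完成修改,也可以使用 `kubectl patch`。以下为一个示意命令(JSON 路径按上述字段名推断,请以实际的 `ks-installer` 配置结构为准):
+
+```bash
+kubectl -n kubesphere-system patch cc ks-installer --type=json \
+  -p='[{"op": "replace", "path": "/spec/openpitrix/store/enabled", "value": false}]'
+```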
+
+## 卸载 KubeSphere DevOps
+
+1. 卸载 DevOps:
+
+ ```bash
+ helm uninstall -n kubesphere-devops-system devops
+ kubectl patch -n kubesphere-system cc ks-installer --type=json -p='[{"op": "remove", "path": "/status/devops"}]'
+ kubectl patch -n kubesphere-system cc ks-installer --type=json -p='[{"op": "replace", "path": "/spec/devops/enabled", "value": false}]'
+ ```
+2. 删除 DevOps 资源:
+
+ ```bash
+ # 删除所有 DevOps 相关资源
+ for devops_crd in $(kubectl get crd -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | grep "devops.kubesphere.io"); do
+ for ns in $(kubectl get ns -ojsonpath='{.items..metadata.name}'); do
+ for devops_res in $(kubectl get $devops_crd -n $ns -oname); do
+ kubectl patch $devops_res -n $ns -p '{"metadata":{"finalizers":[]}}' --type=merge
+ done
+ done
+ done
+ # 删除所有 DevOps CRD
+ kubectl get crd -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | grep "devops.kubesphere.io" | xargs -I crd_name kubectl delete crd crd_name
+ # 删除 DevOps 命名空间
+ kubectl delete namespace kubesphere-devops-system
+ ```
+
+
+## 卸载 KubeSphere 日志系统
+
+1. 将 CRD `ClusterConfiguration` 配置文件中 `ks-installer` 参数的 `logging.enabled` 字段的值从 `true` 改为 `false`。
+
+2. 仅禁用日志收集:
+
+ ```bash
+ kubectl delete inputs.logging.kubesphere.io -n kubesphere-logging-system tail
+ ```
+
+ {{< notice note >}}
+
+ 运行此命令后,默认情况下仍可查看 Kubernetes 提供的容器最近日志。但是,容器历史记录日志将被清除,您无法再浏览它们。
+
+ {{</ notice >}}
+
+3. 若要卸载包括 Elasticsearch 在内的日志系统,请执行以下操作:
+
+ ```bash
+ kubectl delete crd fluentbitconfigs.logging.kubesphere.io
+ kubectl delete crd fluentbits.logging.kubesphere.io
+ kubectl delete crd inputs.logging.kubesphere.io
+ kubectl delete crd outputs.logging.kubesphere.io
+ kubectl delete crd parsers.logging.kubesphere.io
+ kubectl delete deployments.apps -n kubesphere-logging-system fluentbit-operator
+ helm uninstall elasticsearch-logging --namespace kubesphere-logging-system
+ ```
+
+ {{< notice warning >}}
+
+ 此操作可能导致审计、事件和服务网格的异常。
+
+ {{</ notice >}}
+
+4. 运行以下命令:
+
+ ```bash
+ kubectl delete deployment logsidecar-injector-deploy -n kubesphere-logging-system
+ kubectl delete ns kubesphere-logging-system
+ ```
+
+## 卸载 KubeSphere 事件系统
+
+1. 将 CRD `ClusterConfiguration` 配置文件中 `ks-installer` 参数的 `events.enabled` 字段的值从 `true` 改为 `false`。
+
+2. 运行以下命令:
+
+ ```bash
+ helm delete ks-events -n kubesphere-logging-system
+ ```
+
+## 卸载 KubeSphere 告警系统
+
+1. 将 CRD `ClusterConfiguration` 配置文件中 `ks-installer` 参数的 `alerting.enabled` 字段的值从 `true` 改为 `false`。
+
+2. 运行以下命令:
+
+ ```bash
+ kubectl -n kubesphere-monitoring-system delete thanosruler kubesphere
+ ```
+
+ {{< notice note >}}
+
+ KubeSphere 3.4 通知系统为默认安装,您无需卸载。
+
+ {{</ notice >}}
+
+
+## 卸载 KubeSphere 审计
+
+1. 将 CRD `ClusterConfiguration` 配置文件中 `ks-installer` 参数的 `auditing.enabled` 字段的值从 `true` 改为 `false`。
+
+2. 运行以下命令:
+
+ ```bash
+ helm uninstall kube-auditing -n kubesphere-logging-system
+ kubectl delete crd rules.auditing.kubesphere.io
+ kubectl delete crd webhooks.auditing.kubesphere.io
+ ```
+
+## 卸载 KubeSphere 服务网格
+
+1. 将 CRD `ClusterConfiguration` 配置文件中 `ks-installer` 参数的 `servicemesh.enabled` 字段的值从 `true` 改为 `false`。
+
+2. 运行以下命令:
+
+ ```bash
+ curl -L https://istio.io/downloadIstio | sh -
+ istioctl x uninstall --purge
+
+ kubectl -n istio-system delete kiali kiali
+ helm -n istio-system delete kiali-operator
+
+ kubectl -n istio-system delete jaeger jaeger
+ helm -n istio-system delete jaeger-operator
+ ```
+
+## 卸载网络策略
+
+对于 NetworkPolicy 组件,禁用它不需要卸载组件,因为其控制器位于 `ks-controller-manager` 中。如果想要将其从 KubeSphere 控制台中移除,请将 CRD `ClusterConfiguration` 配置文件中参数 `ks-installer` 中 `network.networkpolicy.enabled` 的值从 `true` 改为 `false`。
+
+## 卸载 Metrics Server
+
+1. 将 CRD `ClusterConfiguration` 配置文件中参数 `ks-installer` 中 `metrics_server.enabled` 的值从 `true` 改为 `false`。
+
+2. 运行以下命令:
+
+ ```bash
+ kubectl delete apiservice v1beta1.metrics.k8s.io
+ kubectl -n kube-system delete service metrics-server
+ kubectl -n kube-system delete deployment metrics-server
+ ```
+
+## 卸载服务拓扑图
+
+1. 将 CRD `ClusterConfiguration` 配置文件中参数 `ks-installer` 中 `network.topology.type` 的值从 `weave-scope` 改为 `none`。
+
+2. 运行以下命令:
+
+ ```bash
+ kubectl delete ns weave
+ ```
+
+## 卸载容器组 IP 池
+
+将 CRD `ClusterConfiguration` 配置文件中参数 `ks-installer` 中 `network.ippool.type` 的值从 `calico` 改为 `none`。
+
+## 卸载 KubeEdge
+
+1. 将 CRD `ClusterConfiguration` 配置文件中参数 `ks-installer` 中 `kubeedge.enabled` 和 `edgeruntime.enabled` 的值从 `true` 改为 `false`。
+
+2. 运行以下命令:
+
+ ```bash
+ helm uninstall kubeedge -n kubeedge
+ kubectl delete ns kubeedge
+ ```
+
+ {{< notice note >}}
+
+ 卸载后,您将无法为集群添加边缘节点。
+ {{</ notice >}}
+ {{ notice >}}
+
diff --git a/content/zh/docs/v3.4/project-administration/_index.md b/content/zh/docs/v3.4/project-administration/_index.md
new file mode 100644
index 000000000..b81449813
--- /dev/null
+++ b/content/zh/docs/v3.4/project-administration/_index.md
@@ -0,0 +1,13 @@
+---
+title: "项目管理"
+description: "帮助您更好地管理 KubeSphere 项目"
+layout: "second"
+
+linkTitle: "项目管理"
+weight: 13000
+
+icon: "/images/docs/v3.x/docs.svg"
+
+---
+
+KubeSphere 的项目即 Kubernetes 的命名空间。项目有两种类型,即单集群项目和多集群项目。单集群项目是 Kubernetes 常规命名空间,多集群项目是跨多个集群的联邦命名空间。项目管理员负责创建项目、设置限制范围、配置网络隔离以及其他操作。
diff --git a/content/zh/docs/v3.4/project-administration/container-limit-ranges.md b/content/zh/docs/v3.4/project-administration/container-limit-ranges.md
new file mode 100644
index 000000000..00341fae6
--- /dev/null
+++ b/content/zh/docs/v3.4/project-administration/container-limit-ranges.md
@@ -0,0 +1,49 @@
+---
+title: "容器限制范围"
+keywords: 'Kubernetes, KubeSphere, 资源, 配额, 限制, 请求, 限制范围, 容器'
+description: '了解如何在项目中设置默认容器限制范围。'
+linkTitle: "容器限制范围"
+weight: 13400
+---
+
+容器所使用的 CPU 和内存资源上限由[项目资源配额](../../workspace-administration/project-quotas/)指定。同时,KubeSphere 使用请求 (Request) 和限制 (Limit) 来控制单个容器的资源(例如 CPU 和内存)使用情况,在 Kubernetes 中也称为 [LimitRange](https://kubernetes.io/zh/docs/concepts/policy/limit-range/)。请求确保容器能够获得其所需要的资源,因为这些资源已经得到明确保障和预留。相反地,限制确保容器不能使用超过特定值的资源。
+
+当您创建工作负载(例如部署)时,您可以为容器配置资源请求和资源限制。要预先填充这些请求字段和限制字段的值,您可以设置默认限制范围。
+
+本教程演示如何为项目中的容器设置默认限制范围。
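+
+作为参考,项目中的默认容器配额最终对应一个 Kubernetes `LimitRange` 对象,形如下面的草例(数值均为示例):
+
+```yaml
+apiVersion: v1
+kind: LimitRange
+metadata:
+  name: default-limit-range
+spec:
+  limits:
+  - type: Container
+    defaultRequest:    # 预填充到容器的默认资源请求
+      cpu: 100m
+      memory: 128Mi
+    default:           # 预填充到容器的默认资源限制
+      cpu: 500m
+      memory: 512Mi
+```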
+
+## 准备工作
+
+您需要有一个可用的企业空间、一个项目和一个用户 (`project-admin`)。该用户必须在项目层级拥有 `admin` 角色。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../quick-start/create-workspace-and-project/)。
+
+## 设置默认限制范围
+
+1. 以 `project-admin` 身份登录控制台,进入一个项目。如果该项目是新创建的项目,您在**概览**页面上会看到默认配额尚未设置。点击**默认容器配额未设置**旁的**编辑配额**来配置限制范围。
+
+2. 在弹出的对话框中,您可以看到 KubeSphere 默认不设置任何请求或限制。要设置请求和限制来控制 CPU 和内存资源,请移动滑块至期望的值或者直接输入数值。字段留空意味着不设置任何请求或限制。
+
+ {{< notice note >}}
+
+ 限制必须大于请求。
+
+ {{</ notice >}}
+
+3. 点击**确定**完成限制范围设置。
+
+4. 在**项目设置**下的**基本信息**页面,您可以查看项目中容器的默认容器配额。
+
+5. 要更改默认容器配额,请在**基本信息**页面点击**管理**,然后选择**编辑默认容器配额**。
+
+6. 在弹出的对话框中直接更改容器配额,然后点击**确定**。
+
+7. 当您创建工作负载时,容器的请求和限制将预先填充对应的值。
+
+ {{< notice note >}}
+
+ 有关更多信息,请参见[容器镜像设置](../../project-user-guide/application-workloads/container-image-settings/)中的**资源请求**。
+
+ {{</ notice >}}
+
+## 另请参见
+
+[项目配额](../../workspace-administration/project-quotas/)
\ No newline at end of file
diff --git a/content/zh/docs/v3.4/project-administration/disk-log-collection.md b/content/zh/docs/v3.4/project-administration/disk-log-collection.md
new file mode 100644
index 000000000..c22e5da64
--- /dev/null
+++ b/content/zh/docs/v3.4/project-administration/disk-log-collection.md
@@ -0,0 +1,78 @@
+---
+title: "日志收集"
+keywords: 'KubeSphere, Kubernetes, 项目, 日志, 收集'
+description: '启用日志收集,对日志进行统一收集、管理和分析。'
+linkTitle: "日志收集"
+weight: 13600
+---
+
+KubeSphere 支持多种日志收集方式,使运维团队能够以灵活统一的方式收集、管理和分析日志。
+
+本教程演示了如何为示例应用收集日志。
+
+## 准备工作
+
+- 您需要创建企业空间、项目和用户 (`project-admin`)。该用户必须被邀请到项目中,并在项目级别具有 `admin` 角色。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../quick-start/create-workspace-and-project/)。
+
+- 您需要启用 [KubeSphere 日志系统](../../pluggable-components/logging/)。
+
+## 启用日志收集
+
+1. 以 `project-admin` 身份登录 KubeSphere 的 Web 控制台,进入项目。
+
+2. 在左侧导航栏中,选择**项目设置**中的**日志收集**,点击开关以启用该功能。
+
+
+## 创建部署
+
+1. 在左侧导航栏中,选择**应用负载**中的**工作负载**。在**部署**选项卡下,点击**创建**。
+
+2. 在出现的对话框中,设置部署的名称(例如 `demo-deployment`),选择将要创建资源的项目,点击**下一步**。
+
+3. 在**容器组设置**下,点击**添加容器**。
+
+4. 在搜索栏中输入 `alpine`,以该镜像(标签:`latest`)作为示例。
+
+5. 向下滚动并勾选**启动命令**。在**命令**和**参数**中分别输入以下值,点击 **√**,然后点击**下一步**。
+
+ **命令**
+
+ ```bash
+ /bin/sh
+ ```
+
+ **参数**
+
+ ```bash
+ -c,if [ ! -d /data/log ];then mkdir -p /data/log;fi; while true; do date >> /data/log/app-test.log; sleep 30;done
+ ```
+
+ {{< notice note >}}
+
+ 以上命令及参数意味着每 30 秒将日期信息导出到 `/data/log` 的 `app-test.log` 中。
+
+ {{</ notice >}}
+
+6. 在**存储设置**选项卡下,开启**收集卷上日志**。
+
+| 内置角色 | 描述 |
+| --- | --- |
+| `viewer` | 项目观察者,可以查看项目下所有的资源。 |
+| `operator` | 项目维护者,可以管理项目下除用户和角色之外的资源。 |
+| `admin` | 项目管理员,可以对项目下的所有资源执行所有操作。此角色可以完全控制项目下的所有资源。 |
+
+点击角色右侧的图标以编辑该角色。
+
+
+## 邀请新成员
+
+1. 转到**项目设置**下的**项目成员**,点击**邀请**。
+
+2. 点击右侧的图标以邀请一名成员加入项目,并为其分配一个角色。
+
+3. 将成员加入项目后,点击**确定**。您可以在**项目成员**列表中查看新邀请的成员。
+
+4. 若要编辑现有成员的角色或将其从项目中移除,点击右侧的图标并选择对应的操作。
+
+
+
diff --git a/content/zh/docs/v3.4/project-user-guide/_index.md b/content/zh/docs/v3.4/project-user-guide/_index.md
new file mode 100644
index 000000000..eb46927a5
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/_index.md
@@ -0,0 +1,12 @@
+---
+title: "项目用户指南"
+description: "帮助您更好地管理 KubeSphere 项目中的资源"
+layout: "second"
+
+linkTitle: "项目用户指南"
+weight: 10000
+
+icon: "/images/docs/v3.x/docs.svg"
+---
+
+在 KubeSphere 中,具有必要权限的项目用户能够执行一系列任务,例如创建各种工作负载,配置卷、密钥和 ConfigMap,设置各种发布策略,监控应用程序指标以及创建告警策略。由于 KubeSphere 具有极大的灵活性和兼容性,无需将任何代码植入到原生 Kubernetes 中,因此用户可以在测试、开发和生产环境快速上手 KubeSphere 的各种功能。
\ No newline at end of file
diff --git a/content/zh/docs/v3.4/project-user-guide/alerting/_index.md b/content/zh/docs/v3.4/project-user-guide/alerting/_index.md
new file mode 100644
index 000000000..22c233ac3
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/alerting/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "告警"
+weight: 10700
+
+_build:
+ render: false
+---
diff --git a/content/zh/docs/v3.4/project-user-guide/alerting/alerting-message.md b/content/zh/docs/v3.4/project-user-guide/alerting/alerting-message.md
new file mode 100644
index 000000000..0dbba710c
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/alerting/alerting-message.md
@@ -0,0 +1,28 @@
+---
+title: "告警消息(工作负载级别)"
+keywords: 'KubeSphere, Kubernetes, 工作负载, 告警, 消息, 通知'
+description: '了解如何查看工作负载的告警策略。'
+
+linkTitle: "告警消息(工作负载级别)"
+weight: 10720
+---
+
+告警消息中记录着按照告警规则触发的告警的详细信息。本教程演示如何查看工作负载级别的告警消息。
+
+## 准备工作
+
+* 您需要启用 [KubeSphere 告警系统](../../../pluggable-components/alerting/)。
+* 您需要创建一个企业空间、一个项目和一个用户 (`project-regular`)。该用户必须已邀请至该项目,并具有 `operator` 角色。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/)。
+* 您需要创建一个工作负载级别的告警策略并且已经触发该告警。有关更多信息,请参考[告警策略(工作负载级别)](../alerting-policy/)。
+
+## 查看告警消息
+
+1. 使用 `project-regular` 帐户登录控制台并进入您的项目,导航到**监控告警**下的**告警消息**。
+
+2. 在**告警消息**页面,可以看到列表中的全部告警消息。第一列显示您在告警通知中定义的标题和消息。如需查看某一告警消息的详情,点击该告警策略的名称,然后在显示的页面中点击**告警历史**选项卡。
+
+3. 在**告警历史**选项卡,您可以看到告警级别、监控目标以及告警激活时间。
+
+## 查看通知
+
+如果需要接收告警通知(例如,邮件和 Slack 消息),则须先配置[一个通知渠道](../../../cluster-administration/platform-settings/notification-management/configure-email/)。
\ No newline at end of file
diff --git a/content/zh/docs/v3.4/project-user-guide/alerting/alerting-policy.md b/content/zh/docs/v3.4/project-user-guide/alerting/alerting-policy.md
new file mode 100644
index 000000000..b3ca47cf9
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/alerting/alerting-policy.md
@@ -0,0 +1,60 @@
+---
+title: "告警策略(工作负载级别)"
+keywords: 'KubeSphere, Kubernetes, 工作负载, 告警, 策略, 通知'
+description: '了解如何为工作负载设置告警策略。'
+linkTitle: "告警策略(工作负载级别)"
+weight: 10710
+---
+
+KubeSphere 支持针对节点和工作负载的告警策略。本教程演示如何为项目中的工作负载创建告警策略。有关如何为节点配置告警策略,请参见[告警策略(节点级别)](../../../cluster-administration/cluster-wide-alerting-and-notification/alerting-policy/)。
+
+## 准备工作
+
+- 您需要启用 [KubeSphere 告警系统](../../../pluggable-components/alerting/)。
+- 若想接收告警通知,您需要预先配置一个[通知渠道](../../../cluster-administration/platform-settings/notification-management/configure-email/)。
+- 您需要创建一个企业空间、一个项目和一个用户(例如 `project-regular`)。该用户必须已邀请至该项目,并具有 `operator` 角色。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/)。
+- 您需要确保项目中存在工作负载。如果项目中没有工作负载,请参见[部署并访问 Bookinfo](../../../quick-start/deploy-bookinfo-to-k8s/) 来创建示例应用。
+
+## 创建告警策略
+
+1. 以 `project-regular` 身份登录控制台并访问您的项目。导航到**监控告警**下的**告警策略**,点击**创建**。
+
+2. 在弹出的对话框中,提供如下基本信息。点击**下一步**继续。
+
+ - **名称**:使用简明名称作为其唯一标识符,例如 `alert-demo`。
+ - **别名**:帮助您更好地识别告警策略。
+ - **描述信息**:对该告警策略的简要介绍。
+ - **阈值时间(分钟)**:告警规则中设置的情形持续时间达到该阈值后,告警策略将变为触发中状态。
+ - **告警级别**:提供的值包括**一般告警**、**重要告警**和**危险告警**,代表告警的严重程度。
+
+3. 在**规则设置**选项卡,您可以使用规则模板或创建自定义规则。若想使用模板,请填写以下字段。
+
+ - **资源类型**:选择想要监控的资源类型,例如**部署**、**有状态副本集**或**守护进程集**。
+ - **监控目标**:取决于您所选择的资源类型,目标可能有所不同。如果项目中没有工作负载,则无法看到任何监控目标。
+ - **告警规则**:为告警策略定义规则。这些规则基于 Prometheus 表达式,满足条件时将会触发告警。您可以对 CPU、内存等对象进行监控。
+
+ {{< notice note >}}
+
+ 您可以在**监控指标**字段输入表达式(支持自动补全),以使用 PromQL 创建自定义规则。有关更多信息,请参见 [Querying Prometheus](https://prometheus.io/docs/prometheus/latest/querying/basics/)。
+
+ {{</ notice >}}
+
+ 点击**下一步**继续。
+
+4. 在**消息设置**选项卡,输入想要在包含在通知中的告警标题和消息,然后点击**创建**。
+
+5. 告警策略刚创建后将显示为**未触发**状态;一旦满足规则表达式中的条件,则会首先达到**待触发**状态;满足告警条件的时间达到阈值时间后,将变为**触发中**状态。
+
+## 编辑告警策略
+
+若要在创建后编辑告警策略,点击**告警策略**页面右侧的图标。
+
+1. 点击下拉菜单中的**编辑**,按照创建时相同的步骤来编辑告警策略。点击**消息设置**页面的**确定**保存更改。
+
+2. 点击下拉菜单中的**删除**来删除告警策略。
+
+## 查看告警策略
+
+在**告警策略**页面,点击任一告警策略来查看其详情,包括告警规则和告警历史。您还可以看到创建告警策略时基于所使用模板的告警规则表达式。
+
+在**告警监控**下,**告警监控**图显示一段时间内的实际资源使用情况或使用量。**告警消息**显示您在通知中设置的自定义消息。
diff --git a/content/zh/docs/v3.4/project-user-guide/application-workloads/_index.md b/content/zh/docs/v3.4/project-user-guide/application-workloads/_index.md
new file mode 100644
index 000000000..009f71bd9
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/application-workloads/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "应用负载"
+weight: 10200
+
+_build:
+ render: false
+---
diff --git a/content/zh/docs/v3.4/project-user-guide/application-workloads/container-image-settings.md b/content/zh/docs/v3.4/project-user-guide/application-workloads/container-image-settings.md
new file mode 100644
index 000000000..2453b7edd
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/application-workloads/container-image-settings.md
@@ -0,0 +1,268 @@
+---
+title: "容器组设置"
+keywords: 'KubeSphere, Kubernetes, 镜像, 工作负载, 设置, 容器'
+description: '在为工作负载设置容器组时,详细了解仪表板上的不同属性。'
+
+weight: 10280
+---
+
+创建部署 (Deployment)、有状态副本集 (StatefulSet) 或者守护进程集 (DaemonSet) 时,您需要指定一个容器组。同时,KubeSphere 向用户提供多种选项,用于自定义工作负载配置,例如健康检查探针、环境变量和启动命令。本页内容详细说明了**容器组设置**中的不同属性。
+
+{{< notice tip >}}
+
+您可以在右上角启用**编辑 YAML**,查看仪表板上的属性对应到清单文件(YAML 格式)中的值。
+
+{{</ notice >}}
+
+## 容器组设置
+
+### 容器组副本数量
+
+点击加号或减号设置容器组副本数量。
+
+1. 在守护进程集列表中,点击右侧的图标,在弹出菜单中选择操作,修改您的守护进程集。
+
+ - **编辑信息**:查看并编辑基本信息。
+ - **编辑 YAML**:查看、上传、下载或者更新 YAML 文件。
+ - **重新创建**:重新创建该守护进程集。
+ - **删除**:删除该守护进程集。
+
+2. 点击守护进程集名称可以进入它的详情页面。
+
+3. 点击**更多操作**,显示您可以对该守护进程集进行的操作。
+
+ - **回退**:选择要回退的版本。
+ - **编辑设置**:配置更新策略、容器和存储。
+ - **编辑 YAML**:查看、上传、下载或者更新 YAML 文件。
+ - **重新创建**:重新创建该守护进程集。
+ - **删除**:删除该守护进程集并返回守护进程集列表页面。
+
+4. 点击**资源状态**选项卡,查看该守护进程集的端口和容器组信息。
+
+ - **副本运行状态**:您无法更改守护进程集的容器组副本数量。
+ - **容器组**
+
+     - 容器组列表中显示了容器组详情(运行状态、节点、容器组 IP 以及资源使用情况)。
+ - 您可以点击容器组条目查看容器信息。
+ - 点击容器日志图标查看容器的输出日志。
+ - 您可以点击容器组名称查看容器组详情页面。
+
+### 版本记录
+
+修改工作负载的资源模板后,会生成一个新的版本记录并重新调度容器组进行版本更新。默认保存 10 个最近的版本。您可以基于版本记录进行重新创建。
+
+### 元数据
+
+点击**元数据**选项卡以查看守护进程集的标签和注解。
+
+### 监控
+
+1. 点击**监控**选项卡以查看 CPU 使用量、内存使用量、网络流入速率和网络流出速率。
+
+2. 点击右上角的下拉菜单以自定义时间范围和采样间隔。
+
+3. 点击右上角的开始/停止图标以开始或停止自动刷新数据。
+
+4. 点击右上角的刷新图标以手动刷新数据。
+
+### 环境变量
+
+点击**环境变量**选项卡以查看守护进程集的环境变量。
+
+### 事件
+
+点击**事件**以查看守护进程集的事件。
\ No newline at end of file
diff --git a/content/zh/docs/v3.4/project-user-guide/application-workloads/deployments.md b/content/zh/docs/v3.4/project-user-guide/application-workloads/deployments.md
new file mode 100644
index 000000000..1c01bd41c
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/application-workloads/deployments.md
@@ -0,0 +1,139 @@
+---
+title: "部署"
+keywords: 'KubeSphere, Kubernetes, 部署, 工作负载'
+description: '了解部署的基本概念以及如何在 KubeSphere 中创建部署。'
+linkTitle: "部署"
+
+weight: 10210
+---
+
+部署控制器为容器组和副本集提供声明式升级。您可以在部署对象中描述一个期望状态,部署控制器会以受控速率将实际状态变更为期望状态。一个部署运行着应用程序的几个副本,它会自动替换宕机或故障的实例。因此,部署能够确保应用实例可用,处理用户请求。
+
+有关更多信息,请参见 [Kubernetes 官方文档](https://kubernetes.io/zh/docs/concepts/workloads/controllers/deployment/)。
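+
+作为对照,下面给出一个最小的部署清单草例(名称与镜像均为示例值),可帮助理解控制台各字段与 YAML 的对应关系:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: demo-deployment
+spec:
+  replicas: 3                # 期望的容器组副本数量
+  selector:
+    matchLabels:
+      app: demo
+  template:
+    metadata:
+      labels:
+        app: demo
+    spec:
+      containers:
+      - name: nginx
+        image: nginx:1.25    # 示例镜像
+```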
+
+## 准备工作
+
+您需要创建一个企业空间、一个项目和一个用户 (`project-regular`),务必邀请该用户到项目中并赋予 `operator` 角色。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/)。
+
+## 创建部署
+
+### 步骤 1:打开仪表板
+
+以 `project-regular` 身份登录控制台。转到项目的**应用负载**,选择**工作负载**,点击**部署**选项卡下面的**创建**。
+
+### 步骤 2:输入基本信息
+
+为该部署指定一个名称(例如 `demo-deployment`),选择一个项目,点击**下一步**继续。
+
+### 步骤 3:设置容器组
+
+1. 设置镜像前,请点击**容器组副本数量**中的加号或减号设置副本数量。点击部署右侧的图标,在弹出菜单中选择操作,修改您的部署。
+
+ - **编辑信息**:查看并编辑基本信息。
+ - **编辑 YAML**:查看、上传、下载或者更新 YAML 文件。
+ - **重新创建**:重新创建该部署。
+ - **删除**:删除该部署。
+
+2. 点击部署名称可以进入它的详情页面。
+
+3. 点击**更多操作**,显示您可以对该部署进行的操作。
+
+ - **回退**:选择要回退的版本。
+ - **编辑自动扩缩**:根据 CPU 和内存使用情况自动伸缩副本。如果 CPU 和内存都已指定,则在满足任一条件时会添加或删除副本。
+ - **编辑设置**:配置更新策略、容器和存储。
+ - **编辑 YAML**:查看、上传、下载或者更新 YAML 文件。
+ - **重新创建**:重新创建该部署。
+ - **删除**:删除该部署并返回部署列表页面。
+
+4. 点击**资源状态**选项卡,查看该部署的端口和容器组信息。
+
+   - **副本运行状态**:点击开始/停止图标以开始或停止数据自动刷新。
+
+4. 点击右上角的刷新图标以手动刷新数据。
+
+### 环境变量
+
+点击**环境变量**选项卡以查看部署的环境变量。
+
+### 事件
+
+点击**事件**选项卡以查看部署的事件。
\ No newline at end of file
diff --git a/content/zh/docs/v3.4/project-user-guide/application-workloads/horizontal-pod-autoscaling.md b/content/zh/docs/v3.4/project-user-guide/application-workloads/horizontal-pod-autoscaling.md
new file mode 100755
index 000000000..f9b2abdd1
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/application-workloads/horizontal-pod-autoscaling.md
@@ -0,0 +1,103 @@
+---
+title: "容器组弹性伸缩"
+keywords: "容器组, 弹性伸缩, 弹性伸缩程序"
+description: "如何在 KubeSphere 上配置容器组弹性伸缩。"
+weight: 10290
+
+---
+
+本文档描述了如何在 KubeSphere 上配置容器组弹性伸缩 (HPA)。
+
+HPA 功能会自动调整容器组的数量,将容器组的平均资源使用(CPU 和内存)保持在预设值附近。有关 HPA 功能的详细情况,请参见 [Kubernetes 官方文档](https://kubernetes.io/zh/docs/tasks/run-application/horizontal-pod-autoscale/)。
+
+本文档使用基于 CPU 使用率的 HPA 作为示例,基于内存使用量的 HPA 操作与其相似。
+
+## 准备工作
+
+- 您需要[启用 Metrics Server](../../../pluggable-components/metrics-server/)。
+- 您需要创建一个企业空间、一个项目以及一个用户(例如,`project-regular`)。`project-regular` 必须被邀请至此项目中,并被赋予 `operator` 角色。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/)。
+
+## 创建服务
+
+1. 以 `project-regular` 身份登录 KubeSphere 的 Web 控制台,然后访问您的项目。
+
+2. 在左侧导航栏中选择**应用负载**下的**服务**,然后点击右侧的**创建**。
+
+3. 在**创建服务**对话框中,点击**无状态服务**。
+
+4. 设置服务名称(例如,`hpa`),然后点击**下一步**。
+
+5. 点击**添加容器**,将**镜像**设置为 `mirrorgooglecontainers/hpa-example` 并点击**使用默认端口**。
+
+6. 为每个容器设置 CPU 请求(例如,0.15 core),点击 **√**,然后点击**下一步**。
+
+ {{< notice note >}}
+
+ * 若要使用基于 CPU 使用率的 HPA,就必须为每个容器设置 CPU 请求,即为每个容器预留的最低 CPU 资源(有关详细信息,请参见 [Kubernetes 官方文档](https://kubernetes.io/zh/docs/tasks/run-application/horizontal-pod-autoscale/))。HPA 功能会将容器组平均 CPU 使用率与容器组平均 CPU 请求的目标比率进行比较。
+ * 若要使用基于内存使用量的 HPA,则不需要配置内存请求。
+
+ {{</ notice >}}
+
+7. 点击**存储设置**选项卡上的**下一步**,然后点击**高级设置**选项卡上的**创建**。
+
+## 配置 HPA
+
+1. 左侧导航栏上选择**工作负载**中的**部署**,然后点击右侧的 HPA 部署(例如,hpa-v1)。
+
+2. 点击**更多操作**,从下拉菜单中选择**编辑自动扩缩**。
+
+3. 在**自动伸缩**对话框中,配置 HPA 参数,然后点击**确定**。
+
+ * **目标 CPU 用量(%)**:容器组平均 CPU 请求的目标比率。
+ * **目标内存用量(MiB)**:以 MiB 为单位的容器组平均内存目标使用量。
+ * **最小副本数**:容器组的最小数量。
+ * **最大副本数**:容器组的最大数量。
+
+ 在示例中,**目标 CPU 用量(%)**设置为 `60`,**最小副本数**设置为 `1`,**最大副本数**设置为 `10`。
+
+ {{< notice note >}}
+
+ 当容器组的数量达到最大值时,请确保集群可以为所有容器组提供足够的资源。否则,一些容器组将创建失败。
+
+ {{</ notice >}}
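+
+上述参数最终会生成一个 Kubernetes `HorizontalPodAutoscaler` 对象,大致形如下面的草例(视集群版本,API 版本也可能为 `autoscaling/v2beta2`):
+
+```yaml
+apiVersion: autoscaling/v2
+kind: HorizontalPodAutoscaler
+metadata:
+  name: hpa-v1
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: hpa-v1
+  minReplicas: 1
+  maxReplicas: 10
+  metrics:
+  - type: Resource
+    resource:
+      name: cpu
+      target:
+        type: Utilization
+        averageUtilization: 60   # 目标 CPU 用量 60%
+```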
+
+## 验证 HPA
+
+本节使用将请求发送到 HPA 服务的部署,以验证 HPA 是否会自动调整容器组的数量来满足资源使用目标。
+
+### 创建负载生成器部署
+
+1. 在左侧导航栏中选择**应用负载**中的**工作负载**,然后点击右侧的**创建**。
+
+2. 在**创建部署**对话框中,设置部署名称(例如,`load-generator`),然后点击**下一步**。
+
+3. 点击**添加容器**,将**镜像**设置为 `busybox`。
+
+4. 在对话框中向下滚动,选择**启动命令**,然后将**命令**设置为 `sh,-c`,将**参数**设置为 `while true; do wget -q -O- http://hpa; done`(其中 `hpa` 为上文创建的服务名称),点击 **√**,然后点击**创建**。
+
+### 删除负载生成器部署
+
+验证完成后,点击 load-generator 部署右侧的图标,从下拉菜单中选择**删除**。负载生成器部署删除后,再次检查 HPA 部署的状态。容器组的数量会减少到最小值。
+
+{{< notice note >}}
+
+系统可能需要一些时间来调整容器组的数量以及收集数据。
+
+{{</ notice >}}
+
+## 编辑 HPA 配置
+
+您可以重复[配置 HPA](#配置-hpa) 中的步骤来编辑 HPA 配置。
+
+## 取消 HPA
+
+1. 在左侧导航栏选择**应用负载**中的**工作负载**,点击右侧的 HPA 部署(例如,hpa-v1)。
+
+2. 点击**自动伸缩**右侧的图标,从下拉菜单中选择**取消**。
+
diff --git a/content/zh/docs/v3.4/project-user-guide/application-workloads/jobs.md b/content/zh/docs/v3.4/project-user-guide/application-workloads/jobs.md
new file mode 100644
index 000000000..2f98be74f
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/application-workloads/jobs.md
@@ -0,0 +1,163 @@
+---
+title: 任务
+keywords: "KubeSphere, Kubernetes, Docker, 任务"
+description: "了解任务的基本概念以及如何在 KubeSphere 中创建任务。"
+linkTitle: "任务"
+
+weight: 10250
+---
+
+任务会创建一个或者多个容器组,并确保指定数量的容器组成功结束。随着容器组成功结束,任务跟踪记录成功结束的容器组数量。当达到指定的成功结束数量时,任务(即 Job)完成。删除任务的操作会清除其创建的全部容器组。
+
+在简单的使用场景中,您可以创建一个任务对象,以便可靠地运行一个容器组直到结束。当第一个容器组故障或者被删除(例如因为节点硬件故障或者节点重启)时,任务对象会启动一个新的容器组。您也可以使用一个任务并行运行多个容器组。
+
+下面的示例演示了在 KubeSphere 中创建任务的具体步骤,该任务会计算 π 到小数点后 2000 位。
+
+## 准备工作
+
+您需要创建一个企业空间、一个项目和一个用户 (`project-regular`),务必邀请该用户到项目中并赋予 `operator` 角色。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/)。
+
+## 创建任务
+
+### 步骤 1:打开仪表板
+
+以 `project-regular` 身份登录控制台。转到**应用负载**下的**任务**,点击**创建**。
+
+### 步骤 2:输入基本信息
+
+输入基本信息。参数解释如下:
+
+- **名称**:任务的名称,也是唯一标识符。
+- **别名**:任务的别名,使资源易于识别。
+- **描述信息**:任务的描述,简要介绍任务。
+
+### 步骤 3:策略设置(可选)
+
+您可以在该步骤设置值,或点击**下一步**以使用默认值。有关每个字段的详细说明,请参考下表。
+
+| 名称 | 定义 | 描述信息 |
+| ---------------------- | ---------------------------- | ------------------------------------------------------------ |
+| 最大重试次数 | `spec.backoffLimit` | 指定将该任务视为失败之前的重试次数。默认值为 6。 |
+| 容器组完成数量 | `spec.completions` | 指定该任务应该运行至成功结束的容器组的期望数量。如果设置为 nil,则意味着任何容器组成功结束即标志着所有容器组成功结束,并且允许并行数为任何正数值。如果设置为 1,则意味着并行数限制为 1,并且该容器组成功结束标志着任务成功完成。有关更多信息,请参见 [Jobs](https://kubernetes.io/zh/docs/concepts/workloads/controllers/job/)。 |
+| 并行容器组数量 | `spec.parallelism` | 指定该任务在任何给定时间应该运行的最大期望容器组数量。当剩余工作小于最大并行数时 ((`.spec.completions - .status.successful`) < `.spec.parallelism`),实际稳定运行的容器组数量会小于该值。有关更多信息,请参见 [Jobs](https://kubernetes.io/zh/docs/concepts/workloads/controllers/job/)。 |
+| 最大运行时间(s) | `spec.activeDeadlineSeconds` | 指定该任务在系统尝试终止任务前处于运行状态的持续时间(相对于 startTime),单位为秒;该值必须是正整数。 |
+
+### 步骤 4:设置容器组
+
+1. **重启策略**选择**重新创建容器组**。当任务未完成时,您只能将**重启策略**指定为**重新创建容器组**或**重启容器**:
+
+ - 如果将**重启策略**设置为**重新创建容器组**,当容器组发生故障时,任务将创建一个新的容器组,并且故障的容器组不会消失。
+
+ - 如果将**重启策略**设置为**重启容器**,当容器组发生故障时,任务会在内部重启容器,而不是创建新的容器组。
+
+2. 点击**添加容器**,它将引导您进入**添加容器**页面。在镜像搜索栏中输入 `perl`,然后按**回车**键。
+
+3. 在该页面向下滚动到**启动命令**。在命令框中输入以下命令,计算 pi 到小数点后 2000 位并输出结果。点击右下角的 **√**,然后选择**下一步**继续。
+
+ ```bash
+ perl,-Mbignum=bpi,-wle,print bpi(2000)
+ ```
+
+   {{< notice note >}}有关设置镜像的更多信息,请参见[容器组设置](../../../project-user-guide/application-workloads/container-image-settings/)。{{</ notice >}}
+
+### 步骤 5:检查任务清单(可选)
+
+1. 在右上角启用**编辑 YAML**,显示任务的清单文件。您可以看到所有值都是根据先前步骤中指定的值而设置。
+
+ ```yaml
+ apiVersion: batch/v1
+ kind: Job
+ metadata:
+ namespace: cc
+ labels:
+ app: job-test-1
+ name: job-test-1
+ annotations:
+ kubesphere.io/alias-name: Test
+ kubesphere.io/description: A job test
+ spec:
+ template:
+ metadata:
+ labels:
+ app: job-test-1
+ annotations:
+ kubesphere.io/containerSecrets: null
+ spec:
+ containers:
+ - name: container-xv4p2o
+ imagePullPolicy: IfNotPresent
+ image: perl
+ command:
+ - perl
+ - '-Mbignum=bpi'
+ - '-wle'
+ - print bpi(2000)
+ restartPolicy: Never
+ serviceAccount: default
+ initContainers: []
+ volumes: []
+ imagePullSecrets: null
+ backoffLimit: 5
+ parallelism: 2
+ completions: 4
+ activeDeadlineSeconds: 300
+ ```
+
+2. 您可以直接在清单文件中进行调整,然后点击**创建**,或者关闭**编辑 YAML**然后返回**创建任务**页面。
+
+   {{< notice note >}}您可以跳过本教程的**存储设置**和**高级设置**。有关更多信息,请参见[挂载持久卷](../../../project-user-guide/application-workloads/deployments/#步骤-4挂载持久卷)和[配置高级设置](../../../project-user-guide/application-workloads/deployments/#步骤-5配置高级设置)。{{</ notice >}}
+
+### 步骤 6:检查结果
+
+1. 在最后一步**高级设置**中,点击**创建**完成操作。如果创建成功,将添加新条目到任务列表中。
+
+2. 点击此任务,然后转到**任务记录**选项卡,您可以在其中查看每个执行记录的信息。先前在步骤 3 中**容器组完成数量**设置为 `4`,因此有四个已结束的容器组。
+
+   {{< notice tip >}}如果任务失败,您可以重新运行该任务,失败原因显示在**消息**下。{{</ notice >}}
+
+3. 在**资源状态**中,您可以查看容器组状态。先前将**并行容器组数量**设置为 2,因此每次会创建两个容器组。点击右侧的图标,然后点击刷新图标刷新执行记录。
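+
+除了在控制台查看,也可以直接读取任务的容器日志来获取计算结果。以下命令为示意,命名空间与任务名沿用上文清单中的 `cc` 和 `job-test-1`:
+
+```bash
+# 输出最后一行,即 π 的计算结果
+kubectl -n cc logs job/job-test-1 | tail -n 1
+```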
+
+### 资源状态
+
+1. 点击**资源状态**选项卡查看任务的容器组。
+
+2. 点击刷新图标刷新容器组信息,点击展开/折叠图标显示或隐藏每个容器组中的容器。
+
+### 元数据
+
+点击**元数据**选项卡查看任务的标签和注解。
+
+### 环境变量
+
+点击**环境变量**选项卡查看任务的环境变量。
+
+### 事件
+
+点击**事件**选项卡查看任务的事件。
+
diff --git a/content/zh/docs/v3.4/project-user-guide/application-workloads/routes.md b/content/zh/docs/v3.4/project-user-guide/application-workloads/routes.md
new file mode 100644
index 000000000..7d09c1690
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/application-workloads/routes.md
@@ -0,0 +1,132 @@
+---
+title: "应用路由"
+keywords: "KubeSphere, Kubernetes, 路由, 应用路由"
+description: "了解应用路由(即 Ingress)的基本概念以及如何在 KubeSphere 中创建应用路由。"
+weight: 10270
+---
+
+本文档介绍了如何在 KubeSphere 上创建、使用和编辑应用路由。
+
+KubeSphere 上的应用路由和 Kubernetes 上的 [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress) 相同,您可以使用应用路由和单个 IP 地址来聚合和暴露多个服务。
+
+## 准备工作
+
+- 您需要创建一个企业空间、一个项目以及两个用户(例如,`project-admin` 和 `project-regular`)。在此项目中,`project-admin` 必须具有 `admin` 角色,`project-regular` 必须具有 `operator` 角色。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/)。
+- 若要以 HTTPS 模式访问应用路由,则需要[创建保密字典](../../../project-user-guide/configuration/secrets/)用于加密,保密字典中需要包含 `tls.crt`(TLS 证书)和 `tls.key`(TLS 私钥)。
+- 您需要[创建至少一个服务](../../../project-user-guide/application-workloads/services/)。本文档使用演示服务作为示例,该服务会将容器组名称返回给外部请求。
+
+## 配置应用路由访问方式
+
+1. 以 `project-admin` 身份登录 KubeSphere 的 Web 控制台,然后访问您的项目。
+
+2. 在左侧导航栏中选择**项目设置**下的**网关设置**,点击右侧的**开启网关**。
+
+3. 在出现的对话框中,将**访问模式**设置为 **NodePort** 或 **LoadBalancer**,然后点击**确认**。
+
+ {{< notice note >}}
+
+ 若将**访问模式**设置为 **LoadBalancer**,则可能需要根据插件用户指南在您的环境中启用负载均衡器插件。
+
+ {{</ notice >}}
+
+## 创建应用路由
+
+### 步骤 1:配置基本信息
+
+1. 登出 KubeSphere 的 Web 控制台,以 `project-regular` 身份登录,并访问同一个项目。
+
+2. 选择左侧导航栏**应用负载**中的**应用路由**,点击右侧的**创建**。
+
+3. 在**基本信息**选项卡中,配置应用路由的基本信息,并点击**下一步**。
+ * **名称**:应用路由的名称,用作此应用路由的唯一标识符。
+ * **别名**:应用路由的别名。
+ * **描述信息**:应用路由的描述信息。
+
+### 步骤 2:配置路由规则
+
+1. 在**路由规则**选项卡中,点击**添加路由规则**。
+
+2. 选择一种模式来配置路由规则,点击 **√**,然后点击**下一步**。
+
+   * **自动生成**:KubeSphere 自动以 `<服务名称>.<项目名称>.<网关地址>.nip.io` 格式生成域名,该域名由 [nip.io](https://nip.io/) 自动解析为网关地址。该模式仅支持 HTTP。
+
+ * **域名**:为应用路由设置域名。
+ * **协议**:选择 `http` 或 `https`。如果选择了 `https`,则需要选择包含 `tls.crt`(TLS 证书)和 `tls.key`(TLS 私钥)的密钥用于加密。
+ * **路径**:将每个服务映射到一条路径。您可以点击**添加**来添加多条路径。
+
+### (可选)步骤 3:配置高级设置
+
+1. 在**高级设置**选项卡,选择**添加元数据**。
+
+ 为应用路由配置注解和标签,并点击**创建**。
+
+ {{< notice note >}}
+
+ 您可以使用注解来自定义应用路由的行为。有关更多信息,请参见 [Nginx Ingress controller 官方文档](https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/)。
+
+ {{</ notice >}}
+
+### 步骤 4:获取域名、服务路径和网关地址
+
+1. 在左侧导航栏中选择**应用负载**中的**应用路由**,点击右侧的应用路由名称。
+
+2. 在**规则**区域获取域名和服务路径以及网关地址。
+
+ * 如果[应用路由访问模式](#配置应用路由访问方式)设置为 NodePort,则会使用 Kubernetes 集群节点的 IP 地址作为网关地址,NodePort 位于域名之后。
+
+ * 如果[应用路由访问模式](#配置应用路由访问方式)设置为 LoadBalancer,则网关地址由负载均衡器插件指定。
+
+## 配置域名解析
+
+若在[配置路由规则](#步骤-2配置路由规则)中选择**自动生成**,则不需要配置域名解析,域名会自动由 [nip.io](https://nip.io/) 解析为网关地址。
+
+若在[配置路由规则](#步骤-2配置路由规则)中选择**指定域名**,则需要在 DNS 服务器配置域名解析,或者在客户端机器上将 `<路由网关地址> <路由域名>` 添加到 `/etc/hosts` 文件,如下方示例所示。
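+
+例如,假设路由网关地址为 `192.168.0.2`、路由域名为 `demo.example.com`(均为示例值),则可以在客户端机器的 `/etc/hosts` 中添加如下条目:
+
+```txt
+192.168.0.2 demo.example.com
+```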
+
+## 访问应用路由
+
+### NodePort 访问模式
+
+1. 登录连接到应用路由网关地址的客户端机器。
+
+2. 使用 `<路由域名>:<NodePort>` 地址访问应用路由。
+1. 点击服务右侧的图标可进一步编辑它,例如元数据(**名称**无法编辑)、配置文件、端口以及外部访问。
+
+ - **编辑信息**:查看和编辑基本信息。
+ - **编辑 YAML**:查看、上传、下载或者更新 YAML 文件。
+ - **编辑服务**:查看访问类型并设置选择器和端口。
+ - **编辑外部访问**:编辑服务的外部访问方法。
+ - **删除**:当您删除服务时,会在弹出的对话框中显示关联资源。如果您勾选这些关联资源,则会与服务一同删除。
+
+2. 点击服务名称可以转到它的详情页面。
+
+ - 点击**更多操作**展开下拉菜单,菜单内容与服务列表中的下拉菜单相同。
+   - 容器组列表提供容器组的详细信息(运行状态、节点、容器组 IP 以及资源使用情况)。
+ - 您可以点击容器组条目查看容器信息。
+ - 点击容器日志图标查看容器的输出日志。
+ - 您可以点击容器组名称来查看容器组详情页面。
+
+### 资源状态
+
+1. 点击**资源状态**选项卡以查看服务端口、工作负载和容器组信息。
+
+2. 在**容器组**区域,点击刷新图标以刷新容器组信息,点击展开/收起图标以显示或隐藏每个容器组中的容器。
+
+### 元数据
+
+点击**元数据**选项卡以查看服务的标签和注解。
+
+### 事件
+
+点击**事件**选项卡以查看服务的事件。
+
diff --git a/content/zh/docs/v3.4/project-user-guide/application-workloads/statefulsets.md b/content/zh/docs/v3.4/project-user-guide/application-workloads/statefulsets.md
new file mode 100644
index 000000000..f1375e6db
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/application-workloads/statefulsets.md
@@ -0,0 +1,148 @@
+---
+title: "有状态副本集"
+keywords: 'KubeSphere, Kubernetes, 有状态副本集, 仪表板, 服务'
+description: '了解有状态副本集的基本概念以及如何在 KubeSphere 中创建有状态副本集。'
+linkTitle: "有状态副本集"
+
+weight: 10220
+---
+
+有状态副本集是用于管理有状态应用的工作负载 API 对象,负责一组容器组的部署和扩缩,并保证这些容器组的顺序性和唯一性。
+
+与部署类似,有状态副本集管理基于相同容器规范的容器组。与部署不同的是,有状态副本集为其每个容器组维护一个粘性身份。这些容器组根据相同的规范而创建,但不能相互替换:每个容器组都有一个持久的标识符,无论容器组如何调度,该标识符均保持不变。
+
+如果您想使用持久卷为工作负载提供持久化存储,可以使用有状态副本集作为解决方案的一部分。尽管有状态副本集中的单个容器组容易出现故障,但持久的容器组标识符可以更容易地将现有持久卷匹配到替换任意故障容器组的新容器组。
+
+对于需要满足以下一个或多个需求的应用程序来说,有状态副本集非常有用。
+
+- 稳定的、唯一的网络标识符。
+- 稳定的、持久的存储。
+- 有序的、优雅的部署和扩缩。
+- 有序的、自动的滚动更新。
+
+有关更多信息,请参见 [Kubernetes 官方文档](https://kubernetes.io/zh/docs/concepts/workloads/controllers/statefulset/)。
+
+## 准备工作
+
+您需要创建一个企业空间、一个项目以及一个用户 (`project-regular`),务必邀请该用户到项目中并赋予 `operator` 角色。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/)。
+
+## 创建有状态副本集
+
+在 KubeSphere 中,创建有状态副本集时也会创建 **Headless** 服务。您可以在项目的**应用负载**下的[服务](../services/)中找到 Headless 服务。
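+
+下面的 YAML 片段示意了 Headless 服务与有状态副本集的关联方式(名称、镜像与端口均为假设值,仅作示意):
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: demo-stateful-headless
+spec:
+  clusterIP: None                         # clusterIP 为 None 即 Headless 服务
+  selector:
+    app: demo-stateful
+  ports:
+  - port: 80
+---
+apiVersion: apps/v1
+kind: StatefulSet
+metadata:
+  name: demo-stateful
+spec:
+  serviceName: demo-stateful-headless     # 引用上方的 Headless 服务
+  replicas: 1
+  selector:
+    matchLabels:
+      app: demo-stateful
+  template:
+    metadata:
+      labels:
+        app: demo-stateful
+    spec:
+      containers:
+      - name: nginx
+        image: nginx
+        ports:
+        - containerPort: 80
+```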
+
+### 步骤 1:打开仪表板
+
+以 `project-regular` 身份登录控制台。转到项目的**应用负载**,选择**工作负载**,然后在**有状态副本集**选项卡下点击**创建**。
+
+### 步骤 2:输入基本信息
+
+为有状态副本集指定一个名称(例如 `demo-stateful`),选择项目,然后点击**下一步**继续。
+
+### 步骤 3:设置容器组
+
+1. 设置镜像前,请点击**容器组副本数量**中的图标设置副本数量。
+
+1. 在有状态副本集列表中,点击右侧的图标,在弹出菜单中选择操作,修改您的有状态副本集。
+
+ - **编辑信息**:查看并编辑基本信息。
+ - **编辑 YAML**:查看、上传、下载或者更新 YAML 文件。
+ - **重新创建**:重新创建该有状态副本集。
+ - **删除**:删除该有状态副本集。
+
+2. 点击有状态副本集名称可以进入它的详情页面。
+
+3. 点击**更多操作**,显示您可以对该有状态副本集进行的操作。
+
+ - **回退**:选择要回退的版本。
+ - **编辑服务**:设置端口来暴露容器镜像和服务端口。
+ - **编辑设置**:配置更新策略、容器和存储。
+ - **编辑 YAML**:查看、上传、下载或者更新 YAML 文件。
+ - **重新创建**:重新创建该有状态副本集。
+ - **删除**:删除该有状态副本集并返回有状态副本集列表页面。
+
+4. 点击**资源状态**选项卡,查看该有状态副本集的端口和容器组信息。
+
+   - **副本运行状态**:点击启动/停止图标以开始或停止自动刷新数据。
+
+5. 点击右上角的刷新图标以手动刷新数据。
+
+### 环境变量
+
+点击**环境变量**选项卡查看有状态副本集的环境变量。
+
+### 事件
+
+点击**事件**查看有状态副本集的事件。
diff --git a/content/zh/docs/v3.4/project-user-guide/application/_index.md b/content/zh/docs/v3.4/project-user-guide/application/_index.md
new file mode 100644
index 000000000..017e041ba
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/application/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "应用程序"
+weight: 10100
+
+_build:
+ render: false
+---
diff --git a/content/zh/docs/v3.4/project-user-guide/application/app-template.md b/content/zh/docs/v3.4/project-user-guide/application/app-template.md
new file mode 100644
index 000000000..bd62a79bd
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/application/app-template.md
@@ -0,0 +1,33 @@
+---
+title: "应用模板"
+keywords: 'Kubernetes, Chart, Helm, KubeSphere, 应用程序, 仓库, 模板'
+description: '了解应用模板的概念以及它们如何在企业内部帮助部署应用程序。'
+linkTitle: "应用模板"
+weight: 10110
+---
+
+应用模板是用户上传、交付和管理应用的一种方式。一般来说,根据一个应用的功能以及与外部环境通信的方式,它可以由一个或多个 Kubernetes 工作负载(例如[部署](../../../project-user-guide/application-workloads/deployments/)、[有状态副本集](../../../project-user-guide/application-workloads/statefulsets/)和[守护进程集](../../../project-user-guide/application-workloads/daemonsets/))和[服务](../../../project-user-guide/application-workloads/services/)组成。作为应用模板上传的应用基于 [Helm](https://helm.sh/) 包构建。
+
+## 应用模板的使用方式
+
+您可以将 Helm Chart 交付至 KubeSphere 的公共仓库,或者导入私有应用仓库来提供应用模板。
+
+KubeSphere 的公共仓库也称作应用商店,企业空间中的每位租户都能访问。[上传应用的 Helm Chart](../../../workspace-administration/upload-helm-based-application/) 后,您可以部署应用来测试它的功能,并提交审核。最终待应用审核通过后,您可以选择将它发布至应用商店。有关更多信息,请参见[应用程序生命周期管理](../../../application-store/app-lifecycle-management/)。
+
+对于私有仓库,只有拥有必要权限的用户才能在企业空间中[添加私有仓库](../../../workspace-administration/app-repository/import-helm-repository/)。一般来说,私有仓库基于对象存储服务构建,例如 MinIO。这些私有仓库在导入 KubeSphere 后会充当应用程序池,提供应用模板。
+
+{{< notice note >}}
+
+对于 KubeSphere 中[作为 Helm Chart 上传的单个应用](../../../workspace-administration/upload-helm-based-application/),待审核通过并发布后,会和内置应用一同显示在应用商店中。此外,当您从私有应用仓库中选择应用模板时,在下拉列表中也可以看到**当前企业空间**,其中存储了这些作为 Helm Chart 上传的单个应用。
+
+{{</ notice >}}
+
+KubeSphere 基于 [OpenPitrix](https://github.com/openpitrix/openpitrix)(一个[可插拔组件](../../../pluggable-components/app-store/))部署应用仓库服务。
+
+## 为什么选用应用模板
+
+应用模板使用户能够以可视化的方式部署并管理应用。对内,应用模板作为企业共享资源(例如数据库、中间件和操作系统)发挥着重要作用,有助于团队内部的协调与合作。对外,应用模板设立了构建和交付的行业标准。在不同场景中,用户可以通过一键部署来利用应用模板满足自身需求。
+
+此外,KubeSphere 集成了 OpenPitrix 来提供应用程序全生命周期管理,平台上的 ISV、开发者和普通用户都可以参与到管理流程中。基于 KubeSphere 的多租户体系,每位租户只负责自己的部分,例如应用上传、应用审核、发布、测试以及版本管理。最终,企业可以通过自定义的标准来构建自己的应用商店并丰富应用程序池,同时也能以标准化的方式来交付应用。
+
+有关如何使用应用模板的更多信息,请参见[使用应用模板部署应用](../../../project-user-guide/application/deploy-app-from-template/)。
\ No newline at end of file
diff --git a/content/zh/docs/v3.4/project-user-guide/application/compose-app.md b/content/zh/docs/v3.4/project-user-guide/application/compose-app.md
new file mode 100644
index 000000000..01709fc9a
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/application/compose-app.md
@@ -0,0 +1,96 @@
+---
+title: "构建基于微服务的应用"
+keywords: 'KubeSphere, Kubernetes, 服务网格, 微服务'
+description: '了解如何从零开始构建基于微服务的应用程序。'
+linkTitle: "构建基于微服务的应用"
+weight: 10140
+---
+
+由于每个微服务都在处理应用的一部分功能,因此一个应用可以被划分为不同的组件。这些组件彼此独立,具有各自的职责和局限。在 KubeSphere 中,这类应用被称为**自制应用**,用户可以通过新创建的服务或者现有服务来构建自制应用。
+
+本教程演示了如何创建基于微服务的应用 Bookinfo(包含四种服务),以及如何设置自定义域名以访问该应用。
+
+## 准备工作
+
+- 您需要为本教程创建一个企业空间、一个项目以及一个用户 (`project-regular`)。该用户需要被邀请至项目中并赋予 `operator` 角色。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/)。
+- `project-admin` 需要[设置项目网关](../../../project-administration/project-gateway/),以便 `project-regular` 能在创建应用时定义域名。
+
+## 构建自制应用的微服务
+
+1. 登录 KubeSphere 的 Web 控制台,导航到项目**应用负载**中的**应用**。在**自制应用**选项卡中,点击**创建**。
+
+2. 设置应用名称(例如 `bookinfo`)并点击**下一步**。
+
+3. 在**服务设置**页面,您需要构建自制应用的微服务。点击**创建服务**,选择**无状态服务**。
+
+4. 设置服务名称(例如 `productpage`)并点击**下一步**。
+
+ {{< notice note >}}
+
+ 您可以直接在面板上创建服务,或者启用右上角的**编辑 YAML**以编辑 YAML 文件。
+
+   {{</ notice >}}
+
+5. 点击**容器**下的**添加容器**,在搜索栏中输入 `kubesphere/examples-bookinfo-productpage-v1:1.13.0` 以使用 Docker Hub 镜像。
+
+ {{< notice note >}}
+
+ 输入镜像名称之后,必须按下键盘上的**回车**键。
+
+   {{</ notice >}}
+
+6. 点击**使用默认端口**。有关更多镜像设置的信息,请参见[容器组设置](../../../project-user-guide/application-workloads/container-image-settings/)。点击右下角的 **√** 和**下一步**以继续操作。
+
+7. 在**存储设置**页面,[添加持久卷声明](../../../project-user-guide/storage/volumes/)或点击**下一步**以继续操作。
+
+8. 在**高级设置**页面,直接点击**创建**。
+
+9. 同样,为该应用添加其他三个微服务。以下是相应的镜像信息:
+
+ | 服务 | 名称 | 镜像 |
+ | ---------- | --------- | ------------------------------------------------ |
+ | 无状态服务 | `details` | `kubesphere/examples-bookinfo-details-v1:1.13.0` |
+ | 无状态服务 | `reviews` | `kubesphere/examples-bookinfo-reviews-v1:1.13.0` |
+ | 无状态服务 | `ratings` | `kubesphere/examples-bookinfo-ratings-v1:1.13.0` |
+
+10. 添加微服务完成后,点击**下一步**。
+
+11. 在**路由设置**页面,点击**添加路由规则**。在**指定域名**选项卡中,为您的应用设置域名(例如 `demo.bookinfo`)并在**协议**字段选择 `HTTP`。在`路径`一栏,选择服务 `productpage` 以及端口 `9080`。点击**确定**以继续操作。
+
+ {{< notice note >}}
+
+若未设置项目网关,则无法看见**添加路由规则**按钮。
+
+{{</ notice >}}
+
+12. 您可以添加更多规则或点击**创建**以完成创建过程。
+
+13. 等待应用达到**就绪**状态。
+
+
+## 访问应用
+
+1. 在为应用设置域名时,您需要在 hosts (`/etc/hosts`) 文件中添加一个条目。 例如,添加如下所示的 IP 地址和主机名:
+
+ ```txt
+ 192.168.0.9 demo.bookinfo
+ ```
+
+ {{< notice note >}}
+
+ 您必须添加**自己的** IP 地址和主机名。
+
+   {{</ notice >}}
+
+2. 在**自制应用**中,点击刚才创建的应用。
+
+3. 在**资源状态**中,点击**路由**下的**访问服务**以访问该应用。
+
+ {{< notice note >}}
+
+ 请确保在您的安全组中打开端口。
+
+   {{</ notice >}}
+
+4. 分别点击 **Normal user** 和 **Test user** 以查看其他**服务**。
+
diff --git a/content/zh/docs/v3.4/project-user-guide/application/deploy-app-from-appstore.md b/content/zh/docs/v3.4/project-user-guide/application/deploy-app-from-appstore.md
new file mode 100644
index 000000000..3fe6a97b0
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/application/deploy-app-from-appstore.md
@@ -0,0 +1,62 @@
+---
+title: "从应用商店部署应用"
+keywords: 'Kubernetes, Chart, Helm, KubeSphere, 应用, 应用商店'
+description: '了解如何从应用商店中部署应用程序。'
+linkTitle: "从应用商店部署应用"
+weight: 10130
+---
+
+[应用商店](../../../application-store/)是平台上的公共应用仓库。平台上的每个租户,无论属于哪个企业空间,都可以查看应用商店中的应用。应用商店包含 16 个精选的企业就绪的容器化应用,以及平台上不同企业空间的租户发布的应用。任何经过身份验证的用户都可以从应用商店部署应用。这与私有应用仓库不同,访问私有应用仓库的租户必须属于私有应用仓库所在的企业空间。
+
+本教程演示如何从基于 [OpenPitrix](https://github.com/openpitrix/openpitrix) 的 KubeSphere 应用商店快速部署 [NGINX](https://www.nginx.com/),并通过 NodePort 访问其服务。
+
+## 准备工作
+
+- 您需要启用 [OpenPitrix (App Store)](../../../pluggable-components/app-store/)。
+- 您需要创建一个企业空间、一个项目和一个用户(例如 `project-regular`)。该用户必须被邀请至该项目,并具有 `operator` 角色。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/)。
+
+## 动手实验
+
+### 步骤 1:从应用商店部署 NGINX
+
+1. 以 `project-regular` 身份登录 KubeSphere Web 控制台,点击左上角的**应用商店**。
+
+ {{< notice note >}}
+
+ 您也可以在您的项目中前往**应用负载**下的**应用**页面,点击**创建**,并选择**来自应用商店**进入应用商店。
+
+   {{</ notice >}}
+
+2. 找到并点击 NGINX,在**应用信息**页面点击**安装**。请确保在**安装须知**对话框中点击**同意**。
+
+3. 设置应用的名称和版本,确保 NGINX 部署的位置,点击**下一步**。
+
+4. 在**应用设置**页面,设置应用部署的副本数,根据需要启用或禁用 Ingress,然后点击**安装**。
+
+ {{< notice note >}}
+
+   如需为 NGINX 设置更多的参数,可点击 **YAML** 后的切换开关打开应用的 YAML 配置文件,并在配置文件中设置相关参数。
+
+   {{</ notice >}}
+
+5. 等待应用创建完成并开始运行。
+
+### 步骤 2:访问 NGINX
+
+要从集群外访问 NGINX,您需要先用 NodePort 暴露该应用。
+
+1. 在已创建的项目中打开**服务**页面并点击 NGINX 的服务名称。
+
+2. 在服务详情页面,点击**更多操作**,在下拉菜单中选择**编辑外部访问**。
+
+3. 将**访问方式**设置为 **NodePort** 并点击**确定**。有关更多信息,请参见[项目网关](../../../project-administration/project-gateway/)。
+
+4. 在**端口**区域查看暴露的端口。
+
+5. 用 `<节点 IP 地址>:<节点端口>` 地址访问 NGINX。
+1. 配置字典创建后会显示在列表中。您可以点击右侧的图标,并从下拉菜单中选择操作来修改配置字典。
+
+ - **编辑**:查看和编辑基本信息。
+ - **编辑 YAML**:查看、上传、下载或更新 YAML 文件。
+ - **编辑设置**:修改配置字典键值对。
+ - **删除**:删除配置字典。
+
+2. 点击配置字典名称打开其详情页面。在**数据**选项卡,您可以查看配置字典的所有键值对。
+
+3. 点击**更多操作**对配置字典进行其他操作。
+
+ - **编辑 YAML**:查看、上传、下载或更新 YAML 文件。
+ - **编辑设置**:修改配置字典键值对。
+ - **删除**:删除配置字典并返回配置字典列表页面。
+
+4. 点击**编辑信息**来查看和编辑配置字典的基本信息。
+
+
+## 使用配置字典
+
+在创建工作负载、[服务](../../../project-user-guide/application-workloads/services/)、[任务](../../../project-user-guide/application-workloads/jobs/)或[定时任务](../../../project-user-guide/application-workloads/cronjobs/)时,您可以用配置字典为容器添加环境变量。您可以在**添加容器**页面勾选**环境变量**,点击**引用配置字典或保密字典**,然后从下拉列表中选择一个配置字典。
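+
+下面的容器定义片段示意了如何通过配置字典设置环境变量(配置字典名称 `demo-config` 与键 `LOG_LEVEL` 均为假设值):
+
+```yaml
+containers:
+- name: demo-container
+  image: nginx                # 示例镜像
+  env:
+  - name: LOG_LEVEL           # 容器内的环境变量名
+    valueFrom:
+      configMapKeyRef:
+        name: demo-config     # 假设的配置字典名称
+        key: LOG_LEVEL        # 配置字典中的键
+```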
+
diff --git a/content/zh/docs/v3.4/project-user-guide/configuration/image-registry.md b/content/zh/docs/v3.4/project-user-guide/configuration/image-registry.md
new file mode 100644
index 000000000..e2a4bb3ed
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/configuration/image-registry.md
@@ -0,0 +1,104 @@
+---
+title: "镜像仓库"
+keywords: 'KubeSphere, Kubernetes, Docker, 保密字典'
+description: '了解如何在 KubeSphere 中创建镜像仓库。'
+linkTitle: "镜像仓库"
+weight: 10430
+---
+
+Docker 镜像是一个只读的模板,可用于部署容器服务。每个镜像都有一个唯一标识符(即 `镜像名称:标签`)。例如,一个镜像可以包含仅安装了 Apache 和几个应用的完整 Ubuntu 操作系统软件包。镜像仓库则用于存储和分发 Docker 镜像。
+
+本教程演示如何为不同的镜像仓库创建保密字典。
+
+## 准备工作
+
+您需要创建一个企业空间、一个项目和一个用户(例如 `project-regular`)。该用户必须已被邀请至该项目,并具有 `operator` 角色。有关更多信息,请参阅[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/)。
+
+## 创建保密字典
+
+创建工作负载、[服务](../../../project-user-guide/application-workloads/services/)、[任务](../../../project-user-guide/application-workloads/jobs/)或[定时任务](../../../project-user-guide/application-workloads/cronjobs/)时,除了从公共仓库选择镜像,您还可以从私有仓库选择镜像。要使用私有仓库中的镜像,您必须先为私有仓库创建保密字典,以便在 KubeSphere 中集成该私有仓库。
+
+### 步骤 1:进入保密字典页面
+
+以 `project-regular` 用户登录 KubeSphere Web 控制台并进入项目,在左侧导航栏中选择**配置**下的**保密字典**,然后点击**创建**。
+
+### 步骤 2:配置基本信息
+
+设置保密字典的名称(例如 `demo-registry-secret`),然后点击**下一步**。
+
+{{< notice tip >}}
+
+您可以在对话框右上角启用**编辑 YAML** 来查看保密字典的 YAML 清单文件,并通过直接编辑清单文件来创建保密字典。您也可以继续执行后续步骤在控制台上创建保密字典。
+
+{{</ notice >}}
+
+### 步骤 3:配置镜像服务信息
+
+将**类型**设置为 **镜像服务信息**。要在创建应用负载时使用私有仓库中的镜像,您需要配置以下字段:
+
+- **仓库地址**:镜像仓库的地址,其中包含创建应用负载时需要使用的镜像。
+- **用户名**:登录镜像仓库所需的用户名。
+- **密码**:登录镜像仓库所需的密码。
+- **邮箱**(可选):您的邮箱地址。
+
+#### 添加 Docker Hub 仓库
+
+1. 在 [Docker Hub](https://hub.docker.com/) 上添加镜像仓库之前,您需要注册一个 Docker Hub 帐户。在**保密字典设置**页面,将**仓库地址**设置为 `docker.io`,将**用户名**和**密码**分别设置为您的 Docker ID 和密码,然后点击**验证**以检查地址是否可用。
+
+2. 点击**创建**。保密字典创建后会显示在**保密字典**界面。有关保密字典创建后如何编辑保密字典,请参阅[查看保密字典详情](../../../project-user-guide/configuration/secrets/#查看保密字典详情)。
+
+#### 添加 Harbor 镜像仓库
+
+[Harbor](https://goharbor.io/) 是一个开源的可信云原生仓库项目,用于对内容进行存储、签名和扫描。通过增加用户经常需要的功能,例如安全、身份验证和管理,Harbor 扩展了开源的 Docker Distribution。Harbor 使用 HTTP 和 HTTPS 为仓库请求提供服务。
+
+**HTTP**
+
+1. 您需要修改集群中所有节点的 Docker 配置。例如,如果外部 Harbor 仓库的地址为 `http://192.168.0.99`,您需要在 `/etc/systemd/system/docker.service.d/docker-options.conf` 文件中增加 `--insecure-registry=192.168.0.99` 标志。
+
+ ```bash
+ [Service]
+ Environment="DOCKER_OPTS=--registry-mirror=https://registry.docker-cn.com --insecure-registry=10.233.0.0/18 --data-root=/var/lib/docker --log-opt max-size=50m --log-opt max-file=5 \
+ --insecure-registry=192.168.0.99"
+ ```
+
+ {{< notice note >}}
+
+   - 请将镜像仓库的地址替换成实际的地址。
+
+   - 有关 `Environment` 字段中的标志,请参阅 [Dockerd Options](https://docs.docker.com/engine/reference/commandline/dockerd/)。
+
+   - Docker 守护进程需要 `--insecure-registry` 标志才能与不安全的仓库通信。有关该标志的更多信息,请参阅 [Docker 官方文档](https://docs.docker.com/engine/reference/commandline/dockerd/#insecure-registries)。
+
+   {{</ notice >}}
+
+2. 重新加载配置文件并重启 Docker。
+
+ ```bash
+ sudo systemctl daemon-reload
+ ```
+
+ ```bash
+ sudo systemctl restart docker
+ ```
+
+3. 在 KubeSphere 控制台上进入创建保密字典的**数据设置**页面,将**类型**设置为**镜像服务信息**,将**仓库地址**设置为您的 Harbor IP 地址,并设置用户名和密码。
+
+ {{< notice note >}}
+
+ 如需使用 Harbor 域名而非 IP 地址,您需要在集群中配置 CoreDNS 和 nodelocaldns。
+
+   {{</ notice >}}
+
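+   如需参考,下面是在 CoreDNS 的 Corefile 中通过 hosts 插件为 Harbor 域名添加解析的示意片段(域名与 IP 均为假设值,实际配置请以您的集群为准):
+
+   ```txt
+   hosts {
+       192.168.0.99 harbor.example.com
+       fallthrough
+   }
+   ```
+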
+4. 点击**创建**。保密字典创建后会显示在**保密字典**页面。有关保密字典创建后如何编辑保密字典,请参阅[查看保密字典详情](../../../project-user-guide/configuration/secrets/#查看保密字典详情)。
+
+**HTTPS**
+
+有关如何集成基于 HTTPS 的 Harbor 仓库,请参阅 [Harbor 官方文档](https://goharbor.io/docs/1.10/install-config/configure-https/)。请确保您已使用 `docker login` 命令连接到您的 Harbor 仓库。
+
+## 使用镜像仓库
+
+如果您已提前创建了私有镜像仓库的保密字典,您可以选择私有镜像仓库中的镜像。例如,创建[部署](../../../project-user-guide/application-workloads/deployments/)时,您可以在**添加容器**页面点击**镜像**下拉列表选择一个仓库,然后输入镜像名称和标签使用镜像。
+
+如果您使用 YAML 文件创建工作负载且需要使用私有镜像仓库,需要在本地 YAML 文件中手动添加 `kubesphere.io/imagepullsecrets` 注解,其值为 JSON 格式的字符串(其中键为容器名称,值为保密字典名称),以保证镜像拉取凭证不会丢失,如下方示例所示。
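+
+下面是一个带有该注解的部署清单片段示例(容器名称 `container-1` 与保密字典名称 `demo-registry-secret` 均为假设值):
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: demo-deployment
+  annotations:
+    # 键为容器名称,值为保密字典名称(JSON 格式的字符串)
+    kubesphere.io/imagepullsecrets: '{"container-1": "demo-registry-secret"}'
+```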
+
+
diff --git a/content/zh/docs/v3.4/project-user-guide/configuration/secrets.md b/content/zh/docs/v3.4/project-user-guide/configuration/secrets.md
new file mode 100644
index 000000000..231a710eb
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/configuration/secrets.md
@@ -0,0 +1,121 @@
+---
+title: "保密字典"
+keywords: 'KubeSphere, Kubernetes, 保密字典'
+description: '了解如何在 KubeSphere 中创建保密字典。'
+linkTitle: "保密字典"
+weight: 10410
+---
+
+Kubernetes [保密字典 (Secret)](https://kubernetes.io/zh/docs/concepts/configuration/secret/) 可用于存储和管理密码、OAuth 令牌和 SSH 保密字典等敏感信息。容器组可以通过[三种方式](https://kubernetes.io/zh/docs/concepts/configuration/secret/#overview-of-secrets)使用保密字典:
+
+- 作为挂载到容器组中容器化应用上的卷中的文件。
+- 作为容器组中容器使用的环境变量。
+- 作为 kubelet 为容器组拉取镜像时的镜像仓库凭证。
+
+本教程演示如何在 KubeSphere 中创建保密字典。
+
+## 准备工作
+
+您需要创建一个企业空间、一个项目和一个用户(例如 `project-regular`)。该用户必须已被邀请至该项目,并具有 `operator` 角色。有关更多信息,请参阅[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/)。
+
+## 创建保密字典
+
+### 步骤 1:进入保密字典页面
+
+以 `project-regular` 用户登录控制台并进入项目,在左侧导航栏中选择**配置**下的**保密字典**,然后点击**创建**。
+
+### 步骤 2:配置基本信息
+
+设置保密字典的名称(例如 `demo-secret`),然后点击**下一步**。
+
+{{< notice tip >}}
+
+您可以在对话框右上角启用**编辑 YAML** 来查看保密字典的 YAML 清单文件,并通过直接编辑清单文件来创建保密字典。您也可以继续执行后续步骤在控制台上创建保密字典。
+
+{{</ notice >}}
+
+### 步骤 3:设置保密字典
+
+1. 在**数据设置**选项卡,从**类型**下拉列表中选择保密字典类型。您可以在 KubeSphere 中创建以下保密字典,类型对应 YAML 文件中的 `type` 字段。
+
+ {{< notice note >}}
+
+   对于所有的保密字典类型,配置在清单文件中 `data` 字段的所有键值对的值都必须是 base64 编码的字符串。KubeSphere 会自动将您在控制台上配置的值转换成 base64 编码并保存到 YAML 文件中。例如,保密字典类型为**默认**时,如果您在**添加数据**页面将**键**和**值**分别设置为 `password` 和 `hello123`,YAML 文件中显示的实际值为 `aGVsbG8xMjM=`(即 `hello123` 的 base64 编码,由 KubeSphere 自动转换)。您可以参考本节末尾的命令行示例验证这一编码。
+
+   {{</ notice >}}
+
+ - **默认**:对应 Kubernetes 的 [Opaque](https://kubernetes.io/zh/docs/concepts/configuration/secret/#opaque-secret) 保密字典类型,同时也是 Kubernetes 的默认保密字典类型。您可以用此类型保密字典创建任意自定义数据。点击**添加数据**为其添加键值对。
+
+ - **TLS 信息**:对应 Kubernetes 的 [kubernetes.io/tls](https://kubernetes.io/zh/docs/concepts/configuration/secret/#tls-secret) 保密字典类型,用于存储证书及其相关保密字典。这类数据通常用于 TLS 场景,例如提供给应用路由 (Ingress) 资源用于终结 TLS 链接。使用此类型的保密字典时,您必须为其指定**凭证**和**私钥**,分别对应 YAML 文件中的 `tls.crt` 和 `tls.key` 字段。
+
+ - **镜像服务信息**:对应 Kubernetes 的 [kubernetes.io/dockerconfigjson](https://kubernetes.io/zh/docs/concepts/configuration/secret/#docker-config-secrets) 保密字典类型,用于存储访问 Docker 镜像仓库所需的凭证。有关更多信息,请参阅[镜像仓库](../image-registry/)。
+
+ - **用户名和密码**:对应 Kubernetes 的 [kubernetes.io/basic-auth](https://kubernetes.io/zh/docs/concepts/configuration/secret/#basic-authentication-secret) 保密字典类型,用于存储基本身份认证所需的凭证。使用此类型的保密字典时,您必须为其指定**用户名**和**密码**,分别对应 YAML 文件中的 `username` 和 `password` 字段。
+
+2. 本教程以默认类型为例。点击**添加数据**,将**键**设置为 `MYSQL_ROOT_PASSWORD` 并将**值**设置为 `123456`,为 MySQL 设置保密字典。
+
+3. 点击对话框右下角的 **√** 以确认配置。您可以继续为保密字典添加键值对或点击**创建**完成操作。有关保密字典使用的更多信息,请参阅[创建并发布 WordPress](../../../quick-start/wordpress-deployment/#任务-3创建应用程序)。
+
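+如需验证上述编码规则,可以参考下面的命令行示例(以本教程中的 `hello123` 为例):
+
+```bash
+# 编码:输出 aGVsbG8xMjM=
+echo -n 'hello123' | base64
+# 解码:输出 hello123
+echo 'aGVsbG8xMjM=' | base64 -d
+```
+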
+## 查看保密字典详情
+
+1. 保密字典创建后会显示在列表中。您可以点击右侧的图标,并从下拉菜单中选择操作来修改保密字典。
+
+ - **编辑信息**:查看和编辑基本信息。
+ - **编辑 YAML**:查看、上传、下载或更新 YAML 文件。
+ - **编辑设置**:修改保密字典键值对。
+ - **删除**:删除保密字典。
+
+2. 点击保密字典名称打开保密字典详情页面。在**数据**选项卡,您可以查看保密字典的所有键值对。
+
+ {{< notice note >}}
+
+如上文所述,KubeSphere 自动将键值对的值转换成对应的 base64 编码。您可以点击右侧的眼睛图标查看 base64 解码后的值。
+
+{{</ notice >}}
+将右侧的项目拖放至目标组。若要添加新的分组,点击**添加监控组**。如果您想修改监控组的位置,请将鼠标悬停至监控组上并点击右侧的上移或下移图标。
+
+{{< notice note >}}
+
+监控组在右侧所显示的位置和中间栏图表的位置一致。换言之,如果您修改监控组在右侧的顺序,其所对应的图表位置也会随之变化。
+
+{{</ notice >}}
+
+## 面板模板
+
+您可以在 [Monitoring Dashboards Gallery](https://github.com/kubesphere/monitoring-dashboard/tree/master/contrib/gallery) 中找到并分享面板模板,KubeSphere 社区用户可以在这里贡献他们的模板设计。
\ No newline at end of file
diff --git a/content/zh/docs/v3.4/project-user-guide/custom-application-monitoring/visualization/panel.md b/content/zh/docs/v3.4/project-user-guide/custom-application-monitoring/visualization/panel.md
new file mode 100644
index 000000000..457b7b14d
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/custom-application-monitoring/visualization/panel.md
@@ -0,0 +1,34 @@
+---
+title: "图表"
+keywords: '监控, Prometheus, Prometheus Operator'
+description: '探索仪表板属性和图表指标。'
+linkTitle: "图表"
+weight: 10816
+---
+
+KubeSphere 当前支持两种图表:文本图表和图形图表。
+
+## 文本图表
+
+文本图表适合显示单个指标的数值。文本图表的编辑窗口包括两部分,上半部分显示指标的实时数值,下半部分可进行编辑。您可以输入 PromQL 表达式以获取单个指标的数值。
+
+- **图表名称**:该文本图表的名称。
+- **单位**:指标数据的单位。
+- **精确位**:支持整数。
+- **监控指标**:从包含可用 Prometheus 指标的下拉列表中指定一个监控指标。
+
+## 图形图表
+
+图形图表适合显示多个指标的数值。图形图表的编辑窗口包括三部分,上半部分显示指标的实时数值,左侧栏用于设置图表主题,右侧栏用于编辑指标和图表描述。
+
+- **图表类型**:支持折线图和柱状图。
+- **图例类型**:支持基础图和堆叠图。
+- **图表配色**:修改图表各个指标的颜色。
+- **图表名称**:图表的名称。
+- **描述信息**:图表描述。
+- **添加**:新增查询编辑器。
+- **图例名称**:图表中线条的图例名称,支持参数。例如 `{{pod}}` 表示使用 Prometheus 指标标签 `pod` 来给图表中的线条命名。
+- **间隔**:两个数据点间的步骤值 (Step Value)。
+- **监控指标**:包含可用的 Prometheus 指标。
+- **单位**:指标数据的单位。
+- **精确位**:支持整数。
diff --git a/content/zh/docs/v3.4/project-user-guide/custom-application-monitoring/visualization/querying.md b/content/zh/docs/v3.4/project-user-guide/custom-application-monitoring/visualization/querying.md
new file mode 100644
index 000000000..048450b00
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/custom-application-monitoring/visualization/querying.md
@@ -0,0 +1,13 @@
+---
+title: "查询"
+keywords: '监控, Prometheus, Prometheus Operator, 查询'
+description: '了解如何指定监控指标。'
+linkTitle: "查询"
+weight: 10817
+---
+
+在查询编辑器中,在**监控指标**中输入 PromQL 表达式以处理和获取指标。若要了解如何编写 PromQL,请参阅 [Query Examples](https://prometheus.io/docs/prometheus/latest/querying/examples/)。
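+
+例如,下面是一个按容器组聚合 CPU 用量的 PromQL 表达式示例(其中命名空间与时间窗口均为假设值):
+
+```txt
+sum(rate(container_cpu_usage_seconds_total{namespace="demo-project"}[5m])) by (pod)
+```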
+
+
+
+
\ No newline at end of file
diff --git a/content/zh/docs/v3.4/project-user-guide/grayscale-release/_index.md b/content/zh/docs/v3.4/project-user-guide/grayscale-release/_index.md
new file mode 100644
index 000000000..83875c1cc
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/grayscale-release/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "灰度发布"
+weight: 10500
+
+_build:
+ render: false
+---
diff --git a/content/zh/docs/v3.4/project-user-guide/grayscale-release/blue-green-deployment.md b/content/zh/docs/v3.4/project-user-guide/grayscale-release/blue-green-deployment.md
new file mode 100644
index 000000000..4d13f5ad8
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/grayscale-release/blue-green-deployment.md
@@ -0,0 +1,74 @@
+---
+title: "蓝绿部署"
+keywords: 'KubeSphere, Kubernetes, 服务网格, istio, 发布, 蓝绿部署'
+description: '了解如何在 KubeSphere 中发布蓝绿部署。'
+
+linkTitle: "蓝绿部署"
+weight: 10520
+---
+
+
+蓝绿发布提供零宕机部署,即在保留旧版本的同时部署新版本。在任何时候,只有其中一个版本处于活跃状态,接收所有流量,另一个版本保持空闲状态。如果运行出现问题,您可以快速回滚到旧版本。
+
+
+
+
+## 准备工作
+
+- 您需要启用 [KubeSphere 服务网格](../../../pluggable-components/service-mesh/)。
+- 您需要创建一个企业空间、一个项目和一个用户 (`project-regular`),务必邀请该用户到项目中并赋予 `operator` 角色。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/)。
+- 您需要启用**应用治理**并有一个可用应用,以便您可以实现该应用的蓝绿部署。本教程使用示例应用 Bookinfo。有关更多信息,请参见[部署 Bookinfo 和管理流量](../../../quick-start/deploy-bookinfo-to-k8s/)。
+
+## 创建蓝绿部署任务
+
+1. 以 `project-regular` 身份登录 KubeSphere,前往**灰度发布**页面,在**发布模式**选项卡下,点击**蓝绿部署**右侧的**创建**。
+
+2. 输入名称然后点击**下一步**。
+
+3. 在**服务设置**选项卡,从下拉列表选择您的应用以及想实现蓝绿部署的服务。如果您也使用示例应用 Bookinfo,请选择 **reviews** 并点击**下一步**。
+
+4. 在**新版本设置**选项卡,添加另一个版本(例如 `kubesphere/examples-bookinfo-reviews-v2:1.16.2`),然后点击**下一步**。
+
+5. 在**策略设置**选项卡,要让应用版本 `v2` 接管所有流量,请选择**接管**,然后点击**创建**。
+
+6. 蓝绿部署任务创建后,会显示在**任务状态**选项卡下。点击可查看详情。
+
+7. 稍等片刻后,您可以看到所有流量都流向 `v2` 版本。
+
+8. 新的**部署**也已创建。
+
+9. 您可以执行以下命令直接获取虚拟服务来查看权重:
+
+ ```bash
+ kubectl -n demo-project get virtualservice -o yaml
+ ```
+
+ {{< notice note >}}
+
+ - 当您执行上述命令时,请将 `demo-project` 替换成您自己的项目(即命名空间)名称。
+ - 如果您想使用 KubeSphere 控制台上的 Web Kubectl 来执行命令,则需要使用 `admin` 帐户。
+
+   {{</ notice >}}
+
+10. 预期输出结果:
+
+ ```yaml
+ ...
+ spec:
+ hosts:
+ - reviews
+ http:
+ - route:
+ - destination:
+ host: reviews
+ port:
+ number: 9080
+ subset: v2
+ weight: 100
+ ...
+ ```
+
+## 下线任务
+
+待您实现蓝绿部署并且结果满足您的预期,您可以点击**删除**来移除 `v1` 版本,从而下线任务。
+
diff --git a/content/zh/docs/v3.4/project-user-guide/grayscale-release/canary-release.md b/content/zh/docs/v3.4/project-user-guide/grayscale-release/canary-release.md
new file mode 100644
index 000000000..2ff8069d0
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/grayscale-release/canary-release.md
@@ -0,0 +1,127 @@
+---
+title: "金丝雀发布"
+keywords: 'KubeSphere, Kubernetes, 金丝雀发布, istio, 服务网格'
+description: '了解如何在 KubeSphere 中部署金丝雀服务。'
+linkTitle: "金丝雀发布"
+weight: 10530
+---
+
+KubeSphere 基于 [Istio](https://istio.io/) 向用户提供部署金丝雀服务所需的控制功能。在金丝雀发布中,您可以引入服务的新版本,并向其发送一小部分流量来进行测试。同时,旧版本负责处理其余的流量。如果一切顺利,您就可以逐渐增加向新版本发送的流量,同时逐步停用旧版本。如果出现任何问题,您可以用 KubeSphere 更改流量比例来回滚至先前版本。
+
+该方法能够高效地测试服务性能和可靠性,有助于在实际环境中发现潜在问题,同时不影响系统整体稳定性。
+
+
+
+## 视频演示
+
+
+
+## 准备工作
+
+- 您需要启用 [KubeSphere 服务网格](../../../pluggable-components/service-mesh/)。
+- 您需要启用 [KubeSphere 日志系统](../../../pluggable-components/logging/)以使用 Tracing 功能。
+- 您需要创建一个企业空间、一个项目和一个用户 (`project-regular`)。请务必邀请该用户至项目中并赋予 `operator` 角色。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/)。
+- 您需要开启**应用治理**并有一个可用应用,以便实现该应用的金丝雀发布。本教程中使用的示例应用是 Bookinfo。有关更多信息,请参见[部署 Bookinfo 和管理流量](../../../quick-start/deploy-bookinfo-to-k8s/)。
+
+## 步骤 1:创建金丝雀发布任务
+
+1. 以 `project-regular` 身份登录 KubeSphere 控制台,转到**灰度发布**页面,在**发布模式**选项卡下,点击**金丝雀发布**右侧的**创建**。
+
+2. 设置任务名称,点击**下一步**。
+
+3. 在**服务设置**选项卡,从下拉列表中选择您的应用和要实现金丝雀发布的服务。如果您同样使用示例应用 Bookinfo,请选择 **reviews** 并点击**下一步**。
+
+4. 在**新版本设置**选项卡,添加另一个版本(例如 `kubesphere/examples-bookinfo-reviews-v2:1.16.2`;将 `v1` 改为 `v2`)并点击**下一步**。
+
+5. 您可以使用具体比例或者使用请求内容(例如 `Http Header`、`Cookie` 和 `URI`)分别向这两个版本(`v1` 和 `v2`)发送流量。选择**指定流量分配**,并拖动中间的滑块来更改向这两个版本分别发送的流量比例(例如设置为各 50%)。操作完成后,点击**创建**。
+
+## 步骤 2:验证金丝雀发布
+
+现在您有两个可用的应用版本,请访问该应用以验证金丝雀发布。
+
+1. 访问 Bookinfo 网站,重复刷新浏览器。您会看到 **Book Reviews** 板块以 50% 的比例在 v1 版本和 v2 版本之间切换。
+
+2. 金丝雀发布任务创建后会显示在**任务状态**选项卡下。点击该任务查看详情。
+
+3. 您可以看到每个版本分别收到一半流量。
+
+4. 新的部署也已创建。
+
+5. 您可以执行以下命令直接获取虚拟服务来识别权重:
+
+ ```bash
+ kubectl -n demo-project get virtualservice -o yaml
+ ```
+
+ {{< notice note >}}
+
+- 当您执行上述命令时,请将 `demo-project` 替换为您自己项目(即命名空间)的名称。
+- 如果您想在 KubeSphere 控制台使用 Web kubectl 执行命令,则需要使用 `admin` 帐户登录。
+
+{{</ notice >}}
+
+6. 预期输出:
+
+ ```bash
+ ...
+ spec:
+ hosts:
+ - reviews
+ http:
+ - route:
+ - destination:
+ host: reviews
+ port:
+ number: 9080
+ subset: v1
+ weight: 50
+ - destination:
+ host: reviews
+ port:
+ number: 9080
+ subset: v2
+ weight: 50
+ ...
+ ```
+
+## 步骤 3:查看网络拓扑
+
+1. 在运行 KubeSphere 的机器上执行以下命令引入真实流量,每 0.5 秒模拟访问一次 Bookinfo。
+
+ ```bash
+ watch -n 0.5 "curl http://productpage.demo-project.192.168.0.2.nip.io:32277/productpage?u=normal"
+ ```
+
+ {{< notice note >}}
+ 请确保将以上命令中的主机名和端口号替换成您自己环境的。
+   {{</ notice >}}
+
+2. 在**流量监控**中,您可以看到不同服务之间的通信、依赖关系、运行状态及性能。
+
+3. 点击组件(例如 **reviews**),在右侧可以看到流量监控信息,显示**流量**、**成功率**和**持续时间**的实时数据。
+
+## 步骤 4:查看链路追踪详情
+
+KubeSphere 提供基于 [Jaeger](https://www.jaegertracing.io/) 的分布式追踪功能,用来对基于微服务的分布式应用程序进行监控及故障排查。
+
+1. 在**链路追踪**选项卡中,可以清楚地看到请求的所有阶段及内部调用,以及每个阶段的调用耗时。
+
+2. 点击任意条目,可以深入查看请求的详细信息及该请求被处理的位置(在哪个机器或者容器)。
+
+## 步骤 5:接管所有流量
+
+如果一切运行顺利,则可以将所有流量引入新版本。
+
+1. 在**任务状态**中,点击金丝雀发布任务。
+
+2. 在弹出的对话框中,点击 **reviews v2** 右侧的图标,选择**接管**。这代表 100% 的流量将会被发送到新版本 (v2)。
+
+ {{< notice note >}}
+ 如果新版本出现任何问题,可以随时回滚到之前的 v1 版本。
+   {{</ notice >}}
+
+3. 再次访问 Bookinfo,多刷新几次浏览器,您会发现页面只会显示 **reviews v2** 的结果(即带有黑色星标的评级)。
+
+
diff --git a/content/zh/docs/v3.4/project-user-guide/grayscale-release/overview.md b/content/zh/docs/v3.4/project-user-guide/grayscale-release/overview.md
new file mode 100644
index 000000000..e87963f51
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/grayscale-release/overview.md
@@ -0,0 +1,39 @@
+---
+title: "概述"
+keywords: 'Kubernetes, KubeSphere, 灰度发布, 概述, 服务网格'
+description: '了解灰度发布的基本概念。'
+linkTitle: "概述"
+weight: 10510
+---
+
+现代云原生应用程序通常由一组可独立部署的组件组成,这些组件也称作微服务。在微服务架构中,每个微服务执行特定功能,开发者能够非常灵活地对应用做出调整,而不会影响其他服务。这种组成应用程序的微服务网络也称作**服务网格**。
+
+KubeSphere 服务网格基于开源项目 [Istio](https://istio.io/) 构建,可以控制应用程序不同部分之间的通信方式。其中,灰度发布策略为用户在不影响微服务之间通信的情况下测试和发布新的应用版本发挥了重要作用。
+
+## 灰度发布策略
+
+当您在 KubeSphere 中升级应用至新版本时,灰度发布可以确保平稳过渡。采用的具体策略可能不同,但最终目标相同,即提前识别潜在问题,避免影响在生产环境中运行的应用。这样不仅可以将版本升级的风险降到最低,还能测试应用新构建版本的性能。
+
+KubeSphere 为用户提供三种灰度发布策略。
+
+### [蓝绿部署](../blue-green-deployment/)
+
+蓝绿部署会创建一个相同的备用环境,在该环境中运行新的应用版本,从而为发布新版本提供一个高效的方式,不会出现宕机或者服务中断。通过这种方法,KubeSphere 将所有流量路由至其中一个版本,即在任意给定时间只有一个环境接收流量。如果新构建版本出现任何问题,您可以立刻回滚至先前版本。
+
+### [金丝雀发布](../canary-release/)
+
+金丝雀部署缓慢地向一小部分用户推送变更,从而将版本升级的风险降到最低。具体来讲,您可以在高度响应的仪表板上进行定义,选择将新的应用版本暴露给一部分生产流量。另外,您执行金丝雀部署后,KubeSphere 会监控请求,为您提供实时流量的可视化视图。在整个过程中,您可以分析新的应用版本的行为,选择逐渐增加向它发送的流量比例。待您对构建版本有把握后,便可以把所有流量路由至该构建版本。
+
+### [流量镜像](../traffic-mirroring/)
+
+流量镜像复制实时生产流量并发送至镜像服务。默认情况下,KubeSphere 会镜像所有流量,您也可以指定一个值来手动定义镜像流量的百分比。常见用例包括:
+
+- 测试新的应用版本。您可以对比镜像流量和生产流量的实时输出。
+- 测试集群。您可以将实例的生产流量用于集群测试。
+- 测试数据库。您可以使用空数据库来存储和加载数据。
+
+{{< notice note >}}
+
+当前版本的 KubeSphere 暂不支持为多集群应用创建灰度发布策略。
+
+{{</ notice >}}
\ No newline at end of file
diff --git a/content/zh/docs/v3.4/project-user-guide/grayscale-release/traffic-mirroring.md b/content/zh/docs/v3.4/project-user-guide/grayscale-release/traffic-mirroring.md
new file mode 100644
index 000000000..57f8998a9
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/grayscale-release/traffic-mirroring.md
@@ -0,0 +1,81 @@
+---
+title: "流量镜像"
+keywords: 'KubeSphere, Kubernetes, 流量镜像, Istio'
+description: '了解如何在 KubeSphere 中执行流量镜像任务。'
+linkTitle: "流量镜像"
+weight: 10540
+---
+
+流量镜像 (Traffic Mirroring),也称为流量影子 (Traffic Shadowing),是一种强大的、无风险的测试应用版本的方法,它将实时流量的副本发送给被镜像的服务。采用这种方法,您可以搭建一个与原环境类似的环境以进行验收测试,从而提前发现问题。由于镜像流量存在于主服务关键请求路径带外,终端用户在测试全过程不会受到影响。
+
+## 准备工作
+
+- 您需要启用 [KubeSphere 服务网格](../../../pluggable-components/service-mesh/)。
+- 您需要创建一个企业空间、一个项目和一个用户(例如 `project-regular`)。该用户必须已被邀请至该项目,并具有 `operator` 角色。有关更多信息,请参阅[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/)。
+- 您需要启用**应用治理**,并有可用的应用,以便为该应用进行流量镜像。本教程以 Bookinfo 为例。有关更多信息,请参阅[部署 Bookinfo 和管理流量](../../../quick-start/deploy-bookinfo-to-k8s/)。
+
+## 创建流量镜像任务
+
+1. 以 `project-regular` 用户登录 KubeSphere 并进入项目。前往**灰度发布**页面,在页面右侧点击**流量镜像**右侧的**创建**。
+
+2. 设置发布任务的名称并点击**下一步**。
+
+3. 在**服务设置**选项卡,从下拉列表中选择需要进行流量镜像的应用和对应的服务(本教程以 Bookinfo 应用的 reviews 服务为例),然后点击**下一步**。
+
+4. 在**新版本设置**选项卡,为应用添加另一个版本(例如 `kubesphere/examples-bookinfo-reviews-v2:1.16.2`;将 `v1` 改为 `v2`),然后点击**下一步**。
+
+5. 在**策略设置**选项卡,点击**创建**。
+
+6. 新建的流量镜像任务显示在**任务状态**页面。点击该任务查看详情。
+
+7. 在详情页面,您可以看到流量被镜像至 `v2` 版本,同时折线图中显示实时流量。
+
+8. 新建的部署也显示在**工作负载**下的**部署**页面。
+
+9. 您可以执行以下命令查看虚拟服务的 `mirror` 和 `weight` 字段。
+
+ ```bash
+ kubectl -n demo-project get virtualservice -o yaml
+ ```
+
+ {{< notice note >}}
+
+ - 请将上述命令中的 `demo-project` 修改成实际的项目(即命名空间)名称。
+ - 您需要以 `admin` 用户重新登录才能在 KubeSphere 控制台的 Web kubectl 页面执行上述命令。
+
+   {{</ notice >}}
+
+10. 预期输出结果:
+
+ ```bash
+ ...
+ spec:
+ hosts:
+ - reviews
+ http:
+ - route:
+ - destination:
+ host: reviews
+ port:
+ number: 9080
+ subset: v1
+ weight: 100
+ mirror:
+ host: reviews
+ port:
+ number: 9080
+ subset: v2
+ ...
+ ```
+
+ 此路由规则将 100% 流量发送至 `v1`。`mirror` 部分的字段指定将流量镜像至 `reviews v2` 服务。当流量被镜像时,发送至镜像服务的请求的 Host/Authority 头部会附带 `-shadow` 标识。例如, `cluster-1` 会变成 `cluster-1-shadow`。
+
+ {{< notice note >}}
+
+这些请求以 Fire and Forget 方式镜像,亦即请求的响应会被丢弃。您可以指定 `weight` 字段来只镜像一部分而不是全部流量。如果该字段缺失,为与旧版本兼容,所有流量都会被镜像。有关更多信息,请参阅 [Mirroring](https://istio.io/v1.5/pt-br/docs/tasks/traffic-management/mirroring/)。
+
+{{</ notice >}}
+
+## 下线任务
+
+您可以点击**删除**移除流量镜像任务。此操作不会影响当前的应用版本。
diff --git a/content/zh/docs/v3.4/project-user-guide/image-builder/_index.md b/content/zh/docs/v3.4/project-user-guide/image-builder/_index.md
new file mode 100644
index 000000000..d05c53dda
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/image-builder/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "镜像构建器"
+weight: 10600
+
+_build:
+ render: false
+---
diff --git a/content/zh/docs/v3.4/project-user-guide/image-builder/binary-to-image.md b/content/zh/docs/v3.4/project-user-guide/image-builder/binary-to-image.md
new file mode 100644
index 000000000..042274ab9
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/image-builder/binary-to-image.md
@@ -0,0 +1,148 @@
+---
+title: "Binary to Image:发布制品到 Kubernetes"
+keywords: "KubeSphere, Kubernetes, Docker, B2I, Binary-to-Image"
+description: "如何使用 Binary-to-Image 发布制品到 Kubernetes。"
+linkTitle: "Binary to Image:发布制品到 Kubernetes"
+weight: 10620
+---
+
+Binary-to-Image (B2I) 是一个工具箱和工作流,用于从二进制可执行文件(例如 Jar、War 和二进制包)构建可复现的容器镜像。更确切地说,您可以上传一个制品并指定一个目标仓库(例如 Docker Hub 或者 Harbor)用于推送镜像。如果一切运行成功,您的镜像会被推送至目标仓库;并且如果您在工作流中创建了服务 (Service),应用程序也会自动部署至 Kubernetes。
+
+在 B2I 工作流中,您不需要编写 Dockerfile。这不仅能降低学习成本,也能提升发布效率,使用户更加专注于业务。
+
+本教程演示在 B2I 工作流中基于制品构建镜像的两种不同方式。最终,镜像会发布至 Docker Hub。
+
+以下是一些示例制品,用于演示和测试,您可以用来实现 B2I 工作流:
+
+| 制品包 | GitHub 仓库 |
+| ------------------------------------------------------------ | ------------------------------------------------------------ |
+| [b2i-war-java8.war](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-war-java8.war) | [spring-mvc-showcase](https://github.com/spring-projects/spring-mvc-showcase) |
+| [b2i-war-java11.war](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-war-java11.war) | [springmvc5](https://github.com/kubesphere/s2i-java-container/tree/master/tomcat/examples/springmvc5) |
+| [b2i-binary](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-binary) | [devops-go-sample](https://github.com/runzexia/devops-go-sample) |
+| [b2i-jar-java11.jar](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-jar-java11.jar) | [java-maven-example](https://github.com/kubesphere/s2i-java-container/tree/master/java/examples/maven) |
+| [b2i-jar-java8.jar](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-jar-java8.jar) | [devops-maven-sample](https://github.com/kubesphere/devops-maven-sample) |
+
+## 视频演示
+
+
+
+## 准备工作
+
+- 您已启用 [KubeSphere DevOps 系统](../../../pluggable-components/devops/)。
+- 您需要创建一个 [Docker Hub](http://www.dockerhub.com/) 帐户,也支持 GitLab 和 Harbor。
+- 您需要创建一个企业空间、一个项目和一个用户 (`project-regular`),请务必邀请该用户至项目中并赋予 `operator` 角色。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/)。
+- 设置一个 CI 专用节点用于构建镜像。该操作并非必需,但建议在开发和生产环境中进行设置:专用节点会缓存依赖项并缩短构建时间。有关更多信息,请参见[为缓存依赖项设置 CI 节点](../../../devops-user-guide/how-to-use/devops-settings/set-ci-node/)。
+
+## 使用 Binary-to-Image (B2I) 创建服务
+
+下图中的步骤展示了如何在 B2I 工作流中通过创建服务来上传制品、构建镜像并将其发布至 Kubernetes。
+
+
+
+### 步骤 1:创建 Docker Hub 保密字典
+
+您必须创建 Docker Hub 保密字典,以便将通过 B2I 创建的 Docker 镜像推送至 Docker Hub。以 `project-regular` 身份登录 KubeSphere,转到您的项目并创建一个 Docker Hub 保密字典。有关更多信息,请参见[创建常用保密字典](../../../project-user-guide/configuration/secrets/#创建常用保密字典)。
+
+### 步骤 2:创建服务
+
+1. 在该项目中,转到**应用负载**下的**服务**,点击**创建**。
+
+2. 下拉至**通过制品构建服务**,选择 **WAR**。本教程使用 [spring-mvc-showcase](https://github.com/spring-projects/spring-mvc-showcase) 项目作为示例并上传 WAR 制品至 KubeSphere。设置一个名称,例如 `b2i-war-java8`,点击**下一步**。
+
+3. 在**构建设置**页面,请提供以下相应信息,并点击**下一步**。
+
+ **服务类型**:本示例选择**无状态服务**。有关不同服务的更多信息,请参见[服务类型](../../../project-user-guide/application-workloads/services/#服务类型)。
+
+ **制品文件**:上传 WAR 制品 ([b2i-war-java8](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-war-java8.war))。
+
+ **构建环境**:选择 **kubesphere/tomcat85-java8-centos7:v2.1.0**。
+
+   **镜像名称**:输入用于推送的镜像名称(例如 `<您的 Docker ID>/<镜像名称>`)。
+2. 在镜像构建器详情页面,点击日志图标查看构建日志。如果一切运行正常,您可以在日志末尾看到 `Build completed successfully`。
+
+3. 回到**服务**、**部署**和**任务**页面,您可以看到该镜像相应的服务、部署和任务都已成功创建。
+
+4. 在您的 Docker Hub 仓库,您可以看到 KubeSphere 已经向仓库推送了带有预期标签的镜像。
+
+### 步骤 4:访问 B2I 服务
+
+1. 在**服务**页面,请点击 B2I 服务前往其详情页面,您可以查看暴露的端口号。
+
+2. 通过 `http://<节点 IP 地址>:<NodePort>` 访问该服务。
+2. 点击日志图标查看构建日志。如果一切运行正常,您可以在日志末尾看到 `Build completed successfully`。
+
+3. 前往**任务**页面,您可以看到该镜像相应的任务已成功创建。
+
+4. 在您的 Docker Hub 仓库,您可以看到 KubeSphere 已经向仓库推送了带有预期标签的镜像。
+
+
diff --git a/content/zh/docs/v3.4/project-user-guide/image-builder/s2i-and-b2i-webhooks.md b/content/zh/docs/v3.4/project-user-guide/image-builder/s2i-and-b2i-webhooks.md
new file mode 100644
index 000000000..7356f6617
--- /dev/null
+++ b/content/zh/docs/v3.4/project-user-guide/image-builder/s2i-and-b2i-webhooks.md
@@ -0,0 +1,84 @@
+---
+title: "配置 S2I 和 B2I Webhooks"
+keywords: 'KubeSphere, Kubernetes, S2I, Source-to-Image, B2I, Binary-to-Image, Webhook'
+description: '学习如何配置 S2I 和 B2I webhooks。'
+linkTitle: "配置 S2I 和 B2I Webhooks"
+weight: 10650
+
+---
+
+KubeSphere 提供 Source-to-Image (S2I) 和 Binary-to-Image (B2I) 功能,以自动化镜像构建、推送和应用程序部署。在 KubeSphere 3.4 中,您可以配置 S2I 和 B2I Webhook,以便当代码仓库中存在任何相关活动时,自动触发镜像构建器。
+
+本教程演示如何配置 S2I 和 B2I webhooks。
+
+## 准备工作
+
+- 您需要启用 [KubeSphere DevOps 系统](../../../pluggable-components/devops/),该系统已集成 S2I。
+- 您需要创建一个企业空间、一个项目 (`demo-project`) 和一个用户 (`project-regular`)。`project-regular` 需要被邀请到项目中,并赋予 `operator` 角色。有关详细信息,请参考[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/#step-1-create-an-account)。
+- 您需要创建一个 S2I 镜像构建器和 B2I 镜像构建器。有关更多信息,请参见 [Source to Image:无需 Dockerfile 发布应用](../source-to-image/) 和 [Binary to Image:发布制品到 Kubernetes](../binary-to-image/)。
+
+## 配置 S2I Webhook
+
+### 步骤 1:暴露 S2I trigger 服务
+
+1. 以 `admin` 身份登录 KubeSphere Web 控制台。在左上角点击**平台管理**,然后选择**集群管理**。
+
+2. 前往**应用负载**下的**服务**,从下拉框中选择 **kubesphere-devops-system**,然后点击 **s2ioperator-trigger-service** 进入详情页面。
+
+3. 点击**更多操作**,选择**编辑外部访问**。
+
+4. 在弹出的对话框中,从**访问方式**的下拉菜单中选择 **NodePort**,然后点击**确定**。
+
+ {{< notice note >}}
+
+ 本教程出于演示目的选择 **NodePort**。根据您的需要,您也可以选择 **LoadBalancer**。
+
+   {{</ notice >}}
+
+5. 在详情界面可以查看 **NodePort**。S2I webhook URL 中将包含此 NodePort。
+
+### 步骤 2:配置 S2I webhook
+
+1. 登出 KubeSphere,然后以 `project-regular` 用户重新登录,并转到 `demo-project`。
+
+2. 在**镜像构建器**中,点击 S2I 镜像构建器,进入详情页面。
+
+3. 您可以在**远程触发器**中看到自动生成的链接。复制 `/s2itrigger/v1alpha1/general/namespaces/demo-project/s2ibuilders/felixnoo-s2i-sample-latest-zhd/`,S2I webhook URL 中将包含这个链接。
+
+4. 登录您的 GitHub 帐户,转到用于 S2I 镜像构建器的源代码仓库。转到 **Settings** 下的 **Webhooks**,然后点击 **Add webhook**。
+
+5. 在 **Payload URL** 中,输入 `http://<节点 IP 地址>:<NodePort><远程触发器链接>`。
+2. 点击日志图标查看构建日志。如果一切运行正常,您可以在日志末尾看到 `Build completed successfully`。
+
+3. 回到**服务**、**部署**和**任务**页面,您可以看到该镜像相应的服务、部署和任务都已成功创建。
+
+4. 在您的 Docker Hub 仓库,您可以看到 KubeSphere 已经向仓库推送了带有预期标签的镜像。
+
+### 步骤 5:访问 S2I 服务
+
+1. 在**服务**页面,请点击 S2I 服务前往其详情页面。
+
+2. 要访问该服务,您可以执行 `curl` 命令使用 Endpoint 访问该服务。
+
+| 参数 | 描述 |
+| --- | --- |
+| 名称 | 持久卷名称,在该持久卷的清单文件中由 `.metadata.name` 字段指定。 |
+| 状态 | 持久卷的当前状态,在该持久卷的清单文件中由 `.status.phase` 字段指定,包括 Available(可用)、Bound(已绑定)、Released(已释放)和 Failed(失败)。 |
+| 容量 | 持久卷的容量,在该持久卷的清单文件中由 `.spec.capacity.storage` 字段指定。 |
+| 访问模式 | 持久卷的访问模式,在该持久卷的清单文件中由 `.spec.accessModes` 字段指定,包括 ReadWriteOnce、ReadOnlyMany 和 ReadWriteMany。 |
+| 回收策略 | 持久卷的回收策略,在该持久卷的清单文件中由 `.spec.persistentVolumeReclaimPolicy` 字段指定,包括 Retain(保留)、Delete(删除)和 Recycle(回收)。 |
+| 创建时间 | 持久卷的创建时间。 |
+| 操作系统 | 最低配置 |
+| --- | --- |
+| Ubuntu 16.04, 18.04, 20.04, 22.04 | 2 核 CPU,4 GB 内存,40 GB 磁盘空间 |
+| Debian Buster, Stretch | 2 核 CPU,4 GB 内存,40 GB 磁盘空间 |
+| CentOS 7.x | 2 核 CPU,4 GB 内存,40 GB 磁盘空间 |
+| Red Hat Enterprise Linux 7 | 2 核 CPU,4 GB 内存,40 GB 磁盘空间 |
+| SUSE Linux Enterprise Server 15/openSUSE Leap 15.2 | 2 核 CPU,4 GB 内存,40 GB 磁盘空间 |
+| 支持的容器运行时 | 版本 |
+| --- | --- |
+| Docker | 19.3.8+ |
+| containerd | 最新版 |
+| CRI-O(试验版,未经充分测试) | 最新版 |
+| iSula(试验版,未经充分测试) | 最新版 |
+| 依赖项 | Kubernetes 版本 ≥ 1.18 | Kubernetes 版本 < 1.18 |
+| --- | --- | --- |
+| `socat` | 必须 | 可选但建议 |
+| `conntrack` | 必须 | 可选但建议 |
+| `ebtables` | 可选但建议 | 可选但建议 |
+| `ipset` | 可选但建议 | 可选但建议 |
+| 内置角色 | 描述 |
+| --- | --- |
+| `platform-self-provisioner` | 创建企业空间并成为所创建企业空间的管理员。 |
+| `platform-regular` | 平台普通用户,在被邀请加入企业空间或集群之前没有任何资源操作权限。 |
+| `platform-admin` | 平台管理员,可以管理平台内的所有资源。 |
+| 用户 | 指定的平台角色 | 用户权限 |
+| --- | --- | --- |
+| `ws-admin` | `platform-regular` | 被邀请到企业空间后,管理该企业空间中的所有资源(在此示例中,此用户用于邀请新成员加入该企业空间)。 |
+| `project-admin` | `platform-regular` | 创建和管理项目以及 DevOps 项目,并邀请新成员加入项目。 |
+| `project-regular` | `platform-regular` | `project-regular` 将由 `project-admin` 邀请至项目或 DevOps 项目。该用户将用于在指定项目中创建工作负载、流水线和其他资源。 |
+
+| 用户 | 分配的企业空间角色 | 角色权限 |
+| --- | --- | --- |
+| `ws-admin` | `demo-workspace-admin` | 管理指定企业空间中的所有资源(在此示例中,此用户用于邀请新成员加入企业空间)。 |
+| `project-admin` | `demo-workspace-self-provisioner` | 创建和管理项目以及 DevOps 项目,并邀请新成员加入项目。 |
+| `project-regular` | `demo-workspace-viewer` | `project-regular` 将由 `project-admin` 邀请至项目或 DevOps 项目。该用户将用于在指定项目中创建工作负载、流水线和其他资源。 |
+
+5. 新创建的角色会显示在角色列表中,点击右侧的图标以编辑角色、编辑角色权限或删除该角色。
+
+6. 在**用户**页面,可以在创建帐户或编辑现有帐户时为帐户分配该角色。
+
+### 步骤 5:创建 DevOps 项目(可选)
+
+{{< notice note >}}
+
+若要创建 DevOps 项目,需要预先启用 KubeSphere DevOps 系统,该系统是个可插拔的组件,提供 CI/CD 流水线、Binary-to-Image 和 Source-to-Image 等功能。有关如何启用 DevOps 的更多信息,请参见 [KubeSphere DevOps 系统](../../pluggable-components/devops/)。
+
+{{</ notice >}}
+
+1. 以 `project-admin` 身份登录控制台,在 **DevOps 项目**中,点击**创建**。
+
+2. 输入 DevOps 项目名称(例如 `demo-devops`),然后点击**确定**,也可以为该项目添加别名和描述。
+
+3. 点击刚创建的项目查看其详细页面。
+
+4. 转到 **DevOps 项目设置**,然后选择 **DevOps 项目成员**。点击**邀请**授予 `project-regular` 用户 `operator` 的角色,允许其创建流水线和凭证。
+
+
+至此,您已熟悉 KubeSphere 的多租户管理系统。在其他教程中,`project-regular` 帐户还将用于演示如何在项目或 DevOps 项目中创建应用程序和资源。
diff --git a/content/zh/docs/v3.4/quick-start/deploy-bookinfo-to-k8s.md b/content/zh/docs/v3.4/quick-start/deploy-bookinfo-to-k8s.md
new file mode 100644
index 000000000..b6984daf4
--- /dev/null
+++ b/content/zh/docs/v3.4/quick-start/deploy-bookinfo-to-k8s.md
@@ -0,0 +1,101 @@
+---
+title: "部署并访问 Bookinfo"
+keywords: 'KubeSphere, Kubernetes, Bookinfo, Istio'
+description: '通过部署示例应用程序 Bookinfo 来探索 KubeSphere 服务网格的基本功能。'
+linkTitle: "部署并访问 Bookinfo"
+weight: 2400
+---
+
+作为开源的服务网格解决方案,[Istio](https://istio.io/) 为微服务提供了强大的流量管理功能。以下是 [Istio](https://istio.io/latest/zh/docs/concepts/traffic-management/) 官方网站上关于流量管理的简介:
+
+*Istio 的流量路由规则可以让您很容易地控制服务之间的流量和 API 调用。Istio 简化了服务级别属性的配置,比如熔断器、超时、重试等,并且能轻松地设置重要的任务,如 A/B 测试、金丝雀发布、基于流量百分比切分的概率发布等。它还提供了开箱即用的故障恢复特性,有助于增强应用的健壮性,从而更好地应对被依赖的服务或网络发生故障的情况。*
+
+为了给用户提供管理微服务的一致体验,KubeSphere 在容器平台上集成了 Istio。本教程演示了如何部署由四个独立的微服务组成的示例应用程序 Bookinfo,以及如何通过 NodePort 访问该应用。
+
+## 视频演示
+
+
+
+## 准备工作
+
+- 您需要启用 [KubeSphere 服务网格](../../pluggable-components/service-mesh/)。
+
+- 您需要完成[创建企业空间、项目、用户和角色](../create-workspace-and-project/)中的所有任务。
+
+- 您需要启用**链路追踪**。有关更多信息,请参见[设置网关](../../project-administration/project-gateway/#设置网关)。
+
+ {{< notice note >}}
+ 您需要启用**链路追踪**以使用追踪功能。启用后若无法访问路由 (Ingress),请检查您的路由是否已经添加注释(例如:`nginx.ingress.kubernetes.io/service-upstream: true`)。
+   {{</ notice >}}
+
+## 什么是 Bookinfo 应用
+
+Bookinfo 应用由以下四个独立的微服务组成,其中 **reviews** 微服务有三个版本。
+
+- **productpage** 微服务会调用 **details** 和 **reviews** 用来生成页面。
+- **details** 微服务中包含了书籍的信息。
+- **reviews** 微服务中包含了书籍相关的评论,它还会调用 **ratings** 微服务。
+- **ratings** 微服务中包含了由书籍评价组成的评级信息。
+
+这个应用的端到端架构如下所示。有关更多详细信息,请参见 [Bookinfo 应用](https://istio.io/latest/zh/docs/examples/bookinfo/)。
+
+
+
+## 动手实验
+
+### 步骤 1:部署 Bookinfo
+
+1. 使用帐户 `project-regular` 登录控制台并访问项目 (`demo-project`)。前往**应用负载**下的**应用**,点击右侧的**部署示例应用**。
+
+2. 在出现的对话框中点击**下一步**,其中必填字段已经预先填好,相关组件也已经设置完成。您无需修改设置,只需在最后一页(**路由设置**)点击**创建**。
+
+ {{< notice note >}}
+
+KubeSphere 会自动创建主机名。若要更改主机名,请将鼠标悬停在默认路由规则上,然后点击编辑图标。
+
+{{</ notice >}}
+
+| 参数 | 描述 |
+| --- | --- |
+| 集群 | 发生操作的集群。如果开启了多集群功能,则会启用该参数。 |
+| 项目 | 发生操作的项目。支持精确匹配和模糊匹配。 |
+| 企业空间 | 发生操作的企业空间。支持精确匹配和模糊匹配。 |
+| 资源类型 | 与请求相关联的资源类型。支持模糊匹配。 |
+| 资源名称 | 与请求相关联的资源名称。支持模糊匹配。 |
+| 操作行为 | 与请求相关联的 Kubernetes 操作行为。对于非资源请求,该参数为小写的 HTTP 方法。支持精确匹配。 |
+| 状态码 | HTTP 响应码。支持精确匹配。 |
+| 操作帐户 | 调用该请求的用户。支持精确匹配和模糊匹配。 |
+| 来源 IP | 该请求源自的 IP 地址和中间代理。支持模糊匹配。 |
+| 时间范围 | 该请求到达 API Server 的时间。 |
+| 参数 | 描述 |
+| --- | --- |
+| `retentionDay` | `retentionDay` 决定用户的资源消费统计页面显示的日期范围。该参数的值必须与 Prometheus 中 `retention` 的值相同。 |
+| `currencyUnit` | 资源消费统计页面显示的货币单位。目前可用的单位有 CNY(人民币)和 USD(美元)。若指定其他货币,控制台将默认以美元为单位显示消费情况。 |
+| `cpuCorePerHour` | 每核/小时的 CPU 单价。 |
+| `memPerGigabytesPerHour` | 每 GB/小时的内存单价。 |
+| `ingressNetworkTrafficPerMegabytesPerHour` | 每 MB/小时的入站流量单价。 |
+| `egressNetworkTrafficPerMegabytesPerHour` | 每 MB/小时的出站流量单价。 |
+| `pvcPerGigabytesPerHour` | 每 GB/小时的 PVC 单价。请注意,无论实际使用的存储是多少,KubeSphere 都会根据 PVC 请求的存储容量来计算存储卷的总消费情况。 |
+1. 点击控制台右上角的图标,然后选择**资源消费统计**。
+
+2. 在**集群资源消费情况**一栏,点击**查看消费**。
+
+3. 如果您已经启用[多集群管理](../../../multicluster-management/),则可以在控制面板左侧看到包含 Host 集群和全部 Member 集群的集群列表。如果您未启用该功能,那么列表中只会显示一个 `default` 集群。
+
+ 在右侧,有三个模块以不同的方式显示资源消费情况。
+
+   | 模块 | 描述 |
+   | --- | --- |
+   | 资源消费统计 | 显示自集群创建以来不同资源的消费概览。如果您在 ConfigMap `kubesphere-config` 中已经配置资源的价格,则可以看到计费信息。 |
+   | 消费历史 | 显示截止到昨天的资源消费总况,您也可以自定义时间范围和时间间隔,以查看特定周期内的数据。 |
+   | 当前消费 | 显示过去一小时所选目标对象的资源消费情况。 |
+点击用户右侧的图标,对出现的提示消息点击**确定**,以将用户分配到该部门。
+
+ {{< notice note >}}
+
+ * 如果部门提供的权限与用户的现有权限重叠,则会为用户添加新的权限。用户的现有权限不受影响。
+ * 分配到某个部门的用户可以根据与该部门关联的企业空间角色、项目角色和 DevOps 项目角色来执行操作,而无需被邀请到企业空间、项目和 DevOps 项目中。
+
+   {{</ notice >}}
+
+## 从部门中移除用户
+
+1. 在**部门**页面,选择左侧部门树中的一个部门,然后点击右侧的**已分配**。
+2. 在已分配用户列表中,点击用户右侧的图标,在出现的对话框中输入相应的用户名,然后点击**确定**来移除用户。
+
+## 删除和编辑部门
+
+1. 在**部门**页面,点击**设置部门**。
+
+2. 在**设置部门**对话框的左侧,点击需要编辑或删除部门的上级部门。
+
+3. 点击部门右侧的编辑图标进行编辑。
+
+ {{< notice note >}}
+
+ 有关详细信息,请参见[创建部门](../../workspace-administration/department-management/#创建部门)。
+
+   {{</ notice >}}
+
+4. 点击部门右侧的删除图标,在出现的对话框中输入相应的部门名称,然后点击**确定**来删除该部门。
+
+ {{< notice note >}}
+
+ * 如果删除的部门包含子部门,则子部门也将被删除。
+ * 部门删除后,所有部门成员的授权也将被取消。
+
+   {{</ notice >}}
\ No newline at end of file
diff --git a/content/zh/docs/v3.4/workspace-administration/project-quotas.md b/content/zh/docs/v3.4/workspace-administration/project-quotas.md
new file mode 100644
index 000000000..91d6b9bb1
--- /dev/null
+++ b/content/zh/docs/v3.4/workspace-administration/project-quotas.md
@@ -0,0 +1,55 @@
+---
+title: "项目配额"
+keywords: 'KubeSphere, Kubernetes, 项目, 配额, 资源, 请求, 限制'
+description: '设置请求和限制,控制项目中的资源使用情况。'
+linkTitle: "项目配额"
+weight: 9600
+---
+
+KubeSphere 使用请求(Request)和限制(Limit)来控制项目中的资源(例如 CPU 和内存)使用情况,在 Kubernetes 中也称为[资源配额](https://kubernetes.io/zh/docs/concepts/policy/resource-quotas/)。请求确保项目能够获得其所需的资源,因为这些资源已经得到明确保障和预留。相反地,限制确保项目不能使用超过特定值的资源。
+
+除了 CPU 和内存,您还可以单独为其他对象设置资源配额,例如项目中的容器组、[部署](../../project-user-guide/application-workloads/deployments/)、[任务](../../project-user-guide/application-workloads/jobs/)、[服务](../../project-user-guide/application-workloads/services/)和[配置字典](../../project-user-guide/configuration/configmaps/)。
+
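+在 Kubernetes 层面,上述配额最终对应一个 ResourceQuota 对象。下面是一个最小示例(名称、命名空间与数值均为假设值,仅作示意):
+
+```yaml
+apiVersion: v1
+kind: ResourceQuota
+metadata:
+  name: demo-quota
+  namespace: demo-project
+spec:
+  hard:
+    requests.cpu: "1"         # CPU 请求总量上限
+    requests.memory: 1Gi      # 内存请求总量上限
+    limits.cpu: "2"           # CPU 限制总量上限
+    limits.memory: 2Gi        # 内存限制总量上限
+    pods: "50"                # 容器组数量上限
+```
+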
+本教程演示如何配置项目配额。
+
+## 准备工作
+
+您需要有一个可用的企业空间、一个项目和一个用户 (`ws-admin`)。该用户必须在企业空间层级拥有 `admin` 角色。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../quick-start/create-workspace-and-project/)。
+
+{{< notice note >}}
+
+如果使用 `project-admin` 用户(该用户在项目层级拥有 `admin` 角色),您也可以为新项目(即其配额尚未设置)设置项目配额。不过,项目配额设置完成之后,`project-admin` 无法更改配额。一般情况下,`ws-admin` 负责为项目设置限制和请求。`project-admin` 负责为项目中的容器[设置限制范围](../../project-administration/container-limit-ranges/)。
+
+{{</ notice >}}
+
+## 设置项目配额
+
+1. 以 `ws-admin` 身份登录控制台,进入一个项目。如果该项目是新创建的项目,您可以在**概览**页面看到项目配额尚未设置。点击**编辑配额**来配置配额。
+
+2. 在弹出对话框中,您可以看到 KubeSphere 默认不为项目设置任何请求或限制。要设置请求和限制来控制 CPU 和内存资源,请将滑块移动到期望的值或者直接输入数字。字段留空意味着您不设置任何请求或限制。
+
+ {{< notice note >}}
+
+ 限制必须大于请求。
+
+   {{</ notice >}}
+
+3. 要为其他资源设置配额,在**项目资源配额**下点击**添加**,选择一个资源或输入资源名称并设置配额。
+
+4. 点击**确定**完成配额设置。
+
+5. 在**项目设置**下的**基本信息**页面,您可以查看该项目的所有资源配额。
+
+6. 要更改项目配额,请在**基本信息**页面点击**编辑项目**,然后选择**编辑项目配额**。
+
+ {{< notice note >}}
+
+ 对于[多集群项目](../../project-administration/project-and-multicluster-project/#多集群项目),**管理项目**下拉菜单中不会显示**编辑配额**选项。若要为多集群项目设置配额,前往**项目设置**下的**项目配额**,并点击**编辑配额**。请注意,由于多集群项目跨集群运行,您可以为多集群项目针对不同集群分别设置资源配额。
+
+   {{</ notice >}}
+
+7. 在**项目配额**页面更改项目配额,然后点击**确定**。
+
+## 另请参见
+
+[容器限制范围](../../project-administration/container-limit-ranges/)
diff --git a/content/zh/docs/v3.4/workspace-administration/role-and-member-management.md b/content/zh/docs/v3.4/workspace-administration/role-and-member-management.md
new file mode 100644
index 000000000..5bca697cf
--- /dev/null
+++ b/content/zh/docs/v3.4/workspace-administration/role-and-member-management.md
@@ -0,0 +1,63 @@
+---
+title: "企业空间角色和成员管理"
+keywords: "Kubernetes, 企业空间, KubeSphere, 多租户"
+description: "自定义企业空间角色并将角色授予用户。"
+linkTitle: "企业空间角色和成员管理"
+weight: 9400
+---
+
+本教程演示如何在企业空间中管理角色和成员。
+
+## 准备工作
+
+至少已创建一个企业空间,例如 `demo-workspace`。您还需要准备一个用户(如 `ws-admin`),该用户在企业空间级别具有 `workspace-admin` 角色。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../quick-start/create-workspace-and-project/)。
+
+{{< notice note >}}
+
+实际角色名称的格式:`workspace name-role name`。例如,在名为 `demo-workspace` 的企业空间中,角色 `admin` 的实际角色名称为 `demo-workspace-admin`。
+
+{{</ notice >}}
+
+## 内置角色
+
+**企业空间角色**页面列出了以下四个可用的内置角色。创建企业空间时,KubeSphere 会自动创建内置角色,并且内置角色无法进行编辑或删除。您只能查看内置角色的权限或将其分配给用户。
+
+| **名称** | **描述** |
+| ------------------ | ------------------------------------------------------------ |
+| `workspace-viewer` | 企业空间观察员,可以查看企业空间中的所有资源。 |
+| `workspace-self-provisioner` | 企业空间普通成员,可以查看企业空间设置、管理应用模板、创建项目和 DevOps 项目。 |
+| `workspace-regular` | 企业空间普通成员,可以查看企业空间设置。 |
+| `workspace-admin` | 企业空间管理员,可以管理企业空间中的所有资源。 |
+
+若要查看角色所含权限:
+
+1. 以 `ws-admin` 身份登录控制台。在**企业空间角色**中,点击一个角色(例如,`workspace-admin`)以查看角色详情。
+
+2. 点击**授权用户**选项卡,查看所有被授予该角色的用户。
+
+## 创建企业空间角色
+
+1. 转到**企业空间设置**下的**企业空间角色**。
+
+2. 在**企业空间角色**中,点击**创建**并设置**名称**(例如,`demo-project-admin`)。点击**编辑权限**继续。
+
+3. 在弹出的窗口中,权限归类在不同的**功能模块**下。在本示例中,点击**项目管理**,并为该角色选择**项目创建**、**项目管理**和**项目查看**。点击**确定**完成操作。
+
+ {{< notice note >}}
+
+**依赖于**表示当前授权项依赖所列出的授权项,勾选该权限后系统会自动选上所有依赖权限。
+
+   {{</ notice >}}
+
+4. 新创建的角色将在**企业空间角色**中列出,点击右侧的图标以编辑该角色的信息、权限,或删除该角色。
+
+## 邀请新成员
+
+1. 转到**企业空间设置**下**企业空间成员**,点击**邀请**。
+2. 点击用户右侧的图标以邀请一名成员加入企业空间,并为其分配一个角色。
+
+
+
+3. 将成员加入企业空间后,点击**确定**。您可以在**企业空间成员**列表中查看新邀请的成员。
+
+4. 若要编辑现有成员的角色或将其从企业空间中移除,点击右侧的图标并选择对应的操作。
\ No newline at end of file
diff --git a/content/zh/docs/v3.4/workspace-administration/upload-helm-based-application.md b/content/zh/docs/v3.4/workspace-administration/upload-helm-based-application.md
new file mode 100644
index 000000000..8fc3091b2
--- /dev/null
+++ b/content/zh/docs/v3.4/workspace-administration/upload-helm-based-application.md
@@ -0,0 +1,38 @@
+---
+title: "上传基于 Helm 的应用程序"
+keywords: "Kubernetes, Helm, KubeSphere, OpenPitrix, 应用程序"
+description: "了解如何向您的企业空间上传基于 Helm 的应用程序用作应用模板。"
+linkTitle: "上传基于 Helm 的应用程序"
+weight: 9200
+---
+
+KubeSphere 提供应用程序的全生命周期管理。例如,企业空间管理员可以上传或创建新的应用模板,并进行快速测试。此外,管理员会将经过充分测试的应用发布到[应用商店](../../application-store/),这样其他用户能一键部署这些应用。为了开发应用模板,企业空间管理员首先需要将打包的 [Helm chart](https://helm.sh/) 上传到 KubeSphere。
+
+本教程演示了如何通过上传打包的 Helm chart 来开发应用模板。
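+
+如果您的 chart 还是一个目录,可以先用 Helm 将其打包为 `.tgz` 文件再上传(目录名与输出文件名均为示例值):
+
+```bash
+# 将本地 chart 目录打包,生成类似 nginx-0.1.0.tgz 的文件
+helm package ./nginx
+```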
+
+## 准备工作
+
+- 您需要启用 [KubeSphere 应用商店 (OpenPitrix)](../../pluggable-components/app-store/)。
+- 您需要创建一个企业空间和一个用户 (`project-admin`)。该用户必须被邀请至企业空间中,并被授予 `workspace-self-provisioner` 角色。有关更多信息,请参考[创建企业空间、项目、用户和角色](../../quick-start/create-workspace-and-project/)。
+
+## 动手实验
+
+1. 用 `project-admin` 帐户登录 KubeSphere。在企业空间页面,转到**应用管理**下的**应用模板**,点击**创建**。
+
+2. 在弹出的对话框中,点击**上传**。您可以上传自己的 Helm chart,或者下载 [Nginx chart](/files/application-templates/nginx-0.1.0.tgz) 用它作为示例来完成接下来的步骤。
+
+3. 文件包上传完毕后,点击**确定**继续。
+
+4. 您可以在**应用信息**下查看应用的基本信息。点击**上传图标**来上传应用的图标。您也可以跳过上传图标,直接点击**确定**。
+
+ {{< notice note >}}
+
+应用图标支持的最大分辨率为 96 × 96 像素。
+
+{{</ notice >}}
+
+5. 成功上传后,模板列表中会列出应用,状态为**开发中**,意味着该应用正在开发中。上传的应用对同一企业空间下的所有成员均可见。
+
+6. 点击应用,随后打开的页面默认选中**版本**标签。点击待提交版本以展开菜单,您可以在菜单上看到**删除**、**测试**、**提交发布**的选项。
+
+7. 有关如何将应用发布到应用商店的更多信息,请参考[应用程序生命周期管理](../../application-store/app-lifecycle-management/)。
diff --git a/content/zh/docs/v3.4/workspace-administration/what-is-workspace.md b/content/zh/docs/v3.4/workspace-administration/what-is-workspace.md
new file mode 100644
index 000000000..7c22be939
--- /dev/null
+++ b/content/zh/docs/v3.4/workspace-administration/what-is-workspace.md
@@ -0,0 +1,81 @@
+---
+title: "企业空间概述"
+keywords: "Kubernetes, KubeSphere, workspace"
+description: "了解 KubeSphere 企业空间的概念以及如何创建和删除企业空间。"
+linkTitle: "企业空间概述"
+weight: 9100
+---
+
+企业空间是用来管理[项目](../../project-administration/)、[DevOps 项目](../../devops-user-guide/)、[应用模板](../upload-helm-based-application/)和应用仓库的一种逻辑单元。您可以在企业空间中控制资源访问权限,也可以安全地在团队内部分享资源。
+
+最佳的做法是为租户(集群管理员除外)创建新的企业空间。同一名租户可以在多个企业空间中工作,并且多个租户可以通过不同方式访问同一个企业空间。
+
+本教程演示如何创建和删除企业空间。
+
+## 准备工作
+
+准备一个被授予 `workspaces-manager` 角色的用户,例如[创建企业空间、项目、用户和角色](../../quick-start/create-workspace-and-project/)中创建的 `ws-manager` 帐户。
+
+## 创建企业空间
+
+1. 以 `ws-manager` 身份登录 KubeSphere Web 控制台。点击左上角的**平台管理**并选择**访问控制**。在**企业空间**页面,点击**创建**。
+
+
+2. 对于单集群环境,您需要在**基本信息**页面,为创建的企业空间输入名称,并从下拉菜单中选择一名企业空间管理员,然后点击**创建**。
+
+ - **名称**:为企业空间设置一个专属名称。
+ - **别名**:该企业空间的另一种名称。
+ - **管理员**:管理该企业空间的用户。
+ - **描述**:企业空间的简短介绍。
+
+   对于多集群环境,设置企业空间的基本信息后,点击**下一步**。在**集群设置**页面,选择企业空间需要使用的集群,然后点击**创建**。
+
+3. 企业空间创建后将显示在企业空间列表中。
+
+4. 点击该企业空间,您可以在**概览**页面查看企业空间中的资源状态。
+
+## 删除企业空间
+
+在 KubeSphere 中,可以通过企业空间对项目进行分组管理,企业空间下项目的生命周期会受到企业空间的影响。具体来说,企业空间删除之后,企业空间下的项目及关联的资源也同时会被销毁。
+
+删除企业空间之前,请先确定您是否要解绑部分关键项目。
+
+### 删除前解绑项目
+
+若要删除企业空间并保留其中的部分项目,删除前请先执行以下命令:
+
+```bash
+kubectl label ns