+
+1. Log in to KubeSphere as `admin`, click the toolbox icon in the lower-right corner, click **kubectl**, and run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
+
+ ```bash
+ kubectl -n kubesphere-system edit cc ks-installer
+ ```
+
+2. Add the following fields under `spec.authentication.jwtSecret`.
+
+ *Example of using [Google Identity Platform](https://developers.google.com/identity/protocols/oauth2/openid-connect)*:
+
+ ```yaml
+ spec:
+ authentication:
+ jwtSecret: ''
+ authenticateRateLimiterMaxTries: 10
+ authenticateRateLimiterDuration: 10m0s
+ oauthOptions:
+ accessTokenMaxAge: 1h
+ accessTokenInactivityTimeout: 30m
+ identityProviders:
+ - name: google
+ type: OIDCIdentityProvider
+ mappingMethod: auto
+ provider:
+ clientID: '********'
+ clientSecret: '********'
+ issuer: https://accounts.google.com
+ redirectURL: 'https://ks-console/oauth/redirect/google'
+ ```
+
+ The parameters are described as follows:
+
+ | Parameter | Description |
+ | -------------------- | ------------------------------------------------------------ |
+ | clientID | The OAuth2 client ID. |
+ | clientSecret | The OAuth2 client secret. |
+ | redirectURL | The redirected URL to ks-console in the following format: `https://<ks-console-address>/oauth/redirect/<identity-provider-name>`. |
+
+1. Log in to KubeSphere as `admin`, click the toolbox icon in the lower-right corner, click **kubectl**, and run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
+
+ ```bash
+ kubectl -n kubesphere-system edit cc ks-installer
+ ```
+
+2. Add the following fields under `spec.authentication.jwtSecret`.
+
+ Example:
+
+ ```yaml
+ spec:
+ authentication:
+ jwtSecret: ''
+ authenticateRateLimiterMaxTries: 10
+ authenticateRateLimiterDuration: 10m0s
+ loginHistoryRetentionPeriod: 168h
+ maximumClockSkew: 10s
+ multipleLogin: true
+ oauthOptions:
+ accessTokenMaxAge: 1h
+ accessTokenInactivityTimeout: 30m
+ identityProviders:
+ - name: LDAP
+ type: LDAPIdentityProvider
+ mappingMethod: auto
+ provider:
+ host: 192.168.0.2:389
+ managerDN: uid=root,cn=users,dc=nas
+ managerPassword: ********
+ userSearchBase: cn=users,dc=nas
+ loginAttribute: uid
+ mailAttribute: mail
+ ```
+
+ The fields are described as follows:
+
+ * `jwtSecret`: Secret used to sign user tokens. In a multi-cluster environment, all clusters must [use the same Secret](../../../multicluster-management/enable-multicluster/direct-connection/#prepare-a-member-cluster).
+ * `authenticateRateLimiterMaxTries`: Maximum number of consecutive login failures allowed during a period specified by `authenticateRateLimiterDuration`. If the number of consecutive login failures of a user reaches the limit, the user will be blocked.
+ * `authenticateRateLimiterDuration`: Period during which `authenticateRateLimiterMaxTries` applies.
+ * `loginHistoryRetentionPeriod`: Retention period of login records. Outdated login records are automatically deleted.
+ * `maximumClockSkew`: Maximum clock skew for time-sensitive operations such as token expiration validation. The default value is `10s`.
+ * `multipleLogin`: Whether multiple users are allowed to log in from different locations. The default value is `true`.
+ * `oauthOptions`: OAuth settings.
+ * `accessTokenMaxAge`: Access token lifetime. For member clusters in a multi-cluster environment, the default value is `0h`, which means access tokens never expire. For other clusters, the default value is `2h`.
+ * `accessTokenInactivityTimeout`: Access token inactivity timeout period. An access token becomes invalid after it is idle for a period specified by this field. After an access token times out, the user needs to obtain a new access token to regain access.
+ * `identityProviders`: Identity providers.
+ * `name`: Identity provider name.
+ * `type`: Identity provider type.
+ * `mappingMethod`: Account mapping method. The value can be `auto` or `lookup`.
+ * If the value is `auto` (default), you need to specify a new username. KubeSphere automatically creates a user according to the username and maps the user to a third-party account.
+ * If the value is `lookup`, you need to perform step 3 to manually map an existing KubeSphere user to a third-party account.
+ * `provider`: Identity provider information. Fields in this section vary according to the identity provider type.
+
+3. If `mappingMethod` is set to `lookup`, run the following command and add the labels to map a KubeSphere user to a third-party account. Skip this step if `mappingMethod` is set to `auto`.
+
+ ```bash
+ kubectl edit user <username>
+ ```
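+
+ The labels typically look like the following on the `User` object. This is only a sketch: the label keys below follow the mapping convention described in the KubeSphere authentication documentation, so verify them against your KubeSphere version before use.
+
+ ```yaml
+ apiVersion: iam.kubesphere.io/v1alpha2
+ kind: User
+ metadata:
+   name: <kubesphere-username>
+   labels:
+     # Name of the identity provider configured in identityProviders
+     iam.kubesphere.io/identify-provider: <identity-provider-name>
+     # Username of the account at the third-party identity provider
+     iam.kubesphere.io/origin-uid: <third-party-username>
+ ```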
+
+1. Log in to KubeSphere as `admin`, click the toolbox icon in the lower-right corner, click **kubectl**, and run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
+
+ ```bash
+ kubectl -n kubesphere-system edit cc ks-installer
+ ```
+
+ Example:
+
+ ```yaml
+ spec:
+ authentication:
+ jwtSecret: ''
+ maximumClockSkew: 10s
+ multipleLogin: true
+ oauthOptions:
+ accessTokenMaxAge: 1h
+ accessTokenInactivityTimeout: 30m
+ identityProviders:
+ - name: LDAP
+ type: LDAPIdentityProvider
+ mappingMethod: auto
+ provider:
+ host: 192.168.0.2:389
+ managerDN: uid=root,cn=users,dc=nas
+ managerPassword: ********
+ userSearchBase: cn=users,dc=nas
+ loginAttribute: uid
+ mailAttribute: mail
+ ```
+
+2. Configure fields other than `oauthOptions:identityProviders` in the `spec:authentication` section. For details, see [Set Up External Authentication](../set-up-external-authentication/).
+
+3. Configure fields in `oauthOptions:identityProviders` section.
+
+ * `name`: User-defined LDAP service name.
+ * `type`: To use an LDAP service as an identity provider, you must set the value to `LDAPIdentityProvider`.
+ * `mappingMethod`: Account mapping method. The value can be `auto` or `lookup`.
+ * If the value is `auto` (default), you need to specify a new username. KubeSphere automatically creates a user according to the username and maps the user to an LDAP user.
+ * If the value is `lookup`, you need to perform step 4 to manually map an existing KubeSphere user to an LDAP user.
+ * `provider`:
+ * `host`: Address and port number of the LDAP service.
+ * `managerDN`: DN used to bind to the LDAP directory.
+ * `managerPassword`: Password corresponding to `managerDN`.
+ * `userSearchBase`: User search base. Set the value to the DN of the directory level below which all LDAP users can be found.
+ * `loginAttribute`: Attribute that identifies LDAP users.
+ * `mailAttribute`: Attribute that identifies email addresses of LDAP users.
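+
+ Before applying the configuration, you can optionally sanity-check these values with a standard `ldapsearch` query. The command below simply reuses the example values above and is not part of the KubeSphere configuration itself:
+
+ ```bash
+ # Bind with managerDN/managerPassword and look up a user below userSearchBase
+ ldapsearch -x -H ldap://192.168.0.2:389 \
+   -D "uid=root,cn=users,dc=nas" -w '********' \
+   -b "cn=users,dc=nas" "(uid=<username>)" uid mail
+ ```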
+
+4. If `mappingMethod` is set to `lookup`, run the following command and add the labels to map a KubeSphere user to an LDAP user. Skip this step if `mappingMethod` is set to `auto`.
+
+ ```bash
+ kubectl edit user <username>
+ ```
+
+1. Log in to KubeSphere as `admin`, click the toolbox icon in the lower-right corner, click **kubectl**, and run the following command to edit `ks-installer` of the CRD `ClusterConfiguration`:
+
+ ```bash
+ kubectl -n kubesphere-system edit cc ks-installer
+ ```
+
+2. Configure fields other than `oauthOptions:identityProviders` in the `spec:authentication` section. For details, see [Set Up External Authentication](../set-up-external-authentication/).
+
+3. Configure fields in `oauthOptions:identityProviders` section according to the identity provider plugin you have developed.
+
+ The following is a configuration example that uses GitHub as an external identity provider. For details, see the [official GitHub documentation](https://docs.github.com/en/developers/apps/building-oauth-apps) and the [source code of the GitHubIdentityProvider](https://github.com/kubesphere/kubesphere/blob/release-3.1/pkg/apiserver/authentication/identityprovider/github/github.go) plugin.
+
+ ```yaml
+ spec:
+ authentication:
+ jwtSecret: ''
+ authenticateRateLimiterMaxTries: 10
+ authenticateRateLimiterDuration: 10m0s
+ oauthOptions:
+ accessTokenMaxAge: 1h
+ accessTokenInactivityTimeout: 30m
+ identityProviders:
+ - name: github
+ type: GitHubIdentityProvider
+ mappingMethod: auto
+ provider:
+ clientID: '******'
+ clientSecret: '******'
+ redirectURL: 'https://ks-console/oauth/redirect/github'
+ ```
+
+ Similarly, you can also use Alibaba Cloud IDaaS as an external identity provider. For details, see the official [Alibaba IDaaS documentation](https://www.alibabacloud.com/help/product/111120.htm?spm=a3c0i.14898238.2766395700.1.62081da1NlxYV0) and the [source code of the AliyunIDaasProvider](https://github.com/kubesphere/kubesphere/blob/release-3.1/pkg/apiserver/authentication/identityprovider/github/github.go) plugin.
+
+4. After the fields are configured, save your changes, and wait until the restart of ks-installer is complete.
+
+ {{< notice note >}}
+
+ The KubeSphere web console is unavailable during the restart of ks-installer. Please wait until the restart is complete.
+
+ {{</ notice >}}
+
+5. Go to the KubeSphere login page, click **Log In with XXX** (for example, **Log In with GitHub**).
+
+6. On the login page of the external identity provider, enter the username and password of a user configured at the identity provider to log in to KubeSphere.
+
+ 
+
diff --git a/content/en/docs/v3.4/access-control-and-account-management/multi-tenancy-in-kubesphere.md b/content/en/docs/v3.4/access-control-and-account-management/multi-tenancy-in-kubesphere.md
new file mode 100644
index 000000000..fbe355bf9
--- /dev/null
+++ b/content/en/docs/v3.4/access-control-and-account-management/multi-tenancy-in-kubesphere.md
@@ -0,0 +1,57 @@
+---
+title: "Kubernetes Multi-tenancy in KubeSphere"
+keywords: "Kubernetes, KubeSphere, multi-tenancy"
+description: "Understand the multi-tenant architecture in KubeSphere."
+linkTitle: "Multi-tenancy in KubeSphere"
+weight: 12100
+---
+
+Kubernetes helps you orchestrate applications and schedule containers, greatly improving resource utilization. However, there are various challenges facing both enterprises and individuals in resource sharing and security as they use Kubernetes, which is different from how they managed and maintained clusters in the past.
+
+The first and foremost challenge is how to define multi-tenancy in an enterprise and the security boundary of tenants. [The discussion about multi-tenancy](https://docs.google.com/document/d/1fj3yzmeU2eU8ZNBCUJG97dk_wC7228-e_MmdcmTNrZY) has never stopped in the Kubernetes community, while there is no definite answer to how a multi-tenant system should be structured.
+
+## Challenges in Kubernetes Multi-tenancy
+
+Multi-tenancy is a common software architecture. Resources in a multi-tenant environment are shared by multiple users, also known as "tenants", with their respective data isolated from each other. The administrator of a multi-tenant Kubernetes cluster must minimize the damage that a compromised or malicious tenant can do to others and make sure resources are fairly allocated.
+
+No matter how an enterprise multi-tenant system is structured, it always comes with the following two building blocks: logical resource isolation and physical resource isolation.
+
+Logically, resource isolation mainly entails API access control and tenant-based permission control. [Role-based access control (RBAC)](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) in Kubernetes and namespaces provide logic isolation. Nevertheless, they are not applicable in most enterprise environments. Tenants in an enterprise often need to manage resources across multiple namespaces or even clusters. Besides, the ability to provide auditing logs for isolated tenants based on their behavior and event queries is also a must in multi-tenancy.
+
+The isolation of physical resources includes nodes and networks, while it also relates to container runtime security. For example, you can create [NetworkPolicy](../../pluggable-components/network-policy/) resources to control traffic flow and use PodSecurityPolicy objects to control container behavior. [Kata Containers](https://katacontainers.io/) provides a more secure container runtime.
+
+## Kubernetes Multi-tenancy in KubeSphere
+
+To solve the issues above, KubeSphere provides a multi-tenant management solution based on Kubernetes.
+
+
+
+In KubeSphere, the [workspace](../../workspace-administration/what-is-workspace/) is the smallest tenant unit. A workspace enables users to share resources across clusters and projects. Workspace members can create projects in an authorized cluster and invite other members to cooperate in the same project.
+
+A **user** is the instance of a KubeSphere account. Users can be appointed as platform administrators to manage clusters or added to workspaces to cooperate in projects.
+
+Multi-level access control and resource quota limits underlie resource isolation in KubeSphere. They decide how the multi-tenant architecture is built and administered.
+
+### Logical isolation
+
+Similar to Kubernetes, KubeSphere uses RBAC to manage permissions granted to users, thus logically implementing resource isolation.
+
+The access control in KubeSphere is divided into three levels: platform, workspace and project. You use roles to control what permissions users have at different levels for different resources.
+
+1. [Platform roles](/docs/v3.3/quick-start/create-workspace-and-project/): Control what permissions platform users have for platform resources, such as clusters, workspaces and platform members.
+2. [Workspace roles](/docs/v3.3/workspace-administration/role-and-member-management/): Control what permissions workspace members have for workspace resources, such as projects (i.e. namespaces) and DevOps projects.
+3. [Project roles](/docs/v3.3/project-administration/role-and-member-management/): Control what permissions project members have for project resources, such as workloads and pipelines.
+
+### Network isolation
+
+Apart from logically isolating resources, KubeSphere also allows you to set [network isolation policies](../../pluggable-components/network-policy/) for workspaces and projects.
+
+### Auditing
+
+KubeSphere also provides [auditing logs](../../pluggable-components/auditing-logs/) for users.
+
+### Authentication and authorization
+
+For a complete authentication and authorization chain in KubeSphere, see the following diagram. KubeSphere has expanded RBAC rules using the Open Policy Agent (OPA). The KubeSphere team looks to integrate [Gatekeeper](https://github.com/open-policy-agent/gatekeeper) to provide more security management policies.
+
+
diff --git a/content/en/docs/v3.4/application-store/_index.md b/content/en/docs/v3.4/application-store/_index.md
new file mode 100644
index 000000000..348390088
--- /dev/null
+++ b/content/en/docs/v3.4/application-store/_index.md
@@ -0,0 +1,16 @@
+---
+title: "App Store"
+description: "Getting started with the App Store of KubeSphere"
+layout: "second"
+
+
+linkTitle: "App Store"
+weight: 14000
+
+icon: "/images/docs/v3.3/docs.svg"
+
+---
+
+The KubeSphere App Store, powered by [OpenPitrix](https://github.com/openpitrix/openpitrix), an open-source platform that manages apps across clouds, provides users with enterprise-ready containerized solutions. You can upload your own apps through app templates or add app repositories that serve as an application pool from which tenants can choose the apps they want.
+
+The App Store features a highly productive integrated system for application lifecycle management, allowing users to quickly upload, release, deploy, upgrade and remove apps in ways that best suit them. This is how KubeSphere empowers developers to spend less time setting up and more time developing.
diff --git a/content/en/docs/v3.4/application-store/app-developer-guide/_index.md b/content/en/docs/v3.4/application-store/app-developer-guide/_index.md
new file mode 100644
index 000000000..3d1da2629
--- /dev/null
+++ b/content/en/docs/v3.4/application-store/app-developer-guide/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Application Developer Guide"
+weight: 14400
+
+_build:
+ render: false
+---
diff --git a/content/en/docs/v3.4/application-store/app-developer-guide/helm-developer-guide.md b/content/en/docs/v3.4/application-store/app-developer-guide/helm-developer-guide.md
new file mode 100644
index 000000000..b7dc2f393
--- /dev/null
+++ b/content/en/docs/v3.4/application-store/app-developer-guide/helm-developer-guide.md
@@ -0,0 +1,157 @@
+---
+title: "Helm Developer Guide"
+keywords: 'Kubernetes, KubeSphere, helm, development'
+description: 'Develop your own Helm-based app.'
+linkTitle: "Helm Developer Guide"
+weight: 14410
+---
+
+You can upload the Helm chart of an app to KubeSphere so that tenants with necessary permissions can deploy it. This tutorial demonstrates how to prepare Helm charts using NGINX as an example.
+
+## Install Helm
+
+If you have already installed KubeSphere, then Helm is deployed in your environment. Otherwise, refer to the [Helm documentation](https://helm.sh/docs/intro/install/) to install Helm first.
+
+## Create a Local Repository
+
+Execute the following commands to create a repository on your machine.
+
+```bash
+mkdir helm-repo
+```
+
+```bash
+cd helm-repo
+```
+
+## Create an App
+
+Use `helm create` to create a folder named `nginx`, which automatically creates YAML templates and directories for your app. Generally, it is not recommended to change the names of the files and directories in the top-level directory.
+
+```bash
+$ helm create nginx
+$ tree nginx/
+nginx/
+├── charts
+├── Chart.yaml
+├── templates
+│ ├── deployment.yaml
+│ ├── _helpers.tpl
+│ ├── ingress.yaml
+│ ├── NOTES.txt
+│ └── service.yaml
+└── values.yaml
+```
+
+`Chart.yaml` is used to define the basic information of the chart, including name, API, and app version. For more information, see [Chart.yaml File](../helm-specification/#chartyaml-file).
+
+An example of the `Chart.yaml` file:
+
+```yaml
+apiVersion: v1
+appVersion: "1.0"
+description: A Helm chart for Kubernetes
+name: nginx
+version: 0.1.0
+```
+
+When you deploy Helm-based apps to Kubernetes, you can edit the `values.yaml` file on the KubeSphere console directly.
+
+An example of the `values.yaml` file:
+
+```yaml
+# Default values for test.
+# This is a YAML-formatted file.
+# Declare variables to be passed into your templates.
+
+replicaCount: 1
+
+image:
+ repository: nginx
+ tag: stable
+ pullPolicy: IfNotPresent
+
+nameOverride: ""
+fullnameOverride: ""
+
+service:
+ type: ClusterIP
+ port: 80
+
+ingress:
+ enabled: false
+ annotations: {}
+ # kubernetes.io/ingress.class: nginx
+ # kubernetes.io/tls-acme: "true"
+ path: /
+ hosts:
+ - chart-example.local
+ tls: []
+ # - secretName: chart-example-tls
+ # hosts:
+ # - chart-example.local
+
+resources: {}
+ # We usually recommend not to specify default resources and to leave this as a conscious
+ # choice for the user. This also increases chances charts run on environments with little
+ # resources, such as Minikube. If you do want to specify resources, uncomment the following
+ # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
+ # limits:
+ # cpu: 100m
+ # memory: 128Mi
+ # requests:
+ # cpu: 100m
+ # memory: 128Mi
+
+nodeSelector: {}
+
+tolerations: []
+
+affinity: {}
+```
+
+Refer to [Helm Specifications](../helm-specification/) to edit files in the `nginx` folder and save them when you finish editing.
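+
+Optionally, you can run Helm's built-in linter before packaging to catch common mistakes in the chart. This is a standard Helm command and not specific to KubeSphere:
+
+```bash
+# Check the chart for possible issues
+helm lint nginx
+```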
+
+## Create an Index File (Optional)
+
+To add a repository with an HTTP or HTTPS URL in KubeSphere, you need to upload an `index.yaml` file to the object storage in advance. Use Helm to create the index file by executing the following command in the parent directory of `nginx`.
+
+```bash
+helm repo index .
+```
+
+```bash
+$ ls
+index.yaml nginx
+```
+
+{{< notice note >}}
+
+- If the repository URL is S3-styled, an index file will be created automatically in the object storage when you add apps to the repository.
+
+- For more information about how to add repositories to KubeSphere, see [Import a Helm Repository](../../../workspace-administration/app-repository/import-helm-repository/).
+
+{{</ notice >}}
+
+## Package the Chart
+
+Go to the parent directory of `nginx` and execute the following command to package your chart, which creates a `.tgz` package.
+
+```bash
+helm package nginx
+```
+
+```bash
+$ ls
+nginx nginx-0.1.0.tgz
+```
+
+## Upload Your App
+
+Now that you have your Helm-based app ready, you can load it to KubeSphere and test it on the platform.
+
+## See Also
+
+[Helm Specifications](../helm-specification/)
+
+[Import a Helm Repository](../../../workspace-administration/app-repository/import-helm-repository/)
diff --git a/content/en/docs/v3.4/application-store/app-developer-guide/helm-specification.md b/content/en/docs/v3.4/application-store/app-developer-guide/helm-specification.md
new file mode 100644
index 000000000..ab16d028a
--- /dev/null
+++ b/content/en/docs/v3.4/application-store/app-developer-guide/helm-specification.md
@@ -0,0 +1,130 @@
+---
+title: "Helm Specifications"
+keywords: 'Kubernetes, KubeSphere, Helm, specifications'
+description: 'Understand the chart structure and specifications.'
+linkTitle: "Helm Specifications"
+weight: 14420
+---
+
+Helm charts serve as a packaging format. A chart is a collection of files that describe a related set of Kubernetes resources. For more information, see the [Helm documentation](https://helm.sh/docs/topics/charts/).
+
+## Structure
+
+All related files of a chart are stored in a directory, which generally contains:
+
+```text
+chartname/
+ Chart.yaml # A YAML file containing basic information about the chart, such as version and name.
+ LICENSE # (Optional) A plain text file containing the license for the chart.
+ README.md # (Optional) The description of the app and how-to guide.
+ values.yaml # The default configuration values for this chart.
+ values.schema.json # (Optional) A JSON Schema for imposing a structure on the values.yaml file.
+ charts/ # A directory containing any charts upon which this chart depends.
+ crds/ # Custom Resource Definitions.
+ templates/ # A directory of templates that will generate valid Kubernetes configuration files with corresponding values provided.
+ templates/NOTES.txt # (Optional) A plain text file with usage notes.
+```
+
+## Chart.yaml File
+
+You must provide the `Chart.yaml` file for a chart. Here is an example of the file with explanations for each field.
+
+```yaml
+apiVersion: (Required) The chart API version.
+name: (Required) The name of the chart.
+version: (Required) The version, following the SemVer 2 standard.
+kubeVersion: (Optional) The compatible Kubernetes version, following the SemVer 2 standard.
+description: (Optional) A single-sentence description of the app.
+type: (Optional) The type of the chart.
+keywords:
+ - (Optional) A list of keywords about the app.
+home: (Optional) The URL of the app.
+sources:
+ - (Optional) A list of URLs to source code for this app.
+dependencies: (Optional) A list of the chart requirements.
+ - name: The name of the chart, such as nginx.
+ version: The version of the chart, such as "1.2.3".
+ repository: The repository URL ("https://example.com/charts") or alias ("@repo-name").
+ condition: (Optional) A yaml path that resolves to a boolean, used for enabling/disabling charts (for example, subchart1.enabled ).
+ tags: (Optional)
+ - Tags can be used to group charts for enabling/disabling together.
+ import-values: (Optional)
+ - ImportValues holds the mapping of source values to parent key to be imported. Each item can be a string or pair of child/parent sublist items.
+ alias: (Optional) Alias to be used for the chart. It is useful when you have to add the same chart multiple times.
+maintainers: (Optional)
+ - name: (Required) The maintainer name.
+ email: (Optional) The maintainer email.
+ url: (Optional) A URL for the maintainer.
+icon: (Optional) A URL to an SVG or PNG image to be used as an icon.
+appVersion: (Optional) The app version. This needn't be SemVer.
+deprecated: (Optional, boolean) Whether this chart is deprecated.
+annotations:
+ example: (Optional) A list of annotations keyed by name.
+```
+
+{{< notice note >}}
+
+- The field `dependencies` is used to define chart dependencies which were located in a separate file `requirements.yaml` for `v1` charts. For more information, see [Chart Dependencies](https://helm.sh/docs/topics/charts/#chart-dependencies).
+- The field `type` is used to define the type of chart. Allowed values are `application` and `library`. For more information, see [Chart Types](https://helm.sh/docs/topics/charts/#chart-types).
+
+{{</ notice >}}
+
+## Values.yaml and Templates
+
+Written in the [Go template language](https://golang.org/pkg/text/template/), Helm chart templates are stored in the `templates` folder of a chart. There are two ways to provide values for the templates:
+
+1. Make a `values.yaml` file inside of a chart with default values that can be referenced.
+2. Make a YAML file that contains necessary values and use the file through the command line with `helm install`.
+
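+For example, the second approach might look like the following, where the release name and values file are placeholders:
+
+```bash
+# Install the chart with a custom values file overriding the defaults in values.yaml
+helm install my-release ./chartname -f custom-values.yaml
+```
+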
+Here is an example of the template in the `templates` folder.
+
+```yaml
+apiVersion: v1
+kind: ReplicationController
+metadata:
+ name: deis-database
+ namespace: deis
+ labels:
+ app.kubernetes.io/managed-by: deis
+spec:
+ replicas: 1
+ selector:
+ app.kubernetes.io/name: deis-database
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: deis-database
+ spec:
+ serviceAccount: deis-database
+ containers:
+ - name: deis-database
+ image: {{.Values.imageRegistry}}/postgres:{{.Values.dockerTag}}
+ imagePullPolicy: {{.Values.pullPolicy}}
+ ports:
+ - containerPort: 5432
+ env:
+ - name: DATABASE_STORAGE
+ value: {{default "minio" .Values.storage}}
+```
+
+The above example defines a ReplicationController template in Kubernetes. There are some values referenced in it which are defined in `values.yaml`.
+
+- `imageRegistry`: The Docker image registry.
+- `dockerTag`: The Docker image tag.
+- `pullPolicy`: The image pulling policy.
+- `storage`: The storage backend. It defaults to `minio`.
+
+An example `values.yaml` file:
+
+```text
+imageRegistry: "quay.io/deis"
+dockerTag: "latest"
+pullPolicy: "Always"
+storage: "s3"
+```
+
+## Reference
+
+[Helm Documentation](https://helm.sh/docs/)
+
+[Charts](https://helm.sh/docs/topics/charts/)
\ No newline at end of file
diff --git a/content/en/docs/v3.4/application-store/app-lifecycle-management.md b/content/en/docs/v3.4/application-store/app-lifecycle-management.md
new file mode 100644
index 000000000..7d2cd0003
--- /dev/null
+++ b/content/en/docs/v3.4/application-store/app-lifecycle-management.md
@@ -0,0 +1,220 @@
+---
+title: "Kubernetes Application Lifecycle Management"
+keywords: 'Kubernetes, KubeSphere, app-store'
+description: 'Manage your app across the entire lifecycle, including submission, review, test, release, upgrade and removal.'
+linkTitle: 'Application Lifecycle Management'
+weight: 14100
+---
+
+KubeSphere integrates [OpenPitrix](https://github.com/openpitrix/openpitrix), an open-source multi-cloud application management platform, to set up the App Store, managing Kubernetes applications throughout their entire lifecycle. The App Store supports two kinds of application deployment:
+
+- **Template-Based Apps** provide a way for developers and independent software vendors (ISVs) to share applications with users in a workspace. You can also import third-party app repositories within a workspace.
+- **Composed Apps** help users quickly build a complete application composed of multiple microservices. KubeSphere allows users to select existing services or create new ones to build a composed app on the one-stop console.
+
+Using [Redis](https://redis.io/) as an example application, this tutorial demonstrates how to manage the Kubernetes app throughout the entire lifecycle, including submission, review, test, release, upgrade and removal.
+
+## Prerequisites
+
+- You need to enable the [KubeSphere App Store (OpenPitrix)](../../pluggable-components/app-store/).
+- You need to create a workspace, a project and a user (`project-regular`). For more information, see [Create Workspaces, Projects, Users and Roles](../../quick-start/create-workspace-and-project/).
+
+## Hands-on Lab
+
+### Step 1: Create a customized role and two users
+
+You need to create two users first, one for ISVs (`isv`) and the other (`app-reviewer`) for app technical reviewers.
+
+1. Log in to the KubeSphere console with the user `admin`. Click **Platform** in the upper-left corner and select **Access Control**. In **Platform Roles**, click **Create**.
+
+2. Set a name for the role, such as `app-review`, and click **Edit Permissions**.
+
+3. In **App Management**, choose **App Template Management** and **App Template Viewing** in the permission list, and then click **OK**.
+
+ {{< notice note >}}
+
+ The user who is granted the role `app-review` has the permission to view the App Store on the platform and manage apps, including review and removal.
+
+ {{ notice >}}
+
+4. As the role is ready now, you need to create a user and grant the role `app-review` to it. In **Users**, click **Create**. Provide the required information and click **OK**.
+
+5. Similarly, create another user `isv`, and grant the role of `platform-regular` to it.
+
+6. Invite both users created above to an existing workspace such as `demo-workspace`, and grant them the role of `workspace-admin`.
+
+### Step 2: Upload and submit an application
+
+1. Log in to KubeSphere as `isv` and go to your workspace. You need to upload the example app Redis to this workspace so that it can be used later. First, download the app [Redis 11.3.4](https://github.com/kubesphere/tutorial/raw/master/tutorial%205%20-%20app-store/redis-11.3.4.tgz) and click **Upload Template** in **App Templates**.
+
+ {{< notice note >}}
+
+ In this example, a new version of Redis will be uploaded later to demonstrate the upgrade feature.
+
+ {{ notice >}}
+
+2. In the dialog that appears, click **Upload Helm Chart** to upload the chart file. Click **OK** to continue.
+
+3. Basic information of the app displays under **App Information**. To upload an icon for the app, click **Upload Icon**. You can also skip it and click **OK** directly.
+
+ {{< notice note >}}
+
+ The maximum accepted resolution of the app icon is 96 x 96 pixels.
+
+ {{ notice >}}
+
+4. The app displays in the template list with the status **Developing** after it is successfully uploaded, which means this app is under development. The uploaded app is visible to all members in the same workspace.
+
+5. Go to the detail page of the app template by clicking Redis from the list. You can edit the basic information of this app by clicking **Edit**.
+
+6. You can customize the app's basic information by specifying the fields in the pop-up window.
+
+7. Click **OK** to save your changes, then you can test this application by deploying it to Kubernetes. Click the draft version to expand the menu and click **Install**.
+
+ {{< notice note >}}
+
+ If you don't want to test the app, you can submit it for review directly. However, it is recommended that you test your app deployment and function first before you submit it for review, especially in a production environment. This helps you detect any problems in advance and accelerate the review process.
+
+ {{ notice >}}
+
+8. Select the cluster and project to which you want to deploy the app, set up different configurations for the app, and then click **Install**.
+
+ {{< notice note >}}
+
+ Some apps can be deployed with all configurations set in a form. You can use the toggle switch to see its YAML file, which contains all parameters you need to specify in the form.
+
+ {{ notice >}}
+
+9. Wait for a few minutes, then switch to the tab **App Instances**. You will find that Redis has been deployed successfully.
+
+10. After you test the app with no issues found, you can click **Submit for Release** to submit this application for release.
+
+ {{< notice note >}}
+
+The version number must start with a number and contain decimal points.
+
+{{</ notice >}}
+
+11. After the app is submitted, the app status will change to **Submitted**. Now app reviewers can release it.
+
+### Step 3: Release the application
+
+1. Log out of KubeSphere and log back in as `app-reviewer`. Click **Platform** in the upper-left corner and select **App Store Management**. On the **App Release** page, the app submitted in the previous step displays under the tab **Unreleased**.
+
+2. To release this app, click it to inspect the app information, introduction, chart file and update logs from the pop-up window.
+
+3. The reviewer needs to decide whether the app meets the release criteria on the App Store. Click **Pass** to approve it or **Reject** to deny an app submission.
+
+### Step 4: Release the application to the App Store
+
+After the app is approved, `isv` can release the Redis application to the App Store, allowing all users on the platform to find and deploy this application.
+
+1. Log out of KubeSphere and log back in as `isv`. Go to your workspace and click Redis on the **Template-Based Apps** page. On its details page, expand the version menu, then click **Release to Store**. In the pop-up prompt, click **OK** to confirm.
+
+2. Under **App Release**, you can see the app status. **Activated** means it is available in the App Store.
+
+3. Click **View in Store** to go to its **Versions** page in the App Store. Alternatively, click **App Store** in the upper-left corner, and you can also see the app.
+
+ {{< notice note >}}
+
+ You may see two Redis apps in the App Store, one of which is a built-in app in KubeSphere. Note that a newly-released app displays at the beginning of the list in the App Store.
+
+ {{ notice >}}
+
+4. Now, users in the workspace can install Redis from the App Store. To install the app to Kubernetes, click the app to go to its **App Information** page, and click **Install**.
+
+ {{< notice note >}}
+
+ If you have trouble installing an application and the **Status** column shows **Failed**, you can hover your cursor over the **Failed** icon to see the error message.
+
+ {{</ notice >}}
+
+### Step 5: Create an application category
+
+`app-reviewer` can create multiple categories for different types of applications based on their function and usage. It is similar to setting tags, and categories can be used as filters in the App Store, such as Big Data, Middleware, and IoT.
+
+1. Log in to KubeSphere as `app-reviewer`. To create a category, go to the **App Store Management** page and click the category creation icon in **App Categories**.
+
+2. Set a name and icon for the category in the dialog, then click **OK**. For Redis, you can enter `Database` for the field **Name**.
+
+ {{< notice note >}}
+
+ Usually, an app reviewer creates necessary categories in advance and ISVs select the category in which an app appears before submitting it for review. A newly-created category has no app in it.
+
+ {{</ notice >}}
+
+3. As the category is created, you can assign the category to your app. In **Uncategorized**, select Redis and click **Change Category**.
+
+4. In the dialog, select the category (**Database**) from the drop-down list and click **OK**.
+
+5. The app displays in the category as expected.
+
+### Step 6: Add a new version
+
+To allow workspace users to upgrade apps, you need to add new app versions to KubeSphere first. Follow the steps below to add a new version for the example app.
+
+1. Log in to KubeSphere as `isv` again and navigate to **Template-Based Apps**. Click the app Redis in the list.
+
+2. Download [Redis 12.0.0](https://github.com/kubesphere/tutorial/raw/master/tutorial%205%20-%20app-store/redis-12.0.0.tgz), which is a new version of Redis for demonstration in this tutorial. On the tab **Versions**, click **New Version** on the right to upload the package you just downloaded.
+
+3. Click **Upload Helm Chart** and click **OK** after it is uploaded.
+
+4. The new app version displays in the version list. You can click it to expand the menu and test the new version. Besides, you can also submit it for review and release it to the App Store, which is the same as the steps shown above.
+
+### Step 7: Upgrade an application
+
+After a new version is released to the App Store, all users can upgrade this application to the new version.
+
+{{< notice note >}}
+
+To follow the steps below, you must deploy an app of one of its old versions first. In this example, Redis 11.3.4 was already deployed in the project `demo-project` and its new version 12.0.0 was released to the App Store.
+
+{{</ notice >}}
+
+1. Log in to KubeSphere as `project-regular`, navigate to the **Apps** page of the project, and click the app to upgrade.
+
+2. Click **More** and select **Edit Settings** from the drop-down list.
+
+3. In the window that appears, you can see the YAML file of application configurations. Select the new version from the drop-down list on the right. You can customize the YAML file of the new version. In this tutorial, click **Update** to use the default configurations directly.
+
+ {{< notice note >}}
+
+ You can select the same version from the drop-down list on the right as that on the left to customize current application configurations through the YAML file.
+
+ {{</ notice >}}
+
+4. On the **Apps** page, you can see that the app is being upgraded. The status will change to **Running** when the upgrade finishes.
+
+### Step 8: Suspend an application
+
+You can choose to remove an app entirely from the App Store or suspend a specific app version.
+
+1. Log in to KubeSphere as `app-reviewer`. Click **Platform** in the upper-left corner and select **App Store Management**. On the **App Store** page, click Redis.
+
+2. On the detail page, click **Suspend App** and select **OK** in the dialog to confirm the operation to remove the app from the App Store.
+
+ {{< notice note >}}
+
+ Removing an app from the App Store does not affect tenants who are using the app.
+
+ {{</ notice >}}
+
+3. To make the app available in the App Store again, click **Activate App**.
+
+4. To suspend a specific app version, expand the version menu and click **Suspend Version**. In the dialog that appears, click **OK** to confirm.
+
+ {{< notice note >}}
+
+ After an app version is suspended, this version is not available in the App Store. Suspending an app version does not affect tenants who are using this version.
+
+ {{</ notice >}}
+
+5. To make the app version available in the App Store again, click **Activate Version**.
+
+
+
+
+
+
+
+
+
diff --git a/content/en/docs/v3.4/application-store/built-in-apps/_index.md b/content/en/docs/v3.4/application-store/built-in-apps/_index.md
new file mode 100644
index 000000000..2ee1bc0ca
--- /dev/null
+++ b/content/en/docs/v3.4/application-store/built-in-apps/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Built-in Applications"
+weight: 14200
+
+_build:
+ render: false
+---
diff --git a/content/en/docs/v3.4/application-store/built-in-apps/deploy-chaos-mesh.md b/content/en/docs/v3.4/application-store/built-in-apps/deploy-chaos-mesh.md
new file mode 100644
index 000000000..9e10be832
--- /dev/null
+++ b/content/en/docs/v3.4/application-store/built-in-apps/deploy-chaos-mesh.md
@@ -0,0 +1,82 @@
+---
+title: 'Deploy Chaos Mesh on KubeSphere'
+tag: 'KubeSphere, Kubernetes, Applications, Chaos Engineering, Chaos experiments, Chaos Mesh'
+keywords: 'Chaos Mesh, Kubernetes, Helm, KubeSphere'
+description: 'Learn how to deploy Chaos Mesh on KubeSphere and start running chaos experiments.'
+linkTitle: "Deploy Chaos Mesh on KubeSphere"
+---
+
+[Chaos Mesh](https://github.com/chaos-mesh/chaos-mesh) is a cloud-native Chaos Engineering platform that orchestrates chaos in Kubernetes environments. With Chaos Mesh, you can test your system's resilience and robustness on Kubernetes by injecting various types of faults into Pods, network, file system, and even the kernel.
+
+
+
+## Enable App Store on KubeSphere
+
+1. Make sure you have installed and enabled the [KubeSphere App Store](../../../pluggable-components/app-store/).
+
+2. You need to create a workspace, a project, and a user account (project-regular) for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the operator role. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Chaos experiments with Chaos Mesh
+
+### Step 1: Deploy Chaos Mesh
+
+1. Log in to KubeSphere as `project-regular`, search for **chaos-mesh** in the **App Store**, and click the search result to open the app.
+
+ 
+
+2. In the **App Information** page, click **Install** on the upper right corner.
+
+ 
+
+3. In the **App Settings** page, set the application **Name**, **Location** (as your Namespace), and **App Version**, and then click **Next** on the upper right corner.
+
+ 
+
+4. Configure the `values.yaml` file as needed, or click **Install** to use the default configuration.
+
+ 
+
+5. Wait for the deployment to be finished. Upon completion, Chaos Mesh will be shown as **Running** in KubeSphere.
+
+ 
+
+
+### Step 2: Visit Chaos Dashboard
+
+1. In the **Resource Status** page, copy the **NodePort** of `chaos-dashboard`.
+
+ 
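+
+ If you prefer the command line, you can also look up the NodePort with `kubectl`. The Service name `chaos-dashboard` and the namespace below are assumptions based on the example above; adjust them to your deployment:
+
+ ```bash
+ kubectl -n <namespace> get svc chaos-dashboard -o jsonpath='{.spec.ports[0].nodePort}'
+ ```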
+
+2. Access the Chaos Dashboard by entering `${NodeIP}:${NODEPORT}` in your browser. Refer to [Manage User Permissions](https://chaos-mesh.org/docs/manage-user-permissions/) to generate a token and log in to the Chaos Dashboard.
+
+ 
+
+### Step 3: Create a chaos experiment
+
+Before creating a chaos experiment, you should identify and deploy your experiment target, for example, to test how an application works under network latency. Here, we use the demo application `web-show` as the target application to be tested, and the test goal is to observe the system network latency. You can deploy `web-show` with the following command:
+
+```bash
+curl -sSL https://mirrors.chaos-mesh.org/latest/web-show/deploy.sh | bash
+```
+
+> Note: The network latency from the web-show Pod to the kube-system Pod can be observed directly in the **Web Show** application.
+
+1. From your web browser, visit `${NodeIP}:8081` to access the **Web Show** application.
+
+ 
+
+2. Log in to Chaos Dashboard to create a chaos experiment. To observe the effect of network latency on the application, we set the **Target** as "Network Attack" to simulate a network delay scenario.
+
+ 
+
+ The **Scope** of the experiment is set to `app: web-show`.
+
+ 
+
+3. Start the chaos experiment by submitting it.
+
+ 
+
+Now, you should be able to visit **Web Show** to observe experiment results:
+
+
\ No newline at end of file
diff --git a/content/en/docs/v3.4/application-store/built-in-apps/etcd-app.md b/content/en/docs/v3.4/application-store/built-in-apps/etcd-app.md
new file mode 100644
index 000000000..f34455ffa
--- /dev/null
+++ b/content/en/docs/v3.4/application-store/built-in-apps/etcd-app.md
@@ -0,0 +1,58 @@
+---
+title: "Deploy etcd on KubeSphere"
+keywords: 'Kubernetes, KubeSphere, etcd, app-store'
+description: 'Learn how to deploy etcd from the App Store of KubeSphere and access its service.'
+linkTitle: "Deploy etcd on KubeSphere"
+weight: 14210
+---
+
+Written in Go, [etcd](https://etcd.io/) is a distributed key-value store to store data that needs to be accessed by a distributed system or cluster of machines. In Kubernetes, it is the backend for service discovery and stores cluster states and configurations.
+
+This tutorial walks you through an example of deploying etcd from the App Store of KubeSphere.
+
+## Prerequisites
+
+- Please make sure you [enable the OpenPitrix system](../../../pluggable-components/app-store/).
+- You need to create a workspace, a project, and a user account (`project-regular`) for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Hands-on Lab
+
+### Step 1: Deploy etcd from the App Store
+
+1. On the **Overview** page of the project `demo-project`, click **App Store** in the upper-left corner.
+
+2. Find etcd and click **Install** on the **App Information** page.
+
+3. Set a name and select an app version. Make sure etcd is deployed in `demo-project` and click **Next**.
+
+4. On the **App Settings** page, specify the size of the persistent volume for etcd and click **Install**.
+
+ {{< notice note >}}
+
+ To specify more values for etcd, use the toggle switch to see the app's manifest in YAML format and edit its configurations.
+
+ {{</ notice >}}
+
+5. In **Template-Based Apps** of the **Apps** page, wait until etcd is up and running.
+
+### Step 2: Access the etcd service
+
+After the app is deployed, you can use etcdctl, a command-line tool for interacting with the etcd server, to access etcd on the KubeSphere console directly.
+
+1. Navigate to **StatefulSets** in **Workloads**, and click the service name of etcd.
+
+2. Under **Pods**, expand the menu to see container details, and then click the **Terminal** icon.
+
+3. In the terminal, you can read and write data directly. For example, execute the following two commands respectively.
+
+ ```bash
+ etcdctl set /name kubesphere
+ ```
+
+ ```bash
+ etcdctl get /name
+ ```
+
+4. For clients within the KubeSphere cluster, the etcd service can be accessed through the DNS name of its Service, in the form `<service-name>.<project-name>.svc.cluster.local:2379`.
+
+In the project gateway list, click the icon on the right of a project gateway to select an operation from the drop-down menu:
+
+- **Edit**: Edit configurations of the project gateway.
+- **Disable**: Disable the project gateway.
+
+{{< notice note >}}
+
+If a project gateway exists prior to the creation of a cluster gateway, the project gateway address may switch between the address of the cluster gateway and that of the project gateway. It is recommended that you use either the cluster gateway or the project gateway, but not both.
+
+{{</ notice >}}
+
+For more information about how to create project gateways, see [Project Gateway](../../../project-administration/project-gateway/).
\ No newline at end of file
diff --git a/content/en/docs/v3.4/cluster-administration/cluster-settings/cluster-visibility-and-authorization.md b/content/en/docs/v3.4/cluster-administration/cluster-settings/cluster-visibility-and-authorization.md
new file mode 100644
index 000000000..e95527e5b
--- /dev/null
+++ b/content/en/docs/v3.4/cluster-administration/cluster-settings/cluster-visibility-and-authorization.md
@@ -0,0 +1,53 @@
+---
+title: "Cluster Visibility and Authorization"
+keywords: "Cluster Visibility, Cluster Management"
+description: "Learn how to set up cluster visibility and authorization."
+linkTitle: "Cluster Visibility and Authorization"
+weight: 8610
+---
+
+In KubeSphere, you can allocate a cluster to multiple workspaces through authorization so that workspace resources can all run on the cluster. At the same time, a workspace can also be associated with multiple clusters. Workspace users with necessary permissions can create multi-cluster projects using clusters allocated to the workspace.
+
+This guide demonstrates how to set cluster visibility.
+
+## Prerequisites
+* You need to enable the [multi-cluster feature](../../../multicluster-management/).
+* You need to have a workspace and a user that has the permission to create workspaces, such as `ws-manager`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Set Cluster Visibility
+
+### Select available clusters when you create a workspace
+
+1. Log in to KubeSphere with a user that has the permission to create a workspace, such as `ws-manager`.
+
+2. Click **Platform** in the upper-left corner and select **Access Control**. In **Workspaces** from the navigation bar, click **Create**.
+
+3. Provide the basic information for the workspace and click **Next**.
+
+4. On the **Cluster Settings** page, you can see a list of available clusters. Select the clusters that you want to allocate to the workspace and click **Create**.
+
+5. After the workspace is created, workspace members with necessary permissions can create resources that run on the associated cluster.
+
+ {{< notice warning >}}
+
+Try not to create resources on the host cluster to avoid excessive loads, which can lead to a decrease in the stability across clusters.
+
+{{</ notice >}}
+
+### Set cluster visibility after a workspace is created
+
+After a workspace is created, you can allocate additional clusters to the workspace through authorization or unbind a cluster from the workspace. Follow the steps below to adjust the visibility of a cluster.
+
+1. Log in to KubeSphere with a user that has the permission to manage clusters, such as `admin`.
+
+2. Click **Platform** in the upper-left corner and select **Cluster Management**. Select a cluster from the list to view cluster information.
+
+3. In **Cluster Settings** from the navigation bar, select **Cluster Visibility**.
+
+4. You can see the list of authorized workspaces, which means the current cluster is available to resources in all these workspaces.
+
+5. Click **Edit Visibility** to set the cluster visibility. You can select new workspaces that will be able to use the cluster or unbind it from a workspace.
+
+### Make a cluster public
+
+You can check **Set as Public Cluster** so that all platform users can access the cluster and are able to create and schedule resources in it.
diff --git a/content/en/docs/v3.4/cluster-administration/cluster-settings/log-collections/_index.md b/content/en/docs/v3.4/cluster-administration/cluster-settings/log-collections/_index.md
new file mode 100644
index 000000000..275de2bb0
--- /dev/null
+++ b/content/en/docs/v3.4/cluster-administration/cluster-settings/log-collections/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Log Receivers"
+weight: 8620
+
+_build:
+ render: false
+---
\ No newline at end of file
diff --git a/content/en/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-es-as-receiver.md b/content/en/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-es-as-receiver.md
new file mode 100644
index 000000000..1b43c5c85
--- /dev/null
+++ b/content/en/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-es-as-receiver.md
@@ -0,0 +1,35 @@
+---
+title: "Add Elasticsearch as a Receiver"
+keywords: 'Kubernetes, log, elasticsearch, pod, container, fluentbit, output'
+description: 'Learn how to add Elasticsearch to receive container logs, resource events, or audit logs.'
+linkTitle: "Add Elasticsearch as a Receiver"
+weight: 8622
+---
+You can use Elasticsearch, Kafka, and Fluentd as log receivers in KubeSphere. This tutorial demonstrates how to add an Elasticsearch receiver.
+
+## Prerequisites
+
+- You need a user granted a role including the permission of **Cluster Management**. For example, you can log in to the console as `admin` directly or create a new role with the permission and assign it to a user.
+
+- Before adding a log receiver, you need to enable any of the `logging`, `events` or `auditing` components. For more information, see [Enable Pluggable Components](../../../../pluggable-components/). `logging` is enabled as an example in this tutorial.
+
+## Add Elasticsearch as a Receiver
+
+1. Log in to KubeSphere as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+ {{< notice note >}}
+
+If you have enabled the [multi-cluster feature](../../../../multicluster-management/), you can select a specific cluster.
+
+{{</ notice >}}
+
+2. On the navigation pane on the left, click **Cluster Settings** > **Log Receivers**.
+
+3. Click **Add Log Receiver** and choose **Elasticsearch**.
+
+4. Provide the Elasticsearch service address and port number.
+
+5. Elasticsearch will appear in the receiver list on the **Log Receivers** page, the status of which is **Collecting**.
+
+6. To verify whether Elasticsearch is receiving logs sent from Fluent Bit, click **Log Search** in the **Toolbox** in the lower-right corner and search logs on the console. For more information, read [Log Query](../../../../toolbox/log-query/).
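+
+ To double-check outside the console, you can also query Elasticsearch directly. The address below is a placeholder for your Elasticsearch service:
+
+ ```bash
+ # List indices and confirm that new log indices are being written
+ curl "http://<elasticsearch-address>:9200/_cat/indices?v"
+ ```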
+
diff --git a/content/en/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-fluentd-as-receiver.md b/content/en/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-fluentd-as-receiver.md
new file mode 100644
index 000000000..b674da974
--- /dev/null
+++ b/content/en/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-fluentd-as-receiver.md
@@ -0,0 +1,154 @@
+---
+title: "Add Fluentd as a Receiver"
+keywords: 'Kubernetes, log, fluentd, pod, container, fluentbit, output'
+description: 'Learn how to add Fluentd to receive logs, events or audit logs.'
+linkTitle: "Add Fluentd as a Receiver"
+weight: 8624
+---
+You can use Elasticsearch, Kafka and Fluentd as log receivers in KubeSphere. This tutorial demonstrates:
+
+- How to deploy Fluentd as a Deployment and create the corresponding Service and ConfigMap.
+- How to add Fluentd as a log receiver to receive logs sent from Fluent Bit and then output to stdout.
+- How to verify if Fluentd receives logs successfully.
+
+## Prerequisites
+
+- You need a user granted a role including the permission of **Cluster Management**. For example, you can log in to the console as `admin` directly or create a new role with the permission and assign it to a user.
+
+- Before adding a log receiver, you need to enable any of the `logging`, `events`, or `auditing` components. For more information, see [Enable Pluggable Components](../../../../pluggable-components/). `logging` is enabled as an example in this tutorial.
+
+## Step 1: Deploy Fluentd as a Deployment
+
+Usually, Fluentd is deployed as a DaemonSet in Kubernetes to collect container logs on each node. KubeSphere chooses Fluent Bit because of its low memory footprint. Besides, Fluentd features numerous output plugins. Hence, KubeSphere chooses to deploy Fluentd as a Deployment to forward logs it receives from Fluent Bit to more destinations such as S3, MongoDB, Cassandra, MySQL, syslog and Splunk.
+
+Run the following commands:
+
+{{< notice note >}}
+
+- The following commands create the Fluentd Deployment, Service, and ConfigMap in the `default` namespace and add a filter to the Fluentd ConfigMap that excludes logs from the `default` namespace, so that Fluent Bit and Fluentd do not collect each other's logs in a loop.
+- Change the namespace if you want to deploy Fluentd into a different namespace.
+
+{{</ notice >}}
+
+```yaml
+cat <<EOF | kubectl apply -f -
+
+On the **Alerting Policies** page, click the icon on the right of an alerting policy to select an operation from the drop-down list.
+
+1. Click **Edit** from the drop-down list and edit the alerting policy following the same steps as you create it. Click **OK** on the **Message Settings** page to save it.
+
+2. Click **Delete** from the drop-down list to delete an alerting policy.
+
+## View an Alerting Policy
+
+Click the name of an alerting policy on the **Alerting Policies** page to see its detail information, including the alerting rule and alerting history. You can also see the rule expression which is based on the template you use when creating the alerting policy.
+
+Under **Monitoring**, the **Alert Monitoring** chart shows the actual usage or amount of resources over time. **Alerting Message** displays the customized message you set in notifications.
+
+{{< notice note >}}
+
+You can click
+
+1. Log in to the Nexus console as an administrator and click the settings icon on the top navigation bar.
+
+2. Go to the **Repositories** page and you can see that Nexus provides three types of repository.
+
+ - `proxy`: the proxy for a remote repository to download and store resources on Nexus as cache.
+ - `hosted`: the repository storing artifacts on Nexus.
+ - `group`: a group of configured Nexus repositories.
+
+3. You can click a repository to view its details. For example, click **maven-public** to go to its details page, and you can see its **URL**.
+
+### Step 2: Modify `pom.xml` in your GitHub repository
+
+1. Log in to GitHub. Fork [the example repository](https://github.com/devops-ws/learn-pipeline-java) to your own GitHub account.
+
+2. In your own GitHub repository of **learn-pipeline-java**, click the file `pom.xml` in the root directory.
+
+3. Click the edit icon to edit the file.
+
+| Code Repository | Parameter |
+| --- | --- |
+| GitHub | Credential: Select the credential of the code repository. |
+| GitLab | |
+| Bitbucket | |
+| Git | |
+
+| Parameter | Description |
+| --- | --- |
+| Revision | The commit ID, branch, or tag of the repository. For example, master, v1.2.0, 0a1b2c3, or HEAD. |
+| Manifest File Path | The manifest file path. For example, config/default. |
+
+| Parameter | Description |
+| --- | --- |
+| Prune resources | If checked, it will delete resources that are no longer defined in Git. By default and as a safety mechanism, auto sync will not delete resources. |
+| Self-heal | If checked, it will force the state defined in Git into the cluster when a deviation in the cluster is detected. By default, changes that are made to the live cluster will not trigger auto sync. |
+
+| Parameter | Description |
+| --- | --- |
+| Prune resources | If checked, it will delete resources that are no longer defined in Git. By default and as a safety mechanism, manual sync will not delete such resources but marks them as out-of-sync. |
+| Dry run | Previews the apply operation without affecting the cluster. |
+| Apply only | If checked, it will skip pre/post sync hooks and just run `kubectl apply` for application resources. |
+| Force | If checked, it will use `kubectl apply --force` to sync resources. |
+
+| Parameter | Description |
+| --- | --- |
+| Skip schema validation | Disables kubectl validation. `--validate=false` is added when `kubectl apply` runs. |
+| Auto create project | Automatically creates projects for application resources if the projects do not exist. |
+| Prune last | Resource pruning happens as a final, implicit wave of the sync operation, after other resources have been deployed and become healthy. |
+| Selective sync | Syncs only out-of-sync resources. |
+
+| Parameter | Description |
+| --- | --- |
+| foreground | Deletes dependent resources first, and then deletes the owner resource. |
+| background | Deletes the owner resource immediately, and then deletes the dependent resources in the background. |
+| orphan | Deletes the owner resource and leaves its dependent resources orphaned instead of deleting them. |
+
+| Item | Description |
+| --- | --- |
+| Name | Name of the continuous deployment. |
+| Health Status | Health status of the continuous deployment. |
+| Sync Status | Synchronization status of the continuous deployment. |
+| Deployment Location | Cluster and project where resources are deployed. |
+| Update Time | Time when resources are updated. |
to edit the file. For example, change the value of `spec.replicas` to `3`.
+
+4. Click **Commit changes** at the bottom of the page.
+
+### Check the webhook deliveries
+
+1. On the **Webhooks** page of your own repository, click the webhook.
+
+2. Click **Recent Deliveries** and click a specific delivery record to view its details.
+
+### Check the pipeline
+
+1. Log in to the KubeSphere web console as `project-regular`. Go to your DevOps project and click the pipeline.
+
+2. On the **Run Records** tab, check that a new run is triggered by the pull request submitted to the `sonarqube` branch of the remote repository.
+
+3. Go to the **Pods** page of the project `kubesphere-sample-dev` and check the status of the three Pods. If all three Pods are in the **Running** state, the pipeline works properly. You can also check the Pod status from the command line, as shown below.
+
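+    If you prefer the command line, the same check can be done with kubectl (assuming you have access to the cluster where the project lives), as sketched here:
+
+    ```bash
+    # List the Pods in the kubesphere-sample-dev project (namespace) and confirm they are Running.
+    kubectl -n kubesphere-sample-dev get pods
+    ```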
+
+
diff --git a/content/en/docs/v3.4/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md b/content/en/docs/v3.4/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md
new file mode 100644
index 000000000..fcdb34cec
--- /dev/null
+++ b/content/en/docs/v3.4/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md
@@ -0,0 +1,104 @@
+---
+title: "Use Pipeline Templates"
+keywords: 'KubeSphere, Kubernetes, Jenkins, Graphical Pipelines, Pipeline Templates'
+description: 'Understand how to use pipeline templates on KubeSphere.'
+linkTitle: "Use Pipeline Templates"
+weight: 11213
+---
+
+KubeSphere offers a graphical editing panel where the stages and steps of a Jenkins pipeline can be defined through interactive operations. KubeSphere 3.3 provides built-in pipeline templates, such as Node.js, Maven, and Golang, to help users quickly create pipelines. Additionally, KubeSphere 3.3 also supports customization of pipeline templates to meet the diverse needs of enterprises.
+
+This section describes how to use pipeline templates on KubeSphere.
+
+## Prerequisites
+
+- You have a workspace, a DevOps project and a user (`project-regular`) invited to the DevOps project with the `operator` role. If they are not ready yet, please refer to [Create Workspaces, Projects, Users and Roles](../../../../quick-start/create-workspace-and-project/).
+
+- You need to [enable the KubeSphere DevOps system](../../../../pluggable-components/devops/).
+
+- You need to [create a pipeline](../../../how-to-use/pipelines/create-a-pipeline-using-graphical-editing-panel/).
+
+## Use a Built-in Pipeline Template
+
+The following takes Node.js as an example to show how to use a built-in pipeline template. The steps for using the Maven and Golang pipeline templates are similar.
+
+
+1. Log in to the KubeSphere console as `project-regular`. In the navigation pane on the left, click **DevOps Projects**.
+
+2. On the **DevOps Projects** page, click the DevOps project you created.
+
+3. In the navigation pane on the left, click **Pipelines**.
+
+4. On the pipeline list on the right, click the created pipeline to go to its details page.
+
+5. On the right pane, click **Edit Pipeline**.
+
+6. On the **Create Pipeline** dialog box, click **Node.js**, and then click **Next**.
+
+
+7. On the **Parameter Settings** tab, set the parameters based on the actual situation, and then click **Create**.
+
+ | Parameter | Meaning |
+ | ----------- | ------------------------- |
+ | GitURL | URL of the project repository to clone |
+ | GitRevision | Revision to check out from |
+ | NodeDockerImage | Docker image version of Node.js |
+ | InstallScript | Shell script for installing dependencies |
+ | TestScript | Shell script for testing |
+ | BuildScript | Shell script for building a project |
+ | ArtifactsPath | Path where the artifacts reside |
+
+8. On the left pane, the system has preset several steps, and you can add more steps and parallel stages.
+
+9. Click a specific step. On the right pane, you can perform the following operations:
+ - Change the stage name.
+ - Delete a stage.
+ - Set the agent type.
+ - Add conditions.
+ - Edit or delete a task.
+ - Add steps or nested steps.
+
+ {{< notice note >}}
+
+ You can also customize the stages and steps in the pipeline templates based on your needs. For more information about how to use the graphical editing panel, refer to [Create a Pipeline Using Graphical Editing Panels](../create-a-pipeline-using-graphical-editing-panel/).
+    {{</ notice >}}
+
+10. On the **Agent** area on the left, select an agent type, and click **OK**. The default value is **kubernetes**.
+
+ The following table explains the agent types.
+
+
+ | Agent Type | Description |
+ | --------------- | ------------------------- |
+ | any | Uses the default base pod template to create a Jenkins agent to run pipelines. |
+ | node | Uses a pod template with the specific label to create a Jenkins agent to run pipelines. Available labels include base, java, nodejs, maven, go, and more. |
+   | kubernetes | Uses a YAML file to customize a standard Kubernetes pod template to create a Jenkins agent to run pipelines. |
+
+11. On the pipeline details page, you can view the created pipeline template. Click **Run** to run the pipeline.
+
+## Legacy Built-in Pipeline Templates
+
+In earlier versions, KubeSphere also provided the CI and CI & CD pipeline templates. However, as the two templates are hardly customizable, you are advised to use the Node.js, Maven, or Golang pipeline template, or directly customize a template based on your needs.
+The following briefly introduces the CI and CI & CD pipeline templates.
+
+- CI pipeline template
+
+ 
+
+ 
+
+ The CI pipeline template contains two stages. The **clone code** stage checks out code and the **build & push** stage builds an image and pushes it to Docker Hub. You need to create credentials for your code repository and your Docker Hub registry in advance, and then set the URL of your repository and these credentials in corresponding steps. After you finish editing, the pipeline is ready to run.
+
+- CI & CD pipeline template
+
+ 
+
+ 
+
+ The CI & CD pipeline template contains six stages. For more information about each stage, refer to [Create a Pipeline Using a Jenkinsfile](../create-a-pipeline-using-jenkinsfile/#pipeline-overview), where you can find similar stages and the descriptions. You need to create credentials for your code repository, your Docker Hub registry, and the kubeconfig of your cluster in advance, and then set the URL of your repository and these credentials in corresponding steps. After you finish editing, the pipeline is ready to run.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/faq/_index.md b/content/en/docs/v3.4/faq/_index.md
new file mode 100644
index 000000000..753d10890
--- /dev/null
+++ b/content/en/docs/v3.4/faq/_index.md
@@ -0,0 +1,12 @@
+---
+title: "FAQ"
+description: "FAQ is designed to answer and summarize the questions users ask most frequently about KubeSphere."
+layout: "second"
+
+linkTitle: "FAQ"
+weight: 16000
+
+icon: "/images/docs/v3.3/docs.svg"
+---
+
+This chapter answers and summarizes the questions users ask most frequently about KubeSphere. You can find these questions and answers in their respective sections which are grouped based on KubeSphere functions.
diff --git a/content/en/docs/v3.4/faq/access-control/_index.md b/content/en/docs/v3.4/faq/access-control/_index.md
new file mode 100644
index 000000000..e36af958d
--- /dev/null
+++ b/content/en/docs/v3.4/faq/access-control/_index.md
@@ -0,0 +1,7 @@
+---
+title: "Access Control and Account Management FAQ"
+keywords: 'Kubernetes, KubeSphere, account, access control'
+description: 'FAQ about access control and account management'
+layout: "second"
+weight: 16400
+---
diff --git a/content/en/docs/v3.4/faq/access-control/add-kubernetes-namespace-to-kubesphere-workspace.md b/content/en/docs/v3.4/faq/access-control/add-kubernetes-namespace-to-kubesphere-workspace.md
new file mode 100644
index 000000000..e887f6b72
--- /dev/null
+++ b/content/en/docs/v3.4/faq/access-control/add-kubernetes-namespace-to-kubesphere-workspace.md
@@ -0,0 +1,38 @@
+---
+title: "Add Existing Kubernetes Namespaces to a KubeSphere Workspace"
+keywords: "namespace, project, KubeSphere, Kubernetes"
+description: "Add your existing Kubernetes namespaces to a KubeSphere workspace."
+linkTitle: "Add existing Kubernetes namespaces to a KubeSphere Workspace"
+Weight: 16430
+---
+
+A Kubernetes namespace corresponds to a KubeSphere project. If you create a namespace outside the KubeSphere console, the namespace does not automatically appear in any workspace, but cluster administrators can still see it on the **Cluster Management** page and can assign it to a workspace.
+
+This tutorial demonstrates how to add an existing Kubernetes namespace to a KubeSphere workspace.
+
+## Prerequisites
+
+- You need a user granted a role including the permission of **Cluster Management**. For example, you can log in to the console as `admin` directly or create a new role with the permission and assign it to a user.
+
+- You have an available workspace so that the namespace can be assigned to it. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Create a Kubernetes Namespace
+
+Create an example Kubernetes namespace first so that you can add it to a workspace later. Execute the following command:
+
+```bash
+kubectl create ns demo-namespace
+```
+
+For more information about creating a Kubernetes namespace, see [Namespaces Walkthrough](https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/).
+
+## Add the Namespace to a KubeSphere Workspace
+
+1. Log in to the KubeSphere console as `admin` and go to the **Cluster Management** page. Click **Projects**, and you can see all your projects running on the current cluster, including the one just created.
+
+2. The namespace created through kubectl does not belong to any workspace. Click
on the right and select **Assign Workspace**.
+
+3. In the dialog that appears, select a **Workspace** and a **Project Administrator** for the project and click **OK**.
+
+4. Go to your workspace and you can see the project on the **Projects** page. You can also confirm the assignment from the command line, as shown below.
+
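+If you want to double-check the assignment from the command line, you can inspect the labels on the namespace. As a sketch, this assumes the workspace binding shows up as a `kubesphere.io/workspace` label, which is the usual layout but worth verifying on your cluster:
+
+```bash
+# Show the labels on the namespace; the assigned workspace should appear among them.
+kubectl get ns demo-namespace --show-labels
+```
+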
diff --git a/content/en/docs/v3.4/faq/access-control/cannot-login.md b/content/en/docs/v3.4/faq/access-control/cannot-login.md
new file mode 100644
index 000000000..ba0d8bd39
--- /dev/null
+++ b/content/en/docs/v3.4/faq/access-control/cannot-login.md
@@ -0,0 +1,141 @@
+---
+title: "User Login Failure"
+keywords: "login failure, user is not active, KubeSphere, Kubernetes"
+description: "How to solve the issue of login failure"
+linkTitle: "User Login Failure"
+Weight: 16440
+---
+
+KubeSphere automatically creates a default user (`admin/P@88w0rd`) during installation. A user cannot log in if its status is not **Active** or if the password is incorrect.
+
+Here are some of the frequently asked questions about user login failure.
+
+## User Not Active
+
+If a login fails because the user is not active, perform the following steps to find out the reason and solve the issue:
+
+
+
+1. Execute the following command to check the status of the user.
+
+ ```bash
+ $ kubectl get users
+ NAME EMAIL STATUS
+ admin admin@kubesphere.io Active
+ ```
+
+2. Verify that `ks-controller-manager` is running and check its logs for exceptions:
+
+ ```bash
+ kubectl -n kubesphere-system logs -l app=ks-controller-manager
+ ```
+
+Here are some possible reasons for this issue.
+
+### Admission webhooks malfunction in Kubernetes 1.19
+
+Kubernetes 1.19 is built with Go 1.15, which requires admission webhook certificates to use SANs instead of the legacy Common Name field. This causes the `ks-controller-manager` admission webhook to fail.
+
+Related error logs:
+
+```bash
+Internal error occurred: failed calling webhook "validating-user.kubesphere.io": Post "https://ks-controller-manager.kubesphere-system.svc:443/validate-email-iam-kubesphere-io-v1alpha2-user?timeout=30s": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0
+```
+
+For more information about the issue and solution, see this [GitHub issue](https://github.com/kubesphere/kubesphere/issues/2928).
+
+### ks-controller-manager malfunctions
+
+`ks-controller-manager` relies on two stateful Services: OpenLDAP and Jenkins. When OpenLDAP or Jenkins goes down, `ks-controller-manager` will be stuck in the `reconcile` state.
+
+Execute the following commands to verify that OpenLDAP and Jenkins are running normally.
+
+```
+kubectl -n kubesphere-devops-system get po | grep -v Running
+kubectl -n kubesphere-system get po | grep -v Running
+kubectl -n kubesphere-system logs -l app=openldap
+```
+
+Related error logs:
+
+```bash
+failed to connect to ldap service, please check ldap status, error: factory is not able to fill the pool: LDAP Result Code 200 \"Network Error\": dial tcp: lookup openldap.kubesphere-system.svc on 169.254.25.10:53: no such host
+```
+
+```bash
+Internal error occurred: failed calling webhook “validating-user.kubesphere.io”: Post https://ks-controller-manager.kubesphere-system.svc:443/validate-email-iam-kubesphere-io-v1alpha2-user?timeout=4s: context deadline exceeded
+```
+
+#### Solution
+
+You need to restore OpenLDAP and Jenkins, make sure they have a good network connection, and then restart `ks-controller-manager`.
+
+```
+kubectl -n kubesphere-system rollout restart deploy ks-controller-manager
+```
+
+### Wrong code branch used
+
+If you used an incorrect version of ks-installer, the versions of different components will not match after the installation. Execute the following commands to check version consistency. Note that the correct image tag is `v3.3.2`.
+
+```
+kubectl -n kubesphere-system get deploy ks-installer -o jsonpath='{.spec.template.spec.containers[0].image}'
+kubectl -n kubesphere-system get deploy ks-apiserver -o jsonpath='{.spec.template.spec.containers[0].image}'
+kubectl -n kubesphere-system get deploy ks-controller-manager -o jsonpath='{.spec.template.spec.containers[0].image}'
+```
+
+## Wrong Username or Password
+
+
+
+Run the following command to verify that the username and the password are correct.
+
+```
+curl -u
on the right of `ks-installer` and select **Edit YAML**.
+
+5. Scroll down to the bottom of the file, add `telemetry_enabled: false`, and then click **OK**. An equivalent one-line patch is sketched after this step.
+
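+If you prefer not to edit the YAML interactively, the same change can be applied as a one-line patch. This is only a sketch and assumes `telemetry_enabled` sits directly under `spec` in the `ks-installer` ClusterConfiguration, so adjust the path if your layout differs:
+
+```bash
+# Hypothetical equivalent of the manual edit above.
+kubectl -n kubesphere-system patch clusterconfiguration ks-installer --type merge \
+  -p '{"spec":{"telemetry_enabled":false}}'
+```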
+
+{{< notice note >}}
+
+If you want to enable Telemetry again, you can update `ks-installer` by deleting `telemetry_enabled: false` or changing it to `telemetry_enabled: true`.
+
+{{</ notice >}}
diff --git a/content/en/docs/v3.4/faq/multi-cluster-management/_index.md b/content/en/docs/v3.4/faq/multi-cluster-management/_index.md
new file mode 100644
index 000000000..05c8c18b9
--- /dev/null
+++ b/content/en/docs/v3.4/faq/multi-cluster-management/_index.md
@@ -0,0 +1,7 @@
+---
+title: "Multi-cluster Management"
+keywords: 'Kubernetes, KubeSphere, Multi-cluster Management, Host Cluster, Member Cluster'
+description: 'FAQ about multi-cluster management in KubeSphere'
+layout: "second"
+weight: 16700
+---
diff --git a/content/en/docs/v3.4/faq/multi-cluster-management/host-cluster-access-member-cluster.md b/content/en/docs/v3.4/faq/multi-cluster-management/host-cluster-access-member-cluster.md
new file mode 100644
index 000000000..bd93b4c8b
--- /dev/null
+++ b/content/en/docs/v3.4/faq/multi-cluster-management/host-cluster-access-member-cluster.md
@@ -0,0 +1,71 @@
+---
+title: "Restore the Host Cluster Access to A Member Cluster"
+keywords: "Kubernetes, KubeSphere, Multi-cluster, Host Cluster, Member Cluster"
+description: "Learn how to restore the Host Cluster access to a Member Cluster."
+linkTitle: "Restore the Host Cluster Access to A Member Cluster"
+Weight: 16720
+---
+
+KubeSphere features [multi-cluster management](../../../multicluster-management/introduction/kubefed-in-kubesphere/), and tenants with necessary permissions (usually cluster administrators) can access the central control plane from the Host Cluster to manage all the Member Clusters. It is highly recommended that you manage resources across clusters through the Host Cluster.
+
+This tutorial demonstrates how to restore the Host Cluster access to a Member Cluster.
+
+## Possible Error Message
+
+If you can't access a Member Cluster from the central control plane and your browser keeps redirecting you to the login page of KubeSphere, run the following command on that Member Cluster to get the logs of the ks-apiserver.
+
+```
+kubectl -n kubesphere-system logs ks-apiserver-7c9c9456bd-qv6bs
+```
+
+{{< notice note >}}
+
+`ks-apiserver-7c9c9456bd-qv6bs` refers to the Pod ID on that Member Cluster. Make sure you use the ID of your own Pod.
+
+{{</ notice >}}
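+
+If you would rather not look up the exact Pod name, a label selector can be used instead. This is a sketch that assumes the ks-apiserver Pods carry an `app=ks-apiserver` label, which is the usual layout but worth verifying on your cluster:
+
+```bash
+# Tail the logs of all ks-apiserver Pods selected by label.
+kubectl -n kubesphere-system logs -l app=ks-apiserver --tail=100
+```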
+
+You will probably see the following error message:
+
+```
+E0305 03:46:42.105625 1 token.go:65] token not found in cache
+E0305 03:46:42.105725 1 jwt_token.go:45] token not found in cache
+E0305 03:46:42.105759 1 authentication.go:60] Unable to authenticate the request due to error: token not found in cache
+E0305 03:46:52.045964 1 token.go:65] token not found in cache
+E0305 03:46:52.045992 1 jwt_token.go:45] token not found in cache
+E0305 03:46:52.046004 1 authentication.go:60] Unable to authenticate the request due to error: token not found in cache
+E0305 03:47:34.502726 1 token.go:65] token not found in cache
+E0305 03:47:34.502751 1 jwt_token.go:45] token not found in cache
+E0305 03:47:34.502764 1 authentication.go:60] Unable to authenticate the request due to error: token not found in cache
+```
+
+## Solution
+
+### Step 1: Verify the jwtSecret
+
+Run the following command on your Host Cluster and Member Cluster respectively to confirm whether their jwtSecrets are identical.
+
+```
+kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret
+```
+
+### Step 2: Modify `accessTokenMaxAge`
+
+After confirming that the jwtSecrets are identical, run the following command on that Member Cluster to get the value of `accessTokenMaxAge`.
+
+```
+kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep accessTokenMaxAge
+```
+
+If the value is not `0`, run the following command to modify the value of `accessTokenMaxAge`.
+
+```
+kubectl -n kubesphere-system edit cm kubesphere-config -o yaml
+```
+
+After you change the value of `accessTokenMaxAge` to `0`, run the following command to restart ks-apiserver.
+
+```
+kubectl -n kubesphere-system rollout restart deploy ks-apiserver
+```
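+
+If you want to wait until the restart has fully rolled out before retrying access, you can watch the rollout status:
+
+```bash
+# Block until the new ks-apiserver Pods are ready.
+kubectl -n kubesphere-system rollout status deploy ks-apiserver
+```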
+
+Now, you can access that Member Cluster from the central control plane again.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/faq/multi-cluster-management/manage-multi-cluster.md b/content/en/docs/v3.4/faq/multi-cluster-management/manage-multi-cluster.md
new file mode 100644
index 000000000..5bb132e42
--- /dev/null
+++ b/content/en/docs/v3.4/faq/multi-cluster-management/manage-multi-cluster.md
@@ -0,0 +1,61 @@
+---
+title: "Manage a Multi-cluster Environment on KubeSphere"
+keywords: 'Kubernetes, KubeSphere, federation, multicluster, hybrid-cloud'
+description: 'Understand how to manage a multi-cluster environment on KubeSphere.'
+linkTitle: "Manage a Multi-cluster Environment on KubeSphere"
+weight: 16710
+---
+
+KubeSphere provides an easy-to-use multi-cluster feature to help you [build your multi-cluster environment on KubeSphere](../../../multicluster-management/). This guide illustrates how to manage a multi-cluster environment on KubeSphere.
+
+## Prerequisites
+
+- Make sure your Kubernetes clusters are installed with KubeSphere before you use them as your Host Cluster and Member Clusters.
+
+- Make sure the cluster role is set correctly on your Host Cluster and Member Clusters respectively, and the `jwtSecret` is the same between them. A quick way to check both is sketched after this list.
+
+- It is recommended that your Member Cluster is in a clean environment where no resources have been created on it before it is imported to the Host Cluster.
+
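+A minimal sketch for checking both items with kubectl is shown below. It assumes the cluster role is recorded at `spec.multicluster.clusterRole` of the `ks-installer` ClusterConfiguration; run the commands on the Host Cluster and on each Member Cluster, then compare the outputs.
+
+```bash
+# Print the cluster role (host or member) configured for this cluster.
+kubectl -n kubesphere-system get clusterconfiguration ks-installer -o jsonpath='{.spec.multicluster.clusterRole}'; echo
+# Print the jwtSecret; it must be identical across all clusters.
+kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret
+```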
+
+## Manage your KubeSphere Multi-cluster Environment
+
+Once you build a multi-cluster environment on KubeSphere, you can manage it through the central control plane from your Host Cluster. When creating resources, you can select a specific cluster; avoid scheduling workloads on the Host Cluster to prevent overload. It is not recommended to log in to the KubeSphere web console of your Member Clusters to create resources there, because some resources (for example, workspaces) will not be synchronized to your Host Cluster for management.
+
+### Resource Management
+
+It is not recommended that you change a Host Cluster to a Member Cluster or the other way round. If a Member Cluster has been imported to a Host Cluster before, you have to use the same cluster name when importing it to a new Host Cluster after unbinding it from the previous Host Cluster.
+
+If you want to import the Member Cluster to a new Host Cluster while retaining existing projects, you can follow the steps as below.
+
+1. Run the following command on the Member Cluster to unbind the projects to be retained from your workspace.
+
+ ```bash
+    kubectl label ns
+    ```
+
+| Parameter | Description |
+| --- | --- |
+| **kubernetes** | |
+| version | The Kubernetes version to be installed. If you do not specify a Kubernetes version, {{< contentLink "docs/installing-on-linux/introduction/kubekey" "KubeKey" >}} v3.0.7 will install Kubernetes v1.23.10 by default. For more information, see {{< contentLink "docs/installing-on-linux/introduction/kubekey/#support-matrix" "Support Matrix" >}}. |
+| imageRepo | The Docker Hub repository where images will be downloaded. |
+| clusterName | The Kubernetes cluster name. |
+| masqueradeAll* | masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. It defaults to false. |
+| maxPods* | The maximum number of Pods that can run on this Kubelet. It defaults to 110. |
+| nodeCidrMaskSize* | The mask size for node CIDR in your cluster. It defaults to 24. |
+| proxyMode* | The proxy mode to use. It defaults to ipvs. |
+| **network** | |
+| plugin | The CNI plugin to use. KubeKey installs Calico by default while you can also specify Flannel. Note that some features can only be used when Calico is adopted as the CNI plugin, such as Pod IP Pools. |
+| calico.ipipMode* | The IPIP Mode to use for the IPv4 POOL created at startup. If it is set to a value other than Never, vxlanMode should be set to Never. Allowed values are Always, CrossSubnet and Never. It defaults to Always. |
+| calico.vxlanMode* | The VXLAN Mode to use for the IPv4 POOL created at startup. If it is set to a value other than Never, ipipMode should be set to Never. Allowed values are Always, CrossSubnet and Never. It defaults to Never. |
+| calico.vethMTU* | The maximum transmission unit (MTU) setting determines the largest packet size that can be transmitted through your network. It defaults to 1440. |
+| kubePodsCIDR | A valid CIDR block for your Kubernetes Pod subnet. It should not overlap with your node subnet and your Kubernetes Services subnet. |
+| kubeServiceCIDR | A valid CIDR block for your Kubernetes Services. It should not overlap with your node subnet and your Kubernetes Pod subnet. |
+| **registry** | |
+| registryMirrors | Configure a Docker registry mirror to speed up downloads. For more information, see {{< contentLink "https://docs.docker.com/registry/recipes/mirror/#configure-the-docker-daemon" "Configure the Docker daemon" >}}. |
+| insecureRegistries | Set an address of insecure image registry. For more information, see {{< contentLink "https://docs.docker.com/registry/insecure/" "Test an insecure registry" >}}. |
+| privateRegistry* | Configure a private image registry for air-gapped installation (for example, a Docker local registry or Harbor). For more information, see {{< contentLink "docs/v3.3/installing-on-linux/introduction/air-gapped-installation/" "Air-gapped Installation on Linux" >}}. |
on the right of the member cluster, and click **Update KubeConfig**.
+
+3. In the **Update KubeConfig** dialog box that is displayed, enter the new kubeconfig, and click **Update**.
+
+
+
diff --git a/content/en/docs/v3.4/multicluster-management/import-cloud-hosted-k8s/_index.md b/content/en/docs/v3.4/multicluster-management/import-cloud-hosted-k8s/_index.md
new file mode 100644
index 000000000..92ba09b39
--- /dev/null
+++ b/content/en/docs/v3.4/multicluster-management/import-cloud-hosted-k8s/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Import Cloud-hosted Kubernetes Clusters"
+weight: 5300
+
+_build:
+ render: false
+---
diff --git a/content/en/docs/v3.4/multicluster-management/import-cloud-hosted-k8s/import-aliyun-ack.md b/content/en/docs/v3.4/multicluster-management/import-cloud-hosted-k8s/import-aliyun-ack.md
new file mode 100644
index 000000000..abac113c6
--- /dev/null
+++ b/content/en/docs/v3.4/multicluster-management/import-cloud-hosted-k8s/import-aliyun-ack.md
@@ -0,0 +1,70 @@
+---
+title: "Import an Alibaba Cloud Kubernetes (ACK) Cluster"
+keywords: 'Kubernetes, KubeSphere, multicluster, ACK'
+description: 'Learn how to import an Alibaba Cloud Kubernetes cluster.'
+linkTitle: "Import an Alibaba Cloud Kubernetes (ACK) Cluster"
+weight: 5310
+---
+
+This tutorial demonstrates how to import an Alibaba Cloud Kubernetes (ACK) cluster through the [direct connection](../../../multicluster-management/enable-multicluster/direct-connection/) method. If you want to use the agent connection method, refer to [Agent Connection](../../../multicluster-management/enable-multicluster/agent-connection/).
+
+## Prerequisites
+
+- You have a Kubernetes cluster with KubeSphere installed, and prepared this cluster as the host cluster. For more information about how to prepare a host cluster, refer to [Prepare a host cluster](../../../multicluster-management/enable-multicluster/direct-connection/#prepare-a-host-cluster).
+- You have an ACK cluster with KubeSphere installed to be used as the member cluster.
+
+## Import an ACK Cluster
+
+### Step 1: Prepare the ACK Member Cluster
+
+1. In order to manage the member cluster from the host cluster, you need to make `jwtSecret` the same between them. Therefore, get it first by executing the following command on your host cluster.
+
+ ```bash
+ kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret
+ ```
+
+ The output is similar to the following:
+
+ ```yaml
+ jwtSecret: "QVguGh7qnURywHn2od9IiOX6X8f8wK8g"
+ ```
+
+2. Log in to the KubeSphere console of the ACK cluster as `admin`. Click **Platform** in the upper-left corner and then select **Cluster Management**.
+
+3. Go to **CRDs**, enter `ClusterConfiguration` in the search bar, and then press **Enter** on your keyboard. Click **ClusterConfiguration** to go to its detail page.
+
+4. Click
on the right and then select **Edit YAML** to edit `ks-installer`.
+
+5. In the YAML file of `ks-installer`, change the value of `jwtSecret` to the corresponding value shown above and set the value of `clusterRole` to `member`. Click **Update** to save your changes.
+
+ ```yaml
+ authentication:
+ jwtSecret: QVguGh7qnURywHn2od9IiOX6X8f8wK8g
+ ```
+
+ ```yaml
+ multicluster:
+ clusterRole: member
+ ```
+
+ {{< notice note >}}
+
+ Make sure you use the value of your own `jwtSecret`. You need to wait for a while so that the changes can take effect.
+
+    {{</ notice >}}
+
+### Step 2: Get the kubeconfig file
+
+Log in to the web console of Alibaba Cloud. Go to **Clusters** under **Container Service - Kubernetes**, click your cluster to go to its detail page, and then select the **Connection Information** tab. You can see the kubeconfig file under the **Public Access** tab. Copy the contents of the kubeconfig file.
+
+
+
+### Step 3: Import the ACK member cluster
+
+1. Log in to the KubeSphere console on your host cluster as `admin`. Click **Platform** in the upper-left corner and then select **Cluster Management**. On the **Cluster Management** page, click **Add Cluster**.
+
+2. Enter the basic information based on your needs and click **Next**.
+
+3. In **Connection Method**, select **Direct connection**. Fill in the kubeconfig file of the ACK member cluster and then click **Create**.
+
+4. Wait for cluster initialization to finish. You can roughly track the progress from the host cluster, as sketched below.
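+
+    A rough way to follow the progress is to watch the KubeSphere `Cluster` resource on the host cluster. This is only a sketch: it assumes the multi-cluster feature exposes member clusters as `clusters.cluster.kubesphere.io` objects, and `ack-member` below is a placeholder for the name you entered when adding the cluster.
+
+    ```bash
+    # Run on the host cluster.
+    kubectl get clusters.cluster.kubesphere.io
+    kubectl describe clusters.cluster.kubesphere.io ack-member
+    ```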
\ No newline at end of file
diff --git a/content/en/docs/v3.4/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md b/content/en/docs/v3.4/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md
new file mode 100644
index 000000000..c1dc96bf9
--- /dev/null
+++ b/content/en/docs/v3.4/multicluster-management/import-cloud-hosted-k8s/import-aws-eks.md
@@ -0,0 +1,171 @@
+---
+title: "Import an AWS EKS Cluster"
+keywords: 'Kubernetes, KubeSphere, multicluster, Amazon EKS'
+description: 'Learn how to import an Amazon Elastic Kubernetes Service cluster.'
+linkTitle: "Import an AWS EKS Cluster"
+weight: 5320
+---
+
+This tutorial demonstrates how to import an AWS EKS cluster through the [direct connection](../../../multicluster-management/enable-multicluster/direct-connection/) method. If you want to use the agent connection method, refer to [Agent Connection](../../../multicluster-management/enable-multicluster/agent-connection/).
+
+## Prerequisites
+
+- You have a Kubernetes cluster with KubeSphere installed, and prepared this cluster as the host cluster. For more information about how to prepare a host cluster, refer to [Prepare a host cluster](../../../multicluster-management/enable-multicluster/direct-connection/#prepare-a-host-cluster).
+- You have an EKS cluster to be used as the member cluster.
+
+## Import an EKS Cluster
+
+### Step 1: Deploy KubeSphere on your EKS cluster
+
+You need to deploy KubeSphere on your EKS cluster first. For more information about how to deploy KubeSphere on EKS, refer to [Deploy KubeSphere on AWS EKS](../../../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks/#install-kubesphere-on-eks).
+
+### Step 2: Prepare the EKS member cluster
+
+1. In order to manage the member cluster from the host cluster, you need to make `jwtSecret` the same between them. Therefore, get it first by executing the following command on your host cluster.
+
+ ```bash
+ kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret
+ ```
+
+ The output is similar to the following:
+
+ ```yaml
+ jwtSecret: "QVguGh7qnURywHn2od9IiOX6X8f8wK8g"
+ ```
+
+2. Log in to the KubeSphere console of the EKS cluster as `admin`. Click **Platform** in the upper-left corner and then select **Cluster Management**.
+
+3. Go to **CRDs**, enter `ClusterConfiguration` in the search bar, and then press **Enter** on your keyboard. Click **ClusterConfiguration** to go to its detail page.
+
+4. Click
on the right and then select **Edit YAML** to edit `ks-installer`.
+
+5. In the YAML file of `ks-installer`, change the value of `jwtSecret` to the corresponding value shown above and set the value of `clusterRole` to `member`. Click **Update** to save your changes.
+
+ ```yaml
+ authentication:
+ jwtSecret: QVguGh7qnURywHn2od9IiOX6X8f8wK8g
+ ```
+
+ ```yaml
+ multicluster:
+ clusterRole: member
+ ```
+
+ {{< notice note >}}
+
+ Make sure you use the value of your own `jwtSecret`. You need to wait for a while so that the changes can take effect.
+
+    {{</ notice >}}
+
+### Step 3: Create a new kubeconfig file
+
+1. [Amazon EKS](https://docs.aws.amazon.com/eks/index.html) doesn’t provide a built-in kubeconfig file as a standard kubeadm cluster does. Nevertheless, you can create a kubeconfig file by referring to this [document](https://docs.aws.amazon.com/eks/latest/userguide/create-kubeconfig.html). The generated kubeconfig file will be like the following:
+
+ ```yaml
+ apiVersion: v1
+ clusters:
+ - cluster:
+ server:
on the right and then select **Edit YAML** to edit `ks-installer`.
+
+5. In the YAML file of `ks-installer`, change the value of `jwtSecret` to the corresponding value shown above and set the value of `clusterRole` to `member`.
+
+ ```yaml
+ authentication:
+ jwtSecret: QVguGh7qnURywHn2od9IiOX6X8f8wK8g
+ ```
+
+ ```yaml
+ multicluster:
+ clusterRole: member
+ ```
+
+ {{< notice note >}}
+
+ Make sure you use the value of your own `jwtSecret`. You need to wait for a while so that the changes can take effect.
+
+    {{</ notice >}}
+
+### Step 3: Create a new kubeconfig file
+
+1. Run the following commands on your GKE Cloud Shell Terminal:
+
+ ```bash
+ TOKEN=$(kubectl -n kubesphere-system get secret $(kubectl -n kubesphere-system get sa kubesphere -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d)
+ kubectl config set-credentials kubesphere --token=${TOKEN}
+ kubectl config set-context --current --user=kubesphere
+ ```
+
+2. Retrieve the new kubeconfig file by running the following command:
+
+ ```bash
+ cat ~/.kube/config
+ ```
+
+ The output is similar to the following:
+
+ ```yaml
+ apiVersion: v1
+ clusters:
+ - cluster:
+ certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURLekNDQWhPZ0F3SUJBZ0lSQUtPRUlDeFhyWEdSbjVQS0dlRXNkYzR3RFFZSktvWklodmNOQVFFTEJRQXcKTHpFdE1Dc0dBMVVFQXhNa1pqVTBNVFpoTlRVdFpEZzFZaTAwWkdZNUxXSTVNR1V0TkdNeE0yRTBPR1ZpWW1VMwpNQjRYRFRJeE1ETXhNVEl5TXpBMU0xb1hEVEkyTURNeE1ESXpNekExTTFvd0x6RXRNQ3NHQTFVRUF4TWtaalUwCk1UWmhOVFV0WkRnMVlpMDBaR1k1TFdJNU1HVXROR014TTJFME9HVmlZbVUzTUlJQklqQU5CZ2txaGtpRzl3MEIKQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBdkVHVGtKRjZLVEl3QktlbXNYd3dPSnhtU3RrMDlKdXh4Z1grM0dTMwpoeThVQm5RWEo1d3VIZmFGNHNWcDFzdGZEV2JOZitESHNxaC9MV3RxQk5iSlNCU1ppTC96V3V5OUZNeFZMS2czCjVLdnNnM2drdUpVaFVuK0tMUUFPdTNUWHFaZ2tTejE1SzFOSU9qYm1HZGVWSm5KQTd6NTF2ZkJTTStzQWhGWTgKejJPUHo4aCtqTlJseDAvV0UzTHZEUUMvSkV4WnRCRGFuVFU0anpHMHR2NGk1OVVQN2lWbnlwRHk0dkFkWm5mbgowZncwVnplUXJqT2JuQjdYQTZuUFhseXZubzErclRqakFIMUdtU053c1IwcDRzcEViZ0lXQTNhMmJzeUN5dEJsCjVOdmJKZkVpSTFoTmFOZ3hoSDJNenlOUWVhYXZVa29MdDdPN0xqYzVFWlo4cFFJREFRQUJvMEl3UURBT0JnTlYKSFE4QkFmOEVCQU1DQWdRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVUVyVkJrc3MydGV0Qgp6ZWhoRi92bGdVMlJiM2N3RFFZSktvWklodmNOQVFFTEJRQURnZ0VCQUdEZVBVa3I1bDB2OTlyMHZsKy9WZjYrCitBanVNNFoyOURtVXFHVC80OHBaR1RoaDlsZDQxUGZKNjl4eXFvME1wUlIyYmJuTTRCL2NVT1VlTE5VMlV4VWUKSGRlYk1oQUp4Qy9Uaks2SHpmeExkTVdzbzVSeVAydWZEOFZob2ZaQnlBVWczajdrTFgyRGNPd1lzNXNrenZ0LwpuVUlhQURLaXhtcFlSSWJ6MUxjQmVHbWROZ21iZ0hTa3MrYUxUTE5NdDhDQTBnSExhMER6ODhYR1psSi80VmJzCjNaWVVXMVExY01IUHd5NnAwV2kwQkpQeXNaV3hZdFJyV3JFWUhZNVZIanZhUG90S3J4Y2NQMUlrNGJzVU1ZZ0wKaTdSaHlYdmJHc0pKK1lNc3hmalU5bm5XYVhLdXM5ZHl0WG1kRGw1R0hNU3VOeTdKYjIwcU5RQkxhWHFkVmY0PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
+ server: https://130.211.231.87
+ name: gke_grand-icon-307205_us-central1-c_cluster-3
+ contexts:
+ - context:
+ cluster: gke_grand-icon-307205_us-central1-c_cluster-3
+ user: gke_grand-icon-307205_us-central1-c_cluster-3
+ name: gke_grand-icon-307205_us-central1-c_cluster-3
+ current-context: gke_grand-icon-307205_us-central1-c_cluster-3
+ kind: Config
+ preferences: {}
+ users:
+ - name: gke_grand-icon-307205_us-central1-c_cluster-3
+ user:
+ auth-provider:
+ config:
+ cmd-args: config config-helper --format=json
+ cmd-path: /usr/lib/google-cloud-sdk/bin/gcloud
+ expiry-key: '{.credential.token_expiry}'
+ token-key: '{.credential.access_token}'
+ name: gcp
+ - name: kubesphere
+ user:
+ token: eyJhbGciOiJSUzI1NiIsImtpZCI6InNjOFpIb3RrY3U3bGNRSV9NWV8tSlJzUHJ4Y2xnMDZpY3hhc1BoVy0xTGsifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlc3BoZXJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlc3BoZXJlLXRva2VuLXpocmJ3Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Imt1YmVzcGhlcmUiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyMGFmZGI1Ny01MTBkLTRjZDgtYTAwYS1hNDQzYTViNGM0M2MiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXNwaGVyZS1zeXN0ZW06a3ViZXNwaGVyZSJ9.ic6LaS5rEQ4tXt_lwp7U_C8rioweP-ZdDjlIZq91GOw9d6s5htqSMQfTeVlwTl2Bv04w3M3_pCkvRzMD0lHg3mkhhhP_4VU0LIo4XeYWKvWRoPR2kymLyskAB2Khg29qIPh5ipsOmGL9VOzD52O2eLtt_c6tn-vUDmI_Zw985zH3DHwUYhppGM8uNovHawr8nwZoem27XtxqyBkqXGDD38WANizyvnPBI845YqfYPY5PINPYc9bQBFfgCovqMZajwwhcvPqS6IpG1Qv8TX2lpuJIK0LLjiKaHoATGvHLHdAZxe_zgAC2cT_9Ars3HIN4vzaSX0f-xP--AcRgKVSY9g
+ ```
+
+### Step 4: Import the GKE member cluster
+
+1. Log in to the KubeSphere console on your host cluster as `admin`. Click **Platform** in the upper-left corner and then select **Cluster Management**. On the **Cluster Management** page, click **Add Cluster**.
+
+2. Enter the basic information based on your needs and click **Next**.
+
+3. In **Connection Method**, select **Direct connection**. Fill in the new kubeconfig file of the GKE member cluster and then click **Create**.
+
+4. Wait for cluster initialization to finish.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/multicluster-management/introduction/_index.md b/content/en/docs/v3.4/multicluster-management/introduction/_index.md
new file mode 100644
index 000000000..0b97cbae9
--- /dev/null
+++ b/content/en/docs/v3.4/multicluster-management/introduction/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Introduction"
+weight: 5100
+
+_build:
+ render: false
+---
diff --git a/content/en/docs/v3.4/multicluster-management/introduction/kubefed-in-kubesphere.md b/content/en/docs/v3.4/multicluster-management/introduction/kubefed-in-kubesphere.md
new file mode 100644
index 000000000..d687f98ec
--- /dev/null
+++ b/content/en/docs/v3.4/multicluster-management/introduction/kubefed-in-kubesphere.md
@@ -0,0 +1,49 @@
+---
+title: "KubeSphere Federation"
+keywords: "Kubernetes, KubeSphere, federation, multicluster, hybrid-cloud"
+description: "Understand the fundamental concept of Kubernetes federation in KubeSphere, including member clusters and host clusters."
+linkTitle: "KubeSphere Federation"
+weight: 5120
+---
+
+The multi-cluster feature relates to the network connection among multiple clusters. Therefore, it is important to understand the topological relations of clusters.
+
+## How the Multi-cluster Architecture Works
+
+Before you use the central control plane of KubeSphere to manage multiple clusters, you need to create a **host** cluster. The host cluster is essentially a KubeSphere cluster with the multi-cluster feature enabled, and it provides the control plane for the unified management of **member** clusters, which are common KubeSphere clusters without the central control plane. Tenants with necessary permissions (usually cluster administrators) can access the control plane from the host cluster to manage all member clusters, for example, to view and edit resources on them. Conversely, if you access the web console of any member cluster separately, you cannot see any resources on other clusters.
+
+There can only be one host cluster while multiple member clusters can exist at the same time. In a multi-cluster architecture, the network between the host cluster and member clusters can be [connected directly](../../enable-multicluster/direct-connection/) or [through an agent](../../enable-multicluster/agent-connection/). Member clusters can be completely isolated from one another at the network level.
+
+If you are using on-premises Kubernetes clusters built through kubeadm, install KubeSphere on your Kubernetes clusters by referring to [Air-gapped Installation on Kubernetes](../../../installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped/), and then enable KubeSphere multi-cluster management through direct connection or agent connection.
+
+
+
+## Vendor Agnostic
+
+KubeSphere features a powerful, inclusive central control plane so that you can manage any KubeSphere clusters in a unified way regardless of deployment environments or cloud providers.
+
+## Resource Requirements
+
+Before you enable multi-cluster management, make sure you have enough resources in your environment.
+
+| Namespace | kube-federation-system | kubesphere-system |
+| -------------- | ---------------------- | ----------------- |
+| Sub-component | 2 x controller-manager | tower |
+| CPU Request | 100 m | 100 m |
+| CPU Limit | 500 m | 500 m |
+| Memory Request | 64 MiB | 128 MiB |
+| Memory Limit | 512 MiB | 256 MiB |
+| Installation | Optional | Optional |
+
+{{< notice note >}}
+
+- The CPU and memory requests and limits all refer to a single replica.
+- After the multi-cluster feature is enabled, tower and controller-manager will be installed on the host cluster. If you use [agent connection](../../../multicluster-management/enable-multicluster/agent-connection/), only tower is needed for member clusters. If you use [direct connection](../../../multicluster-management/enable-multicluster/direct-connection/), no additional component is needed for member clusters.
+
+{{</ notice >}}
+
+## Use the App Store in a Multi-cluster Architecture
+
+Different from other components in KubeSphere, the [KubeSphere App Store](../../../pluggable-components/app-store/) serves as a global application pool for all clusters, including host cluster and member clusters. You only need to enable the App Store on the host cluster and you can use functions related to the App Store on member clusters directly (no matter whether the App Store is enabled on member clusters or not), such as [app templates](../../../project-user-guide/application/app-template/) and [app repositories](../../../workspace-administration/app-repository/import-helm-repository/).
+
+However, if you only enable the App Store on member clusters without enabling it on the host cluster, you will not be able to use the App Store on any cluster in the multi-cluster architecture.
diff --git a/content/en/docs/v3.4/multicluster-management/introduction/overview.md b/content/en/docs/v3.4/multicluster-management/introduction/overview.md
new file mode 100644
index 000000000..8568c836e
--- /dev/null
+++ b/content/en/docs/v3.4/multicluster-management/introduction/overview.md
@@ -0,0 +1,15 @@
+---
+title: "Kubernetes Multi-Cluster Management — Overview"
+keywords: 'Kubernetes, KubeSphere, multicluster, hybrid-cloud'
+description: 'Gain a basic understanding of multi-cluster management, such as its common use cases, and the benefits that KubeSphere can bring with its multi-cluster feature.'
+linkTitle: "Overview"
+weight: 5110
+---
+
+Today, it's very common for organizations to run and manage multiple Kubernetes clusters across different cloud providers or infrastructures. As each Kubernetes cluster is a relatively self-contained unit, the upstream community has been working hard on multi-cluster management solutions. Kubernetes Cluster Federation ([KubeFed](https://github.com/kubernetes-sigs/kubefed) for short) is one possible approach among others.
+
+The most common use cases of multi-cluster management include service traffic load balancing, development and production isolation, decoupling of data processing and data storage, cross-cloud backup and disaster recovery, flexible allocation of computing resources, low latency access with cross-region services, and vendor lock-in avoidance.
+
+KubeSphere is developed to address multi-cluster and multi-cloud management challenges, including the scenarios mentioned above. It provides users with a unified control plane to distribute applications and their replicas to multiple clusters, from public clouds to on-premises environments. KubeSphere also boasts rich observability across multiple clusters, including centralized monitoring, logging, events, and auditing logs.
+
+
diff --git a/content/en/docs/v3.4/multicluster-management/unbind-cluster.md b/content/en/docs/v3.4/multicluster-management/unbind-cluster.md
new file mode 100644
index 000000000..e6dc92b65
--- /dev/null
+++ b/content/en/docs/v3.4/multicluster-management/unbind-cluster.md
@@ -0,0 +1,61 @@
+---
+title: "Remove a Member Cluster"
+keywords: 'Kubernetes, KubeSphere, multicluster, hybrid-cloud'
+description: 'Learn how to remove a member cluster from your cluster pool in KubeSphere.'
+linkTitle: "Remove a Member Cluster"
+weight: 5500
+---
+
+This tutorial demonstrates how to remove a member cluster on the KubeSphere web console.
+
+## Prerequisites
+
+- You have enabled multi-cluster management.
+- You need a user granted a role including the authorization of **Cluster Management**. For example, you can log in to the console as `admin` directly or create a new role with the authorization and assign it to a user.
+
+## Remove a Cluster
+
+You can remove a cluster by using either of the following methods:
+
+**Method 1**
+
+1. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+2. In the **Member Clusters** area, click
on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, navigate to `alerting` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ alerting:
+ enabled: true # Change "false" to "true".
+ ```
+
+5. You can use the web kubectl to check the installation process by executing the following command:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+You can find the web kubectl tool by clicking
in the lower-right corner of the console.
+    {{</ notice >}}
+
+## Verify the Installation of the Component
+
+If you can see **Alerting Messages** and **Alerting Policies** on the **Cluster Management** page, the installation is successful, because these two items are not displayed until the component is installed. You can also verify the component from kubectl, as shown below.
+
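+On a default installation the alerting workloads are expected to run as Thanos Ruler Pods in the `kubesphere-monitoring-system` namespace, although the exact Pod names may differ in your environment:
+
+```bash
+# Look for the thanos-ruler Pods that back the alerting component.
+kubectl get pod -n kubesphere-monitoring-system | grep thanos-ruler
+```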
+
+
diff --git a/content/en/docs/v3.4/pluggable-components/app-store.md b/content/en/docs/v3.4/pluggable-components/app-store.md
new file mode 100644
index 000000000..09c41607f
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/app-store.md
@@ -0,0 +1,120 @@
+---
+title: "KubeSphere App Store"
+keywords: "Kubernetes, KubeSphere, app-store, OpenPitrix"
+description: "Learn how to enable the KubeSphere App Store to share data and apps internally and set industry standards of delivery process externally."
+linkTitle: "KubeSphere App Store"
+weight: 6200
+---
+
+As an open-source and app-centric container platform, KubeSphere provides users with a Helm-based App Store for application lifecycle management on the back of [OpenPitrix](https://github.com/openpitrix/openpitrix), an open-source web-based system to package, deploy and manage different types of apps. The KubeSphere App Store allows ISVs, developers, and users to upload, test, install, and release apps with just several clicks in a one-stop shop.
+
+Internally, the KubeSphere App Store can serve as a place for different teams to share data, middleware, and office applications. Externally, it helps set industry standards for application building and delivery. After you enable this feature, you can add more apps with app templates.
+
+For more information, see [App Store](../../application-store/).
+
+## Enable the App Store Before Installation
+
+### Installing on Linux
+
+When you implement multi-node installation of KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](../../installing-on-linux/introduction/multioverview/), you create a default file `config-sample.yaml`. Modify the file by running the following command:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the App Store in this mode (for example, for testing purposes), refer to [the following section](#enable-app-store-after-installation) to see how the App Store can be installed after installation.
+    {{</ notice >}}
+
+2. In this file, search for `openpitrix` and change `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ openpitrix:
+ store:
+ enabled: true # Change "false" to "true".
+ ```
+
+3. Create a cluster using the configuration file:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### Installing on Kubernetes
+
+As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the KubeSphere App Store first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml) file.
+
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml) and edit it.
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. In this local `cluster-configuration.yaml` file, search for `openpitrix` and enable the App Store by changing `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ openpitrix:
+ store:
+ enabled: true # Change "false" to "true".
+ ```
+
+3. Run the following commands to start installation:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+## Enable the App Store After Installation
+
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+ {{< notice info >}}
+
+A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes objects.
+
+{{</ notice >}}
+
+3. In **Custom Resources**, click
on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, search for `openpitrix` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ openpitrix:
+ store:
+ enabled: true # Change "false" to "true".
+ ```
+
+5. Use the web kubectl to check the installation process by running the following command:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+You can find the web kubectl tool by clicking
in the lower-right corner of the console.
+
+{{</ notice >}}
+
+## Verify the Installation of the Component
+
+After you log in to the console, if you can see **App Store** in the upper-left corner and apps in it, it means the installation is successful.
+
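+If you prefer a kubectl-based check, you can also look at the Pods of the App Store backend. As a sketch, on a default setup they are expected in the `openpitrix-system` namespace:
+
+```bash
+# All openpitrix Pods should be Running or Completed.
+kubectl get pod -n openpitrix-system
+```
+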
+{{< notice note >}}
+
+- You can even access the App Store without logging in to the console by visiting `
on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, navigate to `auditing` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ auditing:
+ enabled: true # Change "false" to "true".
+ ```
+
+ {{< notice note >}}
+By default, Elasticsearch will be installed internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Auditing, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one.
+    {{</ notice >}}
+
+ ```yaml
+ es: # Storage backend for logging, tracing, events and auditing.
+ elasticsearchMasterReplicas: 1 # The total number of master nodes. Even numbers are not allowed.
+ elasticsearchDataReplicas: 1 # The total number of data nodes.
+ elasticsearchMasterVolumeSize: 4Gi # The volume size of Elasticsearch master nodes.
+ elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
+ logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
+ elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-
in the lower-right corner of the console.
+    {{</ notice >}}
+
+## Verify the Installation of the Component
+
+{{< tabs >}}
+
+{{< tab "Verify the component on the dashboard" >}}
+
+Verify that you can use the **Audit Log Search** function from the **Toolbox** in the lower-right corner.
+
+{{</ tab >}}
+
+{{< tab "Verify the component through kubectl" >}}
+
+Execute the following command to check the status of Pods:
+
+```bash
+kubectl get pod -n kubesphere-logging-system
+```
+
+The output may look as follows if the component runs successfully:
+
+```yaml
+NAME READY STATUS RESTARTS AGE
+elasticsearch-logging-curator-elasticsearch-curator-159872n9g9g 0/1 Completed 0 2d10h
+elasticsearch-logging-curator-elasticsearch-curator-159880tzb7x 0/1 Completed 0 34h
+elasticsearch-logging-curator-elasticsearch-curator-1598898q8w7 0/1 Completed 0 10h
+elasticsearch-logging-data-0 1/1 Running 1 2d20h
+elasticsearch-logging-data-1 1/1 Running 1 2d20h
+elasticsearch-logging-discovery-0 1/1 Running 1 2d20h
+fluent-bit-6v5fs 1/1 Running 1 2d20h
+fluentbit-operator-5bf7687b88-44mhq 1/1 Running 1 2d20h
+kube-auditing-operator-7574bd6f96-p4jvv 1/1 Running 1 2d20h
+kube-auditing-webhook-deploy-6dfb46bb6c-hkhmx 1/1 Running 1 2d20h
+kube-auditing-webhook-deploy-6dfb46bb6c-jp77q 1/1 Running 1 2d20h
+```
+
+{{</ tab >}}
+
+{{</ tabs >}}
diff --git a/content/en/docs/v3.4/pluggable-components/devops.md b/content/en/docs/v3.4/pluggable-components/devops.md
new file mode 100644
index 000000000..a090184a1
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/devops.md
@@ -0,0 +1,130 @@
+---
+title: "KubeSphere DevOps System"
+keywords: "Kubernetes, Jenkins, KubeSphere, DevOps, cicd"
+description: "Learn how to enable DevOps to further free your developers and let them focus on code writing."
+linkTitle: "KubeSphere DevOps System"
+weight: 6300
+---
+
+The KubeSphere DevOps System is designed for CI/CD workflows in Kubernetes. Based on [Jenkins](https://jenkins.io/), it provides one-stop solutions to help both development and Ops teams build, test and publish apps to Kubernetes in a straightforward way. It also features plugin management, [Binary-to-Image (B2I)](../../project-user-guide/image-builder/binary-to-image/), [Source-to-Image (S2I)](../../project-user-guide/image-builder/source-to-image/), code dependency caching, code quality analysis, pipeline logging, and more.
+
+The DevOps System offers an automated environment for users as apps can be automatically released to the same platform. It is also compatible with third-party private image registries (for example, Harbor) and code repositories (for example, GitLab/GitHub/SVN/BitBucket). As such, it creates excellent user experience by providing users with comprehensive, visualized CI/CD pipelines which are extremely useful in air-gapped environments.
+
+For more information, see [DevOps User Guide](../../devops-user-guide/).
+
+## Enable DevOps Before Installation
+
+### Installing on Linux
+
+When you implement multi-node installation of KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](../../installing-on-linux/introduction/multioverview/), you create a default file `config-sample.yaml`. Modify the file by running the following command:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable DevOps in this mode (for example, for testing purposes), refer to [the following section](#enable-devops-after-installation) to see how DevOps can be installed after installation.
+ {{ notice >}}
+
+2. In this file, search for `devops` and change `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ devops:
+ enabled: true # Change "false" to "true".
+ ```
+
+3. Create a cluster using the configuration file:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### Installing on Kubernetes
+
+As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere DevOps first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml) file.
+
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml) and edit it.
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. In this local `cluster-configuration.yaml` file, search for `devops` and enable DevOps by changing `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ devops:
+ enabled: true # Change "false" to "true".
+ ```
+
+3. Run the following commands to start installation:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+## Enable DevOps After Installation
+
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+ {{< notice info >}}
+
+A Custom Resource Definition (CRD) allows users to create a new type of resources without adding another API server. They can use these resources like any other native Kubernetes objects.
+
+{{ notice >}}
+
+3. In **Custom Resources**, click the three dots on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, search for `devops` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ devops:
+ enabled: true # Change "false" to "true".
+ ```
+
+5. Use the web kubectl to check the installation process by running the following command:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+You can find the web kubectl tool by clicking the **Toolbox** in the lower-right corner of the console.
+
+{{ notice >}}
+
+## Verify the Installation of the Component
+
+{{< tabs >}}
+
+{{< tab "Verify the component on the dashboard" >}}
+
+Go to **System Components** and check that all components on the **DevOps** tab page are in the **Healthy** state.
+
+{{ tab >}}
+
+{{< tab "Verify the component through kubectl" >}}
+
+Run the following command to check the status of Pods:
+
+```bash
+kubectl get pod -n kubesphere-devops-system
+```
+
+The output may look as follows if the component runs successfully:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+devops-jenkins-5cbbfbb975-hjnll 1/1 Running 0 40m
+s2ioperator-0 1/1 Running 0 41m
+```
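+
+If you also want to confirm that the built-in Jenkins is reachable inside the cluster, a quick check could look like the following. It assumes the default Service name `devops-jenkins`, which may differ in customized installations:
+
+```bash
+# Show the Jenkins Service and its endpoints in the DevOps namespace.
+kubectl -n kubesphere-devops-system get svc devops-jenkins
+kubectl -n kubesphere-devops-system get endpoints devops-jenkins
+```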
+
+{{ tab >}}
+
+{{ tabs >}}
diff --git a/content/en/docs/v3.4/pluggable-components/events.md b/content/en/docs/v3.4/pluggable-components/events.md
new file mode 100644
index 000000000..f4454d145
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/events.md
@@ -0,0 +1,191 @@
+---
+title: "KubeSphere Events"
+keywords: "Kubernetes, events, KubeSphere, k8s-events"
+description: "Learn how to enable Events to keep track of everything that is happening on the platform."
+linkTitle: "KubeSphere Events"
+weight: 6500
+---
+
+KubeSphere events allow users to keep track of what is happening inside a cluster, such as node scheduling status and image pulling results. They will be accurately recorded with the specific reason, status, and message displayed in the web console. To query events, users can quickly launch the web Toolbox and enter related information in the search bar with different filters (e.g., keyword and project) available. Events can also be archived to third-party tools, such as Elasticsearch, Kafka, or Fluentd.
+
+For more information, see [Event Query](../../toolbox/events-query/).
+
+## Enable Events Before Installation
+
+### Installing on Linux
+
+When you implement multi-node installation of KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](../../installing-on-linux/introduction/multioverview/), you create a default file `config-sample.yaml`. Modify the file by executing the following command:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Events in this mode (for example, for testing purposes), refer to [the following section](#enable-events-after-installation) to see how Events can be installed after installation.
+
+{{ notice >}}
+
+2. In this file, navigate to `events` and change `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ events:
+ enabled: true # Change "false" to "true".
+ ```
+
+ {{< notice note >}}
+By default, KubeKey will install Elasticsearch internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Events, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one.
+ {{ notice >}}
+
+ ```yaml
+ es: # Storage backend for logging, tracing, events and auditing.
+ elasticsearchMasterReplicas: 1 # The total number of master nodes. Even numbers are not allowed.
+ elasticsearchDataReplicas: 1 # The total number of data nodes.
+ elasticsearchMasterVolumeSize: 4Gi # The volume size of Elasticsearch master nodes.
+ elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
+ logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
+    elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
+    externalElasticsearchHost: ""
+    externalElasticsearchPort: ""
+   ```
+
+## Enable Events After Installation
+
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+   {{< notice info >}}
+A Custom Resource Definition (CRD) allows users to create a new type of resources without adding another API server. They can use these resources like any other native Kubernetes objects.
+   {{ notice >}}
+
+3. In **Custom Resources**, click the three dots on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, navigate to `events` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ events:
+ enabled: true # Change "false" to "true".
+ ```
+
+ {{< notice note >}}
+
+By default, Elasticsearch will be installed internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Events, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one.
+ {{ notice >}}
+
+ ```yaml
+ es: # Storage backend for logging, tracing, events and auditing.
+ elasticsearchMasterReplicas: 1 # The total number of master nodes. Even numbers are not allowed.
+ elasticsearchDataReplicas: 1 # The total number of data nodes.
+ elasticsearchMasterVolumeSize: 4Gi # The volume size of Elasticsearch master nodes.
+ elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
+ logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
+    elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
+    externalElasticsearchHost: ""
+    externalElasticsearchPort: ""
+   ```
+
+   {{< notice note >}}
+
+You can find the web kubectl tool by clicking the **Toolbox** in the lower-right corner of the console.
+
+{{ notice >}}
+
+## Verify the Installation of the Component
+
+{{< tabs >}}
+
+{{< tab "Verify the component on the dashboard" >}}
+
+Verify that you can use the **Resource Event Search** function from the **Toolbox** in the lower-right corner.
+
+{{ tab >}}
+
+{{< tab "Verify the component through kubectl" >}}
+
+Execute the following command to check the status of Pods:
+
+```bash
+kubectl get pod -n kubesphere-logging-system
+```
+
+The output may look as follows if the component runs successfully:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+elasticsearch-logging-data-0 1/1 Running 0 155m
+elasticsearch-logging-data-1 1/1 Running 0 154m
+elasticsearch-logging-discovery-0 1/1 Running 0 155m
+fluent-bit-bsw6p 1/1 Running 0 108m
+fluent-bit-smb65 1/1 Running 0 108m
+fluent-bit-zdz8b 1/1 Running 0 108m
+fluentbit-operator-9b69495b-bbx54 1/1 Running 0 109m
+ks-events-exporter-5cb959c74b-gx4hw 2/2 Running 0 7m55s
+ks-events-operator-7d46fcccc9-4mdzv 1/1 Running 0 8m
+ks-events-ruler-8445457946-cl529 2/2 Running 0 7m55s
+ks-events-ruler-8445457946-gzlm9 2/2 Running 0 7m55s
+logsidecar-injector-deploy-667c6c9579-cs4t6 2/2 Running 0 106m
+logsidecar-injector-deploy-667c6c9579-klnmf 2/2 Running 0 106m
+```
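+
+As a rough functional check, you can also query recent Kubernetes events directly with kubectl and compare them with the results returned by the **Resource Event Search** toolbox:
+
+```bash
+# List the most recent events across all namespaces, sorted by timestamp.
+kubectl get events -A --sort-by=.lastTimestamp | tail -n 20
+```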
+
+{{ tab >}}
+
+{{ tabs >}}
+
diff --git a/content/en/docs/v3.4/pluggable-components/kubeedge.md b/content/en/docs/v3.4/pluggable-components/kubeedge.md
new file mode 100644
index 000000000..a45841309
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/kubeedge.md
@@ -0,0 +1,184 @@
+---
+title: "KubeEdge"
+keywords: "Kubernetes, KubeSphere, Kubeedge"
+description: "Learn how to enable KubeEdge to add edge nodes to your cluster."
+linkTitle: "KubeEdge"
+weight: 6930
+---
+
+[KubeEdge](https://kubeedge.io/en/) is an open-source system for extending native containerized application orchestration capabilities to hosts at the edge. It supports multiple edge protocols and looks to provide unified management of cloud and edge applications and resources.
+
+KubeEdge has components running in two separate places - cloud and edge nodes. The components running on the cloud, collectively known as CloudCore, include Controllers and Cloud Hub. Cloud Hub serves as the gateway for the requests sent by edge nodes while Controllers function as orchestrators. The components running on edge nodes, collectively known as EdgeCore, include EdgeHub, EdgeMesh, MetadataManager, and DeviceTwin. For more information, see [the KubeEdge website](https://kubeedge.io/en/).
+
+After you enable KubeEdge, you can [add edge nodes to your cluster](../../installing-on-linux/cluster-operation/add-edge-nodes/) and deploy workloads on them.
+
+
+
+## Enable KubeEdge Before Installation
+
+### Installing on Linux
+
+When you implement multi-node installation of KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](../../installing-on-linux/introduction/multioverview/), you create a default file `config-sample.yaml`. Modify the file by executing the following command:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable KubeEdge in this mode (for example, for testing purposes), refer to [the following section](#enable-kubeedge-after-installation) to see how KubeEdge can be installed after installation.
+ {{ notice >}}
+
+2. In this file, navigate to `edgeruntime` and `kubeedge`, and change the value of `enabled` from `false` to `true` to enable all KubeEdge components.
+
+ ```yaml
+ edgeruntime: # Add edge nodes to your cluster and deploy workloads on edge nodes.
+ enabled: false
+ kubeedge: # kubeedge configurations
+ enabled: false
+ cloudCore:
+ cloudHub:
+ advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
+ - "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
+ service:
+ cloudhubNodePort: "30000"
+ cloudhubQuicNodePort: "30001"
+ cloudhubHttpsNodePort: "30002"
+ cloudstreamNodePort: "30003"
+ tunnelNodePort: "30004"
+ # resources: {}
+ # hostNetWork: false
+ ```
+
+3. Set the value of `kubeedge.cloudCore.cloudHub.advertiseAddress` to the public IP address of your cluster or an IP address that can be accessed by edge nodes. Save the file when you finish editing.
+
+4. Create a cluster using the configuration file:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### Installing on Kubernetes
+
+As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeEdge first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml) file.
+
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml) and edit it.
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. In this local `cluster-configuration.yaml` file, navigate to `edgeruntime` and `kubeedge`, and change the value of `enabled` from `false` to `true` to enable all KubeEdge components.
+
+ ```yaml
+ edgeruntime: # Add edge nodes to your cluster and deploy workloads on edge nodes.
+ enabled: false
+ kubeedge: # kubeedge configurations
+ enabled: false
+ cloudCore:
+ cloudHub:
+ advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
+ - "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
+ service:
+ cloudhubNodePort: "30000"
+ cloudhubQuicNodePort: "30001"
+ cloudhubHttpsNodePort: "30002"
+ cloudstreamNodePort: "30003"
+ tunnelNodePort: "30004"
+ # resources: {}
+ # hostNetWork: false
+ ```
+
+3. Set the value of `kubeedge.cloudCore.cloudHub.advertiseAddress` to the public IP address of your cluster or an IP address that can be accessed by edge nodes.
+
+4. Save the file and execute the following commands to start installation:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+## Enable KubeEdge After Installation
+
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+ {{< notice info >}}
+A Custom Resource Definition (CRD) allows users to create a new type of resources without adding another API server. They can use these resources like any other native Kubernetes objects.
+ {{ notice >}}
+
+3. In **Custom Resources**, click the three dots on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, navigate to `edgeruntime` and `kubeedge`, and change the value of `enabled` from `false` to `true` to enable all KubeEdge components.
+
+ ```yaml
+ edgeruntime: # Add edge nodes to your cluster and deploy workloads on edge nodes.
+ enabled: false
+ kubeedge: # kubeedge configurations
+ enabled: false
+ cloudCore:
+ cloudHub:
+ advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
+ - "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
+ service:
+ cloudhubNodePort: "30000"
+ cloudhubQuicNodePort: "30001"
+ cloudhubHttpsNodePort: "30002"
+ cloudstreamNodePort: "30003"
+ tunnelNodePort: "30004"
+ # resources: {}
+ # hostNetWork: false
+ ```
+
+5. Set the value of `kubeedge.cloudCore.cloudHub.advertiseAddress` to the public IP address of your cluster or an IP address that can be accessed by edge nodes. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+6. You can use the web kubectl to check the installation process by executing the following command:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+You can find the web kubectl tool by clicking the **Toolbox** in the lower-right corner of the console.
+ {{ notice >}}
+
+## Verify the Installation of the Component
+
+{{< tabs >}}
+
+{{< tab "Verify the component on the dashboard" >}}
+
+On the **Cluster Management** page, verify that the **Edge Nodes** module has appeared under **Nodes**.
+
+{{ tab >}}
+
+{{< tab "Verify the component through kubectl" >}}
+
+Execute the following command to check the status of Pods:
+
+```bash
+kubectl get pod -n kubeedge
+```
+
+The output may look as follows if the component runs successfully:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+cloudcore-5f994c9dfd-r4gpq 1/1 Running 0 5h13m
+edge-watcher-controller-manager-bdfb8bdb5-xqfbk 2/2 Running 0 5h13m
+iptables-hphgf 1/1 Running 0 5h13m
+```
+
+{{ tab >}}
+
+{{ tabs >}}
+
+{{< notice note >}}
+
+CloudCore may malfunction (`CrashLoopBackOff`) if `kubeedge.cloudCore.cloudHub.advertiseAddress` was not set when you enabled KubeEdge. In this case, run `kubectl -n kubeedge edit cm cloudcore` to add the public IP address of your cluster or an IP address that can be accessed by edge nodes.
+
+{{ notice >}}
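+
+Editing the ConfigMap alone does not restart CloudCore, so the new address typically takes effect only after a restart. A minimal sketch, assuming CloudCore runs as a Deployment named `cloudcore` in the `kubeedge` namespace (as the Pod names above suggest):
+
+```bash
+# Restart CloudCore so that it picks up the updated advertiseAddress.
+kubectl -n kubeedge rollout restart deployment cloudcore
+kubectl -n kubeedge rollout status deployment cloudcore
+```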
diff --git a/content/en/docs/v3.4/pluggable-components/logging.md b/content/en/docs/v3.4/pluggable-components/logging.md
new file mode 100644
index 000000000..bbe764c7e
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/logging.md
@@ -0,0 +1,199 @@
+---
+title: "KubeSphere Logging System"
+keywords: "Kubernetes, Elasticsearch, KubeSphere, Logging, logs"
+description: "Learn how to enable Logging to leverage the tenant-based system for log collection, query and management."
+linkTitle: "KubeSphere Logging System"
+weight: 6400
+---
+
+KubeSphere provides a powerful, holistic, and easy-to-use logging system for log collection, query, and management. It covers logs at varied levels, including tenants, infrastructure resources, and applications. Users can search logs from different dimensions, such as project, workload, Pod and keyword. Compared with Kibana, the tenant-based logging system of KubeSphere features better isolation and security among tenants as tenants can only view their own logs. Apart from KubeSphere's own logging system, the container platform also allows users to add third-party log collectors, such as Elasticsearch, Kafka, and Fluentd.
+
+For more information, see [Log Query](../../toolbox/log-query/).
+
+## Enable Logging Before Installation
+
+### Installing on Linux
+
+When you install KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](../../installing-on-linux/introduction/multioverview/), you create a default file `config-sample.yaml`. Modify the file by executing the following command:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+
+- If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Logging in this mode (for example, for testing purposes), refer to [the following section](#enable-logging-after-installation) to see how Logging can be installed after installation.
+
+- If you adopt [Multi-node Installation](../../installing-on-linux/introduction/multioverview/) and are using symbolic links for the Docker root directory, make sure all nodes follow exactly the same symbolic links. Logging agents are deployed as DaemonSets on nodes, so any discrepancy in container log paths may cause collection failures on the affected node.
+
+{{ notice >}}
+
+2. In this file, navigate to `logging` and change `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ logging:
+ enabled: true # Change "false" to "true".
+ containerruntime: docker
+ ```
+
+ {{< notice info >}}To use containerd as the container runtime, change the value of the field `containerruntime` to `containerd`. If you upgraded to KubeSphere 3.3 from earlier versions, you have to manually add the field `containerruntime` under `logging` when enabling KubeSphere Logging system.
+
+ {{ notice >}}
+
+ {{< notice note >}}By default, KubeKey will install Elasticsearch internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Logging, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one.
+ {{ notice >}}
+
+ ```yaml
+ es: # Storage backend for logging, tracing, events and auditing.
+ elasticsearchMasterReplicas: 1 # The total number of master nodes. Even numbers are not allowed.
+ elasticsearchDataReplicas: 1 # The total number of data nodes.
+ elasticsearchMasterVolumeSize: 4Gi # The volume size of Elasticsearch master nodes.
+ elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
+ logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
+    elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
+    externalElasticsearchHost: ""
+    externalElasticsearchPort: ""
+   ```
+
+## Enable Logging After Installation
+
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+   {{< notice info >}}
+A Custom Resource Definition (CRD) allows users to create a new type of resources without adding another API server. They can use these resources like any other native Kubernetes objects.
+   {{ notice >}}
+
+3. In **Custom Resources**, click the three dots on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, navigate to `logging` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ logging:
+ enabled: true # Change "false" to "true".
+ containerruntime: docker
+ ```
+
+ {{< notice info >}}To use containerd as the container runtime, change the value of the field `.logging.containerruntime` to `containerd`. If you upgraded to KubeSphere 3.3 from earlier versions, you have to manually add the field `containerruntime` under `logging` when enabling KubeSphere Logging system.
+
+ {{ notice >}}
+
+ {{< notice note >}}By default, Elasticsearch will be installed internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Logging, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one.
+ {{ notice >}}
+
+ ```yaml
+ es: # Storage backend for logging, tracing, events and auditing.
+ elasticsearchMasterReplicas: 1 # The total number of master nodes. Even numbers are not allowed.
+ elasticsearchDataReplicas: 1 # The total number of data nodes.
+ elasticsearchMasterVolumeSize: 4Gi # The volume size of Elasticsearch master nodes.
+ elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
+ logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
+    elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
+    externalElasticsearchHost: ""
+    externalElasticsearchPort: ""
+   ```
+
+   {{< notice note >}}
+
+You can find the web kubectl tool by clicking the **Toolbox** in the lower-right corner of the console.
+
+{{ notice >}}
+
+## Verify the Installation of the Component
+
+{{< tabs >}}
+
+{{< tab "Verify the component on the dashboard" >}}
+
+Go to **System Components** and check that all components on the **Logging** tab page are in the **Healthy** state.
+
+{{ tab >}}
+
+{{< tab "Verify the component through kubectl" >}}
+
+Execute the following command to check the status of Pods:
+
+```bash
+kubectl get pod -n kubesphere-logging-system
+```
+
+The output may look as follows if the component runs successfully:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+elasticsearch-logging-data-0 1/1 Running 0 87m
+elasticsearch-logging-data-1 1/1 Running 0 85m
+elasticsearch-logging-discovery-0 1/1 Running 0 87m
+fluent-bit-bsw6p 1/1 Running 0 40m
+fluent-bit-smb65 1/1 Running 0 40m
+fluent-bit-zdz8b 1/1 Running 0 40m
+fluentbit-operator-9b69495b-bbx54 1/1 Running 0 40m
+logsidecar-injector-deploy-667c6c9579-cs4t6 2/2 Running 0 38m
+logsidecar-injector-deploy-667c6c9579-klnmf 2/2 Running 0 38m
+```
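+
+To double-check that logs are actually reaching the storage backend, you can query the Elasticsearch indices directly. The sketch below assumes the built-in Elasticsearch with a Service named `elasticsearch-logging-data`; adjust the Service name and port if you use an external backend:
+
+```bash
+# In one terminal: forward the Elasticsearch HTTP port to localhost.
+kubectl -n kubesphere-logging-system port-forward svc/elasticsearch-logging-data 9200:9200
+# In another terminal: list the indices created by the log collector.
+curl "http://localhost:9200/_cat/indices?v"
+```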
+
+{{ tab >}}
+
+{{ tabs >}}
diff --git a/content/en/docs/v3.4/pluggable-components/metrics-server.md b/content/en/docs/v3.4/pluggable-components/metrics-server.md
new file mode 100644
index 000000000..e82801df1
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/metrics-server.md
@@ -0,0 +1,113 @@
+---
+title: "Metrics Server"
+keywords: "Kubernetes, KubeSphere, Metrics Server"
+description: "Learn how to enable Metrics Server to use HPA to autoscale a Deployment."
+linkTitle: "Metrics Server"
+weight: 6910
+---
+
+KubeSphere supports Horizontal Pod Autoscalers (HPA) for [Deployments](../../project-user-guide/application-workloads/deployments/). In KubeSphere, the Metrics Server controls whether the HPA is enabled. You use an HPA object to autoscale a Deployment based on different types of metrics, such as CPU and memory utilization, as well as the minimum and maximum number of replicas. In this way, an HPA helps to make sure your application runs smoothly and consistently in different situations.
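+
+For example, once the Metrics Server is running, you can create an HPA for a Deployment with a single kubectl command. The Deployment name `demo-app` below is only a placeholder for illustration:
+
+```bash
+# Scale demo-app between 1 and 5 replicas, targeting 50% average CPU utilization.
+kubectl autoscale deployment demo-app --cpu-percent=50 --min=1 --max=5
+# Inspect the HPA and the metrics it currently observes.
+kubectl get hpa demo-app
+```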
+
+## Enable the Metrics Server Before Installation
+
+### Installing on Linux
+
+When you implement multi-node installation of KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](../../installing-on-linux/introduction/multioverview/), you create a default file `config-sample.yaml`. Modify the file by executing the following command:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+    If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the Metrics Server in this mode (for example, for testing purposes), refer to [the following section](#enable-the-metrics-server-after-installation) to see how the Metrics Server can be installed after installation.
+ {{ notice >}}
+
+2. In this file, navigate to `metrics_server` and change `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ metrics_server:
+ enabled: true # Change "false" to "true".
+ ```
+
+3. Create a cluster using the configuration file:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### Installing on Kubernetes
+
+As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the Metrics Server first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml) file.
+
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml) and edit it.
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. In this local `cluster-configuration.yaml` file, navigate to `metrics_server` and enable it by changing `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ metrics_server:
+ enabled: true # Change "false" to "true".
+ ```
+
+3. Execute the following commands to start installation:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+ {{< notice note >}}
+
+If you install KubeSphere on some cloud-hosted Kubernetes engines, the Metrics Server is probably already installed in your environment. In this case, it is not recommended that you enable it in `cluster-configuration.yaml` as it may cause conflicts during installation.
+ {{ notice >}}
+
+## Enable the Metrics Server After Installation
+
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+ {{< notice info >}}
+A Custom Resource Definition (CRD) allows users to create a new type of resources without adding another API server. They can use these resources like any other native Kubernetes objects.
+ {{ notice >}}
+
+3. In **Custom Resources**, click the three dots on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, navigate to `metrics_server` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ metrics_server:
+ enabled: true # Change "false" to "true".
+ ```
+
+5. You can use the web kubectl to check the installation process by executing the following command:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+You can find the web kubectl tool by clicking the **Toolbox** in the lower-right corner of the console.
+ {{ notice >}}
+
+## Verify the Installation of the Component
+
+Execute the following command to verify that the Metrics Server Pod is up and running.
+
+```bash
+kubectl get pod -n kube-system
+```
+
+If the Metrics Server is successfully installed, your cluster may return the following output (excluding irrelevant Pods):
+
+```bash
+NAME READY STATUS RESTARTS AGE
+metrics-server-6c767c9f94-hfsb7 1/1 Running 0 9m38s
+```
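+
+You can also confirm that the metrics API itself is serving data. If the following commands return CPU and memory figures instead of an error, the Metrics Server is working:
+
+```bash
+# Query node and Pod resource usage through the metrics API.
+kubectl top nodes
+kubectl top pods -A
+```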
\ No newline at end of file
diff --git a/content/en/docs/v3.4/pluggable-components/network-policy.md b/content/en/docs/v3.4/pluggable-components/network-policy.md
new file mode 100644
index 000000000..437190c87
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/network-policy.md
@@ -0,0 +1,109 @@
+---
+title: "Network Policies"
+keywords: "Kubernetes, KubeSphere, NetworkPolicy"
+description: "Learn how to enable Network Policies to control traffic flow at the IP address or port level."
+linkTitle: "Network Policies"
+weight: 6900
+---
+
+Starting from v3.0.0, users can configure native Kubernetes network policies in KubeSphere. Network policies are an application-centric construct, enabling you to specify how a Pod is allowed to communicate with various network entities over the network. With network policies, users can achieve network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
+
+{{< notice note >}}
+
+- Please make sure that the CNI network plugin used by the cluster supports Network Policies before you enable the feature. There are a number of CNI network plugins that support Network Policies, including Calico, Cilium, Kube-router, Romana, and Weave Net.
+- It is recommended that you use [Calico](https://www.projectcalico.org/) as the CNI plugin before you enable Network Policies.
+
+{{ notice >}}
+
+For more information, see [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/).
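+
+As a concrete illustration of what the feature enables, the sketch below applies a minimal default-deny-ingress policy. The namespace `demo` and the policy name are placeholders only:
+
+```bash
+# Deny all incoming traffic to Pods in the "demo" namespace unless another policy allows it.
+cat <<EOF | kubectl apply -f -
+apiVersion: networking.k8s.io/v1
+kind: NetworkPolicy
+metadata:
+  name: default-deny-ingress
+  namespace: demo
+spec:
+  podSelector: {}
+  policyTypes:
+    - Ingress
+EOF
+```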
+
+## Enable the Network Policy Before Installation
+
+### Installing on Linux
+
+When you implement multi-node installation of KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](../../installing-on-linux/introduction/multioverview/), you create a default file `config-sample.yaml`. Modify the file by executing the following command:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the Network Policy in this mode (for example, for testing purposes), refer to [the following section](#enable-the-network-policy-after-installation) to see how the Network Policy can be installed after installation.
+ {{ notice >}}
+
+2. In this file, navigate to `network.networkpolicy` and change `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ network:
+ networkpolicy:
+ enabled: true # Change "false" to "true".
+ ```
+
+3. Create a cluster using the configuration file:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### Installing on Kubernetes
+
+As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the Network Policy first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml) file.
+
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml) and edit it.
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. In this local `cluster-configuration.yaml` file, navigate to `network.networkpolicy` and enable it by changing `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ network:
+ networkpolicy:
+ enabled: true # Change "false" to "true".
+ ```
+
+3. Execute the following commands to start installation:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+## Enable the Network Policy After Installation
+
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+ {{< notice info >}}
+A Custom Resource Definition (CRD) allows users to create a new type of resources without adding another API server. They can use these resources like any other native Kubernetes objects.
+ {{ notice >}}
+
+3. In **Custom Resources**, click the three dots on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, navigate to `network.networkpolicy` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ network:
+ networkpolicy:
+ enabled: true # Change "false" to "true".
+ ```
+
+5. You can use the web kubectl to check the installation process by executing the following command:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+You can find the web kubectl tool by clicking the **Toolbox** in the lower-right corner of the console.
+ {{ notice >}}
+
+## Verify the Installation of the Component
+
+If you can see the **Network Policies** module in **Network**, it means the installation is successful as this part won't display until you install the component.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/pluggable-components/overview.md b/content/en/docs/v3.4/pluggable-components/overview.md
new file mode 100644
index 000000000..04f63b922
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/overview.md
@@ -0,0 +1,98 @@
+---
+title: "Enable Pluggable Components — Overview"
+keywords: "Kubernetes, KubeSphere, pluggable-components, overview"
+description: "Develop a basic understanding of key components in KubeSphere, including features and resource consumption."
+linkTitle: "Overview"
+weight: 6100
+---
+
+KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable which means you can enable them either before or after installation. By default, KubeSphere will be deployed with a minimal installation if you do not enable them.
+
+Different pluggable components are deployed in different namespaces. You can enable any of them based on your needs. It is highly recommended that you install these pluggable components to discover the full-stack features and capabilities provided by KubeSphere.
+
+For more information about how to enable each component, see respective tutorials in this chapter.
+
+## Resource Requirements
+
+Before you enable pluggable components, make sure you have enough resources in your environment based on the tables below. Otherwise, components may crash due to a lack of resources.
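+
+A quick way to compare these figures with what your cluster can actually provide is to list the allocatable resources of each node, for example:
+
+```bash
+# Show the CPU and memory each node can allocate to Pods.
+kubectl get nodes -o custom-columns=NAME:.metadata.name,CPU:.status.allocatable.cpu,MEMORY:.status.allocatable.memory
+```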
+
+{{< notice note >}}
+
+The following request and limit of CPU and memory resources are required by a single replica.
+
+{{ notice >}}
+
+### KubeSphere App Store
+
+| Namespace | openpitrix-system |
+| -------------- | ------------------------------------------------------------ |
+| CPU Request | 0.3 core |
+| CPU Limit | None |
+| Memory Request | 300 MiB |
+| Memory Limit | None |
+| Installation | Optional |
+| Notes | Provide an App Store with application lifecycle management. The installation is recommended. |
+
+### KubeSphere DevOps System
+
+| Namespace | kubesphere-devops-system | kubesphere-devops-system |
+| -------------- | ------------------------------------------------------------ | ------------------------------------------------------- |
+| Pattern | All-in-One installation | Multi-node installation |
+| CPU Request | 34 m | 0.47 core |
+| CPU Limit | None | None |
+| Memory Request | 2.69 G | 8.6 G |
+| Memory Limit | None | None |
+| Installation | Optional | Optional |
+| Notes | Provide one-stop DevOps solutions with Jenkins pipelines and B2I & S2I. | The memory of one of the nodes must be larger than 8 G. |
+
+### KubeSphere Monitoring System
+
+| Namespace | kubesphere-monitoring-system | kubesphere-monitoring-system | kubesphere-monitoring-system |
+| -------------- | ------------------------------------------------------------ | ---------------------------- | ---------------------------- |
+| Sub-component | 2 x Prometheus | 3 x Alertmanager | Notification Manager |
+| CPU Request | 100 m | 10 m | 100 m |
+| CPU Limit | 4 cores | None | 500 m |
+| Memory Request | 400 MiB | 30 MiB | 20 MiB |
+| Memory Limit | 8 GiB | None | 1 GiB |
+| Installation | Required | Required | Required |
+| Notes | The memory consumption of Prometheus depends on the cluster size. 8 GiB is sufficient for a cluster with 200 nodes/16,000 Pods. | - | - |
+
+{{< notice note >}}
+
+The KubeSphere monitoring system is not a pluggable component. It is installed by default. The resource request and limit of it are also listed on this page for your reference as it is closely related to other components such as logging.
+
+{{ notice >}}
+
+### KubeSphere Logging System
+
+| Namespace | kubesphere-logging-system | kubesphere-logging-system | kubesphere-logging-system | kubesphere-logging-system |
+| -------------- | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
+| Sub-component | 3 x Elasticsearch | fluent bit | kube-events | kube-auditing |
+| CPU Request | 50 m | 20 m | 90 m | 20 m |
+| CPU Limit | 1 core | 200 m | 900 m | 200 m |
+| Memory Request | 2 G | 50 MiB | 120 MiB | 50 MiB |
+| Memory Limit | None | 100 MiB | 1200 MiB | 100 MiB |
+| Installation | Optional | Required | Optional | Optional |
+| Notes | An optional component for log data storage. The internal Elasticsearch is not recommended for the production environment. | The log collection agent. It is a required component after you enable logging. | Collecting, filtering, exporting and alerting of Kubernetes events. | Collecting, filtering and alerting of Kubernetes and KubeSphere auditing logs. |
+
+### KubeSphere Alerting and Notification
+
+| Namespace | kubesphere-alerting-system |
+| -------------- | ------------------------------------------------------------ |
+| CPU Request | 0.08 core |
+| CPU Limit | None |
+| Memory Request | 80 M |
+| Memory Limit | None |
+| Installation | Optional |
+| Notes | Alerting and Notification need to be enabled at the same time. |
+
+### KubeSphere Service Mesh
+
+| Namespace | istio-system |
+| -------------- | ------------------------------------------------------------ |
+| CPU Request | 1 core |
+| CPU Limit | None |
+| Memory Request | 3.5 G |
+| Memory Limit | None |
+| Installation | Optional |
+| Notes | Support grayscale release strategies, traffic topology, traffic management and distributed tracing. |
diff --git a/content/en/docs/v3.4/pluggable-components/pod-ip-pools.md b/content/en/docs/v3.4/pluggable-components/pod-ip-pools.md
new file mode 100644
index 000000000..b8df7f4aa
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/pod-ip-pools.md
@@ -0,0 +1,104 @@
+---
+title: "Pod IP Pools"
+keywords: "Kubernetes, KubeSphere, Pod, IP pools"
+description: "Learn how to enable Pod IP Pools to assign a specific Pod IP pool to your Pods."
+linkTitle: "Pod IP Pools"
+weight: 6920
+---
+
+A Pod IP pool is used to manage the Pod network address space, and the address spaces of different Pod IP pools cannot overlap. When you create a workload, you can select a specific Pod IP pool, so that created Pods will be assigned IP addresses from this Pod IP pool.
+
+## Enable Pod IP Pools Before Installation
+
+### Installing on Linux
+
+When you implement multi-node installation of KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](../../installing-on-linux/introduction/multioverview/), you create a default file `config-sample.yaml`. Modify the file by executing the following command:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Pod IP Pools in this mode (for example, for testing purposes), refer to [the following section](#enable-pod-ip-pools-after-installation) to see how Pod IP pools can be installed after installation.
+ {{ notice >}}
+
+2. In this file, navigate to `network.ippool.type` and change `none` to `calico`. Save the file after you finish.
+
+ ```yaml
+ network:
+ ippool:
+ type: calico # Change "none" to "calico".
+ ```
+
+3. Create a cluster using the configuration file:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### Installing on Kubernetes
+
+As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable Pod IP Pools first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml) file.
+
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml) and edit it.
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. In this local `cluster-configuration.yaml` file, navigate to `network.ippool.type` and enable it by changing `none` to `calico`. Save the file after you finish.
+
+ ```yaml
+ network:
+ ippool:
+ type: calico # Change "none" to "calico".
+ ```
+
+3. Execute the following commands to start installation:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+
+## Enable Pod IP Pools After Installation
+
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+ {{< notice info >}}
+A Custom Resource Definition (CRD) allows users to create a new type of resources without adding another API server. They can use these resources like any other native Kubernetes objects.
+ {{ notice >}}
+
+3. In **Custom Resources**, click the three dots on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, navigate to `network` and change `network.ippool.type` to `calico`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ network:
+ ippool:
+ type: calico # Change "none" to "calico".
+ ```
+
+5. You can use the web kubectl to check the installation process by executing the following command:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+You can find the web kubectl tool by clicking the **Toolbox** in the lower-right corner of the console.
+ {{ notice >}}
+
+## Verify the Installation of the Component
+
+On the **Cluster Management** page, verify that you can see the **Pod IP Pools** module under **Network**.
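+
+If you prefer the command line, a rough check is to confirm that an IPPool resource type has been registered. The exact API group depends on your network configuration, so the command below only filters for it:
+
+```bash
+# Look for IPPool-related resource types registered in the cluster.
+kubectl api-resources | grep -i ippool
+```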
+
+
+
diff --git a/content/en/docs/v3.4/pluggable-components/service-mesh.md b/content/en/docs/v3.4/pluggable-components/service-mesh.md
new file mode 100644
index 000000000..0b6685a4d
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/service-mesh.md
@@ -0,0 +1,157 @@
+---
+title: "KubeSphere Service Mesh"
+keywords: "Kubernetes, Istio, KubeSphere, service-mesh, microservices"
+description: "Learn how to enable KubeSphere Service Mesh to use different traffic management strategies for microservices governance."
+linkTitle: "KubeSphere Service Mesh"
+weight: 6800
+---
+
+On the basis of [Istio](https://istio.io/), KubeSphere Service Mesh visualizes microservices governance and traffic management. It features a powerful toolkit including **circuit breaking, blue-green deployment, canary release, traffic mirroring, tracing, observability, and traffic control**. Developers can easily get started with KubeSphere Service Mesh without any code hacking, which greatly reduces the learning curve of Istio. All features of KubeSphere Service Mesh are designed to meet users' business needs.
+
+For more information, see [Grayscale Release](../../project-user-guide/grayscale-release/overview/).
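+
+After KubeSphere Service Mesh is enabled, sidecar injection is controlled through the standard Istio namespace label. A quick way to see which namespaces currently have injection turned on, assuming the usual `istio-injection` label, is:
+
+```bash
+# Show the istio-injection label for every namespace.
+kubectl get namespaces -L istio-injection
+```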
+
+## Enable KubeSphere Service Mesh Before Installation
+
+### Installing on Linux
+
+When you implement multi-node installation of KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](../../installing-on-linux/introduction/multioverview/), you create a default file `config-sample.yaml`. Modify the file by executing the following command:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable KubeSphere Service Mesh in this mode (for example, for testing purposes), refer to [the following section](#enable-kubesphere-service-mesh-after-installation) to see how KubeSphere Service Mesh can be installed after installation.
+ {{ notice >}}
+
+2. In this file, navigate to `servicemesh` and change `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ servicemesh:
+ enabled: true # Change “false” to “true”.
+ istio: # Customizing the istio installation configuration, refer to https://istio.io/latest/docs/setup/additional-setup/customize-installation/
+ components:
+ ingressGateways:
+      - name: istio-ingressgateway # Used to expose a service outside of the service mesh using an Istio Gateway. The value is false by default.
+ enabled: false
+ cni:
+ enabled: false # When the value is true, it identifies user application pods with sidecars requiring traffic redirection and sets this up in the Kubernetes pod lifecycle’s network setup phase.
+ ```
+
+ {{< notice note >}}
+ - For more information about how to access service after enabling Ingress Gateway, please refer to [Ingress Gateway](https://istio.io/latest/docs/tasks/traffic-management/ingress/ingress-control/).
+ - For more information about the Istio CNI plugin, please refer to [Install Istio with the Istio CNI plugin](https://istio.io/latest/docs/setup/additional-setup/cni/).
+ {{ notice >}}
+
+3. Run the following command to create a cluster using the configuration file:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### Installing on Kubernetes
+
+As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere Service Mesh first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml) file.
+
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml) and edit it.
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. In this local `cluster-configuration.yaml` file, navigate to `servicemesh` and enable it by changing `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ servicemesh:
+ enabled: true # Change “false” to “true”.
+ istio: # Customizing the istio installation configuration, refer to https://istio.io/latest/docs/setup/additional-setup/customize-installation/
+ components:
+ ingressGateways:
+      - name: istio-ingressgateway # Used to expose a service outside of the service mesh using an Istio Gateway. The value is false by default.
+ enabled: false
+ cni:
+ enabled: false # When the value is true, it identifies user application pods with sidecars requiring traffic redirection and sets this up in the Kubernetes pod lifecycle’s network setup phase.
+ ```
+
+3. Run the following commands to start installation:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+## Enable KubeSphere Service Mesh After Installation
+
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+ {{< notice info >}}
+A Custom Resource Definition (CRD) allows users to create a new type of resources without adding another API server. They can use these resources like any other native Kubernetes objects.
+ {{ notice >}}
+
+3. In **Custom Resources**, click the three dots on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, navigate to `servicemesh` and change `false` to `true` for `enabled`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ servicemesh:
+ enabled: true # Change “false” to “true”.
+ istio: # Customizing the istio installation configuration, refer to https://istio.io/latest/docs/setup/additional-setup/customize-installation/
+ components:
+ ingressGateways:
+      - name: istio-ingressgateway # Used to expose a service outside of the service mesh using an Istio Gateway. The value is false by default.
+ enabled: false
+ cni:
+ enabled: false # When the value is true, it identifies user application pods with sidecars requiring traffic redirection and sets this up in the Kubernetes pod lifecycle’s network setup phase.
+ ```
+
+5. Run the following command in kubectl to check the installation process:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+You can find the web kubectl tool by clicking the **Toolbox** in the lower-right corner of the console.
+ {{ notice >}}
+
+## Verify the Installation of the Component
+
+{{< tabs >}}
+
+{{< tab "Verify the component on the dashboard" >}}
+
+Go to **System Components** and check whether all components on the **Istio** tab page are in the **Healthy** state. If they are, the component has been successfully installed.
+
+{{ tab >}}
+
+{{< tab "Verify the component through kubectl" >}}
+
+Run the following command to check the status of Pods:
+
+```bash
+kubectl get pod -n istio-system
+```
+
+The following is an example of the output if the component runs successfully:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+istio-ingressgateway-78dbc5fbfd-f4cwt 1/1 Running 0 9m5s
+istiod-1-6-10-7db56f875b-mbj5p 1/1 Running 0 10m
+jaeger-collector-76bf54b467-k8blr 1/1 Running 0 6m48s
+jaeger-operator-7559f9d455-89hqm 1/1 Running 0 7m
+jaeger-query-b478c5655-4lzrn 2/2 Running 0 6m48s
+kiali-f9f7d6f9f-gfsfl 1/1 Running 0 4m1s
+kiali-operator-7d5dc9d766-qpkb6 1/1 Running 0 6m53s
+```
+
+{{ tab >}}
+
+{{ tabs >}}
diff --git a/content/en/docs/v3.4/pluggable-components/service-topology.md b/content/en/docs/v3.4/pluggable-components/service-topology.md
new file mode 100644
index 000000000..1dadea0bb
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/service-topology.md
@@ -0,0 +1,130 @@
+---
+title: "Service Topology"
+keywords: "Kubernetes, KubeSphere, Services, Topology"
+description: "Learn how to enable Service Topology to view contextual details of your Pods based on Weave Scope."
+linkTitle: "Service Topology"
+weight: 6915
+---
+
+You can enable Service Topology to integrate [Weave Scope](https://www.weave.works/oss/scope/), a visualization and monitoring tool for Docker and Kubernetes. Weave Scope uses established APIs to collect information to build a topology of your apps and containers. The service topology is displayed in your project, providing you with visual representations of connections based on traffic.
+
+## Enable Service Topology Before Installation
+
+### Installing on Linux
+
+When you implement multi-node installation of KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
+
+1. In the tutorial of [Installing KubeSphere on Linux](../../installing-on-linux/introduction/multioverview/), you create a default file `config-sample.yaml`. Modify the file by executing the following command:
+
+ ```bash
+ vi config-sample.yaml
+ ```
+
+ {{< notice note >}}
+ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Service Topology in this mode (for example, for testing purposes), refer to [the following section](#enable-service-topology-after-installation) to see how Service Topology can be installed after installation.
+ {{ notice >}}
+
+2. In this file, navigate to `network.topology.type` and change `none` to `weave-scope`. Save the file after you finish.
+
+ ```yaml
+ network:
+ topology:
+ type: weave-scope # Change "none" to "weave-scope".
+ ```
+
+3. Create a cluster using the configuration file:
+
+ ```bash
+ ./kk create cluster -f config-sample.yaml
+ ```
+
+### Installing on Kubernetes
+
+As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable Service Topology first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml) file.
+
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/cluster-configuration.yaml) and edit it.
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. In this local `cluster-configuration.yaml` file, navigate to `network.topology.type` and enable it by changing `none` to `weave-scope`. Save the file after you finish.
+
+ ```yaml
+ network:
+ topology:
+ type: weave-scope # Change "none" to "weave-scope".
+ ```
+
+3. Execute the following commands to start installation:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+
+## Enable Service Topology After Installation
+
+1. Log in to the console as `admin`. Click **Platform** in the upper-left corner and select **Cluster Management**.
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+ {{< notice info >}}
+A Custom Resource Definition (CRD) allows users to create a new type of resources without adding another API server. They can use these resources like any other native Kubernetes objects.
+ {{</ notice >}}
+
+3. In **Custom Resources**, click the icon on the right of `ks-installer` and select **Edit YAML**.
+
+4. In this YAML file, navigate to `network` and change `network.topology.type` to `weave-scope`. After you finish, click **OK** in the lower-right corner to save the configuration.
+
+ ```yaml
+ network:
+ topology:
+ type: weave-scope # Change "none" to "weave-scope".
+ ```
+
+5. You can use the web kubectl to check the installation process by executing the following command:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice note >}}
+
+You can find the web kubectl tool by clicking the toolbox icon in the lower-right corner of the console.
+ {{</ notice >}}
+
+## Verify the Installation of the Component
+
+{{< tabs >}}
+
+{{< tab "Verify the component on the dashboard" >}}
+
+Go to one of your projects and navigate to **Services** under **Application Workloads**. You can see a topology of your Services on the **Service Topology** tab page.
+
+{{</ tab >}}
+
+{{< tab "Verify the component through kubectl" >}}
+
+Execute the following command to check the status of Pods:
+
+```bash
+kubectl get pod -n weave
+```
+
+The output may look as follows if the component runs successfully:
+
+```bash
+NAME READY STATUS RESTARTS AGE
+weave-scope-agent-48cjp 1/1 Running 0 3m1s
+weave-scope-agent-9jb4g 1/1 Running 0 3m1s
+weave-scope-agent-ql5cf 1/1 Running 0 3m1s
+weave-scope-app-5b76897b6f-8bsls 1/1 Running 0 3m1s
+weave-scope-cluster-agent-8d9b8c464-5zlpp 1/1 Running 0 3m1s
+```
+
+{{</ tab >}}
+
+{{</ tabs >}}
\ No newline at end of file
diff --git a/content/en/docs/v3.4/pluggable-components/uninstall-pluggable-components.md b/content/en/docs/v3.4/pluggable-components/uninstall-pluggable-components.md
new file mode 100644
index 000000000..17c2cc4b1
--- /dev/null
+++ b/content/en/docs/v3.4/pluggable-components/uninstall-pluggable-components.md
@@ -0,0 +1,205 @@
+---
+title: "Uninstall Pluggable Components"
+keywords: "Installer, uninstall, KubeSphere, Kubernetes"
+description: "Learn how to uninstall each pluggable component in KubeSphere."
+linkTitle: "Uninstall Pluggable Components"
+weight: 6940
+---
+
+After you [enable the pluggable components of KubeSphere](../../pluggable-components/), you can also uninstall them by performing the following steps. Please back up any necessary data before you uninstall these components.
+
+## Prerequisites
+
+You have to change the value of the field `enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration` before you uninstall any pluggable components except Service Topology and Pod IP Pools.
+
+Use either of the following methods to change the value of the field `enabled`:
+
+- Run the following command to edit `ks-installer`:
+
+ ```bash
+ kubectl -n kubesphere-system edit clusterconfiguration ks-installer
+ ```
+
+- Log in to the KubeSphere web console as `admin`, click **Platform** in the upper-left corner and select **Cluster Management**, and then go to **CRDs** to search for `ClusterConfiguration`. For more information, see [Enable Pluggable Components](../../../pluggable-components/).
+
+{{< notice note >}}
+
+After the value is changed, you need to wait until the updating process is complete before you continue with any further operations.
+
+{{</ notice >}}
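+
+If you prefer a non-interactive change, the same kind of field can also be flipped with `kubectl patch`. The following is only a sketch, using the App Store field as an example; adjust the JSON path to match the component you are disabling:
+
+```bash
+# Disable the App Store by setting spec.openpitrix.store.enabled to false.
+kubectl -n kubesphere-system patch cc ks-installer --type=json \
+  -p='[{"op": "replace", "path": "/spec/openpitrix/store/enabled", "value": false}]'
+```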
+
+## Uninstall KubeSphere App Store
+
+Change the value of `openpitrix.store.enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+## Uninstall KubeSphere DevOps
+
+1. To uninstall DevOps:
+
+ ```bash
+ helm uninstall -n kubesphere-devops-system devops
+ kubectl patch -n kubesphere-system cc ks-installer --type=json -p='[{"op": "remove", "path": "/status/devops"}]'
+ kubectl patch -n kubesphere-system cc ks-installer --type=json -p='[{"op": "replace", "path": "/spec/devops/enabled", "value": false}]'
+ ```
+2. To delete DevOps resources:
+
+ ```bash
+ # Remove all resources related with DevOps
+ for devops_crd in $(kubectl get crd -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | grep "devops.kubesphere.io"); do
+ for ns in $(kubectl get ns -ojsonpath='{.items..metadata.name}'); do
+ for devops_res in $(kubectl get $devops_crd -n $ns -oname); do
+ kubectl patch $devops_res -n $ns -p '{"metadata":{"finalizers":[]}}' --type=merge
+ done
+ done
+ done
+ # Remove all DevOps CRDs
+ kubectl get crd -o=jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}' | grep "devops.kubesphere.io" | xargs -I crd_name kubectl delete crd crd_name
+ # Remove DevOps namespace
+ kubectl delete namespace kubesphere-devops-system
+ ```
+
+
+## Uninstall KubeSphere Logging
+
+1. Change the value of `logging.enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+2. To disable only log collection:
+
+ ```bash
+ kubectl delete inputs.logging.kubesphere.io -n kubesphere-logging-system tail
+ ```
+
+ {{< notice note >}}
+
+ After running this command, you can still view recent container logs provided by Kubernetes by default. However, historical container logs will be cleared, and you can no longer browse them.
+
+ {{</ notice >}}
+
+3. To uninstall the Logging system, including Elasticsearch:
+
+ ```bash
+ kubectl delete crd fluentbitconfigs.logging.kubesphere.io
+ kubectl delete crd fluentbits.logging.kubesphere.io
+ kubectl delete crd inputs.logging.kubesphere.io
+ kubectl delete crd outputs.logging.kubesphere.io
+ kubectl delete crd parsers.logging.kubesphere.io
+ kubectl delete deployments.apps -n kubesphere-logging-system fluentbit-operator
+ helm uninstall elasticsearch-logging --namespace kubesphere-logging-system
+ ```
+
+ {{< notice warning >}}
+
+ This operation may cause anomalies in Auditing, Events, and Service Mesh.
+
+ {{</ notice >}}
+
+4. Run the following command:
+
+ ```bash
+ kubectl delete deployment logsidecar-injector-deploy -n kubesphere-logging-system
+ kubectl delete ns kubesphere-logging-system
+ ```
+
+## Uninstall KubeSphere Events
+
+1. Change the value of `events.enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+2. Run the following command:
+
+ ```bash
+ helm delete ks-events -n kubesphere-logging-system
+ ```
+
+## Uninstall KubeSphere Alerting
+
+1. Change the value of `alerting.enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+2. Run the following command:
+
+ ```bash
+ kubectl -n kubesphere-monitoring-system delete thanosruler kubesphere
+ ```
+
+ {{< notice note >}}
+
+ Notification is installed in KubeSphere 3.3 by default, so you do not need to uninstall it.
+
+ {{</ notice >}}
+
+
+## Uninstall KubeSphere Auditing
+
+1. Change the value of `auditing.enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+2. Run the following commands:
+
+ ```bash
+ helm uninstall kube-auditing -n kubesphere-logging-system
+ kubectl delete crd rules.auditing.kubesphere.io
+ kubectl delete crd webhooks.auditing.kubesphere.io
+ ```
+
+## Uninstall KubeSphere Service Mesh
+
+1. Change the value of `servicemesh.enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+2. Run the following commands:
+
+ ```bash
+ curl -L https://istio.io/downloadIstio | sh -
+ istioctl x uninstall --purge
+
+ kubectl -n istio-system delete kiali kiali
+ helm -n istio-system delete kiali-operator
+
+ kubectl -n istio-system delete jaeger jaeger
+ helm -n istio-system delete jaeger-operator
+ ```
+
+## Uninstall Network Policies
+
+Disabling the NetworkPolicy component does not require uninstalling it, as its controller now runs inside `ks-controller-manager`. If you want to remove it from the KubeSphere console, change the value of `network.networkpolicy.enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+## Uninstall Metrics Server
+
+1. Change the value of `metrics_server.enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+2. Run the following commands:
+
+ ```bash
+ kubectl delete apiservice v1beta1.metrics.k8s.io
+ kubectl -n kube-system delete service metrics-server
+ kubectl -n kube-system delete deployment metrics-server
+ ```
+
+## Uninstall Service Topology
+
+1. Change the value of `network.topology.type` from `weave-scope` to `none` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+2. Run the following command:
+
+ ```bash
+ kubectl delete ns weave
+ ```
+
+## Uninstall Pod IP Pools
+
+Change the value of `network.ippool.type` from `calico` to `none` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+## Uninstall KubeEdge
+
+1. Change the values of `kubeedge.enabled` and `edgeruntime.enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration`.
+
+2. Run the following commands:
+
+ ```bash
+ helm uninstall kubeedge -n kubeedge
+ kubectl delete ns kubeedge
+ ```
+
+ {{< notice note >}}
+
+ After uninstallation, you will not be able to add edge nodes to your cluster.
+
+ {{</ notice >}}
+
diff --git a/content/en/docs/v3.4/project-administration/_index.md b/content/en/docs/v3.4/project-administration/_index.md
new file mode 100644
index 000000000..a8c3c7e20
--- /dev/null
+++ b/content/en/docs/v3.4/project-administration/_index.md
@@ -0,0 +1,13 @@
+---
+title: "Project Administration"
+description: "Help you to better manage KubeSphere projects"
+layout: "second"
+
+linkTitle: "Project Administration"
+weight: 13000
+
+icon: "/images/docs/v3.3/docs.svg"
+
+---
+
+A KubeSphere project is a Kubernetes namespace. There are two types of projects: single-cluster projects and multi-cluster projects. The former is a regular Kubernetes namespace, while the latter is a federated namespace across multiple clusters. As a project administrator, you are responsible for project creation, limit range settings, network isolation configuration, and more.
diff --git a/content/en/docs/v3.4/project-administration/container-limit-ranges.md b/content/en/docs/v3.4/project-administration/container-limit-ranges.md
new file mode 100644
index 000000000..8fa82fa9d
--- /dev/null
+++ b/content/en/docs/v3.4/project-administration/container-limit-ranges.md
@@ -0,0 +1,47 @@
+---
+title: "Container Limit Ranges"
+keywords: 'Kubernetes, KubeSphere, resource, quotas, limits, requests, limit ranges, containers'
+description: 'Learn how to set default container limit ranges in a project.'
+linkTitle: "Container Limit Ranges"
+weight: 13400
+---
+
+A container can use as much CPU and memory as set by [the resource quota for a project](../../workspace-administration/project-quotas/). At the same time, KubeSphere uses requests and limits to control resource usage (for example, CPU and memory) for a container, also known as [LimitRanges](https://kubernetes.io/docs/concepts/policy/limit-range/) in Kubernetes. Requests make sure that a container can get the resources it needs, as they are specifically guaranteed and reserved. Limits, on the other hand, ensure that a container never uses resources above a certain value.
+
+When you create a workload, such as a Deployment, you configure [Kubernetes requests and limits](https://kubesphere.io/blogs/understand-requests-and-limits-in-kubernetes/) for the container. To have these request and limit fields pre-populated with values, you can set default limit ranges.
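+
+Under the hood, the default container quotas you set on the console correspond to a Kubernetes `LimitRange` object in the project namespace. The following is a minimal sketch; the object name, namespace, and resource values are illustrative assumptions, not necessarily what KubeSphere generates:
+
+```yaml
+apiVersion: v1
+kind: LimitRange
+metadata:
+  name: default-container-limits   # illustrative name
+  namespace: demo-project          # illustrative project namespace
+spec:
+  limits:
+    - type: Container
+      defaultRequest:              # pre-populates container requests
+        cpu: 250m
+        memory: 256Mi
+      default:                     # pre-populates container limits
+        cpu: 500m
+        memory: 512Mi
+```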
+
+This tutorial demonstrates how to set default limit ranges for containers in a project.
+
+## Prerequisites
+
+You have an available workspace, a project and a user (`project-admin`). The user must have the `admin` role at the project level. For more information, see [Create Workspaces, Projects, Users and Roles](../../quick-start/create-workspace-and-project/).
+
+## Set Default Limit Ranges
+
+1. Log in to the console as `project-admin` and go to a project. On the **Overview** page, you can see default limit ranges remain unset if the project is newly created. Click **Edit Quotas** next to **Default Container Quotas Not Set** to configure limit ranges.
+
+2. In the dialog that appears, you can see that KubeSphere does not set any requests or limits by default. To set requests and limits to control CPU and memory resources, use the slider to move to a desired value or enter numbers directly. Leaving a field blank means you do not set any requests or limits.
+
+ {{< notice note >}}
+
+ The limit can never be lower than the request.
+
+ {{</ notice >}}
+
+3. Click **OK** to finish setting limit ranges.
+
+4. Go to **Basic Information** in **Project Settings**, and you can see default limit ranges for containers in a project.
+
+5. To change default limit ranges, click **Edit Project** on the **Basic Information** page and select **Edit Default Container Quotas**.
+
+6. Change limit ranges directly in the dialog and click **OK**.
+
+7. When you create a workload, requests and limits of the container will be pre-populated with values.
+ {{< notice note >}}
+ For more information, see **Resource Request** in [Container Image Settings](../../project-user-guide/application-workloads/container-image-settings/).
+
+ {{</ notice >}}
+
+## See Also
+
+[Project Quotas](../../workspace-administration/project-quotas/)
diff --git a/content/en/docs/v3.4/project-administration/disk-log-collection.md b/content/en/docs/v3.4/project-administration/disk-log-collection.md
new file mode 100644
index 000000000..634317a22
--- /dev/null
+++ b/content/en/docs/v3.4/project-administration/disk-log-collection.md
@@ -0,0 +1,75 @@
+---
+title: "Log Collection"
+keywords: 'KubeSphere, Kubernetes, project, disk, log, collection'
+description: 'Enable log collection so that you can collect, manage, and analyze logs in a unified way.'
+linkTitle: "Log Collection"
+weight: 13600
+---
+
+KubeSphere supports multiple log collection methods so that Ops teams can collect, manage, and analyze logs in a unified and flexible way.
+
+This tutorial demonstrates how to collect logs for an example app.
+
+## Prerequisites
+
+- You need to create a workspace, a project and a user (`project-admin`). The user must be invited to the project with the role of `admin` at the project level. For more information, see [Create Workspaces, Projects, Users and Roles](../../quick-start/create-workspace-and-project/).
+- You need to enable [the KubeSphere Logging System](../../pluggable-components/logging/).
+
+## Enable Log Collection
+
+1. Log in to the web console of KubeSphere as `project-admin` and go to your project.
+
+2. From the left navigation bar, click **Log Collection** in **Project Settings**, and then click the toggle to enable the feature.
+
+## Create a Deployment
+
+1. From the left navigation bar, select **Workloads** in **Application Workloads**. Under the **Deployments** tab, click **Create**.
+
+2. In the dialog that appears, set a name for the Deployment (for example, `demo-deployment`) and click **Next**.
+
+3. Under **Containers**, click **Add Container**.
+
+4. Enter `alpine` in the search bar to use the image (tag: `latest`) as an example.
+
+5. Scroll down to **Start Command** and select the checkbox. Enter the following values for **Command** and **Parameters** respectively, click **√**, and then click **Next**.
+
+ **Command**
+
+ ```bash
+ /bin/sh
+ ```
+
+ **Parameters**
+
+ ```bash
+ -c,if [ ! -d /data/log ];then mkdir -p /data/log;fi; while true; do date >> /data/log/app-test.log; sleep 30;done
+ ```
+
+ {{< notice note >}}
+
+ The command and parameters above mean that the date information will be exported to `app-test.log` in `/data/log` every 30 seconds.
+
+ {{</ notice >}}
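+
+ For reference, the following sketch shows how the start command and parameters above map into the container section of the workload manifest; the container name is an illustrative assumption:
+
+ ```yaml
+ containers:
+   - name: container-demo          # illustrative name
+     image: alpine:latest
+     command:
+       - /bin/sh
+     args:
+       - -c
+       - if [ ! -d /data/log ];then mkdir -p /data/log;fi; while true; do date >> /data/log/app-test.log; sleep 30;done
+ ```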
+
+6. On the **Storage Settings** tab, click the icon on the right.
+
+The following built-in roles are available in a project:
+
+| Built-in Roles | Description |
+| --- | --- |
+| `viewer` | Project viewer who can view all resources in the project. |
+| `operator` | Project operator who can manage resources other than users and roles in the project. |
+| `admin` | Project administrator who has full control over all resources in the project. |
+
+## Invite a New Member
+
+1. Navigate to **Project Members** under **Project Settings**, and click **Invite**.
+
+2. Invite a user to the project by clicking the icon on the right of the username and assigning a role to the user.
+
+3. After you add the user to the project, click **OK**. In **Project Members**, you can see the user in the list.
+
+4. To edit the role of an existing user or remove the user from the project, click the icon on the right of the user and select the corresponding operation.
diff --git a/content/en/docs/v3.4/project-user-guide/_index.md b/content/en/docs/v3.4/project-user-guide/_index.md
new file mode 100644
index 000000000..7dc50ce3b
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/_index.md
@@ -0,0 +1,12 @@
+---
+title: "Project User Guide"
+description: "Help you to better manage resources in a KubeSphere project"
+layout: "second"
+
+linkTitle: "Project User Guide"
+weight: 10000
+
+icon: "/images/docs/v3.3/docs.svg"
+---
+
+In KubeSphere, project users with necessary permissions are able to perform a series of tasks, such as creating different kinds of workloads, configuring volumes, Secrets, and ConfigMaps, setting various release strategies, monitoring app metrics, and creating alerting policies. As KubeSphere features great flexibility and compatibility without any code hacking into native Kubernetes, it is very convenient for users to get started with any feature required for their testing, development and production environments.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/project-user-guide/alerting/_index.md b/content/en/docs/v3.4/project-user-guide/alerting/_index.md
new file mode 100644
index 000000000..1b4523bc0
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/alerting/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Alerting"
+weight: 10700
+
+_build:
+ render: false
+---
diff --git a/content/en/docs/v3.4/project-user-guide/alerting/alerting-message.md b/content/en/docs/v3.4/project-user-guide/alerting/alerting-message.md
new file mode 100644
index 000000000..507563542
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/alerting/alerting-message.md
@@ -0,0 +1,27 @@
+---
+title: "Alerting Messages (Workload Level)"
+keywords: 'KubeSphere, Kubernetes, Workload, Alerting, Message, Notification'
+description: 'Learn how to view alerting messages for workloads.'
+linkTitle: "Alerting Messages (Workload Level)"
+weight: 10720
+---
+
+Alerting messages record detailed information of alerts triggered based on the alerting policy defined. This tutorial demonstrates how to view alerting messages at the workload level.
+
+## Prerequisites
+
+- You have enabled [KubeSphere Alerting](../../../pluggable-components/alerting/).
+- You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+- You have created a workload-level alerting policy and an alert has been triggered. For more information, refer to [Alerting Policies (Workload Level)](../alerting-policy/).
+
+## View Alerting Messages
+
+1. Log in to the console as `project-regular`, go to your project, and go to **Alerting Messages** under **Monitoring & Alerting**.
+
+2. On the **Alerting Messages** page, you can see all alerting messages in the list. The first column displays the summary and message you have defined in the notification of the alert. To view details of an alerting message, click the name of the alerting policy and click the **Alerting History** tab on the displayed page.
+
+3. On the **Alerting History** tab, you can see alert severity, monitoring targets, and activation time.
+
+## View Notifications
+
+If you also want to receive alert notifications (for example, email and Slack messages), you need to configure [a notification channel](../../../cluster-administration/platform-settings/notification-management/configure-email/) first.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/project-user-guide/alerting/alerting-policy.md b/content/en/docs/v3.4/project-user-guide/alerting/alerting-policy.md
new file mode 100644
index 000000000..d46cea3bd
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/alerting/alerting-policy.md
@@ -0,0 +1,60 @@
+---
+title: "Alerting Policies (Workload Level)"
+keywords: 'KubeSphere, Kubernetes, Workload, Alerting, Policy, Notification'
+description: 'Learn how to set alerting policies for workloads.'
+linkTitle: "Alerting Policies (Workload Level)"
+weight: 10710
+---
+
+KubeSphere provides alerting policies for nodes and workloads. This tutorial demonstrates how to create alerting policies for workloads in a project. See [Alerting Policy (Node Level)](../../../cluster-administration/cluster-wide-alerting-and-notification/alerting-policy/) to learn how to configure alerting policies for nodes.
+
+## Prerequisites
+
+- You have enabled [KubeSphere Alerting](../../../pluggable-components/alerting/).
+- To receive alert notifications, you must configure a [notification channel](../../../cluster-administration/platform-settings/notification-management/configure-email/) beforehand.
+- You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+- You have workloads in this project. If they are not ready, see [Deploy and Access Bookinfo](../../../quick-start/deploy-bookinfo-to-k8s/) to create a sample app.
+
+## Create an Alerting Policy
+
+1. Log in to the console as `project-regular` and go to your project. Go to **Alerting Policies** under **Monitoring & Alerting**, then click **Create**.
+
+2. In the displayed dialog box, provide the basic information as follows. Click **Next** to continue.
+
+ - **Name**. A concise and clear name as its unique identifier, such as `alert-demo`.
+ - **Alias**. Helps you better distinguish alerting policies.
+ - **Description**. A brief introduction to the alerting policy.
+ - **Threshold Duration (min)**. The status of the alerting policy becomes `Firing` when the duration of the condition configured in the alerting rule reaches the threshold.
+ - **Severity**. Allowed values include **Warning**, **Error** and **Critical**, providing an indication of how serious an alert is.
+
+3. On the **Rule Settings** tab, you can use the rule template or create a custom rule. To use the template, fill in the following fields.
+
+ - **Resource Type**. Select the resource type you want to monitor, such as **Deployment**, **StatefulSet**, and **DaemonSet**.
+ - **Monitoring Targets**. Depending on the resource type you select, the target can be different. You cannot see any target if you do not have any workload in the project.
+ - **Alerting Rule**. Define a rule for the alerting policy. These rules are based on Prometheus expressions and an alert will be triggered when conditions are met. You can monitor objects such as CPU and memory.
+
+ {{< notice note >}}
+
+ You can create a custom rule with PromQL by entering an expression in the **Monitoring Metrics** field (autocompletion supported). For more information, see [Querying Prometheus](https://prometheus.io/docs/prometheus/latest/querying/basics/).
+
+ {{</ notice >}}
+
+ Click **Next** to continue.
+
+4. On the **Message Settings** tab, enter the alert summary and message to be included in your notification, then click **Create**.
+
+5. An alerting policy is **Inactive** when just created. If the conditions in the rule expression are met, it first changes to **Pending**, and then turns to **Firing** if the conditions continue to be met for the specified duration.
+
+## Edit an Alerting Policy
+
+To edit an alerting policy after it is created, on the **Alerting Policies** page, click the icon on the right of the policy.
+
+1. Click **Edit** from the drop-down menu and edit the alerting policy following the same steps as you create it. Click **OK** on the **Message Settings** page to save it.
+
+2. Click **Delete** from the drop-down menu to delete an alerting policy.
+
+## View an Alerting Policy
+
+Click an alerting policy on the **Alerting Policies** page to see its detail information, including alerting rules and alerting history. You can also see the rule expression which is based on the template you use when creating the alerting policy.
+
+Under **Alert Monitoring**, the **Alert Monitoring** chart shows the actual usage or amount of resources over time. **Alerting Message** displays the customized message you set in notifications.
diff --git a/content/en/docs/v3.4/project-user-guide/application-workloads/_index.md b/content/en/docs/v3.4/project-user-guide/application-workloads/_index.md
new file mode 100644
index 000000000..d73a9f85a
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application-workloads/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Application Workloads"
+weight: 10200
+
+_build:
+ render: false
+---
diff --git a/content/en/docs/v3.4/project-user-guide/application-workloads/container-image-settings.md b/content/en/docs/v3.4/project-user-guide/application-workloads/container-image-settings.md
new file mode 100644
index 000000000..f0a78fcae
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application-workloads/container-image-settings.md
@@ -0,0 +1,268 @@
+---
+title: "Pod Settings"
+keywords: 'KubeSphere, Kubernetes, image, workload, setting, container'
+description: 'Learn different properties on the dashboard in detail as you set Pods for your workload.'
+linkTitle: "Pod Settings"
+weight: 10280
+---
+
+When you create Deployments, StatefulSets, or DaemonSets, you need to specify a Pod. At the same time, KubeSphere provides users with various options to customize workload configurations, such as health check probes, environment variables, and start commands. This page provides detailed explanations of the different properties in **Pod Settings**.
+
+{{< notice tip >}}
+
+You can enable **Edit YAML** in the upper-right corner to see corresponding values in the manifest file (YAML format) of properties on the dashboard.
+
+{{</ notice >}}
+
+## Pod Settings
+
+### Pod Replicas
+
+Set the number of replicated Pods by clicking the icons on the right.
+
+1. In the DaemonSet list, click the icon on the right of a DaemonSet and select an option from the menu to modify it.
+
+ - **Edit Information**: View and edit the basic information.
+ - **Edit YAML**: View, upload, download, or update the YAML file.
+ - **Re-create**: Re-create the DaemonSet.
+ - **Delete**: Delete the DaemonSet.
+
+2. Click the name of the DaemonSet and you can go to its details page.
+
+3. Click **More** to display the operations you can perform on this DaemonSet.
+
+ - **Roll Back**: Select the revision to roll back.
+ - **Edit Settings**: Configure update strategies, containers and volumes.
+ - **Edit YAML**: View, upload, download, or update the YAML file.
+ - **Re-create**: Re-create this DaemonSet.
+ - **Delete**: Delete the DaemonSet, and return to the DaemonSet list page.
+
+4. Click the **Resource Status** tab to view the port and Pod information of a DaemonSet.
+
+ - **Replica Status**: You cannot change the number of Pod replicas for a DaemonSet.
+ - **Pods**
+
+ - The Pod list provides detailed information of the Pod (status, node, Pod IP and resource usage).
+ - You can view the container information by clicking a Pod item.
+ - Click the container log icon to view output logs of the container.
+ - You can view the Pod details page by clicking the Pod name.
+
+### Revision records
+
+After the resource template of a workload is changed, a new revision record is generated and Pods are rescheduled for a version update. The latest 10 revisions are saved by default, and you can redeploy the workload based on a revision record.
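+
+If you prefer the command line, revision records can also be inspected and rolled back with `kubectl rollout`. The workload name and namespace below are illustrative assumptions:
+
+```bash
+# List the revision history of a DaemonSet.
+kubectl rollout history daemonset/demo-daemonset -n demo-project
+# Roll back to a specific revision.
+kubectl rollout undo daemonset/demo-daemonset -n demo-project --to-revision=2
+```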
+
+### Metadata
+
+Click the **Metadata** tab to view the labels and annotations of the DaemonSet.
+
+### Monitoring
+
+1. Click the **Monitoring** tab to view the CPU usage, memory usage, outbound traffic, and inbound traffic of the DaemonSet.
+
+2. Click the drop-down menu in the upper-right corner to customize the time range and sampling interval.
+
+3. Click the start or pause icon in the upper-right corner to start or stop automatic data refreshing.
+
+4. Click the refresh icon in the upper-right corner to manually refresh the data.
+
+### Environment variables
+
+Click the **Environment Variables** tab to view the environment variables of the DaemonSet.
+
+### Events
+
+Click the **Events** tab to view the events of the DaemonSet.
+
+
diff --git a/content/en/docs/v3.4/project-user-guide/application-workloads/deployments.md b/content/en/docs/v3.4/project-user-guide/application-workloads/deployments.md
new file mode 100644
index 000000000..062a03ec7
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application-workloads/deployments.md
@@ -0,0 +1,139 @@
+---
+title: "Deployments"
+keywords: 'KubeSphere, Kubernetes, Deployments, workload'
+description: 'Learn basic concepts of Deployments and how to create Deployments in KubeSphere.'
+linkTitle: "Deployments"
+
+weight: 10210
+---
+
+A Deployment controller provides declarative updates for Pods and ReplicaSets. You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. As a Deployment runs a number of replicas of your application, it automatically replaces instances that go down or malfunction. This is how Deployments make sure app instances are available to handle user requests.
+
+For more information, see the [official documentation of Kubernetes](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/).
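+
+For reference, the following is a minimal Deployment manifest comparable to what the steps below create on the console; the name, namespace, image, and replica count are illustrative assumptions:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: demo-deployment
+  namespace: demo-project
+spec:
+  replicas: 3
+  selector:
+    matchLabels:
+      app: demo-deployment
+  template:
+    metadata:
+      labels:
+        app: demo-deployment
+    spec:
+      containers:
+        - name: container-demo
+          image: nginx:1.25      # illustrative image
+          ports:
+            - containerPort: 80
+```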
+
+## Prerequisites
+
+You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Create a Deployment
+
+### Step 1: Open the dashboard
+
+Log in to the console as `project-regular`. Go to **Application Workloads** of a project, select **Workloads**, and click **Create** under the tab **Deployments**.
+
+### Step 2: Enter basic information
+
+Specify a name for the Deployment (for example, `demo-deployment`), select a project, and click **Next**.
+
+### Step 3: Set a Pod
+
+1. Before you set an image, define the number of replicated Pods in **Pod Replicas** by clicking the icons on the right.
+
+ In the Deployment list, click the icon on the right of a Deployment and select options from the menu to modify it.
+
+ - **Edit Information**: View and edit the basic information.
+ - **Edit YAML**: View, upload, download, or update the YAML file.
+ - **Re-create**: Re-create the Deployment.
+ - **Delete**: Delete the Deployment.
+
+2. Click the name of the Deployment and you can go to its details page.
+
+3. Click **More** to display the operations you can perform on this Deployment.
+
+ - **Roll Back**: Select the revision to roll back.
+ - **Edit Autoscaling**: Autoscale the replicas according to CPU and memory usage. If both CPU and memory are specified, replicas are added or deleted if any of the conditions is met.
+ - **Edit Settings**: Configure update strategies, containers and volumes.
+ - **Edit YAML**: View, upload, download, or update the YAML file.
+ - **Re-create**: Re-create this Deployment.
+ - **Delete**: Delete the Deployment, and return to the Deployment list page.
+
+4. Click the **Resource Status** tab to view the port and Pod information of the Deployment.
+
+ - **Replica Status**: Click the icons to adjust the number of Pod replicas.
+
+Click the start or pause icon in the upper-right corner to start or stop automatic data refreshing.
+
+Click the refresh icon in the upper-right corner to manually refresh the data.
+
+### Environment variables
+
+Click the **Environment Variables** tab to view the environment variables of the Deployment.
+
+### Events
+
+Click the **Events** tab to view the events of the Deployment.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/project-user-guide/application-workloads/horizontal-pod-autoscaling.md b/content/en/docs/v3.4/project-user-guide/application-workloads/horizontal-pod-autoscaling.md
new file mode 100755
index 000000000..663444f5c
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application-workloads/horizontal-pod-autoscaling.md
@@ -0,0 +1,104 @@
+---
+title: "Kubernetes HPA (Horizontal Pod Autoscaling) on KubeSphere"
+keywords: "Horizontal, Pod, Autoscaling, Autoscaler"
+description: "How to configure Kubernetes Horizontal Pod Autoscaling on KubeSphere."
+weight: 10290
+
+---
+
+This document describes how to configure Horizontal Pod Autoscaling (HPA) on KubeSphere.
+
+The Kubernetes HPA feature automatically adjusts the number of Pods to maintain average resource usage (CPU and memory) of Pods around preset values. For details about how HPA functions, see the [official Kubernetes document](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/).
+
+This document uses HPA based on CPU usage as an example. Operations for HPA based on memory usage are similar.
+
+## Prerequisites
+
+- You need to [enable the Metrics Server](../../../pluggable-components/metrics-server/).
+- You need to create a workspace, a project and a user (for example, `project-regular`). `project-regular` must be invited to the project and assigned the `operator` role. For more information, see [Create Workspaces, Projects, Users and Roles](/docs/v3.3/quick-start/create-workspace-and-project/).
+
+## Create a Service
+
+1. Log in to the KubeSphere web console as `project-regular` and go to your project.
+
+2. Choose **Services** in **Application Workloads** on the left navigation bar and click **Create** on the right.
+
+3. In the **Create Service** dialog box, click **Stateless Service**.
+
+4. Set the Service name (for example, `hpa`) and click **Next**.
+
+5. Click **Add Container**, set **Image** to `mirrorgooglecontainers/hpa-example` and click **Use Default Ports**.
+
+6. Set the CPU request (for example, 0.15 cores) for each container, click **√**, and click **Next**.
+
+ {{< notice note >}}
+
+ * To use HPA based on CPU usage, you must set the CPU request for each container, which is the minimum CPU resource reserved for each container (for details, see the [official Kubernetes document](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/)). The HPA feature compares the average Pod CPU usage with a target percentage of the average Pod CPU request.
+ * For HPA based on memory usage, you do not need to configure the memory request.
+
+ {{</ notice >}}
+
+7. Click **Next** on the **Storage Settings** tab and click **Create** on the **Advanced Settings** tab.
+
+## Configure Kubernetes HPA
+
+1. Select **Deployments** in **Workloads** on the left navigation bar and click the HPA Deployment (for example, hpa-v1) on the right.
+
+2. Click **More** and select **Edit Autoscaling** from the drop-down menu.
+
+3. In the **Horizontal Pod Autoscaling** dialog box, configure the HPA parameters and click **OK**.
+
+ * **Target CPU Usage (%)**: Target percentage of the average Pod CPU request.
+ * **Target Memory Usage (MiB)**: Target average Pod memory usage in MiB.
+ * **Minimum Replicas**: Minimum number of Pods.
+ * **Maximum Replicas**: Maximum number of Pods.
+
+ In this example, **Target CPU Usage (%)** is set to `60`, **Minimum Replicas** is set to `1`, and **Maximum Replicas** is set to `10`.
+
+ {{< notice note >}}
+
+ Ensure that the cluster can provide sufficient resources for all Pods when the number of Pods reaches the maximum. Otherwise, the creation of some Pods will fail.
+
+ {{</ notice >}}
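+
+ The settings above roughly correspond to a standard HorizontalPodAutoscaler object. The following is only a sketch, assuming the Deployment is named `hpa-v1` in a project named `demo-project`; it is not necessarily the exact manifest KubeSphere generates:
+
+ ```yaml
+ apiVersion: autoscaling/v2
+ kind: HorizontalPodAutoscaler
+ metadata:
+   name: hpa-v1
+   namespace: demo-project
+ spec:
+   scaleTargetRef:
+     apiVersion: apps/v1
+     kind: Deployment
+     name: hpa-v1
+   minReplicas: 1
+   maxReplicas: 10
+   metrics:
+     - type: Resource
+       resource:
+         name: cpu
+         target:
+           type: Utilization
+           averageUtilization: 60   # target CPU usage (%)
+ ```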
+
+## Verify HPA
+
+This section uses a Deployment that sends requests to the HPA Service to verify that HPA automatically adjusts the number of Pods to meet the resource usage target.
+
+### Create a load generator Deployment
+
+1. Select **Workloads** in **Application Workloads** on the left navigation bar and click **Create** on the right.
+
+2. In the **Create Deployment** dialog box, set the Deployment name (for example, `load-generator`) and click **Next**.
+
+3. Click **Add Container** and set **Image** to `busybox`.
+
+4. Scroll down in the dialog box, select **Start Command**, and set **Command** to `sh,-c` and **Parameters** to a loop that continuously sends requests to the `hpa` Service, for example `while true; do wget -q -O- http://<hpa-service-address>; done` (replace `<hpa-service-address>` with the actual address of the Service).
+
+5. To stop the load, click the icon on the right of the load generator Deployment (for example, load-generator-v1), and choose **Delete** from the drop-down list. After the load-generator Deployment is deleted, check the status of the HPA Deployment again. The number of Pods decreases to the minimum.
+
+{{< notice note >}}
+
+The system may require a few minutes to adjust the number of Pods and collect data.
+
+{{</ notice >}}
+
+## Edit HPA Configuration
+
+You can repeat the steps in [Configure Kubernetes HPA](#configure-kubernetes-hpa) to edit the HPA configuration.
+
+## Cancel HPA
+
+1. Choose **Workloads** in **Application Workloads** on the left navigation bar and click the HPA Deployment (for example, hpa-v1) on the right.
+
+2. Click the icon on the right of **Autoscaling** and choose **Cancel** from the drop-down list.
+
+
diff --git a/content/en/docs/v3.4/project-user-guide/application-workloads/jobs.md b/content/en/docs/v3.4/project-user-guide/application-workloads/jobs.md
new file mode 100644
index 000000000..cbfcf136f
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application-workloads/jobs.md
@@ -0,0 +1,162 @@
+---
+title: "Jobs"
+keywords: "KubeSphere, Kubernetes, Docker, Jobs"
+description: "Learn basic concepts of Jobs and how to create Jobs on KubeSphere."
+linkTitle: "Jobs"
+
+weight: 10250
+---
+
+A Job creates one or more Pods and ensures that a specified number of them successfully terminates. As Pods successfully complete, the Job tracks the successful completions. When a specified number of successful completions is reached, the task (namely, Job) is complete. Deleting a Job will clean up the Pods it created.
+
+A simple case is to create one Job object in order to reliably run one Pod to completion. The Job object will start a new Pod if the first Pod fails or is deleted (for example, due to a node hardware failure or a node reboot). You can also use a Job to run multiple Pods in parallel.
+
+The following example demonstrates specific steps of creating a Job (computing π to 2000 decimal places) on KubeSphere.
+
+## Prerequisites
+
+You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Create a Job
+
+### Step 1: Open the dashboard
+
+Log in to the console as `project-regular`. Go to **Jobs** under **Application Workloads** and click **Create**.
+
+### Step 2: Enter basic information
+
+Enter the basic information. The following describes the parameters:
+
+- **Name**: The name of the Job, which is also the unique identifier.
+- **Alias**: The alias name of the Job, making resources easier to identify.
+- **Description**: The description of the Job, which gives a brief introduction of the Job.
+
+### Step 3: Strategy settings (optional)
+
+You can set the values in this step or click **Next** to use the default values. Refer to the table below for detailed explanations of each field.
+
+| Name | Definition | Description |
+| ----------------------- | ---------------------------- | ------------------------------------------------------------ |
+| Maximum Retries | `spec.backoffLimit` | It specifies the maximum number of retries before this Job is marked as failed. It defaults to 6. |
+| Complete Pods | `spec.completions` | It specifies the desired number of successfully finished Pods the Job should be run with. Setting it to nil means that the success of any Pod signals the success of all Pods, and allows parallelism to have any positive value. Setting it to 1 means that parallelism is limited to 1 and the success of that Pod signals the success of the Job. For more information, see [Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/). |
+| Parallel Pods | `spec.parallelism` | It specifies the maximum desired number of Pods the Job should run at any given time. The actual number of Pods running in a steady state will be less than this number when the work left to do is less than max parallelism ((`.spec.completions - .status.successful`) < `.spec.parallelism`). For more information, see [Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/). |
+| Maximum Duration (s) | `spec.activeDeadlineSeconds` | It specifies the duration in seconds relative to the startTime that the Job may be active before the system tries to terminate it; the value must be a positive integer. |
+
+### Step 4: Set a Pod
+
+1. Select **Re-create Pod** for **Restart Policy**. You can only specify **Re-create Pod** or **Restart container** for **Restart Policy** when the Job is not completed:
+
+ - If **Restart Policy** is set to **Re-create Pod**, the Job creates a new Pod when the Pod fails, and the failed Pod does not disappear.
+
+ - If **Restart Policy** is set to **Restart container**, the Job will internally restart the container when the Pod fails, instead of creating a new Pod.
+
+2. Click **Add Container** which directs you to the **Add Container** page. Enter `perl` in the image search box and press **Enter**.
+
+3. On the same page, scroll down to **Start Command**. Enter the following command in the box, which computes π to 2000 decimal places and then prints it. Click **√** in the lower-right corner and select **Next** to continue.
+
+ ```bash
+ perl,-Mbignum=bpi,-wle,print bpi(2000)
+ ```
+
+ {{< notice note >}}For more information about setting images, see [Pod Settings](../container-image-settings/).{{</ notice >}}
+
+### Step 5: Inspect the Job manifest (optional)
+
+1. Enable **Edit YAML** in the upper-right corner which displays the manifest file of the Job. You can see all the values are set based on what you have specified in the previous steps.
+
+ ```yaml
+ apiVersion: batch/v1
+ kind: Job
+ metadata:
+ namespace: demo-project
+ labels:
+ app: job-test-1
+ name: job-test-1
+ annotations:
+ kubesphere.io/alias-name: Test
+ kubesphere.io/description: A job test
+ spec:
+ template:
+ metadata:
+ labels:
+ app: job-test-1
+ spec:
+ containers:
+ - name: container-4rwiyb
+ imagePullPolicy: IfNotPresent
+ image: perl
+ command:
+ - perl
+ - '-Mbignum=bpi'
+ - '-wle'
+ - print bpi(2000)
+ restartPolicy: Never
+ serviceAccount: default
+ initContainers: []
+ volumes: []
+ imagePullSecrets: null
+ backoffLimit: 5
+ completions: 4
+ parallelism: 2
+ activeDeadlineSeconds: 300
+ ```
+
+2. You can make adjustments in the manifest directly and click **Create** or disable the **Edit YAML** and get back to the **Create** page.
+
+ {{< notice note >}}You can skip **Storage Settings** and **Advanced Settings** for this tutorial. For more information, see [Mount volumes](../deployments/#step-4-mount-volumes) and [Configure advanced settings](../deployments/#step-5-configure-advanced-settings).{{</ notice >}}
+
+### Step 6: Check the result
+
+1. In the final step of **Advanced Settings**, click **Create** to finish. A new item will be added to the Job list if the creation is successful.
+
+2. Click this Job and go to **Job Records** where you can see the information of each execution record. There are four completed Pods since **Completions** was set to `4` in Step 3.
+
+ {{< notice tip >}}
+You can rerun the Job if it fails and the reason for failure is displayed under **Message**.
+ {{</ notice >}}
+
+3. In **Resource Status**, you can inspect the Pod status. Two Pods were created each time because **Parallel Pods** was set to `2`. Click the icon on the right and click the refresh icon to refresh the execution records.
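+
+ Optionally, you can also check the Job from the command line. The commands below use the names from the example manifest above (`job-test-1` in the `demo-project` namespace):
+
+ ```bash
+ # Check the Job and its Pods.
+ kubectl get job job-test-1 -n demo-project
+ kubectl get pods -n demo-project -l app=job-test-1
+ # Print the last line of output (the computed value of pi) from the completed Pods.
+ kubectl logs -n demo-project -l app=job-test-1 --tail=1
+ ```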
+
+### Resource status
+
+1. Click the **Resource Status** tab to view the Pods of the Job.
+
+2. Click the refresh icon to refresh the Pod information, and click the show or hide icon to display or hide the containers in each Pod.
+
+### Metadata
+
+Click the **Metadata** tab to view the labels and annotations of the Job.
+
+### Environment variables
+
+Click the **Environment Variables** tab to view the environment variables of the Job.
+
+### Events
+
+Click the **Events** tab to view the events of the Job.
diff --git a/content/en/docs/v3.4/project-user-guide/application-workloads/routes.md b/content/en/docs/v3.4/project-user-guide/application-workloads/routes.md
new file mode 100644
index 000000000..586f5cee2
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application-workloads/routes.md
@@ -0,0 +1,133 @@
+---
+title: "Routes"
+keywords: "KubeSphere, Kubernetes, Route, Ingress"
+description: "Learn basic concepts of Routes (i.e. Ingress) and how to create Routes in KubeSphere."
+weight: 10270
+---
+
+This document describes how to create, use, and edit a Route on KubeSphere.
+
+A Route on KubeSphere is the same as an [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress) on Kubernetes. You can use a Route and a single IP address to aggregate and expose multiple Services.
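+
+Behind the scenes, a Route is stored as a standard Ingress object. The following is a minimal sketch of such an object; the host, Service name, and port are illustrative assumptions:
+
+```yaml
+apiVersion: networking.k8s.io/v1
+kind: Ingress
+metadata:
+  name: demo-route
+  namespace: demo-project
+spec:
+  rules:
+    - host: demo.example.com        # illustrative domain name
+      http:
+        paths:
+          - path: /
+            pathType: Prefix
+            backend:
+              service:
+                name: demo-service  # illustrative Service name
+                port:
+                  number: 8080
+```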
+
+## Prerequisites
+
+- You need to create a workspace, a project and two users (for example, `project-admin` and `project-regular`). In the project, `project-admin` must have the `admin` role and `project-regular` must have the `operator` role. For more information, see [Create Workspaces, Projects, Users and Roles](/docs/v3.3/quick-start/create-workspace-and-project/).
+- If the Route is to be accessed in HTTPS mode, you need to [create a Secret](/docs/v3.3/project-user-guide/configuration/secrets/) that contains the `tls.crt` (TLS certificate) and `tls.key` (TLS private key) keys used for encryption.
+- You need to [create at least one Service](/docs/v3.3/project-user-guide/application-workloads/services/). This document uses a demo Service as an example, which returns the Pod name to external requests.
+
+## Configure the Route Access Method
+
+1. Log in to the KubeSphere web console as `project-admin` and go to your project.
+
+2. Select **Gateway Settings** in **Project Settings** on the left navigation bar and click **Enable Gateway** on the right.
+
+3. In the displayed dialog box, set **Access Mode** to **NodePort** or **LoadBalancer**, and click **OK**.
+
+ {{< notice note >}}
+
+ If **Access Mode** is set to **LoadBalancer**, you may need to enable the load balancer plugin in your environment according to the plugin user guide.
+
+ {{</ notice >}}
+
+## Create a Route
+
+### Step 1: Configure basic information
+
+1. Log out of the KubeSphere web console, log back in as `project-regular`, and go to the same project.
+
+2. Choose **Routes** in **Application Workloads** on the left navigation bar and click **Create** on the right.
+
+3. On the **Basic Information** tab, configure the basic information about the Route and click **Next**.
+ * **Name**: Name of the Route, which is used as a unique identifier.
+ * **Alias**: Alias of the Route.
+ * **Description**: Description of the Route.
+
+### Step 2: Configure routing rules
+
+1. On the **Routing Rules** tab, click **Add Routing Rule**.
+
+2. Select a mode, configure routing rules, click **√**, and click **Next**.
+
+ * **Auto Generate**: KubeSphere automatically generates a domain name for the Route.
+
+1. In the Service list, click the icon on the right of a Service to further edit it, such as its metadata (excluding **Name**), YAML, port, and Internet access.
+
+ - **Edit Information**: View and edit the basic information.
+ - **Edit YAML**: View, upload, download, or update the YAML file.
+ - **Edit Service**: View the access type and set selectors and ports.
+ - **Edit External Access**: Edit external access method for the Service.
+ - **Delete**: When you delete a Service, associated resources will be displayed. If you check them, they will be deleted together with the Service.
+
+2. Click the name of the Service and you can go to its details page.
+
+ - Click **More** to expand the drop-down menu which is the same as the one in the Service list.
+ - The Pod list provides detailed information of the Pod (status, node, Pod IP and resource usage).
+ - You can view the container information by clicking a Pod item.
+ - Click the container log icon to view output logs of the container.
+ - You can view the Pod details page by clicking the Pod name.
+
+### Resource status
+
+1. Click the **Resource Status** tab to view information about the Service ports, workloads, and Pods.
+
+2. In the **Pods** area, click the refresh icon to refresh the Pod information, and click the show or hide icon to display or hide the containers in each Pod.
+
+### Metadata
+
+Click the **Metadata** tab to view the labels and annotations of the Service.
+
+### Events
+
+Click the **Events** tab to view the events of the Service.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/project-user-guide/application-workloads/statefulsets.md b/content/en/docs/v3.4/project-user-guide/application-workloads/statefulsets.md
new file mode 100644
index 000000000..d5b160d06
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application-workloads/statefulsets.md
@@ -0,0 +1,148 @@
+---
+title: "Kubernetes StatefulSet in KubeSphere"
+keywords: 'KubeSphere, Kubernetes, StatefulSets, Dashboard, Service'
+description: 'Learn basic concepts of StatefulSets and how to create StatefulSets on KubeSphere.'
+linkTitle: "StatefulSets"
+weight: 10220
+---
+
+As a workload API object, a Kubernetes StatefulSet is used to manage stateful applications. It is responsible for deploying and scaling a set of Pods, and guarantees the ordering and uniqueness of these Pods.
+
+Like a Deployment, a StatefulSet manages Pods that are based on an identical container specification. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of its Pods. These Pods are created from the same specification, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
+
+If you want to use storage volumes to provide persistence for your workload, you can use a StatefulSet as part of the solution. Although individual Pods in a StatefulSet are susceptible to failure, the persistent Pod identifiers make it easier to match existing volumes to the new Pods that replace any that have failed.
+
+StatefulSets are valuable for applications that require one or more of the following.
+
+- Stable, unique network identifiers.
+- Stable, persistent storage.
+- Ordered, graceful deployment, and scaling.
+- Ordered, automated rolling updates.
+
+For more information, see the [official documentation of Kubernetes](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/).
+
+## Prerequisites
+
+You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Create a Kubernetes StatefulSet
+
+In KubeSphere, a **Headless** service is also created when you create a StatefulSet. You can find the headless service in [Services](../services/) under **Application Workloads** in a project.
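+
+For reference, a headless Service differs from a regular Service only in that its `clusterIP` is set to `None`. The following is a minimal sketch; the names and port are illustrative assumptions:
+
+```yaml
+apiVersion: v1
+kind: Service
+metadata:
+  name: demo-stateful-headless
+  namespace: demo-project
+spec:
+  clusterIP: None        # "None" makes the Service headless
+  selector:
+    app: demo-stateful
+  ports:
+    - port: 80
+      targetPort: 80
+```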
+
+### Step 1: Open the dashboard
+
+Log in to the console as `project-regular`. Go to **Application Workloads** of a project, select **Workloads**, and click **Create** under the **StatefulSets** tab.
+
+### Step 2: Enter basic information
+
+Specify a name for the StatefulSet (for example, `demo-stateful`), select a project, and click **Next**.
+
+### Step 3: Set a Pod
+
+1. Before you set an image, define the number of replicated Pods in **Pod Replicas** by clicking the icons on the right.
+
+ In the StatefulSet list, click the icon on the right of a StatefulSet and select options from the menu to modify it.
+
+ - **Edit Information**: View and edit the basic information.
+ - **Edit YAML**: View, upload, download, or update the YAML file.
+ - **Re-create**: Re-create the StatefulSet.
+ - **Delete**: Delete the StatefulSet.
+
+2. Click the name of the StatefulSet and you can go to its details page.
+
+3. Click **More** to display the operations you can perform on this StatefulSet.
+
+ - **Roll Back**: Select the revision to roll back.
+ - **Edit Service**: Set the port to expose the container image and the service port.
+ - **Edit Settings**: Configure update strategies, containers and volumes.
+ - **Edit YAML**: View, upload, download, or update the YAML file.
+ - **Re-create**: Re-create this StatefulSet.
+ - **Delete**: Delete the StatefulSet, and return to the StatefulSet list page.
+
+4. Click the **Resource Status** tab to view the port and Pod information of a StatefulSet.
+
+ - **Replica Status**: Click the icons to adjust the number of Pod replicas.
+
+Click the start or pause icon in the upper-right corner to start or stop automatic data refreshing.
+
+Click the refresh icon in the upper-right corner to manually refresh the data.
+
+### Environment variables
+
+Click the **Environment Variables** tab to view the environment variables of the StatefulSet.
+
+### Events
+
+Click the **Events** tab to view the events of the StatefulSet.
+
diff --git a/content/en/docs/v3.4/project-user-guide/application/_index.md b/content/en/docs/v3.4/project-user-guide/application/_index.md
new file mode 100644
index 000000000..7e0d6b2b6
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Applications"
+weight: 10100
+
+_build:
+ render: false
+---
diff --git a/content/en/docs/v3.4/project-user-guide/application/app-template.md b/content/en/docs/v3.4/project-user-guide/application/app-template.md
new file mode 100644
index 000000000..30958f0bc
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application/app-template.md
@@ -0,0 +1,33 @@
+---
+title: "App Templates"
+keywords: 'Kubernetes, Chart, Helm, KubeSphere, Application Template, Repository'
+description: 'Understand the concept of app templates and how they can help to deploy applications within enterprises.'
+linkTitle: "App Templates"
+weight: 10110
+---
+
+An app template serves as a way for users to upload, deliver, and manage apps. Generally, an app is composed of one or more Kubernetes workloads (for example, [Deployments](../../../project-user-guide/application-workloads/deployments/), [StatefulSets](../../../project-user-guide/application-workloads/statefulsets/) and [DaemonSets](../../../project-user-guide/application-workloads/daemonsets/)) and [Services](../../../project-user-guide/application-workloads/services/) based on how it functions and communicates with the external environment. Apps that are uploaded as app templates are built based on a [Helm](https://helm.sh/) package.
+
+## How App Templates Work
+
+You can deliver Helm charts to the public repository of KubeSphere or import a private app repository to offer app templates.
+
+The public repository, also known as the App Store on KubeSphere, is accessible to every tenant in a workspace. After [uploading the Helm chart of an app](../../../workspace-administration/upload-helm-based-application/), you can deploy your app to test its functions and submit it for review. Ultimately, you have the option to release it to the App Store after it is approved. For more information, see [Application Lifecycle Management](../../../application-store/app-lifecycle-management/).
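+
+Before uploading, an app is typically packaged as a Helm chart with the Helm CLI. The following is only a sketch; the chart name is an illustrative assumption:
+
+```bash
+helm create demo-app      # scaffold a new chart locally
+helm lint demo-app        # validate the chart structure and templates
+helm package demo-app     # produce demo-app-0.1.0.tgz, ready to upload as an app template
+```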
+
+For a private repository, only users with required permissions are allowed to [add private repositories](../../../workspace-administration/app-repository/import-helm-repository/) in a workspace. Generally, a private repository is built on an object storage service, such as MinIO. Once imported to KubeSphere, these private repositories serve as application pools that provide app templates.
+
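+As a rough sketch of what such a repository contains, a Helm chart is packaged into a versioned archive and indexed before it is served from the storage backend (the chart directory `my-app` and the URL below are placeholders):
+
+```bash
+# Package the chart directory into my-app-<version>.tgz
+helm package ./my-app
+
+# Generate or update index.yaml, the file an app repository exposes to clients
+helm repo index . --url https://charts.example.com
+```
+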
+{{< notice note >}}
+
+Individual apps that are [uploaded as Helm charts](../../../workspace-administration/upload-helm-based-application/) to KubeSphere are displayed in the App Store together with built-in apps after they are approved and released. Besides, when you select app templates from private app repositories, you can also see **Current workspace** in the list, which stores these individual apps uploaded as Helm charts.
+
+{{</ notice >}}
+
+KubeSphere deploys app repository services based on [OpenPitrix](https://github.com/openpitrix/openpitrix) as a [pluggable component](../../../pluggable-components/app-store/).
+
+## Why App Templates
+
+App templates enable users to deploy and manage apps in a visualized way. Internally, they play an important role as shared resources (for example, databases, middleware, and operating systems) that enterprises create for coordination and cooperation within teams. Externally, app templates set industry standards for building and delivering apps. Users can take advantage of app templates in different scenarios to meet their own needs through one-click deployment.
+
+In addition, as OpenPitrix is integrated into KubeSphere to provide application management across the entire lifecycle, the platform allows ISVs, developers, and regular users to all participate in the process. Backed by the multi-tenant system of KubeSphere, each tenant is only responsible for their own part, such as app uploading, app review, release, testing, and version management. Ultimately, enterprises can build their own App Store and enrich their application pools with their customized standards. As such, apps can also be delivered in a standardized fashion.
+
+For more information about how to use app templates, see [Deploy Apps from App Templates](../deploy-app-from-template/).
\ No newline at end of file
diff --git a/content/en/docs/v3.4/project-user-guide/application/compose-app.md b/content/en/docs/v3.4/project-user-guide/application/compose-app.md
new file mode 100644
index 000000000..5a7e7bb27
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application/compose-app.md
@@ -0,0 +1,96 @@
+---
+title: "Create a Microservices-based App"
+keywords: 'KubeSphere, Kubernetes, service mesh, microservices'
+description: 'Learn how to compose a microservice-based application from scratch.'
+linkTitle: "Create a Microservices-based App"
+weight: 10140
+---
+
+With each microservice handling a single part of the app's functionality, an app can be divided into different components. These components have their own responsibilities and limitations, independent from each other. In KubeSphere, this kind of app is called a **Composed App**, which can be built from newly created Services or existing Services.
+
+This tutorial demonstrates how to create a microservices-based app Bookinfo, which is composed of four Services, and set a customized domain name to access the app.
+
+## Prerequisites
+
+- You need to create a workspace, a project, and a user (`project-regular`) for this tutorial. The user needs to be invited to the project with the `operator` role. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+- `project-admin` needs to [set the project gateway](../../../project-administration/project-gateway/) so that `project-regular` can define a domain name when creating the app.
+
+## Create Microservices that Compose an App
+
+1. Log in to the web console of KubeSphere and navigate to **Apps** in **Application Workloads** of your project. On the **Composed Apps** tab, click **Create**.
+
+2. Set a name for the app (for example, `bookinfo`) and click **Next**.
+
+3. On the **Service Settings** page, you need to create microservices that compose the app. Click **Create Service** and select **Stateless Service**.
+
+4. Set a name for the Service (for example, `productpage`) and click **Next**.
+
+ {{< notice note >}}
+
+ You can create a Service on the dashboard directly or enable **Edit YAML** in the upper-right corner to edit the YAML file.
+
+   {{</ notice >}}
+
+5. Click **Add Container** under **Containers** and enter `kubesphere/examples-bookinfo-productpage-v1:1.13.0` in the search box to use the Docker Hub image.
+
+ {{< notice note >}}
+
+   You must press **Enter** on your keyboard after you enter the image name.
+
+   {{</ notice >}}
+
+6. Click **Use Default Ports**. For more information about image settings, see [Pod Settings](../../../project-user-guide/application-workloads/container-image-settings/). Click **√** in the lower-right corner and **Next** to continue.
+
+7. On the **Storage Settings** page, [add a volume](../../../project-user-guide/storage/volumes/) or click **Next** to continue.
+
+8. Click **Create** on the **Advanced Settings** page.
+
+9. Similarly, add the other three microservices for the app. Here is the image information:
+
+ | Service | Name | Image |
+ | --------- | --------- | ------------------------------------------------ |
+ | Stateless | `details` | `kubesphere/examples-bookinfo-details-v1:1.13.0` |
+ | Stateless | `reviews` | `kubesphere/examples-bookinfo-reviews-v1:1.13.0` |
+ | Stateless | `ratings` | `kubesphere/examples-bookinfo-ratings-v1:1.13.0` |
+
+10. When you finish adding microservices, click **Next**.
+
+11. On the **Route Settings** page, click **Add Routing Rule**. On the **Specify Domain** tab, set a domain name for your app (for example, `demo.bookinfo`) and select `HTTP` in the **Protocol** field. For `Paths`, select the Service `productpage` and port `9080`. Click **OK** to continue.
+
+ {{< notice note >}}
+
+The button **Add Routing Rule** is not visible if the project gateway is not set.
+
+{{</ notice >}}
+
+12. You can add more rules or click **Create** to finish the process.
+
+13. Wait for your app to reach the **Ready** status.
+
+
+## Access the App
+
+1. As you set a domain name for the app, you need to add an entry in the hosts (`/etc/hosts`) file. For example, add the IP address and hostname as below:
+
+ ```txt
+ 192.168.0.9 demo.bookinfo
+ ```
+
+ {{< notice note >}}
+
+ You must add your **own** IP address and hostname.
+
+   {{</ notice >}}
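+
+   To quickly check that the domain name resolves and the project gateway responds, you can use `curl`. The port `32278` below is a placeholder for your gateway's NodePort:
+
+   ```bash
+   curl http://demo.bookinfo:32278/productpage
+   ```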
+
+2. In **Composed Apps**, click the app you just created.
+
+3. In **Resource Status**, click **Access Service** under **Routes** to access the app.
+
+ {{< notice note >}}
+
+ Make sure you open the port in your security group.
+
+   {{</ notice >}}
+
+4. Click **Normal user** and **Test user** respectively to see other **Services**.
+
diff --git a/content/en/docs/v3.4/project-user-guide/application/deploy-app-from-appstore.md b/content/en/docs/v3.4/project-user-guide/application/deploy-app-from-appstore.md
new file mode 100644
index 000000000..f9613b89c
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/application/deploy-app-from-appstore.md
@@ -0,0 +1,62 @@
+---
+title: "Deploy Apps from the App Store"
+keywords: 'Kubernetes, Chart, Helm, KubeSphere, Application, App Store'
+description: 'Learn how to deploy an application from the App Store.'
+linkTitle: "Deploy Apps from the App Store"
+weight: 10130
+---
+
+The [App Store](../../../application-store/) is also the public app repository on the platform, which means every tenant on the platform can view the applications in the Store regardless of which workspace they belong to. The App Store contains 16 featured enterprise-ready containerized apps as well as apps released by tenants from different workspaces on the platform. Any authenticated user can deploy applications from the Store. This is different from private app repositories, which are only accessible to tenants in the workspace where they are imported.
+
+This tutorial demonstrates how to quickly deploy [NGINX](https://www.nginx.com/) from the KubeSphere App Store powered by [OpenPitrix](https://github.com/openpitrix/openpitrix) and access its service through a NodePort.
+
+## Prerequisites
+
+- You have enabled [OpenPitrix (App Store)](../../../pluggable-components/app-store/).
+- You need to create a workspace, a project, and a user (`project-regular`) for this tutorial. The user must be invited to the project and granted the `operator` role. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Hands-on Lab
+
+### Step 1: Deploy NGINX from the App Store
+
+1. Log in to the web console of KubeSphere as `project-regular` and click **App Store** in the upper-left corner.
+
+ {{< notice note >}}
+
+ You can also go to **Apps** under **Application Workloads** in your project, click **Create**, and select **From App Store** to go to the App Store.
+
+   {{</ notice >}}
+
+2. Search for NGINX, click it, and click **Install** on the **App Information** page. Make sure you click **Agree** in the displayed **Deployment Agreement** dialog box.
+
+3. Set a name and select an app version, confirm the location where NGINX will be deployed, and click **Next**.
+
+4. In **App Settings**, specify the number of replicas to deploy for the app and enable Ingress based on your needs. When you finish, click **Install**.
+
+ {{< notice note >}}
+
+ To specify more values for NGINX, use the toggle to see the app’s manifest in YAML format and edit its configurations.
+
+   {{</ notice >}}
+
+5. Wait until NGINX is up and running.
+
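+You can also confirm that the Pods are running from the web kubectl terminal or any terminal with access to the cluster (the project name `demo-project` is a placeholder):
+
+```bash
+kubectl -n demo-project get pods
+```
+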
+### Step 2: Access NGINX
+
+To access NGINX outside the cluster, you need to expose the app through a NodePort first.
+
+1. Go to **Services** in the created project and click the service name of NGINX.
+
+2. On the Service details page, click **More** and select **Edit External Access** from the drop-down menu.
+
+3. Select **NodePort** for **Access Method** and click **OK**. For more information, see [Project Gateway](../../../project-administration/project-gateway/).
+
+4. Under **Ports**, view the exposed port.
+
+5. Access NGINX through `
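+
+   For example, assuming a node IP of `192.168.0.9` and a NodePort of `30080` (both placeholders, use the values from your own environment):
+
+   ```bash
+   curl http://192.168.0.9:30080
+   ```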
on the right and select the operation below from the drop-down list.
+
+ - **Edit Information**: View and edit the basic information.
+ - **Edit YAML**: View, upload, download, or update the YAML file.
+ - **Edit Settings**: Modify the key-value pair of the ConfigMap.
+ - **Delete**: Delete the ConfigMap.
+
+2. Click the name of the ConfigMap to go to its details page. Under the tab **Data**, you can see all the key-value pairs you have added for the ConfigMap.
+
+3. Click **More** to display the operations you can perform on this ConfigMap.
+
+ - **Edit YAML**: View, upload, download, or update the YAML file.
+ - **Edit Settings**: Modify the key-value pair of the ConfigMap.
+ - **Delete**: Delete the ConfigMap, and return to the list page.
+
+4. Click **Edit Information** to view and edit the basic information.
+
+
+## Use a ConfigMap
+
+When you create workloads, [Services](../../../project-user-guide/application-workloads/services/), [Jobs](../../../project-user-guide/application-workloads/jobs/) or [CronJobs](../../../project-user-guide/application-workloads/cronjobs/), you may need to add environment variables for containers. On the **Add Container** page, check **Environment Variables** and click **From configmap** to use a ConfigMap from the list.
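+
+The dashboard operation above corresponds to a standard Kubernetes `configMapKeyRef` reference. A minimal sketch (the ConfigMap name `demo-configmap` and the key `MYSQL_DATABASE` are placeholders):
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: configmap-env-demo
+spec:
+  containers:
+    - name: container-demo
+      image: nginx
+      env:
+        - name: MYSQL_DATABASE
+          valueFrom:
+            configMapKeyRef:
+              name: demo-configmap
+              key: MYSQL_DATABASE
+```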
diff --git a/content/en/docs/v3.4/project-user-guide/configuration/image-registry.md b/content/en/docs/v3.4/project-user-guide/configuration/image-registry.md
new file mode 100644
index 000000000..0cce22b71
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/configuration/image-registry.md
@@ -0,0 +1,104 @@
+---
+title: "Image Registries"
+keywords: 'KubeSphere, Kubernetes, docker, Secrets'
+description: 'Learn how to create an image registry on KubeSphere.'
+linkTitle: "Image Registries"
+weight: 10430
+---
+
+A Docker image is a read-only template that can be used to deploy container services. Each image has a unique identifier (for example, image name:tag). For example, an image can contain a complete package of an Ubuntu operating system environment with only Apache and a few applications installed. An image registry is used to store and distribute Docker images.
+
+This tutorial demonstrates how to create Secrets for different image registries.
+
+## Prerequisites
+
+You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Create a Secret
+
+When you create workloads, [Services](../../../project-user-guide/application-workloads/services/), [Jobs](../../../project-user-guide/application-workloads/jobs/), or [CronJobs](../../../project-user-guide/application-workloads/cronjobs/), you can select images from your private registry in addition to the public registry. To use images from your private registry, you must create a Secret for it so that the registry can be integrated into KubeSphere.
+
+### Step 1: Open the dashboard
+
+Log in to the web console of KubeSphere as `project-regular`. Go to **Configuration** of a project, select **Secrets** and click **Create**.
+
+### Step 2: Enter basic information
+
+Specify a name for the Secret (for example, `demo-registry-secret`) and click **Next** to continue.
+
+{{< notice tip >}}
+
+You can see the Secret's manifest file in YAML format by enabling **Edit YAML** in the upper-right corner. KubeSphere allows you to edit the manifest file directly to create a Secret. Alternatively, you can follow the steps below to create a Secret via the dashboard.
+
+{{</ notice >}}
+
+### Step 3: Specify image registry information
+
+Select **Image registry information** for **Type**. To use images from your private registry as you create application workloads, you need to specify the following fields.
+
+- **Registry Address**. The address of the image registry that stores images for you to use when creating application workloads.
+- **Username**. The account name you use to log in to the registry.
+- **Password**. The password you use to log in to the registry.
+- **Email** (optional). Your email address.
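+
+These fields correspond to what `kubectl create secret docker-registry` expects, so a rough command-line equivalent of the dashboard steps is (all values below are placeholders):
+
+```bash
+kubectl -n demo-project create secret docker-registry demo-registry-secret \
+  --docker-server=docker.io \
+  --docker-username=<username> \
+  --docker-password=<password> \
+  --docker-email=<email>
+```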
+
+#### Add the Docker Hub registry
+
+1. Before you add your image registry in [Docker Hub](https://hub.docker.com/), make sure you have an available Docker Hub account. On the **Secret Settings** page, enter `docker.io` for **Registry Address** and enter your Docker ID and password for **Username** and **Password**. Click **Validate** to check whether the address is available.
+
+2. Click **Create**. Later, the Secret is displayed on the **Secrets** page. For more information about how to edit the Secret after you create it, see [Check Secret Details](../../../project-user-guide/configuration/secrets/#check-secret-details).
+
+#### Add the Harbor image registry
+
+[Harbor](https://goharbor.io/) is an open-source trusted cloud-native registry project that stores, signs, and scans content. Harbor extends the open-source Docker Distribution by adding the functionalities usually required by users such as security, identity and management. Harbor uses HTTP and HTTPS to serve registry requests.
+
+**HTTP**
+
+1. You need to modify the Docker configuration for all nodes within the cluster. For example, if there is an external Harbor registry and its IP address is `http://192.168.0.99`, then you need to add the field `--insecure-registry=192.168.0.99` to `/etc/systemd/system/docker.service.d/docker-options.conf`:
+
+ ```bash
+ [Service]
+ Environment="DOCKER_OPTS=--registry-mirror=https://registry.docker-cn.com --insecure-registry=10.233.0.0/18 --data-root=/var/lib/docker --log-opt max-size=50m --log-opt max-file=5 \
+ --insecure-registry=192.168.0.99"
+ ```
+
+ {{< notice note >}}
+
+ - Replace the image registry address with your own registry address.
+
+ - `Environment` represents [dockerd options](https://docs.docker.com/engine/reference/commandline/dockerd/).
+
+ - `--insecure-registry` is required by the Docker daemon for the communication with an insecure registry. Refer to [Docker documentation](https://docs.docker.com/engine/reference/commandline/dockerd/#insecure-registries) for its syntax.
+
+   {{</ notice >}}
+
+2. After that, reload the configuration file and restart Docker:
+
+ ```bash
+ sudo systemctl daemon-reload
+ ```
+
+ ```bash
+ sudo systemctl restart docker
+ ```
+
+3. Go back to the **Data Settings** page and select **Image registry information** for **Type**. Enter your Harbor IP address for **Registry Address** and enter the username and password.
+
+ {{< notice note >}}
+
+ If you want to use the domain name instead of the IP address with Harbor, you may need to configure the CoreDNS and nodelocaldns within the cluster.
+
+   {{</ notice >}}
+
+4. Click **Create**. Later, the Secret is displayed on the **Secrets** page. For more information about how to edit the Secret after you create it, see [Check Secret Details](../../../project-user-guide/configuration/secrets/#check-secret-details).
+
+**HTTPS**
+
+For the integration of the HTTPS-based Harbor registry, refer to [Harbor Documentation](https://goharbor.io/docs/1.10/install-config/configure-https/). Make sure you use `docker login` to connect to your Harbor registry.
+
+## Use an Image Registry
+
+When you set images, you can select a private image registry if its Secret has been created in advance. For example, click the arrow on the **Add Container** page to expand the registry list when you create a [Deployment](../../../project-user-guide/application-workloads/deployments/). After you choose the image registry, enter the image name and tag to use the image.
+
+If you use YAML to create a workload and need to use a private image registry, you need to manually add `kubesphere.io/imagepullsecrets` to `annotations` in your local YAML file, and enter the key-value pair in JSON format, where `key` must be the name of the container, and `value` must be the name of the secret, as shown in the following sample.
+
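+A minimal sketch of such a manifest is shown below. The container name `container-demo`, the Secret name `demo-registry-secret`, and the image are placeholders, and the annotation is placed in the workload's `metadata.annotations` as described above:
+
+```yaml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+  name: demo-deployment
+  annotations:
+    kubesphere.io/imagepullsecrets: '{"container-demo": "demo-registry-secret"}'
+spec:
+  replicas: 1
+  selector:
+    matchLabels:
+      app: demo
+  template:
+    metadata:
+      labels:
+        app: demo
+    spec:
+      containers:
+        - name: container-demo
+          image: demo/app:latest
+```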
+
\ No newline at end of file
diff --git a/content/en/docs/v3.4/project-user-guide/configuration/secrets.md b/content/en/docs/v3.4/project-user-guide/configuration/secrets.md
new file mode 100644
index 000000000..16e9dfd05
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/configuration/secrets.md
@@ -0,0 +1,121 @@
+---
+title: "Kubernetes Secrets in KubeSphere"
+keywords: 'KubeSphere, Kubernetes, Secrets'
+description: 'Learn how to create a Secret on KubeSphere.'
+linkTitle: "Secrets"
+weight: 10410
+---
+
+A Kubernetes [Secret](https://kubernetes.io/docs/concepts/configuration/secret/) is used to store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. To use a Secret, a Pod needs to reference it in one of [the following ways](https://kubernetes.io/docs/concepts/configuration/secret/#overview-of-secrets).
+
+- As a file in a volume mounted and consumed by containerized applications running in a Pod.
+- As environment variables used by containers in a Pod.
+- As image registry credentials when images are pulled for the Pod by the kubelet.
+
+This tutorial demonstrates how to create a Secret in KubeSphere.
+
+## Prerequisites
+
+You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Create a Kubernetes Secret
+
+### Step 1: Open the dashboard
+
+Log in to the console as `project-regular`. Go to **Configuration** of a project, select **Secrets** and click **Create**.
+
+### Step 2: Enter basic information
+
+Specify a name for the Secret (for example, `demo-secret`) and click **Next** to continue.
+
+{{< notice tip >}}
+
+You can see the Secret's manifest file in YAML format by enabling **Edit YAML** in the upper-right corner. KubeSphere allows you to edit the manifest file directly to create a Secret. Alternatively, you can follow the steps below to create a Secret via the dashboard.
+
+{{</ notice >}}
+
+### Step 3: Set a Secret
+
+1. Under the tab **Data Settings**, you must select a Secret type. In KubeSphere, you can create the following Kubernetes Secret types, indicated by the `type` field.
+
+ {{< notice note >}}
+
+   For all Secret types, values for all keys under the field `data` in the manifest must be base64-encoded strings. After you specify values on the KubeSphere dashboard, KubeSphere converts them into corresponding base64 character values in the YAML file. For example, if you enter `password` and `hello123` for **Key** and **Value** respectively on the **Edit Data** page when you create the default type of Secret, the actual value displayed in the YAML file is `aGVsbG8xMjM=` (namely, `hello123` in base64 format), automatically created by KubeSphere.
+
+   {{</ notice >}}
+
+ - **Default**. The type of [Opaque](https://kubernetes.io/docs/concepts/configuration/secret/#opaque-secrets) in Kubernetes, which is also the default Secret type in Kubernetes. You can create arbitrary user-defined data for this type of Secret. Click **Add Data** to add key-value pairs for it.
+
+ - **TLS information**. The type of [kubernetes.io/tls](https://kubernetes.io/docs/concepts/configuration/secret/#tls-secrets) in Kubernetes, which is used to store a certificate and its associated key that are typically used for TLS, such as TLS termination of Ingress resources. You must specify **Credential** and **Private Key** for it, indicated by `tls.crt` and `tls.key` in the YAML file respectively.
+
+ - **Image registry information**. The type of [kubernetes.io/dockerconfigjson](https://kubernetes.io/docs/concepts/configuration/secret/#docker-config-secrets) in Kubernetes, which is used to store the credentials for accessing a Docker registry for images. For more information, see [Image Registries](../image-registry/).
+
+ - **Username and password**. The type of [kubernetes.io/basic-auth](https://kubernetes.io/docs/concepts/configuration/secret/#basic-authentication-secret) in Kubernetes, which is used to store credentials needed for basic authentication. You must specify **Username** and **Password** for it, indicated by `username` and `password` in the YAML file respectively.
+
+2. For this tutorial, select the default type of Secret. Click **Add Data** and enter the **Key** (`MYSQL_ROOT_PASSWORD`) and **Value** (`123456`) to specify a Secret for MySQL.
+
+3. Click **√** in the lower-right corner to confirm. You can continue to add key-value pairs to the Secret or click **Create** to finish the creation. For more information about how to use the Secret, see [Compose and Deploy WordPress](../../../quick-start/wordpress-deployment/#task-3-create-an-application).
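+
+   To verify the base64 conversion mentioned above, you can encode and decode the value yourself in a terminal:
+
+   ```bash
+   # Encode: prints aGVsbG8xMjM=
+   echo -n "hello123" | base64
+
+   # Decode: prints hello123
+   echo "aGVsbG8xMjM=" | base64 -d
+   ```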
+
+## Check Secret Details
+
+1. After a Secret is created, it will be displayed in the list. You can click the icon on the right and select an operation from the menu to modify it.
+
+ - **Edit Information**: View and edit the basic information.
+ - **Edit YAML**: View, upload, download, or update the YAML file.
+ - **Edit Settings**: Modify the key-value pair of the Secret.
+ - **Delete**: Delete the Secret.
+
+2. Click the name of the Secret and you can go to its details page. Under the tab **Data**, you can see all the key-value pairs you have added for the Secret.
+
+ {{< notice note >}}
+
+As mentioned above, KubeSphere automatically converts the value of a key into its corresponding base64 character value. To see the actual decoded value, click
to drag and drop an item into the target group. To add a new group, click **Add Monitoring Group**. If you want to change the place of a group, hover over it and click the up or down arrow on the right.
+
+{{< notice note >}}
+
+The position of the groups on the right is consistent with the position of the charts in the middle. In other words, if you change the order of the groups, the order of their charts changes accordingly.
+
+{{</ notice >}}
+
+## Dashboard Templates
+
+Find and share dashboard templates in [Monitoring Dashboards Gallery](https://github.com/kubesphere/monitoring-dashboard/tree/master/contrib/gallery). It is a place for KubeSphere community users to contribute their masterpieces.
diff --git a/content/en/docs/v3.4/project-user-guide/custom-application-monitoring/visualization/panel.md b/content/en/docs/v3.4/project-user-guide/custom-application-monitoring/visualization/panel.md
new file mode 100644
index 000000000..1dd9703d8
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/custom-application-monitoring/visualization/panel.md
@@ -0,0 +1,34 @@
+---
+title: "Charts"
+keywords: 'monitoring, Prometheus, Prometheus Operator'
+description: 'Explore dashboard properties and chart metrics.'
+linkTitle: "Charts"
+weight: 10816
+---
+
+KubeSphere currently supports two kinds of charts: text charts and graphs.
+
+## Text Chart
+
+A text chart is preferable for displaying a single metric value. The editing window for the text chart is composed of two parts. The upper part displays the real-time metric value, and the lower part is for editing. You can enter a PromQL expression to fetch a single metric value.
+
+- **Chart Name**: The name of the text chart.
+- **Unit**: The metric data unit.
+- **Decimal Places**: Accept an integer.
+- **Monitoring Metric**: Specify a monitoring metric from the drop-down list of available Prometheus metrics.
+
+## Graph Chart
+
+A graph chart is preferable for displaying multiple metric values. The editing window for the graph is composed of three parts. The upper part displays real-time metric values. The left part is for setting the graph theme. The right part is for editing metrics and chart descriptions.
+
+- **Chart Types**: Support basic charts and bar charts.
+- **Graph Types**: Support basic charts and stacked charts.
+- **Chart Colors**: Change line colors.
+- **Chart Name**: The name of the chart.
+- **Description**: The chart description.
+- **Add**: Add a new query editor.
+- **Metric Name**: Legend for the line. It supports variables. For example, `{{pod}}` means using the value of the Prometheus metric label `pod` to name this line.
+- **Interval**: The step value between two data points.
+- **Monitoring Metric**: A list of available Prometheus metrics.
+- **Unit**: The metric data unit.
+- **Decimal Places**: Accept an integer.
diff --git a/content/en/docs/v3.4/project-user-guide/custom-application-monitoring/visualization/querying.md b/content/en/docs/v3.4/project-user-guide/custom-application-monitoring/visualization/querying.md
new file mode 100644
index 000000000..c9f6f40d4
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/custom-application-monitoring/visualization/querying.md
@@ -0,0 +1,13 @@
+---
+title: "Querying"
+keywords: 'monitoring, Prometheus, Prometheus Operator, querying'
+description: 'Learn how to specify monitoring metrics.'
+linkTitle: "Querying"
+weight: 10817
+---
+
+In the query editor, enter PromQL expressions in **Monitoring Metrics** to process and fetch metrics. To learn how to write PromQL, read [Query Examples](https://prometheus.io/docs/prometheus/latest/querying/examples/).
+
+
+
+
\ No newline at end of file
diff --git a/content/en/docs/v3.4/project-user-guide/grayscale-release/_index.md b/content/en/docs/v3.4/project-user-guide/grayscale-release/_index.md
new file mode 100644
index 000000000..f86106d9d
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/grayscale-release/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Grayscale Release"
+weight: 10500
+
+_build:
+ render: false
+---
diff --git a/content/en/docs/v3.4/project-user-guide/grayscale-release/blue-green-deployment.md b/content/en/docs/v3.4/project-user-guide/grayscale-release/blue-green-deployment.md
new file mode 100644
index 000000000..4f5abe9a4
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/grayscale-release/blue-green-deployment.md
@@ -0,0 +1,74 @@
+---
+title: "Kubernetes Blue-Green Deployment on Kubesphere"
+keywords: 'KubeSphere, Kubernetes, Service Mesh, Istio, Grayscale Release, Blue-Green deployment'
+description: 'Learn how to release a blue-green deployment on KubeSphere.'
+linkTitle: "Blue-Green Deployment with Kubernetes"
+weight: 10520
+---
+
+
+The blue-green release provides zero-downtime deployment, which means the new version can be deployed while the old one is preserved. At any time, only one of the versions is active and serves all the traffic, while the other one remains idle. If a problem occurs with the new version, you can quickly roll back to the old one.
+
+
+
+
+## Prerequisites
+
+- You need to enable [KubeSphere Service Mesh](../../../pluggable-components/service-mesh/).
+- You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+- You need to enable **Application Governance** and have an available app so that you can implement the blue-green deployment for it. The sample app used in this tutorial is Bookinfo. For more information, see [Deploy Bookinfo and Manage Traffic](../../../quick-start/deploy-bookinfo-to-k8s/).
+
+## Create a Blue-green Deployment Job
+
+1. Log in to KubeSphere as `project-regular` and go to **Grayscale Release**. Under **Release Modes**, click **Create** on the right of **Blue-Green Deployment**.
+
+2. Set a name for it and click **Next**.
+
+3. On the **Service Settings** tab, select your app from the drop-down list and the Service for which you want to implement the blue-green deployment. If you also use the sample app Bookinfo, select **reviews** and click **Next**.
+
+4. On the **New Version Settings** tab, add another version (for example, `kubesphere/examples-bookinfo-reviews-v2:1.16.2`) and click **Next**.
+
+5. On the **Strategy Settings** tab, to allow the app version `v2` to take over all the traffic, select **Take Over** and click **Create**.
+
+6. The blue-green deployment job created is displayed under the **Release Jobs** tab. Click it to view details.
+
+7. Wait for a while and you can see all the traffic go to the version `v2`.
+
+8. The new **Deployment** is created as well.
+
+9. You can view the traffic weight in the VirtualService by running the following command:
+
+ ```bash
+ kubectl -n demo-project get virtualservice -o yaml
+ ```
+
+ {{< notice note >}}
+
+ - When you run the command above, replace `demo-project` with your own project (namely, namespace) name.
+ - If you want to run the command from the web kubectl on the KubeSphere console, you need to use the user `admin`.
+
+   {{</ notice >}}
+
+10. Expected output:
+
+ ```yaml
+ ...
+ spec:
+ hosts:
+ - reviews
+ http:
+ - route:
+ - destination:
+ host: reviews
+ port:
+ number: 9080
+ subset: v2
+ weight: 100
+ ...
+ ```
+
+## Take a Job Offline
+
+After you implement the blue-green deployment and the result meets your expectations, you can take the job offline and remove the version `v1` by clicking **Delete**.
+
+
diff --git a/content/en/docs/v3.4/project-user-guide/grayscale-release/canary-release.md b/content/en/docs/v3.4/project-user-guide/grayscale-release/canary-release.md
new file mode 100644
index 000000000..0aaa85250
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/grayscale-release/canary-release.md
@@ -0,0 +1,120 @@
+---
+title: "Canary Release"
+keywords: 'KubeSphere, Kubernetes, Canary Release, Istio, Service Mesh'
+description: 'Learn how to deploy a canary service on KubeSphere.'
+linkTitle: "Canary Release"
+weight: 10530
+---
+
+Backed by [Istio](https://istio.io/), KubeSphere provides users with necessary control to deploy canary services. In a canary release, you introduce a new version of a service and test it by sending a small percentage of traffic to it. At the same time, the old version is responsible for handling the rest of the traffic. If everything goes well, you can gradually increase the traffic sent to the new version, while simultaneously phasing out the old version. If any issues occur, KubeSphere allows you to roll back to the previous version as you change the traffic percentage.
+
+This method serves as an efficient way to test performance and reliability of a service. It can help detect potential problems in the actual environment while not affecting the overall system stability.
+
+
+
+## Prerequisites
+
+- You need to enable [KubeSphere Service Mesh](../../../pluggable-components/service-mesh/).
+- You need to enable [KubeSphere Logging](../../../pluggable-components/logging/) so that you can use the Tracing feature.
+- You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+- You need to enable **Application Governance** and have an available app so that you can implement the canary release for it. The sample app used in this tutorial is Bookinfo. For more information, see [Deploy and Access Bookinfo](../../../quick-start/deploy-bookinfo-to-k8s/).
+
+## Step 1: Create a Canary Release Job
+
+1. Log in to KubeSphere as `project-regular` and navigate to **Grayscale Release**. Under **Release Modes**, click **Create** on the right of **Canary Release**.
+
+2. Set a name for it and click **Next**.
+
+3. On the **Service Settings** tab, select your app from the drop-down list and the Service for which you want to implement the canary release. If you also use the sample app Bookinfo, select **reviews** and click **Next**.
+
+4. On the **New Version Settings** tab, add another version of it (for example, `kubesphere/examples-bookinfo-reviews-v2:1.16.2`; change `v1` to `v2`) and click **Next**.
+
+5. You can send traffic to these two versions (`v1` and `v2`) either by a specific percentage or by request content such as `HTTP Header`, `Cookie` and `URI`. Select **Specify Traffic Distribution** and move the slider to the middle to change the percentage of traffic sent to these two versions respectively (for example, set 50% for either one). When you finish, click **Create**.
+
+## Step 2: Verify the Canary Release
+
+Now that you have two available app versions, access the app to verify the canary release.
+
+1. Visit the Bookinfo website and refresh your browser repeatedly. You can see that the **Book Reviews** section switches between v1 and v2 at a rate of 50%.
+
+2. The created canary release job is displayed under the tab **Release Jobs**. Click it to view details.
+
+3. You can see half of the traffic goes to each of them.
+
+4. The new Deployment is created as well.
+
+5. You can view the traffic weight in the VirtualService by running the following command:
+
+ ```bash
+ kubectl -n demo-project get virtualservice -o yaml
+ ```
+
+ {{< notice note >}}
+
+ - When you execute the command above, replace `demo-project` with your own project (namely, namespace) name.
+ - If you want to execute the command from the web kubectl on the KubeSphere console, you need to use the user `admin`.
+
+   {{</ notice >}}
+
+6. Expected output:
+
+   ```yaml
+ ...
+ spec:
+ hosts:
+ - reviews
+ http:
+ - route:
+ - destination:
+ host: reviews
+ port:
+ number: 9080
+ subset: v1
+ weight: 50
+ - destination:
+ host: reviews
+ port:
+ number: 9080
+ subset: v2
+ weight: 50
+ ...
+ ```
+
+## Step 3: View Network Topology
+
+1. Execute the following command on the machine where KubeSphere runs to bring in real traffic and simulate access to Bookinfo every 0.5 seconds.
+
+ ```bash
+ watch -n 0.5 "curl http://productpage.demo-project.192.168.0.2.nip.io:32277/productpage?u=normal"
+ ```
+
+ {{< notice note >}}
+ Make sure you replace the hostname and port number in the above command with your own.
+   {{</ notice >}}
+
+2. In **Traffic Monitoring**, you can see the communication, dependencies, health, and performance among different microservices.
+
+3. Click a component (for example, **reviews**) and you can see the information of traffic monitoring on the right, displaying real-time data of **Traffic**, **Success rate**, and **Duration**.
+
+## Step 4: View Tracing Details
+
+KubeSphere provides the distributed tracing feature based on [Jaeger](https://www.jaegertracing.io/), which is used to monitor and troubleshoot microservices-based distributed applications.
+
+1. On the **Tracing** tab, you can see all phases and internal calls of requests, as well as the time spent in each phase.
+
+2. Click any item, and you can even drill down to see request details and where this request is being processed (which machine or container).
+
+## Step 5: Take Over All Traffic
+
+If everything runs smoothly, you can bring all the traffic to the new version.
+
+1. In **Release Jobs**, click the canary release job.
+
+2. In the displayed dialog box, click the icon on the right of **reviews v2** and select **Take Over**. It means 100% of the traffic will be sent to the new version (v2).
+
+ {{< notice note >}}
+ If anything goes wrong with the new version, you can roll back to the previous version v1 anytime.
+   {{</ notice >}}
+
+3. Access Bookinfo again and refresh the browser several times. You can find that it only shows the result of **reviews v2** (i.e. ratings with black stars).
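+
+4. Optionally, run the same `kubectl` command used in Step 2 to confirm the change. After the takeover, the route in the VirtualService should send all traffic to `v2`, similar to the following sketch (values may differ in your environment):
+
+   ```yaml
+   ...
+   spec:
+     hosts:
+     - reviews
+     http:
+     - route:
+       - destination:
+           host: reviews
+           port:
+             number: 9080
+           subset: v2
+         weight: 100
+   ...
+   ```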
+
diff --git a/content/en/docs/v3.4/project-user-guide/grayscale-release/overview.md b/content/en/docs/v3.4/project-user-guide/grayscale-release/overview.md
new file mode 100644
index 000000000..cf48a86f9
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/grayscale-release/overview.md
@@ -0,0 +1,39 @@
+---
+title: "Grayscale Release — Overview"
+keywords: 'Kubernetes, KubeSphere, grayscale release, overview, service mesh'
+description: 'Understand the basic concept of grayscale release.'
+linkTitle: "Overview"
+weight: 10510
+---
+
+Modern, cloud-native applications are often composed of a group of independently deployable components, also known as microservices. In a microservices architecture, developers can adjust individual services with great flexibility without affecting the others, as each service performs a specific function. Such a network of microservices making up an application is also called a **service mesh**.
+
+The KubeSphere service mesh, built on the open-source project [Istio](https://istio.io/), controls how different parts of an app interact with one another. Among other features, grayscale release strategies give users an important way to test and release new app versions without affecting the communication among microservices.
+
+## Grayscale Release Strategies
+
+A grayscale release in KubeSphere ensures smooth transition as you upgrade your apps to a new version. The specific strategy adopted may be different but the ultimate goal is the same - identify potential problems in advance without affecting your apps running in the production environment. This not only minimizes risks of a version upgrade but also tests the performance of new app builds.
+
+KubeSphere provides users with three grayscale release strategies.
+
+### [Blue-green Deployment](../blue-green-deployment/)
+
+A blue-green deployment provides an efficient method of releasing new versions with zero downtime and outages as it creates an identical standby environment where the new app version runs. With this approach, KubeSphere routes all the traffic to either version. Namely, only one environment is live at any given time. In the case of any issues with the new build, it allows you to immediately roll back to the previous version.
+
+### [Canary Release](../canary-release/)
+
+A canary deployment reduces the risk of version upgrades to a minimum as it slowly rolls out changes to a small subset of users. More specifically, you have the option to expose a new app version to a portion of production traffic, which is defined by yourself on the highly responsive dashboard. Besides, KubeSphere gives you a visualized view of real-time traffic as it monitors requests after you implement a canary deployment. During the process, you can analyze the behavior of the new app version and choose to gradually increase the percentage of traffic sent to it. Once you are confident of the build, you can route all the traffic to it.
+
+### [Traffic Mirroring](../traffic-mirroring/)
+
+Traffic mirroring copies live production traffic and sends it to a mirrored service. By default, KubeSphere mirrors all the traffic while you can also manually define the percentage of traffic to be mirrored by specifying a value. Common use cases include:
+
+- Test new app versions. You can compare the real-time output of mirrored traffic and production traffic.
+- Test clusters. You can use production traffic of instances for cluster testing.
+- Test databases. You can use an empty database to store and load data.
+
+{{< notice note >}}
+
+The current KubeSphere version does not support grayscale release strategies for multi-cluster apps.
+
+{{</ notice >}}
diff --git a/content/en/docs/v3.4/project-user-guide/grayscale-release/traffic-mirroring.md b/content/en/docs/v3.4/project-user-guide/grayscale-release/traffic-mirroring.md
new file mode 100644
index 000000000..7d7568fd4
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/grayscale-release/traffic-mirroring.md
@@ -0,0 +1,81 @@
+---
+title: "Traffic Mirroring"
+keywords: 'KubeSphere, Kubernetes, Traffic Mirroring, Istio'
+description: 'Learn how to conduct a traffic mirroring job on KubeSphere.'
+linkTitle: "Traffic Mirroring"
+weight: 10540
+---
+
+Traffic mirroring, also called shadowing, is a powerful, risk-free method of testing your app versions as it sends a copy of live traffic to a service that is being mirrored. Namely, you implement a similar setup for acceptance testing so that problems can be detected in advance. As mirrored traffic happens out of band of the critical request path for the primary service, your end users will not be affected during the whole process.
+
+## Prerequisites
+
+- You need to enable [KubeSphere Service Mesh](../../../pluggable-components/service-mesh/).
+- You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+- You need to enable **Application Governance** and have an available app so that you can mirror its traffic. The sample app used in this tutorial is Bookinfo. For more information, see [Deploy Bookinfo and Manage Traffic](../../../quick-start/deploy-bookinfo-to-k8s/).
+
+## Create a Traffic Mirroring Job
+
+1. Log in to KubeSphere as `project-regular` and go to **Grayscale Release**. Under **Release Modes**, click **Create** on the right of **Traffic Mirroring**.
+
+2. Set a name for it and click **Next**.
+
+3. On the **Service Settings** tab, select your app from the drop-down list and the Service whose traffic you want to mirror. If you also use the sample app Bookinfo, select **reviews** and click **Next**.
+
+4. On the **New Version Settings** tab, add another version of it (for example, `kubesphere/examples-bookinfo-reviews-v2:1.16.2`; change `v1` to `v2`) and click **Next**.
+
+5. On the **Strategy Settings** tab, click **Create**.
+
+6. The traffic mirroring job created is displayed under the **Release Jobs** tab. Click it to view details.
+
+7. You can see the traffic is being mirrored to `v2` with real-time traffic displayed in the line chart.
+
+8. The new **Deployment** is created as well.
+
+9. You can view the `mirror` and `weight` fields in the VirtualService by running the following command:
+
+ ```bash
+ kubectl -n demo-project get virtualservice -o yaml
+ ```
+
+ {{< notice note >}}
+
+ - When you run the command above, replace `demo-project` with your own project (namely, namespace) name.
+ - If you want to run the command from the web kubectl on the KubeSphere console, you need to use the user `admin`.
+
+   {{</ notice >}}
+
+10. Expected output:
+
+   ```yaml
+ ...
+ spec:
+ hosts:
+ - reviews
+ http:
+ - route:
+ - destination:
+ host: reviews
+ port:
+ number: 9080
+ subset: v1
+ weight: 100
+ mirror:
+ host: reviews
+ port:
+ number: 9080
+ subset: v2
+ ...
+ ```
+
+ This route rule sends 100% of the traffic to `v1`. The `mirror` field specifies that you want to mirror to the service `reviews v2`. When traffic gets mirrored, the requests are sent to the mirrored service with their Host/Authority headers appended with `-shadow`. For example, `cluster-1` becomes `cluster-1-shadow`.
+
+ {{< notice note >}}
+
+These requests are mirrored as “fire and forget”, which means that the responses are discarded. You can specify the `mirror_percent` field to mirror a fraction of the traffic, instead of mirroring all requests. If this field is absent, for compatibility with older versions, all traffic will be mirrored. For more information, see [Mirroring](https://istio.io/v1.5/pt-br/docs/tasks/traffic-management/mirroring/).
+
+{{</ notice >}}
+
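+As a sketch, a VirtualService that mirrors only half of the traffic with the `mirror_percent` field mentioned above might look like the following; the value `50` is just an illustrative number:
+
+```yaml
+...
+spec:
+  hosts:
+  - reviews
+  http:
+  - route:
+    - destination:
+        host: reviews
+        port:
+          number: 9080
+        subset: v1
+      weight: 100
+    mirror:
+      host: reviews
+      port:
+        number: 9080
+      subset: v2
+    mirror_percent: 50
+...
+```
+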
+## Take a Job Offline
+
+You can remove the traffic mirroring job by clicking **Delete**, which does not affect the current app version.
diff --git a/content/en/docs/v3.4/project-user-guide/image-builder/_index.md b/content/en/docs/v3.4/project-user-guide/image-builder/_index.md
new file mode 100644
index 000000000..d10a9e339
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/image-builder/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "Image Builder"
+weight: 10600
+
+_build:
+ render: false
+---
diff --git a/content/en/docs/v3.4/project-user-guide/image-builder/binary-to-image.md b/content/en/docs/v3.4/project-user-guide/image-builder/binary-to-image.md
new file mode 100644
index 000000000..63b377a2b
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/image-builder/binary-to-image.md
@@ -0,0 +1,141 @@
+---
+title: "Binary to Image: Publish an Artifact to Kubernetes"
+keywords: "KubeSphere, Kubernetes, Docker, B2I, Binary-to-Image"
+description: "Use B2I to import an artifact and push it to a target repository."
+linkTitle: "Binary to Image: Publish an Artifact to Kubernetes"
+weight: 10620
+---
+
+Binary-to-Image (B2I) is a toolkit and workflow for building reproducible container images from artifacts such as JAR, WAR, and binary packages. More specifically, you upload an artifact and specify a target repository such as Docker Hub or Harbor where you want to push the image. If everything runs successfully, your image is pushed to the target repository, and your application is automatically deployed to Kubernetes if you create a Service in the workflow.
+
+In a B2I workflow, you do not need to write any Dockerfile. This not only reduces learning costs but also improves release efficiency, which allows users to focus more on business.
+
+This tutorial demonstrates two different ways to build an image based on an artifact in a B2I workflow. Ultimately, the image will be released to Docker Hub.
+
+For demonstration and testing purposes, here are some example artifacts you can use to implement the B2I workflow:
+
+| Artifact Package | GitHub Repository |
+| ------------------------------------------------------------ | ------------------------------------------------------------ |
+| [b2i-war-java8.war](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-war-java8.war) | [spring-mvc-showcase](https://github.com/spring-projects/spring-mvc-showcase) |
+| [b2i-war-java11.war](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-war-java11.war) | [springmvc5](https://github.com/kubesphere/s2i-java-container/tree/master/tomcat/examples/springmvc5) |
+| [b2i-binary](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-binary) | [devops-go-sample](https://github.com/runzexia/devops-go-sample) |
+| [b2i-jar-java11.jar](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-jar-java11.jar) | [java-maven-example](https://github.com/kubesphere/s2i-java-container/tree/master/java/examples/maven) |
+| [b2i-jar-java8.jar](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-jar-java8.jar) | [devops-maven-sample](https://github.com/kubesphere/devops-maven-sample) |
+
+## Prerequisites
+
+- You have enabled the [KubeSphere DevOps System](../../../pluggable-components/devops/).
+- You need to create a [Docker Hub](https://www.dockerhub.com/) account. GitLab and Harbor are also supported.
+- You need to create a workspace, a project and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+- Set a CI dedicated node for building images. This is not mandatory but recommended for the development and production environment as it caches dependencies and reduces build time. For more information, see [Set a CI Node for Dependency Caching](../../../devops-user-guide/how-to-use/devops-settings/set-ci-node/).
+
+## Create a Service Using Binary-to-Image (B2I)
+
+The steps below show how to upload an artifact, build an image and release it to Kubernetes by creating a Service in a B2I workflow.
+
+
+
+### Step 1: Create a Docker Hub Secret
+
+You must create a Docker Hub Secret so that the Docker image created through B2I can be pushed to Docker Hub. Log in to KubeSphere as `project-regular`, go to your project and create a Secret for Docker Hub. For more information, see [Create the Most Common Secrets](../../../project-user-guide/configuration/secrets/#create-the-most-common-secrets).
+
+### Step 2: Create a Service
+
+1. In the same project, navigate to **Services** under **Application Workloads** and click **Create**.
+
+2. Scroll down to **Create Service from Artifact** and select **WAR**. This tutorial uses the [spring-mvc-showcase](https://github.com/spring-projects/spring-mvc-showcase) project as a sample and uploads a war artifact to KubeSphere. Set a name, such as `b2i-war-java8`, and click **Next**.
+
+3. On the **Build Settings** page, provide the following information accordingly and click **Next**.
+
+ **Service Type**: Select **Stateless Service** for this example. For more information about different Services, see [Service Type](../../../project-user-guide/application-workloads/services/#service-type).
+
+ **Artifact File**: Upload the war artifact ([b2i-war-java8](https://github.com/kubesphere/tutorial/raw/master/tutorial%204%20-%20s2i-b2i/b2i-war-java8.war)).
+
+ **Build Environment**: Select **kubesphere/tomcat85-java8-centos7:v2.1.0**.
+
+ **Image Name**: Enter `
on the right of a record to see building logs. You can see `Build completed successfully` at the end of the log if everything runs normally.
+
+3. Go back to the **Services**, **Deployments**, and **Jobs** pages, and you can see that the corresponding Service, Deployment, and Job of the image have all been created successfully.
+
+4. In your Docker Hub repository, you can see that KubeSphere has pushed the image to the repository with the expected tag.
+
+### Step 4: Access the B2I Service
+
+1. On the **Services** page, click the B2I Service to go to its details page, where you can see the port number has been exposed.
+
+2. Access the Service at `http://
on the right of a record to see building logs. You can see `Build completed successfully` at the end of the log if everything runs normally.
+
+3. Go to the **Jobs** page, and you can see the corresponding Job of the image has been created successfully.
+
+4. In your Docker Hub repository, you can see that KubeSphere has pushed the image to the repository with the expected tag.
+
diff --git a/content/en/docs/v3.4/project-user-guide/image-builder/s2i-and-b2i-webhooks.md b/content/en/docs/v3.4/project-user-guide/image-builder/s2i-and-b2i-webhooks.md
new file mode 100644
index 000000000..3b89fba20
--- /dev/null
+++ b/content/en/docs/v3.4/project-user-guide/image-builder/s2i-and-b2i-webhooks.md
@@ -0,0 +1,81 @@
+---
+title: "Configure S2I and B2I Webhooks"
+keywords: 'KubeSphere, Kubernetes, S2I, Source-to-Image, B2I, Binary-to-Image, Webhook'
+description: 'Learn how to configure S2I and B2I webhooks.'
+linkTitle: "Configure S2I and B2I Webhooks"
+weight: 10650
+---
+
+KubeSphere provides Source-to-Image (S2I) and Binary-to-Image (B2I) features to automate image building and pushing and application deployment. In KubeSphere v3.1.x and later versions, you can configure S2I and B2I webhooks so that your Image Builder can be automatically triggered when there is any relevant activity in your code repository.
+
+This tutorial demonstrates how to configure S2I and B2I webhooks.
+
+## Prerequisites
+
+- You need to enable the [KubeSphere DevOps System](../../../pluggable-components/devops/).
+- You need to create a workspace, a project (`demo-project`) and a user (`project-regular`). The user must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+- You need to create an S2I Image Builder and a B2I Image Builder. For more information, refer to [Source to Image: Publish an App without a Dockerfile](../source-to-image/) and [Binary to Image: Publish an Artifact to Kubernetes](../binary-to-image/).
+
+## Configure an S2I Webhook
+
+### Step 1: Expose the S2I trigger Service
+
+1. Log in to the KubeSphere web console as `admin`. Click **Platform** in the upper-left corner and then select **Cluster Management**.
+
+2. In **Services** under **Application Workloads**, select **kubesphere-devops-system** from the drop-down list and click **s2ioperator-trigger-service** to go to its details page.
+
+3. Click **More** and select **Edit External Access**.
+
+4. In the displayed dialog box, select **NodePort** from the drop-down list for **Access Method** and then click **OK**.
+
+ {{< notice note >}}
+
+ This tutorial selects **NodePort** for demonstration purposes. You can also select **LoadBalancer** based on your needs.
+
+   {{</ notice >}}
+
+5. You can view the **NodePort** on the details page. It is going to be included in the S2I webhook URL.
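+
+   If you prefer the command line, a rough equivalent of steps 3 to 5 (run by a user with access to the `kubesphere-devops-system` project) is:
+
+   ```bash
+   # Change the Service type to NodePort
+   kubectl -n kubesphere-devops-system patch svc s2ioperator-trigger-service \
+     -p '{"spec": {"type": "NodePort"}}'
+
+   # View the assigned NodePort
+   kubectl -n kubesphere-devops-system get svc s2ioperator-trigger-service
+   ```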
+
+### Step 2: Configure an S2I webhook
+
+1. Log out of KubeSphere and log back in as `project-regular`. Go to `demo-project`.
+
+2. In **Image Builders**, click the S2I Image Builder to go to its details page.
+
+3. You can see an auto-generated link shown in **Remote Trigger**. Copy `/s2itrigger/v1alpha1/general/namespaces/demo-project/s2ibuilders/felixnoo-s2i-sample-latest-zhd/` as it is going to be included in the S2I webhook URL.
+
+4. Log in to your GitHub account and go to the source code repository used for the S2I Image Builder. Go to **Webhooks** under **Settings** and then click **Add webhook**.
+
+5. In **Payload URL**, enter `http://
on the right of a record to see building logs. You can see `Build completed successfully` at the end of the log if everything runs normally.
+
+3. Go back to the **Services**, **Deployments**, and **Jobs** pages, and you can see that the corresponding Service, Deployment, and Job of the image have all been created successfully.
+
+4. In your Docker Hub repository, you can see that KubeSphere has pushed the image to the repository with the expected tag.
+
+### Step 5: Access the S2I Service
+
+1. On the **Services** page, click the S2I Service to go to its details page.
+
+2. To access the Service, you can either use the endpoint with the `curl` command or visit `
+
+| Parameter | Description |
+| --- | --- |
+| Name | Name of the PV. It is specified by the field `.metadata.name` in the manifest file of the PV. |
+| Status | Current status of the PV. It is specified by the field `.status.phase` in the manifest file of the PV, including **Available**, **Bound**, **Released**, and **Failed**. |
+| Capacity | Capacity of the PV. It is specified by the field `.spec.capacity.storage` in the manifest file of the PV. |
+| Access Mode | Access mode of the PV. It is specified by the field `.spec.accessModes` in the manifest file of the PV, including **ReadWriteOnce**, **ReadOnlyMany**, and **ReadWriteMany**. |
+| Reclaim Policy | Reclaim policy of the PV. It is specified by the field `.spec.persistentVolumeReclaimPolicy` in the manifest file of the PV, including **Retain**, **Delete**, and **Recycle**. |
+| Creation Time | Time when the PV was created. |
+
+| OS | Minimum Requirements |
+| --- | --- |
+| Ubuntu 16.04, 18.04, 20.04, 22.04 | 2 CPU cores, 4 GB memory, and 40 GB disk space |
+| Debian Buster, Stretch | 2 CPU cores, 4 GB memory, and 40 GB disk space |
+| CentOS 7.x | 2 CPU cores, 4 GB memory, and 40 GB disk space |
+| Red Hat Enterprise Linux 7 | 2 CPU cores, 4 GB memory, and 40 GB disk space |
+| SUSE Linux Enterprise Server 15/openSUSE Leap 15.2 | 2 CPU cores, 4 GB memory, and 40 GB disk space |
+
+| Supported Container Runtime | Version |
+| --- | --- |
+| Docker | 19.3.8+ |
+| containerd | Latest |
+| CRI-O (experimental, not fully tested) | Latest |
+| iSula (experimental, not fully tested) | Latest |
+
+| Dependency | Kubernetes Version ≥ 1.18 | Kubernetes Version < 1.18 |
+| --- | --- | --- |
+| socat | Required | Optional but recommended |
+| conntrack | Required | Optional but recommended |
+| ebtables | Optional but recommended | Optional but recommended |
+| ipset | Optional but recommended | Optional but recommended |
+
+| Built-in Roles | Description |
+| --- | --- |
+| `platform-self-provisioner` | Create workspaces and become the admin of the created workspaces. |
+| `platform-regular` | Has no access to any resources before joining a workspace or cluster. |
+| `platform-admin` | Manage all resources on the platform. |
+
+| User | Assigned Platform Role | User Permissions |
+| --- | --- | --- |
+| `ws-admin` | `platform-regular` | Manage all resources in a workspace after being invited to the workspace (this user is used to invite new members to the workspace in this example). |
+| `project-admin` | `platform-regular` | Create and manage projects and DevOps projects, and invite new members to the projects. |
+| `project-regular` | `platform-regular` | `project-regular` will be invited to a project or DevOps project by `project-admin`. This user will be used to create workloads, pipelines, and other resources in a specified project. |
+
+| User | Assigned Workspace Role | Role Permissions |
+| --- | --- | --- |
+| `ws-admin` | `demo-workspace-admin` | Manage all resources under the workspace (use this user to invite new members to the workspace). |
+| `project-admin` | `demo-workspace-self-provisioner` | Create and manage projects and DevOps projects, and invite new members to join the projects. |
+| `project-regular` | `demo-workspace-viewer` | `project-regular` will be invited by `project-admin` to join a project or DevOps project. This user can be used to create workloads, pipelines, etc. |
+
+| Parameter | Description |
+| --- | --- |
+| Cluster | Cluster where the operation happens. It is enabled if the multi-cluster feature is turned on. |
+| Project | Project where the operation happens. It supports exact query and fuzzy query. |
+| Workspace | Workspace where the operation happens. It supports exact query and fuzzy query. |
+| Resource Type | Type of resource associated with the request. It supports fuzzy query. |
+| Resource Name | Name of the resource associated with the request. It supports fuzzy query. |
+| Verb | Kubernetes verb associated with the request. For non-resource requests, this is the lower-case HTTP method. It supports exact query. |
+| Status Code | HTTP response code. It supports exact query. |
+| Operation Account | User who calls this request. It supports exact and fuzzy query. |
+| Source IP | IP address from where the request originated and intermediate proxies. It supports fuzzy query. |
+| Time Range | Time when the request reaches the apiserver. |
+
+| Parameter | Description |
+| --- | --- |
+| `retentionDay` | Determines the date range displayed on the **Metering and Billing** page for users. The value of this parameter must be the same as the value of `retention` in Prometheus. |
+| `currencyUnit` | The currency that is displayed on the **Metering and Billing** page. Currently allowed values are `CNY` (Renminbi) and `USD` (US dollars). If you specify other currencies, the console will display cost in USD by default. |
+| `cpuCorePerHour` | The unit price of CPU per core/hour. |
+| `memPerGigabytesPerHour` | The unit price of memory per GB/hour. |
+| `ingressNetworkTrafficPerMegabytesPerHour` | The unit price of ingress traffic per MB/hour. |
+| `egressNetworkTrafficPerMegabytesPerHour` | The unit price of egress traffic per MB/hour. |
+| `pvcPerGigabytesPerHour` | The unit price of PVC per GB/hour. Note that KubeSphere calculates the total cost of volumes based on the storage capacity PVCs request, regardless of the actual storage in use. |
+
+1. Hover over the **Toolbox** in the lower-right corner and select **Metering and Billing**.
+
+2. Click **View Consumption** in the **Cluster Resource Consumption** section.
+
+3. On the left side of the dashboard, you can see a cluster list containing your host cluster and all member clusters if you have enabled [multi-cluster management](../../../multicluster-management/). There is only one cluster called `default` in the list if it is not enabled.
+
+ On the right side, there are three parts showing resource consumption in different ways.
+
+    | Module | Description |
+    | --- | --- |
+    | Overview | Displays a consumption overview of different resources in a cluster since its creation. You can also see the billing information if you have set prices for these resources in the ConfigMap `kubesphere-config`. |
+    | Consumption by Yesterday | Displays the total resource consumption by yesterday. You can also customize the time range and interval to see data within a specific period. |
+    | Current Resources Included | Displays the consumption of resources included in the selected target object (in this case, all nodes in the selected cluster) over the last hour. |
+1. Hover over the **Toolbox** in the lower-right corner and select **Metering and Billing**.
+
+2. Click **View Consumption** in the **Workspace (Project) Resource Consumption** section.
+
+3. On the left side of the dashboard, you can see a list containing all the workspaces in the current cluster. The right part displays detailed consumption information in the selected workspace, the layout of which is basically the same as that of a cluster.
+
+ {{< notice note >}}
+
+ In a multi-cluster architecture, you cannot see the metering and billing information of a workspace if it does not have any available cluster assigned to it. For more information, see [Cluster Visibility and Authorization](../../../cluster-administration/cluster-settings/cluster-visibility-and-authorization/).
+
+    {{</ notice >}}
+
+4. Click a workspace on the left and dive deeper into a project or workload (for example, Deployment and StatefulSet) to see detailed consumption information.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/toolbox/web-kubectl.md b/content/en/docs/v3.4/toolbox/web-kubectl.md
new file mode 100644
index 000000000..54a51b1f6
--- /dev/null
+++ b/content/en/docs/v3.4/toolbox/web-kubectl.md
@@ -0,0 +1,44 @@
+---
+title: "Web Kubectl"
+keywords: 'KubeSphere, Kubernetes, kubectl, cli'
+description: 'The web kubectl tool is integrated into KubeSphere to provide consistent user experiences for Kubernetes users.'
+linkTitle: "Web Kubectl"
+weight: 15500
+---
+
+The Kubernetes command-line tool, kubectl, allows you to run commands on Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, view logs, and more.
+
+KubeSphere provides web kubectl on the console for user convenience. By default, in the current version, only the account granted the `platform-admin` role (such as the default account `admin`) has the permission to use web kubectl for cluster resource operation and management.
+
+This tutorial demonstrates how to use web kubectl to operate on and manage cluster resources.
+
+## Use Web Kubectl
+
+1. Log in to KubeSphere with a user granted the `platform-admin` role, hover over the **Toolbox** in the lower-right corner and select **Kubectl**.
+
+2. You can see the kubectl interface in the pop-up window. If you have enabled the multi-cluster feature, you need to select the target cluster first from the drop-down list in the upper-right corner. This drop-down list is not visible if the multi-cluster feature is not enabled.
+
+3. Enter kubectl commands in the command-line tool to query and manage Kubernetes cluster resources. For example, execute the following command to query the status of all PVCs in the cluster.
+
+ ```bash
+ kubectl get pvc --all-namespaces
+ ```
+
+ 
+
+4. Use the following syntax to run kubectl commands from your terminal window:
+
+ ```bash
+ kubectl [command] [TYPE] [NAME] [flags]
+ ```
+
+ {{< notice note >}}
+
+- Where `command`, `TYPE`, `NAME`, and `flags` are:
+ - `command`: Specifies the operation that you want to perform on one or more resources, such as `create`, `get`, `describe` and `delete`.
+ - `TYPE`: Specifies the [resource type](https://kubernetes.io/docs/reference/kubectl/overview/#resource-types). Resource types are case-insensitive and you can specify the singular, plural, or abbreviated forms.
+ - `NAME`: Specifies the name of the resource. Names are case-sensitive. If the name is omitted, details for all resources are displayed, such as `kubectl get pods`.
+ - `flags`: Specifies optional flags. For example, you can use the `-s` or `--server` flags to specify the address and port of the Kubernetes API server.
+- If you need help, run `kubectl help` from the terminal window or refer to the [Kubernetes kubectl CLI documentation](https://kubernetes.io/docs/reference/kubectl/overview/).
+
+  {{</ notice >}}
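+
+   As an illustration of this syntax (the Deployment name below assumes a default KubeSphere installation):
+
+   ```bash
+   # [command] [TYPE] [flags]: list Deployments in the kubesphere-system namespace
+   kubectl get deployments -n kubesphere-system
+
+   # [command] [TYPE] [NAME] [flags]: describe a single resource by name
+   kubectl describe deployment ks-console -n kubesphere-system
+   ```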
diff --git a/content/en/docs/v3.4/upgrade/_index.md b/content/en/docs/v3.4/upgrade/_index.md
new file mode 100644
index 000000000..a88033ba0
--- /dev/null
+++ b/content/en/docs/v3.4/upgrade/_index.md
@@ -0,0 +1,14 @@
+---
+title: "Upgrade"
+description: "Upgrade KubeSphere and Kubernetes"
+layout: "second"
+
+linkTitle: "Upgrade"
+
+weight: 7000
+
+icon: "/images/docs/v3.3/docs.svg"
+
+---
+
+This chapter demonstrates how cluster operators can upgrade KubeSphere to 3.3.2.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/upgrade/air-gapped-upgrade-with-ks-installer.md b/content/en/docs/v3.4/upgrade/air-gapped-upgrade-with-ks-installer.md
new file mode 100644
index 000000000..5a412910d
--- /dev/null
+++ b/content/en/docs/v3.4/upgrade/air-gapped-upgrade-with-ks-installer.md
@@ -0,0 +1,182 @@
+---
+title: "Air-Gapped Upgrade with ks-installer"
+keywords: "Air-Gapped, upgrade, kubesphere, 3.3"
+description: "Use ks-installer and offline package to upgrade KubeSphere."
+linkTitle: "Air-Gapped Upgrade with ks-installer"
+weight: 7500
+---
+
+ks-installer is recommended for users whose Kubernetes clusters were not set up by [KubeKey](../../installing-on-linux/introduction/kubekey/), but hosted by cloud vendors or created by themselves. This tutorial is for **upgrading KubeSphere only**. Cluster operators are responsible for upgrading Kubernetes beforehand.
+
+
+## Prerequisites
+
+- You need to have a KubeSphere cluster running v3.2.x. If your KubeSphere version is v3.1.x or earlier, upgrade to v3.2.x first.
+- Read [Release Notes for 3.3.2](../../../v3.3/release/release-v332/) carefully.
+- Back up any important component beforehand.
+- A Docker registry. You need to have a Harbor or other Docker registries. For more information, see [Prepare a Private Image Registry](../../installing-on-linux/introduction/air-gapped-installation/#step-2-prepare-a-private-image-registry).
+- Supported Kubernetes versions of KubeSphere 3.3: v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, and * v1.24.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+
+## Major Updates
+
+In KubeSphere 3.3.1, some changes have been made to built-in roles and to the permissions of custom roles. Therefore, before you upgrade KubeSphere to 3.3.1, please note the following:
+
+ - Change of built-in roles: Platform-level built-in roles `users-manager` and `workspace-manager` are removed. If an existing user has been bound to `users-manager` or `workspace-manager`, its role will be changed to `platform-regular` after the upgrade is completed. Role `platform-self-provisioner` is added. For more information about built-in roles, refer to [Create a user](../../quick-start/create-workspace-and-project).
+
+ - Some permissions of custom roles are removed:
+ - Removed permissions of platform-level custom roles: user management, role management, and workspace management.
+ - Removed permissions of workspace-level custom roles: user management, role management, and user group management.
+ - Removed permissions of namespace-level custom roles: user management and role management.
+ - After you upgrade KubeSphere to 3.3.1, custom roles will be retained, but removed permissions of the custom roles will be revoked.
+## Step 1: Prepare Installation Images
+
+As you install KubeSphere in an air-gapped environment, you need to prepare an image package containing all the necessary images in advance.
+
+1. Download the image list file `images-list.txt` on a machine that has access to the Internet by running the following command:
+
+ ```bash
+ curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/images-list.txt
+ ```
+
+ {{< notice note >}}
+
+ This file lists images under `##+modulename` based on different modules. You can add your own images to this file following the same rule. To view the complete file, see [Appendix](../../installing-on-linux/introduction/air-gapped-installation/#image-list-of-kubesphere-v310).
+
+    {{</ notice >}}
+
+2. Download `offline-installation-tool.sh`.
+
+ ```bash
+ curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/offline-installation-tool.sh
+ ```
+
+3. Make the `.sh` file executable.
+
+ ```bash
+ chmod +x offline-installation-tool.sh
+ ```
+
+4. You can execute the command `./offline-installation-tool.sh -h` to see how to use the script:
+
+ ```bash
+ root@master:/home/ubuntu# ./offline-installation-tool.sh -h
+ Usage:
+
+ ./offline-installation-tool.sh [-l IMAGES-LIST] [-d IMAGES-DIR] [-r PRIVATE-REGISTRY] [-v KUBERNETES-VERSION ]
+
+ Description:
+ -b : save kubernetes' binaries.
+ -d IMAGES-DIR : the dir of files (tar.gz) which generated by `docker save`. default: ./kubesphere-images
+ -l IMAGES-LIST : text file with list of images.
+ -r PRIVATE-REGISTRY : target private registry:port.
+ -s : save model will be applied. Pull the images in the IMAGES-LIST and save images as a tar.gz file.
+ -v KUBERNETES-VERSION : download kubernetes' binaries. default: v1.17.9
+ -h : usage message
+ ```
+
+5. Pull images in `offline-installation-tool.sh`.
+
+ ```bash
+ ./offline-installation-tool.sh -s -l images-list.txt -d ./kubesphere-images
+ ```
+
+ {{< notice note >}}
+
+    You can choose to pull images as needed. For example, you can delete `##k8s-images` and the related images under it in `images-list.txt` if you already have a Kubernetes cluster.
+
+    {{</ notice >}}
+
+## Step 2: Push Images to Your Private Registry
+
+Transfer your packaged image file to your local machine and execute the following command to push it to the registry.
+
+```bash
+./offline-installation-tool.sh -l images-list.txt -d ./kubesphere-images -r dockerhub.kubekey.local
+```
+
+{{< notice note >}}
+
+The domain name is `dockerhub.kubekey.local` in the command. Make sure you use your **own registry address**.
+
+{{</ notice >}}
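+
+Optionally, you can check that the images are now available in the registry. A minimal sketch using the Docker Registry v2 API (replace the address with your own; depending on your registry, credentials such as `-u <user>:<password>` may be required):
+
+```bash
+# List the repositories known to the private registry
+curl -k https://dockerhub.kubekey.local/v2/_catalog
+```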
+
+## Step 3: Download ks-installer
+
+Similar to installing KubeSphere on an existing Kubernetes cluster in an online environment, you also need to download `kubesphere-installer.yaml`.
+
+1. Execute the following command to download ks-installer and transfer it to your machine that serves as the taskbox for installation.
+
+ ```bash
+ curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/kubesphere-installer.yaml
+ ```
+
+2. Verify that you have specified your private image registry in `spec.local_registry` of the `ClusterConfiguration`. Note that if your existing cluster was installed in an air-gapped environment, you may already have this field specified. Otherwise, run the following command to edit the `ClusterConfiguration` of your existing KubeSphere v3.2.x cluster and add the private image registry:
+
+ ```
+    kubectl edit cc ks-installer -n kubesphere-system
+ ```
+
+    For example, if `dockerhub.kubekey.local` is the registry address as in this tutorial, use it as the value of `.spec.local_registry` as shown below:
+
+ ```yaml
+ spec:
+ persistence:
+ storageClass: ""
+ authentication:
+ jwtSecret: ""
+ local_registry: dockerhub.kubekey.local # Add this line manually; make sure you use your own registry address.
+ ```
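+
+    A quick way to confirm that the field has been saved (a sketch, relying on the `cc` short name for `ClusterConfiguration` used above):
+
+    ```bash
+    # Print the configured private registry from the ks-installer ClusterConfiguration
+    kubectl -n kubesphere-system get cc ks-installer -o jsonpath='{.spec.local_registry}'
+    ```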
+
+3. Save the configuration after you finish editing it. Then, run the following command to replace the `ks-installer` image address in `kubesphere-installer.yaml` with your **own registry address**:
+
+ ```bash
+    sed -i "s#^\s*image: kubesphere.*/ks-installer:.*#        image: dockerhub.kubekey.local/kubesphere/ks-installer:v3.3.2#" kubesphere-installer.yaml
+ ```
+
+ {{< notice warning >}}
+
+ `dockerhub.kubekey.local` is the registry address in the command. Make sure you use your own registry address.
+
+    {{</ notice >}}
+
+## Step 4: Upgrade KubeSphere
+
+Execute the following command after you make sure that all steps above are completed.
+
+```bash
+kubectl apply -f kubesphere-installer.yaml
+```
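+
+You can follow the upgrade progress through the installer logs, for example (assuming the installer runs as the `ks-installer` Deployment in the `kubesphere-system` namespace, as in a default installation):
+
+```bash
+# Stream the ks-installer logs until the welcome banner shown below appears
+kubectl logs -n kubesphere-system deploy/ks-installer -f
+```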
+
+## Step 5: Verify Installation
+
+When the installation finishes, you can see the content as follows:
+
+```bash
+#####################################################
+### Welcome to KubeSphere! ###
+#####################################################
+
+Console: http://192.168.0.2:30880
+Account: admin
+Password: P@88w0rd
+
+NOTES:
+ 1. After you log into the console, please check the
+ monitoring status of service components in
+ the "Cluster Management". If any service is not
+ ready, please wait patiently until all components
+ are up and running.
+ 2. Please change the default password after login.
+
+#####################################################
+https://kubesphere.io 20xx-xx-xx xx:xx:xx
+#####################################################
+```
+
+Now, you will be able to access the web console of KubeSphere through `http://{IP}:30880` with the default account and password `admin/P@88w0rd`.
+
+{{< notice note >}}
+
+To access the console, make sure port 30880 is opened in your security group.
+
+{{ notice >}}
diff --git a/content/en/docs/v3.4/upgrade/air-gapped-upgrade-with-kubekey.md b/content/en/docs/v3.4/upgrade/air-gapped-upgrade-with-kubekey.md
new file mode 100644
index 000000000..8b431df14
--- /dev/null
+++ b/content/en/docs/v3.4/upgrade/air-gapped-upgrade-with-kubekey.md
@@ -0,0 +1,349 @@
+---
+title: "Air-Gapped Upgrade with KubeKey"
+keywords: "Air-Gapped, kubernetes, upgrade, kubesphere, 3.3.1"
+description: "Use the offline package to upgrade Kubernetes and KubeSphere."
+linkTitle: "Air-Gapped Upgrade with KubeKey"
+weight: 7400
+---
+Air-gapped upgrade with KubeKey is recommended for users whose KubeSphere and Kubernetes were both deployed by [KubeKey](../../installing-on-linux/introduction/kubekey/). If your Kubernetes cluster was provisioned by yourself or cloud providers, refer to [Air-gapped Upgrade with ks-installer](../air-gapped-upgrade-with-ks-installer/).
+
+## Prerequisites
+
+- You need to have a KubeSphere cluster running v3.2.x. If your KubeSphere version is v3.1.x or earlier, upgrade to v3.2.x first.
+- Your Kubernetes version must be v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, or * v1.24.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+- Read [Release Notes for 3.3.2](../../../v3.3/release/release-v332/) carefully.
+- Back up any important component beforehand.
+- A Docker registry. You need to have a Harbor or other Docker registries.
+- Make sure every node can push and pull images from the Docker Registry.
+
+## Major Updates
+
+In KubeSphere 3.3.1, some changes have been made to built-in roles and to the permissions of custom roles. Therefore, before you upgrade KubeSphere to 3.3.1, please note the following:
+
+ - Change of built-in roles: Platform-level built-in roles `users-manager` and `workspace-manager` are removed. If an existing user has been bound to `users-manager` or `workspace-manager`, its role will be changed to `platform-regular` after the upgrade is completed. Role `platform-self-provisioner` is added. For more information about built-in roles, refer to [Create a user](../../quick-start/create-workspace-and-project).
+
+ - Some permissions of custom roles are removed:
+ - Removed permissions of platform-level custom roles: user management, role management, and workspace management.
+ - Removed permissions of workspace-level custom roles: user management, role management, and user group management.
+ - Removed permissions of namespace-level custom roles: user management and role management.
+ - After you upgrade KubeSphere to 3.3.1, custom roles will be retained, but removed permissions of the custom roles will be revoked.
+
+## Upgrade KubeSphere and Kubernetes
+
+Upgrading steps are different for single-node clusters (all-in-one) and multi-node clusters.
+
+{{< notice info >}}
+
+KubeKey upgrades Kubernetes from one MINOR version to the next MINOR version until the target version. For example, you may see the upgrading process going from 1.16 to 1.17 and to 1.18, instead of directly jumping to 1.18 from 1.16.
+
+{{</ notice >}}
+
+
+### System Requirements
+
+| Systems | Minimum Requirements (Each node) |
+| --------------------------------------------------------------- | ------------------------------------------- |
+| **Ubuntu** *16.04, 18.04, 20.04* | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
+| **Debian** *Buster, Stretch* | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
+| **CentOS** *7.x* | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
+| **Red Hat Enterprise Linux** *7* | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
+| **SUSE Linux Enterprise Server** *15* **/openSUSE Leap** *15.2* | CPU: 2 Cores, Memory: 4 G, Disk Space: 40 G |
+
+{{< notice note >}}
+
+[KubeKey](https://github.com/kubesphere/kubekey) uses `/var/lib/docker` as the default directory where all Docker related files, including images, are stored. It is recommended you add additional storage volumes with at least **100G** mounted to `/var/lib/docker` and `/mnt/registry` respectively. See [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
+
+{{</ notice >}}
+
+
+### Step 1: Download KubeKey
+
+1. Run the following command to download KubeKey.
+ {{< tabs >}}
+
+ {{< tab "Good network connections to GitHub/Googleapis" >}}
+
+ Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
+
+ ```bash
+ curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+ ```
+
+   {{</ tab >}}
+
+ {{< tab "Poor network connections to GitHub/Googleapis" >}}
+
+ Run the following command first to make sure you download KubeKey from the correct zone.
+
+ ```bash
+ export KKZONE=cn
+ ```
+
+ Run the following command to download KubeKey:
+
+ ```bash
+ curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+ ```
+   {{</ tab >}}
+
+   {{</ tabs >}}
+
+2. After you uncompress the file, execute the following command to make `kk` executable:
+
+ ```bash
+ chmod +x kk
+ ```
+
+### Step 2: Prepare installation images
+
+As you install KubeSphere and Kubernetes on Linux, you need to prepare an image package containing all the necessary images and download the Kubernetes binary file in advance.
+
+1. Download the image list file `images-list.txt` on a machine that has access to the Internet by running the following command:
+
+ ```bash
+ curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/images-list.txt
+ ```
+
+ {{< notice note >}}
+
+ This file lists images under `##+modulename` based on different modules. You can add your own images to this file following the same rule.
+
+    {{</ notice >}}
+
+2. Download `offline-installation-tool.sh`.
+
+ ```bash
+ curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/offline-installation-tool.sh
+ ```
+
+3. Make the `.sh` file executable.
+
+ ```bash
+ chmod +x offline-installation-tool.sh
+ ```
+
+4. You can execute the command `./offline-installation-tool.sh -h` to see how to use the script:
+
+ ```bash
+ root@master:/home/ubuntu# ./offline-installation-tool.sh -h
+ Usage:
+
+ ./offline-installation-tool.sh [-l IMAGES-LIST] [-d IMAGES-DIR] [-r PRIVATE-REGISTRY] [-v KUBERNETES-VERSION ]
+
+ Description:
+ -b : save kubernetes' binaries.
+ -d IMAGES-DIR : the dir of files (tar.gz) which generated by `docker save`. default: /home/ubuntu/kubesphere-images
+ -l IMAGES-LIST : text file with list of images.
+ -r PRIVATE-REGISTRY : target private registry:port.
+ -s : save model will be applied. Pull the images in the IMAGES-LIST and save images as a tar.gz file.
+ -v KUBERNETES-VERSION : download kubernetes' binaries. default: v1.17.9
+ -h : usage message
+ ```
+
+5. Download the Kubernetes binary file.
+
+ ```bash
+ ./offline-installation-tool.sh -b -v v1.22.12
+ ```
+
+ If you cannot access the object storage service of Google, run the following command instead to add the environment variable to change the source.
+
+ ```bash
+ export KKZONE=cn;./offline-installation-tool.sh -b -v v1.22.12
+ ```
+
+ {{< notice note >}}
+
+    - You can change the Kubernetes version downloaded based on your needs. Recommended Kubernetes versions for KubeSphere 3.3 are v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, and * v1.24.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../installing-on-linux/introduction/kubekey/#support-matrix).
+
+ - After you run the script, a folder `kubekey` is automatically created. Note that this file and `kk` must be placed in the same directory when you create the cluster later.
+
+    {{</ notice >}}
+
+6. Pull images in `offline-installation-tool.sh`.
+
+ ```bash
+ ./offline-installation-tool.sh -s -l images-list.txt -d ./kubesphere-images
+ ```
+
+ {{< notice note >}}
+
+    You can choose to pull images as needed. For example, you can delete `##k8s-images` and the related images under it in `images-list.txt` if you already have a Kubernetes cluster.
+
+    {{</ notice >}}
+
+### Step 3: Push images to your private registry
+
+Transfer your packaged image file to your local machine and execute the following command to push it to the registry.
+
+```bash
+./offline-installation-tool.sh -l images-list.txt -d ./kubesphere-images -r dockerhub.kubekey.local
+```
+
+ {{< notice note >}}
+
+ The domain name is `dockerhub.kubekey.local` in the command. Make sure you use your **own registry address**.
+
+ {{</ notice >}}
+
+### Air-gapped upgrade for all-in-one clusters
+
+#### Example machines
+| Host Name | IP | Role | Port | URL |
+| --------- | ----------- | -------------------- | ---- | ----------------------- |
+| master | 192.168.1.1 | Docker registry | 5000 | http://192.168.1.1:5000 |
+| master | 192.168.1.1 | master, etcd, worker | | |
+
+#### Versions
+
+| | Kubernetes | KubeSphere |
+| ------ | ---------- | ---------- |
+| Before | v1.18.6 | v3.2.x |
+| After | v1.22.12 | 3.3.x |
+
+#### Upgrade a cluster
+
+In this example, KubeSphere is installed on a single node, and you need to specify a configuration file to add host information. Besides, for air-gapped installation, pay special attention to `.spec.registry.privateRegistry`, which must be set to **your own registry address**. For more information, see the following sections.
+
+#### Create an example configuration file
+
+Execute the following command to generate an example configuration file for installation:
+
+```bash
+./kk create config [--with-kubernetes version] [--with-kubesphere version] [(-f | --file) path]
+```
+
+For example:
+
+```bash
+./kk create config --with-kubernetes v1.22.12 --with-kubesphere v3.3.2 -f config-sample.yaml
+```
+
+{{< notice note >}}
+
+Make sure the Kubernetes version is the one you downloaded.
+
+{{</ notice >}}
+
+#### Edit the configuration file
+
+Edit the configuration file `config-sample.yaml`. Here is [an example for your reference](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md).
+
+ {{< notice warning >}}
+
+For air-gapped installation, you must specify `privateRegistry`, which is `dockerhub.kubekey.local` in this example.
+
+ {{</ notice >}}
+
+ Set `hosts` of your `config-sample.yaml` file:
+
+```yaml
+ hosts:
+ - {name: ks.master, address: 192.168.1.1, internalAddress: 192.168.1.1, user: root, password: Qcloud@123}
+ roleGroups:
+ etcd:
+ - ks.master
+ control-plane:
+ - ks.master
+ worker:
+ - ks.master
+```
+
+Set `privateRegistry` of your `config-sample.yaml` file:
+```yaml
+ registry:
+ registryMirrors: []
+ insecureRegistries: []
+ privateRegistry: dockerhub.kubekey.local
+```
+
+#### Upgrade your single-node cluster to KubeSphere 3.3 and Kubernetes v1.22.12
+
+```bash
+./kk upgrade -f config-sample.yaml
+```
+
+To upgrade Kubernetes to a specific version, explicitly provide the version after the flag `--with-kubernetes`. Available versions are v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, and * v1.24.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+
+### Air-gapped upgrade for multi-node clusters
+
+#### Example machines
+| Host Name | IP | Role | Port | URL |
+| --------- | ----------- | --------------- | ---- | ----------------------- |
+| master | 192.168.1.1 | Docker registry | 5000 | http://192.168.1.1:5000 |
+| master | 192.168.1.1 | master, etcd | | |
+| slave1 | 192.168.1.2 | worker | | |
+| slave2 | 192.168.1.3 | worker | | |
+
+
+#### Versions
+
+| | Kubernetes | KubeSphere |
+| ------ | ---------- | ---------- |
+| Before | v1.18.6 | v3.2.x |
+| After | v1.22.12 | 3.3.x |
+
+#### Upgrade a cluster
+
+In this example, KubeSphere is installed on multiple nodes, so you need to specify a configuration file to add host information. Besides, for air-gapped installation, pay special attention to `.spec.registry.privateRegistry`, which must be set to **your own registry address**. For more information, see the following sections.
+
+#### Create an example configuration file
+
+ Execute the following command to generate an example configuration file for installation:
+
+```bash
+./kk create config [--with-kubernetes version] [--with-kubesphere version] [(-f | --file) path]
+```
+
+ For example:
+
+```bash
+./kk create config --with-kubernetes v1.22.12 --with-kubesphere v3.3.2 -f config-sample.yaml
+```
+
+{{< notice note >}}
+
+Make sure the Kubernetes version is the one you downloaded.
+
+{{</ notice >}}
+
+#### Edit the configuration file
+
+Edit the configuration file `config-sample.yaml`. Here is [an example for your reference](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md).
+
+ {{< notice warning >}}
+
+ For air-gapped installation, you must specify `privateRegistry`, which is `dockerhub.kubekey.local` in this example.
+
+ {{</ notice >}}
+
+Set `hosts` of your `config-sample.yaml` file:
+
+```yaml
+ hosts:
+ - {name: ks.master, address: 192.168.1.1, internalAddress: 192.168.1.1, user: root, password: Qcloud@123}
+ - {name: ks.slave1, address: 192.168.1.2, internalAddress: 192.168.1.2, user: root, privateKeyPath: "/root/.ssh/kp-qingcloud"}
+ - {name: ks.slave2, address: 192.168.1.3, internalAddress: 192.168.1.3, user: root, privateKeyPath: "/root/.ssh/kp-qingcloud"}
+ roleGroups:
+ etcd:
+ - ks.master
+ control-plane:
+ - ks.master
+ worker:
+ - ks.slave1
+ - ks.slave2
+```
+Set `privateRegistry` of your `config-sample.yaml` file:
+```yaml
+ registry:
+ registryMirrors: []
+ insecureRegistries: []
+ privateRegistry: dockerhub.kubekey.local
+```
+
+#### Upgrade your multi-node cluster to KubeSphere 3.3 and Kubernetes v1.22.12
+
+```bash
+./kk upgrade -f config-sample.yaml
+```
+
+To upgrade Kubernetes to a specific version, explicitly provide the version after the flag `--with-kubernetes`. Available versions are v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, and * v1.24.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
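+
+After the upgrade completes, a quick sanity check from the taskbox can confirm the new versions (a sketch; namespaces and component names assume a default installation):
+
+```bash
+# Node versions should report the upgraded kubelet (for example, v1.22.12)
+kubectl get nodes -o wide
+
+# KubeSphere system components should be Running after the upgrade
+kubectl get pod -n kubesphere-system
+```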
diff --git a/content/en/docs/v3.4/upgrade/overview.md b/content/en/docs/v3.4/upgrade/overview.md
new file mode 100644
index 000000000..df085801b
--- /dev/null
+++ b/content/en/docs/v3.4/upgrade/overview.md
@@ -0,0 +1,28 @@
+---
+title: "Upgrade — Overview"
+keywords: "Kubernetes, upgrade, KubeSphere, 3.3, upgrade"
+description: "Understand what you need to pay attention to before the upgrade, such as versions, and upgrade tools."
+linkTitle: "Overview"
+weight: 7100
+---
+
+## Make Your Upgrade Plan
+
+KubeSphere 3.3 is compatible with Kubernetes v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, and * v1.24.x:
+
+- Before you upgrade your cluster to KubeSphere 3.3, you need to have a KubeSphere cluster running v3.2.x.
+- You can choose to only upgrade KubeSphere to 3.3 or upgrade Kubernetes (to a higher version) and KubeSphere (to 3.3) at the same time.
+- For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+
+## Before the Upgrade
+
+{{< notice warning >}}
+
+- Simulate the upgrade in a testing environment first. Upgrade your production cluster only after the upgrade succeeds in the testing environment and all applications are running normally.
+- During the upgrade, applications may experience a short interruption (especially those running as single-replica Pods), so arrange a suitable maintenance window.
+- In production, it is recommended to back up etcd and stateful applications beforehand. You can use [Velero](https://velero.io/) to back up and migrate Kubernetes resources and persistent volumes.
+
+{{</ notice >}}
+
+## Upgrade Tool
+
+Depending on how your existing cluster was set up, you can use KubeKey or ks-installer to upgrade your cluster. It is recommended that you [use KubeKey to upgrade your cluster](../upgrade-with-kubekey/) if it was created by KubeKey. Otherwise, [use ks-installer to upgrade your cluster](../upgrade-with-ks-installer/).
\ No newline at end of file
diff --git a/content/en/docs/v3.4/upgrade/upgrade-with-ks-installer.md b/content/en/docs/v3.4/upgrade/upgrade-with-ks-installer.md
new file mode 100644
index 000000000..6790aa41a
--- /dev/null
+++ b/content/en/docs/v3.4/upgrade/upgrade-with-ks-installer.md
@@ -0,0 +1,41 @@
+---
+title: "Upgrade with ks-installer"
+keywords: "Kubernetes, upgrade, KubeSphere, v3.3.2"
+description: "Use ks-installer to upgrade KubeSphere."
+linkTitle: "Upgrade with ks-installer"
+weight: 7300
+---
+
+ks-installer is recommended for users whose Kubernetes clusters were not set up by [KubeKey](../../installing-on-linux/introduction/kubekey/), but hosted by cloud vendors or created by themselves. This tutorial is for **upgrading KubeSphere only**. Cluster operators are responsible for upgrading Kubernetes beforehand.
+
+## Prerequisites
+
+- You need to have a KubeSphere cluster running v3.2.x. If your KubeSphere version is v3.1.x or earlier, upgrade to v3.2.x first.
+- Read [Release Notes for 3.3.2](../../../v3.3/release/release-v332/) carefully.
+- Back up any important component beforehand.
+- Supported Kubernetes versions of KubeSphere 3.3: v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, and * v1.24.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+
+## Major Updates
+
+In KubeSphere 3.3.1, some changes have been made to built-in roles and to the permissions of custom roles. Therefore, before you upgrade KubeSphere to 3.3.1, please note the following:
+
+ - Change of built-in roles: Platform-level built-in roles `users-manager` and `workspace-manager` are removed. If an existing user has been bound to `users-manager` or `workspace-manager`, its role will be changed to `platform-regular` after the upgrade is completed. Role `platform-self-provisioner` is added. For more information about built-in roles, refer to [Create a user](../../quick-start/create-workspace-and-project).
+
+ - Some permissions of custom roles are removed:
+ - Removed permissions of platform-level custom roles: user management, role management, and workspace management.
+ - Removed permissions of workspace-level custom roles: user management, role management, and user group management.
+ - Removed permissions of namespace-level custom roles: user management and role management.
+ - After you upgrade KubeSphere to 3.3.1, custom roles will be retained, but removed permissions of the custom roles will be revoked.
+
+## Apply ks-installer
+
+Run the following command to upgrade your cluster.
+
+```bash
+kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.2/kubesphere-installer.yaml --force
+```
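+
+You can then watch the installer and the system components being re-deployed, for example:
+
+```bash
+# Watch Pods in kubesphere-system until ks-installer and the core components are Running again
+kubectl get pod -n kubesphere-system -w
+```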
+
+## Enable Pluggable Components
+
+You can [enable new pluggable components](../../pluggable-components/overview/) of KubeSphere 3.3 after the upgrade to explore more features of the container platform.
+
diff --git a/content/en/docs/v3.4/upgrade/upgrade-with-kubekey.md b/content/en/docs/v3.4/upgrade/upgrade-with-kubekey.md
new file mode 100644
index 000000000..35eef7615
--- /dev/null
+++ b/content/en/docs/v3.4/upgrade/upgrade-with-kubekey.md
@@ -0,0 +1,146 @@
+---
+title: "Upgrade with KubeKey"
+keywords: "Kubernetes, upgrade, KubeSphere, 3.3, KubeKey"
+description: "Use KubeKey to upgrade Kubernetes and KubeSphere."
+linkTitle: "Upgrade with KubeKey"
+weight: 7200
+---
+KubeKey is recommended for users whose KubeSphere and Kubernetes were both installed by [KubeKey](../../installing-on-linux/introduction/kubekey/). If your Kubernetes cluster was provisioned by yourself or cloud providers, refer to [Upgrade with ks-installer](../upgrade-with-ks-installer/).
+
+This tutorial demonstrates how to upgrade your cluster using KubeKey.
+
+## Prerequisites
+
+- You need to have a KubeSphere cluster running v3.2.x. If your KubeSphere version is v3.1.x or earlier, upgrade to v3.2.x first.
+- Read [Release Notes for 3.3.2](../../../v3.3/release/release-v332/) carefully.
+- Back up any important component beforehand.
+- Make your upgrade plan. Two scenarios are provided in this document for [all-in-one clusters](#all-in-one-cluster) and [multi-node clusters](#multi-node-cluster) respectively.
+
+## Major Updates
+
+In KubeSphere 3.3.1, some changes have been made to built-in roles and to the permissions of custom roles. Therefore, before you upgrade KubeSphere to 3.3.1, please note the following:
+
+ - Change of built-in roles: Platform-level built-in roles `users-manager` and `workspace-manager` are removed. If an existing user has been bound to `users-manager` or `workspace-manager`, its role will be changed to `platform-regular` after the upgrade is completed. Role `platform-self-provisioner` is added. For more information about built-in roles, refer to [Create a user](../../quick-start/create-workspace-and-project).
+
+ - Some permissions of custom roles are removed:
+ - Removed permissions of platform-level custom roles: user management, role management, and workspace management.
+ - Removed permissions of workspace-level custom roles: user management, role management, and user group management.
+ - Removed permissions of namespace-level custom roles: user management and role management.
+ - After you upgrade KubeSphere to 3.3.1, custom roles will be retained, but removed permissions of the custom roles will be revoked.
+
+## Download KubeKey
+
+Follow the steps below to download KubeKey before you upgrade your cluster.
+
+{{< tabs >}}
+
+{{< tab "Good network connections to GitHub/Googleapis" >}}
+
+Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
+
+```bash
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+```
+
+{{</ tab >}}
+
+{{< tab "Poor network connections to GitHub/Googleapis" >}}
+
+Run the following command first to make sure you download KubeKey from the correct zone.
+
+```bash
+export KKZONE=cn
+```
+
+Run the following command to download KubeKey:
+
+```bash
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+```
+
+{{< notice note >}}
+
+After you download KubeKey, if you transfer it to a new machine also with poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.
+
+{{</ notice >}}
+
+{{</ tab >}}
+
+{{</ tabs >}}
+
+{{< notice note >}}
+
+The commands above download the latest release of KubeKey. You can change the version number in the command to download a specific version.
+
+{{</ notice >}}
+
+Make `kk` executable:
+
+```bash
+chmod +x kk
+```
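+
+Optionally, verify that the binary works before proceeding (a sketch; the exact output depends on the release you downloaded):
+
+```bash
+# Print the KubeKey version to confirm the binary is usable
+./kk version
+```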
+
+## Upgrade KubeSphere and Kubernetes
+
+Upgrading steps are different for single-node clusters (all-in-one) and multi-node clusters.
+
+{{< notice info >}}
+
+When upgrading Kubernetes, KubeKey will upgrade from one MINOR version to the next MINOR version until the target version. For example, you may see the upgrading process going from 1.16 to 1.17 and to 1.18, instead of directly jumping to 1.18 from 1.16.
+
+{{</ notice >}}
+
+### All-in-one cluster
+
+Run the following command to use KubeKey to upgrade your single-node cluster to KubeSphere 3.3 and Kubernetes v1.22.12:
+
+```bash
+./kk upgrade --with-kubernetes v1.22.12 --with-kubesphere v3.3.2
+```
+
+To upgrade Kubernetes to a specific version, explicitly provide the version after the flag `--with-kubernetes`. Available versions are v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, and * v1.24.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+
+### Multi-node cluster
+
+#### Step 1: Generate a configuration file using KubeKey
+
+This command creates a configuration file `sample.yaml` of your cluster.
+
+```bash
+./kk create config --from-cluster
+```
+
+{{< notice note >}}
+
+It assumes your kubeconfig is located in `~/.kube/config`. You can change it with the flag `--kubeconfig`.
+
+{{</ notice >}}
+
+#### Step 2: Edit the configuration file template
+
+Edit `sample.yaml` based on your cluster configuration. Make sure you replace the following fields correctly.
+
+- `hosts`: The basic information of your hosts (hostname and IP address) and how to connect to them using SSH.
+- `roleGroups.etcd`: Your etcd nodes.
+- `controlPlaneEndpoint`: Your load balancer address (optional).
+- `registry`: Your image registry information (optional).
+
+{{< notice note >}}
+
+For more information, see [Edit the configuration file](../../installing-on-linux/introduction/multioverview/#2-edit-the-configuration-file) or refer to the `Cluster` section of [the complete configuration file](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md) for more information.
+
+{{</ notice >}}
+
+#### Step 3: Upgrade your cluster
+The following command upgrades your cluster to KubeSphere 3.3 and Kubernetes v1.22.12:
+
+```bash
+./kk upgrade --with-kubernetes v1.22.12 --with-kubesphere v3.3.2 -f sample.yaml
+```
+
+To upgrade Kubernetes to a specific version, explicitly provide the version after the flag `--with-kubernetes`. Available versions are v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, and * v1.24.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+
+{{< notice note >}}
+
+{{</ notice >}}
+
+{{ notice >}}
\ No newline at end of file
diff --git a/content/en/docs/v3.4/upgrade/what-changed.md b/content/en/docs/v3.4/upgrade/what-changed.md
new file mode 100644
index 000000000..f799ef160
--- /dev/null
+++ b/content/en/docs/v3.4/upgrade/what-changed.md
@@ -0,0 +1,12 @@
+---
+title: "Changes after Upgrade"
+keywords: "Kubernetes, upgrade, KubeSphere, 3.3"
+description: "Understand what will be changed after the upgrade."
+
+linkTitle: "Changes after Upgrade"
+weight: 7600
+---
+
+This section covers the changes after upgrade for existing settings in previous versions. If you want to know all the new features and enhancements in KubeSphere 3.3, see [Release Notes for 3.3.0](../../../v3.3/release/release-v330/), [Release Notes for 3.3.1](../../../v3.3/release/release-v331/), and [Release Notes for 3.3.2](../../../v3.3/release/release-v332/).
+
+
diff --git a/content/en/docs/v3.4/workspace-administration/_index.md b/content/en/docs/v3.4/workspace-administration/_index.md
new file mode 100644
index 000000000..2024f8313
--- /dev/null
+++ b/content/en/docs/v3.4/workspace-administration/_index.md
@@ -0,0 +1,16 @@
+---
+title: "Workspace Administration and User Guide"
+description: "This chapter helps you to better manage KubeSphere workspaces."
+layout: "second"
+
+linkTitle: "Workspace Administration and User Guide"
+
+weight: 9000
+
+icon: "/images/docs/v3.3/docs.svg"
+
+---
+
+KubeSphere tenants work in a workspace to manage projects and apps. Among other responsibilities, workspace administrators manage app repositories. Tenants with the necessary permissions can deploy and use app templates from these repositories, as well as individual app templates that are uploaded and released to the App Store. Administrators also control whether the network of a workspace is isolated from other workspaces.
+
+This chapter demonstrates how workspace administrators and tenants work at the workspace level.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/workspace-administration/app-repository/_index.md b/content/en/docs/v3.4/workspace-administration/app-repository/_index.md
new file mode 100644
index 000000000..656e5cfaf
--- /dev/null
+++ b/content/en/docs/v3.4/workspace-administration/app-repository/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "App Repositories"
+weight: 9300
+
+_build:
+ render: false
+---
diff --git a/content/en/docs/v3.4/workspace-administration/app-repository/import-helm-repository.md b/content/en/docs/v3.4/workspace-administration/app-repository/import-helm-repository.md
new file mode 100644
index 000000000..4e9a017f1
--- /dev/null
+++ b/content/en/docs/v3.4/workspace-administration/app-repository/import-helm-repository.md
@@ -0,0 +1,52 @@
+---
+title: "Import a Helm Repository"
+keywords: "Kubernetes, Helm, KubeSphere, Application"
+description: "Import a Helm repository to KubeSphere to provide app templates for tenants in a workspace."
+linkTitle: "Import a Helm Repository"
+weight: 9310
+---
+
+KubeSphere builds app repositories that allow users to deploy Kubernetes applications based on Helm charts. App repositories are powered by [OpenPitrix](https://github.com/openpitrix/openpitrix), an open source platform for cross-cloud application management sponsored by QingCloud. An app repository serves as a library of application packages. To deploy and manage an app from an app repository, you need to create the repository in advance.
+
+To create a repository, you use an HTTP/HTTPS server or object storage solutions to store packages. More specifically, an app repository relies on external storage independent of OpenPitrix, such as [MinIO](https://min.io/) object storage, [QingStor object storage](https://github.com/qingstor), and [AWS object storage](https://aws.amazon.com/what-is-cloud-object-storage/). These object storage services are used to store configuration packages and index files created by developers. After a repository is registered, the configuration packages are automatically indexed as deployable applications.
+
+This tutorial demonstrates how to add an app repository to KubeSphere.
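+
+Before registering a repository in the console, you can optionally verify that its URL serves a valid Helm index. A minimal sketch using the Helm CLI and the example repository used later in this tutorial:
+
+```bash
+# Add the repository locally, refresh the index, and list the charts it serves
+helm repo add kubesphere https://charts.kubesphere.io/main
+helm repo update
+helm search repo kubesphere
+```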
+
+## Prerequisites
+
+- You need to enable the [KubeSphere App Store (OpenPitrix)](../../../pluggable-components/app-store/).
+- You need to have an app repository. Refer to [the official documentation of Helm](https://v2.helm.sh/docs/developing_charts/#the-chart-repository-guide) to create repositories or [upload your own apps to the public repository of KubeSphere](../upload-app-to-public-repository/). Alternatively, use the example repository in the steps below, which is only for demonstration purposes.
+- You need to create a workspace and a user (`ws-admin`). The user must be granted the role of `workspace-admin` in the workspace. For more information, refer to [Create Workspaces, Projects, Users and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Add an App Repository
+
+1. Log in to the web console of KubeSphere as `ws-admin`. In your workspace, go to **App Repositories** under **App Management**, and then click **Add**.
+
+2. In the dialog that appears, specify an app repository name and add your repository URL. For example, enter `https://charts.kubesphere.io/main`.
+
+ - **Name**: Set a simple and clear name for the repository, which is easy for users to identify.
+ - **URL**: Follow the RFC 3986 specification with the following three protocols supported:
+ - S3: The URL is S3-styled, such as `s3.
on the right of a user, and click **OK** for the displayed message to assign the user to the department.
+
+ {{< notice note >}}
+
+ * If permissions provided by the department overlap with existing permissions of the user, new permissions are added to the user. Existing permissions of the user are not affected.
+ * Users assigned to a department can perform operations according to the workspace role, project roles, and DevOps project roles associated with the department without being invited to the workspace, projects, and DevOps projects.
+
+    {{</ notice >}}
+
+## Remove a User from a Department
+
+1. On the **Departments** page, select a department in the department tree on the left and click **Assigned** on the right.
+2. In the assigned user list, click the icon on the right of a user, enter the username in the displayed dialog box, and click **OK** to remove the user.
+
+## Delete and Edit a Department
+
+1. On the **Departments** page, click **Set Departments**.
+
+2. In the **Set Departments** dialog box, on the left, click the upper level of the department to be edited or deleted.
+
+3. Click the edit icon on the right of the department to edit it.
+
+ {{< notice note >}}
+
+ For details, see [Create a Department](#create-a-department).
+
+    {{</ notice >}}
+
+4. Click the delete icon on the right of the department, enter the department name in the displayed dialog box, and click **OK** to delete the department.
+
+ {{< notice note >}}
+
+ * If a department contains sub-departments, the sub-departments will also be deleted.
+ * After a department is deleted, the associated roles will be unbound from the users.
+
+    {{</ notice >}}
\ No newline at end of file
diff --git a/content/en/docs/v3.4/workspace-administration/project-quotas.md b/content/en/docs/v3.4/workspace-administration/project-quotas.md
new file mode 100644
index 000000000..ad59de15f
--- /dev/null
+++ b/content/en/docs/v3.4/workspace-administration/project-quotas.md
@@ -0,0 +1,56 @@
+---
+title: "Project Quotas"
+keywords: 'KubeSphere, Kubernetes, projects, quotas, resources, requests, limits'
+description: 'Set requests and limits to control resource usage in a project.'
+linkTitle: "Project Quotas"
+weight: 9600
+---
+
+KubeSphere uses [Kubernetes requests and limits](https://kubesphere.io/blogs/understand-requests-and-limits-in-kubernetes/) to control resource (for example, CPU and memory) usage in a project, also known as [resource quotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) in Kubernetes. Requests make sure a project can get the resources it needs, as they are specifically guaranteed and reserved. In contrast, limits ensure that a project can never use resources above a certain value.
+
+Besides CPU and memory, you can also set resource quotas for other objects separately such as Pods, [Deployments](../../project-user-guide/application-workloads/deployments/), [Jobs](../../project-user-guide/application-workloads/jobs/), [Services](../../project-user-guide/application-workloads/services/), and [ConfigMaps](../../project-user-guide/configuration/configmaps/) in a project.
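+
+Under the hood, the quotas you set are stored as a Kubernetes `ResourceQuota` object in the project's namespace. A hedged example of inspecting it from the command line (assuming a project named `demo-project`):
+
+```bash
+# List and inspect the ResourceQuota objects that back the project quotas
+kubectl get resourcequota -n demo-project
+kubectl describe resourcequota -n demo-project
+```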
+
+This tutorial demonstrates how to configure quotas for a project.
+
+## Prerequisites
+
+You have an available workspace, a project and a user (`ws-admin`). The user must have the `admin` role at the workspace level. For more information, see [Create Workspaces, Projects, Users and Roles](../../quick-start/create-workspace-and-project/).
+
+{{< notice note >}}
+
+If you use the user `project-admin` (a user with the `admin` role at the project level), you can also set project quotas for a new project (that is, a project whose quotas have not been set). However, `project-admin` cannot change project quotas once they are set. Generally, it is the responsibility of `ws-admin` to set limits and requests for a project. `project-admin` is responsible for [setting limit ranges](../../project-administration/container-limit-ranges/) for containers in a project.
+
+{{</ notice >}}
+
+## Set Project Quotas
+
+1. Log in to the console as `ws-admin` and go to a project. On the **Overview** page, you can see project quotas remain unset if the project is newly created. Click **Edit Quotas** to configure quotas.
+
+2. In the displayed dialog box, you can see that KubeSphere does not set any requests or limits for a project by default. To set requests and limits to control CPU and memory resources, use the slider to move to a desired value or enter numbers directly. Leaving a field blank means you do not set any requests or limits.
+
+ {{< notice note >}}
+
+ The limit can never be lower than the request.
+
+    {{</ notice >}}
+
+3. To set quotas for other resources, click **Add** under **Project Resource Quotas**, and then select a resource or enter a resource name and set a quota.
+
+4. Click **OK** to finish setting quotas.
+
+5. Go to **Basic Information** in **Project Settings**, and you can see all resource quotas for the project.
+
+6. To change project quotas, click **Edit Project** on the **Basic Information** page and select **Edit Project Quotas**.
+
+ {{< notice note >}}
+
+    For [a multi-cluster project](../../project-administration/project-and-multicluster-project/#multi-cluster-projects), the option **Edit Project Quotas** is not displayed in the **Manage Project** drop-down menu. To set quotas for a multi-cluster project, go to **Project Quotas** under **Project Settings** and click **Edit Quotas**. Note that as a multi-cluster project runs across clusters, you can set resource quotas on different clusters separately.
+
+    {{</ notice >}}
+
+7. Change project quotas in the dialog that appears and click **OK**.
+
+## See Also
+
+[Container Limit Ranges](../../project-administration/container-limit-ranges/)
diff --git a/content/en/docs/v3.4/workspace-administration/role-and-member-management.md b/content/en/docs/v3.4/workspace-administration/role-and-member-management.md
new file mode 100644
index 000000000..eeffc5f4c
--- /dev/null
+++ b/content/en/docs/v3.4/workspace-administration/role-and-member-management.md
@@ -0,0 +1,61 @@
+---
+title: "Workspace Role and Member Management"
+keywords: "Kubernetes, workspace, KubeSphere, multitenancy"
+description: "Customize a workspace role and grant it to tenants."
+linkTitle: "Workspace Role and Member Management"
+weight: 9400
+---
+
+This tutorial demonstrates how to manage roles and members in a workspace.
+
+## Prerequisites
+
+At least one workspace has been created, such as `demo-workspace`. Besides, you need a user of the `workspace-admin` role (for example, `ws-admin`) at the workspace level. For more information, see [Create Workspaces, Projects, Users and Roles](../../quick-start/create-workspace-and-project/).
+
+{{< notice note >}}
+
+The actual role name follows a naming convention: `workspace name-role name`. For example, for a workspace named `demo-workspace`, the actual role name of the role `admin` is `demo-workspace-admin`.
+
+{{</ notice >}}
+
+## Built-in Roles
+
+In **Workspace Roles**, there are four available built-in roles. Built-in roles are created automatically by KubeSphere when a workspace is created and they cannot be edited or deleted. You can only view permissions included in a built-in role or assign it to a user.
+
+| Built-in Roles | Description |
+| ------------------ | ------------------------------------------------------------ |
+| `workspace-viewer` | Workspace viewer who can view all resources in the workspace. |
+| `workspace-self-provisioner` | Workspace regular member who can view workspace settings, manage app templates, and create projects and DevOps projects. |
+| `workspace-regular` | Workspace regular member who can view workspace settings. |
+| `workspace-admin` | Workspace administrator who has full control over all resources in the workspace. |
+
+To view the permissions that a role contains:
+
+1. Log in to the console as `ws-admin`. In **Workspace Roles**, click a role (for example, `workspace-admin`) and you can see role details.
+
+2. Click the **Authorized Users** tab to see all the users that are granted the role.
+
+## Create a Workspace Role
+
+1. Navigate to **Workspace Roles** under **Workspace Settings**.
+
+2. In **Workspace Roles**, click **Create** and set a role **Name** (for example, `demo-project-admin`). Click **Edit Permissions** to continue.
+
+3. In the pop-up window, permissions are categorized into different **Modules**. In this example, click **Project Management** and select **Project Creation**, **Project Management**, and **Project Viewing** for this role. Click **OK** to finish creating the role.
+
+ {{< notice note >}}
+
+ **Depends on** means the major permission (the one listed after **Depends on**) needs to be selected first so that the affiliated permission can be assigned.
+
+   {{</ notice >}}
+
+4. Newly created roles are listed in **Workspace Roles**. To edit the information or permissions of an existing role, or to delete it, click the icon on the right.
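+
+For reference, you can also inspect workspace roles with kubectl, assuming your KubeSphere release exposes them as `WorkspaceRole` custom resources (`workspaceroles.iam.kubesphere.io`). A minimal sketch with the hypothetical names used above:
+
+```bash
+# List workspace roles; actual names follow the <workspace name>-<role name> convention.
+kubectl get workspaceroles.iam.kubesphere.io | grep demo-workspace
+
+# View the permission rules contained in the custom role created above.
+kubectl get workspaceroles.iam.kubesphere.io demo-workspace-demo-project-admin -o yaml
+```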
+
+## Invite a New Member
+
+1. Navigate to **Workspace Members** under **Workspace Settings**, and click **Invite**.
+2. Invite a user to the workspace by clicking the icon on the right of it and assigning a role to it.
+
+3. After you add the user to the workspace, click **OK**. In **Workspace Members**, you can see the user in the list.
+
+4. To edit the role of an existing user or remove the user from the workspace, click the icon on the right and select the corresponding operation.
\ No newline at end of file
diff --git a/content/en/docs/v3.4/workspace-administration/upload-helm-based-application.md b/content/en/docs/v3.4/workspace-administration/upload-helm-based-application.md
new file mode 100644
index 000000000..1a2236f91
--- /dev/null
+++ b/content/en/docs/v3.4/workspace-administration/upload-helm-based-application.md
@@ -0,0 +1,38 @@
+---
+title: "Upload Helm-based Applications"
+keywords: "Kubernetes, Helm, KubeSphere, OpenPitrix, Application"
+description: "Learn how to upload a Helm-based application as an app template to your workspace."
+linkTitle: "Upload Helm-based Applications"
+weight: 9200
+---
+
+KubeSphere provides full lifecycle management for applications. Among other things, workspace administrators can upload or create new app templates and test them quickly. Furthermore, they can publish well-tested apps to the [App Store](../../application-store/) so that other users can deploy them with one click. To develop app templates, workspace administrators need to upload packaged [Helm charts](https://helm.sh/) to KubeSphere first.
+
+This tutorial demonstrates how to develop an app template by uploading a packaged Helm chart.
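+
+If you do not have a packaged chart yet, you can produce the `.tgz` archive locally with the Helm CLI before you start. A minimal sketch, assuming a chart directory named `mychart/` (hypothetical name):
+
+```bash
+# Validate the chart structure before packaging (optional but recommended).
+helm lint mychart
+
+# Package the chart directory into a versioned archive, for example mychart-0.1.0.tgz.
+helm package mychart
+```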
+
+## Prerequisites
+
+- You need to enable the [KubeSphere App Store (OpenPitrix)](../../pluggable-components/app-store/).
+- You need to create a workspace and a user (`project-admin`). The user must be invited to the workspace with the role of `workspace-self-provisioner`. For more information, refer to [Create Workspaces, Projects, Users and Roles](../../quick-start/create-workspace-and-project/).
+
+## Hands-on Lab
+
+1. Log in to KubeSphere as `project-admin`. In your workspace, go to **App Templates** under **App Management**, and click **Create**.
+
+2. In the dialog that appears, click **Upload**. You can upload your own Helm chart or download the [Nginx chart](/files/application-templates/nginx-0.1.0.tgz) and use it as an example for the following steps.
+
+3. After the package is uploaded, click **OK** to continue.
+
+4. You can view the basic information of the app under **App Information**. To upload an icon for the app, click **Upload Icon**. You can also skip it and click **OK** directly.
+
+ {{< notice note >}}
+
+Maximum accepted resolution of the app icon: 96 x 96 pixels.
+
+{{</ notice >}}
+
+5. After the app is successfully uploaded, it appears in the template list with the status **Developing**, which means it is under development. The uploaded app is visible to all members in the same workspace.
+
+6. Click the app and the page opens with the **Versions** tab selected. Click the draft version to expand the menu, where you can see options including **Delete**, **Install**, and **Submit for Release**.
+
+7. For more information about how to release your app to the App Store, refer to [Application Lifecycle Management](../../application-store/app-lifecycle-management/#step-2-upload-and-submit-application).
diff --git a/content/en/docs/v3.4/workspace-administration/what-is-workspace.md b/content/en/docs/v3.4/workspace-administration/what-is-workspace.md
new file mode 100644
index 000000000..98e650db7
--- /dev/null
+++ b/content/en/docs/v3.4/workspace-administration/what-is-workspace.md
@@ -0,0 +1,83 @@
+---
+title: "Workspace Overview"
+keywords: "Kubernetes, KubeSphere, workspace"
+description: "Understand the concept of workspaces in KubeSphere and learn how to create and delete a workspace."
+
+linkTitle: "Workspace Overview"
+weight: 9100
+---
+
+A workspace is a logical unit to organize your [projects](../../project-administration/) and [DevOps projects](../../devops-user-guide/) and manage [app templates](../upload-helm-based-application/) and app repositories. It is the place for you to control resource access and share resources within your team in a secure way.
+
+It is a best practice to create a new workspace for tenants (excluding cluster administrators). The same tenant can work in multiple workspaces, and a workspace can be accessed by multiple tenants in different ways.
+
+This tutorial demonstrates how to create and delete a workspace.
+
+## Prerequisites
+
+You have a user granted the role of `workspaces-manager`, such as `ws-manager` in [Create Workspaces, Projects, Users and Roles](../../quick-start/create-workspace-and-project/).
+
+## Create a Workspace
+
+1. Log in to the web console of KubeSphere as `ws-manager`. Click **Platform** on the upper-left corner, and then select **Access Control**. On the **Workspaces** page, click **Create**.
+
+
+2. In a single-cluster environment, on the **Basic Information** page, specify a name for the workspace and select an administrator from the drop-down list. Click **Create**.
+
+ - **Name**: Set a name for the workspace which serves as a unique identifier.
+ - **Alias**: An alias name for the workspace.
+ - **Administrator**: User that administers the workspace.
+ - **Description**: A brief introduction of the workspace.
+
+   In a multi-cluster environment, after the basic information about the workspace is set, click **Next** to continue. On the **Cluster Settings** page, select the clusters to be used in the workspace, and then click **Create**.
+
+3. The workspace is displayed in the workspace list after it is created.
+
+4. Click the workspace and you can see resource status of the workspace on the **Overview** page.
+
+## Delete a Workspace
+
+In KubeSphere, you use a workspace to group and manage different projects, which means the lifecycle of a project is dependent on the workspace. More specifically, all the projects and related resources in a workspace will be deleted if the workspace is deleted.
+
+Before you delete a workspace, decide whether you want to unbind some key projects.
+
+### Unbind projects before deletion
+
+To delete a workspace while preserving some projects in it, run the following command first:
+
+```bash
+kubectl label ns
,点击 **kubectl**,然后执行以下命令来编辑 CRD `ClusterConfiguration` 中的 `ks-installer`:
+
+ ```bash
+ kubectl -n kubesphere-system edit cc ks-installer
+ ```
+
+2. 在 `spec.authentication.jwtSecret` 字段下添加以下字段。
+
+ ```yaml
+ spec:
+ authentication:
+ jwtSecret: ''
+ authenticateRateLimiterMaxTries: 10
+ authenticateRateLimiterDuration: 10m0s
+ oauthOptions:
+ accessTokenMaxAge: 1h
+ accessTokenInactivityTimeout: 30m
+ identityProviders:
+ - name: cas
+ type: CASIdentityProvider
+ mappingMethod: auto
+ provider:
+ redirectURL: "https://ks-console:30880/oauth/redirect/cas"
+ casServerURL: "https://cas.example.org/cas"
+ insecureSkipVerify: true
+ ```
+
+ 字段描述如下:
+
+ | 参数 | 描述 |
+ | -------------------- | ------------------------------------------------------------ |
+ | redirectURL | 重定向到 ks-console 的 URL,格式为:`https://<域名>/oauth/redirect/<身份提供者名称>`。URL 中的 `<身份提供者名称>` 对应 `oauthOptions:identityProviders:name` 的值。 |
+ | casServerURL | CAS 服务器的认证地址(URL)。 |
+ | insecureSkipVerify | 关闭 TLS 证书验证。 |
+
+
diff --git a/content/zh/docs/v3.4/access-control-and-account-management/external-authentication/oidc-identity-provider.md b/content/zh/docs/v3.4/access-control-and-account-management/external-authentication/oidc-identity-provider.md
new file mode 100644
index 000000000..fa144bc98
--- /dev/null
+++ b/content/zh/docs/v3.4/access-control-and-account-management/external-authentication/oidc-identity-provider.md
@@ -0,0 +1,64 @@
+---
+title: "OIDC 身份提供者"
+keywords: "OIDC, 身份提供者"
+description: "如何使用外部 OIDC 身份提供者。"
+
+linkTitle: "OIDC 身份提供者"
+weight: 12221
+---
+
+## OIDC 身份提供者
+
+[OpenID Connect](https://openid.net/connect/) 是一种基于 OAuth 2.0 系列规范的可互操作的身份认证协议。它使用简单的 REST/JSON 消息流,设计目标是“让简单的事情变得简单,让复杂的事情成为可能”。许多身份提供者产品(例如 Keycloak、Okta、Dex、Auth0、Gluu、Casdoor 等)都支持该协议,开发人员可以很容易地完成集成。
+
+## 准备工作
+
+您需要部署一个 Kubernetes 集群,并在集群中安装 KubeSphere。有关详细信息,请参阅[在 Linux 上安装](../../../installing-on-linux/)和[在 Kubernetes 上安装](../../../installing-on-kubernetes/)。
+
+## 步骤
+
+1. 以 `admin` 身份登录 KubeSphere,将光标移动到右下角
,点击 **kubectl**,然后执行以下命令来编辑 CRD `ClusterConfiguration` 中的 `ks-installer`:
+
+ ```bash
+ kubectl -n kubesphere-system edit cc ks-installer
+ ```
+
+2. 在 `spec.authentication.jwtSecret` 字段下添加以下字段。
+
+ *使用 [Google Identity Platform](https://developers.google.com/identity/protocols/oauth2/openid-connect) 的示例*:
+
+ ```yaml
+ spec:
+ authentication:
+ jwtSecret: ''
+ authenticateRateLimiterMaxTries: 10
+ authenticateRateLimiterDuration: 10m0s
+ oauthOptions:
+ accessTokenMaxAge: 1h
+ accessTokenInactivityTimeout: 30m
+ identityProviders:
+ - name: google
+ type: OIDCIdentityProvider
+ mappingMethod: auto
+ provider:
+ clientID: '********'
+ clientSecret: '********'
+ issuer: https://accounts.google.com
+ redirectURL: 'https://ks-console/oauth/redirect/google'
+ ```
+
+ 字段描述如下:
+
+ | 参数 | 描述 |
+ | -------------------- | ------------------------------------------------------------ |
+ | clientID | 客户端 ID。 |
+ | clientSecret | 客户端密码。 |
+ | redirectURL | 重定向到 ks-console 的 URL,格式为:`https://<域名>/oauth/redirect/<身份提供者名称>`。URL 中的 `<身份提供者名称>` 对应 `oauthOptions:identityProviders:name` 的值。 |
+ | issuer | 定义客户端如何动态发现有关 OpenID 提供者的信息。 |
+ | preferredUsernameKey | 首选用户名所在声明 (claim) 的字段名。此参数为可选参数。 |
+ | emailKey             | 电子邮件地址所在声明 (claim) 的字段名。此参数为可选参数。 |
+ | getUserInfo | 使用 userinfo 端点获取令牌的附加声明。非常适用于上游返回 “thin” ID 令牌的场景。此参数为可选参数。 |
+ | insecureSkipVerify | 关闭 TLS 证书验证。 |
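+
+  在配置 `issuer` 之前,您可以先确认该地址下的 OIDC 发现文档能够正常访问(示意性示例,以 Google 为例):
+
+  ```bash
+  # OIDC 提供者会在 issuer 地址下暴露标准的发现文档,其中包含授权端点、令牌端点等信息。
+  curl -s https://accounts.google.com/.well-known/openid-configuration
+  ```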
+
+
+
diff --git a/content/zh/docs/v3.4/access-control-and-account-management/external-authentication/set-up-external-authentication.md b/content/zh/docs/v3.4/access-control-and-account-management/external-authentication/set-up-external-authentication.md
new file mode 100644
index 000000000..bc880d62b
--- /dev/null
+++ b/content/zh/docs/v3.4/access-control-and-account-management/external-authentication/set-up-external-authentication.md
@@ -0,0 +1,112 @@
+---
+title: "设置外部身份验证"
+keywords: "LDAP, 外部, 第三方, 身份验证"
+description: "如何在 KubeSphere 上设置外部身份验证。"
+
+linkTitle: "设置外部身份验证"
+weight: 12210
+---
+
+本文档描述了如何在 KubeSphere 上使用外部身份提供者,例如 LDAP 服务或 Active Directory 服务。
+
+KubeSphere 提供了一个内置的 OAuth 服务。用户通过获取 OAuth 访问令牌以对 API 进行身份验证。作为 KubeSphere 管理员,您可以编辑 CRD `ClusterConfiguration` 中的 `ks-installer` 来配置 OAuth 并指定身份提供者。
+
+## 准备工作
+
+您需要部署一个 Kubernetes 集群,并在集群中安装 KubeSphere。有关详细信息,请参阅[在 Linux 上安装](../../../installing-on-linux/)和[在 Kubernetes 上安装](../../../installing-on-kubernetes/)。
+
+
+## 步骤
+
+1. 以 `admin` 身份登录 KubeSphere,将光标移动到右下角
,点击 **kubectl**,然后执行以下命令来编辑 CRD `ClusterConfiguration` 中的 `ks-installer`:
+
+ ```bash
+ kubectl -n kubesphere-system edit cc ks-installer
+ ```
+
+2. 在 `spec.authentication.jwtSecret` 字段下添加以下字段。
+
+ 示例:
+
+ ```yaml
+ spec:
+ authentication:
+ jwtSecret: ''
+ authenticateRateLimiterMaxTries: 10
+ authenticateRateLimiterDuration: 10m0s
+ loginHistoryRetentionPeriod: 168h
+ maximumClockSkew: 10s
+ multipleLogin: true
+ oauthOptions:
+ accessTokenMaxAge: 1h
+ accessTokenInactivityTimeout: 30m
+ identityProviders:
+ - name: LDAP
+ type: LDAPIdentityProvider
+ mappingMethod: auto
+ provider:
+ host: 192.168.0.2:389
+ managerDN: uid=root,cn=users,dc=nas
+ managerPassword: ********
+ userSearchBase: cn=users,dc=nas
+ loginAttribute: uid
+ mailAttribute: mail
+ ```
+
+ 字段描述如下:
+
+ * `jwtSecret`:签发用户令牌的密钥。在多集群环境下,所有的集群必须[使用相同的密钥](../../../multicluster-management/enable-multicluster/direct-connection/#prepare-a-member-cluster)。
+  * `authenticateRateLimiterMaxTries`:`authenticateRateLimiterDuration` 指定的期间内允许的最大连续登录失败次数。如果用户连续登录失败次数达到限制,则该用户将被封禁。
+ * `authenticateRateLimiterDuration`:`authenticateRateLimiterMaxTries` 适用的时间段。
+ * `loginHistoryRetentionPeriod`:用户登录记录保留期限,过期的登录记录将被自动删除。
+  * `maximumClockSkew`:时间敏感操作(例如验证用户令牌的过期时间)的最大时钟偏差,默认值为 `10s`。
+ * `multipleLogin`:是否允许多个用户同时从不同位置登录,默认值为 `true`。
+ * `oauthOptions`:
+ * `accessTokenMaxAge`:访问令牌有效期。对于多集群环境中的成员集群,默认值为 `0h`,这意味着访问令牌永不过期。对于其他集群,默认值为 `2h`。
+ * `accessTokenInactivityTimeout`:令牌空闲超时时间。该值表示令牌过期后,刷新用户令牌最大的间隔时间,如果不在此时间窗口内刷新用户身份令牌,用户将需要重新登录以获得访问权。
+ * `identityProviders`:
+ * `name`:身份提供者的名称。
+ * `type`:身份提供者的类型。
+ * `mappingMethod`:帐户映射方式,值可以是 `auto` 或者 `lookup`。
+ * 如果值为 `auto`(默认),需要指定新的用户名。通过第三方帐户登录时,KubeSphere 会根据用户名自动创建关联帐户。
+ * 如果值为 `lookup`,需要执行步骤 3 以手动关联第三方帐户与 KubeSphere 帐户。
+ * `provider`:身份提供者信息。此部分中的字段根据身份提供者的类型而异。
+
+3. 如果 `mappingMethod` 设置为 `lookup`,可以运行以下命令并添加标签来进行帐户关联。如果 `mappingMethod` 是 `auto` 可以跳过这个部分。
+
+ ```bash
+ kubectl edit user
,点击 **kubectl**,然后执行以下命令来编辑 CRD `ClusterConfiguration` 中的 `ks-installer`:
+
+ ```bash
+ kubectl -n kubesphere-system edit cc ks-installer
+ ```
+
+ 示例:
+
+ ```yaml
+ spec:
+ authentication:
+ jwtSecret: ''
+ maximumClockSkew: 10s
+ multipleLogin: true
+ oauthOptions:
+ accessTokenMaxAge: 1h
+ accessTokenInactivityTimeout: 30m
+ identityProviders:
+ - name: LDAP
+ type: LDAPIdentityProvider
+ mappingMethod: auto
+ provider:
+ host: 192.168.0.2:389
+ managerDN: uid=root,cn=users,dc=nas
+ managerPassword: ********
+ userSearchBase: cn=users,dc=nas
+ loginAttribute: uid
+ mailAttribute: mail
+ ```
+
+2. 在 `spec:authentication` 部分配置 `oauthOptions:identityProviders` 以外的字段信息请参阅[设置外部身份认证](../set-up-external-authentication/)。
+
+3. 在 `oauthOptions:identityProviders` 部分配置字段。
+
+ * `name`: 用户定义的 LDAP 服务名称。
+ * `type`: 必须将该值设置为 `LDAPIdentityProvider` 才能将 LDAP 服务用作身份提供者。
+ * `mappingMethod`: 帐户映射方式,值可以是 `auto` 或者 `lookup`。
+ * 如果值为 `auto`(默认),需要指定新的用户名。KubeSphere 根据用户名自动创建并关联 LDAP 用户。
+ * 如果值为 `lookup`,需要执行步骤 4 以手动关联现有 KubeSphere 用户和 LDAP 用户。
+ * `provider`:
+ * `host`: LDAP 服务的地址和端口号。
+ * `managerDN`: 用于绑定到 LDAP 目录的 DN 。
+ * `managerPassword`: `managerDN` 对应的密码。
+ * `userSearchBase`: 用户搜索基。设置为所有 LDAP 用户所在目录级别的 DN 。
+ * `loginAttribute`: 标识 LDAP 用户的属性。
+ * `mailAttribute`: 标识 LDAP 用户的电子邮件地址的属性。
+
+4. 如果 `mappingMethod` 设置为 `lookup`,可以运行以下命令并添加标签来进行帐户关联。如果 `mappingMethod` 是 `auto` 可以跳过这个部分。
+
+ ```bash
+ kubectl edit user
,点击 **kubectl**,然后执行以下命令来编辑 CRD `ClusterConfiguration` 中的 `ks-installer`:
+
+ ```bash
+ kubectl -n kubesphere-system edit cc ks-installer
+ ```
+
+2. 在 `spec:authentication` 部分配置的 `oauthOptions:identityProviders` 以外的字段信息请参阅[设置外部身份认证](../set-up-external-authentication/)。
+
+3. 根据开发的身份提供者插件来配置 `oauthOptions:identityProviders` 中的字段。
+
+ 以下是使用 GitHub 作为外部身份提供者的配置示例。详情请参阅 [GitHub 官方文档](https://docs.github.com/en/developers/apps/building-oauth-apps)和 [GitHubIdentityProvider 源代码](https://github.com/kubesphere/kubesphere/blob/release-3.1/pkg/apiserver/authentication/identityprovider/github/github.go) 。
+
+ ```yaml
+ spec:
+ authentication:
+ jwtSecret: ''
+ authenticateRateLimiterMaxTries: 10
+ authenticateRateLimiterDuration: 10m0s
+ oauthOptions:
+ accessTokenMaxAge: 1h
+ accessTokenInactivityTimeout: 30m
+ identityProviders:
+ - name: github
+ type: GitHubIdentityProvider
+ mappingMethod: auto
+ provider:
+ clientID: '******'
+ clientSecret: '******'
+ redirectURL: 'https://ks-console/oauth/redirect/github'
+ ```
+
+ 同样,您也可以使用阿里云 IDaaS 作为外部身份提供者。详情请参阅[阿里云 IDaaS 文档](https://www.alibabacloud.com/help/product/111120.htm?spm=a3c0i.14898238.2766395700.1.62081da1NlxYV0)和 [AliyunIDaasProvider 源代码](https://github.com/kubesphere/kubesphere/blob/release-3.1/pkg/apiserver/authentication/identityprovider/aliyunidaas/idaas.go)。
+
+4. 字段配置完成后,保存修改,然后等待 ks-installer 完成重启。
+
+ {{< notice note >}}
+
+ KubeSphere Web 控制台在 ks-installer 重新启动期间不可用。请等待重启完成。
+
+ {{ notice >}}
+
+5. 进入 KubeSphere 登录界面,点击 **Log In with XXX** (例如,**Log In with GitHub**)。
+
+6. 在外部身份提供者的登录界面,输入身份提供者配置的用户名和密码,登录 KubeSphere 。
+
+ 
+
diff --git a/content/zh/docs/v3.4/access-control-and-account-management/multi-tenancy-in-kubesphere.md b/content/zh/docs/v3.4/access-control-and-account-management/multi-tenancy-in-kubesphere.md
new file mode 100644
index 000000000..a82d14e8e
--- /dev/null
+++ b/content/zh/docs/v3.4/access-control-and-account-management/multi-tenancy-in-kubesphere.md
@@ -0,0 +1,57 @@
+---
+title: "KubeSphere 中的多租户"
+keywords: "Kubernetes, KubeSphere, 多租户"
+description: "理解 KubeSphere 中的多租户架构。"
+linkTitle: "KubeSphere 中的多租户"
+weight: 12100
+---
+
+Kubernetes 解决了应用编排、容器调度的难题,极大地提高了资源的利用率。有别于传统的集群运维方式,在使用 Kubernetes 的过程中,企业和个人用户在资源共享和安全性方面均面临着诸多挑战。
+
+首当其冲的就是企业环境中多租户形态该如何定义,租户的安全边界该如何划分。Kubernetes 社区[关于多租户的讨论](https://docs.google.com/document/d/1fj3yzmeU2eU8ZNBCUJG97dk_wC7228-e_MmdcmTNrZY)从未停歇,但到目前为止最终的形态尚无定论。
+
+## Kubernetes 多租户面临的挑战
+
+多租户是一种常见的软件架构,简单概括就是在多用户环境下实现资源共享,并保证各用户间数据的隔离性。在多租户集群环境中,集群管理员需要最大程度地避免恶意租户对其他租户的攻击,公平地分配集群资源。
+
+无论企业的多租户形态如何,多租户都无法避免以下两个层面的问题:逻辑层面的资源隔离;物理资源的隔离。
+
+逻辑层面的资源隔离主要包括 API 的访问控制,针对用户的权限控制。Kubernetes 中的 [RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) 和命名空间 (namespace) 提供了基本的逻辑隔离能力,但在大部分企业环境中并不适用。企业中的租户往往需要跨多个命名空间甚至是多个集群进行资源管理。除此之外,针对用户的行为审计、租户隔离的日志、事件查询也是不可或缺的能力。
+
+物理资源的隔离主要包括节点、网络的隔离,当然也包括容器运行时安全。您可以通过 [NetworkPolicy](../../pluggable-components/network-policy/) 对网络进行划分,通过 PodSecurityPolicy 限制容器的行为,[Kata Containers](https://katacontainers.io/) 也提供了更安全的容器运行时。
+
+## KubeSphere 中的多租户
+
+为了解决上述问题,KubeSphere 提供了基于 Kubernetes 的多租户管理方案。
+
+
+
+在 KubeSphere 中[企业空间](../../workspace-administration/what-is-workspace/)是最小的租户单元,企业空间提供了跨集群、跨项目(即 Kubernetes 中的命名空间)共享资源的能力。企业空间中的成员可以在授权集群中创建项目,并通过邀请授权的方式参与项目协同。
+
+**用户**是 KubeSphere 的帐户实例,可以被设置为平台层面的管理员参与集群的管理,也可以被添加到企业空间中参与项目协同。
+
+多级的权限控制和资源配额限制是 KubeSphere 中资源隔离的基础,奠定了多租户最基本的形态。
+
+### 逻辑隔离
+
+与 Kubernetes 相同,KubeSphere 通过 RBAC 对用户的权限加以控制,实现逻辑层面的资源隔离。
+
+KubeSphere 中的权限控制分为平台、企业空间、项目三个层级,通过角色来控制用户在不同层级的资源访问权限。
+
+1. [平台角色](../../quick-start/create-workspace-and-project/):主要控制用户对平台资源的访问权限,如集群的管理、企业空间的管理、平台用户的管理等。
+2. [企业空间角色](../../workspace-administration/role-and-member-management/):主要控制企业空间成员在企业空间下的资源访问权限,如企业空间下项目、DevOps 项目的管理等。
+3. [项目角色](../../project-administration/role-and-member-management/):主要控制项目下资源的访问权限,如工作负载的管理、流水线的管理等。
+
+### 网络隔离
+
+除了逻辑层面的资源隔离,KubeSphere 中还可以针对企业空间和项目设置[网络隔离策略](../../pluggable-components/network-policy/)。
+
+### 操作审计
+
+KubeSphere 还提供了针对用户的[操作审计](../../pluggable-components/auditing-logs/)。
+
+### 认证鉴权
+
+KubeSphere 完整的认证鉴权链路如下图所示,可以通过 OPA 拓展 Kubernetes 的 RBAC 规则。KubeSphere 团队计划集成 [Gatekeeper](https://github.com/open-policy-agent/gatekeeper) 以支持更为丰富的安全管控策略。
+
+
diff --git a/content/zh/docs/v3.4/application-store/_index.md b/content/zh/docs/v3.4/application-store/_index.md
new file mode 100644
index 000000000..26fc4d589
--- /dev/null
+++ b/content/zh/docs/v3.4/application-store/_index.md
@@ -0,0 +1,16 @@
+---
+title: "应用商店"
+description: "上手 KubeSphere 应用商店"
+layout: "second"
+
+
+linkTitle: "应用商店"
+weight: 14000
+
+icon: "/images/docs/v3.3/docs.svg"
+
+---
+
+KubeSphere 应用商店基于 [OpenPitrix](https://github.com/openpitrix/openpitrix) (一个跨云管理应用的开源平台)为用户提供企业就绪的容器化解决方案。您可以通过应用模板上传自己的应用,或者添加应用仓库作为应用工具,供租户选择他们想要的应用。
+
+应用商店为应用生命周期管理提供了一个高效的集成系统,用户可以用最合适的方式快速上传、发布、部署、升级和下架应用。因此,开发者借助 KubeSphere 就能减少花在设置上的时间,更多地专注于开发。
diff --git a/content/zh/docs/v3.4/application-store/app-developer-guide/_index.md b/content/zh/docs/v3.4/application-store/app-developer-guide/_index.md
new file mode 100644
index 000000000..cb4e2189f
--- /dev/null
+++ b/content/zh/docs/v3.4/application-store/app-developer-guide/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "应用开发者指南"
+weight: 14400
+
+_build:
+ render: false
+---
diff --git a/content/zh/docs/v3.4/application-store/app-developer-guide/helm-developer-guide.md b/content/zh/docs/v3.4/application-store/app-developer-guide/helm-developer-guide.md
new file mode 100644
index 000000000..3b2a72436
--- /dev/null
+++ b/content/zh/docs/v3.4/application-store/app-developer-guide/helm-developer-guide.md
@@ -0,0 +1,158 @@
+---
+title: "Helm 开发者指南"
+keywords: 'Kubernetes, KubeSphere, Helm, 开发'
+description: '开发基于 Helm 的应用。'
+linkTitle: "Helm 开发者指南"
+weight: 14410
+---
+
+您可以上传应用的 Helm Chart 至 KubeSphere,以便具有必要权限的租户能够进行部署。本教程以 NGINX 为示例演示如何准备 Helm Chart。
+
+## 安装 Helm
+
+如果您已经安装 KubeSphere,那么您的环境中已部署 Helm。如果未安装,请先参考 [Helm 文档](https://helm.sh/docs/intro/install/)安装 Helm。
+
+## 创建本地仓库
+
+执行以下命令在您的机器上创建仓库。
+
+```bash
+mkdir helm-repo
+```
+
+```bash
+cd helm-repo
+```
+
+## 创建应用
+
+使用 `helm create` 创建一个名为 `nginx` 的文件夹,它会自动为您的应用创建 YAML 模板和目录。一般情况下,不建议修改顶层目录中的文件名和目录名。
+
+```bash
+$ helm create nginx
+$ tree nginx/
+nginx/
+├── charts
+├── Chart.yaml
+├── templates
+│ ├── deployment.yaml
+│ ├── _helpers.tpl
+│ ├── ingress.yaml
+│ ├── NOTES.txt
+│ └── service.yaml
+└── values.yaml
+```
+
+`Chart.yaml` 用于定义 Chart 的基本信息,包括名称、API 和应用版本。有关更多信息,请参见 [Chart.yaml 文件](../helm-specification/#chartyaml-文件)。
+
+该 `Chart.yaml` 文件的示例:
+
+```yaml
+apiVersion: v1
+appVersion: "1.0"
+description: A Helm chart for Kubernetes
+name: nginx
+version: 0.1.0
+```
+
+当您向 Kubernetes 部署基于 Helm 的应用时,可以直接在 KubeSphere 控制台上编辑 `values.yaml` 文件。
+
+该 `values.yaml` 文件的示例:
+
+```yaml
+# 默认值仅供测试使用。
+# 此文件为 YAML 格式。
+# 对要传入您的模板的变量进行声明。
+
+replicaCount: 1
+
+image:
+ repository: nginx
+ tag: stable
+ pullPolicy: IfNotPresent
+
+nameOverride: ""
+fullnameOverride: ""
+
+service:
+ type: ClusterIP
+ port: 80
+
+ingress:
+ enabled: false
+ annotations: {}
+ # kubernetes.io/ingress.class: nginx
+ # kubernetes.io/tls-acme: "true"
+ path: /
+ hosts:
+ - chart-example.local
+ tls: []
+ # - secretName: chart-example-tls
+ # hosts:
+ # - chart-example.local
+
+resources: {}
+ # 通常不建议对默认资源进行指定,用户可以去主动选择是否指定。
+ # 这也有助于 Chart 在资源较少的环境上运行,例如 Minikube。
+ # 如果您要指定资源,请将下面几行内容取消注释,
+ # 按需调整,并删除 'resources:' 后面的大括号。
+ # limits:
+ # cpu: 100m
+ # memory: 128Mi
+ # requests:
+ # cpu: 100m
+ # memory: 128Mi
+
+nodeSelector: {}
+
+tolerations: []
+
+affinity: {}
+```
+
+请参考 [Helm 规范](../helm-specification/)对 `nginx` 文件夹中的文件进行编辑,完成编辑后进行保存。
+
+## 创建索引文件(可选)
+
+要在 KubeSphere 中使用 HTTP 或 HTTPS URL 添加仓库,您需要事先向对象存储上传一个 `index.yaml` 文件。在 `nginx` 的上一级目录中使用 Helm 执行以下命令,创建索引文件。
+
+```bash
+helm repo index .
+```
+
+```bash
+$ ls
+index.yaml nginx
+```
+
+{{< notice note >}}
+
+- 如果仓库 URL 是 S3 格式,您向仓库添加应用时会自动在对象存储中创建索引文件。
+
+- 有关何如向 KubeSphere 添加仓库的更多信息,请参见[导入 Helm 仓库](../../../workspace-administration/app-repository/import-helm-repository/)。
+
+{{</ notice >}}
+
+## 打包 Chart
+
+前往 `nginx` 的上一级目录,执行以下命令打包您的 Chart,这会创建一个 .tgz 包。
+
+```bash
+helm package nginx
+```
+
+```bash
+$ ls
+nginx nginx-0.1.0.tgz
+```
+
+## 上传您的应用
+
+现在您已经准备好了基于 Helm 的应用,您可以将它上传至 KubeSphere 并在平台上进行测试。
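+
+上传之前,您还可以在本地对 Chart 做一次快速校验和渲染测试(示意性示例,假设 Chart 目录为 `nginx`,其中 Release 名称 `test-release` 为任意取值):
+
+```bash
+# 校验 Chart 的结构和字段是否符合规范。
+helm lint nginx
+
+# 在本地渲染模板,检查生成的 Kubernetes 清单是否符合预期(不会实际部署)。
+helm template test-release ./nginx
+```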
+
+## 另请参见
+
+[Helm 规范](../helm-specification/)
+
+[导入 Helm 仓库](../../../workspace-administration/app-repository/import-helm-repository/)
+
diff --git a/content/zh/docs/v3.4/application-store/app-developer-guide/helm-specification.md b/content/zh/docs/v3.4/application-store/app-developer-guide/helm-specification.md
new file mode 100644
index 000000000..c33f28596
--- /dev/null
+++ b/content/zh/docs/v3.4/application-store/app-developer-guide/helm-specification.md
@@ -0,0 +1,131 @@
+---
+title: "Helm 规范"
+keywords: 'Kubernetes, KubeSphere, Helm, 规范'
+description: '了解 Chart 结构和规范。'
+linkTitle: "Helm 规范"
+weight: 14420
+---
+
+Helm Chart 是一种打包格式。Chart 是一个描述一组 Kubernetes 相关资源的文件集合。有关更多信息,请参见 [Helm 文档](https://helm.sh/zh/docs/topics/charts/)。
+
+## 结构
+
+Chart 的所有相关文件都存储在一个目录中,该目录通常包含:
+
+```text
+chartname/
+ Chart.yaml # 包含 Chart 基本信息(例如版本和名称)的 YAML 文件。
+ LICENSE # (可选)包含 Chart 许可证的纯文本文件。
+ README.md # (可选)应用说明和使用指南。
+ values.yaml # 该 Chart 的默认配置值。
+ values.schema.json # (可选)向 values.yaml 文件添加结构的 JSON Schema。
+ charts/ # 一个目录,包含该 Chart 所依赖的任意 Chart。
+ crds/ # 定制资源定义。
+ templates/ # 模板的目录,若提供相应值便可以生成有效的 Kubernetes 配置文件。
+ templates/NOTES.txt # (可选)包含使用说明的纯文本文件。
+```
+
+## Chart.yaml 文件
+
+您必须为 Chart 提供 `Chart.yaml` 文件。下面是一个示例文件,每个字段都有说明。
+
+```yaml
+apiVersion: (必需)Chart API 版本。
+name: (必需)Chart 名称。
+version: (必需)版本,遵循 SemVer 2 标准。
+kubeVersion: (可选)兼容的 Kubernetes 版本,遵循 SemVer 2 标准。
+description: (可选)对应用的一句话说明。
+type: (可选)Chart 的类型。
+keywords:
+ - (可选)关于应用的关键字列表。
+home: (可选)应用的 URL。
+sources:
+ - (可选)应用源代码的 URL 列表。
+dependencies: (可选)Chart 必要条件的列表。
+ - name: Chart 的名称,例如 nginx。
+ version: Chart 的版本,例如 "1.2.3"。
+ repository: 仓库 URL ("https://example.com/charts") 或别名 ("@repo-name")。
+ condition: (可选)解析为布尔值的 YAML 路径,用于启用/禁用 Chart (例如 subchart1.enabled)。
+ tags: (可选)
+ - 用于将 Chart 分组,一同启用/禁用。
+ import-values: (可选)
+ - ImportValues 保存源值到待导入父键的映射。每一项可以是字符串或者一对子/父子列表项。
+ alias: (可选)Chart 要使用的别名。当您要多次添加同一个 Chart 时,它会很有用。
+maintainers: (可选)
+ - name: (必需)维护者姓名。
+ email: (可选)维护者电子邮件。
+ url: (可选)维护者 URL。
+icon: (可选)要用作图标的 SVG 或 PNG 图片的 URL。
+appVersion: (可选)应用版本。不需要是 SemVer。
+deprecated: (可选,布尔值)该 Chart 是否已被弃用。
+annotations:
+ example: (可选)按名称输入的注解列表。
+```
+
+{{< notice note >}}
+
+- `dependencies` 字段用于定义 Chart 依赖项,`v1` Chart 的依赖项都位于单独文件 `requirements.yaml` 中。有关更多信息,请参见 [Chart 依赖项](https://helm.sh/zh/docs/topics/charts/#chart-dependency)。
+- `type` 字段用于定义 Chart 的类型。允许的值有 `application` 和 `library`。有关更多信息,请参见 [Chart 类型](https://helm.sh/zh/docs/topics/charts/#chart-types)。
+
+{{</ notice >}}
+
+## Values.yaml 和模板
+
+Helm Chart 模板采用 [Go 模板语言](https://golang.org/pkg/text/template/)编写并存储在 Chart 的 `templates` 文件夹。有两种方式可以为模板提供值:
+
+1. 在 Chart 中创建一个包含可供引用的默认值的 `values.yaml` 文件。
+2. 创建一个包含必要值的 YAML 文件,在命令行执行 `helm install` 时通过 `-f` 参数传入该文件(见下方示例)。
+
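+下面是第二种方式的一个简单示意(其中 `my-release`、`custom-values.yaml` 等名称均为假设值):
+
+```bash
+# 使用自定义的 values 文件安装 Chart,文件中的值会覆盖 values.yaml 中的默认值。
+helm install my-release ./chartname -f custom-values.yaml
+
+# 也可以通过 --set 在命令行临时覆盖单个值。
+helm install my-release ./chartname --set imageRegistry=quay.io/deis
+```
+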
+下面是 `templates` 文件夹中模板的示例。
+
+```yaml
+apiVersion: v1
+kind: ReplicationController
+metadata:
+ name: deis-database
+ namespace: deis
+ labels:
+ app.kubernetes.io/managed-by: deis
+spec:
+ replicas: 1
+ selector:
+ app.kubernetes.io/name: deis-database
+ template:
+ metadata:
+ labels:
+ app.kubernetes.io/name: deis-database
+ spec:
+ serviceAccount: deis-database
+ containers:
+ - name: deis-database
+ image: {{.Values.imageRegistry}}/postgres:{{.Values.dockerTag}}
+ imagePullPolicy: {{.Values.pullPolicy}}
+ ports:
+ - containerPort: 5432
+ env:
+ - name: DATABASE_STORAGE
+ value: {{default "minio" .Values.storage}}
+```
+
+上述示例在 Kubernetes 中定义 ReplicationController 模板,其中引用的一些值已在 `values.yaml` 文件中进行定义。
+
+- `imageRegistry`:Docker 镜像仓库。
+- `dockerTag`:Docker 镜像标签 (tag)。
+- `pullPolicy`:镜像拉取策略。
+- `storage`:存储后端,默认为 `minio`。
+
+下面是 `values.yaml` 文件的示例:
+
+```text
+imageRegistry: "quay.io/deis"
+dockerTag: "latest"
+pullPolicy: "Always"
+storage: "s3"
+```
+
+## 参考
+
+[Helm 文档](https://helm.sh/zh/docs/)
+
+[Chart](https://helm.sh/zh/docs/topics/charts/)
+
diff --git a/content/zh/docs/v3.4/application-store/app-lifecycle-management.md b/content/zh/docs/v3.4/application-store/app-lifecycle-management.md
new file mode 100644
index 000000000..020b0780c
--- /dev/null
+++ b/content/zh/docs/v3.4/application-store/app-lifecycle-management.md
@@ -0,0 +1,230 @@
+---
+title: "应用程序生命周期管理"
+keywords: 'Kubernetes, KubeSphere, 应用商店'
+description: '您可以跨整个生命周期管理应用,包括提交、审核、测试、发布、升级和下架。'
+linkTitle: '应用程序生命周期管理'
+weight: 14100
+---
+
+KubeSphere 集成了 [OpenPitrix](https://github.com/openpitrix/openpitrix)(一个跨云管理应用程序的开源平台)来构建应用商店,管理应用程序的整个生命周期。应用商店支持两种应用程序部署方式:
+
+- **应用模板**:这种方式让开发者和独立软件供应商 (ISV) 能够与企业空间中的用户共享应用程序。您也可以在企业空间中导入第三方应用仓库。
+- **自制应用**:这种方式帮助用户使用多个微服务来快速构建一个完整的应用程序。KubeSphere 让用户可以选择现有服务或者创建新的服务,用于在一站式控制台上创建自制应用。
+
+本教程使用 [Redis](https://redis.io/) 作为示例应用程序,演示如何进行应用全生命周期管理,包括提交、审核、测试、发布、升级和下架。
+
+## 视频演示
+
+
+
+## 准备工作
+
+- 您需要启用 [KubeSphere 应用商店 (OpenPitrix)](../../pluggable-components/app-store/)。
+- 您需要创建一个企业空间、一个项目以及一个用户 (`project-regular`)。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../quick-start/create-workspace-and-project/)。
+
+## 动手实验
+
+### 步骤一:创建自定义角色和帐户
+
+首先,您需要创建两个帐户,一个是 ISV 的帐户 (`isv`),另一个是应用技术审核员的帐户 (`reviewer`)。
+
+1. 使用 `admin` 帐户登录 KubeSphere 控制台。点击左上角的**平台管理**,选择**访问控制**。转到**平台角色**,点击**创建**。
+
+2. 为角色设置一个名称,例如 `app-review`,然后点击**编辑权限**。
+
+3. 转到**应用管理**,选择权限列表中的**应用商店管理**和**应用商店查看**,然后点击**确定**。
+
+ {{< notice note >}}
+
+ 被授予 `app-review` 角色的用户能够查看平台上的应用商店并管理应用,包括审核和下架应用。
+
+   {{</ notice >}}
+
+4. 创建角色后,您需要创建一个用户,并授予 `app-review` 角色。转到**用户**,点击**创建**。输入必需的信息,然后点击**确定**。
+
+5. 再创建另一个用户 `isv`,把 `platform-regular` 角色授予它。
+
+6. 邀请上面创建好的两个帐户进入现有的企业空间,例如 `demo-workspace`,并授予它们 `workspace-admin` 角色。
+
+### 步骤二:上传和提交应用程序
+
+1. 以 `isv` 身份登录控制台,转到您的企业空间。您需要上传示例应用 Redis 至该企业空间,供后续使用。首先,下载应用 [Redis 11.3.4](https://github.com/kubesphere/tutorial/raw/master/tutorial%205%20-%20app-store/redis-11.3.4.tgz),然后转到**应用模板**,点击**上传模板**。
+
+ {{< notice note >}}
+
+ 在本示例中,稍后会上传新版本的 Redis 来演示升级功能。
+
+   {{</ notice >}}
+
+2. 在弹出的对话框中,点击**上传 Helm Chart** 上传 Chart 文件。点击**确定**继续。
+
+3. **应用信息**下显示了应用的基本信息。要上传应用的图标,点击**上传图标**。您也可以跳过上传图标,直接点击**确定**。
+
+ {{< notice note >}}
+
+ 应用图标支持的最大分辨率为 96 × 96 像素。
+
+   {{</ notice >}}
+
+4. 成功上传后,模板列表中会列出应用,状态为**开发中**,意味着该应用正在开发中。上传的应用对同一企业空间下的所有成员均可见。
+
+5. 点击列表中的 Redis 进入应用模板详情页面。您可以点击**编辑**来编辑该应用的基本信息。
+
+6. 您可以通过在弹出窗口中指定字段来自定义应用的基本信息。
+
+7. 点击**确定**保存更改,然后您可以通过将其部署到 Kubernetes 来测试该应用程序。点击待提交版本展开菜单,选择**安装**。
+
+ {{< notice note >}}
+
+ 如果您不想测试应用,可以直接提交审核。但是,建议您先测试应用部署和功能,再提交审核,尤其是在生产环境中。这会帮助您提前发现问题,加快审核过程。
+
+   {{</ notice >}}
+
+8. 选择要部署应用的集群和项目,为应用设置不同的配置,然后点击**安装**。
+
+ {{< notice note >}}
+
+ 有些应用可以在表单中设置所有配置后进行部署。您可以使用拨动开关查看它的 YAML 文件,文件中包含了需要在表单中指定的所有参数。
+
+   {{</ notice >}}
+
+9. 稍等几分钟,切换到**应用实例**选项卡。您会看到 Redis 已经部署成功。
+
+10. 测试应用并且没有发现问题后,便可以点击**提交发布**,提交该应用程序进行发布。
+
+ {{< notice note >}}
+
+版本号必须以数字开头并包含小数点。
+
+{{</ notice >}}
+
+11. 应用提交后,它的状态会变成**已提交**。现在,应用审核员便可以进行审核。
+
+
+### 步骤三:发布应用程序
+
+1. 登出控制台,然后以 `reviewer` 身份重新登录 KubeSphere。点击左上角的**平台管理**,选择**应用商店管理**。在**应用发布**页面,上一步中提交的应用会显示在**待发布**选项卡下。
+
+2. 点击该应用进行审核,在弹出窗口中查看应用信息、介绍、配置文件和更新日志。
+
+3. 审核员的职责是决定该应用是否符合发布至应用商店的标准。点击**通过**来批准,或者点击**拒绝**来拒绝提交的应用。
+
+### 步骤四:发布应用程序至应用商店
+
+应用获批后,`isv` 便可以将 Redis 应用程序发布至应用商店,让平台上的所有用户都能找到并部署该应用程序。
+
+1. 登出控制台,然后以 `isv` 身份重新登录 KubeSphere。转到您的企业空间,点击**应用模板**页面上的 Redis。在详情页面上展开版本菜单,然后点击**发布到商店**。在弹出的提示框中,点击**确定**以确认操作。
+
+2. 在**应用发布**下,您可以查看应用状态。**已上架**意味着它在应用商店中可用。
+
+3. 点击**在商店查看**转到应用商店的**应用信息**页面,或者点击左上角的**应用商店**也可以查看该应用。
+
+ {{< notice note >}}
+
+ 您可能会在应用商店看到两个 Redis 应用,其中一个是 KubeSphere 中的内置应用。请注意,新发布的应用会显示在应用商店列表的开头。
+
+   {{</ notice >}}
+
+4. 现在,企业空间中的用户可以从应用商店中部署 Redis。要将应用部署至 Kubernetes,请点击应用转到**应用信息**页面,然后点击**安装**。
+
+ {{< notice note >}}
+
+ 如果您在部署应用时遇到问题,**状态**栏显示为**失败**,您可以将光标移至**失败**图标上方查看错误信息。
+
+   {{</ notice >}}
+
+### 步骤五:创建应用分类
+
+`reviewer` 可以根据不同类型应用程序的功能和用途创建多个分类。这类似于设置标签,可以在应用商店中将分类用作筛选器,例如大数据、中间件和物联网等。
+
+1. 以 `reviewer` 身份登录 KubeSphere。要创建分类,请转到**应用商店管理**页面,再点击**应用分类**页面中的
。
+
+2. 在弹出的对话框中设置分类名称和图标,然后点击**确定**。对于 Redis,您可以将**分类名称**设置为 `Database`。
+
+ {{< notice note >}}
+
+ 通常,应用审核员会提前创建必要的分类,ISV 会选择应用所属的分类,然后提交审核。新创建的分类中没有应用。
+
+   {{</ notice >}}
+
+3. 创建好分类后,您可以给您的应用分配分类。在**未分类**中选择 Redis,点击**调整分类**。
+
+4. 在弹出对话框的下拉列表中选择分类 (**Database**) 然后点击**确定**。
+
+5. 该应用便会显示在对应分类中。
+
+
+### 步骤六:添加新版本
+
+要让企业空间用户能够更新应用,您需要先向 KubeSphere 添加新的应用版本。按照下列步骤为示例应用添加新版本。
+
+1. 再次以 `isv` 身份登录 KubeSphere,点击**应用模板**,点击列表中的 Redis 应用。
+
+2. 下载 [Redis 12.0.0](https://github.com/kubesphere/tutorial/raw/master/tutorial%205%20-%20app-store/redis-12.0.0.tgz),这是 Redis 的一个新版本,本教程用它来演示。在**版本**选项卡中点击右侧的**上传新版本**,上传您刚刚下载的文件包。
+
+3. 点击**上传 Helm Chart**,上传完成后点击**确定**。
+
+4. 新的应用版本会显示在版本列表中。您可以通过点击来展开菜单并测试新的版本。另外,您也可以提交审核并发布至应用商店,操作步骤和上面说明的一样。
+
+
+### 步骤七:升级
+
+新版本发布至应用商店后,所有用户都可以升级该应用程序至新版本。
+
+{{< notice note >}}
+
+要完成下列步骤,您必须先部署应用的一个旧版本。本示例中,Redis 11.3.4 已经部署至项目 `demo-project`,它的新版本 12.0.0 也已经发布至应用商店。
+
+{{</ notice >}}
+
+1. 以 `project-regular` 身份登录 KubeSphere,搜寻到项目的**应用**页面,点击要升级的应用。
+
+2. 点击**更多操作**,在下拉菜单中选择**编辑设置**。
+
+3. 在弹出窗口中,您可以查看应用配置 YAML 文件。在右侧的下拉列表中选择新版本,您可以自定义新版本的 YAML 文件。在本教程中,点击**更新**,直接使用默认配置。
+
+ {{< notice note >}}
+
+ 您可以在右侧的下拉列表中选择与左侧相同的版本,通过 YAML 文件自定义当前应用的配置。
+
+   {{</ notice >}}
+
+4. 在**应用**页面,您会看到应用正在升级中。升级完成后,应用状态会变成**运行中**。
+
+
+### 步骤八:下架应用程序
+
+您可以选择将应用完全从应用商店下架,或者下架某个特定版本。
+
+1. 以 `reviewer` 身份登录 KubeSphere。点击左上角的**平台管理**,选择**应用商店管理**。在**应用商店**页面,点击 Redis。
+
+2. 在详情页面,点击**下架应用**,在弹出的对话框中选择**确定**,确认将应用从应用商店下架的操作。
+
+ {{< notice note >}}
+
+ 将应用从应用商店下架不影响正在使用该应用的租户。
+
+   {{</ notice >}}
+
+3. 要让应用再次在应用商店可用,点击**上架应用**。
+
+4. 要下架应用的特定版本,展开版本菜单,点击**下架版本**。在弹出的对话框中,点击**确定**以确认操作。
+
+ {{< notice note >}}
+
+ 下架应用版本后,该版本在应用商店将不可用。下架应用版本不影响正在使用该版本的租户。
+
+   {{</ notice >}}
+
+5. 要让应用版本再次在应用商店可用,点击**上架版本**。
+
+
+
+
+
+
+
+
+
diff --git a/content/zh/docs/v3.4/application-store/built-in-apps/_index.md b/content/zh/docs/v3.4/application-store/built-in-apps/_index.md
new file mode 100644
index 000000000..49cf0ae27
--- /dev/null
+++ b/content/zh/docs/v3.4/application-store/built-in-apps/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "内置应用"
+weight: 14200
+
+_build:
+ render: false
+---
diff --git a/content/zh/docs/v3.4/application-store/built-in-apps/chaos-mesh-app.md b/content/zh/docs/v3.4/application-store/built-in-apps/chaos-mesh-app.md
new file mode 100644
index 000000000..8cdb60fa4
--- /dev/null
+++ b/content/zh/docs/v3.4/application-store/built-in-apps/chaos-mesh-app.md
@@ -0,0 +1,93 @@
+---
+title: "在 KubeSphere 中部署 Chaos Mesh"
+keywords: 'KubeSphere, Kubernetes, Chaos Mesh, Chaos Engineering'
+description: '了解如何在 KubeSphere 中部署 Chaos Mesh 并进行混沌实验。'
+linkTitle: "部署 Chaos Mesh"
+---
+
+[Chaos Mesh](https://github.com/chaos-mesh/chaos-mesh) 是一个开源的云原生混沌工程平台,提供丰富的故障模拟类型,具有强大的故障场景编排能力,方便用户在开发测试中以及生产环境中模拟现实世界中可能出现的各类异常,帮助用户发现系统潜在的问题。
+
+
+
+本教程演示了如何在 KubeSphere 上部署 Chaos Mesh 进行混沌实验。
+
+## **准备工作**
+
+* 部署 [KubeSphere 应用商店](../../../pluggable-components/app-store/)。
+* 您需要为本教程创建一个企业空间、一个项目和两个帐户(ws-admin 和 project-regular)。帐户 ws-admin 必须在企业空间中被赋予 workspace-admin 角色,帐户 project-regular 必须被邀请至项目中赋予 operator 角色。若还未创建好,请参考[创建企业空间、项目、用户和角色](https://kubesphere.io/zh/docs/quick-start/create-workspace-and-project/)。
+
+
+## **开始混沌实验**
+
+### 步骤 1: 部署 Chaos Mesh
+
+1. 使用 `project-regular` 身份登录,在应用商店中搜索 `chaos-mesh`,点击搜索结果进入应用。
+
+ 
+
+
+2. 进入应用信息页后,点击右上角**安装**按钮。
+
+ 
+
+3. 进入应用设置页面,可以设置应用**名称**(默认会随机一个唯一的名称)和选择安装的**位置**(对应的 Namespace) 和**版本**,然后点击右上角**下一步**。
+
+ 
+
+4. 根据实际需要编辑 `values.yaml` 文件,也可以直接点击**安装**使用默认配置。
+
+ 
+
+5. 等待 Chaos Mesh 开始正常运行。
+
+ 
+
+6. 访问**应用负载**, 可以看到 Chaos Mesh 创建的三个部署。
+
+ 
+
+### 步骤 2: 访问 Chaos Mesh
+
+1. 前往**应用负载**下服务页面,复制 chaos-dashboard 的 **NodePort**。
+
+ 
+
+2. 您可以通过 `${NodeIP}:${NODEPORT}` 方式访问 Chaos Dashboard,并参考[管理用户权限](https://chaos-mesh.org/zh/docs/manage-user-permissions/)文档生成 Token,登录 Chaos Dashboard。
+
+ 
+
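+除了在控制台查看,您也可以直接用 kubectl 查询 chaos-dashboard 服务暴露的 NodePort(示意性示例,此处假设应用安装在 `chaos-mesh` 命名空间,请以实际安装位置为准):
+
+```bash
+# 查看 chaos-dashboard 服务及其 NodePort。
+kubectl -n chaos-mesh get svc chaos-dashboard
+
+# 仅提取 NodePort 端口号。
+kubectl -n chaos-mesh get svc chaos-dashboard -o jsonpath='{.spec.ports[0].nodePort}'
+```
+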
+### 步骤 3: 创建混沌实验
+
+1. 在开始混沌实验之前,需要先确定并部署您的实验目标,比如,测试某应用在网络延时下的工作状态。本文使用一个 demo 应用 `web-show` 作为待测试目标,观测系统网络延迟。您可以使用以下命令部署 Demo 应用 `web-show`:
+
+ ```bash
+ curl -sSL https://mirrors.chaos-mesh.org/latest/web-show/deploy.sh | bash
+ ```
+
+ {{< notice note >}}
+
+ web-show 应用页面上可以直接观察到自身到 kube-system 命名空间下 Pod 的网络延迟。
+
+   {{</ notice >}}
+
+2. 访问 **web-show** 应用程序。从您的网络浏览器,进入 `${NodeIP}:8081`。
+
+ 
+
+3. 登录 Chaos Dashboard 创建混沌实验。为了更好地观察混沌实验效果,这里只创建一个独立的混沌实验,混沌实验的类型选择**网络攻击**,模拟网络延迟的场景:
+
+ 
+
+ 实验范围设置为 web-show 应用:
+
+ 
+
+4. 提交混沌实验后,查看实验状态:
+
+ 
+
+5. 访问 web-show 应用观察实验结果 :
+
+ 
+
+更多详情参考 [Chaos Mesh 使用文档](https://chaos-mesh.org/zh/docs/)。
\ No newline at end of file
diff --git a/content/zh/docs/v3.4/application-store/built-in-apps/etcd-app.md b/content/zh/docs/v3.4/application-store/built-in-apps/etcd-app.md
new file mode 100644
index 000000000..ca74dfa5a
--- /dev/null
+++ b/content/zh/docs/v3.4/application-store/built-in-apps/etcd-app.md
@@ -0,0 +1,60 @@
+---
+title: "在 KubeSphere 中部署 etcd"
+keywords: 'Kubernetes, KubeSphere, etcd, 应用商店'
+description: '了解如何从 KubeSphere 应用商店中部署 etcd 并访问服务。'
+linkTitle: "在 KubeSphere 中部署 etcd"
+weight: 14210
+---
+
+[etcd](https://etcd.io/) 是一个采用 Go 语言编写的分布式键值存储库,用来存储供分布式系统或机器集群访问的数据。在 Kubernetes 中,etcd 是服务发现的后端,存储集群状态和配置。
+
+本教程演示如何从 KubeSphere 应用商店部署 etcd。
+
+## 准备工作
+
+- 请确保[已启用 OpenPitrix 系统](../../../pluggable-components/app-store/)。
+- 您需要创建一个企业空间、一个项目和一个用户帐户 (`project-regular`) 供本教程操作使用。该帐户需要是平台普通用户,并邀请至项目中赋予 `operator` 角色作为项目操作员。本教程中,请以 `project-regular` 身份登录控制台,在企业空间 `demo-workspace` 中的 `demo-project` 项目中进行操作。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/)。
+
+## 动手实验
+
+### 步骤 1:从应用商店中部署 etcd
+
+1. 在 `demo-project` 项目的**概览**页面,点击左上角的**应用商店**。
+
+2. 找到 etcd,点击**应用信息**页面上的**安装**。
+
+3. 设置名称并选择应用版本。请确保将 etcd 部署在 `demo-project` 中,点击**下一步**。
+
+4. 在**应用设置**页面,指定 etcd 的持久化持久卷大小,点击**安装**。
+
+ {{< notice note >}}
+
+ 要指定 etcd 的更多值,请使用右上角的**编辑YAML**查看 YAML 格式的应用清单文件,并编辑其配置。
+
+   {{</ notice >}}
+
+5. 在**应用**页面的**基于模板的应用**选项卡下,稍等片刻待 etcd 启动并运行。
+
+
+### 步骤 2:访问 etcd 服务
+
+应用部署后,您可以在 KubeSphere 控制台上使用 etcdctl 命令行工具与 etcd 服务器进行交互,直接访问 etcd。
+
+1. 在**工作负载**的**有状态副本集**选项卡中,点击 etcd 的服务名称。
+
+2. 在**容器组**下,展开菜单查看容器详情,然后点击**终端**图标。
+
+3. 在终端中,您可以直接读写数据。例如,分别执行以下两个命令。
+
+ ```bash
+ etcdctl set /name kubesphere
+ ```
+
+ ```bash
+ etcdctl get /name
+ ```
+
+4. KubeSphere 集群内的客户端可以通过 `
,从下拉菜单中选择操作:
+
+- **编辑**:编辑项目网关的配置。
+- **关闭**:关闭项目网关。
+
+{{< notice note >}}
+
+如果在创建集群网关之前存在项目网关,则项目网关地址可能会在集群网关地址和项目网关地址之间切换。建议您只使用集群网关或项目网关。
+
+{{</ notice >}}
+
+关于如何创建项目网关的更多信息,请参见[项目网关](../../../project-administration/project-gateway/)。
+
diff --git a/content/zh/docs/v3.4/cluster-administration/cluster-settings/cluster-visibility-and-authorization.md b/content/zh/docs/v3.4/cluster-administration/cluster-settings/cluster-visibility-and-authorization.md
new file mode 100644
index 000000000..0b5c6b61c
--- /dev/null
+++ b/content/zh/docs/v3.4/cluster-administration/cluster-settings/cluster-visibility-and-authorization.md
@@ -0,0 +1,53 @@
+---
+title: "集群可见性和授权"
+keywords: "集群可见性, 集群管理"
+description: "了解如何设置集群可见性和授权。"
+linkTitle: "集群可见性和授权"
+weight: 8610
+---
+
+在 KubeSphere 中,您可以通过授权将一个集群分配给多个企业空间,让企业空间资源都可以在该集群上运行。同时,一个企业空间也可以关联多个集群。拥有必要权限的企业空间用户可以使用分配给该企业空间的集群来创建多集群项目。
+
+本指南演示如何设置集群可见性。
+
+## 准备工作
+* 您需要启用[多集群功能](../../../multicluster-management/)。
+* 您需要有一个企业空间和一个拥有创建企业空间权限的帐户,例如 `ws-manager`。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/)。
+
+## 设置集群可见性
+
+### 在创建企业空间时选择可用集群
+
+1. 使用拥有创建企业空间权限的用户登录 KubeSphere,例如 `ws-manager`。
+
+2. 点击左上角的**平台管理**,选择**访问控制**。在左侧导航栏选择**企业空间**,然后点击**创建**。
+
+3. 输入企业空间的基本信息,点击**下一步**。
+
+4. 在**集群设置**页面,您可以看到可用的集群列表,选择要分配给企业空间的集群并点击**创建**。
+
+5. 创建企业空间后,拥有必要权限的企业空间成员可以创建资源,在关联集群上运行。
+
+ {{< notice warning >}}
+
+尽量不要在主集群上创建资源,避免负载过高导致多集群稳定性下降。
+
+{{</ notice >}}
+
+### 在创建企业空间后设置集群可见性
+
+创建企业空间后,您可以通过授权向该企业空间分配其他集群,或者将集群从企业空间中解绑。按照以下步骤调整集群可见性。
+
+1. 使用拥有集群管理权限的帐户登录 KubeSphere,例如 `admin`。
+
+2. 点击左上角的**平台管理**,选择**集群管理**。从列表中选择一个集群查看集群信息。
+
+3. 在左侧导航栏找到**集群设置**,选择**集群可见性**。
+
+4. 您可以看到已授权企业空间的列表,这意味着所有这些企业空间中的资源都能使用当前集群。
+
+5. 点击**编辑可见性**设置集群可见性。您可以选择让新的企业空间使用该集群,或者将该集群从企业空间解绑。
+
+### 将集群设置为公开集群
+
+您可以打开**设置为公开集群**,以便平台用户访问该集群,并在该集群上创建和调度资源。
diff --git a/content/zh/docs/v3.4/cluster-administration/cluster-settings/log-collections/_index.md b/content/zh/docs/v3.4/cluster-administration/cluster-settings/log-collections/_index.md
new file mode 100644
index 000000000..bce4fe493
--- /dev/null
+++ b/content/zh/docs/v3.4/cluster-administration/cluster-settings/log-collections/_index.md
@@ -0,0 +1,7 @@
+---
+linkTitle: "日志接收器"
+weight: 8620
+
+_build:
+ render: false
+---
\ No newline at end of file
diff --git a/content/zh/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-es-as-receiver.md b/content/zh/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-es-as-receiver.md
new file mode 100644
index 000000000..70a1807f8
--- /dev/null
+++ b/content/zh/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-es-as-receiver.md
@@ -0,0 +1,34 @@
+---
+title: "添加 Elasticsearch 作为接收器"
+keywords: 'Kubernetes, 日志, Elasticsearch, Pod, 容器, Fluentbit, 输出'
+description: '了解如何添加 Elasticsearch 来接收容器日志、资源事件或审计日志。'
+linkTitle: "添加 Elasticsearch 作为接收器"
+weight: 8622
+---
+您可以在 KubeSphere 中使用 Elasticsearch、Kafka 和 Fluentd 日志接收器。本教程演示如何添加 Elasticsearch 接收器。
+
+## 准备工作
+
+- 您需要一个被授予**集群管理**权限的用户。例如,您可以直接用 `admin` 用户登录控制台,或创建一个具有**集群管理**权限的角色然后将此角色授予一个用户。
+- 添加日志接收器前,您需要启用组件 `logging`、`events` 或 `auditing`。有关更多信息,请参见[启用可插拔组件](../../../../pluggable-components/)。本教程启用 `logging` 作为示例。
+
+## 添加 Elasticsearch 作为接收器
+
+1. 以 `admin` 身份登录 KubeSphere 的 Web 控制台。点击左上角的**平台管理**,然后选择**集群管理**。
+
+ {{< notice note >}}
+
+如果您启用了[多集群功能](../../../../multicluster-management/),您可以选择一个集群。
+
+{{</ notice >}}
+
+2. 在左侧导航栏,选择**集群设置**下的**日志接收器**。
+
+3. 点击**添加日志接收器**并选择 **Elasticsearch**。
+
+4. 提供 Elasticsearch 服务地址和端口信息。
+
+5. Elasticsearch 会显示在**日志接收器**页面的接收器列表中,状态为**收集中**。
+
+6. 若要验证 Elasticsearch 是否从 Fluent Bit 接收日志,从右下角的**工具箱**中点击**日志查询**,在控制台中搜索日志。有关更多信息,请参阅[日志查询](../../../../toolbox/log-query/)。
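+
+在第 4 步填写服务地址和端口之前,您也可以先验证 Elasticsearch 服务是否可达(示意性示例,地址与端口为假设值,请替换为实际值):
+
+```bash
+# 验证 Elasticsearch HTTP 端点是否可达,正常情况下会返回集群的基本信息。
+curl -s http://192.168.0.10:9200
+```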
+
diff --git a/content/zh/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-fluentd-as-receiver.md b/content/zh/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-fluentd-as-receiver.md
new file mode 100644
index 000000000..dc90d4e52
--- /dev/null
+++ b/content/zh/docs/v3.4/cluster-administration/cluster-settings/log-collections/add-fluentd-as-receiver.md
@@ -0,0 +1,154 @@
+---
+title: "添加 Fluentd 作为接收器"
+keywords: 'Kubernetes, 日志, Fluentd, 容器组, 容器, Fluentbit, 输出'
+description: '了解如何添加 Fluentd 来接收容器日志、资源事件或审计日志。'
+linkTitle: "添加 Fluentd 作为接收器"
+weight: 8624
+---
+您可以在 KubeSphere 中使用 Elasticsearch、Kafka 和 Fluentd 日志接收器。本教程演示:
+
+- 创建 Fluentd 部署以及对应的服务(Service)和配置字典(ConfigMap)。
+- 添加 Fluentd 作为日志接收器以接收来自 Fluent Bit 的日志,并输出为标准输出。
+- 验证 Fluentd 能否成功接收日志。
+
+## 准备工作
+
+- 您需要一个被授予**集群管理**权限的用户。例如,您可以直接用 `admin` 用户登录控制台,或创建一个具有**集群管理**权限的角色然后将此角色授予一个用户。
+
+- 添加日志接收器前,您需要启用组件 `logging`、`events` 或 `auditing`。有关更多信息,请参见[启用可插拔组件](../../../../pluggable-components/)。本教程启用 `logging` 作为示例。
+
+## 步骤 1:创建 Fluentd 部署
+
+由于内存消耗低,KubeSphere 选择 Fluent Bit。Fluent Bit 一般在 Kubernetes 中以守护进程集的形式部署,在每个节点上收集容器日志。此外,Fluentd 支持丰富的插件。因此,Fluentd 会以部署的形式在 KubeSphere 中创建,将从 Fluent Bit 接收到的日志发送到多个目标,例如 S3、MongoDB、Cassandra、MySQL、syslog 和 Splunk 等。
+
+执行以下命令:
+
+{{< notice note >}}
+
+- 以下命令将在默认命名空间 `default` 中创建 Fluentd 部署、服务和配置字典,并为该 Fluentd 配置字典添加 `filter` 以排除 `default` 命名空间中的日志,避免 Fluent Bit 和 Fluentd 重复日志收集。
+- 如果您想要将 Fluentd 部署至其他命名空间,请修改以下命令中的命名空间名称。
+
+{{</ notice >}}
+
+```yaml
+cat <
。
+
+1. 点击下拉菜单中的**编辑**,根据与创建时相同的步骤来编辑告警策略。点击**消息设置**页面的**确定**保存更改。
+
+2. 点击下拉菜单中的**删除**以删除告警策略。
+
+## 查看告警策略
+
+在**告警策略**页面,点击一个告警策略的名称查看其详情,包括告警规则和告警历史。您还可以看到创建告警策略时基于所使用模板的告警规则表达式。
+
+在**监控**下,**告警监控**图显示一段时间内的实际资源使用情况或使用量。**告警消息**显示您在通知中设置的自定义消息。
+
+{{< notice note >}}
+
+您可以点击右上角的
,然后选择**添加子部门**。
+
+2. 在弹出对话框中,输入部门名称(例如`测试二组`),然后点击**确定**。
+
+3. 创建部门后,您可以点击右侧的**添加成员**、**批量导入**或**从其他部门移入**来添加成员。添加成员后,点击该成员进入详情页面,查看其帐号。
+
+4. 您可以点击`测试二组`右侧的
来查看其部门 ID。
+
+5. 点击**标签**选项卡,然后点击**添加标签**来创建标签。若管理界面无**标签**选项卡,请点击加号图标来创建标签。
+
+6. 在弹出对话框中,输入标签名称,例如`组长`。您可以按需指定**可使用人**,点击**确定**完成操作。
+
+7. 创建标签后,您可以点击右侧的**添加部门/成员**或**批量导入**来添加部门或成员。点击**标签详情**进入详情页面,可以查看此标签的 ID。
+
+8. 要查看企业 ID,请点击**我的企业**,在**企业信息**页面查看 ID。
+
+### 步骤 3:在 KubeSphere 控制台配置企业微信通知
+
+您必须在 KubeSphere 控制台提供企业微信的相关 ID 和凭证,以便 KubeSphere 将通知发送至您的企业微信。
+
+1. 使用具有 `platform-admin` 角色的用户(例如,`admin`)登录 KubeSphere Web 控制台。
+
+2. 点击左上角的**平台管理**,选择**平台设置**。
+
+3. 前往**通知管理**下的**通知配置**,选择**企业微信**。
+
+4. 在**服务器设置**下的**企业 ID**、**应用 AgentId** 以及**应用 Secret** 中分别输入您的企业 ID、应用 AgentId 以及应用 Secret。
+
+5. 在**接收设置**中,从下拉列表中选择**用户 ID**、**部门 ID** 或者**标签 ID**,输入对应 ID 后点击**添加**。您可以添加多个 ID。
+
+6. 勾选**通知条件**左侧的复选框即可设置通知条件。
+
+ - **标签**:告警策略的名称、级别或监控目标。您可以选择一个标签或者自定义标签。
+ - **操作符**:标签与值的匹配关系,包括**包含值**,**不包含值**,**存在**和**不存在**。
+ - **值**:标签对应的值。
+ {{< notice note >}}
+ - 操作符**包含值**和**不包含值**需要添加一个或多个标签值。使用回车分隔多个值。
+ - 操作符**存在**和**不存在**判断某个标签是否存在,无需设置标签值。
+   {{</ notice >}}
+
+ 您可以点击**添加**来添加多个通知条件,或点击通知条件右侧的
。
+
+2. 转到**仓库**页面,您可以看到 Nexus 提供了三种仓库类型。
+
+ - `proxy`:远程仓库代理,用于下载资源并将其作为缓存存储在 Nexus 上。
+
+ - `hosted`:在 Nexus 上存储制品的仓库。
+
+ - `group`:一组已配置好的 Nexus 仓库。
+
+3. 点击仓库查看它的详细信息。例如,点击 **maven-public** 进入详情页面,并查看它的 **URL**。
+
+### 步骤 2:在 GitHub 仓库修改 `pom.xml`
+
+1. 登录 GitHub,Fork [示例仓库](https://github.com/devops-ws/learn-pipeline-java)到您的 GitHub 帐户。
+
+2. 在您的 **learn-pipeline-java** GitHub 仓库中,点击根目录下的文件 `pom.xml`。
+
+3. 在文件中点击 | 代码仓库 | +参数 | +
|---|---|
| GitHub | +凭证:选择访问代码仓库的凭证。 | +
| GitLab | +
+
|
+
| Bitbucket | +
+
|
+
| Git | +
+
|
+
| 参数 | +描述 | +
|---|---|
+
+
+ 修订版本 + |
+
+
+
+ Git 仓库中的 commit ID、分支或标签。例如,master, v1.2.0, 0a1b2c3 或 HEAD。 + |
+
+
+
+ 清单文件路径 + |
+
+
+
+ 设置清单文件路径。例如,config/default。 + |
+
| 参数 | +描述 | +
|---|---|
+
+
+ 清理资源 + |
+
+
+
+ 如果勾选,自动同步时会删除 Git 仓库中不存在的资源。不勾选时,自动同步触发时不会删除集群中的资源。 + |
+
+
+
+ 自纠正 + |
+
+
+
+ 如果勾选,当检测到 Git 仓库中定义的状态与部署资源中有偏差时,将强制应用 Git 仓库中的定义。不勾选时,对部署资源做更改时不会触发自动同步。 + |
+
| 参数 | +描述 | +
|---|---|
+
+
+ 清理资源 + |
+
+
+
+ 如果勾选,同步会删除 Git 仓库中不存在的资源。不勾选时,同步不会删除集群中的资源,而是会显示 out-of-sync。 + |
+
+
+
+ 模拟运行 + |
+
+
+
+ 模拟同步,不影响最终部署资源。 + |
+
+
+
+ 仅执行 Apply + |
+
+
+
+ 如果勾选,同步应用资源时会跳过 pre/post 钩子,仅执行 kubectl apply。 + |
+
+
+
+ 强制 Apply + |
+
+
+
+ 如果勾选,同步时会执行 kubectl apply --force。 + |
+
| 参数 | +描述 | +
|---|---|
+
+
+ 跳过规范校验 + |
+
+
+
+ 跳过 kubectl 验证。执行 kubectl apply 时,增加 --validate=false 标识。 + |
+
+
+
+ 自动创建项目 + |
+
+
+
+ 在项目不存在的情况下自动为应用程序资源创建项目。 + |
+
+
+
+ 最后清理 + |
+
+
+
+ 同步操作时,其他资源都完成部署且处于健康状态后,再清理资源。 + |
+
+
+
+ 选择性同步 + |
+
+
+
+ 仅同步 out-of-sync 状态的资源。 + |
+
| 参数 | +描述 | +
|---|---|
+
+
+ foreground + |
+
+
+
+ 先删除依赖资源,再删除主资源。 + |
+
+
+
+ background + |
+
+
+
+ 先删除主资源,再删除依赖资源。 + |
+
+
+
+ orphan + |
+
+
+
+ 删除主资源,留下依赖资源成为孤儿。 + |
+
| 参数 | +描述信息 | +
|---|---|
| 名称 | +持续部署的名称。 | +
| 健康状态 | +持续部署的健康状态。主要包含以下几种状态: +
|
+
| 同步状态 | +持续部署的同步状态。主要包含以下几种状态: +
|
+
| 部署位置 | +资源部署的集群和项目。 | +
| 更新时间 | +资源更新的时间。 | +
以编辑文件。 例如,将 `spec.replicas` 的值改变为 `3`。
+
+4. 在页面底部点击 **Commit changes**。
+
+### 检查 webhook 交付
+
+1. 在您仓库的 **Webhooks** 页面,点击 webhook。
+
+2. 点击 **Recent Deliveries**,然后点击一个具体交付记录查看详情。
+
+### 检查流水线
+
+1. 使用 `project-regular` 帐户登录 KubeSphere Web 控制台。转到 DevOps 项目,点击流水线。
+
+2. 在**运行记录**选项卡,检查提交到远程仓库 `sonarqube` 分支的拉取请求是否触发了新的运行。
+
+3. 转到 `kubesphere-sample-dev` 项目的 **Pods** 页面,检查 3 个 Pods 的状态。如果 3 个 Pods 为运行状态,表示流水线运行正常。
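+
+您也可以直接用 kubectl 检查部署结果(示意性示例):
+
+```bash
+# 查看 kubesphere-sample-dev 命名空间中的 Pod 是否都处于 Running 状态,且数量为 3。
+kubectl -n kubesphere-sample-dev get pods
+```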
+
+
+
diff --git a/content/zh/docs/v3.4/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md b/content/zh/docs/v3.4/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md
new file mode 100644
index 000000000..4d7d6b9a2
--- /dev/null
+++ b/content/zh/docs/v3.4/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md
@@ -0,0 +1,92 @@
+---
+title: "使用流水线模板"
+keywords: 'KubeSphere, Kubernetes, Jenkins, 图形化流水线, 流水线模板'
+description: '了解如何在 KubeSphere 上使用流水线模板。'
+linkTitle: "使用流水线模板"
+weight: 11213
+---
+
+KubeSphere 提供图形编辑面板,您可以通过交互式操作定义 Jenkins 流水线的阶段和步骤。KubeSphere 3.3 中提供了内置流水线模板,如 Node.js、Maven 以及 Golang,使用户能够快速创建对应模板的流水线。同时,KubeSphere 3.3 还支持自定义流水线模板,以满足企业不同的需求。
+
+本文档演示如何在 KubeSphere 上使用流水线模板。
+
+## 准备工作
+
+- 您需要有一个企业空间、一个 DevOps 项目和一个用户 (`project-regular`),并已邀请此帐户至 DevOps 项目中且授予 `operator` 角色。如果尚未准备好,请参考[创建企业空间、项目、用户和角色](../../../../quick-start/create-workspace-and-project/)。
+
+- 您需要启用 [KubeSphere DevOps 系统](../../../../pluggable-components/devops/)。
+
+- 您需要[创建流水线](../../../how-to-use/pipelines/create-a-pipeline-using-graphical-editing-panel/)。
+
+## 使用内置流水线模板
+
+下面以 Node.js 为例演示如何使用内置流水线模板。如果需要使用 Maven 以及 Golang 流水线模板,可参考该部分内容。
+
+1. 以 `project-regular` 用户登录 KubeSphere 控制台,在左侧导航树,点击 **DevOps 项目**。
+
+2. 在右侧的 **DevOps 项目**页面,点击您创建的 DevOps 项目。
+
+3. 在左侧的导航树,点击**流水线**。
+
+4. 在右侧的**流水线**页面,点击已创建的流水线。
+
+5. 在右侧的**任务状态**页签,点击**编辑流水线**。
+
+
+6. 在**创建流水线**对话框,点击 **Node.js**,然后点击**下一步**。
+
+7. 在**参数设置**页签,按照实际情况设置以下参数,点击**创建**。
+
+ | 参数 | 参数解释 |
+ | ----------- | ------------------------- |
+ | GitURL | 需要克隆的项目仓库的地址。 |
+ | GitRevision | 需要检出的分支。 |
+ | NodeDockerImage | Node.js 的 Docker 镜像版本。 |
+ | InstallScript | 安装依赖项的 Shell 脚本。 |
+ | TestScript | 项目测试的 Shell 脚本。 |
+ | BuildScript | 构建项目的 Shell 脚本。 |
+ | ArtifactsPath | 归档文件所在的路径。 |
+
+8. 在左侧的可视化编辑页面,系统默认已添加一系列步骤,您可以添加步骤或并行阶段。
+
+9. 点击指定步骤,在页面右侧,您可以执行以下操作:
+ - 修改阶段名称。
+ - 删除阶段。
+ - 设置代理类型。
+ - 添加条件。
+ - 编辑或删除某一任务。
+ - 添加步骤或嵌套步骤。
+
+ {{< notice note >}}
+
+ 您还可以按需在流水线模板中自定义步骤和阶段。有关如何使用图形编辑面板的更多信息,请参考[使用图形编辑面板创建流水线](../create-a-pipeline-using-graphical-editing-panel/)。
+
+   {{</ notice >}}
+
+10. 在右侧的**代理**区域,选择代理类型,默认值为 **kubernetes**,点击**确定**。
+
+ | 代理类型 | 说明 |
+ | ----------- | ------------------------- |
+ | any | 调用默认的 base pod 模板创建 Jenkins agent 运行流水线。 |
+ | node | 调用指定类型的 pod 模板创建 Jenkins agent 运行流水线,可配置的 label 标签为 base、java、nodejs、maven、go 等。 |
+ | kubernetes | 通过 yaml 文件自定义标准的 kubernetes pod 模板运行 agent 执行流水线任务。 |
+
+11. 在弹出的页面,您可以查看已创建的流水线模板详情,点击**运行**即可运行该流水线。
+
+在之前的版本中,KubeSphere 还提供了 CI 以及 CI & CD 流水线模板,但是由于这两个模板难以满足定制化需求,因此建议您采用其它内置模板或直接自定义模板。下面分别介绍这两个模板。
+
+- CI 流水线模板
+
+ 
+
+ 
+
+ CI 流水线模板包含两个阶段。**clone code** 阶段用于检出代码,**build & push** 阶段用于构建镜像并将镜像推送至 Docker Hub。您需要预先为代码仓库和 Docker Hub 仓库创建凭证,然后在相应的步骤中设置仓库的 URL 以及凭证。完成编辑后,流水线即可开始运行。
+
+- CI & CD 流水线模板
+
+ 
+
+ 
+
+ CI & CD 流水线模板包含六个阶段。有关每个阶段的更多信息,请参考[使用 Jenkinsfile 创建流水线](../create-a-pipeline-using-jenkinsfile/#流水线概述),您可以在该文档中找到相似的阶段及描述。您需要预先为代码仓库、Docker Hub 仓库和集群的 kubeconfig 创建凭证,然后在相应的步骤中设置仓库的 URL 以及凭证。完成编辑后,流水线即可开始运行。
diff --git a/content/zh/docs/v3.4/faq/_index.md b/content/zh/docs/v3.4/faq/_index.md
new file mode 100644
index 000000000..af2e47209
--- /dev/null
+++ b/content/zh/docs/v3.4/faq/_index.md
@@ -0,0 +1,12 @@
+---
+title: "常见问题"
+description: "FAQ is designed to answer and summarize the questions users ask most frequently about KubeSphere."
+layout: "second"
+
+linkTitle: "常见问题"
+weight: 16000
+
+icon: "/images/docs/v3.3/docs.svg"
+---
+
+本章节总结并回答了有关 KubeSphere 最常见的问题,问题根据 KubeSphere 的功能进行分类,您可以在对应部分找到有关的问题和答案。
diff --git a/content/zh/docs/v3.4/faq/access-control/_index.md b/content/zh/docs/v3.4/faq/access-control/_index.md
new file mode 100644
index 000000000..95af6334a
--- /dev/null
+++ b/content/zh/docs/v3.4/faq/access-control/_index.md
@@ -0,0 +1,7 @@
+---
+title: "访问控制和帐户管理"
+keywords: 'Kubernetes, KubeSphere, 帐户, 访问控制'
+description: '关于访问控制和帐户管理的常见问题'
+layout: "second"
+weight: 16400
+---
diff --git a/content/zh/docs/v3.4/faq/access-control/add-kubernetes-namespace-to-kubesphere-workspace.md b/content/zh/docs/v3.4/faq/access-control/add-kubernetes-namespace-to-kubesphere-workspace.md
new file mode 100644
index 000000000..d51d22ad7
--- /dev/null
+++ b/content/zh/docs/v3.4/faq/access-control/add-kubernetes-namespace-to-kubesphere-workspace.md
@@ -0,0 +1,38 @@
+---
+title: "添加现有 Kubernetes 命名空间至 KubeSphere 企业空间"
+keywords: "命名空间, 项目, KubeSphere, Kubernetes"
+description: "将您现有 Kubernetes 集群中的命名空间添加至 KubeSphere 的企业空间。"
+linkTitle: "添加现有 Kubernetes 命名空间至 KubeSphere 企业空间"
+Weight: 16430
+---
+
+Kubernetes 命名空间即 KubeSphere 项目。如果您不是在 KubeSphere 控制台创建命名空间对象,则该命名空间不会直接在企业空间中显示。不过,集群管理员依然可以在**集群管理**页面查看该命名空间。同时,您也可以将该命名空间添加至企业空间。
+
+本教程演示如何添加现有 Kubernetes 命名空间至 KubeSphere 企业空间。
+
+## 准备工作
+
+- 您需要有一个具有**集群管理**权限的用户。例如,您可以直接以 `admin` 身份登录控制台,或者创建一个具有该权限的新角色并将其分配至一个用户。
+
+- 您需要有一个可用的企业空间,以便将命名空间分配至该企业空间。有关更多信息,请参见[创建企业空间、项目、用户和角色](../../../quick-start/create-workspace-and-project/)。
+
+## 创建 Kubernetes 命名空间
+
+首先,创建一个示例 Kubernetes 命名空间,以便稍后将其添加至企业空间。执行以下命令:
+
+```bash
+kubectl create ns demo-namespace
+```
+
+有关创建 Kubernetes 命名空间的更多信息,请参见[命名空间演练](https://kubernetes.io/zh/docs/tasks/administer-cluster/namespaces-walkthrough/)。
+
+## 添加命名空间至 KubeSphere 企业空间
+
+1. 以 `admin` 身份登录 KubeSphere 控制台,转到**集群管理**页面。点击**项目**,您可以查看在当前集群中运行的所有项目,包括前面刚刚创建的命名空间。
+
+2. 通过 kubectl 创建的命名空间不属于任何企业空间。请点击右侧的
,选择**分配企业空间**。
+
+3. 在弹出的对话框中,为该项目选择一个**企业空间**和**项目管理员**,然后点击**确定**。
+
+4. 转到您的企业空间,可以在**项目**页面看到该项目已显示。
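+
+操作完成后,您也可以用 kubectl 查看该命名空间的标签,确认其已归属到目标企业空间(示意性示例;具体标签键以实际环境为准,通常为 `kubesphere.io/workspace`):
+
+```bash
+# 查看命名空间当前携带的标签。
+kubectl get ns demo-namespace --show-labels
+```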
+
diff --git a/content/zh/docs/v3.4/faq/access-control/cannot-login.md b/content/zh/docs/v3.4/faq/access-control/cannot-login.md
new file mode 100644
index 000000000..266a96f96
--- /dev/null
+++ b/content/zh/docs/v3.4/faq/access-control/cannot-login.md
@@ -0,0 +1,143 @@
+---
+title: "用户无法登录"
+keywords: "无法登录, 用户不活跃, KubeSphere, Kubernetes"
+description: "如何解决无法登录的问题"
+linkTitle: "用户无法登录"
+Weight: 16440
+---
+
+KubeSphere 安装时会自动创建默认用户 (`admin/P@88w0rd`),密码错误或者用户状态不是**活跃**会导致无法登录。
+
+下面是用户无法登录时,一些常见的问题:
+
+## Account Not Active
+
+登录失败时,您可能看到以下提示。请根据以下步骤排查并解决问题:
+
+
+
+1. 执行以下命令检查用户状态:
+
+ ```bash
+ $ kubectl get users
+ NAME EMAIL STATUS
+ admin admin@kubesphere.io Active
+ ```
+
+2. 检查 `ks-controller-manager` 是否正常运行,是否有异常日志:
+
+ ```bash
+ kubectl -n kubesphere-system logs -l app=ks-controller-manager
+ ```
+
+以下是导致此问题的可能原因。
+
+### Kubernetes 1.19 中的 admission webhook 无法正常工作
+
+Kubernetes 1.19 使用了 Golang 1.15 进行编译,需要更新 admission webhook 用到的证书,该问题导致 `ks-controller` admission webhook 无法正常使用。
+
+相关错误日志:
+
+```bash
+Internal error occurred: failed calling webhook "validating-user.kubesphere.io": Post "https://ks-controller-manager.kubesphere-system.svc:443/validate-email-iam-kubesphere-io-v1alpha2-user?timeout=30s": x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0
+```
+
+有关该问题和解决方式的更多信息,请参见[此 GitHub Issue](https://github.com/kubesphere/kubesphere/issues/2928)。
+
+### ks-controller-manager 无法正常工作
+
+`ks-controller-manager` 依赖 openldap、Jenkins 这两个有状态服务,当 openldap 或 Jenkins 无法正常运行时会导致 `ks-controller-manager` 一直处于 `reconcile` 状态。
+
+可以通过以下命令检查 openldap 和 Jenkins 服务是否正常:
+
+```
+kubectl -n kubesphere-devops-system get po | grep -v Running
+kubectl -n kubesphere-system get po | grep -v Running
+kubectl -n kubesphere-system logs -l app=openldap
+```
+
+相关错误日志:
+
+```bash
+failed to connect to ldap service, please check ldap status, error: factory is not able to fill the pool: LDAP Result Code 200 \"Network Error\": dial tcp: lookup openldap.kubesphere-system.svc on 169.254.25.10:53: no such host
+```
+
+```bash
+Internal error occurred: failed calling webhook “validating-user.kubesphere.io”: Post https://ks-controller-manager.kubesphere-system.svc:443/validate-email-iam-kubesphere-io-v1alpha2-user?timeout=4s: context deadline exceeded
+```
+
+**解决方式**
+
+您需要先恢复 openldap、Jenkins 这两个服务并保证网络的连通性,重启 `ks-controller-manager`。
+
+```
+kubectl -n kubesphere-system rollout restart deploy ks-controller-manager
+```
+
+### 使用了错误的代码分支
+
+如果您使用了错误的 ks-installer 版本,会导致安装之后各组件版本不匹配。
+
+通过以下方式检查各组件版本是否一致,正确的 image tag 应该是 v3.3.2。
+
+```
+kubectl -n kubesphere-system get deploy ks-installer -o jsonpath='{.spec.template.spec.containers[0].image}'
+kubectl -n kubesphere-system get deploy ks-apiserver -o jsonpath='{.spec.template.spec.containers[0].image}'
+kubectl -n kubesphere-system get deploy ks-controller-manager -o jsonpath='{.spec.template.spec.containers[0].image}'
+```
+
+## 用户名或密码错误
+
+
+
+通过以下命令检查用户密码是否正确:
+
+```
+curl -u
,并选择**编辑 YAML**。
+
+5. 在文件末尾添加 `telemetry_enabled: false` 字段,点击**确定**。
+
+
+{{< notice note >}}
+
+如需重新启用 Telemetry,请删除 `telemetry_enabled: false` 字段或将其更改为 `telemetry_enabled: true`,并更新 `ks-installer`。
+
+{{</ notice >}}
diff --git a/content/zh/docs/v3.4/faq/multi-cluster-management/_index.md b/content/zh/docs/v3.4/faq/multi-cluster-management/_index.md
new file mode 100644
index 000000000..57f23b873
--- /dev/null
+++ b/content/zh/docs/v3.4/faq/multi-cluster-management/_index.md
@@ -0,0 +1,7 @@
+---
+title: "多集群管理"
+keywords: 'Kubernetes, KubeSphere, 多集群管理, 主集群, 成员集群'
+description: 'KubeSphere 多集群管理常见问题'
+layout: "second"
+weight: 16700
+---
diff --git a/content/zh/docs/v3.4/faq/multi-cluster-management/host-cluster-access-member-cluster.md b/content/zh/docs/v3.4/faq/multi-cluster-management/host-cluster-access-member-cluster.md
new file mode 100644
index 000000000..8f66b661b
--- /dev/null
+++ b/content/zh/docs/v3.4/faq/multi-cluster-management/host-cluster-access-member-cluster.md
@@ -0,0 +1,71 @@
+---
+title: "恢复主集群对成员集群的访问权限"
+keywords: "Kubernetes, KubeSphere, 多集群, 主集群, 成员集群"
+description: "了解如何恢复主集群对成员集群的访问。"
+linkTitle: "恢复主集群对成员集群的访问权限"
+Weight: 16720
+---
+
+[多集群管理](../../../multicluster-management/introduction/kubefed-in-kubesphere/)是 KubeSphere 的一大特色,拥有必要权限的租户(通常是集群管理员)能够从主集群访问中央控制平面,以管理全部成员集群。强烈建议您通过主集群管理整个集群的资源。
+
+本教程演示如何恢复主集群对成员集群的访问权限。
+
+## 可能出现的错误信息
+
+如果您无法从中央控制平面访问成员集群,并且浏览器一直将您重新定向到 KubeSphere 的登录页面,请在该成员集群上运行以下命令来获取 ks-apiserver 的日志。
+
+```
+kubectl -n kubesphere-system logs ks-apiserver-7c9c9456bd-qv6bs
+```
+
+{{< notice note >}}
+
+`ks-apiserver-7c9c9456bd-qv6bs` 指的是该成员集群上的容器组 ID。请确保您使用自己的容器组 ID。
+
+{{</ notice >}}
+
+您可能会看到以下错误信息:
+
+```
+E0305 03:46:42.105625 1 token.go:65] token not found in cache
+E0305 03:46:42.105725 1 jwt_token.go:45] token not found in cache
+E0305 03:46:42.105759 1 authentication.go:60] Unable to authenticate the request due to error: token not found in cache
+E0305 03:46:52.045964 1 token.go:65] token not found in cache
+E0305 03:46:52.045992 1 jwt_token.go:45] token not found in cache
+E0305 03:46:52.046004 1 authentication.go:60] Unable to authenticate the request due to error: token not found in cache
+E0305 03:47:34.502726 1 token.go:65] token not found in cache
+E0305 03:47:34.502751 1 jwt_token.go:45] token not found in cache
+E0305 03:47:34.502764 1 authentication.go:60] Unable to authenticate the request due to error: token not found in cache
+```
+
+## 解决方案
+
+### 步骤 1:验证 jwtSecret
+
+分别在主集群和成员集群上运行以下命令,确认它们的 jwtSecret 是否相同。
+
+```
+kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret
+```
+
+### 步骤 2:更改 `accessTokenMaxAge`
+
+请确保主集群和成员集群的 jwtSecret 相同,然后在该成员集群上运行以下命令获取 `accessTokenMaxAge` 的值。
+
+```
+kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep accessTokenMaxAge
+```
+
+如果该值不为 `0`,请运行以下命令更改 `accessTokenMaxAge` 的值。
+
+```
+kubectl -n kubesphere-system edit cm kubesphere-config -o yaml
+```
+
+将 `accessTokenMaxAge` 的值更改为 `0` 之后,运行以下命令重启 ks-apiserver。
+
+```
+kubectl -n kubesphere-system rollout restart deploy ks-apiserver
+```
+
+现在,您可以再次从中央控制平面访问该成员集群。
\ No newline at end of file
diff --git a/content/zh/docs/v3.4/faq/multi-cluster-management/manage-multi-cluster.md b/content/zh/docs/v3.4/faq/multi-cluster-management/manage-multi-cluster.md
new file mode 100644
index 000000000..d50fc8330
--- /dev/null
+++ b/content/zh/docs/v3.4/faq/multi-cluster-management/manage-multi-cluster.md
@@ -0,0 +1,60 @@
+---
+title: "在 KubeSphere 上管理多集群环境"
+keywords: 'Kubernetes,KubeSphere,联邦,多集群,混合云'
+description: '理解如何在 KubeSphere 上管理多集群环境。'
+linkTitle: "在 KubeSphere 上管理多集群环境"
+weight: 16710
+
+---
+
+KubeSphere 提供了易于使用的多集群功能,帮助您[在 KubeSphere 上构建多集群环境](../../../multicluster-management/)。本指南说明如何在 KubeSphere 上管理多集群环境。
+
+## 准备工作
+
+- 请确保您的 Kubernetes 集群在用作主集群和成员集群之前已安装 KubeSphere。
+- 请确保主集群和成员集群分别设置了正确的集群角色,并且在主集群和成员集群上的 `jwtSecret` 也相同。
+- 建议成员集群在导入主集群之前是干净环境,即没有创建任何资源。
+
+
+## 管理 KubeSphere 多集群环境
+
+当您在 KubeSphere 上创建多集群环境之后,您可以通过主集群的中央控制平面管理该环境。在创建资源的时候,您可以选择一个特定的集群,但是需要避免您的主集群过载。不建议您登录成员集群的 KubeSphere Web 控制台去创建资源,因为部分资源(例如:企业空间)将不会同步到您的主集群进行管理。
+
+### 资源管理
+
+不建议您将主集群转换为成员集群,或将成员集群转换成主集群。如果一个成员集群曾经被导入进主集群,您将该成员集群从先前的主集群解绑后,再导入进新的主集群时必须使用相同的集群名称。
+
+如果您想在将成员集群导入新的主集群时保留现有项目,请按照以下步骤进行操作。
+
+1. 在成员集群上运行以下命令将需要保留的项目从企业空间解绑。
+
+ ```bash
+ kubectl label ns