diff --git a/content/en/blogs/TiDB-on-KubeSphere-using-qke.md b/content/en/blogs/TiDB-on-KubeSphere-using-qke.md
index 99f2fdb6d..93399f2c1 100644
--- a/content/en/blogs/TiDB-on-KubeSphere-using-qke.md
+++ b/content/en/blogs/TiDB-on-KubeSphere-using-qke.md
@@ -24,7 +24,7 @@ By combining TiDB with KubeSphere, we can have Kubernetes-powered TiDB clusters,
As you can imagine, the very first thing to consider is to have a Kubernetes cluster so that you can deploy TiDB. Well, in this regard, the installation of Kubernetes may have haunted a large number of neophytes, especially the preparation of working machines, either physical or virtual. Besides, you also need to configure different network rules so that traffic can move smoothly among instances. Fortunately, QingCloud, the sponsor of KubeSphere, provides users with a highly functional platform that enables them to quickly deploy Kubernetes and KubeSphere at the same time (you can choose to deploy Kubernetes only). Namely, you only need to click a few buttons and the platform will do the rest.
-Therefore, I select QingCloud Kubernetes Engine (QKE) to prepare the environment. In fact, you can also use instances on the platform directly and [deploy a highly-available Kubernetes cluster with KubeSphere installed](https://kubesphere.io/docs/installing-on-linux/public-cloud/kubesphere-on-qingcloud-instance/). Here is how I deploy the cluster and TiDB:
+Therefore, I select QingCloud Kubernetes Engine (QKE) to prepare the environment. In fact, you can also use instances on the platform directly and [deploy a highly-available Kubernetes cluster with KubeSphere installed](https://kubesphere.io/docs/installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms/). Here is how I deploy the cluster and TiDB:
1. Log in to the [web console of QingCloud](https://console.qingcloud.com/). Simply select **KubeSphere (QKE)** from the menu and create a Kubernetes cluster with KubeSphere installed. The platform allows you to install different components of KubeSphere. Here, we need to enable [OpenPitrix](https://github.com/openpitrix/openpitrix), which powers the app management feature in KubeSphere.
diff --git a/content/en/docs/application-store/_index.md b/content/en/docs/application-store/_index.md
index b2484858b..8019d2e28 100644
--- a/content/en/docs/application-store/_index.md
+++ b/content/en/docs/application-store/_index.md
@@ -75,6 +75,14 @@ You can upload app templates or add app repositories to KubeSphere so that tenan
Learn how to deploy GitLab through an app repository and access its service.
+### [Deploy TiDB Operator and a TiDB Cluster on KubeSphere](../application-store/external-apps/deploy-tidb/)
+
+Learn how to deploy TiDB Operator and a TiDB Cluster on KubeSphere.
+
+### [Deploy MeterSphere on KubeSphere](../application-store/external-apps/deploy-metersphere/)
+
+Learn how to deploy MeterSphere on KubeSphere.
+
## Application Developer Guide
### [Helm Developer Guide](../application-store/app-developer-guide/helm-developer-guide/)
diff --git a/content/en/docs/application-store/external-apps/deploy-metersphere.md b/content/en/docs/application-store/external-apps/deploy-metersphere.md
new file mode 100644
index 000000000..6eb5072fc
--- /dev/null
+++ b/content/en/docs/application-store/external-apps/deploy-metersphere.md
@@ -0,0 +1,90 @@
+---
+title: "Deploy MeterSphere on KubeSphere"
+keywords: 'KubeSphere, Kubernetes, Applications, MeterSphere'
+description: 'How to deploy MeterSphere on KubeSphere'
+linkTitle: "Deploy MeterSphere on KubeSphere"
+weight: 14330
+---
+
+MeterSphere is an open-source, one-stop, and enterprise-level continuous testing platform. It features test tracking, interface testing, and performance testing.
+
+This tutorial demonstrates how to deploy MeterSphere on KubeSphere.
+
+## Prerequisites
+
+- You need to enable [the OpenPitrix system](../../../pluggable-components/app-store/).
+- You need to create a workspace, a project, and two user accounts (`ws-admin` and `project-regular`) for this tutorial. The account `ws-admin` must be granted the role of `workspace-admin` in the workspace, and the account `project-regular` must be invited to the project with the role of `operator`. If they are not ready, refer to [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Hands-on Lab
+
+### Step 1: Add an app repository
+
+1. Log in to KubeSphere as `ws-admin`. In your workspace, go to **App Repos** under **Apps Management**, and then click **Add Repo**.
+
+ 
+
+2. In the dialog that appears, enter `metersphere` for the app repository name and `https://charts.kubesphere.io/test` for the MeterSphere repository URL. Click **Validate** to verify the URL and you will see a green check mark next to the URL if it is available. Click **OK** to continue.
+
+ 
+
+3. Your repository appears in the list after it is successfully imported to KubeSphere.
+
+ 
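+The same chart repository can also be added locally with the Helm CLI for inspection — a sketch, assuming Helm v3 is installed; this step is optional and separate from the KubeSphere console workflow:
+
+```bash
+# Add the MeterSphere chart repository and list the charts it serves
+helm repo add metersphere https://charts.kubesphere.io/test
+helm repo update
+helm search repo metersphere
+```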
+
+### Step 2: Deploy MeterSphere
+
+1. Log out of KubeSphere and log back in as `project-regular`. In your project, go to **Applications** under **Application Workloads** and click **Deploy New Application**.
+
+ 
+
+2. In the dialog that appears, select **From App Templates**.
+
+ 
+
+3. Select `metersphere` from the drop-down list, then click **metersphere-chart**.
+
+ 
+
+4. On the **App Info** tab and the **Chart Files** tab, you can view the default configuration from the console. Click **Deploy** to continue.
+
+ 
+
+5. On the **Basic Info** page, you can view the app name, app version, and deployment location. Click **Next** to continue.
+
+ 
+
+6. On the **App Config** page, change the value of `imageTag` from `master` to `v1.6`, and then click **Deploy**.
+
+ 
+
+7. Wait for MeterSphere to be up and running.
+
+ 
+
+8. Go to **Workloads**, and you can see two Deployments and three StatefulSets created for MeterSphere.
+
+ 
+
+ 
+
+ {{< notice note >}}
+
+ It may take a while before all the Deployments and StatefulSets are up and running.
+
+ {{</ notice >}}
+
+### Step 3: Access MeterSphere
+
+1. Go to **Services** under **Application Workloads**, and you can see the MeterSphere Service, whose type is set to `NodePort` by default.
+
+ 
+
+2. You can access MeterSphere through `{$NodeIP}:{$NodePort}` using the default account and password (`admin/metersphere`).
+
+ 
+
+ {{< notice note >}}
+
+ You may need to open the port in your security groups and configure related port forwarding rules depending on where your Kubernetes cluster is deployed. Make sure you use your own `NodeIP`.
+
+ {{</ notice >}}
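+The node port assigned to the Service can also be retrieved from the command line — a sketch, assuming the Service is named `metersphere` and lives in a project (namespace) also called `metersphere`; adjust both names to your environment:
+
+```bash
+# Print the node port mapped to the MeterSphere Service
+kubectl -n metersphere get svc metersphere -o jsonpath='{.spec.ports[0].nodePort}'
+```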
diff --git a/content/en/docs/cluster-administration/shut-down-and-restart-cluster-gracefully.md b/content/en/docs/cluster-administration/shut-down-and-restart-cluster-gracefully.md
index d6f3d56af..dfcdd3b50 100644
--- a/content/en/docs/cluster-administration/shut-down-and-restart-cluster-gracefully.md
+++ b/content/en/docs/cluster-administration/shut-down-and-restart-cluster-gracefully.md
@@ -16,7 +16,7 @@ Usually, it is recommended to maintain your nodes one by one instead of restarti
{{</ notice >}}
## Prerequisites
-- Take an [etcd backup](https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/recovery.md#snapshotting-the-keyspace) prior to shutting down a cluster.
+- Take an [etcd backup](https://etcd.io/docs/current/op-guide/recovery/#snapshotting-the-keyspace) prior to shutting down a cluster.
- SSH [passwordless login](https://man.openbsd.org/ssh.1#AUTHENTICATION) is set up between hosts.
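+
+An etcd snapshot can be taken with `etcdctl` as sketched below — the endpoint and certificate paths are assumptions (they match a typical KubeKey layout) and must be adjusted to your cluster:
+
+```bash
+# Save a snapshot of the etcd keyspace before shutting down the cluster
+ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
+  --cacert=/etc/ssl/etcd/ssl/ca.pem \
+  --cert=/etc/ssl/etcd/ssl/admin.pem \
+  --key=/etc/ssl/etcd/ssl/admin-key.pem \
+  snapshot save /var/backups/etcd-snapshot.db
+```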
## Shut Down a Cluster
@@ -71,4 +71,4 @@ kubectl get nodes -l node-role.kubernetes.io/master
kubectl get nodes -l node-role.kubernetes.io/worker
```
-If your cluster fails to restart, please try to [restore the etcd cluster](https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/recovery.md#restoring-a-cluster).
+If your cluster fails to restart, please try to [restore the etcd cluster](https://etcd.io/docs/current/op-guide/recovery/#restoring-a-cluster).
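+
+A restore from the snapshot taken earlier can be sketched as follows — the snapshot path and data directory are assumptions; see the linked etcd guide for the full procedure:
+
+```bash
+# Restore the keyspace from a previously saved snapshot into a new data directory
+ETCDCTL_API=3 etcdctl snapshot restore /var/backups/etcd-snapshot.db \
+  --data-dir=/var/lib/etcd-restored
+```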
diff --git a/content/en/docs/faq/_index.md b/content/en/docs/faq/_index.md
index a1dd799b9..3996274ef 100644
--- a/content/en/docs/faq/_index.md
+++ b/content/en/docs/faq/_index.md
@@ -25,9 +25,17 @@ Understand what Telemetry is and how to enable or disable it in KubeSphere.
Understand why the installation may fail when you use KubeKey to install an add-on through YAML.
+### [Uninstall Pluggable Components from KubeSphere](../faq/installation/uninstall-pluggable-components/)
+
+Learn how to uninstall each pluggable component in KubeSphere.
+
+### [SSH Connection Failure](../faq/installation/ssh-connection-failure/)
+
+Understand why the SSH connection may fail when you use KubeKey to create a cluster.
+
## Upgrade
-### [Upgrade QingCloud CSI](../faq/upgrade/upgrade-faq/)
+### [Upgrade QingCloud CSI](../faq/upgrade/qingcloud-csi-upgrade/)
Upgrade the QingCloud CSI after you upgrade KubeSphere.
@@ -51,7 +59,7 @@ Use your own Prometheus stack setup in KubeSphere.
Reset the password of any account.
-### [Session Timeout](../access-control/session-timeout/)
+### [Session Timeout](../faq/access-control/session-timeout/)
Understand session timeout and customize the timeout period.
@@ -71,4 +79,14 @@ Enable the editing function of system resources on the console.
### [Change the Console Language](../faq/console/change-console-language/)
-Select a desire language of the console.
\ No newline at end of file
+Select a desired language for the console.
+
+## Applications
+
+### [Remove Built-in Apps in KubeSphere](../faq/applications/remove-built-in-apps/)
+
+Learn how to remove built-in apps from the KubeSphere App Store.
+
+### [Reuse the Same App Name after Its Deletion](../faq/applications/reuse-the-same-app-name-after-deletion/)
+
+Learn how to reuse the same app name after its deletion.
\ No newline at end of file
diff --git a/content/en/docs/faq/applications/remove-built-in-apps.md b/content/en/docs/faq/applications/remove-built-in-apps.md
index 7e5a0a0ff..babecd46b 100644
--- a/content/en/docs/faq/applications/remove-built-in-apps.md
+++ b/content/en/docs/faq/applications/remove-built-in-apps.md
@@ -17,25 +17,25 @@ As an open-source and app-centric container platform, KubeSphere integrates 15 b
1. Log in to the web console of KubeSphere as `admin`, click **Platform** in the upper left corner, and then select **App Store Management**.
- 
+ 
- 
+ 
2. In the **App Store** page, you can see all 15 built-in apps displayed in the list. Select an app that you want to remove from the App Store. For example, click **tomcat** to go to its detail page.
- 
+ 
3. In the detail page of tomcat, click **Suspend App** to remove the app.
- 
+ 
4. In the dialog that appears, click **OK** to confirm your operation.
- 
+ 
5. To make the app available again in the App Store, click **Activate App** and then click **OK** to confirm your operation.
- 
+ 
{{< notice note >}}
diff --git a/content/en/docs/faq/applications/reuse-the-same-app-name-after-deletion.md b/content/en/docs/faq/applications/reuse-the-same-app-name-after-deletion.md
new file mode 100644
index 000000000..e55c42dbe
--- /dev/null
+++ b/content/en/docs/faq/applications/reuse-the-same-app-name-after-deletion.md
@@ -0,0 +1,48 @@
+---
+title: "Reuse the Same App Name after Its Deletion"
+keywords: "KubeSphere, OpenPitrix, Application, App"
+description: "How to reuse the same app name after its deletion"
+linkTitle: "Reuse the Same App Name after Its Deletion"
+weight: 16920
+---
+
+To deploy an app in KubeSphere, tenants can go to the App Store and select an available app based on their needs. However, tenants may encounter errors when deploying an app with the same name as a deleted one. This tutorial demonstrates how to reuse the same app name after its deletion.
+
+## Prerequisites
+
+- You need to use an account invited to your project with the role of `operator`. This tutorial uses the account `project-regular` for demonstration purposes. For more information, refer to [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
+- You need to [enable the App Store](../../../pluggable-components/app-store/).
+
+## Reuse the Same App Name
+
+### Deploy an app from the App Store
+
+1. Log in to the web console of KubeSphere as `project-regular` and deploy an app from the App Store. This tutorial uses Redis as an example app and sets the app name to `redis-1`. For more information about how to deploy Redis, refer to [Deploy Redis on KubeSphere](../../../application-store/built-in-apps/redis-app/).
+
+ 
+
+2. Click the app to go to its detail page, and then click **Delete** to delete it.
+
+ 
+
+### Reuse the same app name
+
+1. If you try to deploy a new Redis app with the same app name `redis-1`, you can see the following error prompt in the upper-right corner.
+
+ 
+
+2. In your project, go to **Secrets** under **Configurations**, and enter `redis-1` in the search bar to find the Secret.
+
+ 
+
+3. Click the Secret to go to its detail page, click **More**, and then select **Delete** from the drop-down menu.
+
+ 
+
+4. In the dialog that appears, enter the Secret name and click **OK** to delete it.
+
+ 
+
+5. Now, you can deploy a new Redis app with the app name `redis-1`.
+
+ 
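+
+The leftover Secret can also be removed with kubectl instead of the console — a sketch, assuming the project namespace is `demo-project` and the Secret shares the app name `redis-1`; both names are assumptions to adjust:
+
+```bash
+# Locate and delete the Secret that blocks reusing the app name
+kubectl -n demo-project get secrets | grep redis-1
+kubectl -n demo-project delete secret redis-1
+```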
diff --git a/content/en/docs/faq/installation/telemetry.md b/content/en/docs/faq/installation/telemetry.md
index 0ae33a657..a107d8f98 100644
--- a/content/en/docs/faq/installation/telemetry.md
+++ b/content/en/docs/faq/installation/telemetry.md
@@ -68,7 +68,7 @@ If you install KubeSphere on Linux, see [Disable Telemetry after Installation](.
2. Select **Clusters Management** and navigate to **CRDs**.
{{< notice note >}}
-If you have enabled [the multi-cluster feature](../../multicluster-management/), you need to select a cluster first.
+If you have enabled [the multi-cluster feature](../../../multicluster-management/), you need to select a cluster first.
{{</ notice >}}
3. Input `clusterconfiguration` in the search bar and click the result to go to its detail page.
diff --git a/content/en/docs/faq/observability/logging.md b/content/en/docs/faq/observability/logging.md
index 28b9d8ef5..5349dfd34 100644
--- a/content/en/docs/faq/observability/logging.md
+++ b/content/en/docs/faq/observability/logging.md
@@ -19,7 +19,7 @@ This page contains some of the frequently asked questions about logging.
## How to change the log store to the external Elasticsearch and shut down the internal Elasticsearch
-If you are using the KubeSphere internal Elasticsearch and want to change it to your external alternate, follow the steps below. If you haven't enabled the logging system, refer to [KubeSphere Logging System](../../logging/) to setup your external Elasticsearch directly.
+If you are using the KubeSphere internal Elasticsearch and want to change it to your external alternative, follow the steps below. If you haven't enabled the logging system, refer to [KubeSphere Logging System](../../../pluggable-components/logging/) to set up your external Elasticsearch directly.
1. First, you need to update the KubeKey configuration. Execute the following command:
diff --git a/content/en/docs/installing-on-kubernetes/_index.md b/content/en/docs/installing-on-kubernetes/_index.md
index c8cbadab7..527dce67b 100644
--- a/content/en/docs/installing-on-kubernetes/_index.md
+++ b/content/en/docs/installing-on-kubernetes/_index.md
@@ -21,7 +21,7 @@ Below you will find some of the most viewed and helpful pages in this chapter. I
{{< popularPage icon="/images/docs/brand-icons/aks.jpg" title="Deploy KubeSphere on AKS" description="Provision KubeSphere on existing Kubernetes clusters on AKS." link="../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks/" >}}
-{{< popularPage icon="/images/docs/brand-icons/huawei.svg" title="Deploy KubeSphere on CCE" description="Provision KubeSphere on existing Kubernetes clusters on Huawei CCE." link="../installing-on-kubernetes/hosted-kubernetes/install-ks-on-huawei-cce/" >}}
+{{< popularPage icon="/images/docs/brand-icons/huawei.svg" title="Deploy KubeSphere on CCE" description="Provision KubeSphere on existing Kubernetes clusters on Huawei CCE." link="../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce/" >}}
{{< popularPage icon="/images/docs/brand-icons/oracle.jpg" title="Deploy KubeSphere on Oracle OKE" description="Provision KubeSphere on existing Kubernetes clusters on OKE." link="../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-oke/" >}}
@@ -55,7 +55,7 @@ Learn how to deploy KubeSphere on DigitalOcean.
Learn how to deploy KubeSphere on Google Kubernetes Engine.
-### [Deploy KubeSphere on Huawei CCE](../installing-on-kubernetes/hosted-kubernetes/install-ks-on-huawei-cce/)
+### [Deploy KubeSphere on Huawei CCE](../installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce/)
Learn how to deploy KubeSphere on Huawei Cloud Container Engine.
diff --git a/content/en/docs/installing-on-linux/_index.md b/content/en/docs/installing-on-linux/_index.md
index ea149a038..daa80c54c 100644
--- a/content/en/docs/installing-on-linux/_index.md
+++ b/content/en/docs/installing-on-linux/_index.md
@@ -89,4 +89,4 @@ Remove KubeSphere and Kubernetes from your machines.
Below you will find some of the most viewed and helpful pages in this chapter. It is highly recommended that you refer to them first.
-{{< popularPage icon="/images/docs/qingcloud-2.svg" title="Deploy KubeSphere on QingCloud" description="Provision an HA KubeSphere cluster on QingCloud." link="../installing-on-linux/public-cloud/kubesphere-on-qingcloud-instance/" >}}
+{{< popularPage icon="/images/docs/qingcloud-2.svg" title="Deploy KubeSphere on QingCloud" description="Provision an HA KubeSphere cluster on QingCloud." link="../installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms/" >}}
diff --git a/content/en/docs/installing-on-linux/high-availability-configurations/ha-configuration.md b/content/en/docs/installing-on-linux/high-availability-configurations/ha-configuration.md
index e01fe5e6c..d7841763b 100644
--- a/content/en/docs/installing-on-linux/high-availability-configurations/ha-configuration.md
+++ b/content/en/docs/installing-on-linux/high-availability-configurations/ha-configuration.md
@@ -6,13 +6,13 @@ linkTitle: "Set up an HA Cluster Using a Load Balancer"
weight: 3210
---
-You can set up a single-master Kubernetes cluster with KubeSphere installed based on the tutorial of [Multi-node Installation](../multioverview/). Single-master clusters may be sufficient for development and testing in most cases. For a production environment, however, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (e.g. F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
+You can set up a single-master Kubernetes cluster with KubeSphere installed by following the tutorial of [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/). Single-master clusters may be sufficient for development and testing in most cases. For a production environment, however, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer or any hardware load balancer (for example, F5). In addition, Keepalived with [HAProxy](https://www.haproxy.com/), or Nginx, is also an alternative for creating high-availability clusters.
This tutorial demonstrates the general configurations of a high-availability cluster as you install KubeSphere on Linux.
## Architecture
-Make sure you have prepared six Linux machines before you begin, with three of them serving as master nodes and the other three as worker nodes. The following image shows details of these machines, including their private IP address and role. For more information about system and network requirements, see [Multi-node Installation](../multioverview/#step-1-prepare-linux-hosts).
+Make sure you have prepared six Linux machines before you begin, with three of them serving as master nodes and the other three as worker nodes. The following image shows details of these machines, including their private IP addresses and roles. For more information about system and network requirements, see [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/#step-1-prepare-linux-hosts).

@@ -31,7 +31,7 @@ You must create a load balancer in your environment to listen (also known as lis
- Make sure your load balancer at least listens on the port of apiserver.
-- You may need to open ports in your security group to ensure external traffic is not blocked depending on where your cluster is deployed. For more information, see [Port Requirements](../port-firewall/).
+- You may need to open ports in your security group to ensure external traffic is not blocked depending on where your cluster is deployed. For more information, see [Port Requirements](../../../installing-on-linux/introduction/port-firewall/).
- You can configure both internal and external load balancers on some cloud platforms. After assigning a public IP address to the external load balancer, you can use the IP address to access the cluster.
- For more information about how to configure load balancers, see “Installing on Public Cloud” for specific steps on major public cloud platforms.
@@ -140,7 +140,7 @@ spec:
- node3
```
-For more information about different fields in this configuration file, see [Kubernetes Cluster Configurations](../vars/) and [Multi-node Installation](../multioverview/#2-edit-the-configuration-file).
+For more information about different fields in this configuration file, see [Kubernetes Cluster Configurations](../../../installing-on-linux/introduction/vars/) and [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/#2-edit-the-configuration-file).
### Configure the load balancer
@@ -163,7 +163,7 @@ For more information about different fields in this configuration file, see [Kub
### Persistent storage plugin configurations
-For a production environment, you need to prepare persistent storage and configure the storage plugin (e.g. CSI) in `config-sample.yaml` to define which storage service you want to use. For more information, see [Persistent Storage Configurations](../storage-configuration/).
+For a production environment, you need to prepare persistent storage and configure the storage plugin (e.g. CSI) in `config-sample.yaml` to define which storage service you want to use. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/introduction/storage-configuration/).
### Enable pluggable components (Optional)
diff --git a/content/en/docs/introduction/features.md b/content/en/docs/introduction/features.md
index 2ea080d41..6b385f24f 100644
--- a/content/en/docs/introduction/features.md
+++ b/content/en/docs/introduction/features.md
@@ -11,7 +11,7 @@ weight: 1300
As an open source container platform, KubeSphere provides enterprises with a robust, secure and feature-rich platform, boasting the most common functionalities needed for enterprises adopting Kubernetes, such as multi-cluster deployment and management, network policy configuration, Service Mesh (Istio-based), DevOps projects (CI/CD), security management, Source-to-Image and Binary-to-Image, multi-tenant management, multi-dimensional monitoring, log query and collection, alerting and notification, auditing, application management, and image registry management.
-It also supports various open source storage and network solutions, as well as cloud storage services. For example, KubeSphere presents users with a powerful cloud-native tool [Porter](https://porterlb.io/), a CNCF-certified load balancer developed for bare metal Kubernetes clusters.
+It also supports various open source storage and network solutions, as well as cloud storage services. For example, KubeSphere presents users with a powerful cloud-native tool [PorterLB](https://porterlb.io/), a CNCF-certified load balancer developed for bare metal Kubernetes clusters.
With an easy-to-use web console in place, KubeSphere eases the learning curve for users and drives the adoption of Kubernetes.
@@ -159,7 +159,7 @@ For more information, please see Project Administration and Usage.
- Open source network solutions are available such as Calico and Flannel.
-- [Porter](https://github.com/kubesphere/porter), a load balancer developed for bare metal Kubernetes clusters, is designed by KubeSphere development team. This CNCF-certified tool serves as an important solution for developers. It mainly features:
+- [PorterLB](https://github.com/kubesphere/porter), a load balancer developed for bare metal Kubernetes clusters, is designed by the KubeSphere development team. This CNCF-certified tool serves as an important solution for developers. It mainly features:
1. ECMP routing load balancing
2. BGP dynamic routing configuration
diff --git a/content/en/docs/introduction/scenarios.md b/content/en/docs/introduction/scenarios.md
index 1bec682cc..708160185 100644
--- a/content/en/docs/introduction/scenarios.md
+++ b/content/en/docs/introduction/scenarios.md
@@ -100,6 +100,6 @@ With a lightweight, highly scalable microservices architecture offered by KubeSp
Sometimes, the cloud is not necessarily the ideal place for the deployment of resources. For example, physical, dedicated servers tend to function better when it comes to the cases that require considerable compute resources and high disk I/O. Besides, for some specialized workloads that are difficult to migrate to a cloud environment, certified hardware and complicated licensing and support agreements may be required.
-KubeSphere can help enterprises deploy a containerized architecture on bare metal, load balancing traffic with a physical switch. In this connection, [Porter](https://github.com/kubesphere/porter), a CNCF-certified cloud-native tool is born for this end. At the same time, KubeSphere, together with QingCloud VPC and QingStor NeonSAN, provides users with a complete set of features ranging from load balancing, container platform building, network management, and storage. This means virtually all aspects of the containerized architecture can be fully controlled and uniformly managed, without sacrificing the performance in virtualization.
+KubeSphere can help enterprises deploy a containerized architecture on bare metal, load balancing traffic with a physical switch. [PorterLB](https://github.com/kubesphere/porter), a CNCF-certified cloud-native tool, was created for exactly this purpose. At the same time, KubeSphere, together with QingCloud VPC and QingStor NeonSAN, provides users with a complete set of features covering load balancing, container platform building, network management, and storage. This means virtually all aspects of the containerized architecture can be fully controlled and uniformly managed, without sacrificing the performance in virtualization.
For detailed information about how KubeSphere drives the development of numerous industries, please see [Case Studies](https://kubesphere.io/case/).
diff --git a/content/en/docs/multicluster-management/enable-multicluster/agent-connection.md b/content/en/docs/multicluster-management/enable-multicluster/agent-connection.md
index 61224e464..234c4c360 100644
--- a/content/en/docs/multicluster-management/enable-multicluster/agent-connection.md
+++ b/content/en/docs/multicluster-management/enable-multicluster/agent-connection.md
@@ -89,7 +89,7 @@ tower LoadBalancer 10.233.63.191 139.198.110.23 8080:30721/TCP
{{< notice note >}}
-Generally, there is always a LoadBalancer solution in the public cloud, and the external IP can be allocated by the load balancer automatically. If your clusters are running in an on-premises environment, especially a **bare metal environment**, you can use [Porter](https://github.com/kubesphere/porter) as the LB solution.
+Generally, there is always a LoadBalancer solution in the public cloud, and the external IP can be allocated by the load balancer automatically. If your clusters are running in an on-premises environment, especially a **bare metal environment**, you can use [PorterLB](https://github.com/kubesphere/porter) as the LB solution.
{{</ notice >}}
diff --git a/content/en/docs/pluggable-components/_index.md b/content/en/docs/pluggable-components/_index.md
index 2b9371a6b..62f57a84f 100644
--- a/content/en/docs/pluggable-components/_index.md
+++ b/content/en/docs/pluggable-components/_index.md
@@ -47,3 +47,7 @@ Learn how to enable KubeSphere Service Mesh to use different traffic management
## [Network Policies](../pluggable-components/network-policy/)
Learn how to enable Network Policies to control traffic flow at the IP address or port level.
+
+## [Metrics Server](../pluggable-components/metrics-server/)
+
+Learn how to enable the Metrics Server to use HPA to autoscale a Deployment.
\ No newline at end of file
diff --git a/content/en/docs/pluggable-components/metrics-server.md b/content/en/docs/pluggable-components/metrics-server.md
new file mode 100644
index 000000000..fd5c37490
--- /dev/null
+++ b/content/en/docs/pluggable-components/metrics-server.md
@@ -0,0 +1,109 @@
+---
+title: "Metrics Server"
+keywords: "Kubernetes, KubeSphere, Metrics Server"
+description: "How to enable the Metrics Server"
+linkTitle: "Metrics Server"
+weight: 6910
+---
+
+## What is Metrics Server
+
+KubeSphere supports Horizontal Pod Autoscalers (HPA) for [Deployments](../../project-user-guide/application-workloads/deployments/). In KubeSphere, the Metrics Server controls whether the HPA is enabled. You use an HPA object to autoscale a Deployment based on different types of metrics, such as CPU and memory utilization, within the minimum and maximum numbers of replicas you set. In this way, an HPA helps your application run smoothly and consistently in different situations.
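+
+For reference, a minimal HPA object looks like the sketch below — the Deployment name `demo-app` and the thresholds are assumptions, and the object can only take effect once the Metrics Server is running:
+
+```yaml
+apiVersion: autoscaling/v2beta2
+kind: HorizontalPodAutoscaler
+metadata:
+  name: demo-app
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: demo-app
+  minReplicas: 2           # lower bound for replicas
+  maxReplicas: 10          # upper bound for replicas
+  metrics:
+    - type: Resource
+      resource:
+        name: cpu
+        target:
+          type: Utilization
+          averageUtilization: 80   # scale out above 80% average CPU
+```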
+
+## Enable the Metrics Server before Installation
+
+### Installing on Linux
+
+When you use KubeKey to create a configuration file for your cluster, the Metrics Server is enabled by default in the file. In other words, you do not need to enable it manually before you install KubeSphere on Linux.
+
+### Installing on Kubernetes
+
+The process of installing KubeSphere on Kubernetes is described in [Installing KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/). To install the optional Metrics Server component, enable it first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml) file.
+
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml) and open it for editing.
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. In this local `cluster-configuration.yaml` file, navigate to `metrics_server` and enable it by changing `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ metrics_server:
+ enabled: true # Change "false" to "true"
+ ```
+
+3. Execute the following commands to start installation:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+ {{< notice note >}}
+
+If you install KubeSphere on a cloud-hosted Kubernetes engine, the Metrics Server may already be installed in your environment. In this case, it is not recommended that you enable it in `cluster-configuration.yaml` as it may cause conflicts during installation.
+ {{</ notice >}}
+
+## Enable the Metrics Server after Installation
+
+1. Log in to the console as `admin`. Click **Platform** in the top-left corner and select **Clusters Management**.
+
+ 
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+ {{< notice info >}}
+A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes objects.
+ {{</ notice >}}
+
+3. In **Resource List**, click the three dots on the right of `ks-installer` and select **Edit YAML**.
+
+ 
+
+4. In this YAML file, navigate to `metrics_server` and change `false` to `true` for `enabled`. After you finish, click **Update** in the bottom-right corner to save the configuration.
+
+ ```yaml
+ metrics_server:
+   enabled: true # Change "false" to "true"
+ ```
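+
+ If you prefer the command line to the console editor, an equivalent change can be sketched with `kubectl` (assuming the default installer object `ks-installer` in the `kubesphere-system` namespace):
+
+ ```bash
+ # Merge-patch the ClusterConfiguration to enable the Metrics Server
+ kubectl -n kubesphere-system patch clusterconfiguration ks-installer \
+   --type merge -p '{"spec":{"metrics_server":{"enabled":true}}}'
+ ```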
+
+5. You can use the web kubectl to check the installation process by executing the following command:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice tip >}}
+You can find the web kubectl tool by clicking the hammer icon in the bottom-right corner of the console.
+ {{</ notice >}}
+
+## Verify the Installation of the Component
+
+Execute the following command to verify that the Pod of Metrics Server is up and running.
+
+```bash
+kubectl get pod -n kube-system
+```
+
+If the Metrics Server is successfully installed, your cluster may return the following output (`metrics-server-5ddd98b7f9-jjdln`):
+
+```bash
+NAME                                           READY   STATUS    RESTARTS   AGE
+calico-kube-controllers-59d85c5c84-m4blq       1/1     Running   0          28m
+calico-node-nqzcp                              1/1     Running   0          28m
+coredns-74d59cc5c6-8djtt                       1/1     Running   0          28m
+coredns-74d59cc5c6-jv65g                       1/1     Running   0          28m
+kube-apiserver-master                          1/1     Running   0          29m
+kube-controller-manager-master                 1/1     Running   0          29m
+kube-proxy-6qjz7                               1/1     Running   0          28m
+kube-scheduler-master                          1/1     Running   0          29m
+metrics-server-5ddd98b7f9-jjdln                1/1     Running   0          7m17s
+nodelocaldns-8wbfm                             1/1     Running   0          28m
+openebs-localpv-provisioner-84956ddb89-dxbnx   1/1     Running   0          28m
+openebs-ndm-operator-6896cbf7b8-xwcth          1/1     Running   1          28m
+openebs-ndm-pf47z                              1/1     Running   0          28m
+snapshot-controller-0                          1/1     Running   0          22m
+```
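+
+Since the Metrics Server backs the Kubernetes resource metrics API, you can additionally confirm it is serving metrics (the node names in your output will differ):
+
+```bash
+kubectl top nodes
+```
+
+If the command prints per-node CPU and memory usage instead of an error, the component is working.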
\ No newline at end of file
diff --git a/content/en/docs/project-administration/project-gateway.md b/content/en/docs/project-administration/project-gateway.md
index 2285f374d..b444585a8 100644
--- a/content/en/docs/project-administration/project-gateway.md
+++ b/content/en/docs/project-administration/project-gateway.md
@@ -67,6 +67,6 @@ You must configure a load balancer in advance before you select **LoadBalancer**
{{< notice note >}}
-Cloud providers often support load balancer plugins. If you install KubeSphere on major Kubernetes engines on their platforms, you may notice a load balancer is already available in the environment for you to use. If you install KubeSphere in a bare metal environment, you can use [Porter](https://github.com/kubesphere/porter) for load balancing.
+Cloud providers often support load balancer plugins. If you install KubeSphere on major Kubernetes engines on their platforms, you may notice a load balancer is already available in the environment for you to use. If you install KubeSphere in a bare metal environment, you can use [PorterLB](https://github.com/kubesphere/porter) for load balancing.
{{</ notice >}}
\ No newline at end of file
diff --git a/content/en/docs/project-user-guide/configuration/image-registry.md b/content/en/docs/project-user-guide/configuration/image-registry.md
index 6de098213..2a1ac459d 100644
--- a/content/en/docs/project-user-guide/configuration/image-registry.md
+++ b/content/en/docs/project-user-guide/configuration/image-registry.md
@@ -99,7 +99,7 @@ Select **Image Registry Secret** for **Type**. To use images from your private r
{{</ notice >}}
-4. Click **Create**. Later, the Secret will appear on the **Secrets** page. For more information about how to edit the Secret after you create it, see [Check Secret Details](../../project-user-guide/configuration/secrets/#check-secret-details).
+4. Click **Create**. Later, the Secret will appear on the **Secrets** page. For more information about how to edit the Secret after you create it, see [Check Secret Details](../../../project-user-guide/configuration/secrets/#check-secret-details).
**Https**
diff --git a/content/en/docs/project-user-guide/grayscale-release/overview.md b/content/en/docs/project-user-guide/grayscale-release/overview.md
index f672d897a..2b4badcfe 100644
--- a/content/en/docs/project-user-guide/grayscale-release/overview.md
+++ b/content/en/docs/project-user-guide/grayscale-release/overview.md
@@ -32,3 +32,8 @@ Traffic mirroring copies live production traffic and sends it to a mirrored serv
- Test clusters. You can use production traffic of instances for cluster testing.
- Test databases. You can use an empty database to store and load data.
+{{< notice note >}}
+
+The current KubeSphere version does not support grayscale release strategies for multi-cluster apps.
+
+{{</ notice >}}
\ No newline at end of file
diff --git a/content/en/docs/project-user-guide/image-builder/binary-to-image.md b/content/en/docs/project-user-guide/image-builder/binary-to-image.md
index 101658f43..e74c81152 100644
--- a/content/en/docs/project-user-guide/image-builder/binary-to-image.md
+++ b/content/en/docs/project-user-guide/image-builder/binary-to-image.md
@@ -24,7 +24,7 @@ For demonstration and testing purposes, here are some example artifacts you can
## Prerequisites
-- You have enabled the [KubeSphere DevOps System](../../installation/install-devops).
+- You have enabled the [KubeSphere DevOps System](../../../pluggable-components/devops/).
- You need to create a [Docker Hub](http://www.dockerhub.com/) account. GitLab and Harbor are also supported.
- You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project).
- Set a CI dedicated node for building images. This is not mandatory but recommended for the development and production environment as it caches dependencies and reduces build time. For more information, see [Set a CI Node for Dependency Caching](../../../devops-user-guide/how-to-use/set-ci-node/).
diff --git a/content/en/docs/quick-start/create-workspace-and-project.md b/content/en/docs/quick-start/create-workspace-and-project.md
index 4b9bd5dea..26de6ce2a 100644
--- a/content/en/docs/quick-start/create-workspace-and-project.md
+++ b/content/en/docs/quick-start/create-workspace-and-project.md
@@ -159,7 +159,7 @@ The user granted the role `operator` will be a project maintainer who can manage
8. Under **Internet Access**, it can be seen that the Gateway Address and the NodePort of http and https all display on the page.
{{< notice note >}}
-If you want to expose services using the type `LoadBalancer`, you need to use the LoadBalancer plugin of cloud providers. If your Kubernetes cluster is running in a bare metal environment, it is recommended that you use [Porter](https://github.com/kubesphere/porter) as the LoadBalancer plugin.
+If you want to expose services using the type `LoadBalancer`, you need to use the LoadBalancer plugin of cloud providers. If your Kubernetes cluster is running in a bare metal environment, it is recommended that you use [PorterLB](https://github.com/kubesphere/porter) as the LoadBalancer plugin.
{{</ notice >}}

diff --git a/content/en/docs/release/release-v202.md b/content/en/docs/release/release-v202.md
index aaea51b2e..fac2db055 100644
--- a/content/en/docs/release/release-v202.md
+++ b/content/en/docs/release/release-v202.md
@@ -13,7 +13,7 @@ KubeSphere 2.0.2 was released on July 9, 2019, which fixes known bugs and enhanc
### Enhanced Features
-- [API docs](/api-reference/api-docs/) are available on the official website.
+- [API docs](../../api-reference/api-docs/) are available on the official website.
- Block brute-force attacks.
- Standardize the maximum length of resource names.
- Upgrade the gateway of project (Ingress Controller) to the version of 0.24.1. Support Ingress grayscale release.
diff --git a/content/en/docs/release/release-v300.md b/content/en/docs/release/release-v300.md
index 918768aed..a2b46de19 100644
--- a/content/en/docs/release/release-v300.md
+++ b/content/en/docs/release/release-v300.md
@@ -120,7 +120,7 @@ weight: 18100
| MySQL | 5.7.30 | 1.6.6 |
| MySQL Exporter | 0.11.0 | 0.5.3 |
| Nginx | 1.18.0 | 1.3.2 |
- | Porter | 0.3-alpha | 0.1.3 |
+ | PorterLB | 0.3-alpha | 0.1.3 |
| PostgreSQL | 12.0 | 0.3.2 |
| RabbitMQ | 3.8.1 | 0.3.0 |
| Redis | 5.0.5 | 0.3.2 |
diff --git a/content/en/service-mesh/_index.md b/content/en/service-mesh/_index.md
index 8e810abff..d1dcbeee7 100644
--- a/content/en/service-mesh/_index.md
+++ b/content/en/service-mesh/_index.md
@@ -6,7 +6,7 @@ css: "scss/scenario.scss"
section1:
title: KubeSphere Service Mesh provides a simpler distribution of Istio with consolidated UX.
- content: If you’re running and scaling microservices on Kubernetes, it’s time to adopt the istio-based service mesh for your distributed system. We design a unified UI to integrate and manage tools including Istio, Envoy and Jaeger.
+ content: If you’re running and scaling microservices on Kubernetes, it’s time to adopt the Istio-based service mesh for your distributed system. We design a unified UI to integrate and manage tools including Istio, Envoy and Jaeger.
image: /images/service-mesh/banner.jpg
image: /images/service-mesh/service-mesh.jpg
@@ -20,9 +20,9 @@ section2:
summary:
contentList:
- content: Canary release provides canary rollouts and staged rollouts with percentage-based traffic splits
- - content: Blue-green deployment allows the new version of the application to be deployed in the green environment and tested for functionality and performance
- - content: Traffic mirroring enables teams to bring changes to production with as few risks as possible
- - content: Circuit breakers allow users to set limits for calls to individual hosts within a service
+ - content: Blue-green deployment allows the new version of an application to be deployed in a separate environment and tested for functionality and performance
+ - content: Traffic mirroring is a powerful, risk-free method of testing your app versions as it sends a copy of live traffic to a mirrored Service
+ - content: Circuit breakers allow users to set limits for calls to individual hosts within a Service
- title: Visualization
image: /images/service-mesh/visualization.png
@@ -31,7 +31,7 @@ section2:
- title: Distributed Tracing
image: /images/service-mesh/distributed-tracing.png
- summary: Based on Jaeger, KubeSphere enables users to track how each service interacts with other services. It brings a deeper understanding about request latency, bottlenecks, serialization and parallelism via visualization.
+ summary: Based on Jaeger, KubeSphere enables users to track how Services interact with one another. It brings a deeper understanding of request latency, bottlenecks, serialization and parallelism via visualization.
contentList:
section3:
diff --git a/content/zh/docs/application-store/_index.md b/content/zh/docs/application-store/_index.md
index 64dab4487..cedfd43bc 100644
--- a/content/zh/docs/application-store/_index.md
+++ b/content/zh/docs/application-store/_index.md
@@ -75,6 +75,14 @@ KubeSphere 内置了15个在 Kubernetes 上常用的精选应用。只需点击
了解如何通过应用仓库部署 GitLab 并访问服务。
+### [在 KubeSphere 中部署 TiDB Operator 和 TiDB 集群](../application-store/external-apps/deploy-tidb/)
+
+了解如何在 KubeSphere 中部署 TiDB Operator 和 TiDB 集群。
+
+### [在 KubeSphere 中部署 MeterSphere](../application-store/external-apps/deploy-metersphere/)
+
+了解如何在 KubeSphere 中部署 MeterSphere。
+
## 应用开发者指南
### [Helm 开发者指南](../application-store/app-developer-guide/helm-developer-guide/)
diff --git a/content/zh/docs/application-store/external-apps/deploy-metersphere.md b/content/zh/docs/application-store/external-apps/deploy-metersphere.md
new file mode 100644
index 000000000..398172c3f
--- /dev/null
+++ b/content/zh/docs/application-store/external-apps/deploy-metersphere.md
@@ -0,0 +1,90 @@
+---
+title: "在 KubeSphere 中部署 MeterSphere"
+keywords: 'KubeSphere, Kubernetes, 应用程序, MeterSphere'
+description: '如何在 KubeSphere 中部署 MeterSphere'
+linkTitle: "在 KubeSphere 中部署 MeterSphere"
+weight: 14330
+---
+
+MeterSphere is an open-source, one-stop, and enterprise-level continuous testing platform. It features test tracking, interface testing, and performance testing.
+
+This tutorial demonstrates how to deploy MeterSphere on KubeSphere.
+
+## Prerequisites
+
+- You need to enable [the OpenPitrix system](../../../pluggable-components/app-store/).
+- You need to create a workspace, a project, and two user accounts (`ws-admin` and `project-regular`) for this tutorial. The account `ws-admin` must be granted the role of `workspace-admin` in the workspace, and the account `project-regular` must be invited to the project with the role of `operator`. If they are not ready, refer to [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
+
+## Hands-on Lab
+
+### Step 1: Add an app repository
+
+1. Log in to KubeSphere as `ws-admin`. In your workspace, go to **App Repos** under **Apps Management**, and then click **Add Repo**.
+
+ 
+
+2. In the dialog that appears, enter `metersphere` for the app repository name and `https://charts.kubesphere.io/test` for the MeterSphere repository URL. Click **Validate** to verify the URL and you will see a green check mark next to the URL if it is available. Click **OK** to continue.
+
+ 
+
+3. Your repository appears in the list after it is successfully imported to KubeSphere.
+
+ 
+
+### Step 2: Deploy MeterSphere
+
+1. Log out of KubeSphere and log back in as `project-regular`. In your project, go to **Applications** under **Application Workloads** and click **Deploy New Application**.
+
+ 
+
+2. In the dialog that appears, select **From App Templates**.
+
+ 
+
+3. Select `metersphere` from the drop-down list, then click **metersphere-chart**.
+
+ 
+
+4. On the **App Info** tab and the **Chart Files** tab, you can view the default configuration from the console. Click **Deploy** to continue.
+
+ 
+
+5. On the **Basic Info** page, you can view the app name, app version, and deployment location. Click **Next** to continue.
+
+ 
+
+6. On the **App Config** page, change the value of `imageTag` from `master` to `v1.6`, and then click **Deploy**.
+
+ 
+
+7. Wait for the MeterSphere app to be up and running.
+
+ 
+
+8. Go to **Workloads**, and you can see two Deployments and three StatefulSets created for MeterSphere.
+
+ 
+
+ 
+
+ {{< notice note >}}
+
+ It may take a while before all the Deployments and StatefulSets are up and running.
+
+ {{</ notice >}}
+
+### Step 3: Access MeterSphere
+
+1. Go to **Services** under **Application Workloads**, and you can see the MeterSphere Service and its type is set to `NodePort` by default.
+
+ 
+
+2. You can access MeterSphere through `{$NodeIP}:{NodePort}` using the default account and password (`admin/metersphere`).
+
+ 
+
+ {{< notice note >}}
+
+ You may need to open the port in your security groups and configure related port forwarding rules depending on where your Kubernetes cluster is deployed. Make sure to use your own `NodeIP`.
+
+ {{</ notice >}}
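+
+ To look up the `NodePort` mentioned above with kubectl, you can run a command like the following (the project namespace is an assumption; replace it with your own):
+
+ ```bash
+ # List Services in the project and note the port after the colon, e.g. 8081:30823/TCP
+ kubectl get svc -n <your-project> | grep metersphere
+ ```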
\ No newline at end of file
diff --git a/content/zh/docs/cluster-administration/shut-down-and-restart-cluster-gracefully.md b/content/zh/docs/cluster-administration/shut-down-and-restart-cluster-gracefully.md
index a34a3f5a6..16d854d8f 100644
--- a/content/zh/docs/cluster-administration/shut-down-and-restart-cluster-gracefully.md
+++ b/content/zh/docs/cluster-administration/shut-down-and-restart-cluster-gracefully.md
@@ -17,7 +17,7 @@ icon: "/images/docs/docs.svg"
## 准备工作
-- 请先进行 [etcd 备份](https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/recovery.md#snapshotting-the-keyspace),再关闭集群。
+- 请先进行 [etcd 备份](https://etcd.io/docs/current/op-guide/recovery/#snapshotting-the-keyspace),再关闭集群。
- 主机之间已设置 SSH [免密登录](https://man.openbsd.org/ssh.1#AUTHENTICATION)。
## 关闭集群
@@ -86,4 +86,4 @@ kubectl get nodes -l node-role.kubernetes.io/master
kubectl get nodes -l node-role.kubernetes.io/worker
```
-如果您的集群重启失败,请尝试[恢复 etcd 集群](https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/recovery.md#restoring-a-cluster)。
+如果您的集群重启失败,请尝试[恢复 etcd 集群](https://etcd.io/docs/current/op-guide/recovery/#restoring-a-cluster)。
diff --git a/content/zh/docs/faq/_index.md b/content/zh/docs/faq/_index.md
index 2191afb2c..440c0f71b 100644
--- a/content/zh/docs/faq/_index.md
+++ b/content/zh/docs/faq/_index.md
@@ -27,7 +27,7 @@ icon: "/images/docs/docs.svg"
## 升级
-### [升级 QingCloud CSI](../faq/upgrade/upgrade-faq/)
+### [升级 QingCloud CSI](../faq/upgrade/qingcloud-csi-upgrade/)
升级 KubeSphere 后升级 QingCloud CSI。
@@ -71,4 +71,14 @@ icon: "/images/docs/docs.svg"
### [更改控制台语言](../faq/console/change-console-language/)
-选择控制台的显示语言。
\ No newline at end of file
+选择控制台的显示语言。
+
+## 应用程序
+
+### [下架 KubeSphere 中的内置应用](../faq/applications/remove-built-in-apps/)
+
+了解如何下架 KubeSphere 中的内置应用。
+
+### [删除应用后复用相同应用名称](../faq/applications/reuse-the-same-app-name-after-deletion/)
+
+了解如何在删除应用后复用相同应用名称。
\ No newline at end of file
diff --git a/content/zh/docs/faq/applications/remove-built-in-apps.md b/content/zh/docs/faq/applications/remove-built-in-apps.md
index c5c59b2e9..babecd46b 100644
--- a/content/zh/docs/faq/applications/remove-built-in-apps.md
+++ b/content/zh/docs/faq/applications/remove-built-in-apps.md
@@ -1,8 +1,8 @@
---
-title: "下架 KubeSphere 中的内置应用"
-keywords: "KubeSphere, OpenPitrix, 应用程序, 应用"
-description: "如何下架 KubeSphere 中的内置应用"
-linkTitle: "下架 KubeSphere 中的内置应用"
+title: "Remove Built-in Apps in KubeSphere"
+keywords: "KubeSphere, OpenPitrix, Application, App"
+description: "How to remove built-in apps in KubeSphere"
+linkTitle: "Remove Built-in Apps in KubeSphere"
Weight: 16910
---
@@ -17,25 +17,25 @@ As an open-source and app-centric container platform, KubeSphere integrates 15 b
1. Log in to the web console of KubeSphere as `admin`, click **Platform** in the upper left corner, and then select **App Store Management**.
- 
+ 
- 
+ 
2. In the **App Store** page, you can see all 15 built-in apps displayed in the list. Select an app that you want to remove from the App Store. For example, click **tomcat** to go to its detail page.
- 
+ 
3. In the detail page of tomcat, click **Suspend App** to remove the app.
- 
+ 
4. In the dialog that appears, click **OK** to confirm your operation.
- 
+ 
5. To make the app available again in the App Store, click **Activate App** and then click **OK** to confirm your operation.
- 
+ 
{{< notice note >}}
diff --git a/content/zh/docs/faq/applications/reuse-the-same-app-name-after-deletion.md b/content/zh/docs/faq/applications/reuse-the-same-app-name-after-deletion.md
new file mode 100644
index 000000000..7fc6a8a0c
--- /dev/null
+++ b/content/zh/docs/faq/applications/reuse-the-same-app-name-after-deletion.md
@@ -0,0 +1,48 @@
+---
+title: "删除应用后复用相同应用名称"
+keywords: "KubeSphere, OpenPitrix, 应用程序, 应用"
+description: "如何在删除应用后复用相同应用名称"
+linkTitle: "删除应用后复用相同应用名称"
+weight: 16920
+---
+
+To deploy an app in KubeSphere, tenants can go to the App Store and select an available app based on their needs. However, tenants may see an error when deploying an app that reuses the name of a deleted one. This tutorial demonstrates how to reuse an app name after the app's deletion.
+
+## Prerequisites
+
+- You need to use an account invited to your project with the role of `operator`. This tutorial uses the account `project-regular` for demonstration purposes. For more information, refer to [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
+- You need to [enable the App Store](../../../pluggable-components/app-store/).
+
+## Reuse the Same App Name
+
+### Deploy an app from the App Store
+
+1. Log in to the web console of KubeSphere as `project-regular` and deploy an app from the App Store. This tutorial uses Redis as an example app and sets the app name to `redis-1`. For more information about how to deploy Redis, refer to [Deploy Redis on KubeSphere](../../../application-store/built-in-apps/redis-app/).
+
+ 
+
+2. Click the app to go to its detail page, and then click **Delete** to delete it.
+
+ 
+
+### Reuse the same app name
+
+1. If you try to deploy a new Redis app with the same app name `redis-1`, you will see the following error prompt in the upper-right corner.
+
+ 
+
+2. In your project, go to **Secrets** under **Configurations**, and enter `redis-1` in the search bar to search for the Secret.
+
+ 
+
+3. Click the Secret to go to its detail page, and click **More** to select **Delete** from the drop-down menu.
+
+ 
+
+4. In the dialog that appears, input the Secret name and click **OK** to delete it.
+
+ 
+
+5. Now, you can deploy a new Redis app with the same app name `redis-1`.
+
+ 
\ No newline at end of file
diff --git a/content/zh/docs/faq/installation/telemetry.md b/content/zh/docs/faq/installation/telemetry.md
index fdfa1b236..b026e181e 100644
--- a/content/zh/docs/faq/installation/telemetry.md
+++ b/content/zh/docs/faq/installation/telemetry.md
@@ -68,7 +68,7 @@ Telemetry 在安装 KubeSphere 时默认启用。同时,您也可以在安装
2. 选择**集群管理**,在左侧导航栏中点击**自定义资源 CRD**。
{{< notice note >}}
-如果[多集群功能](../../multicluster-management/)已经启用,您需要先选择一个集群。
+如果[多集群功能](../../../multicluster-management/)已经启用,您需要先选择一个集群。
{{</ notice >}}
3. 在搜索框中输入 `clusterconfiguration`,点击搜索结果打开详情页。
diff --git a/content/zh/docs/installing-on-linux/_index.md b/content/zh/docs/installing-on-linux/_index.md
index 377883b61..a56fa8208 100644
--- a/content/zh/docs/installing-on-linux/_index.md
+++ b/content/zh/docs/installing-on-linux/_index.md
@@ -80,9 +80,3 @@ icon: "/images/docs/docs.svg"
## [卸载 KubeSphere 和 Kubernetes](../installing-on-linux/uninstall-kubesphere-and-kubernetes/)
从机器上删除 KubeSphere 和 Kubernetes。
-
-## 常用指南
-
-以下是本章节中的常用指南,建议您优先参考。
-
-{{< popularPage icon="/images/docs/qingcloud-2.svg" title="Deploy KubeSphere on QingCloud" description="Provision an HA KubeSphere cluster on QingCloud." link="../installing-on-linux/public-cloud/kubesphere-on-qingcloud-instance/" >}}
diff --git a/content/zh/docs/introduction/features.md b/content/zh/docs/introduction/features.md
index b757df5a3..4b3628d9f 100644
--- a/content/zh/docs/introduction/features.md
+++ b/content/zh/docs/introduction/features.md
@@ -11,7 +11,7 @@ weight: 1300
KubeSphere 作为开源的企业级全栈化容器平台,为用户提供了一个健壮、安全、功能丰富、具备极致体验的 Web 控制台。拥有企业级 Kubernetes 所需的最常见的功能,如工作负载管理,网络策略配置,微服务治理(基于 Istio),DevOps 工程 (CI/CD) ,安全管理,Source to Image/Binary to Image,多租户管理,多维度监控,日志查询和收集,告警通知,审计,应用程序管理和镜像管理、应用配置密钥管理等功能模块。
-它还支持各种开源存储和网络解决方案以及云存储服务。例如,KubeSphere 为用户提供了功能强大的云原生工具[负载均衡器插件 Porter](https://porterlb.io/),这是为 Kubernetes 集群开发的 CNCF 认证的负载均衡插件。
+它还支持各种开源存储和网络解决方案以及云存储服务。例如,KubeSphere 为用户提供了功能强大的云原生工具[负载均衡器插件 PorterLB](https://porterlb.io/),这是为 Kubernetes 集群开发的 CNCF 认证的负载均衡插件。
有了易于使用的图形化 Web 控制台,KubeSphere 简化了用户的学习曲线并推动了更多的企业使用 Kubernetes 。
@@ -160,7 +160,7 @@ KubeSphere 通过可视化界面操作监控、运维功能,可简化操作和
- 支持 Calico、Flannel 等开源网络方案。所
-- [Porter](https://github.com/kubesphere/porter),是由 KubeSphere 开发团队设计、经过 CNCF 认证的一款适用于物理机部署 Kubernetes 的负载均衡插件。 主要特点:
+- [PorterLB](https://github.com/kubesphere/porter),是由 KubeSphere 开发团队设计、经过 CNCF 认证的一款适用于物理机部署 Kubernetes 的负载均衡插件。 主要特点:
1. ECMP 路由负载均衡
2. BGP 动态路由
@@ -170,4 +170,4 @@ KubeSphere 通过可视化界面操作监控、运维功能,可简化操作和
6. 通过 CRD 动态配置BGP服务器 (v0.3.0)
7. 通过 CRD 动态配置BGP对等 (v0.3.0)
- Porter 有关更多信息,请参见 [本文](https://kubesphere.io/conferences/porter/)。
+ 有关 PorterLB 的更多信息,请参见 [本文](https://kubesphere.io/conferences/porter/)。
diff --git a/content/zh/docs/introduction/scenarios.md b/content/zh/docs/introduction/scenarios.md
index e10204dd1..0c37e5689 100644
--- a/content/zh/docs/introduction/scenarios.md
+++ b/content/zh/docs/introduction/scenarios.md
@@ -99,6 +99,6 @@ DevOps 是一套重要的实践和方法,让开发和运维团队能够更高
有时,云端并非资源部署的最优环境。例如,当需要大量计算资源并要求硬盘高 I/O 速度时,使用专门的物理服务器可以实现更佳的性能。此外,对于一些难以迁移上云的特殊工作负载,可能还需要通过经认证的硬件运行,加以复杂的许可与支持协议,在这种情况下,企业更倾向于使用裸机环境部署应用。
-借助新一代轻量级安装器 [KubeKey](https://github.com/kubesphere/kubekey),KubeSphere 帮助企业快速在裸机环境搭建容器化架构,并通过 Porter 实现流量的负载均衡。[Porter](https://github.com/kubesphere/porter) 由 KubeSphere 社区开源,专为裸机环境下的负载均衡所设计,现已加入 CNCF Landscape,是为 CNCF 所认可的构建云原生最佳实践中的重要一环。
+借助新一代轻量级安装器 [KubeKey](https://github.com/kubesphere/kubekey),KubeSphere 帮助企业快速在裸机环境搭建容器化架构,并通过 PorterLB 实现流量的负载均衡。[PorterLB](https://github.com/kubesphere/porter) 由 KubeSphere 社区开源,专为裸机环境下的负载均衡所设计,现已加入 CNCF Landscape,是为 CNCF 所认可的构建云原生最佳实践中的重要一环。
有关 KubeSphere 如何推动各行各业的发展并实现数字化转型,请参见[用户案例学习](../../../case/)。
\ No newline at end of file
diff --git a/content/zh/docs/multicluster-management/enable-multicluster/agent-connection.md b/content/zh/docs/multicluster-management/enable-multicluster/agent-connection.md
index 87508c337..bc70da983 100644
--- a/content/zh/docs/multicluster-management/enable-multicluster/agent-connection.md
+++ b/content/zh/docs/multicluster-management/enable-multicluster/agent-connection.md
@@ -89,7 +89,7 @@ tower LoadBalancer 10.233.63.191 139.198.110.23 8080:30721/TCP
{{< notice note >}}
-一般来说,主流公有云厂商会提供 LoadBalancer 解决方案,并且负载均衡器可以自动分配外部 IP。如果您的集群运行在本地环境中,尤其是在**裸机环境**中,可以使用 [Porter](https://github.com/kubesphere/porter) 作为负载均衡器解决方案。
+一般来说,主流公有云厂商会提供 LoadBalancer 解决方案,并且负载均衡器可以自动分配外部 IP。如果您的集群运行在本地环境中,尤其是在**裸机环境**中,可以使用 [PorterLB](https://github.com/kubesphere/porter) 作为负载均衡器解决方案。
{{</ notice >}}
diff --git a/content/zh/docs/pluggable-components/_index.md b/content/zh/docs/pluggable-components/_index.md
index 003a38f12..3d9b97799 100644
--- a/content/zh/docs/pluggable-components/_index.md
+++ b/content/zh/docs/pluggable-components/_index.md
@@ -47,3 +47,7 @@ icon: "/images/docs/docs.svg"
## [网络策略](../pluggable-components/network-policy/)
了解如何启用网络策略来控制 IP 地址或端口级别的流量。
+
+## [Metrics Server](../pluggable-components/metrics-server/)
+
+了解如何启用 Metrics Server 以使用 HPA 对部署进行自动伸缩。
\ No newline at end of file
diff --git a/content/zh/docs/pluggable-components/metrics-server.md b/content/zh/docs/pluggable-components/metrics-server.md
new file mode 100644
index 000000000..fd5c37490
--- /dev/null
+++ b/content/zh/docs/pluggable-components/metrics-server.md
@@ -0,0 +1,109 @@
+---
+title: "Metrics Server"
+keywords: "Kubernetes, KubeSphere, Metrics Server"
+description: "How to enable the Metrics Server"
+linkTitle: "Metrics Server"
+weight: 6910
+---
+
+## What is Metrics Server
+
+KubeSphere supports Horizontal Pod Autoscalers (HPA) for [Deployments](../../project-user-guide/application-workloads/deployments/). In KubeSphere, the Metrics Server controls whether the HPA is enabled. You use an HPA object to autoscale a Deployment based on different types of metrics, such as CPU and memory utilization, within the minimum and maximum numbers of replicas you set. In this way, an HPA helps to make sure your application runs smoothly and consistently in different situations.
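+
+As a minimal sketch (the Deployment name `demo-app` and the thresholds are hypothetical), an HPA object driven by the metrics that the Metrics Server exposes might look like this:
+
+```yaml
+apiVersion: autoscaling/v2beta2
+kind: HorizontalPodAutoscaler
+metadata:
+  name: demo-app
+spec:
+  scaleTargetRef:
+    apiVersion: apps/v1
+    kind: Deployment
+    name: demo-app        # the Deployment to autoscale
+  minReplicas: 1
+  maxReplicas: 5
+  metrics:
+    - type: Resource
+      resource:
+        name: cpu
+        target:
+          type: Utilization
+          averageUtilization: 80   # scale out when average CPU usage exceeds 80%
+```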
+
+## Enable the Metrics Server before Installation
+
+### Installing on Linux
+
+When you use KubeKey to create a configuration file for your cluster, the Metrics Server is enabled by default in the file. In other words, you do not need to manually enable it before you install KubeSphere on Linux.
+
+### Installing on Kubernetes
+
+The process of installing KubeSphere on Kubernetes is described in [Installing KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/). To install the optional component Metrics Server, enable it first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml) file.
+
+1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml) and open it for editing.
+
+ ```bash
+ vi cluster-configuration.yaml
+ ```
+
+2. In this local `cluster-configuration.yaml` file, navigate to `metrics_server` and enable it by changing `false` to `true` for `enabled`. Save the file after you finish.
+
+ ```yaml
+ metrics_server:
+   enabled: true # Change "false" to "true"
+ ```
+
+3. Execute the following commands to start installation:
+
+ ```bash
+ kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/kubesphere-installer.yaml
+
+ kubectl apply -f cluster-configuration.yaml
+ ```
+
+ {{< notice note >}}
+
+If you install KubeSphere on a cloud-hosted Kubernetes engine, the Metrics Server may already be installed in your environment. In this case, it is not recommended that you enable it in `cluster-configuration.yaml` as it may cause conflicts during installation.
+ {{</ notice >}}
+
+## Enable the Metrics Server after Installation
+
+1. Log in to the console as `admin`. Click **Platform** in the top-left corner and select **Clusters Management**.
+
+ 
+
+2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
+
+ {{< notice info >}}
+A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. They can use these resources like any other native Kubernetes objects.
+ {{</ notice >}}
+
+3. In **Resource List**, click the three dots on the right of `ks-installer` and select **Edit YAML**.
+
+ 
+
+4. In this YAML file, navigate to `metrics_server` and change `false` to `true` for `enabled`. After you finish, click **Update** in the bottom-right corner to save the configuration.
+
+ ```yaml
+ metrics_server:
+   enabled: true # Change "false" to "true"
+ ```
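+
+ If you prefer the command line to the console editor, an equivalent change can be sketched with `kubectl` (assuming the default installer object `ks-installer` in the `kubesphere-system` namespace):
+
+ ```bash
+ # Merge-patch the ClusterConfiguration to enable the Metrics Server
+ kubectl -n kubesphere-system patch clusterconfiguration ks-installer \
+   --type merge -p '{"spec":{"metrics_server":{"enabled":true}}}'
+ ```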
+
+5. You can use the web kubectl to check the installation process by executing the following command:
+
+ ```bash
+ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
+ ```
+
+ {{< notice tip >}}
+You can find the web kubectl tool by clicking the hammer icon in the bottom-right corner of the console.
+ {{</ notice >}}
+
+## Verify the Installation of the Component
+
+Execute the following command to verify that the Pod of Metrics Server is up and running.
+
+```bash
+kubectl get pod -n kube-system
+```
+
+If the Metrics Server is successfully installed, your cluster may return the following output (`metrics-server-5ddd98b7f9-jjdln`):
+
+```bash
+NAME                                           READY   STATUS    RESTARTS   AGE
+calico-kube-controllers-59d85c5c84-m4blq       1/1     Running   0          28m
+calico-node-nqzcp                              1/1     Running   0          28m
+coredns-74d59cc5c6-8djtt                       1/1     Running   0          28m
+coredns-74d59cc5c6-jv65g                       1/1     Running   0          28m
+kube-apiserver-master                          1/1     Running   0          29m
+kube-controller-manager-master                 1/1     Running   0          29m
+kube-proxy-6qjz7                               1/1     Running   0          28m
+kube-scheduler-master                          1/1     Running   0          29m
+metrics-server-5ddd98b7f9-jjdln                1/1     Running   0          7m17s
+nodelocaldns-8wbfm                             1/1     Running   0          28m
+openebs-localpv-provisioner-84956ddb89-dxbnx   1/1     Running   0          28m
+openebs-ndm-operator-6896cbf7b8-xwcth          1/1     Running   1          28m
+openebs-ndm-pf47z                              1/1     Running   0          28m
+snapshot-controller-0                          1/1     Running   0          22m
+```
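+
+Since the Metrics Server backs the Kubernetes resource metrics API, you can additionally confirm it is serving metrics (the node names in your output will differ):
+
+```bash
+kubectl top nodes
+```
+
+If the command prints per-node CPU and memory usage instead of an error, the component is working.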
\ No newline at end of file
diff --git a/content/zh/docs/project-administration/project-gateway.md b/content/zh/docs/project-administration/project-gateway.md
index 266f38386..58d520436 100644
--- a/content/zh/docs/project-administration/project-gateway.md
+++ b/content/zh/docs/project-administration/project-gateway.md
@@ -67,6 +67,6 @@ KubeSphere 项目中的网关是一个[ NGINX Ingress 控制器](https://www.ngi
{{< notice note >}}
-云厂商通常支持负载均衡器插件。如果在主流的 Kubernetes Engine 上安装 KubeSphere,您可能会发现环境中已有可用的负载均衡器。如果在裸金属环境中安装 KubeSphere,您可以使用 [Porter](https://github.com/kubesphere/porter) 作为负载均衡器。
+云厂商通常支持负载均衡器插件。如果在主流的 Kubernetes Engine 上安装 KubeSphere,您可能会发现环境中已有可用的负载均衡器。如果在裸金属环境中安装 KubeSphere,您可以使用 [PorterLB](https://github.com/kubesphere/porter) 作为负载均衡器。
{{</ notice >}}
\ No newline at end of file
diff --git a/content/zh/docs/project-user-guide/grayscale-release/overview.md b/content/zh/docs/project-user-guide/grayscale-release/overview.md
index 409093093..034d34c6e 100644
--- a/content/zh/docs/project-user-guide/grayscale-release/overview.md
+++ b/content/zh/docs/project-user-guide/grayscale-release/overview.md
@@ -32,3 +32,8 @@ KubeSphere 为用户提供三种灰度发布策略。
- 测试集群。您可以将实例的生产流量用于集群测试。
- 测试数据库。您可以使用空数据库来存储和加载数据。
+{{< notice note >}}
+
+当前版本的 KubeSphere 暂不支持为多集群应用创建灰度发布策略。
+
+{{</ notice >}}
\ No newline at end of file
diff --git a/content/zh/docs/quick-start/create-workspace-and-project.md b/content/zh/docs/quick-start/create-workspace-and-project.md
index d865dede2..874b8044d 100644
--- a/content/zh/docs/quick-start/create-workspace-and-project.md
+++ b/content/zh/docs/quick-start/create-workspace-and-project.md
@@ -165,7 +165,7 @@ KubeSphere 的多租户系统分三个层级,即**群集**、**企业空间**
8. 在**外网访问**下,可以在页面上看到网关地址以及 http/https 的端口。
{{< notice note >}}
-如果要使用 `LoadBalancer` 暴露服务,则需要使用云厂商的 LoadBalancer 插件。如果您的 Kubernetes 集群在裸机环境中运行,建议使用 [Porter](https://github.com/kubesphere/porter) 作为 LoadBalancer 插件。
+如果要使用 `LoadBalancer` 暴露服务,则需要使用云厂商的 LoadBalancer 插件。如果您的 Kubernetes 集群在裸机环境中运行,建议使用 [PorterLB](https://github.com/kubesphere/porter) 作为 LoadBalancer 插件。
{{</ notice >}}

diff --git a/content/zh/docs/release/release-v202.md b/content/zh/docs/release/release-v202.md
index aaea51b2e..fac2db055 100644
--- a/content/zh/docs/release/release-v202.md
+++ b/content/zh/docs/release/release-v202.md
@@ -13,7 +13,7 @@ KubeSphere 2.0.2 was released on July 9, 2019, which fixes known bugs and enhanc
### Enhanced Features
-- [API docs](/api-reference/api-docs/) are available on the official website.
+- [API docs](../../api-reference/api-docs/) are available on the official website.
- Block brute-force attacks.
- Standardize the maximum length of resource names.
- Upgrade the gateway of project (Ingress Controller) to the version of 0.24.1. Support Ingress grayscale release.
diff --git a/content/zh/docs/release/release-v300.md b/content/zh/docs/release/release-v300.md
index 918768aed..a2b46de19 100644
--- a/content/zh/docs/release/release-v300.md
+++ b/content/zh/docs/release/release-v300.md
@@ -120,7 +120,7 @@ weight: 18100
| MySQL | 5.7.30 | 1.6.6 |
| MySQL Exporter | 0.11.0 | 0.5.3 |
| Nginx | 1.18.0 | 1.3.2 |
- | Porter | 0.3-alpha | 0.1.3 |
+ | PorterLB | 0.3-alpha | 0.1.3 |
| PostgreSQL | 12.0 | 0.3.2 |
| RabbitMQ | 3.8.1 | 0.3.0 |
| Redis | 5.0.5 | 0.3.2 |
diff --git a/content/zh/service-mesh/_index.md b/content/zh/service-mesh/_index.md
index e582565c0..4f307729c 100644
--- a/content/zh/service-mesh/_index.md
+++ b/content/zh/service-mesh/_index.md
@@ -5,40 +5,40 @@ layout: "scenario"
css: "scss/scenario.scss"
section1:
- title: KubeSphere Service Mesh provides a simpler distribution of Istio with consolidated UX.
- content: If you’re running and scaling microservices on Kubernetes, it’s time to adopt the istio-based service mesh for your distributed system. We design a unified UI to integrate and manage tools including Istio, Envoy and Jaeger.
+ title: KubeSphere 基于 Istio 微服务框架提供可视化的微服务治理功能,全面提升用户体验
+ content: 如果您在 Kubernetes 上运行和伸缩微服务,您可以为您的分布式系统配置基于 Istio 的微服务治理功能。KubeSphere 提供统一的操作界面,便于您集成并管理各类工具,包括 Istio、Envoy 和 Jaeger 等。
image: /images/service-mesh/banner.jpg
image: /images/service-mesh/service-mesh.jpg
bg: /images/service-mesh/28.svg
section2:
- title: What Makes KubeSphere Service Mesh Special
+ title: KubeSphere 独特的微服务治理功能
list:
- - title: Traffic Management
+ - title: 流量治理
image: /images/service-mesh/traffic-management.png
summary:
contentList:
- - content: Canary release provides canary rollouts and staged rollouts with percentage-based traffic splits
- - content: Blue-green deployment allows the new version of the application to be deployed in the green environment and tested for functionality and performance
- - content: Traffic mirroring enables teams to bring changes to production with as few risks as possible
- - content: Circuit breakers allow users to set limits for calls to individual hosts within a service
+ - content: 金丝雀发布提供灵活的灰度策略,将流量按照所配置的比例转发至当前不同的灰度版本
+ - content: 蓝绿部署支持零宕机部署,让应用程序可以在独立的环境中测试新版本的功能和特性
+ - content: 流量镜像模拟生产环境,将实时流量的副本发送给被镜像的服务
+ - content: 熔断机制支持为服务设置对单个主机的调用限制
- - title: Visualization
+ - title: 可视化
image: /images/service-mesh/visualization.png
- summary: Observability is extremely useful in understanding cloud-native microservice interconnections. KubeSphere has the ability to visualize the connections between microservices and the topology of how they interconnect.
+ summary: 可观察性有助于了解云原生微服务之间的关系。KubeSphere 支持可视化界面,直观地呈现微服务之间的拓扑关系,并提供细粒度的监控数据。
contentList:
- - title: Distributed Tracing
+ - title: 分布式链路追踪
image: /images/service-mesh/distributed-tracing.png
- summary: Based on Jaeger, KubeSphere enables users to track how each service interacts with other services. It brings a deeper understanding about request latency, bottlenecks, serialization and parallelism via visualization.
+ summary: KubeSphere 基于 Jaeger 让用户追踪服务之间的通讯,以可视化的方式使用户更深入地了解请求延迟、性能瓶颈、序列化和并行调用等。
contentList:
section3:
- title: See KubeSphere Service Mesh In Action
+ title: 观看 KubeSphere 微服务治理工作流操作演示
videoLink: https://www.youtube.com/embed/EkGWtwcsdE4
- content: Want to get started in action by following the hands-on lab?
- btnContent: Start Hands-on Lab
+ content: 想自己动手体验实际操作?
+ btnContent: 开始动手实验
link: https://kubesphere.com.cn/docs/pluggable-components/service-mesh/
bgLeft: /images/service-mesh/3-2.svg
bgRight: /images/service-mesh/3.svg
diff --git a/static/images/docs/appstore/external-apps/deploy-metersphere/add-metersphere-repo.PNG b/static/images/docs/appstore/external-apps/deploy-metersphere/add-metersphere-repo.PNG
new file mode 100644
index 000000000..c7710dded
Binary files /dev/null and b/static/images/docs/appstore/external-apps/deploy-metersphere/add-metersphere-repo.PNG differ
diff --git a/static/images/docs/appstore/external-apps/deploy-metersphere/add-repo.PNG b/static/images/docs/appstore/external-apps/deploy-metersphere/add-repo.PNG
new file mode 100644
index 000000000..6dd4676b1
Binary files /dev/null and b/static/images/docs/appstore/external-apps/deploy-metersphere/add-repo.PNG differ
diff --git a/static/images/docs/appstore/external-apps/deploy-metersphere/added-metersphere-repo.PNG b/static/images/docs/appstore/external-apps/deploy-metersphere/added-metersphere-repo.PNG
new file mode 100644
index 000000000..537208f8d
Binary files /dev/null and b/static/images/docs/appstore/external-apps/deploy-metersphere/added-metersphere-repo.PNG differ
diff --git a/static/images/docs/appstore/external-apps/deploy-metersphere/basic-info.PNG b/static/images/docs/appstore/external-apps/deploy-metersphere/basic-info.PNG
new file mode 100644
index 000000000..95caa1a4a
Binary files /dev/null and b/static/images/docs/appstore/external-apps/deploy-metersphere/basic-info.PNG differ
diff --git a/static/images/docs/appstore/external-apps/deploy-metersphere/change-value.PNG b/static/images/docs/appstore/external-apps/deploy-metersphere/change-value.PNG
new file mode 100644
index 000000000..7ab5a9892
Binary files /dev/null and b/static/images/docs/appstore/external-apps/deploy-metersphere/change-value.PNG differ
diff --git a/static/images/docs/appstore/external-apps/deploy-metersphere/click-metersphere.PNG b/static/images/docs/appstore/external-apps/deploy-metersphere/click-metersphere.PNG
new file mode 100644
index 000000000..f45a06f3c
Binary files /dev/null and b/static/images/docs/appstore/external-apps/deploy-metersphere/click-metersphere.PNG differ
diff --git a/static/images/docs/appstore/external-apps/deploy-metersphere/deploy-app.PNG b/static/images/docs/appstore/external-apps/deploy-metersphere/deploy-app.PNG
new file mode 100644
index 000000000..4dd8af19b
Binary files /dev/null and b/static/images/docs/appstore/external-apps/deploy-metersphere/deploy-app.PNG differ
diff --git a/static/images/docs/appstore/external-apps/deploy-metersphere/deployments-running.PNG b/static/images/docs/appstore/external-apps/deploy-metersphere/deployments-running.PNG
new file mode 100644
index 000000000..e33804670
Binary files /dev/null and b/static/images/docs/appstore/external-apps/deploy-metersphere/deployments-running.PNG differ
diff --git a/static/images/docs/appstore/external-apps/deploy-metersphere/from-app-templates.PNG b/static/images/docs/appstore/external-apps/deploy-metersphere/from-app-templates.PNG
new file mode 100644
index 000000000..e1b315799
Binary files /dev/null and b/static/images/docs/appstore/external-apps/deploy-metersphere/from-app-templates.PNG differ
diff --git a/static/images/docs/appstore/external-apps/deploy-metersphere/login-metersphere.PNG b/static/images/docs/appstore/external-apps/deploy-metersphere/login-metersphere.PNG
new file mode 100644
index 000000000..098b2b6de
Binary files /dev/null and b/static/images/docs/appstore/external-apps/deploy-metersphere/login-metersphere.PNG differ
diff --git a/static/images/docs/appstore/external-apps/deploy-metersphere/metersphere-running.PNG b/static/images/docs/appstore/external-apps/deploy-metersphere/metersphere-running.PNG
new file mode 100644
index 000000000..6ce11cc5c
Binary files /dev/null and b/static/images/docs/appstore/external-apps/deploy-metersphere/metersphere-running.PNG differ
diff --git a/static/images/docs/appstore/external-apps/deploy-metersphere/metersphere-service.PNG b/static/images/docs/appstore/external-apps/deploy-metersphere/metersphere-service.PNG
new file mode 100644
index 000000000..1ab714950
Binary files /dev/null and b/static/images/docs/appstore/external-apps/deploy-metersphere/metersphere-service.PNG differ
diff --git a/static/images/docs/appstore/external-apps/deploy-metersphere/statefulsets-running.PNG b/static/images/docs/appstore/external-apps/deploy-metersphere/statefulsets-running.PNG
new file mode 100644
index 000000000..10ae82fcf
Binary files /dev/null and b/static/images/docs/appstore/external-apps/deploy-metersphere/statefulsets-running.PNG differ
diff --git a/static/images/docs/appstore/external-apps/deploy-metersphere/view-config.PNG b/static/images/docs/appstore/external-apps/deploy-metersphere/view-config.PNG
new file mode 100644
index 000000000..f94fd7a0c
Binary files /dev/null and b/static/images/docs/appstore/external-apps/deploy-metersphere/view-config.PNG differ
diff --git a/static/images/docs/enable-pluggable-components/metrics-server/clusters-management.png b/static/images/docs/enable-pluggable-components/metrics-server/clusters-management.png
new file mode 100644
index 000000000..a9a7ae691
Binary files /dev/null and b/static/images/docs/enable-pluggable-components/metrics-server/clusters-management.png differ
diff --git a/static/images/docs/enable-pluggable-components/metrics-server/edit-yaml.png b/static/images/docs/enable-pluggable-components/metrics-server/edit-yaml.png
new file mode 100644
index 000000000..005e9c310
Binary files /dev/null and b/static/images/docs/enable-pluggable-components/metrics-server/edit-yaml.png differ
diff --git a/static/images/docs/zh-cn/faq/applications/remove-built-in-apps/activate-tomcat.PNG b/static/images/docs/faq/applications/remove-built-in-apps/activate-tomcat.PNG
similarity index 100%
rename from static/images/docs/zh-cn/faq/applications/remove-built-in-apps/activate-tomcat.PNG
rename to static/images/docs/faq/applications/remove-built-in-apps/activate-tomcat.PNG
diff --git a/static/images/docs/zh-cn/faq/applications/remove-built-in-apps/click-platform.PNG b/static/images/docs/faq/applications/remove-built-in-apps/click-platform.PNG
similarity index 100%
rename from static/images/docs/zh-cn/faq/applications/remove-built-in-apps/click-platform.PNG
rename to static/images/docs/faq/applications/remove-built-in-apps/click-platform.PNG
diff --git a/static/images/docs/zh-cn/faq/applications/remove-built-in-apps/click-tomcat.PNG b/static/images/docs/faq/applications/remove-built-in-apps/click-tomcat.PNG
similarity index 100%
rename from static/images/docs/zh-cn/faq/applications/remove-built-in-apps/click-tomcat.PNG
rename to static/images/docs/faq/applications/remove-built-in-apps/click-tomcat.PNG
diff --git a/static/images/docs/zh-cn/faq/applications/remove-built-in-apps/confirm-suspend.PNG b/static/images/docs/faq/applications/remove-built-in-apps/confirm-suspend.PNG
similarity index 100%
rename from static/images/docs/zh-cn/faq/applications/remove-built-in-apps/confirm-suspend.PNG
rename to static/images/docs/faq/applications/remove-built-in-apps/confirm-suspend.PNG
diff --git a/static/images/docs/zh-cn/faq/applications/remove-built-in-apps/select-app-store-management.PNG b/static/images/docs/faq/applications/remove-built-in-apps/select-app-store-management.PNG
similarity index 100%
rename from static/images/docs/zh-cn/faq/applications/remove-built-in-apps/select-app-store-management.PNG
rename to static/images/docs/faq/applications/remove-built-in-apps/select-app-store-management.PNG
diff --git a/static/images/docs/zh-cn/faq/applications/remove-built-in-apps/suspend-tomcat.PNG b/static/images/docs/faq/applications/remove-built-in-apps/suspend-tomcat.PNG
similarity index 100%
rename from static/images/docs/zh-cn/faq/applications/remove-built-in-apps/suspend-tomcat.PNG
rename to static/images/docs/faq/applications/remove-built-in-apps/suspend-tomcat.PNG
diff --git a/static/images/docs/faq/applications/use-the-same-app-name-after-deletion/confirm-delete.PNG b/static/images/docs/faq/applications/use-the-same-app-name-after-deletion/confirm-delete.PNG
new file mode 100644
index 000000000..601ad7423
Binary files /dev/null and b/static/images/docs/faq/applications/use-the-same-app-name-after-deletion/confirm-delete.PNG differ
diff --git a/static/images/docs/faq/applications/use-the-same-app-name-after-deletion/delete-redis-1.PNG b/static/images/docs/faq/applications/use-the-same-app-name-after-deletion/delete-redis-1.PNG
new file mode 100644
index 000000000..772586e42
Binary files /dev/null and b/static/images/docs/faq/applications/use-the-same-app-name-after-deletion/delete-redis-1.PNG differ
diff --git a/static/images/docs/faq/applications/use-the-same-app-name-after-deletion/delete-secret.PNG b/static/images/docs/faq/applications/use-the-same-app-name-after-deletion/delete-secret.PNG
new file mode 100644
index 000000000..1334137d0
Binary files /dev/null and b/static/images/docs/faq/applications/use-the-same-app-name-after-deletion/delete-secret.PNG differ
diff --git a/static/images/docs/faq/applications/use-the-same-app-name-after-deletion/error-prompt.PNG b/static/images/docs/faq/applications/use-the-same-app-name-after-deletion/error-prompt.PNG
new file mode 100644
index 000000000..562caf5f5
Binary files /dev/null and b/static/images/docs/faq/applications/use-the-same-app-name-after-deletion/error-prompt.PNG differ
diff --git a/static/images/docs/faq/applications/use-the-same-app-name-after-deletion/new-redis-app.PNG b/static/images/docs/faq/applications/use-the-same-app-name-after-deletion/new-redis-app.PNG
new file mode 100644
index 000000000..f955d494f
Binary files /dev/null and b/static/images/docs/faq/applications/use-the-same-app-name-after-deletion/new-redis-app.PNG differ
diff --git a/static/images/docs/faq/applications/use-the-same-app-name-after-deletion/redis-1.PNG b/static/images/docs/faq/applications/use-the-same-app-name-after-deletion/redis-1.PNG
new file mode 100644
index 000000000..3d3c14d7c
Binary files /dev/null and b/static/images/docs/faq/applications/use-the-same-app-name-after-deletion/redis-1.PNG differ
diff --git a/static/images/docs/faq/applications/use-the-same-app-name-after-deletion/search-secret.PNG b/static/images/docs/faq/applications/use-the-same-app-name-after-deletion/search-secret.PNG
new file mode 100644
index 000000000..ae2e98f33
Binary files /dev/null and b/static/images/docs/faq/applications/use-the-same-app-name-after-deletion/search-secret.PNG differ