diff --git a/content/en/blogs/TiDB-on-KubeSphere-upload-helm-chart.md b/content/en/blogs/TiDB-on-KubeSphere-upload-helm-chart.md index e2365288c..6903856cc 100644 --- a/content/en/blogs/TiDB-on-KubeSphere-upload-helm-chart.md +++ b/content/en/blogs/TiDB-on-KubeSphere-upload-helm-chart.md @@ -101,7 +101,7 @@ Now that you have Helm charts ready, you can upload them to KubeSphere as app te ## Releasing Apps to the App Store -[App templates](https://kubesphere.io/docs/project-user-guide/application/app-template/) enable users to deploy and manage apps in a visualized way. Internally, they play an important role as shared resources (e.g. databases, middleware and operating systems) created by enterprises for the coordination and cooperation within teams. +[App templates](https://kubesphere.io/docs/project-user-guide/application/app-template/) enable users to deploy and manage apps in a visualized way. Internally, they play an important role as shared resources (for example, databases, middleware and operating systems) created by enterprises for the coordination and cooperation within teams. You can release apps you have uploaded to KubeSphere to the public repository, also known as the App Store. In this way, all tenants on the platform can see these apps and deploy them if they have necessary permissions regardless of the workspace they belong to. diff --git a/content/en/blogs/TiDB-on-KubeSphere-using-qke.md b/content/en/blogs/TiDB-on-KubeSphere-using-qke.md index a3ead8744..3c190b656 100644 --- a/content/en/blogs/TiDB-on-KubeSphere-using-qke.md +++ b/content/en/blogs/TiDB-on-KubeSphere-using-qke.md @@ -58,7 +58,7 @@ Therefore, I select QingCloud Kubernetes Engine (QKE) to prepare the environment customresourcedefinition.apiextensions.k8s.io/tidbclusterautoscalers.pingcap.com created ``` -5. Now, let's get back to the **Access Control** page where all the workspaces are listed. Before I proceed, first I need to create a new workspace (e.g. `dev-workspace`). +5. Now, let's get back to the **Access Control** page where all the workspaces are listed. Before I proceed, first I need to create a new workspace (for example, `dev-workspace`). In a workspace, different users have different permissions to perform varied tasks in projects. Usually, a department-wide project requires a multi-tenant system so that everyone is responsible for their own part. For demonstration purposes, I use the account `admin` in this example. You can [see the official documentation of KubeSphere](https://kubesphere.io/docs/quick-start/create-workspace-and-project/) to know more about how the multi-tenant system works. diff --git a/content/en/blogs/add-master-for-ha-using-kubekey.md b/content/en/blogs/add-master-for-ha-using-kubekey.md index f2439dfde..f15f8bbbd 100644 --- a/content/en/blogs/add-master-for-ha-using-kubekey.md +++ b/content/en/blogs/add-master-for-ha-using-kubekey.md @@ -48,7 +48,7 @@ For more information about requirements for nodes, network, and dependencies, [s ## Prepare Load Balancers -You can use any cloud load balancers or hardware load balancers (e.g. F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters. In this example, I have an internal load balancer with a listener that listens on port `6443` (`api-server`) and an external load balancer with a listener that listens on the port of the Kubernetes dashboard. +You can use any cloud load balancers or hardware load balancers (for example, F5). 
In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters. In this example, I have an internal load balancer with a listener that listens on port `6443` (`api-server`) and an external load balancer with a listener that listens on the port of the Kubernetes dashboard. ## Download KubeKey @@ -131,7 +131,7 @@ You can use any cloud load balancers or hardware load balancers (e.g. F5). In ad {{< notice note >}} - - You are not allowed to modify the host name of existing nodes (e.g. `master1`) when adding new nodes. + - You are not allowed to modify the host name of existing nodes (for example, `master1`) when adding new nodes. - For more information about different parameters in the configuration file, see [this article](https://kubesphere.io/blogs/install-kubernetes-using-kubekey/#install-kubernetes). {{}} diff --git a/content/en/blogs/scale-kubernetes-cluster-using-kubekey.md b/content/en/blogs/scale-kubernetes-cluster-using-kubekey.md index dee171a1f..1fc70cf21 100644 --- a/content/en/blogs/scale-kubernetes-cluster-using-kubekey.md +++ b/content/en/blogs/scale-kubernetes-cluster-using-kubekey.md @@ -124,7 +124,7 @@ For more information about requirements for nodes, network, and dependencies, [s {{< notice note >}} - - You are not allowed to modify the host name of existing nodes (e.g. master) when adding new nodes. + - You are not allowed to modify the host name of existing nodes (for example, master) when adding new nodes. - For more information about different parameters in the configuration file, see my [last article](https://kubesphere.io/blogs/install-kubernetes-using-kubekey/#install-kubernetes). diff --git a/content/en/blogs/set-up-ha-cluster-using-keepalived-haproxy.md b/content/en/blogs/set-up-ha-cluster-using-keepalived-haproxy.md index bf2e43cba..cb9c513e1 100644 --- a/content/en/blogs/set-up-ha-cluster-using-keepalived-haproxy.md +++ b/content/en/blogs/set-up-ha-cluster-using-keepalived-haproxy.md @@ -8,7 +8,7 @@ author: 'Pixiake, Sherlock' snapshot: 'https://ap3.qingstor.com/kubesphere-website/docs/architecture-ha-k8s-cluster.png' --- -A highly available Kubernetes cluster ensures your applications run without outages which is required for production. In this connection, there are plenty of ways for you to choose from to achieve high availability. For example, if your cluster is deployed on cloud (e.g. Google Cloud and AWS), you can create load balancers on these platforms directly. At the same time, Keepalived, HAproxy and NGINX are also possible alternatives for you to achieve load balancing. +A highly available Kubernetes cluster ensures your applications run without outages which is required for production. In this connection, there are plenty of ways for you to choose from to achieve high availability. For example, if your cluster is deployed on cloud (for example, Google Cloud and AWS), you can create load balancers on these platforms directly. At the same time, Keepalived, HAproxy and NGINX are also possible alternatives for you to achieve load balancing. In this article, I am going to use Keepalived and HAproxy for load balancing and achieve high availability. The steps are listed as below: @@ -264,7 +264,7 @@ Before you start to create your Kubernetes cluster, make sure you have tested th [KubeKey](https://github.com/kubesphere/kubekey) is an efficient and convenient tool to create a Kubernetes cluster. 
If you are not familiar with KubeKey, have a look at my previous articles about using KubeKey to [create a three-node cluster](https://kubesphere.io/blogs/install-kubernetes-using-kubekey/) and scale your cluster. -1. Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command to download KubeKey version 1.0.1. You only need to download KubeKey to one of your machines (e.g. `master1`) that serves as the **taskbox** for installation. +1. Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command to download KubeKey version 1.0.1. You only need to download KubeKey to one of your machines (for example, `master1`) that serves as the **taskbox** for installation. ```bash curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh - diff --git a/content/en/blogs/understand-requests-and-limits-in-kubernetes.md b/content/en/blogs/understand-requests-and-limits-in-kubernetes.md index 9f511fde3..fba94f646 100644 --- a/content/en/blogs/understand-requests-and-limits-in-kubernetes.md +++ b/content/en/blogs/understand-requests-and-limits-in-kubernetes.md @@ -212,7 +212,7 @@ func (f *Fit) Filter(ctx context.Context, cycleState *framework.CycleState, pod } ``` -It can be seen from the code above that the scheduler (schedule thread) calculates the resources required by the Pod to be scheduled. Specifically, it calculates the total requests of init containers and the total requests of working containers respectively according to Pod specifications. The greater one will be used. Note that for lightweight virtual machines (e.g. kata-container), their own resource consumption of virtualization needs to be counted in caches. In the following `Filter` stage, all nodes will be checked to see if they meet the conditions. +It can be seen from the code above that the scheduler (schedule thread) calculates the resources required by the Pod to be scheduled. Specifically, it calculates the total requests of init containers and the total requests of working containers respectively according to Pod specifications. The greater one will be used. Note that for lightweight virtual machines (for example, kata-container), their own resource consumption of virtualization needs to be counted in caches. In the following `Filter` stage, all nodes will be checked to see if they meet the conditions. {{< notice note >}} diff --git a/content/en/blogs/value-of-kubesphere.md b/content/en/blogs/value-of-kubesphere.md index 1145f8924..748b08534 100644 --- a/content/en/blogs/value-of-kubesphere.md +++ b/content/en/blogs/value-of-kubesphere.md @@ -58,7 +58,7 @@ On the basis of Kubernetes, KubeSphere DevOps has taken full advantage of Kubern The KubeSphere account can also be used for the built-in Jenkins, meeting the demand of enterprises for multi-tenant isolation of CI/CD pipelines and unified authentication. In addition, KubeSphere DevOps supports two forms of pipelines: InSCM and OutOfSCM. This has created great compatibility with the existing Jenkinsfile and editing pipelines graphically has been made much easier. -Business developers can use built-in automation CD tools in KubeSphere (e.g. Binary to Image and Source to Image) even without a thorough understanding of how Docker and Kubernetes work. Users only need to submit a registry address or upload binary files (e.g. JAR/WAR/Binary), after which the artifact will be packed as a Docker image and released to the image registry. 
Ultimately, the service will be released to Kubernetes automatically without any coding in a Dockerfile. Meanwhile, dynamic logs will be generated in automated building, which can help developers quickly locate any issue in service creation and release. +Business developers can use built-in automation CD tools in KubeSphere (for example, Binary to Image and Source to Image) even without a thorough understanding of how Docker and Kubernetes work. Users only need to submit a registry address or upload binary files (for example, JAR/WAR/Binary), after which the artifact will be packed as a Docker image and released to the image registry. Ultimately, the service will be released to Kubernetes automatically without any coding in a Dockerfile. Meanwhile, dynamic logs will be generated in automated building, which can help developers quickly locate any issue in service creation and release. ![Binary/Source to Image](https://pek3b.qingstor.com/kubesphere-docs/png/20200410134220.png) diff --git a/content/en/case/anchnet.md b/content/en/case/anchnet.md index e1bab6d46..4c22a8029 100644 --- a/content/en/case/anchnet.md +++ b/content/en/case/anchnet.md @@ -17,7 +17,7 @@ section2: - title: Transfer Platform contentList: - - content: SmartAnt is a one-stop, lightweight transfer platform that helps users to transfer their business to the cloud in a rapid and convenient fashion. With visualized interfaces, SmartAnt supports one-click data transfer (e.g. host, database, and object storage), which has fundamentally solved the problem in the traditional ways of cloud transfer. + - content: SmartAnt is a one-stop, lightweight transfer platform that helps users to transfer their business to the cloud in a rapid and convenient fashion. With visualized interfaces, SmartAnt supports one-click data transfer (for example, host, database, and object storage), which has fundamentally solved the problem in the traditional ways of cloud transfer. image: - title: Basic Architecture Development diff --git a/content/en/case/vng.md b/content/en/case/vng.md index af3b42cf6..4a452518b 100644 --- a/content/en/case/vng.md +++ b/content/en/case/vng.md @@ -27,7 +27,7 @@ section2: - title: Adopting Kubernetes and KubeSphere contentList: - content: At the end of 2018, we adopted Kubernetes as the container orchestration solution. Kubernetes helps us to declaratively manage our cluster, allowing our apps to be version controlled and easily replicated. However, the learning curve of Kubernetes is high as there are a series of solutions we need to consider, including logging, monitoring, DevOps and middleware. Actually, we have investigated the most popular tools. For example, we use EFK for logging management and adopt Jenkins as the CI/CD engine for business update. Redis and Kafka are also used in our environment. - - content: These popular tools help us improve development and operation efficiency. Nevertheless, the biggest challenge facing us is that developers need to learn and maintain these different tools; and we need to spend more time switching back and forth between different terminals and dashboards. Hence, we started to research a centralized solution which can bring the cloud native stack within a unified web console. We compared a couple of solutions (e.g. Rancher and native Kubernetes) and KubeSphere has proven to be the most convenient one among them. + - content: These popular tools help us improve development and operation efficiency. 
Nevertheless, the biggest challenge facing us is that developers need to learn and maintain these different tools; and we need to spend more time switching back and forth between different terminals and dashboards. Hence, we started to research a centralized solution which can bring the cloud native stack within a unified web console. We compared a couple of solutions (for example, Rancher and native Kubernetes) and KubeSphere has proven to be the most convenient one among them. - content: We install [KubeSphere Container Platform](https://kubesphere.io/) on our existing Kubernetes cluster, and we have two Kubernetes clusters for sandbox and production respectively. For data privacy, our clusters are all deployed on bare metal machines. We install the highly available cluster using HAProxy to balance the traffic load. image: https://pek3b.qingstor.com/kubesphere-docs/png/20200619223626.png diff --git a/content/en/conferences/logging.md b/content/en/conferences/logging.md index bc26e3978..b801d6242 100644 --- a/content/en/conferences/logging.md +++ b/content/en/conferences/logging.md @@ -17,7 +17,7 @@ As an essential part of observability, logs play an important role in developmen In the environment of physical or virtual machines, logs are generally exported as files and managed by users. This makes it difficult for centralized management and analysis. On the contrary, container technologies, such as Kubernetes and Docker, can export logs directly to stdout, providing great convenience for the centralized management and analysis of logs. -The general logging architecture offered by the official website of Kubernetes is shown below, including logging agent, backend services and frontend console. Mature solutions (e.g. ELK/EFK) and the open source tool Loki launched in 2018 in the cloud native area share a similar architecture. More details will be provided below on the contribution of ELK/EFK, [Loki](https://github.com/grafana/loki) and [KubeSphere](https://github.com/kubesphere/kubesphere) in this regard. +The general logging architecture offered by the official website of Kubernetes is shown below, including logging agent, backend services and frontend console. Mature solutions (for example, ELK/EFK) and the open source tool Loki launched in 2018 in the cloud native area share a similar architecture. More details will be provided below on the contribution of ELK/EFK, [Loki](https://github.com/grafana/loki) and [KubeSphere](https://github.com/kubesphere/kubesphere) in this regard. ![](https://pek3b.qingstor.com/kubesphere-docs/png/20191001090839.png) @@ -102,7 +102,7 @@ As a logging solution that answers the call of the cloud native times, Loki has ## Prospect -The way Kubernetes is structured makes it possible for the centralized management of logs. In this connection, an increasing number of outstanding log management methods have empowered users to better dig out the value of log data, with greater observability achieved. As an open source platform based on Kubernetes, KubeSphere will work to improve the existing logging solutions of Kubernetes (e.g. multi-cluster log management and log alert). On top of that, it will also pay close and continuous attention to Loki, the log aggregation system inspired by Prometheus, and strives to be an active player in its development. This is how it works to integrate the most cutting-edge technology of log management into KubeSphere for users around the world. +The way Kubernetes is structured makes it possible for the centralized management of logs. 
In this connection, an increasing number of outstanding log management methods have empowered users to better dig out the value of log data, with greater observability achieved. As an open source platform based on Kubernetes, KubeSphere will work to improve the existing logging solutions of Kubernetes (for example, multi-cluster log management and log alert). On top of that, it will also pay close and continuous attention to Loki, the log aggregation system inspired by Prometheus, and strives to be an active player in its development. This is how it works to integrate the most cutting-edge technology of log management into KubeSphere for users around the world. ## Relevant Information diff --git a/content/en/conferences/porter.md b/content/en/conferences/porter.md index b2a93a502..175bb2fe5 100644 --- a/content/en/conferences/porter.md +++ b/content/en/conferences/porter.md @@ -47,7 +47,7 @@ Kubernetes itself does not provide the way to expose services through Ingress. R Ingress is the most used method in a business environment than NodePort and LoadBalancer. The reasons include: -1. Compared with the load balancing way of kube-proxy, Ingress Controller is more capable (e.g. traffic control and security strategy). +1. Compared with the load balancing way of kube-proxy, Ingress Controller is more capable (for example, traffic control and security strategy). 2. It is more direct to identify services through domains; large port numbers in NodePort are also not needed for Ingress. Nevertheless, the following problems need to be solved for Ingress: diff --git a/content/en/docs/application-store/app-developer-guide/helm-specification.md b/content/en/docs/application-store/app-developer-guide/helm-specification.md index 895c42e3d..ab16d028a 100644 --- a/content/en/docs/application-store/app-developer-guide/helm-specification.md +++ b/content/en/docs/application-store/app-developer-guide/helm-specification.md @@ -45,7 +45,7 @@ dependencies: (Optional) A list of the chart requirements. - name: The name of the chart, such as nginx. version: The version of the chart, such as "1.2.3". repository: The repository URL ("https://example.com/charts") or alias ("@repo-name"). - condition: (Optional) A yaml path that resolves to a boolean, used for enabling/disabling charts (e.g. subchart1.enabled ). + condition: (Optional) A yaml path that resolves to a boolean, used for enabling/disabling charts (for example, subchart1.enabled ). tags: (Optional) - Tags can be used to group charts for enabling/disabling together. import-values: (Optional) diff --git a/content/en/docs/application-store/built-in-apps/etcd-app.md b/content/en/docs/application-store/built-in-apps/etcd-app.md index ec3574226..afd09f176 100644 --- a/content/en/docs/application-store/built-in-apps/etcd-app.md +++ b/content/en/docs/application-store/built-in-apps/etcd-app.md @@ -71,6 +71,6 @@ After the app is deployed, you can use etcdctl, a command-line tool for interact ![etcd-command](/images/docs/appstore/built-in-apps/etcd-app/etcd-command.jpg) -4. For clients within the KubeSphere cluster, the etcd service can be accessed through `..svc.:2379` (e.g. `etcd-bqe0g4.demo-project.svc.cluster.local:2379` in this guide). +4. For clients within the KubeSphere cluster, the etcd service can be accessed through `..svc.:2379` (for example, `etcd-bqe0g4.demo-project.svc.cluster.local:2379` in this guide). 5. For more information, see [the official documentation of etcd](https://etcd.io/docs/v3.4.0/). 
\ No newline at end of file diff --git a/content/en/docs/cluster-administration/nodes.md b/content/en/docs/cluster-administration/nodes.md index e92867eb7..8156683b8 100644 --- a/content/en/docs/cluster-administration/nodes.md +++ b/content/en/docs/cluster-administration/nodes.md @@ -51,7 +51,7 @@ Click a node from the list and you can go to its detail page. ![Node Detail](/images/docs/cluster-administration/node-management/node_detail.png) - **Cordon/Uncordon**: Marking a node as unschedulable is very useful during a node reboot or other maintenance. The Kubernetes scheduler will not schedule new Pods to this node if it's been marked unschedulable. Besides, this does not affect existing workloads already on the node. In KubeSphere, you mark a node as unschedulable by clicking **Cordon** on the node detail page. The node will be schedulable if you click the button (**Uncordon**) again. -- **Labels**: Node labels can be very useful when you want to assign Pods to specific nodes. Label a node first (e.g. label GPU nodes with `node-role.kubernetes.io/gpu-node`), and then add the label in **Advanced Settings** [when you create a workload](../../project-user-guide/application-workloads/deployments/#step-5-configure-advanced-settings) so that you can allow Pods to run on GPU nodes explicitly. To add node labels, click **More** and select **Edit Labels**. +- **Labels**: Node labels can be very useful when you want to assign Pods to specific nodes. Label a node first (for example, label GPU nodes with `node-role.kubernetes.io/gpu-node`), and then add the label in **Advanced Settings** [when you create a workload](../../project-user-guide/application-workloads/deployments/#step-5-configure-advanced-settings) so that you can allow Pods to run on GPU nodes explicitly. To add node labels, click **More** and select **Edit Labels**. ![drop-down-list-node](/images/docs/cluster-administration/node-management/drop-down-list-node.jpg) diff --git a/content/en/docs/cluster-administration/persistent-volume-and-storage-class.md b/content/en/docs/cluster-administration/persistent-volume-and-storage-class.md index 784e7e22e..af13898dd 100644 --- a/content/en/docs/cluster-administration/persistent-volume-and-storage-class.md +++ b/content/en/docs/cluster-administration/persistent-volume-and-storage-class.md @@ -26,7 +26,7 @@ The table below summarizes common volume plugins for various provisioners (stora | -------------------- | ------------------------------------------------------------ | | In-tree | Built-in and run as part of Kubernetes, such as [RBD](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd) and [Glusterfs](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs). For more plugins of this kind, see [Provisioner](https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner). | | External-provisioner | Deployed independently from Kubernetes, but works like an in-tree plugin, such as [nfs-client](https://github.com/kubernetes-retired/external-storage/tree/master/nfs-client). For more plugins of this kind, see [External Storage](https://github.com/kubernetes-retired/external-storage). | -| CSI | Container Storage Interface, a standard for exposing storage resources to workloads on COs (e.g. Kubernetes), such as [QingCloud-csi](https://github.com/yunify/qingcloud-csi) and [Ceph-CSI](https://github.com/ceph/ceph-csi). For more plugins of this kind, see [Drivers](https://kubernetes-csi.github.io/docs/drivers.html). 
| +| CSI | Container Storage Interface, a standard for exposing storage resources to workloads on COs (for example, Kubernetes), such as [QingCloud-csi](https://github.com/yunify/qingcloud-csi) and [Ceph-CSI](https://github.com/ceph/ceph-csi). For more plugins of this kind, see [Drivers](https://kubernetes-csi.github.io/docs/drivers.html). | ## Prerequisites diff --git a/content/en/docs/devops-user-guide/how-to-integrate/sonarqube.md b/content/en/docs/devops-user-guide/how-to-integrate/sonarqube.md index 24e8590f0..30d4c0569 100644 --- a/content/en/docs/devops-user-guide/how-to-integrate/sonarqube.md +++ b/content/en/docs/devops-user-guide/how-to-integrate/sonarqube.md @@ -190,7 +190,7 @@ To integrate SonarQube into your pipeline, you must install SonarQube Server fir http://192.168.0.4:30180 ``` -3. Access Jenkins with the address `http://Public IP:30180`. When KubeSphere is installed, the Jenkins dashboard is also installed by default. Besides, Jenkins is configured with KubeSphere LDAP, which means you can log in to Jenkins with KubeSphere accounts (e.g. `admin/P@88w0rd`) directly. For more information about configuring Jenkins, see [Jenkins System Settings](../../../devops-user-guide/how-to-use/jenkins-setting/). +3. Access Jenkins with the address `http://Public IP:30180`. When KubeSphere is installed, the Jenkins dashboard is also installed by default. Besides, Jenkins is configured with KubeSphere LDAP, which means you can log in to Jenkins with KubeSphere accounts (for example, `admin/P@88w0rd`) directly. For more information about configuring Jenkins, see [Jenkins System Settings](../../../devops-user-guide/how-to-use/jenkins-setting/). ![jenkins-login-page](/images/docs/devops-user-guide/tool-integration/integrate-sonarqube-into-pipeline/jenkins-login-page.jpg) diff --git a/content/en/docs/devops-user-guide/how-to-use/create-a-pipeline-using-jenkinsfile.md b/content/en/docs/devops-user-guide/how-to-use/create-a-pipeline-using-jenkinsfile.md index a5ef39b79..a352a48ee 100644 --- a/content/en/docs/devops-user-guide/how-to-use/create-a-pipeline-using-jenkinsfile.md +++ b/content/en/docs/devops-user-guide/how-to-use/create-a-pipeline-using-jenkinsfile.md @@ -232,7 +232,7 @@ The account `project-admin` needs to be created in advance since it is the revie ![pipeline-proceed](/images/docs/devops-user-guide/using-devops/create-a-pipeline-using-a-jenkinsfile/pipeline-proceed.jpg) - In a development or production environment, it requires someone who has higher authority (e.g. release manager) to review the pipeline, images, as well as the code analysis result. They have the authority to determine whether the pipeline can go to the next stage. In the Jenkinsfile, you use the section `input` to specify who reviews the pipeline. If you want to specify a user (e.g. `project-admin`) to review it, you can add a field in the Jenkinsfile. If there are multiple users, you need to use commas to separate them as follows: + In a development or production environment, it requires someone who has higher authority (for example, release manager) to review the pipeline, images, as well as the code analysis result. They have the authority to determine whether the pipeline can go to the next stage. In the Jenkinsfile, you use the section `input` to specify who reviews the pipeline. If you want to specify a user (for example, `project-admin`) to review it, you can add a field in the Jenkinsfile. 
If there are multiple users, you need to use commas to separate them as follows: ```groovy ··· diff --git a/content/en/docs/devops-user-guide/how-to-use/credential-management.md b/content/en/docs/devops-user-guide/how-to-use/credential-management.md index ea7024365..5460fb5a5 100644 --- a/content/en/docs/devops-user-guide/how-to-use/credential-management.md +++ b/content/en/docs/devops-user-guide/how-to-use/credential-management.md @@ -50,7 +50,7 @@ Log in to the console of KubeSphere as `project-regular`. Navigate to your DevOp ### Create GitHub credentials -Similarly, follow the same steps above to create GitHub credentials. Set a different Credential ID (e.g. `github-id`) and also select **Account Credentials** for **Type**. Enter your GitHub username and password for **Username** and **Token/Password** respectively. +Similarly, follow the same steps above to create GitHub credentials. Set a different Credential ID (for example, `github-id`) and also select **Account Credentials** for **Type**. Enter your GitHub username and password for **Username** and **Token/Password** respectively. {{< notice note >}} @@ -60,7 +60,7 @@ If there are any special characters such as `@` and `$` in your account or passw ### Create kubeconfig credentials -Similarly, follow the same steps above to create kubeconfig credentials. Set a different Credential ID (e.g. `demo-kubeconfig`) and select **kubeconfig**. +Similarly, follow the same steps above to create kubeconfig credentials. Set a different Credential ID (for example, `demo-kubeconfig`) and select **kubeconfig**. {{< notice info >}} diff --git a/content/en/docs/devops-user-guide/how-to-use/jenkins-email.md b/content/en/docs/devops-user-guide/how-to-use/jenkins-email.md index 60d73b549..9b4373042 100644 --- a/content/en/docs/devops-user-guide/how-to-use/jenkins-email.md +++ b/content/en/docs/devops-user-guide/how-to-use/jenkins-email.md @@ -39,7 +39,7 @@ The built-in Jenkins cannot share the same email configuration with the platform | Environment Variable Name | Description | | ------------------------- | -------------------------------- | | EMAIL\_SMTP\_HOST | SMTP server address | - | EMAIL\_SMTP\_PORT | SMTP server port (e.g. 25) | + | EMAIL\_SMTP\_PORT | SMTP server port (for example, 25) | | EMAIL\_FROM\_ADDR | Email sender address | | EMAIL\_FROM\_NAME | Email sender name | | EMAIL\_FROM\_PASS | Email sender password | diff --git a/content/en/docs/devops-user-guide/how-to-use/jenkins-setting.md b/content/en/docs/devops-user-guide/how-to-use/jenkins-setting.md index 6b471079b..ba3958f13 100644 --- a/content/en/docs/devops-user-guide/how-to-use/jenkins-setting.md +++ b/content/en/docs/devops-user-guide/how-to-use/jenkins-setting.md @@ -56,7 +56,7 @@ After you modified `jenkins-casc-config`, you need to reload your updated system http://192.168.0.4:30180 ``` -3. Access Jenkins at `http://Node IP:Port Number`. When KubeSphere is installed, the Jenkins dashboard is also installed by default. Besides, Jenkins is configured with KubeSphere LDAP, which means you can log in to Jenkins with KubeSphere accounts (e.g. `admin/P@88w0rd`) directly. +3. Access Jenkins at `http://Node IP:Port Number`. When KubeSphere is installed, the Jenkins dashboard is also installed by default. Besides, Jenkins is configured with KubeSphere LDAP, which means you can log in to Jenkins with KubeSphere accounts (for example, `admin/P@88w0rd`) directly. 
![jenkins-dashboard](/images/docs/devops-user-guide/using-devops/jenkins-system-settings/jenkins-dashboard.jpg) diff --git a/content/en/docs/devops-user-guide/how-to-use/pipeline-settings.md b/content/en/docs/devops-user-guide/how-to-use/pipeline-settings.md index fd7f99245..3b230a888 100644 --- a/content/en/docs/devops-user-guide/how-to-use/pipeline-settings.md +++ b/content/en/docs/devops-user-guide/how-to-use/pipeline-settings.md @@ -145,7 +145,7 @@ You can select a pipeline from the drop-down list for **When Create Pipeline** a ![webhook](/images/docs/devops-user-guide/using-devops/pipeline-settings/webhook.png) -**Webhook** is an efficient way to allow pipelines to discover changes in the remote code repository and automatically trigger a new running. Webhook should be the primary method to trigger Jenkins automatic scanning for GitHub and Git (e.g. GitLab). +**Webhook** is an efficient way to allow pipelines to discover changes in the remote code repository and automatically trigger a new running. Webhook should be the primary method to trigger Jenkins automatic scanning for GitHub and Git (for example, GitLab). ### Advanced Settings with No Code Repository Selected diff --git a/content/en/docs/devops-user-guide/understand-and-manage-devops-projects/devops-project-management.md b/content/en/docs/devops-user-guide/understand-and-manage-devops-projects/devops-project-management.md index a205b6c20..31141249e 100644 --- a/content/en/docs/devops-user-guide/understand-and-manage-devops-projects/devops-project-management.md +++ b/content/en/docs/devops-user-guide/understand-and-manage-devops-projects/devops-project-management.md @@ -48,7 +48,7 @@ A DevOps project user with required permissions can configure credentials for pi ### Members and roles -Similar to a project, a DevOps project also requires users to be granted different roles before they can work in the DevOps project. Project administrators (e.g. `project-admin`) are responsible for inviting tenants and granting them different roles. For more information, see [Role and Member Management](../role-and-member-management/). +Similar to a project, a DevOps project also requires users to be granted different roles before they can work in the DevOps project. Project administrators (for example, `project-admin`) are responsible for inviting tenants and granting them different roles. For more information, see [Role and Member Management](../role-and-member-management/). ## Edit or Delete a DevOps Project diff --git a/content/en/docs/devops-user-guide/understand-and-manage-devops-projects/role-and-member-management.md b/content/en/docs/devops-user-guide/understand-and-manage-devops-projects/role-and-member-management.md index d47deb30d..5a0daf371 100644 --- a/content/en/docs/devops-user-guide/understand-and-manage-devops-projects/role-and-member-management.md +++ b/content/en/docs/devops-user-guide/understand-and-manage-devops-projects/role-and-member-management.md @@ -17,7 +17,7 @@ In DevOps project scope, you can grant the following resources' permissions to a ## Prerequisites -At least one DevOps project has been created, such as `demo-devops`. Besides, you need an account of the `admin` role (e.g. `devops-admin`) at the DevOps project level. +At least one DevOps project has been created, such as `demo-devops`. Besides, you need an account of the `admin` role (for example, `devops-admin`) at the DevOps project level. ## Built-in Roles @@ -31,7 +31,7 @@ In **Project Roles**, there are three available built-in roles as shown below. 
B ## Create a DevOps Project Role -1. Log in to the console as `devops-admin` and select a DevOps project (e.g. `demo-devops`) under **DevOps Projects** list. +1. Log in to the console as `devops-admin` and select a DevOps project (for example, `demo-devops`) under **DevOps Projects** list. {{< notice note >}} diff --git a/content/en/docs/faq/applications/remove-built-in-apps.md b/content/en/docs/faq/applications/remove-built-in-apps.md index d4a8c9eb0..3a112e840 100644 --- a/content/en/docs/faq/applications/remove-built-in-apps.md +++ b/content/en/docs/faq/applications/remove-built-in-apps.md @@ -10,7 +10,7 @@ As an open-source and app-centric container platform, KubeSphere integrates 15 b ## Prerequisites -- You need to use an account with the role of `platform-admin` (e.g. `admin`) for this tutorial. +- You need to use an account with the role of `platform-admin` (for example, `admin`) for this tutorial. - You need to [enable the App Store](../../../pluggable-components/app-store/). ## Remove a Built-in App diff --git a/content/en/docs/faq/devops/install-jenkins-plugins.md b/content/en/docs/faq/devops/install-jenkins-plugins.md index 258b7e836..5bce504a2 100644 --- a/content/en/docs/faq/devops/install-jenkins-plugins.md +++ b/content/en/docs/faq/devops/install-jenkins-plugins.md @@ -30,7 +30,7 @@ You need to enable [the KubeSphere DevOps system](../../../pluggable-components/ echo http://$NODE_IP:$NODE_PORT ``` -2. You can get the output similar to the following. You can access the Jenkins dashboard through the address with your own KubeSphere account and password (e.g. `admin/P@88w0rd`). +2. You can get the output similar to the following. You can access the Jenkins dashboard through the address with your own KubeSphere account and password (for example, `admin/P@88w0rd`). ``` http://192.168.0.4:30180 diff --git a/content/en/docs/faq/installation/configure-booster.md b/content/en/docs/faq/installation/configure-booster.md index 798e18fa4..d62802ab9 100644 --- a/content/en/docs/faq/installation/configure-booster.md +++ b/content/en/docs/faq/installation/configure-booster.md @@ -78,7 +78,7 @@ Docker needs to be installed in advance for this method. registry: registryMirrors: [] # For users who need to speed up downloads insecureRegistries: [] # Set an address of insecure image registry. See https://docs.docker.com/registry/insecure/ - privateRegistry: "" # Configure a private image registry for air-gapped installation (e.g. docker local registry or Harbor) + privateRegistry: "" # Configure a private image registry for air-gapped installation (for example, docker local registry or Harbor) ``` 2. Input the registry mirror address above and save the file. For more information about the installation process, see [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/). diff --git a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md index c836fb5ce..d7a558353 100644 --- a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md +++ b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md @@ -18,11 +18,11 @@ A Kubernetes cluster in DO is a prerequisite for installing KubeSphere. Go to yo You need to select: -1. Kubernetes version (e.g. *1.18.6-do.0*) -2. Datacenter region (e.g. *Frankfurt*) -3. VPC network (e.g. *default-fra1*) -4. Cluster capacity (e.g. 
2 standard nodes with 2 vCPUs and 4GB of RAM each) -5. A name for the cluster (e.g. *kubesphere-3*) +1. Kubernetes version (for example, *1.18.6-do.0*) +2. Datacenter region (for example, *Frankfurt*) +3. VPC network (for example, *default-fra1*) +4. Cluster capacity (for example, 2 standard nodes with 2 vCPUs and 4GB of RAM each) +5. A name for the cluster (for example, *kubesphere-3*) ![config-cluster-do](/images/docs/do/config-cluster-do.png) diff --git a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce.md b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce.md index 84053786d..137bc57b8 100644 --- a/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce.md +++ b/content/en/docs/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce.md @@ -14,7 +14,7 @@ This guide walks you through the steps of deploying KubeSphere on [Huaiwei CCE]( First, create a Kubernetes cluster based on the requirements below. -- KubeSphere 3.0.0 supports Kubernetes `1.15.x`, `1.16.x`, `1.17.x`, and `1.18.x`. Select a version and create the cluster, e.g. `v1.15.11` or `v1.17.9`. +- KubeSphere 3.0.0 supports Kubernetes `1.15.x`, `1.16.x`, `1.17.x`, and `1.18.x`. Select a version and create the cluster, for example, `v1.15.11` or `v1.17.9`. - Ensure the cloud computing network for your Kubernetes cluster works, or use an elastic IP when you use **Auto Create** or **Select Existing**. You can also configure the network after the cluster is created. Refer to [NAT Gateway](https://support.huaweicloud.com/en-us/productdesc-natgateway/en-us_topic_0086739762.html). - Select `s3.xlarge.2` `4-core|8GB` for nodes and add more if necessary (3 and more nodes are required for a production environment). diff --git a/content/en/docs/installing-on-kubernetes/introduction/overview.md b/content/en/docs/installing-on-kubernetes/introduction/overview.md index 252ba99b6..d82abeba0 100644 --- a/content/en/docs/installing-on-kubernetes/introduction/overview.md +++ b/content/en/docs/installing-on-kubernetes/introduction/overview.md @@ -8,7 +8,7 @@ weight: 4110 ![kubesphere+k8s](/images/docs/installing-on-kubernetes/introduction/overview/kubesphere+k8s.png) -As part of KubeSphere's commitment to provide a plug-and-play architecture for users, it can be easily installed on existing Kubernetes clusters. More specifically, KubeSphere can be deployed on Kubernetes either hosted on clouds (e.g. AWS EKS, QingCloud QKE and Google GKE) or on-premises. This is because KubeSphere does not hack Kubernetes itself. It only interacts with the Kubernetes API to manage Kubernetes cluster resources. In other words, KubeSphere can be installed on any native Kubernetes cluster and Kubernetes distribution. +As part of KubeSphere's commitment to provide a plug-and-play architecture for users, it can be easily installed on existing Kubernetes clusters. More specifically, KubeSphere can be deployed on Kubernetes either hosted on clouds (for example, AWS EKS, QingCloud QKE and Google GKE) or on-premises. This is because KubeSphere does not hack Kubernetes itself. It only interacts with the Kubernetes API to manage Kubernetes cluster resources. In other words, KubeSphere can be installed on any native Kubernetes cluster and Kubernetes distribution. This section gives you an overview of the general steps of installing KubeSphere on Kubernetes. 
For more information about the specific way of installation in different environments, see Installing on Hosted Kubernetes and Installing on On-premises Kubernetes. diff --git a/content/en/docs/installing-on-linux/cluster-operation/add-new-nodes.md b/content/en/docs/installing-on-linux/cluster-operation/add-new-nodes.md index c1e7b4833..866aa2f5d 100644 --- a/content/en/docs/installing-on-linux/cluster-operation/add-new-nodes.md +++ b/content/en/docs/installing-on-linux/cluster-operation/add-new-nodes.md @@ -76,7 +76,7 @@ You can skip this step if you already have the configuration file on your machin ## Add Master Nodes for High Availability -The steps of adding master nodes are generally the same as adding worker nodes while you need to configure a load balancer for your cluster. You can use any cloud load balancers or hardware load balancers (e.g. F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating highly available clusters. +The steps of adding master nodes are generally the same as adding worker nodes while you need to configure a load balancer for your cluster. You can use any cloud load balancers or hardware load balancers (for example, F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating highly available clusters. 1. Create a configuration file using KubeKey. diff --git a/content/en/docs/installing-on-linux/high-availability-configurations/ha-configuration.md b/content/en/docs/installing-on-linux/high-availability-configurations/ha-configuration.md index 5cf7dcf18..3ee498db8 100644 --- a/content/en/docs/installing-on-linux/high-availability-configurations/ha-configuration.md +++ b/content/en/docs/installing-on-linux/high-availability-configurations/ha-configuration.md @@ -6,7 +6,7 @@ linkTitle: "Set up an HA Cluster Using a Load Balancer" weight: 3210 --- -You can set up a single-master Kubernetes cluster with KubeSphere installed based on the tutorial of [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/). Single-master clusters may be sufficient for development and testing in most cases. For a production environment, however, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (e.g. F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters. +You can set up a single-master Kubernetes cluster with KubeSphere installed based on the tutorial of [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/). Single-master clusters may be sufficient for development and testing in most cases. For a production environment, however, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. 
You can use any cloud load balancer, or any hardware load balancer (for example, F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters. This tutorial demonstrates the general configurations of a high-availability cluster as you install KubeSphere on Linux. @@ -163,7 +163,7 @@ For more information about different fields in this configuration file, see [Kub ### Persistent storage plugin configurations -For a production environment, you need to prepare persistent storage and configure the storage plugin (e.g. CSI) in `config-sample.yaml` to define which storage service you want to use. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/). +For a production environment, you need to prepare persistent storage and configure the storage plugin (for example, CSI) in `config-sample.yaml` to define which storage service you want to use. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/). ### Enable pluggable components (Optional) diff --git a/content/en/docs/installing-on-linux/introduction/multioverview.md b/content/en/docs/installing-on-linux/introduction/multioverview.md index 59a28580b..40f0a11bd 100644 --- a/content/en/docs/installing-on-linux/introduction/multioverview.md +++ b/content/en/docs/installing-on-linux/introduction/multioverview.md @@ -16,7 +16,7 @@ This section gives you an overview of a single-master multi-node installation, i ## Concept -A multi-node cluster is composed of at least one master node and one worker node. You can use any node as the **taskbox** to carry out the installation task. You can add additional nodes based on your needs (e.g. for high availability) both before and after the installation. +A multi-node cluster is composed of at least one master node and one worker node. You can use any node as the **taskbox** to carry out the installation task. You can add additional nodes based on your needs (for example, for high availability) both before and after the installation. - **Master**. A master node generally hosts the control plane that controls and manages the whole system. - **Worker**. Worker nodes run the actual applications deployed on them. @@ -166,7 +166,7 @@ Here are some examples for your reference: ./kk create config [-f ~/myfolder/abc.yaml] ``` -- You can specify a KubeSphere version that you want to install (e.g. `--with-kubesphere v3.0.0`). +- You can specify a KubeSphere version that you want to install (for example, `--with-kubesphere v3.0.0`). ```bash ./kk create config --with-kubesphere [version] @@ -252,7 +252,7 @@ List all your machines under `hosts` and add their detailed information as above #### addons -You can customize persistent storage plugins (e.g. NFS Client, Ceph RBD, and GlusterFS) by specifying storage under the field `addons` in `config-sample.yaml`. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/). +You can customize persistent storage plugins (for example, NFS Client, Ceph RBD, and GlusterFS) by specifying storage under the field `addons` in `config-sample.yaml`. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/). 
{{< notice note >}} @@ -309,7 +309,7 @@ https://kubesphere.io 20xx-xx-xx xx:xx:xx ##################################################### ``` -Now, you will be able to access the web console of KubeSphere at `http://{IP}:30880` (e.g. you can use the EIP) with the account and password `admin/P@88w0rd`. +Now, you will be able to access the web console of KubeSphere at `http://{IP}:30880` (for example, you can use the EIP) with the account and password `admin/P@88w0rd`. {{< notice note >}} diff --git a/content/en/docs/installing-on-linux/introduction/vars.md b/content/en/docs/installing-on-linux/introduction/vars.md index 6a55fe017..80c602d47 100644 --- a/content/en/docs/installing-on-linux/introduction/vars.md +++ b/content/en/docs/installing-on-linux/introduction/vars.md @@ -30,6 +30,6 @@ kubernetes: registry: registryMirrors: [] # For users who need to speed up downloads. insecureRegistries: [] # Set an address of insecure image registry. See https://docs.docker.com/registry/insecure/ - privateRegistry: "" # Configure a private image registry for air-gapped installation (e.g. docker local registry or Harbor). - addons: [] # You can specify any add-ons with one or more Helm Charts or YAML files in this field (e.g. CSI plugins or cloud provider plugins). + privateRegistry: "" # Configure a private image registry for air-gapped installation (for example, docker local registry or Harbor). + addons: [] # You can specify any add-ons with one or more Helm Charts or YAML files in this field (for example, CSI plugins or cloud provider plugins). ``` diff --git a/content/en/docs/installing-on-linux/on-premises/install-kubesphere-on-bare-metal.md b/content/en/docs/installing-on-linux/on-premises/install-kubesphere-on-bare-metal.md index 3310ef087..e0692c62c 100644 --- a/content/en/docs/installing-on-linux/on-premises/install-kubesphere-on-bare-metal.md +++ b/content/en/docs/installing-on-linux/on-premises/install-kubesphere-on-bare-metal.md @@ -244,7 +244,7 @@ chmod +x kk With KubeKey, you can install Kubernetes and KubeSphere together. You have the option to create a multi-node cluster by customizing parameters in the configuration file. -Create a Kubernetes cluster with KubeSphere installed (e.g. `--with-kubesphere v3.0.0`): +Create a Kubernetes cluster with KubeSphere installed (for example, `--with-kubesphere v3.0.0`): ```bash ./kk create config --with-kubernetes v1.17.9 --with-kubesphere v3.0.0 diff --git a/content/en/docs/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md b/content/en/docs/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md index 6d0e318e1..ebc0c265d 100644 --- a/content/en/docs/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md +++ b/content/en/docs/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md @@ -9,7 +9,7 @@ weight: 3510 ## Introduction -For a production environment, we need to consider the high availability of the cluster. If the key components (e.g. kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, we need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (e.g. F5). In addition, Keepalived and [HAProxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters. 
+For a production environment, we need to consider the high availability of the cluster. If the key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, we need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (for example, F5). In addition, Keepalived and [HAProxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters. This tutorial walks you through an example of how to create Keepalived and HAProxy, and implement high availability of master and etcd nodes using the load balancers on VMware vSphere. @@ -345,7 +345,7 @@ chmod +x kk With KubeKey, you can install Kubernetes and KubeSphere together. You have the option to create a multi-node cluster by customizing parameters in the configuration file. -Create a Kubernetes cluster with KubeSphere installed (e.g. `--with-kubesphere v3.0.0`): +Create a Kubernetes cluster with KubeSphere installed (for example, `--with-kubesphere v3.0.0`): ```bash ./kk create config --with-kubernetes v1.17.9 --with-kubesphere v3.0.0 diff --git a/content/en/docs/installing-on-linux/persistent-storage-configurations/install-ceph-csi-rbd.md b/content/en/docs/installing-on-linux/persistent-storage-configurations/install-ceph-csi-rbd.md index a52cccd59..d55e4bf1a 100644 --- a/content/en/docs/installing-on-linux/persistent-storage-configurations/install-ceph-csi-rbd.md +++ b/content/en/docs/installing-on-linux/persistent-storage-configurations/install-ceph-csi-rbd.md @@ -77,7 +77,7 @@ mountOptions: #### Add-on configurations -Save the above chart config and StorageClass locally (e.g. `/root/ceph-csi-rbd.yaml` and `/root/ceph-csi-rbd-sc.yaml`). The add-on configuration can be set like: +Save the above chart config and StorageClass locally (for example, `/root/ceph-csi-rbd.yaml` and `/root/ceph-csi-rbd-sc.yaml`). The add-on configuration can be set like: ```yaml addons: @@ -115,7 +115,7 @@ If you want to configure more values, see [chart configuration for rbd-provision #### Add-on configurations -Save the above chart config locally (e.g. `/root/rbd-provisioner.yaml`). The add-on config for rbd provisioner cloud be like: +Save the above chart config locally (for example, `/root/rbd-provisioner.yaml`). The add-on config for rbd provisioner cloud be like: ```yaml - name: rbd-provisioner diff --git a/content/en/docs/installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms.md b/content/en/docs/installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms.md index 7736cb576..1586a47c3 100644 --- a/content/en/docs/installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms.md +++ b/content/en/docs/installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms.md @@ -8,7 +8,7 @@ Weight: 3420 ## Introduction -For a production environment, you need to consider the high availability of the cluster. If key components (e.g. kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (e.g. F5). 
In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters. +For a production environment, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (for example, F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters. This tutorial walks you through an example of how to create two [QingCloud load balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal load balancer and external load balancer respectively, and of how to implement high availability of master and etcd nodes using the load balancers. @@ -253,7 +253,7 @@ Kubekey provides some fields and parameters to allow the cluster administrator t ### Step 6: Persistent storage plugin configurations -Considering data persistence in a production environment, you need to prepare persistent storage and configure the storage plugin (e.g. CSI) in `config-sample.yaml` to define which storage service you want. +Considering data persistence in a production environment, you need to prepare persistent storage and configure the storage plugin (for example, CSI) in `config-sample.yaml` to define which storage service you want. {{< notice note >}} diff --git a/content/en/docs/introduction/advantages.md b/content/en/docs/introduction/advantages.md index a89709bc8..8c416a26b 100644 --- a/content/en/docs/introduction/advantages.md +++ b/content/en/docs/introduction/advantages.md @@ -54,7 +54,7 @@ Automation represents a key part of implementing DevOps. With automatic, streaml **Jenkins-powered**. The KubeSphere DevOps system is built with Jenkins as the engine, which is abundant in plugins. On top of that, Jenkins provides an enabling environment for extension development, making it possible for the DevOps team to work smoothly across the whole process (developing, testing, building, deploying, monitoring, logging, notifying, etc.) in a unified platform. The KubeSphere account can also be used for the built-in Jenkins, meeting the demand of enterprises for multi-tenant isolation of CI/CD pipelines and unified authentication. -**Convenient built-in tools**. Users can easily take advantage of automation tools (e.g. Binary-to-Image and Source-to-Image) even without a thorough understanding of how Docker or Kubernetes works. They only need to submit a registry address or upload binary files (e.g. JAR/WAR/Binary). Ultimately, services will be released to Kubernetes automatically without any coding in a Dockerfile. +**Convenient built-in tools**. Users can easily take advantage of automation tools (for example, Binary-to-Image and Source-to-Image) even without a thorough understanding of how Docker or Kubernetes works. They only need to submit a registry address or upload binary files (for example, JAR/WAR/Binary). Ultimately, services will be released to Kubernetes automatically without any coding in a Dockerfile. For more information, see [DevOps User Guide](../../devops-user-guide/). 
@@ -85,7 +85,7 @@ The KubeSphere community has the capabilities and technical know-how to help you **Partners**. KubeSphere partners play a critical role in KubeSphere's go-to-market strategy. They can be app developers, technology companies, cloud providers or go-to-market partners, all of whom drive the community ahead in their respective aspects. -**Ambassadors**. As community representatives, ambassadors promote KubeSphere in a variety of ways (e.g. activities, blogs and user cases) so that more people can join the community. +**Ambassadors**. As community representatives, ambassadors promote KubeSphere in a variety of ways (for example, activities, blogs and user cases) so that more people can join the community. **Contributors**. KubeSphere contributors help the whole community by contributing to code or documentation. You don't need to be an expert while you can still make a difference even it is a minor code fix or language improvement. diff --git a/content/en/docs/introduction/features.md b/content/en/docs/introduction/features.md index 8d8da1ee4..92e691499 100644 --- a/content/en/docs/introduction/features.md +++ b/content/en/docs/introduction/features.md @@ -39,7 +39,7 @@ The next-gen installer [KubeKey](https://github.com/kubesphere/kubekey) provides As the IT world sees a growing number of cloud-native applications reshaping software portfolios for enterprises, users tend to deploy their clusters across locations, geographies, and clouds. Against this backdrop, KubeSphere has undergone a significant upgrade to address the pressing need of users with its brand-new multi-cluster feature. -With KubeSphere, users can manage the infrastructure underneath, such as adding or deleting clusters. Heterogeneous clusters deployed on any infrastructure (e.g. Amazon EKS and Google Kubernetes Engine) can be managed in a unified way. This is made possible by a central control plane of KubeSphere with two efficient management approaches available. +With KubeSphere, users can manage the infrastructure underneath, such as adding or deleting clusters. Heterogeneous clusters deployed on any infrastructure (for example, Amazon EKS and Google Kubernetes Engine) can be managed in a unified way. This is made possible by a central control plane of KubeSphere with two efficient management approaches available. - **Solo**. Independently deployed Kubernetes clusters can be maintained and managed together in KubeSphere container platform. - **Federation**. Multiple Kubernetes clusters can be aggregated together as a Kubernetes resource pool. When users deploy applications, replicas can be deployed on different Kubernetes clusters in the pool. In this regard, high availability is achieved across zones and clusters. @@ -72,7 +72,7 @@ S2I allows you to publish your service to Kubernetes without writing a Dockerfil ### Binary-to-Image -Similar to S2I, Binary-to-Image (B2I) is a toolkit and automated workflow for building reproducible container images from binary (e.g. Jar, War, Binary package). +Similar to S2I, Binary-to-Image (B2I) is a toolkit and automated workflow for building reproducible container images from binary (for example, Jar, War, Binary package). You just need to upload your application binary package, and specify the image registry to which you want to push. The rest is exactly the same as S2I. @@ -103,7 +103,7 @@ Based on Jaeger, KubeSphere service mesh enables users to track how services int ## Multi-tenant Management -In KubeSphere, resources (e.g. clusters) can be shared between tenants. 
First, administrators or managers need to set different account roles with different authorizations. After that, members in the platform can be assigned with these roles to perform specific actions on varied resources. Meanwhile, as KubeSphere completely isolates tenants, they will not affect each other at all. +In KubeSphere, resources (for example, clusters) can be shared between tenants. First, administrators or managers need to set different account roles with different authorizations. After that, members in the platform can be assigned with these roles to perform specific actions on varied resources. Meanwhile, as KubeSphere completely isolates tenants, they will not affect each other at all. - **Multi-tenancy**. It provides role-based fine-grained authentication in a unified way and a three-tier authorization system. - **Unified authentication**. For enterprises, KubeSphere is compatible with their central authentication system that is base on LDAP or AD protocol. Single sign-on (SSO) is also supported to achieve unified authentication of tenant identity. diff --git a/content/en/docs/introduction/what's-new-in-v3.0.md b/content/en/docs/introduction/what's-new-in-v3.0.md index 7e0fda98f..37dba9805 100644 --- a/content/en/docs/introduction/what's-new-in-v3.0.md +++ b/content/en/docs/introduction/what's-new-in-v3.0.md @@ -10,9 +10,9 @@ Published at the end of August, 2020, KubeSphere 3.0 is the most important versi ## New Features in KubeSphere 3.0 -- **Multi-cluster Management**. As we usher in an era of hybrid cloud, multi-cluster management has emerged as the call of our times. It represents one of the most necessary features on top of Kubernetes as it addresses the pressing need of our users. In the latest version 3.0, we have equipped KubeSphere with its unique multi-cluster feature that is able to provide a central control plane for clusters deployed in different clouds. Users can import and manage their existing Kubernetes clusters created on the platform of mainstream infrastructure providers (e.g. Amazon EKS and Google Kubernetes Engine). This will greatly reduce the learning cost for our users with operation and maintenance process streamlined as well. Solo and Federation are the two featured patterns for multi-cluster management, making KubeSphere stand out among its counterparts. +- **Multi-cluster Management**. As we usher in an era of hybrid cloud, multi-cluster management has emerged as the call of our times. It represents one of the most necessary features on top of Kubernetes as it addresses the pressing need of our users. In the latest version 3.0, we have equipped KubeSphere with its unique multi-cluster feature that is able to provide a central control plane for clusters deployed in different clouds. Users can import and manage their existing Kubernetes clusters created on the platform of mainstream infrastructure providers (for example, Amazon EKS and Google Kubernetes Engine). This will greatly reduce the learning cost for our users with operation and maintenance process streamlined as well. Solo and Federation are the two featured patterns for multi-cluster management, making KubeSphere stand out among its counterparts. -- **Improved Observability**. We have enhanced observability as it becomes more powerful to include custom monitoring, tenant event management, diversified notification methods (e.g. WeChat and Slack) and more features. 
Among others, users can now customize monitoring dashboards, with a variety of metrics and graphs to choose from for their own needs. It also deserves to mention that KubeSphere 3.0 is compatible with Prometheus, which is the de facto standard for Kubernetes monitoring in the cloud-native industry. +- **Improved Observability**. We have enhanced observability as it becomes more powerful to include custom monitoring, tenant event management, diversified notification methods (for example, WeChat and Slack) and more features. Among others, users can now customize monitoring dashboards, with a variety of metrics and graphs to choose from for their own needs. It is also worth mentioning that KubeSphere 3.0 is compatible with Prometheus, which is the de facto standard for Kubernetes monitoring in the cloud-native industry. - **Enhanced Security**. Security has alway remained one of our focuses in KubeSphere. In this connection, feature enhancements can be summarized as follows: diff --git a/content/en/docs/multicluster-management/enable-multicluster/agent-connection.md index 3388b5e63..4ac736707 100644 --- a/content/en/docs/multicluster-management/enable-multicluster/agent-connection.md +++ b/content/en/docs/multicluster-management/enable-multicluster/agent-connection.md @@ -6,7 +6,7 @@ titleLink: "Agent Connection" weight: 5220 --- -The component [Tower](https://github.com/kubesphere/tower) of KubeSphere is used for agent connection. Tower is a tool for network connection between clusters through the agent. If the Host Cluster (H Cluster) cannot access the Member Cluster (M Cluster) directly, you can expose the proxy service address of the H cluster. This enables the M Cluster to connect to the H Cluster through the agent. This method is applicable when the M Cluster is in a private environment (e.g. IDC) and the H Cluster is able to expose the proxy service. The agent connection is also applicable when your clusters are distributed across different cloud providers. +The component [Tower](https://github.com/kubesphere/tower) of KubeSphere is used for agent connection. Tower is a tool for network connection between clusters through the agent. If the Host Cluster (H Cluster) cannot access the Member Cluster (M Cluster) directly, you can expose the proxy service address of the H cluster. This enables the M Cluster to connect to the H Cluster through the agent. This method is applicable when the M Cluster is in a private environment (for example, IDC) and the H Cluster is able to expose the proxy service. The agent connection is also applicable when your clusters are distributed across different cloud providers. To use the multi-cluster feature using an agent, you must have at least two clusters serving as the H Cluster and the M Cluster respectively. A cluster can be defined as the H Cluster or the M Cluster either before or after you install KubeSphere. For more information about installing KubeSphere, refer to [Installing on Linux](../../../installing-on-linux/) and [Installing on Kubernetes](../../../installing-on-kubernetes/). 
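For reference, the choice between the H Cluster and the M Cluster described above is ultimately expressed as a single field in the KubeSphere configuration. A minimal sketch, assuming the `multicluster` block of the v3.0 ClusterConfiguration (verify the exact field name against the version you install; all other fields are omitted):

```yaml
# ClusterConfiguration excerpt (sketch only)
multicluster:
  clusterRole: host    # use "host" on the H Cluster and "member" on each M Cluster
```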
diff --git a/content/en/docs/pluggable-components/alerting-notification.md b/content/en/docs/pluggable-components/alerting-notification.md index 5067a8bcd..c45611dff 100644 --- a/content/en/docs/pluggable-components/alerting-notification.md +++ b/content/en/docs/pluggable-components/alerting-notification.md @@ -31,7 +31,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c ``` {{< notice note >}} -If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Alerting and Notification in this mode (e.g. for testing purposes), refer to [the following section](#enable-alerting-and-notification-after-installation) to see how Alerting and Notification can be installed after installation. +If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Alerting and Notification in this mode (for example, for testing purposes), refer to [the following section](#enable-alerting-and-notification-after-installation) to see how Alerting and Notification can be installed after installation. {{}} 2. In this file, navigate to `alerting` and `notification` and change `false` to `true` for `enabled`. Save the file after you finish. diff --git a/content/en/docs/pluggable-components/app-store.md b/content/en/docs/pluggable-components/app-store.md index 04ef69df3..745e5f532 100644 --- a/content/en/docs/pluggable-components/app-store.md +++ b/content/en/docs/pluggable-components/app-store.md @@ -29,7 +29,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c ``` {{< notice note >}} -If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the App Store in this mode (e.g. for testing purposes), refer to [the following section](#enable-app-store-after-installation) to see how the App Store can be installed after installation. +If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the App Store in this mode (for example, for testing purposes), refer to [the following section](#enable-app-store-after-installation) to see how the App Store can be installed after installation. {{}} 2. In this file, navigate to `openpitrix` and change `false` to `true` for `enabled`. Save the file after you finish. 
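For reference, the change described in the pluggable-component pages above amounts to flipping a single `enabled` flag per component in `config-sample.yaml`. A minimal sketch of the relevant blocks after editing (assuming the default layout generated by KubeKey for KubeSphere 3.0; surrounding fields are omitted):

```yaml
# config-sample.yaml excerpt (sketch only)
alerting:
  enabled: true        # KubeSphere Alerting
notification:
  enabled: true        # KubeSphere Notification
openpitrix:
  enabled: true        # App Store (OpenPitrix)
```

The same pattern applies to the components covered in the following pages, such as `auditing`, `devops`, `events`, `logging`, `networkpolicy`, and `servicemesh`.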
diff --git a/content/en/docs/pluggable-components/auditing-logs.md b/content/en/docs/pluggable-components/auditing-logs.md index d55b629b3..3b1151d32 100644 --- a/content/en/docs/pluggable-components/auditing-logs.md +++ b/content/en/docs/pluggable-components/auditing-logs.md @@ -25,7 +25,7 @@ When you implement multi-node installation KubeSphere on Linux, you need to crea ``` {{< notice note >}} -If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Auditing in this mode (e.g. for testing purposes), refer to [the following section](#enable-auditing-logs-after-installation) to see how Auditing can be installed after installation. +If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Auditing in this mode (for example, for testing purposes), refer to [the following section](#enable-auditing-logs-after-installation) to see how Auditing can be installed after installation. {{}} 2. In this file, navigate to `auditing` and change `false` to `true` for `enabled`. Save the file after you finish. diff --git a/content/en/docs/pluggable-components/devops.md b/content/en/docs/pluggable-components/devops.md index e9e05d158..f613d11ab 100644 --- a/content/en/docs/pluggable-components/devops.md +++ b/content/en/docs/pluggable-components/devops.md @@ -10,7 +10,7 @@ weight: 6300 The KubeSphere DevOps System is designed for CI/CD workflows in Kubernetes. Based on [Jenkins](https://jenkins.io/), it provides one-stop solutions to help both development and Ops teams build, test and publish apps to Kubernetes in a straight-forward way. It also features plugin management, [Binary-to-Image (B2I)](../../project-user-guide/image-builder/binary-to-image/), [Source-to-Image (S2I)](../../project-user-guide/image-builder/source-to-image/), code dependency caching, code quality analysis, pipeline logging, etc. -The DevOps System offers an enabling environment for users as apps can be automatically released to the same platform. It is also compatible with third-party private image registries (e.g. Harbor) and code repositories (e.g. GitLab/GitHub/SVN/BitBucket). As such, it creates excellent user experiences by providing users with comprehensive, visualized CI/CD pipelines which are extremely useful in air-gapped environments. +The DevOps System offers an enabling environment for users as apps can be automatically released to the same platform. It is also compatible with third-party private image registries (for example, Harbor) and code repositories (for example, GitLab/GitHub/SVN/BitBucket). As such, it creates excellent user experiences by providing users with comprehensive, visualized CI/CD pipelines which are extremely useful in air-gapped environments. For more information, see [DevOps User Guide](../../devops-user-guide/). @@ -27,7 +27,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c ``` {{< notice note >}} -If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. 
Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable DevOps in this mode (e.g. for testing purposes), refer to [the following section](#enable-devops-after-installation) to see how DevOps can be installed after installation. +If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable DevOps in this mode (for example, for testing purposes), refer to [the following section](#enable-devops-after-installation) to see how DevOps can be installed after installation. {{}} 2. In this file, navigate to `devops` and change `false` to `true` for `enabled`. Save the file after you finish. diff --git a/content/en/docs/pluggable-components/events.md b/content/en/docs/pluggable-components/events.md index 7bc412a33..a57cd9269 100644 --- a/content/en/docs/pluggable-components/events.md +++ b/content/en/docs/pluggable-components/events.md @@ -26,7 +26,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c {{< notice note >}} -If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Events in this mode (e.g. for testing purposes), refer to [the following section](#enable-events-after-installation) to see how Events can be [installed after installation](#enable-events-after-installation). +If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Events in this mode (for example, for testing purposes), refer to [the following section](#enable-events-after-installation) to see how Events can be [installed after installation](#enable-events-after-installation). {{}} diff --git a/content/en/docs/pluggable-components/logging.md b/content/en/docs/pluggable-components/logging.md index 51435291b..5cc76138c 100644 --- a/content/en/docs/pluggable-components/logging.md +++ b/content/en/docs/pluggable-components/logging.md @@ -26,7 +26,7 @@ When you install KubeSphere on Linux, you need to create a configuration file, w {{< notice note >}} -- If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Logging in this mode (e.g. for testing purposes), refer to [the following section](#enable-logging-after-installation) to see how Logging can be installed after installation. +- If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. 
If you want to enable Logging in this mode (for example, for testing purposes), refer to [the following section](#enable-logging-after-installation) to see how Logging can be installed after installation. - If you adopt [Multi-node Installation](../../installing-on-linux/introduction/multioverview/) and are using symbolic links for docker root directory, make sure all nodes follow the exactly same symbolic links. Logging agents are deployed in DaemonSets onto nodes. Any discrepancy in container log path may cause collection failures on that node. diff --git a/content/en/docs/pluggable-components/network-policy.md b/content/en/docs/pluggable-components/network-policy.md index d8ac4ac4a..e3a659e89 100644 --- a/content/en/docs/pluggable-components/network-policy.md +++ b/content/en/docs/pluggable-components/network-policy.md @@ -32,7 +32,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c ``` {{< notice note >}} -If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the Network Policy in this mode (e.g. for testing purposes), refer to [the following section](#enable-network-policy-after-installation) to see how the Network Policy can be installed after installation. +If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the Network Policy in this mode (for example, for testing purposes), refer to [the following section](#enable-network-policy-after-installation) to see how the Network Policy can be installed after installation. {{}} 2. In this file, navigate to `networkpolicy` and change `false` to `true` for `enabled`. Save the file after you finish. diff --git a/content/en/docs/pluggable-components/service-mesh.md b/content/en/docs/pluggable-components/service-mesh.md index 2eee73708..98d04ec58 100644 --- a/content/en/docs/pluggable-components/service-mesh.md +++ b/content/en/docs/pluggable-components/service-mesh.md @@ -26,7 +26,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c ``` {{< notice note >}} -If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable KubeSphere Service Mesh in this mode (e.g. for testing purposes), refer to [the following section](#enable-service-mesh-after-installation) to see how KubeSphere Service Mesh can be installed after installation. +If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable KubeSphere Service Mesh in this mode (for example, for testing purposes), refer to [the following section](#enable-service-mesh-after-installation) to see how KubeSphere Service Mesh can be installed after installation. 
{{}} 2. In this file, navigate to `servicemesh` and change `false` to `true` for `enabled`. Save the file after you finish. diff --git a/content/en/docs/project-administration/container-limit-ranges.md b/content/en/docs/project-administration/container-limit-ranges.md index 3ff0971c9..7cf5df033 100644 --- a/content/en/docs/project-administration/container-limit-ranges.md +++ b/content/en/docs/project-administration/container-limit-ranges.md @@ -6,7 +6,7 @@ linkTitle: "Container Limit Ranges" weight: 13400 --- -A container can use as much CPU and memory as set by [the resource quota for a project](../../workspace-administration/project-quotas/). At the same time, KubeSphere uses requests and limits to control resource (e.g. CPU and memory) usage for a container, also known as [LimitRanges](https://kubernetes.io/docs/concepts/policy/limit-range/) in Kubernetes. Requests make sure the container can get the resources it needs as they are specifically guaranteed and reserved. On the contrary, limits ensure that container can never use resources above a certain value. +A container can use as much CPU and memory as set by [the resource quota for a project](../../workspace-administration/project-quotas/). At the same time, KubeSphere uses requests and limits to control resource (for example, CPU and memory) usage for a container, also known as [LimitRanges](https://kubernetes.io/docs/concepts/policy/limit-range/) in Kubernetes. Requests make sure the container can get the resources it needs as they are specifically guaranteed and reserved. On the contrary, limits ensure that container can never use resources above a certain value. When you create a workload, such as a Deployment, you configure resource requests and limits for the container. To make these request and limit fields pre-populated with values, you can set default limit ranges. diff --git a/content/en/docs/project-administration/disk-log-collection.md b/content/en/docs/project-administration/disk-log-collection.md index c13ccb436..b7fa5468e 100644 --- a/content/en/docs/project-administration/disk-log-collection.md +++ b/content/en/docs/project-administration/disk-log-collection.md @@ -27,7 +27,7 @@ This tutorial demonstrates how to collect disk logs for an example app. 1. From the left navigation bar, select **Workloads** in **Application Workloads**. Under the **Deployments** tab, click **Create**. -2. In the dialog that appears, set a name for the Deployment (e.g. `demo-deployment`) and click **Next**. +2. In the dialog that appears, set a name for the Deployment (for example, `demo-deployment`) and click **Next**. 3. Under **Container Image**, click **Add Container Image**. @@ -61,7 +61,7 @@ This tutorial demonstrates how to collect disk logs for an example app. ![mount-volumes](/images/docs/project-administration/disk-log-collection/mount-volumes.png) -7. On the **Temporary Volume** tab, input a name for the volume (e.g. `demo-disk-log-collection`) and set the access mode and path. Refer to the image below as an example. +7. On the **Temporary Volume** tab, input a name for the volume (for example, `demo-disk-log-collection`) and set the access mode and path. Refer to the image below as an example. 
![volume-example](/images/docs/project-administration/disk-log-collection/volume-example.png) diff --git a/content/en/docs/project-administration/project-gateway.md b/content/en/docs/project-administration/project-gateway.md index d910dddcc..c10706f87 100644 --- a/content/en/docs/project-administration/project-gateway.md +++ b/content/en/docs/project-administration/project-gateway.md @@ -30,7 +30,7 @@ You need to create a workspace, a project and an account (`project-admin`). The **LoadBalancer**: You can access Services with a single IP address through the gateway. -3. You can also enable **Application Governance** on the **Set Gateway** page. You need to enable **Application Governance** so that you can use the Tracing feature and use [different grayscale release strategies](../../project-user-guide/grayscale-release/overview/). Once it is enabled, check whether an annotation (e.g. `nginx.ingress.kubernetes.io/service-upstream: true`) is added for your route (Ingress) if the route is inaccessible. +3. You can also enable **Application Governance** on the **Set Gateway** page. You need to enable **Application Governance** so that you can use the Tracing feature and use [different grayscale release strategies](../../project-user-guide/grayscale-release/overview/). Once it is enabled, check whether an annotation (for example, `nginx.ingress.kubernetes.io/service-upstream: true`) is added for your route (Ingress) if the route is inaccessible. 4. After you select an access method, click **Save**. diff --git a/content/en/docs/project-administration/role-and-member-management.md b/content/en/docs/project-administration/role-and-member-management.md index 90fe25bdf..dea03f837 100644 --- a/content/en/docs/project-administration/role-and-member-management.md +++ b/content/en/docs/project-administration/role-and-member-management.md @@ -22,7 +22,7 @@ In project scope, you can grant the following resources' permissions to a role: ## Prerequisites -At least one project has been created, such as `demo-project`. Besides, you need an account of the `admin` role (e.g. `project-admin`) at the project level. See [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/) if it is not ready yet. +At least one project has been created, such as `demo-project`. Besides, you need an account of the `admin` role (for example, `project-admin`) at the project level. See [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/) if it is not ready yet. ## Built-in Roles @@ -42,7 +42,7 @@ In **Project Roles**, there are three available built-in roles as shown below. B ## Create a Project Role -1. Log in to the console as `project-admin` and select a project (e.g. `demo-project`) under **Projects** list. +1. Log in to the console as `project-admin` and select a project (for example, `demo-project`) under **Projects** list. {{< notice note >}} diff --git a/content/en/docs/project-user-guide/application-workloads/daemonsets.md b/content/en/docs/project-user-guide/application-workloads/daemonsets.md index cb5167e62..9fdb17dbe 100644 --- a/content/en/docs/project-user-guide/application-workloads/daemonsets.md +++ b/content/en/docs/project-user-guide/application-workloads/daemonsets.md @@ -32,7 +32,7 @@ Log in to the console as `project-regular`. Go to **Application Workloads** of a ### Step 2: Input basic information -Specify a name for the DaemonSet (e.g. `demo-daemonset`) and click **Next** to continue. 
+Specify a name for the DaemonSet (for example, `demo-daemonset`) and click **Next** to continue. ![daemonsets](/images/docs/project-user-guide/workloads/daemonsets_form_1.jpg) diff --git a/content/en/docs/project-user-guide/application-workloads/deployments.md b/content/en/docs/project-user-guide/application-workloads/deployments.md index e5f763859..dad712329 100644 --- a/content/en/docs/project-user-guide/application-workloads/deployments.md +++ b/content/en/docs/project-user-guide/application-workloads/deployments.md @@ -24,7 +24,7 @@ Log in to the console as `project-regular`. Go to **Application Workloads** of a ### Step 2: Input basic information -Specify a name for the Deployment (e.g. `demo-deployment`) and click **Next** to continue. +Specify a name for the Deployment (for example, `demo-deployment`) and click **Next** to continue. ![deployments](/images/docs/project-user-guide/workloads/deployments_form_1.jpg) diff --git a/content/en/docs/project-user-guide/application-workloads/jobs.md b/content/en/docs/project-user-guide/application-workloads/jobs.md index 3dd4c5d99..2dc3c6873 100644 --- a/content/en/docs/project-user-guide/application-workloads/jobs.md +++ b/content/en/docs/project-user-guide/application-workloads/jobs.md @@ -144,7 +144,7 @@ You can rerun the Job if it fails, the reason of which displays under **Messages {{< notice tip >}} -- In **Resource Status**, the Pod list provides the Pod's detailed information (e.g. creation time, node, Pod IP and monitoring data). +- In **Resource Status**, the Pod list provides the Pod's detailed information (for example, creation time, node, Pod IP and monitoring data). - You can view the container information by clicking the Pod. - Click the container log icon to view the output logs of the container. - You can view the Pod detail page by clicking the Pod name. diff --git a/content/en/docs/project-user-guide/application-workloads/statefulsets.md b/content/en/docs/project-user-guide/application-workloads/statefulsets.md index 0b06d8753..0eab70e60 100644 --- a/content/en/docs/project-user-guide/application-workloads/statefulsets.md +++ b/content/en/docs/project-user-guide/application-workloads/statefulsets.md @@ -37,7 +37,7 @@ Log in to the console as `project-regular`. Go to **Application Workloads** of a ### Step 2: Input basic information -Specify a name for the StatefulSet (e.g. `demo-stateful`) and click **Next** to continue. +Specify a name for the StatefulSet (for example, `demo-stateful`) and click **Next** to continue. ![statefulsets](/images/docs/project-user-guide/workloads/statefulsets_form_1.jpg) diff --git a/content/en/docs/project-user-guide/application/app-template.md b/content/en/docs/project-user-guide/application/app-template.md index cb3ba5634..472cf4566 100644 --- a/content/en/docs/project-user-guide/application/app-template.md +++ b/content/en/docs/project-user-guide/application/app-template.md @@ -8,7 +8,7 @@ aliases: weight: 10110 --- -An app template serves as a way for users to upload, deliver and manage apps. Generally, an app is composed of one or more Kubernetes workloads (e.g. [Deployments](../../../project-user-guide/application-workloads/deployments/), [StatefulSets](../../../project-user-guide/application-workloads/statefulsets/) and [DaemonSets](../../../project-user-guide/application-workloads/daemonsets/)) and [Services](../../../project-user-guide/application-workloads/services/) based on how it functions and communicates with the external environment. 
Apps that are uploaded as app templates are built based on a [Helm](https://helm.sh/) package. +An app template serves as a way for users to upload, deliver and manage apps. Generally, an app is composed of one or more Kubernetes workloads (for example, [Deployments](../../../project-user-guide/application-workloads/deployments/), [StatefulSets](../../../project-user-guide/application-workloads/statefulsets/) and [DaemonSets](../../../project-user-guide/application-workloads/daemonsets/)) and [Services](../../../project-user-guide/application-workloads/services/) based on how it functions and communicates with the external environment. Apps that are uploaded as app templates are built based on a [Helm](https://helm.sh/) package. ## How App Templates Work @@ -32,7 +32,7 @@ KubeSphere deploys app repository services based on [OpenPitrix](https://github. ## Why App Templates -App templates enable users to deploy and manage apps in a visualized way. Internally, they play an important role as shared resources (e.g. databases, middleware and operating systems) created by enterprises for the coordination and cooperation within teams. Externally, app templates set industry standards of building and delivery. Users can take advantage of app templates in different scenarios to meet their own needs through one-click deployment. +App templates enable users to deploy and manage apps in a visualized way. Internally, they play an important role as shared resources (for example, databases, middleware and operating systems) created by enterprises for the coordination and cooperation within teams. Externally, app templates set industry standards of building and delivery. Users can take advantage of app templates in different scenarios to meet their own needs through one-click deployment. In addition, as OpenPitrix is integrated to KubeSphere to provide application management across the entire lifecycle, the platform allows ISVs, developers and regular users to all participate in the process. Backed by the multi-tenant system of KubeSphere, each tenant is only responsible for their own part, such as app uploading, app review, release, test, and version management. Ultimately, enterprises can build their own App Store and enrich their application pools with their customized standards. As such, apps can also be delivered in a standardized fashion. diff --git a/content/en/docs/project-user-guide/application/compose-app.md b/content/en/docs/project-user-guide/application/compose-app.md index 313f744d5..2da8bc38a 100644 --- a/content/en/docs/project-user-guide/application/compose-app.md +++ b/content/en/docs/project-user-guide/application/compose-app.md @@ -23,7 +23,7 @@ This tutorial demonstrates how to create a microservices-based app Bookinfo, whi ![create-composing-app](/images/docs/project-user-guide/applications/create-a-microservices-based-app/create-composing-app.png) -2. Set a name for the app (e.g. `bookinfo`) and click **Next**. +2. Set a name for the app (for example, `bookinfo`) and click **Next**. 3. On the **Components** page, you need to create microservices that compose the app. Click **Add Service** and select **Stateless Service**. @@ -65,7 +65,7 @@ This tutorial demonstrates how to create a microservices-based app Bookinfo, whi ![microservices-done](/images/docs/project-user-guide/applications/create-a-microservices-based-app/microservices-done.png) -11. On the **Internet Access** page, click **Add Route Rule**. In the **Specify Domain** tab, set a domain name for your app (e.g. 
`demo.bookinfo`) and select `http` in the **Protocol** field. For `Paths`, select the Service `productpage` and port `9080`. Click **OK** to continue. +11. On the **Internet Access** page, click **Add Route Rule**. In the **Specify Domain** tab, set a domain name for your app (for example, `demo.bookinfo`) and select `http` in the **Protocol** field. For `Paths`, select the Service `productpage` and port `9080`. Click **OK** to continue. ![route](/images/docs/project-user-guide/applications/create-a-microservices-based-app/route.png) diff --git a/content/en/docs/project-user-guide/configuration/configmaps.md b/content/en/docs/project-user-guide/configuration/configmaps.md index cccb0533a..ba62fb7b6 100644 --- a/content/en/docs/project-user-guide/configuration/configmaps.md +++ b/content/en/docs/project-user-guide/configuration/configmaps.md @@ -28,7 +28,7 @@ Log in to the console as `project-regular`. Go to **Configurations** of a projec ### Step 2: Input basic information -Specify a name for the ConfigMap (e.g. `demo-configmap`) and click **Next** to continue. +Specify a name for the ConfigMap (for example, `demo-configmap`) and click **Next** to continue. {{< notice tip >}} diff --git a/content/en/docs/project-user-guide/configuration/image-registry.md b/content/en/docs/project-user-guide/configuration/image-registry.md index ef107bef5..74f2e6177 100644 --- a/content/en/docs/project-user-guide/configuration/image-registry.md +++ b/content/en/docs/project-user-guide/configuration/image-registry.md @@ -26,7 +26,7 @@ Log in to the web console of KubeSphere as `project-regular`. Go to **Configurat ### Step 2: Input basic information -Specify a name for the Secret (e.g. `demo-registry-secret`) and click **Next** to continue. +Specify a name for the Secret (for example, `demo-registry-secret`) and click **Next** to continue. {{< notice tip >}} diff --git a/content/en/docs/project-user-guide/configuration/secrets.md b/content/en/docs/project-user-guide/configuration/secrets.md index 8cf57f619..d385830e3 100644 --- a/content/en/docs/project-user-guide/configuration/secrets.md +++ b/content/en/docs/project-user-guide/configuration/secrets.md @@ -28,7 +28,7 @@ Log in to the console as `project-regular`. Go to **Configurations** of a projec ### Step 2: Input basic information -Specify a name for the Secret (e.g. `demo-secret`) and click **Next** to continue. +Specify a name for the Secret (for example, `demo-secret`) and click **Next** to continue. {{< notice tip >}} diff --git a/content/en/docs/project-user-guide/custom-application-monitoring/examples/monitor-sample-web.md b/content/en/docs/project-user-guide/custom-application-monitoring/examples/monitor-sample-web.md index a7fdd6ccd..be7d3edbd 100644 --- a/content/en/docs/project-user-guide/custom-application-monitoring/examples/monitor-sample-web.md +++ b/content/en/docs/project-user-guide/custom-application-monitoring/examples/monitor-sample-web.md @@ -11,7 +11,7 @@ This section walks you through monitoring a sample web application. The applicat ## Prerequisites - Please make sure you [enable the OpenPitrix system](../../../../pluggable-components/app-store/). -- You need to create a workspace, a project, and a user account for this tutorial. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../../quick-start/create-workspace-and-project/). The account needs to be a platform regular user and to be invited to the workspace with the `self-provisioner` role. 
Namely, create an account `workspace-self-provisioner` of the `self-provisioner` role, and use this account to create a project (e.g. `test`). In this tutorial, you log in as `workspace-self-provisioner` and work in the project `test` in the workspace `demo-workspace`. +- You need to create a workspace, a project, and a user account for this tutorial. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../../quick-start/create-workspace-and-project/). The account needs to be a platform regular user and to be invited to the workspace with the `self-provisioner` role. Namely, create an account `workspace-self-provisioner` of the `self-provisioner` role, and use this account to create a project (for example, `test`). In this tutorial, you log in as `workspace-self-provisioner` and work in the project `test` in the workspace `demo-workspace`. - Knowledge of Helm charts and [PromQL](https://prometheus.io/docs/prometheus/latest/querying/examples/). @@ -87,11 +87,11 @@ This section guides you on how to create a dashboard from scratch. You will crea ![create-dashboard-2](/images/docs/project-user-guide/custom-application-monitoring/create-dashboard-2.jpg) -2. Set a name (e.g. `sample-web`) and click **Create**. +2. Set a name (for example, `sample-web`) and click **Create**. ![create-dashboard-3](/images/docs/project-user-guide/custom-application-monitoring/create-dashboard-3.jpg) -3. Enter a title in the top left corner (e.g. `Sample Web Overview`). +3. Enter a title in the top left corner (for example, `Sample Web Overview`). ![create-dashboard-4](/images/docs/project-user-guide/custom-application-monitoring/create-dashboard-4.jpg) @@ -99,7 +99,7 @@ This section guides you on how to create a dashboard from scratch. You will crea ![create-dashboard-5](/images/docs/project-user-guide/custom-application-monitoring/create-dashboard-5.jpg) -5. Type the PromQL expression `myapp_processed_ops_total` in the field **Monitoring Metrics** and give a chart name (e.g. `Operation Count`). Click **√** in the bottom right corner to continue. +5. Type the PromQL expression `myapp_processed_ops_total` in the field **Monitoring Metrics** and give a chart name (for example, `Operation Count`). Click **√** in the bottom right corner to continue. ![create-dashboard-6](/images/docs/project-user-guide/custom-application-monitoring/create-dashboard-6.jpg) diff --git a/content/en/docs/project-user-guide/grayscale-release/canary-release.md b/content/en/docs/project-user-guide/grayscale-release/canary-release.md index 245107dc9..323d4007c 100644 --- a/content/en/docs/project-user-guide/grayscale-release/canary-release.md +++ b/content/en/docs/project-user-guide/grayscale-release/canary-release.md @@ -42,7 +42,7 @@ This method serves as an efficient way to test performance and reliability of a {{}} -5. You send traffic to these two versions (`v1` and `v2`) either by a specific percentage or by the request content such as `Http Header`, `Cookie` and `URI`. Select **Forward by traffic ratio** and drag the icon in the middle to change the percentage of traffic sent to these two versions respectively (e.g. set 50% for either one). When you finish, click **Create**. +5. You send traffic to these two versions (`v1` and `v2`) either by a specific percentage or by the request content such as `Http Header`, `Cookie` and `URI`. 
Select **Forward by traffic ratio** and drag the icon in the middle to change the percentage of traffic sent to these two versions respectively (for example, set 50% for either one). When you finish, click **Create**. ![canary-release-5](/images/docs/project-user-guide/grayscale-release/canary-release/canary-release-5.gif) @@ -119,7 +119,7 @@ Now that you have two available app versions, access the app to verify the canar ![traffic-management](/images/docs/project-user-guide/grayscale-release/canary-release/traffic-management.png) -3. Click a component (e.g. **reviews**) and you can see the information of traffic monitoring on the right, displaying real-time data of **Traffic**, **Success rate** and **Duration**. +3. Click a component (for example, **reviews**) and you can see the information of traffic monitoring on the right, displaying real-time data of **Traffic**, **Success rate** and **Duration**. ![topology](/images/docs/project-user-guide/grayscale-release/canary-release/topology.png) diff --git a/content/en/docs/project-user-guide/grayscale-release/traffic-mirroring.md b/content/en/docs/project-user-guide/grayscale-release/traffic-mirroring.md index 85e86462e..24b96ffb1 100644 --- a/content/en/docs/project-user-guide/grayscale-release/traffic-mirroring.md +++ b/content/en/docs/project-user-guide/grayscale-release/traffic-mirroring.md @@ -28,7 +28,7 @@ Traffic mirroring, also called shadowing, is a powerful, risk-free method of tes ![traffic-mirroring-3](/images/docs/project-user-guide/grayscale-release/traffic-mirroring/traffic-mirroring-3.jpg) -4. On the **Grayscale Release Version** page, add another version of it (e.g. `v2`) as shown in the image below and click **Next**: +4. On the **Grayscale Release Version** page, add another version of it (for example, `v2`) as shown in the image below and click **Next**: ![traffic-mirroring-4](/images/docs/project-user-guide/grayscale-release/traffic-mirroring/traffic-mirroring-4.jpg) diff --git a/content/en/docs/project-user-guide/storage/volumes.md b/content/en/docs/project-user-guide/storage/volumes.md index 44b6948de..155a69a5b 100644 --- a/content/en/docs/project-user-guide/storage/volumes.md +++ b/content/en/docs/project-user-guide/storage/volumes.md @@ -30,7 +30,7 @@ All the volumes that are created on the **Volumes** page are PersistentVolumeCla ![create-volume](/images/docs/project-user-guide/volume-management/volumes/create-volume.jpg) -3. In the dialog that appears, set a name (e.g. `demo-volume`) for the volume and click **Next**. +3. In the dialog that appears, set a name (for example, `demo-volume`) for the volume and click **Next**. ![basic-volume-info](/images/docs/project-user-guide/volume-management/volumes/basic-volume-info.jpg) diff --git a/content/en/docs/quick-start/create-workspace-and-project.md b/content/en/docs/quick-start/create-workspace-and-project.md index b2c7c27c5..04ca153d4 100644 --- a/content/en/docs/quick-start/create-workspace-and-project.md +++ b/content/en/docs/quick-start/create-workspace-and-project.md @@ -126,7 +126,7 @@ In this step, you create a project using the account `project-admin` created in ![kubesphere-projects](/images/docs/quickstart/create-workspaces-projects-accounts/kubesphere-projects.png) -2. Enter the project name (e.g. `demo-project`) and click **OK** to finish. You can also add an alias and description for the project. +2. Enter the project name (for example, `demo-project`) and click **OK** to finish. You can also add an alias and description for the project. 
![demo-project](/images/docs/quickstart/create-workspaces-projects-accounts/demo-project.png) @@ -134,7 +134,7 @@ In this step, you create a project using the account `project-admin` created in ![click-demo-project](/images/docs/quickstart/create-workspaces-projects-accounts/click-demo-project.png) -4. On the **Overview** page of the project, the project quota remains unset by default. You can click **Set** and specify [resource requests and limits](../../workspace-administration/project-quotas/) as needed (e.g. 1 core for CPU and 1000Gi for memory). +4. On the **Overview** page of the project, the project quota remains unset by default. You can click **Set** and specify [resource requests and limits](../../workspace-administration/project-quotas/) as needed (for example, 1 core for CPU and 1000Gi for memory). ![quota](/images/docs/quickstart/create-workspaces-projects-accounts/quota.png) @@ -212,7 +212,7 @@ To create a DevOps project, you must install the KubeSphere DevOps system in adv ![devops](/images/docs/quickstart/create-workspaces-projects-accounts/devops.png) -2. Enter the DevOps project name (e.g. `demo-devops`) and click **OK**. You can also add an alias and description for the project. +2. Enter the DevOps project name (for example, `demo-devops`) and click **OK**. You can also add an alias and description for the project. ![devops-project](/images/docs/quickstart/create-workspaces-projects-accounts/devops-project.png) diff --git a/content/en/docs/quick-start/deploy-bookinfo-to-k8s.md b/content/en/docs/quick-start/deploy-bookinfo-to-k8s.md index 0bc1fd332..a57fc5e5d 100644 --- a/content/en/docs/quick-start/deploy-bookinfo-to-k8s.md +++ b/content/en/docs/quick-start/deploy-bookinfo-to-k8s.md @@ -23,7 +23,7 @@ To provide consistent user experiences of managing microservices, KubeSphere int Log in to the console as `project-admin` and go to your project. Navigate to **Advanced Settings** under **Project Settings**, click **Edit**, and select **Edit Gateway**. In the dialog that appears, flip on the toggle switch next to **Application Governance**. {{< notice note >}} -You need to enable **Application Governance** so that you can use the Tracing feature. Once it is enabled, check whether an annotation (e.g. `nginx.ingress.kubernetes.io/service-upstream: true`) is added for your Route (Ingress) if the Route is inaccessible. +You need to enable **Application Governance** so that you can use the Tracing feature. Once it is enabled, check whether an annotation (for example, `nginx.ingress.kubernetes.io/service-upstream: true`) is added for your Route (Ingress) if the Route is inaccessible. {{}} ## What is Bookinfo diff --git a/content/en/docs/quick-start/enable-pluggable-components.md b/content/en/docs/quick-start/enable-pluggable-components.md index fcbd8427f..1ce3d1873 100644 --- a/content/en/docs/quick-start/enable-pluggable-components.md +++ b/content/en/docs/quick-start/enable-pluggable-components.md @@ -44,7 +44,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c ``` {{< notice note >}} -If you adopt [All-in-one Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable pluggable components in this mode (e.g. 
for testing purpose), refer to the [following section](#enable-pluggable-components-after-installation) to see how pluggable components can be installed after installation. +If you adopt [All-in-one Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable pluggable components in this mode (for example, for testing purpose), refer to the [following section](#enable-pluggable-components-after-installation) to see how pluggable components can be installed after installation. {{}} 2. In this file, enable the pluggable components you want to install by changing `false` to `true` for `enabled`. Here is [the complete file](https://github.com/kubesphere/kubekey/blob/release-1.0/docs/config-example.md) for your reference. Save the file after you finish. diff --git a/content/en/docs/quick-start/wordpress-deployment.md b/content/en/docs/quick-start/wordpress-deployment.md index 70ce66d19..fc2213f71 100644 --- a/content/en/docs/quick-start/wordpress-deployment.md +++ b/content/en/docs/quick-start/wordpress-deployment.md @@ -36,7 +36,7 @@ The environment variable `WORDPRESS_DB_PASSWORD` is the password to connect to t ![create-secrets](/images/docs/quickstart/wordpress-deployment/create-secrets.png) -2. Enter the basic information (e.g. name it `mysql-secret`) and click **Next**. On the next page, select **Default** for **Type** and click **Add Data** to add a key-value pair. Input the Key (`MYSQL_ROOT_PASSWORD`) and Value (`123456`) as below and click **√** in the bottom-right corner to confirm. When you finish, click **Create** to continue. +2. Enter the basic information (for example, name it `mysql-secret`) and click **Next**. On the next page, select **Default** for **Type** and click **Add Data** to add a key-value pair. Input the Key (`MYSQL_ROOT_PASSWORD`) and Value (`123456`) as below and click **√** in the bottom-right corner to confirm. When you finish, click **Create** to continue. ![key-value](/images/docs/quickstart/wordpress-deployment/key-value.png) @@ -52,7 +52,7 @@ Follow the same steps above to create a WordPress Secret `wordpress-secret` with ![volumes](/images/docs/quickstart/wordpress-deployment/volumes.png) -2. Enter the basic information of the volume (e.g. name it `wordpress-pvc`) and click **Next**. +2. Enter the basic information of the volume (for example, name it `wordpress-pvc`) and click **Next**. 3. In **Volume Settings**, you need to choose an available **Storage Class**, and set **Access Mode** and **Volume Capacity**. You can use the default value directly as shown below. Click **Next** to continue. @@ -68,7 +68,7 @@ Follow the same steps above to create a WordPress Secret `wordpress-secret` with ![composing-app](/images/docs/quickstart/wordpress-deployment/composing-app.png) -2. Enter the basic information (e.g. input `wordpress` for Application Name) and click **Next**. +2. Enter the basic information (for example, input `wordpress` for Application Name) and click **Next**. ![basic-info](/images/docs/quickstart/wordpress-deployment/basic-info.png) @@ -78,7 +78,7 @@ Follow the same steps above to create a WordPress Secret `wordpress-secret` with 4. Define a service type for the component. Select **Stateful Service** here. -5. Enter the name for the stateful service (e.g. **mysql**) and click **Next**. +5. 
Enter the name for the stateful service (for example, **mysql**) and click **Next**. ![mysqlname](/images/docs/quickstart/wordpress-deployment/mysqlname.png) diff --git a/content/en/docs/workspace-administration/project-quotas.md b/content/en/docs/workspace-administration/project-quotas.md index 019573968..4cd19ad06 100644 --- a/content/en/docs/workspace-administration/project-quotas.md +++ b/content/en/docs/workspace-administration/project-quotas.md @@ -8,7 +8,7 @@ aliases: weight: 9600 --- -KubeSphere uses requests and limits to control resource (e.g. CPU and memory) usage in a project, also known as [ResourceQuotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) in Kubernetes. Requests make sure a project can get the resources it needs as they are specifically guaranteed and reserved. On the contrary, limits ensure that a project can never use resources above a certain value. +KubeSphere uses requests and limits to control resource (for example, CPU and memory) usage in a project, also known as [ResourceQuotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) in Kubernetes. Requests make sure a project can get the resources it needs as they are specifically guaranteed and reserved. On the contrary, limits ensure that a project can never use resources above a certain value. Besides CPU and memory, you can also set resource quotas for other objects separately such as Pods, [Deployments](../../project-user-guide/application-workloads/deployments/), [Jobs](../../project-user-guide/application-workloads/jobs/), [Services](../../project-user-guide/application-workloads/services/) and [ConfigMaps](../../project-user-guide/configuration/configmaps/) in a project. diff --git a/content/en/docs/workspace-administration/role-and-member-management.md b/content/en/docs/workspace-administration/role-and-member-management.md index f2172eab4..e78bf522f 100644 --- a/content/en/docs/workspace-administration/role-and-member-management.md +++ b/content/en/docs/workspace-administration/role-and-member-management.md @@ -16,7 +16,7 @@ This guide demonstrates how to manage roles and members in your workspace. At th ## Prerequisites -At least one workspace has been created, such as `demo-workspace`. Besides, you need an account of the `workspace-admin` role (e.g. `ws-admin`) at the workspace level. See [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/) if they are not ready yet. +At least one workspace has been created, such as `demo-workspace`. Besides, you need an account of the `workspace-admin` role (for example, `ws-admin`) at the workspace level. See [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/) if they are not ready yet. {{< notice note >}} diff --git a/content/en/news/kubesphere-3.0.0-ga-announcement.md b/content/en/news/kubesphere-3.0.0-ga-announcement.md index 6d37b094e..47859bd29 100644 --- a/content/en/news/kubesphere-3.0.0-ga-announcement.md +++ b/content/en/news/kubesphere-3.0.0-ga-announcement.md @@ -42,9 +42,9 @@ By doing so, we look to empower developers to share their middleware, big data, KubeSphere has been upgraded to integrate the powerful capacity of cloud platforms for networking and storage into container platforms. That means users will be provided with stable, secure and convenient storage and network services just as they can enjoy on IaaS platforms. 
-In network management, KubeSphere now supports tenant network isolation at both workspace and project levels, firewall policy management, and network policy management of native Kubernetes, providing a more enabling, secure environment for different tenants to access apps. In addition, Porter v0.3.0, **a CNCF-certified load balancer developed by KubeSphere team**, is also integrated, creating smooth user experiences for those who run Kubernetes clusters on-premises (e.g. bare metal). In this way, they can expose services in the way as easy as they operate on cloud.
+In network management, KubeSphere now supports tenant network isolation at both workspace and project levels, firewall policy management, and network policy management of native Kubernetes, providing a more enabling, secure environment for different tenants to access apps. In addition, Porter v0.3.0, **a CNCF-certified load balancer developed by KubeSphere team**, is also integrated, creating smooth user experiences for those who run Kubernetes clusters on-premises (for example, bare metal). In this way, they can expose services in the way as easy as they operate on cloud.

-Persistent storage is one of the most important capabilities for enterprises to run Kubernetes clusters in a production environment. A reliable and stable solution is vital to data storage and security. KubeSphere 3.0.0 adds support for **volume snapshots, capacity management, and volume monitoring**, offering persistent storage maintenance services for stateful applications in a more convenient way. In this connection, users only need to configure storage plugins (e.g. QingCloud CSI, AWS EBS CSI and Ceph CSI) provided in the infrastructure environment in their installation files. After that, KubeSphere will automatically integrate corresponding storage features of public or private cloud providers for volume maintenance and monitoring.
+Persistent storage is one of the most important capabilities for enterprises to run Kubernetes clusters in a production environment. A reliable and stable solution is vital to data storage and security. KubeSphere 3.0.0 adds support for **volume snapshots, capacity management, and volume monitoring**, offering persistent storage maintenance services for stateful applications in a more convenient way. In this connection, users only need to configure storage plugins (for example, QingCloud CSI, AWS EBS CSI and Ceph CSI) provided in the infrastructure environment in their installation files. After that, KubeSphere will automatically integrate corresponding storage features of public or private cloud providers for volume maintenance and monitoring.

## Department-wide Fine-grained Access Control

diff --git a/content/en/projects/_index.md b/content/en/projects/_index.md
index c7baae143..3643c1f81 100644
--- a/content/en/projects/_index.md
+++ b/content/en/projects/_index.md
@@ -18,7 +18,7 @@ groups:
 - title: OpenPitrix
   icon: 'https://pek3b.qingstor.com/kubesphere-docs/png/20200607231502.png'
   link: 'https://github.com/openpitrix/openpitrix'
-  description: OpenPitrix is an open source multi-cloud application management platform. It is useful in packing, deploying and managing applications of different kinds (e.g. traditional, microservice and serverless) in multiple cloud platforms, including AWS, Kubernetes, QingCloud and VMWare.
+  description: OpenPitrix is an open source multi-cloud application management platform. It is useful in packing, deploying and managing applications of different kinds (for example, traditional, microservice and serverless) in multiple cloud platforms, including AWS, Kubernetes, QingCloud and VMWare.
 - name: Service Proxy
   children: