From a65f9d50fb445c1942b713cf3814e88d6cdfb664 Mon Sep 17 00:00:00 2001 From: Rodion Miromind Date: Mon, 28 Jun 2021 10:37:33 +0300 Subject: [PATCH 1/9] Update multi-cluster-deployment.md --- content/en/blogs/multi-cluster-deployment.md | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/content/en/blogs/multi-cluster-deployment.md b/content/en/blogs/multi-cluster-deployment.md index 096ad243b..8dfe1163c 100644 --- a/content/en/blogs/multi-cluster-deployment.md +++ b/content/en/blogs/multi-cluster-deployment.md @@ -1,5 +1,5 @@ --- -title: 'Kubernetes Multi-cluster Deployment: Federation and KubeSphere' +title: 'Kubernetes Multi-cluster Deployment: Kubernetes Federation and KubeSphere' keywords: Kubernetes, KubeSphere, Multi-cluster, Container description: KubeSphere v3.0 supports the management of multiple clusters, isolated management of resources, and federated deployments. tag: 'KubeSphere, Multi-cluster' @@ -10,7 +10,7 @@ snapshot: 'https://ap3.qingstor.com/kubesphere-website/docs/kubesphere-architect ## Scenarios for Multi-cluster Deployment -As the container technology and Kubernetes see a surge in popularity among their users, it is not uncommon for enterprises to run multiple clusters for their business. In general, here are the main scenarios where multiple clusters can be adopted. +As the container technology and Kubernetes see a surge in popularity among their users, it is not uncommon for enterprises to run multiple clusters for their business. In general, here are the main scenarios where multiple Kubernetes clusters can be adopted. ### High Availability @@ -28,7 +28,7 @@ Generally, it is much easier for multiple small clusters to isolate failures tha ### Business Isolation -Although Kubernetes provides namespaces as a solution to app isolation, this method only represents the isolation in logic. 
This is because different namespaces are connected through the network, which means the issue of resource preemption still exists. To achieve further isolation, you need to create additional network isolation policies or set resource quotas. Using multiple clusters can achieve complete physical isolation that is more secure and reliable than the isolation through namespaces. For example, this is extremely effective when different departments within an enterprise use multiple clusters for the deployment of development, testing or production environments. +Although Kubernetes provides namespaces as a solution to app isolation, this method only represents the isolation in logic. This is because different namespaces are connected through the network, which means the issue of resource preemption still exists. To achieve further isolation, you need to create additional network isolation policies or set resource quotas. Using multiple Kubernetes clusters can achieve complete physical isolation that is more secure and reliable than the isolation through namespaces. For example, this is extremely effective when different departments within an enterprise use multiple clusters for the deployment of development, testing or production environments. ![pipeline](https://ap3.qingstor.com/kubesphere-website/docs/pipeline.png) @@ -40,7 +40,7 @@ Kubernetes has become the de facto standard in container orchestration. Against The application of multi-cluster deployment offers solutions to a variety of problems as we can see from the scenarios above. Nevertheless, it brings more complexity for operation and maintenance. For a single cluster, app deployment and upgrade are quite straightforward as you can directly update yaml of the cluster. For multiple clusters, you can update them one by one, but how can you guarantee the application load status is the same across different clusters? How to implement service discovery among different clusters? 
How to achieve load balancing across clusters? The answer given by the community is Federation. -### Federation v1 +### Kubernetes Federation v1 ![federation-v1](https://ap3.qingstor.com/kubesphere-website/docs/federation-v1.png) @@ -50,11 +50,11 @@ There are two versions of Federation with the original v1 already deprecated. In In terms of API, federated resources are scheduled through annotations, ensuring great compatibility with the original Kubernetes API. As such, the original code can be reused and existing deployment files of users can be easily transferred without any major change. However, this also prevents users from taking further advantage of Federation for API evolution. At the same time, a corresponding controller is needed for each federated resource so that they can be scheduled to different clusters. Originally, Federation only supported a limited number of resource type. -### Federation v2 +### Kubernetes Federation v2 ![federation-v2](https://ap3.qingstor.com/kubesphere-website/docs/federation-v2.png) -The community developed Federation v2 (KubeFed) on the basis of v1. KubeFed has defined its own API standards through CRDs while deprecating the annotation method used before. The architecture has changed significantly as well, discarding Federated API Server and etcd that need to be deployed independently. The control plane of KubeFed adopts the popular implementation of CRD + Controller, which can be directly installed on existing Kubernetes clusters without any additional deployment. +The community developed Kubernetes Federation v2 (KubeFed) on the basis of v1. KubeFed has defined its own API standards through CRDs while deprecating the annotation method used before. The architecture has changed significantly as well, discarding Federated API Server and etcd that need to be deployed independently. 
The control plane of KubeFed adopts the popular implementation of CRD + Controller, which can be directly installed on existing Kubernetes clusters without any additional deployment. KubeFed mainly defines four resource types: @@ -81,9 +81,9 @@ However, KubeFed also has some issues to be resolved: ## Multi-cluster Feature in KubeSphere -Resource federation is what the community has proposed to solve the issue of deployments across multiple clusters. For many enterprise users, the deployment of multiple clusters is not necessary. What is more important is that they need to be able to manage the resources across multiple clusters at the same time and in the same place. +Resource federation is what the community has proposed to solve the issue of deployments across multiple Kubernetes clusters. For many enterprise users, the deployment of multiple clusters is not necessary. What is more important is that they need to be able to manage the resources across multiple clusters at the same time and in the same place. -[KubeSphere](https://github.com/kubesphere) supports the management of multiple clusters, isolated management of resources, and federated deployments. In addition, it also features multi-dimensional queries (monitoring, logging, events and auditing) of resources such as clusters and apps, as well as alerts and notifications through various channels. Apps can be deployed on multiple clusters with CI/CD pipelines. +[KubeSphere](https://github.com/kubesphere) supports the management of multiple Kubernetes clusters, isolated management of resources, and federated deployments. In addition, it also features multi-dimensional queries (monitoring, logging, events and auditing) of resources such as clusters and apps, as well as alerts and notifications through various channels. Apps can be deployed on multiple clusters with CI/CD pipelines. 
![kubesphere-workflow](https://ap3.qingstor.com/kubesphere-website/docs/workflow.png) @@ -136,4 +136,4 @@ The topic of multi-cluster deployment is far more complicated than we think. The 1. KubeFed: https://github.com/kubernetes-sigs/kubefed 2. KubeSphere Website: https://kubesphere.io/ 3. Kubernetes Federation Evolution: https://kubernetes.io/blog/2018/12/12/kubernetes-federation-evolution/ -4. KubeSphere GitHub: https://github.com/kubesphere \ No newline at end of file +4. KubeSphere GitHub: https://github.com/kubesphere From e6d7b03db287d70416cf210d3a5c9d85a7df2fd6 Mon Sep 17 00:00:00 2001 From: Rodion Miromind Date: Mon, 28 Jun 2021 10:41:23 +0300 Subject: [PATCH 2/9] Update multi-cluster-deployment.md --- content/en/blogs/multi-cluster-deployment.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blogs/multi-cluster-deployment.md b/content/en/blogs/multi-cluster-deployment.md index 8dfe1163c..26a5cb439 100644 --- a/content/en/blogs/multi-cluster-deployment.md +++ b/content/en/blogs/multi-cluster-deployment.md @@ -95,7 +95,7 @@ KubeSphere 3.0 supports unified management of user access for the multi-cluster ![kubesphere-architecture](https://ap3.qingstor.com/kubesphere-website/docs/kubesphere-architecture.png) -The overall multi-cluster architecture of [KubeSphere](https://kubesphere.io/) is shown above. The cluster where the control plane is located is called Host cluster. The cluster managed by the Host cluster is called Member cluster, which is essentially a Kubernetes cluster with KubeSphere installed. The Host cluster needs to be able to access the kube-apiserver of Member clusters. Besides, there is no requirement for the network connectivity between Member clusters. The Host cluster is independent of the member clusters managed by it, which do not know the existence of the Host cluster. 
The advantage of the logic is that when the Host cluster malfunctions, Member clusters will not be affected and deployed workloads can continue to run as well. +The overall multi-cluster architecture of KubeSphere [Container Platform](https://kubesphere.io/) is shown above. The cluster where the control plane is located is called Host cluster. The cluster managed by the Host cluster is called Member cluster, which is essentially a Kubernetes cluster with KubeSphere installed. The Host cluster needs to be able to access the kube-apiserver of Member clusters. Besides, there is no requirement for the network connectivity between Member clusters. The Host cluster is independent of the member clusters managed by it, which do not know the existence of the Host cluster. The advantage of the logic is that when the Host cluster malfunctions, Member clusters will not be affected and deployed workloads can continue to run as well. In addition, the Host cluster also serves as an entry for API requests. It will forward all resource requests for member clusters to them. In this way, not only can requests be aggregated, but also authentication and authorization can be implemented in a unified fashion. 
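Editor's note on the federation mechanics discussed in the first patch above: a KubeFed federated resource couples a template with a placement and optional per-cluster overrides. The sketch below is illustrative only and is not part of any patch in this series — the namespace, cluster names, and image are hypothetical; the API group follows KubeFed's `types.kubefed.io/v1beta1`.

```yaml
# Illustrative KubeFed FederatedDeployment: the template is propagated to the
# clusters listed under placement, with per-cluster overrides applied on top.
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: demo-app          # hypothetical name
  namespace: demo         # must be a federated namespace
spec:
  template:               # an ordinary Deployment spec sent to each target cluster
    metadata:
      labels:
        app: demo-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: demo-app
      template:
        metadata:
          labels:
            app: demo-app
        spec:
          containers:
          - name: demo-app
            image: nginx:1.19
  placement:              # which member clusters receive the Deployment
    clusters:
    - name: cluster-beijing
    - name: cluster-shanghai
  overrides:              # per-cluster deviations from the template
  - clusterName: cluster-shanghai
    clusterOverrides:
    - path: "/spec/replicas"
      value: 4
```

Applied on the control-plane cluster, this would create the Deployment in both member clusters, with four replicas in `cluster-shanghai` and two elsewhere.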
From e35fa6574e0ccb0a999c71b051125580ed48415d Mon Sep 17 00:00:00 2001 From: Rodion Miromind Date: Mon, 28 Jun 2021 11:07:55 +0300 Subject: [PATCH 3/9] Update project-quotas.md --- content/en/docs/workspace-administration/project-quotas.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/en/docs/workspace-administration/project-quotas.md b/content/en/docs/workspace-administration/project-quotas.md index 4cd19ad06..44875aa0a 100644 --- a/content/en/docs/workspace-administration/project-quotas.md +++ b/content/en/docs/workspace-administration/project-quotas.md @@ -8,7 +8,7 @@ aliases: weight: 9600 --- -KubeSphere uses requests and limits to control resource (for example, CPU and memory) usage in a project, also known as [ResourceQuotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) in Kubernetes. Requests make sure a project can get the resources it needs as they are specifically guaranteed and reserved. On the contrary, limits ensure that a project can never use resources above a certain value. +KubeSphere uses [Kubernetes requests and limits](https://kubesphere.io/blogs/understand-requests-and-limits-in-kubernetes/) to control resource (for example, CPU and memory) usage in a project, also known as [ResourceQuotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) in Kubernetes. Requests make sure a project can get the resources it needs as they are specifically guaranteed and reserved. On the contrary, limits ensure that a project can never use resources above a certain value. Besides CPU and memory, you can also set resource quotas for other objects separately such as Pods, [Deployments](../../project-user-guide/application-workloads/deployments/), [Jobs](../../project-user-guide/application-workloads/jobs/), [Services](../../project-user-guide/application-workloads/services/) and [ConfigMaps](../../project-user-guide/configuration/configmaps/) in a project. 
@@ -58,4 +58,4 @@ If you use the account `project-admin` (an account of the `admin` role at the pr ## See Also -[Container Limit Ranges](../../project-administration/container-limit-ranges/) \ No newline at end of file +[Container Limit Ranges](../../project-administration/container-limit-ranges/) From 9d6131dfc35276476f4f5766c4d6bc10bd2399ec Mon Sep 17 00:00:00 2001 From: Rodion Miromind Date: Mon, 28 Jun 2021 10:45:50 +0300 Subject: [PATCH 4/9] Update _index.md --- content/en/service-mesh/_index.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/content/en/service-mesh/_index.md b/content/en/service-mesh/_index.md index d1dcbeee7..b6b224ea2 100644 --- a/content/en/service-mesh/_index.md +++ b/content/en/service-mesh/_index.md @@ -1,5 +1,5 @@ --- -title: "service mesh" +title: "Kubernetes Service Mesh with Istio" layout: "scenario" css: "scss/scenario.scss" @@ -15,7 +15,7 @@ bg: /images/service-mesh/28.svg section2: title: What Makes KubeSphere Service Mesh Special list: - - title: Traffic Management + - title: Service Mesh traffic management image: /images/service-mesh/traffic-management.png summary: contentList: @@ -24,12 +24,12 @@ section2: - content: Traffic mirroring is a powerful, risk-free method of testing your app versions as it sends a copy of live traffic to a mirrored Service - content: Circuit breakers allow users to set limits for calls to individual hosts within a Service - - title: Visualization + - title: Microservices Visualization image: /images/service-mesh/visualization.png summary: Observability is extremely useful in understanding cloud-native microservice interconnections. KubeSphere has the ability to visualize the connections between microservices and the topology of how they interconnect. 
contentList: - - title: Distributed Tracing + - title: Distributed Tracing for Kubernetes image: /images/service-mesh/distributed-tracing.png summary: Based on Jaeger, KubeSphere enables users to track how each Service interacts with each other. It brings a deeper understanding about request latency, bottlenecks, serialization and parallelism via visualization. contentList: From f2c60ff9c25764308264f15679ca2be2e881c256 Mon Sep 17 00:00:00 2001 From: Rodion Miromind Date: Mon, 28 Jun 2021 10:47:57 +0300 Subject: [PATCH 5/9] Update blue-green-deployment.md --- .../grayscale-release/blue-green-deployment.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/en/docs/project-user-guide/grayscale-release/blue-green-deployment.md b/content/en/docs/project-user-guide/grayscale-release/blue-green-deployment.md index 695d26871..656098020 100644 --- a/content/en/docs/project-user-guide/grayscale-release/blue-green-deployment.md +++ b/content/en/docs/project-user-guide/grayscale-release/blue-green-deployment.md @@ -1,8 +1,8 @@ --- -title: "Blue-green Deployment" +title: "Kubernetes Blue-green Deployment in KubeSphere" keywords: 'KubeSphere, Kubernetes, service mesh, istio, release, blue-green deployment' description: 'Learn how to release a blue-green deployment in KubeSphere.' 
-linkTitle: "Blue-green Deployment" +linkTitle: "Blue-Green Deployment with Kubernetes" weight: 10520 --- From 58140f60325ed43c1258321f1a10630bcfd39d2e Mon Sep 17 00:00:00 2001 From: Rodion Miromind Date: Mon, 28 Jun 2021 10:54:11 +0300 Subject: [PATCH 6/9] Update argo-cd-a-tool-for-devops.md --- content/en/blogs/argo-cd-a-tool-for-devops.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/en/blogs/argo-cd-a-tool-for-devops.md b/content/en/blogs/argo-cd-a-tool-for-devops.md index d44e756d5..89dbe832c 100644 --- a/content/en/blogs/argo-cd-a-tool-for-devops.md +++ b/content/en/blogs/argo-cd-a-tool-for-devops.md @@ -8,7 +8,7 @@ author: 'Shaowen Chen, Felix, Sherlock' snapshot: '/images/blogs/en/argo-cd-a-tool-for-devops/argo-schematics.png' --- -In this post, I'll show you how Argo CD betters Kubernetes DevOps process. Before we begin, let's look at some background information. +In this post, I'll show you how Argo CD improves the [Kubernetes DevOps](https://kubesphere.io/devops/) process. Before we begin, let's look at some background information. 
## Argo CD Capability @@ -209,4 +209,4 @@ Last but not the least, when updating Kubernetes, Argo CD also supports various ## See Also - [Argo CD website](https://argoproj.github.io/argo-cd/) -- [Argo CD Git Repo](https://github.com/argoproj/argo-cd/) \ No newline at end of file +- [Argo CD Git Repo](https://github.com/argoproj/argo-cd/) From 5b5200562db442d7562d3be5a8b93805e825dcc5 Mon Sep 17 00:00:00 2001 From: Rodion Miromind Date: Mon, 28 Jun 2021 10:56:32 +0300 Subject: [PATCH 7/9] Update container-limit-ranges.md --- .../en/docs/project-administration/container-limit-ranges.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/content/en/docs/project-administration/container-limit-ranges.md b/content/en/docs/project-administration/container-limit-ranges.md index 7cf5df033..d00696fa5 100644 --- a/content/en/docs/project-administration/container-limit-ranges.md +++ b/content/en/docs/project-administration/container-limit-ranges.md @@ -8,7 +8,7 @@ A container can use as much CPU and memory as set by [the resource quota for a project](../../workspace-administration/project-quotas/). At the same time, KubeSphere uses requests and limits to control resource (for example, CPU and memory) usage for a container, also known as [LimitRanges](https://kubernetes.io/docs/concepts/policy/limit-range/) in Kubernetes. Requests make sure the container can get the resources it needs as they are specifically guaranteed and reserved. On the contrary, limits ensure that container can never use resources above a certain value. -When you create a workload, such as a Deployment, you configure resource requests and limits for the container. To make these request and limit fields pre-populated with values, you can set default limit ranges. +When you create a workload, such as a Deployment, you configure [Kubernetes requests and limits](https://kubesphere.io/blogs/understand-requests-and-limits-in-kubernetes/) for the container. 
To make these request and limit fields pre-populated with values, you can set default limit ranges. This tutorial demonstrates how to set default limit ranges for containers in a project. @@ -56,4 +56,4 @@ You have an available workspace, a project and an account (`project-admin`). The ## See Also -[Project Quotas](../../workspace-administration/project-quotas/) \ No newline at end of file +[Project Quotas](../../workspace-administration/project-quotas/) From 1198cc205f81ea37409cc730833eec7391163781 Mon Sep 17 00:00:00 2001 From: Rodion Miromind Date: Mon, 28 Jun 2021 10:59:47 +0300 Subject: [PATCH 8/9] Update install-kubernetes-using-kubekey.md --- content/en/blogs/install-kubernetes-using-kubekey.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/blogs/install-kubernetes-using-kubekey.md b/content/en/blogs/install-kubernetes-using-kubekey.md index c8198f83b..a2bb16688 100644 --- a/content/en/blogs/install-kubernetes-using-kubekey.md +++ b/content/en/blogs/install-kubernetes-using-kubekey.md @@ -332,7 +332,7 @@ You can use KubeKey to install a specified Kubernetes version. The dependency th ## KubeSphere and its Graphic Dashboard -KubeSphere is a **distributed operating system managing cloud-native applications** with Kubernetes as its kernel. As an [open-source enterprise-grade container platform](https://kubesphere.io/), it boasts full-stack automated IT operation, multi-cluster management, and streamlined DevOps workflows. Here is the architecture of KubeSphere. +KubeSphere is a **distributed operating system managing cloud-native applications** with Kubernetes as its kernel. As an [open-source enterprise-grade container platform](https://kubesphere.io/), it boasts full-stack automated IT operation, multi-cluster management, and streamlined [DevOps workflows](https://kubesphere.io/devops/). Here is the architecture of KubeSphere. 
![architecture](https://ap3.qingstor.com/kubesphere-website/docs/architecture.png) From 320db5c77b33d11c95f0d2808773e38a4e3425a3 Mon Sep 17 00:00:00 2001 From: Rodion Miromind Date: Mon, 28 Jun 2021 11:23:41 +0300 Subject: [PATCH 9/9] Update _index.md --- content/en/service-mesh/_index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/content/en/service-mesh/_index.md b/content/en/service-mesh/_index.md index b6b224ea2..4b7e6655f 100644 --- a/content/en/service-mesh/_index.md +++ b/content/en/service-mesh/_index.md @@ -15,7 +15,7 @@ bg: /images/service-mesh/28.svg section2: title: What Makes KubeSphere Service Mesh Special list: - - title: Service Mesh traffic management + - title: Service Mesh Traffic Management image: /images/service-mesh/traffic-management.png summary: contentList:
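Editor's note on the project quotas and default container limit ranges edited in patches 3 and 7 above: they correspond to plain Kubernetes `ResourceQuota` and `LimitRange` objects. A minimal sketch follows; the namespace and all values are illustrative, not KubeSphere defaults.

```yaml
# ResourceQuota caps aggregate requests/limits for a project (namespace);
# LimitRange pre-populates per-container defaults when none are specified.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-quota
  namespace: demo-project      # hypothetical project namespace
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    pods: "10"
---
apiVersion: v1
kind: LimitRange
metadata:
  name: demo-limit-range
  namespace: demo-project
spec:
  limits:
  - type: Container
    defaultRequest:            # request filled in when a container omits one
      cpu: 100m
      memory: 256Mi
    default:                   # limit filled in when a container omits one
      cpu: 500m
      memory: 512Mi
```

With both objects in place, a container created without explicit values gets the `defaultRequest`/`default` pair, and the quota rejects workloads once the namespace totals would exceed the `hard` caps.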