Merge pull request #1749 from rodmiromind/release-3.0

Content SEO changes and interlinks
KubeSphere CI Bot 2021-07-02 11:59:25 +08:00 committed by GitHub
commit 72ee23ce55
7 changed files with 23 additions and 23 deletions

View File

@ -8,7 +8,7 @@ author: 'Shaowen Chen, Felix, Sherlock'
snapshot: '/images/blogs/en/argo-cd-a-tool-for-devops/argo-schematics.png'
---
In this post, I'll show you how Argo CD improves the Kubernetes DevOps process. Before we begin, let's look at some background information.
In this post, I'll show you how Argo CD improves the [Kubernetes DevOps](https://kubesphere.io/devops/) process. Before we begin, let's look at some background information.
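As a point of reference for the GitOps workflow described in this post, a minimal Argo CD `Application` manifest might look as follows. The repository URL, path, and namespaces are placeholders for illustration only:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
  namespace: argocd
spec:
  project: default
  source:
    # Git repository holding the desired Kubernetes manifests (placeholder URL)
    repoURL: https://github.com/example/guestbook-manifests.git
    targetRevision: HEAD
    path: guestbook
  destination:
    # Deploy into the same cluster Argo CD runs in
    server: https://kubernetes.default.svc
    namespace: guestbook
  syncPolicy:
    automated:
      # Keep the live state in sync with Git automatically
      prune: true
      selfHeal: true
```

Argo CD continuously compares the manifests in Git with the live state in the cluster and reconciles any drift it finds.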
## Argo CD Capability
@ -209,4 +209,4 @@ Last but not the least, when updating Kubernetes, Argo CD also supports various
## See Also
- [Argo CD website](https://argoproj.github.io/argo-cd/)
- [Argo CD Git Repo](https://github.com/argoproj/argo-cd/)

View File

@ -332,7 +332,7 @@ You can use KubeKey to install a specified Kubernetes version. The dependency th
## KubeSphere and its Graphic Dashboard
KubeSphere is a **distributed operating system managing cloud-native applications** with Kubernetes as its kernel. As an [open-source enterprise-grade container platform](https://kubesphere.io/), it boasts full-stack automated IT operation, multi-cluster management, and streamlined DevOps workflows. Here is the architecture of KubeSphere.
KubeSphere is a **distributed operating system managing cloud-native applications** with Kubernetes as its kernel. As an [open-source enterprise-grade container platform](https://kubesphere.io/), it boasts full-stack automated IT operation, multi-cluster management, and streamlined [DevOps workflows](https://kubesphere.io/devops/). Here is the architecture of KubeSphere.
![architecture](https://ap3.qingstor.com/kubesphere-website/docs/architecture.png)

View File

@ -1,5 +1,5 @@
---
title: 'Kubernetes Multi-cluster Deployment: Federation and KubeSphere'
title: 'Kubernetes Multi-cluster Deployment: Kubernetes Federation and KubeSphere'
keywords: Kubernetes, KubeSphere, Multi-cluster, Container
description: KubeSphere v3.0 supports the management of multiple clusters, isolated management of resources, and federated deployments.
tag: 'KubeSphere, Multi-cluster'
@ -10,7 +10,7 @@ snapshot: 'https://ap3.qingstor.com/kubesphere-website/docs/kubesphere-architect
## Scenarios for Multi-cluster Deployment
As container technology and Kubernetes see a surge in popularity, it is not uncommon for enterprises to run multiple clusters for their business. In general, here are the main scenarios where multiple clusters can be adopted.
As container technology and Kubernetes see a surge in popularity, it is not uncommon for enterprises to run multiple clusters for their business. In general, here are the main scenarios where multiple Kubernetes clusters can be adopted.
### High Availability
@ -28,7 +28,7 @@ Generally, it is much easier for multiple small clusters to isolate failures tha
### Business Isolation
Although Kubernetes provides namespaces as a solution to app isolation, this method only provides logical isolation. This is because different namespaces are connected through the network, which means the issue of resource preemption still exists. To achieve further isolation, you need to create additional network isolation policies or set resource quotas. Using multiple clusters can achieve complete physical isolation that is more secure and reliable than the isolation through namespaces. For example, this is extremely effective when different departments within an enterprise use multiple clusters for the deployment of development, testing, or production environments.
Although Kubernetes provides namespaces as a solution to app isolation, this method only provides logical isolation. This is because different namespaces are connected through the network, which means the issue of resource preemption still exists. To achieve further isolation, you need to create additional network isolation policies or set resource quotas. Using multiple Kubernetes clusters can achieve complete physical isolation that is more secure and reliable than the isolation through namespaces. For example, this is extremely effective when different departments within an enterprise use multiple clusters for the deployment of development, testing, or production environments.
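As a rough sketch of such a network isolation policy, the following NetworkPolicy allows Pods in a namespace to receive traffic only from Pods in that same namespace (the namespace name `dev` is just an example):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces
  namespace: dev
spec:
  # An empty podSelector matches every Pod in the dev namespace
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    # Allow ingress only from Pods in the same namespace
    - podSelector: {}
```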
![pipeline](https://ap3.qingstor.com/kubesphere-website/docs/pipeline.png)
@ -40,7 +40,7 @@ Kubernetes has become the de facto standard in container orchestration. Against
The application of multi-cluster deployment offers solutions to a variety of problems as we can see from the scenarios above. Nevertheless, it brings more complexity for operation and maintenance. For a single cluster, app deployment and upgrade are quite straightforward as you can directly update the cluster's YAML. For multiple clusters, you can update them one by one, but how can you guarantee the application load status is the same across different clusters? How can you implement service discovery among different clusters? How can you achieve load balancing across clusters? The answer given by the community is Federation.
### Federation v1
### Kubernetes Federation v1
![federation-v1](https://ap3.qingstor.com/kubesphere-website/docs/federation-v1.png)
@ -50,11 +50,11 @@ There are two versions of Federation with the original v1 already deprecated. In
In terms of API, federated resources are scheduled through annotations, ensuring great compatibility with the original Kubernetes API. As such, the original code can be reused and existing deployment files of users can be easily transferred without any major change. However, this also prevents users from taking further advantage of Federation for API evolution. At the same time, a corresponding controller is needed for each federated resource so that it can be scheduled to different clusters. Originally, Federation only supported a limited number of resource types.
### Federation v2
### Kubernetes Federation v2
![federation-v2](https://ap3.qingstor.com/kubesphere-website/docs/federation-v2.png)
The community developed Federation v2 (KubeFed) on the basis of v1. KubeFed has defined its own API standards through CRDs while deprecating the annotation method used before. The architecture has changed significantly as well, discarding Federated API Server and etcd that need to be deployed independently. The control plane of KubeFed adopts the popular implementation of CRD + Controller, which can be directly installed on existing Kubernetes clusters without any additional deployment.
The community developed Kubernetes Federation v2 (KubeFed) on the basis of v1. KubeFed has defined its own API standards through CRDs while deprecating the annotation method used before. The architecture has changed significantly as well, discarding Federated API Server and etcd that need to be deployed independently. The control plane of KubeFed adopts the popular implementation of CRD + Controller, which can be directly installed on existing Kubernetes clusters without any additional deployment.
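As a brief sketch of the KubeFed API style, a FederatedDeployment wraps an ordinary Deployment template together with a placement of target clusters and per-cluster overrides (the cluster names and values below are assumptions for illustration):

```yaml
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: test-deployment
  namespace: test-namespace
spec:
  template:
    # A regular Deployment spec used as the template for every target cluster
    metadata:
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx
  placement:
    # Clusters that should receive this Deployment (example names)
    clusters:
    - name: cluster1
    - name: cluster2
  # Per-cluster customization, e.g. a different replica count in cluster2
  overrides:
  - clusterName: cluster2
    clusterOverrides:
    - path: "/spec/replicas"
      value: 5
```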
KubeFed mainly defines four resource types:
@ -81,9 +81,9 @@ However, KubeFed also has some issues to be resolved:
## Multi-cluster Feature in KubeSphere
Resource federation is what the community has proposed to solve the issue of deployments across multiple clusters. For many enterprise users, the deployment of multiple clusters is not necessary. What is more important is that they need to be able to manage the resources across multiple clusters at the same time and in the same place.
Resource federation is what the community has proposed to solve the issue of deployments across multiple Kubernetes clusters. For many enterprise users, the deployment of multiple clusters is not necessary. What is more important is that they need to be able to manage the resources across multiple clusters at the same time and in the same place.
[KubeSphere](https://github.com/kubesphere) supports the management of multiple clusters, isolated management of resources, and federated deployments. In addition, it also features multi-dimensional queries (monitoring, logging, events and auditing) of resources such as clusters and apps, as well as alerts and notifications through various channels. Apps can be deployed on multiple clusters with CI/CD pipelines.
[KubeSphere](https://github.com/kubesphere) supports the management of multiple Kubernetes clusters, isolated management of resources, and federated deployments. In addition, it also features multi-dimensional queries (monitoring, logging, events and auditing) of resources such as clusters and apps, as well as alerts and notifications through various channels. Apps can be deployed on multiple clusters with CI/CD pipelines.
![kubesphere-workflow](https://ap3.qingstor.com/kubesphere-website/docs/workflow.png)
@ -95,7 +95,7 @@ KubeSphere 3.0 supports unified management of user access for the multi-cluster
![kubesphere-architecture](https://ap3.qingstor.com/kubesphere-website/docs/kubesphere-architecture.png)
The overall multi-cluster architecture of [KubeSphere](https://kubesphere.io/) is shown above. The cluster where the control plane is located is called the Host cluster. A cluster managed by the Host cluster is called a Member cluster, which is essentially a Kubernetes cluster with KubeSphere installed. The Host cluster needs to be able to access the kube-apiserver of Member clusters. Besides, there is no requirement for network connectivity between Member clusters. The Host cluster is independent of the Member clusters it manages, which are unaware of the Host cluster's existence. The advantage of this design is that when the Host cluster malfunctions, Member clusters are not affected and deployed workloads continue to run.
The overall multi-cluster architecture of KubeSphere [Container Platform](https://kubesphere.io/) is shown above. The cluster where the control plane is located is called the Host cluster. A cluster managed by the Host cluster is called a Member cluster, which is essentially a Kubernetes cluster with KubeSphere installed. The Host cluster needs to be able to access the kube-apiserver of Member clusters. Besides, there is no requirement for network connectivity between Member clusters. The Host cluster is independent of the Member clusters it manages, which are unaware of the Host cluster's existence. The advantage of this design is that when the Host cluster malfunctions, Member clusters are not affected and deployed workloads continue to run.
In addition, the Host cluster also serves as the entry point for API requests. It forwards all resource requests for Member clusters to them. In this way, not only can requests be aggregated, but authentication and authorization can also be implemented in a unified fashion.
@ -136,4 +136,4 @@ The topic of multi-cluster deployment is far more complicated than we think. The
1. KubeFed: https://github.com/kubernetes-sigs/kubefed
2. KubeSphere Website: https://kubesphere.io/
3. Kubernetes Federation Evolution: https://kubernetes.io/blog/2018/12/12/kubernetes-federation-evolution/
4. KubeSphere GitHub: https://github.com/kubesphere

View File

@ -8,7 +8,7 @@ weight: 13400
A container can use as much CPU and memory as set by [the resource quota for a project](../../workspace-administration/project-quotas/). At the same time, KubeSphere uses requests and limits to control resource (for example, CPU and memory) usage for a container, also known as [LimitRanges](https://kubernetes.io/docs/concepts/policy/limit-range/) in Kubernetes. Requests make sure the container can get the resources it needs as they are specifically guaranteed and reserved. On the contrary, limits ensure that a container can never use resources above a certain value.
When you create a workload, such as a Deployment, you configure resource requests and limits for the container. To make these request and limit fields pre-populated with values, you can set default limit ranges.
When you create a workload, such as a Deployment, you configure [Kubernetes requests and limits](https://kubesphere.io/blogs/understand-requests-and-limits-in-kubernetes/) for the container. To make these request and limit fields pre-populated with values, you can set default limit ranges.
This tutorial demonstrates how to set default limit ranges for containers in a project.
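Under the hood, a default limit range corresponds to a Kubernetes LimitRange object similar to the one below (the CPU and memory values are illustrative, not recommendations):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limit-range
spec:
  limits:
  - type: Container
    # Default limits applied to containers that do not specify their own
    default:
      cpu: 500m
      memory: 512Mi
    # Default requests applied to containers that do not specify their own
    defaultRequest:
      cpu: 250m
      memory: 256Mi
```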
@ -56,4 +56,4 @@ You have an available workspace, a project and an account (`project-admin`). The
## See Also
[Project Quotas](../../workspace-administration/project-quotas/)

View File

@ -1,8 +1,8 @@
---
title: "Blue-green Deployment"
title: "Kubernetes Blue-green Deployment in Kubesphere"
keywords: 'KubeSphere, Kubernetes, service mesh, istio, release, blue-green deployment'
description: 'Learn how to release a blue-green deployment in KubeSphere.'
linkTitle: "Blue-green Deployment"
linkTitle: "Blue-Green Deployment with Kubernetes"
weight: 10520
---

View File

@ -8,7 +8,7 @@ aliases:
weight: 9600
---
KubeSphere uses requests and limits to control resource (for example, CPU and memory) usage in a project, also known as [ResourceQuotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) in Kubernetes. Requests make sure a project can get the resources it needs as they are specifically guaranteed and reserved. On the contrary, limits ensure that a project can never use resources above a certain value.
KubeSphere uses [Kubernetes requests and limits](https://kubesphere.io/blogs/understand-requests-and-limits-in-kubernetes/) to control resource (for example, CPU and memory) usage in a project, also known as [ResourceQuotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) in Kubernetes. Requests make sure a project can get the resources it needs as they are specifically guaranteed and reserved. On the contrary, limits ensure that a project can never use resources above a certain value.
Besides CPU and memory, you can also set resource quotas for other objects separately such as Pods, [Deployments](../../project-user-guide/application-workloads/deployments/), [Jobs](../../project-user-guide/application-workloads/jobs/), [Services](../../project-user-guide/application-workloads/services/) and [ConfigMaps](../../project-user-guide/configuration/configmaps/) in a project.
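For reference, the project quota configured on the dashboard maps to a Kubernetes ResourceQuota object along these lines (the quota values are illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-quota
spec:
  hard:
    # Aggregate CPU and memory requests/limits allowed in the project
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    # Object count quotas for other resource types
    pods: "50"
    count/deployments.apps: "20"
    count/configmaps: "40"
```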
@ -58,4 +58,4 @@ If you use the account `project-admin` (an account of the `admin` role at the pr
## See Also
[Container Limit Ranges](../../project-administration/container-limit-ranges/)

View File

@ -1,5 +1,5 @@
---
title: "service mesh"
title: "Kubernetes Service Mesh with Istio"
layout: "scenario"
css: "scss/scenario.scss"
@ -15,7 +15,7 @@ bg: /images/service-mesh/28.svg
section2:
title: What Makes KubeSphere Service Mesh Special
list:
- title: Traffic Management
- title: Service Mesh Traffic Management
image: /images/service-mesh/traffic-management.png
summary:
contentList:
@ -24,12 +24,12 @@ section2:
- content: <span>Traffic mirroring</span> is a powerful, risk-free method of testing your app versions as it sends a copy of live traffic to a mirrored Service
- content: <span>Circuit breakers</span> allow users to set limits for calls to individual hosts within a Service
- title: Visualization
- title: Microservices Visualization
image: /images/service-mesh/visualization.png
summary: Observability is extremely useful in understanding cloud-native microservice interconnections. KubeSphere can visualize how microservices connect with each other and their overall topology.
contentList:
- title: Distributed Tracing
- title: Distributed Tracing for Kubernetes
image: /images/service-mesh/distributed-tracing.png
summary: Based on Jaeger, KubeSphere enables users to track how Services interact with each other. It brings a deeper understanding of request latency, bottlenecks, serialization and parallelism via visualization.
contentList: