Mirror of https://github.com/kubesphere/website.git, synced 2025-12-29 15:42:49 +00:00
zh-cn localization of the Introduction chapter of multicluster management.
Signed-off-by: Bingo Liao <44894824@qq.com>
This commit is contained in:
parent 11816b8f63
commit dbdbf89dea
@@ -1,5 +1,5 @@
 ---
-linkTitle: "Introduction"
+linkTitle: "介绍"
 weight: 3005
 
 _build:
@@ -1,13 +1,13 @@
 ---
-title: "Kubernetes Federation in KubeSphere"
-keywords: 'Kubernetes, KubeSphere, federation, multicluster, hybrid-cloud'
-description: 'Overview'
+title: "KubeSphere 中的 Kubernetes 联邦"
+keywords: 'Kubernetes, KubeSphere, 联邦, 多集群, 混合云'
+description: '概要'
 
 weight: 3007
 ---
 
-The multi-cluster feature relates to the network connection among multiple clusters. Therefore, it is important to understand the topological relations of clusters as the workload can be reduced.
+多群集功能与多个群集之间的网络连接有关。因此,了解集群的拓扑关系很重要,这样可以减少工作量。
 
-Before you use the multi-cluster feature, you need to create a Host Cluster (hereafter referred to as **H** Cluster), which is actually a KubeSphere cluster with the multi-cluster feature enabled. All the clusters managed by the H Cluster are called Member Cluster (hereafter referred to as **M** Cluster). They are common KubeSphere clusters that do not have the multi-cluster feature enabled. There can only be one H Cluster while multiple M Clusters can exist at the same time. In a multi-cluster architecture, the network between the H Cluster and the M Cluster can be connected directly or through an agent. The network between M Clusters can be set in a completely isolated environment.
+在使用多集群功能之前,您需要创建一个主集群(Host Cluster,以下简称 **H** 集群),H 集群实际上是启用了多集群功能的 KubeSphere 集群。所有被 H 集群管理的集群称为成员集群(Member Cluster,以下简称 **M** 集群)。M 集群是未启用多集群功能的普通 KubeSphere 集群。只能有一个 H 集群存在,而多个 M 集群可以同时存在。在多集群体系结构中,H 集群和 M 集群之间的网络可以直接连接,也可以通过代理连接。M 集群之间的网络可以设置在完全隔离的环境中。
 
-![federation](/images/docs/multicluster-management/introduction/kubesphere-federation.png)
+![federation](/images/docs/zh-cn/multicluster-management/introduction/kubesphere-federation.png)
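For context on the page being translated above: the Host/Member split it describes is selected per cluster through the `clusterRole` field of the ks-installer `ClusterConfiguration`. A minimal sketch, assuming a default KubeSphere installation (only the `multicluster` block is shown; all other fields are omitted):

```yaml
# Hedged sketch of the ks-installer ClusterConfiguration.
# clusterRole decides whether this KubeSphere cluster acts as
# the single Host (H) Cluster or as one of the Member (M) Clusters.
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
spec:
  multicluster:
    clusterRole: host   # use "member" on each M Cluster instead
```

Only one cluster should carry `clusterRole: host`; every cluster it manages is installed with `clusterRole: member`, matching the one-H/many-M constraint stated in the paragraph.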
@@ -1,15 +1,15 @@
 ---
-title: "Overview"
-keywords: 'Kubernetes, KubeSphere, multicluster, hybrid-cloud'
-description: 'Overview'
+title: "概要"
+keywords: 'Kubernetes, KubeSphere, 多集群, 混合云'
+description: '概要'
 
 weight: 3006
 ---
 
-Today, it's very common for organizations to run and manage multiple Kubernetes clusters across different cloud providers or infrastructures. As each Kubernetes cluster is a relatively self-contained unit, the upstream community is struggling to research and develop a multi-cluster management solution. That said, Kubernetes Cluster Federation ([KubeFed](https://github.com/kubernetes-sigs/kubefed) for short) may be a possible approach among others.
+如今,在不同的云服务提供商或者基础设施上运行和管理多个 Kubernetes 集群已经非常普遍。由于每个 Kubernetes 集群都是一个相对独立的单元,上游社区正努力研发多集群管理解决方案。也就是说,Kubernetes 集群联邦(Kubernetes Cluster Federation,简称 [KubeFed](https://github.com/kubernetes-sigs/kubefed))可能是其中一种可行的方法。
 
-The most common use cases of multi-cluster management include service traffic load balancing, development and production isolation, decoupling of data processing and data storage, cross-cloud backup and disaster recovery, flexible allocation of computing resources, low latency access with cross-region services, and vendor lock-in avoidance.
+多集群管理最常见的用例包括服务流量负载均衡、开发和生产的隔离、数据处理和数据存储的分离、跨云备份和灾难恢复、计算资源的灵活分配、跨区域服务的低延迟访问以及厂商捆绑的防范。
 
-KubeSphere is developed to address multi-cluster and multi-cloud management challenges and implement the preceding user scenarios, providing users with a unified control plane to distribute applications and their replicas to multiple clusters from public cloud to on-premises environments. KubeSphere also provides rich observability across multiple clusters, including centralized monitoring, logging, events, and auditing logs.
+KubeSphere 的开发旨在解决多集群和多云管理的难题,并实现上述用户场景,为用户提供统一的控制平面,以将应用程序及其副本分发到从公有云到本地环境的多个集群。KubeSphere 还提供跨多个集群的丰富的可观察性,包括集中式监控、日志、事件和审计日志。
 
-![overview](/images/docs/multicluster-management/introduction/overview.png)
+![overview](/images/docs/zh-cn/multicluster-management/introduction/overview.png)
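The Overview page above names KubeFed as one possible federation approach. For reference, KubeFed models propagation with federated resource types: a template plus an explicit cluster placement list. A minimal sketch, assuming KubeFed is installed and two member clusters are joined (the names `cluster-a`/`cluster-b` and the `demo` namespace are placeholders):

```yaml
# Hedged sketch of a KubeFed FederatedDeployment.
# The template is an ordinary Deployment spec; placement lists
# the member clusters the resource is propagated to.
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: demo
  namespace: demo          # must be a federated namespace
spec:
  template:
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: demo
      template:
        metadata:
          labels:
            app: demo
        spec:
          containers:
          - name: demo
            image: nginx:1.25
  placement:
    clusters:
    - name: cluster-a      # placeholder member cluster names
    - name: cluster-b
```

Applying this on the host control plane creates the same Deployment in each listed member cluster, which is the mechanism behind the "unified control plane to distribute applications" claim in the paragraph above.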