【Documentation】Add ks-high-availability (#3266)

* add oidc doc

Signed-off-by: zhuxiujuan28 <562873187@qq.com>

* add ks-high-availability

Signed-off-by: zhuxiujuan28 <562873187@qq.com>

* minor fix

Signed-off-by: zhuxiujuan28 <562873187@qq.com>

* change directory and translate

Signed-off-by: zhuxiujuan28 <562873187@qq.com>

* add layout

Signed-off-by: zhuxiujuan28 <562873187@qq.com>

* add layout

Signed-off-by: zhuxiujuan28 <562873187@qq.com>

---------

Signed-off-by: zhuxiujuan28 <562873187@qq.com>
zhuxiujuan28 2025-05-15 17:12:36 +08:00 committed by GitHub
parent 115a2c1e88
commit cd04038765
14 changed files with 512 additions and 124 deletions

View File

@ -1,21 +1,13 @@
---
title: "Configure High Availability"
linkTitle: "Configure High Availability"
title: "Configure Kubernetes High Availability"
keywords: "Kubernetes, KubeSphere, Installation, Preparation, High Availability"
description: "Learn how to configure high availability in case of a single control plane node failure."
weight: 03
weight: 02
---
This section explains how to configure multiple control plane nodes for a KubeSphere cluster in a production environment so that cluster services remain operational even if a single control plane node fails. If your KubeSphere cluster does not require high availability, you can skip this section.
// Note
include::../../../../_ks_components-en/admonitions/note.adoc[]
The high availability configuration for KubeSphere is only supported when installing Kubernetes and {ks_product-en} together. If you are installing {ks_product-en} on an existing Kubernetes cluster, {ks_product-en} will utilize the existing high availability configuration of the Kubernetes cluster.
include::../../../../_ks_components-en/admonitions/admonEnd.adoc[]
This section explains the following methods for configuring high availability:
* **Local Load Balancer Configuration**: You can have the KubeKey tool install HAProxy on the worker nodes during the KubeSphere installation. HAProxy acts as a reverse proxy for the control plane nodes, and the Kubernetes components on the worker nodes connect to the control plane nodes through it. This method requires an additional health check mechanism and is less efficient than the other methods, but it can be used in scenarios without a dedicated load balancer and with a limited number of servers.
@ -29,18 +21,17 @@ This section explains the following methods for configuring high availability:
To use HAProxy for high availability, you need to configure the following parameters in the installation configuration file **config-sample.yaml** during the installation of {ks_product-en}:
// YAML
include::../../../../_ks_components-en/code/yaml.adoc[]
[source,yaml]
----
spec:
  controlPlaneEndpoint:
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
----
KubeKey will automatically install HAProxy on the worker nodes and complete the high availability configuration, requiring no additional actions. For more information, please refer to link:../../02-install-kubesphere/02-install-kubernetes-and-kubesphere/[Install Kubernetes and {ks_product-en}].
KubeKey will automatically install HAProxy on the worker nodes and complete the high availability configuration, requiring no additional actions. For more information, please refer to link:../../../02-install-kubesphere/02-install-kubernetes-and-kubesphere/[Install Kubernetes and {ks_product-en}].
== Dedicated Load Balancer
@ -70,9 +61,7 @@ The following describes how to configure a generic server as a load balancer usi
// Bash
[,bash]
----
apt install keepalived haproxy psmisc -y
----
--
@ -82,9 +71,7 @@ apt install keepalived haproxy psmisc -y
// Bash
[,bash]
----
vi /etc/haproxy/haproxy.cfg
----
--
@ -94,7 +81,6 @@ vi /etc/haproxy/haproxy.cfg
// Bash
[,bash]
----
global
    log /dev/log  local0 warning
    chroot      /var/lib/haproxy
@ -129,7 +115,6 @@ backend kube-apiserver
    server kube-apiserver-1 <IP address>:6443 check
    server kube-apiserver-2 <IP address>:6443 check
    server kube-apiserver-3 <IP address>:6443 check
----
--
@ -139,9 +124,7 @@ backend kube-apiserver
// Bash
[,bash]
----
systemctl restart haproxy
----
--
@ -151,9 +134,7 @@ systemctl restart haproxy
// Bash
[,bash]
----
systemctl enable haproxy
----
--
@ -163,9 +144,7 @@ systemctl enable haproxy
// Bash
[,bash]
----
vi /etc/keepalived/keepalived.conf
----
--
@ -175,7 +154,6 @@ vi /etc/keepalived/keepalived.conf
// Bash
[,bash]
----
global_defs {
  notification_email {
  }
@ -214,7 +192,6 @@ vrrp_instance haproxy-vip {
    chk_haproxy
  }
}
----
Replace the following parameters with actual values:
@ -243,9 +220,7 @@ Replace the following parameters with actual values:
// Bash
[,bash]
----
systemctl restart keepalived
----
--
@ -255,9 +230,7 @@ systemctl restart keepalived
// Bash
[,bash]
----
systemctl enable keepalived
----
--
@ -274,9 +247,7 @@ systemctl enable keepalived
// Bash
[,bash]
----
ip a s
----
If the system's high availability is functioning properly, the configured floating IP address will be displayed in the command output. For example, in the following command output, **inet 172.16.0.10/24 scope global secondary eth0** indicates that the floating IP address is bound to the eth0 network interface:
@ -284,7 +255,6 @@ If the system's high availability is functioning properly, the configured floati
// Bash
[,bash]
----
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
@ -308,9 +278,7 @@ If the system's high availability is functioning properly, the configured floati
// Bash
[,bash]
----
systemctl stop haproxy
----
--
@ -320,9 +288,7 @@ systemctl stop haproxy
// Bash
[,bash]
----
ip a s
----
If the system's high availability is functioning properly, the command output will no longer display the floating IP address, as shown in the following command output:
@ -330,7 +296,6 @@ If the system's high availability is functioning properly, the command output wi
// Bash
[,bash]
----
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
@ -343,7 +308,6 @@ If the system's high availability is functioning properly, the command output wi
       valid_lft 72802sec preferred_lft 72802sec
    inet6 fe80::510e:f96:98b2:af40/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
----
--
@ -353,9 +317,7 @@ If the system's high availability is functioning properly, the command output wi
// Bash
[,bash]
----
ip a s
----
If the system's high availability is functioning properly, the configured floating IP address will be displayed in the command output. For example, in the following command output, **inet 172.16.0.10/24 scope global secondary eth0** indicates that the floating IP address is bound to the eth0 network interface:
@ -363,7 +325,6 @@ If the system's high availability is functioning properly, the configured floati
// Bash
[,bash]
----
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
@ -378,7 +339,6 @@ If the system's high availability is functioning properly, the configured floati
       valid_lft forever preferred_lft forever
    inet6 fe80::f67c:bd4f:d6d5:1d9b/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
----
--
@ -388,8 +348,6 @@ If the system's high availability is functioning properly, the configured floati
// Bash
[,bash]
----
systemctl start haproxy
----
--
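With the load balancer and floating IP in place, the cluster installation can point at the VIP. The following is a minimal sketch of the corresponding **config-sample.yaml** settings, assuming the floating IP 172.16.0.10 from the example above:

[source,yaml]
----
spec:
  controlPlaneEndpoint:
    # Dedicated load balancer: set address to the floating IP instead of
    # enabling the internal HAProxy-based load balancer.
    domain: lb.kubesphere.local
    address: "172.16.0.10"
    port: 6443
----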

View File

@ -0,0 +1,7 @@
---
title: "Configure High Availability"
keywords: "Kubernetes, KubeSphere, Installation, Preparation, High Availability"
description: "Learn how to configure high availability for KubeSphere clusters."
weight: 02
layout: "second"
---

View File

@ -25,7 +25,7 @@ The installation process will use the open-source tool KubeKey. For more informa
* In a production environment, to ensure the cluster has sufficient computing and storage resources, it is recommended that each cluster node be configured with at least 8 CPU cores, 16 GB of memory, and 200 GB of disk space. In addition, it is recommended to mount an additional 200 GB of disk space in the **/var/lib/docker** (for Docker) or **/var/lib/containerd** (for containerd) directory of each cluster node for storing container runtime data.
* In a production environment, it is recommended to configure high availability for the KubeSphere cluster in advance to avoid service interruption in the event of a single control plane node failure. For more information, please refer to the link:../../../03-installation-and-upgrade/01-preparations/03-configure-high-availability/[Configure High Availability].
* In a production environment, it is recommended to configure high availability for the KubeSphere cluster in advance to avoid service interruption in the event of a single control plane node failure. For more information, please refer to the link:../../../03-installation-and-upgrade/01-preparations/02-configure-high-availability/02-configure-k8s-high-availability/[Configure High Availability].
+
--
// Note

View File

@ -0,0 +1,221 @@
---
title: "Configure KubeSphere High Availability"
keywords: "Kubernetes, {ks_product-en}, Installation, Preparation, High Availability"
description: "Learn how to configure high availability for KubeSphere."
weight: 04
---
This section describes how to configure high availability (HA) for KubeSphere.
[.admon.attention,cols="a"]
|===
|Attention
|KubeSphere high availability builds on a highly available Kubernetes control plane. Make sure Kubernetes is deployed in high availability mode first.
|===
== 1. High Availability Architecture Overview
KubeSphere supports high availability deployment, which can be enabled through the `ha.enabled` configuration.
In HA mode, Redis supports two deployment modes:
. Redis standalone mode
. Redis high availability mode (Redis HA)
== 2. Version Compatibility
KubeSphere HA configuration applies to {ks_product-en} v4.1.2 and later versions.
== 3. KubeSphere HA Configuration
=== 3.1 Enabling HA Mode
Create a `values.yaml` file with the following configuration:
[source,yaml]
----
ha:
  enabled: true
----
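If ks-core is already installed and you want to confirm which values are currently applied before adding the HA settings, a hedged check (assuming the release name `ks-core` in the `kubesphere-system` namespace):

[source,bash]
----
# Show the user-supplied values of the existing ks-core release.
helm get values -n kubesphere-system ks-core
----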
== 4. Redis Configuration
Choose either Redis standalone mode or Redis HA mode and add corresponding configurations to `values.yaml`.
=== 4.1 Redis Standalone Mode
Suitable for small clusters with simple configuration and lower resource consumption.
[source,yaml]
----
redis:
  port: 6379
  replicaCount: 1
  image:
    repository: kubesphereio/redis
    tag: 7.2.4-alpine
    pullPolicy: IfNotPresent
  persistentVolume:
    enabled: true
    size: 2Gi
----
=== 4.2 Redis HA Mode
Recommended for production environments, providing full high availability.
[source,yaml]
----
redisHA:
  enabled: true
  redis:
    port: 6379
  image:
    repository: kubesphereio/redis
    tag: 7.2.4-alpine
    pullPolicy: IfNotPresent
  persistentVolume:
    enabled: true
    size: 2Gi
----
=== 4.3 Redis HA Advanced Configuration
[source,yaml]
----
redisHA:
  enabled: true
  # Redis node configuration
  redis:
    port: 6379
  # Persistence configuration
  persistentVolume:
    enabled: true
    size: 2Gi
  # Node affinity
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    - key: node-role.kubernetes.io/control-plane
      effect: NoSchedule
    - key: CriticalAddonsOnly
      operator: Exists
  # HA configuration
  hardAntiAffinity: false
  additionalAffinities:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          preference:
            matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: In
                values:
                  - ""
  # HAProxy configuration
  haproxy:
    servicePort: 6379
    containerPort: 6379
    image:
      repository: kubesphereio/haproxy
      tag: 2.9.6-alpine
      pullPolicy: IfNotPresent
----
== 5. HA Deployment
Add `-f values.yaml` when installing or upgrading {ks_product-en}.
[.admon.attention,cols="a"]
|===
|Attention
|The following commands are examples. Always append `-f values.yaml` to your actual installation/upgrade commands.
|===
// KubeSphere
[source,bash]
----
# Installation
helm upgrade --install -n kubesphere-system --create-namespace ks-core https://charts.kubesphere.io/main/ks-core-1.1.4.tgz -f values.yaml --debug --wait
# Upgrade
helm upgrade -n kubesphere-system ks-core https://charts.kubesphere.io/main/ks-core-1.1.4.tgz -f values.yaml --debug --wait
----
// kse
// [source,bash]
// ----
// # Installation
// helm install -n kubesphere-system --create-namespace ks-core oci://hub.kubesphere.com.cn/kse/ks-core --version 1.1.0 -f values.yaml
// # Upgrade
// helm upgrade -n kubesphere-system ks-core oci://hub.kubesphere.com.cn/kse/ks-core --version 1.1.0 -f values.yaml
// ----
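After the command completes, a quick sanity check (a hedged example, assuming the default `kubesphere-system` namespace) is to confirm that the `ks-apiserver` replicas and the Redis pods are running:

[source,bash]
----
# List the KubeSphere core pods; in HA mode you should see multiple
# ks-apiserver replicas plus the Redis (or redis-ha) pods.
kubectl -n kubesphere-system get pods
----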
== 6. Configuration Reference
=== 6.1 Redis Standalone Mode
- Suitable for small clusters
- Uses a single Redis instance
- Supports basic failover
- Simple configuration with low resource consumption
=== 6.2 Redis HA Mode
- Recommended for production environments
- Uses a Redis cluster
- Provides full high availability
- Supports automatic failover
- Data persistence
- Load balancing
== 7. Optional Configurations
=== JWT Signing Key Configuration
In a high availability environment, configure a custom SignKey to ensure that all replicas use the same JWT signing key.
. Generate an RSA private key.
+
[source,bash]
----
openssl genrsa -out private_key.pem 2048
----
. View the Base64-encoded key.
+
[source,bash]
----
cat private_key.pem | base64 -w 0
----
. Edit KubeSphere configuration.
+
--
[source,bash]
----
kubectl -n kubesphere-system edit cm kubesphere-config
----
Add or replace the following field under `authentication.issuer`:
[source,yaml]
----
signKeyData: <Base64-encoded private key>
----
--
. Restart KubeSphere components.
+
[source,bash]
----
kubectl -n kubesphere-system rollout restart deploy ks-apiserver ks-controller-manager
----
. Verify the configuration. Access `http://<ks-console-address>/oauth/keys` multiple times in a browser and check whether the responses from all replicas are consistent.
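Alternatively, a hedged command-line check of the same endpoint (assuming `curl` is available and `<ks-console-address>` is replaced with the actual address):

[source,bash]
----
# Query the JWKS endpoint several times; with a shared signKeyData,
# every replica should return an identical key set.
for i in 1 2 3; do
  curl -s http://<ks-console-address>/oauth/keys
  echo
done
----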

View File

@ -24,7 +24,7 @@ include::../../../../_ks_components-en/admonitions/admonEnd.adoc[]
* In a production environment, to ensure the cluster has sufficient computing and storage resources, it is recommended that each cluster node be configured with at least 8 CPU cores, 16 GB of memory, and 200 GB of disk space. In addition, it is recommended to mount an additional 200 GB of disk space in the **/var/lib/docker** (for Docker) or **/var/lib/containerd** (for containerd) directory of each cluster node for storing container runtime data.
* In a production environment, it is recommended to configure high availability for the KubeSphere cluster in advance to avoid service interruption in the event of a single control plane node failure. For more information, please refer to the link:../../../03-installation-and-upgrade/01-preparations/03-configure-high-availability/[Configure High Availability].
* In a production environment, it is recommended to configure high availability for the KubeSphere cluster in advance to avoid service interruption in the event of a single control plane node failure. For more information, please refer to the link:../../../03-installation-and-upgrade/01-preparations/02-configure-high-availability/02-configure-k8s-high-availability/[Configure High Availability].
:relfileprefix: ../../../
@ -102,7 +102,7 @@ include::../../../../_ks_components-en/admonitions/warning.adoc[]
* If the current cluster has been configured with high availability, do not modify the high availability information in the **config-sample.yaml** file. Otherwise, the cluster may encounter errors after adding nodes.
* If the current cluster uses local load balancing to achieve high availability, you do not need to perform any operations on cluster high availability; if the current cluster uses a load balancer to achieve high availability, you only need to configure the load balancer to listen on port 6443 of all control plane nodes. For more information, see link:../../01-preparations/03-configure-high-availability/[Configure High Availability].
* If the current cluster uses local load balancing to achieve high availability, you do not need to perform any operations on cluster high availability; if the current cluster uses a load balancer to achieve high availability, you only need to configure the load balancer to listen on port 6443 of all control plane nodes. For more information, see link:../../01-preparations/02-configure-high-availability/02-configure-k8s-high-availability/[Configure High Availability].
include::../../../../_ks_components-en/admonitions/admonEnd.adoc[]
--

View File

@ -18,7 +18,7 @@ sectionLink:
- /docs/v4.1/02-quickstart/01-install-kubesphere.adoc
- /docs/v4.1/03-installation-and-upgrade/02-install-kubesphere/02-install-kubernetes-and-kubesphere.adoc
- /docs/v4.1/02-quickstart/04-control-user-permissions.adoc
- docs/v4.1/03-installation-and-upgrade/02-install-kubesphere/04-offline-installation.adoc
- docs/v4.1/03-installation-and-upgrade/02-install-kubesphere/03-offline-installation.adoc
- /docs/v4.1/03-installation-and-upgrade/05-add-and-delete-cluster-nodes/01-add-cluster-nodes.adoc
- /docs/v4.1/07-cluster-management/10-multi-cluster-management
- /docs/v4.1/02-quickstart/03-install-an-extension.adoc

View File

@ -1,21 +1,13 @@
---
title: "配置高可用性"
linkTitle: "配置高可用性"
keywords: "Kubernetes, KubeSphere, 安装, 准备, 高可用"
description: "介绍如何在生产环境中为 KubeSphere 集群配置多个控制平面节点,以防止单个控制平面节点故障时集群服务中断,从而实现高可用性。"
weight: 03
title: "配置 Kubernetes 高可用性"
keywords: "Kubernetes, {ks_product}, 安装, 准备, 高可用"
description: "介绍如何在生产环境中为 KubeSphere 集群配置多个控制平面节点。"
weight: 02
---
本节介绍如何在生产环境中为{ks_product_both}集群配置多个控制平面节点,以防止单个控制平面节点故障时集群服务中断,从而实现高可用性。如果您的{ks_product_both}集群没有高可用性需求,您可以跳过本节。
// Note
include::../../../../_ks_components/admonitions/note.adoc[]
{ks_product_right}高可用性配置仅支持同时安装 Kubernetes 和{ks_product_both}的场景。如果您在现有的 Kubernetes 集群上安装{ks_product_left}{ks_product_right}安装完成后将使用 Kubernetes 集群现有的高可用性配置。
include::../../../../_ks_components/admonitions/admonEnd.adoc[]
本节介绍以下高可用性配置方式:
* 使用本地负载均衡配置。您可以在安装{ks_product_both}的过程中,设置 KubeKey 工具在工作节点上安装 HAProxy 作为各控制平面节点的反向代理,所有工作节点的 Kubernetes 组件将通过 HAProxy 连接各控制平面节点。这种方式需要额外的健康检查机制,所以相较其他方式运行效率有所降低,但可以用于没有专用负载均衡器且服务器数量有限的场景。
@ -29,18 +21,17 @@ include::../../../../_ks_components/admonitions/admonEnd.adoc[]
To use HAProxy for high availability, you only need to set the following parameters in the installation configuration file **config-sample.yaml** when installing {ks_product_both}:
// YAML
include::../../../../_ks_components/code/yaml.adoc[]
[source,yaml]
----
spec:
  controlPlaneEndpoint:
    internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
----
KubeKey will automatically install HAProxy on the worker nodes and complete the high availability configuration; no further action is required. For more information, see link:../../02-install-kubesphere/02-install-kubernetes-and-kubesphere/[Install Kubernetes and KubeSphere].
KubeKey will automatically install HAProxy on the worker nodes and complete the high availability configuration; no further action is required. For more information, see link:../../../02-install-kubesphere/02-install-kubernetes-and-kubesphere/[Install Kubernetes and {ks_product_left}].
== Dedicated Load Balancer
To achieve high availability with a dedicated load balancer provided by a cloud environment, you need to perform the following operations in the cloud environment:
@ -69,11 +60,9 @@ KubeKey will automatically install HAProxy on the worker nodes and complete the high availability
. Log in to the server that will act as the load balancer and run the following command to install HAProxy and Keepalived (Ubuntu is used as an example below; on other operating systems, replace **apt** with the corresponding package manager):
+
--
// Bash
include::../../../../_ks_components/code/bash.adoc[]
[source,bash]
----
apt install keepalived haproxy psmisc -y
----
--
@ -81,10 +70,9 @@ apt install keepalived haproxy psmisc -y
+
--
// Bash
include::../../../../_ks_components/code/bash.adoc[]
[source,bash]
----
vi /etc/haproxy/haproxy.cfg
----
--
@ -92,8 +80,8 @@ vi /etc/haproxy/haproxy.cfg
+
--
// Bash
include::../../../../_ks_components/code/bash.adoc[]
[source,bash]
----
global
    log /dev/log  local0 warning
    chroot      /var/lib/haproxy
@ -128,7 +116,6 @@ backend kube-apiserver
    server kube-apiserver-1 <IP address>:6443 check
    server kube-apiserver-2 <IP address>:6443 check
    server kube-apiserver-3 <IP address>:6443 check
----
--
@ -136,10 +123,9 @@ backend kube-apiserver
+
--
// Bash
include::../../../../_ks_components/code/bash.adoc[]
[source,bash]
----
systemctl restart haproxy
----
--
@ -147,10 +133,9 @@ systemctl restart haproxy
+
--
// Bash
include::../../../../_ks_components/code/bash.adoc[]
[source,bash]
----
systemctl enable haproxy
----
--
@ -158,10 +143,9 @@ systemctl enable haproxy
+
--
// Bash
include::../../../../_ks_components/code/bash.adoc[]
[source,bash]
----
vi /etc/keepalived/keepalived.conf
----
--
@ -169,8 +153,8 @@ vi /etc/keepalived/keepalived.conf
+
--
// Bash
include::../../../../_ks_components/code/bash.adoc[]
[source,bash]
----
global_defs {
  notification_email {
  }
@ -209,7 +193,6 @@ vrrp_instance haproxy-vip {
    chk_haproxy
  }
}
----
Replace the following parameters with actual values:
@ -236,10 +219,9 @@ vrrp_instance haproxy-vip {
+
--
// Bash
include::../../../../_ks_components/code/bash.adoc[]
[source,bash]
----
systemctl restart keepalived
----
--
@ -247,10 +229,9 @@ systemctl restart keepalived
+
--
// Bash
include::../../../../_ks_components/code/bash.adoc[]
[source,bash]
----
systemctl enable keepalived
----
--
@ -265,17 +246,16 @@ systemctl enable keepalived
+
--
// Bash
include::../../../../_ks_components/code/bash.adoc[]
[source,bash]
----
ip a s
----
If the system's high availability is functioning properly, the configured floating IP address will be displayed in the command output. For example, in the following output, **inet 172.16.0.10/24 scope global secondary eth0** indicates that the floating IP address is bound to the eth0 network interface:
// Bash
include::../../../../_ks_components/code/bash.adoc[]
[source,bash]
----
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
@ -290,7 +270,6 @@ include::../../../../_ks_components/code/bash.adoc[]
       valid_lft forever preferred_lft forever
    inet6 fe80::510e:f96:98b2:af40/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
----
--
@ -298,10 +277,9 @@ include::../../../../_ks_components/code/bash.adoc[]
+
--
// Bash
include::../../../../_ks_components/code/bash.adoc[]
[source,bash]
----
systemctl stop haproxy
----
--
@ -309,17 +287,16 @@ systemctl stop haproxy
+
--
// Bash
include::../../../../_ks_components/code/bash.adoc[]
[source,bash]
----
ip a s
----
If the system's high availability is functioning properly, the command output will no longer display the floating IP address, as shown in the following output:
// Bash
include::../../../../_ks_components/code/bash.adoc[]
[source,bash]
----
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
@ -332,7 +309,6 @@ include::../../../../_ks_components/code/bash.adoc[]
       valid_lft 72802sec preferred_lft 72802sec
    inet6 fe80::510e:f96:98b2:af40/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
----
--
@ -340,17 +316,16 @@ include::../../../../_ks_components/code/bash.adoc[]
+
--
// Bash
include::../../../../_ks_components/code/bash.adoc[]
[source,bash]
----
ip a s
----
If the system's high availability is functioning properly, the configured floating IP address will be displayed in the command output. For example, in the following output, **inet 172.16.0.10/24 scope global secondary eth0** indicates that the floating IP address is bound to the eth0 network interface:
// Bash
include::../../../../_ks_components/code/bash.adoc[]
[source,bash]
----
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
@ -365,7 +340,6 @@ include::../../../../_ks_components/code/bash.adoc[]
       valid_lft forever preferred_lft forever
    inet6 fe80::f67c:bd4f:d6d5:1d9b/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
----
--
@ -373,9 +347,8 @@ include::../../../../_ks_components/code/bash.adoc[]
+
--
// Bash
include::../../../../_ks_components/code/bash.adoc[]
[source,bash]
----
systemctl start haproxy
----
--
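With the load balancer and floating IP in place, the cluster installation can point at the VIP. The following is a minimal sketch of the corresponding **config-sample.yaml** settings, assuming the floating IP 172.16.0.10 from the example above:

[source,yaml]
----
spec:
  controlPlaneEndpoint:
    # Dedicated load balancer: set address to the floating IP instead of
    # enabling the internal HAProxy-based load balancer.
    domain: lb.kubesphere.local
    address: "172.16.0.10"
    port: 6443
----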

View File

@ -0,0 +1,7 @@
---
title: "配置高可用性"
keywords: "Kubernetes, {ks_product}, 安装, 准备, 高可用"
description: "介绍如何为 KubeSphere 集群配置高可用性。"
weight: 02
layout: "second"
---

View File

@ -20,7 +20,7 @@ weight: 02
* In a production environment, to ensure the cluster has sufficient computing and storage resources, it is recommended that each cluster node be configured with at least 8 CPU cores, 16 GB of memory, and 200 GB of disk space. In addition, it is recommended to mount at least an additional 200 GB of disk space in the **/var/lib/docker** (for Docker) or **/var/lib/containerd** (for containerd) directory of each cluster node for storing container runtime data.
* In a production environment, it is recommended to configure high availability for the {ks_product_both} cluster in advance to avoid service interruption in the event of a single control plane node failure. For more information, see link:../../../03-installation-and-upgrade/01-preparations/03-configure-high-availability/[Configure High Availability].
* In a production environment, it is recommended to configure high availability for the {ks_product_both} cluster in advance to avoid service interruption in the event of a single control plane node failure. For more information, see link:../../../03-installation-and-upgrade/01-preparations/02-configure-high-availability/02-configure-k8s-high-availability/[Configure High Availability].
+
--
// Note

View File

@ -3,7 +3,7 @@ title: "Install KubeSphere Offline"
linkTitle: "Install KubeSphere Offline"
keywords: "Kubernetes, KubeSphere, Installation, Offline Package, Offline Installation, Offline Deployment"
description: "Learn how to install KubeSphere and Kubernetes in an offline environment."
weight: 04
weight: 03
---

View File

@ -0,0 +1,222 @@
---
title: "配置 KubeSphere 高可用性"
keywords: "Kubernetes, {ks_product}, 安装, 准备, 高可用"
description: "介绍如何为 KubeSphere 配置高可用性。"
weight: 04
---
本节介绍如何配置 KubeSphere 的高可用性。
[.admon.attention,cols="a"]
|===
|注意
|KubeSphere 的高可用性建立在 Kubernetes 控制平面节点高可用的基础上,因此需先确保 Kubernetes 为高可用部署。
|===
== 1. 高可用架构概述
KubeSphere 支持高可用部署,可通过 `ha.enabled` 开启。
在高可用模式下Redis 支持两种部署方式:
. Redis 单实例模式
. Redis 高可用模式 (Redis HA)
== 2. 版本兼容性
KubeSphere 高可用配置适用于{ks_product_left} v4.1.2 及之后更新的版本。
== 3. KubeSphere 高可用配置
=== 3.1 启用高可用模式
创建 `values.yaml` 文件,添加如下配置。
[source,yaml]
----
ha:
  enabled: true
----
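If ks-core is already installed and you want to confirm which values are currently applied before adding the HA settings, a hedged check (assuming the release name `ks-core` in the `kubesphere-system` namespace):

[source,bash]
----
# Show the user-supplied values of the existing ks-core release.
helm get values -n kubesphere-system ks-core
----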
== 4. Redis Configuration
Choose either Redis standalone mode or Redis HA mode based on your needs, and add the corresponding configuration to the `values.yaml` file created in the previous step.
=== 4.1 Redis Standalone Mode
Suitable for small clusters; simple to configure with low resource consumption.
[source,yaml]
----
redis:
  port: 6379
  replicaCount: 1
  image:
    repository: kubesphereio/redis
    tag: 7.2.4-alpine
    pullPolicy: IfNotPresent
  persistentVolume:
    enabled: true
    size: 2Gi
----
=== 4.2 Redis HA Mode
Suitable for production environments, providing full high availability.
[source,yaml]
----
redisHA:
  enabled: true
  redis:
    port: 6379
  image:
    repository: kubesphereio/redis
    tag: 7.2.4-alpine
    pullPolicy: IfNotPresent
  persistentVolume:
    enabled: true
    size: 2Gi
----
=== 4.3 Redis HA Advanced Configuration
[source,yaml]
----
redisHA:
  enabled: true
  # Redis node configuration
  redis:
    port: 6379
  # Persistence configuration
  persistentVolume:
    enabled: true
    size: 2Gi
  # Node affinity configuration
  tolerations:
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
    - key: node-role.kubernetes.io/control-plane
      effect: NoSchedule
    - key: CriticalAddonsOnly
      operator: Exists
  # High availability configuration
  hardAntiAffinity: false
  additionalAffinities:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          preference:
            matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: In
                values:
                  - ""
  # HAProxy configuration
  haproxy:
    servicePort: 6379
    containerPort: 6379
    image:
      repository: kubesphereio/haproxy
      tag: 2.9.6-alpine
      pullPolicy: IfNotPresent
----
== 5. HA Deployment
When installing or upgrading {ks_product_both}, append `-f values.yaml` to the installation or upgrade command.
[.admon.attention,cols="a"]
|===
|Attention
|The following commands are examples only. Append `-f values.yaml` to your actual installation or upgrade command.
|===
// KubeSphere
[source,bash]
----
# Installation
helm upgrade --install -n kubesphere-system --create-namespace ks-core https://charts.kubesphere.io/main/ks-core-1.1.4.tgz -f values.yaml --debug --wait
# Upgrade
helm upgrade -n kubesphere-system ks-core https://charts.kubesphere.io/main/ks-core-1.1.4.tgz -f values.yaml --debug --wait
----
// kse
// [source,bash]
// ----
// # Installation
// helm install -n kubesphere-system --create-namespace ks-core oci://hub.kubesphere.com.cn/kse/ks-core --version 1.1.0 -f values.yaml
// # Upgrade
// helm upgrade -n kubesphere-system ks-core oci://hub.kubesphere.com.cn/kse/ks-core --version 1.1.0 -f values.yaml
// ----
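After the command completes, a quick sanity check (a hedged example, assuming the default `kubesphere-system` namespace) is to confirm that the `ks-apiserver` replicas and the Redis pods are running:

[source,bash]
----
# List the KubeSphere core pods; in HA mode you should see multiple
# ks-apiserver replicas plus the Redis (or redis-ha) pods.
kubectl -n kubesphere-system get pods
----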
== 6. Configuration Reference
=== 6.1 Redis Standalone Mode
- Suitable for small clusters
- Uses a single Redis instance
- Supports basic failover
- Simple configuration with low resource consumption
=== 6.2 Redis HA Mode
- Suitable for production environments
- Uses a Redis cluster
- Provides full high availability
- Supports automatic failover
- Data persistence
- Load balancing
== 7. Optional Configurations
=== JWT Signing Key Configuration
In a high availability environment, you can configure a custom SignKey to ensure that all replicas use the same JWT signing key.
. Generate an RSA private key.
+
[source,bash]
----
openssl genrsa -out private_key.pem 2048
----
. View the Base64-encoded key content.
+
[source,bash]
----
cat private_key.pem | base64 -w 0
----
. Edit the KubeSphere configuration.
+
--
[source,bash]
----
kubectl -n kubesphere-system edit cm kubesphere-config
----
Add or replace the following field under `authentication.issuer`:
[source,yaml]
----
signKeyData: <Base64-encoded private key>
----
--
. Restart the KubeSphere components.
+
[source,bash]
----
kubectl -n kubesphere-system rollout restart deploy ks-apiserver ks-controller-manager
----
. Verify the configuration. Access `http://<ks-console-address>/oauth/keys` multiple times in a browser and check whether the responses from all replicas are consistent.
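Alternatively, a hedged command-line check of the same endpoint (assuming `curl` is available and `<ks-console-address>` is replaced with the actual address):

[source,bash]
----
# Query the JWKS endpoint several times; with a shared signKeyData,
# every replica should return an identical key set.
for i in 1 2 3; do
  curl -s http://<ks-console-address>/oauth/keys
  echo
done
----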

View File

@ -25,7 +25,7 @@ include::../../../../_ks_components/admonitions/admonEnd.adoc[]
* To ensure the cluster has sufficient computing and storage resources, it is recommended that each new node be configured with at least 8 CPU cores, 16 GB of memory, and 200 GB of disk space. In addition, it is recommended to mount at least an additional 200 GB of disk space in the **/var/lib/docker** (for Docker) or **/var/lib/containerd** (for containerd) directory of each cluster node for storing container runtime data.
* If you add control plane nodes, you need to configure high availability for the cluster in advance. If you use a load balancer, make sure it listens on port 6443 of all control plane nodes. For more information, see link:../../../03-installation-and-upgrade/01-preparations/03-configure-high-availability/[Configure High Availability].
* If you add control plane nodes, you need to configure high availability for the cluster in advance. If you use a load balancer, make sure it listens on port 6443 of all control plane nodes. For more information, see link:../../../03-installation-and-upgrade/01-preparations/02-configure-high-availability/02-configure-k8s-high-availability/[Configure High Availability].
// * If your cluster nodes cannot connect to the internet, you also need to prepare a Linux server for hosting a private image registry. This server must have network connectivity with the {ks_product_both} cluster nodes and have at least 100 GB of disk space mounted in the **/mnt/registry** directory.
@ -105,7 +105,7 @@ include::../../../../_ks_components/admonitions/warning.adoc[]
* If the current cluster has been configured with high availability, do not modify the high availability information in the **config-sample.yaml** file. Otherwise, the cluster may encounter errors after nodes are added.
* If the current cluster uses local load balancing for high availability, you do not need to do anything about cluster high availability; if the current cluster uses a load balancer for high availability, you only need to configure the load balancer to listen on port 6443 of all control plane nodes. For more information, see link:../../../03-installation-and-upgrade/01-preparations/03-configure-high-availability/[Configure High Availability].
* If the current cluster uses local load balancing for high availability, you do not need to do anything about cluster high availability; if the current cluster uses a load balancer for high availability, you only need to configure the load balancer to listen on port 6443 of all control plane nodes. For more information, see link:../../../03-installation-and-upgrade/01-preparations/02-configure-high-availability/02-configure-k8s-high-availability/[Configure High Availability].
include::../../../../_ks_components/admonitions/admonEnd.adoc[]
--

View File

@ -18,7 +18,7 @@ sectionLink:
- /docs/v4.1/02-quickstart/01-install-kubesphere.adoc
- /docs/v4.1/03-installation-and-upgrade/02-install-kubesphere/02-install-kubernetes-and-kubesphere.adoc
- /docs/v4.1/02-quickstart/04-control-user-permissions.adoc
- docs/v4.1/03-installation-and-upgrade/02-install-kubesphere/04-offline-installation.adoc
- docs/v4.1/03-installation-and-upgrade/02-install-kubesphere/03-offline-installation.adoc
- /docs/v4.1/03-installation-and-upgrade/05-add-and-delete-cluster-nodes/01-add-cluster-nodes.adoc
- /docs/v4.1/07-cluster-management/10-multi-cluster-management
- /docs/v4.1/02-quickstart/03-install-an-extension.adoc