mirror of https://github.com/kubesphere/website.git
synced 2025-12-26 00:12:48 +00:00

Merge branch 'kubesphere:master' into master

This commit is contained in commit d7cbec9e0c.
@@ -66,7 +66,7 @@
 .nav {
 height:100%;
-margin-left: 280px;
+margin-left: 250px;
 margin-right: 360px;
 line-height: 93px;

@@ -392,6 +392,11 @@ weight = 6
 name = "用户论坛"
 URL = "https://ask.kubesphere.io/forum"
 
+[[languages.zh.menu.main]]
+weight = 7
+name = "认证"
+URL = "https://kubesphere.cloud/certification/"
+
 # [languages.tr]
 # weight = 3

@@ -162,7 +162,8 @@ section6:
 children:
 - icon: /images/home/section6-anchnet.jpg
 - icon: /images/home/section6-aviation-industry-corporation-of-china.jpg
-- icon: /images/home/section6-aqara.jpg
+- icon: /images/case/logo-alphaflow.png
+- icon: /images/home/section6-aqara.jpg
 - icon: /images/home/section6-bank-of-beijing.jpg
 - icon: /images/home/section6-benlai.jpg
 - icon: /images/home/section6-china-taiping.jpg

@@ -184,7 +185,7 @@ section6:
 - icon: /images/home/section6-webank.jpg
 - icon: /images/home/section6-wisdom-world.jpg
-- icon: /images/home/section6-yiliu.jpg
+- icon: /images/home/section6-zking-insurance.jpg
 
 btnContent: Case Studies

@@ -30,7 +30,7 @@ In Kubernetes clusters, LoadBalancer services can be used to expose backend work
 
 OpenELB is designed to expose LoadBalancer services in non-public-cloud Kubernetes clusters. It provides easy-to-use EIPs and makes IP address pool management easier for users in private environments.
 ## OpenELB Adopters and Contributors
-Currently, OpenELB has been used in production environments by many enterprises, such as BENLAI, Suzhou TV, CVTE, Wisdom World, Jollychic, QingCloud, BAIWANG, Rocketbyte, and more. At the end of 2019, BENLAI has used an earlier version of OpenELB in production. Now, OpenELB has attracted 13 contributors and more than 100 community members.
+Currently, OpenELB has been used in production environments by many enterprises, such as BENLAI, Suzhou TV, CVTE, Wisdom World, Jollychic, QingCloud, BAIWANG and more. At the end of 2019, BENLAI has used an earlier version of OpenELB in production. Now, OpenELB has attracted 13 contributors and more than 100 community members.
 ![[Image:https://pek3b.qingstor.com/kubesphere-community/images/porter-over-100-stars.png]]
 
 ## Differences Between OpenELB and MetalLB

@@ -74,6 +74,8 @@ section3:
 children:
+- name: 'msxf'
+  icon: 'images/case/logo-msxf.png'
 - name: 'hshc'
   icon: 'images/case/logo-hshc.png'
 
 - name: 'IT Service'
   children:

@@ -146,7 +146,7 @@ Pipelines include [declarative pipelines](https://www.jenkins.io/doc/book/pipeli
 3. Click **Add Nesting Steps** to add a nested step under the `maven` container. Select **shell** from the list and enter the following command in the command line. Click **OK** to save it.
 
 ```shell
-mvn clean -gs `pwd`/configuration/settings.xml test
+mvn clean test
 ```
 
 {{< notice note >}}

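The change above drops the `-gs` (global settings) flag, so the test run falls back to Maven's default settings lookup. Incidentally, the backticks around `pwd` in the old command are classic command substitution; `$(pwd)` is the equivalent modern form. A minimal sketch (no Maven required; the settings.xml path is illustrative) showing that the two substitution styles produce the same path:

```shell
# Backtick and $() command substitution yield identical results;
# $() quotes and nests more safely than backticks.
old_path=`pwd`/configuration/settings.xml
new_path="$(pwd)/configuration/settings.xml"
[ "$old_path" = "$new_path" ] && echo "identical"
```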
@@ -40,7 +40,7 @@ See the table below for the role of each cluster.
 
 {{< notice note >}}
 
-These Kubernetes clusters can be hosted across different cloud providers and their Kubernetes versions can also vary. Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+These Kubernetes clusters can be hosted across different cloud providers and their Kubernetes versions can also vary. Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x.
 
 {{</ notice >}}

@@ -146,7 +146,7 @@ Pipelines include [declarative pipelines](https://www.jenkins.io/doc/book/pipeli
 3. Click **Add Nesting Steps** to add a nested step under the `maven` container. Select **shell** from the list and enter the following command in the command line. Click **OK** to save it.
 
 ```shell
-mvn clean -gs `pwd`/configuration/settings.xml test
+mvn clean test
 ```
 
 {{< notice note >}}

@@ -28,7 +28,7 @@ You need to select:
 
 {{< notice note >}}
 
-- To install KubeSphere 3.4 on Kubernetes, your Kubernetes version must be v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+- To install KubeSphere 3.4 on Kubernetes, your Kubernetes version must be v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x.
 - 2 nodes are included in this example. You can add more nodes based on your own needs, especially in a production environment.
 - The machine type Standard/4 GB/2 vCPUs is for minimal installation. If you plan to enable several pluggable components or use the cluster for production, you can upgrade your nodes to a more powerful type (such as CPU-Optimized / 8 GB / 4 vCPUs). It seems that DigitalOcean provisions the control plane nodes based on the type of the worker nodes, and for Standard ones the API server can become unresponsive quite soon.

@@ -79,7 +79,7 @@ Check the installation with `aws --version`.
 
 {{< notice note >}}
 
-- To install KubeSphere 3.4 on Kubernetes, your Kubernetes version must be v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+- To install KubeSphere 3.4 on Kubernetes, your Kubernetes version must be v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x.
 - 3 nodes are included in this example. You can add more nodes based on your own needs especially in a production environment.
 - The machine type t3.medium (2 vCPU, 4GB memory) is for minimal installation. If you want to enable pluggable components or use the cluster for production, please select a machine type with more resources.
 - For other settings, you can change them as well based on your own needs or use the default value.

@@ -30,7 +30,7 @@ This guide walks you through the steps of deploying KubeSphere on [Google Kubern
 
 {{< notice note >}}
 
-- To install KubeSphere 3.4 on Kubernetes, your Kubernetes version must be v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+- To install KubeSphere 3.4 on Kubernetes, your Kubernetes version must be v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x.
 - 3 nodes are included in this example. You can add more nodes based on your own needs especially in a production environment.
 - The machine type e2-medium (2 vCPU, 4GB memory) is for minimal installation. If you want to enable pluggable components or use the cluster for production, please select a machine type with more resources.
 - For other settings, you can change them as well based on your own needs or use the default value.

@@ -14,7 +14,7 @@ This guide walks you through the steps of deploying KubeSphere on [Huaiwei CCE](
 
 First, create a Kubernetes cluster based on the requirements below.
 
-- To install KubeSphere 3.4 on Kubernetes, your Kubernetes version must be v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+- To install KubeSphere 3.4 on Kubernetes, your Kubernetes version must be v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x.
 - Ensure the cloud computing network for your Kubernetes cluster works, or use an elastic IP when you use **Auto Create** or **Select Existing**. You can also configure the network after the cluster is created. Refer to [NAT Gateway](https://support.huaweicloud.com/en-us/productdesc-natgateway/en-us_topic_0086739762.html).
 - Select `s3.xlarge.2` `4-core|8GB` for nodes and add more if necessary (3 and more nodes are required for a production environment).

@@ -30,7 +30,7 @@ This guide walks you through the steps of deploying KubeSphere on [Oracle Kubern
 
 {{< notice note >}}
 
-- To install KubeSphere 3.4 on Kubernetes, your Kubernetes version must be v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+- To install KubeSphere 3.4 on Kubernetes, your Kubernetes version must be v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x.
 - It is recommended that you should select **Public** for **Visibility Type**, which will assign a public IP address for every node. The IP address can be used later to access the web console of KubeSphere.
 - In Oracle Cloud, a Shape is a template that determines the number of CPUs, amount of memory, and other resources that are allocated to an instance. `VM.Standard.E2.2 (2 CPUs and 16G Memory)` is used in this example. For more information, see [Standard Shapes](https://docs.cloud.oracle.com/en-us/iaas/Content/Compute/References/computeshapes.htm#vmshapes__vm-standard).
 - 3 nodes are included in this example. You can add more nodes based on your own needs especially in a production environment.

@@ -8,7 +8,7 @@ weight: 4120
 
 You can install KubeSphere on virtual machines and bare metal with Kubernetes also provisioned. In addition, KubeSphere can also be deployed on cloud-hosted and on-premises Kubernetes clusters as long as your Kubernetes cluster meets the prerequisites below.
 
-- To install KubeSphere 3.4 on Kubernetes, your Kubernetes version must be v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+- To install KubeSphere 3.4 on Kubernetes, your Kubernetes version must be v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x.
 - Available CPU > 1 Core and Memory > 2 G. Only x86_64 CPUs are supported, and Arm CPUs are not fully supported at present.
 - A **default** StorageClass in your Kubernetes cluster is configured; use `kubectl get sc` to verify it.
 - The CSR signing feature is activated in kube-apiserver when it is started with the `--cluster-signing-cert-file` and `--cluster-signing-key-file` parameters. See [RKE installation issue](https://github.com/kubesphere/kubesphere/issues/1925#issuecomment-591698309).

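The note changed in the hunks above encodes a small support matrix: for KubeSphere 3.4, versions marked with an asterisk (v1.24.x through v1.26.x in the new wording) may lack some edge-node features, and v1.23.x is the recommended version for edge nodes. A sketch of that rule as a shell helper — illustrative only, not an official KubeSphere tool:

```shell
# Classify a Kubernetes version per the KubeSphere 3.4 note above:
# v1.20.x-v1.23.x are fully supported for edge nodes; v1.24.x-v1.26.x
# (asterisked) have limited edge-node support.
edge_support() {
  case "$1" in
    v1.20.*|v1.21.*|v1.22.*|v1.23.*) echo "full edge-node support" ;;
    v1.24.*|v1.25.*|v1.26.*)         echo "limited edge-node support" ;;
    *)                               echo "not a recommended version" ;;
  esac
}
edge_support v1.23.10   # full edge-node support
edge_support v1.24.3    # limited edge-node support
```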
@@ -245,55 +245,18 @@ To access the console, make sure port 30880 is opened in your security group.
### Image list of KubeSphere 3.4

```txt
##k8s-images
kubesphere/kube-apiserver:v1.23.10
kubesphere/kube-controller-manager:v1.23.10
kubesphere/kube-proxy:v1.23.10
kubesphere/kube-scheduler:v1.23.10
kubesphere/kube-apiserver:v1.24.3
kubesphere/kube-controller-manager:v1.24.3
kubesphere/kube-proxy:v1.24.3
kubesphere/kube-scheduler:v1.24.3
kubesphere/kube-apiserver:v1.22.12
kubesphere/kube-controller-manager:v1.22.12
kubesphere/kube-proxy:v1.22.12
kubesphere/kube-scheduler:v1.22.12
kubesphere/kube-apiserver:v1.21.14
kubesphere/kube-controller-manager:v1.21.14
kubesphere/kube-proxy:v1.21.14
kubesphere/kube-scheduler:v1.21.14
kubesphere/pause:3.7
kubesphere/pause:3.6
kubesphere/pause:3.5
kubesphere/pause:3.4.1
coredns/coredns:1.8.0
coredns/coredns:1.8.6
calico/cni:v3.23.2
calico/kube-controllers:v3.23.2
calico/node:v3.23.2
calico/pod2daemon-flexvol:v3.23.2
calico/typha:v3.23.2
kubesphere/flannel:v0.12.0
openebs/provisioner-localpv:2.10.1
openebs/linux-utils:2.10.0
library/haproxy:2.3
kubesphere/nfs-subdir-external-provisioner:v4.0.2
kubesphere/k8s-dns-node-cache:1.15.12
##kubesphere-images
kubesphere/ks-installer:v3.4.0
kubesphere/ks-apiserver:v3.4.0
kubesphere/ks-console:v3.4.0
kubesphere/ks-controller-manager:v3.4.0
kubesphere/ks-upgrade:v3.4.0
kubesphere/kubectl:v1.22.0
kubesphere/kubectl:v1.21.0
kubesphere/kubectl:v1.20.0
kubesphere/kubefed:v0.8.1
kubesphere/tower:v0.2.0
kubesphere/tower:v0.2.1
minio/minio:RELEASE.2019-08-07T01-59-21Z
minio/mc:RELEASE.2019-08-07T23-14-43Z
csiplugin/snapshot-controller:v4.0.0
kubesphere/nginx-ingress-controller:v1.1.0
kubesphere/nginx-ingress-controller:v1.3.1
mirrorgooglecontainers/defaultbackend-amd64:1.4
kubesphere/metrics-server:v0.4.2
redis:5.0.14-alpine

@@ -302,18 +265,18 @@ alpine:3.14
osixia/openldap:1.3.0
kubesphere/netshoot:v1.0
##kubeedge-images
kubeedge/cloudcore:v1.9.2
kubeedge/iptables-manager:v1.9.2
kubesphere/edgeservice:v0.2.0
kubeedge/cloudcore:v1.13.0
kubesphere/iptables-manager:v1.13.0
kubesphere/edgeservice:v0.3.0
##gatekeeper-images
openpolicyagent/gatekeeper:v3.5.2
##openpitrix-images
kubesphere/openpitrix-jobs:v3.4.0
kubesphere/openpitrix-jobs:v3.3.2
##kubesphere-devops-images
kubesphere/devops-apiserver:ks-v3.4.0
kubesphere/devops-controller:ks-v3.4.0
kubesphere/devops-tools:ks-v3.4.0
kubesphere/ks-jenkins:v3.4.0-2.319.1
kubesphere/ks-jenkins:v3.4.0-2.319.3-1
jenkins/inbound-agent:4.10-2
kubesphere/builder-base:v3.2.2
kubesphere/builder-nodejs:v3.2.0

@@ -356,43 +319,46 @@ quay.io/argoproj/argocd-applicationset:v0.4.1
ghcr.io/dexidp/dex:v2.30.2
redis:6.2.6-alpine
##kubesphere-monitoring-images
jimmidyson/configmap-reload:v0.5.0
prom/prometheus:v2.34.0
jimmidyson/configmap-reload:v0.7.1
prom/prometheus:v2.39.1
kubesphere/prometheus-config-reloader:v0.55.1
kubesphere/prometheus-operator:v0.55.1
kubesphere/kube-rbac-proxy:v0.11.0
kubesphere/kube-state-metrics:v2.5.0
kubesphere/kube-state-metrics:v2.6.0
prom/node-exporter:v1.3.1
prom/alertmanager:v0.23.0
thanosio/thanos:v0.25.2
thanosio/thanos:v0.31.0
grafana/grafana:8.3.3
kubesphere/kube-rbac-proxy:v0.8.0
kubesphere/notification-manager-operator:v1.4.0
kubesphere/notification-manager:v1.4.0
kubesphere/kube-rbac-proxy:v0.11.0
kubesphere/notification-manager-operator:v2.3.0
kubesphere/notification-manager:v2.3.0
kubesphere/notification-tenant-sidecar:v3.2.0
##kubesphere-logging-images
kubesphere/elasticsearch-curator:v5.7.6
kubesphere/opensearch-curator:v0.0.5
kubesphere/elasticsearch-oss:6.8.22
kubesphere/fluentbit-operator:v0.13.0
opensearchproject/opensearch:2.6.0
opensearchproject/opensearch-dashboards:2.6.0
kubesphere/fluentbit-operator:v0.14.0
docker:19.03
kubesphere/fluent-bit:v1.8.11
kubesphere/log-sidecar-injector:1.1
kubesphere/fluent-bit:v1.9.4
kubesphere/log-sidecar-injector:v1.2.0
elastic/filebeat:6.7.0
kubesphere/kube-events-operator:v0.4.0
kubesphere/kube-events-exporter:v0.4.0
kubesphere/kube-events-ruler:v0.4.0
kubesphere/kube-events-operator:v0.6.0
kubesphere/kube-events-exporter:v0.6.0
kubesphere/kube-events-ruler:v0.6.0
kubesphere/kube-auditing-operator:v0.2.0
kubesphere/kube-auditing-webhook:v0.2.0
##istio-images
istio/pilot:1.11.1
istio/proxyv2:1.11.1
jaegertracing/jaeger-operator:1.27
jaegertracing/jaeger-agent:1.27
jaegertracing/jaeger-collector:1.27
jaegertracing/jaeger-query:1.27
jaegertracing/jaeger-es-index-cleaner:1.27
kubesphere/kiali-operator:v1.38.1
kubesphere/kiali:v1.38
istio/pilot:1.14.6
istio/proxyv2:1.14.6
jaegertracing/jaeger-operator:1.29
jaegertracing/jaeger-agent:1.29
jaegertracing/jaeger-collector:1.29
jaegertracing/jaeger-query:1.29
jaegertracing/jaeger-es-index-cleaner:1.29
kubesphere/kiali-operator:v1.50.1
kubesphere/kiali:v1.50
##example-images
busybox:1.31.1
nginx:1.14-alpine

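For air-gapped installs, a list like the one above is typically fed to `docker pull`/`docker save` to seed a private registry. A sketch of that workflow — the three entries in `images.txt` are a small sample of the full list, and the pull commands are only echoed here, not executed:

```shell
# Turn an image list into docker pull commands for an offline mirror.
# '##'-prefixed lines are section headers, not images, so they are skipped.
cat > images.txt <<'EOF'
##k8s-images
kubesphere/kube-apiserver:v1.23.10
kubesphere/pause:3.7
EOF
grep -v '^##' images.txt | while read -r img; do
  echo docker pull "$img"
done
```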
@@ -21,7 +21,7 @@ This tutorial demonstrates how to add an edge node to your cluster.
 ## Prerequisites
 
 - You have enabled [KubeEdge](../../../pluggable-components/kubeedge/).
-- To prevent compatability issues, you are advised to install Kubernetes v1.21.x.
+- To prevent compatability issues, you are advised to install Kubernetes v1.23.x.
 - You have an available node to serve as an edge node. The node can run either Ubuntu (recommended) or CentOS. This tutorial uses Ubuntu 18.04 as an example.
 - Edge nodes, unlike Kubernetes cluster nodes, should work in a separate network.

@@ -48,7 +48,7 @@ You must create a load balancer in your environment to listen (also known as lis
 Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{</ tab >}}

@@ -64,7 +64,7 @@ export KKZONE=cn
 Run the following command to download KubeKey:
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{< notice note >}}

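The repeated hunks in this commit bump the pinned KubeKey version from v3.0.7 (or v3.0.10) to v3.0.13; the `export KKZONE=cn` variant routes the download through the CN mirror. A sketch that only assembles the command string — no network access is performed, and the URL is the one from the docs above:

```shell
# Build the pinned KubeKey download command; prepend the CN-mirror export
# when KKZONE is set to cn, as in the tab above.
KKZONE=cn
VERSION=v3.0.13
cmd="curl -sfL https://get-kk.kubesphere.io | VERSION=${VERSION} sh -"
if [ "$KKZONE" = "cn" ]; then
  cmd="export KKZONE=cn; ${cmd}"
fi
echo "$cmd"
```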
@@ -97,7 +97,7 @@ Create an example configuration file with default configurations. Here Kubernete
 
 {{< notice note >}}
 
-- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
+- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
 
 - If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
 - If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.

@@ -33,7 +33,7 @@ Refer to the following steps to download KubeKey.
 Download KubeKey from [its GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or run the following command.
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{</ tab >}}

@@ -49,7 +49,7 @@ export KKZONE=cn
 Run the following command to download KubeKey:
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{< notice note >}}

@@ -82,7 +82,7 @@ Create an example configuration file with default configurations. Here Kubernete
 
 {{< notice note >}}
 
-- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
+- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
 - If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
 - If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.

@@ -268,7 +268,7 @@ Before you start to create your Kubernetes cluster, make sure you have tested th
 Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{</ tab >}}

@@ -284,7 +284,7 @@ export KKZONE=cn
 Run the following command to download KubeKey:
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{< notice note >}}

@@ -317,7 +317,7 @@ Create an example configuration file with default configurations. Here Kubernete
 
 {{< notice note >}}
 
-- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
+- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
 
 - If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
 - If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.

@@ -28,7 +28,7 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides
 Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.10 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{</ tab >}}

@@ -44,7 +44,7 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides
 Run the following command to download KubeKey:
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.10 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 {{</ tab >}}

@@ -38,7 +38,7 @@ With the configuration file in place, you execute the `./kk` command with varied
 Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{</ tab >}}

@@ -54,7 +54,7 @@ export KKZONE=cn
 Run the following command to download KubeKey:
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{< notice note >}}

@@ -84,6 +84,6 @@ If you want to use KubeKey to install both Kubernetes and KubeSphere 3.4, see th
 {{< notice note >}}
 
 - You can also run `./kk version --show-supported-k8s` to see all supported Kubernetes versions that can be installed by KubeKey.
-- The Kubernetes versions that can be installed using KubeKey are different from the Kubernetes versions supported by KubeSphere 3.4. If you want to [install KubeSphere 3.4 on an existing Kubernetes cluster](../../../installing-on-kubernetes/introduction/overview/), your Kubernetes version must be v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x
+- The Kubernetes versions that can be installed using KubeKey are different from the Kubernetes versions supported by KubeSphere 3.4. If you want to [install KubeSphere 3.4 on an existing Kubernetes cluster](../../../installing-on-kubernetes/introduction/overview/), your Kubernetes version must be v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x.
 
 {{</ notice >}}

@@ -110,7 +110,7 @@ Follow the step below to download [KubeKey](../kubekey).
 Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{</ tab >}}

@@ -126,7 +126,7 @@ export KKZONE=cn
 Run the following command to download KubeKey:
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{< notice note >}}

@@ -165,7 +165,7 @@ Command:
 
 {{< notice note >}}
 
-- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../kubekey/#support-matrix).
+- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../kubekey/#support-matrix).
 
 - If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
 - If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.

@@ -32,7 +32,7 @@ Follow the step below to download [KubeKey](../../../installing-on-linux/introdu
 Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{</ tab >}}
@@ -48,7 +48,7 @@ export KKZONE=cn
 Run the following command to download KubeKey:
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{< notice note >}}
@@ -199,7 +199,7 @@ Follow the step below to download KubeKey.
 Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{</ tab >}}
@@ -215,7 +215,7 @@ export KKZONE=cn
 Run the following command to download KubeKey:
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{< notice note >}}
@@ -252,7 +252,7 @@ Create a Kubernetes cluster with KubeSphere installed (for example, `--with-kube
 
 {{< notice note >}}
 
-- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
+- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
 
 - If you do not add the flag `--with-kubesphere` in the command above, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
 - If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
@@ -300,7 +300,7 @@ Follow the step below to download KubeKey.
 Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{</ tab >}}
@@ -316,7 +316,7 @@ export KKZONE=cn
 Run the following command to download KubeKey:
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{< notice note >}}
@@ -353,7 +353,7 @@ Create a Kubernetes cluster with KubeSphere installed (for example, `--with-kube
 
 {{< notice note >}}
 
-- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
+- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
 
 - If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
 - If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
@@ -119,7 +119,7 @@ Follow the steps below to download [KubeKey](../../../installing-on-linux/introd
 Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{</ tab >}}
@@ -135,7 +135,7 @@ export KKZONE=cn
 Run the following command to download KubeKey:
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{< notice note >}}
@@ -170,7 +170,7 @@ chmod +x kk
 
 {{< notice note >}}
 
-- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
+- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
 
 - If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
 - If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
@@ -71,7 +71,7 @@ Follow the steps below to download [KubeKey](../../../installing-on-linux/introd
 Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{</ tab >}}
@@ -87,7 +87,7 @@ export KKZONE=cn
 Run the following command to download KubeKey:
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{< notice note >}}
@@ -122,7 +122,7 @@ chmod +x kk
 
 {{< notice note >}}
 
-- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
+- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
 
 - If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
 - If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
@@ -73,7 +73,7 @@ Follow the steps below to download [KubeKey](../../../installing-on-linux/introd
 Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{</ tab >}}
@@ -89,7 +89,7 @@ export KKZONE=cn
 Run the following command to download KubeKey:
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{< notice note >}}
@@ -124,7 +124,7 @@ chmod +x kk
 
 {{< notice note >}}
 
-- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
+- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
 
 - If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
 - If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
@@ -101,7 +101,7 @@ ssh -i .ssh/id_rsa2 -p50200 kubesphere@40.81.5.xx
 Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly:
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{</ tab >}}
@@ -117,7 +117,7 @@ export KKZONE=cn
 Run the following command to download KubeKey:
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{< notice note >}}
@@ -150,7 +150,7 @@ The commands above download the latest release of KubeKey. You can change the ve
 
 {{< notice note >}}
 
-- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
+- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
 
 - If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
 - If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
@@ -126,7 +126,7 @@ Follow the step below to download KubeKey.
 Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{</ tab >}}
@@ -142,7 +142,7 @@ export KKZONE=cn
 Run the following command to download KubeKey:
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{< notice note >}}
@@ -175,7 +175,7 @@ Create an example configuration file with default configurations. Here Kubernete
 
 {{< notice note >}}
 
-- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
+- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
 
 - If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
@@ -145,7 +145,7 @@ Perform the following steps to download KubeKey.
 Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or run the following command:
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{</ tab >}}
@@ -161,7 +161,7 @@ export KKZONE=cn
 Run the following command to download KubeKey:
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{< notice note >}}
@@ -202,7 +202,7 @@ To create a Kubernetes cluster with KubeSphere installed, refer to the following
 
 {{< notice note >}}
 
-- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x. If you do not specify a Kubernetes version, KubeKey installs Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../installing-on-linux/introduction/kubekey/#support-matrix).
+- Recommended Kubernetes versions for KubeSphere 3.4: v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x. If you do not specify a Kubernetes version, KubeKey installs Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../installing-on-linux/introduction/kubekey/#support-matrix).
 - For all-in-one installation, you do not need to change any configuration.
 - If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed. KubeKey will install Kubernetes only. If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
 - KubeKey will install [OpenEBS](https://openebs.io/) to provision LocalPV for the development and testing environment by default, which is convenient for new users. For other storage classes, see [Persistent Storage Configurations](../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).
@@ -11,7 +11,7 @@ In addition to installing KubeSphere on a Linux machine, you can also deploy it
 
 ## Prerequisites
 
-- To install KubeSphere 3.4 on Kubernetes, your Kubernetes version must be v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+- To install KubeSphere 3.4 on Kubernetes, your Kubernetes version must be v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x.
 - Make sure your machine meets the minimal hardware requirement: CPU > 1 Core, Memory > 2 GB.
 - A **default** Storage Class in your Kubernetes cluster needs to be configured before the installation.
 
@@ -15,7 +15,7 @@ ks-installer is recommended for users whose Kubernetes clusters were not set up
 - Read [Release Notes for 3.4.0](../../../v3.4/release/release-v340/) carefully.
 - Back up any important component beforehand.
 - A Docker registry. You need to have a Harbor or other Docker registries. For more information, see [Prepare a Private Image Registry](../../installing-on-linux/introduction/air-gapped-installation/#step-2-prepare-a-private-image-registry).
-- Supported Kubernetes versions of KubeSphere 3.4: v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+- Supported Kubernetes versions of KubeSphere 3.4: v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x.
 
 ## Major Updates
 
@@ -9,8 +9,8 @@ Air-gapped upgrade with KubeKey is recommended for users whose KubeSphere and Ku
 
 ## Prerequisites
 
-- You need to have a KubeSphere cluster running v3.2.x. If your KubeSphere version is v3.1.x or earlier, upgrade to v3.2.x first.
-- Your Kubernetes version must be v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+- You need to have a KubeSphere cluster running v3.3.x. If your KubeSphere version is v3.2.x or earlier, upgrade to v3.3.x first.
+- Your Kubernetes version must be v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x.
 - Read [Release Notes for 3.4.0](../../../v3.4/release/release-v340/) carefully.
 - Back up any important component beforehand.
 - A Docker registry. You need to have a Harbor or other Docker registries.
@@ -65,7 +65,7 @@ KubeKey upgrades Kubernetes from one MINOR version to the next MINOR version unt
 Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{</ tab >}}
@@ -81,7 +81,7 @@ KubeKey upgrades Kubernetes from one MINOR version to the next MINOR version unt
 Run the following command to download KubeKey:
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 {{</ tab >}}
 
@@ -153,7 +153,7 @@ As you install KubeSphere and Kubernetes on Linux, you need to prepare an image
 
 {{< notice note >}}
 
-- You can change the Kubernetes version downloaded based on your needs. Recommended Kubernetes versions for KubeSphere 3.4 are v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../installing-on-linux/introduction/kubekey/#support-matrix).
+- You can change the Kubernetes version downloaded based on your needs. Recommended Kubernetes versions for KubeSphere 3.4 are v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x. If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.10 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../installing-on-linux/introduction/kubekey/#support-matrix).
 
 - After you run the script, a folder `kubekey` is automatically created. Note that this file and `kk` must be placed in the same directory when you create the cluster later.
 
@@ -262,7 +262,7 @@ Set `privateRegistry` of your `config-sample.yaml` file:
 ./kk upgrade -f config-sample.yaml
 ```
 
-To upgrade Kubernetes to a specific version, explicitly provide the version after the flag `--with-kubernetes`. Available versions are v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+To upgrade Kubernetes to a specific version, explicitly provide the version after the flag `--with-kubernetes`. Available versions are v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x.
 
 ### Air-gapped upgrade for multi-node clusters
 
@@ -346,4 +346,4 @@ Set `privateRegistry` of your `config-sample.yaml` file:
 ./kk upgrade -f config-sample.yaml
 ```
 
-To upgrade Kubernetes to a specific version, explicitly provide the version after the flag `--with-kubernetes`. Available versions are v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+To upgrade Kubernetes to a specific version, explicitly provide the version after the flag `--with-kubernetes`. Available versions are v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x.
@@ -8,11 +8,11 @@ weight: 7100
 
 ## Make Your Upgrade Plan
 
-KubeSphere 3.4 is compatible with Kubernetes v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x:
+KubeSphere 3.4 is compatible with Kubernetes v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x:
 
 - Before you upgrade your cluster to KubeSphere 3.4, you need to have a KubeSphere cluster running v3.2.x.
 - You can choose to only upgrade KubeSphere to 3.4 or upgrade Kubernetes (to a higher version) and KubeSphere (to 3.4) at the same time.
-- For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+- For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x.
 ## Before the Upgrade
 
 {{< notice warning >}}
@@ -10,10 +10,10 @@ ks-installer is recommended for users whose Kubernetes clusters were not set up
 
 ## Prerequisites
 
-- You need to have a KubeSphere cluster running v3.2.x. If your KubeSphere version is v3.1.x or earlier, upgrade to v3.2.x first.
+- You need to have a KubeSphere cluster running v3.3.x. If your KubeSphere version is v3.2.x or earlier, upgrade to v3.3.x first.
 - Read [Release Notes for 3.4.0](../../../v3.4/release/release-v340/) carefully.
 - Back up any important component beforehand.
-- Supported Kubernetes versions of KubeSphere 3.4: v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, and v1.24.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
+- Supported Kubernetes versions of KubeSphere 3.4: v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x.
 
 ## Major Updates
 
@@ -11,7 +11,7 @@ This tutorial demonstrates how to upgrade your cluster using KubeKey.
 
 ## Prerequisites
 
-- You need to have a KubeSphere cluster running v3.2.x. If your KubeSphere version is v3.1.x or earlier, upgrade to v3.2.x first.
+- You need to have a KubeSphere cluster running v3.3.x. If your KubeSphere version is v3.2.x or earlier, upgrade to v3.3.x first.
 - Read [Release Notes for 3.4.0](../../../v3.4/release/release-v340/) carefully.
 - Back up any important component beforehand.
 - Make your upgrade plan. Two scenarios are provided in this document for [all-in-one clusters](#all-in-one-cluster) and [multi-node clusters](#multi-node-cluster) respectively.
@@ -39,7 +39,7 @@ Follow the steps below to download KubeKey before you upgrade your cluster.
 Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
 
 ```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
 ```
 
 {{</ tab >}}
@ -55,7 +55,7 @@ export KKZONE=cn

Run the following command to download KubeKey:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{< notice note >}}

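The `KKZONE=cn` environment variable above switches the install script to a China-hosted mirror. A hedged sketch of what that region switch effectively selects (the helper name `kk_mirror` is ours; the qingstor URL is the one that appears in the KubeKey download log elsewhere in this repository's docs):

```shell
# Return the release download base URL the KubeKey install script would use,
# depending on whether KKZONE=cn is set in the environment.
kk_mirror() {
  if [ "${KKZONE:-}" = "cn" ]; then
    echo "https://kubernetes.pek3b.qingstor.com/kubekey/releases/download"
  else
    echo "https://github.com/kubesphere/kubekey/releases/download"
  fi
}

KKZONE=cn kk_mirror
```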
@ -98,7 +98,7 @@ Run the following command to use KubeKey to upgrade your single-node cluster to

./kk upgrade --with-kubernetes v1.22.12 --with-kubesphere v3.4.0
```

To upgrade Kubernetes to a specific version, explicitly provide the version after the flag `--with-kubernetes`. Available versions are v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, and v1.24.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
To upgrade Kubernetes to a specific version, explicitly provide the version after the flag `--with-kubernetes`. Available versions are v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x.

### Multi-node cluster

#### Step 1: Generate a configuration file using KubeKey

@ -137,7 +137,7 @@ The following command upgrades your cluster to KubeSphere 3.4 and Kubernetes v1.

./kk upgrade --with-kubernetes v1.22.12 --with-kubesphere v3.4.0 -f sample.yaml
```

To upgrade Kubernetes to a specific version, explicitly provide the version after the flag `--with-kubernetes`. Available versions are v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, and v1.24.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x.
To upgrade Kubernetes to a specific version, explicitly provide the version after the flag `--with-kubernetes`. Available versions are v1.20.x, v1.21.x, v1.22.x, v1.23.x, * v1.24.x, * v1.25.x, and * v1.26.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatibility. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.23.x.

{{< notice note >}}

@ -0,0 +1,4 @@

---
title: 云原生实战
css: "scss/learn.scss"
---

@ -24,7 +24,7 @@ At the same time, Huawei plays an industry-leading role in product solutions bot

## Acknowledgement

For this partnership, we thank every staff and contributor from Huawei, KubeSphere Turkey, RocketByte, and EquoSystem. This partnership would never happen without the efforts of Huawei executives Frank Machao and Bobby Zhang, Yavuz Sarı, Haldun Bozkır, Rıza Can Sevinç, Wu Yongxi, and Lin Zelin from Huawei team, and Eda Konyar, Halil BUGOL, and Stephane Yasar from KubeSphere Turkey team.
For this partnership, we thank every staff and contributor from Huawei, KubeSphere Turkey and EquoSystem. This partnership would never happen without the efforts of Huawei executives Frank Machao and Bobby Zhang, Yavuz Sarı, Haldun Bozkır, Rıza Can Sevinç, Wu Yongxi, and Lin Zelin from Huawei team, and Eda Konyar, Halil BUGOL, and Stephane Yasar from KubeSphere Turkey team.

## More information

@ -0,0 +1,58 @@

---
title: 'Welcome New KubeSphere Ambassadors! KubeSphere Ambassadorship 2023 Applications Announced!'
tag: 'Community News'
keywords: Kubernetes, KubeSphere, Community
description: We are happy to announce fourteen new KubeSphere Ambassadors who have contributed to the KubeSphere community in different ways, helping more users get to know the application scenarios and best practices of KubeSphere.
createTime: '2023-10-10'
author: 'KubeSphere'
image: 'https://pek3b.qingstor.com/kubesphere-community/images/ambassador-20230920-cover.png'
---

Contributing to an open-source community is not limited to contributing code and documentation or supporting localization and internationalization; it includes technology evangelism as well.

Through the KubeSphere Ambassadorship program we organized this year, we received external applications for the first time and selected our ambassadors through evaluation. Ambassadors serve a one-year term, new elections can be held at the same time next year, and an ambassador can be re-elected every year. Through this program, we aim to foster a more open community environment.

Through weekly meetings with ambassadors, we will give them much more active roles in KubeSphere's development processes and enable them to lead their communities in the regions where they are located.

The KubeSphere Ambassador title is awarded to technical evangelists who help grow the KubeSphere community by writing technical blogs and user cases, sharing technologies in the community, and more. It is also presented to people we believe will make valuable contributions to the KubeSphere community in the future.

We are happy to announce fourteen new KubeSphere Ambassadors who have contributed to the KubeSphere community in different ways, helping more users get to know the application scenarios and best practices of KubeSphere.

## About the Certificate

The KubeSphere Ambassadorship Program (KSAP) brings together our members who carry out community activities for KubeSphere: we support your community activities and develop KubeSphere together. For this first year in which the program is open to the community, we would like to select a total of twenty-five ambassadors. These ambassadors will serve a one-year term, and an ambassador can be chosen for multiple years in a row. After being selected, a KubeSphere Ambassador is rewarded with special benefits and a very special certificate.

## Obtaining the Certificate

| Name | Certificate |
| -------- | -------- |
| Onur Canoğlu | [View and download the certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-Onur-Canog%CC%86lu.png) |
| Rossana Suarez | [View and download the certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-Rossana-Suarez.png) |
| Jona Apelbaum | [View and download the certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-Jona-Apelbaum.png) |
| Nilo Yucra Gavilan | [View and download the certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-Nilo-Yucra-Gavilan.png) |
| Halil BUGOL | [View and download the certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-Halil-I%CC%87brahim-BUGOL.png) |
| Eda Konyar | [View and download the certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-Eda-Konyar.png) |
| İremnur Önder | [View and download the certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-I%CC%87remnur-O%CC%88nder.png) |
| Harun Eren SAT | [View and download the certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-Harun-Eren-SAT.png) |
| Min Yin | [View and download the certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-yinmin.png) |
| Kevin Xu | [View and download the certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-xupeng.png) |
| Haili Zhang | [View and download the certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-zhanghaili.png) |
| Zhengjun Zhou | [View and download the certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-zhouzhengjun.png) |
| Zhenfei Pei | [View and download the certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-peizhenfei.png) |
| Jianlin Zheng | [View and download the certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-zhengjianlin.png) |

## Important Update Regarding KubeSphere Ambassador Program Email Usage

As part of our ongoing efforts to maintain the integrity and purpose of the KubeSphere Ambassador Program, we are introducing an important update regarding the usage of **the kubesphere.io mailbox**.

To ensure the efficient use of the kubesphere.io mailbox and maintain its focus on open-source activities, we kindly request that applicants restrict its usage to open-source purposes only. This includes discussions, contributions, and inquiries related to KubeSphere and its associated projects.

While we encourage engagement and appreciate your enthusiasm for KubeSphere, we kindly request that you refrain from using the kubesphere.io mailbox for business-related matters, such as sales, marketing, or commercial inquiries. This restriction will help us maintain the integrity of the Ambassador Program and ensure its effectiveness in supporting the open-source community.

We appreciate your understanding and cooperation in adhering to these guidelines. By doing so, we can together create a vibrant and collaborative environment for open-source enthusiasts and contributors.

If you have any questions or require further clarification regarding the Ambassador Program or its guidelines, please feel free to reach out to us at info@kubesphere.io.

## Final

The KubeSphere community would like to express its gratitude to the new KubeSphere Ambassadors and extend the sincerest greetings to everyone who has participated in open-source contribution to the KubeSphere community! Shortly before this announcement, we contacted all our ambassadors and started our work; we are currently in the grouping and meeting phase. Very soon, all our ambassadors will be with you for a better KubeSphere community.

@ -161,6 +161,7 @@ section6:

children:
- icon: /images/home/section6-anchnet.jpg
- icon: /images/home/section6-aviation-industry-corporation-of-china.jpg
- icon: /images/case/logo-alphaflow.png
- icon: /images/home/section6-aqara.jpg
- icon: /images/home/section6-bank-of-beijing.jpg
- icon: /images/home/section6-benlai.jpg

@ -183,7 +184,6 @@ section6:

- icon: /images/home/section6-webank.jpg
- icon: /images/home/section6-wisdom-world.jpg
- icon: /images/home/section6-yiliu.jpg
- icon: /images/case/logo-alphaflow.png

btnContent: 案例学习
btnLink: case/

@ -0,0 +1,149 @@

---
title: 'Deploying KubeBlocks on KubeSphere for Database Freedom'
tag: 'Kubernetes,KubeSphere,KubeBlocks'
keywords: 'Kubernetes, KubeSphere, KubeBlocks'
description: 'KubeSphere makes KubeBlocks easier to deploy and use, while KubeBlocks makes applications on KubeSphere more flexible and elastic.'
createTime: '2023-10-19'
author: '尹珉'
snapshot: 'https://pek3b.qingstor.com/kubesphere-community/images/kubeblocks-on-kubesphere-cover.png'
---

## What Is KubeSphere?

KubeSphere is a distributed operating system for cloud-native applications built on top of Kubernetes. It is fully open source, supports multi-cloud and multi-cluster management, provides full-stack IT automated operations capabilities, and simplifies enterprise DevOps workflows. Its architecture makes it easy to integrate third-party applications with cloud-native ecosystem components in a plug-and-play manner. As a full-stack, multi-tenant container platform, KubeSphere provides an operations-friendly, wizard-style UI that helps enterprises quickly build a powerful and feature-rich container cloud platform. KubeSphere offers the capabilities needed to build an enterprise-grade Kubernetes environment, such as multi-cloud and multi-cluster management, Kubernetes resource management, DevOps, application lifecycle management, microservice governance (service mesh), log query and collection, services and networking, multi-tenant management, monitoring and alerting, event and audit queries, storage management, access control, GPU support, network policies, image registry management, and security management.



## What Is KubeBlocks?

The name KubeBlocks comes from Kubernetes and LEGO blocks, suggesting that building database and analytical workloads on Kubernetes can be both efficient and enjoyable, like playing with LEGO. KubeBlocks combines the large-scale production experience of top cloud service providers with enhanced usability and stability improvements, helping users easily build containerized, declarative relational, NoSQL, stream-computing, and vector database services.

Official website: https://kubeblocks.io/.



## Why KubeBlocks?

Kubernetes has become the de facto standard for container orchestration. It manages an ever-growing number of stateless workloads using the scalability and availability provided by ReplicaSets and the rollout and rollback capabilities provided by Deployments. However, managing stateful workloads poses great challenges for Kubernetes. Although a StatefulSet provides stable persistent storage and unique network identifiers, these capabilities are far from sufficient for complex stateful workloads.

To address these challenges and reduce complexity, KubeBlocks introduces a new workload, RSM (Replicated State Machines), with the following capabilities:

- Role-based update ordering reduces downtime caused by version upgrades, scaling, and restarts.
- It maintains the state of data replication and automatically repairs replication errors or lag.

## What Do the Two Bring Together?

KubeSphere provides a mature Kubernetes container management platform, and KubeBlocks builds professional database capabilities on top of it. This combination removes the technical barriers to containerizing database services and delivers an out-of-the-box experience. KubeSphere lets KubeBlocks applications benefit from cluster-level resource scheduling and service governance, while KubeBlocks gives database services on KubeSphere professional, automated operations. Together, they not only simplify moving databases to the cloud but also make database application delivery faster and more reliable.

## Getting Started with the Deployment

### Prerequisites

- Make sure you have a working KubeSphere platform. If you do not, deploy one by following the official documentation: https://kubesphere.io/zh/docs/v3.4/.

- Make sure the hosts can reach one another over the network and have Internet access.

### Log In to KubeSphere and Add the Official KubeBlocks Repository

Repository URL: https://apecloud.github.io/helm-charts.



### Add the KubeBlocks Service in a Clean Namespace

#### 1. Navigate to the app list and click the Create button on the right



#### 2. Select App Template



#### 3. Select the app repository created above and search for the KubeBlocks service



#### 4. Select the current stable version, 0.6.1



#### 5. Keep the default Values; pay extra attention to the StorageClass setting



#### 6. Wait patiently, then confirm the application service has started normally



## Install kbcli

macOS, Windows, and Linux are currently supported. This tutorial uses Linux as an example.

### 1. Install kbcli

```shell
curl -fsSL https://kubeblocks.io/installer/install_cli.sh | bash
```

### 2. Verify the installation

```shell
kbcli version
```



### 3. Check the KubeBlocks deployment

```shell
kbcli kubeblocks status
```



## Create and Connect to a MySQL Instance

> Note:
> KubeBlocks officially supports creating clusters with both kbcli and kubectl. This tutorial uses kbcli for the demonstration.

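For reference, a cluster comparable to the kbcli commands below can also be declared as a manifest and applied with kubectl. The following is a hedged sketch only, assuming the `apps.kubeblocks.io/v1alpha1` API of the KubeBlocks 0.6 era and the `apecloud-mysql` cluster definition; field names and version refs should be verified against the CRDs actually installed (`kbcli clusterdefinition list` and `kbcli clusterversion list`) before applying:

```yaml
# Hypothetical manifest sketch -- verify field names against your installed
# KubeBlocks CRDs; the clusterVersionRef value here is an assumption.
apiVersion: apps.kubeblocks.io/v1alpha1
kind: Cluster
metadata:
  name: mycluster
  namespace: default
spec:
  clusterDefinitionRef: apecloud-mysql
  clusterVersionRef: ac-mysql-8.0.30
  terminationPolicy: Delete
  componentSpecs:
    - name: mysql
      componentDefRef: mysql
      replicas: 1
      resources:
        requests:
          cpu: 500m
          memory: 512Mi
      volumeClaimTemplates:
        - name: data
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 20Gi
```

Apply it with `kubectl apply -f mycluster.yaml`; the kbcli command shown next achieves the equivalent result interactively.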

### 1. List the database types and versions available for cluster creation

```shell
kbcli clusterdefinition list
```



```shell
kbcli clusterversion list
```



### 2. Create a MySQL instance

```shell
kbcli cluster create mysql mycluster
```



### 3. Check the instance status

```shell
kbcli cluster list
```



### 4. Connect to the MySQL instance

```shell
kbcli cluster connect mycluster -n default
```



## Summary

KubeSphere provides a GUI and DevOps tooling that greatly lower the barrier to learning and using Kubernetes. KubeBlocks, built on the Kubernetes Operator pattern, achieves application decoupling and reuse and is an important option for cloud-native architectures. Deeply integrated, the two play to their respective strengths in usability and agile development: KubeSphere makes KubeBlocks easier to deploy and use, and KubeBlocks makes applications on KubeSphere more flexible and elastic. By combining their strengths, enterprises can more easily carry out application-centric digital transformation and drive business innovation.

@ -0,0 +1,857 @@

---
title: 'An Incomplete Guide to Deploying KubeSphere v3.4.0 on ARM openEuler 22.03'
tag: 'KubeSphere'
keywords: 'Kubernetes, KubeSphere, openEuler, ARM '
description: 'A hands-on walkthrough of using KubeKey v3.0.10 to automatically deploy a minimal, highly available KubeSphere v3.4.0 and Kubernetes v1.26.5 cluster on ARM openEuler 22.03 LTS SP2 servers.'
createTime: '2023-10-26'
author: '运维有术'
snapshot: 'https://pek3b.qingstor.com/kubesphere-community/images/kubesphere-3.4-on-openeuler-cover.png'
---

## Foreword

### Key Points

- Level: **beginner**
- Installing and deploying ARM KubeSphere and Kubernetes with KubeKey
- Common problems with ARM KubeSphere and Kubernetes

### Lab Server Configuration (personal test servers in the cloud)

| Hostname | IP | CPU | Memory | System disk | Data disk | Purpose |
| :---------: | :----------: | :-: | :--: | :----: | :----: | :-------------------: |
| ks-master-1 | 172.16.33.16 | 6 | 16 | 50 | 200 | KubeSphere/k8s-master |
| ks-master-2 | 172.16.33.22 | 6 | 16 | 50 | 200 | KubeSphere/k8s-master |
| ks-master-3 | 172.16.33.23 | 6 | 16 | 50 | 200 | KubeSphere/k8s-master |
| Total | | 18 | 48 | 150 | 600+ | |

### Software Versions Used in This Lab

- Server CPU: **Kunpeng-920**

- Operating system: **openEuler 22.03 LTS SP2 aarch64**

- KubeSphere: **v3.4.0**

- Kubernetes: **v1.26.5**

- Containerd: **1.6.4**

- KubeKey: **v3.0.10**

## 1. Introduction

This article describes how to deploy KubeSphere and a Kubernetes cluster on servers with the **openEuler 22.03 LTS SP2 aarch64** architecture. We use KubeKey, the deployment tool developed by the KubeSphere team, to automate the process and deploy a minimal, highly available Kubernetes cluster and KubeSphere across three servers.

The biggest difference between deploying KubeSphere and Kubernetes on ARM versus x86 servers lies in the **container image architecture** used by every service. Out of the box, the open-source edition of KubeSphere supports ARM well enough for KubeSphere-Core, that is, a minimal KubeSphere deployment on a complete Kubernetes cluster. Once pluggable KubeSphere components are enabled, some components fail to deploy, and we must manually substitute ARM images provided officially or by third parties, or build ARM images ourselves from the official source code. If you need an out-of-the-box experience and more technical support, you need to purchase the enterprise edition of KubeSphere.

This article grew out of my notes from investigating how well the open-source edition of KubeSphere supports ARM servers. It records in detail the various errors encountered on the way to a working deployment, along with the corresponding solutions. Owing to my limited abilities, every architecture-incompatibility problem in this article is solved by manually substituting the same or a similar ARM image from a third-party repository or another official repository. Readers planning production use should ideally be able to build ARM images identical to the x86 ones from the official source code and Dockerfiles, rather than substituting near-matching versions or third-party images. Precisely because this article does not cover building ARM images from the official source code and Dockerfiles, it is titled an **incomplete guide**.

Next, I provide detailed deployment instructions so that readers can easily complete an ARM deployment of KubeSphere and Kubernetes and resolve the problems encountered along the way.

### 1.1 Operating System Configuration

Before performing the tasks below, verify the relevant operating system configuration.

- Operating system type

```bash
[root@ks-master-1 ~]# cat /etc/os-release
NAME="openEuler"
VERSION="22.03 (LTS-SP2)"
ID="openEuler"
VERSION_ID="22.03"
PRETTY_NAME="openEuler 22.03 (LTS-SP2)"
ANSI_COLOR="0;31"
```

- Operating system kernel

```bash
[root@ks-master-1 ~]# uname -a
Linux ks-master-1 5.10.0-153.12.0.92.oe2203sp2.aarch64 #1 SMP Wed Jun 28 23:18:48 CST 2023 aarch64 aarch64 aarch64 GNU/Linux
```

- Server CPU information

```bash
[root@ks-master-1 ~]# lscpu
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 6
On-line CPU(s) list: 0-5
Vendor ID: HiSilicon
BIOS Vendor ID: QEMU
Model name: Kunpeng-920
BIOS Model name: virt-4.1
Model: 0
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 6
Stepping: 0x1
Frequency boost: disabled
CPU max MHz: 2600.0000
CPU min MHz: 2600.0000
BogoMIPS: 200.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma dcpop asimddp asimdfhm
Caches (sum of all):
L1d: 384 KiB (6 instances)
L1i: 384 KiB (6 instances)
L2: 3 MiB (6 instances)
L3: 192 MiB (6 instances)
```

## 2. Basic Operating System Configuration

Note: unless otherwise stated, the following operations must be performed on all servers. This article demonstrates only on the Master-1 node and assumes the remaining servers have been configured in the same way.

### 2.1 Configure the Hostname

```shell
hostnamectl hostname ks-master-1
```

### 2.2 Configure DNS

```shell
echo "nameserver 114.114.114.114" > /etc/resolv.conf
```

### 2.3 Configure the Server Time Zone

Set the server time zone to **Asia/Shanghai**.

```shell
timedatectl set-timezone Asia/Shanghai
```

### 2.4 Configure Time Synchronization

Install chrony as the time synchronization software.

```shell
yum install chrony
```

Edit the configuration file /etc/chrony.conf and change the NTP server settings.

```shell
vi /etc/chrony.conf

# Remove all existing pool entries
pool pool.ntp.org iburst

# Add an NTP server located in China, or any other commonly used time server
pool cn.pool.ntp.org iburst

# The manual edit above can also be done automatically with sed
sed -i 's/^pool pool.*/pool cn.pool.ntp.org iburst/g' /etc/chrony.conf
```

Restart the chrony service and enable it at boot.

```shell
systemctl enable chronyd --now
```

Verify the chrony synchronization status.

```shell
# Run the status command
chronyc sourcestats -v
```

### 2.5 Disable the System Firewall

```shell
systemctl stop firewalld && systemctl disable firewalld
```

### 2.6 Disable SELinux

A minimal installation of openEuler 22.03 SP2 enables SELinux by default. To reduce trouble, we disable SELinux on all nodes.

```shell
# Edit the configuration file with sed for a permanent change
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# Disable it temporarily with a command; strictly speaking this step is optional, as KubeKey configures it automatically
setenforce 0
```

### 2.7 Install System Dependencies

On all nodes, log in as **root** and run the following commands to install the basic system dependencies for Kubernetes.

```shell
# Install Kubernetes system dependencies
yum install curl socat conntrack ebtables ipset ipvsadm

# Install other essentials; oddly, openEuler does not even install tar by default, and later steps fail without it
yum install tar
```

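KubeKey performs a similar dependency pre-check itself before installing (its check table appears in the deployment log later in this article). A minimal sketch of the same idea as a shell helper (`precheck` is our name, not a KubeKey command):

```shell
# Report "ok" if every named command is on PATH, otherwise list what is missing.
precheck() {
  missing=""
  for cmd in "$@"; do
    command -v "$cmd" >/dev/null 2>&1 || missing="$missing $cmd"
  done
  if [ -z "$missing" ]; then
    echo "ok"
  else
    echo "missing:$missing"
  fi
}

# Check the dependencies installed in section 2.7
precheck curl socat conntrack ebtables ipset ipvsadm tar
```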
## 3. Operating System Disk Configuration

A new data disk, **/dev/sdb**, is added to each server for persistent storage of **Containerd** and **Kubernetes Pods**.

Note: unless otherwise stated, the following operations must be performed on all cluster nodes. This article demonstrates only on the **Master-1** node and assumes the remaining servers have been configured in the same way.

### 3.1 Configure the Disk with LVM

To let users dynamically expand capacity when disk space runs low after going into production, this article configures the disk with LVM (**in practice, the production environments I maintain rarely use LVM**).

- Create the PV

```bash
pvcreate /dev/sdb
```

- Create the VG

```bash
vgcreate data /dev/sdb
```

- Create the LV

```bash
# Use all available space; the VG is named data and the LV is named lvdata
lvcreate -l 100%VG data -n lvdata
```

### 3.2 Format the Disk

```shell
mkfs.xfs /dev/mapper/data-lvdata
```

### 3.3 Mount the Disk

- Mount manually

```bash
mkdir /data
mount /dev/mapper/data-lvdata /data/
```

- Mount automatically at boot

```bash
tail -1 /etc/mtab >> /etc/fstab
```

### 3.4 Create Data Directories

- Create the **Containerd** data directory

```bash
mkdir -p /data/containerd
```

- Create a symlink for the Containerd data directory

```bash
ln -s /data/containerd /var/lib/containerd
```

> **Note:** As of v3.0.10, KubeKey still does not support changing Containerd's data directory at deployment time; this directory-symlink workaround is the only way to add storage space (**alternatively, and preferably, install Containerd manually in advance**).

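If you do install Containerd manually beforehand, the same result can be achieved without the symlink by pointing Containerd's data root at the data disk in its configuration. A sketch of the relevant keys in /etc/containerd/config.toml (assumption: a manually installed Containerd whose `root` defaults to /var/lib/containerd); restart containerd after editing:

```toml
# /etc/containerd/config.toml (fragment)
# Move all containerd data onto the LVM-backed data disk created above.
root = "/data/containerd"
state = "/run/containerd"
```

This keeps /var/lib/containerd out of the picture entirely, which is why installing Containerd in advance is the preferable option mentioned in the note above.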
### 3.5 Disk Configuration Automation Shell Script

All of the operations above can be combined into an automation script.

```shell
pvcreate /dev/sdb
vgcreate data /dev/sdb
lvcreate -l 100%VG data -n lvdata
mkfs.xfs /dev/mapper/data-lvdata
mkdir /data
mount /dev/mapper/data-lvdata /data/
tail -1 /etc/mtab >> /etc/fstab
mkdir -p /data/containerd
ln -s /data/containerd /var/lib/containerd
```

## 4. Install and Deploy KubeSphere and Kubernetes

### 4.1 Download KubeKey

This article uses the master-1 node as the deployment node and downloads the latest KubeKey (kk for short) binary (**v3.0.10**) to that server. The exact kk version number can be found on the [kk release page](https://github.com/kubesphere/kubekey/releases "kk release page").

- Download the latest KubeKey

```shell
cd ~
mkdir kubekey
cd kubekey/

# Download from the China mirror (use when GitHub access is restricted)
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | sh -

# A specific version can also be requested with the following command
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.10 sh -

# Expected output
[root@ks-master-1 ~]# cd ~
[root@ks-master-1 ~]# mkdir kubekey
[root@ks-master-1 ~]# cd kubekey/
[root@ks-master-1 kubekey]# export KKZONE=cn
[root@ks-master-1 kubekey]# curl -sfL https://get-kk.kubesphere.io | sh -

Downloading kubekey v3.0.10 from https://kubernetes.pek3b.qingstor.com/kubekey/releases/download/v3.0.10/kubekey-v3.0.10-linux-arm64.tar.gz ...


Kubekey v3.0.10 Download Complete!

[root@ks-master-1 kubekey]# ll
total 107040
-rwxr-xr-x. 1 root root 76376640 Jul 28 14:13 kk
-rw-r--r--. 1 root root 33229133 Oct 12 09:03 kubekey-v3.0.10-linux-arm64.tar.gz
```

> **Note:** The ARM package is named **kubekey-v3.0.10-linux-arm64.tar.gz**.

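The download script picks the artifact matching the local CPU architecture. A minimal sketch of that selection, assuming artifacts follow the `kubekey-<version>-linux-<arch>.tar.gz` naming seen in the download log above:

```shell
# Map uname -m output to the KubeKey artifact architecture suffix.
case "$(uname -m)" in
  aarch64|arm64) kk_arch=arm64 ;;
  x86_64|amd64)  kk_arch=amd64 ;;
  *)             kk_arch=unknown ;;
esac

echo "kubekey-v3.0.10-linux-${kk_arch}.tar.gz"
```

On the Kunpeng-920 servers in this lab, `uname -m` reports `aarch64`, so the arm64 artifact is selected.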
### 4.2 Create the Kubernetes and KubeSphere Deployment Configuration File

Create the cluster configuration file. This example uses KubeSphere v3.4.0 and Kubernetes v1.26.5, so the configuration file is named **kubesphere-v340-v1265.yaml**; if no name is specified, the default file name is **config-sample.yaml**.

```shell
./kk create config -f kubesphere-v340-v1265.yaml --with-kubernetes v1.26.5 --with-kubesphere v3.4.0
```

After the command succeeds, a configuration file named **kubesphere-v340-v1265.yaml** is generated in the current directory.

> **Note:** The generated default configuration is lengthy and not reproduced here; for detailed configuration parameters, see the [official configuration example](https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md "official configuration example").

This example uses the same 3 nodes as control-plane, etcd, and worker nodes.

Edit the configuration file `kubesphere-v340-v1265.yaml`, mainly the **kind: Cluster** and **kind: ClusterConfiguration** sections.

In the **kind: Cluster** section, modify hosts, roleGroups, and related fields as follows.

- hosts: set each node's IP, SSH user, SSH password, and SSH port. **Important:** be sure to set **arch: arm64** manually, otherwise x86 packages will be installed during deployment.
- roleGroups: set 3 etcd and control-plane nodes, and reuse the same machines as 3 worker nodes.
- internalLoadbalancer: enable the built-in HAProxy load balancer.
- domain: a custom domain, opsman.top.
- containerManager: use containerd.

The modified example is as follows:

```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: ks-master-1, address: 172.16.33.16, internalAddress: 172.16.33.16, user: root, password: "P@88w0rd", arch: arm64}
  - {name: ks-master-2, address: 172.16.33.22, internalAddress: 172.16.33.22, user: root, password: "P@88w0rd", arch: arm64}
  - {name: ks-master-3, address: 172.16.33.23, internalAddress: 172.16.33.23, user: root, password: "P@88w0rd", arch: arm64}
  roleGroups:
    etcd:
    - ks-master-1
    - ks-master-2
    - ks-master-3
    control-plane:
    - ks-master-1
    - ks-master-2
    - ks-master-3
    worker:
    - ks-master-1
    - ks-master-2
    - ks-master-3
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy

    domain: lb.opsman.top
    address: ""
    port: 6443
  kubernetes:
    version: v1.26.5
    clusterName: opsman.top
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
```

Modify the **kind: ClusterConfiguration** section to enable pluggable components, as follows.

- Enable etcd monitoring

```yaml
etcd:
  monitoring: true # Change "false" to "true"
  endpointIps: localhost
  port: 2379
  tlsEnable: true
```

- Enable the App Store

```yaml
openpitrix:
  store:
    enabled: true # Change "false" to "true"
```

- Enable the KubeSphere DevOps system

```yaml
devops:
  enabled: true # Change "false" to "true"
```

- Enable the KubeSphere logging system

```yaml
logging:
  enabled: true # Change "false" to "true"
```

- Enable the KubeSphere events system

```yaml
events:
  enabled: true # Change "false" to "true"
```

> **Note:** By default, if the events system is enabled, KubeKey installs the built-in Elasticsearch. For production environments, enabling the events system at deployment time is not recommended; configure it manually after deployment by following the [official pluggable components documentation](https://www.kubesphere.io/zh/docs/v3.3/pluggable-components/events/ "official pluggable components documentation").

- Enable the KubeSphere alerting system

```yaml
alerting:
  enabled: true # Change "false" to "true"
```

- Enable KubeSphere audit logs

```yaml
auditing:
  enabled: true # Change "false" to "true"
```

> **Note:** By default, if audit logging is enabled, KubeKey installs the built-in Elasticsearch. For production environments, enabling auditing at deployment time is not recommended; configure it manually after deployment by following the [official pluggable components documentation](https://www.kubesphere.io/zh/docs/v3.3/pluggable-components/events/ "official pluggable components documentation").

- Enable the KubeSphere service mesh

```yaml
servicemesh:
  enabled: true # Change "false" to "true"
  istio:
    components:
      ingressGateways:
      - name: istio-ingressgateway # Expose services outside of the service mesh. Disabled by default.
        enabled: false
      cni:
        enabled: false # When enabled, pod traffic forwarding for the Istio mesh is configured during the network-setup phase of the Kubernetes pod lifecycle.
```

- Enable Metrics Server

```yaml
metrics_server:
  enabled: true # Change "false" to "true"
```

> **Note:** KubeSphere supports the Horizontal Pod Autoscaler (HPA) for [Deployments](https://www.kubesphere.io/zh/docs/v3.3/project-user-guide/application-workloads/deployments/ "Deployments"). In KubeSphere, Metrics Server controls whether HPA is enabled.

- Enable network policies, Pod IP pools, and the service topology (listed alphabetically, matching the order of the configuration parameters)

```yaml
network:
  networkpolicy:
    enabled: true # Change "false" to "true"
  ippool:
    type: calico # Change "none" to "calico"
  topology:
    type: none # Change "none" to "weave-scope"
```

> **Notes:**
>
> - Since v3.0.0, users can configure native Kubernetes network policies in KubeSphere.
> - Pod IP pools are used to plan the Pod network address space; the address spaces of different Pod IP pools must not overlap.
> - Enable the service topology to integrate [Weave Scope](https://www.weave.works/oss/scope/ "Weave Scope") (a visualization and monitoring tool for Docker and Kubernetes); the service topology is displayed in your projects, visualizing the connections between services.
> - **Because an arm64 image of the matching weave-scope version is hard to find and would have to be built by hand, and the feature is of little use now that the project is no longer maintained, this article ultimately gave up on enabling it.**

### 4.3 部署 KubeSphere 和 Kubernetes
|
||||
|
||||
接下来我们执行下面的命令,使用上面生成的配置文件部署 KubeSphere 和 Kubernetes。
|
||||
|
||||
```shell
|
||||
export KKZONE=cn
|
||||
./kk create cluster -f kubesphere-v340-v1265.yaml
|
||||
```
|
||||
|
||||
上面的命令执行后,首先 kk 会检查部署 Kubernetes 的依赖及其他详细要求。检查合格后,系统将提示您确认安装。输入 **yes** 并按 ENTER 继续部署。
|
||||
|
||||
```shell
|
||||
[root@ks-master-1 kubekey]# export KKZONE=cn
|
||||
[root@ks-master-1 kubekey]# ./kk create cluster -f kubesphere-v340-v1265.yaml
|
||||
|
||||
|
||||
_ __ _ _ __
|
||||
| | / / | | | | / /
|
||||
| |/ / _ _| |__ ___| |/ / ___ _ _
|
||||
| \| | | | '_ \ / _ \ \ / _ \ | | |
|
||||
| |\ \ |_| | |_) | __/ |\ \ __/ |_| |
|
||||
\_| \_/\__,_|_.__/ \___\_| \_/\___|\__, |
|
||||
__/ |
|
||||
|___/
|
||||
|
||||
09:58:12 CST [GreetingsModule] Greetings
|
||||
09:58:12 CST message: [ks-master-3]
|
||||
Greetings, KubeKey!
|
||||
09:58:13 CST message: [ks-master-1]
|
||||
Greetings, KubeKey!
|
||||
09:58:13 CST message: [ks-master-2]
|
||||
Greetings, KubeKey!
|
||||
09:58:13 CST success: [ks-master-3]
|
||||
09:58:13 CST success: [ks-master-1]
|
||||
09:58:13 CST success: [ks-master-2]
|
||||
09:58:13 CST [NodePreCheckModule] A pre-check on nodes
|
||||
09:58:16 CST success: [ks-master-3]
|
||||
09:58:16 CST success: [ks-master-1]
|
||||
09:58:16 CST success: [ks-master-2]
|
||||
09:58:16 CST [ConfirmModule] Display confirmation form
|
||||
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
|
||||
| name | sudo | curl | openssl | ebtables | socat | ipset | ipvsadm | conntrack | chrony | docker | containerd | nfs client | ceph client | glusterfs client | time |
|
||||
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
|
||||
| ks-master-1 | y | y | y | y | y | y | y | y | y | | | | | | CST 09:58:15 |
|
||||
| ks-master-2 | y | y | y | y | y | y | y | y | y | | | | | | CST 09:58:16 |
|
||||
| ks-master-3 | y | y | y | y | y | y | y | y | y | | | | | | CST 09:58:15 |
|
||||
+-------------+------+------+---------+----------+-------+-------+---------+-----------+--------+--------+------------+------------+-------------+------------------+--------------+
|
||||
|
||||
This is a simple check of your environment.
|
||||
Before installation, ensure that your machines meet all requirements specified at
|
||||
https://github.com/kubesphere/kubekey#requirements-and-recommendations
|
||||
|
||||
Continue this installation? [yes/no]:
|
||||
```
|
||||
|
||||
安装过程日志输出较多,为节省篇幅,这里只展示最关键的一点:一定要确认下载二进制包时的架构为 **arm64**,其余日志输出从略。
|
||||
|
||||
```bash
|
||||
Continue this installation? [yes/no]: yes
|
||||
10:49:21 CST success: [LocalHost]
|
||||
10:49:21 CST [NodeBinariesModule] Download installation binaries
|
||||
10:49:21 CST message: [localhost]
|
||||
downloading arm64 kubeadm v1.26.5 ...
|
||||
% Total % Received % Xferd Average Speed Time Time Time Current
|
||||
Dload Upload Total Spent Left Speed
|
||||
100 43.3M 100 43.3M 0 0 1035k 0 0:00:42 0:00:42 --:--:-- 1212k
|
||||
```
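上文强调要确认下载的二进制包为 **arm64** 架构。作为补充,部署前也可以在节点上直接确认 CPU 架构(示例,输出依机器而定),KubeKey 正是据此选择对应架构的二进制包:

```shell
# 查看当前节点的 CPU 架构
# ARM 机器上通常输出 aarch64(或 arm64),x86 机器上输出 x86_64
uname -m
```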
|
||||
|
||||
部署完成大约需要 10-30 分钟,具体取决于网速和机器配置,本次部署耗时 32 分钟。
|
||||
|
||||
部署完成后,您应该会在终端上看到类似于下面的输出。在提示部署完成的同时,输出中还会显示用户登录 KubeSphere 的默认管理员账户和密码。
|
||||
|
||||
```shell
|
||||
clusterconfiguration.installer.kubesphere.io/ks-installer created
|
||||
11:35:03 CST skipped: [ks-master-3]
|
||||
11:35:03 CST skipped: [ks-master-2]
|
||||
11:35:03 CST success: [ks-master-1]
|
||||
#####################################################
|
||||
### Welcome to KubeSphere! ###
|
||||
#####################################################
|
||||
|
||||
Console: http://172.16.33.16:30880
|
||||
Account: admin
|
||||
Password: P@88w0rd
|
||||
NOTES:
|
||||
1. After you log into the console, please check the
|
||||
monitoring status of service components in
|
||||
"Cluster Management". If any service is not
|
||||
ready, please wait patiently until all components
|
||||
are up and running.
|
||||
2. Please change the default password after login.
|
||||
|
||||
#####################################################
|
||||
https://kubesphere.io 2023-10-12 11:43:50
|
||||
#####################################################
|
||||
11:43:53 CST skipped: [ks-master-3]
|
||||
11:43:53 CST skipped: [ks-master-2]
|
||||
11:43:53 CST success: [ks-master-1]
|
||||
11:43:53 CST Pipeline[CreateClusterPipeline] execute successfully
|
||||
Installation is complete.
|
||||
|
||||
Please check the result using the command:
|
||||
|
||||
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
|
||||
```
|
||||
|
||||
> **注意:** 显示上面的部署完成信息后,并不代表所有组件和服务都已正常部署并能提供服务,请查看本文续集中的**「异常组件及解决方案」**小节,排查解决创建和启动异常的组件。
|
||||
|
||||
## 5. 部署验证
|
||||
|
||||
上面的部署任务完成以后,只能说明基于 ARM 架构的 KubeSphere 和 Kubernetes 集群部署完成了,但整体功能是否可用,还需要进一步验证。
|
||||
|
||||
本文只做基本验证,不做详细全功能验证,有需要的朋友请自行验证测试。
|
||||
|
||||
### 5.1 KubeSphere 管理控制台验证集群状态
|
||||
|
||||
我们打开浏览器访问 master-1 节点的 IP 地址和端口 **30880**,可以看到 KubeSphere 管理控制台的登录页面。
|
||||
|
||||
输入默认用户 **admin** 和默认密码 **P@88w0rd**,然后点击「登录」。
|
||||
|
||||

|
||||
|
||||
登录后,系统会要求您更改 KubeSphere 默认用户 admin 的默认密码,输入新的密码并点击「提交」。
|
||||
|
||||

|
||||
|
||||
提交完成后,系统会跳转到 KubeSphere admin 用户工作台页面,该页面显示了当前 KubeSphere 版本为 **v3.4.0**,可用的 Kubernetes 集群数量为 1。
|
||||
|
||||

|
||||
|
||||
接下来,单击左上角的「平台管理」菜单,选择「集群管理」。
|
||||
|
||||

|
||||
|
||||
进入集群管理界面,在该页面可以查看集群的基本信息,包括集群资源用量、Kubernetes 状态、节点资源用量 Top、系统组件、工具箱等内容。
|
||||
|
||||

|
||||
|
||||
单击左侧「节点」菜单,点击「集群节点」可以查看 Kubernetes 集群可用节点的详细信息。
|
||||
|
||||

|
||||
|
||||
单击左侧「系统组件」菜单,可以查看已安装组件的详细信息。
|
||||
|
||||

|
||||
|
||||
接下来我们粗略地看一下部署集群时启用的可插拔组件的状态。
|
||||
|
||||
- Etcd 监控
|
||||
|
||||

|
||||
|
||||
- 应用商店
|
||||
|
||||

|
||||
|
||||
- KubeSphere DevOps 系统(**所有组件状态正常,实际测试中流水线也能正常创建,但是在构建任务时异常无法启动 maven 容器,仅做记录,后续专题解决**)
|
||||
|
||||

|
||||
|
||||
- KubeSphere 日志系统
|
||||
|
||||

|
||||
|
||||
- KubeSphere 事件系统
|
||||
|
||||

|
||||
|
||||
- KubeSphere 审计日志
|
||||
|
||||

|
||||
|
||||
- KubeSphere 告警系统
|
||||
|
||||

|
||||
|
||||
- KubeSphere 服务网格(**实际功能未验证测试**)
|
||||
|
||||

|
||||
|
||||
- Metrics Server(**页面没有,需要启用 HPA 时验证**)
|
||||
- 网络策略、容器组 IP 池
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
最后看一组监控图表来结束我们的图形验证(**Etcd 监控在上文已展示**)。
|
||||
|
||||
- 概览
|
||||
|
||||

|
||||
|
||||
- 物理资源监控
|
||||
|
||||

|
||||
|
||||
- API Server 监控
|
||||
|
||||

|
||||
|
||||
- 调度器监控
|
||||
|
||||

|
||||
|
||||
- 资源用量排行
|
||||
|
||||

|
||||
|
||||
- Pod 监控
|
||||
|
||||

|
||||
|
||||
### 5.2 Kubectl 命令行验证集群状态
|
||||
|
||||
**本小节只是简单的看了一下基本状态,并不全面,更多的细节大家自己体验探索吧。**
|
||||
|
||||
- 查看集群节点信息
|
||||
|
||||
在 master-1 节点运行 kubectl 命令获取 Kubernetes 集群上的可用节点列表。
|
||||
|
||||
```shell
|
||||
kubectl get nodes -o wide
|
||||
```
|
||||
|
||||
在输出结果中可以看到,当前的 Kubernetes 集群有三个可用节点、节点的内部 IP、节点角色、节点的 Kubernetes 版本号、容器运行时及版本号、操作系统类型及内核版本等信息。
|
||||
|
||||
```shell
|
||||
[root@ks-master-1 ~]# kubectl get nodes -o wide
|
||||
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
|
||||
ks-master-1 Ready control-plane,worker 4d4h v1.26.5 172.16.33.16 <none> openEuler 22.03 (LTS-SP2) 5.10.0-153.12.0.92.oe2203sp2.aarch64 containerd://1.6.4
|
||||
ks-master-2 Ready control-plane,worker 4d4h v1.26.5 172.16.33.22 <none> openEuler 22.03 (LTS-SP2) 5.10.0-153.12.0.92.oe2203sp2.aarch64 containerd://1.6.4
|
||||
ks-master-3 Ready control-plane,worker 4d4h v1.26.5 172.16.33.23 <none> openEuler 22.03 (LTS-SP2) 5.10.0-153.12.0.92.oe2203sp2.aarch64 containerd://1.6.4
|
||||
```
|
||||
|
||||
- 查看 Pod 列表
|
||||
|
||||
输入以下命令获取在 Kubernetes 集群上运行的 Pod 列表,按工作负载在 NODE 上的分布排序。
|
||||
|
||||
```shell
|
||||
kubectl get pods -o wide -A
|
||||
```
|
||||
|
||||
在输出结果中可以看到,Kubernetes 集群上有多个可用的命名空间,如 kube-system、kubesphere-controls-system、kubesphere-monitoring-system、kubesphere-system、argocd 和 istio-system 等,所有 Pod 均处于运行状态。
|
||||
|
||||
```shell
|
||||
[root@ks-master-1 ~]# kubectl get pods -o wide -A | grep -v Completed | grep -v weave
|
||||
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
|
||||
argocd devops-argocd-application-controller-0 1/1 Running 1 (29m ago) 4d 10.233.103.140 ks-master-1 <none> <none>
|
||||
argocd devops-argocd-applicationset-controller-864f464855-64zvf 1/1 Running 1 (29m ago) 99m 10.233.103.129 ks-master-1 <none> <none>
|
||||
argocd devops-argocd-dex-server-65f7bc75c9-872sh 1/1 Running 1 (30m ago) 4d 10.233.93.39 ks-master-3 <none> <none>
|
||||
argocd devops-argocd-notifications-controller-68f699d6fb-xd2j4 1/1 Running 1 (30m ago) 4d 10.233.93.40 ks-master-3 <none> <none>
|
||||
argocd devops-argocd-redis-84f4c697ff-l96m5 1/1 Running 1 (29m ago) 4d 10.233.103.146 ks-master-1 <none> <none>
|
||||
argocd devops-argocd-repo-server-b6896f6d5-sdfxz 1/1 Running 1 (30m ago) 4d 10.233.93.36 ks-master-3 <none> <none>
|
||||
argocd devops-argocd-server-7f76f4fccb-v82f4 1/1 Running 1 (31m ago) 4d 10.233.93.44 ks-master-3 <none> <none>
|
||||
istio-system istiod-1-14-6-6d4dbc56df-n5z9g 1/1 Running 0 11m 10.233.102.149 ks-master-2 <none> <none>
|
||||
istio-system jaeger-operator-654c67b7cc-f62zp 1/1 Running 1 (8m20s ago) 11m 10.233.103.147 ks-master-1 <none> <none>
|
||||
istio-system kiali-5d6dc84c75-v4v7n 1/1 Running 1 (30m ago) 4d 10.233.102.127 ks-master-2 <none> <none>
|
||||
istio-system kiali-operator-7946dd765f-zbhng 1/1 Running 1 (30m ago) 4d 10.233.102.132 ks-master-2 <none> <none>
|
||||
kube-system calico-kube-controllers-7f576895dd-zfm25 1/1 Running 1 (30m ago) 4d5h 10.233.102.141 ks-master-2 <none> <none>
|
||||
kube-system calico-node-jq4rm 1/1 Running 1 (30m ago) 4d5h 172.16.33.22 ks-master-2 <none> <none>
|
||||
kube-system calico-node-wdrmh 1/1 Running 1 (30m ago) 4d5h 172.16.33.23 ks-master-3 <none> <none>
|
||||
kube-system calico-node-xbmzq 1/1 Running 1 (29m ago) 4d5h 172.16.33.16 ks-master-1 <none> <none>
|
||||
kube-system coredns-d9d84b5bf-9zp82 1/1 Running 1 (30m ago) 4d5h 10.233.102.142 ks-master-2 <none> <none>
|
||||
kube-system coredns-d9d84b5bf-pndfd 1/1 Running 1 (30m ago) 4d5h 10.233.102.140 ks-master-2 <none> <none>
|
||||
kube-system kube-apiserver-ks-master-1 1/1 Running 1 (29m ago) 4d5h 172.16.33.16 ks-master-1 <none> <none>
|
||||
kube-system kube-apiserver-ks-master-2 1/1 Running 1 (30m ago) 4d5h 172.16.33.22 ks-master-2 <none> <none>
|
||||
kube-system kube-apiserver-ks-master-3 1/1 Running 1 (30m ago) 4d5h 172.16.33.23 ks-master-3 <none> <none>
|
||||
kube-system kube-controller-manager-ks-master-1 1/1 Running 1 (29m ago) 4d5h 172.16.33.16 ks-master-1 <none> <none>
|
||||
kube-system kube-controller-manager-ks-master-2 1/1 Running 1 (30m ago) 4d5h 172.16.33.22 ks-master-2 <none> <none>
|
||||
kube-system kube-controller-manager-ks-master-3 1/1 Running 1 (31m ago) 4d5h 172.16.33.23 ks-master-3 <none> <none>
|
||||
kube-system kube-proxy-66v8m 1/1 Running 1 (31m ago) 4d5h 172.16.33.23 ks-master-3 <none> <none>
|
||||
kube-system kube-proxy-6gq2q 1/1 Running 1 (29m ago) 4d5h 172.16.33.16 ks-master-1 <none> <none>
|
||||
kube-system kube-proxy-9zppd 1/1 Running 1 (30m ago) 4d5h 172.16.33.22 ks-master-2 <none> <none>
|
||||
kube-system kube-scheduler-ks-master-1 1/1 Running 1 (29m ago) 4d5h 172.16.33.16 ks-master-1 <none> <none>
|
||||
kube-system kube-scheduler-ks-master-2 1/1 Running 1 (30m ago) 4d5h 172.16.33.22 ks-master-2 <none> <none>
|
||||
kube-system kube-scheduler-ks-master-3 1/1 Running 2 (31m ago) 4d5h 172.16.33.23 ks-master-3 <none> <none>
|
||||
kube-system metrics-server-66b6cfb784-85l94 1/1 Running 47 (31m ago) 4d5h 172.16.33.23 ks-master-3 <none> <none>
|
||||
kube-system nodelocaldns-8mgpl 1/1 Running 1 (29m ago) 4d5h 172.16.33.16 ks-master-1 <none> <none>
|
||||
kube-system nodelocaldns-ggg45 1/1 Running 1 (30m ago) 4d5h 172.16.33.22 ks-master-2 <none> <none>
|
||||
kube-system nodelocaldns-z77x2 1/1 Running 1 (31m ago) 4d5h 172.16.33.23 ks-master-3 <none> <none>
|
||||
kube-system openebs-localpv-provisioner-589cc46f59-k6fvq 1/1 Running 1 (29m ago) 4d5h 10.233.103.139 ks-master-1 <none> <none>
|
||||
kube-system snapshot-controller-0 1/1 Running 1 (31m ago) 4d1h 10.233.93.46 ks-master-3 <none> <none>
|
||||
kubesphere-controls-system default-http-backend-7b44d89cb8-lnj9c 1/1 Running 0 21s 10.233.102.151 ks-master-2 <none> <none>
|
||||
kubesphere-controls-system kubectl-admin-5656cd6dfc-n5k4c 1/1 Running 1 (30m ago) 4d 10.233.102.124 ks-master-2 <none> <none>
|
||||
kubesphere-devops-system devops-apiserver-5554d4c946-9hk2d 1/1 Running 1 (29m ago) 4d 10.233.103.137 ks-master-1 <none> <none>
|
||||
kubesphere-devops-system devops-controller-76f8c5bf57-tpvlb 1/1 Running 1 (29m ago) 4d 10.233.103.136 ks-master-1 <none> <none>
|
||||
kubesphere-devops-system devops-jenkins-865b94d8c6-nv6nw 1/1 Running 1 (31m ago) 3d1h 10.233.93.41 ks-master-3 <none> <none>
|
||||
kubesphere-devops-system s2ioperator-0 1/1 Running 1 (29m ago) 4d 10.233.103.135 ks-master-1 <none> <none>
|
||||
kubesphere-logging-system fluent-bit-6wd7l 1/1 Running 1 (29m ago) 4d 10.233.103.143 ks-master-1 <none> <none>
|
||||
kubesphere-logging-system fluent-bit-hl56h 1/1 Running 1 (31m ago) 4d 10.233.93.48 ks-master-3 <none> <none>
|
||||
kubesphere-logging-system fluent-bit-q2t7x 1/1 Running 1 (30m ago) 4d 10.233.102.133 ks-master-2 <none> <none>
|
||||
kubesphere-logging-system fluentbit-operator-5f6598c96c-s7vzg 1/1 Running 1 (31m ago) 4d1h 10.233.93.50 ks-master-3 <none> <none>
|
||||
kubesphere-logging-system ks-events-exporter-7cffc5bdcb-8cz5z 2/2 Running 2 (29m ago) 4d 10.233.103.138 ks-master-1 <none> <none>
|
||||
kubesphere-logging-system ks-events-operator-c7cbd9495-vl8gf 1/1 Running 1 (29m ago) 4d 10.233.103.134 ks-master-1 <none> <none>
|
||||
kubesphere-logging-system ks-events-ruler-85697c5545-6s5z6 2/2 Running 2 (30m ago) 4d 10.233.102.139 ks-master-2 <none> <none>
|
||||
kubesphere-logging-system ks-events-ruler-85697c5545-fksnk 2/2 Running 2 (29m ago) 4d 10.233.103.130 ks-master-1 <none> <none>
|
||||
kubesphere-logging-system kube-auditing-operator-6d494f5965-55fr6 1/1 Running 2 (28m ago) 4d 10.233.93.33 ks-master-3 <none> <none>
|
||||
kubesphere-logging-system kube-auditing-webhook-deploy-79c7d464fd-mfrb8 1/1 Running 1 (31m ago) 4d 10.233.93.34 ks-master-3 <none> <none>
|
||||
kubesphere-logging-system kube-auditing-webhook-deploy-79c7d464fd-thtg7 1/1 Running 1 (30m ago) 4d 10.233.102.143 ks-master-2 <none> <none>
|
||||
kubesphere-logging-system logsidecar-injector-deploy-88fc46d66-lttls 2/2 Running 2 (29m ago) 4d 10.233.103.141 ks-master-1 <none> <none>
|
||||
kubesphere-logging-system logsidecar-injector-deploy-88fc46d66-mlhf8 2/2 Running 2 (30m ago) 4d 10.233.93.38 ks-master-3 <none> <none>
|
||||
kubesphere-logging-system opensearch-cluster-data-0 1/1 Running 1 (30m ago) 4d1h 10.233.102.136 ks-master-2 <none> <none>
|
||||
kubesphere-logging-system opensearch-cluster-data-1 1/1 Running 1 (29m ago) 4d1h 10.233.103.128 ks-master-1 <none> <none>
|
||||
kubesphere-logging-system opensearch-cluster-master-0 1/1 Running 1 (30m ago) 4d1h 10.233.93.37 ks-master-3 <none> <none>
|
||||
kubesphere-monitoring-system alertmanager-main-0 2/2 Running 2 (31m ago) 4d 10.233.93.42 ks-master-3 <none> <none>
|
||||
kubesphere-monitoring-system alertmanager-main-1 2/2 Running 2 (29m ago) 4d 10.233.103.133 ks-master-1 <none> <none>
|
||||
kubesphere-monitoring-system alertmanager-main-2 2/2 Running 2 (30m ago) 4d 10.233.102.131 ks-master-2 <none> <none>
|
||||
kubesphere-monitoring-system kube-state-metrics-7f4df45cc5-j6rmm 3/3 Running 3 (30m ago) 4d 10.233.102.128 ks-master-2 <none> <none>
|
||||
kubesphere-monitoring-system node-exporter-6z75x 2/2 Running 2 (30m ago) 4d 172.16.33.23 ks-master-3 <none> <none>
|
||||
kubesphere-monitoring-system node-exporter-c6vhv 2/2 Running 2 (29m ago) 4d 172.16.33.16 ks-master-1 <none> <none>
|
||||
kubesphere-monitoring-system node-exporter-gj7qq 2/2 Running 2 (30m ago) 4d 172.16.33.22 ks-master-2 <none> <none>
|
||||
kubesphere-monitoring-system notification-manager-deployment-6bd69dcc66-2bl84 2/2 Running 2 (29m ago) 4d 10.233.103.145 ks-master-1 <none> <none>
|
||||
kubesphere-monitoring-system notification-manager-deployment-6bd69dcc66-tcmg5 2/2 Running 3 (29m ago) 4d 10.233.93.45 ks-master-3 <none> <none>
|
||||
kubesphere-monitoring-system notification-manager-operator-69b55cdd9-c2f7q 2/2 Running 2 (30m ago) 4d 10.233.102.135 ks-master-2 <none> <none>
|
||||
kubesphere-monitoring-system prometheus-k8s-0 2/2 Running 2 (29m ago) 4d 10.233.103.132 ks-master-1 <none> <none>
|
||||
kubesphere-monitoring-system prometheus-k8s-1 2/2 Running 2 (30m ago) 4d 10.233.93.43 ks-master-3 <none> <none>
|
||||
kubesphere-monitoring-system prometheus-operator-6fb9967754-lqczb 2/2 Running 2 (29m ago) 4d 10.233.103.142 ks-master-1 <none> <none>
|
||||
kubesphere-monitoring-system thanos-ruler-kubesphere-0 2/2 Running 5 (28m ago) 4d 10.233.93.49 ks-master-3 <none> <none>
|
||||
kubesphere-monitoring-system thanos-ruler-kubesphere-1 2/2 Running 4 (28m ago) 4d 10.233.102.129 ks-master-2 <none> <none>
|
||||
kubesphere-system ks-apiserver-6485fd9665-q2zht 1/1 Running 1 (30m ago) 4d1h 10.233.102.126 ks-master-2 <none> <none>
|
||||
kubesphere-system ks-console-6f77f6f49d-kdvl6 1/1 Running 1 (30m ago) 4d1h 10.233.102.144 ks-master-2 <none> <none>
|
||||
kubesphere-system ks-controller-manager-85ccdf5f67-l2x86 1/1 Running 3 (28m ago) 4d1h 10.233.102.134 ks-master-2 <none> <none>
|
||||
kubesphere-system ks-installer-6674579f54-r9dxz 1/1 Running 1 (29m ago) 4d1h 10.233.103.131 ks-master-1 <none> <none>
|
||||
kubesphere-system minio-757c8bc7f-8j9gx 1/1 Running 1 (29m ago) 4d1h 10.233.103.144 ks-master-1 <none> <none>
|
||||
kubesphere-system openldap-0 1/1 Running 2 (30m ago) 4d1h 10.233.93.47 ks-master-3 <none> <none>
|
||||
```
|
||||
|
||||
> **注意:** 如果 Pod 状态不是 Running 请根据本文的续集「异常组件及解决方案」中的内容进行比对处理,文中未涉及的问题可以参考本文的解决思路自行解决。
|
||||
|
||||
- 查看 Image 列表
|
||||
|
||||
输入以下命令获取在 Kubernetes 集群节点上已经下载的 Image 列表。
|
||||
|
||||
```shell
|
||||
crictl images
|
||||
# 篇幅受限,输出结果略,完整的请看续集
|
||||
```
|
||||
|
||||
至此,我们已经完成了在 3 台服务器上部署复用为 Master 节点和 Worker 节点的最小化 Kubernetes 集群和 KubeSphere,并通过 KubeSphere 管理控制台和命令行界面查看了集群的状态。
|
||||
|
||||
## 6. 总结
|
||||
|
||||
本文主要实战演示了在 ARM 版 openEuler 22.03 LTS SP2 服务器上,利用 KubeKey v3.0.10 自动化部署最小化 KubeSphere v3.4.0 和 Kubernetes v1.26.5 高可用集群的详细过程。
|
||||
|
||||
部署完成后,我们还利用 KubeSphere 管理控制台和 Kubectl 命令行,查看并验证了 KubeSphere 和 Kubernetes 集群的状态。
|
||||
|
||||
概括总结全文主要涉及以下内容:
|
||||
|
||||
- openEuler 22.03 LTS SP2 aarch64 操作系统基础配置;
|
||||
- 操作系统数据盘 LVM 配置、磁盘挂载、数据目录创建;
|
||||
- KubeKey 下载及创建集群配置文件;
|
||||
- 利用 KubeKey 自动化部署 KubeSphere 和 Kubernetes 集群;
|
||||
- 部署完成后的 KubeSphere 和 Kubernetes 集群状态验证。
|
||||
|
||||
本文部署环境虽然是基于 **Kunpeng-920** 芯片的 aarch64 版 openEuler 22.03 LTS SP2 ,但是对于 CentOS、麒麟 V10 SP2 等 ARM 版操作系统以及飞腾(FT-2500)等芯片也有一定的借鉴意义。
|
||||
|
||||
本文介绍的内容可直接用于研发、测试环境,对于生产环境有一定的参考意义,**绝对不能**直接用于生产环境。
|
||||
|
||||
**本文的不完全测试结论:** KubeSphere 和 Kubernetes 基本功能可用,DevOps 功能部分可用,主要问题在构建镜像时 Maven 容器启动异常,**其他插件功能未做验证**。
|
||||
|
||||
> **特别说明:** 由于篇幅限制,部署完成后资源开通测试以及本文的核心价值「**解决 ARM 版 KubeSphere 和 Kubernetes 服务组件异常的问题**」小节的内容放到了本文的续集中,请持续关注。
|
||||
|
|
@@ -0,0 +1,362 @@
|
|||
---
|
||||
title: 'KubeSphere 在互联网医疗行业的应用实践'
|
||||
tag: 'KubeSphere, Kubernetes'
|
||||
keywords: 'KubeSphere, Kubernetes, DevOps'
|
||||
description: '本文描写了某互联网医疗行业使用 KubeSphere 的最佳实践经验。'
|
||||
createTime: '2023-09-14'
|
||||
author: '宇轩辞白'
|
||||
snapshot: 'https://pek3b.qingstor.com/kubesphere-community/images/20230914001-cover.png'
|
||||
---
|
||||
|
||||
## 前言
|
||||
|
||||
2020 年我国互联网医疗企业迎来了“爆发元年”,互联网医疗企业在迅速发展的同时,也暴露出更多的不足。互联网医疗作为医疗行业发展的趋势,对于解决中国医疗资源分配不平衡和人们日益增长的医疗健康需求之间的矛盾具有诸多意义。但能否切实解决居民就诊的问题,以及企业能否实现持续发展,是国家和企业都十分关注的问题。而我司在这条道路上沉淀多年,一直致力于互联网医疗服务,拥有自己完善的医疗产品平台和技术体系。
|
||||
|
||||
## 项目简介
|
||||
|
||||
### 建设目标
|
||||
|
||||
第三方客户业务环境均是 IDC 自建机房环境,提供虚拟化服务器资源,计划引入 Kubernetes 技术,满足互联网医疗需求。
|
||||
|
||||
### 技术现状
|
||||
|
||||
据悉,第三方客户已有的架构体系已不满足日益增长的业务量,缺少一个完整且灵活的技术架构体系。
|
||||
|
||||
### 平台架构图
|
||||
|
||||
#### 线上平面逻辑架构图参考
|
||||
|
||||

|
||||
|
||||
上图便是我们项目生产企业架构图,从逻辑上分为四大版块。
|
||||
|
||||
#### DevOps CI/CD 平台
|
||||
|
||||
关于 CI/CD 自动化开源工具相信大家都了解不少,就我个人而言,我所熟知的就有 Jenkins、GitLab、Spug 以及我接下来将会为大家介绍的 KubeSphere。它同样也能完成企业级 CI/CD 持续交付事宜。
|
||||
|
||||
#### Kubernetes 集群
|
||||
|
||||
因业务需要,这里将测试、生产两套环境独立开,避免相互影响。如上图所示是三个 Master 节点、五个 Node 节点,这里给 Master 节点标注污点使 Pod 不可调度,避免主节点负载过高等情况发生。另外测试环境集群规模相对较小,Master 节点数量相同,但 Node 节点只有两个,因仅作测试,已能满足需求。
|
||||
|
||||
#### 底层存储环境
|
||||
|
||||
底层存储环境我们并未采用容器化的方式部署,而是以传统方式部署在裸机服务器上,这样做是为了高效:在互联网业务中,存储服务都有一定的性能要求来应对高并发场景,因此部署在裸机服务器上是最佳选择。MySQL、Redis、NFS 均做了高可用,避免了单点问题,NFS 在这里作为 KubeSphere 的 StorageClass 存储类。StorageClass 存储类的选型还有很多,比如 Ceph、OpenEBS 等等,它们都是 KubeSphere 能接入的开源底层存储解决方案,尤其是 Ceph,得到了很多互联网大厂的青睐。此时你们可能会问,为什么选择 NFS 而不选择 Ceph?我只能说,在工具选型中,只有最合适的,没有最好的:适合你的业务类型就选什么,而不是人云亦云,哪个工具热度高就选哪个。
|
||||
|
||||
#### 分布式监控平台
|
||||
|
||||
一个完整的互联网应用平台自然少不了监控告警。在过去几年,我们所熟知的 Nagios、Zabbix、Cacti 这几款都是老牌监控,现如今都渐渐退出历史舞台。如今 Prometheus 脱颖而出,深受各大互联网企业青睐,结合 Grafana 使用效果极佳。在该架构体系中,我也毫不犹豫地选择了它。
|
||||
|
||||
## 背景介绍
|
||||
|
||||
客户现有平台环境缺少完整的技术架构体系,业务版本更新迭代困难,无论是业务还是技术平台都出现较为严重的瓶颈问题,不足以支撑现有的业务体系。为了避免导致用户流失,需要重新制定完整的架构体系。而如今,互联网技术不断更新迭代,随着 Kubernetes 日益盛行,KubeSphere 也应运而生。一个技术的兴起必定会能带动整个技术生态圈的发展,我相信,KubeSphere 的出现,能带给我们远不止你想象的价值和便捷。
|
||||
|
||||
## 选型说明
|
||||
|
||||
Kubernetes 集群建设完毕之后,随后便面临一个问题:我们内部研发人员如何去管理维护它?需求新增要求版本迭代,研发人员如何发版上线自己的业务代码?出现问题如何更好地分析定位处理?这一系列问题都需要考虑,总不能让他们登录到服务器上通过命令行操作。因此为了解决上面的问题,我们需要再引入一个 Dashboard 管理平台。
|
||||
|
||||
### 选择 KubeSphere 的原由
|
||||
|
||||
KubeSphere 为企业用户提供高性能可伸缩的容器应用管理服务,旨在帮助企业完成新一代互联网技术驱动下的数字化转型,加速应用的快速迭代与业务交付,以满足企业日益增长的业务需求。我所看重的 KubeSphere 四大主要优势如下:
|
||||
|
||||
#### 1. 多集群统一管理
|
||||
|
||||
随着容器应用的日渐普及,各个企业跨云或在本地环境中部署多个集群,而集群管理的复杂程度也在不断增加。为满足用户统一管理多个异构集群的需求,KubeSphere 配备了全新的多集群管理功能,帮助用户跨区、跨云等多个环境管理、监控、导入和运维多个集群,全面提升用户体验。
|
||||
|
||||
多集群功能可在安装 KubeSphere 之前或之后启用。具体来说,该功能有两大特性:
|
||||
|
||||
- 统一管理:用户可以使用直接连接或间接连接导入 Kubernetes 集群。只需简单配置,即可在数分钟内在 KubeSphere 的互动式 Web 控制台上完成整个流程。集群导入后,用户可以通过统一的中央控制平面监控集群状态、运维集群资源。
|
||||
- 高可用:在多集群架构中,一个集群可以运行主要服务,另一集群作为备用集群。一旦该主集群宕机,备用集群可以迅速接管相关服务。此外,当集群跨区域部署时,为最大限度地减少延迟,请求可以发送至距离最近的集群,由此实现跨区跨集群的高可用。
|
||||
|
||||
#### 2. 强大的可观测性功能
|
||||
|
||||
KubeSphere 的可观测性功能在 v3.0 中全面升级,进一步优化与改善了其中的重要组件,包括监控日志、审计事件以及告警通知。用户可以借助 KubeSphere 强大的监控系统查看平台中的各类数据,该系统主要的优势包括:
|
||||
|
||||
- 自定义配置:用户可以为应用自定义监控面板,有多种模板和图表模式可供选择。用户可按需添加想要监控的指标,甚至选择指标在图表上所显示的颜色。此外,也可自定义告警策略与规则,包括告警间隔、次数和阈值等。
|
||||
- 全维度数据监控与查询:KubeSphere 提供全维度的资源监控数据,将运维团队从繁杂的数据记录工作中彻底解放,同时配备了高效的通知系统,支持多种通知渠道。基于 KubeSphere 的多租户管理体系,不同租户可以在控制台上查询对应的监控日志与审计事件,支持关键词过滤、模糊匹配和精确匹配。
|
||||
- 图形化交互式界面设计:KubeSphere 为用户提供图形化 Web 控制台,便于从不同维度监控各个资源。资源的监控数据会显示在交互式图表上,详细记录集群中的资源用量情况。不同级别的资源可以根据用量进行排序,方便用户对数据进行对比与分析。
|
||||
- 高精度秒级监控:整个监控系统提供秒级监控数据,帮助用户快速定位组件异常。此外,所有审计事件均会准确记录在 KubeSphere 中,便于后续数据分析。
|
||||
|
||||
#### 3. 自动化 DevOps CI/CD 流程机制
|
||||
|
||||
自动化是落地 DevOps 的重要组成部分,自动、精简的流水线为用户通过 CI/CD 流程交付应用提供了良好的条件。
|
||||
|
||||
- 集成 Jenkins:KubeSphere DevOps 系统内置了 Jenkins 作为引擎,支持多种第三方插件。此外,Jenkins 为扩展开发提供了良好的环境,DevOps 团队的整个工作流程可以在统一的平台上无缝对接,包括开发测试、构建部署、监控日志和通知等。KubeSphere 的账户可以用于登录内置的 Jenkins,满足企业对于 CI/CD 流水线统一认证和多租户隔离的需求。
|
||||
- 便捷的内置工具:无需对 Docker 或 Kubernetes 的底层运作原理有深刻的了解,用户即可快速上手自动化工具,包括 Binary-to-Image 和 Source-to-Image。只需定义镜像仓库地址,上传二进制文件(例如 JAR/WAR/Binary),即可将对应的服务自动发布至 Kubernetes,无需编写 Dockerfile。
|
||||
|
||||
#### 4. 细粒度权限控制
|
||||
|
||||
KubeSphere 为用户提供不同级别的权限控制,包括集群、企业空间和项目。拥有特定角色的用户可以操作对应的资源。
|
||||
|
||||
- 自定义角色:除了系统内置的角色外,KubeSphere 还支持自定义角色,用户可以给角色分配不同的权限以执行不同的操作,以满足企业对不同租户具体工作分配的要求,即可以定义每个租户所应该负责的部分,不被无关资源所影响。安全性方面由于不同级别的租户之间完全隔离,他们在共享部分资源的同时也不会相互影响。租户之间的网络也完全隔离,确保数据安全。
|
||||
|
||||
## 实践过程
|
||||
|
||||
### 基础设施建设与规划
|
||||
|
||||
底层集群环境准备就绪之后,我们就需要考虑 CI/CD 持续集成交付的问题。为了保证生产服务最终能顺利容器化部署至 Kubernetes,以及后期更加稳定可控,我采用了以下战略性方案:
|
||||
|
||||
- 第一步:IDC 虚拟化平台测试/生产环境同步部署,在现有的两套服务器资源中以二进制的方式部署 Kubernetes 集群
|
||||
- 第二步:然后基于 Kubernetes 集群分别以最小化方式部署 KubeSphere 云原生管理平台,其目的就是为了实现两套 Kubernetes 集群被 KubeSphere 托管
|
||||
- 第三步:建设 DevOps CI/CD 流水线机制,在 KubeSphere 平台中以 Deployment 方式建设 Jenkins、Harbor、git 平台一体化流水线平台
|
||||
- 第四步:配置 Pipeline 脚本,将 Jenkins 集成两套 Kubernetes,使其业务功能更新迭代能正常发版上线
|
||||
|
||||
DevOps CI/CD 流程剖析:
|
||||
|
||||

|
||||
|
||||
- 阶段 1:Checkout SCM:从 Git 代码仓库检出源代码。
|
||||
- 阶段 2:单元测试:待该测试通过后才会进行下一阶段。
|
||||
- 阶段 3:SonarQube 分析:SonarQube 代码质量分析(可选)。
|
||||
- 阶段 4:构建并推送快照镜像:根据策略设置中选定的分支来构建镜像,并将 `SNAPSHOT-$BRANCH_NAME-$BUILD_NUMBER` 标签推送至 Docker Hub,其中 `$BUILD_NUMBER` 为流水线活动列表中的运行序号。
|
||||
- 阶段 5:推送最新镜像:将 SonarQube 分支标记为 latest,并推送至 Harbor 镜像仓库。
|
||||
- 阶段 6:部署至开发环境:将 SonarQube 分支部署到开发环境,此阶段需要审核。
|
||||
- 阶段 7:带标签推送:生成标签并发布到 Git,该标签会推送到 Harbor 镜像仓库。
|
||||
- 阶段 8:部署至生产环境:将已发布的标签部署到生产环境。
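上述阶段中快照镜像标签的拼接规则,可以用一个最小化的 shell 片段演示(`BRANCH_NAME`、`BUILD_NUMBER` 在 Jenkins 流水线运行时由环境注入,这里手工赋值模拟):

```shell
# 演示快照镜像标签的拼接规则:SNAPSHOT-$BRANCH_NAME-$BUILD_NUMBER
BRANCH_NAME=master
BUILD_NUMBER=42
TAG="SNAPSHOT-${BRANCH_NAME}-${BUILD_NUMBER}"
echo "$TAG"
# 输出:SNAPSHOT-master-42
```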
|
||||
|
||||
### 线上 DevOps 流水线参考
|
||||
|
||||

|
||||
|
||||
无状态服务在 KubeSphere 中的服务如下图所示,包括应用层的前后端服务,另外 Minio 都是以 Deployment 方式容器部署。
|
||||
|
||||

|
||||
|
||||
有状态服务主要是一些基础设施服务,如下图所示:比如 MySQL、Redis 等,我仍然选择采用虚机部署;RocketMQ 较为特殊,选择了 StatefulSet 方式进行部署。
|
||||
|
||||

|
||||
|
||||
### 企业实战案例
|
||||
|
||||
#### 定义 Deployment 资源 yaml 文件
|
||||
|
||||
该资源清单需要定义在 Git 仓库中,当我们运行 KubeSphere DevOps 流水线的部署环节时,会调用该 YAML 资源进行服务更新迭代。
|
||||
|
||||
```yaml
|
||||
apiVersion: apps/v1
|
||||
kind: Deployment
|
||||
metadata:
|
||||
labels:
|
||||
app: boot-preject
|
||||
name: boot-preject
|
||||
namespace: middleware #定义Namespace
|
||||
spec:
|
||||
progressDeadlineSeconds: 600
|
||||
replicas: 1
|
||||
selector:
|
||||
matchLabels:
|
||||
app: boot-preject
|
||||
strategy:
|
||||
rollingUpdate:
|
||||
maxSurge: 50%
|
||||
maxUnavailable: 50%
|
||||
type: RollingUpdate
|
||||
template:
|
||||
metadata:
|
||||
labels:
|
||||
app: boot-preject
|
||||
spec:
|
||||
imagePullSecrets:
|
||||
- name: harbor
|
||||
containers:
|
||||
- image: $REGISTRY/$HARBOR_NAMESPACE/boot-preject:SNAPSHOT-$BUILD_NUMBER #这里定义镜像仓库地址+kubesphere 构建变量值
|
||||
imagePullPolicy: Always
|
||||
name: app
|
||||
ports:
|
||||
- containerPort: 8080
|
||||
protocol: TCP
|
||||
resources:
|
||||
limits:
|
||||
cpu: 300m
|
||||
memory: 600Mi
|
||||
terminationMessagePath: /dev/termination-log
|
||||
terminationMessagePolicy: File
|
||||
dnsPolicy: ClusterFirst
|
||||
restartPolicy: Always
|
||||
terminationGracePeriodSeconds: 30
|
||||
```
|
||||
|
||||
定义流水线凭据:
|
||||
|
||||

|
||||
|
||||
#### 1. 新建 DevOps 项目
|
||||
|
||||

|
||||
|
||||
#### 2. 创建流水线向导
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
#### 3. 自定义流水线
|
||||
|
||||
KubeSphere 3.3.2 版本提供了现成的模板,不过我们也可以尝试自己定义流水线。图形编辑面板包括两个区域:左侧的画布和右侧的内容区。它会根据您对不同阶段和步骤的配置自动生成 Jenkinsfile,为开发者提供更加友好的操作体验。
|
||||
|
||||

|
||||
|
||||
##### 第一阶段
|
||||
|
||||
该阶段主要是拉取 git 代码环境,阶段名称命名为 `Pulling Code`,指定 maven 容器,在图形编辑面板上,从类型下拉列表中选择 node,从 Label 下拉列表中选择 maven:
|
||||
|
||||

|
||||
|
||||
##### 第二阶段
|
||||
|
||||
选择+号,开始定义代码编译环境,名称定义为 `Build compilation`,添加步骤。
|
||||
|
||||

|
||||
|
||||
##### 第三阶段
|
||||
|
||||
该阶段主要是通过 Dockerfile 打包生成镜像。同样,先指定容器,然后新增嵌套步骤,通过 shell 命令定义 Dockerfile 构建过程:
|
||||
|
||||

|
||||
|
||||
##### 第四阶段
|
||||
|
||||
该阶段主要是将基于 Dockerfile 构建出的镜像上传至镜像仓库:
|
||||
|
||||
```shell
|
||||
docker tag boot-preject:latest $REGISTRY/$HARBOR_NAMESPACE/boot-preject:SNAPSHOT-$BUILD_NUMBER
|
||||
```
|
||||
|
||||

|
||||
|
||||
##### 第五阶段
|
||||
|
||||
该阶段主要是部署环境,镜像上传至 Harbor 仓库中,就开始着手部署的工作了。
|
||||
|
||||
在这里我们需要提前把 Deployment 资源定义好,通过 `kubectl apply -f` 根据定义好的文件执行即可。
|
||||
|
||||
流程如下:选择 +,定义名称 `Deploying to k8s`,选择下方“添加步骤”--->"添加凭据"--->"添加嵌套步骤"--->"指定容器"--->"添加嵌套步骤"--->"shell"。
|
||||
|
||||
这里命令是指定的 git 事先定义完成的 yaml 文件。
|
||||
|
||||
```shell
|
||||
envsubst < deploy/deploy.yml | kubectl apply -f -
|
||||
```
|
||||
|
||||

|
||||
|
||||

|
||||
|
||||
以上,一个完整的流水线制作完毕。接下来我们运行即可完成编译。
|
||||
|
||||

|
||||
|
||||
工作负载展示:
|
||||
|
||||

|
||||
|
||||
### 附上生产 Jenkinsfile 脚本
|
||||
|
||||
在这里给大家附上我本人生产 Pipeline 案例,可通过该 Pipeline 流水线,直接应用于企业生产环境。
|
||||
|
||||
```groovy
|
||||
pipeline {
|
||||
agent {
|
||||
node {
|
||||
label 'maven'
|
||||
}
|
||||
|
||||
}
|
||||
stages {
|
||||
stage('Pulling Code') {
|
||||
agent none
|
||||
steps {
|
||||
container('maven') {
|
||||
//指定git地址
|
||||
git(url: 'https://gitee.com/xxx/test-boot-projext.git', credentialsId: 'gitee', branch: 'master', changelog: true, poll: false)
|
||||
sh 'ls'
|
||||
}
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
stage('Build compilation') {
|
||||
agent none
|
||||
steps {
|
||||
container('maven') {
|
||||
sh 'ls'
|
||||
sh 'mvn clean package -Dmaven.test.skip=true'
|
||||
}
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
stage('Build a mirror image') {
|
||||
agent none
|
||||
steps {
|
||||
container('maven') {
|
||||
sh 'mkdir -p repo/$APP_NAME'
|
||||
sh 'cp target/**.jar repo/${APP_NAME}'
|
||||
sh 'cp ./start.sh repo/${APP_NAME}'
|
||||
sh 'cp ./Dockerfile repo/${APP_NAME}'
|
||||
sh 'ls repo/${APP_NAME}'
|
||||
// 注意:每个 sh 步骤都是独立的 shell,cd 必须与 docker build 放在同一步骤中
sh 'cd repo/${APP_NAME} && docker build -t boot-preject:latest . '
|
||||
}
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
stage('Pack and upload') {
|
||||
agent none
|
||||
steps {
|
||||
container('maven') {
|
||||
withCredentials([usernamePassword(credentialsId : 'harbor' ,passwordVariable : 'DOCKER_PWD_VAR' ,usernameVariable : 'DOCKER_USER_VAR' ,)]) {
|
||||
sh 'echo "$DOCKER_PWD_VAR" | docker login $REGISTRY -u "$DOCKER_USER_VAR" --password-stdin'
|
||||
sh 'docker tag boot-preject:latest $REGISTRY/$HARBOR_NAMESPACE/boot-preject:SNAPSHOT-$BUILD_NUMBER'
|
||||
sh 'docker push $REGISTRY/$HARBOR_NAMESPACE/boot-preject:SNAPSHOT-$BUILD_NUMBER'
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
stage('Deploying to K8s') {
|
||||
agent none
|
||||
steps {
|
||||
withCredentials([kubeconfigFile(credentialsId : 'demo-kubeconfig' ,variable : 'KUBECONFIG' )]) {
|
||||
container('maven') {
|
||||
sh 'envsubst < deploy/deploy.yml | kubectl apply -f -'
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
}
|
||||
}
|
||||
|
||||
}
|
||||
environment {
|
||||
DOCKER_CREDENTIAL_ID = 'dockerhub-id' // 定义 Docker 镜像仓库认证
|
||||
GITHUB_CREDENTIAL_ID = 'github-id' // 定义 Git 代码仓库认证
|
||||
KUBECONFIG_CREDENTIAL_ID = 'demo-kubeconfig' // 定义 kubeconfig(kubectl API 认证文件)
|
||||
REGISTRY = 'harbor.xxx.com' // 定义镜像仓库地址
|
||||
HARBOR_NAMESPACE = 'ks-devopos'
|
||||
APP_NAME = 'boot-preject'
|
||||
}
|
||||
parameters {
|
||||
string(name: 'TAG_NAME', defaultValue: '', description: '')
|
||||
}
|
||||
}
|
||||
```
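补充一个与上面 Jenkinsfile 相关的小提示:流水线里每个 `sh` 步骤都运行在独立的 shell 中,前一步骤的 `cd` 不会影响后一步骤,需要切换目录执行的命令必须写在同一个 `sh` 步骤里。下面用本地 shell 最小化演示这一行为:

```shell
# 两次独立的 sh -c 调用,模拟两个独立的 sh 步骤:cd 不会在调用之间保留
sh -c 'cd /tmp'
sh -c 'pwd'              # 输出的仍是当前目录,而不是 /tmp
# 正确做法:在同一个 shell 里用 && 串联命令
sh -c 'cd /tmp && pwd'   # 输出 /tmp
```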
|
||||
|
||||
### 存储与网络
|
||||
|
||||
业务存储我们选择的是 MySQL 和 Redis,MySQL 结合 Xenon 实现高可用方案。
|
||||
|
||||
## 使用效果
|
||||
|
||||
引入 KubeSphere 很大程度上减轻了公司研发持续集成、持续部署的负担,极大提升了整个研发团队的生产力和项目交付效率。研发团队只需在本地实现功能、修复 Bug,之后 Commit 提交代码至 Git,然后基于 KubeSphere 的 DevOps 直接点击运行,即可发布测试环境/生产环境的工程。至此整套 CI/CD 持续集成交付的工作流程就完成了,剩余的联调工作交给研发即可。
|
||||
|
||||
基于 KubeSphere 实现 DevOps,给我们带来了最大的效率亮点如下:
|
||||
|
||||
- 平台一体化管理:在服务功能迭代方面,只需要登录 KubeSphere 平台,点击各自所负责的项目流水线即可,极大地减轻了部署工作量。虽说通过 Jenkins 结合 KubeSphere 同样能实现项目交付工作,但整套流程相对繁琐:既要关注 Jenkins 平台的构建情况,同时也要关注 KubeSphere 的交付结果,造成了诸多不便,也背离了我们交付的初衷。
|
||||
- 资源利用率显著提高:KubeSphere 和 Kubernetes 相结合,进一步优化了系统资源利用率,降低了使用成本,最大限度增加了 DevOps 资源利用率。
|
||||
|
||||
## 未来规划(改进)
|
||||
|
||||
从目前来看,通过这次在生产项目中引入 KubeSphere 云原生平台的实践,发现它确实解决了我们微服务部署和管理的问题,极大地提高了便捷性:负载均衡、应用路由、自动扩缩容、DevOps 等能力,都极大地推进了我们对 Kubernetes 的使用。未来我们将继续深耕云原生及 Kubernetes 容器化领域,继续推进现有业务容器化,拥抱云原生生态圈,为我们的业务保驾护航。
|
||||
|
|
@@ -116,6 +116,14 @@ section2:
|
|||
content: "东方通信是一家集硬件设备、软件、服务为一体的整体解决方案提供商。"
|
||||
link: "eastcom/"
|
||||
|
||||
- icon: "images/case/logo-alphaflow.png"
|
||||
content: "杭州微宏科技是专注于业务流程管理和自动化(BPM&BPA)软件研发的解决方案供应商。"
|
||||
link: "alphaflow/"
|
||||
|
||||
- icon: "images/case/logo-hshc.png"
|
||||
content: "花生好车致力于打造下沉市场汽车出行解决方案第一品牌。"
|
||||
link: "hshc/"
|
||||
|
||||
section3:
|
||||
title: 'KubeSphere 助力各行各业'
|
||||
tip: 全部
|
||||
|
|
@@ -149,7 +157,9 @@ section3:
|
|||
- name: '金融'
|
||||
children:
|
||||
- name: 'msxf'
|
||||
icon: 'images/case/logo-msxf.png'
|
||||
|
||||
- name: 'hshc'
|
||||
icon: 'images/case/logo-hshc.png'
|
||||
|
||||
- name: 'IT 服务'
|
||||
children:
|
||||
|
|
|
|||
|
|
@@ -0,0 +1,122 @@
|
|||
---
|
||||
title: alphaflow
|
||||
description:
|
||||
|
||||
css: scss/case-detail.scss
|
||||
|
||||
section1:
|
||||
title: 杭州微宏科技
|
||||
content: 杭州微宏科技是专注于业务流程管理和自动化(BPM&BPA)软件研发的解决方案供应商。
|
||||
|
||||
section2:
|
||||
listLeft:
|
||||
- title: 公司简介
|
||||
contentList:
|
||||
- content: 杭州微宏科技有限公司于 2012 年成立,是专注于业务流程管理和自动化(BPM&BPA)软件研发的解决方案供应商。创始团队毕业于浙江大学、清华大学、美国 Rice 大学和 University of Texas 等海内外知名高校,曾服务于世界知名软件公司和 500 强企业。
|
||||
- content: 微宏已为超过 1000 家的国内国外大中型企业和政府提供了从流程规划设计、流程运行、流程自动化、流程集成、流程挖掘的全生命周期流程软件产品和解决方案,客户分布于制造、金融、电器电子、医药、服务业、高科技和政府等十多个行业。
|
||||
- content: 微宏科技是国家高新技术企业、浙江省专精特新企业,通过了 ISO9001 质量管理体系认证、CMMI 认证、ISO27001 信息安全管理体系认证。获赛迪“2021 年智能 BPM 领域最佳产品”奖、“2021-2022 业务流程管理&自动化领域优秀产品”奖、中国软件网“2021 年度智能流程平台优秀产品奖”、“2022 应龙杯最佳 BPA 业务流程自动化产品奖、“2022 数字政府建设领军企业”奖,连续 2 年上榜浙江省软件协会“浙江省软件核心竞争力企业(成长型)”榜单。
|
||||
image: https://pek3b.qingstor.com/kubesphere-community/images/1696663218887.jpg
|
||||
|
||||
- title: 背景介绍
|
||||
contentList:
|
||||
- content: 公司在自建 IDC 机房的物理服务器上搭建了 Kubernetes 集群,并使用 Kuboard 作为集群管理工具。研发环境使用这些集群资源进行开发和测试。而 CI/CD 流水线则通过同样部署在物理服务器上的 Jenkins 来实现代码编译、镜像构建等步骤,最终以手动方式发布服务。
|
||||
- content: 这种模式下存在一些问题:缺乏统一的服务编排和管理,集群和服务之间缺乏联动,CI/CD 流程自动化程度不足,部署发布需要手动操作,日志和监控数据分散,缺少统一可视化平台等。这种传统研发模式已经难以适应企业对敏捷开发和自动化交付的需求。需要进一步融合云原生技术,实现基础设施的智能化和研发流程的端到端自动化。
|
||||
image:
|
||||
|
||||
- title: 选型
|
||||
contentList:
|
||||
- content: 作为 DevOps 运维团队,我们需要提供自助化的综合运维平台。在开源平台选型时,公司考虑到以下两点最终选择了 KubeSphere:
|
||||
- content: 1. KubeSphere 屏蔽了 Kubernetes 的复杂性,通过 GUI 来简化集群管理,降低学习成本。
|
||||
- content: 2. KubeSphere 整合并扩展了多种优秀开源项目,如 Prometheus、Jenkins 等,提供了统一的入口,实现了全栈的 DevOps 能力。
|
||||
- content: 相比其他平台,KubeSphere 更好地规避了 Kubernetes 本身的复杂性,也减少了集成各类开源工具的工作量。这使得我们可以更专注于运维自动化与自助化平台建设,而不需要单独管理底层基础架构与服务。因此 KubeSphere 成为我们满足公司需求的最佳选择。
|
||||
image:
|
||||
|
||||
- type: 1
|
||||
contentList:
|
||||
- content: 应用 CI/CD 流水线自动化构建,极大提升部署效率
|
||||
- content: 应用商店统一包管理,使应用发布和使用更便捷
|
||||
- content: 集群管理和监控可视化,快速定位和解决问题
|
||||
|
||||
- title: 实践过程
|
||||
contentList:
|
||||
- specialContent:
|
||||
text: 硬件资源
|
||||
level: 3
|
||||
- content: 研发环境:IDC 机房 40 台虚拟机,自建 K8s+KubeSphere 集群。
|
||||
- content: 生产环境:阿里云 ACK 集群 12 节点。
|
||||
- content:
|
||||
image:
|
||||
- title:
|
||||
contentList:
|
||||
- specialContent:
|
||||
text: 存储方案
|
||||
level: 3
|
||||
- content: 使用 JuiceFS 作为分布式文件层,搭配 MinIO 作为对象存储接入层。
|
||||
- content: JuiceFS:提供分布式高性能文件存储。使用近似原子开源存储引擎如 LevelDB。
|
||||
- content: MinIO:开源对象存储兼容 AWS S3 API,作为 JuiceFS 对象存储接口。
|
||||
- content: 整合方案优点:
|
||||
- content: 简单易用,提供类 S3 对象存储 API;
|
||||
- content: 高性能、弹性,通过 JuiceFS 实现;
|
||||
- content: 低成本,可以使用廉价的云硬盘或 NAS 作为后端存储。
|
||||
image: https://pek3b.qingstor.com/kubesphere-community/images/juicefs-arch-52477e7677b23c870b72f08bb28c7ceb.svg
|
||||
- title:
|
||||
contentList:
|
||||
- specialContent:
|
||||
text: DevOps 持续集成部署
|
||||
level: 3
|
||||
- content: 公司以前研发环境中的 CI/CD 主要依靠单节点 Jenkins 实现,存在许多问题:
|
||||
- content: 开发人员频繁更新代码,多环境切换导致构建部署经常出错;
|
||||
- content: Jenkins 资源有限,构建效率较低。
|
||||
- content: 为解决这些问题,我们切换到了 KubeSphere 平台,利用其整合的 DevOps 功能改进了 CI/CD 流程:
|
||||
- content: KubeSphere 提供了可视化流水线编排,简化了复杂流程的搭建;
|
||||
- content: 基于 Kubernetes 的弹性资源,可以动态扩展 Jenkins executor 提升构建效率;
|
||||
- content: 标准化和最佳实践减少了环境配置错误,提升了部署稳定性。
|
||||
- content: 通过 KubeSphere 的 DevOps 解决方案,我们改善了 CI/CD 流程,提升了研发环境的效率和质量。
|
||||
image: https://pek3b.qingstor.com/kubesphere-community/images/123213213123.png
|
||||
- title:
|
||||
contentList:
|
||||
- specialContent:
|
||||
text: 日志及监控
|
||||
level: 3
|
||||
- content: 公司使用自建的 ELK 栈采集日志数据,并使用 KubeSphere 平台内置的 Prometheus 作为监控方案,然后通过 Grafana 来可视化展示监控数据。
|
||||
image: https://pek3b.qingstor.com/kubesphere-community/images/1696481910520.png
|
||||
|
||||
- title: 使用效果
|
||||
contentList:
|
||||
- specialContent:
|
||||
text: CI/CD
|
||||
level: 3
|
||||
- content: 公司使用 KubeSphere 平台的 DevOps 功能,更好地满足了大规模并发构建流水线的需求。
|
||||
image: https://pek3b.qingstor.com/kubesphere-community/images/alphflow-x.png
|
||||
- title:
|
||||
contentList:
|
||||
- specialContent:
|
||||
text: 存储方案
|
||||
level: 3
|
||||
- content: 公司在探索云原生过程中,发现使用 Helm 可以标准化地进行应用发布。KubeSphere 天生具备应用商店功能,将 Helm 的能力可视化,大大降低了开发人员的学习成本。
|
||||
image: https://pek3b.qingstor.com/kubesphere-community/images/alphflow-xy.png
|
||||
|
||||
- type: 2
|
||||
content: '使用 KubeSphere 之后,我们实现了应用交付自动化,工程效率显著提升。'
|
||||
author: '微宏科技'
|
||||
|
||||
- title: 未来规划
|
||||
contentList:
|
||||
- content: 目前我们已完成业务的全面容器化,并基于 KubeSphere 平台的能力进行云原生架构的迁移。KubeSphere 为我们提供了 GUI 化的 Kubernetes 集群管理、CI/CD 流水线、服务网格治理等功能,简化了云原生技术的运用。
|
||||
- content: 在平台助力下,我们的研发和运维效率显著提升。我们相信运用 KubeSphere 的云原生平台,必将为公司下一步业务增长提供坚实基础。我们将持续扩展业务场景,丰富平台功能,并探索基于 KubeSphere 的多云和边缘计算等新型架构,为客户带来更出色的产品体验。
|
||||
image:
|
||||
|
||||
rightPart:
|
||||
icon: /images/case/logo-alphaflow.png
|
||||
list:
|
||||
- title: 行业
|
||||
content: BPA、BPM
|
||||
- title: 地点
|
||||
content: 杭州
|
||||
- title: 云类型
|
||||
content: 公有云,私有云
|
||||
- title: 挑战
|
||||
content: CI/CD
|
||||
- title: 采用功能
|
||||
content: DevOps、监控、日志
|
||||
---
|
||||
|
|
@ -0,0 +1,127 @@
|
|||
---
title: hshc
description:

css: scss/case-detail.scss

section1:
  title: 花生好车
  content: 花生好车致力于打造下沉市场汽车出行解决方案第一品牌。

section2:
  listLeft:
    - title: 公司简介
      contentList:
        - content: 花生好车成立于 2015 年 6 月,致力于打造下沉市场汽车出行解决方案第一品牌。通过自建直营渠道,瞄准下沉市场,现形成以直租、批售、回租、新能源汽车零售四大业务为核心驱动力的汽车新零售平台,目前拥有门店 600 余家,覆盖 400 余座城市,共设有 25 个中心仓库。目前已为超过 40 万用户提供优质的用车服务,凭借全渠道优势和产品丰富度成功领跑行业第一梯队。
      image:

    - title: 背景介绍
      contentList:
        - content: 公司在自建 IDC 机房的物理服务器上使用 KVM 作为底层虚拟机管理。随着业务增长,系统逐渐暴露出一些问题,故有了此次底层基础架构改造实践:
        - content: 利用率不饱和:各类服务器的 CPU 利用率普遍不饱和,闲时利用率低下,且忙闲不均;
        - content: 耗能大:服务器需求量大,机柜、网络、服务器等利用率低;
        - content: 基础资源庞杂:底层标准化不一,无法传承;
        - content: 资源共享不足:烟筒式建设模式,资源相互隔离且固定投资成本高,为满足业务峰值,需采购大量数据扩容服务器产品等;
        - content: 存储容量不断上升,逻辑存储设备增加,管理复杂和强度增大;
        - content: 业务网缺乏总体发展规划,部分系统或平台的功能定位不清晰,跨部门、跨区域、跨系统的流程界面模糊;
        - content: 系统开发和上线周期长,后期维护和问题定位开销大,平台的独立建设多为烟筒式建设和孤岛化解决方案;
        - content: 业务流程,平台结构和接口缺乏统一规范和要求。
      image:

    - title: 平台选型
      contentList:
        - content: 作为 DevOps 运维团队,我们需要提供自助化的综合运维平台。在开源平台选型时,公司最终选择了 KubeSphere:
        - content: 1. 完全开源,无收费,可进行二次开发;
        - content: 2. 功能丰富,安装简单,支持一键升级和扩容,完善的 DevOps 工具链;
        - content: 3. 支持多集群管理,用户可以使用直接连接或间接连接导入 Kubernetes 集群;
        - content: 4. 集成可观测性,可按需添加监控指标和告警,并支持日志查询;
        - content: 5. 自定义角色和审计功能,便于后续数据分析。
        - content: 相比其他平台,KubeSphere 更好地规避了 Kubernetes 本身的复杂性,也减少了集成各类开源工具的工作量。这使得我们可以更专注于运维自动化与自助化平台建设,而不需要单独管理底层基础架构与服务。它提供全栈的 IT 自动化运维能力,简化企业的 DevOps 工作流。因此 KubeSphere 成为我们满足公司需求的最佳选择。
      image:

    - type: 1
      contentList:
        - content: Kubernetes 集群部署和升级的方便快捷性
        - content: 集群和应用的日志、监控平台的统一管理
        - content: 简化了在应用治理方面的使用门槛

    - title: 实践过程
      contentList:
        - specialContent:
            text: 基础设施建设与规划
            level: 3
        - content:
      image: https://pek3b.qingstor.com/kubesphere-community/images/hshc-kubesphere-1.svg
    - title:
      contentList:
        - specialContent:
            text: Kubernetes 集群
            level: 3
        - content: 因业务需要,我们将测试、生产两套环境独立开,避免相互影响。生产环境如上图所示是三个 Master 节点,目前为十三个 Node 节点,这里 Master 节点标注污点使其 Pod 不可调度,避免主节点负载过高等情况发生。
        - content: 生产环境使用了官方推荐的 Keepalived 和 HAproxy 创建高可用 Kubernetes 集群。高可用 Kubernetes 集群能够确保应用程序在运行时不会出现服务中断,这也是生产的需求之一。
      image: https://pek3b.qingstor.com/kubesphere-community/images/hshc-kubesphere-2.png
    - title:
      contentList:
        - content: 发版工作流示意图:
      image: https://pek3b.qingstor.com/kubesphere-community/images/hshc-kubesphere-3.png
    - title:
      contentList:
        - content:
      image: https://pek3b.qingstor.com/kubesphere-community/images/hshc-kubesphere-4.png
    - title:
      contentList:
        - specialContent:
            text: 底层存储环境
            level: 3
        - content: 底层存储环境,我们并未采用容器化的方式进行部署,而是以传统的方式部署。这样做也是为了高效,而且在互联网业务中,存储服务都有一定的性能要求来应对高并发场景。因此将其部署在裸机服务器上是最佳的选择。
        - content: MySQL、Redis、NFS 均做了高可用,避免了单点问题。Ceph 作为 KubeSphere StorageClass 存储类通过 cephfs 挂载,目前大部分为无状态应用,后续部署有状态应用会对存储进一步优化。
      image:
    - title:
      contentList:
        - specialContent:
            text: 监控平台
            level: 3
        - content: 为日常高效使用 KubeSphere,我们将集成的监控告警进行配置,目前大部分可满足使用。至于 node 节点,通过单独的 PMM 监控来查看日常问题。
        - content: 告警示例:
      image: https://pek3b.qingstor.com/kubesphere-community/images/hshc-kubesphere-5.png
    - title:
      contentList:
        - content: 监控示例:
      image: https://pek3b.qingstor.com/kubesphere-community/images/hshc-kubesphere-6.png
    - title:
      contentList:
        - content:
      image: https://pek3b.qingstor.com/kubesphere-community/images/hshc-kubesphere-7.png

    - title: 使用效果
      contentList:
        - content: 引入 KubeSphere 很大程度上减轻了公司研发持续集成、持续部署的负担,极大提升了整个研发团队的项目交付效率。研发人员只需在本地完成功能开发或 Bug 修复,将代码 Commit 提交至 Git,然后基于 Jenkins 发布测试环境/生产环境的工程,整套 CI/CD 持续集成交付流程就彻底完成了,剩余的联调工作则交给研发。
        - content: 基于 KubeSphere 实现 DevOps,给我们带来的最大效率亮点如下:
        - content: 1. 平台一体化管理:在服务功能迭代方面,只需要登录 KubeSphere 平台,点击各自所负责的项目即可,极大地减轻了部署工作量。虽然通过 Jenkins 结合 KubeSphere 同样能实现项目交付工作,但整套流程相对繁琐:既要关注 Jenkins 平台的构建情况,也要关注 KubeSphere 的交付结果,造成了诸多不便,也背离了我们交付的初衷。后续我们可能通过 KubeSphere 自带的自定义流水线来统一管理。
        - content: 2. 资源利用率显著提高:KubeSphere 和 Kubernetes 相结合,进一步优化了系统资源利用率,降低了使用成本,最大限度增加了 DevOps 资源利用率。
      image:

    - type: 2
      content: 'KubeSphere 为我们简化了 K8s 集群管理,进一步优化了系统资源利用率,降低了使用成本,最大限度增加了 DevOps 资源利用率。'
      author: '花生好车'

    - title: 未来规划(改进)
      contentList:
        - content: 目前通过这次在生产项目中引入 KubeSphere 云原生平台的实践,发现它确实解决了我们微服务部署和管理的问题。基于 KubeSphere 平台的能力进行云原生架构迁移,负载均衡、应用路由、自动扩缩容、DevOps 等功能极大地提升了我们的便捷性。
        - content: 在平台助力下,我们的研发和运维效率显著提升。我们相信运用 KubeSphere 云原生平台的服务网格治理、金丝雀发布、灰度发布、链路追踪等能力,必将为公司下一步业务增长提供坚实基础。
      image:

  rightPart:
    icon: /images/case/logo-hshc.png
    list:
      - title: 行业
        content: 金融
      - title: 地点
        content: 北京
      - title: 云类型
        content: 私有云
      - title: 挑战
        content: 资源利用率、多集群、弹性伸缩、可观测性
      - title: 采用功能
        content: 多集群管理,应用治理,监控,告警,日志
---
@@ -5,6 +5,24 @@ css: "scss/conferences.scss"
viewDetail: 查看详情

list:
  - name: KubeCon China 2023
    content: KubeSphere 社区在 KubeCon + CloudNativeCon 中国上海 2023 上的技术主题分享。
    icon: images/conferences/kubecon.svg
    bg: images/conferences/kubecon-bg.svg
    bgColor: linear-gradient(270deg, rgb(101, 193, 148), rgb(76, 169, 134))
    children:
      - name: 使用 OpenFunction 在任何基础设施上运行无服务器工作负载
        summary: 云原生技术的崛起使得我们可以以相同的方式在公有云、私有云或本地数据中心运行应用程序或工作负载。但是,对于需要访问不同云或开源中间件的各种 BaaS 服务的无服务器工作负载来说,这并不容易。在这次演讲中,OpenFunction 维护者将详细介绍如何使用 OpenFunction 解决这个问题,以及 OpenFunction 的最新更新和路线图。
        author: 霍秉杰,王翼飞
        link: openfunction-2023/
        image: https://pek3b.qingstor.com/kubesphere-community/images/kubecon-2023-openfunction.png

      - name: 使用 Kubernetes 原生方式实现多集群告警
        summary: 在这个演示中,我们将揭示一个基于 Kubernetes 的解决方案,以满足多集群和多租户告警和通知的需求。我们的综合方法涵盖了指标、事件、审计和日志的告警,同时确保与 alertmanager 的兼容性。对于指标,我们提供了适用于不同告警范围的分层 RuleGroups CRDs,同时保持与 Prometheus 规则定义的兼容性。我们还为 Kubernetes 事件和审计事件开发了特定的规则定义和评估器(即 rulers),它们共享同一规则评估引擎。我们的通知实现名为 notification-manager,提供了许多通知渠道和基本功能,如路由、过滤、聚合和通过 CRDs 进行静默。不仅如此,还提供了全面的通知历史记录、多集群和多租户支持。这些功能有助于在各种告警源之间实现无缝集成。
        author: 向军涛,雷万钧
        link: alerting-2023/
        image: https://pek3b.qingstor.com/kubesphere-community/images/kubecon-2023-alerting.png

  - name: KubeCon 北美 2022
    content: KubeSphere 社区在 KubeCon + CloudNativeCon 北美 2022 上的技术主题分享。
    icon: images/conferences/kubecon.svg

@@ -30,7 +48,7 @@ list:
        image: https://pek3b.qingstor.com/kubesphere-community/images/kubecon-eu-2022-ben-lu.png

      - name: 深入浅出 Fluent Operator
        summary: 在新增 Fluentd 的支持后,Fluent Bit Operator 现已被重新命名为 Fluent Operator。在本次分享中,Fluent Operator 的 Maintainer 将会详细介绍 Fluent Operator 的主要功能及其设计原则和架构.
        summary: 在新增 Fluentd 的支持后,Fluent Bit Operator 现已被重新命名为 Fluent Operator。在本次分享中,Fluent Operator 的 Maintainer 将会详细介绍 Fluent Operator 的主要功能及其设计原则和架构。
        author: 霍秉杰,朱晗
        link: fluent-operator/
        image: https://pek3b.qingstor.com/kubesphere-community/images/kubecon-eu-2022-fluent-operator.png
@@ -0,0 +1,239 @@
---
title: '使用 Kubernetes 原生方式实现多集群告警'
author: '向军涛,雷万钧'
createTime: '2023-09-27'
snapshot: 'https://pek3b.qingstor.com/kubesphere-community/images/kubecon-2023-alerting.png'
---

## 议题简介

在这个演示中,我们将揭示一个基于 Kubernetes 的解决方案,以满足多集群和多租户告警和通知的需求。我们的综合方法涵盖了指标、事件、审计和日志的告警,同时确保与 alertmanager 的兼容性。对于指标,我们提供了适用于不同告警范围的分层 RuleGroups CRDs,同时保持与 Prometheus 规则定义的兼容性。我们还为 Kubernetes 事件和审计事件开发了特定的规则定义和评估器(即 rulers),它们共享同一规则评估引擎。我们的通知实现名为 notification-manager,提供了许多通知渠道和基本功能,如路由、过滤、聚合和通过 CRDs 进行静默。不仅如此,还提供了全面的通知历史记录、多集群和多租户支持。这些功能有助于在各种告警源之间实现无缝集成。

## 分享者简介

向军涛:KubeSphere 监控、告警和事件管理模块的核心维护者,对 Kubernetes 和云原生开源技术以及大数据技术有浓厚的兴趣。

雷万钧:KubeSphere 可观测性和 Serverless 团队资深开发工程师。Fluent Operator、Notification Manager 和 OpenFunction 的维护者。热爱云原生和开源技术,参与了多个开源项目,如 thanos 和 buildpacks 等。

## 视频回放

<video id="videoPlayer" controls="" preload="true">
  <source src="https://kubesphere-community.pek3b.qingstor.com/videos/Multi-Cluster-Alerting-A-Kubernetes-Native-Approach.mp4" type="video/mp4">
</video>

## PPT 下载

关注公众号【KubeSphere云原生】,后台回复关键词 `alerting-2023` 即可获取 PPT 下载链接。

**以下是本分享对应的文章内容。**

## 可观测性来源

在 Kubernetes 集群上,各个维度的可观测性数据,可以让我们及时了解集群上应用的状态,以及集群本身的状态。



- Metrics 指标:监控对象状态的量化信息,通常会以时序数据的形式采集和存储。
- Events:这里特指 Kubernetes 集群上所报告的各种事件,它们以 Kubernetes 资源对象的形式存在。
- Auditing:审计,是与用户 API 调用和安全相关的一些事件。
- Logs:日志,是应用和系统对它们内部所发生各种事件的详细记录。
- Traces:链路,主要记录了请求在系统中调用时的链路信息。

## 告警规则

接下来介绍在几个可观测性维度上,我们是如何实现告警的。

### metrics

在云原生监控领域,Prometheus 是被广泛使用的,它可以说是一个事实上的标准。

对于一个单独的集群来说,或者说是集群自己管理指标存储的场景,我们直接部署一个 Prometheus,就可以提供指标采集、存储、查询和告警的功能。当然也可以额外部署一个 Ruler 组件,来专门进行规则的评估和告警,这样可以减轻 Prometheus 组件的负担。



我们还会面临指标数据托管的场景,因为有一些集群会有轻量化的需求,它需要将指标数据托管到一个 host 集群上,或者是托管到专门的服务上。

Prometheus 是支持 Agent 模式的,这个模式下的 Prometheus 可以将指标进行采集,然后推送到一个主集群上进行存储。在主集群上,我们需要提供指标存储和查询的功能,当然告警也需要在主集群上进行,这时候的告警不只是要实现针对每个集群的单独告警,还需要支持多集群关联告警。



Prometheus Operator 作为管理 Prometheus 的 Kubernetes 原生方式,为部署和配置 Prometheus 提供了极大的便利。比如 Prometheus Operator 定义了一个 ServiceMonitor CRD,我们可以用它来方便地配置拉取指标的 targets。

另外 Prometheus Operator 还定义了一个 PrometheusRule CRD 来配置告警规则,但目前仍然存在一些不足:

- 配置粒度大,导致对并发更新的支持不足。
- 可配置性不够,比如不支持禁用告警规则。
- 对多租户和多集群场景的支持较差。

为了让规则配置更加灵活,并且能够更好地应用到多集群和多租户的场景,我们引入了三个新的 CRDs。这些 CRDs 以规则组为配置单元,配置粒度更加细化,可配置性也得到了增强。

- RuleGroup:项目级别的资源,只对所在项目的指标生效。
- ClusterRuleGroup:集群级别的资源,其生效范围是其实例所在集群的指标。
- GlobalRuleGroup:特殊资源,支持对多个指定集群的指标生效。



上方是 RuleGroup 的一个实例。

每个规则组都是一个配置单元,可以包含有关联关系的多个规则。

在单个规则的结构上,我们保留了 PrometheusRule CRD 中原始的规则结构,以确保和上游 PrometheusRule 的兼容性。
在 RuleGroup 的实例当中,我们可以通过设置一个资源标签来启用或者禁用整个规则组,也可以在规则配置中进行单个规则的禁用或启用。
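上述启用/禁用方式可以用一个示意性的 RuleGroup 清单来说明。以下仅为示意草稿:`apiVersion`、启用/禁用所用的标签名和 `disable` 字段均为假设,请以实际安装的 CRD 定义为准;单条规则仍保持 Prometheus 原始规则结构(`alert`/`expr`/`for`/`labels`/`annotations`)。

```yaml
# 示意:一个项目级 RuleGroup,字段名以实际 CRD 定义为准
apiVersion: alerting.kubesphere.io/v2beta1   # 假设的 API 版本
kind: RuleGroup
metadata:
  name: demo-workload-rules
  namespace: demo-project
  labels:
    alerting.kubesphere.io/enable: "true"    # 假设:通过资源标签启用/禁用整个规则组
spec:
  interval: 1m
  rules:
    - alert: HighWorkloadCPU
      disable: false                         # 假设:单条规则级别的启用/禁用开关
      expr: |
        sum(rate(container_cpu_usage_seconds_total{namespace="demo-project"}[5m])) > 2
      for: 5m
      labels:
        severity: warning
      annotations:
        summary: workload CPU usage is high
```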
另外我们还提供了一个 `exprBuilder`,针对一些典型的告警场景,通过简单的配置即可自动构建规则表达式。

`exprBuilder` 提供了配置 Prometheus 查询表达式的便捷方法,涵盖了工作负载和节点的各种指标告警。

- 工作负载
  - 类型:Deployment、StatefulSet、DaemonSet。
  - 指标:工作负载的 CPU、内存、网络和副本。
- 节点(不适用于 RuleGroup 实例)
  - 指标:节点的 CPU、内存、网络、磁盘、Pod 使用率等。
RuleGroup,ClusterRuleGroup 和 GlobalRuleGroup 的实例可以组合成 PrometheusRule 的实例。在此过程中,会添加一些指标数据访问的限制。比如 RuleGroup 被合并生成到 PrometheusRule 实例中时,它会将 `exprBuilder` 构建成 Prometheus 查询表达式,同时也会将 `namespace` 的条件注入到表达式中,以限制规则可以访问的指标。
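这一注入过程可以用一个简化的前后对比来说明(表达式与项目名仅为示意):

```txt
# RuleGroup 中的原始表达式(示意)
sum(rate(container_cpu_usage_seconds_total[5m])) > 2

# 合并生成 PrometheusRule 时注入 namespace 条件后(示意)
sum(rate(container_cpu_usage_seconds_total{namespace="demo-project"}[5m])) > 2
```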


在多集群告警的场景下,还会涉及到 PrometheusRule 的跨集群同步。在这个同步过程中,我们会将 `cluster` 条件注入规则表达式中,这样可以限制规则访问的集群指标。



无论采用哪种指标存储管理模式,在数据侧评估告警规则的效率会更高。如果一个集群自己管理指标存储,那么同一集群的 Ruler 可以直接加载这些规则,然后进行评估和告警,RuleGroup 和 ClusterRuleGroup 的更新也会及时反馈到 Ruler 组件内部。



如果一个集群将指标数据托管到一个主集群上,在这个集群上仍然会有 RuleGroup 和 ClusterRuleGroup 合并生成 PrometheusRule 的过程,不过生成的 PrometheusRule 会被同步到主集群上,由主集群的 Ruler 进行评估和告警。这是因为在同步的过程中,规则表达式中已经注入了 `cluster` 的过滤条件,所以能够正常地对该集群的指标数据进行评估,并决定是否告警。

如果有多个集群将数据托管到一个主集群上,那么可以在主集群上配置多个 Ruler 来分担压力。在 KubeSphere 的某些版本中,可以根据多集群的规模来动态扩展 Ruler 以及相关的查询组件,以确保告警评估过程高效运行。



由新定义的规则资源触发的告警不仅包含指标标签,还将通过以下标签丰富告警信息,以方便故障定位:

- alerttype:区分不同的告警来源。
- cluster:用于多集群方案,快速定位告警对象。
- severity:对告警执行分级控制。
- rule_group:在规则组和告警之间建立有效的关系。
- rule_level:在规则组和告警之间建立有效的关系。

### events

接下来介绍一下事件告警规则的实现方式。

Kubernetes 事件通常表示集群中的某些状态变化,作为 Kubernetes 资源对象,其保留时间有限。kube-events 项目中的 exporter 组件可以导出这些事件进行长期存储,并通过评估事件规则生成与 Alertmanager 兼容的告警。这些规则可以过滤出需要告警的关键事件或者用户感兴趣的事件。

事件规则 CRD 定义了基于事件的告警配置:

- condition:类似于 SQL 语句的 where 部分(通过我们实现的 event-rule-engine 提供语法支持),用于支持更灵活的事件过滤方式。
- labels:添加到告警中的额外标签。
- annotations:关于告警的详细信息。



在 KubeSphere 中,事件规则实例中的 `kubesphere.io/rule-scope` 标签可用于限制规则的生效范围:

- cluster:适用于集群中的所有事件。
- workspace:适用于属于同一工作区的多个命名空间中的事件。必须在规则实例中指定 `workspace` 标签。
- namespace:规则实例所在的 namespace 需要与事件涉及对象所在的 namespace 相匹配。
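基于上述字段,一条事件规则大致可以写成如下示意清单。注意这只是一个草稿:`apiVersion`、`enable`、`type` 等字段名为假设,条件表达式中的事件字段也仅为示意,请以 kube-events 实际的规则 CRD 为准。

```yaml
# 示意:一条针对 OOM 事件的告警规则,字段以实际 CRD 定义为准
apiVersion: events.kubesphere.io/v1alpha1    # 假设的 API 版本
kind: Rule
metadata:
  name: pod-oom-killed
  labels:
    kubesphere.io/rule-scope: cluster        # 生效范围:cluster / workspace / namespace
spec:
  rules:
    - name: ContainerOOMKilled
      condition: type="Warning" and reason="OOMKilling"   # 类 SQL 的过滤条件(示意)
      labels:
        severity: warning
      annotations:
        summary: container was OOM killed
      enable: true                           # 假设:规则级启用开关
      type: alert                            # 假设:生成告警而非仅导出
```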

### audit events

接下来介绍一下审计告警规则的实现方式。

在 Kubernetes 集群中,审计事件记录了所有 API 调用,包含了请求响应信息和用户信息。审计组件在提供审计导出功能的时候,还可以根据相关的规则进行灵活的配置。

审计规则 CRD 定义了过滤审计事件的配置,这些事件将被长期存储或生成告警发送给用户。每个审计规则实例可包含多个规则。

定义了四种类型的规则:

- rule:完整的规则,带有条件字段,即前文提到的类 SQL 表达式。
- macro:精简规则,可被其他宏或规则调用。
- list:列表,可被宏或规则调用。
- alias:只是一个变量的别名。

## 如何配置接收告警通知

所有的告警都将通过通知系统,实时准确地发送给用户。KubeSphere 团队开源的 Notification Manager 是一个多租户的云原生通知管理工具,它支持多种通知渠道,比如微信、钉钉、飞书、邮件、Slack、Pushover、WeCom、Webhook 以及短信平台(阿里、腾讯、华为)等等。

下面就来通过 Notification Manager,快速地搭建出一套云原生的多租户通知系统。

Notification Manager 会接收来自 Prometheus、Alertmanager 发出的告警消息、K8s 产生的审计消息以及 K8s 本身的事件。在规划中我们还会实现接入 Prometheus 的告警消息和接入 Cloud Event。消息在进入缓存之后,会经过静默、抑制、路由、过滤、聚合,最后进行实际的通知,并记录在历史中。



下面是对每个步骤的解析:

### 静默(Silence)

静默的作用是在特定的时间段阻止特定的告警发送,具有时效性。可以通过配置时间段或者 Cron 表达式来设置静默规则的生效时间。当然,也可以设置永久生效的静默规则。

静默规则有两种级别:全局级别和租户级别。全局级别的静默规则作用于所有的告警,租户级别的静默规则只作用于需要发送给某个租户的告警。

### 抑制(Inhibit)

抑制的作用是通过某些特定的告警去阻止其他告警的发送。一个节点宕机之后会发送大量告警,而这些告警不利于我们排查原因,我们可以通过设置抑制规则不再发送这部分告警给用户。

### 路由(Route)

告警、事件、审计都是由一个个标签组成的,路由的本质就是根据标签寻找需要接收标签的接收器。换句话说,路由的作用就是根据告警信息去寻找,要把告警发送给哪个用户,用户又通过什么方式去接收告警。

那么如何去定义用户接收告警的方式?

Notification Manager 引入了一个 Receiver 的概念。

Receiver 用于定义通知格式和发送通知的目的地。接收器包含以下信息:通知渠道信息,如电子邮件地址、Slack channel 等;生成通知消息的模板;过滤告警的标签选择器。

Receiver 分为两类:全局级别和租户级别。

全局级别的 Receiver 会接收所有的告警消息,租户级别的 Receiver 只会接收租户有权限访问的 namespace 下产生的告警消息。

有两种方式把告警和 Receiver 匹配起来:

- 路由匹配:用户可以制定一个路由规则,然后把特定的告警路由到特定的 Receiver 上。
- 根据 namespace 标签匹配:对于没有 namespace 标签的告警,会全部发送到全局级别的 Receiver;对于有 namespace 标签的告警,会根据标签的值,发送到有权限访问 namespace 租户创建的 Receiver 上。
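一个租户级别的 Receiver 大致如下面的示意清单所示。这只是一个草稿:`type`、`user` 标签的含义为假设,邮箱地址为虚构,具体字段请以 Notification Manager 实际的 Receiver CRD 为准。

```yaml
# 示意:一个租户级别的 Email Receiver,字段以实际 CRD 定义为准
apiVersion: notification.kubesphere.io/v2beta2
kind: Receiver
metadata:
  name: demo-user-email
  labels:
    type: tenant                 # 假设:租户级别;全局级别则为 type: global
    user: demo-user              # 假设:标识该 Receiver 所属的租户
spec:
  email:
    to:
      - demo-user@example.com    # 虚构的收件地址
    alertSelector:               # 标签选择器,过滤该接收器关心的告警
      matchExpressions:
        - key: severity
          operator: In
          values: [warning, critical]
```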

这是一个路由规则,我们可以通过标签选择器去选择需要路由的告警,然后可以把这些告警路由到指定租户的所有 Receiver 上,或者把它路由到某一个指定的 Receiver 上。更进一步,我们可以选择某一些类型的 Receiver,比如我们可以把告警路由到所有的 Email Receiver,而不把它路由到 wechat Receiver。



对于这两种模式,有三种协同方式,分别是:

- All:同时使用两种方式匹配。
- 路由优先:优先使用路由规则匹配,未匹配成功的使用自动匹配。
- 只使用路由:只使用路由规则匹配,未匹配到 Receiver 则不发送通知。

### 过滤(Filter)

每个用户对告警的需求是不一样的,我们需要为每个选定的 Receiver 过滤掉无效或不感兴趣的告警。这就是 Filter 的作用。

实现 Filter 的方式有两种:一种是在 Receiver 中设置标签选择器,过滤掉不重要的通知,对单个 Receiver 生效;另一种是定义租户级别的静默规则,对部分通知进行静默。

### 聚合(Aggregation)

告警匹配到了对应的 Receiver,就可以通过设定的规则,将同一 Receiver 需要发送的告警做一个聚合。

聚合的作用有两个:

- 聚合告警消息,便于归档,方便用户定位故障。
- 减少调用频次,避免被微信、钉钉等禁言,节省资源。

### 模板(Template)

到了这一步,告警发送之前的所有准备工作已经完成了,接下来就要向用户发送通知。

首先,我们需要根据告警消息生成一条通知消息(根据不同的 Receiver 生成不同的消息)。我们支持用户自定义通知模板,用户可以定义全局模板,也可以为每个接收者定义模板。同时我们支持用户自定义语言包,然后实现语言切换。Notification Manager 为大家提供了内置的相关函数,可以实现语言的翻译功能。

### Config

现在一切都准备就绪,是时候向用户发送通知了,但我们还缺少一些关键信息,例如:发送电子邮件所需的 SMTP 服务器和电子邮件地址;用于向 Slack 频道发送通知的 Slack App 令牌;飞书的 AppID 和 AppSecret。因此我们需要定义这些信息。

Config 就是用来定义发送通知消息所必需的一些信息的,同样分为全局类型和租户类型两种。

对于全局级别的 Receiver,默认情况下,会选择全局级别的 Config。对于租户级别的 Receiver,默认选择当前租户创建的 Config。如果当前租户未创建 Config,会选择全局级别的 Config。

同时也支持 Receiver 通过标签选择器去指定 Config。

通过发送配置和接收配置分离的模式,我们可以最大限度地实现配置复用,同时可以实现多租户的通知设置。


举个简单的例子,比如对邮箱这种通知模式,整个公司可能只有一个 SMTP 服务器,这样就可以设置一个全局级别的 Email Config,然后所有的租户只配置一个 Receiver 就可以了。
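这个例子对应的配置大致如下面的示意清单。这只是一个草稿:`type: default` 标签、`smartHost`/`authPassword` 等字段结构为假设,SMTP 地址和 Secret 名称均为虚构,请以 Notification Manager 实际的 Config CRD 为准。

```yaml
# 示意:全公司共用的全局 Email Config,各租户的 Email Receiver 默认复用它
apiVersion: notification.kubesphere.io/v2beta2
kind: Config
metadata:
  name: default-email-config
  labels:
    type: default                     # 假设:标识为全局默认配置
spec:
  email:
    from: alert@example.com           # 虚构的发件地址
    smartHost:
      host: smtp.example.com          # 虚构的 SMTP 服务器
      port: 465
    authUsername: alert@example.com
    authPassword:
      key: password
      name: default-email-secret      # 引用存放 SMTP 密码的 Secret(虚构)
```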

至此我们就完整地搭建了一个多租户的云原生通知系统,然后就可以给用户发送通知了。

@@ -0,0 +1,33 @@
---
title: '使用 OpenFunction 在任何基础设施上运行无服务器工作负载'
author: '霍秉杰,王翼飞'
createTime: '2023-09-27'
snapshot: 'https://pek3b.qingstor.com/kubesphere-community/images/kubecon-2023-openfunction.png'
---

## 议题简介

云原生技术的崛起使得我们可以以相同的方式在公有云、私有云或本地数据中心运行应用程序或工作负载。但是,对于需要访问不同云或开源中间件的各种 BaaS 服务的无服务器工作负载来说,这并不容易。在这次演讲中,OpenFunction 维护者将详细介绍如何使用 OpenFunction 解决这个问题,以及 OpenFunction 的最新更新和路线图:

- 使用 Dapr 将 FaaS 与 BaaS 解耦
- 使用 Dapr 代理而不是 Dapr sidecar 来加速函数启动
- 使用 Kubernetes Gateway API 构建 OpenFunction 网关
- 使用 WasmEdge 运行时运行 WebAssembly 函数
- OpenFunction 在自动驾驶行业的应用案例
- 最新更新和路线图

## 分享者简介

霍秉杰:KubeSphere 可观测性、边缘计算和 Serverless 团队负责人,Fluent Operator 和 OpenFunction 项目的创始人,还是多个可观测性开源项目包括 Kube-Events、Notification Manager 等的作者,热爱云原生技术,并贡献过 KEDA、Prometheus Operator、Thanos、Loki 和 Falco 等知名开源项目。

王翼飞:青云科技资深软件工程师,负责开发和维护 OpenFunction 项目。专注于 Serverless 领域的研发,对 Knative、Dapr、Keda 等开源项目有深入的了解和实践经验。

## 视频回放

<video id="videoPlayer" controls="" preload="true">
  <source src="https://kubesphere-community.pek3b.qingstor.com/videos/Run-Serverless-Workloads-on-Any-Infrastructure-with-OpenFunction.mp4" type="video/mp4">
</video>

## 对应文章

整理中

@@ -146,7 +146,7 @@ KubeSphere 中的图形编辑面板包含用于 Jenkins [阶段 (Stage)](https:/
3. 点击**添加嵌套步骤**,在 `maven` 容器下添加一个嵌套步骤。在列表中选择 **shell** 并在命令行中输入以下命令。点击**确定**保存操作。

```shell
mvn clean -gs `pwd`/configuration/settings.xml test
mvn clean test
```

{{< notice note >}}

@@ -63,8 +63,6 @@ KubeSphere 将 PVC 绑定到满足您设定的请求条件(例如容量和访

- 新建的持久卷声明也会显示在**集群管理**中的**持久卷声明**页面。集群管理员需要查看和跟踪项目中创建的持久卷声明。另一方面,集群管理员在**集群管理**中为项目创建的持久卷声明也会显示在项目的**持久卷声明**页面。

- 一些持久卷声明是动态供应的持久卷声明,它们的状态会在创建后立刻从**等待中**变为**已绑定**。其他仍处于**等待中**的持久卷声明会在挂载至工作负载后变为**已绑定**。持久卷声明是否支持动态供应取决于其存储类。例如,如果您使用默认的存储类型 (OpenEBS) 安装 KubeSphere,您只能创建不支持动态供应的本地持久卷声明。这类持久卷声明的绑定模式由 YAML 文件中的 `VolumeBindingMode: WaitForFirstConsumer` 字段指定。

- 一些持久卷声明是动态供应的持久卷声明,它们的状态会在创建后立刻从**等待中**变为**已绑定**。其他仍处于**等待中**的持久卷声明会在挂载至工作负载后变为**已绑定**。持久卷声明是否支持动态供应取决于其存储类。例如,如果您使用默认的存储类型 (OpenEBS) 安装 KubeSphere,您只能创建不支持动态供应的本地持久卷声明。这类持久卷声明的绑定模式由 YAML 文件中的 `VolumeBindingMode: WaitForFirstConsumer` 字段指定。

{{</ notice >}}

@@ -0,0 +1,174 @@
---
title: "添加 OpenSearch 作为接收器"
keywords: 'Kubernetes, 日志, OpenSearch, Pod, 容器, Fluentbit, 输出'
description: '了解如何添加 OpenSearch 来接收容器日志、资源事件或审计日志。'
linkTitle: "添加 OpenSearch 作为接收器"
weight: 8625
---

[OpenSearch](https://opensearch.org/) 是一个分布式、由社区驱动、采用 Apache 2.0 许可的 100% 开源搜索和分析套件,可用于实时应用程序监控、日志分析和网站搜索等场景。
OpenSearch 由 Apache Lucene 搜索库提供技术支持,它支持一系列搜索及分析功能,如 k-最近邻(KNN)搜索、SQL、异常检测、Machine Learning Commons、Trace Analytics、全文搜索等。
OpenSearch 提供了一个高度可扩展的系统,通过集成可视化工具,使用户可以轻松地探索他们的数据。

KubeSphere 在 `v3.4.0` 版本集成了 OpenSearch 的 `v1` 和 `v2` 版本,并将其作为 `logging`、`events` 和 `auditing` 组件的默认后端存储。

## 准备工作

- 需要一个被授予集群管理权限的用户。例如,可以直接用 `admin` 用户登录控制台,或创建一个具有集群管理权限的角色然后将此角色授予一个用户。

- 添加日志接收器前,需先启用组件 `logging`、`events` 或 `auditing`。有关更多信息,请参见[启用可插拔组件](/content/zh/docs/v3.4/pluggable-components/)。本教程启用 `logging` 作为示例。

## 使用 OpenSearch 作为日志接收器

在 KubeSphere `v3.4.0` 版本中,OpenSearch 默认作为 `logging`、`events` 和 `auditing` 组件的后端存储,配置如下:

```shell
$ kubectl edit cc -n kubesphere-system ks-installer

apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
...
spec:
  ...
  common:
    opensearch: # Storage backend for logging, events and auditing.
      ...
      enabled: true
      logMaxAge: 7 # Log retention time in built-in Opensearch. It is 7 days by default.
      opensearchPrefix: whizard # The string making up index names. The index name will be formatted as ks-<opensearchPrefix>-logging.
      ...
```

KubeSphere 版本低于 `v3.4.0` 的,请先[升级](https://github.com/kubesphere/ks-installer/tree/release-3.4#upgrade)。

### 通过控制台启用 `logging` 组件,并使用 `OpenSearch` 作为后端存储

1. 以 admin 用户登录控制台。点击左上角的平台管理,选择集群管理。

2. 点击定制资源定义,在搜索栏中输入 `clusterconfiguration`。点击结果查看其详细页面。



3. 在自定义资源中,点击 ks-installer 右侧的操作图标,选择编辑 YAML。



4. 在该 YAML 文件中,搜索 `logging`,将 `enabled` 的 `false` 改为 `true`。完成后,点击右下角的确定以保存配置。

```yaml
common:
  opensearch:
    enabled: true

logging:
  enabled: true
```

## 将日志存储改为外部 OpenSearch 并关闭内部 OpenSearch

如果您使用的是 KubeSphere 内部的 OpenSearch,并且想把它改成您的外部 OpenSearch,请按照以下步骤操作。

1. 执行以下命令更新 ClusterConfig 配置:

```shell
kubectl edit cc -n kubesphere-system ks-installer
```

2. 将 `opensearch.externalOpensearchHost` 设置为外部 `OpenSearch` 的地址,将 `opensearch.externalOpensearchPort` 设置为其端口号,并将 `status.logging` 字段注释或者删除掉。以下示例供您参考:

```yaml
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
...
spec:
  ...
  common:
    opensearch:
      enabled: true
      ...
      externalOpensearchHost: ""
      externalOpensearchPort: ""
      dashboard:
        enabled: false
  ...
status:
  ...
  # logging:
  #   enabledTime: 2023-08-21T21:05:13UTC
  #   status: enabled
  ...
```

如果要使用 `OpenSearch` 的可视化工具,可将 `opensearch.dashboard.enabled` 设置为 `true`。

3. 重新运行 ks-installer。

```shell
kubectl rollout restart deploy -n kubesphere-system ks-installer
```

4. 运行以下命令删除内部 OpenSearch,请确认您已备份内部 OpenSearch 中的数据。

```shell
helm uninstall opensearch-master -n kubesphere-logging-system && helm uninstall opensearch-data -n kubesphere-logging-system && helm uninstall opensearch-logging-curator -n kubesphere-logging-system
```

## 在 KubeSphere 中查询日志

1. 所有用户都可以使用日志查询功能。使用任意账户登录控制台,在右下角的图标上悬停,然后在弹出菜单中选择日志查询。



2. 在弹出窗口中,您可以看到日志数量的时间直方图、集群选择下拉列表以及日志查询栏。



3. 您可以点击搜索栏并输入搜索条件,可以按照消息、企业空间、项目、资源类型、资源名称、原因、类别或时间范围搜索日志(例如,输入时间范围:最近 10 分钟,来搜索最近 10 分钟的日志)。或者,点击时间直方图中的柱状图,KubeSphere 会使用该柱状图的时间范围进行日志查询。


@@ -40,7 +40,7 @@ weight: 11440

{{< notice note >}}

这些 Kubernetes 集群可以被托管至不同的云厂商,也可以使用不同的 Kubernetes 版本。针对 KubeSphere 3.4 推荐的 Kubernetes 版本:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。
这些 Kubernetes 集群可以被托管至不同的云厂商,也可以使用不同的 Kubernetes 版本。针对 KubeSphere 3.4 推荐的 Kubernetes 版本:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。

{{</ notice >}}

@@ -146,7 +146,7 @@ KubeSphere 中的图形编辑面板包含用于 Jenkins [阶段 (Stage)](https:/
3. 点击**添加嵌套步骤**,在 `maven` 容器下添加一个嵌套步骤。在列表中选择 **shell** 并在命令行中输入以下命令。点击**确定**保存操作。

```shell
mvn clean -gs `pwd`/configuration/settings.xml test
mvn clean test
```

{{< notice note >}}

@@ -0,0 +1,196 @@
---
title: "使用流水线步骤模板"
keywords: 'KubeSphere, Kubernetes, Jenkins, 图形化流水线, 流水线步骤模板'
description: '了解如何在 KubeSphere 上使用流水线步骤模板。'
linkTitle: "使用流水线步骤模板"
weight: 11214
---

在 KubeSphere 3.4.0 版本中,DevOps 项目支持在流水线模板中使用步骤模板,以优化流水线的使用。

## 准备工作

- [启用 KubeSphere DevOps 系统](../../../../pluggable-components/devops/)。

- 创建企业用户,请参见[创建企业空间、项目、用户和角色](../../../../quick-start/create-workspace-and-project/)。

### 开启 DevOps 组件

1. 以 admin 用户登录控制台,点击左上角的平台管理,选择集群管理。

2. 点击定制资源定义,在搜索栏中输入 clusterconfiguration,点击搜索结果查看其详细页面。

3. 在自定义资源中,点击 ks-installer 右侧的操作图标,选择编辑 YAML,将 devops 下的 enabled 配置更改为 true。

```
devops:
  enabled: true # 将 "false" 更改为 "true"
```

4. 使用 kubectl 命令检查安装过程。

```
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
```

5. 使用 kubectl 命令验证是否安装完成。

```
kubectl get pod -n kubesphere-devops-system
```

对应的 pod 为 Running 状态即表示成功。

```
devops-apiserver-7576cfc79c-j9kdz    1/1   Running   0    23h
devops-controller-7bcbbfc546-lszkt   1/1   Running   0    23h
devops-jenkins-79b59bdd5-tjrj8       1/1   Running   0    23h
s2ioperator-0                        1/1   Running   0    23h
```

## 配置使用自定义步骤模板

### 创建自定义的步骤模板

目前自定义的步骤模板只能通过 kubectl 命令行操作,暂不支持在控制台创建。

1. 通过 kubectl 命令查看现有的步骤模板。

```
kubectl get clustersteptemplates
```

```
NAME                 AGE
archiveartifacts     6d7h
build                6d7h
cd                   6d7h
checkout             6d7h
container            6d7h
echo                 6d7h
error                6d7h
git                  6d7h
input                6d7h
junit                6d7h
mail                 6d7h
retry                6d7h
script               6d7h
shell                6d7h
sleep                6d7h
timeout              6d7h
waitforqualitygate   6d7h
withcredentials      6d7h
withsonarqubeenv     6d7h
```

2. 创建自定义步骤模板。先创建一个 YAML 文件,简单实现写文件功能。

```
apiVersion: devops.kubesphere.io/v1alpha3
kind: ClusterStepTemplate
metadata:
  annotations:
    devops.kubesphere.io/descriptionEN: Write message to file in the build
    devops.kubesphere.io/descriptionZH: 在构建过程中写入文件
    devops.kubesphere.io/displayNameEN: writeFile
    devops.kubesphere.io/displayNameZH: 写文件
    meta.helm.sh/release-name: devops
    meta.helm.sh/release-namespace: kubesphere-devops-system
    step.devops.kubesphere.io/icon: loudspeaker
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
    step.devops.kubesphere.io/category: General
  name: writefile
spec:
  parameters:
    - display: file
      name: file
      required: true
      type: string
    - display: text
      name: text
      required: true
      type: string
  runtime: dsl
  template: |
    {
      "arguments": [
        {
          "key": "file",
          "value": {
            "isLiteral": true,
            "value": "{{.param.file}}"
          }
        },
        {
          "key": "text",
          "value": {
            "isLiteral": true,
            "value": "{{.param.text}}"
          }
        }
      ],
      "name": "writeFile"
    }
```

备注:

a. 步骤模板是通过 CRD 实现的,详细可参考[步骤模板的 CRD](https://github.com/kubesphere-sigs/ks-devops-helm-chart/blob/master/charts/ks-devops/crds/devops.kubesphere.io_clustersteptemplates.yaml)。

b. yaml 文件中的 metadata.name 字段和 spec.template.name 字段需要保持一致,同时 name 字段依赖 jenkins 中的函数来实现对应功能,如上的 yaml 文件中使用了 writeFile 函数来实现输出功能,详细可参考[ pipeline steps](https://www.jenkins.io/doc/pipeline/steps/)。
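作为参考,上面这个模板在流水线中被展开后,大致等价于如下 Jenkins 流水线步骤调用(文件名和文本内容仅为示意):

```
// 模板渲染后等价的 Jenkins 基础步骤(示意)
writeFile(file: 'result.txt', text: 'hello from step template')
```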
|
||||
|
||||
3.使用 kubectl 命令创建自定义的步骤。
|
||||
|
||||
```
|
||||
kubectl apply -f test-writefile.yaml
|
||||
```
|
||||
|
||||
4.再次查看自定义步骤模板 writefile 已创建。
|
||||
|
||||
```
|
||||
kubectl get clustersteptemplates
|
||||
NAME AGE
|
||||
archiveartifacts 37d
|
||||
build 37d
|
||||
cd 37d
|
||||
checkout 37d
|
||||
container 37d
|
||||
echo 37d
|
||||
error 37d
|
||||
git 37d
|
||||
input 37d
|
||||
junit 37d
|
||||
mail 37d
|
||||
pwd 28d
|
||||
retry 37d
|
||||
script 37d
|
||||
shell 37d
|
||||
sleep 37d
|
||||
timeout 37d
|
||||
waitforqualitygate 37d
|
||||
withcredentials 37d
|
||||
withsonarqubeenv 37d
|
||||
writefile 28s
|
||||
```
|
||||
|
||||
### 使用自定义步骤模板
|
||||
|
||||
1. 选择进入 DevOps 项目后,建立新的 pipeline 流水线。
|
||||
|
||||

|
||||
|
||||
2. 进入编辑流水线中,可以按需选择固定模板(比如 Node.js/Maven/Golang 等),也可以选择创建自定义流水线。
|
||||

|
||||
|
||||
3. 这里选择固定模板 Golang 创建流水线,进入流水线后,可以按需增加一个阶段。我们选择在流水线最后创建一个通知的阶段。
|
||||
|
||||

|
||||
|
||||
4. 在通知的阶段这里,继续添加执行步骤,这里有很多的步骤模板,我们选择
|
||||
写文件 的这个自定义步骤。
|
||||
|
||||

|
||||
|
||||
|
||||
至此,我们完成了一个自定义步骤模板的配置。

@@ -30,7 +30,7 @@ weight: 4230

{{< notice note >}}

- 如需在 Kubernetes 上安装 KubeSphere 3.4,您的 Kubernetes 版本必须为:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。
- 如需在 Kubernetes 上安装 KubeSphere 3.4,您的 Kubernetes 版本必须为:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。
- 此示例中包括 3 个节点。您可以根据自己的需求添加更多节点,尤其是在生产环境中。
- 机器类型 Standard/4 GB/2 vCPU 仅用于最小化安装,如果您计划启用多个可插拔组件或将集群用于生产,建议将节点升级为规格更大的类型(例如 CPU-Optimized/8 GB/4 vCPUs)。DigitalOcean 基于工作节点类型来配置主节点,而对于标准节点,API server 可能很快变得无响应。

@@ -10,7 +10,7 @@ weight: 4120

您可以在虚拟机和裸机上安装 KubeSphere,并同时配置 Kubernetes。另外,只要 Kubernetes 集群满足以下前提条件,那么您也可以在云托管和本地 Kubernetes 集群上部署 KubeSphere。

- 如需在 Kubernetes 上安装 KubeSphere 3.4,您的 Kubernetes 版本必须为:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。
- 如需在 Kubernetes 上安装 KubeSphere 3.4,您的 Kubernetes 版本必须为:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。
- 可用 CPU > 1 核;内存 > 2 G。CPU 必须为 x86_64,暂时不支持 Arm 架构的 CPU。
- Kubernetes 集群已配置**默认** StorageClass(请使用 `kubectl get sc` 进行确认)。
- 使用 `--cluster-signing-cert-file` 和 `--cluster-signing-key-file` 参数启动集群时,kube-apiserver 将启用 CSR 签名功能。请参见 [RKE 安装问题](https://github.com/kubesphere/kubesphere/issues/1925#issuecomment-591698309)。
@@ -244,55 +244,18 @@ https://kubesphere.io 20xx-xx-xx xx:xx:xx

### KubeSphere 3.4 镜像清单

```txt
##k8s-images
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.10
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.10
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.10
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.10
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.24.3
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.24.3
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.24.3
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.24.3
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.12
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.12
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.12
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.12
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.21.14
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.21.14
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.21.14
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.21.14
registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.7
registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6
registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.4.1
registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0
registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6
registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2
registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2
registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2
registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.23.2
registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.12.0
registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:2.10.1
registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:2.10.0
registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3
registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2
registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12
##kubesphere-images
registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.4.0
registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.4.0
registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.4.0
registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.4.0
registry.cn-beijing.aliyuncs.com/kubesphereio/ks-upgrade:v3.4.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.21.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1
registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.1
registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z
registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z
registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0
registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v1.1.0
registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v1.3.1
registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4
registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2
registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine
@@ -301,18 +264,18 @@ registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14
registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0
registry.cn-beijing.aliyuncs.com/kubesphereio/netshoot:v1.0
##kubeedge-images
registry.cn-beijing.aliyuncs.com/kubesphereio/cloudcore:v1.9.2
registry.cn-beijing.aliyuncs.com/kubesphereio/iptables-manager:v1.9.2
registry.cn-beijing.aliyuncs.com/kubesphereio/edgeservice:v0.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/cloudcore:v1.13.0
registry.cn-beijing.aliyuncs.com/kubesphereio/iptables-manager:v1.13.0
registry.cn-beijing.aliyuncs.com/kubesphereio/edgeservice:v0.3.0
##gatekeeper-images
registry.cn-beijing.aliyuncs.com/kubesphereio/gatekeeper:v3.5.2
##openpitrix-images
registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.4.0
registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.3.2
##kubesphere-devops-images
registry.cn-beijing.aliyuncs.com/kubesphereio/devops-apiserver:ks-v3.4.0
registry.cn-beijing.aliyuncs.com/kubesphereio/devops-controller:ks-v3.4.0
registry.cn-beijing.aliyuncs.com/kubesphereio/devops-tools:ks-v3.4.0
registry.cn-beijing.aliyuncs.com/kubesphereio/ks-jenkins:v3.4.0-2.319.1
registry.cn-beijing.aliyuncs.com/kubesphereio/ks-jenkins:v3.4.0-2.319.3-1
registry.cn-beijing.aliyuncs.com/kubesphereio/inbound-agent:4.10-2
registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2
registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0
@@ -355,43 +318,46 @@ registry.cn-beijing.aliyuncs.com/kubesphereio/argocd-applicationset:v0.4.1
registry.cn-beijing.aliyuncs.com/kubesphereio/dex:v2.30.2
registry.cn-beijing.aliyuncs.com/kubesphereio/redis:6.2.6-alpine
##kubesphere-monitoring-images
registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.5.0
registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.34.0
registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.7.1
registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.39.1
registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1
registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.5.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.6.0
registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1
registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0
registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.25.2
registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.31.0
registry.cn-beijing.aliyuncs.com/kubesphereio/grafana:8.3.3
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0
registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0
registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0
registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v2.3.0
registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v2.3.0
registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0
##kubesphere-logging-images
registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-curator:v5.7.6
registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch-curator:v0.0.5
registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-oss:6.8.22
registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.13.0
registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch:2.6.0
registry.cn-beijing.aliyuncs.com/kubesphereio/opensearch-dashboards:2.6.0
registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.14.0
registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03
registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.11
registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1
registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.9.4
registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:v1.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/filebeat:6.7.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.4.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.4.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.4.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.6.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.6.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.6.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-operator:v0.2.0
registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-webhook:v0.2.0
##istio-images
registry.cn-beijing.aliyuncs.com/kubesphereio/pilot:1.11.1
registry.cn-beijing.aliyuncs.com/kubesphereio/proxyv2:1.11.1
registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-operator:1.27
registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-agent:1.27
registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-collector:1.27
registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-query:1.27
registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-es-index-cleaner:1.27
registry.cn-beijing.aliyuncs.com/kubesphereio/kiali-operator:v1.38.1
registry.cn-beijing.aliyuncs.com/kubesphereio/kiali:v1.38
registry.cn-beijing.aliyuncs.com/kubesphereio/pilot:1.14.6
registry.cn-beijing.aliyuncs.com/kubesphereio/proxyv2:1.14.6
registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-operator:1.29
registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-agent:1.29
registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-collector:1.29
registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-query:1.29
registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-es-index-cleaner:1.29
registry.cn-beijing.aliyuncs.com/kubesphereio/kiali-operator:v1.50.1
registry.cn-beijing.aliyuncs.com/kubesphereio/kiali:v1.50
##example-images
registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1
registry.cn-beijing.aliyuncs.com/kubesphereio/nginx:1.14-alpine
@@ -48,7 +48,7 @@ weight: 3150
从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或直接使用以下命令。

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{</ tab >}}

@@ -64,7 +64,7 @@ export KKZONE=cn
执行以下命令下载 KubeKey:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{< notice note >}}

@@ -97,7 +97,7 @@ chmod +x kk

{{< notice note >}}

- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。
- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。

- 如果您在这一步的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere,只能使用配置文件中的 `addons` 字段安装,或者在您后续使用 `./kk create cluster` 命令时再次添加这个标志。

@@ -33,7 +33,7 @@ weight: 3150
从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或直接使用以下命令。

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{</ tab >}}

@@ -49,7 +49,7 @@ export KKZONE=cn
执行以下命令下载 KubeKey:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{< notice note >}}

@@ -82,7 +82,7 @@ chmod +x kk

{{< notice note >}}

- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。
- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。

- 如果您在这一步的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere,只能使用配置文件中的 `addons` 字段安装,或者在您后续使用 `./kk create cluster` 命令时再次添加这个标志。

@@ -267,7 +267,7 @@ yum install keepalived haproxy psmisc -y
从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或者直接使用以下命令。

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{</ tab >}}

@@ -283,7 +283,7 @@ export KKZONE=cn
运行以下命令来下载 KubeKey:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{< notice note >}}

@@ -316,7 +316,7 @@ chmod +x kk

{{< notice note >}}

- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。
- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。

- 如果您没有在本步骤的命令中添加标志 `--with-kubesphere`,那么除非您使用配置文件中的 `addons` 字段进行安装,或者稍后使用 `./kk create cluster` 时再添加该标志,否则 KubeSphere 将不会被部署。
- 如果您添加标志 `--with-kubesphere` 时未指定 KubeSphere 版本,则会安装最新版本的 KubeSphere。

@@ -32,7 +32,7 @@ KubeKey v2.1.0 版本新增了清单(manifest)和制品(artifact)的概
从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或者直接运行以下命令。

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.10 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{</ tab >}}

@@ -48,7 +48,7 @@ KubeKey v2.1.0 版本新增了清单(manifest)和制品(artifact)的概
运行以下命令来下载 KubeKey:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.10 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```
{{</ tab >}}

@@ -39,7 +39,7 @@ KubeKey 的几种使用场景:
从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或者直接运行以下命令。

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{</ tabs >}}

@@ -55,7 +55,7 @@ export KKZONE=cn
运行以下命令来下载 KubeKey:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{< notice note >}}

@@ -85,6 +85,6 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
{{< notice note >}}

- 您也可以运行 `./kk version --show-supported-k8s`,查看能使用 KubeKey 安装的所有受支持的 Kubernetes 版本。
- 能使用 KubeKey 安装的 Kubernetes 版本与 KubeSphere 3.4 支持的 Kubernetes 版本不同。如需[在现有 Kubernetes 集群上安装 KubeSphere 3.4](../../../installing-on-kubernetes/introduction/overview/),您的 Kubernetes 版本必须为 v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。
- 带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如果您需要使用 KubeEdge,为了避免兼容性问题,建议安装 v1.21.x 版本的 Kubernetes。
- 能使用 KubeKey 安装的 Kubernetes 版本与 KubeSphere 3.4 支持的 Kubernetes 版本不同。如需[在现有 Kubernetes 集群上安装 KubeSphere 3.4](../../../installing-on-kubernetes/introduction/overview/),您的 Kubernetes 版本必须为 v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。
- 带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如果您需要使用 KubeEdge,为了避免兼容性问题,建议安装 v1.23.x 版本的 Kubernetes。
{{</ notice >}}
@@ -101,7 +101,7 @@ KubeKey 可以一同安装 Kubernetes 和 KubeSphere。根据要安装的 Kubern
从 [GitHub 发布页面](https://github.com/kubesphere/kubekey/releases)下载 KubeKey 或直接使用以下命令。

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{</ tab >}}

@@ -117,7 +117,7 @@ export KKZONE=cn
执行以下命令下载 KubeKey:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{< notice note >}}

@@ -156,7 +156,7 @@ chmod +x kk

{{< notice note >}}

- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。
- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。

- 如果您在此步骤的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere,只能使用配置文件中的 `addons` 字段安装,或者在您后续使用 `./kk create cluster` 命令时再次添加这个标志。

@@ -32,7 +32,7 @@ weight: 3530
从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或直接运行以下命令:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{</ tab >}}

@@ -48,7 +48,7 @@ export KKZONE=cn
运行以下命令来下载 KubeKey:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{< notice note >}}

@@ -178,7 +178,7 @@ chmod +x kk

{{< notice note >}}

您可以在安装后启用 KubeSphere 的可插拔组件,但由于在 KubeSphere 上部署 K3s 目前处于测试阶段,某些功能可能不兼容。
您可以在安装后启用 KubeSphere 的可插拔组件,但由于在 K3s 上部署 KubeSphere 目前处于测试阶段,某些功能可能不兼容。

{{</ notice >}}

@@ -200,7 +200,7 @@ yum install conntrack-tools
从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或使用以下命令:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{</ tab >}}

@@ -216,7 +216,7 @@ export KKZONE=cn
执行以下命令下载 KubeKey:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{< notice note >}}

@@ -253,7 +253,7 @@ chmod +x kk

{{< notice note >}}

- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。
- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。

- 如果您在这一步的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere,只能使用配置文件中的 `addons` 字段安装 KubeSphere,或者在您后续使用 `./kk create cluster` 命令时再次添加该标志。
- 如果您添加标志 `--with-kubesphere` 时不指定 KubeSphere 版本,则会安装最新版本的 KubeSphere。

@@ -288,7 +288,7 @@ systemctl status -l keepalived
从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或直接使用以下命令。

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{</ tab >}}

@@ -304,7 +304,7 @@ export KKZONE=cn
执行以下命令下载 KubeKey。

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{< notice note >}}

@@ -343,7 +343,7 @@ chmod +x kk

{{< notice note >}}

- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。
- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。

- 如果您在这一步的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere,只能使用配置文件中的 `addons` 字段安装,或者在您后续使用 `./kk create cluster` 命令时再次添加这个标志。

@@ -119,7 +119,7 @@ weight: 3340
从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或者直接运行以下命令。

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{</ tab >}}

@@ -135,7 +135,7 @@ export KKZONE=cn
运行以下命令来下载 KubeKey:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{< notice note >}}

@@ -170,7 +170,7 @@ chmod +x kk

{{< notice note >}}

- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。
- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。

- 如果您在此步骤的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere,只能使用配置文件中的 `addons` 字段安装,或者在您后续使用 `./kk create cluster` 命令时再次添加这个标志。
- 如果您添加标志 `--with-kubesphere` 时不指定 KubeSphere 版本,则会安装最新版本的 KubeSphere。

@@ -71,7 +71,7 @@ weight: 3330
从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或者直接运行以下命令。

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{</ tab >}}

@@ -87,7 +87,7 @@ export KKZONE=cn
运行以下命令来下载 KubeKey:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{< notice note >}}

@@ -122,7 +122,7 @@ chmod +x kk

{{< notice note >}}

- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。
- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。

- 如果您在此步骤的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere,只能使用配置文件中的 `addons` 字段安装,或者在您后续使用 `./kk create cluster` 命令时再次添加这个标志。
- 如果您添加标志 `--with-kubesphere` 时不指定 KubeSphere 版本,则会安装最新版本的 KubeSphere。

@@ -73,7 +73,7 @@ weight: 3320
从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或者直接运行以下命令。

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{</ tab >}}

@@ -89,7 +89,7 @@ export KKZONE=cn
运行以下命令下载 KubeKey:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{< notice note >}}

@@ -124,7 +124,7 @@ chmod +x kk

{{< notice note >}}

- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。
- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。

- 如果您在此步骤的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere,只能使用配置文件中的 `addons` 字段安装,或者在您后续使用 `./kk create cluster` 命令时再次添加这个标志。
- 如果您添加标志 `--with-kubesphere` 时不指定 KubeSphere 版本,则会安装最新版本的 KubeSphere。

@@ -91,7 +91,7 @@ controlPlaneEndpoint:
从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或直接使用以下命令。

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{</ tab >}}

@@ -107,7 +107,7 @@ export KKZONE=cn
执行以下命令下载 KubeKey。

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{< notice note >}}

@@ -102,7 +102,7 @@ ssh -i .ssh/id_rsa2 -p50200 kubesphere@40.81.5.xx
从 KubeKey 的 [Github 发布页面](https://github.com/kubesphere/kubekey/releases)下载,或执行以下命令:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{</ tab >}}

@@ -118,7 +118,7 @@ export KKZONE=cn
运行以下命令下载 KubeKey:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{< notice note >}}

@@ -153,7 +153,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -

{{< notice note >}}

- KubeSphere 3.4 对应 Kubernetes 版本推荐:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。如果未指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关支持的 Kubernetes 版本请参阅[支持矩阵](../../../installing-on-linux/introduction/kubekey/#support-matrix)。
- KubeSphere 3.4 对应 Kubernetes 版本推荐:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。如果未指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关支持的 Kubernetes 版本请参阅[支持矩阵](../../../installing-on-linux/introduction/kubekey/#support-matrix)。
- 如果在此步骤的命令中未添加标志 `--with-kubesphere`,则不会部署 KubeSphere,除非您使用配置文件中的 `addons` 字段进行安装,或稍后使用 `./kk create cluster` 时再次添加此标志。

- 如果在未指定 KubeSphere 版本的情况下添加标志 `--with-kubesphere`,将安装 KubeSphere 的最新版本。

@@ -85,7 +85,7 @@ Kubernetes 服务需要做到高可用,需要保证 kube-apiserver 的 HA ,
从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或直接使用以下命令。

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{</ tab >}}

@@ -101,7 +101,7 @@ export KKZONE=cn
执行以下命令下载 KubeKey。

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{< notice note >}}

@@ -126,7 +126,7 @@ Weight: 3420
从 [GitHub 发布页面](https://github.com/kubesphere/kubekey/releases)下载 KubeKey 或直接使用以下命令:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{</ tab >}}

@@ -142,7 +142,7 @@ export KKZONE=cn
执行以下命令下载 KubeKey:

```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
```

{{< notice note >}}

@@ -175,7 +175,7 @@ chmod +x kk

{{< notice note >}}

- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。
- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。

- 如果您在这一步的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere,只能使用配置文件中的 `addons` 字段安装,或者在您后续使用 `./kk create cluster` 命令时再次添加这个标志。
@ -63,8 +63,6 @@ KubeSphere 将 PVC 绑定到满足您设定的请求条件(例如容量和访
|
|||
|
||||
- 新建的持久卷声明也会显示在**集群管理**中的**持久卷声明**页面。集群管理员需要查看和跟踪项目中创建的持久卷声明。另一方面,集群管理员在**集群管理**中为项目创建的持久卷声明也会显示在项目的**持久卷声明**页面。
|
||||
|
||||
- 一些持久卷声明是动态供应的持久卷声明,它们的状态会在创建后立刻从**等待中**变为**已绑定**。其他仍处于**等待中**的持久卷声明会在挂载至工作负载后变为**已绑定**。持久卷声明是否支持动态供应取决于其存储类。例如,如果您使用默认的存储类型 (OpenEBS) 安装 KubeSphere,您只能创建不支持动态供应的本地持久卷声明。这类持久卷声明的绑定模式由 YAML 文件中的 `VolumeBindingMode: WaitForFirstConsumer` 字段指定。
|
||||
|
||||
- 一些持久卷声明是动态供应的持久卷声明,它们的状态会在创建后立刻从**等待中**变为**已绑定**。其他仍处于**等待中**的持久卷声明会在挂载至工作负载后变为**已绑定**。持久卷声明是否支持动态供应取决于其存储类。例如,如果您使用默认的存储类型 (OpenEBS) 安装 KubeSphere,您只能创建不支持动态供应的本地持久卷声明。这类持久卷声明的绑定模式由 YAML 文件中的 `VolumeBindingMode: WaitForFirstConsumer` 字段指定。
|
||||
|
||||
{{</ notice >}}
|
||||
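上文提到的绑定模式可以用如下 StorageClass 示意(假设性示例,名称 `local-storage` 仅供演示;`volumeBindingMode: WaitForFirstConsumer` 表示持久卷声明在被工作负载挂载前保持等待中):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # 本地卷不支持动态供应
volumeBindingMode: WaitForFirstConsumer     # 挂载至工作负载后才绑定
```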
|
|
|
|||
|
|
@ -145,7 +145,7 @@ KubeKey 是用 Go 语言开发的一款全新的安装工具,代替了以前
|
|||
从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或直接使用以下命令(在 Ubuntu 上请将 sh 替换为 bash)。
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
|
||||
```
|
||||
|
||||
{{</ tab >}}
|
||||
|
|
@ -161,7 +161,7 @@ export KKZONE=cn
|
|||
执行以下命令下载 KubeKey。
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
|
@ -202,7 +202,7 @@ chmod +x kk
|
|||
|
||||
{{< notice note >}}
|
||||
|
||||
- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../installing-on-linux/introduction/kubekey/#支持矩阵)。
|
||||
- 安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../installing-on-linux/introduction/kubekey/#支持矩阵)。
|
||||
|
||||
- 一般来说,对于 All-in-One 安装,您无需更改任何配置。
|
||||
- 如果您在这一步的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere,KubeKey 将只安装 Kubernetes。如果您添加标志 `--with-kubesphere` 时不指定 KubeSphere 版本,则会安装最新版本的 KubeSphere。
|
||||
|
|
|
|||
|
|
@ -10,7 +10,7 @@ weight: 2200
|
|||
|
||||
## 准备工作
|
||||
|
||||
- 您的 Kubernetes 版本必须为:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。
|
||||
- 您的 Kubernetes 版本必须为:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。
|
||||
- 确保您的机器满足最低硬件要求:CPU > 1 核,内存 > 2 GB。
|
||||
- 在安装之前,需要配置 Kubernetes 集群中的**默认**存储类型。
|
||||
|
||||
|
|
|
|||
|
|
@ -11,11 +11,11 @@ weight: 7500
|
|||
|
||||
## 准备工作
|
||||
|
||||
- 您需要有一个运行 KubeSphere v3.2.x 的集群。如果您的 KubeSphere 是 v3.1.0 或更早的版本,请先升级至 v3.2.x。
|
||||
- 您需要有一个运行 KubeSphere v3.3.x 的集群。如果您的 KubeSphere 是 v3.2.0 或更早的版本,请先升级至 v3.3.x。
|
||||
- 请仔细阅读 [3.4.0 版本说明](../../../v3.4/release/release-v340/)。
|
||||
- 提前备份所有重要的组件。
|
||||
- Docker 仓库。您需要有一个 Harbor 或其他 Docker 仓库。有关更多信息,请参见[准备一个私有镜像仓库](../../installing-on-linux/introduction/air-gapped-installation/#步骤-2准备一个私有镜像仓库)。
|
||||
- KubeSphere 3.4 支持的 Kubernetes 版本:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。
|
||||
- KubeSphere 3.4 支持的 Kubernetes 版本:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。
|
||||
|
||||
## 重要提示
|
||||
|
||||
|
|
|
|||
|
|
@ -9,8 +9,8 @@ weight: 7400
|
|||
|
||||
## 准备工作
|
||||
|
||||
- 您需要有一个运行 KubeSphere v3.2.x 的集群。如果您的 KubeSphere 是 v3.1.0 或更早的版本,请先升级至 v3.2.x。
|
||||
- 您的 Kubernetes 版本必须为 1.20.x、1.21.x、1.22.x,1.23.x 或 1.24.x。
|
||||
- 您需要有一个运行 KubeSphere v3.3.x 的集群。如果您的 KubeSphere 是 v3.2.0 或更早的版本,请先升级至 v3.3.x。
|
||||
- 您的 Kubernetes 版本必须为 v1.20.x、v1.21.x、v1.22.x、v1.23.x、v1.24.x、v1.25.x 或 v1.26.x。
|
||||
- 请仔细阅读 [3.4.0 版本说明](../../../v3.4/release/release-v340/)。
|
||||
- 提前备份所有重要的组件。
|
||||
- Docker 仓库。您需要有一个 Harbor 或其他 Docker 仓库。
|
||||
|
|
@ -67,7 +67,7 @@ KubeSphere 3.4 对内置角色和自定义角色的授权项做了一些调整
|
|||
从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或者直接运行以下命令。
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
|
||||
```
|
||||
|
||||
{{</ tab >}}
|
||||
|
|
@ -83,7 +83,7 @@ KubeSphere 3.4 对内置角色和自定义角色的授权项做了一些调整
|
|||
运行以下命令来下载 KubeKey:
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
|
||||
```
|
||||
{{</ tab >}}
|
||||
|
||||
|
|
@ -155,7 +155,7 @@ KubeSphere 3.4 对内置角色和自定义角色的授权项做了一些调整
|
|||
|
||||
{{< notice note >}}
|
||||
|
||||
- 您可以根据自己的需求变更下载的 Kubernetes 版本。安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../installing-on-linux/introduction/kubekey/#支持矩阵)。
|
||||
- 您可以根据自己的需求变更下载的 Kubernetes 版本。安装 KubeSphere 3.4 的建议 Kubernetes 版本:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.10。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../installing-on-linux/introduction/kubekey/#支持矩阵)。
|
||||
|
||||
- 运行脚本后,会自动创建一个文件夹 `kubekey`。请注意,您稍后创建集群时,该文件和 `kk` 必须放在同一个目录下。
|
||||
|
||||
|
|
@ -264,7 +264,7 @@ KubeSphere 3.4 对内置角色和自定义角色的授权项做了一些调整
|
|||
./kk upgrade -f config-sample.yaml
|
||||
```
|
||||
|
||||
要将 Kubernetes 升级至特定版本,可以在 `--with-kubernetes` 标志后明确指定版本号。以下是可用版本:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。
|
||||
要将 Kubernetes 升级至特定版本,可以在 `--with-kubernetes` 标志后明确指定版本号。以下是可用版本:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。
|
||||
|
||||
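作为示意(假设性示例,版本号仅供演示),指定目标 Kubernetes 版本的升级命令形如:

```bash
./kk upgrade --with-kubernetes v1.23.10 --with-kubesphere v3.4.0 -f config-sample.yaml
```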
### 离线升级多节点集群
|
||||
|
||||
|
|
@ -348,5 +348,5 @@ KubeSphere 3.4 对内置角色和自定义角色的授权项做了一些调整
|
|||
./kk upgrade -f config-sample.yaml
|
||||
```
|
||||
|
||||
要将 Kubernetes 升级至特定版本,可以在 `--with-kubernetes` 标志后明确指定版本号。以下是可用版本:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。
|
||||
要将 Kubernetes 升级至特定版本,可以在 `--with-kubernetes` 标志后明确指定版本号。以下是可用版本:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。
|
||||
|
||||
|
|
|
|||
|
|
@ -10,7 +10,7 @@ weight: 7100
|
|||
|
||||
KubeSphere 3.4 与 Kubernetes v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x 兼容(带星号的版本可能出现边缘节点部分功能不可用的情况):
|
||||
|
||||
- 在您升级集群至 KubeSphere 3.4 之前,您的 KubeSphere 集群版本必须为 v3.2.x。
|
||||
- 在您升级集群至 KubeSphere 3.4 之前,您的 KubeSphere 集群版本必须为 v3.3.x。
|
||||
|
||||
- 您可选择只将 KubeSphere 升级到 3.4 或者同时升级 Kubernetes(到更高版本)和 KubeSphere(到 3.4)。
|
||||
|
||||
|
|
@ -28,4 +28,4 @@ KubeSphere 3.4 与 Kubernetes 1.19.x、1.20.x、1.21.x、* 1.22.x、* 1.23.x、*
|
|||
|
||||
## 升级工具
|
||||
|
||||
根据您已有集群的搭建方式,您可以使用 KubeKey 或 ks-installer 升级集群。如果您的集群由 KubeKey 搭建,[建议您使用 KubeKey 升级集群](../upgrade-with-kubekey/)。如果您通过其他方式搭建集群,[请使用 ks-installer 升级集群](../upgrade-with-ks-installer/)。
|
||||
根据您已有集群的搭建方式,您可以使用 KubeKey 或 ks-installer 升级集群。如果您的集群由 KubeKey 搭建,[建议您使用 KubeKey 升级集群](../upgrade-with-kubekey/)。如果您通过其他方式搭建集群,[请使用 ks-installer 升级集群](../upgrade-with-ks-installer/)。
|
||||
|
|
|
|||
|
|
@ -10,10 +10,10 @@ weight: 7300
|
|||
|
||||
## 准备工作
|
||||
|
||||
- 您需要有一个运行 KubeSphere v3.2.x 的集群。如果您的 KubeSphere 是 v3.1.0 或更早的版本,请先升级至 v3.2.x。
|
||||
- 您需要有一个运行 KubeSphere v3.3.x 的集群。如果您的 KubeSphere 是 v3.2.0 或更早的版本,请先升级至 v3.3.x。
|
||||
- 请仔细阅读 [3.4.0 版本说明](../../../v3.4/release/release-v340/)。
|
||||
- 提前备份所有重要的组件。
|
||||
- KubeSphere 3.4 支持的 Kubernetes 版本:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。
|
||||
- KubeSphere 3.4 支持的 Kubernetes 版本:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。
|
||||
|
||||
## 重要提示
|
||||
|
||||
|
|
|
|||
|
|
@ -41,7 +41,7 @@ KubeSphere 3.4 对内置角色和自定义角色的授权项做了一些调整
|
|||
从 [GitHub 发布页面](https://github.com/kubesphere/kubekey/releases)下载 KubeKey 或直接使用以下命令。
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
|
||||
```
|
||||
|
||||
{{</ tab >}}
|
||||
|
|
@ -57,7 +57,7 @@ export KKZONE=cn
|
|||
执行以下命令下载 KubeKey。
|
||||
|
||||
```bash
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.7 sh -
|
||||
curl -sfL https://get-kk.kubesphere.io | VERSION=v3.0.13 sh -
|
||||
```
|
||||
|
||||
{{< notice note >}}
|
||||
|
|
@ -99,7 +99,7 @@ chmod +x kk
|
|||
./kk upgrade --with-kubernetes v1.22.12 --with-kubesphere v3.4.0
|
||||
```
|
||||
|
||||
要将 Kubernetes 升级至特定版本,请在 `--with-kubernetes` 标志后明确指定版本号。以下是可用版本:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。
|
||||
要将 Kubernetes 升级至特定版本,请在 `--with-kubernetes` 标志后明确指定版本号。以下是可用版本:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。
|
||||
|
||||
### 多节点集群
|
||||
|
||||
|
|
@ -140,7 +140,7 @@ chmod +x kk
|
|||
./kk upgrade --with-kubernetes v1.22.12 --with-kubesphere v3.4.0 -f sample.yaml
|
||||
```
|
||||
|
||||
要将 Kubernetes 升级至特定版本,请在 `--with-kubernetes` 标志后明确指定版本号。以下是可用版本:v1.20.x、v1.21.x、* v1.22.x、* v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.21.x。
|
||||
要将 Kubernetes 升级至特定版本,请在 `--with-kubernetes` 标志后明确指定版本号。以下是可用版本:v1.20.x、v1.21.x、v1.22.x、v1.23.x、* v1.24.x、* v1.25.x 和 * v1.26.x。带星号的版本可能出现边缘节点部分功能不可用的情况。因此,如需使用边缘节点,推荐安装 v1.23.x。
|
||||
|
||||
{{< notice note >}}
|
||||
|
||||
|
|
|
|||
|
|
@ -9,23 +9,65 @@ section1:
|
|||
image: /images/live/background.jpg
|
||||
|
||||
section2:
|
||||
image: /images/live/cloudnative-live-20230805.png
|
||||
url: ./meetup-shanghai-20230805/
|
||||
image: /images/live/cloudnative-live-20231104.png
|
||||
url: ./meetup-chengdu-20231104/
|
||||
|
||||
notice:
|
||||
title: 万亿级流量下的视频行业云原生建设之路
|
||||
tag: 结束
|
||||
time: 2023 年 07 月 27 日
|
||||
base: 线上
|
||||
url: ./cloudnative0727-live/
|
||||
title: 云原生 + 可观测性 Meetup 广州站
|
||||
tag: 预告
|
||||
time: 2023 年 11 月 25 日
|
||||
base: 广州
|
||||
url: ./meetup-guangzhou-20231125/
|
||||
|
||||
over:
|
||||
title: Meetup 杭州站
|
||||
title: Meetup 上海站
|
||||
tag: 结束
|
||||
url: ./meetup-hangzhou-20230603/
|
||||
url: ./meetup-shanghai-20230805/
|
||||
|
||||
section3:
|
||||
videos:
|
||||
- title: 使用可插拔架构集成多个多集群解决方案
|
||||
link: ./chengdu1104-kubesphere-v4.0/
|
||||
snapshot: https://pek3b.qingstor.com/kubesphere-community/images/meetup-chengdu-20231104-6.png
|
||||
type: iframe
|
||||
createTime: 2023.11.04
|
||||
group: Meetup
|
||||
|
||||
- title: KubeBlocks RSM:如何让数据库更好的跑在 K8s 上
|
||||
link: ./chengdu1104-kubeblocks-rsm/
|
||||
snapshot: https://pek3b.qingstor.com/kubesphere-community/images/meetup-chengdu-20231104-5.png
|
||||
type: iframe
|
||||
createTime: 2023.11.04
|
||||
group: Meetup
|
||||
|
||||
- title: KubeBlocks 简介及部署 AIGC 基础设施演示
|
||||
link: ./chengdu1104-kubeblocks/
|
||||
snapshot: https://pek3b.qingstor.com/kubesphere-community/images/meetup-chengdu-20231104-4.png
|
||||
type: iframe
|
||||
createTime: 2023.11.04
|
||||
group: Meetup
|
||||
|
||||
- title: SOFABoot 4.0-迈向 JDK17 新时代
|
||||
link: ./chengdu1104-sofaboot/
|
||||
snapshot: https://pek3b.qingstor.com/kubesphere-community/images/meetup-chengdu-20231104-3.png
|
||||
type: iframe
|
||||
createTime: 2023.11.04
|
||||
group: Meetup
|
||||
|
||||
- title: KubeSphere 平台整合多样化云原生网关的设计
|
||||
link: ./chengdu1104-gateway/
|
||||
snapshot: https://pek3b.qingstor.com/kubesphere-community/images/meetup-chengdu-20231104-2.png
|
||||
type: iframe
|
||||
createTime: 2023.11.04
|
||||
group: Meetup
|
||||
|
||||
- title: DLRover:蚂蚁大模型训练弹性容错与自动优化
|
||||
link: ./chengdu1104-dlrover/
|
||||
snapshot: https://pek3b.qingstor.com/kubesphere-community/images/meetup-chengdu-20231104-1.png
|
||||
type: iframe
|
||||
createTime: 2023.11.04
|
||||
group: Meetup
|
||||
|
||||
- title: EMQX 云服务的 Serverless 实践
|
||||
link: ./shanghai0805-emqx/
|
||||
snapshot: https://pek3b.qingstor.com/kubesphere-community/images/20230805-shanghai-meetup-6-cover.png
|
||||
|
|
@ -1076,6 +1118,10 @@ section4:
|
|||
list:
|
||||
- year: 2023
|
||||
meetup:
|
||||
- place: 成都站(11.04)
|
||||
img: https://pek3b.qingstor.com/kubesphere-community/images/meetup-chengdu-20231104-cover.png
|
||||
meetupUrl: https://kubesphere.io/zh/live/meetup-chengdu-20231104/
|
||||
|
||||
- place: 上海站(08.05)
|
||||
img: https://pek3b.qingstor.com/kubesphere-community/images/meetup-shanghai-20230805-cover.png
|
||||
meetupUrl: https://kubesphere.io/zh/live/meetup-shanghai-20230805/
|
||||
|
|
|
|||
|
|
@ -0,0 +1,37 @@
|
|||
---
|
||||
title: DLRover:蚂蚁大模型训练弹性容错与自动优化
|
||||
description: 本次分享将介绍 DLRover 的容错如何提高大规模分布式训练的稳定性和训练的自动优化。
|
||||
keywords: KubeSphere, Kubernetes, DLRover
|
||||
css: scss/live-detail.scss
|
||||
|
||||
section1:
|
||||
snapshot:
|
||||
videoUrl: //player.bilibili.com/player.html?aid=493119904&bvid=BV1oN411u7f8&cid=1323370126&page=1&high_quality=1
|
||||
type: iframe
|
||||
time: 2023-11-04 14:00-18:00
|
||||
timeIcon: /images/live/clock.svg
|
||||
base: 成都 + 线上
|
||||
baseIcon: /images/live/base.svg
|
||||
---
|
||||
|
||||
## 分享人简介
|
||||
|
||||
王勤龙,蚂蚁集团技术专家,AI 系统工程师。
|
||||
|
||||

|
||||
|
||||
## 议题简介
|
||||
|
||||
介绍 DLRover 云上弹性容错的分布式训练架构。本次分享将介绍 DLRover 的容错如何提高大规模分布式训练的稳定性和训练的自动优化。同时还会介绍 DLRover 分布式训练的资源自动扩缩容功能如何降低分布式训练门槛,提升训练性能和集群效能。
|
||||
|
||||
## 听众受益
|
||||
|
||||
- 了解 DLRover 项目及架构。
|
||||
- 了解分布式训练弹性、容错和自动扩缩容的原理。
|
||||
- 了解分布式训练自动调优的原理与实现。
|
||||
|
||||
## 下载 PPT
|
||||
|
||||
可扫描官网底部二维码,关注 「KubeSphere云原生」公众号,后台回复 `20231104` 即可下载 PPT。
|
||||
|
||||

|
||||
|
|
@ -0,0 +1,46 @@
|
|||
---
|
||||
title: KubeSphere 平台整合多样化云原生网关的设计
|
||||
description: KubeSphere 作为平台工程是如何设计开放式的网关集成方式,以满足多样化的需求呢?
|
||||
keywords: KubeSphere, Kubernetes, 网关
|
||||
css: scss/live-detail.scss
|
||||
|
||||
section1:
|
||||
snapshot:
|
||||
videoUrl: //player.bilibili.com/player.html?aid=235567328&bvid=BV1Ee411X7C6&cid=1323376078&page=1&high_quality=1
|
||||
type: iframe
|
||||
time: 2023-11-04 14:00-18:00
|
||||
timeIcon: /images/live/clock.svg
|
||||
base: 成都 + 线上
|
||||
baseIcon: /images/live/base.svg
|
||||
---
|
||||
|
||||
## 分享人简介
|
||||
|
||||
魏泓舟,青云科技高级研发工程师。现主要负责 KubeSphere 团队微服务领域相关研发工作,主要涉及云原生网关、Service Mesh、Spring Cloud、应用等模块的集成。曾从事 Spring Cloud 微服务体系的 Java 应用研发与基于云平台落地实践。
|
||||
|
||||

|
||||
|
||||
## 议题简介
|
||||
|
||||
随着云原生网关越来越丰富多样化,我们的选择自然也更多。然而,在 KubeSphere 3.x 及之前版本中,只支持使用 Ingress-NGINX 作为云原生网关的实现,不支持其他网关,如 APISIX、Kong、Traefik 等。这在一定程度上限制了 KubeSphere 用户对网关的选择,并且存在耦合度较高、不易扩展改造等问题。
|
||||
|
||||
面对这些挑战,KubeSphere 作为平台工程是如何设计开放式的网关集成方式,以满足多样化的需求呢?这将是我们本次分享和讨论的主题。
|
||||
|
||||
## 议题大纲
|
||||
|
||||
- 云原生网关简介
|
||||
- KubeSphere 集成网关的演进过程
|
||||
- 整合多样化云原生网关的设计
|
||||
- 示例效果展示
|
||||
|
||||
## 听众受益
|
||||
|
||||
- 了解云原生网关
|
||||
- 了解 KubeSphere 集成网关的思想
|
||||
- 了解平台级项目整合多样化网关的设计思路
|
||||
|
||||
## 下载 PPT
|
||||
|
||||
可扫描官网底部二维码,关注 「KubeSphere云原生」公众号,后台回复 `20231104` 即可下载 PPT。
|
||||
|
||||

|
||||
|
|
@ -0,0 +1,55 @@
|
|||
---
|
||||
title: KubeBlocks RSM:如何让数据库更好的跑在 K8s 上
|
||||
description: KubeBlocks 中设计了 StatefulSet 的增强版本 RSM 以解决上述问题,本次分享讲解 RSM 的核心设计思路和原理。
|
||||
keywords: KubeSphere, Kubernetes, KubeBlocks, RSM
|
||||
css: scss/live-detail.scss
|
||||
|
||||
section1:
|
||||
snapshot:
|
||||
videoUrl: //player.bilibili.com/player.html?aid=535532341&bvid=BV1SM411Q79p&cid=1323375006&page=1&high_quality=1
|
||||
type: iframe
|
||||
time: 2023-11-04 14:00-18:00
|
||||
timeIcon: /images/live/clock.svg
|
||||
base: 成都 + 线上
|
||||
baseIcon: /images/live/base.svg
|
||||
---
|
||||
|
||||
## 分享人简介
|
||||
|
||||
吴学强,云猿生数据高级技术专家。原阿里云 PolarDB-X 云原生分布式数据库技术负责人之一,毕业于浙江大学计算机学院,兴趣广泛,对操作系统、密码学、分布式系统等均有涉猎。2017 年加入 PolarDB-X 团队进行高并发低延迟的 MySQL 分布式相关系统开发工作,负责 PolarDB-X 的云原生底座打造、生态系统连接、开源等开放生态构建工作。现为开源数据基础设施 KubeBlocks 核心开发者。
|
||||
|
||||

|
||||
|
||||
## 议题简介
|
||||
|
||||
K8s 中管理数据库这种有状态应用的组件是 StatefulSet,但其并不能很好地满足数据库的高可用要求:
|
||||
- 数据库通常有读写节点和只读节点,StatefulSet 中该如何支持?
|
||||
- 想增加一个只读节点到现有的集群,如何正确搭建复制关系?
|
||||
- 发生了主备切换,对外服务的 Service 如何自动感知并切换?
|
||||
- 想先升级备库,后升级主库,怎么办?想先将 Leader 切换到别的节点以降低系统不可用时长该怎么做?
|
||||
|
||||
KubeBlocks 中设计了 StatefulSet 的增强版本 RSM 以解决上述问题,本次分享讲解 RSM 的核心设计思路和原理。
|
||||
|
||||
## 议题大纲
|
||||
|
||||
- 数据库的本质
|
||||
- 角色抽象与定义
|
||||
- 基于角色对外提供服务
|
||||
- 基于角色的更新策略
|
||||
- 角色探测与更新
|
||||
- 成员管理
|
||||
- switchover 与 failover
|
||||
- 数据副本准备
|
||||
|
||||
## 听众受益
|
||||
|
||||
- 理解数据库的状态复杂在哪里
|
||||
- 理解数据库高可用该考虑哪些方面
|
||||
- 了解 RSM 的核心设计思路和原理
|
||||
- 了解 KubeBlocks 为什么更适合管理数据库
|
||||
|
||||
## 下载 PPT
|
||||
|
||||
可扫描官网底部二维码,关注 「KubeSphere云原生」公众号,后台回复 `20231104` 即可下载 PPT。
|
||||
|
||||

|
||||
|
|
@ -0,0 +1,51 @@
|
|||
---
|
||||
title: KubeBlocks 简介及部署 AIGC 基础设施演示
|
||||
description: 此次分享将介绍 KubeBlocks 新版本的主要特性,包括核心 CRD、Controller、扩展机制以及高级运维特性。此外,还将演示如何使用 KubeBlocks 部署 AIGC 基础设施,展示 KubeBlocks 在实际应用中的强大能力。
|
||||
keywords: KubeSphere, Kubernetes, KubeBlocks, AIGC
|
||||
css: scss/live-detail.scss
|
||||
|
||||
section1:
|
||||
snapshot:
|
||||
videoUrl: //player.bilibili.com/player.html?aid=578084710&bvid=BV1rz4y1A773&cid=1323371066&page=1&high_quality=1
|
||||
type: iframe
|
||||
time: 2023-11-04 14:00-18:00
|
||||
timeIcon: /images/live/clock.svg
|
||||
base: 成都 + 线上
|
||||
baseIcon: /images/live/base.svg
|
||||
---
|
||||
|
||||
## 分享人简介
|
||||
|
||||
刘东明,云猿生数据高级技术专家。2015 年加入阿里巴巴,先后从事阿里云云原生数据库 PolarDB-X 和 PolarDB-PostgreSQL 内核研发,负责 PolarDB-PostgreSQL 一写多读架构设计,以及缓冲区管理,查询优化等核心模块研发。现为 KubeBlocks 核心开发者。
|
||||
|
||||

|
||||
|
||||
## 议题简介
|
||||
|
||||
随着 Kubernetes 越来越流行,越来越多的无状态应用运行在 K8s 上。然而,对于有状态应用,特别是数据基础设施如数据库服务,迁移到 K8s 上运行仍然是一件充满挑战的事。KubeBlocks 致力于让 K8s 上的数据基础设施管理就像搭乐高积木一样,既高效又有趣,帮助用户轻松构建容器化、声明式的关系型数据库、NoSQL、流计算和向量数据库服务。
|
||||
|
||||
此次分享将介绍 KubeBlocks 新版本的主要特性,包括核心 CRD、Controller、扩展机制以及高级运维特性。此外,还将演示如何使用 KubeBlocks 部署 AIGC 基础设施,展示 KubeBlocks 在实际应用中的强大能力。
|
||||
|
||||
## 议题大纲
|
||||
|
||||
- KubeBlocks 简介
|
||||
- KubeBlocks 中的 “Block”
|
||||
- KubeBlocks CRDs
|
||||
- KubeBlocks Controllers
|
||||
- KubeBlocks 扩展机制--Add-on
|
||||
- KubeBlocks 高级运维特性
|
||||
- 演示:
|
||||
- 使用 KubeBlocks 部署 AIGC 基础设施 Jupyter Notebook
|
||||
- 简单演示 KubeChat
|
||||
|
||||
## 听众受益
|
||||
|
||||
- 了解 KubeBlocks 核心功能
|
||||
- 了解如何使用 KubeBlocks
|
||||
- 了解如何基于 KubeBlocks 部署 AIGC 基础设置
|
||||
|
||||
## 下载 PPT
|
||||
|
||||
可扫描官网底部二维码,关注 「KubeSphere云原生」公众号,后台回复 `20231104` 即可下载 PPT。
|
||||
|
||||

|
||||
|
|
@ -0,0 +1,41 @@
|
|||
---
|
||||
title: 使用可插拔架构集成多个多集群解决方案
|
||||
description: 在本次演讲中,KubeSphere 维护者将分享他们在如何从特定多集群框架解耦方面的经验,以及作为一个平台,我们如何整合不同的多集群解决方案以满足不同客户的需求。
|
||||
keywords: KubeSphere, Kubernetes, 可插拔, 多集群
|
||||
css: scss/live-detail.scss
|
||||
|
||||
section1:
|
||||
snapshot:
|
||||
videoUrl: //player.bilibili.com/player.html?aid=748081892&bvid=BV1vC4y1J7b6&cid=1323367993&page=1&high_quality=1
|
||||
type: iframe
|
||||
time: 2023-11-04 14:00-18:00
|
||||
timeIcon: /images/live/clock.svg
|
||||
base: 成都 + 线上
|
||||
baseIcon: /images/live/base.svg
|
||||
---
|
||||
|
||||
## 分享人简介
|
||||
|
||||
徐信钊,青云科技高级软件工程师,KubeSphere Maintainer。
|
||||
|
||||

|
||||
|
||||
## 议题简介
|
||||
|
||||
Kubernetes 中多集群领域的发展非常迅速,目前有很多多集群解决方案,如 Karmada、OCM 和 Kubefed 等。随着项目的发展,像我们这样的最终用户经常会遇到这样的情况:我们正在使用的多集群框架已经过时,我们必须切换到新的框架。在本次演讲中,KubeSphere 维护者将分享他们在如何从特定多集群框架解耦方面的经验,以及作为一个平台,我们如何整合不同的多集群解决方案以满足不同客户的需求。
|
||||
|
||||
## 议题大纲
|
||||
|
||||
- KubeSphere 4.0 可插拔架构介绍
|
||||
- 集成多个多集群方案
|
||||
|
||||
## 听众受益
|
||||
|
||||
- 了解 KubeSphere 4.0 可插拔架构
|
||||
- 如何结合可插拔架构集成多个多集群方案以满足不同客户需求
|
||||
|
||||
## 下载 PPT
|
||||
|
||||
可扫描官网底部二维码,关注 「KubeSphere云原生」公众号,后台回复 `20231104` 即可下载 PPT。
|
||||
|
||||

|
||||
|
|
@ -0,0 +1,38 @@
|
|||
---
|
||||
title: SOFABoot 4.0-迈向 JDK17 新时代
|
||||
description: 本次分享将主要介绍 SOFABoot 4 新版本引入的新特性与变化,包括其设计理念与实现方式。
|
||||
keywords: KubeSphere, Kubernetes, SOFABoot
|
||||
css: scss/live-detail.scss
|
||||
|
||||
section1:
|
||||
snapshot:
|
||||
videoUrl: //player.bilibili.com/player.html?aid=833040391&bvid=BV1cg4y1d7WE&cid=1323377652&page=1&high_quality=1
|
||||
type: iframe
|
||||
time: 2023-11-04 14:00-18:00
|
||||
timeIcon: /images/live/clock.svg
|
||||
base: 成都 + 线上
|
||||
baseIcon: /images/live/base.svg
|
||||
---
|
||||
|
||||
## 分享人简介
|
||||
|
||||
胡子杰,蚂蚁集团技术专家,SOFABoot Maintainer。
|
||||
|
||||

|
||||
|
||||
## 议题简介
|
||||
|
||||
本次分享将主要介绍 SOFABoot 4 新版本引入的新特性与变化,包括其设计理念与实现方式。
|
||||
此外,还将介绍 SOFABoot 3 应用如何升级至 SOFABoot 4 版本,并展望 SOFABoot 未来的发展趋势。
|
||||
|
||||
## 听众受益
|
||||
|
||||
- SOFABoot 4 的新特性与变化
|
||||
- 已有应用如何升级至 SOFABoot 4 版本
|
||||
- 一起探讨 SOFABoot 未来发展的趋势
|
||||
|
||||
## 下载 PPT
|
||||
|
||||
可扫描官网底部二维码,关注 「KubeSphere云原生」公众号,后台回复 `20231104` 即可下载 PPT。
|
||||
|
||||

|
||||
|
|
@ -0,0 +1,224 @@
|
|||
---
|
||||
title: 云原生 + AI Meetup 成都站
|
||||
description: 此次 Meetup,我们邀请到了蚂蚁集团、云猿生数据、青云科技等企业专家们,来为大家分享 AI 及云原生主题的技术干货。
|
||||
keywords: KubeSphere, Meetup, Chengdu, Kubernetes, gateway, AI
|
||||
css: scss/live-detail.scss
|
||||
|
||||
section1:
|
||||
snapshot:
|
||||
videoUrl:
|
||||
type: iframe
|
||||
time: 2023-11-04 14:00-18:00
|
||||
timeIcon: /images/live/clock.svg
|
||||
base: 成都 + 线上同步直播
|
||||
baseIcon: /images/live/base.svg
|
||||
---
|
||||
|
||||
2023 年,KubeSphere 社区已经在深圳、杭州、上海三个城市各组织了一场线下 Meetup。第四站,我们将走进天府成都。
|
||||
|
||||
11 月 4 日,云原生 + AI Meetup 成都站将正式开启!
|
||||
|
||||
此次 Meetup,我们邀请到了蚂蚁集团、云猿生数据、青云科技等企业专家们,来为大家分享 AI 及云原生主题的技术干货。
|
||||
|
||||
## 活动时间和地点
|
||||
|
||||
- 时间:2023 年 11 月 4 日 14:00-18:00
|
||||
- 地点:四川省成都市武侯区天府四街蚂蚁集团 C 空间 101 猎户座
|
||||
|
||||
## 活动组织方
|
||||
|
||||
- KubeSphere 社区
|
||||
- KubeBlocks 社区
|
||||
- SOFAStack 社区
|
||||
|
||||
## 议程海报
|
||||
|
||||

|
||||
|
||||
## 分享内容回顾
|
||||
|
||||
## 议题 1:DLRover:蚂蚁大模型训练弹性容错与自动优化
|
||||
|
||||
### 讲师
|
||||
|
||||

|
||||
|
||||
王勤龙,蚂蚁集团技术专家,AI 系统工程师。
|
||||
|
||||
### 议题简介
|
||||
|
||||
介绍 DLRover 云上弹性容错的分布式训练架构。本次分享将介绍 DLRover 的容错如何提高大规模分布式训练的稳定性和训练的自动优化。同时还会介绍 DLRover 分布式训练的资源自动扩缩容功能如何降低分布式训练门槛,提升训练性能和集群效能。
|
||||
|
||||
### 听众受益
|
||||
|
||||
- 了解 DLRover 项目及架构。
|
||||
- 了解分布式训练弹性、容错和自动扩缩容的原理。
|
||||
- 了解分布式训练自动调优的原理与实现。
|
||||
|
||||
<iframe width="760" height="380" src="https://player.bilibili.com/player.html?aid=493119904&bvid=BV1oN411u7f8&cid=1323370126&page=1&high_quality=1" scrolling="no" border="0" frameborder="no" framespacing="0" allowfullscreen="true"> </iframe>
|
||||
|
||||
## 议题 2:KubeSphere 平台整合多样化云原生网关的设计
|
||||
|
||||
### 讲师
|
||||
|
||||

|
||||
|
||||
魏泓舟,青云科技高级研发工程师。现主要负责 KubeSphere 团队微服务领域相关研发工作,主要涉及云原生网关、Service Mesh、Spring Cloud、应用等模块的集成。曾从事 Spring Cloud 微服务体系的 Java 应用研发与基于云平台落地实践。
|
||||
|
||||
### 议题简介
|
||||
|
||||
随着云原生网关越来越丰富多样化,我们的选择自然也更多。然而,在 KubeSphere 3.x 及之前版本中,只支持使用 Ingress-NGINX 作为云原生网关的实现,不支持其他网关,如 APISIX、Kong、Traefik 等。这在一定程度上限制了 KubeSphere 用户对网关的选择,并且存在耦合度较高、不易扩展改造等问题。
|
||||
|
||||
面对这些挑战,KubeSphere 作为平台工程是如何设计开放式的网关集成方式,以满足多样化的需求呢?这将是我们本次分享和讨论的主题。
|
||||
|
||||
### 议题大纲
|
||||
|
||||
- 云原生网关简介
|
||||
- KubeSphere 集成网关的演进过程
|
||||
- 整合多样化云原生网关的设计
|
||||
- 示例效果展示
|
||||
|
||||
### 听众受益
|
||||
|
||||
- 了解云原生网关
|
||||
- 了解 KubeSphere 集成网关的思想
|
||||
- 了解平台级项目整合多样化网关的设计思路
|
||||
|
||||
<iframe width="760" height="380" src="https://player.bilibili.com/player.html?aid=235567328&bvid=BV1Ee411X7C6&cid=1323376078&page=1&high_quality=1" scrolling="no" border="0" frameborder="no" framespacing="0" allowfullscreen="true"> </iframe>
|
||||
|
||||
## 议题 3:SOFABoot 4.0-迈向 JDK17 新时代
|
||||
|
||||
### 讲师
|
||||
|
||||

|
||||
|
||||
胡子杰,蚂蚁集团技术专家,SOFABoot Maintainer。
|
||||
|
||||
### 议题简介
|
||||
|
||||
本次分享将主要介绍 SOFABoot 4 新版本引入的新特性与变化,包括其设计理念与实现方式。
|
||||
此外,还将介绍 SOFABoot 3 应用如何升级至 SOFABoot 4 版本,并展望 SOFABoot 未来的发展趋势。
|
||||
|
||||
### 听众受益
|
||||
|
||||
- SOFABoot 4 的新特性与变化
|
||||
- 已有应用如何升级至 SOFABoot 4 版本
|
||||
- 一起探讨 SOFABoot 未来发展的趋势
|
||||
|
||||
<iframe width="760" height="380" src="https://player.bilibili.com/player.html?aid=833040391&bvid=BV1cg4y1d7WE&cid=1323377652&page=1&high_quality=1" scrolling="no" border="0" frameborder="no" framespacing="0" allowfullscreen="true"> </iframe>
|
||||
|
||||
## 议题 4:KubeBlocks 简介及部署 AIGC 基础设施演示
|
||||
|
||||
### 讲师
|
||||
|
||||

|
||||
|
||||
刘东明,云猿生数据高级技术专家。2015 年加入阿里巴巴,先后从事阿里云云原生数据库 PolarDB-X 和 PolarDB-PostgreSQL 内核研发,负责 PolarDB-PostgreSQL 一写多读架构设计,以及缓冲区管理,查询优化等核心模块研发。现为 KubeBlocks 核心开发者。
|
||||
|
||||
### 议题简介
|
||||
|
||||
随着 Kubernetes 越来越流行,越来越多的无状态应用运行在 K8s 上。然而,对于有状态应用,特别是数据基础设施如数据库服务,迁移到 K8s 上运行仍然是一件充满挑战的事。KubeBlocks 致力于让 K8s 上的数据基础设施管理就像搭乐高积木一样,既高效又有趣,帮助用户轻松构建容器化、声明式的关系型数据库、NoSQL、流计算和向量数据库服务。
|
||||
|
||||
此次分享将介绍 KubeBlocks 新版本的主要特性,包括核心 CRD、Controller、扩展机制以及高级运维特性。此外,还将演示如何使用 KubeBlocks 部署 AIGC 基础设施,展示 KubeBlocks 在实际应用中的强大能力。
|
||||
|
||||
### 议题大纲
|
||||
|
||||
- KubeBlocks 简介
|
||||
- KubeBlocks 中的 “Block”
|
||||
- KubeBlocks CRDs
|
||||
- KubeBlocks Controllers
|
||||
- KubeBlocks 扩展机制--Add-on
|
||||
- KubeBlocks 高级运维特性
|
||||
- 演示:
|
||||
- 使用 KubeBlocks 部署 AIGC 基础设施 Jupyter Notebook
|
||||
- 简单演示 KubeChat
|
||||
|
||||
### 听众受益
|
||||
|
||||
- 了解 KubeBlocks 核心功能
|
||||
- 了解如何使用 KubeBlocks
|
||||
- 了解如何基于 KubeBlocks 部署 AIGC 基础设施
|
||||
|
||||
<iframe width="760" height="380" src="https://player.bilibili.com/player.html?aid=578084710&bvid=BV1rz4y1A773&cid=1323371066&page=1&high_quality=1" scrolling="no" border="0" frameborder="no" framespacing="0" allowfullscreen="true"> </iframe>
|
||||
|
||||
## 议题 5:KubeBlocks RSM:如何让数据库更好的跑在 K8s 上
|
||||
|
||||
### 讲师
|
||||
|
||||

|
||||
|
||||
吴学强,云猿生数据高级技术专家。原阿里云 PolarDB-X 云原生分布式数据库技术负责人之一,毕业于浙江大学计算机学院,兴趣广泛,对操作系统、密码学、分布式系统等均有涉猎。2017 年加入 PolarDB-X 团队进行高并发低延迟的 MySQL 分布式相关系统开发工作,负责 PolarDB-X 的云原生底座打造、生态系统连接、开源等开放生态构建工作。现为开源数据基础设施 KubeBlocks 核心开发者。
|
||||
|
||||
### 议题简介
|
||||
|
||||
K8s 中管理数据库这种有状态应用的组件是 StatefulSet,但其并不能很好地满足数据库的高可用要求:
|
||||
- 数据库通常有读写节点和只读节点,StatefulSet 中该如何支持?
|
||||
- 想增加一个只读节点到现有的集群,如何正确搭建复制关系?
|
||||
- 发生了主备切换,对外服务的 Service 如何自动感知并切换?
|
||||
- 想先升级备库,后升级主库,怎么办?想先将 Leader 切换到别的节点以降低系统不可用时长该怎么做?
|
||||
|
||||
KubeBlocks 中设计了 StatefulSet 的增强版本 RSM 以解决上述问题,本次分享讲解 RSM 的核心设计思路和原理。
|
||||
|
||||
### 议题大纲
|
||||
|
||||
- 数据库的本质
|
||||
- 角色抽象与定义
|
||||
- 基于角色对外提供服务
|
||||
- 基于角色的更新策略
|
||||
- 角色探测与更新
|
||||
- 成员管理
|
||||
- switchover 与 failover
|
||||
- 数据副本准备
|
||||
|
||||
### 听众受益
|
||||
|
||||
- 理解数据库的状态复杂在哪里
|
||||
- 理解数据库高可用该考虑哪些方面
|
||||
- 了解 RSM 的核心设计思路和原理
|
||||
- 了解 KubeBlocks 为什么更适合管理数据库
|
||||
|
||||
<iframe width="760" height="380" src="https://player.bilibili.com/player.html?aid=535532341&bvid=BV1SM411Q79p&cid=1323375006&page=1&high_quality=1" scrolling="no" border="0" frameborder="no" framespacing="0" allowfullscreen="true"> </iframe>
|
||||
|
||||
## 议题 6:使用可插拔架构集成多个多集群解决方案
|
||||
|
||||
### 讲师
|
||||
|
||||

|
||||
|
||||
徐信钊,青云科技高级软件工程师,KubeSphere Maintainer。
|
||||
|
||||
### 议题简介
|
||||
|
||||
Kubernetes 中多集群领域的发展非常迅速,目前有很多多集群解决方案,如 Karmada、OCM 和 Kubefed 等。随着项目的发展,像我们这样的最终用户经常会遇到这样的情况:我们正在使用的多集群框架已经过时,我们必须切换到新的框架。在本次演讲中,KubeSphere 维护者将分享他们在如何从特定多集群框架解耦方面的经验,以及作为一个平台,我们如何整合不同的多集群解决方案以满足不同客户的需求。
|
||||
|
||||
### 议题大纲
|
||||
|
||||
- KubeSphere 4.0 可插拔架构介绍
|
||||
- 集成多个多集群方案
|
||||
|
||||
### 听众受益
|
||||
|
||||
- 了解 KubeSphere 4.0 可插拔架构
|
||||
- 如何结合可插拔架构集成多个多集群方案以满足不同客户需求
|
||||
|
||||
<iframe width="760" height="380" src="https://player.bilibili.com/player.html?aid=748081892&bvid=BV1vC4y1J7b6&cid=1323367993&page=1&high_quality=1" scrolling="no" border="0" frameborder="no" framespacing="0" allowfullscreen="true"> </iframe>
|
||||
|
||||
## PPT 下载
|
||||
|
||||
关注「KubeSphere云原生」公众号,回复关键词 `20231104`,获取 PPT 下载链接。
|
||||
|
||||
> 获取 PPT 下载链接后,若手机无法下载,可在电脑端浏览器打开下载。
|
||||
|
||||
## 现场合照
|
||||
|
||||

|
||||
|
||||
## 致谢
|
||||
|
||||
感谢 SOFAStack 社区和 KubeBlocks 社区对本次活动的大力支持!
|
||||
|
||||
感谢各位讲师贡献的精彩演讲和分享!
|
||||
|
||||
感谢 KubeSphere 社区用户委员会成都站站长周正军以及活动志愿者田惠文、周正纬、何绍辉、曾朝俊对本次活动的支持!
|
||||
|
||||

|
||||
|
|
@ -0,0 +1,57 @@
|
|||
---
|
||||
title: 云原生 + 可观测性 Meetup 广州站
|
||||
description: 此次 Meetup,我们邀请到了 KubeSphere、DeepFlow、SkyWalking 等社区的技术专家们,来为大家分享云原生及可观测性主题的技术干货。
|
||||
keywords: KubeSphere, Meetup, Guangzhou, Kubernetes, DeepFlow, SkyWalking
|
||||
css: scss/live-detail.scss
|
||||
|
||||
section1:
|
||||
snapshot:
|
||||
videoUrl:
|
||||
type: iframe
|
||||
time: 2023-11-25 14:00-18:00
|
||||
timeIcon: /images/live/clock.svg
|
||||
base: 广州 + 线上同步直播
|
||||
baseIcon: /images/live/base.svg
|
||||
---
|
||||
|
||||
2023 年,KubeSphere 社区已经在深圳、杭州、上海和成都这 4 个城市各组织了一场线下 Meetup。第五站,我们将走进广州。
|
||||
|
||||
11 月 25 日,云原生 + 可观测性 Meetup 广州站将正式开启!
|
||||
|
||||
此次 Meetup,我们邀请到了 KubeSphere、DeepFlow、SkyWalking 等社区的技术专家们,来为大家分享云原生及可观测性主题的技术干货。
|
||||
|
||||
欢迎广州的各位小伙伴报名参与!现在即可报名预约!
|
||||
|
||||
## 活动时间和地点
|
||||
|
||||
- 时间:2023 年 11 月 25 日 14:00-18:00
|
||||
- 地点:广州国际科技成果转化(天河)基地三楼星空厅
|
||||
|
||||
## 活动组织方
|
||||
|
||||
### 主办方
|
||||
|
||||
- KubeSphere 社区
|
||||
- DeepFlow 社区
|
||||
|
||||
### 协办方
|
||||
|
||||
- 广州市天河区软件和信息产业协会
|
||||
- 开源科技 OSTech
|
||||
- 广州(国际)科技成果转化天河基地
|
||||
|
||||
## 议程海报
|
||||
|
||||

|
||||
|
||||
## 报名方式
|
||||
|
||||
扫描上方海报二维码或点击[链接](https://resources.qingcloud.com/p/736d56)报名。
|
||||
|
||||
## 互动礼品
|
||||
|
||||
参与本次活动,即有机会获得 KubeSphere 社区周边礼品一份,礼品种类包括:T 恤、马克杯、背包等。
|
||||
|
||||
凡到场的小伙伴,即可获得 KubeSphere 精美贴纸一套。
|
||||
|
||||
此外,KubeSphere 社区将会在现场设置填问卷抽好礼活动,奖品为 KubeSphere 社区周边礼品,如背包、T 恤、马克杯等;最高奖品为 CKA 考试券(仅一张)。
|
||||
|
|
@ -24,7 +24,7 @@ KubeSphere 土耳其地区产品经理 Halil 表示,“华为在土耳其建
|
|||
|
||||
## 致谢
|
||||
|
||||
本次合作是华为、KubeSphere 土耳其、RocketByte 和 EquoSystem 各方员工和贡献者辛苦付出的成果,在此表示由衷的感谢。 在此致谢华为执行主管 Frank Machao 和 Bobby Zhang、华为团队 Yavuz Sarı、Haldun Bozkır、Rıza Can Sevinç、Wu Yongxi 和 Lin Zelin, 以及 KubeSphere 土耳其团队 Eda Konyar、Halil BUGOL 和 Stephane Yasar 等成员的辛苦付出。
|
||||
本次合作是华为、KubeSphere 土耳其和 EquoSystem 各方员工和贡献者辛苦付出的成果,在此表示由衷的感谢。 在此致谢华为执行主管 Frank Machao 和 Bobby Zhang、华为团队 Yavuz Sarı、Haldun Bozkır、Rıza Can Sevinç、Wu Yongxi 和 Lin Zelin, 以及 KubeSphere 土耳其团队 Eda Konyar、Halil BUGOL 和 Stephane Yasar 等成员的辛苦付出。
|
||||
|
||||
## 更多信息
|
||||
|
||||
|
|
|
|||
|
|
@ -0,0 +1,118 @@
|
|||
---
|
||||
title: 'Fluent Operator v2.5.0 发布'
|
||||
tag: '产品动态'
|
||||
keywords: 'Kubernetes, KubeSphere, Fluent Operator'
|
||||
description: '新增多个插件支持。'
|
||||
createTime: '2023-09-19'
|
||||
author: 'KubeSphere'
|
||||
image: 'https://pek3b.qingstor.com/kubesphere-community/images/Fluent-Operator-v2.5.0-cover.png'
|
||||
---
|
||||
|
||||
日前,Fluent Operator 发布了 v2.5.0。
|
||||
|
||||
Fluent Operator v2.5.0 新增 11 个 features,其中 Fluent Bit 新增支持 7 个插件,Fluentd 新增支持 1 个插件。此外,还对 Fluent Operator 本身进行了增强:调整了默认参数以适应更多场景,优化了 Helm chart 使安装更加方便,并修复了部分 bug。
|
||||
|
||||
以下将重点介绍:
|
||||
|
||||
## Fluent Bit 增加多个插件
|
||||
|
||||
### 1. Prometheus Exporter plugin

Fluent Bit adds the Prometheus Exporter output plugin, which takes metrics from Fluent Bit and exposes them so that a Prometheus instance can scrape them.

Related PR: https://github.com/fluent/fluent-operator/pull/840
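In Fluent Operator, each Fluent Bit plugin is usually enabled through a field on a CR. A hedged sketch of wiring up this output follows; the `prometheusExporter` field name, its sub-fields, and the match pattern are assumptions based on the operator's naming conventions, not verified against the v2.5.0 CRDs:

```yaml
# Hypothetical sketch only: exposes Fluent Bit metrics for Prometheus to scrape.
# The prometheusExporter field and its host/port sub-fields are assumed from the
# operator's usual camelCase mapping of Fluent Bit options; check the CRD docs.
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterOutput
metadata:
  name: prometheus-exporter
  labels:
    fluentbit.fluent.io/enabled: "true"
spec:
  match: "metrics.*"               # route only metric records to this output
  prometheusExporter:
    host: "0.0.0.0"                # listen on all interfaces
    port: 2021                     # Prometheus scrapes this port for /metrics
```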
### 2. Forward plugin

Fluent Bit adds the Forward input plugin. Forward is the protocol that Fluent Bit and Fluentd use to route messages between peers; this plugin listens for incoming Forward messages.

Related PR: https://github.com/fluent/fluent-operator/pull/843
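A Forward listener could be declared with a ClusterInput along these lines; the field names under `spec.forward` are assumptions and should be verified against the CRD reference:

```yaml
# Hypothetical sketch only: listens for Forward-protocol messages from peers.
# Field names under spec.forward are assumptions; verify against the CRDs.
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterInput
metadata:
  name: forward
  labels:
    fluentbit.fluent.io/enabled: "true"
spec:
  forward:
    port: 24224      # the conventional Forward port used by Fluentd/Fluent Bit
```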
### 3. GELF plugin

Fluent Bit adds the GELF output plugin. GELF is the Graylog Extended Log Format; the plugin sends logs in GELF format directly to a Graylog input over TLS, TCP, or UDP.

Related PR: https://github.com/fluent/fluent-operator/pull/882

### 4. OpenTelemetry plugin

Fluent Bit adds the OpenTelemetry input plugin, which ingests OpenTelemetry-format data, following the OTLP specification, from OpenTelemetry exporters, an OpenTelemetry Collector, or Fluent Bit's own OpenTelemetry output plugin.

Related PR: https://github.com/fluent/fluent-operator/pull/890

### 5. HTTP plugin

Fluent Bit adds the HTTP input plugin, which lets Fluent Bit open an HTTP port and route the data sent to it dynamically. The plugin supports dynamic tags, so data with different tags can be sent through the same input.

Related PR: https://github.com/fluent/fluent-operator/pull/904
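An HTTP listener could be declared roughly as below; the field names under `spec.http` mirror the Fluent Bit option names (`listen`, `port`) but are assumptions here, so check the v2.5.0 CRD reference:

```yaml
# Hypothetical sketch only: opens an HTTP listener that accepts log payloads.
# Field names under spec.http are assumed from the Fluent Bit option names.
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterInput
metadata:
  name: http
  labels:
    fluentbit.fluent.io/enabled: "true"
spec:
  http:
    listen: "0.0.0.0"
    port: 9880        # clients can then POST JSON records to this port,
                      # with the request path determining the dynamic tag
```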
### 6. MQTT plugin

Fluent Bit adds the MQTT input plugin, which retrieves messages/data from MQTT control packets over a TCP connection. Incoming data must be in JSON map format.

Related PR: https://github.com/fluent/fluent-operator/pull/911

### 7. Collectd plugin

Fluent Bit adds the Collectd input plugin, which receives data from a Collectd server.

Related PR: https://github.com/fluent/fluent-operator/pull/914
## Key Fluentd Changes

### New Grok parser plugin

Fluentd adds the Grok parser plugin. Grok is a third-party parser: a macro for simplifying and reusing regular expressions, originally developed by Jordan Sissel. The plugin is especially useful if you are already familiar with Grok patterns.

Version compatibility for the Grok parser plugin:

| fluent-plugin-grok-parser | fluentd    | ruby   |
| ------------------------- | ---------- | ------ |
| >= 2.0.0                  | >= v0.14.0 | >= 2.1 |
| < 2.0.0                   | >= v0.12.0 | >= 1.9 |

Related PR: https://github.com/fluent/fluent-operator/pull/861
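As an illustration only, a Grok parse step conceptually pairs a pattern with named captures. The CR shape below (`ClusterFilter`, `spec.filters[].parser`, `grokPattern`) is a guess at how the operator maps the fluent-plugin-grok-parser options; consult the Fluentd plugin CRD documentation before relying on it:

```yaml
# Hypothetical sketch only: a Fluentd filter that parses a log line with Grok.
# apiVersion/kind and all field names here are assumptions, not verified API.
apiVersion: fluentd.fluent.io/v1alpha1
kind: ClusterFilter
metadata:
  name: grok-parse
  labels:
    filter.fluentd.fluent.io/enabled: "true"
spec:
  filters:
    - parser:
        keyName: message
        parse:
          type: grok
          # Matches a line like "10.0.0.1 GET /index.html" and captures
          # client, method, and request as structured fields.
          grokPattern: "%{IP:client} %{WORD:method} %{URIPATHPARAM:request}"
```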
### Support for running Fluentd as a DaemonSet

Previously, Fluentd ran as a StatefulSet. To run Fluentd as a full node-level logging agent, it needs to include input plugins such as tail and systemd, which requires running it as a DaemonSet.

This PR introduces an option to run Fluentd as a DaemonSet. By default, Fluentd still runs as a StatefulSet, but users can enable `agent` mode to run it as a DaemonSet instead. With `agent` mode enabled, StatefulSet-specific fields are ignored when the DaemonSet is created, and vice versa.

Note that Fluentd runs either as a DaemonSet or as a StatefulSet, never both at once. If the DaemonSet is enabled, the StatefulSet is deleted and Fluentd runs as a DaemonSet.

Related PR: https://github.com/fluent/fluent-operator/pull/839
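A minimal sketch of enabling agent mode might look like this; the `mode` field and its `agent` value are assumptions about how the option is surfaced in the Fluentd CR, so verify against PR #839 and the CRDs before use:

```yaml
# Hypothetical sketch only: runs Fluentd as a node-level agent (DaemonSet).
# The mode field and the selector labels are assumptions, not verified API.
apiVersion: fluentd.fluent.io/v1alpha1
kind: Fluentd
metadata:
  name: fluentd-agent
  namespace: fluent
spec:
  mode: agent          # assumed switch: DaemonSet instead of StatefulSet;
                       # StatefulSet-specific fields are ignored in this mode
  fluentdCfgSelector:
    matchLabels:
      config.fluentd.fluent.io/enabled: "true"
```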
## Other Improvements

- Remove duplicate cluster parsers from the Fluent Bit config;
- Adjust several Fluent Bit default parameters;
- Add an ImagePullSecret parameter for Fluentd;
- Upgrade Fluent Bit to 2.1.9;
- Tune various parameters in the Fluent Operator Helm chart;
- ...

## Thanks to Our Contributors

This release has 16 contributors:

- gregorycuellar
- Nyefan
- WaywardWizard
- alexandrevilain
- yash97
- husnialhamdani
- L1ghtman2k
- wenchajun
- leonsteinhaeuser
- vincent-vinf
- Rajan-226
- sharkeyl
- ikolesnikovrevizto
- karan56625
- ajax-bychenok-y
- sjliu1

Most of these contributors are from overseas, which shows that Fluent Operator is a global project with growing popularity and influence. Thanks to every contributor, and everyone is welcome to join this open-source project and community!

For the full list of changes in the new version, see the release notes: https://github.com/fluent/fluent-operator/releases/tag/v2.5.0
@@ -0,0 +1,43 @@

---
title: 'KubeSphere 3.4.1 Released'
tag: 'Product Updates'
keyword: 'Community, Open Source, Contribution, KubeSphere, release, Access Control'
description: 'KubeSphere 3.4.1 is a patch release for KubeSphere 3.4.0, focused on improvements and bug fixes for the Console and DevOps.'
createTime: '2023-11-10'
author: 'KubeSphere'
image: 'https://pek3b.qingstor.com/kubesphere-community/images/kubesphere-3.4.1-ga.png'
---

KubeSphere 3.4.1 is a patch release for KubeSphere 3.4.0, focused on improvements and bug fixes for the Console and DevOps.
## Console
- Fix inaccurate translations of some page fields.
- Fix missing, broken, or incomplete styles on some pages.
- Fix incorrect API calls on some endpoints.
- Fix missing help information on the project overview page.

## DevOps
- Fix the pipeline details page failing to load.
- Fix incorrect parameter passing to Jenkins.
- Fix shell script execution errors.
- Fix errors in the cleanup task.
- Fix runtime errors in devops-controller.

## Observability
- Fix CPU and memory charts not displaying.
- Fix the log receiver page not displaying.

## Authentication & Authorization
- Fix LDAP login errors.

## App Store
- Fix page errors.

## Other Improvements
- Fix upgrade failures for some components.

See the full release notes at:

https://github.com/kubesphere/kubesphere/releases/tag/v3.4.1
@@ -0,0 +1,59 @@

---
title: 'Welcome New KubeSphere Ambassadors! 2023 KubeSphere Ambassador Application Results Announced!'
tag: 'Community News'
keyword: 'Community, Open Source, Contribution, KubeSphere'
description: 'We are delighted to welcome 14 new KubeSphere Ambassadors, who have repeatedly contributed to the KubeSphere community in different ways, helping more users understand KubeSphere use cases and best practices.'
createTime: '2023-10-10'
author: 'KubeSphere'
image: 'https://pek3b.qingstor.com/kubesphere-community/images/2023-ambassador-cover.png'
---
Besides code, documentation in Chinese and English, and localization and internationalization, technical evangelism is another important way to contribute to an open-source community.

Technical evangelism includes writing technical blogs and user case studies and giving public talks at community events. The community created the KubeSphere Ambassador award to recognize members who have repeatedly shared KubeSphere adoption cases and technical articles with the community.

What is different this year is that the community launched the [KubeSphere Ambassadorship Program](https://github.com/kubesphere/community/tree/master/ksap-ambassadorship-program) and, for the first time, opened applications to contributors worldwide. After evaluation, we selected the 2023 KubeSphere Ambassadors. The ambassadors serve a one-year term, and a new election can be held around the same time next year. Through the program, we hope to foster a more open community environment.

We are delighted to welcome 14 new KubeSphere Ambassadors, who have repeatedly contributed to the KubeSphere community in different ways, helping more users understand KubeSphere use cases and best practices.

## About KSAP

The KubeSphere Ambassadorship Program (KSAP), launched this year, aims to bring together members who organize community activities and evangelize for KubeSphere. The community hopes to select 25 KubeSphere Ambassadors through the program to grow the KubeSphere community together.

The newly selected KubeSphere Ambassadors serve a one-year term (2023.9.20-2024.9.21) and may be re-elected. The community will also issue new certificates to the newly selected ambassadors.

## Get Your Certificate
| Name               | certificate |
| ------------------ | ----------- |
| Onur Canoğlu | [Download certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-Onur-Canog%CC%86lu.png) |
| Rossana Suarez | [Download certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-Rossana-Suarez.png) |
| Jona Apelbaum | [Download certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-Jona-Apelbaum.png) |
| Nilo Yucra Gavilan | [Download certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-Nilo-Yucra-Gavilan.png) |
| Halil BUGOL | [Download certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-Halil-I%CC%87brahim-BUGOL.png) |
| Eda Konyar | [Download certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-Eda-Konyar.png) |
| İremnur Önder | [Download certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-I%CC%87remnur-O%CC%88nder.png) |
| Harun Eren SAT | [Download certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-Harun-Eren-SAT.png) |
| Min Yin | [Download certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-yinmin.png) |
| Kevin Xu | [Download certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-xupeng.png) |
| Haili Zhang | [Download certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-zhanghaili.png) |
| Zhengjun Zhou | [Download certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-zhouzhengjun.png) |
| Zhenfei Pei | [Download certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-peizhenfei.png) |
| Jianlin Zheng | [Download certificate](https://pek3b.qingstor.com/kubesphere-community/images/ambassador-2023-zhengjianlin.png) |
## An Important Update on Email Use in the KubeSphere Ambassador Program

We have been working to maintain the integrity and purpose of the KubeSphere Ambassador program, and as part of that effort we are introducing an important update on the use of **kubesphere.io mailboxes**.

To ensure kubesphere.io mailboxes are used effectively and stay focused on open-source activity, we kindly ask owners to limit their use to open-source purposes. This covers discussions, contributions, and inquiries related to KubeSphere and its associated projects.

While we encourage your participation and appreciate your enthusiasm for KubeSphere, we ask that you not use your kubesphere.io mailbox for business-related matters such as sales, marketing, or commercial inquiries. This restriction helps us maintain the integrity of the Ambassador program and keep it effective in supporting the open-source community.

We appreciate your understanding and cooperation in following these guidelines. Together, we can create a vibrant and collaborative environment for open-source enthusiasts and contributors.

If you have any questions about the Ambassador program or its guidelines, or need further clarification, feel free to contact us at info@kubesphere.io.

## Final Words

The KubeSphere community congratulates the new KubeSphere Ambassadors and extends its most sincere thanks to everyone who contributes to the KubeSphere open-source community!

The community looks forward to each ambassador participating in and growing the community in their own way, so that KubeSphere can help even more cloud-native users.
@@ -10,7 +10,7 @@ image: 'https://pek3b.qingstor.com/kubesphere-community/images/kubesphere-partne

As an open-source project, KubeSphere's steady growth comes from product development, community building, and more, and it is inseparable from close cooperation with its partners. Through these partnerships, the KubeSphere community and its partners expand the ecosystem and grow together.

- KubeSphere partners are now spread around the world, helping global users fully embrace cloud native. The newest partners are [RocketByte and EquoSystem from Türkiye](https://kubesphere.io/zh/news/kubesphere-turkey-and-huawei-partnership/), and the cooperation has already borne fruit locally. The parties will continue to leverage their respective technology and resource strengths to provide Turkish users with more localized, high-quality cloud-native services and jointly empower a new cloud-native era for enterprises.
+ KubeSphere partners are now spread around the world, helping global users fully embrace cloud native. The newest partner is [EquoSystem from Türkiye](https://kubesphere.io/zh/news/kubesphere-turkey-and-huawei-partnership/), and the cooperation has already borne fruit locally. The parties will continue to leverage their respective technology and resource strengths to provide Turkish users with more localized, high-quality cloud-native services and jointly empower a new cloud-native era for enterprises.

KubeSphere looks forward to more partners joining the KubeSphere Partner Program to improve the ecosystem and grow their business. KubeSphere provides partners with resources and benefits that help them build expertise, deliver and promote products, and incorporate KubeSphere into their go-to-market strategy to achieve concrete business goals.
@@ -35,7 +35,7 @@ In non-public-cloud Kubernetes clusters, OpenELB solves the problem of exposing LoadB

## Community

- OpenELB is now production-ready and has been adopted by enterprises at home and abroad, including **BENLAI, Suzhou TV, CVTE, Wisdom World, Jollychic, QingCloud, BAIWANG, and Rocketbyte**. As early as the end of 2019, BENLAI put an early version of OpenELB into production; see [How OpenELB helps BENLAI expose cluster services in a bare-metal Kubernetes environment](https://mp.weixin.qq.com/s/uFwYaPE7cVolLWxYHcgZdQ) for details. The OpenELB project currently has 13 contributors and more than 100 community members.
+ OpenELB is now production-ready and has been adopted by enterprises at home and abroad, including **BENLAI, Suzhou TV, CVTE, Wisdom World, Jollychic, QingCloud, and BAIWANG**. As early as the end of 2019, BENLAI put an early version of OpenELB into production; see [How OpenELB helps BENLAI expose cluster services in a bare-metal Kubernetes environment](https://mp.weixin.qq.com/s/uFwYaPE7cVolLWxYHcgZdQ) for details. The OpenELB project currently has 13 contributors and more than 100 community members.


@@ -75,6 +75,12 @@ members:

activities:
  videos:
    - image: https://pek3b.qingstor.com/kubesphere-community/images/meetup-chengdu-20231104-cover.png
      link: https://kubesphere.io/zh/live/meetup-chengdu-20231104/
    - image: https://pek3b.qingstor.com/kubesphere-community/images/cloudnative-chengdu-20220514-cover.png
      link: https://kubesphere.io/zh/live/meetup-chengdu-20220514/
    - image: https://pek3b.qingstor.com/kubesphere-community/images/multicluster-cover.png
      link: https://kubesphere.io/zh/live/multicluster-chengdu/
@@ -84,8 +90,8 @@ activities:

    - image: https://pek3b.qingstor.com/kubesphere-community/images/hpa-cover.png
      link: https://kubesphere.io/zh/live/hpa-chengdu/
    - image: https://pek3b.qingstor.com/kubesphere-community/images/cloudnative-chengdu-20220514-cover.png
      link: https://kubesphere.io/zh/live/meetup-chengdu-20220514/
    - image: https://pek3b.qingstor.com/kubesphere-community/images/meetup-chengdu-cover.png
      link: https://kubesphere.io/zh/live/meetup-chengdu/

  review:
    - text: Mashang Consumer Finance's AI platform development practice on KubeSphere