diff --git a/content/en/docs/v3.3/cluster-administration/cluster-settings/cluster-gateway.md b/content/en/docs/v3.3/cluster-administration/cluster-settings/cluster-gateway.md index 867a73dbd..5497ff55a 100644 --- a/content/en/docs/v3.3/cluster-administration/cluster-settings/cluster-gateway.md +++ b/content/en/docs/v3.3/cluster-administration/cluster-settings/cluster-gateway.md @@ -6,7 +6,7 @@ linkTitle: "Cluster Gateway" weight: 8630 --- -KubeSphere 3.3.0 provides cluster-scope gateways to let all projects share a global gateway. This document describes how to set a cluster gateway on KubeSphere. +KubeSphere 3.3 provides cluster-scope gateways to let all projects share a global gateway. This document describes how to set a cluster gateway on KubeSphere. ## Prerequisites diff --git a/content/en/docs/v3.3/cluster-administration/cluster-settings/log-collections/introduction.md b/content/en/docs/v3.3/cluster-administration/cluster-settings/log-collections/introduction.md index 8651b2f6f..3d97e57b7 100644 --- a/content/en/docs/v3.3/cluster-administration/cluster-settings/log-collections/introduction.md +++ b/content/en/docs/v3.3/cluster-administration/cluster-settings/log-collections/introduction.md @@ -6,7 +6,7 @@ linkTitle: "Introduction" weight: 8621 --- -KubeSphere provides a flexible log receiver configuration method. Powered by [FluentBit Operator](https://github.com/kubesphere/fluentbit-operator/), users can easily add, modify, delete, enable, or disable Elasticsearch, Kafka and Fluentd receivers. Once a receiver is added, logs will be sent to this receiver. +KubeSphere provides a flexible log receiver configuration method. Powered by [Fluent Operator](https://github.com/fluent/fluent-operator), users can easily add, modify, delete, enable, or disable Elasticsearch, Kafka and Fluentd receivers. Once a receiver is added, logs will be sent to this receiver. This tutorial gives a brief introduction about the general steps of adding log receivers in KubeSphere. 
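Under the hood, each log receiver configured on the console maps to a Fluent Operator output resource. As a rough sketch of that mapping (the resource kind, label, and field names below are assumptions based on the Fluent Operator CRDs rather than anything stated on this page; verify them with `kubectl explain clusteroutput.spec` against your cluster), an Elasticsearch receiver corresponds to an object along these lines:

```yaml
# Hypothetical example: an Elasticsearch receiver expressed as a
# cluster-scoped Fluent Operator ClusterOutput. Field names are assumed
# from the fluentbit.fluent.io CRDs; check your cluster's CRD schema.
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterOutput
metadata:
  name: es
  labels:
    fluentbit.fluent.io/enabled: "true"   # assumed label used to enable the output
spec:
  matchRegex: (?:kube|service)\.(.*)      # forward container and service logs
  es:
    host: elasticsearch-logging-data.kubesphere-logging-system.svc
    port: 9200
    logstashFormat: true
    logstashPrefix: ks-logstash-log
```

If the mapping holds, disabling a receiver on the console would amount to flipping the enable label rather than deleting the object.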
@@ -45,7 +45,7 @@ To add a log receiver: A default Elasticsearch receiver will be added with its service address set to an Elasticsearch cluster if `logging`, `events`, or `auditing` is enabled in [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md). -An internal Elasticsearch cluster will be deployed to the Kubernetes cluster if neither `externalElasticsearchHost` nor `externalElasticsearchPort` is specified in [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md) when `logging`, `events`, or `auditing` is enabled. The internal Elasticsearch cluster is for testing and development only. It is recommended that you configure an external Elasticsearch cluster for production. +An internal Elasticsearch cluster will be deployed to the Kubernetes cluster if neither `externalElasticsearchUrl` nor `externalElasticsearchPort` is specified in [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md) when `logging`, `events`, or `auditing` is enabled. The internal Elasticsearch cluster is for testing and development only. It is recommended that you configure an external Elasticsearch cluster for production. Log searching relies on the internal or external Elasticsearch cluster configured. diff --git a/content/en/docs/v3.3/cluster-administration/storageclass.md b/content/en/docs/v3.3/cluster-administration/storageclass.md index 080600004..94f27321e 100644 --- a/content/en/docs/v3.3/cluster-administration/storageclass.md +++ b/content/en/docs/v3.3/cluster-administration/storageclass.md @@ -121,17 +121,17 @@ Nevertheless, you can use [rbd provisioner](https://github.com/kubernetes-incuba | Parameter | Description | | :---- | :---- | -| Monitors| IP address of Ceph monitors. | -| adminId| Ceph client ID that is capable of creating images in the pool. | -| adminSecretName| Secret name of `adminId`. 
| -| adminSecretNamespace| Namespace of `adminSecretName`. | -| pool | Name of the Ceph RBD pool. | -| userId | The Ceph client ID that is used to map the RBD image. | -| userSecretName | The name of Ceph Secret for `userId` to map RBD image. | -| userSecretNamespace | The namespace for `userSecretName`. | +| MONITORS| IP address of Ceph monitors. | +| ADMINID| Ceph client ID that is capable of creating images in the pool. | +| ADMINSECRETNAME| Secret name of `adminId`. | +| ADMINSECRETNAMESPACE| Namespace of `adminSecretName`. | +| POOL | Name of the Ceph RBD pool. | +| USERID | The Ceph client ID that is used to map the RBD image. | +| USERSECRETNAME | The name of Ceph Secret for `userId` to map RBD image. | +| USERSECRETNAMESPACE | The namespace for `userSecretName`. | | File System Type | File system type of the storage volume. | -| imageFormat | Option of the Ceph volume. The value can be `1` or `2`. `imageFeatures` needs to be filled when you set imageFormat to `2`. | -| imageFeatures| Additional function of the Ceph cluster. The value should only be set when you set imageFormat to `2`. | +| IMAGEFORMAT | Option of the Ceph volume. The value can be `1` or `2`. `imageFeatures` needs to be filled when you set imageFormat to `2`. | +| IMAGEFEATURES| Additional function of the Ceph cluster. The value should only be set when you set imageFormat to `2`. | For more information about StorageClass parameters, see [Ceph RBD in Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd). @@ -146,7 +146,7 @@ NFS (Net File System) is widely used on Kubernetes with the external-provisioner {{< notice note >}} -NFS is incompatible with some applications, for example, Prometheus, which may result in pod creation failures. If you need to use NFS in the production environment, ensure that you have understood the risks. For more information, contact support@kubesphere.cloud. 
+It is not recommended that you use NFS storage for production (especially on Kubernetes version 1.20 or later) as some issues may occur, such as `failed to obtain lock` and `input/output error`, resulting in Pod `CrashLoopBackOff`. In addition, some applications may not be compatible with NFS, including [Prometheus](https://github.com/prometheus/prometheus/blob/03b354d4d9386e4b3bfbcd45da4bb58b182051a5/docs/storage.md#operational-aspects). {{}} diff --git a/content/en/docs/v3.3/devops-user-guide/examples/create-multi-cluster-pipeline.md b/content/en/docs/v3.3/devops-user-guide/examples/create-multi-cluster-pipeline.md index 6f3f9accc..947ce17e5 100644 --- a/content/en/docs/v3.3/devops-user-guide/examples/create-multi-cluster-pipeline.md +++ b/content/en/docs/v3.3/devops-user-guide/examples/create-multi-cluster-pipeline.md @@ -40,7 +40,7 @@ See the table below for the role of each cluster. {{< notice note >}} -These Kubernetes clusters can be hosted across different cloud providers and their Kubernetes versions can also vary. Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). +These Kubernetes clusters can be hosted across different cloud providers and their Kubernetes versions can also vary. Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). 
{{}} diff --git a/content/en/docs/v3.3/devops-user-guide/how-to-use/code-repositories/import-code-repositories.md b/content/en/docs/v3.3/devops-user-guide/how-to-use/code-repositories/import-code-repositories.md index 57acab437..16a3f3f44 100755 --- a/content/en/docs/v3.3/devops-user-guide/how-to-use/code-repositories/import-code-repositories.md +++ b/content/en/docs/v3.3/devops-user-guide/how-to-use/code-repositories/import-code-repositories.md @@ -6,7 +6,7 @@ linkTitle: "Import a Code Repository" weight: 11231 --- -In KubeSphere 3.3.0, you can import a GitHub, GitLab, Bitbucket, or Git-based repository. The following describes how to import a GitHub repository. +In KubeSphere 3.3, you can import a GitHub, GitLab, Bitbucket, or Git-based repository. The following describes how to import a GitHub repository. ## Prerequisites diff --git a/content/en/docs/v3.3/devops-user-guide/how-to-use/continuous-deployments/use-gitops-for-continous-deployment.md b/content/en/docs/v3.3/devops-user-guide/how-to-use/continuous-deployments/use-gitops-for-continous-deployment.md index b296191f1..808d65cc3 100755 --- a/content/en/docs/v3.3/devops-user-guide/how-to-use/continuous-deployments/use-gitops-for-continous-deployment.md +++ b/content/en/docs/v3.3/devops-user-guide/how-to-use/continuous-deployments/use-gitops-for-continous-deployment.md @@ -6,7 +6,7 @@ linkTitle: "Use GitOps to Achieve Continuous Deployment of Applications" weight: 11221 --- -In KubeSphere 3.3.0, we introduce the GitOps concept, which is a way of implementing continuous deployment for cloud-native applications. The core component of GitOps is a Git repository that always stores applications and declarative description of the infrastructure for version control. With GitOps and Kubernetes, you can enable CI/CD pipelines to apply changes to any cluster, which ensures consistency in cross-cloud deployment scenarios. 
+In KubeSphere 3.3, we introduce the GitOps concept, which is a way of implementing continuous deployment for cloud-native applications. The core component of GitOps is a Git repository that always stores applications and declarative description of the infrastructure for version control. With GitOps and Kubernetes, you can enable CI/CD pipelines to apply changes to any cluster, which ensures consistency in cross-cloud deployment scenarios. This section walks you through the process of deploying an application using a continuous deployment. ## Prerequisites diff --git a/content/en/docs/v3.3/devops-user-guide/how-to-use/devops-settings/add-cd-allowlist.md b/content/en/docs/v3.3/devops-user-guide/how-to-use/devops-settings/add-cd-allowlist.md index 76d1f8def..0fdc74a62 100644 --- a/content/en/docs/v3.3/devops-user-guide/how-to-use/devops-settings/add-cd-allowlist.md +++ b/content/en/docs/v3.3/devops-user-guide/how-to-use/devops-settings/add-cd-allowlist.md @@ -5,7 +5,7 @@ description: 'Describe how to add a continuous deployment allowlist on KubeSpher linkTitle: "Add a Continuous Deployment Allowlist" weight: 11243 --- -In KubeSphere 3.3.0, you can set an allowlist so that only specific code repositories and deployment locations can be used for continuous deployment. +In KubeSphere 3.3, you can set an allowlist so that only specific code repositories and deployment locations can be used for continuous deployment. 
## Prerequisites diff --git a/content/en/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-graphical-editing-panel.md b/content/en/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-graphical-editing-panel.md index f11b15ae4..b1fcdc9ba 100644 --- a/content/en/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-graphical-editing-panel.md +++ b/content/en/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-graphical-editing-panel.md @@ -288,7 +288,7 @@ This stage uses SonarQube to test your code. You can skip this stage if you do n {{< notice note >}} - In KubeSphere 3.3.0, the account that can run a pipeline will be able to continue or terminate the pipeline if there is no reviewer specified. Pipeline creators, accounts with the role of `admin` in a project, or the account you specify will be able to continue or terminate a pipeline. + In KubeSphere 3.3, the account that can run a pipeline will be able to continue or terminate the pipeline if there is no reviewer specified. Pipeline creators, accounts with the role of `admin` in a project, or the account you specify will be able to continue or terminate a pipeline. {{}} diff --git a/content/en/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-jenkinsfile.md b/content/en/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-jenkinsfile.md index 845eb62cb..a77f8adc5 100644 --- a/content/en/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-jenkinsfile.md +++ b/content/en/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-jenkinsfile.md @@ -219,7 +219,7 @@ The account `project-admin` needs to be created in advance since it is the revie {{< notice note >}} - In KubeSphere 3.3.0, the account that can run a pipeline will be able to continue or terminate the pipeline if there is no reviewer specified. 
Pipeline creators, accounts with the role of `admin` in the project, or the account you specify will be able to continue or terminate the pipeline. + In KubeSphere 3.3, the account that can run a pipeline will be able to continue or terminate the pipeline if there is no reviewer specified. Pipeline creators, accounts with the role of `admin` in the project, or the account you specify will be able to continue or terminate the pipeline. {{}} diff --git a/content/en/docs/v3.3/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md b/content/en/docs/v3.3/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md index c630c9f1c..fcdb34cec 100644 --- a/content/en/docs/v3.3/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md +++ b/content/en/docs/v3.3/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md @@ -6,7 +6,7 @@ linkTitle: "Use Pipeline Templates" weight: 11213 --- -KubeSphere offers a graphical editing panel where the stages and steps of a Jenkins pipeline can be defined through interactive operations. KubeSphere 3.3.0 provides built-in pipeline templates, such as Node.js, Maven, and Golang, to help users quickly create pipelines. Additionally, KubeSphere 3.3.0 also supports customization of pipeline templates to meet diversified needs of enterprises. +KubeSphere offers a graphical editing panel where the stages and steps of a Jenkins pipeline can be defined through interactive operations. KubeSphere 3.3 provides built-in pipeline templates, such as Node.js, Maven, and Golang, to help users quickly create pipelines. Additionally, KubeSphere 3.3 also supports customization of pipeline templates to meet diversified needs of enterprises. This section describes how to use pipeline templates on KubeSphere. 
## Prerequisites diff --git a/content/en/docs/v3.3/faq/access-control/cannot-login.md b/content/en/docs/v3.3/faq/access-control/cannot-login.md index 2bae75aac..c516c3099 100644 --- a/content/en/docs/v3.3/faq/access-control/cannot-login.md +++ b/content/en/docs/v3.3/faq/access-control/cannot-login.md @@ -76,7 +76,7 @@ kubectl -n kubesphere-system rollout restart deploy ks-controller-manager ### Wrong code branch used -If you used the incorrect version of ks-installer, the versions of different components would not match after the installation. Execute the following commands to check version consistency. Note that the correct image tag is `v3.3.0`. +If you used the incorrect version of ks-installer, the versions of different components would not match after the installation. Execute the following commands to check version consistency. Note that the correct image tag is `v3.3.1`. ``` kubectl -n kubesphere-system get deploy ks-installer -o jsonpath='{.spec.template.spec.containers[0].image}' diff --git a/content/en/docs/v3.3/faq/console/edit-resources-in-system-workspace.md b/content/en/docs/v3.3/faq/console/edit-resources-in-system-workspace.md index be58cf85c..0adfe98bf 100644 --- a/content/en/docs/v3.3/faq/console/edit-resources-in-system-workspace.md +++ b/content/en/docs/v3.3/faq/console/edit-resources-in-system-workspace.md @@ -31,8 +31,8 @@ Editing resources in `system-workspace` may cause unexpected results, such as Ku ```yaml client: version: - kubesphere: v3.3.0 - kubernetes: v1.22.10 + kubesphere: v3.3.1 + kubernetes: v1.21.5 openpitrix: v3.3.0 enableKubeConfig: true systemWorkspace: "$" # Add this line manually. 
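The image-tag consistency check above queries one Deployment at a time; the comparison itself is plain string handling. Below is a minimal sketch of that tag-comparison logic run on hard-coded sample image strings (the images listed are illustrative stand-ins for live `kubectl ... -o jsonpath` output, not values queried from a cluster):

```shell
#!/bin/sh
# Sample data: in a real cluster each line would come from a query such as
#   kubectl -n kubesphere-system get deploy ks-installer \
#     -o jsonpath='{.spec.template.spec.containers[0].image}'
images="kubesphere/ks-installer:v3.3.1
kubesphere/ks-apiserver:v3.3.1
kubesphere/ks-console:v3.3.1
kubesphere/ks-controller-manager:v3.3.1"

# Strip everything up to the last ':' to isolate each tag, then count
# distinct tags; exactly one distinct tag means the versions are consistent.
distinct=$(printf '%s\n' "$images" | sed 's/.*://' | sort -u | wc -l | tr -d ' ')
if [ "$distinct" -eq 1 ]; then
  echo "image tags consistent"
else
  echo "image tag mismatch"
  printf '%s\n' "$images"
fi
```

A mismatch here points to the wrong ks-installer tag described above; reinstalling with the matching image tag resolves it.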
diff --git a/content/en/docs/v3.3/faq/installation/telemetry.md b/content/en/docs/v3.3/faq/installation/telemetry.md index f06e33c8e..e051ebfdc 100644 --- a/content/en/docs/v3.3/faq/installation/telemetry.md +++ b/content/en/docs/v3.3/faq/installation/telemetry.md @@ -29,7 +29,7 @@ Telemetry is enabled by default when you install KubeSphere, while you also have ### Disable Telemetry before installation -When you install KubeSphere on an existing Kubernetes cluster, you need to download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) for cluster settings. If you want to disable Telemetry, do not run `kubectl apply -f` directly for this file. +When you install KubeSphere on an existing Kubernetes cluster, you need to download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) for cluster settings. If you want to disable Telemetry, do not run `kubectl apply -f` directly for this file. {{< notice note >}} @@ -37,7 +37,7 @@ If you install KubeSphere on Linux, see [Disable Telemetry After Installation](. {{}} -1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it: +1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it: ```bash vi cluster-configuration.yaml @@ -57,7 +57,7 @@ If you install KubeSphere on Linux, see [Disable Telemetry After Installation](. 3. Save the file and run the following commands to start installation. 
```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` diff --git a/content/en/docs/v3.3/faq/observability/byop.md b/content/en/docs/v3.3/faq/observability/byop.md index d79707b80..2a31b9186 100644 --- a/content/en/docs/v3.3/faq/observability/byop.md +++ b/content/en/docs/v3.3/faq/observability/byop.md @@ -6,9 +6,19 @@ linkTitle: "Bring Your Own Prometheus" Weight: 16330 --- -KubeSphere comes with several pre-installed customized monitoring components, including Prometheus Operator, Prometheus, Alertmanager, Grafana (Optional), various ServiceMonitors, node-exporter, and kube-state-metrics. These components might already exist before you install KubeSphere. It is possible to use your own Prometheus stack setup in KubeSphere v3.3.0. +KubeSphere comes with several pre-installed customized monitoring components, including Prometheus Operator, Prometheus, Alertmanager, Grafana (Optional), various ServiceMonitors, node-exporter, and kube-state-metrics. These components might already exist before you install KubeSphere. It is possible to use your own Prometheus stack setup in KubeSphere 3.3. -## Bring Your Own Prometheus +## Steps to Bring Your Own Prometheus + +To use your own Prometheus stack setup, perform the following steps: + +1. Uninstall the customized Prometheus stack of KubeSphere. + +2. Install your own Prometheus stack. + +3. Install KubeSphere customized stuff to your Prometheus stack. + +4. Change KubeSphere's `monitoring endpoint`. ### Step 1. 
Uninstall the customized Prometheus stack of KubeSphere @@ -29,7 +39,7 @@ KubeSphere comes with several pre-installed customized monitoring components, in # kubectl -n kubesphere-system exec $(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}') -- kubectl delete -f /kubesphere/kubesphere/prometheus/init/ 2>/dev/null ``` -2. Delete the PVC that Prometheus uses. +2. Delete the PVC that Prometheus used. ```bash kubectl -n kubesphere-monitoring-system delete pvc `kubectl -n kubesphere-monitoring-system get pvc | grep -v VOLUME | awk '{print $1}' | tr '\n' ' '` @@ -39,112 +49,108 @@ KubeSphere comes with several pre-installed customized monitoring components, in {{< notice note >}} -KubeSphere 3.3.0 was certified to work well with the following Prometheus stack components: +KubeSphere 3.3 was certified to work well with the following Prometheus stack components: -- Prometheus Operator **v0.55.1+** -- Prometheus **v2.34.0+** -- Alertmanager **v0.23.0+** -- kube-state-metrics **v2.5.0** -- node-exporter **vv1.3.1** +- Prometheus Operator **v0.38.3+** +- Prometheus **v2.20.1+** +- Alertmanager **v0.21.0+** +- kube-state-metrics **v1.9.6** +- node-exporter **v0.18.1** -Make sure your Prometheus stack components' version meets these version requirements, especially **node-exporter** and **kube-state-metrics**. +Make sure your Prometheus stack components' versions meet these version requirements, especially **node-exporter** and **kube-state-metrics**. -Make sure you install **node-exporter** and **kube-state-metrics** if only **Prometheus Operator** and **Prometheus** are installed. **node-exporter** and **kube-state-metrics** are required for KubeSphere to work properly. +Make sure you install **node-exporter** and **kube-state-metrics** if only **Prometheus Operator** and **Prometheus** were installed. **node-exporter** and **kube-state-metrics** are required for KubeSphere to work properly. 
**If you've already had the entire Prometheus stack up and running, you can skip this step.** {{}} -The Prometheus stack can be installed in many ways. The following steps show how to install it into the namespace `monitoring` using `ks-prometheus` (based on the **upstream `kube-prometheus`** project). +The Prometheus stack can be installed in many ways. The following steps show how to install it into the namespace `monitoring` using **upstream `kube-prometheus`**. -1. Obtain `ks-prometheus` that KubeSphere v3.3.0 uses. +1. Get kube-prometheus v0.6.0, whose node-exporter version (v0.18.1) matches the one KubeSphere 3.3 uses. ```bash - cd ~ && git clone -b release-3.3 https://github.com/kubesphere/ks-prometheus.git && cd ks-prometheus + cd ~ && git clone https://github.com/prometheus-operator/kube-prometheus.git && cd kube-prometheus && git checkout tags/v0.6.0 -b v0.6.0 ``` -2. Set up the `monitoring` namespace. +2. Set up the `monitoring` namespace, and install Prometheus Operator and the corresponding roles: ```bash - sed -i 's/kubesphere-monitoring-system/monitoring/g' kustomization.yaml + kubectl apply -f manifests/setup/ ``` -3. Remove unnecessary components. For example, if Grafana is not enabled in KubeSphere, you can run the following command to delete the Grafana section in `kustomization.yaml`. +3. Wait until Prometheus Operator is up and running. ```bash - sed -i '/manifests\/grafana\//d' kustomization.yaml + kubectl -n monitoring get pod --watch ``` -4. Install the stack. +4. Remove unnecessary components such as Prometheus Adapter. ```bash - kubectl apply -k . + rm -rf manifests/prometheus-adapter-*.yaml + ``` + +5. Change kube-state-metrics to v1.9.6, the same version that KubeSphere 3.3 uses. + + ```bash + sed -i 's/v1.9.5/v1.9.6/g' manifests/kube-state-metrics-deployment.yaml + ``` + +6. Install Prometheus, Alertmanager, Grafana, kube-state-metrics, and node-exporter. 
If you need only kube-state-metrics or node-exporter, apply only the corresponding `kube-state-metrics-*.yaml` or `node-exporter-*.yaml` files. + + ```bash + kubectl apply -f manifests/ ``` ### Step 3. Install KubeSphere customized stuff to your Prometheus stack {{< notice note >}} -If your Prometheus stack is not installed using `ks-prometheus`, skip this step. +KubeSphere 3.3 uses Prometheus Operator to manage Prometheus/Alertmanager config and lifecycle, ServiceMonitor (to manage scrape config), and PrometheusRule (to manage Prometheus recording/alert rules). -KubeSphere 3.3.0 uses Prometheus Operator to manage Prometheus/Alertmanager config and lifecycle, ServiceMonitor (to manage scrape config), and PrometheusRule (to manage Prometheus recording/alert rules). +There are a few items listed in [KubeSphere kustomization](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/kustomization.yaml), among which `prometheus-rules.yaml` and `prometheus-rulesEtcd.yaml` are required for KubeSphere 3.3 to work properly, while others are optional. You can remove `alertmanager-secret.yaml` if you don't want your existing Alertmanager's config to be overwritten. You can remove `xxx-serviceMonitor.yaml` if you don't want your own ServiceMonitors to be overwritten (KubeSphere customized ServiceMonitors discard many irrelevant metrics to make sure Prometheus only stores the most useful metrics). If your Prometheus stack setup isn't managed by Prometheus Operator, you can skip this step. But you have to make sure that: -- You must copy the recording/alerting rules in [PrometheusRule](https://github.com/kubesphere/ks-prometheus/blob/release-3.3/manifests/kubernetes/kubernetes-prometheusRule.yaml) and [PrometheusRule for etcd](https://github.com/kubesphere/ks-prometheus/blob/release-3.3/manifests/etcd/prometheus-rulesEtcd.yaml) to your Prometheus config for KubeSphere v3.3.0 to work properly. 
+- You must copy the recording/alerting rules in [PrometheusRule](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/prometheus-rules.yaml) and [PrometheusRule for etcd](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/prometheus-rulesEtcd.yaml) to your Prometheus config for KubeSphere 3.3 to work properly. -- Configure your Prometheus to scrape metrics from the same targets as that in [serviceMonitor](https://github.com/kubesphere/ks-prometheus/tree/release-3.3/manifests) of each component. +- Configure your Prometheus to scrape metrics from the same targets as the ServiceMonitors listed in [KubeSphere kustomization](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/kustomization.yaml). {{}} -1. Obtain `ks-prometheus` that KubeSphere v3.3.0 uses. +1. Get the KubeSphere 3.3 customized kube-prometheus. ```bash - cd ~ && git clone -b release-3.3 https://github.com/kubesphere/ks-prometheus.git && cd ks-prometheus + cd ~ && mkdir kubesphere && cd kubesphere && git clone https://github.com/kubesphere/kube-prometheus.git && cd kube-prometheus/kustomize ``` -2. Configure `kustomization.yaml` and retain the following content only. +2. Change the namespace to the one in which your Prometheus stack is deployed. For example, it is `monitoring` if you installed Prometheus in the `monitoring` namespace following Step 2. 
```yaml - apiVersion: kustomize.config.k8s.io/v1beta1 - kind: Kustomization - namespace: - resources: - - ./manifests/alertmanager/alertmanager-secret.yaml - - ./manifests/etcd/prometheus-rulesEtcd.yaml - - ./manifests/kube-state-metrics/kube-state-metrics-serviceMonitor.yaml - - ./manifests/kubernetes/kubernetes-prometheusRule.yaml - - ./manifests/kubernetes/kubernetes-serviceKubeControllerManager.yaml - - ./manifests/kubernetes/kubernetes-serviceKubeScheduler.yaml - - ./manifests/kubernetes/kubernetes-serviceMonitorApiserver.yaml - - ./manifests/kubernetes/kubernetes-serviceMonitorCoreDNS.yaml - - ./manifests/kubernetes/kubernetes-serviceMonitorKubeControllerManager.yaml - - ./manifests/kubernetes/kubernetes-serviceMonitorKubeScheduler.yaml - - ./manifests/kubernetes/kubernetes-serviceMonitorKubelet.yaml - - ./manifests/node-exporter/node-exporter-serviceMonitor.yaml - - ./manifests/prometheus/prometheus-clusterRole.yaml + ```bash + sed -i 's/my-namespace/<your-namespace>/g' kustomization.yaml ``` - {{< notice note >}} - - - Set the value of `namespace` to your own namespace in which the Prometheus stack is deployed. For example, it is `monitoring` if you install Prometheus in the `monitoring` namespace in Step 2. - - If you have enabled the alerting component for KubeSphere, retain `thanos-ruler` in the `kustomization.yaml` file. - - {{}} - -3. Install the required components of KubeSphere. +3. Apply KubeSphere customized stuff, including Prometheus rules, Alertmanager config, and various ServiceMonitors. ```bash kubectl apply -k . ``` -4. Find the Prometheus CR which is usually `k8s` in your own namespace. +4. Set up Services for kube-scheduler and kube-controller-manager metrics exposure. + + ```bash + kubectl apply -f ./prometheus-serviceKubeScheduler.yaml + kubectl apply -f ./prometheus-serviceKubeControllerManager.yaml + ``` + +5. Find the Prometheus CR, which is usually `k8s`, in your own namespace. ```bash kubectl -n <your-namespace> get prometheus ``` -5. 
Set the Prometheus rule evaluation interval to 1m to be consistent with the KubeSphere v3.3.0 customized ServiceMonitor. The Rule evaluation interval should be greater than or equal to the scrape interval. +6. Set the Prometheus rule evaluation interval to 1m to be consistent with the KubeSphere 3.3 customized ServiceMonitor. The rule evaluation interval should be greater than or equal to the scrape interval. ```bash kubectl -n <your-namespace> patch prometheus k8s --patch '{ @@ -158,13 +164,13 @@ If your Prometheus stack setup isn't managed by Prometheus Operator, you can ski Now that your own Prometheus stack is up and running, you can change KubeSphere's monitoring endpoint to use your own Prometheus. -1. Run the following command to edit `kubesphere-config`. +1. Edit `kubesphere-config` by running the following command: ```bash kubectl edit cm -n kubesphere-system kubesphere-config ``` -2. Navigate to the `monitoring endpoint` section, as shown in the following: +2. Navigate to the `monitoring endpoint` section, as shown below: ```bash monitoring: @@ -178,20 +184,14 @@ Now that your own Prometheus stack is up and running, you can change KubeSphere' endpoint: http://prometheus-operated.monitoring.svc:9090 ``` -4. If you have enabled the alerting component of KubeSphere, navigate to `prometheusEndpoint` and `thanosRulerEndpoint` of `alerting`, and change the values according to the following sample. KubeSphere APIServer will restart automatically to make your configurations take effect. +4. Run the following command to restart the KubeSphere APIServer. - ```yaml - ... - alerting: - ... - prometheusEndpoint: http://prometheus-operated.monitoring.svc:9090 - thanosRulerEndpoint: http://thanos-ruler-operated.monitoring.svc:10902 - ... - ... 
+ ```bash + kubectl -n kubesphere-system rollout restart deployment/ks-apiserver ``` {{< notice warning >}} -If you enable/disable KubeSphere pluggable components following [this guide](../../../pluggable-components/overview/) , the `monitoring endpoint` will be reset to the original value. In this case, you need to change it to the new one. +If you enable/disable KubeSphere pluggable components following [this guide](../../../pluggable-components/overview/), the `monitoring endpoint` will be reset to the original one. In this case, you have to change it to the new one and then restart the KubeSphere APIServer again. {{}} diff --git a/content/en/docs/v3.3/faq/observability/logging.md b/content/en/docs/v3.3/faq/observability/logging.md index 3d0df57c9..792124ad4 100644 --- a/content/en/docs/v3.3/faq/observability/logging.md +++ b/content/en/docs/v3.3/faq/observability/logging.md @@ -19,7 +19,7 @@ This page contains some of the frequently asked questions about logging. ## How to change the log store to the external Elasticsearch and shut down the internal Elasticsearch -If you are using the KubeSphere internal Elasticsearch and want to change it to your external alternate, follow the steps below. If you haven't enabled the logging system, refer to [KubeSphere Logging System](../../../pluggable-components/logging/) to set up your external Elasticsearch directly. +If you are using the KubeSphere internal Elasticsearch and want to change it to your external alternate, follow the steps below. If you haven't enabled the logging system, refer to [KubeSphere Logging System](../../../pluggable-components/logging/) to set up your external Elasticsearch directly. 1. First, you need to update the KubeKey configuration. Execute the following command: ```bash kubectl edit cc -n kubesphere-system ks-installer ``` -2. 
Comment out `es.elasticsearchDataXXX`, `es.elasticsearchMasterXXX` and `status.logging`, and set `es.externalElasticsearchHost` to the address of your Elasticsearch and `es.externalElasticsearchPort` to its port number. Below is an example for your reference. +2. Comment out `es.elasticsearchDataXXX`, `es.elasticsearchMasterXXX` and `status.logging`, and set `es.externalElasticsearchUrl` to the address of your Elasticsearch and `es.externalElasticsearchPort` to its port number. Below is an example for your reference. ```yaml apiVersion: installer.kubesphere.io/v1alpha1 @@ -39,18 +39,14 @@ If you are using the KubeSphere internal Elasticsearch and want to change it to spec: ... common: - es: # Storage backend for logging, events and auditing. - # master: - # volumeSize: 4Gi # The volume size of Elasticsearch master nodes. - # replicas: 1 # The total number of master nodes. Even numbers are not allowed. - # resources: {} - # data: - # volumeSize: 20Gi # The volume size of Elasticsearch data nodes. - # replicas: 1 # The total number of data nodes. - # resources: {} + es: + # elasticsearchDataReplicas: 1 + # elasticsearchDataVolumeSize: 20Gi + # elasticsearchMasterReplicas: 1 + # elasticsearchMasterVolumeSize: 4Gi elkPrefix: logstash logMaxAge: 7 - externalElasticsearchHost: <192.168.0.2> + externalElasticsearchUrl: <192.168.0.2> externalElasticsearchPort: <9200> ... status: @@ -90,9 +86,9 @@ Currently, KubeSphere doesn't support the integration of Elasticsearch with X-Pa ## How to set the data retention period of logs, events, auditing logs, and Istio logs -Before KubeSphere v3.3.0, you can only set the retention period of logs, which is 7 days by default. In KubeSphere v3.3.0, apart from logs, you can also set the data retention period of events, auditing logs, and Istio logs. +Before KubeSphere 3.3, you can only set the retention period of logs, which is 7 days by default. 
In KubeSphere 3.3, apart from logs, you can also set the data retention period of events, auditing logs, and Istio logs. -Perform the following to update the KubeKey configurations. +You need to update the KubeKey configuration and rerun `ks-installer`. 1. Execute the following command: @@ -100,7 +96,7 @@ Perform the following to update the KubeKey configurations. kubectl edit cc -n kubesphere-system ks-installer ``` -2. In the YAML file, if you only want to change the retention period of logs, you can directly change the default value of `logMaxAge` to a desired one. If you want to set the retention period of events, auditing logs, and Istio logs, add parameters `auditingMaxAge`, `eventMaxAge`, and `istioMaxAge` and set a value for them, respectively, as shown in the following example: +2. In the YAML file, if you only want to change the retention period of logs, you can directly change the default value of `logMaxAge` to a desired one. If you want to set the retention period of events, auditing logs, and Istio logs, you need to add parameters `auditingMaxAge`, `eventMaxAge`, and `istioMaxAge` and set a value for them, respectively, as shown in the following example: ```yaml @@ -122,27 +118,10 @@ Perform the following to update the KubeKey configurations. ... ``` - {{< notice note >}} - If you have not set the retention period of events, auditing logs, and Istio logs, the value of `logMaxAge` is used by default. - {{}} +3. Rerun `ks-installer`. -3. In the YAML file, delete the `es` parameter, save the changes, and ks-installer will automatically restart to make the changes take effective. - - ```yaml - apiVersion: installer.kubesphere.io/v1alpha1 - kind: ClusterConfiguration - metadata: - name: ks-installer - namespace: kubesphere-system - ... - status: - alerting: - enabledTime: 2022-08-11T06:22:01UTC - status: enabled - ... - es: # delete this line. - enabledTime: 2022-08-11T06:22:01UTC # delete this line. - status: enabled # delete this line. 
+   ```bash
+   kubectl rollout restart deploy -n kubesphere-system ks-installer
    ```

## I cannot find logs from workloads on some nodes using Toolbox

@@ -181,4 +160,4 @@ kubectl edit input -n kubesphere-logging-system tail

Update the field `Input.Spec.Tail.ExcludePath`. For example, set the path to `/var/log/containers/*_kube*-system_*.log` to exclude any log from system components.

-For more information, see [Fluent Bit Operator](https://github.com/kubesphere/fluentbit-operator).
+For more information, see [Fluent Operator](https://github.com/fluent/fluent-operator).
diff --git a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md
index 8bc3c95f4..77adf85ee 100644
--- a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md
+++ b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md
@@ -77,9 +77,9 @@ All the other Resources will be placed in `MC_KubeSphereRG_KuberSphereCluster_we

To start deploying KubeSphere, use the following commands.
```bash -kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml +kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml -kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml +kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml ``` You can inspect the logs of installation through the following command: diff --git a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md index 83c3c869c..418561812 100644 --- a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md +++ b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md @@ -28,8 +28,8 @@ You need to select: {{< notice note >}} -- To install KubeSphere 3.3.0 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). -- 2 nodes are included in this example. You can add more nodes based on your own needs especially in a production environment. +- To install KubeSphere 3.3 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). +- 2 nodes are included in this example. You can add more nodes based on your own needs, especially in a production environment. - The machine type Standard / 4 GB / 2 vCPUs is for minimal installation. If you plan to enable several pluggable components or use the cluster for production, you can upgrade your nodes to a more powerful type (such as CPU-Optimized / 8 GB / 4 vCPUs). 
It seems that DigitalOcean provisions the control plane nodes based on the type of the worker nodes, and for Standard ones the API server can become unresponsive quite soon. {{}} @@ -45,9 +45,9 @@ Now that the cluster is ready, you can install KubeSphere following the steps be - Install KubeSphere using kubectl. The following commands are only for the default minimal installation. ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml ``` - Inspect the logs of installation: diff --git a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks.md b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks.md index 33b70bc3f..9c09d7fb4 100644 --- a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks.md +++ b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks.md @@ -79,7 +79,7 @@ Check the installation with `aws --version`. {{< notice note >}} -- To install KubeSphere 3.3.0 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). +- To install KubeSphere 3.3 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). - 3 nodes are included in this example. You can add more nodes based on your own needs especially in a production environment. - The machine type t3.medium (2 vCPU, 4GB memory) is for minimal installation. 
If you want to enable pluggable components or use the cluster for production, please select a machine type with more resources. - For other settings, you can change them as well based on your own needs or use the default value. @@ -125,9 +125,9 @@ We will use the kubectl command-line utility for communicating with the cluster - Install KubeSphere using kubectl. The following commands are only for the default minimal installation. ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml ``` - Inspect the logs of installation: diff --git a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-gke.md b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-gke.md index 22e7577c2..6a39d79e4 100644 --- a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-gke.md +++ b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-gke.md @@ -30,7 +30,7 @@ This guide walks you through the steps of deploying KubeSphere on [Google Kubern {{< notice note >}} -- To install KubeSphere 3.3.0 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). +- To install KubeSphere 3.3 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). - 3 nodes are included in this example. You can add more nodes based on your own needs especially in a production environment. - The machine type e2-medium (2 vCPU, 4GB memory) is for minimal installation. 
If you want to enable pluggable components or use the cluster for production, please select a machine type with more resources. - For other settings, you can change them as well based on your own needs or use the default value. @@ -46,9 +46,9 @@ This guide walks you through the steps of deploying KubeSphere on [Google Kubern - Install KubeSphere using kubectl. The following commands are only for the default minimal installation. ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml ``` - Inspect the logs of installation: diff --git a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce.md b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce.md index eb273f308..c6ddbd50f 100644 --- a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce.md +++ b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce.md @@ -14,7 +14,7 @@ This guide walks you through the steps of deploying KubeSphere on [Huaiwei CCE]( First, create a Kubernetes cluster based on the requirements below. -- To install KubeSphere 3.3.0 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). +- To install KubeSphere 3.3 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). 
- Ensure the cloud computing network for your Kubernetes cluster works, or use an elastic IP when you use **Auto Create** or **Select Existing**. You can also configure the network after the cluster is created. Refer to [NAT Gateway](https://support.huaweicloud.com/en-us/productdesc-natgateway/en-us_topic_0086739762.html). - Select `s3.xlarge.2` `4-core|8GB` for nodes and add more if necessary (3 and more nodes are required for a production environment). @@ -76,9 +76,9 @@ For how to set up or cancel a default StorageClass, refer to Kubernetes official Use [ks-installer](https://github.com/kubesphere/ks-installer) to deploy KubeSphere on an existing Kubernetes cluster. Execute the following commands directly for a minimal installation: ```bash -kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml +kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml -kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml +kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml ``` Go to **Workload** > **Pod**, and check the running status of the pod in `kubesphere-system` of its namespace to understand the minimal deployment of KubeSphere. Check `ks-console-xxxx` of the namespace to understand the availability of KubeSphere console. 
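The pod check described above can also be cross-checked from the command line. A minimal sketch; the `app=ks-install` label selector is an assumption based on the default ks-installer deployment, so verify the label in your own cluster before relying on it:

```shell
# List the Pods of the minimal KubeSphere deployment and watch them become Ready.
kubectl get pods -n kubesphere-system

# Follow the installer logs; the console address is printed once installation
# finishes. The label selector assumes the default ks-installer labels.
kubectl logs -n kubesphere-system \
  "$(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}')" -f
```

Once the `ks-console-xxxx` Pod is `Running`, the console should be reachable as described above.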
diff --git a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-oke.md b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-oke.md index 4922b6a18..4c5752e51 100644 --- a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-oke.md +++ b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-oke.md @@ -30,7 +30,7 @@ This guide walks you through the steps of deploying KubeSphere on [Oracle Kubern {{< notice note >}} - - To install KubeSphere 3.3.0 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). + - To install KubeSphere 3.3 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). - It is recommended that you should select **Public** for **Visibility Type**, which will assign a public IP address for every node. The IP address can be used later to access the web console of KubeSphere. - In Oracle Cloud, a Shape is a template that determines the number of CPUs, amount of memory, and other resources that are allocated to an instance. `VM.Standard.E2.2 (2 CPUs and 16G Memory)` is used in this example. For more information, see [Standard Shapes](https://docs.cloud.oracle.com/en-us/iaas/Content/Compute/References/computeshapes.htm#vmshapes__vm-standard). - 3 nodes are included in this example. You can add more nodes based on your own needs especially in a production environment. @@ -68,9 +68,9 @@ This guide walks you through the steps of deploying KubeSphere on [Oracle Kubern - Install KubeSphere using kubectl. The following commands are only for the default minimal installation. 
```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml ``` - Inspect the logs of installation: diff --git a/content/en/docs/v3.3/installing-on-kubernetes/introduction/overview.md b/content/en/docs/v3.3/installing-on-kubernetes/introduction/overview.md index 1655ba928..69f338e94 100644 --- a/content/en/docs/v3.3/installing-on-kubernetes/introduction/overview.md +++ b/content/en/docs/v3.3/installing-on-kubernetes/introduction/overview.md @@ -29,9 +29,9 @@ After you make sure your existing Kubernetes cluster meets all the requirements, 1. Execute the following commands to start installation: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml ``` 2. Inspect the logs of installation: diff --git a/content/en/docs/v3.3/installing-on-kubernetes/introduction/prerequisites.md b/content/en/docs/v3.3/installing-on-kubernetes/introduction/prerequisites.md index fea56440d..adc2e7c09 100644 --- a/content/en/docs/v3.3/installing-on-kubernetes/introduction/prerequisites.md +++ b/content/en/docs/v3.3/installing-on-kubernetes/introduction/prerequisites.md @@ -8,7 +8,7 @@ weight: 4120 You can install KubeSphere on virtual machines and bare metal with Kubernetes also provisioned. 
In addition, KubeSphere can also be deployed on cloud-hosted and on-premises Kubernetes clusters as long as your Kubernetes cluster meets the prerequisites below. -- To install KubeSphere 3.3.0 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). +- To install KubeSphere 3.3 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). - Available CPU > 1 Core and Memory > 2 G. Only x86_64 CPUs are supported, and Arm CPUs are not fully supported at present. - A **default** StorageClass in your Kubernetes cluster is configured; use `kubectl get sc` to verify it. - The CSR signing feature is activated in kube-apiserver when it is started with the `--cluster-signing-cert-file` and `--cluster-signing-key-file` parameters. See [RKE installation issue](https://github.com/kubesphere/kubesphere/issues/1925#issuecomment-591698309). diff --git a/content/en/docs/v3.3/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md b/content/en/docs/v3.3/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md index f4680b5c7..b48d8606d 100644 --- a/content/en/docs/v3.3/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md +++ b/content/en/docs/v3.3/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md @@ -89,7 +89,7 @@ As you install KubeSphere in an air-gapped environment, you need to prepare an i 1. Download the image list file `images-list.txt` from a machine that has access to the Internet through the following command: ```bash - curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/images-list.txt + curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/images-list.txt ``` {{< notice note >}} @@ -101,7 +101,7 @@ As you install KubeSphere in an air-gapped environment, you need to prepare an i 2. 
Download `offline-installation-tool.sh`. ```bash - curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/offline-installation-tool.sh + curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/offline-installation-tool.sh ``` 3. Make the `.sh` file executable. @@ -124,7 +124,7 @@ As you install KubeSphere in an air-gapped environment, you need to prepare an i -l IMAGES-LIST : text file with list of images. -r PRIVATE-REGISTRY : target private registry:port. -s : save model will be applied. Pull the images in the IMAGES-LIST and save images as a tar.gz file. - -v KUBERNETES-VERSION : download kubernetes' binaries. default: v1.22.10 + -v KUBERNETES-VERSION : download kubernetes' binaries. default: v1.21.5 -h : usage message ``` @@ -161,8 +161,8 @@ Similar to installing KubeSphere on an existing Kubernetes cluster in an online 1. Execute the following commands to download these two files and transfer them to your machine that serves as the taskbox for installation. ```bash - curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml - curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml + curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml ``` 2. Edit `cluster-configuration.yaml` to add your private image registry. For example, `dockerhub.kubekey.local` is the registry address in this tutorial, then use it as the value of `.spec.local_registry` as below: @@ -242,37 +242,37 @@ To access the console, make sure port 30880 is opened in your security group. 
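To illustrate what the `-r PRIVATE-REGISTRY` option above implies, the sketch below mirrors the renaming each entry of `images-list.txt` goes through before being pushed: the private registry address is prefixed while the `repository/name:tag` part is kept. The `retag` helper is hypothetical; the real work is done by `offline-installation-tool.sh` with `docker tag` and `docker push`, and `dockerhub.kubekey.local` is the example registry used in this tutorial.

```shell
# Hypothetical sketch of the rename applied to each image before pushing.
registry="dockerhub.kubekey.local"

retag() {
  # Prefix the private registry; keep repository/name:tag unchanged.
  echo "${registry}/$1"
}

retag "kubesphere/ks-installer:v3.3.1"   # -> dockerhub.kubekey.local/kubesphere/ks-installer:v3.3.1
retag "calico/cni:v3.23.2"               # -> dockerhub.kubekey.local/calico/cni:v3.23.2
```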
## Appendix -### Image list of KubeSphere 3.3.0 +### Image list of KubeSphere 3.3 ```txt ##k8s-images -kubesphere/kube-apiserver:v1.23.7 -kubesphere/kube-controller-manager:v1.23.7 -kubesphere/kube-proxy:v1.23.7 -kubesphere/kube-scheduler:v1.23.7 -kubesphere/kube-apiserver:v1.24.1 -kubesphere/kube-controller-manager:v1.24.1 -kubesphere/kube-proxy:v1.24.1 -kubesphere/kube-scheduler:v1.24.1 -kubesphere/kube-apiserver:v1.22.10 -kubesphere/kube-controller-manager:v1.22.10 -kubesphere/kube-proxy:v1.22.10 -kubesphere/kube-scheduler:v1.22.10 -kubesphere/kube-apiserver:v1.21.13 -kubesphere/kube-controller-manager:v1.21.13 -kubesphere/kube-proxy:v1.21.13 -kubesphere/kube-scheduler:v1.21.13 +kubesphere/kube-apiserver:v1.23.10 +kubesphere/kube-controller-manager:v1.23.10 +kubesphere/kube-proxy:v1.23.10 +kubesphere/kube-scheduler:v1.23.10 +kubesphere/kube-apiserver:v1.24.3 +kubesphere/kube-controller-manager:v1.24.3 +kubesphere/kube-proxy:v1.24.3 +kubesphere/kube-scheduler:v1.24.3 +kubesphere/kube-apiserver:v1.22.12 +kubesphere/kube-controller-manager:v1.22.12 +kubesphere/kube-proxy:v1.22.12 +kubesphere/kube-scheduler:v1.22.12 +kubesphere/kube-apiserver:v1.21.14 +kubesphere/kube-controller-manager:v1.21.14 +kubesphere/kube-proxy:v1.21.14 +kubesphere/kube-scheduler:v1.21.14 kubesphere/pause:3.7 kubesphere/pause:3.6 kubesphere/pause:3.5 kubesphere/pause:3.4.1 coredns/coredns:1.8.0 coredns/coredns:1.8.6 -calico/cni:v3.20.0 -calico/kube-controllers:v3.20.0 -calico/node:v3.20.0 -calico/pod2daemon-flexvol:v3.20.0 -calico/typha:v3.20.0 +calico/cni:v3.23.2 +calico/kube-controllers:v3.23.2 +calico/node:v3.23.2 +calico/pod2daemon-flexvol:v3.23.2 +calico/typha:v3.23.2 kubesphere/flannel:v0.12.0 openebs/provisioner-localpv:2.10.1 openebs/linux-utils:2.10.0 @@ -280,10 +280,11 @@ library/haproxy:2.3 kubesphere/nfs-subdir-external-provisioner:v4.0.2 kubesphere/k8s-dns-node-cache:1.15.12 ##kubesphere-images -kubesphere/ks-installer:v3.3.0 -kubesphere/ks-apiserver:v3.3.0 
-kubesphere/ks-console:v3.3.0 -kubesphere/ks-controller-manager:v3.3.0 +kubesphere/ks-installer:v3.3.1 +kubesphere/ks-apiserver:v3.3.1 +kubesphere/ks-console:v3.3.1 +kubesphere/ks-controller-manager:v3.3.1 +kubesphere/ks-upgrade:v3.3.1 kubesphere/kubectl:v1.22.0 kubesphere/kubectl:v1.21.0 kubesphere/kubectl:v1.20.0 @@ -307,11 +308,11 @@ kubesphere/edgeservice:v0.2.0 ##gatekeeper-images openpolicyagent/gatekeeper:v3.5.2 ##openpitrix-images -kubesphere/openpitrix-jobs:v3.2.1 +kubesphere/openpitrix-jobs:v3.3.1 ##kubesphere-devops-images -kubesphere/devops-apiserver:v3.3.0 -kubesphere/devops-controller:v3.3.0 -kubesphere/devops-tools:v3.3.0 +kubesphere/devops-apiserver:v3.3.1 +kubesphere/devops-controller:v3.3.1 +kubesphere/devops-tools:v3.3.1 kubesphere/ks-jenkins:v3.3.0-2.319.1 jenkins/inbound-agent:4.10-2 kubesphere/builder-base:v3.2.2 @@ -360,7 +361,7 @@ prom/prometheus:v2.34.0 kubesphere/prometheus-config-reloader:v0.55.1 kubesphere/prometheus-operator:v0.55.1 kubesphere/kube-rbac-proxy:v0.11.0 -kubesphere/kube-state-metrics:v2.3.0 +kubesphere/kube-state-metrics:v2.5.0 prom/node-exporter:v1.3.1 prom/alertmanager:v0.23.0 thanosio/thanos:v0.25.2 @@ -399,7 +400,6 @@ joosthofman/wget:1.0 nginxdemos/hello:plain-text wordpress:4.8-apache mirrorgooglecontainers/hpa-example:latest -java:openjdk-8-jre-alpine fluent/fluentd:v1.4.2-2.0 perl:latest kubesphere/examples-bookinfo-productpage-v1:1.16.2 diff --git a/content/en/docs/v3.3/installing-on-linux/cluster-operation/add-edge-nodes.md b/content/en/docs/v3.3/installing-on-linux/cluster-operation/add-edge-nodes.md index 9f798a2fe..6f3d41de6 100644 --- a/content/en/docs/v3.3/installing-on-linux/cluster-operation/add-edge-nodes.md +++ b/content/en/docs/v3.3/installing-on-linux/cluster-operation/add-edge-nodes.md @@ -21,55 +21,12 @@ This tutorial demonstrates how to add an edge node to your cluster. ## Prerequisites - You have enabled [KubeEdge](../../../pluggable-components/kubeedge/). 
-- To prevent compatability issues, you are advised to install Kubernetes v1.21.x or earlier. - You have an available node to serve as an edge node. The node can run either Ubuntu (recommended) or CentOS. This tutorial uses Ubuntu 18.04 as an example. - Edge nodes, unlike Kubernetes cluster nodes, should work in a separate network. -## Prevent non-edge workloads from being scheduled to edge nodes - -Due to the tolerations some daemonsets (for example, Calico) have, to ensure that the newly added edge nodes work properly, you need to run the following command to manually patch the pods so that non-edge workloads will not be scheduled to the edge nodes. - -```bash -#!/bin/bash - - -NoShedulePatchJson='{"spec":{"template":{"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}}}}' - -ns="kube-system" - - -DaemonSets=("nodelocaldns" "kube-proxy" "calico-node") - -length=${#DaemonSets[@]} - -for((i=0;i}} - In `ClusterConfiguration` of the ks-installer, if you set an internal IP address, you need to set the forwarding rule. If you have not set the forwarding rule, you can directly connect to ports 30000 to 30004. - {{}} - -| Fields | External Ports | Fields | Internal Ports | -| ------------------- | -------------- | ----------------------- | -------------- | -| `cloudhubPort` | `10000` | `cloudhubNodePort` | `30000` | -| `cloudhubQuicPort` | `10001` | `cloudhubQuicNodePort` | `30001` | -| `cloudhubHttpsPort` | `10002` | `cloudhubHttpsNodePort` | `30002` | -| `cloudstreamPort` | `10003` | `cloudstreamNodePort` | `30003` | -| `tunnelPort` | `10004` | `tunnelNodePort` | `30004` | - ## Configure an Edge Node -You need to configure the edge node as follows. +You need to install a container runtime and configure EdgeMesh on your edge node. 
### Install a container runtime @@ -115,6 +72,22 @@ Perform the following steps to configure [EdgeMesh](https://kubeedge.io/en/docs/ net.ipv4.ip_forward = 1 ``` +## Create Firewall Rules and Port Forwarding Rules + +To make sure edge nodes can successfully talk to your cluster, you must forward ports for outside traffic to get into your network. Specifically, map an external port to the corresponding internal IP address (control plane node) and port based on the table below. Besides, you also need to create firewall rules to allow traffic to these ports (`10000` to `10004`). + + {{< notice note >}} + In `ClusterConfiguration` of the ks-installer, if you set an internal IP address, you need to set the forwarding rule. If you have not set the forwarding rule, you can directly connect to ports 30000 to 30004. + {{}} + +| Fields | External Ports | Fields | Internal Ports | +| ------------------- | -------------- | ----------------------- | -------------- | +| `cloudhubPort` | `10000` | `cloudhubNodePort` | `30000` | +| `cloudhubQuicPort` | `10001` | `cloudhubQuicNodePort` | `30001` | +| `cloudhubHttpsPort` | `10002` | `cloudhubHttpsNodePort` | `30002` | +| `cloudstreamPort` | `10003` | `cloudstreamNodePort` | `30003` | +| `tunnelPort` | `10004` | `tunnelNodePort` | `30004` | + ## Add an Edge Node 1. Log in to the console as `admin` and click **Platform** in the upper-left corner. @@ -129,8 +102,6 @@ Perform the following steps to configure [EdgeMesh](https://kubeedge.io/en/docs/ 3. Click **Add**. In the dialog that appears, set a node name and enter an internal IP address of your edge node. Click **Validate** to continue. - ![add-edge-node](/images/docs/v3.3/installing-on-linux/add-and-delete-nodes/add-edge-nodes/add-edge-node.png) - {{< notice note >}} - The internal IP address is only used for inter-node communication and you do not necessarily need to use the actual internal IP address of the edge node. 
As long as the IP address is successfully validated, you can use it.
@@ -140,8 +111,6 @@ Perform the following steps to configure [EdgeMesh](https://kubeedge.io/en/docs/

4. Copy the command automatically created under **Edge Node Configuration Command** and run it on your edge node.

-   ![edge-command](/images/docs/v3.3/installing-on-linux/add-and-delete-nodes/add-edge-nodes/edge-command.png)
-
   {{< notice note >}}

   Make sure `wget` is installed on your edge node before you run the command.

@@ -200,7 +169,38 @@ To collect monitoring information on edge node, you need to enable `metrics_serv

   systemctl restart edgecore.service
   ```

-9. If you still cannot see the monitoring data, run the following command:
+9. After an edge node joins your cluster, some Pods scheduled to it may remain in the `Pending` state. Due to the tolerations some DaemonSets (for example, Calico) have, you need to manually patch some Pods so that they will not be scheduled to the edge node.
+
+   ```bash
+   #!/bin/bash
+
+   NodeSelectorPatchJson='{"spec":{"template":{"spec":{"nodeSelector":{"node-role.kubernetes.io/master": "","node-role.kubernetes.io/worker": ""}}}}}'
+
+   NoSchedulePatchJson='{"spec":{"template":{"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}}}}'
+
+   edgenode="edgenode"
+   if [ -n "$1" ]; then
+       edgenode="$1"
+   fi
+
+
+   namespaces=($(kubectl get pods -A -o wide |egrep -i $edgenode | awk '{print $1}' ))
+   pods=($(kubectl get pods -A -o wide |egrep -i $edgenode | awk '{print $2}' ))
+   length=${#namespaces[@]}
+
+
+   for((i=0;i<$length;i++));
+   do
+       ns=${namespaces[$i]}
+       pod=${pods[$i]}
+       resources=$(kubectl -n $ns describe pod $pod | grep "Controlled By" |awk '{print $3}')
+       echo "Patching for ns:"${namespaces[$i]}",resources:"$resources
+       kubectl -n $ns patch $resources --type merge --patch "$NoSchedulePatchJson"
+
sleep 1 + done + ``` + +10. If you still cannot see the monitoring data, run the following command: ```bash journalctl -u edgecore.service -b -r diff --git a/content/en/docs/v3.3/installing-on-linux/high-availability-configurations/ha-configuration.md b/content/en/docs/v3.3/installing-on-linux/high-availability-configurations/ha-configuration.md index d9015da3f..3db607388 100644 --- a/content/en/docs/v3.3/installing-on-linux/high-availability-configurations/ha-configuration.md +++ b/content/en/docs/v3.3/installing-on-linux/high-availability-configurations/ha-configuration.md @@ -48,7 +48,7 @@ You must create a load balancer in your environment to listen (also known as lis Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly. ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -64,7 +64,7 @@ export KKZONE=cn Run the following command to download KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -79,7 +79,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n {{< notice note >}} -The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version. +The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version. {{}} @@ -92,12 +92,12 @@ chmod +x kk Create an example configuration file with default configurations. Here Kubernetes v1.22.10 is used as an example. 
```bash -./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.22.10 +./kk create config --with-kubesphere v3.3.1 --with-kubernetes v1.22.10 ``` {{< notice note >}} -- Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix). +- Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix). - If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later. - If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed. diff --git a/content/en/docs/v3.3/installing-on-linux/high-availability-configurations/internal-ha-configuration.md b/content/en/docs/v3.3/installing-on-linux/high-availability-configurations/internal-ha-configuration.md index 793c2be73..73185d39f 100644 --- a/content/en/docs/v3.3/installing-on-linux/high-availability-configurations/internal-ha-configuration.md +++ b/content/en/docs/v3.3/installing-on-linux/high-availability-configurations/internal-ha-configuration.md @@ -33,7 +33,7 @@ Refer to the following steps to download KubeKey. Download KubeKey from [its GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or run the following command. 
```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -49,7 +49,7 @@ export KKZONE=cn Run the following command to download KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -64,7 +64,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n {{< notice note >}} -The preceding commands download the latest release of KubeKey (v2.2.2). You can modify the version number in the command to download a specific version. +The preceding commands download the latest release of KubeKey (v2.3.0). You can modify the version number in the command to download a specific version. {{}} @@ -77,12 +77,12 @@ chmod +x kk Create an example configuration file with default configurations. Here Kubernetes v1.22.10 is used as an example. ```bash -./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.22.10 +./kk create config --with-kubesphere v3.3.1 --with-kubernetes v1.22.10 ``` {{< notice note >}} -- Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix). +- Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix). 
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
@@ -132,7 +132,7 @@ For more information about different fields in this configuration file, see [Kub
 spec:
   controlPlaneEndpoint:
     ##Internal loadbalancer for apiservers
-    internalLoadbalancer: haproxy
+    internalLoadbalancer: haproxy
   domain: lb.kubesphere.local
   address: ""
diff --git a/content/en/docs/v3.3/installing-on-linux/high-availability-configurations/set-up-ha-cluster-using-keepalived-haproxy.md b/content/en/docs/v3.3/installing-on-linux/high-availability-configurations/set-up-ha-cluster-using-keepalived-haproxy.md
index 3d080e597..73ef10d6d 100644
--- a/content/en/docs/v3.3/installing-on-linux/high-availability-configurations/set-up-ha-cluster-using-keepalived-haproxy.md
+++ b/content/en/docs/v3.3/installing-on-linux/high-availability-configurations/set-up-ha-cluster-using-keepalived-haproxy.md
@@ -268,7 +268,7 @@ Before you start to create your Kubernetes cluster, make sure you have tested th

Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -284,7 +284,7 @@ export KKZONE=cn Run the following command to download KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -299,7 +299,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n {{< notice note >}} -The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version. +The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version. {{}} @@ -312,12 +312,12 @@ chmod +x kk Create an example configuration file with default configurations. Here Kubernetes v1.22.10 is used as an example. ```bash -./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.22.10 +./kk create config --with-kubesphere v3.3.1 --with-kubernetes v1.22.10 ``` {{< notice note >}} -- Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix). +- Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix). 
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
diff --git a/content/en/docs/v3.3/installing-on-linux/introduction/air-gapped-installation.md b/content/en/docs/v3.3/installing-on-linux/introduction/air-gapped-installation.md
index 24fa2b1ea..a395da02f 100644
--- a/content/en/docs/v3.3/installing-on-linux/introduction/air-gapped-installation.md
+++ b/content/en/docs/v3.3/installing-on-linux/introduction/air-gapped-installation.md
@@ -15,12 +15,12 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides

|Host IP| Host Name | Usage |
| ---------------- | ---- | ---------------- |
-|192.168.0.2 | node1 | Online host for packaging the source cluster with Kubernetes v1.22.10 and KubeSphere v3.3.0 installed |
+|192.168.0.2 | node1 | Online host for packaging the source cluster with Kubernetes v1.22.10 and KubeSphere v3.3.1 installed |
|192.168.0.3 | node2 | Control plane node of the air-gapped environment |
|192.168.0.4 | node3 | Image registry node of the air-gapped environment |

## Preparations

-1. Run the following commands to download KubeKey v2.2.2.
+1. Run the following commands to download KubeKey v2.3.0.

   {{< tabs >}}

   {{< tab "Good network connections to GitHub/Googleapis" >}}

@@ -28,7 +28,7 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides

 Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
```bash - curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - + curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -44,7 +44,7 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides Run the following command to download KubeKey: ```bash - curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - + curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -83,7 +83,7 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides repository: iso: localPath: - url: https://github.com/kubesphere/kubekey/releases/download/v2.2.2/centos7-rpms-amd64.iso + url: https://github.com/kubesphere/kubekey/releases/download/v2.3.0/centos7-rpms-amd64.iso - arch: amd64 type: linux id: ubuntu @@ -91,13 +91,13 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides repository: iso: localPath: - url: https://github.com/kubesphere/kubekey/releases/download/v2.2.2/ubuntu-20.04-debs-amd64.iso + url: https://github.com/kubesphere/kubekey/releases/download/v2.3.0/ubuntu-20.04-debs-amd64.iso kubernetesDistributions: - type: kubernetes - version: v1.22.10 + version: v1.22.12 components: helm: - version: v3.6.3 + version: v3.9.0 cni: version: v0.9.1 etcd: @@ -112,14 +112,14 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides docker-registry: version: "2" harbor: - version: v2.4.1 + version: v2.5.3 docker-compose: version: v2.2.2 images: - - docker.io/kubesphere/kube-apiserver:v1.22.10 - - docker.io/kubesphere/kube-controller-manager:v1.22.10 - - docker.io/kubesphere/kube-proxy:v1.22.10 - - docker.io/kubesphere/kube-scheduler:v1.22.10 + - docker.io/kubesphere/kube-apiserver:v1.22.12 + - docker.io/kubesphere/kube-controller-manager:v1.22.12 + - docker.io/kubesphere/kube-proxy:v1.22.12 + - docker.io/kubesphere/kube-scheduler:v1.22.12 - docker.io/kubesphere/pause:3.5 - docker.io/coredns/coredns:1.8.0 - docker.io/calico/cni:v3.23.2 @@ -133,13 
+133,14 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides - docker.io/library/haproxy:2.3 - docker.io/kubesphere/nfs-subdir-external-provisioner:v4.0.2 - docker.io/kubesphere/k8s-dns-node-cache:1.15.12 - - docker.io/kubesphere/ks-installer:v3.3.0 - - docker.io/kubesphere/ks-apiserver:v3.3.0 - - docker.io/kubesphere/ks-console:v3.3.0 - - docker.io/kubesphere/ks-controller-manager:v3.3.0 - - docker.io/kubesphere/kubectl:v1.20.0 - - docker.io/kubesphere/kubectl:v1.21.0 + - docker.io/kubesphere/ks-installer:v3.3.1 + - docker.io/kubesphere/ks-apiserver:v3.3.1 + - docker.io/kubesphere/ks-console:v3.3.1 + - docker.io/kubesphere/ks-controller-manager:v3.3.1 + - docker.io/kubesphere/ks-upgrade:v3.3.1 - docker.io/kubesphere/kubectl:v1.22.0 + - docker.io/kubesphere/kubectl:v1.21.0 + - docker.io/kubesphere/kubectl:v1.20.0 - docker.io/kubesphere/kubefed:v0.8.1 - docker.io/kubesphere/tower:v0.2.0 - docker.io/minio/minio:RELEASE.2019-08-07T01-59-21Z @@ -156,10 +157,11 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides - docker.io/kubeedge/cloudcore:v1.9.2 - docker.io/kubeedge/iptables-manager:v1.9.2 - docker.io/kubesphere/edgeservice:v0.2.0 - - docker.io/kubesphere/openpitrix-jobs:v3.2.1 - - docker.io/kubesphere/devops-apiserver:v3.3.0 - - docker.io/kubesphere/devops-controller:v3.3.0 - - docker.io/kubesphere/devops-tools:v3.3.0 + - docker.io/openpolicyagent/gatekeeper:v3.5.2 + - docker.io/kubesphere/openpitrix-jobs:v3.3.1 + - docker.io/kubesphere/devops-apiserver:v3.3.1 + - docker.io/kubesphere/devops-controller:v3.3.1 + - docker.io/kubesphere/devops-tools:v3.3.1 - docker.io/kubesphere/ks-jenkins:v3.3.0-2.319.1 - docker.io/jenkins/inbound-agent:4.10-2 - docker.io/kubesphere/builder-base:v3.2.2 @@ -207,7 +209,7 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides - docker.io/kubesphere/prometheus-config-reloader:v0.55.1 - docker.io/kubesphere/prometheus-operator:v0.55.1 - 
docker.io/kubesphere/kube-rbac-proxy:v0.11.0 - - docker.io/kubesphere/kube-state-metrics:v2.3.0 + - docker.io/kubesphere/kube-state-metrics:v2.5.0 - docker.io/prom/node-exporter:v1.3.1 - docker.io/prom/alertmanager:v0.23.0 - docker.io/thanosio/thanos:v0.25.2 @@ -243,7 +245,6 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides - docker.io/nginxdemos/hello:plain-text - docker.io/library/wordpress:4.8-apache - docker.io/mirrorgooglecontainers/hpa-example:latest - - docker.io/library/java:openjdk-8-jre-alpine - docker.io/fluent/fluentd:v1.4.2-2.0 - docker.io/library/perl:latest - docker.io/kubesphere/examples-bookinfo-productpage-v1:1.16.2 @@ -264,7 +265,7 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides - You can customize the **manifest-sample.yaml** file to export the desired artifact file. - - You can download the ISO files at https://github.com/kubesphere/kubekey/releases/tag/v2.2.2. + - You can download the ISO files at https://github.com/kubesphere/kubekey/releases/tag/v2.3.0. {{}} @@ -309,7 +310,7 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides 2. Run the following command to create a configuration file for the air-gapped cluster: ```bash - ./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.22.10 -f config-sample.yaml + ./kk create config --with-kubesphere v3.3.1 --with-kubernetes v1.22.10 -f config-sample.yaml ``` 3. 
Run the following command to modify the configuration file: @@ -354,7 +355,7 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides
   address: ""
   port: 6443
  kubernetes:
-   version: v1.22.10
+   version: v1.22.12
  clusterName: cluster.local
  network:
    plugin: calico
diff --git a/content/en/docs/v3.3/installing-on-linux/introduction/kubekey.md b/content/en/docs/v3.3/installing-on-linux/introduction/kubekey.md
index c49a3343d..4ed386719 100644
--- a/content/en/docs/v3.3/installing-on-linux/introduction/kubekey.md
+++ b/content/en/docs/v3.3/installing-on-linux/introduction/kubekey.md
@@ -38,7 +38,7 @@ With the configuration file in place, you execute the `./kk` command with varied

Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.

```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```

{{}}

@@ -54,7 +54,7 @@ export KKZONE=cn

Run the following command to download KubeKey:

```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```

{{< notice note >}}

@@ -69,21 +69,21 @@ After you download KubeKey, if you transfer it to a new machine also with poor n

{{< notice note >}}

-The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version.
+The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version.

{{}}

## Support Matrix

-If you want to use KubeKey to install both Kubernetes and KubeSphere 3.3.0, see the following table of all supported Kubernetes versions.
+If you want to use KubeKey to install both Kubernetes and KubeSphere 3.3, see the following table of all supported Kubernetes versions.
| KubeSphere version | Supported Kubernetes versions |
| ------------------ | ------------------------------------------------------------ |
-| v3.3.0 | v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support) |
+| v3.3.1 | v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support) |

{{< notice note >}}

- You can also run `./kk version --show-supported-k8s` to see all supported Kubernetes versions that can be installed by KubeKey.
-- The Kubernetes versions that can be installed using KubeKey are different from the Kubernetes versions supported by KubeSphere v3.3.0. If you want to [install KubeSphere 3.3.0 on an existing Kubernetes cluster](../../../installing-on-kubernetes/introduction/overview/), your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
-- If you want to use KubeEdge, you are advised to install Kubernetes v1.21.x or earlier to prevent compatability issues.
+- The Kubernetes versions that can be installed using KubeKey are different from the Kubernetes versions supported by KubeSphere 3.3. If you want to [install KubeSphere 3.3 on an existing Kubernetes cluster](../../../installing-on-kubernetes/introduction/overview/), your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
+- If you want to use KubeEdge, you are advised to install Kubernetes v1.22.x or earlier to prevent compatibility issues.

{{}} \ No newline at end of file
diff --git a/content/en/docs/v3.3/installing-on-linux/introduction/multioverview.md b/content/en/docs/v3.3/installing-on-linux/introduction/multioverview.md
index 5de688d5a..01f36357f 100644
--- a/content/en/docs/v3.3/installing-on-linux/introduction/multioverview.md
+++ b/content/en/docs/v3.3/installing-on-linux/introduction/multioverview.md
@@ -110,7 +110,7 @@ Follow the step below to download [KubeKey](../kubekey).
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly. ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -126,7 +126,7 @@ export KKZONE=cn Run the following command to download KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -141,7 +141,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n {{< notice note >}} -The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version. +The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version. {{}} @@ -165,7 +165,7 @@ Command: {{< notice note >}} -- Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../kubekey/#support-matrix). +- Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../kubekey/#support-matrix). - If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later. 
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
@@ -180,7 +180,7 @@ Here are some examples for your reference:
 ./kk create config [-f ~/myfolder/abc.yaml]
 ```
-- You can specify a KubeSphere version that you want to install (for example, `--with-kubesphere v3.3.0`).
+- You can specify a KubeSphere version that you want to install (for example, `--with-kubesphere v3.3.1`).

 ```bash
 ./kk create config --with-kubesphere [version]
@@ -254,13 +254,6 @@ At the same time, you must provide the login information used to connect to each
 hosts:
 - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, privateKeyPath: "~/.ssh/id_rsa"}
 ```
-
-- For installation on ARM devices:
-
-  ```yaml
-  hosts:
-  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123, arch: arm64}
-  ```
 {{< notice tip >}}
diff --git a/content/en/docs/v3.3/installing-on-linux/introduction/vars.md b/content/en/docs/v3.3/installing-on-linux/introduction/vars.md
index 189f4e36e..704d55e61 100644
--- a/content/en/docs/v3.3/installing-on-linux/introduction/vars.md
+++ b/content/en/docs/v3.3/installing-on-linux/introduction/vars.md
@@ -10,7 +10,7 @@ When creating a Kubernetes cluster, you can use [KubeKey](../kubekey/) to define
 ```yaml
 kubernetes:
-  version: v1.22.10
+  version: v1.22.10
  imageRepo: kubesphere
  clusterName: cluster.local
  masqueradeAll: false
@@ -45,7 +45,7 @@ The below table describes the above parameters in detail.
 version
-   The Kubernetes version to be installed. If you do not specify a Kubernetes version, {{< contentLink "docs/installing-on-linux/introduction/kubekey" "KubeKey" >}} v2.2.2 will install Kubernetes v1.23.7 by default. For more information, see {{< contentLink "docs/installing-on-linux/introduction/kubekey/#support-matrix" "Support Matrix" >}}.
+   The Kubernetes version to be installed. 
If you do not specify a Kubernetes version, {{< contentLink "docs/installing-on-linux/introduction/kubekey" "KubeKey" >}} v2.3.0 will install Kubernetes v1.23.7 by default. For more information, see {{< contentLink "docs/installing-on-linux/introduction/kubekey/#support-matrix" "Support Matrix" >}}. imageRepo @@ -111,7 +111,7 @@ The below table describes the above parameters in detail. privateRegistry* - Configure a private image registry for air-gapped installation (for example, a Docker local registry or Harbor). For more information, see {{< contentLink "docs/installing-on-linux/introduction/air-gapped-installation/" "Air-gapped Installation on Linux" >}}. + Configure a private image registry for air-gapped installation (for example, a Docker local registry or Harbor). For more information, see {{< contentLink "docs/v3.3/installing-on-linux/introduction/air-gapped-installation/" "Air-gapped Installation on Linux" >}}. diff --git a/content/en/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-and-k3s.md b/content/en/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-and-k3s.md index 142969343..21f810169 100644 --- a/content/en/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-and-k3s.md +++ b/content/en/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-and-k3s.md @@ -32,7 +32,7 @@ Follow the step below to download [KubeKey](../../../installing-on-linux/introdu Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly. 
```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -48,7 +48,7 @@ export KKZONE=cn Run the following command to download KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -63,7 +63,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n {{< notice note >}} -The commands above download the latest release (v2.2.2) of KubeKey. Note that an earlier version of KubeKey cannot be used to install K3s. +The commands above download the latest release (v2.3.0) of KubeKey. Note that an earlier version of KubeKey cannot be used to install K3s. {{}} @@ -78,12 +78,12 @@ chmod +x kk 1. Create a configuration file of your cluster by running the following command: ```bash - ./kk create config --with-kubernetes v1.21.4-k3s --with-kubesphere v3.3.0 + ./kk create config --with-kubernetes v1.21.4-k3s --with-kubesphere v3.3.1 ``` {{< notice note >}} - KubeKey v2.2.2 supports the installation of K3s v1.21.4. + KubeKey v2.3.0 supports the installation of K3s v1.21.4. {{}} diff --git a/content/en/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-bare-metal.md b/content/en/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-bare-metal.md index b34d3378c..c0cc50c8b 100644 --- a/content/en/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-bare-metal.md +++ b/content/en/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-bare-metal.md @@ -199,7 +199,7 @@ Follow the step below to download KubeKey. Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly. 
```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -215,7 +215,7 @@ export KKZONE=cn Run the following command to download KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -230,7 +230,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n {{< notice note >}} -The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version. +The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version. {{}} @@ -244,15 +244,15 @@ chmod +x kk With KubeKey, you can install Kubernetes and KubeSphere together. You have the option to create a multi-node cluster by customizing parameters in the configuration file. -Create a Kubernetes cluster with KubeSphere installed (for example, `--with-kubesphere v3.3.0`): +Create a Kubernetes cluster with KubeSphere installed (for example, `--with-kubesphere v3.3.1`): ```bash -./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 +./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.1 ``` {{< notice note >}} -- Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix). +- Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. 
For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix). - If you do not add the flag `--with-kubesphere` in the command above, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later. - If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed. diff --git a/content/en/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md b/content/en/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md index a3586a340..002d2b983 100644 --- a/content/en/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md +++ b/content/en/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md @@ -289,7 +289,7 @@ systemctl status -l keepalived ## Download KubeKey -[Kubekey](https://github.com/kubesphere/kubekey) is the brand-new installer which provides an easy, fast and flexible way to install Kubernetes and KubeSphere 3.3.0. +[Kubekey](https://github.com/kubesphere/kubekey) is the brand-new installer which provides an easy, fast and flexible way to install Kubernetes and KubeSphere 3.3. Follow the step below to download KubeKey. @@ -300,7 +300,7 @@ Follow the step below to download KubeKey. Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly. 
```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -316,7 +316,7 @@ export KKZONE=cn Run the following command to download KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -331,7 +331,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n {{< notice note >}} -The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version. +The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version. {{}} @@ -345,15 +345,15 @@ chmod +x kk With KubeKey, you can install Kubernetes and KubeSphere together. You have the option to create a multi-node cluster by customizing parameters in the configuration file. -Create a Kubernetes cluster with KubeSphere installed (for example, `--with-kubesphere v3.3.0`): +Create a Kubernetes cluster with KubeSphere installed (for example, `--with-kubesphere v3.3.1`): ```bash -./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 +./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.1 ``` {{< notice note >}} -- Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix). +- Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. 
For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
@@ -398,7 +398,7 @@ spec:
    address: "10.10.71.67"
    port: 6443
  kubernetes:
-   version: v1.22.10
+   version: v1.22.10
  imageRepo: kubesphere
  clusterName: cluster.local
  masqueradeAll: false # masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. [Default: false]
@@ -422,8 +422,6 @@ spec:
    localVolume:
      storageClassName: local
----
----
 ---
 apiVersion: installer.kubesphere.io/v1alpha1
 kind: ClusterConfiguration
@@ -431,184 +429,70 @@ metadata:
   name: ks-installer
   namespace: kubesphere-system
   labels:
-    version: v3.3.0
+    version: v3.3.1
 spec:
+  local_registry: ""
   persistence:
-    storageClass: "" # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
+    storageClass: ""
   authentication:
-    jwtSecret: "" # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.
-    local_registry: "" # Add your private registry address if it is needed.
-    # dev_tag: "" # Add your kubesphere image tag you want to install, by default it's same as ks-installer release version.
+    jwtSecret: ""
   etcd:
-    monitoring: false # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
-    endpointIps: localhost # etcd cluster EndpointIps. It can be a bunch of IPs here.
-    port: 2379 # etcd port. 
+ monitoring: true # Whether to install etcd monitoring dashboard + endpointIps: 192.168.0.7,192.168.0.8,192.168.0.9 # etcd cluster endpointIps + port: 2379 # etcd port tlsEnable: true common: - core: - console: - enableMultiLogin: true # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time. - port: 30880 - type: NodePort - # apiserver: # Enlarge the apiserver and controller manager's resource requests and limits for the large cluster - # resources: {} - # controllerManager: - # resources: {} - redis: - enabled: false - enableHA: false - volumeSize: 2Gi # Redis PVC size. - openldap: - enabled: false - volumeSize: 2Gi # openldap PVC size. - minio: - volumeSize: 20Gi # Minio PVC size. - monitoring: - # type: external # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line. - endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data. - GPUMonitoring: # Enable or disable the GPU-related metrics. If you enable this switch but have no GPU resources, Kubesphere will set it to zero. - enabled: false - gpu: # Install GPUKinds. The default GPU kind is nvidia.com/gpu. Other GPU kinds can be added here according to your needs. - kinds: - - resourceName: "nvidia.com/gpu" - resourceType: "GPU" - default: true - es: # Storage backend for logging, events and auditing. - # master: - # volumeSize: 4Gi # The volume size of Elasticsearch master nodes. - # replicas: 1 # The total number of master nodes. Even numbers are not allowed. - # resources: {} - # data: - # volumeSize: 20Gi # The volume size of Elasticsearch data nodes. - # replicas: 1 # The total number of data nodes. - # resources: {} - logMaxAge: 7 # Log retention time in built-in Elasticsearch. It is 7 days by default. - elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. 
- basicAuth: - enabled: false - username: "" - password: "" - externalElasticsearchHost: "" - externalElasticsearchPort: "" - alerting: # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from. - enabled: false # Enable or disable the KubeSphere Alerting System. - # thanosruler: - # replicas: 1 - # resources: {} - auditing: # Provide a security-relevant chronological set of records,recording the sequence of activities happening on the platform, initiated by different tenants. - enabled: false # Enable or disable the KubeSphere Auditing Log System. - # operator: - # resources: {} - # webhook: - # resources: {} - devops: # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image. - enabled: false # Enable or disable the KubeSphere DevOps System. - # resources: {} - jenkinsMemoryLim: 2Gi # Jenkins memory limit. - jenkinsMemoryReq: 1500Mi # Jenkins memory request. - jenkinsVolumeSize: 8Gi # Jenkins volume size. - jenkinsJavaOpts_Xms: 1200m # The following three fields are JVM parameters. - jenkinsJavaOpts_Xmx: 1600m - jenkinsJavaOpts_MaxRAM: 2g - events: # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters. - enabled: false # Enable or disable the KubeSphere Events System. - # operator: - # resources: {} - # exporter: - # resources: {} - # ruler: - # enabled: true - # replicas: 2 - # resources: {} - logging: # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd. - enabled: false # Enable or disable the KubeSphere Logging System. 
- logsidecar: - enabled: true - replicas: 2 - # resources: {} - metrics_server: # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler). - enabled: false # Enable or disable metrics-server. - monitoring: - storageClass: "" # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default. - node_exporter: - port: 9100 - # resources: {} - # kube_rbac_proxy: - # resources: {} - # kube_state_metrics: - # resources: {} - # prometheus: - # replicas: 1 # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability. - # volumeSize: 20Gi # Prometheus PVC size. - # resources: {} - # operator: - # resources: {} - # alertmanager: - # replicas: 1 # AlertManager Replicas. - # resources: {} - # notification_manager: - # resources: {} - # operator: - # resources: {} - # proxy: - # resources: {} - gpu: # GPU monitoring-related plug-in installation. - nvidia_dcgm_exporter: # Ensure that gpu resources on your hosts can be used normally, otherwise this plug-in will not work properly. - enabled: false # Check whether the labels on the GPU hosts contain "nvidia.com/gpu.present=true" to ensure that the DCGM pod is scheduled to these nodes. - # resources: {} - multicluster: - clusterRole: none # host | member | none # You can install a solo cluster, or specify it as the Host or Member Cluster. - network: - networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods). - # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net. - enabled: false # Enable or disable network policies. - ippool: # Use Pod IP Pools to manage the Pod network address space. 
Pods to be created can be assigned IP addresses from a Pod IP Pool. - type: none # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled. - topology: # Use Service Topology to view Service-to-Service communication based on Weave Scope. - type: none # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled. - openpitrix: # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle. - store: - enabled: false # Enable or disable the KubeSphere App Store. - servicemesh: # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology. - enabled: false # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based). - istio: # Customizing the istio installation configuration, refer to https://istio.io/latest/docs/setup/additional-setup/customize-installation/ - components: - ingressGateways: - - name: istio-ingressgateway - enabled: false - cni: - enabled: false - edgeruntime: # Add edge nodes to your cluster and deploy workloads on edge nodes. + mysqlVolumeSize: 20Gi # MySQL PVC size + minioVolumeSize: 20Gi # Minio PVC size + etcdVolumeSize: 20Gi # etcd PVC size + openldapVolumeSize: 2Gi # openldap PVC size + redisVolumSize: 2Gi # Redis PVC size + es: # Storage backend for logging, tracing, events and auditing. + elasticsearchMasterReplicas: 1 # total number of master nodes, it's not allowed to use even number + elasticsearchDataReplicas: 1 # total number of data nodes + elasticsearchMasterVolumeSize: 4Gi # Volume size of Elasticsearch master nodes + elasticsearchDataVolumeSize: 20Gi # Volume size of Elasticsearch data nodes + logMaxAge: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default. + elkPrefix: logstash # The string making up index names. 
The index name will be formatted as ks-<elk_prefix>-log
+ # externalElasticsearchUrl:
+ # externalElasticsearchPort:
+ console:
+ enableMultiLogin: false # Enable or disable simultaneous logins; it allows one account to be used by different users at the same time.
+ port: 30880
+ alerting: # Whether to install KubeSphere alerting system. It enables users to customize alerting policies to send messages to receivers in time, with different time intervals and alerting levels to choose from.
+ enabled: false
+ auditing: # Whether to install KubeSphere audit log system. It provides a security-relevant chronological set of records, recording the sequence of activities happening on the platform, initiated by different tenants.
+ enabled: false
+ devops: # Whether to install KubeSphere DevOps System. It provides an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
+ enabled: false
+ jenkinsMemoryLim: 2Gi # Jenkins memory limit
+ jenkinsMemoryReq: 1500Mi # Jenkins memory request
+ jenkinsVolumeSize: 8Gi # Jenkins volume size
+ jenkinsJavaOpts_Xms: 512m # The following three fields are JVM parameters
+ jenkinsJavaOpts_Xmx: 512m
+ jenkinsJavaOpts_MaxRAM: 2g
+ events: # Whether to install KubeSphere events system. It provides a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
+ enabled: false
+ logging: # Whether to install KubeSphere logging system. Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
+ enabled: false
+ logsidecarReplicas: 2
+ metrics_server: # Whether to install metrics-server. It enables HPA (Horizontal Pod Autoscaler).
+ enabled: true
+ monitoring:
+ prometheusReplicas: 1 # Prometheus replicas are responsible for monitoring different segments of the data source and providing high availability.
+ prometheusMemoryRequest: 400Mi # Prometheus memory request
+ prometheusVolumeSize: 20Gi # Prometheus PVC size
+ alertmanagerReplicas: 1 # AlertManager replicas
+ multicluster:
+ clusterRole: none # host | member | none # You can install a solo cluster, or specify it as the role of a host or member cluster.
+ networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
+ enabled: false
+ notification: # It supports notification management in multi-tenant Kubernetes clusters. It allows you to set AlertManager as its sender, and receivers include Email, WeChat Work, and Slack.
+ enabled: false
+ openpitrix: # Whether to install KubeSphere App Store. It provides an application store for Helm-based applications and offers application lifecycle management.
+ enabled: false
+ servicemesh: # (0.3 Core, 300 MiB) Provides fine-grained traffic management, observability and tracing, and visualized traffic topology
enabled: false
- kubeedge: # kubeedge configurations
- enabled: false
- cloudCore:
- cloudHub:
- advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
- "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
- service:
- cloudhubNodePort: "30000"
- cloudhubQuicNodePort: "30001"
- cloudhubHttpsNodePort: "30002"
- cloudstreamNodePort: "30003"
- tunnelNodePort: "30004"
- # resources: {}
- # hostNetWork: false
- iptables-manager:
- enabled: true
- mode: "external"
- # resources: {}
- # edgeService:
- # resources: {}
- gatekeeper: # Provide admission policy and rule management, A validating (mutating TBA) webhook that enforces CRD-based policies executed by Open Policy Agent.
- enabled: false # Enable or disable Gatekeeper.
- # controller_manager: - # resources: {} - # audit: - # resources: {} - terminal: - # image: 'alpine:3.15' # There must be an nsenter program in the image - timeout: 600 # Container timeout, if set to 0, no timeout will be used. The unit is seconds ``` Create a cluster using the configuration file you customized above: diff --git a/content/en/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-glusterfs.md b/content/en/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-glusterfs.md index 34669da74..4687a4634 100644 --- a/content/en/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-glusterfs.md +++ b/content/en/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-glusterfs.md @@ -119,7 +119,7 @@ Follow the steps below to download [KubeKey](../../../installing-on-linux/introd Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly. ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -135,7 +135,7 @@ export KKZONE=cn Run the following command to download KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -150,7 +150,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n {{< notice note >}} -The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version. +The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version. {{}} @@ -165,12 +165,12 @@ chmod +x kk 1. Specify a Kubernetes version and a KubeSphere version that you want to install. 
For example: ```bash - ./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 + ./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.1 ``` {{< notice note >}} - - Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix). + - Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix). - If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later. - If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed. 
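(A side note for reviewers of this change: the `./kk create config` command in the hunks above generates a `config-sample.yaml` that the later hunks then edit. A minimal sketch of the cluster part of that file is shown below for context — the host name, addresses, and credentials are placeholders, not values from this diff, and the exact schema should be checked against the KubeKey release being pinned.)

```yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
    # Placeholder host; replace the address and credentials with your own.
    - {name: node1, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: "Your-Password"}
  roleGroups:
    etcd:
      - node1
    control-plane:
      - node1
    worker:
      - node1
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.22.10
    clusterName: cluster.local
```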
@@ -205,7 +205,7 @@ chmod +x kk
address: ""
port: 6443
kubernetes:
- version: v1.22.10
+ version: v1.21.5
imageRepo: kubesphere
clusterName: cluster.local
network:
diff --git a/content/en/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-nfs-client.md b/content/en/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-nfs-client.md
index 97f38d46e..4dca043a8 100644
--- a/content/en/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-nfs-client.md
+++ b/content/en/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-nfs-client.md
@@ -11,7 +11,7 @@ This tutorial demonstrates how to set up a KubeSphere cluster and configure NFS
{{< notice note >}}
- Ubuntu 16.04 is used as an example in this tutorial.
-NFS is incompatible with some applications, for example, Prometheus, which may result in pod creation failures. If you need to use NFS in the production environment, ensure that you have understood the risks. For more information, contact support@kubesphere.cloud.
+It is not recommended that you use NFS storage for production (especially on Kubernetes version 1.20 or later) as some issues may occur, such as `failed to obtain lock` and `input/output error`, resulting in Pod `CrashLoopBackOff`. Besides, some apps may not be compatible with NFS, including [Prometheus](https://github.com/prometheus/prometheus/blob/03b354d4d9386e4b3bfbcd45da4bb58b182051a5/docs/storage.md#operational-aspects).
{{</ notice >}}
@@ -71,7 +71,7 @@ Follow the steps below to download [KubeKey](../../../installing-on-linux/introd
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{}}
@@ -87,7 +87,7 @@ export KKZONE=cn
Run the following command to download KubeKey:
```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{< notice note >}}
@@ -102,7 +102,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n
{{< notice note >}}
-The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version.
+The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version.
{{</ notice >}}
@@ -117,12 +117,12 @@ chmod +x kk
1. Specify a Kubernetes version and a KubeSphere version that you want to install. For example:
```bash
- ./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0
+ ./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.1
```
{{< notice note >}}
- - Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
+ - Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later. - If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed. @@ -157,7 +157,7 @@ chmod +x kk address: "" port: 6443 kubernetes: - version: v1.22.10 + version: v1.21.5 imageRepo: kubesphere clusterName: cluster.local network: diff --git a/content/en/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-qingcloud-csi.md b/content/en/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-qingcloud-csi.md index a74bbbd59..0cfd088aa 100644 --- a/content/en/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-qingcloud-csi.md +++ b/content/en/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-qingcloud-csi.md @@ -73,7 +73,7 @@ Follow the steps below to download [KubeKey](../../../installing-on-linux/introd Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly. ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -89,7 +89,7 @@ export KKZONE=cn Run the following command to download KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -104,7 +104,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n {{< notice note >}} -The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version. +The commands above download the latest release (v2.3.0) of KubeKey. 
You can change the version number in the command to download a specific version.
{{</ notice >}}
@@ -119,12 +119,12 @@ chmod +x kk
1. Specify a Kubernetes version and a KubeSphere version that you want to install. For example:
```bash
- ./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0
+ ./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.1
```
{{< notice note >}}
- - Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
+ - Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
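(Side note for readers of this change: the `--with-kubesphere` behavior described in the notices above is ultimately driven by the ks-installer `ClusterConfiguration` object. The minimal fragment below is a sketch assembled from field names appearing in the hunks earlier in this diff; the `enabled` values are illustrative, not prescribed by this change.)

```yaml
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.3.1
spec:
  persistence:
    storageClass: ""   # an empty string means the default StorageClass is used
  alerting:
    enabled: false     # set to true to install the alerting component
  devops:
    enabled: false
  logging:
    enabled: false
```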
@@ -159,7 +159,7 @@ chmod +x kk address: "" port: 6443 kubernetes: - version: v1.22.10 + version: v1.21.5 imageRepo: kubesphere clusterName: cluster.local network: diff --git a/content/en/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-azure-vms.md b/content/en/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-azure-vms.md index 055043645..4ee993a3e 100644 --- a/content/en/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-azure-vms.md +++ b/content/en/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-azure-vms.md @@ -101,7 +101,7 @@ ssh -i .ssh/id_rsa2 -p50200 kubesphere@40.81.5.xx Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -117,7 +117,7 @@ export KKZONE=cn Run the following command to download KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -132,7 +132,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n {{< notice note >}} -The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version. +The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version. {{}} @@ -145,12 +145,12 @@ The commands above download the latest release (v2.2.2) of KubeKey. You can chan 2. Create an example configuration file with default configurations. Here Kubernetes v1.22.10 is used as an example. 
```bash - ./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.22.10 + ./kk create config --with-kubesphere v3.3.1 --with-kubernetes v1.22.10 ``` {{< notice note >}} -- Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix). +- Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix). - If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later. - If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed. diff --git a/content/en/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms.md b/content/en/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms.md index 4464eb500..9df911cad 100644 --- a/content/en/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms.md +++ b/content/en/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms.md @@ -126,7 +126,7 @@ Follow the step below to download KubeKey. Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly. 
```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{}}
@@ -142,7 +142,7 @@ export KKZONE=cn
Run the following command to download KubeKey:
```bash
-curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
+curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{< notice note >}}
@@ -157,7 +157,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n
{{< notice note >}}
-The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version.
+The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version.
{{</ notice >}}
@@ -170,12 +170,12 @@ chmod +x kk
Create an example configuration file with default configurations. Here Kubernetes v1.22.10 is used as an example.
```bash
-./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.22.10
+./kk create config --with-kubesphere v3.3.1 --with-kubernetes v1.22.10
```
{{< notice note >}}
-- Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
+- Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later. - If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed. diff --git a/content/en/docs/v3.3/introduction/architecture.md b/content/en/docs/v3.3/introduction/architecture.md index d41bcb9b0..bc38abdfc 100644 --- a/content/en/docs/v3.3/introduction/architecture.md +++ b/content/en/docs/v3.3/introduction/architecture.md @@ -42,5 +42,3 @@ KubeSphere separates [frontend](https://github.com/kubesphere/console) from [bac ## Service Components Each component has many services. See [Overview](../../pluggable-components/overview/) for more details. - -![Service Components](https://pek3b.qingstor.com/kubesphere-docs/png/20191017163549.png) diff --git a/content/en/docs/v3.3/introduction/features.md b/content/en/docs/v3.3/introduction/features.md index c7253c751..df83978ff 100644 --- a/content/en/docs/v3.3/introduction/features.md +++ b/content/en/docs/v3.3/introduction/features.md @@ -29,7 +29,7 @@ The following modules elaborate on the key features and benefits provided by Kub KubeSphere provides a graphical web console, giving users a clear view of a variety of Kubernetes resources, including Pods and containers, clusters and nodes, workloads, secrets and ConfigMaps, services and Ingress, jobs and CronJobs, and applications. With wizard user interfaces, users can easily interact with these resources for service discovery, HPA, image management, scheduling, high availability implementation, container health check and more. -As KubeSphere 3.3.0 features enhanced observability, users are able to keep track of resources from multi-tenant perspectives, such as custom monitoring, events, auditing logs, alerts and notifications. 
+As KubeSphere 3.3 features enhanced observability, users are able to keep track of resources from multi-tenant perspectives, such as custom monitoring, events, auditing logs, alerts and notifications. ### Cluster Upgrade and Scaling diff --git a/content/en/docs/v3.3/introduction/what's-new-in-3.3.0.md b/content/en/docs/v3.3/introduction/what's-new-in-3.3.0.md deleted file mode 100644 index 6721ecbd4..000000000 --- a/content/en/docs/v3.3/introduction/what's-new-in-3.3.0.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: "What's New in 3.3.0" -keywords: 'Kubernetes, KubeSphere, new features' -description: "What's New in 3.3.0" -linkTitle: "What's New in 3.3.0" -weight: 1400 ---- - -In June 2022, KubeSphere 3.3.0 has been released with more exciting features. This release introduces GitOps-based continuous deployment and supports Git-based code repository management to further optimize the DevOps feature. Moreover, it also provides enhanced features of storage, multi-tenancy, multi-cluster, observability, app store, service mesh, and edge computing, to further perfect the interactive design for better user experience. - -If you want to know details about new feature of KubeSphere 3.3.0, you can read the article [KubeSphere 3.3.0: Embrace GitOps](/../../../news/kubesphere-3.3.0-ga-announcement/). - -In addition to the above highlights, this release also features other functionality upgrades and fixes the known bugs. There were some deprecated or removed features in 3.3.0. For more and detailed information, see the [Release Notes for 3.3.0](../../../v3.3/release/release-v330/). 
\ No newline at end of file
diff --git a/content/en/docs/v3.3/introduction/what's-new-in-3.3.md b/content/en/docs/v3.3/introduction/what's-new-in-3.3.md
new file mode 100644
index 000000000..2c398c01f
--- /dev/null
+++ b/content/en/docs/v3.3/introduction/what's-new-in-3.3.md
@@ -0,0 +1,13 @@
+---
+title: "What's New in 3.3"
+keywords: 'Kubernetes, KubeSphere, new features'
+description: "What's New in 3.3"
+linkTitle: "What's New in 3.3"
+weight: 1400
+---
+
+In June 2022, KubeSphere 3.3 was released with more exciting features. This release introduces GitOps-based continuous deployment and supports Git-based code repository management to further optimize the DevOps feature. Moreover, it also provides enhanced storage, multi-tenancy, multi-cluster, observability, app store, service mesh, and edge computing features, and refines the interactive design for a better user experience.
+
+To learn more about the new features of KubeSphere 3.3, you can read the article [KubeSphere 3.3.0: Embrace GitOps](/../../../news/kubesphere-3.3.0-ga-announcement/).
+
+In addition to the above highlights, this release also includes other functionality upgrades and fixes known bugs. Some features were deprecated or removed in 3.3. For more detailed information, see the [Release Notes for 3.3.0](../../../v3.3/release/release-v330/).
\ No newline at end of file diff --git a/content/en/docs/v3.3/pluggable-components/alerting.md b/content/en/docs/v3.3/pluggable-components/alerting.md index c0498d3e0..1bbc3ee12 100644 --- a/content/en/docs/v3.3/pluggable-components/alerting.md +++ b/content/en/docs/v3.3/pluggable-components/alerting.md @@ -39,9 +39,9 @@ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), ### Installing on Kubernetes -As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable Alerting first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file. +As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable Alerting first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file. -1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it. +1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it. ```bash vi cluster-configuration.yaml @@ -57,7 +57,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu 3. 
Execute the following commands to start installation: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` diff --git a/content/en/docs/v3.3/pluggable-components/app-store.md b/content/en/docs/v3.3/pluggable-components/app-store.md index d35d1a0e2..25676e244 100644 --- a/content/en/docs/v3.3/pluggable-components/app-store.md +++ b/content/en/docs/v3.3/pluggable-components/app-store.md @@ -44,9 +44,9 @@ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), ### Installing on Kubernetes -As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the KubeSphere App Store first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file. +As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the KubeSphere App Store first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file. -1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it. +1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it. ```bash vi cluster-configuration.yaml @@ -63,7 +63,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu 3. 
Run the following commands to start installation: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` @@ -109,7 +109,7 @@ After you log in to the console, if you can see **App Store** in the upper-left {{< notice note >}} - You can even access the App Store without logging in to the console by visiting `:30880/apps`. -- The **OpenPitrix** tab in KubeSphere 3.3.0 does not appear on the **System Components** page after the App Store is enabled. +- The **OpenPitrix** tab in KubeSphere 3.3 does not appear on the **System Components** page after the App Store is enabled. {{}} diff --git a/content/en/docs/v3.3/pluggable-components/auditing-logs.md b/content/en/docs/v3.3/pluggable-components/auditing-logs.md index 22aaeccf3..47c4ffcad 100644 --- a/content/en/docs/v3.3/pluggable-components/auditing-logs.md +++ b/content/en/docs/v3.3/pluggable-components/auditing-logs.md @@ -34,7 +34,7 @@ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), ``` {{< notice note >}} -By default, KubeKey will install Elasticsearch internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Auditing, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one. +By default, KubeKey will install Elasticsearch internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Auditing, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. 
Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one. {{}} ```yaml @@ -45,7 +45,7 @@ By default, KubeKey will install Elasticsearch internally if Auditing is enabled elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchHost: # The Host of external Elasticsearch. + externalElasticsearchUrl: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` @@ -57,9 +57,9 @@ By default, KubeKey will install Elasticsearch internally if Auditing is enabled ### Installing on Kubernetes -As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere Auditing first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file. +As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere Auditing first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file. -1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it. +1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it. ```bash vi cluster-configuration.yaml @@ -73,7 +73,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu ``` {{< notice note >}} -By default, ks-installer will install Elasticsearch internally if Auditing is enabled. 
For a production environment, it is highly recommended that you set the following values in `cluster-configuration.yaml` if you want to enable Auditing, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information before installation, ks-installer will integrate your external Elasticsearch directly instead of installing an internal one. +By default, ks-installer will install Elasticsearch internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in `cluster-configuration.yaml` if you want to enable Auditing, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, ks-installer will integrate your external Elasticsearch directly instead of installing an internal one. {{}} ```yaml @@ -84,14 +84,14 @@ By default, ks-installer will install Elasticsearch internally if Auditing is en elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchHost: # The Host of external Elasticsearch. + externalElasticsearchUrl: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` 3. Execute the following commands to start installation: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` @@ -116,7 +116,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource ``` {{< notice note >}} -By default, Elasticsearch will be installed internally if Auditing is enabled. 
For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Auditing, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one. +By default, Elasticsearch will be installed internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Auditing, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one. {{}} ```yaml @@ -127,7 +127,7 @@ By default, Elasticsearch will be installed internally if Auditing is enabled. F elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchHost: # The Host of external Elasticsearch. + externalElasticsearchUrl: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. 
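    # Example (hypothetical host and port) - with an external Elasticsearch
    # cluster configured, the two fields above might read:
    #   externalElasticsearchUrl: es.example.com
    #   externalElasticsearchPort: "9200"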
``` diff --git a/content/en/docs/v3.3/pluggable-components/devops.md b/content/en/docs/v3.3/pluggable-components/devops.md index ad745a5c6..9d70016b9 100644 --- a/content/en/docs/v3.3/pluggable-components/devops.md +++ b/content/en/docs/v3.3/pluggable-components/devops.md @@ -43,9 +43,9 @@ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), ### Installing on Kubernetes -As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere DevOps first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file. +As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere DevOps first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file. -1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it. +1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it. ```bash vi cluster-configuration.yaml @@ -61,7 +61,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu 3. 
Run the following commands to start installation: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` diff --git a/content/en/docs/v3.3/pluggable-components/events.md b/content/en/docs/v3.3/pluggable-components/events.md index 989e6f1f9..9d53eb3ca 100644 --- a/content/en/docs/v3.3/pluggable-components/events.md +++ b/content/en/docs/v3.3/pluggable-components/events.md @@ -36,7 +36,7 @@ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), ``` {{< notice note >}} -By default, KubeKey will install Elasticsearch internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Events, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one. +By default, KubeKey will install Elasticsearch internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Events, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one. {{}} ```yaml @@ -47,7 +47,7 @@ By default, KubeKey will install Elasticsearch internally if Events is enabled. elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. 
The index name will be formatted as ks--log. - externalElasticsearchHost: # The Host of external Elasticsearch. + externalElasticsearchUrl: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` @@ -59,9 +59,9 @@ By default, KubeKey will install Elasticsearch internally if Events is enabled. ### Installing on Kubernetes -As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere Events first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file. +As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere Events first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file. -1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it. +1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it. ```bash vi cluster-configuration.yaml @@ -75,7 +75,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu ``` {{< notice note >}} -By default, ks-installer will install Elasticsearch internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in `cluster-configuration.yaml` if you want to enable Events, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information before installation, ks-installer will integrate your external Elasticsearch directly instead of installing an internal one. +By default, ks-installer will install Elasticsearch internally if Events is enabled. 
For a production environment, it is highly recommended that you set the following values in `cluster-configuration.yaml` if you want to enable Events, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, ks-installer will integrate your external Elasticsearch directly instead of installing an internal one. {{}} ```yaml @@ -86,14 +86,14 @@ By default, ks-installer will install Elasticsearch internally if Events is enab elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchHost: # The Host of external Elasticsearch. + externalElasticsearchUrl: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` 3. Execute the following commands to start installation: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` @@ -121,7 +121,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource {{< notice note >}} -By default, Elasticsearch will be installed internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Events, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one. +By default, Elasticsearch will be installed internally if Events is enabled. 
For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Events, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one. {{}} ```yaml @@ -132,7 +132,7 @@ By default, Elasticsearch will be installed internally if Events is enabled. For elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchHost: # The Host of external Elasticsearch. + externalElasticsearchUrl: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` diff --git a/content/en/docs/v3.3/pluggable-components/kubeedge.md b/content/en/docs/v3.3/pluggable-components/kubeedge.md index 265a7273d..a915f6b04 100644 --- a/content/en/docs/v3.3/pluggable-components/kubeedge.md +++ b/content/en/docs/v3.3/pluggable-components/kubeedge.md @@ -34,21 +34,21 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c ```yaml edgeruntime: # Add edge nodes to your cluster and deploy workloads on edge nodes. - enabled: false - kubeedge: # kubeedge configurations - enabled: false - cloudCore: - cloudHub: - advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided. + enabled: false + kubeedge: # kubeedge configurations + enabled: false + cloudCore: + cloudHub: + advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided. - "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided. 
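          # Example (hypothetical address) - an IP every edge node can reach:
          #   advertiseAddress:
          #     - "203.0.113.10"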
- service: - cloudhubNodePort: "30000" - cloudhubQuicNodePort: "30001" - cloudhubHttpsNodePort: "30002" - cloudstreamNodePort: "30003" - tunnelNodePort: "30004" - # resources: {} - # hostNetWork: false + service: + cloudhubNodePort: "30000" + cloudhubQuicNodePort: "30001" + cloudhubHttpsNodePort: "30002" + cloudstreamNodePort: "30003" + tunnelNodePort: "30004" + # resources: {} + # hostNetWork: false ``` 3. Set the value of `kubeedge.cloudCore.cloudHub.advertiseAddress` to the public IP address of your cluster or an IP address that can be accessed by edge nodes. Save the file when you finish editing. @@ -61,13 +61,9 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c ### Installing on Kubernetes -As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeEdge first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file. +As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeEdge first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file. -{{< notice note >}} -To prevent compatability issues, you are advised to install Kubernetes v1.21.x or earlier. -{{}} - -1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it. +1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it. ```bash vi cluster-configuration.yaml @@ -75,31 +71,31 @@ To prevent compatability issues, you are advised to install Kubernetes v1.21.x o 2. 
In this local `cluster-configuration.yaml` file, navigate to `edgeruntime` and `kubeedge`, and change the value of `enabled` from `false` to `true` to enable all KubeEdge components. Click **OK**. - ```yaml + ```yaml edgeruntime: # Add edge nodes to your cluster and deploy workloads on edge nodes. - enabled: false - kubeedge: # kubeedge configurations - enabled: false - cloudCore: - cloudHub: - advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided. + enabled: false + kubeedge: # kubeedge configurations + enabled: false + cloudCore: + cloudHub: + advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided. - "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided. - service: - cloudhubNodePort: "30000" - cloudhubQuicNodePort: "30001" - cloudhubHttpsNodePort: "30002" - cloudstreamNodePort: "30003" - tunnelNodePort: "30004" - # resources: {} - # hostNetWork: false - ``` + service: + cloudhubNodePort: "30000" + cloudhubQuicNodePort: "30001" + cloudhubHttpsNodePort: "30002" + cloudstreamNodePort: "30003" + tunnelNodePort: "30004" + # resources: {} + # hostNetWork: false + ``` 3. Set the value of `kubeedge.cloudCore.cloudHub.advertiseAddress` to the public IP address of your cluster or an IP address that can be accessed by edge nodes. 4. Save the file and execute the following commands to start installation: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` @@ -118,24 +114,24 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource 4. 
In this YAML file, navigate to `edgeruntime` and `kubeedge`, and change the value of `enabled` from `false` to `true` to enable all KubeEdge components. Click **OK**. - ```yaml + ```yaml edgeruntime: # Add edge nodes to your cluster and deploy workloads on edge nodes. - enabled: false - kubeedge: # kubeedge configurations - enabled: false - cloudCore: - cloudHub: - advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided. + enabled: false + kubeedge: # kubeedge configurations + enabled: false + cloudCore: + cloudHub: + advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided. - "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided. - service: - cloudhubNodePort: "30000" - cloudhubQuicNodePort: "30001" - cloudhubHttpsNodePort: "30002" - cloudstreamNodePort: "30003" - tunnelNodePort: "30004" - # resources: {} - # hostNetWork: false - ``` + service: + cloudhubNodePort: "30000" + cloudhubQuicNodePort: "30001" + cloudhubHttpsNodePort: "30002" + cloudstreamNodePort: "30003" + tunnelNodePort: "30004" + # resources: {} + # hostNetWork: false + ``` 5. Set the value of `kubeedge.cloudCore.cloudHub.advertiseAddress` to the public IP address of your cluster or an IP address that can be accessed by edge nodes. After you finish, click **OK** in the lower-right corner to save the configuration. diff --git a/content/en/docs/v3.3/pluggable-components/logging.md b/content/en/docs/v3.3/pluggable-components/logging.md index 2fd8de91d..7fc81460c 100644 --- a/content/en/docs/v3.3/pluggable-components/logging.md +++ b/content/en/docs/v3.3/pluggable-components/logging.md @@ -35,9 +35,14 @@ When you install KubeSphere on Linux, you need to create a configuration file, w ```yaml logging: enabled: true # Change "false" to "true". 
+ containerruntime: docker ``` - {{< notice note >}}By default, KubeKey will install Elasticsearch internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Logging, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one. + {{< notice info >}}To use containerd as the container runtime, change the value of the field `containerruntime` to `containerd`. If you upgraded to KubeSphere 3.3 from earlier versions, you have to manually add the field `containerruntime` under `logging` when enabling KubeSphere Logging system. + + {{}} + + {{< notice note >}}By default, KubeKey will install Elasticsearch internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Logging, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one. {{}} ```yaml @@ -48,7 +53,7 @@ When you install KubeSphere on Linux, you need to create a configuration file, w elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchHost: # The Host of external Elasticsearch. + externalElasticsearchUrl: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. 
``` @@ -60,9 +65,9 @@ When you install KubeSphere on Linux, you need to create a configuration file, w ### Installing on Kubernetes -As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere Logging first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file. +As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere Logging first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file. -1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it. +1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it. ```bash vi cluster-configuration.yaml @@ -73,9 +78,14 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu ```yaml logging: enabled: true # Change "false" to "true". + containerruntime: docker ``` - {{< notice note >}}By default, ks-installer will install Elasticsearch internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in `cluster-configuration.yaml` if you want to enable Logging, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information before installation, ks-installer will integrate your external Elasticsearch directly instead of installing an internal one. + {{< notice info >}}To use containerd as the container runtime, change the value of the field `.logging.containerruntime` to `containerd`. 
If you upgraded to KubeSphere 3.3 from earlier versions, you have to manually add the field `containerruntime` under `logging` when enabling KubeSphere Logging system. + + {{}} + + {{< notice note >}}By default, ks-installer will install Elasticsearch internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in `cluster-configuration.yaml` if you want to enable Logging, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, ks-installer will integrate your external Elasticsearch directly instead of installing an internal one. {{}} ```yaml @@ -86,14 +96,14 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchHost: # The Host of external Elasticsearch. + externalElasticsearchUrl: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` 3. Execute the following commands to start installation: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` @@ -117,9 +127,14 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource ```yaml logging: enabled: true # Change "false" to "true". + containerruntime: docker ``` - {{< notice note >}}By default, Elasticsearch will be installed internally if Logging is enabled. 
For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Logging, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one. + {{< notice info >}}To use containerd as the container runtime, change the value of the field `.logging.containerruntime` to `containerd`. If you upgraded to KubeSphere 3.3 from earlier versions, you have to manually add the field `containerruntime` under `logging` when enabling KubeSphere Logging system. + + {{}} + + {{< notice note >}}By default, Elasticsearch will be installed internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Logging, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one. {{}} ```yaml @@ -130,7 +145,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchHost: # The Host of external Elasticsearch. + externalElasticsearchUrl: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. 
``` diff --git a/content/en/docs/v3.3/pluggable-components/metrics-server.md b/content/en/docs/v3.3/pluggable-components/metrics-server.md index aa62efdc2..f4d299ed3 100644 --- a/content/en/docs/v3.3/pluggable-components/metrics-server.md +++ b/content/en/docs/v3.3/pluggable-components/metrics-server.md @@ -39,9 +39,9 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c ### Installing on Kubernetes -As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the Metrics Server first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file. +As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the Metrics Server first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file. -1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it. +1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it. ```bash vi cluster-configuration.yaml @@ -57,7 +57,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu 3. 
Execute the following commands to start installation: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` diff --git a/content/en/docs/v3.3/pluggable-components/network-policy.md b/content/en/docs/v3.3/pluggable-components/network-policy.md index c2882fc17..1d84ea702 100644 --- a/content/en/docs/v3.3/pluggable-components/network-policy.md +++ b/content/en/docs/v3.3/pluggable-components/network-policy.md @@ -49,9 +49,9 @@ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), ### Installing on Kubernetes -As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the Network Policy first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file. +As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the Network Policy first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file. -1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it. +1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it. ```bash vi cluster-configuration.yaml @@ -68,7 +68,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu 3. 
Execute the following commands to start installation: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` diff --git a/content/en/docs/v3.3/pluggable-components/pod-ip-pools.md b/content/en/docs/v3.3/pluggable-components/pod-ip-pools.md index 70f757f28..2dc6d7576 100644 --- a/content/en/docs/v3.3/pluggable-components/pod-ip-pools.md +++ b/content/en/docs/v3.3/pluggable-components/pod-ip-pools.md @@ -40,9 +40,9 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c ### Installing on Kubernetes -As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable Pod IP Pools first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file. +As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable Pod IP Pools first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file. -1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it. +1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it. ```bash vi cluster-configuration.yaml @@ -59,7 +59,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu 3. 
Execute the following commands to start installation: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` diff --git a/content/en/docs/v3.3/pluggable-components/service-mesh.md b/content/en/docs/v3.3/pluggable-components/service-mesh.md index 7c4e5405f..a2357c169 100644 --- a/content/en/docs/v3.3/pluggable-components/service-mesh.md +++ b/content/en/docs/v3.3/pluggable-components/service-mesh.md @@ -53,9 +53,9 @@ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), ### Installing on Kubernetes -As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere Service Mesh first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file. +As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere Service Mesh first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file. -1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it. +1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it. ```bash vi cluster-configuration.yaml @@ -78,7 +78,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu 3. 
Run the following commands to start installation: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` diff --git a/content/en/docs/v3.3/pluggable-components/service-topology.md b/content/en/docs/v3.3/pluggable-components/service-topology.md index 31df80474..d5e4c59fa 100644 --- a/content/en/docs/v3.3/pluggable-components/service-topology.md +++ b/content/en/docs/v3.3/pluggable-components/service-topology.md @@ -40,9 +40,9 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c ### Installing on Kubernetes -As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable Service Topology first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file. +As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable Service Topology first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file. -1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it. +1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it. ```bash vi cluster-configuration.yaml @@ -59,7 +59,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu 3. 
Execute the following commands to start installation: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` diff --git a/content/en/docs/v3.3/pluggable-components/uninstall-pluggable-components.md b/content/en/docs/v3.3/pluggable-components/uninstall-pluggable-components.md index 7537871ee..17c2cc4b1 100644 --- a/content/en/docs/v3.3/pluggable-components/uninstall-pluggable-components.md +++ b/content/en/docs/v3.3/pluggable-components/uninstall-pluggable-components.md @@ -8,12 +8,6 @@ Weight: 6940 After you [enable the pluggable components of KubeSphere](../../pluggable-components/), you can also uninstall them by performing the following steps. Please back up any necessary data before you uninstall these components. -{{< notice note >}} - -The methods of uninstalling certain pluggable components on KubeSphere 3.3.0 are different from the methods on KubeSphere v3.3.0. For more information about the uninstallation methods on KubeSphere v3.3.0, see [Uninstall Pluggable Components from KubeSphere](https://v3-0.docs.kubesphere.io/docs/faq/installation/uninstall-pluggable-components/). - -{{}} - ## Prerequisites You have to change the value of the field `enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration` before you uninstall any pluggable components except Service Topology and Pod IP Pools. @@ -128,7 +122,7 @@ Change the value of `openpitrix.store.enabled` from `true` to `false` in `ks-ins {{< notice note >}} - Notification is installed in KubeSphere 3.3.0 by default, so you do not need to uninstall it. + Notification is installed in KubeSphere 3.3 by default, so you do not need to uninstall it. 
{{}} diff --git a/content/en/docs/v3.3/project-administration/project-network-isolation.md b/content/en/docs/v3.3/project-administration/project-network-isolation.md index 7eafbb7e8..9624aef77 100644 --- a/content/en/docs/v3.3/project-administration/project-network-isolation.md +++ b/content/en/docs/v3.3/project-administration/project-network-isolation.md @@ -152,7 +152,7 @@ If egress traffic is controlled, you should have a clear plan of what projects, Q: Why cannot the custom monitoring system of KubeSphere get data after I enabled network isolation? -A: After you enable custom monitoring, the KubeSphere monitoring system will access the metrics of the pod. You need to allow ingress traffic for the KubeSphere monitoring system. Otherwise, it cannot access pod metrics. +A: After you enable custom monitoring, the KubeSphere monitoring system will access the metrics of the Pod. You need to allow ingress traffic for the KubeSphere monitoring system. Otherwise, it cannot access Pod metrics. KubeSphere provides a configuration item `allowedIngressNamespaces` to simplify similar configurations, which allows all projects listed in the configuration. diff --git a/content/en/docs/v3.3/project-user-guide/application-workloads/routes.md b/content/en/docs/v3.3/project-user-guide/application-workloads/routes.md index 90aac66d1..b3cdf96d4 100644 --- a/content/en/docs/v3.3/project-user-guide/application-workloads/routes.md +++ b/content/en/docs/v3.3/project-user-guide/application-workloads/routes.md @@ -48,11 +48,15 @@ A Route on KubeSphere is the same as an [Ingress](https://kubernetes.io/docs/con 2. Select a mode, configure routing rules, click **√**, and click **Next**. - **Domain Name**: Set a domain name for the route. - - **Protocol**:Select `http` or `https`. If `https` is selected, you need to select a Secret that contains the `tls.crt` (TLS certificate) and `tls.key` (TLS private key) keys used for encryption. - - **Paths**:Map each service to a path. 
Enter a path name and select a service and port. You can also click **Add** to add multiple paths. + * **Auto Generate**: KubeSphere automatically generates a domain name in the `...nip.io` format and the domain name is automatically resolved by [nip.io](https://nip.io/) into the gateway address. This mode supports only HTTP. + + * **Paths**: Map each Service to a path. You can click **Add** to add multiple paths. + + * **Specify Domain**: A user-defined domain name is used. This mode supports both HTTP and HTTPS. + + * **Domain Name**: Set a domain name for the Route. + * **Protocol**: Select `http` or `https`. If `https` is selected, you need to select a Secret that contains the `tls.crt` (TLS certificate) and `tls.key` (TLS private key) keys used for encryption. + * **Paths**: Map each Service to a path. You can click **Add** to add multiple paths. ### (Optional) Step 3: Configure advanced settings diff --git a/content/en/docs/v3.3/project-user-guide/application/compose-app.md b/content/en/docs/v3.3/project-user-guide/application/compose-app.md index 393eebe6e..5a7e7bb27 100644 --- a/content/en/docs/v3.3/project-user-guide/application/compose-app.md +++ b/content/en/docs/v3.3/project-user-guide/application/compose-app.md @@ -21,7 +21,7 @@ This tutorial demonstrates how to create a microservices-based app Bookinfo, whi 2. Set a name for the app (for example, `bookinfo`) and click **Next**. -3. On the **Services** page, you need to create microservices that compose the app. Click **Create Service** and select **Stateless Service**. +3. On the **Service Settings** page, you need to create microservices that compose the app. Click **Create Service** and select **Stateless Service**. 4. Set a name for the Service (e.g `productpage`) and click **Next**. 
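The **Auto Generate** route mode described above relies on [nip.io](https://nip.io/) wildcard DNS: any hostname ending in `<IP>.nip.io` resolves to `<IP>`, so no DNS record is needed for the gateway. The exact labels KubeSphere prepends to the domain are not spelled out here; the sketch below only illustrates the resolution principle, and the gateway address is a placeholder, not a value from this document.

```bash
# Any name of the form <something>.<gateway IP>.nip.io resolves to the gateway IP.
GATEWAY_IP="192.168.0.10"   # placeholder gateway address
APP="demo-app"              # placeholder application name
DOMAIN="${APP}.${GATEWAY_IP}.nip.io"
echo "${DOMAIN}"
```

Because the name itself encodes the IP, this mode works only over HTTP, as noted above.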
diff --git a/content/en/docs/v3.3/project-user-guide/application/deploy-app-from-appstore.md b/content/en/docs/v3.3/project-user-guide/application/deploy-app-from-appstore.md index ddb1513fc..f9613b89c 100644 --- a/content/en/docs/v3.3/project-user-guide/application/deploy-app-from-appstore.md +++ b/content/en/docs/v3.3/project-user-guide/application/deploy-app-from-appstore.md @@ -27,7 +27,7 @@ This tutorial demonstrates how to quickly deploy [NGINX](https://www.nginx.com/) {{}} -2. Search for NGINX, click it, and click **Install** on the **App Information** page. Make sure you click **Agree** in the displayed **App Deploy Agreement** dialog box. +2. Search for NGINX, click it, and click **Install** on the **App Information** page. Make sure you click **Agree** in the displayed **Deployment Agreement** dialog box. 3. Set a name and select an app version, confirm the location where NGINX will be deployed , and click **Next**. diff --git a/content/en/docs/v3.3/project-user-guide/custom-application-monitoring/introduction.md b/content/en/docs/v3.3/project-user-guide/custom-application-monitoring/introduction.md index 1b0053610..04581f603 100644 --- a/content/en/docs/v3.3/project-user-guide/custom-application-monitoring/introduction.md +++ b/content/en/docs/v3.3/project-user-guide/custom-application-monitoring/introduction.md @@ -42,7 +42,7 @@ In the previous step, you expose metric endpoints in a Kubernetes Service object The ServiceMonitor CRD is defined by [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator). A ServiceMonitor contains information about the metrics endpoints. With ServiceMonitor objects, the KubeSphere monitoring engine knows where and how to scape metrics. For each monitoring target, you apply a ServiceMonitor object to hook your application (or exporters) up to KubeSphere. -In KubeSphere v3.3.0, you need to pack a ServiceMonitor with your applications (or exporters) into a Helm chart for reuse. 
In future releases, KubeSphere will provide graphical interfaces for easy operation. +In KubeSphere 3.3, you need to pack a ServiceMonitor with your applications (or exporters) into a Helm chart for reuse. In future releases, KubeSphere will provide graphical interfaces for easy operation. Please read [Monitor a Sample Web Application](../examples/monitor-sample-web/) to learn how to pack a ServiceMonitor with your application. diff --git a/content/en/docs/v3.3/project-user-guide/custom-application-monitoring/visualization/overview.md b/content/en/docs/v3.3/project-user-guide/custom-application-monitoring/visualization/overview.md index 00e1b4524..51ad6ee56 100644 --- a/content/en/docs/v3.3/project-user-guide/custom-application-monitoring/visualization/overview.md +++ b/content/en/docs/v3.3/project-user-guide/custom-application-monitoring/visualization/overview.md @@ -12,7 +12,7 @@ This section introduces monitoring dashboard features. You will learn how to vis To create new dashboards for your app metrics, navigate to **Custom Monitoring** under **Monitoring & Alerting** in a project. There are three ways to create monitoring dashboards: built-in templates, blank templates for customization and YAML files. -There are three available built-in templates for MySQL, Elasticsearch, and Redis respectively. These templates are for demonstration purposes and are updated with KubeSphere releases. Besides, you can choose to customize monitoring dashboards. +Built-in templates include MySQL, Elasticsearch, Redis, and more. These templates are for demonstration purposes and are updated with KubeSphere releases. Besides, you can choose to customize monitoring dashboards. A KubeSphere custom monitoring dashboard can be seen as simply a YAML configuration file. The data model is heavily inspired by [Grafana](https://github.com/grafana/grafana), an open-source tool for monitoring and observability. 
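A minimal ServiceMonitor of the kind described above might look like the following sketch. The name, namespace, labels, and port are placeholders, not values from this document; the `endpoints` and `selector` fields are the standard Prometheus Operator schema that tells the monitoring engine where and how to scrape.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: sample-app          # hypothetical name
  namespace: demo-project   # hypothetical project (namespace)
spec:
  endpoints:
    - port: metrics         # must match a named port in the Service
      interval: 1m
      path: /metrics
  selector:
    matchLabels:
      app: sample-app       # labels on the Service that exposes the metrics
```

When packed into the Helm chart alongside the application, applying the chart hooks the exporter up to KubeSphere monitoring.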
Please find KubeSphere monitoring dashboard data model design in [kubesphere/monitoring-dashboard](https://github.com/kubesphere/monitoring-dashboard). The configuration file is portable and sharable. You are welcome to contribute dashboard templates to the KubeSphere community via [Monitoring Dashboards Gallery](https://github.com/kubesphere/monitoring-dashboard/tree/master/contrib/gallery). diff --git a/content/en/docs/v3.3/quick-start/all-in-one-on-linux.md b/content/en/docs/v3.3/quick-start/all-in-one-on-linux.md index a5d1e732a..a9796f7d1 100644 --- a/content/en/docs/v3.3/quick-start/all-in-one-on-linux.md +++ b/content/en/docs/v3.3/quick-start/all-in-one-on-linux.md @@ -145,7 +145,7 @@ Perform the following steps to download KubeKey. Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or run the following command: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -161,7 +161,7 @@ export KKZONE=cn Run the following command to download KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -176,7 +176,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n {{< notice note >}} -The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version. +The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version. {{}} @@ -197,12 +197,12 @@ You only need to run one command for all-in-one installation. 
The template is as follows: To create a Kubernetes cluster with KubeSphere installed, refer to the following command as an example: ```bash -./kk create cluster --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 +./kk create cluster --with-kubernetes v1.22.10 --with-kubesphere v3.3.1 ``` {{< notice note >}} -- Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey installs Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../installing-on-linux/introduction/kubekey/#support-matrix). +- Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey installs Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../installing-on-linux/introduction/kubekey/#support-matrix). - For all-in-one installation, you do not need to change any configuration. - If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed. KubeKey will install Kubernetes only. If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed. - KubeKey will install [OpenEBS](https://openebs.io/) to provision LocalPV for the development and testing environment by default, which is convenient for new users. For other storage classes, see [Persistent Storage Configurations](../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).
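The version flags in the command above can be made explicit by parameterizing them; this sketch simply assembles the same all-in-one command from the example versions used in this document (it builds the command string without executing it):

```bash
# Pin the Kubernetes and KubeSphere versions used for all-in-one installation.
KUBE_VERSION="v1.22.10"   # a version from the supported range listed above
KS_VERSION="v3.3.1"       # KubeSphere version
CREATE_CMD="./kk create cluster --with-kubernetes ${KUBE_VERSION} --with-kubesphere ${KS_VERSION}"
echo "${CREATE_CMD}"
```

Omitting `--with-kubesphere` from the assembled command would, as noted above, install Kubernetes only.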
diff --git a/content/en/docs/v3.3/quick-start/create-workspace-and-project.md b/content/en/docs/v3.3/quick-start/create-workspace-and-project.md index 84448b7b8..cc0d8d9eb 100644 --- a/content/en/docs/v3.3/quick-start/create-workspace-and-project.md +++ b/content/en/docs/v3.3/quick-start/create-workspace-and-project.md @@ -24,7 +24,7 @@ You can create multiple workspaces within a KubeSphere cluster. Under each works ### Step 1: Create a user -After KubeSphere is installed, you need to add different users with varied roles to the platform so that they can work at different levels on various resources. Initially, you only have one default user, which is `admin`, granted the role `platform-admin`. In the first step, you create a sample user `user-manager` and further create more users as `user-manager`. +After KubeSphere is installed, you need to add different users with varied roles to the platform so that they can work at different levels on various resources. Initially, you only have one default user, which is `admin`, granted the role `platform-admin`. In the first step, you create a sample user `user-manager`. 1. Log in to the web console as `admin` with the default user and password (`admin/P@88w0rd`). @@ -32,7 +32,7 @@ After KubeSphere is installed, you need to add different users with varied roles For account security, it is highly recommended that you change your password the first time you log in to the console. To change your password, select **User Settings** in the drop-down list in the upper-right corner. In **Password Settings**, set a new password. You also can change the console language in **User Settings**. {{}} -2. Click **Platform** in the upper-left corner, and then select **Access Control**. In the left nevigation pane, select **Platform Roles**. There are four built-in roles, as shown in the following table. +2. Click **Platform** in the upper-left corner, and then select **Access Control**. 
In the left navigation pane, select **Platform Roles**. The built-in roles are shown in the following table.
@@ -41,21 +41,16 @@ After KubeSphere is installed, you need to add different users with varied roles
       <tr>
         <th>Built-in Roles</th>
         <th>Description</th>
       </tr>
-      <tr>
-        <td>workspaces-manager</td>
-        <td>Workspace manager who can manage all workspaces on the platform.</td>
-      </tr>
-      <tr>
-        <td>users-manager</td>
-        <td>User manager who can manage all users on the platform.</td>
-      </tr>
+      <tr>
+        <td>platform-self-provisioner</td>
+        <td>Create workspaces and become the admin of the created workspaces.</td>
+      </tr>
       <tr>
         <td>platform-regular</td>
-        <td>Regular user who has no access to any resources before joining a workspace or cluster.</td>
+        <td>Has no access to any resources before joining a workspace or cluster.</td>
       </tr>
       <tr>
         <td>platform-admin</td>
-        <td>Administrator who can manage all resources on the platform.</td>
+        <td>Manage all resources on the platform.</td>
       </tr>
@@ -64,11 +59,15 @@ After KubeSphere is installed, you need to add different users with varied roles Built-in roles are created automatically by KubeSphere and cannot be edited or deleted. {{}} -3. In **Users**, click **Create**. In the displayed dialog box, provide all the necessary information (marked with *) and select `users-manager` for **Platform Role**. +3. In **Users**, click **Create**. In the displayed dialog box, provide all the necessary information (marked with *) and select `platform-self-provisioner` for **Platform Role**. Click **OK** after you finish. The new user will display on the **Users** page. -4. Log out of the console and log back in with user `user-manager` to create four users that will be used in other tutorials. + {{< notice note >}} + If you have not specified a platform role, the created user cannot perform any operations. In this case, you need to create a workspace and invite the created user to the workspace. + {{}} + +4. Repeat the previous steps to create other users that will be used in other tutorials. {{< notice tip >}} - To log out, click your username in the upper-right corner and select **Log Out**.
@@ -82,11 +81,6 @@ After KubeSphere is installed, you need to add different users with varied roles
       <th>Assigned Platform Role</th>
       <th>User Permissions</th>
     </tr>
-    <tr>
-      <td>ws-manager</td>
-      <td>workspaces-manager</td>
-      <td>Create and manage all workspaces.</td>
-    </tr>
     <tr>
       <td>ws-admin</td>
       <td>platform-regular</td>
@@ -103,7 +97,7 @@ After KubeSphere is installed, you need to add different users with varied roles -5. On **Users** page, verify the four users created. +5. On the **Users** page, view the created users. {{< notice note >}} @@ -112,11 +106,13 @@ After KubeSphere is installed, you need to add different users with varied roles {{}} ### Step 2: Create a workspace -In this step, you create a workspace using user `ws-manager` created in the previous step.
As the basic logic unit for the management of projects, DevOps projects and organization members, workspaces underpin the multi-tenant system of KubeSphere. +As the basic logic unit for the management of projects, DevOps projects and organization members, workspaces underpin the multi-tenant system of KubeSphere. -1. Log in to KubeSphere as `ws-manager`. Click **Platform** in the upper-left corner and select **Access Control**. In **Workspaces**, you can see there is only one default workspace `system-workspace`, where system-related components and services run. Deleting this workspace is not allowed. +1. In the navigation pane on the left, click **Workspaces**. You can see there is only one default workspace `system-workspace`, where system-related components and services run. Deleting this workspace is not allowed. -2. Click **Create** on the right, set a name for the new workspace (for example, `demo-workspace`) and set user `ws-admin` as the workspace manager. Click **Create** after you finish. +2. On the **Workspaces** page on the right, click **Create**, set a name for the new workspace (for example, `demo-workspace`) and set user `ws-admin` as the workspace manager. + +3. Click **Create** after you finish. {{< notice note >}} @@ -124,9 +120,9 @@ In this step, you create a workspace using user `ws-manager` created in the prev {{}} -3. Log out of the console and log back in as `ws-admin`. In **Workspace Settings**, select **Workspace Members** and click **Invite**. +4. Log out of the console and log back in as `ws-admin`. In **Workspace Settings**, select **Workspace Members** and click **Invite**. -4. Invite both `project-admin` and `project-regular` to the workspace. Assign them the role `workspace-self-provisioner` and `workspace-viewer` respectively and click **OK**. +5. Invite both `project-admin` and `project-regular` to the workspace. Assign them the role `workspace-self-provisioner` and `workspace-viewer` respectively and click **OK**. 
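Per the role-naming note that follows, KubeSphere prefixes workspace role names with the workspace name, so the `viewer` role in `demo-workspace` is stored as `demo-workspace-viewer`. A small sketch of that derivation (the pattern is inferred from that example):

```bash
# Derive the workspace-scoped role name: "<workspace name>-<role name>".
WORKSPACE="demo-workspace"
ROLE="viewer"
ACTUAL_ROLE="${WORKSPACE}-${ROLE}"
echo "${ACTUAL_ROLE}"
```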
{{< notice note >}} The actual role name follows a naming convention: `-`. For example, in this workspace named `demo-workspace`, the actual role name of the role `viewer` is `demo-workspace-viewer`. diff --git a/content/en/docs/v3.3/quick-start/enable-pluggable-components.md b/content/en/docs/v3.3/quick-start/enable-pluggable-components.md index 29df905a3..0646022b7 100644 --- a/content/en/docs/v3.3/quick-start/enable-pluggable-components.md +++ b/content/en/docs/v3.3/quick-start/enable-pluggable-components.md @@ -62,7 +62,7 @@ If you adopt [All-in-one Installation](../../quick-start/all-in-one-on-linux/), When you install KubeSphere on Kubernetes, you need to use [ks-installer](https://github.com/kubesphere/ks-installer/) by applying two YAML files as below. -1. First download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/cluster-configuration.yaml) and edit it. +1. First download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it. ```bash vi cluster-configuration.yaml @@ -73,7 +73,7 @@ When you install KubeSphere on Kubernetes, you need to use [ks-installer](https: 3. Save this local file and execute the following commands to start the installation. 
```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` diff --git a/content/en/docs/v3.3/quick-start/minimal-kubesphere-on-k8s.md b/content/en/docs/v3.3/quick-start/minimal-kubesphere-on-k8s.md index c4bf1ff20..8191f623c 100644 --- a/content/en/docs/v3.3/quick-start/minimal-kubesphere-on-k8s.md +++ b/content/en/docs/v3.3/quick-start/minimal-kubesphere-on-k8s.md @@ -11,7 +11,7 @@ In addition to installing KubeSphere on a Linux machine, you can also deploy it ## Prerequisites -- To install KubeSphere 3.3.0 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). +- To install KubeSphere 3.3 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). - Make sure your machine meets the minimal hardware requirement: CPU > 1 Core, Memory > 2 GB. - A **default** Storage Class in your Kubernetes cluster needs to be configured before the installation. @@ -33,9 +33,9 @@ After you make sure your machine meets the conditions, perform the following ste 1. Run the following commands to start installation: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml ``` 2. 
After KubeSphere is successfully installed, you can run the following command to view the installation logs: diff --git a/content/en/docs/v3.3/reference/api-changes/logging.md index 59eac0ab4..2158ff499 100644 --- a/content/en/docs/v3.3/reference/api-changes/logging.md +++ b/content/en/docs/v3.3/reference/api-changes/logging.md @@ -1,12 +1,12 @@ --- title: "Logging" keywords: 'Kubernetes, KubeSphere, API, Logging' -description: 'The API changes of the component **logging** in KubeSphere v3.3.0.' +description: 'The API changes of the component **logging** in KubeSphere 3.3.' linkTitle: "Logging" weight: 17310 --- -The API changes of the component **logging** in KubeSphere v3.3.0. +The API changes of the component **logging** in KubeSphere 3.3. ## Time Format @@ -22,6 +22,6 @@ The following APIs are removed: - GET /namespaces/{namespace}/pods/{pod} - The whole log setting API group -## Fluent Bit Operator +## Fluent Operator -In KubeSphere 3.3.0, the whole log setting APIs are removed from the KubeSphere core since the project Fluent Bit Operator is refactored in an incompatible way. Please refer to [Fluent Bit Operator docs](https://github.com/kubesphere/fluentbit-operator) for how to configure log collection in KubeSphere 3.3.0. \ No newline at end of file +In KubeSphere 3.3, the whole log setting APIs are removed from the KubeSphere core since the project Fluent Operator is refactored in an incompatible way. Please refer to [Fluent Operator docs](https://github.com/fluent/fluent-operator) for how to configure log collection in KubeSphere 3.3.
\ No newline at end of file diff --git a/content/en/docs/v3.3/reference/api-changes/monitoring.md b/content/en/docs/v3.3/reference/api-changes/monitoring.md index 40ee61c9f..89df00d63 100644 --- a/content/en/docs/v3.3/reference/api-changes/monitoring.md +++ b/content/en/docs/v3.3/reference/api-changes/monitoring.md @@ -1,7 +1,7 @@ --- title: "Monitoring" keywords: 'Kubernetes, KubeSphere, API, Monitoring' -description: 'The API changes of the component **monitoring** in KubeSphere v3.3.0.' +description: 'The API changes of the component **monitoring** in KubeSphere 3.3.' linkTitle: "Monitoring" weight: 17320 --- @@ -16,9 +16,9 @@ The time format of query parameters must be in Unix timestamps (the number of se ## Deprecated Metrics -In KubeSphere 3.3.0, the metrics on the left have been renamed to the ones on the right. +In KubeSphere 3.3, the metrics on the left have been renamed to the ones on the right. -|V2.0|V3.0| +|V2.0|V3.3| |---|---| |workload_pod_cpu_usage | workload_cpu_usage| |workload_pod_memory_usage| workload_memory_usage| @@ -48,7 +48,7 @@ The following metrics have been deprecated and removed. |prometheus_up_sum| |prometheus_tsdb_head_samples_appended_rate| -New metrics are introduced in 3.3.0. +New metrics are introduced in KubeSphere 3.3. |New Metrics| |---| @@ -59,7 +59,7 @@ New metrics are introduced in 3.3.0. ## Response Fields -In KubeSphere 3.3.0, the response fields `metrics_level`, `status` and `errorType` are removed. +In KubeSphere 3.3, the response fields `metrics_level`, `status` and `errorType` are removed. In addition, the field name `resource_name` has been replaced with the specific resource type names. These types are `node`, `workspace`, `namespace`, `workload`, `pod`, `container` and `persistentvolumeclaim`. For example, instead of `resource_name: node1`, you will get `node: node1`.
See the example response below: diff --git a/content/en/docs/v3.3/reference/api-docs.md b/content/en/docs/v3.3/reference/api-docs.md index 3679e6185..e4e084d25 100644 --- a/content/en/docs/v3.3/reference/api-docs.md +++ b/content/en/docs/v3.3/reference/api-docs.md @@ -114,7 +114,7 @@ Replace `[node ip]` with your actual IP address. ## API Reference -The KubeSphere API swagger JSON file can be found in the repository https://github.com/kubesphere/kubesphere/tree/release-3.1/api. +The KubeSphere API swagger JSON file can be found in the repository https://github.com/kubesphere/kubesphere/tree/release-3.3/api. - KubeSphere specified the API [swagger json](https://github.com/kubesphere/kubesphere/blob/release-3.1/api/ks-openapi-spec/swagger.json) file. It contains all the APIs that are only applied to KubeSphere. - KubeSphere specified the CRD [swagger json](https://github.com/kubesphere/kubesphere/blob/release-3.1/api/openapi-spec/swagger.json) file. It contains all the generated CRDs API documentation. It is same as Kubernetes API objects. diff --git a/content/en/docs/v3.3/reference/storage-system-installation/nfs-server.md b/content/en/docs/v3.3/reference/storage-system-installation/nfs-server.md index b69cec05e..44dfa83a0 100644 --- a/content/en/docs/v3.3/reference/storage-system-installation/nfs-server.md +++ b/content/en/docs/v3.3/reference/storage-system-installation/nfs-server.md @@ -13,7 +13,7 @@ Once your NFS server machine is ready, you can use [KubeKey](../../../installing {{< notice note >}} - You can also create the storage class of NFS-client after you install a KubeSphere cluster. -- NFS is incompatible with some applications, for example, Prometheus, which may result in pod creation failures. If you need to use NFS in the production environment, ensure that you have understood the risks. For more information, contact support@kubesphere.cloud. 
+- It is not recommended that you use NFS storage for production (especially on Kubernetes version 1.20 or later) as some issues may occur, such as `failed to obtain lock` and `input/output error`, resulting in Pod `CrashLoopBackOff`. Besides, some apps may not be compatible with NFS, including [Prometheus](https://github.com/prometheus/prometheus/blob/03b354d4d9386e4b3bfbcd45da4bb58b182051a5/docs/storage.md#operational-aspects). {{}} diff --git a/content/en/docs/v3.3/release/release-v330.md b/content/en/docs/v3.3/release/release-v330.md index a6b3003a8..0f54a8e13 100644 --- a/content/en/docs/v3.3/release/release-v330.md +++ b/content/en/docs/v3.3/release/release-v330.md @@ -1,8 +1,8 @@ --- -title: "Release Notes for 3.3.0" +title: "Release Notes for 3.3" keywords: "Kubernetes, KubeSphere, Release Notes" -description: "KubeSphere 3.3.0 Release Notes" -linkTitle: "Release Notes - 3.3.0" +description: "KubeSphere 3.3 Release Notes" +linkTitle: "Release Notes - 3.3" weight: 18098 --- @@ -13,13 +13,19 @@ weight: 18098 - Add support for importing and managing code repositories. - Add support for built-in CRD-based pipeline templates and parameter customization. - Add support for viewing pipeline events. - +### Enhancements & Updates +- Add support for editing the binding mode of the pipeline's kubeconfig file on the UI. +### Bug Fixes +- Fix an issue where users fail to check the CI/CD template. +- Remove the `Deprecated` tag from the CI/CD template and replace `kubernetesDeploy` with `kubeconfig binding` at the deployment phase. ## Storage ### Features - Add support for tenant-level storage class permission management. - Add the volume snapshot content management and volume snapshot class management features. - Add support for automatic restart of deployments and statefulsets after a PVC has been changed. - Add the PVC auto expansion feature, which automatically expands PVCs when remaining capacity is insufficient. 
+### Bug Fixes +- Set `hostpath` as a required option when users are mounting volumes. ## Multi-tenancy and Multi-cluster ### Features @@ -61,7 +67,7 @@ weight: 18098 - Integrate OpenELB with KubeSphere for exposing LoadBalancer services. ### Bug Fixes - Fix an issue where the gateway of a project is not deleted after the project is deleted. - +- Fix an issue where users fail to create routing rules in IPv6 and IPv4 dual-stack environments. ## App Store ### Bug Fixes - Fix a ks-controller-manager crash caused by Helm controller NPE errors. @@ -69,7 +75,10 @@ weight: 18098 ## Authentication & Authorization ### Features - Add support for manually disabling and enabling users. - +### Bug Fixes +- Delete roles `users-manager` and `workspace-manager`. +- Add role `platform-self-provisioner`. +- Block some permissions of user-defined roles. ## User Experience - Add a prompt when the audit log of Kubernetes has been enabled. - Add the lifecycle management feature for containers. @@ -87,6 +96,7 @@ weight: 18098 - Prevent ks-apiserver and ks-controller-manager from restarting when the cluster configuration is changed. - Optimize some UI texts. - Optimize display of the service topology on the **Service** page. +- Add support for changing the number of items displayed on each page of a table. +- Add support for batch stopping workloads. - -For more information about issues and contributors of KubeSphere 3.3.0, see [GitHub](https://github.com/kubesphere/kubesphere/blob/master/CHANGELOG/CHANGELOG-3.3.md). \ No newline at end of file +For more information about issues and contributors of KubeSphere 3.3, see [GitHub](https://github.com/kubesphere/kubesphere/blob/master/CHANGELOG/CHANGELOG-3.3.md). 
\ No newline at end of file diff --git a/content/en/docs/v3.3/upgrade/_index.md b/content/en/docs/v3.3/upgrade/_index.md index d679f27de..cb4d5072a 100644 --- a/content/en/docs/v3.3/upgrade/_index.md +++ b/content/en/docs/v3.3/upgrade/_index.md @@ -11,4 +11,4 @@ icon: "/images/docs/v3.3/docs.svg" --- -This chapter demonstrates how cluster operators can upgrade KubeSphere to 3.3.0. \ No newline at end of file +This chapter demonstrates how cluster operators can upgrade KubeSphere to 3.3.1. \ No newline at end of file diff --git a/content/en/docs/v3.3/upgrade/air-gapped-upgrade-with-ks-installer.md b/content/en/docs/v3.3/upgrade/air-gapped-upgrade-with-ks-installer.md index d36cd02ef..5594740b8 100644 --- a/content/en/docs/v3.3/upgrade/air-gapped-upgrade-with-ks-installer.md +++ b/content/en/docs/v3.3/upgrade/air-gapped-upgrade-with-ks-installer.md @@ -1,6 +1,6 @@ --- title: "Air-Gapped Upgrade with ks-installer" -keywords: "Air-Gapped, upgrade, kubesphere, 3.3.0" +keywords: "Air-Gapped, upgrade, kubesphere, 3.3" description: "Use ks-installer and offline package to upgrade KubeSphere." linkTitle: "Air-Gapped Upgrade with ks-installer" weight: 7500 @@ -12,11 +12,22 @@ ks-installer is recommended for users whose Kubernetes clusters were not set up ## Prerequisites - You need to have a KubeSphere cluster running v3.2.x. If your KubeSphere version is v3.1.x or earlier, upgrade to v3.2.x first. -- Read [Release Notes for 3.3.0](../../../v3.3/release/release-v330/) carefully. +- Read [Release Notes for 3.3](../../../v3.3/release/release-v330/) carefully. - Back up any important component beforehand. - A Docker registry. You need to have a Harbor or other Docker registries. For more information, see [Prepare a Private Image Registry](../../installing-on-linux/introduction/air-gapped-installation/#step-2-prepare-a-private-image-registry). -- Supported Kubernetes versions of KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). 
+- Supported Kubernetes versions of KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). +## Major Updates + +In KubeSphere 3.3.1, some changes have been made to built-in roles and to permissions of custom roles. Therefore, before you upgrade KubeSphere to 3.3.1, please note the following: + + - Change of built-in roles: Platform-level built-in roles `users-manager` and `workspace-manager` are removed. If an existing user has been bound to `users-manager` or `workspace-manager`, its role will be changed to `platform-regular` after the upgrade is completed. Role `platform-self-provisioner` is added. For more information about built-in roles, refer to [Create a user](../../quick-start/create-workspace-and-project). + + - Some permissions of custom roles are removed: + - Removed permissions of platform-level custom roles: user management, role management, and workspace management. + - Removed permissions of workspace-level custom roles: user management, role management, and user group management. + - Removed permissions of namespace-level custom roles: user management and role management. + - After you upgrade KubeSphere to 3.3.1, custom roles will be retained, but removed permissions of the custom roles will be revoked. ## Step 1: Prepare Installation Images As you install KubeSphere in an air-gapped environment, you need to prepare an image package containing all the necessary images in advance. @@ -24,7 +35,7 @@ As you install KubeSphere in an air-gapped environment, you need to prepare an i 1. Download the image list file `images-list.txt` from a machine that has access to Internet through the following command: ```bash - curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/images-list.txt + curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/images-list.txt ``` {{< notice note >}} @@ -36,7 +47,7 @@ As you install KubeSphere in an air-gapped environment, you need to prepare an i 2.
Download `offline-installation-tool.sh`. ```bash - curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/offline-installation-tool.sh + curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/offline-installation-tool.sh ``` 3. Make the `.sh` file executable. @@ -96,7 +107,7 @@ Similar to installing KubeSphere on an existing Kubernetes cluster in an online 1. Execute the following command to download ks-installer and transfer it to your machine that serves as the taskbox for installation. ```bash - curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml ``` 2. Verify that you have specified your private image registry in `spec.local_registry` in `cluster-configuration.yaml`. Note that if your existing cluster was installed in an air-gapped environment, you may already have this field specified. Otherwise, run the following command to edit `cluster-configuration.yaml` of your existing KubeSphere v3.1.x cluster and add the private image registry: diff --git a/content/en/docs/v3.3/upgrade/air-gapped-upgrade-with-kubekey.md b/content/en/docs/v3.3/upgrade/air-gapped-upgrade-with-kubekey.md index 1c9f01921..415d722a6 100644 --- a/content/en/docs/v3.3/upgrade/air-gapped-upgrade-with-kubekey.md +++ b/content/en/docs/v3.3/upgrade/air-gapped-upgrade-with-kubekey.md @@ -1,6 +1,6 @@ --- title: "Air-Gapped Upgrade with KubeKey" -keywords: "Air-Gapped, kubernetes, upgrade, kubesphere, 3.3.0" +keywords: "Air-Gapped, kubernetes, upgrade, kubesphere, 3.3.1" description: "Use the offline package to upgrade Kubernetes and KubeSphere." linkTitle: "Air-Gapped Upgrade with KubeKey" weight: 7400 @@ -11,11 +11,22 @@ Air-gapped upgrade with KubeKey is recommended for users whose KubeSphere and Ku - You need to have a KubeSphere cluster running v3.2.x. 
If your KubeSphere version is v3.1.x or earlier, upgrade to v3.2.x first. - Your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). -- Read [Release Notes for 3.3.0](../../../v3.3/release/release-v330/) carefully. +- Read [Release Notes for 3.3](../../../v3.3/release/release-v330/) carefully. - Back up any important component beforehand. - A Docker registry. You need to have a Harbor or other Docker registries. - Make sure every node can push and pull images from the Docker Registry. +## Major Updates + +In KubeSphere 3.3.1, some changes have been made to built-in roles and to permissions of custom roles. Therefore, before you upgrade KubeSphere to 3.3.1, please note the following: + + - Change of built-in roles: Platform-level built-in roles `users-manager` and `workspace-manager` are removed. If an existing user has been bound to `users-manager` or `workspace-manager`, its role will be changed to `platform-regular` after the upgrade is completed. Role `platform-self-provisioner` is added. For more information about built-in roles, refer to [Create a user](../../quick-start/create-workspace-and-project). + + - Some permissions of custom roles are removed: + - Removed permissions of platform-level custom roles: user management, role management, and workspace management. + - Removed permissions of workspace-level custom roles: user management, role management, and user group management. + - Removed permissions of namespace-level custom roles: user management and role management. + - After you upgrade KubeSphere to 3.3.1, custom roles will be retained, but removed permissions of the custom roles will be revoked. ## Upgrade KubeSphere and Kubernetes @@ -46,7 +57,7 @@ KubeKey upgrades Kubernetes from one MINOR version to the next MINOR version unt ### Step 1: Download KubeKey -1. 1. Run the following commands to download KubeKey v2.2.2. +1. Run the following commands to download KubeKey v2.3.0.
{{< tabs >}} {{< tab "Good network connections to GitHub/Googleapis" >}} @@ -54,7 +65,7 @@ KubeKey upgrades Kubernetes from one MINOR version to the next MINOR version unt Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly. ```bash - curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - + curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -70,7 +81,7 @@ KubeKey upgrades Kubernetes from one MINOR version to the next MINOR version unt Run the following command to download KubeKey: ```bash - curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - + curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -89,7 +100,7 @@ As you install KubeSphere and Kubernetes on Linux, you need to prepare an image 1. Download the image list file `images-list.txt` from a machine that has access to Internet through the following command: ```bash - curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/images-list.txt + curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/images-list.txt ``` {{< notice note >}} @@ -101,7 +112,7 @@ As you install KubeSphere and Kubernetes on Linux, you need to prepare an image 2. Download `offline-installation-tool.sh`. ```bash - curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/offline-installation-tool.sh + curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/offline-installation-tool.sh ``` 3. Make the `.sh` file executable. @@ -142,7 +153,7 @@ As you install KubeSphere and Kubernetes on Linux, you need to prepare an image {{< notice note >}} - - You can change the Kubernetes version downloaded based on your needs. Recommended Kubernetes versions for KubeSphere 3.3.0 are v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). 
If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../installing-on-linux/introduction/kubekey/#support-matrix). + - You can change the Kubernetes version downloaded based on your needs. Recommended Kubernetes versions for KubeSphere 3.3 are v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../installing-on-linux/introduction/kubekey/#support-matrix). - You can upgrade Kubernetes from v1.16.13 to v1.17.9 by downloading the v1.17.9 Kubernetes binary file, but for cross-version upgrades, all intermediate versions need to be downloaded in advance. For example, if you want to upgrade Kubernetes from v1.15.12 to v1.18.6, you need to download Kubernetes v1.16.13 and v1.17.9, and the v1.18.6 binary file. 
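The one-MINOR-version-at-a-time rule above can be sketched as a small helper script. `minor_steps` is a hypothetical illustration for planning which intermediate binaries to download in advance; it is not part of KubeKey, and the exact patch versions must still be chosen from the support matrix:

```shell
# Hypothetical helper (not part of KubeKey): list the intermediate MINOR
# versions you must pass through when upgrading across several Kubernetes
# releases, per the one-MINOR-version-at-a-time rule above.
minor_steps() {
  from="${1#v1.}"; from="${from%%.*}"   # e.g. v1.15.12 -> 15
  to="${2#v1.}";   to="${to%%.*}"       # e.g. v1.18.6  -> 18
  m=$((from + 1))
  while [ "$m" -lt "$to" ]; do
    printf 'v1.%s.x\n' "$m"
    m=$((m + 1))
  done
}

# Upgrading from v1.15.12 to v1.18.6 requires binaries for v1.16.x and v1.17.x:
minor_steps v1.15.12 v1.18.6
```

For an adjacent-MINOR upgrade (for example v1.21.x to v1.22.x) the helper prints nothing, meaning no intermediate downloads are needed.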
@@ -189,7 +200,7 @@ Transfer your packaged image file to your local machine and execute the followin | | Kubernetes | KubeSphere | | ------ | ---------- | ---------- | | Before | v1.18.6 | v3.2.x | -| After | v1.22.10 | 3.3.0 | +| After | v1.22.10 | 3.3.1 | #### Upgrade a cluster @@ -206,7 +217,7 @@ Execute the following command to generate an example configuration file for inst For example: ```bash -./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 -f config-sample.yaml +./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.1 -f config-sample.yaml ``` {{< notice note >}} @@ -247,7 +258,7 @@ Set `privateRegistry` of your `config-sample.yaml` file: privateRegistry: dockerhub.kubekey.local ``` -#### Upgrade your single-node cluster to KubeSphere 3.3.0 and Kubernetes v1.22.10 +#### Upgrade your single-node cluster to KubeSphere 3.3 and Kubernetes v1.22.10 ```bash ./kk upgrade -f config-sample.yaml @@ -271,7 +282,7 @@ To upgrade Kubernetes to a specific version, explicitly provide the version afte | | Kubernetes | KubeSphere | | ------ | ---------- | ---------- | | Before | v1.18.6 | v3.2.x | -| After | v1.22.10 | 3.3.0 | +| After | v1.22.10 | 3.3.1 | #### Upgrade a cluster @@ -288,7 +299,7 @@ In this example, KubeSphere is installed on multiple nodes, so you need to speci For example: ```bash -./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 -f config-sample.yaml +./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.1 -f config-sample.yaml ``` {{< notice note >}} @@ -331,7 +342,7 @@ Set `privateRegistry` of your `config-sample.yaml` file: privateRegistry: dockerhub.kubekey.local ``` -#### Upgrade your multi-node cluster to KubeSphere 3.3.0 and Kubernetes v1.22.10 +#### Upgrade your multi-node cluster to KubeSphere 3.3 and Kubernetes v1.22.10 ```bash ./kk upgrade -f config-sample.yaml diff --git a/content/en/docs/v3.3/upgrade/overview.md b/content/en/docs/v3.3/upgrade/overview.md index 
da69b1efa..bea4040b8 100644 --- a/content/en/docs/v3.3/upgrade/overview.md +++ b/content/en/docs/v3.3/upgrade/overview.md @@ -1,6 +1,6 @@ --- title: "Upgrade — Overview" -keywords: "Kubernetes, upgrade, KubeSphere, 3.3.0, upgrade" +keywords: "Kubernetes, upgrade, KubeSphere, 3.3, upgrade" description: "Understand what you need to pay attention to before the upgrade, such as versions, and upgrade tools." linkTitle: "Overview" weight: 7100 @@ -8,10 +8,10 @@ weight: 7100 ## Make Your Upgrade Plan -KubeSphere 3.3.0 is compatible with Kubernetes 1.19.x, 1.20.x, 1.21.x, 1.22.x, and 1.23.x (experimental support): +KubeSphere 3.3 is compatible with Kubernetes 1.19.x, 1.20.x, 1.21.x, 1.22.x, and 1.23.x (experimental support): -- Before you upgrade your cluster to KubeSphere 3.3.0, you need to have a KubeSphere cluster running v3.2.x. -- If your existing KubeSphere v3.1.x cluster is installed on Kubernetes 1.19.x+, you can choose to only upgrade KubeSphere to 3.3.0 or upgrade Kubernetes (to a higher version) and KubeSphere (to 3.3.0) at the same time. +- Before you upgrade your cluster to KubeSphere 3.3, you need to have a KubeSphere cluster running v3.2.x. +- If your existing KubeSphere v3.1.x cluster is installed on Kubernetes 1.19.x+, you can choose to only upgrade KubeSphere to 3.3 or upgrade Kubernetes (to a higher version) and KubeSphere (to 3.3) at the same time. ## Before the Upgrade diff --git a/content/en/docs/v3.3/upgrade/upgrade-with-ks-installer.md b/content/en/docs/v3.3/upgrade/upgrade-with-ks-installer.md index 0517f9148..042b076c2 100644 --- a/content/en/docs/v3.3/upgrade/upgrade-with-ks-installer.md +++ b/content/en/docs/v3.3/upgrade/upgrade-with-ks-installer.md @@ -1,6 +1,6 @@ --- title: "Upgrade with ks-installer" -keywords: "Kubernetes, upgrade, KubeSphere, v3.3.0" +keywords: "Kubernetes, upgrade, KubeSphere, v3.3.1" description: "Use ks-installer to upgrade KubeSphere." 
linkTitle: "Upgrade with ks-installer" weight: 7300 @@ -11,19 +11,31 @@ ks-installer is recommended for users whose Kubernetes clusters were not set up ## Prerequisites - You need to have a KubeSphere cluster running v3.2.x. If your KubeSphere version is v3.1.x or earlier, upgrade to v3.2.x first. -- Read [Release Notes for 3.3.0](../../../v3.3/release/release-v330/) carefully. +- Read [Release Notes for 3.3](../../../v3.3/release/release-v330/) carefully. - Back up any important component beforehand. -- Supported Kubernetes versions of KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). +- Supported Kubernetes versions of KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). + +## Major Updates + +In KubeSphere 3.3.1, some changes have been made to built-in roles and to permissions of custom roles. Therefore, before you upgrade KubeSphere to 3.3.1, please note the following: + + - Change of built-in roles: Platform-level built-in roles `users-manager` and `workspace-manager` are removed. If an existing user has been bound to `users-manager` or `workspace-manager`, its role will be changed to `platform-regular` after the upgrade is completed. Role `platform-self-provisioner` is added. For more information about built-in roles, refer to [Create a user](../../quick-start/create-workspace-and-project). + + - Some permissions of custom roles are removed: + - Removed permissions of platform-level custom roles: user management, role management, and workspace management. + - Removed permissions of workspace-level custom roles: user management, role management, and user group management. + - Removed permissions of namespace-level custom roles: user management and role management. + - After you upgrade KubeSphere to 3.3.1, custom roles will be retained, but removed permissions of the custom roles will be revoked. ## Apply ks-installer Run the following command to upgrade your cluster.
```bash -kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml --force +kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml --force ``` ## Enable Pluggable Components -You can [enable new pluggable components](../../pluggable-components/overview/) of KubeSphere 3.3.0 after the upgrade to explore more features of the container platform. +You can [enable new pluggable components](../../pluggable-components/overview/) of KubeSphere 3.3 after the upgrade to explore more features of the container platform. diff --git a/content/en/docs/v3.3/upgrade/upgrade-with-kubekey.md b/content/en/docs/v3.3/upgrade/upgrade-with-kubekey.md index 990d0f6c5..ecb5cb2b2 100644 --- a/content/en/docs/v3.3/upgrade/upgrade-with-kubekey.md +++ b/content/en/docs/v3.3/upgrade/upgrade-with-kubekey.md @@ -1,6 +1,6 @@ --- title: "Upgrade with KubeKey" -keywords: "Kubernetes, upgrade, KubeSphere, 3.3.0, KubeKey" +keywords: "Kubernetes, upgrade, KubeSphere, 3.3, KubeKey" description: "Use KubeKey to upgrade Kubernetes and KubeSphere." linkTitle: "Upgrade with KubeKey" weight: 7200 @@ -12,10 +12,22 @@ This tutorial demonstrates how to upgrade your cluster using KubeKey. ## Prerequisites - You need to have a KubeSphere cluster running v3.2.x. If your KubeSphere version is v3.1.x or earlier, upgrade to v3.2.x first. -- Read [Release Notes for 3.3.0](../../../v3.3/release/release-v330/) carefully. +- Read [Release Notes for 3.3](../../../v3.3/release/release-v330/) carefully. - Back up any important component beforehand. - Make your upgrade plan. Two scenarios are provided in this document for [all-in-one clusters](#all-in-one-cluster) and [multi-node clusters](#multi-node-cluster) respectively. +## Major Updates + +In KubeSphere 3.3.1, some changes have been made to built-in roles and to permissions of custom roles.
Therefore, before you upgrade KubeSphere to 3.3.1, please note the following: + + - Change of built-in roles: Platform-level built-in roles `users-manager` and `workspace-manager` are removed. If an existing user has been bound to `users-manager` or `workspace-manager`, its role will be changed to `platform-regular` after the upgrade is completed. Role `platform-self-provisioner` is added. For more information about built-in roles, refer to [Create a user](../../quick-start/create-workspace-and-project). + + - Some permissions of custom roles are removed: + - Removed permissions of platform-level custom roles: user management, role management, and workspace management. + - Removed permissions of workspace-level custom roles: user management, role management, and user group management. + - Removed permissions of namespace-level custom roles: user management and role management. + - After you upgrade KubeSphere to 3.3.1, custom roles will be retained, but removed permissions of the custom roles will be revoked. + ## Download KubeKey Follow the steps below to download KubeKey before you upgrade your cluster. @@ -27,7 +39,7 @@ Follow the steps below to download KubeKey before you upgrade your cluster. Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly. ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -43,7 +55,7 @@ export KKZONE=cn Run the following command to download KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -58,7 +70,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n {{< notice note >}} -The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version.
+The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version. {{}} @@ -80,10 +92,10 @@ When upgrading Kubernetes, KubeKey will upgrade from one MINOR version to the ne ### All-in-one cluster -Run the following command to use KubeKey to upgrade your single-node cluster to KubeSphere 3.3.0 and Kubernetes v1.22.10: +Run the following command to use KubeKey to upgrade your single-node cluster to KubeSphere 3.3 and Kubernetes v1.22.10: ```bash -./kk upgrade --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 +./kk upgrade --with-kubernetes v1.22.10 --with-kubesphere v3.3.1 ``` To upgrade Kubernetes to a specific version, explicitly provide the version after the flag `--with-kubernetes`. Available versions are v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). @@ -120,16 +132,16 @@ For more information, see [Edit the configuration file](../../installing-on-linu {{}} #### Step 3: Upgrade your cluster -The following command upgrades your cluster to KubeSphere 3.3.0 and Kubernetes v1.22.10: +The following command upgrades your cluster to KubeSphere 3.3 and Kubernetes v1.22.10: ```bash -./kk upgrade --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 -f sample.yaml +./kk upgrade --with-kubernetes v1.22.10 --with-kubesphere v3.3.1 -f sample.yaml ``` To upgrade Kubernetes to a specific version, explicitly provide the version after the flag `--with-kubernetes`. Available versions are v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). {{< notice note >}} -To use new features of KubeSphere 3.3.0, you may need to enable some pluggable components after the upgrade. +To use new features of KubeSphere 3.3, you may need to enable some pluggable components after the upgrade. 
{{}} \ No newline at end of file diff --git a/content/en/docs/v3.3/upgrade/what-changed.md b/content/en/docs/v3.3/upgrade/what-changed.md index a3c693036..22c495540 100644 --- a/content/en/docs/v3.3/upgrade/what-changed.md +++ b/content/en/docs/v3.3/upgrade/what-changed.md @@ -1,12 +1,12 @@ --- title: "Changes after Upgrade" -keywords: "Kubernetes, upgrade, KubeSphere, 3.3.0" +keywords: "Kubernetes, upgrade, KubeSphere, 3.3" description: "Understand what will be changed after the upgrade." linkTitle: "Changes after Upgrade" weight: 7600 --- -This section covers the changes after upgrade for existing settings in previous versions. If you want to know all the new features and enhancements in KubeSphere 3.3.0, see [Release Notes for 3.3.0](../../../v3.3/release/release-v330/). +This section covers the changes after upgrade for existing settings in previous versions. If you want to know all the new features and enhancements in KubeSphere 3.3, see [Release Notes for 3.3](../../../v3.3/release/release-v330/). diff --git a/content/en/docs/v3.3/workspace-administration/department-management.md b/content/en/docs/v3.3/workspace-administration/department-management.md index a5ce3ea2c..1201d50db 100644 --- a/content/en/docs/v3.3/workspace-administration/department-management.md +++ b/content/en/docs/v3.3/workspace-administration/department-management.md @@ -19,7 +19,7 @@ A department in a workspace is a logical unit used for permission control. You c 1. Log in to the KubeSphere web console as `ws-admin` and go to the `demo-ws` workspace. -2. On the left navigation bar, choose **Department Management** under **Workspace Settings**, and click **Set Departments** on the right. +2. On the left navigation bar, choose **Departments** under **Workspace Settings**, and click **Set Departments** on the right. 3. In the **Set Departments** dialog box, set the following parameters and click **OK** to create a department. 
@@ -36,11 +36,11 @@ A department in a workspace is a logical unit used for permission control. You c * **Project Role**: Role of all department members in a project. You can click **Add Project** to specify multiple project roles. Only one role can be specified for each project. * **DevOps Project Role**: Role of all department members in a DevOps project. You can click **Add DevOps Project** to specify multiple DevOps project roles. Only one role can be specified for each DevOps project. -4. Click **OK** after the department is created, and then click **Close**. On the **Department Management** page, the created department is displayed in a department tree on the left. +4. Click **OK** after the department is created, and then click **Close**. On the **Departments** page, the created department is displayed in a department tree on the left. ## Assign a User to a Department -1. On the **Department Management** page, select a department in the department tree on the left and click **Not Assigned** on the right. +1. On the **Departments** page, select a department in the department tree on the left and click **Not Assigned** on the right. 2. In the user list, click on the right of a user, and click **OK** for the displayed message to assign the user to the department. @@ -53,12 +53,12 @@ A department in a workspace is a logical unit used for permission control. You c ## Remove a User from a Department -1. On the **Department Management** page, select a department in the department tree on the left and click **Assigned** on the right. +1. On the **Departments** page, select a department in the department tree on the left and click **Assigned** on the right. 2. In the assigned user list, click on the right of a user, enter the username in the displayed dialog box, and click **OK** to remove the user. ## Delete and Edit a Department -1. On the **Department Management** page, click **Set Departments**. +1. On the **Departments** page, click **Set Departments**. 2. 
In the **Set Departments** dialog box, on the left, click the upper level of the department to be edited or deleted. diff --git a/content/en/docs/v3.3/workspace-administration/what-is-workspace.md b/content/en/docs/v3.3/workspace-administration/what-is-workspace.md index 4d3576f93..98e650db7 100644 --- a/content/en/docs/v3.3/workspace-administration/what-is-workspace.md +++ b/content/en/docs/v3.3/workspace-administration/what-is-workspace.md @@ -21,11 +21,6 @@ You have a user granted the role of `workspaces-manager`, such as `ws-manager` i 1. Log in to the web console of KubeSphere as `ws-manager`. Click **Platform** on the upper-left corner, and then select **Access Control**. On the **Workspaces** page, click **Create**. - {{< notice note >}} - - By default, you have at least one workspace `system-workspace` in the list which contains all system projects. - - {{}} 2. For single-node cluster, on the **Basic Information** page, specify a name for the workspace and select an administrator from the drop-down list. Click **Create**. 
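The console flow above ultimately creates a workspace object in the cluster. As a rough declarative sketch of the same result, a manifest can be written to a file and applied with kubectl; note that the `tenant.kubesphere.io/v1alpha2` group/version and the field names below are assumptions to verify against your cluster's CRDs, not taken from this page:

```shell
# Write a hypothetical minimal workspace manifest; names and fields are illustrative only.
cat > /tmp/demo-workspace.yaml <<'EOF'
apiVersion: tenant.kubesphere.io/v1alpha2
kind: WorkspaceTemplate
metadata:
  name: demo-workspace
spec:
  template:
    spec:
      manager: ws-admin   # the administrator selected from the drop-down list
EOF

# Applying it requires cluster access, so it is left commented out here:
# kubectl apply -f /tmp/demo-workspace.yaml
```

If the sketch matches your cluster's API, applying it should produce the same workspace the console wizard creates.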
diff --git a/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/cas-identity-provider.md b/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/cas-identity-provider.md deleted file mode 100644 index 48a73703d..000000000 --- a/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/cas-identity-provider.md +++ /dev/null @@ -1,61 +0,0 @@ ---- -title: "CAS 身份提供者" -keywords: "CAS, 身份提供者" -description: "如何使用外部 CAS 身份提供者。" - -linkTitle: "CAS 身份提供者" -weight: 12223 ---- - -## CAS 身份提供者 - -CAS (Central Authentication Service) 是耶鲁 Yale 大学发起的一个java开源项目,旨在为 Web应用系统提供一种可靠的 单点登录 解决方案( Web SSO ), CAS 具有以下特点: - -- 开源的企业级单点登录解决方案 -- CAS Server 为需要独立部署的 Web 应用----一个独立的Web应用程序(cas.war)。 -- CAS Client 支持非常多的客户端 ( 指单点登录系统中的各个 Web 应用 ) ,包括 Java, .Net, PHP, Perl, 等。 - - -## 准备工作 - -您需要部署一个 Kubernetes 集群,并在集群中安装 KubeSphere。有关详细信息,请参阅[在 Linux 上安装](../../../installing-on-linux/)和[在 Kubernetes 上安装](../../../installing-on-kubernetes/)。 - -## 步骤 - -1. 以 `admin` 身份登录 KubeSphere,将光标移动到右下角 icon ,点击 **kubectl**,然后执行以下命令来编辑 CRD `ClusterConfiguration` 中的 `ks-installer`: - - ```bash - kubectl -n kubesphere-system edit cc ks-installer - ``` - -2. 
在 `spec.authentication.jwtSecret` 字段下添加以下字段。 - - ```yaml - spec: - authentication: - jwtSecret: '' - authenticateRateLimiterMaxTries: 10 - authenticateRateLimiterDuration: 10m0s - oauthOptions: - accessTokenMaxAge: 1h - accessTokenInactivityTimeout: 30m - identityProviders: - - name: cas - type: CASIdentityProvider - mappingMethod: auto - provider: - redirectURL: "https://ks-console:30880/oauth/redirect/cas" - casServerURL: "https://cas.example.org/cas" - insecureSkipVerify: true - ``` - - 字段描述如下: - - | 参数 | 描述 | - | -------------------- | ------------------------------------------------------------ | - | redirectURL | 重定向到 ks-console 的 URL,格式为:`https://<域名>/oauth/redirect/<身份提供者名称>`。URL 中的 `<身份提供者名称>` 对应 `oauthOptions:identityProviders:name` 的值。 | - | casServerURL | 定义cas 认证的url 地址 | - | insecureSkipVerify | 关闭 TLS 证书验证。 | - - - diff --git a/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/set-up-external-authentication.md b/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/set-up-external-authentication.md index bc880d62b..ee3d826a2 100644 --- a/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/set-up-external-authentication.md +++ b/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/set-up-external-authentication.md @@ -105,7 +105,7 @@ KubeSphere 默认提供了以下几种类型的身份提供者: * GitHub Identity Provider -* [CAS Identity Provider](../cas-identity-provider) +* CAS Identity Provider * Aliyun IDaaS Provider diff --git a/content/zh/docs/v3.3/cluster-administration/cluster-settings/cluster-gateway.md b/content/zh/docs/v3.3/cluster-administration/cluster-settings/cluster-gateway.md index 706ecd795..9076f29b4 100644 --- a/content/zh/docs/v3.3/cluster-administration/cluster-settings/cluster-gateway.md +++ b/content/zh/docs/v3.3/cluster-administration/cluster-settings/cluster-gateway.md @@ -7,7 +7,7 @@ weight: 8630 --- -KubeSphere v3.3.0 
提供集群级别的网关,使所有项目共用一个全局网关。本文档介绍如何在 KubeSphere 设置集群网关。 +KubeSphere 3.3 提供集群级别的网关,使所有项目共用一个全局网关。本文档介绍如何在 KubeSphere 设置集群网关。 ## 准备工作 @@ -17,7 +17,7 @@ KubeSphere v3.3.0 提供集群级别的网关,使所有项目共用一个全 1. 以 `admin` 身份登录 web 控制台,点击左上角的**平台管理**并选择**集群管理**。 -2. 点击导航面板中**集群设置**下的**网关设置**,选择**集群网关**选项卡,并点击**开启网关**。 +2. 点击导航面板中**集群设置**下的**网关设置**,选择**集群网关**选项卡,并点击**启用网关**。 3. 在显示的对话框中,从以下的两个选项中选择网关的访问模式: diff --git a/content/zh/docs/v3.3/cluster-administration/cluster-settings/log-collections/introduction.md b/content/zh/docs/v3.3/cluster-administration/cluster-settings/log-collections/introduction.md index f40743679..cae95cd58 100644 --- a/content/zh/docs/v3.3/cluster-administration/cluster-settings/log-collections/introduction.md +++ b/content/zh/docs/v3.3/cluster-administration/cluster-settings/log-collections/introduction.md @@ -6,7 +6,7 @@ linkTitle: "介绍" weight: 8621 --- -KubeSphere 提供灵活的日志接收器配置方式。基于 [FluentBit Operator](https://github.com/kubesphere/fluentbit-operator/),用户可以轻松添加、修改、删除、启用或禁用 Elasticsearch、Kafka 和 Fluentd 接收器。接收器添加后,日志会发送至该接收器。 +KubeSphere 提供灵活的日志接收器配置方式。基于 [Fluent Operator](https://github.com/fluent/fluent-operator),用户可以轻松添加、修改、删除、启用或禁用 Elasticsearch、Kafka 和 Fluentd 接收器。接收器添加后,日志会发送至该接收器。 此教程简述在 KubeSphere 中添加日志接收器的一般性步骤。 @@ -45,7 +45,7 @@ KubeSphere 提供灵活的日志接收器配置方式。基于 [FluentBit Operat 如果 [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md) 中启用了 `logging`、`events` 或 `auditing`,则会添加默认的 Elasticsearch 接收器,服务地址会设为 Elasticsearch 集群。 -当 `logging`、`events` 或 `auditing` 启用时,如果 [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md) 中未指定 `externalElasticsearchHost` 和 `externalElasticsearchPort`,则内置 Elasticsearch 集群会部署至 Kubernetes 集群。内置 Elasticsearch 集群仅用于测试和开发。生产环境下,建议您集成外置 Elasticsearch 集群。 +当 `logging`、`events` 或 `auditing` 启用时,如果 [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md) 中未指定 `externalElasticsearchUrl` 和 
`externalElasticsearchPort`,则内置 Elasticsearch 集群会部署至 Kubernetes 集群。内置 Elasticsearch 集群仅用于测试和开发。生产环境下,建议您集成外置 Elasticsearch 集群。 日志查询需要依靠所配置的内置或外置 Elasticsearch 集群。 diff --git a/content/zh/docs/v3.3/cluster-administration/cluster-wide-alerting-and-notification/alertmanager.md b/content/zh/docs/v3.3/cluster-administration/cluster-wide-alerting-and-notification/alertmanager.md index 7297337bf..c56c69fe7 100644 --- a/content/zh/docs/v3.3/cluster-administration/cluster-wide-alerting-and-notification/alertmanager.md +++ b/content/zh/docs/v3.3/cluster-administration/cluster-wide-alerting-and-notification/alertmanager.md @@ -16,7 +16,7 @@ Alertmanager 处理由客户端应用程序(例如 Prometheus 服务器)发 Prometheus 的告警分为两部分。Prometheus 服务器根据告警规则向 Alertmanager 发送告警。随后,Alertmanager 管理这些告警,包括沉默、抑制、聚合等,并通过不同方式发送通知,例如电子邮件、应需 (on-call) 通知系统以及聊天平台。 -从 3.0 版本开始,KubeSphere 向 Prometheus 添加了开源社区中流行的告警规则,用作内置告警规则。默认情况下,KubeSphere 3.3.0 中的 Prometheus 会持续评估这些内置告警规则,然后向 Alertmanager 发送告警。 +从 3.0 版本开始,KubeSphere 向 Prometheus 添加了开源社区中流行的告警规则,用作内置告警规则。默认情况下,KubeSphere 3.3 中的 Prometheus 会持续评估这些内置告警规则,然后向 Alertmanager 发送告警。 ## 使用 Alertmanager 管理 Kubernetes 事件告警 diff --git a/content/zh/docs/v3.3/cluster-administration/storageclass.md b/content/zh/docs/v3.3/cluster-administration/storageclass.md index c9e63d4e0..59c3db04c 100644 --- a/content/zh/docs/v3.3/cluster-administration/storageclass.md +++ b/content/zh/docs/v3.3/cluster-administration/storageclass.md @@ -62,7 +62,7 @@ table th:nth-of-type(2) { | 参数 | 描述信息 | | :---- | :---- | -| 卷扩容 | 在 YAML 文件中由 `allowVolumeExpansion` 指定。 | +| 卷扩展 | 在 YAML 文件中由 `allowVolumeExpansion` 指定。 | | 回收机制 | 在 YAML 文件中由 `reclaimPolicy` 指定。 | | 访问模式 | 在 YAML 文件中由 `.metadata.annotations.storageclass.kubesphere.io/supported-access-modes` 指定。默认 `ReadWriteOnce`、`ReadOnlyMany` 和 `ReadWriteMany` 全选。 | | 供应者 | 在 YAML 文件中由 `provisioner` 指定。如果您使用 [NFS-Client 的 Chart](https://github.com/kubesphere/helm-charts/tree/master/src/main/nfs-client-provisioner) 来安装存储类型,可以设为 
`cluster.local/nfs-client-nfs-client-provisioner`。 | @@ -144,17 +144,17 @@ Ceph RBD 也是 Kubernetes 上的一种树内存储插件,即 Kubernetes 中 | 参数 | 描述 | | :---- | :---- | -| monitors| Ceph 集群 Monitors 的 IP 地址。 | -| adminId| Ceph 集群能够创建卷的用户 ID。 | -| adminSecretName| `adminId` 的密钥名称。 | -| adminSecretNamespace| `adminSecret` 所在的项目。 | -| pool | Ceph RBD 的 Pool 名称。 | -| userId | Ceph 集群能够挂载卷的用户 ID。 | -| userSecretName | `userId` 的密钥名称。 | -| userSecretNamespace | `userSecret` 所在的项目。 | +| MONITORS| Ceph 集群 Monitors 的 IP 地址。 | +| ADMINID| Ceph 集群能够创建卷的用户 ID。 | +| ADMINSECRETNAME| `adminId` 的密钥名称。 | +| ADMINSECRETNAMESPACE| `adminSecret` 所在的项目。 | +| POOL | Ceph RBD 的 Pool 名称。 | +| USERID | Ceph 集群能够挂载卷的用户 ID。 | +| USERSECRETNAME | `userId` 的密钥名称。 | +| USERSECRETNAMESPACE | `userSecret` 所在的项目。 | | 文件系统类型 | 卷的文件系统类型。 | -| imageFormat | Ceph 卷的选项。该值可为 `1` 或 `2`,选择 `2` 后需要填写 `imageFeatures`。 | -| imageFeatures| Ceph 集群的额外功能。仅当设置 `imageFormat` 为 `2` 时,才需要填写该值。 | +| IMAGEFORMAT | Ceph 卷的选项。该值可为 `1` 或 `2`,选择 `2` 后需要填写 `imageFeatures`。 | +| IMAGEFEATURES| Ceph 集群的额外功能。仅当设置 `imageFormat` 为 `2` 时,才需要填写该值。 | 有关存储类参数的更多信息,请参见 [Kubernetes 文档中的 Ceph RBD](https://kubernetes.io/zh/docs/concepts/storage/storage-classes/#ceph-rbd)。 @@ -168,7 +168,7 @@ NFS(网络文件系统)广泛用于带有 [NFS-Client](https://github.com/ku {{< notice note >}} -NFS 与部分应用不兼容(例如 Prometheus),可能会导致容器组创建失败。如果确实需要在生产环境中使用 NFS,请确保您了解相关风险或咨询 KubeSphere 技术支持 support@kubesphere.cloud。 +不建议您在生产环境中使用 NFS 存储(尤其是在 Kubernetes 1.20 或以上版本),这可能会引起 `failed to obtain lock` 和 `input/output error` 等问题,从而导致容器组 `CrashLoopBackOff`。此外,部分应用不兼容 NFS,例如 [Prometheus](https://github.com/prometheus/prometheus/blob/03b354d4d9386e4b3bfbcd45da4bb58b182051a5/docs/storage.md#operational-aspects) 等。 {{}} diff --git a/content/zh/docs/v3.3/devops-user-guide/examples/create-multi-cluster-pipeline.md b/content/zh/docs/v3.3/devops-user-guide/examples/create-multi-cluster-pipeline.md index f4086dc5e..a446ccca5 100644 --- 
a/content/zh/docs/v3.3/devops-user-guide/examples/create-multi-cluster-pipeline.md +++ b/content/zh/docs/v3.3/devops-user-guide/examples/create-multi-cluster-pipeline.md @@ -40,7 +40,7 @@ weight: 11440 {{< notice note >}} -这些 Kubernetes 集群可以被托管至不同的云厂商,也可以使用不同的 Kubernetes 版本。针对 KubeSphere 3.3.0 推荐的 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x 、v1.22.x 和 v1.23.x(实验性支持)。 +这些 Kubernetes 集群可以被托管至不同的云厂商,也可以使用不同的 Kubernetes 版本。针对 KubeSphere 3.3 推荐的 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x 、v1.22.x 和 v1.23.x(实验性支持)。 {{}} diff --git a/content/zh/docs/v3.3/devops-user-guide/how-to-use/code-repositories/import-code-repositories.md b/content/zh/docs/v3.3/devops-user-guide/how-to-use/code-repositories/import-code-repositories.md index 98a1cee3f..a30e1f655 100755 --- a/content/zh/docs/v3.3/devops-user-guide/how-to-use/code-repositories/import-code-repositories.md +++ b/content/zh/docs/v3.3/devops-user-guide/how-to-use/code-repositories/import-code-repositories.md @@ -7,7 +7,7 @@ weight: 11231 --- -KubeSphere 3.3.0 支持您导入 GitHub、GitLab、Bitbucket 或其它基于 Git 的代码仓库,如 Gitee。下面以 Github 仓库为例,展示如何导入代码仓库。 +KubeSphere 3.3 支持您导入 GitHub、GitLab、Bitbucket 或其它基于 Git 的代码仓库,如 Gitee。下面以 Github 仓库为例,展示如何导入代码仓库。 ## 准备工作 diff --git a/content/zh/docs/v3.3/devops-user-guide/how-to-use/continuous-deployments/use-gitops-for-continous-deployment.md b/content/zh/docs/v3.3/devops-user-guide/how-to-use/continuous-deployments/use-gitops-for-continous-deployment.md index 7d7f8fe0e..370ae8cdf 100755 --- a/content/zh/docs/v3.3/devops-user-guide/how-to-use/continuous-deployments/use-gitops-for-continous-deployment.md +++ b/content/zh/docs/v3.3/devops-user-guide/how-to-use/continuous-deployments/use-gitops-for-continous-deployment.md @@ -6,7 +6,7 @@ linkTitle: "使用 GitOps 实现应用持续部署" weight: 11221 --- -KubeSphere 3.3.0 引入了一种为云原生应用实现持续部署的理念 – GitOps。GitOps 的核心思想是拥有一个 Git 仓库,并将应用系统的申明式基础架构和应用程序存放在 Git 仓库中进行版本控制。GitOps 结合 Kubernetes 能够利用自动交付流水线将更改应用到指定的任意多个集群中,从而解决跨云部署的一致性问题。 +KubeSphere 3.3 引入了一种为云原生应用实现持续部署的理念 – GitOps。GitOps 
的核心思想是拥有一个 Git 仓库,并将应用系统的申明式基础架构和应用程序存放在 Git 仓库中进行版本控制。GitOps 结合 Kubernetes 能够利用自动交付流水线将更改应用到指定的任意多个集群中,从而解决跨云部署的一致性问题。 本示例演示如何创建持续部署实现应用的部署。 diff --git a/content/zh/docs/v3.3/devops-user-guide/how-to-use/devops-settings/add-cd-allowlist.md b/content/zh/docs/v3.3/devops-user-guide/how-to-use/devops-settings/add-cd-allowlist.md index 15940cb8a..e768f8258 100644 --- a/content/zh/docs/v3.3/devops-user-guide/how-to-use/devops-settings/add-cd-allowlist.md +++ b/content/zh/docs/v3.3/devops-user-guide/how-to-use/devops-settings/add-cd-allowlist.md @@ -5,7 +5,7 @@ description: '介绍如何在 KubeSphere 中添加持续部署白名单。' linkTitle: "添加持续部署白名单" weight: 11243 --- -在 KubeSphere 3.3.0 中,您可以通过设置白名单限制资源持续部署的目标位置。 +在 KubeSphere 3.3 中,您可以通过设置白名单限制资源持续部署的目标位置。 ## 准备工作 diff --git a/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-graphical-editing-panel.md b/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-graphical-editing-panel.md index 2db37e23e..83ff24749 100644 --- a/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-graphical-editing-panel.md +++ b/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-graphical-editing-panel.md @@ -288,7 +288,7 @@ KubeSphere 中的图形编辑面板包含用于 Jenkins [阶段 (Stage)](https:/ {{< notice note >}} - 在 KubeSphere 3.3.0 中,能够运行流水线的帐户也能够继续或终止该流水线。此外,流水线创建者、拥有该项目管理员角色的用户或者您指定的帐户也有权限继续或终止流水线。 + 在 KubeSphere 3.3 中,能够运行流水线的帐户也能够继续或终止该流水线。此外,流水线创建者、拥有该项目管理员角色的用户或者您指定的帐户也有权限继续或终止流水线。 {{}} diff --git a/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-jenkinsfile.md b/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-jenkinsfile.md index bfc989222..0c037e5b0 100644 --- a/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-jenkinsfile.md +++ b/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-jenkinsfile.md @@ -219,7 
+219,7 @@ KubeSphere 中可以创建两种类型的流水线:一种是本教程中介绍 {{< notice note >}} - 在 KubeSphere 3.3.0 中,如果不指定审核员,那么能够运行流水线的帐户也能够继续或终止该流水线。流水线创建者、在该项目中具有 `admin` 角色的用户或者您指定的帐户也有权限继续或终止流水线。 + 在 KubeSphere 3.3 中,如果不指定审核员,那么能够运行流水线的帐户也能够继续或终止该流水线。流水线创建者、在该项目中具有 `admin` 角色的用户或者您指定的帐户也有权限继续或终止流水线。 {{}} diff --git a/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/gitlab-multibranch-pipeline.md b/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/gitlab-multibranch-pipeline.md index 11ad9de61..e22d39590 100644 --- a/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/gitlab-multibranch-pipeline.md +++ b/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/gitlab-multibranch-pipeline.md @@ -8,7 +8,7 @@ weight: 11215 [GitLab](https://about.gitlab.com/) 是一个提供公开和私有仓库的开源代码仓库平台。它也是一个完整的 DevOps 平台,专业人士能够使用 GitLab 在项目中执行任务。 -在 KubeSphere 3.3.0 以及更新版本中,您可以使用 GitLab 在 DevOps 项目中创建多分支流水线。本教程介绍如何使用 GitLab 创建多分支流水线。 +在 KubeSphere 3.3 中,您可以使用 GitLab 在 DevOps 项目中创建多分支流水线。本教程介绍如何使用 GitLab 创建多分支流水线。 ## 准备工作 diff --git a/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md b/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md index 8b304a449..4d7d6b9a2 100644 --- a/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md +++ b/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/use-pipeline-templates.md @@ -6,7 +6,7 @@ linkTitle: "使用流水线模板" weight: 11213 --- -KubeSphere 提供图形编辑面板,您可以通过交互式操作定义 Jenkins 流水线的阶段和步骤。KubeSphere 3.3.0 中提供了内置流水线模板,如 Node.js、Maven 以及 Golang,使用户能够快速创建对应模板的流水线。同时,KubeSphere 3.3.0 还支持自定义流水线模板,以满足企业不同的需求。 +KubeSphere 提供图形编辑面板,您可以通过交互式操作定义 Jenkins 流水线的阶段和步骤。KubeSphere 3.3 中提供了内置流水线模板,如 Node.js、Maven 以及 Golang,使用户能够快速创建对应模板的流水线。同时,KubeSphere 3.3 还支持自定义流水线模板,以满足企业不同的需求。 本文档演示如何在 KubeSphere 上使用流水线模板。 diff --git a/content/zh/docs/v3.3/faq/access-control/cannot-login.md b/content/zh/docs/v3.3/faq/access-control/cannot-login.md index b47dca46a..e2d485159 
100644 --- a/content/zh/docs/v3.3/faq/access-control/cannot-login.md +++ b/content/zh/docs/v3.3/faq/access-control/cannot-login.md @@ -78,7 +78,7 @@ kubectl -n kubesphere-system rollout restart deploy ks-controller-manager 如果您使用了错误的 ks-installer 版本,会导致安装之后各组件版本不匹配。 -通过以下方式检查各组件版本是否一致,正确的 image tag 应该是 v3.3.0。 +通过以下方式检查各组件版本是否一致,正确的 image tag 应该是 v3.3.1。 ``` kubectl -n kubesphere-system get deploy ks-installer -o jsonpath='{.spec.template.spec.containers[0].image}' diff --git a/content/zh/docs/v3.3/faq/console/edit-resources-in-system-workspace.md b/content/zh/docs/v3.3/faq/console/edit-resources-in-system-workspace.md index 02bd7ce74..d6033cd7a 100644 --- a/content/zh/docs/v3.3/faq/console/edit-resources-in-system-workspace.md +++ b/content/zh/docs/v3.3/faq/console/edit-resources-in-system-workspace.md @@ -31,9 +31,9 @@ Weight: 16520 ```yaml client: version: - kubesphere: v3.3.0 - kubernetes: v1.22.10 - openpitrix: v3.3.0 + kubesphere: v3.3.1 + kubernetes: v1.21.5 + openpitrix: v3.3.1 enableKubeConfig: true systemWorkspace: "$" # 请手动添加此行。 ``` diff --git a/content/zh/docs/v3.3/faq/installation/configure-booster.md b/content/zh/docs/v3.3/faq/installation/configure-booster.md index b16a49e13..8a7cc6c3d 100644 --- a/content/zh/docs/v3.3/faq/installation/configure-booster.md +++ b/content/zh/docs/v3.3/faq/installation/configure-booster.md @@ -10,7 +10,7 @@ weight: 16200 ## 获取加速器地址 -您需要获取仓库的一个镜像地址以配置加速器。您可以参考如何[从阿里云获取加速器地址](https://www.alibabacloud.com/help/zh/doc-detail/60750.htm?spm=a2c63.p38356.b99.18.4f4133f0uTKb8S)。 +您需要获取仓库的一个镜像地址以配置加速器。您可以参考如何[从阿里云获取加速器地址](https://help.aliyun.com/document_detail/60750.html)。 ## 配置仓库镜像地址 diff --git a/content/zh/docs/v3.3/faq/installation/telemetry.md b/content/zh/docs/v3.3/faq/installation/telemetry.md index 29a3c5e57..cd980c360 100644 --- a/content/zh/docs/v3.3/faq/installation/telemetry.md +++ b/content/zh/docs/v3.3/faq/installation/telemetry.md @@ -29,7 +29,7 @@ Telemetry 收集已安装 KubeSphere 集群的大小、KubeSphere 和 Kubernetes ### 安装前禁用 
Telemetry -在现有 Kubernetes 集群上安装 KubeSphere 时,您需要下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件用于配置集群。如需禁用 Telemetry,请勿直接执行 `kubectl apply -f` 命令应用该文件。 +在现有 Kubernetes 集群上安装 KubeSphere 时,您需要下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件用于配置集群。如需禁用 Telemetry,请勿直接执行 `kubectl apply -f` 命令应用该文件。 {{< notice note >}} @@ -37,7 +37,7 @@ Telemetry 收集已安装 KubeSphere 集群的大小、KubeSphere 和 Kubernetes {{}} -1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件并编辑。 +1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件并编辑。 ```bash vi cluster-configuration.yaml @@ -57,7 +57,7 @@ Telemetry 收集已安装 KubeSphere 集群的大小、KubeSphere 和 Kubernetes 3. 保存文件并执行以下命令开始安装: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` diff --git a/content/zh/docs/v3.3/faq/observability/byop.md b/content/zh/docs/v3.3/faq/observability/byop.md index 6e8bd17b8..46fa8ee58 100644 --- a/content/zh/docs/v3.3/faq/observability/byop.md +++ b/content/zh/docs/v3.3/faq/observability/byop.md @@ -6,12 +6,20 @@ linkTitle: "集成您自己的 Prometheus" Weight: 16330 --- -KubeSphere 自带一些预装的自定义监控组件,包括 Prometheus Operator、Prometheus、Alertmanager、Grafana(可选)、各种 ServiceMonitor、node-exporter 和 kube-state-metrics。在您安装 KubeSphere 之前,这些组件可能已经存在。在 KubeSphere 3.3.0 中,您可以使用自己的 Prometheus 堆栈设置。 +KubeSphere 自带一些预装的自定义监控组件,包括 Prometheus Operator、Prometheus、Alertmanager、Grafana(可选)、各种 ServiceMonitor、node-exporter 和 kube-state-metrics。在您安装 KubeSphere 之前,这些组件可能已经存在。在 KubeSphere 3.3 中,您可以使用自己的 Prometheus 堆栈设置。 
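下文步骤中多次使用 `sed` 就地改写 manifest 中的命名空间或版本号。以下是一个可在本地运行的最小示意(文件内容为假设示例,仅用于演示该替换方式):

```shell
# 构造一个示例 kustomization.yaml(内容为假设示例,并非真实集群配置)
cat > /tmp/kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: kubesphere-monitoring-system
EOF

# 与下文步骤相同的就地替换方式:把命名空间改为 monitoring
sed -i 's/kubesphere-monitoring-system/monitoring/g' /tmp/kustomization.yaml

# 查看替换结果
grep '^namespace:' /tmp/kustomization.yaml
```

替换后 `namespace:` 行应变为 `namespace: monitoring`;对 `manifests/` 下版本号的替换(如 `s/v1.9.5/v1.9.6/g`)原理相同。
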
-## 集成您自己的 Prometheus +## 集成您自己的 Prometheus 的步骤 要使用您自己的 Prometheus 堆栈设置,请执行以下步骤: +1. 卸载 KubeSphere 的自定义 Prometheus 堆栈 + +2. 安装您自己的 Prometheus 堆栈 + +3. 将 KubeSphere 自定义组件安装至您的 Prometheus 堆栈 + +4. 更改 KubeSphere 的 `monitoring endpoint` + ### 步骤 1:卸载 KubeSphere 的自定义 Prometheus 堆栈 1. 执行以下命令,卸载堆栈: @@ -41,13 +49,13 @@ KubeSphere 自带一些预装的自定义监控组件,包括 Prometheus Operat {{< notice note >}} -KubeSphere 3.3.0 已经过认证,可以与以下 Prometheus 堆栈组件搭配使用: +KubeSphere 3.3 已经过认证,可以与以下 Prometheus 堆栈组件搭配使用: -- Prometheus Operator **v0.55.1+** -- Prometheus **v2.34.0+** -- Alertmanager **v0.23.0+** -- kube-state-metrics **v2.5.0** -- node-exporter **v1.3.1** +- Prometheus Operator **v0.38.3+** +- Prometheus **v2.20.1+** +- Alertmanager **v0.21.0+** +- kube-state-metrics **v1.9.6** +- node-exporter **v0.18.1** 请确保您的 Prometheus 堆栈组件版本符合上述版本要求,尤其是 **node-exporter** 和 **kube-state-metrics**。 @@ -57,97 +65,92 @@ KubeSphere 3.3.0 已经过认证,可以与以下 Prometheus 堆栈组件搭配 {{}} -Prometheus 堆栈可以通过多种方式进行安装。下面的步骤演示如何使用 `ks-prometheus`(基于上游的 `kube-prometheus` 项目) 将 Prometheus 堆栈安装至命名空间 `monitoring` 中。 +Prometheus 堆栈可以通过多种方式进行安装。下面的步骤演示如何使用**上游 `kube-prometheus`** 将 Prometheus 堆栈安装至命名空间 `monitoring` 中。 -1. 获取 KubeSphere 3.3.0 所使用的 `ks-prometheus`。 +1. 获取 v0.6.0 版 kube-prometheus,它的 node-exporter 版本为 v0.18.1,与 KubeSphere 3.3 所使用的版本相匹配。 ```bash - cd ~ && git clone -b release-3.3 https://github.com/kubesphere/ks-prometheus.git && cd ks-prometheus + cd ~ && git clone https://github.com/prometheus-operator/kube-prometheus.git && cd kube-prometheus && git checkout tags/v0.6.0 -b v0.6.0 ``` -2. 设置命名空间。 +2. 设置命名空间 `monitoring`,安装 Prometheus Operator 和相应角色: ```bash - sed -i 's/kubesphere-monitoring-system/monitoring/g' kustomization.yaml + kubectl apply -f manifests/setup/ ``` -3. (可选)移除不必要的组件。例如,KubeSphere 未启用 Grafana 时,可以删除 `kustomization.yaml` 中的 `grafana` 部分: +3. 稍等片刻待 Prometheus Operator 启动并运行。 ```bash - sed -i '/manifests\/grafana\//d' kustomization.yaml + kubectl -n monitoring get pod --watch ``` -4. 安装堆栈。 +4. 
移除不必要组件,例如 Prometheus Adapter。 ```bash - kubectl apply -k . + rm -rf manifests/prometheus-adapter-*.yaml + ``` + +5. 将 kube-state-metrics 的版本变更为 KubeSphere 3.3 所使用的 v1.9.6。 + + ```bash + sed -i 's/v1.9.5/v1.9.6/g' manifests/kube-state-metrics-deployment.yaml + ``` + +6. 安装 Prometheus、Alertmanager、Grafana、kube-state-metrics 以及 node-exporter。您可以只应用 YAML 文件 `kube-state-metrics-*.yaml` 或 `node-exporter-*.yaml` 来分别安装 kube-state-metrics 或 node-exporter。 + + ```bash + kubectl apply -f manifests/ ``` ### 步骤 3:将 KubeSphere 自定义组件安装至您的 Prometheus 堆栈 {{< notice note >}} -如果您的 Prometheus 堆栈是通过 `ks-prometheus` 进行安装,您可以跳过此步骤。 +KubeSphere 3.3 使用 Prometheus Operator 来管理 Prometheus/Alertmanager 配置和生命周期、ServiceMonitor(用于管理抓取配置)和 PrometheusRule(用于管理 Prometheus 记录/告警规则)。 -KubeSphere 3.3.0 使用 Prometheus Operator 来管理 Prometheus/Alertmanager 配置和生命周期、ServiceMonitor(用于管理抓取配置)和 PrometheusRule(用于管理 Prometheus 记录/告警规则)。 +[KubeSphere kustomization](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/kustomization.yaml) 中列出了一些条目,其中 `prometheus-rules.yaml` 和 `prometheus-rulesEtcd.yaml` 是 KubeSphere 3.3 正常运行的必要条件,其他均为可选。如果您不希望现有 Alertmanager 的配置被覆盖,您可以移除 `alertmanager-secret.yaml`。如果您不希望自己的 ServiceMonitor 被覆盖(KubeSphere 自定义的 ServiceMonitor 弃用许多无关指标,以便 Prometheus 只存储最有用的指标),您可以移除 `xxx-serviceMonitor.yaml`。 如果您的 Prometheus 堆栈不是由 Prometheus Operator 进行管理,您可以跳过此步骤。但请务必确保: -- 您必须将 [PrometheusRule](https://github.com/kubesphere/ks-prometheus/blob/release-3.3/manifests/kubernetes/kubernetes-prometheusRule.yaml) 和 [PrometheusRule for etcd](https://github.com/kubesphere/ks-prometheus/blob/release-3.3/manifests/etcd/prometheus-rulesEtcd.yaml) 中的记录/告警规则复制至您的 Prometheus 配置中,以便 KubeSphere 3.3.0 能够正常运行。 +- 您必须将 [PrometheusRule](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/prometheus-rules.yaml) 和 [PrometheusRule for etcd](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/prometheus-rulesEtcd.yaml) 中的记录/告警规则复制至您的 Prometheus 配置中,以便 KubeSphere 3.3 能够正常运行。 
-- 配置您的 Prometheus,使其抓取指标的目标 (Target) 与 各组件的 [serviceMonitor](https://github.com/kubesphere/ks-prometheus/tree/release-3.3/manifests) 文件中列出的目标相同。 +- 配置您的 Prometheus,使其抓取指标的目标 (Target) 与 [KubeSphere kustomization](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/kustomization.yaml) 中列出的 ServiceMonitor 的目标相同。 {{}} -1. 获取 KubeSphere 3.3.0 所使用的 `ks-prometheus`。 +1. 获取 KubeSphere 3.3 的自定义 kube-prometheus。 ```bash - cd ~ && git clone -b release-3.3 https://github.com/kubesphere/ks-prometheus.git && cd ks-prometheus + cd ~ && mkdir kubesphere && cd kubesphere && git clone https://github.com/kubesphere/kube-prometheus.git && cd kube-prometheus/kustomize ``` -2. 设置 `kustomization.yaml`,仅保留如下内容。 +2. 将命名空间更改为您自己部署 Prometheus 堆栈的命名空间。例如,如果您按照步骤 2 将 Prometheus 安装在命名空间 `monitoring` 中,这里即为 `monitoring`。 - ```yaml - apiVersion: kustomize.config.k8s.io/v1beta1 - kind: Kustomization - namespace: - resources: - - ./manifests/alertmanager/alertmanager-secret.yaml - - ./manifests/etcd/prometheus-rulesEtcd.yaml - - ./manifests/kube-state-metrics/kube-state-metrics-serviceMonitor.yaml - - ./manifests/kubernetes/kubernetes-prometheusRule.yaml - - ./manifests/kubernetes/kubernetes-serviceKubeControllerManager.yaml - - ./manifests/kubernetes/kubernetes-serviceKubeScheduler.yaml - - ./manifests/kubernetes/kubernetes-serviceMonitorApiserver.yaml - - ./manifests/kubernetes/kubernetes-serviceMonitorCoreDNS.yaml - - ./manifests/kubernetes/kubernetes-serviceMonitorKubeControllerManager.yaml - - ./manifests/kubernetes/kubernetes-serviceMonitorKubeScheduler.yaml - - ./manifests/kubernetes/kubernetes-serviceMonitorKubelet.yaml - - ./manifests/node-exporter/node-exporter-serviceMonitor.yaml - - ./manifests/prometheus/prometheus-clusterRole.yaml + ```bash + sed -i 's/my-namespace//g' kustomization.yaml ``` - {{< notice note >}} - - - 将此处 `namespace` 的值设置为您自己的命名空间。例如,如果您在步骤 2 将 Prometheus 安装在命名空间 `monitoring` 中,这里即为 `monitoring`。 - - 如果您启用了 KubeSphere 的告警,还需要保留 `kustomization.yaml` 
中的 `thanos-ruler` 部分。 - - {{}} - - -3. 安装以上 KubeSphere 必要组件。 + ```bash + sed -i 's/my-namespace/<your-namespace>/g' kustomization.yaml + ``` +3. 应用 KubeSphere 自定义组件,包括 Prometheus 规则、Alertmanager 配置和各种 ServiceMonitor 等。 ```bash kubectl apply -k . ``` -4. 在您自己的命名空间中查找 Prometheus CR,通常为 k8s。 +4. 配置服务 (Service) 用于暴露 kube-scheduler 和 kube-controller-manager 指标。 + ```bash + kubectl apply -f ./prometheus-serviceKubeScheduler.yaml + kubectl apply -f ./prometheus-serviceKubeControllerManager.yaml + ``` + +5. 在您自己的命名空间中查找 Prometheus CR,通常名为 `k8s`。 ```bash kubectl -n <namespace> get prometheus ``` -5. 将 Prometheus 规则评估间隔设置为 1m,与 KubeSphere 3.3.0 的自定义 ServiceMonitor 保持一致。规则评估间隔应大于或等于抓取间隔。 +6. 将 Prometheus 规则评估间隔设置为 1m,与 KubeSphere 3.3 的自定义 ServiceMonitor 保持一致。规则评估间隔应大于或等于抓取间隔。 ```bash kubectl -n <namespace> patch prometheus k8s --patch '{
+ ```bash + kubectl -n kubesphere-system rollout restart deployment/ks-apiserver ``` {{< notice warning >}} -如果您按照[此指南](../../../pluggable-components/overview/)启用/禁用 KubeSphere 可插拔组件,`monitoring endpoint` 会重置为初始值。此时,您需要再次将其更改为您自己的 Prometheus。 +如果您按照[此指南](../../../pluggable-components/overview/)启用/禁用 KubeSphere 可插拔组件,`monitoring endpoint` 会重置为初始值。此时,您需要再次将其更改为您自己的 Prometheus 并重启 KubeSphere APIserver。 {{}} \ No newline at end of file diff --git a/content/zh/docs/v3.3/faq/observability/logging.md b/content/zh/docs/v3.3/faq/observability/logging.md index 1361f2213..7886122bd 100644 --- a/content/zh/docs/v3.3/faq/observability/logging.md +++ b/content/zh/docs/v3.3/faq/observability/logging.md @@ -28,7 +28,7 @@ weight: 16310 kubectl edit cc -n kubesphere-system ks-installer ``` -2. 将 `es.elasticsearchDataXXX`、`es.elasticsearchMasterXXX` 和 `status.logging` 的注释取消,将 `es.externalElasticsearchHost` 设置为 Elasticsearch 的地址,将 `es.externalElasticsearchPort` 设置为其端口号。以下示例供您参考: +2. 将 `es.elasticsearchDataXXX`、`es.elasticsearchMasterXXX` 和 `status.logging` 的注释取消,将 `es.externalElasticsearchUrl` 设置为 Elasticsearch 的地址,将 `es.externalElasticsearchPort` 设置为其端口号。以下示例供您参考: ```yaml apiVersion: installer.kubesphere.io/v1alpha1 @@ -40,18 +40,14 @@ weight: 16310 spec: ... common: - es: # Storage backend for logging, events and auditing. - # master: - # volumeSize: 4Gi # The volume size of Elasticsearch master nodes. - # replicas: 1 # The total number of master nodes. Even numbers are not allowed. - # resources: {} - # data: - # volumeSize: 20Gi # The volume size of Elasticsearch data nodes. - # replicas: 1 # The total number of data nodes. - # resources: {} + es: + # elasticsearchDataReplicas: 1 + # elasticsearchDataVolumeSize: 20Gi + # elasticsearchMasterReplicas: 1 + # elasticsearchMasterVolumeSize: 4Gi elkPrefix: logstash logMaxAge: 7 - externalElasticsearchHost: <192.168.0.2> + externalElasticsearchUrl: <192.168.0.2> externalElasticsearchPort: <9200> ... 
status: @@ -91,9 +87,9 @@ KubeSphere 暂不支持启用 X-Pack Security 的 Elasticsearch 集成,此功 ## 如何设置审计、事件、日志及 Istio 日志信息的保留期限 -在 KubeSphere v3.3.0 之前的版本,您只能修改日志的保存期限(默认为 7 天)。除了日志外,KubeSphere v3.3.0 还支持您设置审计、事件及 Istio 日志信息的保留期限。 +KubeSphere v3.3 支持您设置日志、审计、事件及 Istio 日志信息的保留期限。 -参考以下步骤更新 KubeKey 配置。 +您需要更新 KubeKey 配置并重新运行 `ks-installer`。 1. 执行以下命令: @@ -122,27 +118,10 @@ KubeSphere 暂不支持启用 X-Pack Security 的 Elasticsearch 集成,此功 ... ``` - {{< notice note >}} - 如果您未设置审计、事件及 Istio 日志信息的保留期限,默认使用 `logMaxAge` 的值。 - {{}} +3. 重新运行 `ks-installer`。 -3. 在 YAML 文件中,删除 `es` 部分的内容,保存修改,ks-installer 会自动重启使配置生效。 - ```yaml - apiVersion: installer.kubesphere.io/v1alpha1 - kind: ClusterConfiguration - metadata: - name: ks-installer - namespace: kubesphere-system - ... - status: - alerting: - enabledTime: 2022-08-11T06:22:01UTC - status: enabled - ... - es: # delete this line. - enabledTime: 2022-08-11T06:22:01UTC # delete this line. - status: enabled # delete this line. + ```bash + kubectl rollout restart deploy -n kubesphere-system ks-installer ``` ## 无法使用工具箱找到某些节点上工作负载的日志 diff --git a/content/zh/docs/v3.3/faq/upgrade/qingcloud-csi-upgrade.md b/content/zh/docs/v3.3/faq/upgrade/qingcloud-csi-upgrade.md index 777c47a21..9ebcbd4b2 100644 --- a/content/zh/docs/v3.3/faq/upgrade/qingcloud-csi-upgrade.md +++ b/content/zh/docs/v3.3/faq/upgrade/qingcloud-csi-upgrade.md @@ -1,6 +1,6 @@ --- title: "升级 QingCloud CSI" -keywords: "Kubernetes, 升级, KubeSphere, v3.3.0" +keywords: "Kubernetes, 升级, KubeSphere, v3.3.1" description: "升级 KubeSphere 后升级 QingCloud CSI。" linkTitle: "升级 QingCloud CSI" weight: 16210 diff --git a/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-ks-on-tencent-tke.md b/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-ks-on-tencent-tke.md index 430079fb0..4e4e46e68 100644 --- a/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-ks-on-tencent-tke.md +++ 
b/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-ks-on-tencent-tke.md @@ -7,14 +7,14 @@ description: "介绍如何在腾讯云 TKE 上部署 KubeSphere。" weight: 4270 --- -本指南将介绍如何在[腾讯云 TKE](https://cloud.tencent.com/document/product/457/6759) 上部署并使用 KubeSphere 3.3.0 平台。 +本指南将介绍如何在[腾讯云 TKE](https://cloud.tencent.com/document/product/457/6759) 上部署并使用 KubeSphere 3.3 平台。 ## 腾讯云 TKE 环境准备 ### 创建 Kubernetes 集群 首先按使用环境的资源需求[创建 Kubernetes 集群](https://cloud.tencent.com/document/product/457/32189),满足以下一些条件即可(如已有环境并满足条件可跳过本节内容): -- KubeSphere 3.3.0 默认支持的 Kubernetes 版本为 v1.19.x, v1.20.x, v1.21.x, v1.22.x 和 v1.23.x(实验性支持),选择支持的版本创建集群; +- KubeSphere 3.3 默认支持的 Kubernetes 版本为 v1.19.x, v1.20.x, v1.21.x, v1.22.x 和 v1.23.x(实验性支持),选择支持的版本创建集群; - 如果老集群版本不大于1.15.0,需要操作控制台先升级master节点然后升级node节点,依次升级至符合要求版本即可。 - 工作节点机型配置规格方面选择 `标准型S5` 的 `4核|8GB` 配置即可,并按需扩展工作节点数量(通常生产环境需要 3 个及以上工作节点)。 @@ -42,13 +42,13 @@ Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.4-tke.2", - 使用 kubectl 执行以下命令安装 KubeSphere: ```bash -kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml +kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml ``` - 下载集群配置文件 ```bash -wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml +wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml ``` {{< notice tip >}} diff --git a/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-ack.md b/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-ack.md index fa0842ae5..493ac46a5 100644 --- a/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-ack.md +++ b/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-ack.md @@ -7,7 +7,7 @@ description: "了解如何在阿里云容器服务 ACK 上部署 KubeSphere。" weight: 4250 
--- -本指南将介绍如果在[阿里云容器服务 ACK](https://www.aliyun.com/product/kubernetes/) 上部署并使用 KubeSphere 3.3.0 平台。 +本指南将介绍如何在[阿里云容器服务 ACK](https://www.aliyun.com/product/kubernetes/) 上部署并使用 KubeSphere 3.3 平台。 ## 阿里云 ACK 环境准备 @@ -15,7 +15,7 @@ weight: 4250 首先按使用环境的资源需求创建 Kubernetes 集群,满足以下一些条件即可(如已有环境并满足条件可跳过本节内容): -- KubeSphere 3.3.0 默认支持的 Kubernetes 版本为 v1.19.x, v1.20.x, v1.21.x, v1.22.x 和 v1.23.x(实验性支持),选择支持的版本创建集群; +- KubeSphere 3.3 默认支持的 Kubernetes 版本为 v1.19.x, v1.20.x, v1.21.x, v1.22.x 和 v1.23.x(实验性支持),选择支持的版本创建集群; - 需要确保 Kubernetes 集群所使用的 ECS 实例的网络正常工作,可以通过在创建集群的同时**自动创建**或**使用已有**弹性 IP;或者在集群创建后自行配置网络(如配置 [NAT 网关](https://www.aliyun.com/product/network/nat/)); - 小规模场景下工作节点规格建议选择 `4核|8GB` 配置,不推荐`2核|4GB` ,并按需扩展工作节点数量(通常生产环境需要 3 个及以上工作节点),详情可参考[最佳实践- ECS 选型](https://help.aliyun.com/document_detail/98886.html)。 @@ -142,8 +142,8 @@ alicloud-disk-topology diskplugin.csi.alibabacloud.com Delete 1.使用 [ks-installer](https://github.com/kubesphere/ks-installer) 在已有的 Kubernetes 集群上来部署 KubeSphere,下载 YAML 文件: ``` -wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml -wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml +wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml +wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml ``` diff --git a/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md b/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md index ef35d2953..047396640 100644 --- a/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md +++ b/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-aks.md @@ -98,9 +98,9 @@ Azure Kubernetes Services 本身将放置在`KubeSphereRG`中。 请使用以下命令开始部署 KubeSphere。 ```bash -kubectl apply -f
https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml +kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml -kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml +kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml ``` 可以通过以下命令检查安装日志: diff --git a/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md b/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md index 4f9b3ad59..796196856 100644 --- a/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md +++ b/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md @@ -30,7 +30,7 @@ weight: 4230 {{< notice note >}} -- 如需在 Kubernetes 上安装 KubeSphere 3.3.0,您的 Kubernetes 版本必须为:v1.19.x,v1.20.x,v1.21.x,v1.22.x 或 v1.23.x(实验性支持)。 +- 如需在 Kubernetes 上安装 KubeSphere 3.3,您的 Kubernetes 版本必须为:v1.19.x,v1.20.x,v1.21.x,v1.22.x 或 v1.23.x(实验性支持)。 - 此示例中包括 3 个节点。您可以根据自己的需求添加更多节点,尤其是在生产环境中。 - 机器类型 Standard/4 GB/2 vCPU 仅用于最小化安装的,如果您计划启用多个可插拔组件或将集群用于生产,建议将节点升级到规格更大的类型(例如,CPU-Optimized /8 GB /4 vCPUs)。DigitalOcean 是基于工作节点类型来配置主节点,而对于标准节点,API server 可能会很快会变得无响应。 @@ -47,9 +47,9 @@ weight: 4230 - 使用 kubectl 安装 KubeSphere,以下命令仅用于默认的最小安装。 ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml ``` - 检查安装日志: diff --git 
a/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks.md b/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks.md index d6340d2e9..6194025b6 100644 --- a/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks.md +++ b/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-eks.md @@ -84,7 +84,7 @@ aws-cli/2.1.2 Python/3.7.3 Linux/4.18.0-193.6.3.el8_2.x86_64 exe/x86_64.centos.8 {{< notice note >}} -- 如需在 Kubernetes 上安装 KubeSphere 3.3.0,您的 Kubernetes 版本必须为:v1.19.x,v1.20.x,v1.21.x,v1.22.x 或 v1.23.x(实验性支持)。 +- 如需在 Kubernetes 上安装 KubeSphere 3.3,您的 Kubernetes 版本必须为:v1.19.x,v1.20.x,v1.21.x,v1.22.x 或 v1.23.x(实验性支持)。 - 此示例中包括 3 个节点。您可以根据自己的需求添加更多节点,尤其是在生产环境中。 - t3.medium(2 个 vCPU,4 GB 内存)机器类型仅用于最小化安装,如果要启用可插拔组件或集群用于生产,请选择具有更大规格的机器类型。 - 对于其他设置,您也可以根据自己的需要进行更改,也可以使用默认值。 @@ -130,9 +130,9 @@ aws-cli/2.1.2 Python/3.7.3 Linux/4.18.0-193.6.3.el8_2.x86_64 exe/x86_64.centos.8 - 使用 kubectl 安装 KubeSphere,以下命令仅用于默认的最小安装。 ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml ``` - 检查安装日志: diff --git a/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-gke.md b/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-gke.md index 85f0699ca..1280073bc 100644 --- a/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-gke.md +++ b/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-gke.md @@ -30,7 +30,7 @@ weight: 4240 {{< 
notice note >}} -- 如需在 Kubernetes 上安装 KubeSphere 3.3.0,您的 Kubernetes 版本必须为:v1.19.x,v1.20.x,v1.21.x,v1.22.x 或 v1.23.x(实验性支持)。 +- 如需在 Kubernetes 上安装 KubeSphere 3.3,您的 Kubernetes 版本必须为:v1.19.x,v1.20.x,v1.21.x,v1.22.x 或 v1.23.x(实验性支持)。 - 此示例中包括3个节点,可以根据自己的需求添加更多节点,尤其是在生产环境中。 - 最小安装的机器类型为 e2-medium(2 个 vCPU,4GB 内存)。如果要启用可插拔组件或将集群用于生产,请选择具有更高配置的机器类型。 - 对于其他设置,可以根据自己的需要进行更改,也可以使用默认值。 @@ -46,9 +46,9 @@ weight: 4240 - 使用 kubectl 安装 KubeSphere,以下命令仅用于默认的最小安装。 ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml ``` - 检查安装日志: diff --git a/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce.md b/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce.md index dfb43185d..10f35dbe3 100644 --- a/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce.md +++ b/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-huaweicloud-cce.md @@ -7,7 +7,7 @@ description: "了解如何在华为云容器引擎上部署 KubeSphere。" weight: 4250 --- -本指南将介绍如果在[华为云 CCE 容器引擎](https://support.huaweicloud.com/cce/)上部署并使用 KubeSphere 3.3.0 平台。 +本指南将介绍如何在[华为云 CCE 容器引擎](https://support.huaweicloud.com/cce/)上部署并使用 KubeSphere 3.3 平台。 ## 华为云 CCE 环境准备 @@ -15,7 +15,7 @@ weight: 4250 首先按使用环境的资源需求创建 Kubernetes 集群,满足以下一些条件即可(如已有环境并满足条件可跳过本节内容): -- 如需在 Kubernetes 上安装 KubeSphere 3.3.0,您的 Kubernetes 版本必须为:v1.19.x,v1.20.x,v1.21.x,v1.22.x 或 v1.23.x(实验性支持)。 +- 如需在 Kubernetes 上安装 KubeSphere 3.3,您的 Kubernetes 版本必须为:v1.19.x,v1.20.x,v1.21.x,v1.22.x 或 v1.23.x(实验性支持)。 - 需要确保
Kubernetes 集群所使用的云主机的网络正常工作,可以通过在创建集群的同时**自动创建**或**使用已有**弹性 IP;或者在集群创建后自行配置网络(如配置 [NAT 网关](https://support.huaweicloud.com/natgateway/))。 - 工作节点规格建议选择 `s3.xlarge.2` 的 `4核|8GB` 配置,并按需扩展工作节点数量(通常生产环境需要 3 个及以上工作节点)。 @@ -74,8 +74,8 @@ volumeBindingMode: Immediate 接下来就可以使用 [ks-installer](https://github.com/kubesphere/ks-installer) 在已有的 Kubernetes 集群上来部署 KubeSphere,建议首先还是以最小功能集进行安装,可执行以下命令: ```bash -kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml -kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml +kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml +kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml ``` 执行部署命令后,可以通过进入**工作负载** > **容器组 Pod** 界面,在右侧面板中查询 `kubesphere-system` 命名空间下的 Pod 运行状态了解 KubeSphere 平台最小功能集的部署状态;通过该命名空间下 `ks-console-xxxx` 容器的状态来了解 KubeSphere 控制台应用的可用状态。 diff --git a/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-oke.md b/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-oke.md index 110def481..de1e84e6c 100644 --- a/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-oke.md +++ b/content/zh/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-oke.md @@ -28,7 +28,7 @@ weight: 4260 {{< notice note >}} -- 如需在 Kubernetes 上安装 KubeSphere 3.3.0,您的 Kubernetes 版本必须为:v1.19.x,v1.20.x,v1.21.x,v1.22.x 或 v1.23.x(实验性支持)。 +- 如需在 Kubernetes 上安装 KubeSphere 3.3,您的 Kubernetes 版本必须为:v1.19.x,v1.20.x,v1.21.x,v1.22.x 或 v1.23.x(实验性支持)。 - 建议您在**可见性类型**中选择**公共**,即每个节点会分配到一个公共 IP 地址,此地址之后可用于访问 KubeSphere Web 控制台。 - 在 Oracle Cloud 中,**配置**定义了一个实例会分配到的 CPU 和内存等资源量,本示例使用 `VM.Standard.E2.2 (2 CPUs and 16G Memory)`。有关更多信息,请参见 [Standard 
Shapes](https://docs.cloud.oracle.com/en-us/iaas/Content/Compute/References/computeshapes.htm#vmshapes__vm-standard)。 - 本示例包含 3 个节点,可以根据需求自行添加节点(尤其是生产环境)。 @@ -64,9 +64,9 @@ weight: 4260 1. 使用 kubectl 安装 KubeSphere。直接输入以下命令会默认执行 KubeSphere 的最小化安装。 ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml ``` 2. 检查安装日志: diff --git a/content/zh/docs/v3.3/installing-on-kubernetes/introduction/overview.md b/content/zh/docs/v3.3/installing-on-kubernetes/introduction/overview.md index ffb6ee5cc..38bcb0fd2 100644 --- a/content/zh/docs/v3.3/installing-on-kubernetes/introduction/overview.md +++ b/content/zh/docs/v3.3/installing-on-kubernetes/introduction/overview.md @@ -32,9 +32,9 @@ KubeSphere 承诺为用户提供即插即用架构,您可以轻松地将 KubeS 1. 执行以下命令以开始安装: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml ``` 2. 
检查安装日志: diff --git a/content/zh/docs/v3.3/installing-on-kubernetes/introduction/prerequisites.md b/content/zh/docs/v3.3/installing-on-kubernetes/introduction/prerequisites.md index b7eada26e..640951318 100644 --- a/content/zh/docs/v3.3/installing-on-kubernetes/introduction/prerequisites.md +++ b/content/zh/docs/v3.3/installing-on-kubernetes/introduction/prerequisites.md @@ -10,7 +10,7 @@ weight: 4120 您可以在虚拟机和裸机上安装 KubeSphere,并同时配置 Kubernetes。另外,只要 Kubernetes 集群满足以下前提条件,那么您也可以在云托管和本地 Kubernetes 集群上部署 KubeSphere。 -- 如需在 Kubernetes 上安装 KubeSphere 3.3.0,您的 Kubernetes 版本必须为:v1.19.x,v1.20.x,v1.21.x,v1.22.x 或 v1.23.x(实验性支持)。 +- 如需在 Kubernetes 上安装 KubeSphere 3.3,您的 Kubernetes 版本必须为:v1.19.x,v1.20.x,v1.21.x,v1.22.x 或 v1.23.x(实验性支持)。 - 可用 CPU > 1 核;内存 > 2 G。CPU 必须为 x86_64,暂时不支持 Arm 架构的 CPU。 - Kubernetes 集群已配置**默认** StorageClass(请使用 `kubectl get sc` 进行确认)。 - 使用 `--cluster-signing-cert-file` 和 `--cluster-signing-key-file` 参数启动集群时,kube-apiserver 将启用 CSR 签名功能。请参见 [RKE 安装问题](https://github.com/kubesphere/kubesphere/issues/1925#issuecomment-591698309)。 diff --git a/content/zh/docs/v3.3/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md b/content/zh/docs/v3.3/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md index 7db4ddb8a..fb57c1406 100644 --- a/content/zh/docs/v3.3/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md +++ b/content/zh/docs/v3.3/installing-on-kubernetes/on-prem-kubernetes/install-ks-on-linux-airgapped.md @@ -90,7 +90,7 @@ Docker 使用 `/var/lib/docker` 作为默认路径来存储所有 Docker 相关 1. 使用以下命令从能够访问互联网的机器上下载镜像清单文件 `images-list.txt`: ```bash - curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/images-list.txt + curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/images-list.txt ``` {{< notice note >}} @@ -102,7 +102,7 @@ Docker 使用 `/var/lib/docker` 作为默认路径来存储所有 Docker 相关 2. 
下载 `offline-installation-tool.sh`。 ```bash - curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/offline-installation-tool.sh + curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/offline-installation-tool.sh ``` 3. 使 `.sh` 文件可执行。 @@ -162,8 +162,8 @@ Docker 使用 `/var/lib/docker` 作为默认路径来存储所有 Docker 相关 1. 执行以下命令下载这两个文件,并将它们传输至您充当任务机的机器,用于安装。 ```bash - curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml - curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml + curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml ``` 2. 编辑 `cluster-configuration.yaml` 添加您的私有镜像仓库。例如,本教程中的仓库地址是 `dockerhub.kubekey.local`,将它用作 `.spec.local_registry` 的值,如下所示: @@ -241,171 +241,171 @@ https://kubesphere.io 20xx-xx-xx xx:xx:xx ## 附录 -### KubeSphere 3.3.0 镜像清单 +### KubeSphere 3.3 镜像清单 ```txt ##k8s-images -kubesphere/kube-apiserver:v1.23.7 -kubesphere/kube-controller-manager:v1.23.7 -kubesphere/kube-proxy:v1.23.7 -kubesphere/kube-scheduler:v1.23.7 -kubesphere/kube-apiserver:v1.24.1 -kubesphere/kube-controller-manager:v1.24.1 -kubesphere/kube-proxy:v1.24.1 -kubesphere/kube-scheduler:v1.24.1 -kubesphere/kube-apiserver:v1.22.10 -kubesphere/kube-controller-manager:v1.22.10 -kubesphere/kube-proxy:v1.22.10 -kubesphere/kube-scheduler:v1.22.10 -kubesphere/kube-apiserver:v1.21.13 -kubesphere/kube-controller-manager:v1.21.13 -kubesphere/kube-proxy:v1.21.13 -kubesphere/kube-scheduler:v1.21.13 -kubesphere/pause:3.7 -kubesphere/pause:3.6 -kubesphere/pause:3.5 -kubesphere/pause:3.4.1 -coredns/coredns:1.8.0 -coredns/coredns:1.8.6 -calico/cni:v3.20.0 -calico/kube-controllers:v3.20.0 -calico/node:v3.20.0 -calico/pod2daemon-flexvol:v3.20.0 -calico/typha:v3.20.0 -kubesphere/flannel:v0.12.0 
-openebs/provisioner-localpv:2.10.1 -openebs/linux-utils:2.10.0 -library/haproxy:2.3 -kubesphere/nfs-subdir-external-provisioner:v4.0.2 -kubesphere/k8s-dns-node-cache:1.15.12 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.23.10 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.23.10 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.23.10 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.23.10 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.24.3 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.24.3 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.24.3 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.24.3 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.12 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.12 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.12 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.12 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.21.14 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.21.14 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.21.14 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.21.14 +registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.7 +registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.6 +registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5 +registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.4.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.6 +registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controllers:v3.23.2 +registry.cn-beijing.aliyuncs.com/kubesphereio/node:v3.23.2 +registry.cn-beijing.aliyuncs.com/kubesphereio/pod2daemon-flexvol:v3.23.2 
+registry.cn-beijing.aliyuncs.com/kubesphereio/typha:v3.23.2 +registry.cn-beijing.aliyuncs.com/kubesphereio/flannel:v0.12.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/provisioner-localpv:2.10.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/linux-utils:2.10.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3 +registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2 +registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12 ##kubesphere-images -kubesphere/ks-installer:v3.3.0 -kubesphere/ks-apiserver:v3.3.0 -kubesphere/ks-console:v3.3.0 -kubesphere/ks-controller-manager:v3.3.0 -kubesphere/kubectl:v1.22.0 -kubesphere/kubectl:v1.21.0 -kubesphere/kubectl:v1.20.0 -kubesphere/kubefed:v0.8.1 -kubesphere/tower:v0.2.0 -minio/minio:RELEASE.2019-08-07T01-59-21Z -minio/mc:RELEASE.2019-08-07T23-14-43Z -csiplugin/snapshot-controller:v4.0.0 -kubesphere/nginx-ingress-controller:v1.1.0 -mirrorgooglecontainers/defaultbackend-amd64:1.4 -kubesphere/metrics-server:v0.4.2 -redis:5.0.14-alpine -haproxy:2.0.25-alpine -alpine:3.14 -osixia/openldap:1.3.0 -kubesphere/netshoot:v1.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.3.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.3.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.3.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.3.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/ks-upgrade:v3.3.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.21.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z +registry.cn-beijing.aliyuncs.com/kubesphereio/mc:RELEASE.2019-08-07T23-14-43Z 
+registry.cn-beijing.aliyuncs.com/kubesphereio/snapshot-controller:v4.0.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/nginx-ingress-controller:v1.1.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/defaultbackend-amd64:1.4 +registry.cn-beijing.aliyuncs.com/kubesphereio/metrics-server:v0.4.2 +registry.cn-beijing.aliyuncs.com/kubesphereio/redis:5.0.14-alpine +registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.0.25-alpine +registry.cn-beijing.aliyuncs.com/kubesphereio/alpine:3.14 +registry.cn-beijing.aliyuncs.com/kubesphereio/openldap:1.3.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/netshoot:v1.0 ##kubeedge-images -kubeedge/cloudcore:v1.9.2 -kubeedge/iptables-manager:v1.9.2 -kubesphere/edgeservice:v0.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/cloudcore:v1.9.2 +registry.cn-beijing.aliyuncs.com/kubesphereio/iptables-manager:v1.9.2 +registry.cn-beijing.aliyuncs.com/kubesphereio/edgeservice:v0.2.0 ##gatekeeper-images -openpolicyagent/gatekeeper:v3.5.2 +registry.cn-beijing.aliyuncs.com/kubesphereio/gatekeeper:v3.5.2 ##openpitrix-images -kubesphere/openpitrix-jobs:v3.2.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.3.1 ##kubesphere-devops-images -kubesphere/devops-apiserver:v3.3.0 -kubesphere/devops-controller:v3.3.0 -kubesphere/devops-tools:v3.3.0 -kubesphere/ks-jenkins:v3.3.0-2.319.1 -jenkins/inbound-agent:4.10-2 -kubesphere/builder-base:v3.2.2 -kubesphere/builder-nodejs:v3.2.0 -kubesphere/builder-maven:v3.2.0 -kubesphere/builder-maven:v3.2.1-jdk11 -kubesphere/builder-python:v3.2.0 -kubesphere/builder-go:v3.2.0 -kubesphere/builder-go:v3.2.2-1.16 -kubesphere/builder-go:v3.2.2-1.17 -kubesphere/builder-go:v3.2.2-1.18 -kubesphere/builder-base:v3.2.2-podman -kubesphere/builder-nodejs:v3.2.0-podman -kubesphere/builder-maven:v3.2.0-podman -kubesphere/builder-maven:v3.2.1-jdk11-podman -kubesphere/builder-python:v3.2.0-podman -kubesphere/builder-go:v3.2.0-podman -kubesphere/builder-go:v3.2.2-1.16-podman 
-kubesphere/builder-go:v3.2.2-1.17-podman -kubesphere/builder-go:v3.2.2-1.18-podman -kubesphere/s2ioperator:v3.2.1 -kubesphere/s2irun:v3.2.0 -kubesphere/s2i-binary:v3.2.0 -kubesphere/tomcat85-java11-centos7:v3.2.0 -kubesphere/tomcat85-java11-runtime:v3.2.0 -kubesphere/tomcat85-java8-centos7:v3.2.0 -kubesphere/tomcat85-java8-runtime:v3.2.0 -kubesphere/java-11-centos7:v3.2.0 -kubesphere/java-8-centos7:v3.2.0 -kubesphere/java-8-runtime:v3.2.0 -kubesphere/java-11-runtime:v3.2.0 -kubesphere/nodejs-8-centos7:v3.2.0 -kubesphere/nodejs-6-centos7:v3.2.0 -kubesphere/nodejs-4-centos7:v3.2.0 -kubesphere/python-36-centos7:v3.2.0 -kubesphere/python-35-centos7:v3.2.0 -kubesphere/python-34-centos7:v3.2.0 -kubesphere/python-27-centos7:v3.2.0 -quay.io/argoproj/argocd:v2.3.3 -quay.io/argoproj/argocd-applicationset:v0.4.1 -ghcr.io/dexidp/dex:v2.30.2 -redis:6.2.6-alpine +registry.cn-beijing.aliyuncs.com/kubesphereio/devops-apiserver:v3.3.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/devops-controller:v3.3.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/devops-tools:v3.3.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/ks-jenkins:v3.3.0-2.319.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/inbound-agent:4.10-2 +registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2 +registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11 +registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16 +registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17 +registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18 +registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2-podman +registry.cn-beijing.aliyuncs.com/kubesphereio/builder-nodejs:v3.2.0-podman 
+registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.0-podman +registry.cn-beijing.aliyuncs.com/kubesphereio/builder-maven:v3.2.1-jdk11-podman +registry.cn-beijing.aliyuncs.com/kubesphereio/builder-python:v3.2.0-podman +registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.0-podman +registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.16-podman +registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.17-podman +registry.cn-beijing.aliyuncs.com/kubesphereio/builder-go:v3.2.2-1.18-podman +registry.cn-beijing.aliyuncs.com/kubesphereio/s2ioperator:v3.2.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/s2irun:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/s2i-binary:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-centos7:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java11-runtime:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-centos7:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/tomcat85-java8-runtime:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-centos7:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-centos7:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/java-8-runtime:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/java-11-runtime:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-8-centos7:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-6-centos7:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/nodejs-4-centos7:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/python-36-centos7:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/python-35-centos7:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/python-34-centos7:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/python-27-centos7:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/argocd:v2.3.3 +registry.cn-beijing.aliyuncs.com/kubesphereio/argocd-applicationset:v0.4.1 
+registry.cn-beijing.aliyuncs.com/kubesphereio/dex:v2.30.2 +registry.cn-beijing.aliyuncs.com/kubesphereio/redis:6.2.6-alpine ##kubesphere-monitoring-images -jimmidyson/configmap-reload:v0.5.0 -prom/prometheus:v2.34.0 -kubesphere/prometheus-config-reloader:v0.55.1 -kubesphere/prometheus-operator:v0.55.1 -kubesphere/kube-rbac-proxy:v0.11.0 -kubesphere/kube-state-metrics:v2.3.0 -prom/node-exporter:v1.3.1 -prom/alertmanager:v0.23.0 -thanosio/thanos:v0.25.2 -grafana/grafana:8.3.3 -kubesphere/kube-rbac-proxy:v0.8.0 -kubesphere/notification-manager-operator:v1.4.0 -kubesphere/notification-manager:v1.4.0 -kubesphere/notification-tenant-sidecar:v3.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/configmap-reload:v0.5.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus:v2.34.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.5.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.25.2 +registry.cn-beijing.aliyuncs.com/kubesphereio/grafana:8.3.3 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.8.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager-operator:v1.4.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/notification-manager:v1.4.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/notification-tenant-sidecar:v3.2.0 ##kubesphere-logging-images -kubesphere/elasticsearch-curator:v5.7.6 -kubesphere/elasticsearch-oss:6.8.22 -kubesphere/fluentbit-operator:v0.13.0 -docker:19.03 -kubesphere/fluent-bit:v1.8.11 -kubesphere/log-sidecar-injector:1.1 -elastic/filebeat:6.7.0 -kubesphere/kube-events-operator:v0.4.0 -kubesphere/kube-events-exporter:v0.4.0 
-kubesphere/kube-events-ruler:v0.4.0 -kubesphere/kube-auditing-operator:v0.2.0 -kubesphere/kube-auditing-webhook:v0.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-curator:v5.7.6 +registry.cn-beijing.aliyuncs.com/kubesphereio/elasticsearch-oss:6.8.22 +registry.cn-beijing.aliyuncs.com/kubesphereio/fluentbit-operator:v0.13.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/docker:19.03 +registry.cn-beijing.aliyuncs.com/kubesphereio/fluent-bit:v1.8.11 +registry.cn-beijing.aliyuncs.com/kubesphereio/log-sidecar-injector:1.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/filebeat:6.7.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-operator:v0.4.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-exporter:v0.4.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-events-ruler:v0.4.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-operator:v0.2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/kube-auditing-webhook:v0.2.0 ##istio-images -istio/pilot:1.11.1 -istio/proxyv2:1.11.1 -jaegertracing/jaeger-operator:1.27 -jaegertracing/jaeger-agent:1.27 -jaegertracing/jaeger-collector:1.27 -jaegertracing/jaeger-query:1.27 -jaegertracing/jaeger-es-index-cleaner:1.27 -kubesphere/kiali-operator:v1.38.1 -kubesphere/kiali:v1.38 +registry.cn-beijing.aliyuncs.com/kubesphereio/pilot:1.11.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/proxyv2:1.11.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-operator:1.27 +registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-agent:1.27 +registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-collector:1.27 +registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-query:1.27 +registry.cn-beijing.aliyuncs.com/kubesphereio/jaeger-es-index-cleaner:1.27 +registry.cn-beijing.aliyuncs.com/kubesphereio/kiali-operator:v1.38.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/kiali:v1.38 ##example-images -busybox:1.31.1 -nginx:1.14-alpine -joosthofman/wget:1.0 -nginxdemos/hello:plain-text -wordpress:4.8-apache 
-mirrorgooglecontainers/hpa-example:latest -java:openjdk-8-jre-alpine -fluent/fluentd:v1.4.2-2.0 -perl:latest -kubesphere/examples-bookinfo-productpage-v1:1.16.2 -kubesphere/examples-bookinfo-reviews-v1:1.16.2 -kubesphere/examples-bookinfo-reviews-v2:1.16.2 -kubesphere/examples-bookinfo-details-v1:1.16.2 -kubesphere/examples-bookinfo-ratings-v1:1.16.3 +registry.cn-beijing.aliyuncs.com/kubesphereio/busybox:1.31.1 +registry.cn-beijing.aliyuncs.com/kubesphereio/nginx:1.14-alpine +registry.cn-beijing.aliyuncs.com/kubesphereio/wget:1.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/hello:plain-text +registry.cn-beijing.aliyuncs.com/kubesphereio/wordpress:4.8-apache +registry.cn-beijing.aliyuncs.com/kubesphereio/hpa-example:latest +registry.cn-beijing.aliyuncs.com/kubesphereio/fluentd:v1.4.2-2.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/perl:latest +registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-productpage-v1:1.16.2 +registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v1:1.16.2 +registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-reviews-v2:1.16.2 +registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-details-v1:1.16.2 +registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-ratings-v1:1.16.3 ##weave-scope-images -weaveworks/scope:1.13.0 +registry.cn-beijing.aliyuncs.com/kubesphereio/scope:1.13.0 ``` diff --git a/content/zh/docs/v3.3/installing-on-linux/cluster-operation/add-edge-nodes.md b/content/zh/docs/v3.3/installing-on-linux/cluster-operation/add-edge-nodes.md index 87433d77b..b4a3e0d1f 100644 --- a/content/zh/docs/v3.3/installing-on-linux/cluster-operation/add-edge-nodes.md +++ b/content/zh/docs/v3.3/installing-on-linux/cluster-operation/add-edge-nodes.md @@ -21,52 +21,9 @@ KubeSphere 利用 [KubeEdge](https://kubeedge.io/zh/) 将原生容器化应用 ## 准备工作 - 您需要启用 [KubeEdge](../../../pluggable-components/kubeedge/)。 -- 为了避免兼容性问题,建议安装 v1.21.x 及以下版本的 Kubernetes。 - 您有一个可用节点作为边缘节点,该节点可以运行 Ubuntu(建议)或 CentOS。本教程以 
Ubuntu 18.04 为例。 - 与 Kubernetes 集群节点不同,边缘节点应部署在单独的网络中。 -## 防止非边缘工作负载调度到边缘节点 - -由于部分守护进程集(例如,Calico)有强容忍度,为了避免影响边缘节点的正常工作,您需要手动 Patch Pod 以防止非边缘工作负载调度至边缘节点。 - -```bash -#!/bin/bash - - -NoShedulePatchJson='{"spec":{"template":{"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}}}}' - -ns="kube-system" - - -DaemonSets=("nodelocaldns" "kube-proxy" "calico-node") - -length=${#DaemonSets[@]} - -for((i=0;i}} - 在 ks-installer 的 `ClusterConfiguration`中,如果您设置的是局域网地址,那么需要配置转发规则。如果您未配置转发规则,直接连接 30000 – 30004 端口即可。 - {{}} - -| 字段 | 外网端口 | 字段 | 内网端口 | -| ------------------- | -------- | ----------------------- | -------- | -| `cloudhubPort` | `10000` | `cloudhubNodePort` | `30000` | -| `cloudhubQuicPort` | `10001` | `cloudhubQuicNodePort` | `30001` | -| `cloudhubHttpsPort` | `10002` | `cloudhubHttpsNodePort` | `30002` | -| `cloudstreamPort` | `10003` | `cloudstreamNodePort` | `30003` | -| `tunnelPort` | `10004` | `tunnelNodePort` | `30004` | - ## 配置边缘节点 您需要在边缘节点上安装容器运行时并配置 EdgeMesh。 @@ -115,6 +72,22 @@ done net.ipv4.ip_forward = 1 ``` +## 创建防火墙规则和端口转发规则 + +若要确保边缘节点可以成功地与集群通信,您必须转发端口,以便外部流量进入您的网络。您可以根据下表将外网端口映射到相应的内网 IP 地址(主节点)和端口。此外,您还需要创建防火墙规则以允许流量进入这些端口(`10000` 至 `10004`)。 + + {{< notice note >}} + 在 ks-installer 的 `ClusterConfiguration`中,如果您设置的是局域网地址,那么需要配置转发规则。如果您未配置转发规则,直接连接 30000 – 30004 端口即可。 + {{}} + +| 字段 | 外网端口 | 字段 | 内网端口 | +| ------------------- | -------- | ----------------------- | -------- | +| `cloudhubPort` | `10000` | `cloudhubNodePort` | `30000` | +| `cloudhubQuicPort` | `10001` | `cloudhubQuicNodePort` | `30001` | +| `cloudhubHttpsPort` | `10002` | `cloudhubHttpsNodePort` | `30002` | +| `cloudstreamPort` | `10003` | `cloudstreamNodePort` | `30003` | +| `tunnelPort` | `10004` | `tunnelNodePort` | `30004` | + ## 添加边缘节点 1. 使用 `admin` 用户登录控制台,点击左上角的**平台管理**。 @@ -128,8 +101,6 @@ done {{}} 3. 
点击**添加**。在出现的对话框中,设置边缘节点的节点名称并输入其内网 IP 地址。点击**验证**以继续。 - - ![add-edge-node](/images/docs/v3.3/zh-cn/installing-on-linux/add-and-delete-nodes/add-edge-nodes/add-edge-node.png) {{< notice note >}} @@ -140,8 +111,6 @@ done 4. 复制**边缘节点配置命令**下自动创建的命令,并在您的边缘节点上运行该命令。 - ![edge-command](/images/docs/v3.3/zh-cn/installing-on-linux/add-and-delete-nodes/add-edge-nodes/edge-command.png) - {{< notice note >}} 在运行该命令前,请确保您的边缘节点上已安装 `wget`。 @@ -201,7 +170,39 @@ done systemctl restart edgecore.service ``` -9. 如果仍然无法显示监控数据,执行以下命令: +9. 边缘节点加入集群后,部分 Pod 在调度至该边缘节点上后可能会一直处于 `Pending` 状态。由于部分守护进程集(例如,Calico)有强容忍度,您需要手动 Patch Pod 以防止它们调度至该边缘节点。 + + + ```bash + #!/bin/bash + + NodeSelectorPatchJson='{"spec":{"template":{"spec":{"nodeSelector":{"node-role.kubernetes.io/master": "","node-role.kubernetes.io/worker": ""}}}}}' + + NoShedulePatchJson='{"spec":{"template":{"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}}}}' + + edgenode="edgenode" + if [ $1 ]; then + edgenode="$1" + fi + + + namespaces=($(kubectl get pods -A -o wide |egrep -i $edgenode | awk '{print $1}' )) + pods=($(kubectl get pods -A -o wide |egrep -i $edgenode | awk '{print $2}' )) + length=${#namespaces[@]} + + + for((i=0;i<$length;i++)); + do + ns=${namespaces[$i]} + pod=${pods[$i]} + resources=$(kubectl -n $ns describe pod $pod | grep "Controlled By" |awk '{print $3}') + echo "Patching for ns:"${namespaces[$i]}",resources:"$resources + kubectl -n $ns patch $resources --type merge --patch "$NoShedulePatchJson" + sleep 1 + done + ``` + +10. 
如果仍然无法显示监控数据,执行以下命令: ```bash journalctl -u edgecore.service -b -r ``` diff --git a/content/zh/docs/v3.3/installing-on-linux/cluster-operation/add-new-nodes.md b/content/zh/docs/v3.3/installing-on-linux/cluster-operation/add-new-nodes.md index 79f853ddb..bbd131dc1 100644 --- a/content/zh/docs/v3.3/installing-on-linux/cluster-operation/add-new-nodes.md +++ b/content/zh/docs/v3.3/installing-on-linux/cluster-operation/add-new-nodes.md @@ -121,7 +121,7 @@ KubeSphere 使用一段时间之后,由于工作负载不断增加,您可能 address: 172.16.0.253 port: 6443 kubernetes: - version: v1.22.10 + version: v1.21.5 imageRepo: kubesphere clusterName: cluster.local proxyMode: ipvs diff --git a/content/zh/docs/v3.3/installing-on-linux/high-availability-configurations/ha-configuration.md b/content/zh/docs/v3.3/installing-on-linux/high-availability-configurations/ha-configuration.md index 82c93b3a8..4ec125092 100644 --- a/content/zh/docs/v3.3/installing-on-linux/high-availability-configurations/ha-configuration.md +++ b/content/zh/docs/v3.3/installing-on-linux/high-availability-configurations/ha-configuration.md @@ -48,7 +48,7 @@ weight: 3150 从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或直接使用以下命令。 ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -64,7 +64,7 @@ export KKZONE=cn 执行以下命令下载 KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -79,7 +79,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - {{< notice note >}} -执行以上命令会下载最新版 KubeKey (v2.2.2),您可以修改命令中的版本号下载指定版本。 +执行以上命令会下载最新版 KubeKey (v2.3.0),您可以修改命令中的版本号下载指定版本。 {{}} @@ -92,12 +92,12 @@ chmod +x kk 创建包含默认配置的示例配置文件。这里使用 Kubernetes v1.22.10 作为示例。 ```bash -./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.22.10 +./kk create config --with-kubesphere v3.3.1 --with-kubernetes v1.22.10 ``` {{< 
notice note >}} -- 安装 KubeSphere 3.3.0 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。 +- 安装 KubeSphere 3.3 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。 - 如果您在这一步的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere,只能使用配置文件中的 `addons` 字段安装,或者在您后续使用 `./kk create cluster` 命令时再次添加这个标志。 diff --git a/content/zh/docs/v3.3/installing-on-linux/high-availability-configurations/internal-ha-configuration.md b/content/zh/docs/v3.3/installing-on-linux/high-availability-configurations/internal-ha-configuration.md index eff4b85e7..cb884590b 100644 --- a/content/zh/docs/v3.3/installing-on-linux/high-availability-configurations/internal-ha-configuration.md +++ b/content/zh/docs/v3.3/installing-on-linux/high-availability-configurations/internal-ha-configuration.md @@ -33,7 +33,7 @@ weight: 3150 从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或直接使用以下命令。 ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -49,7 +49,7 @@ export KKZONE=cn 执行以下命令下载 KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -64,7 +64,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - {{< notice note >}} -执行以上命令会下载最新版 KubeKey (v2.2.2),您可以修改命令中的版本号下载指定版本。 +执行以上命令会下载最新版 KubeKey (v2.3.0),您可以修改命令中的版本号下载指定版本。 {{}} @@ -77,12 +77,12 @@ chmod +x kk 创建包含默认配置的示例配置文件。这里使用 Kubernetes v1.22.10 作为示例。 ```bash -./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.22.10 +./kk create config --with-kubesphere v3.3.1 --with-kubernetes v1.22.10 
``` {{< notice note >}} -- 安装 KubeSphere 3.3.0 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。 +- 安装 KubeSphere 3.3 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。 - 如果您在这一步的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere,只能使用配置文件中的 `addons` 字段安装,或者在您后续使用 `./kk create cluster` 命令时再次添加这个标志。 @@ -134,7 +134,7 @@ spec: spec: controlPlaneEndpoint: ##Internal loadbalancer for apiservers - internalLoadbalancer: haproxy + #internalLoadbalancer: haproxy domain: lb.kubesphere.local address: "" diff --git a/content/zh/docs/v3.3/installing-on-linux/high-availability-configurations/set-up-ha-cluster-using-keepalived-haproxy.md b/content/zh/docs/v3.3/installing-on-linux/high-availability-configurations/set-up-ha-cluster-using-keepalived-haproxy.md index 9ce39b733..dfea39623 100644 --- a/content/zh/docs/v3.3/installing-on-linux/high-availability-configurations/set-up-ha-cluster-using-keepalived-haproxy.md +++ b/content/zh/docs/v3.3/installing-on-linux/high-availability-configurations/set-up-ha-cluster-using-keepalived-haproxy.md @@ -267,7 +267,7 @@ yum install keepalived haproxy psmisc -y 从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或者直接使用以下命令。 ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -283,7 +283,7 @@ export KKZONE=cn 运行以下命令来下载 KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -298,7 +298,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - {{< notice note >}} -通过以上命令,可以下载 KubeKey 的最新版本 
(v2.2.2)。您可以更改命令中的版本号来下载特定的版本。 +通过以上命令,可以下载 KubeKey 的最新版本 (v2.3.0)。您可以更改命令中的版本号来下载特定的版本。 {{}} @@ -311,12 +311,12 @@ chmod +x kk 使用默认配置创建一个示例配置文件。此处以 Kubernetes v1.22.10 作为示例。 ```bash -./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.22.10 +./kk create config --with-kubesphere v3.3.1 --with-kubernetes v1.22.10 ``` {{< notice note >}} -- 安装 KubeSphere 3.3.0 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。 +- 安装 KubeSphere 3.3 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。 - 如果您没有在本步骤的命令中添加标志 `--with-kubesphere`,那么除非您使用配置文件中的 `addons` 字段进行安装,或者稍后使用 `./kk create cluster` 时再添加该标志,否则 KubeSphere 将不会被部署。 - 如果您添加标志 `--with-kubesphere` 时未指定 KubeSphere 版本,则会安装最新版本的 KubeSphere。 diff --git a/content/zh/docs/v3.3/installing-on-linux/introduction/air-gapped-installation.md b/content/zh/docs/v3.3/installing-on-linux/introduction/air-gapped-installation.md index fbcf11457..6b016d685 100644 --- a/content/zh/docs/v3.3/installing-on-linux/introduction/air-gapped-installation.md +++ b/content/zh/docs/v3.3/installing-on-linux/introduction/air-gapped-installation.md @@ -13,17 +13,17 @@ KubeKey v2.1.0 版本新增了清单(manifest)和制品(artifact)的概 ## 前提条件 -如果您要进行多节点安装,需要参考如下示例准备至少三台主机。 +要开始进行多节点安装,您需要参考如下示例准备至少三台主机。 | 主机 IP | 主机名称 | 角色 | | ---------------- | ---- | ---------------- | -| 192.168.0.2 | node1 | 联网主机用于制作离线包 | +| 192.168.0.2 | node1 | 联网主机,作为源集群用于制作离线包。已部署 Kubernetes v1.22.10 和 KubeSphere v3.3.1 | | 192.168.0.3 | node2 | 离线环境主节点 | | 192.168.0.4 | node3 | 离线环境镜像仓库节点 | ## 部署准备 -1. 
执行以下命令下载 KubeKey v2.3.0 并解压: {{< tabs >}} {{< tab "如果您能正常访问 GitHub/Googleapis" >}} 从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或者直接运行以下命令。 ```bash - curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - + curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -48,13 +48,23 @@ KubeKey v2.1.0 版本新增了清单(manifest)和制品(artifact)的概 运行以下命令来下载 KubeKey: ```bash - curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - + curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} {{}} -2. 在联网主机上执行以下命令,并复制示例中的 manifest 内容。关于更多信息,请参阅 [manifest-example](https://github.com/kubesphere/kubekey/blob/master/docs/manifest-example.md)。 +2. 在源集群中使用 KubeKey 创建 manifest。支持以下两种方式: + + - (推荐)在已创建的集群中执行 KubeKey 命令生成该文件。生成的 YAML 文件仅为示例(镜像列表不完整),需要自行补充修改;第一次离线部署时,推荐复制下方第 3 步的配置内容。 + + ```bash + ./kk create manifest + ``` + + - 根据模板手动创建并编写该文件(需要一定的基础,推荐使用第一种方式)。关于更多信息,请参阅 [manifest-example](https://github.com/kubesphere/kubekey/blob/master/docs/manifest-example.md)。 + +3. 
执行以下命令在源集群中修改 manifest 配置: ```bash vim manifest.yaml @@ -77,7 +87,7 @@ KubeKey v2.1.0 版本新增了清单(manifest)和制品(artifact)的概 repository: iso: localPath: - url: https://github.com/kubesphere/kubekey/releases/download/v2.2.2/centos7-rpms-amd64.iso + url: https://github.com/kubesphere/kubekey/releases/download/v2.3.0/centos7-rpms-amd64.iso - arch: amd64 type: linux id: ubuntu @@ -85,13 +95,13 @@ KubeKey v2.1.0 版本新增了清单(manifest)和制品(artifact)的概 repository: iso: localPath: - url: https://github.com/kubesphere/kubekey/releases/download/v2.2.2/ubuntu-20.04-debs-amd64.iso + url: https://github.com/kubesphere/kubekey/releases/download/v2.3.0/ubuntu-20.04-debs-amd64.iso kubernetesDistributions: - type: kubernetes - version: v1.22.10 + version: v1.22.12 components: helm: - version: v3.6.3 + version: v3.9.0 cni: version: v0.9.1 etcd: @@ -106,14 +116,14 @@ KubeKey v2.1.0 版本新增了清单(manifest)和制品(artifact)的概 docker-registry: version: "2" harbor: - version: v2.4.1 + version: v2.5.3 docker-compose: version: v2.2.2 images: - - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.10 - - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.10 - - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.10 - - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.10 + - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-apiserver:v1.22.12 + - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-controller-manager:v1.22.12 + - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-proxy:v1.22.12 + - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-scheduler:v1.22.12 - registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5 - registry.cn-beijing.aliyuncs.com/kubesphereio/coredns:1.8.0 - registry.cn-beijing.aliyuncs.com/kubesphereio/cni:v3.23.2 @@ -127,13 +137,14 @@ KubeKey v2.1.0 版本新增了清单(manifest)和制品(artifact)的概 - registry.cn-beijing.aliyuncs.com/kubesphereio/haproxy:2.3 - 
registry.cn-beijing.aliyuncs.com/kubesphereio/nfs-subdir-external-provisioner:v4.0.2 - registry.cn-beijing.aliyuncs.com/kubesphereio/k8s-dns-node-cache:1.15.12 - - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.3.0 - - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.3.0 - - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.3.0 - - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.3.0 - - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0 - - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.21.0 + - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-installer:v3.3.1 + - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-apiserver:v3.3.1 + - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-console:v3.3.1 + - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-controller-manager:v3.3.1 + - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-upgrade:v3.3.1 - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.22.0 + - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.21.0 + - registry.cn-beijing.aliyuncs.com/kubesphereio/kubectl:v1.20.0 - registry.cn-beijing.aliyuncs.com/kubesphereio/kubefed:v0.8.1 - registry.cn-beijing.aliyuncs.com/kubesphereio/tower:v0.2.0 - registry.cn-beijing.aliyuncs.com/kubesphereio/minio:RELEASE.2019-08-07T01-59-21Z @@ -150,10 +161,11 @@ KubeKey v2.1.0 版本新增了清单(manifest)和制品(artifact)的概 - registry.cn-beijing.aliyuncs.com/kubesphereio/cloudcore:v1.9.2 - registry.cn-beijing.aliyuncs.com/kubesphereio/iptables-manager:v1.9.2 - registry.cn-beijing.aliyuncs.com/kubesphereio/edgeservice:v0.2.0 - - registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.2.1 - - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-apiserver:v3.3.0 - - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-controller:v3.3.0 - - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-tools:v3.3.0 + - registry.cn-beijing.aliyuncs.com/kubesphereio/gatekeeper:v3.5.2 + - 
registry.cn-beijing.aliyuncs.com/kubesphereio/openpitrix-jobs:v3.3.1 + - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-apiserver:v3.3.1 + - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-controller:v3.3.1 + - registry.cn-beijing.aliyuncs.com/kubesphereio/devops-tools:v3.3.1 - registry.cn-beijing.aliyuncs.com/kubesphereio/ks-jenkins:v3.3.0-2.319.1 - registry.cn-beijing.aliyuncs.com/kubesphereio/inbound-agent:4.10-2 - registry.cn-beijing.aliyuncs.com/kubesphereio/builder-base:v3.2.2 @@ -201,7 +213,7 @@ KubeKey v2.1.0 版本新增了清单(manifest)和制品(artifact)的概 - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-config-reloader:v0.55.1 - registry.cn-beijing.aliyuncs.com/kubesphereio/prometheus-operator:v0.55.1 - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-rbac-proxy:v0.11.0 - - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.3.0 + - registry.cn-beijing.aliyuncs.com/kubesphereio/kube-state-metrics:v2.5.0 - registry.cn-beijing.aliyuncs.com/kubesphereio/node-exporter:v1.3.1 - registry.cn-beijing.aliyuncs.com/kubesphereio/alertmanager:v0.23.0 - registry.cn-beijing.aliyuncs.com/kubesphereio/thanos:v0.25.2 @@ -237,7 +249,6 @@ KubeKey v2.1.0 版本新增了清单(manifest)和制品(artifact)的概 - registry.cn-beijing.aliyuncs.com/kubesphereio/hello:plain-text - registry.cn-beijing.aliyuncs.com/kubesphereio/wordpress:4.8-apache - registry.cn-beijing.aliyuncs.com/kubesphereio/hpa-example:latest - - registry.cn-beijing.aliyuncs.com/kubesphereio/java:openjdk-8-jre-alpine - registry.cn-beijing.aliyuncs.com/kubesphereio/fluentd:v1.4.2-2.0 - registry.cn-beijing.aliyuncs.com/kubesphereio/perl:latest - registry.cn-beijing.aliyuncs.com/kubesphereio/examples-bookinfo-productpage-v1:1.16.2 @@ -258,17 +269,11 @@ KubeKey v2.1.0 版本新增了清单(manifest)和制品(artifact)的概 - 可根据实际情况修改 **manifest-sample.yaml** 文件的内容,用于之后导出期望的 artifact 文件。 - - 您可以访问 https://github.com/kubesphere/kubekey/releases/tag/v2.2.2 下载 ISO 文件。 + - 您可以访问 https://github.com/kubesphere/kubekey/releases/tag/v2.3.0 下载 
ISO 文件。 {{}} -3. (可选)如果您已经拥有集群,那么可以在已有集群中执行 KubeKey 命令生成 manifest 文件,并参照步骤 2 中的示例配置 manifest 文件内容。 - - ```bash - ./kk create manifest - ``` - -4. 导出制品 artifact。 +4. 从源集群中导出制品 artifact。 {{< tabs >}} @@ -313,7 +318,7 @@ KubeKey v2.1.0 版本新增了清单(manifest)和制品(artifact)的概 2. 执行以下命令创建离线集群配置文件: ```bash - ./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.22.10 -f config-sample.yaml + ./kk create config --with-kubesphere v3.3.1 --with-kubernetes v1.22.10 -f config-sample.yaml ``` 3. 执行以下命令修改配置文件: @@ -358,7 +363,7 @@ KubeKey v2.1.0 版本新增了清单(manifest)和制品(artifact)的概 address: "" port: 6443 kubernetes: - version: v1.22.10 + version: v1.21.5 clusterName: cluster.local network: plugin: calico @@ -409,7 +414,7 @@ KubeKey v2.1.0 版本新增了清单(manifest)和制品(artifact)的概 - 公共项目(Public):任何用户都可以从这个项目中拉取镜像。 - 私有项目(Private):只有作为项目成员的用户可以拉取镜像。 - Harbor 管理员账号:**admin**,密码:**Harbor12345**。Harbor 安装文件在 **/opt/harbor**, 如需运维 Harbor,可至该目录下。 + Harbor 管理员账号:**admin**,密码:**Harbor12345**。Harbor 安装文件在 **/opt/harbor** , 如需运维 Harbor,可至该目录下。 {{}} @@ -542,8 +547,8 @@ KubeKey v2.1.0 版本新增了清单(manifest)和制品(artifact)的概 参数解释如下: - - **config-sample.yaml**:离线环境的配置文件。 - - **kubesphere.tar.gz**:打包的 tar 包镜像。 + - **config-sample.yaml**:离线环境集群的配置文件。 + - **kubesphere.tar.gz**:源集群打包出来的 tar 包镜像。 - **--with-packages**:若需要安装操作系统依赖,需指定该选项。 8. 
执行以下命令查看集群状态: diff --git a/content/zh/docs/v3.3/installing-on-linux/introduction/kubekey.md b/content/zh/docs/v3.3/installing-on-linux/introduction/kubekey.md index 692840c2c..475054f60 100644 --- a/content/zh/docs/v3.3/installing-on-linux/introduction/kubekey.md +++ b/content/zh/docs/v3.3/installing-on-linux/introduction/kubekey.md @@ -39,7 +39,7 @@ KubeKey 的几种使用场景: 从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或者直接运行以下命令。 ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -55,7 +55,7 @@ export KKZONE=cn 运行以下命令来下载 KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -70,21 +70,21 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - {{< notice note >}} -通过以上的命令,可以下载 KubeKey 的最新版本 (v2.2.2)。您可以更改命令中的版本号来下载特定的版本。 +通过以上的命令,可以下载 KubeKey 的最新版本 (v2.3.0)。您可以更改命令中的版本号来下载特定的版本。 {{}} ## 支持矩阵 -若需使用 KubeKey 来安装 Kubernetes 和 KubeSphere 3.3.0,请参见下表以查看所有受支持的 Kubernetes 版本。 +若需使用 KubeKey 来安装 Kubernetes 和 KubeSphere 3.3,请参见下表以查看所有受支持的 Kubernetes 版本。 | KubeSphere 版本 | 受支持的 Kubernetes 版本 | | ------------------ | ------------------------------------------------------------ | -| v3.3.0 | v1.19.x、v1.20.x、v1.21.x、v1.22.x、v1.23.x(实验性支持) | +| v3.3 | v1.19.x、v1.20.x、v1.21.x、v1.22.x、v1.23.x(实验性支持) | {{< notice note >}} - 您也可以运行 `./kk version --show-supported-k8s`,查看能使用 KubeKey 安装的所有受支持的 Kubernetes 版本。 -- 能使用 KubeKey 安装的 Kubernetes 版本与 KubeSphere v3.3.0 支持的 Kubernetes 版本不同。如需[在现有 Kubernetes 集群上安装 KubeSphere 3.3.0](../../../installing-on-kubernetes/introduction/overview/),您的 Kubernetes 版本必须为 v1.19.x,v1.20.x,v1.21.x,v1.22.x 或 v1.23.x(实验性支持)。 -- 如果您需要使用 KubeEdge,为了避免兼容性问题,建议安装 v1.21.x 及以下版本的 Kubernetes。 +- 能使用 KubeKey 安装的 Kubernetes 版本与 KubeSphere 3.3 支持的 Kubernetes 版本不同。如需[在现有 Kubernetes 集群上安装 KubeSphere 
3.3](../../../installing-on-kubernetes/introduction/overview/),您的 Kubernetes 版本必须为 v1.19.x,v1.20.x,v1.21.x,v1.22.x 或 v1.23.x(实验性支持)。 +- 如果您需要使用 KubeEdge,为了避免兼容性问题,建议安装 v1.22.x 及以下版本的 Kubernetes。 {{}} \ No newline at end of file diff --git a/content/zh/docs/v3.3/installing-on-linux/introduction/multioverview.md b/content/zh/docs/v3.3/installing-on-linux/introduction/multioverview.md index 4b5366dd6..49f03a44c 100644 --- a/content/zh/docs/v3.3/installing-on-linux/introduction/multioverview.md +++ b/content/zh/docs/v3.3/installing-on-linux/introduction/multioverview.md @@ -32,7 +32,7 @@ weight: 3120 | 系统 | 最低要求(每个节点) | | ------------------------------------------------------------ | -------------------------------- | -| **Ubuntu** *16.04,18.04,20.04, 22.04* | CPU:2 核,内存:4 G,硬盘:40 G | +| **Ubuntu** *16.04,18.04,20.04* | CPU:2 核,内存:4 G,硬盘:40 G | | **Debian** *Buster,Stretch* | CPU:2 核,内存:4 G,硬盘:40 G | | **CentOS** *7*.x | CPU:2 核,内存:4 G,硬盘:40 G | | **Red Hat Enterprise Linux** *7* | CPU:2 核,内存:4 G,硬盘:40 G | @@ -101,7 +101,7 @@ KubeKey 可以一同安装 Kubernetes 和 KubeSphere。根据要安装的 Kubern 从 [GitHub 发布页面](https://github.com/kubesphere/kubekey/releases)下载 KubeKey 或直接使用以下命令。 ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -117,7 +117,7 @@ export KKZONE=cn 执行以下命令下载 KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -132,7 +132,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - {{< notice note >}} -执行以上命令会下载最新版 KubeKey (v2.2.2),您可以修改命令中的版本号下载指定版本。 +执行以上命令会下载最新版 KubeKey (v2.3.0),您可以修改命令中的版本号下载指定版本。 {{}} @@ -156,7 +156,7 @@ chmod +x kk {{< notice note >}} -- 安装 KubeSphere 3.3.0 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 
版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。 +- 安装 KubeSphere 3.3 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。 - 如果您在此步骤的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere,只能使用配置文件中的 `addons` 字段安装,或者在您后续使用 `./kk create cluster` 命令时再次添加这个标志。 @@ -172,7 +172,7 @@ chmod +x kk ./kk create config [-f ~/myfolder/abc.yaml] ``` -- 您可以指定要安装的 KubeSphere 版本(例如 `--with-kubesphere v3.3.0`)。 +- 您可以指定要安装的 KubeSphere 版本(例如 `--with-kubesphere v3.3.1`)。 ```bash ./kk create config --with-kubesphere [version] @@ -246,13 +246,6 @@ spec: hosts: - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, privateKeyPath: "~/.ssh/id_rsa"} ``` - -- 在 ARM 设备上安装的示例: - - ```yaml - hosts: - - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123, arch: arm64} - ``` {{< notice tip >}} @@ -358,4 +351,4 @@ kubectl completion bash >/etc/bash_completion.d/kubectl 详细信息[见此](https://kubernetes.io/docs/tasks/tools/install-kubectl/#enabling-shell-autocompletion)。 ## 代码演示 - + \ No newline at end of file diff --git a/content/zh/docs/v3.3/installing-on-linux/introduction/vars.md b/content/zh/docs/v3.3/installing-on-linux/introduction/vars.md index 3c576a4e2..2b0660927 100644 --- a/content/zh/docs/v3.3/installing-on-linux/introduction/vars.md +++ b/content/zh/docs/v3.3/installing-on-linux/introduction/vars.md @@ -10,7 +10,7 @@ weight: 3160 ```yaml kubernetes: - version: v1.22.10 + version: v1.21.5 imageRepo: kubesphere clusterName: cluster.local masqueradeAll: false @@ -45,7 +45,7 @@ weight: 3160 version - Kubernetes 安装版本。如未指定 Kubernetes 版本,{{< contentLink "docs/installing-on-linux/introduction/kubekey" "KubeKey" >}} v2.2.2 默认安装 Kubernetes v1.23.7。有关更多信息,请参阅{{< contentLink "docs/installing-on-linux/introduction/kubekey/#support-matrix" "支持矩阵" >}}。 + Kubernetes 
安装版本。如未指定 Kubernetes 版本,{{< contentLink "docs/installing-on-linux/introduction/kubekey" "KubeKey" >}} v2.3.0 默认安装 Kubernetes v1.23.7。有关更多信息,请参阅{{< contentLink "docs/installing-on-linux/introduction/kubekey/#support-matrix" "支持矩阵" >}}。 imageRepo @@ -112,7 +112,7 @@ weight: 3160 privateRegistry* - 配置私有镜像仓库,用于离线安装(例如,Docker 本地仓库或 Harbor)。有关详细信息,请参阅{{< contentLink "docs/installing-on-linux/introduction/air-gapped-installation/" "离线安装" >}}。 + 配置私有镜像仓库,用于离线安装(例如,Docker 本地仓库或 Harbor)。有关详细信息,请参阅{{< contentLink "docs/v3.3/installing-on-linux/introduction/air-gapped-installation/" "离线安装" >}}。 diff --git a/content/zh/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-and-k3s.md b/content/zh/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-and-k3s.md index 9bc1f39bc..9ac5a0757 100644 --- a/content/zh/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-and-k3s.md +++ b/content/zh/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-and-k3s.md @@ -32,7 +32,7 @@ weight: 3530 从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或直接运行以下命令: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -48,7 +48,7 @@ export KKZONE=cn 运行以下命令来下载 KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -63,7 +63,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - {{< notice note >}} -通过以上的命令可以下载 KubeKey 的最新版本 (v2.2.2)。请注意,更早版本的 KubeKey 无法下载 K3s。 +通过以上的命令可以下载 KubeKey 的最新版本 (v2.3.0)。请注意,更早版本的 KubeKey 无法下载 K3s。 {{}} @@ -78,12 +78,12 @@ chmod +x kk 1. 
执行以下命令为集群创建一个配置文件: ```bash - ./kk create config --with-kubernetes v1.21.4-k3s --with-kubesphere v3.3.0 + ./kk create config --with-kubernetes v1.21.4-k3s --with-kubesphere v3.3.1 ``` {{< notice note >}} - - KubeKey v2.2.2 支持安装 K3s v1.21.4。 + - KubeKey v2.3.0 支持安装 K3s v1.21.4。 - 您可以在以上命令中使用 `-f` 或 `--file` 参数指定配置文件的路径和名称。如未指定路径和名称,KubeKey 将默认在当前目录下创建 `config-sample.yaml` 配置文件。 diff --git a/content/zh/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-bare-metal.md b/content/zh/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-bare-metal.md index 47f6b1fbb..c9f889e83 100644 --- a/content/zh/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-bare-metal.md +++ b/content/zh/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-bare-metal.md @@ -200,7 +200,7 @@ yum install conntrack-tools 从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或使用以下命令: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -216,7 +216,7 @@ export KKZONE=cn 执行以下命令下载 KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -231,7 +231,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - {{< notice note >}} -执行以上命令会下载最新版 KubeKey (v2.2.2),您可以修改命令中的版本号下载指定版本。 +执行以上命令会下载最新版 KubeKey (v2.3.0),您可以修改命令中的版本号下载指定版本。 {{}} @@ -245,15 +245,15 @@ chmod +x kk 您可用使用 KubeKey 同时安装 Kubernetes 和 KubeSphere,通过自定义配置文件中的参数创建多节点集群。 -创建安装有 KubeSphere 的 Kubernetes 集群(例如使用 `--with-kubesphere v3.3.0`): +创建安装有 KubeSphere 的 Kubernetes 集群(例如使用 `--with-kubesphere v3.3.1`): ```bash -./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 +./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.1 ``` {{< notice note >}} -- 安装 KubeSphere 3.3.0 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 
v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。 +- 安装 KubeSphere 3.3 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。 - 如果您在这一步的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere,只能使用配置文件中的 `addons` 字段安装 KubeSphere,或者在您后续使用 `./kk create cluster` 命令时再次添加该标志。 - 如果您添加标志 `--with-kubesphere` 时不指定 KubeSphere 版本,则会安装最新版本的 KubeSphere。 diff --git a/content/zh/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md b/content/zh/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md index 38be3e65c..9025236c2 100644 --- a/content/zh/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md +++ b/content/zh/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md @@ -241,14 +241,14 @@ track_script { ```bash systemctl restart keepalived && systemctl enable keepalived -systemctl stop keepalived +systemctl stop keepalived ``` 开启 keepalived服务 ```bash -systemctl start keepalived +systemctl start keepalived ``` ### 验证可用性 @@ -288,7 +288,7 @@ systemctl status -l keepalived 从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或直接使用以下命令。 ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -304,7 +304,7 @@ export KKZONE=cn 执行以下命令下载 KubeKey。 ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -319,7 +319,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - {{< notice note >}} -执行以上命令会下载最新版 KubeKey (v2.2.2),您可以修改命令中的版本号下载指定版本。 +执行以上命令会下载最新版 KubeKey (v2.3.0),您可以修改命令中的版本号下载指定版本。 {{}} @@ 
-338,12 +338,12 @@ chmod +x kk 创建配置文件(一个示例配置文件)。 ```bash -./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 +./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.1 ``` {{< notice note >}} -- 安装 KubeSphere 3.3.0 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。 +- 安装 KubeSphere 3.3 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。 - 如果您在这一步的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere,只能使用配置文件中的 `addons` 字段安装,或者在您后续使用 `./kk create cluster` 命令时再次添加这个标志。 @@ -389,7 +389,7 @@ spec: address: "10.10.71.67" port: 6443 kubernetes: - version: v1.22.10 + version: v1.21.5 imageRepo: kubesphere clusterName: cluster.local masqueradeAll: false # masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. 
[Default: false] diff --git a/content/zh/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-glusterfs.md b/content/zh/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-glusterfs.md index 2c7f81397..99460a66b 100644 --- a/content/zh/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-glusterfs.md +++ b/content/zh/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-glusterfs.md @@ -119,7 +119,7 @@ weight: 3340 从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或者直接运行以下命令。 ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -135,7 +135,7 @@ export KKZONE=cn 运行以下命令来下载 KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -150,7 +150,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - {{< notice note >}} -通过以上的命令,可以下载 KubeKey 的最新版本 (v2.2.2)。您可以更改命令中的版本号来下载特定的版本。 +通过以上的命令,可以下载 KubeKey 的最新版本 (v2.3.0)。您可以更改命令中的版本号来下载特定的版本。 {{}} @@ -165,12 +165,12 @@ chmod +x kk 1. 
指定想要安装的 Kubernetes 版本和 KubeSphere 版本,例如: ```bash - ./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 + ./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.1 ``` {{< notice note >}} - - 安装 KubeSphere 3.3.0 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。 + - 安装 KubeSphere 3.3 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。 - 如果您在此步骤的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere,只能使用配置文件中的 `addons` 字段安装,或者在您后续使用 `./kk create cluster` 命令时再次添加这个标志。 - 如果您添加标志 `--with-kubesphere` 时不指定 KubeSphere 版本,则会安装最新版本的 KubeSphere。 @@ -205,7 +205,7 @@ chmod +x kk address: "" port: 6443 kubernetes: - version: v1.22.10 + version: v1.21.5 imageRepo: kubesphere clusterName: cluster.local network: diff --git a/content/zh/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-nfs-client.md b/content/zh/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-nfs-client.md index beb97c51c..a6d190366 100644 --- a/content/zh/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-nfs-client.md +++ b/content/zh/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-nfs-client.md @@ -11,7 +11,7 @@ weight: 3330 {{< notice note >}} - 本教程以 Ubuntu 16.04 为例。 -- NFS 与部分应用不兼容(例如 Prometheus),可能会导致容器组创建失败。如果确实需要在生产环境中使用 NFS,请确保您了解相关风险或咨询 KubeSphere 技术支持 support@kubesphere.cloud。 +- 不建议您在生产环境中使用 NFS 存储(尤其是在 Kubernetes 1.20 或以上版本),这可能会引起 `failed to obtain lock` 和 `input/output error` 等问题,从而导致 Pod `CrashLoopBackOff`。此外,部分应用不兼容 NFS,例如 [Prometheus](https://github.com/prometheus/prometheus/blob/03b354d4d9386e4b3bfbcd45da4bb58b182051a5/docs/storage.md#operational-aspects) 等。 {{}} @@ 
-71,7 +71,7 @@ weight: 3330 从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或者直接运行以下命令。 ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -87,7 +87,7 @@ export KKZONE=cn 运行以下命令来下载 KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -102,7 +102,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - {{< notice note >}} -通过以上命令,可以下载 KubeKey 的最新版本 (v2.2.2)。您可以更改命令中的版本号来下载特定的版本。 +通过以上命令,可以下载 KubeKey 的最新版本 (v2.3.0)。您可以更改命令中的版本号来下载特定的版本。 {{}} @@ -117,12 +117,12 @@ chmod +x kk 1. 指定您想要安装的 Kubernetes 版本和 KubeSphere 版本,例如: ```bash - ./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 + ./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.1 ``` {{< notice note >}} - - 安装 KubeSphere 3.3.0 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。 + - 安装 KubeSphere 3.3 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。 - 如果您在此步骤的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere,只能使用配置文件中的 `addons` 字段安装,或者在您后续使用 `./kk create cluster` 命令时再次添加这个标志。 - 如果您添加标志 `--with-kubesphere` 时不指定 KubeSphere 版本,则会安装最新版本的 KubeSphere。 @@ -157,7 +157,7 @@ chmod +x kk address: "" port: 6443 kubernetes: - version: v1.22.10 + version: v1.21.5 imageRepo: kubesphere clusterName: cluster.local network: diff --git a/content/zh/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-qingcloud-csi.md 
b/content/zh/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-qingcloud-csi.md index 23d8d5e70..bee03c593 100644 --- a/content/zh/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-qingcloud-csi.md +++ b/content/zh/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-qingcloud-csi.md @@ -73,7 +73,7 @@ weight: 3320 从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或者直接运行以下命令。 ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -89,7 +89,7 @@ export KKZONE=cn 运行以下命令下载 KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -104,7 +104,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - {{< notice note >}} -通过以上的命令,可以下载 KubeKey 的最新版本 (v2.2.2)。您可以更改命令中的版本号来下载特定的版本。 +通过以上的命令,可以下载 KubeKey 的最新版本 (v2.3.0)。您可以更改命令中的版本号来下载特定的版本。 {{}} @@ -119,12 +119,12 @@ chmod +x kk 1. 
指定您想要安装的 Kubernetes 版本和 KubeSphere 版本,例如: ```bash - ./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 + ./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.1 ``` {{< notice note >}} - - 安装 KubeSphere 3.3.0 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。 + - 安装 KubeSphere 3.3 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。 - 如果您在此步骤的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere,只能使用配置文件中的 `addons` 字段安装,或者在您后续使用 `./kk create cluster` 命令时再次添加这个标志。 - 如果您添加标志 `--with-kubesphere` 时不指定 KubeSphere 版本,则会安装最新版本的 KubeSphere。 @@ -159,7 +159,7 @@ chmod +x kk address: "" port: 6443 kubernetes: - version: v1.22.10 + version: v1.21.5 imageRepo: kubesphere clusterName: cluster.local network: diff --git a/content/zh/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-ali-ecs.md b/content/zh/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-ali-ecs.md index 3ff5bfa7c..144462ec3 100644 --- a/content/zh/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-ali-ecs.md +++ b/content/zh/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-ali-ecs.md @@ -91,7 +91,7 @@ controlPlaneEndpoint: 从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或直接使用以下命令。 ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -107,7 +107,7 @@ export KKZONE=cn 执行以下命令下载 KubeKey。 ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -122,7 +122,7 @@ curl -sfL 
https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - {{< notice note >}} -执行以上命令会下载最新版 KubeKey (v2.2.2),您可以修改命令中的版本号下载指定版本。 +执行以上命令会下载最新版 KubeKey (v2.3.0),您可以修改命令中的版本号下载指定版本。 {{}} @@ -141,7 +141,7 @@ chmod +x kk 在当前位置创建配置文件 `config-sample.yaml`: ```bash -./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.22.10 -f config-sample.yaml +./kk create config --with-kubesphere v3.3.1 --with-kubernetes v1.22.10 -f config-sample.yaml ``` > 提示:默认是 Kubernetes 1.17.9,这些 Kubernetes 版本也与 KubeSphere 同时进行过充分的测试: v1.15.12, v1.16.13, v1.17.9 (default), v1.18.6,您可以根据需要指定版本。 diff --git a/content/zh/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-azure-vms.md b/content/zh/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-azure-vms.md index c1bc52a47..d658b6f7c 100644 --- a/content/zh/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-azure-vms.md +++ b/content/zh/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-azure-vms.md @@ -102,7 +102,7 @@ ssh -i .ssh/id_rsa2 -p50200 kubesphere@40.81.5.xx 从 KubeKey 的 [Github 发布页面](https://github.com/kubesphere/kubekey/releases)下载,或执行以下命令: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -118,7 +118,7 @@ export KKZONE=cn 运行以下命令下载 KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -133,7 +133,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - {{< notice note >}} -上面的命令会下载 KubeKey 最新版本 (v2.2.2)。您可以在命令中更改版本号以下载特定版本。 +上面的命令会下载 KubeKey 最新版本 (v2.3.0)。您可以在命令中更改版本号以下载特定版本。 {{}} @@ -148,12 +148,12 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - 2. 
使用默认配置创建示例配置文件,这里以 Kubernetes v1.22.10 为例。 ```bash - ./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.22.10 + ./kk create config --with-kubesphere v3.3.1 --with-kubernetes v1.22.10 ``` {{< notice note >}} -- KubeSphere 3.3.0 对应 Kubernetes 版本推荐:v1.19.x、v1.20.x、v1.21.x、 v1.22.x 和 v1.23.x(实验性支持)。如果未指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关支持的 Kubernetes 版本请参阅[支持矩阵](../../../installing-on-linux/introduction/kubekey/#support-matrix)。 +- KubeSphere 3.3 对应 Kubernetes 版本推荐:v1.19.x、v1.20.x、v1.21.x、 v1.22.x 和 v1.23.x(实验性支持)。如果未指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关支持的 Kubernetes 版本请参阅[支持矩阵](../../../installing-on-linux/introduction/kubekey/#support-matrix)。 - 如果在此步骤中的命令中未添加标志 `--with-kubesphere`,则不会部署 KubeSphere,除非您使用配置文件中的 `addons` 字段进行安装,或稍后使用 `./kk create cluster` 时再次添加此标志。 - 如果在未指定 KubeSphere 版本的情况下添加标志 `--with-kubesphere`,将安装 KubeSphere 的最新版本。 diff --git a/content/zh/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-huaweicloud-ecs.md b/content/zh/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-huaweicloud-ecs.md index 3a59ebb3c..5241ff5b2 100644 --- a/content/zh/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-huaweicloud-ecs.md +++ b/content/zh/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-huaweicloud-ecs.md @@ -85,7 +85,7 @@ Kubernetes 服务需要做到高可用,需要保证 kube-apiserver 的 HA , 从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或直接使用以下命令。 ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -101,7 +101,7 @@ export KKZONE=cn 执行以下命令下载 KubeKey。 ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -116,7 +116,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - {{< notice note >}} -执行以上命令会下载最新版 KubeKey (v2.2.2),您可以修改命令中的版本号下载指定版本。
+执行以上命令会下载最新版 KubeKey (v2.3.0),您可以修改命令中的版本号下载指定版本。 {{}} @@ -137,7 +137,7 @@ chmod +x kk 在当前位置创建配置文件 `master-HA.yaml`: ```bash -./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.22.10 -f master-HA.yaml +./kk create config --with-kubesphere v3.3.1 --with-kubernetes v1.22.10 -f master-HA.yaml ``` > 提示:默认是 Kubernetes 1.17.9,这些 Kubernetes 版本也与 KubeSphere 同时进行过充分的测试: v1.15.12, v1.16.13, v1.17.9 (default), v1.18.6,您可以根据需要指定版本。 @@ -202,146 +202,70 @@ metadata: name: ks-installer namespace: kubesphere-system labels: - version: v3.3.0 + version: v3.3.1 spec: + local_registry: "" persistence: - storageClass: "" # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here. + storageClass: "" authentication: - jwtSecret: "" # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster. - local_registry: "" # Add your private registry address if it is needed. - # dev_tag: "" # Add your kubesphere image tag you want to install, by default it's same as ks-installer release version. + jwtSecret: "" etcd: - monitoring: false # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it. - endpointIps: localhost # etcd cluster EndpointIps. It can be a bunch of IPs here. - port: 2379 # etcd port. + monitoring: true # Whether to install etcd monitoring dashboard + endpointIps: 192.168.1.10,192.168.1.11,192.168.1.12 # etcd cluster endpointIps + port: 2379 # etcd port tlsEnable: true common: - core: - console: - enableMultiLogin: true # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time. 
- port: 30880 - type: NodePort - # apiserver: # Enlarge the apiserver and controller manager's resource requests and limits for the large cluster - # resources: {} - # controllerManager: - # resources: {} - redis: - enabled: false - enableHA: false - volumeSize: 2Gi # Redis PVC size. - openldap: - enabled: false - volumeSize: 2Gi # openldap PVC size. - minio: - volumeSize: 20Gi # Minio PVC size. - monitoring: - # type: external # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line. - endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data. - GPUMonitoring: # Enable or disable the GPU-related metrics. If you enable this switch but have no GPU resources, Kubesphere will set it to zero. - enabled: false - gpu: # Install GPUKinds. The default GPU kind is nvidia.com/gpu. Other GPU kinds can be added here according to your needs. - kinds: - - resourceName: "nvidia.com/gpu" - resourceType: "GPU" - default: true - es: # Storage backend for logging, events and auditing. - # master: - # volumeSize: 4Gi # The volume size of Elasticsearch master nodes. - # replicas: 1 # The total number of master nodes. Even numbers are not allowed. - # resources: {} - # data: - # volumeSize: 20Gi # The volume size of Elasticsearch data nodes. - # replicas: 1 # The total number of data nodes. - # resources: {} - logMaxAge: 7 # Log retention time in built-in Elasticsearch. It is 7 days by default. - elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - basicAuth: - enabled: false - username: "" - password: "" - externalElasticsearchHost: "" - externalElasticsearchPort: "" - alerting: # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from. - enabled: false # Enable or disable the KubeSphere Alerting System. 
- # thanosruler: - # replicas: 1 - # resources: {} - auditing: # Provide a security-relevant chronological set of records,recording the sequence of activities happening on the platform, initiated by different tenants. - enabled: false # Enable or disable the KubeSphere Auditing Log System. - # operator: - # resources: {} - # webhook: - # resources: {} - devops: # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image. - enabled: false # Enable or disable the KubeSphere DevOps System. - # resources: {} - jenkinsMemoryLim: 2Gi # Jenkins memory limit. - jenkinsMemoryReq: 1500Mi # Jenkins memory request. - jenkinsVolumeSize: 8Gi # Jenkins volume size. - jenkinsJavaOpts_Xms: 1200m # The following three fields are JVM parameters. - jenkinsJavaOpts_Xmx: 1600m + mysqlVolumeSize: 20Gi # MySQL PVC size + minioVolumeSize: 20Gi # Minio PVC size + etcdVolumeSize: 20Gi # etcd PVC size + openldapVolumeSize: 2Gi # openldap PVC size + redisVolumSize: 2Gi # Redis PVC size + es: # Storage backend for logging, tracing, events and auditing. + elasticsearchMasterReplicas: 1 # total number of master nodes, it's not allowed to use even number + elasticsearchDataReplicas: 1 # total number of data nodes + elasticsearchMasterVolumeSize: 4Gi # Volume size of Elasticsearch master nodes + elasticsearchDataVolumeSize: 20Gi # Volume size of Elasticsearch data nodes + logMaxAge: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default. + elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log + # externalElasticsearchUrl: + # externalElasticsearchPort: + console: + enableMultiLogin: false # enable/disable multiple sign-on; it allows an account to be used by different users at the same time. + port: 30880 + alerting: # Whether to install KubeSphere alerting system.
It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from. + enabled: true + auditing: # Whether to install KubeSphere audit log system. It provides a security-relevant chronological set of records, recording the sequence of activities happening on the platform, initiated by different tenants. + enabled: true + devops: # Whether to install KubeSphere DevOps System. It provides an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image + enabled: true + jenkinsMemoryLim: 2Gi # Jenkins memory limit + jenkinsMemoryReq: 1500Mi # Jenkins memory request + jenkinsVolumeSize: 8Gi # Jenkins volume size + jenkinsJavaOpts_Xms: 512m # The following three fields are JVM parameters + jenkinsJavaOpts_Xmx: 512m jenkinsJavaOpts_MaxRAM: 2g - events: # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters. - enabled: false # Enable or disable the KubeSphere Events System. - # operator: - # resources: {} - # exporter: - # resources: {} - # ruler: - # enabled: true - # replicas: 2 - # resources: {} - logging: # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd. - enabled: false # Enable or disable the KubeSphere Logging System. - logsidecar: - enabled: true - replicas: 2 - # resources: {} - metrics_server: # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler). - enabled: false # Enable or disable metrics-server. - monitoring: - storageClass: "" # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default.
- node_exporter: - port: 9100 - # resources: {} - # kube_rbac_proxy: - # resources: {} - # kube_state_metrics: - # resources: {} - # prometheus: - # replicas: 1 # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability. - # volumeSize: 20Gi # Prometheus PVC size. - # resources: {} - # operator: - # resources: {} - # alertmanager: - # replicas: 1 # AlertManager Replicas. - # resources: {} - # notification_manager: - # resources: {} - # operator: - # resources: {} - # proxy: - # resources: {} - gpu: # GPU monitoring-related plug-in installation. - nvidia_dcgm_exporter: # Ensure that gpu resources on your hosts can be used normally, otherwise this plug-in will not work properly. - enabled: false # Check whether the labels on the GPU hosts contain "nvidia.com/gpu.present=true" to ensure that the DCGM pod is scheduled to these nodes. - # resources: {} + events: # Whether to install KubeSphere events system. It provides a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters. + enabled: true + logging: # Whether to install KubeSphere logging system. Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd. + enabled: true + logsidecarReplicas: 2 + metrics_server: # Whether to install metrics-server. It enables HPA (Horizontal Pod Autoscaler). + enabled: true + monitoring: # + prometheusReplicas: 1 # Prometheus replicas are responsible for monitoring different segments of data source and provide high availability as well. + prometheusMemoryRequest: 400Mi # Prometheus memory request + prometheusVolumeSize: 20Gi # Prometheus PVC size + alertmanagerReplicas: 1 # AlertManager Replicas multicluster:
- network: - networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods). - # Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net. - enabled: false # Enable or disable network policies. - ippool: # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool. - type: none # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled. - topology: # Use Service Topology to view Service-to-Service communication based on Weave Scope. - type: none # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled. - openpitrix: # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle. - store: - enabled: false # Enable or disable the KubeSphere App Store. - servicemesh: # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology. - enabled: false # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based). + clusterRole: none # host | member | none # You can install a solo cluster, or specify it as the role of host or member cluster + networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods). + enabled: true + notification: # It supports notification management in multi-tenant Kubernetes clusters. It allows you to set AlertManager as its sender, and receivers include Email, Wechat Work, and Slack. + enabled: true + openpitrix: # Whether to install KubeSphere App Store. 
It provides an application store for Helm-based applications, and offers application lifecycle management + enabled: true + servicemesh: # Whether to install KubeSphere Service Mesh (Istio-based). It provides fine-grained traffic management, observability and tracing, and offers visualization for traffic topology + enabled: true ``` #### 持久化存储配置 @@ -356,7 +280,7 @@ spec: ```bash # 指定配置文件创建集群 - ./kk create cluster --with-kubesphere v3.3.0 -f master-HA.yaml + ./kk create cluster --with-kubesphere v3.3.1 -f master-HA.yaml # 查看 KubeSphere 安装日志 -- 直到出现控制台的访问地址和登录帐户 kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f diff --git a/content/zh/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms.md b/content/zh/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms.md index 50ed86ef0..b7dcabcd2 100644 --- a/content/zh/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms.md +++ b/content/zh/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-qingcloud-vms.md @@ -126,7 +126,7 @@ Weight: 3420 从 [GitHub 发布页面](https://github.com/kubesphere/kubekey/releases)下载 KubeKey 或直接使用以下命令: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -142,7 +142,7 @@ export KKZONE=cn 执行以下命令下载 KubeKey: ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -157,7 +157,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - {{< notice note >}} -执行以上命令会下载最新版 KubeKey (v2.2.2),您可以修改命令中的版本号下载指定版本。 +执行以上命令会下载最新版 KubeKey (v2.3.0),您可以修改命令中的版本号下载指定版本。 {{}} @@ -170,12 +170,12 @@ chmod +x kk 创建包含默认配置的示例配置文件。以下以 Kubernetes v1.22.10 为例。 ```bash -./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.22.10 +./kk
create config --with-kubesphere v3.3.1 --with-kubernetes v1.22.10 ``` {{< notice note >}} -- 安装 KubeSphere 3.3.0 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。 +- 安装 KubeSphere 3.3 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../../installing-on-linux/introduction/kubekey/#支持矩阵)。 - 如果您在这一步的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere,只能使用配置文件中的 `addons` 字段安装,或者在您后续使用 `./kk create cluster` 命令时再次添加这个标志。 diff --git a/content/zh/docs/v3.3/introduction/advantages.md b/content/zh/docs/v3.3/introduction/advantages.md index af6609c60..ffaaa7d4d 100644 --- a/content/zh/docs/v3.3/introduction/advantages.md +++ b/content/zh/docs/v3.3/introduction/advantages.md @@ -30,7 +30,7 @@ KubeSphere 为企业用户提供高性能可伸缩的容器应用管理服务, **统一管理**:用户可以使用直接连接或间接连接导入 Kubernetes 集群。只需简单配置,即可在数分钟内在 KubeSphere 的互动式 Web 控制台上完成整个流程。集群导入后,用户可以通过统一的中央控制平面监控集群状态、运维集群资源。 -**高可用**:在多集群架构中,一个集群可以运行主要服务,于此同时由另一集群作为备用。一旦该主集群宕机,备用集群可以迅速接管相关服务。此外,当集群跨区域部署时,为最大限度地减少延迟,请求可以发送至距离最近的集群,由此实现跨区跨集群的高可用。 +**高可用**:在多集群架构中,一个集群可以运行主要服务,另一集群作为备用集群。一旦该主集群宕机,备用集群可以迅速接管相关服务。此外,当集群跨区域部署时,为最大限度地减少延迟,请求可以发送至距离最近的集群,由此实现跨区跨集群的高可用。 有关更多信息,请参见[多集群管理](../../multicluster-management/)。 @@ -64,7 +64,7 @@ KubeSphere 为用户提供不同级别的权限控制,包括集群、企业空 **自定义角色**:除了系统内置的角色外,KubeSphere 还支持自定义角色,用户可以给角色分配不同的权限以执行不同的操作,以满足企业对不同租户具体工作分配的要求,即可以定义每个租户所应该负责的部分,不被无关资源所影响。 -**安全**:由于不同级别的租户之前完全隔离,他们在贡献部分资源的同时也不会相互影响。租户之间的网络也完全隔离,确保数据安全。 +**安全**:由于不同级别的租户之间完全隔离,他们在共享部分资源的同时也不会相互影响。租户之间的网络也完全隔离,确保数据安全。 有关更多信息,请参见[企业空间](../../workspace-administration/role-and-member-management/)和[项目](../../project-administration/role-and-member-management/)中的角色和成员管理。 @@ -89,4 +89,4 @@ KubeSphere 社区具备充分的能力和技术知识,让大家能共享开源 **贡献者**:KubeSphere 贡献者通过贡献代码或文档等对整个社区进行贡献。就算您不是该领域的专家,无论是细微的代码修改或是语言改进,您的贡献也会帮助到整个社区。 
-有关更多信息,请参见[合作伙伴项目](https://kubesphere.io/partner/)和[社区治理](https://kubesphere.io/contribution/)。 +有关更多信息,请参见[合作伙伴项目](https://kubesphere.io/zh/partner/)和[社区治理](https://kubesphere.io/zh/contribution/)。 diff --git a/content/zh/docs/v3.3/introduction/architecture.md b/content/zh/docs/v3.3/introduction/architecture.md index 164e02d0b..5931b08ff 100644 --- a/content/zh/docs/v3.3/introduction/architecture.md +++ b/content/zh/docs/v3.3/introduction/architecture.md @@ -39,5 +39,3 @@ KubeSphere 将 [前端](https://github.com/kubesphere/console) 与 [后端](http ## 服务组件 以上列表中每个功能组件下还有多个服务组件,关于服务组件的说明,可参考 [服务组件说明](../../pluggable-components/)。 - -![Service Components](https://pek3b.qingstor.com/kubesphere-docs/png/20191017163549.png) diff --git a/content/zh/docs/v3.3/introduction/features.md b/content/zh/docs/v3.3/introduction/features.md index d3032c74d..dd803271b 100644 --- a/content/zh/docs/v3.3/introduction/features.md +++ b/content/zh/docs/v3.3/introduction/features.md @@ -29,7 +29,7 @@ KubeSphere 作为开源的企业级全栈化容器平台,为用户提供了一 对底层 Kubernetes 中的多种类型的资源提供可视化的展示与监控数据,以向导式 UI 实现工作负载管理、镜像管理、服务与应用路由管理 (服务发现)、密钥配置管理等,并提供弹性伸缩 (HPA) 和容器健康检查支持,支持数万规模的容器资源调度,保证业务在高峰并发情况下的高可用性。 -由于 KubeSphere 3.3.0 具有增强的可观测性,用户可以从多租户角度跟踪资源,例如自定义监视、事件、审核日志、告警通知。 +由于 KubeSphere 3.3 具有增强的可观测性,用户可以从多租户角度跟踪资源,例如自定义监视、事件、审核日志、告警通知。 ### 集群升级和扩展 @@ -170,4 +170,4 @@ KubeSphere 通过可视化界面操作监控、运维功能,可简化操作和 6. 通过 CRD 动态配置BGP服务器 (v0.3.0) 7. 
通过 CRD 动态配置BGP对等 (v0.3.0) - 有关 OpenELB 的更多信息,请参见[本文](https://kubesphere.io/conferences/porter/)。 + 有关 OpenELB 的更多信息,请参见[本文](https://kubesphere.io/zh/conferences/porter/)。 diff --git a/content/zh/docs/v3.3/introduction/what's-new-in-3.3.0.md b/content/zh/docs/v3.3/introduction/what's-new-in-3.3.0.md deleted file mode 100644 index 15d51d346..000000000 --- a/content/zh/docs/v3.3/introduction/what's-new-in-3.3.0.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: "3.3.0 重要更新" -keywords: 'Kubernetes, KubeSphere, 介绍' -description: '3.3.0 新增了对 “边缘计算” 场景的支持。同时在 3.2.x 的基础上新增了计量计费,让基础设施的运营成本更清晰,并进一步优化了在 “多云、多集群、多团队、多租户” 等应用场景下的使用体验' -linkTitle: "3.3.0 重要更新" -weight: 1400 ---- - -2022 年 6 月 24 日,KubeSphere 3.3.0 正式发布,带来了更多令人期待的功能。新增了基于 GitOps 的持续部署方案,进一步优化了 DevOps 的使用体验。同时还增强了 “多集群管理、多租户管理、可观测性、应用商店、微服务治理、边缘计算、存储” 等特性,更进一步完善交互设计,并全面提升了用户体验。 - -关于 3.3.0 新特性的详细解读,可参考博客 [KubeSphere 3.3.0 发布:全面拥抱 GitOps](/../../news/kubesphere-3.3.0-ga-announcement/)。 - -关于 3.3.0 的新功能及增强、Bug 修复、重要的技术调整,以及废弃或移除的功能,请参见 [3.3.0 版本说明](../../../v3.3/release/release-v330/)。 \ No newline at end of file diff --git a/content/zh/docs/v3.3/introduction/what's-new-in-3.3.md b/content/zh/docs/v3.3/introduction/what's-new-in-3.3.md new file mode 100644 index 000000000..17ef796d1 --- /dev/null +++ b/content/zh/docs/v3.3/introduction/what's-new-in-3.3.md @@ -0,0 +1,13 @@ +--- +title: "3.3 重要更新" +keywords: 'Kubernetes, KubeSphere, 介绍' +description: '3.3 新增了对 “边缘计算” 场景的支持。同时在 3.2.x 的基础上新增了计量计费,让基础设施的运营成本更清晰,并进一步优化了在 “多云、多集群、多团队、多租户” 等应用场景下的使用体验' +linkTitle: "3.3 重要更新" +weight: 1400 +--- + +2022 年 6 月 24 日,KubeSphere 3.3 正式发布,带来了更多令人期待的功能。新增了基于 GitOps 的持续部署方案,进一步优化了 DevOps 的使用体验。同时还增强了 “多集群管理、多租户管理、可观测性、应用商店、微服务治理、边缘计算、存储” 等特性,更进一步完善交互设计,并全面提升了用户体验。 + +关于 3.3 新特性的详细解读,可参考博客 [KubeSphere 3.3.0 发布:全面拥抱 GitOps](/../../news/kubesphere-3.3.0-ga-announcement/)。 + +关于 3.3 的新功能及增强、Bug 修复、重要的技术调整,以及废弃或移除的功能,请参见 [3.3 版本说明](../../../v3.3/release/release-v330/)。 \ No newline at end of file diff --git 
a/content/zh/docs/v3.3/pluggable-components/alerting.md b/content/zh/docs/v3.3/pluggable-components/alerting.md index d65590361..6861c4411 100644 --- a/content/zh/docs/v3.3/pluggable-components/alerting.md +++ b/content/zh/docs/v3.3/pluggable-components/alerting.md @@ -39,9 +39,9 @@ weight: 6600 ### 在 Kubernetes 上安装 -当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件中启用告警系统。 +当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件中启用告警系统。 -1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件并进行编辑。 +1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件并进行编辑。 ```bash vi cluster-configuration.yaml @@ -57,7 +57,7 @@ weight: 6600 3. 
执行以下命令开始安装: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` diff --git a/content/zh/docs/v3.3/pluggable-components/app-store.md b/content/zh/docs/v3.3/pluggable-components/app-store.md index e5c951fd3..153b7d32e 100644 --- a/content/zh/docs/v3.3/pluggable-components/app-store.md +++ b/content/zh/docs/v3.3/pluggable-components/app-store.md @@ -44,9 +44,9 @@ weight: 6200 ### 在 Kubernetes 上安装 -当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件中启用应用商店。 +当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件中启用应用商店。 -1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件,执行以下命令打开并编辑该文件。 +1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件,执行以下命令打开并编辑该文件。 ```bash vi cluster-configuration.yaml @@ -63,7 +63,7 @@ weight: 6200 3. 
执行以下命令开始安装: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` diff --git a/content/zh/docs/v3.3/pluggable-components/auditing-logs.md b/content/zh/docs/v3.3/pluggable-components/auditing-logs.md index b0cf450db..3e1190b7a 100644 --- a/content/zh/docs/v3.3/pluggable-components/auditing-logs.md +++ b/content/zh/docs/v3.3/pluggable-components/auditing-logs.md @@ -34,7 +34,7 @@ KubeSphere 审计日志系统提供了一套与安全相关并按时间顺序排 ``` {{< notice note >}} -默认情况下,如果启用了审计功能,KubeKey 将安装内置 Elasticsearch。对于生产环境,如果您想启用审计功能,强烈建议在 `config-sample.yaml` 中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。在安装前提供以下信息后,KubeKey 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 +默认情况下,如果启用了审计功能,KubeKey 将安装内置 Elasticsearch。对于生产环境,如果您想启用审计功能,强烈建议在 `config-sample.yaml` 中设置以下值,尤其是 `externalElasticsearchUrl` 和 `externalElasticsearchPort`。在安装前提供以下信息后,KubeKey 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 {{}} ```yaml @@ -45,7 +45,7 @@ KubeSphere 审计日志系统提供了一套与安全相关并按时间顺序排 elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchHost: # The Host of external Elasticsearch. + externalElasticsearchUrl: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. 
``` @@ -57,9 +57,9 @@ KubeSphere 审计日志系统提供了一套与安全相关并按时间顺序排 ### 在 Kubernetes 上安装 -当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件中启用审计功能。 +当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件中启用审计功能。 -1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件,执行以下命令打开并编辑该文件: +1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件,执行以下命令打开并编辑该文件: ```bash vi cluster-configuration.yaml @@ -73,7 +73,7 @@ KubeSphere 审计日志系统提供了一套与安全相关并按时间顺序排 ``` {{< notice note >}} -默认情况下,如果启用了审计功能,ks-installer 会安装内置 Elasticsearch。对于生产环境,如果您想启用审计功能,强烈建议在 `cluster-configuration.yaml` 中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。在安装前提供以下信息后,ks-installer 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 +默认情况下,如果启用了审计功能,ks-installer 会安装内置 Elasticsearch。对于生产环境,如果您想启用审计功能,强烈建议在 `cluster-configuration.yaml` 中设置以下值,尤其是 `externalElasticsearchUrl` 和 `externalElasticsearchPort`。在安装前提供以下信息后,ks-installer 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 {{}} ```yaml @@ -84,14 +84,14 @@ KubeSphere 审计日志系统提供了一套与安全相关并按时间顺序排 elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchHost: # The Host of external Elasticsearch. + externalElasticsearchUrl: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` 3. 
执行以下命令开始安装: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` @@ -116,7 +116,7 @@ KubeSphere 审计日志系统提供了一套与安全相关并按时间顺序排 ``` {{< notice note >}} -默认情况下,如果启用了审计功能,将安装内置 Elasticsearch。对于生产环境,如果您想启用审计功能,强烈建议在该 YAML 文件中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。提供以下信息后,KubeSphere 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 +默认情况下,如果启用了审计功能,将安装内置 Elasticsearch。对于生产环境,如果您想启用审计功能,强烈建议在该 YAML 文件中设置以下值,尤其是 `externalElasticsearchUrl` 和 `externalElasticsearchPort`。提供以下信息后,KubeSphere 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 {{}} ```yaml @@ -127,7 +127,7 @@ KubeSphere 审计日志系统提供了一套与安全相关并按时间顺序排 elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchHost: # The Host of external Elasticsearch. + externalElasticsearchUrl: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. 
``` diff --git a/content/zh/docs/v3.3/pluggable-components/devops.md b/content/zh/docs/v3.3/pluggable-components/devops.md index ab4837f23..9c3648643 100644 --- a/content/zh/docs/v3.3/pluggable-components/devops.md +++ b/content/zh/docs/v3.3/pluggable-components/devops.md @@ -43,9 +43,9 @@ DevOps 系统为用户提供了一个自动化的环境,应用可以自动发 ### 在 Kubernetes 上安装 -当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件中启用 DevOps。 +当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件中启用 DevOps。 -1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件,执行以下命令打开并编辑该文件: +1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件,执行以下命令打开并编辑该文件: ```bash vi cluster-configuration.yaml @@ -61,7 +61,7 @@ DevOps 系统为用户提供了一个自动化的环境,应用可以自动发 3. 
执行以下命令开始安装: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` diff --git a/content/zh/docs/v3.3/pluggable-components/events.md b/content/zh/docs/v3.3/pluggable-components/events.md index 992e03f45..025ccb9c4 100644 --- a/content/zh/docs/v3.3/pluggable-components/events.md +++ b/content/zh/docs/v3.3/pluggable-components/events.md @@ -36,7 +36,7 @@ KubeSphere 事件系统使用户能够跟踪集群内部发生的事件,例如 ``` {{< notice note >}} -默认情况下,如果启用了事件系统,KubeKey 将安装内置 Elasticsearch。对于生产环境,如果您想启用事件系统,强烈建议在 `config-sample.yaml` 中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。在安装前提供以下信息后,KubeKey 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 +默认情况下,如果启用了事件系统,KubeKey 将安装内置 Elasticsearch。对于生产环境,如果您想启用事件系统,强烈建议在 `config-sample.yaml` 中设置以下值,尤其是 `externalElasticsearchUrl` 和 `externalElasticsearchPort`。在安装前提供以下信息后,KubeKey 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 {{}} ```yaml @@ -47,7 +47,7 @@ KubeSphere 事件系统使用户能够跟踪集群内部发生的事件,例如 elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchHost: # The Host of external Elasticsearch. + externalElasticsearchUrl: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. 
``` @@ -59,9 +59,9 @@ KubeSphere 事件系统使用户能够跟踪集群内部发生的事件,例如 ### 在 Kubernetes 上安装 -当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件中启用事件系统。 +当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件中启用事件系统。 -1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件,然后打开并开始编辑。 +1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件,然后打开并开始编辑。 ```bash vi cluster-configuration.yaml @@ -75,7 +75,7 @@ KubeSphere 事件系统使用户能够跟踪集群内部发生的事件,例如 ``` {{< notice note >}} -对于生产环境,如果您想启用事件系统,强烈建议在 `cluster-configuration.yaml` 中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。在安装前提供以下信息后,ks-installer 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 +对于生产环境,如果您想启用事件系统,强烈建议在 `cluster-configuration.yaml` 中设置以下值,尤其是 `externalElasticsearchUrl` 和 `externalElasticsearchPort`。在安装前提供以下信息后,ks-installer 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 {{}} ```yaml @@ -86,14 +86,14 @@ KubeSphere 事件系统使用户能够跟踪集群内部发生的事件,例如 elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchHost: # The Host of external Elasticsearch. + externalElasticsearchUrl: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` 3. 
执行以下命令开始安装: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` @@ -121,7 +121,7 @@ KubeSphere 事件系统使用户能够跟踪集群内部发生的事件,例如 {{< notice note >}} -默认情况下,如果启用了事件系统,将会安装内置 Elasticsearch。对于生产环境,如果您想启用事件系统,强烈建议在该 YAML 文件中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。在文件中提供以下信息后,KubeSphere 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 +默认情况下,如果启用了事件系统,将会安装内置 Elasticsearch。对于生产环境,如果您想启用事件系统,强烈建议在该 YAML 文件中设置以下值,尤其是 `externalElasticsearchUrl` 和 `externalElasticsearchPort`。在文件中提供以下信息后,KubeSphere 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 {{}} ```yaml @@ -132,7 +132,7 @@ KubeSphere 事件系统使用户能够跟踪集群内部发生的事件,例如 elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchHost: # The Host of external Elasticsearch. + externalElasticsearchUrl: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` diff --git a/content/zh/docs/v3.3/pluggable-components/kubeedge.md b/content/zh/docs/v3.3/pluggable-components/kubeedge.md index 03cfee11e..a0a90c3cd 100644 --- a/content/zh/docs/v3.3/pluggable-components/kubeedge.md +++ b/content/zh/docs/v3.3/pluggable-components/kubeedge.md @@ -35,21 +35,21 @@ KubeEdge 的组件在两个单独的位置运行——云上和边缘节点上 ```yaml edgeruntime: # Add edge nodes to your cluster and deploy workloads on edge nodes. - enabled: false - kubeedge: # kubeedge configurations - enabled: false - cloudCore: - cloudHub: - advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided. 
+ enabled: false + kubeedge: # kubeedge configurations + enabled: false + cloudCore: + cloudHub: + advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided. - "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided. - service: - cloudhubNodePort: "30000" - cloudhubQuicNodePort: "30001" - cloudhubHttpsNodePort: "30002" - cloudstreamNodePort: "30003" - tunnelNodePort: "30004" - # resources: {} - # hostNetWork: false + service: + cloudhubNodePort: "30000" + cloudhubQuicNodePort: "30001" + cloudhubHttpsNodePort: "30002" + cloudstreamNodePort: "30003" + tunnelNodePort: "30004" + # resources: {} + # hostNetWork: false ``` 3. 将 `kubeedge.cloudCore.cloudHub.advertiseAddress` 的值设置为集群的公共 IP 地址或边缘节点可以访问的 IP 地址。编辑完成后保存文件。 @@ -62,13 +62,9 @@ KubeEdge 的组件在两个单独的位置运行——云上和边缘节点上 ### 在 Kubernetes 上安装 -当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件中启用 KubeEdge。 +当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件中启用 KubeEdge。 -{{< notice note >}} -为了避免兼容性问题,建议安装 Kubernetes v1.21.x 及其以下版本。 -{{}} - -1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件并进行编辑。 +1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件并进行编辑。 ```bash vi cluster-configuration.yaml @@ -76,31 +72,31 @@ KubeEdge 的组件在两个单独的位置运行——云上和边缘节点上 2. 
在本地 `cluster-configuration.yaml` 文件中,搜索 `edgeruntime` 和 `kubeedge`,然后将它们 `enabled` 值从 `false` 更改为 `true` 以便开启所有 KubeEdge 组件。完成后保存文件。 - ```yaml + ```yaml edgeruntime: # Add edge nodes to your cluster and deploy workloads on edge nodes. - enabled: false - kubeedge: # kubeedge configurations - enabled: false - cloudCore: - cloudHub: - advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided. + enabled: false + kubeedge: # kubeedge configurations + enabled: false + cloudCore: + cloudHub: + advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided. - "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided. - service: - cloudhubNodePort: "30000" - cloudhubQuicNodePort: "30001" - cloudhubHttpsNodePort: "30002" - cloudstreamNodePort: "30003" - tunnelNodePort: "30004" - # resources: {} - # hostNetWork: false - ``` + service: + cloudhubNodePort: "30000" + cloudhubQuicNodePort: "30001" + cloudhubHttpsNodePort: "30002" + cloudstreamNodePort: "30003" + tunnelNodePort: "30004" + # resources: {} + # hostNetWork: false + ``` 3. 将 `kubeedge.cloudCore.cloudHub.advertiseAddress` 的值设置为集群的公共 IP 地址或边缘节点可以访问的 IP 地址。 4. 执行以下命令开始安装: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` @@ -119,24 +115,24 @@ KubeEdge 的组件在两个单独的位置运行——云上和边缘节点上 4. 在该配置文件中,搜索 `edgeruntime` 和 `kubeedge`,然后将它们 `enabled` 值从 `false` 更改为 `true` 以便开启所有 KubeEdge 组件。完成后保存文件。 - ```yaml + ```yaml edgeruntime: # Add edge nodes to your cluster and deploy workloads on edge nodes. 
- enabled: false - kubeedge: # kubeedge configurations - enabled: false - cloudCore: - cloudHub: - advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided. + enabled: false + kubeedge: # kubeedge configurations + enabled: false + cloudCore: + cloudHub: + advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided. - "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided. - service: - cloudhubNodePort: "30000" - cloudhubQuicNodePort: "30001" - cloudhubHttpsNodePort: "30002" - cloudstreamNodePort: "30003" - tunnelNodePort: "30004" - # resources: {} - # hostNetWork: false - ``` + service: + cloudhubNodePort: "30000" + cloudhubQuicNodePort: "30001" + cloudhubHttpsNodePort: "30002" + cloudstreamNodePort: "30003" + tunnelNodePort: "30004" + # resources: {} + # hostNetWork: false + ``` 5. 将 `kubeedge.cloudCore.cloudHub.advertiseAddress` 的值设置为集群的公共 IP 地址或边缘节点可以访问的 IP 地址。完成后,点击右下角的**确定**保存配置。 diff --git a/content/zh/docs/v3.3/pluggable-components/logging.md b/content/zh/docs/v3.3/pluggable-components/logging.md index daf320129..07f0171c1 100644 --- a/content/zh/docs/v3.3/pluggable-components/logging.md +++ b/content/zh/docs/v3.3/pluggable-components/logging.md @@ -35,9 +35,14 @@ KubeSphere 为日志收集、查询和管理提供了一个强大的、全面的 ```yaml logging: enabled: true # 将“false”更改为“true”。 + containerruntime: docker ``` - {{< notice note >}}默认情况下,如果启用了日志系统,KubeKey 将安装内置 Elasticsearch。对于生产环境,如果您想启用日志系统,强烈建议在 `config-sample.yaml` 中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。在安装前提供以下信息后,KubeKey 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 + {{< notice info >}}若使用 containerd 作为容器运行时,请将 `containerruntime` 字段的值更改为 `containerd`。如果您从低版本升级至 KubeSphere 3.3,则启用 KubeSphere 日志系统时必须在 `logging` 字段下手动添加 `containerruntime` 字段。 + + {{}} + + {{< notice note >}}默认情况下,如果启用了日志系统,KubeKey 将安装内置 Elasticsearch。对于生产环境,如果您想启用日志系统,强烈建议在 
`config-sample.yaml` 中设置以下值,尤其是 `externalElasticsearchUrl` 和 `externalElasticsearchPort`。在安装前提供以下信息后,KubeKey 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 {{}} ```yaml @@ -48,7 +53,7 @@ KubeSphere 为日志收集、查询和管理提供了一个强大的、全面的 elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchHost: # The Host of external Elasticsearch. + externalElasticsearchUrl: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` @@ -60,9 +65,9 @@ KubeSphere 为日志收集、查询和管理提供了一个强大的、全面的 ### 在 Kubernetes 上安装 -当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件中启用日志系统。 +当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件中启用日志系统。 -1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件,然后打开并开始编辑。 +1. 
下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件,然后打开并开始编辑。 ```bash vi cluster-configuration.yaml @@ -73,9 +78,14 @@ KubeSphere 为日志收集、查询和管理提供了一个强大的、全面的 ```yaml logging: enabled: true # 将“false”更改为“true”。 + containerruntime: docker ``` - {{< notice note >}}默认情况下,如果启用了日志系统,ks-installer 将安装内置 Elasticsearch。对于生产环境,如果您想启用日志系统,强烈建议在 `cluster-configuration.yaml` 中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。在安装前提供以下信息后,ks-installer 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 + {{< notice info >}}若使用 containerd 作为容器运行时,请将 `.logging.containerruntime` 字段的值更改为 `containerd`。如果您从低版本升级至 KubeSphere 3.3,则启用 KubeSphere 日志系统时必须在 `logging` 字段下手动添加 `containerruntime` 字段。 + + {{}} + + {{< notice note >}}默认情况下,如果启用了日志系统,ks-installer 将安装内置 Elasticsearch。对于生产环境,如果您想启用日志系统,强烈建议在 `cluster-configuration.yaml` 中设置以下值,尤其是 `externalElasticsearchUrl` 和 `externalElasticsearchPort`。在安装前提供以下信息后,ks-installer 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 {{}} ```yaml @@ -86,14 +96,14 @@ KubeSphere 为日志收集、查询和管理提供了一个强大的、全面的 elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchHost: # The Host of external Elasticsearch. + externalElasticsearchUrl: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` 3. 
执行以下命令开始安装: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` @@ -117,9 +127,14 @@ KubeSphere 为日志收集、查询和管理提供了一个强大的、全面的 ```yaml logging: enabled: true # 将“false”更改为“true”。 + containerruntime: docker ``` - {{< notice note >}}默认情况下,如果启用了日志系统,将会安装内置 Elasticsearch。对于生产环境,如果您想启用日志系统,强烈建议在该 YAML 文件中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。在文件中提供以下信息后,KubeSphere 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 + {{< notice info >}}若使用 containerd 作为容器运行时,请将 `.logging.containerruntime` 字段的值更改为 `containerd`。如果您从低版本升级至 KubeSphere 3.3,则启用 KubeSphere 日志系统时必须在 `logging` 字段下手动添加 `containerruntime` 字段。 + + {{}} + + {{< notice note >}}默认情况下,如果启用了日志系统,将会安装内置 Elasticsearch。对于生产环境,如果您想启用日志系统,强烈建议在该 YAML 文件中设置以下值,尤其是 `externalElasticsearchUrl` 和 `externalElasticsearchPort`。在文件中提供以下信息后,KubeSphere 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 {{}} ```yaml @@ -130,7 +145,7 @@ KubeSphere 为日志收集、查询和管理提供了一个强大的、全面的 elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchHost: # The Host of external Elasticsearch. + externalElasticsearchUrl: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. 
```
diff --git a/content/zh/docs/v3.3/pluggable-components/metrics-server.md b/content/zh/docs/v3.3/pluggable-components/metrics-server.md
index a3b855047..1d48a3c40 100644
--- a/content/zh/docs/v3.3/pluggable-components/metrics-server.md
+++ b/content/zh/docs/v3.3/pluggable-components/metrics-server.md
@@ -39,9 +39,9 @@ KubeSphere 支持用于[部署](../../project-user-guide/application-workloads/d
 ### 在 Kubernetes 上安装
 
-当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件中先启用 Metrics Server组件。
+当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件中启用 Metrics Server 组件。
 
-1. 下载文件 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml),并打开文件进行编辑。
+1. 下载文件 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml),并打开文件进行编辑。
 
    ```bash
    vi cluster-configuration.yaml
@@ -57,7 +57,7 @@ KubeSphere 支持用于[部署](../../project-user-guide/application-workloads/d
 
 3. 
执行以下命令以开始安装: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` diff --git a/content/zh/docs/v3.3/pluggable-components/network-policy.md b/content/zh/docs/v3.3/pluggable-components/network-policy.md index 3fddec929..a7f9dfb9b 100644 --- a/content/zh/docs/v3.3/pluggable-components/network-policy.md +++ b/content/zh/docs/v3.3/pluggable-components/network-policy.md @@ -49,9 +49,9 @@ weight: 6900 ### 在 Kubernetes 上安装 -当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件中启用网络策略。 +当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件中启用网络策略。 -1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件,然后打开并开始编辑。 +1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件,然后打开并开始编辑。 ```bash vi cluster-configuration.yaml @@ -68,7 +68,7 @@ weight: 6900 3. 
执行以下命令开始安装: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` diff --git a/content/zh/docs/v3.3/pluggable-components/pod-ip-pools.md b/content/zh/docs/v3.3/pluggable-components/pod-ip-pools.md index 97cf14f54..7fa7853f4 100644 --- a/content/zh/docs/v3.3/pluggable-components/pod-ip-pools.md +++ b/content/zh/docs/v3.3/pluggable-components/pod-ip-pools.md @@ -41,9 +41,9 @@ weight: 6920 ### 在 Kubernetes 上安装 -当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要现在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件中启用容器组 IP 池。 +当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要现在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件中启用容器组 IP 池。 -1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件并进行编辑。 +1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件并进行编辑。 ```bash vi cluster-configuration.yaml @@ -60,7 +60,7 @@ weight: 6920 3. 
执行以下命令开始安装。 ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` diff --git a/content/zh/docs/v3.3/pluggable-components/service-mesh.md b/content/zh/docs/v3.3/pluggable-components/service-mesh.md index 2fad33b4a..965912689 100644 --- a/content/zh/docs/v3.3/pluggable-components/service-mesh.md +++ b/content/zh/docs/v3.3/pluggable-components/service-mesh.md @@ -53,9 +53,9 @@ KubeSphere 服务网格基于 [Istio](https://istio.io/),将微服务治理和 ### 在 Kubernetes 上安装 -当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件中启用服务网格。 +当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件中启用服务网格。 -1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件,执行以下命令打开并编辑该文件: +1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件,执行以下命令打开并编辑该文件: ```bash vi cluster-configuration.yaml @@ -78,7 +78,7 @@ KubeSphere 服务网格基于 [Istio](https://istio.io/),将微服务治理和 3. 
执行以下命令开始安装: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` diff --git a/content/zh/docs/v3.3/pluggable-components/service-topology.md b/content/zh/docs/v3.3/pluggable-components/service-topology.md index a89b159b2..17e68bafa 100644 --- a/content/zh/docs/v3.3/pluggable-components/service-topology.md +++ b/content/zh/docs/v3.3/pluggable-components/service-topology.md @@ -41,9 +41,9 @@ weight: 6915 ### 在 Kubernetes 上安装 -当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在[cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件中启用服务拓扑图。 +当您[在 Kubernetes 上安装 KubeSphere](../../installing-on-kubernetes/introduction/overview/) 时,需要先在[cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件中启用服务拓扑图。 -1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) 文件并进行编辑。 +1. 下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件并进行编辑。 ```bash vi cluster-configuration.yaml @@ -60,7 +60,7 @@ weight: 6915 3. 
执行以下命令开始安装: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` diff --git a/content/zh/docs/v3.3/pluggable-components/uninstall-pluggable-components.md b/content/zh/docs/v3.3/pluggable-components/uninstall-pluggable-components.md index 531bcec0d..9d64fdd2c 100644 --- a/content/zh/docs/v3.3/pluggable-components/uninstall-pluggable-components.md +++ b/content/zh/docs/v3.3/pluggable-components/uninstall-pluggable-components.md @@ -7,14 +7,6 @@ Weight: 6940 --- [启用 KubeSphere 可插拔组件之后](../../pluggable-components/),还可以根据以下步骤卸载他们。请在卸载这些组件之前,备份所有重要数据。 - -{{< notice note >}} - -KubeSphere 3.3.0 卸载某些可插拔组件的方法与 KubeSphere v3.0.0 不相同。有关 KubeSphere v3.0.0 卸载可插拔组件的详细方法,请参阅从 KubeSphere 上卸载可插拔组件](https://v3-0.docs.kubesphere.io/zh/docs/faq/installation/uninstall-pluggable-components/)。 - - -{{}} - ## 准备工作 在卸载除服务拓扑图和容器组 IP 池之外的可插拔组件之前,必须将 CRD 配置文件 `ClusterConfiguration` 中的 `ks-installer` 中的 `enabled` 字段的值从 `true` 改为 `false`。 @@ -129,7 +121,7 @@ kubectl -n kubesphere-system edit clusterconfiguration ks-installer {{< notice note >}} - KubeSphere 3.3.0 通知系统为默认安装,您无需卸载。 + KubeSphere 3.3 通知系统为默认安装,您无需卸载。 {{}} diff --git a/content/zh/docs/v3.3/project-user-guide/application-workloads/routes.md b/content/zh/docs/v3.3/project-user-guide/application-workloads/routes.md index b2e269e28..28d429058 100644 --- a/content/zh/docs/v3.3/project-user-guide/application-workloads/routes.md +++ b/content/zh/docs/v3.3/project-user-guide/application-workloads/routes.md @@ -48,12 +48,15 @@ KubeSphere 上的应用路由和 Kubernetes 上的 [Ingress](https://kubernetes. 2. 
选择一种模式来配置路由规则,点击 **√**,然后点击**下一步**。 - **域名**:指定自定义域名。 - - **协议**:选择 `http` 或 `https`。如果选择了 `https`,则需要选择包含 `tls.crt`(TLS 证书)和 `tls.key`(TLS 私钥)的保密字典用于加密。 - - **路径**:将每个服务映射到一条路径。输入路径名,并选择服务和端口。您也可以点击**添加**来添加多条路径。 - + * **自动生成**:KubeSphere 自动以`<服务名称>.<项目名称>.<网关地址>.nip.io` 格式生成域名,该域名由 [nip.io](https://nip.io/) 自动解析为网关地址。该模式仅支持 HTTP。 + + * **路径**:将每个服务映射到一条路径。您可以点击**添加**来添加多条路径。 + + * **指定域名**:使用用户定义的域名。此模式同时支持 HTTP 和 HTTPS。 + + * **域名**:为应用路由设置域名。 + * **协议**:选择 `http` 或 `https`。如果选择了 `https`,则需要选择包含 `tls.crt`(TLS 证书)和 `tls.key`(TLS 私钥)的密钥用于加密。 + * **路径**:将每个服务映射到一条路径。您可以点击**添加**来添加多条路径。 ### (可选)步骤 3:配置高级设置 diff --git a/content/zh/docs/v3.3/project-user-guide/application-workloads/services.md b/content/zh/docs/v3.3/project-user-guide/application-workloads/services.md index ccaa986ae..71f37690a 100644 --- a/content/zh/docs/v3.3/project-user-guide/application-workloads/services.md +++ b/content/zh/docs/v3.3/project-user-guide/application-workloads/services.md @@ -161,7 +161,7 @@ KubeSphere 提供三种创建服务的基本方法:**无状态服务**、**有 1. 创建服务后,您可以点击右侧的 icon 进一步编辑它,例如元数据(**名称**无法编辑)、配置文件、端口以及外部访问。 - - **编辑**:查看和编辑基本信息。 + - **编辑信息**:查看和编辑基本信息。 - **编辑 YAML**:查看、上传、下载或者更新 YAML 文件。 - **编辑服务**:查看访问类型并设置选择器和端口。 - **编辑外部访问**:编辑服务的外部访问方法。 diff --git a/content/zh/docs/v3.3/project-user-guide/application/compose-app.md b/content/zh/docs/v3.3/project-user-guide/application/compose-app.md index 0c7c75d17..01709fc9a 100644 --- a/content/zh/docs/v3.3/project-user-guide/application/compose-app.md +++ b/content/zh/docs/v3.3/project-user-guide/application/compose-app.md @@ -21,7 +21,7 @@ weight: 10140 2. 设置应用名称(例如 `bookinfo`)并点击**下一步**。 -3. 在**服务**页面,您需要构建自制应用的微服务。点击**创建服务**,选择**无状态服务**。 +3. 在**服务设置**页面,您需要构建自制应用的微服务。点击**创建服务**,选择**无状态服务**。 4. 
设置服务名称(例如 `productpage`)并点击**下一步**。 diff --git a/content/zh/docs/v3.3/project-user-guide/application/deploy-app-from-appstore.md b/content/zh/docs/v3.3/project-user-guide/application/deploy-app-from-appstore.md index 9a8911810..3fe6a97b0 100644 --- a/content/zh/docs/v3.3/project-user-guide/application/deploy-app-from-appstore.md +++ b/content/zh/docs/v3.3/project-user-guide/application/deploy-app-from-appstore.md @@ -27,7 +27,7 @@ weight: 10130 {{}} -2. 找到并点击 NGINX,在**应用信息**页面点击**安装**。请确保在**应用部署须知**对话框中点击**确认**。 +2. 找到并点击 NGINX,在**应用信息**页面点击**安装**。请确保在**安装须知**对话框中点击**同意**。 3. 设置应用的名称和版本,确保 NGINX 部署的位置,点击**下一步**。 diff --git a/content/zh/docs/v3.3/project-user-guide/custom-application-monitoring/visualization/overview.md b/content/zh/docs/v3.3/project-user-guide/custom-application-monitoring/visualization/overview.md index 4febe95bc..b9def6f17 100644 --- a/content/zh/docs/v3.3/project-user-guide/custom-application-monitoring/visualization/overview.md +++ b/content/zh/docs/v3.3/project-user-guide/custom-application-monitoring/visualization/overview.md @@ -12,7 +12,7 @@ weight: 10815 您可以在项目的**监控告警**下的**自定义监控**页面为应用指标创建监控面板。共有三种方式可创建监控面板:使用内置模板创建、使用空白模板进行自定义或者使用 YAML 文件创建。 -内置模板共有三种,可分别用于 MySQL、Elasticsearch 和 Redis。这些模板仅供演示使用,并会根据 KubeSphere 新版本的发布同步更新。此外,您还可以创建自定义监控面板。 +内置模板包括 MySQL、Elasticsearch、Redis等。这些模板仅供演示使用,并会根据 KubeSphere 新版本的发布同步更新。此外,您还可以创建自定义监控面板。 KubeSphere 自定义监控面板可以视作为一个 YAML 配置文件。数据模型主要基于 [Grafana](https://github.com/grafana/grafana)(一个用于监控和可观测性的开源工具)创建,您可以在 [kubesphere/monitoring-dashboard](https://github.com/kubesphere/monitoring-dashboard) 中找到 KubeSphere 监控面板数据模型的设计。该配置文件便捷,可进行分享,欢迎您通过 [Monitoring Dashboards Gallery](https://github.com/kubesphere/monitoring-dashboard/tree/master/contrib/gallery) 对 KubeSphere 社区贡献面板模板。 diff --git a/content/zh/docs/v3.3/project-user-guide/image-builder/s2i-and-b2i-webhooks.md b/content/zh/docs/v3.3/project-user-guide/image-builder/s2i-and-b2i-webhooks.md index 665476484..b35e6c0fe 100644 --- 
a/content/zh/docs/v3.3/project-user-guide/image-builder/s2i-and-b2i-webhooks.md +++ b/content/zh/docs/v3.3/project-user-guide/image-builder/s2i-and-b2i-webhooks.md @@ -7,7 +7,7 @@ weight: 10650 --- -KubeSphere 提供 Source-to-Image (S2I) 和 Binary-to-Image (B2I) 功能,以自动化镜像构建、推送和应用程序部署。在 KubeSphere v3.3.0 以及后续版本中,您可以配置 S2I 和 B2I Webhook,以便当代码仓库中存在任何相关活动时,自动触发镜像构建器。 +KubeSphere 提供 Source-to-Image (S2I) 和 Binary-to-Image (B2I) 功能,以自动化镜像构建、推送和应用程序部署。在 KubeSphere 3.3 中,您可以配置 S2I 和 B2I Webhook,以便当代码仓库中存在任何相关活动时,自动触发镜像构建器。 本教程演示如何配置 S2I 和 B2I webhooks。 diff --git a/content/zh/docs/v3.3/quick-start/all-in-one-on-linux.md b/content/zh/docs/v3.3/quick-start/all-in-one-on-linux.md index c4d12c62d..4e751ea55 100644 --- a/content/zh/docs/v3.3/quick-start/all-in-one-on-linux.md +++ b/content/zh/docs/v3.3/quick-start/all-in-one-on-linux.md @@ -145,7 +145,7 @@ KubeKey 是用 Go 语言开发的一款全新的安装工具,代替了以前 从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或直接使用以下命令。 ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -161,7 +161,7 @@ export KKZONE=cn 执行以下命令下载 KubeKey。 ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -176,7 +176,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - {{< notice note >}} -执行以上命令会下载最新版 KubeKey (v2.2.2),您可以修改命令中的版本号下载指定版本。 +执行以上命令会下载最新版 KubeKey (v2.3.0),您可以修改命令中的版本号下载指定版本。 {{}} @@ -197,12 +197,12 @@ chmod +x kk 若要同时安装 Kubernetes 和 KubeSphere,可参考以下示例命令: ```bash -./kk create cluster --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 +./kk create cluster --with-kubernetes v1.22.10 --with-kubesphere v3.3.1 ``` {{< notice note >}} -- 安装 KubeSphere 3.3.0 的建议 Kubernetes 版本:1.19.x、1.20.x、1.21.x、v1.22.x 和 v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 
版本的更多信息,请参见[支持矩阵](../../installing-on-linux/introduction/kubekey/#支持矩阵)。 +- 安装 KubeSphere 3.3 的建议 Kubernetes 版本:1.19.x、1.20.x、1.21.x、v1.22.x 和 v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../installing-on-linux/introduction/kubekey/#支持矩阵)。 - 一般来说,对于 All-in-One 安装,您无需更改任何配置。 - 如果您在这一步的命令中不添加标志 `--with-kubesphere`,则不会部署 KubeSphere,KubeKey 将只安装 Kubernetes。如果您添加标志 `--with-kubesphere` 时不指定 KubeSphere 版本,则会安装最新版本的 KubeSphere。 diff --git a/content/zh/docs/v3.3/quick-start/create-workspace-and-project.md b/content/zh/docs/v3.3/quick-start/create-workspace-and-project.md index 3606145a0..acfbbd8f4 100644 --- a/content/zh/docs/v3.3/quick-start/create-workspace-and-project.md +++ b/content/zh/docs/v3.3/quick-start/create-workspace-and-project.md @@ -24,7 +24,7 @@ KubeSphere 的多租户系统分**三个**层级,即集群、企业空间和 ### 步骤 1:创建用户 -安装 KubeSphere 之后,您需要向平台添加具有不同角色的用户,以便他们可以针对自己授权的资源在不同的层级进行工作。一开始,系统默认只有一个用户 `admin`,具有 `platform-admin` 角色。在本步骤中,您将创建一个示例用户 `user-manager`,然后使用 `user-manager` 创建新用户。 +安装 KubeSphere 之后,您需要向平台添加具有不同角色的用户,以便他们可以针对自己授权的资源在不同的层级进行工作。一开始,系统默认只有一个用户 `admin`,具有 `platform-admin` 角色。在本步骤中,您将创建一个示例用户 `ws-manager`。 1. 以 `admin` 身份使用默认帐户和密码 (`admin/P@88w0rd`) 登录 Web 控制台。 @@ -32,7 +32,7 @@ KubeSphere 的多租户系统分**三个**层级,即集群、企业空间和 出于安全考虑,强烈建议您在首次登录控制台时更改密码。若要更改密码,在右上角的下拉列表中选择**用户设置**,在**密码设置**中设置新密码,您也可以在**用户设置** > **基本信息**中修改控制台语言。 {{}} -2. 点击左上角的**平台管理**,然后选择**访问控制**。在左侧导航栏中,选择**平台角色**。四个内置角色的描述信息如下表所示。 +2. 点击左上角的**平台管理**,然后选择**访问控制**。在左侧导航栏中,选择**平台角色**。内置角色的描述信息如下表所示。 @@ -41,14 +41,10 @@ KubeSphere 的多租户系统分**三个**层级,即集群、企业空间和 - - + + - - - - @@ -64,11 +60,15 @@ KubeSphere 的多租户系统分**三个**层级,即集群、企业空间和 内置角色由 KubeSphere 自动创建,无法编辑或删除。 {{}} -3. 在**用户**中,点击**创建**。在弹出的对话框中,提供所有必要信息(带有*标记),然后在**平台角色**一栏选择 `users-manager`。 +3. 在**用户**中,点击**创建**。在弹出的对话框中,提供所有必要信息(带有*标记)。在**平台角色**下拉列表,选择**platform-self-provisioner**。 - 完成后,点击**确定**。新创建的用户将显示在**用户**页面。。 + 完成后,点击**确定**。新创建的用户将显示在**用户**页面。 -4. 
切换用户使用 `user-manager` 重新登录,创建如下四个新用户,这些用户将在其他的教程中使用。 + {{< notice note >}} + 如果您在此处未指定**平台角色**,该用户将无法执行任何操作。您需要在创建企业空间后,将该用户邀请至企业空间。 + {{}} + +4. 重复以上的步骤创建新用户,这些用户将在其他的教程中使用。 {{< notice tip >}} - 帐户登出请点击右上角的用户名,然后选择**登出**。 @@ -82,10 +82,6 @@ KubeSphere 的多租户系统分**三个**层级,即集群、企业空间和 - - - - @@ -103,7 +99,7 @@ KubeSphere 的多租户系统分**三个**层级,即集群、企业空间和
      <th>描述</th>
    </tr>
    <tr>
-      <td>workspaces-manager</td>
-      <td>企业空间管理员，管理平台所有企业空间。</td>
+      <td>platform-self-provisioner</td>
+      <td>创建企业空间并成为所创建企业空间的管理员。</td>
    </tr>
-    <tr>
-      <td>users-manager</td>
-      <td>用户管理员，管理平台所有用户。</td>
-    </tr>
    <tr>
      <td>platform-regular</td>
      <td>平台普通用户，在被邀请加入企业空间或集群之前没有任何资源操作权限。</td>

+      <th>指定的平台角色</th>
+      <th>用户权限</th>
-      <td>ws-manager</td>
-      <td>workspaces-manager</td>
-      <td>创建和管理所有企业空间。</td>
-    </tr>
-    <tr>
      <td>ws-admin</td>
-5. 在**用户**页面,查看创建的四个用户。 +5. 在**用户**页面,查看创建的用户。 {{< notice note >}} @@ -112,11 +108,11 @@ KubeSphere 的多租户系统分**三个**层级,即集群、企业空间和 {{}} ### 步骤 2:创建企业空间 -在本步骤中,您需要使用上一个步骤中创建的用户 `ws-manager` 创建一个企业空间。作为管理项目、DevOps 项目和组织成员的基本逻辑单元,企业空间是 KubeSphere 多租户系统的基础。 +作为管理项目、DevOps 项目和组织成员的基本逻辑单元,企业空间是 KubeSphere 多租户系统的基础。 -1. 以 `ws-manager` 身份登录 KubeSphere。点击左上角的**平台管理**,选择**访问控制**。在**企业空间**中,可以看到仅列出了一个默认企业空间 `system-workspace`,即系统企业空间,其中运行着与系统相关的组件和服务,您无法删除该企业空间。 +1. 在左侧导航栏,选择**企业空间**。企业空间列表中已列出默认企业空间 **system-workspace**,该企业空间包含所有系统项目。其中运行着与系统相关的组件和服务,您无法删除该企业空间。 -2. 点击右侧的**创建**,将新企业空间命名为 `demo-workspace`,并将用户 `ws-admin` 设置为企业空间管理员。完成后,点击**创建**。 +2. 在企业空间列表页面,点击**创建**,输入企业空间的名称(例如 **demo-workspace**),并将用户 `ws-admin` 设置为企业空间管理员。完成后,点击**创建**。 {{< notice note >}} diff --git a/content/zh/docs/v3.3/quick-start/enable-pluggable-components.md b/content/zh/docs/v3.3/quick-start/enable-pluggable-components.md index 3f46bbf47..83b49093e 100644 --- a/content/zh/docs/v3.3/quick-start/enable-pluggable-components.md +++ b/content/zh/docs/v3.3/quick-start/enable-pluggable-components.md @@ -62,7 +62,7 @@ weight: 2600 在已有 Kubernetes 集群上安装 KubeSphere 时,需要部署 [ks-installer](https://github.com/kubesphere/ks-installer/) 的两个 YAML 文件。 -1. 首先下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/cluster-configuration.yaml) 文件,然后打开编辑。 +1. 首先下载 [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) 文件,然后打开编辑。 ```bash vi cluster-configuration.yaml @@ -73,7 +73,7 @@ weight: 2600 3. 
编辑完成后保存文件,执行以下命令开始安装: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml kubectl apply -f cluster-configuration.yaml ``` diff --git a/content/zh/docs/v3.3/quick-start/minimal-kubesphere-on-k8s.md b/content/zh/docs/v3.3/quick-start/minimal-kubesphere-on-k8s.md index f4e3325dd..5c99f4f7d 100644 --- a/content/zh/docs/v3.3/quick-start/minimal-kubesphere-on-k8s.md +++ b/content/zh/docs/v3.3/quick-start/minimal-kubesphere-on-k8s.md @@ -10,7 +10,7 @@ weight: 2200 ## 准备工作 -- 如需在 Kubernetes 上安装 KubeSphere 3.3.0,您的 Kubernetes 版本必须为:v1.19.x,v1.20.x,v1.21.x,v1.22.x 或 v1.23.x(实验性支持)。 +- 您的 Kubernetes 版本必须为:v1.19.x,v1.20.x,v1.21.x,v1.22.x 或 v1.23.x(实验性支持)。 - 确保您的机器满足最低硬件要求:CPU > 1 核,内存 > 2 GB。 - 在安装之前,需要配置 Kubernetes 集群中的**默认**存储类型。 @@ -28,9 +28,9 @@ weight: 2200 1. 执行以下命令开始安装: ```bash - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml - kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml + kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml ``` 2. 
检查安装日志: diff --git a/content/zh/docs/v3.3/reference/api-changes/logging.md b/content/zh/docs/v3.3/reference/api-changes/logging.md index 461861efa..63ad9b4bb 100644 --- a/content/zh/docs/v3.3/reference/api-changes/logging.md +++ b/content/zh/docs/v3.3/reference/api-changes/logging.md @@ -1,12 +1,12 @@ --- title: "日志系统" keywords: 'Kubernetes, KubeSphere, API, 日志系统' -description: 'KubeSphere 3.3.0 中日志系统(服务组件)的 API 变更。' +description: 'KubeSphere 3.3 中日志系统(服务组件)的 API 变更。' linkTitle: "日志系统" weight: 17310 --- -KubeSphere 3.3.0 中**日志系统**(服务组件)的 API 变更。 +KubeSphere 3.3 中**日志系统**(服务组件)的 API 变更。 ## 时间格式 @@ -22,6 +22,6 @@ KubeSphere 3.3.0 中**日志系统**(服务组件)的 API 变更。 - GET /namespaces/{namespace}/pods/{pod} - 整个日志设置 API 组 -## Fluent Bit Operator +## Fluent Operator -在 KubeSphere 3.3.0 中,由于 Fluent Bit Operator 项目已重构且不兼容,整个日志设置 API 已从 KubeSphere 内核中移除。有关如何在 KubeSphere 3.3.0 中配置日志收集,请参考 [Fluent Bit Operator](https://github.com/kubesphere/fluentbit-operator) 文档。 \ No newline at end of file +在 KubeSphere 3.3 中,由于 Fluent Operator 项目已重构且不兼容,整个日志设置 API 已从 KubeSphere 内核中移除。有关如何在 KubeSphere 3.3 中配置日志收集,请参考 [Fluent Operator](https://github.com/kubesphere/fluentbit-operator) 文档。 \ No newline at end of file diff --git a/content/zh/docs/v3.3/reference/api-changes/monitoring.md b/content/zh/docs/v3.3/reference/api-changes/monitoring.md index 61e0298dc..af7916da9 100644 --- a/content/zh/docs/v3.3/reference/api-changes/monitoring.md +++ b/content/zh/docs/v3.3/reference/api-changes/monitoring.md @@ -1,7 +1,7 @@ --- title: "监控系统" keywords: 'Kubernetes, KubeSphere, API, 监控系统' -description: 'KubeSphere 3.3.0 中监控系统(服务组件)的 API 变更。' +description: 'KubeSphere 3.3 中监控系统(服务组件)的 API 变更。' linkTitle: "监控系统" weight: 17320 --- @@ -16,9 +16,9 @@ weight: 17320 ## 已弃用的指标 -在 KubeSphere 3.3.0 中,下表左侧的指标已重命名为右侧的指标。 +在 KubeSphere 3.3 中,下表左侧的指标已重命名为右侧的指标。 -|V2.0|V3.0| +|V2.0|V3.3| |---|---| |workload_pod_cpu_usage | workload_cpu_usage| |workload_pod_memory_usage| workload_memory_usage| @@ -48,7 +48,7 @@ weight: 17320 
|prometheus_up_sum| |prometheus_tsdb_head_samples_appended_rate| -KubeSphere 3.3.0 中引入的新指标。 +KubeSphere 3.3 中引入的新指标。 |新指标| |---| @@ -59,7 +59,7 @@ KubeSphere 3.3.0 中引入的新指标。 ## 响应字段 -在 KubeSphere 3.3.0 中,已移除响应字段 `metrics_level`、`status` 和 `errorType`。 +在 KubeSphere 3.3 中,已移除响应字段 `metrics_level`、`status` 和 `errorType`。 另外,字段名称 `resource_name` 已替换为具体资源类型名称。这些类型是 `node`、`workspace`、`namespace`、`workload`、`pod`、`container` 和 `persistentvolumeclaim`。例如,您将获取 `node: node1`,而不是 `resource_name: node1`。请参见以下示例响应: diff --git a/content/zh/docs/v3.3/reference/api-docs.md b/content/zh/docs/v3.3/reference/api-docs.md index 2eb79cab4..d5e1f68dd 100644 --- a/content/zh/docs/v3.3/reference/api-docs.md +++ b/content/zh/docs/v3.3/reference/api-docs.md @@ -47,7 +47,7 @@ curl -X POST -H 'Content-Type: application/x-www-form-urlencoded' \ 'http://[node ip]:31407/oauth/token' \ --data-urlencode 'grant_type=password' \ --data-urlencode 'username=admin' \ - --data-urlencode 'password=P#$$w0rd' \ + --data-urlencode 'password=P#$$w0rd' --data-urlencode 'client_id=kubesphere' \ --data-urlencode 'client_secret=kubesphere' ``` @@ -116,7 +116,7 @@ $ curl -X GET -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ ## API 参考 -KubeSphere API Swagger JSON 文件可以在 https://github.com/kubesphere/kubesphere/tree/release-3.1/api 仓库中找到。 +KubeSphere API Swagger JSON 文件可以在 https://github.com/kubesphere/kubesphere/tree/release-3.3/api 仓库中找到。 - KubeSphere 已指定 API [Swagger Json](https://github.com/kubesphere/kubesphere/blob/release-3.1/api/ks-openapi-spec/swagger.json) 文件,它包含所有只适用于 KubeSphere 的 API。 - KubeSphere 已指定 CRD [Swagger Json](https://github.com/kubesphere/kubesphere/blob/release-3.1/api/openapi-spec/swagger.json) 文件,它包含所有已生成的 CRD API 文档,与 Kubernetes API 对象相同。 diff --git a/content/zh/docs/v3.3/reference/storage-system-installation/nfs-server.md b/content/zh/docs/v3.3/reference/storage-system-installation/nfs-server.md index 7b919484c..b1b03b3b0 100644 --- 
a/content/zh/docs/v3.3/reference/storage-system-installation/nfs-server.md +++ b/content/zh/docs/v3.3/reference/storage-system-installation/nfs-server.md @@ -13,7 +13,7 @@ NFS 服务器机器就绪后,您可以使用 [KubeKey](../../../installing-on- {{< notice note >}} - 您也可以在安装 KubeSphere 集群后创建 NFS-client 的存储类型。 -- NFS 与部分应用不兼容(例如 Prometheus),可能会导致容器组创建失败。如果确实需要在生产环境中使用 NFS,请确保您了解相关风险或咨询 KubeSphere 技术支持 support@kubesphere.cloud。 +- 不建议您在生产环境中使用 NFS 存储(尤其是在 Kubernetes 1.20 或以上版本),这可能会引起 `failed to obtain lock` 和 `input/output error` 等问题,从而导致 Pod `CrashLoopBackOff`。此外,部分应用不兼容 NFS,例如 [Prometheus](https://github.com/prometheus/prometheus/blob/03b354d4d9386e4b3bfbcd45da4bb58b182051a5/docs/storage.md#operational-aspects) 等。 {{}} diff --git a/content/zh/docs/v3.3/release/release-v320.md b/content/zh/docs/v3.3/release/release-v320.md index 50bfa218d..1ad18884f 100644 --- a/content/zh/docs/v3.3/release/release-v320.md +++ b/content/zh/docs/v3.3/release/release-v320.md @@ -72,7 +72,7 @@ weight: 18100 ### 新特性 - 在应用路由列表页面新增应用路由排序、路由规则编辑和注解编辑功能。([#2165](https://github.com/kubesphere/console/pull/2165),[@harrisonliu5](https://github.com/harrisonliu5)) -- 重构项目网关和新增集群网关功能。([#2262](https://github.com/kubesphere/console/pull/2262),[@harrisonliu5](https://github.com/harrisonliu5)) +- 重构集群网关和项目网关功能。([#2262](https://github.com/kubesphere/console/pull/2262),[@harrisonliu5](https://github.com/harrisonliu5)) - 在路由规则创建过程中新增服务名称自动补全功能。([#2196](https://github.com/kubesphere/console/pull/2196),[@wengzhisong-hz](https://github.com/wengzhisong-hz)) - 对 ks-console 进行了以下 DNS 优化: - 直接使用 ks-apiserver 服务的名称作为 API URL,不再使用 `ks-apiserver.kubesphere-system.svc`。 diff --git a/content/zh/docs/v3.3/release/release-v330.md b/content/zh/docs/v3.3/release/release-v330.md index 19a4f1589..e842a0d9b 100644 --- a/content/zh/docs/v3.3/release/release-v330.md +++ b/content/zh/docs/v3.3/release/release-v330.md @@ -1,8 +1,8 @@ --- -title: "3.3.0 版本说明" -keywords: "Kubernetes, KubeSphere, 版本说明" -description: "KubeSphere 3.3.0 版本说明" 
-linkTitle: "3.3.0 版本说明" +title: "3.3 版本说明" +keywords: "Kubernetes, KubeSphere, 版本说明" +description: "KubeSphere 3.3 版本说明" +linkTitle: "3.3 版本说明" weight: 18098 --- @@ -13,14 +13,19 @@ weight: 18098 - 支持导入并管理代码仓库。 - 新增多款基于 CRD 的内置流水线模板,支持参数自定义。 - 支持查看流水线事件。 - +### 优化增强 +- 支持通过 UI 编辑流水线 kubeconfig 绑定方式。 +### 问题修复 +- 修复用户查看 CI/CD 模板失败的问题。 +- 将 `Deprecated` 标签从 CI/CD 模版中移除,并将部署环节由 `kubernetesDeploy` 修改为 kubeconfig 绑定方式。 ## 存储 ### 新特性 - 支持租户级存储类权限管理。 - 新增卷快照内容和卷快照类管理。 - 支持 deployment 与 statefulSet 资源调整存储卷声明修改后自动重启。 - 支持存储卷声明设定使用阈值自动扩容。 - +### 问题修复 +- 当用户使用 `hostpath` 作为存储时,必须填写主机路径。 ## 多租户和多集群 ### 新特性 - 支持 kubeconfig 证书到期提示。 @@ -60,7 +65,7 @@ weight: 18098 - 负载均衡类型选择新增 OpenELB。 ### 问题修复 - 修复了删除项目后项目网关遗留的问题。 - +- 修复 IPv4/IPv6 双栈模式下用户创建路由规则失败的问题。 ## App Store ### 问题修复 - 修复 Helm Controller NPE 错误引起的 ks-controller-manager 崩溃。 @@ -68,6 +73,11 @@ weight: 18098 ## 验证和授权 ### 新特性 - 支持手动启用/禁用用户。 +### 问题修复 +- 删除角色 `users-manager` 和 `workspace-manager`。 +- 新增角色 `platform-self-provisioner`。 +- 屏蔽用户自定义角色的部分权限。 +- 修复 `cluster-admin` 角色用户无法创建企业空间的问题。 ## 用户体验 - 新增 Kubernetes 审计日志开启提示。 @@ -87,5 +97,7 @@ weight: 18098 - 优化了服务拓扑图详情展示窗口。 - 优化了 ClusterConfiguration 更新机制,无需重启 ks-apiserver、ks-controller-manager。 - 优化了部分页面文案描述。 +- 支持修改每页列表的展示数量。 +- 支持批量停止工作负载。 -有关 KubeSphere 3.3.0 的 Issue 和贡献者详细信息,请参阅 [GitHub](https://github.com/kubesphere/kubesphere/blob/master/CHANGELOG/CHANGELOG-3.3.md)。 +有关 KubeSphere 3.3 的 Issue 和贡献者详细信息,请参阅 [GitHub](https://github.com/kubesphere/kubesphere/blob/master/CHANGELOG/CHANGELOG-3.3.md)。 diff --git a/content/zh/docs/v3.3/upgrade/_index.md b/content/zh/docs/v3.3/upgrade/_index.md index 78637a106..1c4ee4f12 100644 --- a/content/zh/docs/v3.3/upgrade/_index.md +++ b/content/zh/docs/v3.3/upgrade/_index.md @@ -11,4 +11,4 @@ icon: "/images/docs/v3.3/docs.svg" --- -本章演示集群管理员如何将 KubeSphere 升级到 3.3.0。 \ No newline at end of file +本章演示集群管理员如何将 KubeSphere 升级到 3.3.1。 \ No newline at end of file diff --git a/content/zh/docs/v3.3/upgrade/air-gapped-upgrade-with-ks-installer.md 
b/content/zh/docs/v3.3/upgrade/air-gapped-upgrade-with-ks-installer.md index 71574106e..9f7ac52ca 100644 --- a/content/zh/docs/v3.3/upgrade/air-gapped-upgrade-with-ks-installer.md +++ b/content/zh/docs/v3.3/upgrade/air-gapped-upgrade-with-ks-installer.md @@ -1,6 +1,6 @@ --- title: "使用 ks-installer 离线升级" -keywords: "离线环境, 升级, kubesphere, 3.3.0" +keywords: "离线环境, 升级, kubesphere, 3.3.1" description: "使用 ks-installer 和离线包升级 KubeSphere。" linkTitle: "使用 ks-installer 离线升级" weight: 7500 @@ -12,10 +12,21 @@ weight: 7500 ## 准备工作 - 您需要有一个运行 KubeSphere v3.2.x 的集群。如果您的 KubeSphere 是 v3.1.0 或更早的版本,请先升级至 v3.2.x。 -- 请仔细阅读 [3.3.0 版本说明](../../../v3.3/release/release-v330/)。 +- 请仔细阅读 [3.3 版本说明](../../../v3.3/release/release-v330/)。 - 提前备份所有重要的组件。 - Docker 仓库。您需要有一个 Harbor 或其他 Docker 仓库。有关更多信息,请参见[准备一个私有镜像仓库](../../installing-on-linux/introduction/air-gapped-installation/#步骤-2准备一个私有镜像仓库)。 -- KubeSphere 3.3.0 支持的 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、 v1.22.x 和 v1.23.x(实验性支持)。 +- KubeSphere 3.3 支持的 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、 v1.22.x 和 v1.23.x(实验性支持)。 + +## 重要提示 + +KubeSphere 3.3.1 对内置角色和自定义角色的授权项做了一些调整。在您升级到 KubeSphere 3.3.1时,请注意以下几点: + + - 内置角色调整:移除了平台级内置角色 `users-manager`(用户管理员)和 `workspace-manager`(企业空间管理员),如果已有用户绑定了 `users-manager` 或 `workspace-manager`,他们的角色将会在升级之后变更为 `platform-regular`。增加了平台级内置角色 `platform-self-provisioner`。关于平台角色的具体描述,请参见[创建用户](../../quick-start/create-workspace-and-project/#创建用户)。 + - 自定义角色授权项调整: + - 移除平台层级自定义角色授权项:用户管理,角色管理,企业空间管理。 + - 移除企业空间层级自定义角色授权项:成员管理,角色管理,用户组管理。 + - 移除命名空间层级自定义角色授权项:成员管理,角色管理。 + - 升级到 KubeSphere 3.3.1 后,自定义角色会被保留,但是其包含的已被移除的授权项会被删除。 ## 步骤 1:准备安装镜像 @@ -24,7 +35,7 @@ weight: 7500 1. 使用以下命令从能够访问互联网的机器上下载镜像清单文件 `images-list.txt`: ```bash - curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/images-list.txt + curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/images-list.txt ``` {{< notice note >}} @@ -36,7 +47,7 @@ weight: 7500 2. 
下载 `offline-installation-tool.sh`。 ```bash - curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/offline-installation-tool.sh + curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/offline-installation-tool.sh ``` 3. 使 `.sh` 文件可执行。 @@ -96,10 +107,10 @@ weight: 7500 1. 执行以下命令下载 ks-installer,并将其传输至您充当任务机的机器,用于安装。 ```bash - curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml + curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml ``` -2. 验证您已在 `cluster-configuration.yaml` 中的 `spec.local_registry` 字段指定了私有镜像仓库地址。请注意,如果您的已有集群通过离线安装方式搭建,您应该已配置了此地址。如果您的集群采用在线安装方式搭建而需要进行离线升级,执行以下命令编辑您已有 KubeSphere v3.3.0 集群的 `cluster-configuration.yaml` 文件,并添加私有镜像仓库地址: +2. 验证您已在 `cluster-configuration.yaml` 中的 `spec.local_registry` 字段指定了私有镜像仓库地址。请注意,如果您的已有集群通过离线安装方式搭建,您应该已配置了此地址。如果您的集群采用在线安装方式搭建而需要进行离线升级,执行以下命令编辑您已有 KubeSphere 3.3 集群的 `cluster-configuration.yaml` 文件,并添加私有镜像仓库地址: ```bash kubectl edit cc -n kubesphere-system @@ -119,7 +130,7 @@ weight: 7500 3. 
编辑完成后保存 `cluster-configuration.yaml`。使用以下命令将 `ks-installer` 替换为您**自己仓库的地址**。 ```bash - sed -i "s#^\s*image: kubesphere.*/ks-installer:.*# image: dockerhub.kubekey.local/kubesphere/ks-installer:v3.3.0#" kubesphere-installer.yaml + sed -i "s#^\s*image: kubesphere.*/ks-installer:.*# image: dockerhub.kubekey.local/kubesphere/ks-installer:v3.3.1#" kubesphere-installer.yaml ``` {{< notice warning >}} diff --git a/content/zh/docs/v3.3/upgrade/air-gapped-upgrade-with-kubekey.md b/content/zh/docs/v3.3/upgrade/air-gapped-upgrade-with-kubekey.md index 5c95d65cf..62dfb9f54 100644 --- a/content/zh/docs/v3.3/upgrade/air-gapped-upgrade-with-kubekey.md +++ b/content/zh/docs/v3.3/upgrade/air-gapped-upgrade-with-kubekey.md @@ -1,6 +1,6 @@ --- title: "使用 KubeKey 离线升级" -keywords: "离线环境, kubernetes, 升级, kubesphere, 3.3.0" +keywords: "离线环境, kubernetes, 升级, kubesphere, 3.3" description: "使用离线包升级 Kubernetes 和 KubeSphere。" linkTitle: "使用 KubeKey 离线升级" weight: 7400 @@ -11,11 +11,22 @@ weight: 7400 - 您需要有一个运行 KubeSphere v3.2.x 的集群。如果您的 KubeSphere 是 v3.1.0 或更早的版本,请先升级至 v3.2.x。 - 您的 Kubernetes 版本必须为 v1.19.x及以上版本。 -- 请仔细阅读 [3.3.0 版本说明](../../../v3.3/release/release-v330/)。 +- 请仔细阅读 [3.3 版本说明](../../../v3.3/release/release-v330/)。 - 提前备份所有重要的组件。 - Docker 仓库。您需要有一个 Harbor 或其他 Docker 仓库。 - 请确保每个节点都可以从该 Docker 仓库拉取镜像或向其推送镜像。 +## 重要提示 + +KubeSphere 3.3.1 对内置角色和自定义角色的授权项做了一些调整。在您升级到 KubeSphere 3.3.1时,请注意以下几点: + + - 内置角色调整:移除了平台级内置角色 `users-manager`(用户管理员)和 `workspace-manager`(企业空间管理员),如果已有用户绑定了 `users-manager` 或 `workspace-manager`,他们的角色将会在升级之后变更为 `platform-regular`。增加了平台级内置角色 `platform-self-provisioner`。关于平台角色的具体描述,请参见[创建用户](../../quick-start/create-workspace-and-project/#创建用户)。 + + - 自定义角色授权项调整: + - 移除平台层级自定义角色授权项:用户管理,角色管理,企业空间管理。 + - 移除企业空间层级自定义角色授权项:成员管理,角色管理,用户组管理。 + - 移除命名空间层级自定义角色授权项:成员管理,角色管理。 + - 升级到 KubeSphere 3.3.1 后,自定义角色会被保留,但是其包含的已被移除的授权项会被删除。 ## 升级 KubeSphere 和 Kubernetes @@ -47,7 +58,7 @@ weight: 7400 ### 步骤 1:下载 KubeKey -1. 执行以下命令下载 KubeKey v2.2.2 并解压: +1. 
执行以下命令下载 KubeKey v2.3.0 并解压: {{< tabs >}} @@ -56,7 +67,7 @@ weight: 7400 从 [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) 下载 KubeKey 或者直接运行以下命令。 ```bash - curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - + curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -72,7 +83,7 @@ weight: 7400 运行以下命令来下载 KubeKey: ```bash - curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - + curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -91,7 +102,7 @@ weight: 7400 1. 使用以下命令从能够访问互联网的机器上下载镜像清单文件 `images-list.txt`: ```bash - curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/images-list.txt + curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/images-list.txt ``` {{< notice note >}} @@ -103,7 +114,7 @@ weight: 7400 2. 下载 `offline-installation-tool.sh`。 ```bash - curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/offline-installation-tool.sh + curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/offline-installation-tool.sh ``` 3. 
使 `.sh` 文件可执行。 @@ -144,7 +155,7 @@ weight: 7400 {{< notice note >}} - - 您可以根据自己的需求变更下载的 Kubernetes 版本。安装 KubeSphere 3.3.0 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x和v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../installing-on-linux/introduction/kubekey/#支持矩阵)。 + - 您可以根据自己的需求变更下载的 Kubernetes 版本。安装 KubeSphere 3.3 的建议 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x和v1.23.x(实验性支持)。如果不指定 Kubernetes 版本,KubeKey 将默认安装 Kubernetes v1.23.7。有关受支持的 Kubernetes 版本的更多信息,请参见[支持矩阵](../../installing-on-linux/introduction/kubekey/#支持矩阵)。 - 您可以通过下载 Kubernetes v1.17.9 二进制文件将 Kubernetes 从 v1.16.13 升级到 v1.17.9。但对于跨多个版本升级,需要事先下载所有中间版本,例如您想将 Kubernetes 从 v1.15.12 升级到 v1.18.6,则需要下载 Kubernetes v1.16.13、v1.17.9 和 v1.18.6 二进制文件。 @@ -191,7 +202,7 @@ weight: 7400 | | Kubernetes | KubeSphere | | ------ | ---------- | ---------- | | 升级前 | v1.18.6 | v3.2.x | -| 升级后 | v1.22.10 | 3.3.0 | +| 升级后 | v1.22.10 | 3.3 | #### 升级集群 @@ -208,7 +219,7 @@ weight: 7400 例如: ```bash -./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 -f config-sample.yaml +./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.1 -f config-sample.yaml ``` {{< notice note >}} @@ -249,7 +260,7 @@ weight: 7400 privateRegistry: dockerhub.kubekey.local ``` -#### 将单节点集群升级至 KubeSphere 3.3.0 和 Kubernetes v1.22.10 +#### 将单节点集群升级至 KubeSphere 3.3 和 Kubernetes v1.22.10 ```bash ./kk upgrade -f config-sample.yaml @@ -273,7 +284,7 @@ weight: 7400 | | Kubernetes | KubeSphere | | ------ | ---------- | ---------- | | 升级前 | v1.18.6 | v3.2.x | -| 升级后 | v1.22.10 | 3.3.0 | +| 升级后 | v1.22.10 | 3.3 | #### 升级集群 @@ -290,7 +301,7 @@ weight: 7400 例如: ```bash -./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 -f config-sample.yaml +./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.1 -f config-sample.yaml ``` {{< notice note >}} @@ -333,7 +344,7 @@ weight: 7400 privateRegistry: dockerhub.kubekey.local ``` -#### 将多节点集群升级至 
KubeSphere 3.3.0 和 Kubernetes v1.22.10 +#### 将多节点集群升级至 KubeSphere 3.3 和 Kubernetes v1.22.10 ```bash ./kk upgrade -f config-sample.yaml diff --git a/content/zh/docs/v3.3/upgrade/overview.md b/content/zh/docs/v3.3/upgrade/overview.md index 3019a109f..c7fd07064 100644 --- a/content/zh/docs/v3.3/upgrade/overview.md +++ b/content/zh/docs/v3.3/upgrade/overview.md @@ -1,6 +1,6 @@ --- title: "概述" -keywords: "Kubernetes, 升级, KubeSphere, 3.3.0, 升级" +keywords: "Kubernetes, 升级, KubeSphere, 3.3, 升级" description: "了解升级之前需要注意的事项,例如版本和升级工具。" linkTitle: "概述" weight: 7100 @@ -8,11 +8,11 @@ weight: 7100 ## 确定您的升级方案 -KubeSphere 3.3.0 与 Kubernetes 1.19.x、1.20.x、1.21.x、1.22.x、1.23.x 兼容: +KubeSphere 3.3 与 Kubernetes 1.19.x、1.20.x、1.21.x、1.22.x、1.23.x 兼容: -- 在您升级集群至 KubeSphere 3.3.0 之前,您的 KubeSphere 集群版本必须为 v3.2.x。 +- 在您升级集群至 KubeSphere 3.3 之前,您的 KubeSphere 集群版本必须为 v3.2.x。 -- 如果您的现有 KubeSphere v3.2.x 集群安装在 Kubernetes 1.19.x+ 上,您可选择只将 KubeSphere 升级到 3.3.0 或者同时升级 Kubernetes(到更高版本)和 KubeSphere(到 3.3.0)。 +- 如果您的现有 KubeSphere v3.2.x 集群安装在 Kubernetes 1.19.x+ 上,您可选择只将 KubeSphere 升级到 3.3 或者同时升级 Kubernetes(到更高版本)和 KubeSphere(到 3.3)。 ## 升级前 diff --git a/content/zh/docs/v3.3/upgrade/upgrade-with-ks-installer.md b/content/zh/docs/v3.3/upgrade/upgrade-with-ks-installer.md index 700e19b60..538af3acd 100644 --- a/content/zh/docs/v3.3/upgrade/upgrade-with-ks-installer.md +++ b/content/zh/docs/v3.3/upgrade/upgrade-with-ks-installer.md @@ -1,6 +1,6 @@ --- title: "使用 ks-installer 升级" -keywords: "kubernetes, 升级, kubesphere, 3.3.0" +keywords: "kubernetes, 升级, kubesphere, 3.3" description: "使用 ks-installer 升级 KubeSphere。" linkTitle: "使用 ks-installer 升级" weight: 7300 @@ -11,18 +11,30 @@ weight: 7300 ## 准备工作 - 您需要有一个运行 KubeSphere v3.2.x 的集群。如果您的 KubeSphere 是 v3.1.0 或更早的版本,请先升级至 v3.2.x。 -- 请仔细阅读 [3.3.0 版本说明](../../../v3.3/release/release-v330/)。 +- 请仔细阅读 [3.3 版本说明](../../../v3.3/release/release-v330/)。 - 提前备份所有重要的组件。 -- KubeSphere 3.3.0 支持的 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。 +- 
KubeSphere 3.3 支持的 Kubernetes 版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。 + +## 重要提示 + +KubeSphere 3.3.1 对内置角色和自定义角色的授权项做了一些调整。在您升级到 KubeSphere 3.3.1时,请注意以下几点: + + - 内置角色调整:移除了平台级内置角色 `users-manager`(用户管理员)和 `workspace-manager`(企业空间管理员),如果已有用户绑定了 `users-manager` 或 `workspace-manager`,他们的角色将会在升级之后变更为 `platform-regular`。增加了平台级内置角色 `platform-self-provisioner`。关于平台角色的具体描述,请参见[创建用户](../../quick-start/create-workspace-and-project/#创建用户)。 + + - 自定义角色授权项调整: + - 移除平台层级自定义角色授权项:用户管理,角色管理,企业空间管理。 + - 移除企业空间层级自定义角色授权项:成员管理,角色管理,用户组管理。 + - 移除命名空间层级自定义角色授权项:成员管理,角色管理。 + - 升级到 KubeSphere 3.3.1 后,自定义角色会被保留,但是其包含的已被移除的授权项会被删除。 ## 应用 ks-installer 运行以下命令升级集群: ```bash -kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml --force +kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml --force ``` ## 启用可插拔组件 -您可以在升级后启用 KubeSphere 3.3.0 的[可插拔组件](../../pluggable-components/overview/)以体验该容器平台的更多功能。 \ No newline at end of file +您可以在升级后启用 KubeSphere 3.3 的[可插拔组件](../../pluggable-components/overview/)以体验该容器平台的更多功能。 \ No newline at end of file diff --git a/content/zh/docs/v3.3/upgrade/upgrade-with-kubekey.md b/content/zh/docs/v3.3/upgrade/upgrade-with-kubekey.md index d19c76ef3..e8010932c 100644 --- a/content/zh/docs/v3.3/upgrade/upgrade-with-kubekey.md +++ b/content/zh/docs/v3.3/upgrade/upgrade-with-kubekey.md @@ -1,6 +1,6 @@ --- title: "使用 KubeKey 升级" -keywords: "Kubernetes, 升级, KubeSphere, 3.3.0, KubeKey" +keywords: "Kubernetes, 升级, KubeSphere, 3.3, KubeKey" description: "使用 KubeKey 升级 Kubernetes 和 KubeSphere。" linkTitle: "使用 KubeKey 升级" weight: 7200 @@ -14,10 +14,22 @@ weight: 7200 ## 准备工作 - 您需要有一个运行 KubeSphere v3.2.x 的集群。如果您的 KubeSphere 是 v3.1.0 或更早的版本,请先升级至 v3.2.x。 -- 请仔细阅读 [3.3.0 版本说明](../../../v3.3/release/release-v330/)。 +- 请仔细阅读 [3.3 版本说明](../../../v3.3/release/release-v330/)。 - 提前备份所有重要的组件。 - 确定您的升级方案。本文档中提供 [All-in-One 集群](#all-in-one-集群)和[多节点集群](#多节点集群)的两种升级场景。 
+## 重要提示 + +KubeSphere 3.3.1 对内置角色和自定义角色的授权项做了一些调整。在您升级到 KubeSphere 3.3.1时,请注意以下几点: + + - 内置角色调整:移除了平台级内置角色 `users-manager`(用户管理员)和 `workspace-manager`(企业空间管理员),如果已有用户绑定了 `users-manager` 或 `workspace-manager`,他们的角色将会在升级之后变更为 `platform-regular`。增加了平台级内置角色 `platform-self-provisioner`。关于平台角色的具体描述,请参见[创建用户](../../quick-start/create-workspace-and-project/#创建用户)。 + + - 自定义角色授权项调整: + - 移除平台层级自定义角色授权项:用户管理,角色管理,企业空间管理。 + - 移除企业空间层级自定义角色授权项:成员管理,角色管理,用户组管理。 + - 移除命名空间层级自定义角色授权项:成员管理,角色管理。 + - 升级到 KubeSphere 3.3.1 后,自定义角色会被保留,但是其包含的已被移除的授权项会被删除。 + ## 下载 KubeKey 升级集群前执行以下命令下载 KubeKey。 @@ -29,7 +41,7 @@ weight: 7200 从 [GitHub 发布页面](https://github.com/kubesphere/kubekey/releases)下载 KubeKey 或直接使用以下命令。 ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{}} @@ -45,7 +57,7 @@ export KKZONE=cn 执行以下命令下载 KubeKey。 ```bash -curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - +curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh - ``` {{< notice note >}} @@ -60,7 +72,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh - {{< notice note >}} -执行以上命令会下载最新版 KubeKey (v2.2.2),您可以修改命令中的版本号以下载指定版本。 +执行以上命令会下载最新版 KubeKey (v2.3.0),您可以修改命令中的版本号以下载指定版本。 {{}} @@ -81,10 +93,10 @@ chmod +x kk ### All-in-One 集群 -运行以下命令使用 KubeKey 将您的单节点集群升级至 KubeSphere 3.3.0 和 Kubernetes v1.22.10: +运行以下命令使用 KubeKey 将您的单节点集群升级至 KubeSphere 3.3 和 Kubernetes v1.22.10: ```bash -./kk upgrade --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 +./kk upgrade --with-kubernetes v1.22.10 --with-kubesphere v3.3.1 ``` 要将 Kubernetes 升级至特定版本,请在 `--with-kubernetes` 标志后明确指定版本号。以下是可用版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。 @@ -122,16 +134,16 @@ chmod +x kk #### 步骤 3:升级集群 -运行以下命令,将您的集群升级至 KubeSphere 3.3.0 和 Kubernetes v1.22.10: +运行以下命令,将您的集群升级至 KubeSphere 3.3 和 Kubernetes v1.22.10: ```bash -./kk upgrade --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 -f sample.yaml +./kk upgrade --with-kubernetes v1.22.10 
--with-kubesphere v3.3.1 -f sample.yaml ``` 要将 Kubernetes 升级至特定版本,请在 `--with-kubernetes` 标志后明确指定版本号。以下是可用版本:v1.19.x、v1.20.x、v1.21.x、v1.22.x 和 v1.23.x(实验性支持)。 {{< notice note >}} -若要使用 KubeSphere 3.3.0 的新功能,您需要在升级后启用对应的可插拔组件。 +若要使用 KubeSphere 3.3 的新功能,您需要在升级后启用对应的可插拔组件。 {{}} \ No newline at end of file diff --git a/content/zh/docs/v3.3/upgrade/what-changed.md b/content/zh/docs/v3.3/upgrade/what-changed.md index d5ebfe69d..0c96e7af5 100644 --- a/content/zh/docs/v3.3/upgrade/what-changed.md +++ b/content/zh/docs/v3.3/upgrade/what-changed.md @@ -1,12 +1,12 @@ --- title: "升级后的变更" -keywords: "Kubernetes, 升级, KubeSphere, 3.3.0" +keywords: "Kubernetes, 升级, KubeSphere, 3.3" description: "了解升级后的变更。" linkTitle: "升级后的变更" weight: 7600 --- -本文说明先前版本现有设置在升级后的变更。如果您想了解 KubeSphere 3.3.0 的所有新功能和优化,请直接参阅 [3.3.0 版本说明](../../../v3.3/release/release-v330/)。 +本文说明先前版本现有设置在升级后的变更。如果您想了解 KubeSphere 3.3 的所有新功能和优化,请直接参阅 [3.3 版本说明](../../../v3.3/release/release-v330/)。 diff --git a/content/zh/docs/v3.3/workspace-administration/department-management.md b/content/zh/docs/v3.3/workspace-administration/department-management.md index 2020977ab..9d87bbef8 100644 --- a/content/zh/docs/v3.3/workspace-administration/department-management.md +++ b/content/zh/docs/v3.3/workspace-administration/department-management.md @@ -19,7 +19,7 @@ weight: 9800 1. 以 `ws-admin` 用户登录 KubeSphere Web 控制台并进入 `demo-ws` 企业空间。 -2. 在左侧导航栏选择**企业空间设置**下的**部门管理**,点击右侧的**设置部门**。 +2. 在左侧导航栏选择**企业空间设置**下的**部门**,点击右侧的**设置部门**。 3. 在**设置部门**对话框中,设置以下参数,然后点击**确定**创建部门。 @@ -36,11 +36,11 @@ weight: 9800 * **项目角色**:一个项目中所有部门成员的角色。您可以点击**添加项目**来指定多个项目角色。每个项目只能指定一个角色。 * **DevOps 项目角色**:一个 DevOps 项目中所有部门成员的角色。您可以点击**添加 DevOps 项目**来指定多个 DevOps 项目角色。每个 DevOps 项目只能指定一个角色。 -4. 部门创建完成后,点击**确定**,然后点击**关闭**。在**部门管理**页面,可以在左侧的部门树中看到已创建的部门。 +4. 部门创建完成后,点击**确定**,然后点击**关闭**。在**部门**页面,可以在左侧的部门树中看到已创建的部门。 ## 分配用户至部门 -1. 在**部门管理**页面,选择左侧部门树中的一个部门,点击右侧的**未分配**。 +1. 在**部门**页面,选择左侧部门树中的一个部门,点击右侧的**未分配**。 2. 
在用户列表中,点击用户右侧的 ,对出现的提示消息点击**确定**,以将用户分配到该部门。 @@ -53,12 +53,12 @@ weight: 9800 ## 从部门中移除用户 -1. 在**部门管理**页面,选择左侧部门树中的一个部门,然后点击右侧的**已分配**。 +1. 在**部门**页面,选择左侧部门树中的一个部门,然后点击右侧的**已分配**。 2. 在已分配用户列表中,点击用户右侧的 ,在出现的对话框中输入相应的用户名,然后点击**确定**来移除用户。 ## 删除和编辑部门 -1. 在**部门管理**页面,点击**设置部门**。 +1. 在**部门**页面,点击**设置部门**。 2. 在**设置部门**对话框的左侧,点击需要编辑或删除部门的上级部门。 diff --git a/content/zh/docs/v3.3/workspace-administration/what-is-workspace.md b/content/zh/docs/v3.3/workspace-administration/what-is-workspace.md index 847e225de..7c22be939 100644 --- a/content/zh/docs/v3.3/workspace-administration/what-is-workspace.md +++ b/content/zh/docs/v3.3/workspace-administration/what-is-workspace.md @@ -20,11 +20,6 @@ weight: 9100 1. 以 `ws-manager` 身份登录 KubeSphere Web 控制台。点击左上角的**平台管理**并选择**访问控制**。在**企业空间**页面,点击**创建**。 - {{< notice note >}} - - 列表中已列出默认企业空间 `system-workspace`,该企业空间包含所有系统项目。 - - {{}} 2. 对于单节点集群,您需要在**基本信息**页面,为创建的企业空间输入名称,并从下拉菜单中选择一名企业空间管理员。点击**创建**。 diff --git a/layouts/partials/header.html b/layouts/partials/header.html index 131d92fda..25ae7d046 100644 --- a/layouts/partials/header.html +++ b/layouts/partials/header.html @@ -3,7 +3,7 @@ {{ if eq .Site.Language.Lang "zh"}}
- 🚀 KubeSphere v3.3.0 已经发布，为您带来新特性和功能增强。请参阅 v3.3.0 版本说明 → + 🚀 KubeSphere v3.3 已经发布，为您带来新特性和功能增强。请参阅 v3.3 版本说明 → close
@@ -11,7 +11,7 @@ {{ if eq .Site.Language.Lang "en"}}
- 🚀 KubeSphere v3.3.0 with new features and enhancements is available now. Read the release notes for v3.3.0 → + 🚀 KubeSphere v3.3 with new features and enhancements is available now. Read the release notes for v3.3 → close