3.3.1 documentation

Signed-off-by: Bettygogo2021 <bettygogo@kubesphere.io>
This commit is contained in:
Bettygogo2021 2022-10-28 15:35:41 +08:00
parent 5e3f43e8b1
commit 504604bead
176 changed files with 1521 additions and 1719 deletions

View File

@ -6,7 +6,7 @@ linkTitle: "Cluster Gateway"
weight: 8630
---
KubeSphere 3.3.0 provides cluster-scope gateways to let all projects share a global gateway. This document describes how to set a cluster gateway on KubeSphere.
KubeSphere 3.3 provides cluster-scope gateways to let all projects share a global gateway. This document describes how to set a cluster gateway on KubeSphere.
## Prerequisites

View File

@ -6,7 +6,7 @@ linkTitle: "Introduction"
weight: 8621
---
KubeSphere provides a flexible log receiver configuration method. Powered by [FluentBit Operator](https://github.com/kubesphere/fluentbit-operator/), users can easily add, modify, delete, enable, or disable Elasticsearch, Kafka and Fluentd receivers. Once a receiver is added, logs will be sent to this receiver.
KubeSphere provides a flexible log receiver configuration method. Powered by [Fluent Operator](https://github.com/fluent/fluent-operator), users can easily add, modify, delete, enable, or disable Elasticsearch, Kafka and Fluentd receivers. Once a receiver is added, logs will be sent to this receiver.
This tutorial gives a brief introduction to the general steps of adding log receivers in KubeSphere.
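For orientation, a receiver ultimately materializes as a Fluent Operator output CR. Below is a minimal sketch of an Elasticsearch receiver; the API group and field names follow the Fluent Operator `ClusterOutput` type, but the label and match pattern are assumptions, so verify them against the CRDs installed in your cluster:
```yaml
apiVersion: fluentbit.fluent.io/v1alpha2
kind: ClusterOutput
metadata:
  name: es-example                          # hypothetical name
  labels:
    logging.kubesphere.io/enabled: "true"   # assumed label used to enable the receiver
spec:
  matchRegex: (?:kube|service)\.(.*)        # assumed match pattern for container logs
  es:
    host: 192.168.0.2                       # illustrative Elasticsearch address
    port: 9200
    logstashFormat: true
    logstashPrefix: ks-logstash-log
```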
@ -45,7 +45,7 @@ To add a log receiver:
A default Elasticsearch receiver will be added with its service address set to an Elasticsearch cluster if `logging`, `events`, or `auditing` is enabled in [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md).
An internal Elasticsearch cluster will be deployed to the Kubernetes cluster if neither `externalElasticsearchHost` nor `externalElasticsearchPort` is specified in [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md) when `logging`, `events`, or `auditing` is enabled. The internal Elasticsearch cluster is for testing and development only. It is recommended that you configure an external Elasticsearch cluster for production.
An internal Elasticsearch cluster will be deployed to the Kubernetes cluster if neither `externalElasticsearchUrl` nor `externalElasticsearchPort` is specified in [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md) when `logging`, `events`, or `auditing` is enabled. The internal Elasticsearch cluster is for testing and development only. It is recommended that you configure an external Elasticsearch cluster for production.
Log searching relies on the internal or external Elasticsearch cluster configured.
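For reference, the two fields named above live under `spec.common.es` in `ClusterConfiguration`; a minimal sketch with illustrative values (the surrounding file contains many other settings):
```yaml
spec:
  common:
    es:
      externalElasticsearchUrl: 192.168.0.2   # address of your external Elasticsearch
      externalElasticsearchPort: 9200
```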

View File

@ -121,17 +121,17 @@ Nevertheless, you can use [rbd provisioner](https://github.com/kubernetes-incuba
| Parameter | Description |
| :---- | :---- |
| Monitors| IP address of Ceph monitors. |
| adminId| Ceph client ID that is capable of creating images in the pool. |
| adminSecretName| Secret name of `adminId`. |
| adminSecretNamespace| Namespace of `adminSecretName`. |
| pool | Name of the Ceph RBD pool. |
| userId | The Ceph client ID that is used to map the RBD image. |
| userSecretName | The name of Ceph Secret for `userId` to map RBD image. |
| userSecretNamespace | The namespace for `userSecretName`. |
| MONITORS| IP addresses of Ceph monitors. |
| ADMINID| Ceph client ID that is capable of creating images in the pool. |
| ADMINSECRETNAME| Secret name of `ADMINID`. |
| ADMINSECRETNAMESPACE| Namespace of `ADMINSECRETNAME`. |
| POOL | Name of the Ceph RBD pool. |
| USERID | The Ceph client ID that is used to map the RBD image. |
| USERSECRETNAME | The name of the Ceph secret for `USERID` to map the RBD image. |
| USERSECRETNAMESPACE | The namespace for `USERSECRETNAME`. |
| File System Type | File system type of the storage volume. |
| imageFormat | Option of the Ceph volume. The value can be `1` or `2`. `imageFeatures` needs to be filled when you set imageFormat to `2`. |
| imageFeatures| Additional function of the Ceph cluster. The value should only be set when you set imageFormat to `2`. |
| IMAGEFORMAT | Option of the Ceph volume. The value can be `1` or `2`. `IMAGEFEATURES` needs to be filled when you set `IMAGEFORMAT` to `2`. |
| IMAGEFEATURES| Additional function of the Ceph cluster. The value should only be set when you set `IMAGEFORMAT` to `2`. |
For more information about StorageClass parameters, see [Ceph RBD in Kubernetes Documentation](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd).
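If you prefer to define the StorageClass in YAML rather than on the console, the fields in the table map onto the `kubernetes.io/rbd` parameters from the Kubernetes documentation linked above; a sketch with illustrative values:
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 192.168.0.10:6789        # comma-separated list of Ceph monitors
  adminId: admin
  adminSecretName: ceph-admin-secret
  adminSecretNamespace: kube-system
  pool: rbd
  userId: kube
  userSecretName: ceph-user-secret
  userSecretNamespace: default
  fsType: ext4                       # "File System Type" in the table above
  imageFormat: "2"
  imageFeatures: "layering"          # required only when imageFormat is "2"
```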
@ -146,7 +146,7 @@ NFS (Net File System) is widely used on Kubernetes with the external-provisioner
{{< notice note >}}
NFS is incompatible with some applications, for example, Prometheus, which may result in pod creation failures. If you need to use NFS in the production environment, ensure that you have understood the risks. For more information, contact support@kubesphere.cloud.
It is not recommended that you use NFS storage for production (especially on Kubernetes version 1.20 or later) as some issues may occur, such as `failed to obtain lock` and `input/output error`, resulting in Pod `CrashLoopBackOff`. Besides, some apps may not be compatible with NFS, including [Prometheus](https://github.com/prometheus/prometheus/blob/03b354d4d9386e4b3bfbcd45da4bb58b182051a5/docs/storage.md#operational-aspects).
{{</ notice >}}

View File

@ -40,7 +40,7 @@ See the table below for the role of each cluster.
{{< notice note >}}
These Kubernetes clusters can be hosted across different cloud providers and their Kubernetes versions can also vary. Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
These Kubernetes clusters can be hosted across different cloud providers and their Kubernetes versions can also vary. Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
{{</ notice >}}

View File

@ -6,7 +6,7 @@ linkTitle: "Import a Code Repository"
weight: 11231
---
In KubeSphere 3.3.0, you can import a GitHub, GitLab, Bitbucket, or Git-based repository. The following describes how to import a GitHub repository.
In KubeSphere 3.3, you can import a GitHub, GitLab, Bitbucket, or Git-based repository. The following describes how to import a GitHub repository.
## Prerequisites

View File

@ -6,7 +6,7 @@ linkTitle: "Use GitOps to Achieve Continuous Deployment of Applications"
weight: 11221
---
In KubeSphere 3.3.0, we introduce the GitOps concept, which is a way of implementing continuous deployment for cloud-native applications. The core component of GitOps is a Git repository that always stores applications and declarative description of the infrastructure for version control. With GitOps and Kubernetes, you can enable CI/CD pipelines to apply changes to any cluster, which ensures consistency in cross-cloud deployment scenarios.
In KubeSphere 3.3, we introduce the GitOps concept, which is a way of implementing continuous deployment for cloud-native applications. The core component of GitOps is a Git repository that always stores applications and declarative description of the infrastructure for version control. With GitOps and Kubernetes, you can enable CI/CD pipelines to apply changes to any cluster, which ensures consistency in cross-cloud deployment scenarios.
This section walks you through the process of deploying an application using a continuous deployment.
## Prerequisites

View File

@ -5,7 +5,7 @@ description: 'Describe how to add a continuous deployment allowlist on KubeSpher
linkTitle: "Add a Continuous Deployment Allowlist"
weight: 11243
---
In KubeSphere 3.3.0, you can set an allowlist so that only specific code repositories and deployment locations can be used for continuous deployment.
In KubeSphere 3.3, you can set an allowlist so that only specific code repositories and deployment locations can be used for continuous deployment.
## Prerequisites

View File

@ -288,7 +288,7 @@ This stage uses SonarQube to test your code. You can skip this stage if you do n
{{< notice note >}}
In KubeSphere 3.3.0, the account that can run a pipeline will be able to continue or terminate the pipeline if there is no reviewer specified. Pipeline creators, accounts with the role of `admin` in a project, or the account you specify will be able to continue or terminate a pipeline.
In KubeSphere 3.3, the account that can run a pipeline will be able to continue or terminate the pipeline if there is no reviewer specified. Pipeline creators, accounts with the role of `admin` in a project, or the account you specify will be able to continue or terminate a pipeline.
{{</ notice >}}

View File

@ -219,7 +219,7 @@ The account `project-admin` needs to be created in advance since it is the revie
{{< notice note >}}
In KubeSphere 3.3.0, the account that can run a pipeline will be able to continue or terminate the pipeline if there is no reviewer specified. Pipeline creators, accounts with the role of `admin` in the project, or the account you specify will be able to continue or terminate the pipeline.
In KubeSphere 3.3, the account that can run a pipeline will be able to continue or terminate the pipeline if there is no reviewer specified. Pipeline creators, accounts with the role of `admin` in the project, or the account you specify will be able to continue or terminate the pipeline.
{{</ notice >}}

View File

@ -6,7 +6,7 @@ linkTitle: "Use Pipeline Templates"
weight: 11213
---
KubeSphere offers a graphical editing panel where the stages and steps of a Jenkins pipeline can be defined through interactive operations. KubeSphere 3.3.0 provides built-in pipeline templates, such as Node.js, Maven, and Golang, to help users quickly create pipelines. Additionally, KubeSphere 3.3.0 also supports customization of pipeline templates to meet diversified needs of enterprises.
KubeSphere offers a graphical editing panel where the stages and steps of a Jenkins pipeline can be defined through interactive operations. KubeSphere 3.3 provides built-in pipeline templates, such as Node.js, Maven, and Golang, to help users quickly create pipelines. Additionally, KubeSphere 3.3 also supports customization of pipeline templates to meet diversified needs of enterprises.
This section describes how to use pipeline templates on KubeSphere.
## Prerequisites

View File

@ -76,7 +76,7 @@ kubectl -n kubesphere-system rollout restart deploy ks-controller-manager
### Wrong code branch used
If you used the incorrect version of ks-installer, the versions of different components would not match after the installation. Execute the following commands to check version consistency. Note that the correct image tag is `v3.3.0`.
If you used the incorrect version of ks-installer, the versions of different components would not match after the installation. Execute the following commands to check version consistency. Note that the correct image tag is `v3.3.1`.
```
kubectl -n kubesphere-system get deploy ks-installer -o jsonpath='{.spec.template.spec.containers[0].image}'
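# The same check can be run against the other core components (a sketch; deployment
# names assumed from the kubesphere-system workloads mentioned elsewhere in this guide):
kubectl -n kubesphere-system get deploy ks-apiserver -o jsonpath='{.spec.template.spec.containers[0].image}'
kubectl -n kubesphere-system get deploy ks-controller-manager -o jsonpath='{.spec.template.spec.containers[0].image}'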

View File

@ -31,8 +31,8 @@ Editing resources in `system-workspace` may cause unexpected results, such as Ku
```yaml
client:
version:
kubesphere: v3.3.0
kubernetes: v1.22.10
kubesphere: v3.3.1
kubernetes: v1.21.5
openpitrix: v3.3.0
enableKubeConfig: true
systemWorkspace: "$" # Add this line manually.

View File

@ -29,7 +29,7 @@ Telemetry is enabled by default when you install KubeSphere, while you also have
### Disable Telemetry before installation
When you install KubeSphere on an existing Kubernetes cluster, you need to download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) for cluster settings. If you want to disable Telemetry, do not run `kubectl apply -f` directly for this file.
When you install KubeSphere on an existing Kubernetes cluster, you need to download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) for cluster settings. If you want to disable Telemetry, do not run `kubectl apply -f` directly for this file.
{{< notice note >}}
@ -37,7 +37,7 @@ If you install KubeSphere on Linux, see [Disable Telemetry After Installation](.
{{</ notice >}}
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it:
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it:
```bash
vi cluster-configuration.yaml
@ -57,7 +57,7 @@ If you install KubeSphere on Linux, see [Disable Telemetry After Installation](.
3. Save the file and run the following commands to start installation.
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```

View File

@ -6,9 +6,19 @@ linkTitle: "Bring Your Own Prometheus"
weight: 16330
---
KubeSphere comes with several pre-installed customized monitoring components, including Prometheus Operator, Prometheus, Alertmanager, Grafana (Optional), various ServiceMonitors, node-exporter, and kube-state-metrics. These components might already exist before you install KubeSphere. It is possible to use your own Prometheus stack setup in KubeSphere v3.3.0.
KubeSphere comes with several pre-installed customized monitoring components including Prometheus Operator, Prometheus, Alertmanager, Grafana (Optional), various ServiceMonitors, node-exporter, and kube-state-metrics. These components might already exist before you install KubeSphere. It is possible to use your own Prometheus stack setup in KubeSphere 3.3.
## Bring Your Own Prometheus
## Steps to Bring Your Own Prometheus
To use your own Prometheus stack setup, perform the following steps:
1. Uninstall the customized Prometheus stack of KubeSphere
2. Install your own Prometheus stack
3. Install the KubeSphere customized components to your Prometheus stack
4. Change KubeSphere's `monitoring endpoint`
### Step 1. Uninstall the customized Prometheus stack of KubeSphere
@ -29,7 +39,7 @@ KubeSphere comes with several pre-installed customized monitoring components, in
# kubectl -n kubesphere-system exec $(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}') -- kubectl delete -f /kubesphere/kubesphere/prometheus/init/ 2>/dev/null
```
2. Delete the PVC that Prometheus uses.
2. Delete the PVC that Prometheus used.
```bash
kubectl -n kubesphere-monitoring-system delete pvc `kubectl -n kubesphere-monitoring-system get pvc | grep -v VOLUME | awk '{print $1}' | tr '\n' ' '`
@ -39,112 +49,108 @@ KubeSphere comes with several pre-installed customized monitoring components, in
{{< notice note >}}
KubeSphere 3.3.0 was certified to work well with the following Prometheus stack components:
KubeSphere 3.3 was certified to work well with the following Prometheus stack components:
- Prometheus Operator **v0.55.1+**
- Prometheus **v2.34.0+**
- Alertmanager **v0.23.0+**
- kube-state-metrics **v2.5.0**
- node-exporter **v1.3.1**
- Prometheus Operator **v0.38.3+**
- Prometheus **v2.20.1+**
- Alertmanager **v0.21.0+**
- kube-state-metrics **v1.9.6**
- node-exporter **v0.18.1**
Make sure your Prometheus stack components' version meets these version requirements, especially **node-exporter** and **kube-state-metrics**.
Make sure your Prometheus stack components' versions meet these requirements, especially **node-exporter** and **kube-state-metrics**.
Make sure you install **node-exporter** and **kube-state-metrics** if only **Prometheus Operator** and **Prometheus** are installed. **node-exporter** and **kube-state-metrics** are required for KubeSphere to work properly.
Make sure you install **node-exporter** and **kube-state-metrics** if only **Prometheus Operator** and **Prometheus** were installed. **node-exporter** and **kube-state-metrics** are required for KubeSphere to work properly.
**If you already have the entire Prometheus stack up and running, you can skip this step.**
{{</ notice >}}
The Prometheus stack can be installed in many ways. The following steps show how to install it into the namespace `monitoring` using `ks-prometheus` (based on the **upstream `kube-prometheus`** project).
The Prometheus stack can be installed in many ways. The following steps show how to install it into the namespace `monitoring` using **upstream `kube-prometheus`**.
1. Obtain `ks-prometheus` that KubeSphere v3.3.0 uses.
1. Get kube-prometheus v0.6.0, whose node-exporter version (v0.18.1) matches the one KubeSphere 3.3 uses.
```bash
cd ~ && git clone -b release-3.3 https://github.com/kubesphere/ks-prometheus.git && cd ks-prometheus
cd ~ && git clone https://github.com/prometheus-operator/kube-prometheus.git && cd kube-prometheus && git checkout tags/v0.6.0 -b v0.6.0
```
2. Set up the `monitoring` namespace.
2. Set up the `monitoring` namespace, and install Prometheus Operator and the corresponding roles:
```bash
sed -i 's/kubesphere-monitoring-system/monitoring/g' kustomization.yaml
kubectl apply -f manifests/setup/
```
3. Remove unnecessary components. For example, if Grafana is not enabled in KubeSphere, you can run the following command to delete the Grafana section in `kustomization.yaml`.
3. Wait until Prometheus Operator is up and running.
```bash
sed -i '/manifests\/grafana\//d' kustomization.yaml
kubectl -n monitoring get pod --watch
```
4. Install the stack.
4. Remove unnecessary components such as Prometheus Adapter.
```bash
kubectl apply -k .
rm -rf manifests/prometheus-adapter-*.yaml
```
5. Change kube-state-metrics to v1.9.6, the same version that KubeSphere 3.3 uses.
```bash
sed -i 's/v1.9.5/v1.9.6/g' manifests/kube-state-metrics-deployment.yaml
```
6. Install Prometheus, Alertmanager, Grafana, kube-state-metrics, and node-exporter. To install only kube-state-metrics or node-exporter, apply only the corresponding `kube-state-metrics-*.yaml` or `node-exporter-*.yaml` files.
```bash
kubectl apply -f manifests/
```
### Step 3. Install the KubeSphere customized components to your Prometheus stack
{{< notice note >}}
If your Prometheus stack is not installed using `ks-prometheus`, skip this step.
KubeSphere 3.3 uses Prometheus Operator to manage Prometheus/Alertmanager config and lifecycle, ServiceMonitor (to manage scrape config), and PrometheusRule (to manage Prometheus recording/alert rules).
KubeSphere 3.3.0 uses Prometheus Operator to manage Prometheus/Alertmanager config and lifecycle, ServiceMonitor (to manage scrape config), and PrometheusRule (to manage Prometheus recording/alert rules).
There are a few items listed in [KubeSphere kustomization](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/kustomization.yaml), among which `prometheus-rules.yaml` and `prometheus-rulesEtcd.yaml` are required for KubeSphere 3.3 to work properly and others are optional. You can remove `alertmanager-secret.yaml` if you don't want your existing Alertmanager's config to be overwritten. You can remove `xxx-serviceMonitor.yaml` if you don't want your own ServiceMonitors to be overwritten (KubeSphere customized ServiceMonitors discard many irrelevant metrics to make sure Prometheus only stores the most useful metrics).
If your Prometheus stack setup isn't managed by Prometheus Operator, you can skip this step. But you have to make sure that:
- You must copy the recording/alerting rules in [PrometheusRule](https://github.com/kubesphere/ks-prometheus/blob/release-3.3/manifests/kubernetes/kubernetes-prometheusRule.yaml) and [PrometheusRule for etcd](https://github.com/kubesphere/ks-prometheus/blob/release-3.3/manifests/etcd/prometheus-rulesEtcd.yaml) to your Prometheus config for KubeSphere v3.3.0 to work properly.
- You must copy the recording/alerting rules in [PrometheusRule](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/prometheus-rules.yaml) and [PrometheusRule for etcd](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/prometheus-rulesEtcd.yaml) to your Prometheus config for KubeSphere 3.3 to work properly.
- Configure your Prometheus to scrape metrics from the same targets as that in [serviceMonitor](https://github.com/kubesphere/ks-prometheus/tree/release-3.3/manifests) of each component.
- Configure your Prometheus to scrape metrics from the same targets as the ServiceMonitors listed in [KubeSphere kustomization](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/kustomization.yaml).
{{</ notice >}}
1. Obtain `ks-prometheus` that KubeSphere v3.3.0 uses.
1. Get the KubeSphere 3.3 customized kube-prometheus.
```bash
cd ~ && git clone -b release-3.3 https://github.com/kubesphere/ks-prometheus.git && cd ks-prometheus
cd ~ && mkdir kubesphere && cd kubesphere && git clone https://github.com/kubesphere/kube-prometheus.git && cd kube-prometheus/kustomize
```
2. Configure `kustomization.yaml` and retain the following content only.
2. Change the namespace to the one in which your Prometheus stack is deployed. For example, it is `monitoring` if you installed Prometheus in the `monitoring` namespace following Step 2.
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: <your own namespace>
resources:
- ./manifests/alertmanager/alertmanager-secret.yaml
- ./manifests/etcd/prometheus-rulesEtcd.yaml
- ./manifests/kube-state-metrics/kube-state-metrics-serviceMonitor.yaml
- ./manifests/kubernetes/kubernetes-prometheusRule.yaml
- ./manifests/kubernetes/kubernetes-serviceKubeControllerManager.yaml
- ./manifests/kubernetes/kubernetes-serviceKubeScheduler.yaml
- ./manifests/kubernetes/kubernetes-serviceMonitorApiserver.yaml
- ./manifests/kubernetes/kubernetes-serviceMonitorCoreDNS.yaml
- ./manifests/kubernetes/kubernetes-serviceMonitorKubeControllerManager.yaml
- ./manifests/kubernetes/kubernetes-serviceMonitorKubeScheduler.yaml
- ./manifests/kubernetes/kubernetes-serviceMonitorKubelet.yaml
- ./manifests/node-exporter/node-exporter-serviceMonitor.yaml
- ./manifests/prometheus/prometheus-clusterRole.yaml
```bash
sed -i 's/my-namespace/<your own namespace>/g' kustomization.yaml
```
{{< notice note >}}
- Set the value of `namespace` to your own namespace in which the Prometheus stack is deployed. For example, it is `monitoring` if you install Prometheus in the `monitoring` namespace in Step 2.
- If you have enabled the alerting component for KubeSphere, retain `thanos-ruler` in the `kustomization.yaml` file.
{{</ notice >}}
3. Install the required components of KubeSphere.
3. Apply the KubeSphere customized components, including Prometheus rules, Alertmanager config, and various ServiceMonitors.
```bash
kubectl apply -k .
```
4. Find the Prometheus CR which is usually `k8s` in your own namespace.
4. Set up Services to expose kube-scheduler and kube-controller-manager metrics.
```bash
kubectl apply -f ./prometheus-serviceKubeScheduler.yaml
kubectl apply -f ./prometheus-serviceKubeControllerManager.yaml
```
5. Find the Prometheus CR, which is usually named `k8s`, in your own namespace.
```bash
kubectl -n <your own namespace> get prometheus
```
5. Set the Prometheus rule evaluation interval to 1m to be consistent with the KubeSphere v3.3.0 customized ServiceMonitor. The Rule evaluation interval should be greater than or equal to the scrape interval.
6. Set the Prometheus rule evaluation interval to 1m to be consistent with the KubeSphere 3.3 customized ServiceMonitors. The rule evaluation interval should be greater than or equal to the scrape interval.
```bash
kubectl -n <your own namespace> patch prometheus k8s --patch '{
@ -158,13 +164,13 @@ If your Prometheus stack setup isn't managed by Prometheus Operator, you can ski
Now that your own Prometheus stack is up and running, you can change KubeSphere's monitoring endpoint to use your own Prometheus.
1. Run the following command to edit `kubesphere-config`.
1. Edit `kubesphere-config` by running the following command:
```bash
kubectl edit cm -n kubesphere-system kubesphere-config
```
2. Navigate to the `monitoring endpoint` section, as shown in the following:
2. Navigate to the `monitoring endpoint` section, as shown below:
```bash
monitoring:
@ -178,20 +184,14 @@ Now that your own Prometheus stack is up and running, you can change KubeSphere'
endpoint: http://prometheus-operated.monitoring.svc:9090
```
4. If you have enabled the alerting component of KubeSphere, navigate to `prometheusEndpoint` and `thanosRulerEndpoint` of `alerting`, and change the values according to the following sample. KubeSphere APIServer will restart automatically to make your configurations take effect.
4. Run the following command to restart the KubeSphere APIServer.
```yaml
...
alerting:
...
prometheusEndpoint: http://prometheus-operated.monitoring.svc:9090
thanosRulerEndpoint: http://thanos-ruler-operated.monitoring.svc:10902
...
...
```bash
kubectl -n kubesphere-system rollout restart deployment/ks-apiserver
```
{{< notice warning >}}
If you enable/disable KubeSphere pluggable components following [this guide](../../../pluggable-components/overview/), the `monitoring endpoint` will be reset to the original value. In this case, you need to change it to the new one.
If you enable/disable KubeSphere pluggable components following [this guide](../../../pluggable-components/overview/), the `monitoring endpoint` will be reset to the original one. In this case, you have to change it to the new one and then restart the KubeSphere APIServer again.
{{</ notice >}}

View File

@ -19,7 +19,7 @@ This page contains some of the frequently asked questions about logging.
## How to change the log store to the external Elasticsearch and shut down the internal Elasticsearch
If you are using the KubeSphere internal Elasticsearch and want to change it to your external alternate, follow the steps below. If you haven't enabled the logging system, refer to [KubeSphere Logging System](../../../pluggable-components/logging/) to set up your external Elasticsearch directly.
If you are using the KubeSphere internal Elasticsearch and want to change it to your external alternative, follow the steps below. If you haven't enabled the logging system, refer to [KubeSphere Logging System](../../../pluggable-components/logging/) to set up your external Elasticsearch directly.
1. First, you need to update the KubeKey configuration. Execute the following command:
@ -27,7 +27,7 @@ If you are using the KubeSphere internal Elasticsearch and want to change it to
kubectl edit cc -n kubesphere-system ks-installer
```
2. Comment out `es.elasticsearchDataXXX`, `es.elasticsearchMasterXXX` and `status.logging`, and set `es.externalElasticsearchHost` to the address of your Elasticsearch and `es.externalElasticsearchPort` to its port number. Below is an example for your reference.
2. Comment out `es.elasticsearchDataXXX`, `es.elasticsearchMasterXXX` and `status.logging`, and set `es.externalElasticsearchUrl` to the address of your Elasticsearch and `es.externalElasticsearchPort` to its port number. Below is an example for your reference.
```yaml
apiVersion: installer.kubesphere.io/v1alpha1
@ -39,18 +39,14 @@ If you are using the KubeSphere internal Elasticsearch and want to change it to
spec:
...
common:
es: # Storage backend for logging, events and auditing.
# master:
# volumeSize: 4Gi # The volume size of Elasticsearch master nodes.
# replicas: 1 # The total number of master nodes. Even numbers are not allowed.
# resources: {}
# data:
# volumeSize: 20Gi # The volume size of Elasticsearch data nodes.
# replicas: 1 # The total number of data nodes.
# resources: {}
es:
# elasticsearchDataReplicas: 1
# elasticsearchDataVolumeSize: 20Gi
# elasticsearchMasterReplicas: 1
# elasticsearchMasterVolumeSize: 4Gi
elkPrefix: logstash
logMaxAge: 7
externalElasticsearchHost: <192.168.0.2>
externalElasticsearchUrl: <192.168.0.2>
externalElasticsearchPort: <9200>
...
status:
@ -90,9 +86,9 @@ Currently, KubeSphere doesn't support the integration of Elasticsearch with X-Pa
## How to set the data retention period of logs, events, auditing logs, and Istio logs
Before KubeSphere v3.3.0, you can only set the retention period of logs, which is 7 days by default. In KubeSphere v3.3.0, apart from logs, you can also set the data retention period of events, auditing logs, and Istio logs.
Before KubeSphere 3.3, you could only set the retention period of logs, which is 7 days by default. In KubeSphere 3.3, apart from logs, you can also set the data retention period of events, auditing logs, and Istio logs.
Perform the following to update the KubeKey configurations.
You need to update the KubeKey configuration and rerun `ks-installer`.
1. Execute the following command:
@ -100,7 +96,7 @@ Perform the following to update the KubeKey configurations.
kubectl edit cc -n kubesphere-system ks-installer
```
2. In the YAML file, if you only want to change the retention period of logs, you can directly change the default value of `logMaxAge` to a desired one. If you want to set the retention period of events, auditing logs, and Istio logs, add parameters `auditingMaxAge`, `eventMaxAge`, and `istioMaxAge` and set a value for them, respectively, as shown in the following example:
2. In the YAML file, if you only want to change the retention period of logs, you can directly change the default value of `logMaxAge` to a desired one. If you want to set the retention period of events, auditing logs, and Istio logs, you need to add parameters `auditingMaxAge`, `eventMaxAge`, and `istioMaxAge` and set a value for them, respectively, as shown in the following example:
```yaml
@ -122,27 +118,10 @@ Perform the following to update the KubeKey configurations.
...
```
{{< notice note >}}
If you have not set the retention period of events, auditing logs, and Istio logs, the value of `logMaxAge` is used by default.
{{</ notice >}}
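For reference, a minimal sketch of where these keys sit; the nesting under `spec.common.es` is inferred from the `es` block shown earlier in this FAQ, and the values are illustrative:
```yaml
spec:
  common:
    es:
      logMaxAge: 7          # days to keep logs
      auditingMaxAge: 2     # add these three keys yourself if needed
      eventMaxAge: 1
      istioMaxAge: 4
```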
3. Rerun `ks-installer`.
3. In the YAML file, delete the `es` parameter and save the changes. ks-installer will then restart automatically to make the changes take effect.
```yaml
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
name: ks-installer
namespace: kubesphere-system
...
status:
alerting:
enabledTime: 2022-08-11T06:22:01UTC
status: enabled
...
es: # delete this line.
enabledTime: 2022-08-11T06:22:01UTC # delete this line.
status: enabled # delete this line.
```bash
kubectl rollout restart deploy -n kubesphere-system ks-installer
```
## I cannot find logs from workloads on some nodes using Toolbox
@ -181,4 +160,4 @@ kubectl edit input -n kubesphere-logging-system tail
Update the field `Input.Spec.Tail.ExcludePath`. For example, set the path to `/var/log/containers/*_kube*-system_*.log` to exclude any log from system components.
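For reference, a sketch of the relevant part of the `tail` Input CR; only `excludePath` matters here, and the surrounding fields are assumptions based on the Fluent Bit tail plugin, so keep whatever values your cluster already has:
```yaml
apiVersion: logging.kubesphere.io/v1alpha2
kind: Input
metadata:
  name: tail
  namespace: kubesphere-logging-system
spec:
  tail:
    path: /var/log/containers/*.log                         # assumed existing value
    excludePath: /var/log/containers/*_kube*-system_*.log   # excludes system-component logs
```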
For more information, see [Fluent Bit Operator](https://github.com/kubesphere/fluentbit-operator).
For more information, see [Fluent Operator](https://github.com/fluent/fluent-operator).

View File

@ -77,9 +77,9 @@ All the other Resources will be placed in `MC_KubeSphereRG_KuberSphereCluster_we
To start deploying KubeSphere, use the following commands.
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml
```
You can inspect the logs of installation through the following command:

View File

@ -28,8 +28,8 @@ You need to select:
{{< notice note >}}
- To install KubeSphere 3.3.0 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
- 2 nodes are included in this example. You can add more nodes based on your own needs especially in a production environment.
- To install KubeSphere 3.3 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
- 2 nodes are included in this example. You can add more nodes based on your own needs, especially in a production environment.
- The machine type Standard / 4 GB / 2 vCPUs is for minimal installation. If you plan to enable several pluggable components or use the cluster for production, you can upgrade your nodes to a more powerful type (such as CPU-Optimized / 8 GB / 4 vCPUs). It seems that DigitalOcean provisions the control plane nodes based on the type of the worker nodes, and for Standard ones the API server can become unresponsive quite soon.
{{</ notice >}}
@ -45,9 +45,9 @@ Now that the cluster is ready, you can install KubeSphere following the steps be
- Install KubeSphere using kubectl. The following commands are only for the default minimal installation.
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml
```
- Inspect the logs of installation:

View File

@ -79,7 +79,7 @@ Check the installation with `aws --version`.
{{< notice note >}}
- To install KubeSphere 3.3.0 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
- To install KubeSphere 3.3 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
- 3 nodes are included in this example. You can add more nodes based on your own needs, especially in a production environment.
- The machine type t3.medium (2 vCPU, 4GB memory) is for minimal installation. If you want to enable pluggable components or use the cluster for production, please select a machine type with more resources.
- For other settings, you can change them as well based on your own needs or use the default value.
@ -125,9 +125,9 @@ We will use the kubectl command-line utility for communicating with the cluster
- Install KubeSphere using kubectl. The following commands are only for the default minimal installation.
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml
```
- Inspect the logs of installation:

View File

@ -30,7 +30,7 @@ This guide walks you through the steps of deploying KubeSphere on [Google Kubern
{{< notice note >}}
- To install KubeSphere 3.3.0 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
- To install KubeSphere 3.3 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
- 3 nodes are included in this example. You can add more nodes based on your own needs, especially in a production environment.
- The machine type e2-medium (2 vCPU, 4GB memory) is for minimal installation. If you want to enable pluggable components or use the cluster for production, please select a machine type with more resources.
- For other settings, you can change them as well based on your own needs or use the default value.
@ -46,9 +46,9 @@ This guide walks you through the steps of deploying KubeSphere on [Google Kubern
- Install KubeSphere using kubectl. The following commands are only for the default minimal installation.
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml
```
- Inspect the logs of installation:

View File

@ -14,7 +14,7 @@ This guide walks you through the steps of deploying KubeSphere on [Huawei CCE](
First, create a Kubernetes cluster based on the requirements below.
- To install KubeSphere 3.3.0 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
- To install KubeSphere 3.3 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
- Ensure the cloud computing network for your Kubernetes cluster works, or use an elastic IP when you use **Auto Create** or **Select Existing**. You can also configure the network after the cluster is created. Refer to [NAT Gateway](https://support.huaweicloud.com/en-us/productdesc-natgateway/en-us_topic_0086739762.html).
- Select `s3.xlarge.2` (4-core, 8 GB) for nodes and add more if necessary (3 or more nodes are required for a production environment).
@ -76,9 +76,9 @@ For how to set up or cancel a default StorageClass, refer to Kubernetes official
Use [ks-installer](https://github.com/kubesphere/ks-installer) to deploy KubeSphere on an existing Kubernetes cluster. Execute the following commands directly for a minimal installation:
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml
```
Go to **Workload** > **Pod**, and check the running status of the pods in the `kubesphere-system` namespace to follow the minimal deployment of KubeSphere. Check the `ks-console-xxxx` pod in that namespace to confirm that the KubeSphere console is available.
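If you prefer the command line, you can run an equivalent check; the label selector below follows the `app=ks-installer` convention used earlier in this commit, so adjust it if your installer pod is labeled differently:
```bash
kubectl get pod -n kubesphere-system
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}') -f
```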

View File

@ -30,7 +30,7 @@ This guide walks you through the steps of deploying KubeSphere on [Oracle Kubern
{{< notice note >}}
- To install KubeSphere 3.3.0 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
- To install KubeSphere 3.3 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
- It is recommended that you select **Public** for **Visibility Type**, which will assign a public IP address to every node. The IP address can be used later to access the web console of KubeSphere.
- In Oracle Cloud, a Shape is a template that determines the number of CPUs, amount of memory, and other resources that are allocated to an instance. `VM.Standard.E2.2 (2 CPUs and 16G Memory)` is used in this example. For more information, see [Standard Shapes](https://docs.cloud.oracle.com/en-us/iaas/Content/Compute/References/computeshapes.htm#vmshapes__vm-standard).
- 3 nodes are included in this example. You can add more nodes based on your own needs, especially in a production environment.
@ -68,9 +68,9 @@ This guide walks you through the steps of deploying KubeSphere on [Oracle Kubern
- Install KubeSphere using kubectl. The following commands are only for the default minimal installation.
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml
```
- Inspect the logs of installation:

View File

@ -29,9 +29,9 @@ After you make sure your existing Kubernetes cluster meets all the requirements,
1. Execute the following commands to start installation:
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml
```
2. Inspect the logs of installation:

View File

@ -8,7 +8,7 @@ weight: 4120
You can install KubeSphere on virtual machines and bare metal with Kubernetes also provisioned. In addition, KubeSphere can also be deployed on cloud-hosted and on-premises Kubernetes clusters as long as your Kubernetes cluster meets the prerequisites below.
- To install KubeSphere 3.3.0 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
- To install KubeSphere 3.3 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
- Available CPU > 1 core and memory > 2 GB. Only x86_64 CPUs are supported; Arm CPUs are not fully supported at present.
- A **default** StorageClass in your Kubernetes cluster is configured; use `kubectl get sc` to verify it (see the quick check after this list).
- The CSR signing feature is activated in kube-apiserver when it is started with the `--cluster-signing-cert-file` and `--cluster-signing-key-file` parameters. See [RKE installation issue](https://github.com/kubesphere/kubesphere/issues/1925#issuecomment-591698309).
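A quick way to verify the default StorageClass (the annotation key is standard Kubernetes; output columns may vary by version):
```bash
kubectl get sc
# The default class is marked "(default)" next to its name; to print just its name:
kubectl get sc -o jsonpath='{.items[?(@.metadata.annotations.storageclass\.kubernetes\.io/is-default-class=="true")].metadata.name}'
```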

View File

@ -89,7 +89,7 @@ As you install KubeSphere in an air-gapped environment, you need to prepare an i
1. Download the image list file `images-list.txt` from a machine that has access to the Internet through the following command:
```bash
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/images-list.txt
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/images-list.txt
```
{{< notice note >}}
@ -101,7 +101,7 @@ As you install KubeSphere in an air-gapped environment, you need to prepare an i
2. Download `offline-installation-tool.sh`.
```bash
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/offline-installation-tool.sh
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/offline-installation-tool.sh
```
3. Make the `.sh` file executable.
@ -124,7 +124,7 @@ As you install KubeSphere in an air-gapped environment, you need to prepare an i
-l IMAGES-LIST : text file with list of images.
-r PRIVATE-REGISTRY : target private registry:port.
-s : save model will be applied. Pull the images in the IMAGES-LIST and save images as a tar.gz file.
-v KUBERNETES-VERSION : download kubernetes' binaries. default: v1.22.10
-v KUBERNETES-VERSION : download kubernetes' binaries. default: v1.21.5
-h : usage message
```
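For example, to pull the images in the list and save them locally in save mode (flags as shown in the usage message above; additional flags, such as a target directory, may apply, so check the full usage output):
```bash
./offline-installation-tool.sh -s -l images-list.txt
```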
@ -161,8 +161,8 @@ Similar to installing KubeSphere on an existing Kubernetes cluster in an online
1. Execute the following commands to download these two files and transfer them to your machine that serves as the taskbox for installation.
```bash
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
```
2. Edit `cluster-configuration.yaml` to add your private image registry. For example, `dockerhub.kubekey.local` is the registry address in this tutorial, then use it as the value of `.spec.local_registry` as below:
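For reference, a minimal sketch of the field in question, using the registry address from this tutorial; the full file contains many other settings that should be left as they are:
```yaml
spec:
  ...
  local_registry: dockerhub.kubekey.local
```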
@ -242,37 +242,37 @@ To access the console, make sure port 30880 is opened in your security group.
## Appendix
### Image list of KubeSphere 3.3.0
### Image list of KubeSphere 3.3
```txt
##k8s-images
kubesphere/kube-apiserver:v1.23.7
kubesphere/kube-controller-manager:v1.23.7
kubesphere/kube-proxy:v1.23.7
kubesphere/kube-scheduler:v1.23.7
kubesphere/kube-apiserver:v1.24.1
kubesphere/kube-controller-manager:v1.24.1
kubesphere/kube-proxy:v1.24.1
kubesphere/kube-scheduler:v1.24.1
kubesphere/kube-apiserver:v1.22.10
kubesphere/kube-controller-manager:v1.22.10
kubesphere/kube-proxy:v1.22.10
kubesphere/kube-scheduler:v1.22.10
kubesphere/kube-apiserver:v1.21.13
kubesphere/kube-controller-manager:v1.21.13
kubesphere/kube-proxy:v1.21.13
kubesphere/kube-scheduler:v1.21.13
kubesphere/kube-apiserver:v1.23.10
kubesphere/kube-controller-manager:v1.23.10
kubesphere/kube-proxy:v1.23.10
kubesphere/kube-scheduler:v1.23.10
kubesphere/kube-apiserver:v1.24.3
kubesphere/kube-controller-manager:v1.24.3
kubesphere/kube-proxy:v1.24.3
kubesphere/kube-scheduler:v1.24.3
kubesphere/kube-apiserver:v1.22.12
kubesphere/kube-controller-manager:v1.22.12
kubesphere/kube-proxy:v1.22.12
kubesphere/kube-scheduler:v1.22.12
kubesphere/kube-apiserver:v1.21.14
kubesphere/kube-controller-manager:v1.21.14
kubesphere/kube-proxy:v1.21.14
kubesphere/kube-scheduler:v1.21.14
kubesphere/pause:3.7
kubesphere/pause:3.6
kubesphere/pause:3.5
kubesphere/pause:3.4.1
coredns/coredns:1.8.0
coredns/coredns:1.8.6
calico/cni:v3.20.0
calico/kube-controllers:v3.20.0
calico/node:v3.20.0
calico/pod2daemon-flexvol:v3.20.0
calico/typha:v3.20.0
calico/cni:v3.23.2
calico/kube-controllers:v3.23.2
calico/node:v3.23.2
calico/pod2daemon-flexvol:v3.23.2
calico/typha:v3.23.2
kubesphere/flannel:v0.12.0
openebs/provisioner-localpv:2.10.1
openebs/linux-utils:2.10.0
@ -280,10 +280,11 @@ library/haproxy:2.3
kubesphere/nfs-subdir-external-provisioner:v4.0.2
kubesphere/k8s-dns-node-cache:1.15.12
##kubesphere-images
kubesphere/ks-installer:v3.3.0
kubesphere/ks-apiserver:v3.3.0
kubesphere/ks-console:v3.3.0
kubesphere/ks-controller-manager:v3.3.0
kubesphere/ks-installer:v3.3.1
kubesphere/ks-apiserver:v3.3.1
kubesphere/ks-console:v3.3.1
kubesphere/ks-controller-manager:v3.3.1
kubesphere/ks-upgrade:v3.3.1
kubesphere/kubectl:v1.22.0
kubesphere/kubectl:v1.21.0
kubesphere/kubectl:v1.20.0
@ -307,11 +308,11 @@ kubesphere/edgeservice:v0.2.0
##gatekeeper-images
openpolicyagent/gatekeeper:v3.5.2
##openpitrix-images
kubesphere/openpitrix-jobs:v3.2.1
kubesphere/openpitrix-jobs:v3.3.1
##kubesphere-devops-images
kubesphere/devops-apiserver:v3.3.0
kubesphere/devops-controller:v3.3.0
kubesphere/devops-tools:v3.3.0
kubesphere/devops-apiserver:v3.3.1
kubesphere/devops-controller:v3.3.1
kubesphere/devops-tools:v3.3.1
kubesphere/ks-jenkins:v3.3.0-2.319.1
jenkins/inbound-agent:4.10-2
kubesphere/builder-base:v3.2.2
@ -360,7 +361,7 @@ prom/prometheus:v2.34.0
kubesphere/prometheus-config-reloader:v0.55.1
kubesphere/prometheus-operator:v0.55.1
kubesphere/kube-rbac-proxy:v0.11.0
kubesphere/kube-state-metrics:v2.3.0
kubesphere/kube-state-metrics:v2.5.0
prom/node-exporter:v1.3.1
prom/alertmanager:v0.23.0
thanosio/thanos:v0.25.2
@ -399,7 +400,6 @@ joosthofman/wget:1.0
nginxdemos/hello:plain-text
wordpress:4.8-apache
mirrorgooglecontainers/hpa-example:latest
java:openjdk-8-jre-alpine
fluent/fluentd:v1.4.2-2.0
perl:latest
kubesphere/examples-bookinfo-productpage-v1:1.16.2

View File

@ -21,55 +21,12 @@ This tutorial demonstrates how to add an edge node to your cluster.
## Prerequisites
- You have enabled [KubeEdge](../../../pluggable-components/kubeedge/).
- To prevent compatibility issues, you are advised to install Kubernetes v1.21.x or earlier.
- You have an available node to serve as an edge node. The node can run either Ubuntu (recommended) or CentOS. This tutorial uses Ubuntu 18.04 as an example.
- Edge nodes, unlike Kubernetes cluster nodes, should work in a separate network.
## Prevent non-edge workloads from being scheduled to edge nodes
Some DaemonSets (for example, Calico) have tolerations that let them run on every node. To ensure that newly added edge nodes work properly, run the following commands to manually patch these DaemonSets so that non-edge workloads are not scheduled to edge nodes.
```bash
#!/bin/bash
# Require the absence of the edge role label so these DaemonSets stay off edge nodes.
NoSchedulePatchJson='{"spec":{"template":{"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}}}}'
ns="kube-system"
DaemonSets=("nodelocaldns" "kube-proxy" "calico-node")
length=${#DaemonSets[@]}
for((i=0;i<length;i++));
do
    ds=${DaemonSets[$i]}
    echo "Patching DaemonSet/${ds} in namespace ${ns}"
    kubectl -n $ns patch DaemonSet/${ds} --type merge --patch "$NoSchedulePatchJson"
    sleep 1
done
```
## Create Firewall Rules and Port Forwarding Rules
To make sure edge nodes can successfully talk to your cluster, you must forward ports for outside traffic to get into your network. Specifically, map an external port to the corresponding internal IP address (control plane node) and port based on the table below. Besides, you also need to create firewall rules to allow traffic to these ports (`10000` to `10004`).
{{< notice note >}}
In `ClusterConfiguration` of the ks-installer, if you set an internal IP address, you need to set the forwarding rule. If you have not set the forwarding rule, you can directly connect to ports 30000 to 30004.
{{</ notice >}}
| Fields | External Ports | Fields | Internal Ports |
| ------------------- | -------------- | ----------------------- | -------------- |
| `cloudhubPort` | `10000` | `cloudhubNodePort` | `30000` |
| `cloudhubQuicPort` | `10001` | `cloudhubQuicNodePort` | `30001` |
| `cloudhubHttpsPort` | `10002` | `cloudhubHttpsNodePort` | `30002` |
| `cloudstreamPort` | `10003` | `cloudstreamNodePort` | `30003` |
| `tunnelPort` | `10004` | `tunnelNodePort` | `30004` |
## Configure an Edge Node
You need to configure the edge node as follows.
You need to install a container runtime and configure EdgeMesh on your edge node.
### Install a container runtime
@ -115,6 +72,22 @@ Perform the following steps to configure [EdgeMesh](https://kubeedge.io/en/docs/
net.ipv4.ip_forward = 1
```
## Create Firewall Rules and Port Forwarding Rules
To make sure edge nodes can successfully talk to your cluster, you must forward ports for outside traffic to get into your network. Specifically, map an external port to the corresponding internal IP address (control plane node) and port based on the table below. Besides, you also need to create firewall rules to allow traffic to these ports (`10000` to `10004`).
{{< notice note >}}
In `ClusterConfiguration` of the ks-installer, if you set an internal IP address, you need to set the forwarding rule. If you have not set the forwarding rule, you can directly connect to ports 30000 to 30004.
{{</ notice >}}
| Fields | External Ports | Fields | Internal Ports |
| ------------------- | -------------- | ----------------------- | -------------- |
| `cloudhubPort` | `10000` | `cloudhubNodePort` | `30000` |
| `cloudhubQuicPort` | `10001` | `cloudhubQuicNodePort` | `30001` |
| `cloudhubHttpsPort` | `10002` | `cloudhubHttpsNodePort` | `30002` |
| `cloudstreamPort` | `10003` | `cloudstreamNodePort` | `30003` |
| `tunnelPort` | `10004` | `tunnelNodePort` | `30004` |
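How you create these forwarding rules depends on your environment. On a plain Linux gateway, the mapping for the first row might look like the following iptables sketch, where `<control-plane-ip>` is a placeholder for your control plane node's internal IP address (repeat for ports 10001 to 10004):
```bash
# DNAT external port 10000 to the control plane node's NodePort 30000.
iptables -t nat -A PREROUTING -p tcp --dport 10000 -j DNAT --to-destination <control-plane-ip>:30000
# Make sure replies route back through the gateway.
iptables -t nat -A POSTROUTING -p tcp -d <control-plane-ip> --dport 30000 -j MASQUERADE
```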
## Add an Edge Node
1. Log in to the console as `admin` and click **Platform** in the upper-left corner.
@ -129,8 +102,6 @@ Perform the following steps to configure [EdgeMesh](https://kubeedge.io/en/docs/
3. Click **Add**. In the dialog that appears, set a node name and enter an internal IP address of your edge node. Click **Validate** to continue.
![add-edge-node](/images/docs/v3.3/installing-on-linux/add-and-delete-nodes/add-edge-nodes/add-edge-node.png)
{{< notice note >}}
- The internal IP address is only used for inter-node communication and you do not necessarily need to use the actual internal IP address of the edge node. As long as the IP address is successfully validated, you can use it.
@ -140,8 +111,6 @@ Perform the following steps to configure [EdgeMesh](https://kubeedge.io/en/docs/
4. Copy the command automatically created under **Edge Node Configuration Command** and run it on your edge node.
![edge-command](/images/docs/v3.3/installing-on-linux/add-and-delete-nodes/add-edge-nodes/edge-command.png)
{{< notice note >}}
Make sure `wget` is installed on your edge node before you run the command.
@ -200,7 +169,38 @@ To collect monitoring information on edge node, you need to enable `metrics_serv
systemctl restart edgecore.service
```
9. If you still cannot see the monitoring data, run the following command:
9. After an edge node joins your cluster, some Pods may be scheduled to it and remain in the `Pending` state. Due to the tolerations some DaemonSets (for example, Calico) have, you need to manually patch the owning workloads of those Pods so that they are no longer scheduled to the edge node.
```bash
#!/bin/bash
# Require the absence of the edge role label so patched workloads stay off edge nodes.
NoSchedulePatchJson='{"spec":{"template":{"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}}}}'
edgenode="edgenode"
if [ $1 ]; then
    edgenode="$1"
fi
# Find every Pod placed on the edge node, then patch its owning controller.
namespaces=($(kubectl get pods -A -o wide | egrep -i $edgenode | awk '{print $1}'))
pods=($(kubectl get pods -A -o wide | egrep -i $edgenode | awk '{print $2}'))
length=${#namespaces[@]}
for((i=0;i<$length;i++));
do
    ns=${namespaces[$i]}
    pod=${pods[$i]}
    resources=$(kubectl -n $ns describe pod $pod | grep "Controlled By" | awk '{print $3}')
    echo "Patching ${resources} in namespace ${ns}"
    kubectl -n $ns patch $resources --type merge --patch "$NoSchedulePatchJson"
    sleep 1
done
```
10. If you still cannot see the monitoring data, run the following command:
```bash
journalctl -u edgecore.service -b -r

View File

@ -48,7 +48,7 @@ You must create a load balancer in your environment to listen (also known as lis
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{</ tab >}}
@ -64,7 +64,7 @@ export KKZONE=cn
Run the following command to download KubeKey:
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{< notice note >}}
@ -79,7 +79,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n
{{< notice note >}}
The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version.
The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version.
{{</ notice >}}
@ -92,12 +92,12 @@ chmod +x kk
Create an example configuration file with default configurations. Here Kubernetes v1.22.10 is used as an example.
```bash
./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.22.10
./kk create config --with-kubesphere v3.3.1 --with-kubernetes v1.22.10
```
{{< notice note >}}
- Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
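After the generated configuration file (`config-sample.yaml` by default) has been edited to describe your hosts, the cluster is created from it, for example:

```bash
# config-sample.yaml is the default name of the file generated by ./kk create config.
./kk create cluster -f config-sample.yaml
```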

View File

@ -33,7 +33,7 @@ Refer to the following steps to download KubeKey.
Download KubeKey from [its GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or run the following command.
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{</ tab >}}
@ -49,7 +49,7 @@ export KKZONE=cn
Run the following command to download KubeKey:
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{< notice note >}}
@ -64,7 +64,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n
{{< notice note >}}
The preceding commands download the latest release of KubeKey (v2.2.2). You can modify the version number in the command to download a specific version.
The preceding commands download the latest release of KubeKey (v2.3.0). You can modify the version number in the command to download a specific version.
{{</ notice >}}
@ -77,12 +77,12 @@ chmod +x kk
Create an example configuration file with default configurations. Here Kubernetes v1.22.10 is used as an example.
```bash
./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.22.10
./kk create config --with-kubesphere v3.3.1 --with-kubernetes v1.22.10
```
{{< notice note >}}
- Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
@ -132,7 +132,7 @@ For more information about different fields in this configuration file, see [Kub
spec:
controlPlaneEndpoint:
##Internal loadbalancer for apiservers
internalLoadbalancer: haproxy
#internalLoadbalancer: haproxy
domain: lb.kubesphere.local
address: ""

View File

@ -268,7 +268,7 @@ Before you start to create your Kubernetes cluster, make sure you have tested th
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{</ tab >}}
@ -284,7 +284,7 @@ export KKZONE=cn
Run the following command to download KubeKey:
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{< notice note >}}
@ -299,7 +299,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n
{{< notice note >}}
The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version.
The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version.
{{</ notice >}}
@ -312,12 +312,12 @@ chmod +x kk
Create an example configuration file with default configurations. Here Kubernetes v1.22.10 is used as an example.
```bash
./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.22.10
./kk create config --with-kubesphere v3.3.1 --with-kubernetes v1.22.10
```
{{< notice note >}}
- Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.

View File

@ -15,12 +15,12 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides
|Host IP| Host Name | Usage |
| ---------------- | ---- | ---------------- |
|192.168.0.2 | node1 | Online host for packaging the source cluster with Kubernetes v1.22.10 and KubeSphere v3.3.0 installed |
|192.168.0.2 | node1 | Online host for packaging the source cluster with Kubernetes v1.22.10 and KubeSphere v3.3.1 installed |
|192.168.0.3 | node2 | Control plane node of the air-gapped environment |
|192.168.0.4 | node3 | Image registry node of the air-gapped environment |
## Preparations
1. Run the following commands to download KubeKey v2.2.2.
1. Run the following commands to download KubeKey v2.3.0.
{{< tabs >}}
{{< tab "Good network connections to GitHub/Googleapis" >}}
@ -28,7 +28,7 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{</ tab >}}
@ -44,7 +44,7 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides
Run the following command to download KubeKey:
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{</ tab >}}
@ -83,7 +83,7 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides
repository:
iso:
localPath:
url: https://github.com/kubesphere/kubekey/releases/download/v2.2.2/centos7-rpms-amd64.iso
url: https://github.com/kubesphere/kubekey/releases/download/v2.3.0/centos7-rpms-amd64.iso
- arch: amd64
type: linux
id: ubuntu
@ -91,13 +91,13 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides
repository:
iso:
localPath:
url: https://github.com/kubesphere/kubekey/releases/download/v2.2.2/ubuntu-20.04-debs-amd64.iso
url: https://github.com/kubesphere/kubekey/releases/download/v2.3.0/ubuntu-20.04-debs-amd64.iso
kubernetesDistributions:
- type: kubernetes
version: v1.22.10
version: v1.22.12
components:
helm:
version: v3.6.3
version: v3.9.0
cni:
version: v0.9.1
etcd:
@ -112,14 +112,14 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides
docker-registry:
version: "2"
harbor:
version: v2.4.1
version: v2.5.3
docker-compose:
version: v2.2.2
images:
- docker.io/kubesphere/kube-apiserver:v1.22.10
- docker.io/kubesphere/kube-controller-manager:v1.22.10
- docker.io/kubesphere/kube-proxy:v1.22.10
- docker.io/kubesphere/kube-scheduler:v1.22.10
- docker.io/kubesphere/kube-apiserver:v1.22.12
- docker.io/kubesphere/kube-controller-manager:v1.22.12
- docker.io/kubesphere/kube-proxy:v1.22.12
- docker.io/kubesphere/kube-scheduler:v1.22.12
- docker.io/kubesphere/pause:3.5
- docker.io/coredns/coredns:1.8.0
- docker.io/calico/cni:v3.23.2
@ -133,13 +133,14 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides
- docker.io/library/haproxy:2.3
- docker.io/kubesphere/nfs-subdir-external-provisioner:v4.0.2
- docker.io/kubesphere/k8s-dns-node-cache:1.15.12
- docker.io/kubesphere/ks-installer:v3.3.0
- docker.io/kubesphere/ks-apiserver:v3.3.0
- docker.io/kubesphere/ks-console:v3.3.0
- docker.io/kubesphere/ks-controller-manager:v3.3.0
- docker.io/kubesphere/kubectl:v1.20.0
- docker.io/kubesphere/kubectl:v1.21.0
- docker.io/kubesphere/ks-installer:v3.3.1
- docker.io/kubesphere/ks-apiserver:v3.3.1
- docker.io/kubesphere/ks-console:v3.3.1
- docker.io/kubesphere/ks-controller-manager:v3.3.1
- docker.io/kubesphere/ks-upgrade:v3.3.1
- docker.io/kubesphere/kubectl:v1.22.0
- docker.io/kubesphere/kubectl:v1.21.0
- docker.io/kubesphere/kubectl:v1.20.0
- docker.io/kubesphere/kubefed:v0.8.1
- docker.io/kubesphere/tower:v0.2.0
- docker.io/minio/minio:RELEASE.2019-08-07T01-59-21Z
@ -156,10 +157,11 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides
- docker.io/kubeedge/cloudcore:v1.9.2
- docker.io/kubeedge/iptables-manager:v1.9.2
- docker.io/kubesphere/edgeservice:v0.2.0
- docker.io/kubesphere/openpitrix-jobs:v3.2.1
- docker.io/kubesphere/devops-apiserver:v3.3.0
- docker.io/kubesphere/devops-controller:v3.3.0
- docker.io/kubesphere/devops-tools:v3.3.0
- docker.io/openpolicyagent/gatekeeper:v3.5.2
- docker.io/kubesphere/openpitrix-jobs:v3.3.1
- docker.io/kubesphere/devops-apiserver:v3.3.1
- docker.io/kubesphere/devops-controller:v3.3.1
- docker.io/kubesphere/devops-tools:v3.3.1
- docker.io/kubesphere/ks-jenkins:v3.3.0-2.319.1
- docker.io/jenkins/inbound-agent:4.10-2
- docker.io/kubesphere/builder-base:v3.2.2
@ -207,7 +209,7 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides
- docker.io/kubesphere/prometheus-config-reloader:v0.55.1
- docker.io/kubesphere/prometheus-operator:v0.55.1
- docker.io/kubesphere/kube-rbac-proxy:v0.11.0
- docker.io/kubesphere/kube-state-metrics:v2.3.0
- docker.io/kubesphere/kube-state-metrics:v2.5.0
- docker.io/prom/node-exporter:v1.3.1
- docker.io/prom/alertmanager:v0.23.0
- docker.io/thanosio/thanos:v0.25.2
@ -243,7 +245,6 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides
- docker.io/nginxdemos/hello:plain-text
- docker.io/library/wordpress:4.8-apache
- docker.io/mirrorgooglecontainers/hpa-example:latest
- docker.io/library/java:openjdk-8-jre-alpine
- docker.io/fluent/fluentd:v1.4.2-2.0
- docker.io/library/perl:latest
- docker.io/kubesphere/examples-bookinfo-productpage-v1:1.16.2
@ -264,7 +265,7 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides
- You can customize the **manifest-sample.yaml** file to export the desired artifact file.
- You can download the ISO files at https://github.com/kubesphere/kubekey/releases/tag/v2.2.2.
- You can download the ISO files at https://github.com/kubesphere/kubekey/releases/tag/v2.3.0.
{{</ notice >}}
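With the manifest customized, the offline installation package is typically exported on the online host before being moved to the air-gapped environment. A sketch, assuming the manifest is saved as `manifest-sample.yaml` and the package is named `kubesphere.tar.gz`:

```bash
./kk artifact export -m manifest-sample.yaml -o kubesphere.tar.gz
```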
@ -309,7 +310,7 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides
2. Run the following command to create a configuration file for the air-gapped cluster:
```bash
./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.22.10 -f config-sample.yaml
./kk create config --with-kubesphere v3.3.1 --with-kubernetes v1.22.10 -f config-sample.yaml
```
3. Run the following command to modify the configuration file:
@ -354,7 +355,7 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides
address: ""
port: 6443
kubernetes:
version: v1.22.10
version: v1.21.5
clusterName: cluster.local
network:
plugin: calico

View File

@ -38,7 +38,7 @@ With the configuration file in place, you execute the `./kk` command with varied
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{</ tab >}}
@ -54,7 +54,7 @@ export KKZONE=cn
Run the following command to download KubeKey:
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{< notice note >}}
@ -69,21 +69,21 @@ After you download KubeKey, if you transfer it to a new machine also with poor n
{{< notice note >}}
The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version.
The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version.
{{</ notice >}}
## Support Matrix
If you want to use KubeKey to install both Kubernetes and KubeSphere 3.3.0, see the following table of all supported Kubernetes versions.
If you want to use KubeKey to install both Kubernetes and KubeSphere 3.3, see the following table of all supported Kubernetes versions.
| KubeSphere version | Supported Kubernetes versions |
| ------------------ | ------------------------------------------------------------ |
| v3.3.0 | v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support) |
| v3.3.1 | v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support) |
{{< notice note >}}
- You can also run `./kk version --show-supported-k8s` to see all supported Kubernetes versions that can be installed by KubeKey.
- The Kubernetes versions that can be installed using KubeKey are different from the Kubernetes versions supported by KubeSphere v3.3.0. If you want to [install KubeSphere 3.3.0 on an existing Kubernetes cluster](../../../installing-on-kubernetes/introduction/overview/), your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
- If you want to use KubeEdge, you are advised to install Kubernetes v1.21.x or earlier to prevent compatibility issues.
- The Kubernetes versions that can be installed using KubeKey are different from the Kubernetes versions supported by KubeSphere 3.3. If you want to [install KubeSphere 3.3 on an existing Kubernetes cluster](../../../installing-on-kubernetes/introduction/overview/), your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
- If you want to use KubeEdge, you are advised to install Kubernetes v1.22.x or earlier to prevent compatibility issues.
{{</ notice >}}

View File

@ -110,7 +110,7 @@ Follow the step below to download [KubeKey](../kubekey).
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{</ tab >}}
@ -126,7 +126,7 @@ export KKZONE=cn
Run the following command to download KubeKey:
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{< notice note >}}
@ -141,7 +141,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n
{{< notice note >}}
The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version.
The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version.
{{</ notice >}}
@ -165,7 +165,7 @@ Command:
{{< notice note >}}
- Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../kubekey/#support-matrix).
- Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../kubekey/#support-matrix).
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
@ -180,7 +180,7 @@ Here are some examples for your reference:
```bash
./kk create config [-f ~/myfolder/abc.yaml]
```
- You can specify a KubeSphere version that you want to install (for example, `--with-kubesphere v3.3.0`).
- You can specify a KubeSphere version that you want to install (for example, `--with-kubesphere v3.3.1`).
```bash
./kk create config --with-kubesphere [version]
```
@ -254,13 +254,6 @@ At the same time, you must provide the login information used to connect to each
hosts:
- {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, privateKeyPath: "~/.ssh/id_rsa"}
```
- For installation on ARM devices:
```yaml
hosts:
- {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123, arch: arm64}
```
{{< notice tip >}}

View File

@ -10,7 +10,7 @@ When creating a Kubernetes cluster, you can use [KubeKey](../kubekey/) to define
```yaml
kubernetes:
version: v1.22.10
version: v1.21.5
imageRepo: kubesphere
clusterName: cluster.local
masqueradeAll: false
```
@ -45,7 +45,7 @@ The below table describes the above parameters in detail.
</tr>
<tr>
<td><code>version</code></td>
<td>The Kubernetes version to be installed. If you do not specify a Kubernetes version, {{< contentLink "docs/installing-on-linux/introduction/kubekey" "KubeKey" >}} v2.2.2 will install Kubernetes v1.23.7 by default. For more information, see {{< contentLink "docs/installing-on-linux/introduction/kubekey/#support-matrix" "Support Matrix" >}}.</td>
<td>The Kubernetes version to be installed. If you do not specify a Kubernetes version, {{< contentLink "docs/installing-on-linux/introduction/kubekey" "KubeKey" >}} v2.3.0 will install Kubernetes v1.23.7 by default. For more information, see {{< contentLink "docs/installing-on-linux/introduction/kubekey/#support-matrix" "Support Matrix" >}}.</td>
</tr>
<tr>
<td><code>imageRepo</code></td>
@ -111,7 +111,7 @@ The below table describes the above parameters in detail.
</tr>
<tr>
<td><code>privateRegistry</code>*</td>
<td>Configure a private image registry for air-gapped installation (for example, a Docker local registry or Harbor). For more information, see {{< contentLink "docs/installing-on-linux/introduction/air-gapped-installation/" "Air-gapped Installation on Linux" >}}.</td>
<td>Configure a private image registry for air-gapped installation (for example, a Docker local registry or Harbor). For more information, see {{< contentLink "docs/v3.3/installing-on-linux/introduction/air-gapped-installation/" "Air-gapped Installation on Linux" >}}.</td>
</tr>
</tbody>
</table>

View File

@ -32,7 +32,7 @@ Follow the step below to download [KubeKey](../../../installing-on-linux/introdu
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{</ tab >}}
@ -48,7 +48,7 @@ export KKZONE=cn
Run the following command to download KubeKey:
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{< notice note >}}
@ -63,7 +63,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n
{{< notice note >}}
The commands above download the latest release (v2.2.2) of KubeKey. Note that an earlier version of KubeKey cannot be used to install K3s.
The commands above download the latest release (v2.3.0) of KubeKey. Note that an earlier version of KubeKey cannot be used to install K3s.
{{</ notice >}}
@ -78,12 +78,12 @@ chmod +x kk
1. Create a configuration file of your cluster by running the following command:
```bash
./kk create config --with-kubernetes v1.21.4-k3s --with-kubesphere v3.3.0
./kk create config --with-kubernetes v1.21.4-k3s --with-kubesphere v3.3.1
```
{{< notice note >}}
KubeKey v2.2.2 supports the installation of K3s v1.21.4.
KubeKey v2.3.0 supports the installation of K3s v1.21.4.
{{</ notice >}}

View File

@ -199,7 +199,7 @@ Follow the step below to download KubeKey.
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{</ tab >}}
@ -215,7 +215,7 @@ export KKZONE=cn
Run the following command to download KubeKey:
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{< notice note >}}
@ -230,7 +230,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n
{{< notice note >}}
The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version.
The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version.
{{</ notice >}}
@ -244,15 +244,15 @@ chmod +x kk
With KubeKey, you can install Kubernetes and KubeSphere together. You have the option to create a multi-node cluster by customizing parameters in the configuration file.
Create a Kubernetes cluster with KubeSphere installed (for example, `--with-kubesphere v3.3.0`):
Create a Kubernetes cluster with KubeSphere installed (for example, `--with-kubesphere v3.3.1`):
```bash
./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0
./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.1
```
{{< notice note >}}
- Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- If you do not add the flag `--with-kubesphere` in the command above, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.

View File

@ -289,7 +289,7 @@ systemctl status -l keepalived
## Download KubeKey
[Kubekey](https://github.com/kubesphere/kubekey) is the brand-new installer which provides an easy, fast and flexible way to install Kubernetes and KubeSphere 3.3.0.
[Kubekey](https://github.com/kubesphere/kubekey) is the brand-new installer which provides an easy, fast and flexible way to install Kubernetes and KubeSphere 3.3.
Follow the step below to download KubeKey.
@ -300,7 +300,7 @@ Follow the step below to download KubeKey.
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{</ tab >}}
@ -316,7 +316,7 @@ export KKZONE=cn
Run the following command to download KubeKey:
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{< notice note >}}
@ -331,7 +331,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n
{{< notice note >}}
The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version.
The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version.
{{</ notice >}}
@ -345,15 +345,15 @@ chmod +x kk
With KubeKey, you can install Kubernetes and KubeSphere together. You have the option to create a multi-node cluster by customizing parameters in the configuration file.
Create a Kubernetes cluster with KubeSphere installed (for example, `--with-kubesphere v3.3.0`):
Create a Kubernetes cluster with KubeSphere installed (for example, `--with-kubesphere v3.3.1`):
```bash
./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0
./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.1
```
{{< notice note >}}
- Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
@ -398,7 +398,7 @@ spec:
address: "10.10.71.67"
port: 6443
kubernetes:
version: v1.22.10
version: v1.21.5
imageRepo: kubesphere
clusterName: cluster.local
masqueradeAll: false # masqueradeAll tells kube-proxy to SNAT everything if using the pure iptables proxy mode. [Default: false]
@ -422,8 +422,6 @@ spec:
localVolume:
storageClassName: local
---
---
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
@ -431,184 +429,70 @@ metadata:
```yaml
name: ks-installer
namespace: kubesphere-system
labels:
version: v3.3.0
version: v3.3.1
spec:
local_registry: ""
persistence:
storageClass: "" # If there is no default StorageClass in your cluster, you need to specify an existing StorageClass here.
storageClass: ""
authentication:
jwtSecret: "" # Keep the jwtSecret consistent with the Host Cluster. Retrieve the jwtSecret by executing "kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret" on the Host Cluster.
local_registry: "" # Add your private registry address if it is needed.
# dev_tag: "" # Add your kubesphere image tag you want to install, by default it's same as ks-installer release version.
jwtSecret: ""
etcd:
monitoring: false # Enable or disable etcd monitoring dashboard installation. You have to create a Secret for etcd before you enable it.
endpointIps: localhost # etcd cluster EndpointIps. It can be a bunch of IPs here.
port: 2379 # etcd port.
monitoring: true # Whether to install etcd monitoring dashboard
endpointIps: 192.168.0.7,192.168.0.8,192.168.0.9 # etcd cluster endpointIps
port: 2379 # etcd port
tlsEnable: true
common:
core:
console:
enableMultiLogin: true # Enable or disable simultaneous logins. It allows different users to log in with the same account at the same time.
port: 30880
type: NodePort
# apiserver: # Enlarge the apiserver and controller manager's resource requests and limits for the large cluster
# resources: {}
# controllerManager:
# resources: {}
redis:
enabled: false
enableHA: false
volumeSize: 2Gi # Redis PVC size.
openldap:
enabled: false
volumeSize: 2Gi # openldap PVC size.
minio:
volumeSize: 20Gi # Minio PVC size.
monitoring:
# type: external # Whether to specify the external prometheus stack, and need to modify the endpoint at the next line.
endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 # Prometheus endpoint to get metrics data.
GPUMonitoring: # Enable or disable the GPU-related metrics. If you enable this switch but have no GPU resources, Kubesphere will set it to zero.
enabled: false
gpu: # Install GPUKinds. The default GPU kind is nvidia.com/gpu. Other GPU kinds can be added here according to your needs.
kinds:
- resourceName: "nvidia.com/gpu"
resourceType: "GPU"
default: true
es: # Storage backend for logging, events and auditing.
# master:
# volumeSize: 4Gi # The volume size of Elasticsearch master nodes.
# replicas: 1 # The total number of master nodes. Even numbers are not allowed.
# resources: {}
# data:
# volumeSize: 20Gi # The volume size of Elasticsearch data nodes.
# replicas: 1 # The total number of data nodes.
# resources: {}
logMaxAge: 7 # Log retention time in built-in Elasticsearch. It is 7 days by default.
elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
basicAuth:
enabled: false
username: ""
password: ""
externalElasticsearchHost: ""
externalElasticsearchPort: ""
alerting: # (CPU: 0.1 Core, Memory: 100 MiB) It enables users to customize alerting policies to send messages to receivers in time with different time intervals and alerting levels to choose from.
enabled: false # Enable or disable the KubeSphere Alerting System.
# thanosruler:
# replicas: 1
# resources: {}
auditing: # Provide a security-relevant chronological set of records recording the sequence of activities happening on the platform, initiated by different tenants.
enabled: false # Enable or disable the KubeSphere Auditing Log System.
# operator:
# resources: {}
# webhook:
# resources: {}
devops: # (CPU: 0.47 Core, Memory: 8.6 G) Provide an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
enabled: false # Enable or disable the KubeSphere DevOps System.
# resources: {}
jenkinsMemoryLim: 2Gi # Jenkins memory limit.
jenkinsMemoryReq: 1500Mi # Jenkins memory request.
jenkinsVolumeSize: 8Gi # Jenkins volume size.
jenkinsJavaOpts_Xms: 1200m # The following three fields are JVM parameters.
jenkinsJavaOpts_Xmx: 1600m
jenkinsJavaOpts_MaxRAM: 2g
events: # Provide a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
enabled: false # Enable or disable the KubeSphere Events System.
# operator:
# resources: {}
# exporter:
# resources: {}
# ruler:
# enabled: true
# replicas: 2
# resources: {}
logging: # (CPU: 57 m, Memory: 2.76 G) Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
enabled: false # Enable or disable the KubeSphere Logging System.
logsidecar:
enabled: true
replicas: 2
# resources: {}
metrics_server: # (CPU: 56 m, Memory: 44.35 MiB) It enables HPA (Horizontal Pod Autoscaler).
enabled: false # Enable or disable metrics-server.
monitoring:
storageClass: "" # If there is an independent StorageClass you need for Prometheus, you can specify it here. The default StorageClass is used by default.
node_exporter:
port: 9100
# resources: {}
# kube_rbac_proxy:
# resources: {}
# kube_state_metrics:
# resources: {}
# prometheus:
# replicas: 1 # Prometheus replicas are responsible for monitoring different segments of data source and providing high availability.
# volumeSize: 20Gi # Prometheus PVC size.
# resources: {}
# operator:
# resources: {}
# alertmanager:
# replicas: 1 # AlertManager Replicas.
# resources: {}
# notification_manager:
# resources: {}
# operator:
# resources: {}
# proxy:
# resources: {}
gpu: # GPU monitoring-related plug-in installation.
nvidia_dcgm_exporter: # Ensure that gpu resources on your hosts can be used normally, otherwise this plug-in will not work properly.
enabled: false # Check whether the labels on the GPU hosts contain "nvidia.com/gpu.present=true" to ensure that the DCGM pod is scheduled to these nodes.
# resources: {}
multicluster:
clusterRole: none # host | member | none # You can install a solo cluster, or specify it as the Host or Member Cluster.
network:
networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
# Make sure that the CNI network plugin used by the cluster supports NetworkPolicy. There are a number of CNI network plugins that support NetworkPolicy, including Calico, Cilium, Kube-router, Romana and Weave Net.
enabled: false # Enable or disable network policies.
ippool: # Use Pod IP Pools to manage the Pod network address space. Pods to be created can be assigned IP addresses from a Pod IP Pool.
type: none # Specify "calico" for this field if Calico is used as your CNI plugin. "none" means that Pod IP Pools are disabled.
topology: # Use Service Topology to view Service-to-Service communication based on Weave Scope.
type: none # Specify "weave-scope" for this field to enable Service Topology. "none" means that Service Topology is disabled.
openpitrix: # An App Store that is accessible to all platform tenants. You can use it to manage apps across their entire lifecycle.
store:
enabled: false # Enable or disable the KubeSphere App Store.
servicemesh: # (0.3 Core, 300 MiB) Provide fine-grained traffic management, observability and tracing, and visualized traffic topology.
enabled: false # Base component (pilot). Enable or disable KubeSphere Service Mesh (Istio-based).
istio: # Customizing the istio installation configuration, refer to https://istio.io/latest/docs/setup/additional-setup/customize-installation/
components:
ingressGateways:
- name: istio-ingressgateway
enabled: false
cni:
enabled: false
edgeruntime: # Add edge nodes to your cluster and deploy workloads on edge nodes.
mysqlVolumeSize: 20Gi # MySQL PVC size
minioVolumeSize: 20Gi # Minio PVC size
etcdVolumeSize: 20Gi # etcd PVC size
openldapVolumeSize: 2Gi # openldap PVC size
redisVolumSize: 2Gi # Redis PVC size
es: # Storage backend for logging, tracing, events, and auditing.
elasticsearchMasterReplicas: 1 # Total number of master nodes; an even number is not allowed.
elasticsearchDataReplicas: 1 # Total number of data nodes.
elasticsearchMasterVolumeSize: 4Gi # Volume size of Elasticsearch master nodes.
elasticsearchDataVolumeSize: 20Gi # Volume size of Elasticsearch data nodes.
logMaxAge: 7 # Log retention time in the built-in Elasticsearch; 7 days by default.
elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
# externalElasticsearchUrl:
# externalElasticsearchPort:
console:
enableMultiLogin: false # Enable/disable multiple sign-on; it allows an account to be used by different users at the same time.
port: 30880
alerting: # Whether to install the KubeSphere alerting system. It enables users to customize alerting policies to send messages to receivers in time, with different time intervals and alerting levels to choose from.
enabled: false
auditing: # Whether to install the KubeSphere audit log system. It provides a security-relevant chronological set of records recording the sequence of activities that happened on the platform, initiated by different tenants.
enabled: false
devops: # Whether to install the KubeSphere DevOps system. It provides an out-of-the-box CI/CD system based on Jenkins, and automated workflow tools including Source-to-Image & Binary-to-Image.
enabled: false
jenkinsMemoryLim: 2Gi # Jenkins memory limit
jenkinsMemoryReq: 1500Mi # Jenkins memory request
jenkinsVolumeSize: 8Gi # Jenkins volume size
jenkinsJavaOpts_Xms: 512m # The following three fields are JVM parameters
jenkinsJavaOpts_Xmx: 512m
jenkinsJavaOpts_MaxRAM: 2g
events: # Whether to install KubeSphere events system. It provides a graphical web console for Kubernetes Events exporting, filtering and alerting in multi-tenant Kubernetes clusters.
enabled: false
logging: # Whether to install KubeSphere logging system. Flexible logging functions are provided for log query, collection and management in a unified console. Additional log collectors can be added, such as Elasticsearch, Kafka and Fluentd.
enabled: false
logsidecarReplicas: 2
metrics_server: # Whether to install metrics-server. It enables HPA (Horizontal Pod Autoscaler).
enabled: true
monitoring:
prometheusReplicas: 1 # Prometheus replicas are responsible for monitoring different segments of the data source and providing high availability.
prometheusMemoryRequest: 400Mi # Prometheus memory request.
prometheusVolumeSize: 20Gi # Prometheus PVC size
alertmanagerReplicas: 1 # AlertManager Replicas
multicluster:
clusterRole: none # host | member | none # You can install a solo cluster, or specify it as the role of host or member cluster
networkpolicy: # Network policies allow network isolation within the same cluster, which means firewalls can be set up between certain instances (Pods).
enabled: false
notification: # It supports notification management in multi-tenant Kubernetes clusters. It allows you to set AlertManager as its sender, and receivers include Email, WeChat Work, and Slack.
enabled: false
openpitrix: # Whether to install the KubeSphere App Store. It provides an application store for Helm-based applications and offers application lifecycle management.
enabled: false
servicemesh: # (0.3 Core, 300 MiB) Provides fine-grained traffic management, observability and tracing, and a visualized traffic topology.
enabled: false
kubeedge: # kubeedge configurations
enabled: false
cloudCore:
cloudHub:
advertiseAddress: # At least a public IP address, or an IP address that edge nodes can access, must be provided.
- "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
service:
cloudhubNodePort: "30000"
cloudhubQuicNodePort: "30001"
cloudhubHttpsNodePort: "30002"
cloudstreamNodePort: "30003"
tunnelNodePort: "30004"
# resources: {}
# hostNetWork: false
iptables-manager:
enabled: true
mode: "external"
# resources: {}
# edgeService:
# resources: {}
gatekeeper: # Provides admission policy and rule management. A validating (mutating TBA) webhook that enforces CRD-based policies executed by Open Policy Agent.
enabled: false # Enable or disable Gatekeeper.
# controller_manager:
# resources: {}
# audit:
# resources: {}
terminal:
# image: 'alpine:3.15' # There must be an nsenter program in the image
timeout: 600 # Container timeout in seconds; if set to 0, no timeout is used.
```
Create a cluster using the configuration file you customized above:

View File

@ -119,7 +119,7 @@ Follow the steps below to download [KubeKey](../../../installing-on-linux/introd
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{</ tab >}}
@ -135,7 +135,7 @@ export KKZONE=cn
Run the following command to download KubeKey:
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{< notice note >}}
@ -150,7 +150,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n
{{< notice note >}}
The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version.
The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version.
{{</ notice >}}
@ -165,12 +165,12 @@ chmod +x kk
1. Specify a Kubernetes version and a KubeSphere version that you want to install. For example:
```bash
./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0
./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.1
```
{{< notice note >}}
- Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
@ -205,7 +205,7 @@ chmod +x kk
address: ""
port: 6443
kubernetes:
version: v1.22.10
version: v1.21.5
imageRepo: kubesphere
clusterName: cluster.local
network:

View File

@ -11,7 +11,7 @@ This tutorial demonstrates how to set up a KubeSphere cluster and configure NFS
{{< notice note >}}
- Ubuntu 16.04 is used as an example in this tutorial.
- NFS is incompatible with some applications, for example, Prometheus, which may result in pod creation failures. If you need to use NFS in the production environment, ensure that you have understood the risks. For more information, contact support@kubesphere.cloud.
- It is not recommended that you use NFS storage for production (especially on Kubernetes 1.20 or later), as issues such as `failed to obtain lock` and `input/output error` may occur, resulting in Pod `CrashLoopBackOff`. In addition, some apps may not be compatible with NFS, including [Prometheus](https://github.com/prometheus/prometheus/blob/03b354d4d9386e4b3bfbcd45da4bb58b182051a5/docs/storage.md#operational-aspects).
{{</ notice >}}
@ -71,7 +71,7 @@ Follow the steps below to download [KubeKey](../../../installing-on-linux/introd
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{</ tab >}}
@ -87,7 +87,7 @@ export KKZONE=cn
Run the following command to download KubeKey:
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{< notice note >}}
@ -102,7 +102,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n
{{< notice note >}}
The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version.
The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version.
{{</ notice >}}
@ -117,12 +117,12 @@ chmod +x kk
1. Specify a Kubernetes version and a KubeSphere version that you want to install. For example:
```bash
./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0
./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.1
```
{{< notice note >}}
- Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
@ -157,7 +157,7 @@ chmod +x kk
address: ""
port: 6443
kubernetes:
version: v1.22.10
version: v1.21.5
imageRepo: kubesphere
clusterName: cluster.local
network:

View File

@ -73,7 +73,7 @@ Follow the steps below to download [KubeKey](../../../installing-on-linux/introd
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{</ tab >}}
@ -89,7 +89,7 @@ export KKZONE=cn
Run the following command to download KubeKey:
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{< notice note >}}
@ -104,7 +104,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n
{{< notice note >}}
The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version.
The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version.
{{</ notice >}}
@ -119,12 +119,12 @@ chmod +x kk
1. Specify a Kubernetes version and a KubeSphere version that you want to install. For example:
```bash
./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0
./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.1
```
{{< notice note >}}
- Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
@ -159,7 +159,7 @@ chmod +x kk
address: ""
port: 6443
kubernetes:
version: v1.22.10
version: v1.21.5
imageRepo: kubesphere
clusterName: cluster.local
network:

View File

@ -101,7 +101,7 @@ ssh -i .ssh/id_rsa2 -p50200 kubesphere@40.81.5.xx
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly:
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{</ tab >}}
@ -117,7 +117,7 @@ export KKZONE=cn
Run the following command to download KubeKey:
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{< notice note >}}
@ -132,7 +132,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n
{{< notice note >}}
The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version.
The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version.
{{</ notice >}}
@ -145,12 +145,12 @@ The commands above download the latest release (v2.2.2) of KubeKey. You can chan
2. Create an example configuration file with default configurations. Here Kubernetes v1.22.10 is used as an example.
```bash
./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.22.10
./kk create config --with-kubesphere v3.3.1 --with-kubernetes v1.22.10
```
{{< notice note >}}
- Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.

View File

@ -126,7 +126,7 @@ Follow the step below to download KubeKey.
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{</ tab >}}
@ -142,7 +142,7 @@ export KKZONE=cn
Run the following command to download KubeKey:
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{< notice note >}}
@ -157,7 +157,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n
{{< notice note >}}
The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version.
The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version.
{{</ notice >}}
@ -170,12 +170,12 @@ chmod +x kk
Create an example configuration file with default configurations. Here Kubernetes v1.22.10 is used as an example.
```bash
./kk create config --with-kubesphere v3.3.0 --with-kubernetes v1.22.10
./kk create config --with-kubesphere v3.3.1 --with-kubernetes v1.22.10
```
{{< notice note >}}
- Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../../installing-on-linux/introduction/kubekey/#support-matrix).
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed unless you install it using the `addons` field in the configuration file or add this flag again when you use `./kk create cluster` later.
- If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.

View File

@ -42,5 +42,3 @@ KubeSphere separates [frontend](https://github.com/kubesphere/console) from [bac
## Service Components
Each component has many services. See [Overview](../../pluggable-components/overview/) for more details.
![Service Components](https://pek3b.qingstor.com/kubesphere-docs/png/20191017163549.png)

View File

@ -29,7 +29,7 @@ The following modules elaborate on the key features and benefits provided by Kub
KubeSphere provides a graphical web console, giving users a clear view of a variety of Kubernetes resources, including Pods and containers, clusters and nodes, workloads, secrets and ConfigMaps, services and Ingress, jobs and CronJobs, and applications. With wizard user interfaces, users can easily interact with these resources for service discovery, HPA, image management, scheduling, high availability implementation, container health check and more.
As KubeSphere 3.3.0 features enhanced observability, users are able to keep track of resources from multi-tenant perspectives, such as custom monitoring, events, auditing logs, alerts and notifications.
As KubeSphere 3.3 features enhanced observability, users are able to keep track of resources from multi-tenant perspectives, such as custom monitoring, events, auditing logs, alerts and notifications.
### Cluster Upgrade and Scaling

View File

@ -1,13 +0,0 @@
---
title: "What's New in 3.3.0"
keywords: 'Kubernetes, KubeSphere, new features'
description: "What's New in 3.3.0"
linkTitle: "What's New in 3.3.0"
weight: 1400
---
In June 2022, KubeSphere 3.3.0 has been released with more exciting features. This release introduces GitOps-based continuous deployment and supports Git-based code repository management to further optimize the DevOps feature. Moreover, it also provides enhanced features of storage, multi-tenancy, multi-cluster, observability, app store, service mesh, and edge computing, to further perfect the interactive design for better user experience.
If you want to know details about new feature of KubeSphere 3.3.0, you can read the article [KubeSphere 3.3.0: Embrace GitOps](/../../../news/kubesphere-3.3.0-ga-announcement/).
In addition to the above highlights, this release also features other functionality upgrades and fixes the known bugs. There were some deprecated or removed features in 3.3.0. For more and detailed information, see the [Release Notes for 3.3.0](../../../v3.3/release/release-v330/).

View File

@ -0,0 +1,13 @@
---
title: "What's New in 3.3"
keywords: 'Kubernetes, KubeSphere, new features'
description: "What's New in 3.3"
linkTitle: "What's New in 3.3"
weight: 1400
---
In June 2022, KubeSphere 3.3 was released with more exciting features. This release introduces GitOps-based continuous deployment and supports Git-based code repository management to further optimize the DevOps features. It also enhances storage, multi-tenancy, multi-cluster management, observability, the App Store, service mesh, and edge computing, and further refines the interactive design for a better user experience.
For details about the new features in KubeSphere 3.3, see the article [KubeSphere 3.3.0: Embrace GitOps](/../../../news/kubesphere-3.3.0-ga-announcement/).
In addition to the above highlights, this release also includes other functionality upgrades and fixes known bugs. Some features were deprecated or removed in 3.3. For more detailed information, see the [Release Notes for 3.3.0](../../../v3.3/release/release-v330/).

View File

@ -39,9 +39,9 @@ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/),
### Installing on Kubernetes
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable Alerting first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file.
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable Alerting first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it.
```bash
vi cluster-configuration.yaml
@ -57,7 +57,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
3. Execute the following commands to start installation:
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```

View File

@ -44,9 +44,9 @@ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/),
### Installing on Kubernetes
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the KubeSphere App Store first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file.
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the KubeSphere App Store first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it.
```bash
vi cluster-configuration.yaml
@ -63,7 +63,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
3. Run the following commands to start installation:
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```
@ -109,7 +109,7 @@ After you log in to the console, if you can see **App Store** in the upper-left
{{< notice note >}}
- You can even access the App Store without logging in to the console by visiting `<Node IP Address>:30880/apps`.
- The **OpenPitrix** tab in KubeSphere 3.3.0 does not appear on the **System Components** page after the App Store is enabled.
- The **OpenPitrix** tab in KubeSphere 3.3 does not appear on the **System Components** page after the App Store is enabled.
{{</ notice >}}
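As a quick sanity check (a sketch only; `<node-ip>` is a placeholder for one of your node addresses):

```bash
# Expect an HTTP 200 response from the console's App Store route.
curl -I http://<node-ip>:30880/apps
```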

View File

@ -34,7 +34,7 @@ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/),
```
{{< notice note >}}
By default, KubeKey will install Elasticsearch internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Auditing, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one.
By default, KubeKey will install Elasticsearch internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Auditing, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one.
{{</ notice >}}
```yaml
@ -45,7 +45,7 @@ By default, KubeKey will install Elasticsearch internally if Auditing is enabled
elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
externalElasticsearchHost: # The Host of external Elasticsearch.
externalElasticsearchUrl: # The Host of external Elasticsearch.
externalElasticsearchPort: # The port of external Elasticsearch.
```
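Before installation, it may be worth confirming that the external endpoint is reachable from a cluster node. A minimal sketch, assuming a placeholder host and the default Elasticsearch port 9200:

```bash
# A reachable Elasticsearch answers with a short JSON banner.
curl -s http://192.168.0.2:9200
```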
@ -57,9 +57,9 @@ By default, KubeKey will install Elasticsearch internally if Auditing is enabled
### Installing on Kubernetes
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere Auditing first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file.
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere Auditing first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it.
```bash
vi cluster-configuration.yaml
@ -73,7 +73,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
```
{{< notice note >}}
By default, ks-installer will install Elasticsearch internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in `cluster-configuration.yaml` if you want to enable Auditing, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information before installation, ks-installer will integrate your external Elasticsearch directly instead of installing an internal one.
By default, ks-installer will install Elasticsearch internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in `cluster-configuration.yaml` if you want to enable Auditing, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, ks-installer will integrate your external Elasticsearch directly instead of installing an internal one.
{{</ notice >}}
```yaml
@ -84,14 +84,14 @@ By default, ks-installer will install Elasticsearch internally if Auditing is en
elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
externalElasticsearchHost: # The Host of external Elasticsearch.
externalElasticsearchUrl: # The Host of external Elasticsearch.
externalElasticsearchPort: # The port of external Elasticsearch.
```
3. Execute the following commands to start installation:
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```
@ -116,7 +116,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
```
{{< notice note >}}
By default, Elasticsearch will be installed internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Auditing, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one.
By default, Elasticsearch will be installed internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Auditing, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one.
{{</ notice >}}
```yaml
@ -127,7 +127,7 @@ By default, Elasticsearch will be installed internally if Auditing is enabled. F
elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
externalElasticsearchHost: # The Host of external Elasticsearch.
externalElasticsearchUrl: # The Host of external Elasticsearch.
externalElasticsearchPort: # The port of external Elasticsearch.
```

View File

@ -43,9 +43,9 @@ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/),
### Installing on Kubernetes
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere DevOps first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file.
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere DevOps first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it.
```bash
vi cluster-configuration.yaml
@ -61,7 +61,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
3. Run the following commands to start installation:
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```

View File

@ -36,7 +36,7 @@ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/),
```
{{< notice note >}}
By default, KubeKey will install Elasticsearch internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Events, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one.
By default, KubeKey will install Elasticsearch internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Events, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one.
{{</ notice >}}
```yaml
@ -47,7 +47,7 @@ By default, KubeKey will install Elasticsearch internally if Events is enabled.
elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
externalElasticsearchHost: # The Host of external Elasticsearch.
externalElasticsearchUrl: # The Host of external Elasticsearch.
externalElasticsearchPort: # The port of external Elasticsearch.
```
@ -59,9 +59,9 @@ By default, KubeKey will install Elasticsearch internally if Events is enabled.
### Installing on Kubernetes
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere Events first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file.
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere Events first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it.
```bash
vi cluster-configuration.yaml
@ -75,7 +75,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
```
{{< notice note >}}
By default, ks-installer will install Elasticsearch internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in `cluster-configuration.yaml` if you want to enable Events, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information before installation, ks-installer will integrate your external Elasticsearch directly instead of installing an internal one.
By default, ks-installer will install Elasticsearch internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in `cluster-configuration.yaml` if you want to enable Events, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, ks-installer will integrate your external Elasticsearch directly instead of installing an internal one.
{{</ notice >}}
```yaml
@ -86,14 +86,14 @@ By default, ks-installer will install Elasticsearch internally if Events is enab
elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
externalElasticsearchHost: # The Host of external Elasticsearch.
externalElasticsearchUrl: # The Host of external Elasticsearch.
externalElasticsearchPort: # The port of external Elasticsearch.
```
3. Execute the following commands to start installation:
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```
@ -121,7 +121,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
{{< notice note >}}
By default, Elasticsearch will be installed internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Events, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one.
By default, Elasticsearch will be installed internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Events, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one.
{{</ notice >}}
```yaml
@ -132,7 +132,7 @@ By default, Elasticsearch will be installed internally if Events is enabled. For
elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
externalElasticsearchHost: # The Host of external Elasticsearch.
externalElasticsearchUrl: # The Host of external Elasticsearch.
externalElasticsearchPort: # The port of external Elasticsearch.
```

View File

@ -34,21 +34,21 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
```yaml
edgeruntime: # Add edge nodes to your cluster and deploy workloads on edge nodes.
enabled: false
kubeedge: # kubeedge configurations
enabled: false
cloudCore:
cloudHub:
advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
enabled: false
kubeedge: # kubeedge configurations
enabled: false
cloudCore:
cloudHub:
advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
- "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
service:
cloudhubNodePort: "30000"
cloudhubQuicNodePort: "30001"
cloudhubHttpsNodePort: "30002"
cloudstreamNodePort: "30003"
tunnelNodePort: "30004"
# resources: {}
# hostNetWork: false
service:
cloudhubNodePort: "30000"
cloudhubQuicNodePort: "30001"
cloudhubHttpsNodePort: "30002"
cloudstreamNodePort: "30003"
tunnelNodePort: "30004"
# resources: {}
# hostNetWork: false
```
3. Set the value of `kubeedge.cloudCore.cloudHub.advertiseAddress` to the public IP address of your cluster or an IP address that can be accessed by edge nodes. Save the file when you finish editing.
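As an illustrative shortcut, assuming the empty `- ""` entry under `advertiseAddress` is the only such entry in the file, you could fill in a placeholder address from the command line:

```bash
# Naive sketch: 203.0.113.10 stands in for an address your edge nodes can reach.
sed -i 's/- ""/- "203.0.113.10"/' config-sample.yaml
grep -A 1 "advertiseAddress" config-sample.yaml   # verify the change
```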
@ -61,13 +61,9 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
### Installing on Kubernetes
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeEdge first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file.
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeEdge first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file.
{{< notice note >}}
To prevent compatibility issues, you are advised to install Kubernetes v1.21.x or earlier.
{{</ notice >}}
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it.
```bash
vi cluster-configuration.yaml
@ -75,31 +71,31 @@ To prevent compatability issues, you are advised to install Kubernetes v1.21.x o
2. In this local `cluster-configuration.yaml` file, navigate to `edgeruntime` and `kubeedge`, and change the value of `enabled` from `false` to `true` to enable all KubeEdge components. Click **OK**.
```yaml
```yaml
edgeruntime: # Add edge nodes to your cluster and deploy workloads on edge nodes.
enabled: false
kubeedge: # kubeedge configurations
enabled: false
cloudCore:
cloudHub:
advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
enabled: false
kubeedge: # kubeedge configurations
enabled: false
cloudCore:
cloudHub:
advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
- "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
service:
cloudhubNodePort: "30000"
cloudhubQuicNodePort: "30001"
cloudhubHttpsNodePort: "30002"
cloudstreamNodePort: "30003"
tunnelNodePort: "30004"
# resources: {}
# hostNetWork: false
```
service:
cloudhubNodePort: "30000"
cloudhubQuicNodePort: "30001"
cloudhubHttpsNodePort: "30002"
cloudstreamNodePort: "30003"
tunnelNodePort: "30004"
# resources: {}
# hostNetWork: false
```
3. Set the value of `kubeedge.cloudCore.cloudHub.advertiseAddress` to the public IP address of your cluster or an IP address that can be accessed by edge nodes.
4. Save the file and execute the following commands to start installation:
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```
@ -118,24 +114,24 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
4. In this YAML file, navigate to `edgeruntime` and `kubeedge`, and change the value of `enabled` from `false` to `true` to enable all KubeEdge components. Click **OK**.
```yaml
```yaml
edgeruntime: # Add edge nodes to your cluster and deploy workloads on edge nodes.
enabled: false
kubeedge: # kubeedge configurations
enabled: false
cloudCore:
cloudHub:
advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
enabled: false
kubeedge: # kubeedge configurations
enabled: false
cloudCore:
cloudHub:
advertiseAddress: # At least a public IP address or an IP address which can be accessed by edge nodes must be provided.
- "" # Note that once KubeEdge is enabled, CloudCore will malfunction if the address is not provided.
service:
cloudhubNodePort: "30000"
cloudhubQuicNodePort: "30001"
cloudhubHttpsNodePort: "30002"
cloudstreamNodePort: "30003"
tunnelNodePort: "30004"
# resources: {}
# hostNetWork: false
```
service:
cloudhubNodePort: "30000"
cloudhubQuicNodePort: "30001"
cloudhubHttpsNodePort: "30002"
cloudstreamNodePort: "30003"
tunnelNodePort: "30004"
# resources: {}
# hostNetWork: false
```
5. Set the value of `kubeedge.cloudCore.cloudHub.advertiseAddress` to the public IP address of your cluster or an IP address that can be accessed by edge nodes. After you finish, click **OK** in the lower-right corner to save the configuration.

View File

@ -35,9 +35,14 @@ When you install KubeSphere on Linux, you need to create a configuration file, w
```yaml
logging:
enabled: true # Change "false" to "true".
containerruntime: docker
```
{{< notice note >}}By default, KubeKey will install Elasticsearch internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Logging, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one.
{{< notice info >}}To use containerd as the container runtime, change the value of the field `containerruntime` to `containerd`. If you upgraded to KubeSphere 3.3 from an earlier version, you have to manually add the `containerruntime` field under `logging` when enabling the KubeSphere Logging system.
{{</ notice >}}
{{< notice note >}}By default, KubeKey will install Elasticsearch internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Logging, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one.
{{</ notice >}}
```yaml
@ -48,7 +53,7 @@ When you install KubeSphere on Linux, you need to create a configuration file, w
elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
externalElasticsearchHost: # The Host of external Elasticsearch.
externalElasticsearchUrl: # The Host of external Elasticsearch.
externalElasticsearchPort: # The port of external Elasticsearch.
```
@ -60,9 +65,9 @@ When you install KubeSphere on Linux, you need to create a configuration file, w
### Installing on Kubernetes
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere Logging first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file.
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere Logging first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it.
```bash
vi cluster-configuration.yaml
@ -73,9 +78,14 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
```yaml
logging:
enabled: true # Change "false" to "true".
containerruntime: docker
```
{{< notice note >}}By default, ks-installer will install Elasticsearch internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in `cluster-configuration.yaml` if you want to enable Logging, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information before installation, ks-installer will integrate your external Elasticsearch directly instead of installing an internal one.
{{< notice info >}}To use containerd as the container runtime, change the value of the field `.logging.containerruntime` to `containerd`. If you upgraded to KubeSphere 3.3 from an earlier version, you have to manually add the `containerruntime` field under `logging` when enabling the KubeSphere Logging system.
{{</ notice >}}
{{< notice note >}}By default, ks-installer will install Elasticsearch internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in `cluster-configuration.yaml` if you want to enable Logging, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, ks-installer will integrate your external Elasticsearch directly instead of installing an internal one.
{{</ notice >}}
```yaml
@ -86,14 +96,14 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
externalElasticsearchHost: # The Host of external Elasticsearch.
externalElasticsearchUrl: # The Host of external Elasticsearch.
externalElasticsearchPort: # The port of external Elasticsearch.
```
3. Execute the following commands to start installation:
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```
@ -117,9 +127,14 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
```yaml
logging:
enabled: true # Change "false" to "true".
containerruntime: docker
```
{{< notice note >}}By default, Elasticsearch will be installed internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Logging, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one.
{{< notice info >}}To use containerd as the container runtime, change the value of the field `.logging.containerruntime` to `containerd`. If you upgraded to KubeSphere 3.3 from an earlier version, you have to manually add the `containerruntime` field under `logging` when enabling the KubeSphere Logging system.
{{</ notice >}}
{{< notice note >}}By default, Elasticsearch will be installed internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Logging, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one.
{{</ notice >}}
```yaml
@ -130,7 +145,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource
elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes.
logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default.
elkPrefix: logstash # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
externalElasticsearchHost: # The Host of external Elasticsearch.
externalElasticsearchUrl: # The Host of external Elasticsearch.
externalElasticsearchPort: # The port of external Elasticsearch.
```

View File

@ -39,9 +39,9 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
### Installing on Kubernetes
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the Metrics Server first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file.
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the Metrics Server first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it.
```bash
vi cluster-configuration.yaml
@ -57,7 +57,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
3. Execute the following commands to start installation:
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```

View File

@ -49,9 +49,9 @@ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/),
### Installing on Kubernetes
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the Network Policy first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file.
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable the Network Policy first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it.
```bash
vi cluster-configuration.yaml
@ -68,7 +68,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
3. Execute the following commands to start installation:
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```

View File

@ -40,9 +40,9 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
### Installing on Kubernetes
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable Pod IP Pools first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file.
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable Pod IP Pools first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it.
```bash
vi cluster-configuration.yaml
@ -59,7 +59,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
3. Execute the following commands to start installation:
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```

View File

@ -53,9 +53,9 @@ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/),
### Installing on Kubernetes
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere Service Mesh first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file.
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable KubeSphere Service Mesh first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it.
```bash
vi cluster-configuration.yaml
@ -78,7 +78,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
3. Run the following commands to start installation:
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```

View File

@ -40,9 +40,9 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
### Installing on Kubernetes
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable Service Topology first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file.
As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introduction/overview/), you can enable Service Topology first in the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) and edit it.
1. Download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it.
```bash
vi cluster-configuration.yaml
@ -59,7 +59,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu
3. Execute the following commands to start installation:
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```

View File

@ -8,12 +8,6 @@ Weight: 6940
After you [enable the pluggable components of KubeSphere](../../pluggable-components/), you can also uninstall them by performing the following steps. Please back up any necessary data before you uninstall these components.
{{< notice note >}}
The methods of uninstalling certain pluggable components on KubeSphere 3.3.0 are different from the methods on KubeSphere v3.0.0. For more information about the uninstallation methods on KubeSphere v3.0.0, see [Uninstall Pluggable Components from KubeSphere](https://v3-0.docs.kubesphere.io/docs/faq/installation/uninstall-pluggable-components/).
{{</ notice >}}
## Prerequisites
You have to change the value of the field `enabled` from `true` to `false` in `ks-installer` of the CRD `ClusterConfiguration` before you uninstall any pluggable components except Service Topology and Pod IP Pools.
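For example, a minimal sketch (the component key, here `devops`, is illustrative):

```bash
# Open the ClusterConfiguration for editing, then set the target component's
# "enabled" field to false, for example devops.enabled: false.
kubectl -n kubesphere-system edit clusterconfiguration ks-installer
```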
@ -128,7 +122,7 @@ Change the value of `openpitrix.store.enabled` from `true` to `false` in `ks-ins
{{< notice note >}}
Notification is installed in KubeSphere 3.3.0 by default, so you do not need to uninstall it.
Notification is installed in KubeSphere 3.3 by default, so you do not need to uninstall it.
{{</ notice >}}

View File

@ -152,7 +152,7 @@ If egress traffic is controlled, you should have a clear plan of what projects,
Q: Why can't the KubeSphere custom monitoring system get data after I enable network isolation?
A: After you enable custom monitoring, the KubeSphere monitoring system will access the metrics of the pod. You need to allow ingress traffic for the KubeSphere monitoring system. Otherwise, it cannot access pod metrics.
A: After you enable custom monitoring, the KubeSphere monitoring system will access the metrics of the Pod. You need to allow ingress traffic for the KubeSphere monitoring system. Otherwise, it cannot access Pod metrics.
KubeSphere provides the configuration item `allowedIngressNamespaces` to simplify such settings: ingress traffic is allowed from all projects listed in the configuration.

View File

@ -48,11 +48,15 @@ A Route on KubeSphere is the same as an [Ingress](https://kubernetes.io/docs/con
2. Select a mode, configure routing rules, click **√**, and click **Next**. An equivalent Ingress manifest is sketched after the following list.
**Domain Name**: Set a domain name for the route.
**Protocol**: Select `http` or `https`. If `https` is selected, you need to select a Secret that contains the `tls.crt` (TLS certificate) and `tls.key` (TLS private key) keys used for encryption.
**Paths**: Map each service to a path. Enter a path name and select a service and port. You can also click **Add** to add multiple paths.
* **Auto Generate**: KubeSphere automatically generates a domain name in the `<Service name>.<Project name>.<Gateway address>.nip.io` format and the domain name is automatically resolved by [nip.io](https://nip.io/) into the gateway address. This mode supports only HTTP.
* **Paths**: Map each Service to a path. You can click **Add** to add multiple paths.
* **Specify Domain**: A user-defined domain name is used. This mode supports both HTTP and HTTPS.
* **Domain Name**: Set a domain name for the Route.
* **Protocol**: Select `http` or `https`. If `https` is selected, you need to select a Secret that contains the `tls.crt` (TLS certificate) and `tls.key` (TLS private key) keys used for encryption.
* **Paths**: Map each Service to a path. You can click **Add** to add multiple paths.
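For reference, a Route created this way corresponds to an Ingress object like the following sketch (namespace, host, Service name, and port are illustrative, not taken from this document):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: productpage-route
  namespace: demo-project
spec:
  rules:
  - host: productpage.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: productpage   # an existing Service in the project
            port:
              number: 9080
EOF
```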
### (Optional) Step 3: Configure advanced settings

View File

@ -21,7 +21,7 @@ This tutorial demonstrates how to create a microservices-based app Bookinfo, whi
2. Set a name for the app (for example, `bookinfo`) and click **Next**.
3. On the **Services** page, you need to create microservices that compose the app. Click **Create Service** and select **Stateless Service**.
3. On the **Service Settings** page, you need to create microservices that compose the app. Click **Create Service** and select **Stateless Service**.
4. Set a name for the Service (for example, `productpage`) and click **Next**.

View File

@ -27,7 +27,7 @@ This tutorial demonstrates how to quickly deploy [NGINX](https://www.nginx.com/)
{{</ notice >}}
2. Search for NGINX, click it, and click **Install** on the **App Information** page. Make sure you click **Agree** in the displayed **App Deploy Agreement** dialog box.
2. Search for NGINX, click it, and click **Install** on the **App Information** page. Make sure you click **Agree** in the displayed **Deployment Agreement** dialog box.
3. Set a name and select an app version, confirm the location where NGINX will be deployed, and click **Next**.

View File

@ -42,7 +42,7 @@ In the previous step, you expose metric endpoints in a Kubernetes Service object
The ServiceMonitor CRD is defined by [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator). A ServiceMonitor contains information about the metrics endpoints. With ServiceMonitor objects, the KubeSphere monitoring engine knows where and how to scrape metrics. For each monitoring target, you apply a ServiceMonitor object to hook your application (or exporters) up to KubeSphere.
In KubeSphere v3.3.0, you need to pack a ServiceMonitor with your applications (or exporters) into a Helm chart for reuse. In future releases, KubeSphere will provide graphical interfaces for easy operation.
In KubeSphere 3.3, you need to pack a ServiceMonitor with your applications (or exporters) into a Helm chart for reuse. In future releases, KubeSphere will provide graphical interfaces for easy operation.
Please read [Monitor a Sample Web Application](../examples/monitor-sample-web/) to learn how to pack a ServiceMonitor with your application.
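As a rough sketch of what such an object looks like (all names, labels, and the port here are illustrative):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: sample-web
  namespace: demo-project
spec:
  endpoints:
  - port: metrics      # must match a named port in the Service
    interval: 30s
    path: /metrics
  selector:
    matchLabels:
      app: sample-web
EOF
```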

View File

@ -12,7 +12,7 @@ This section introduces monitoring dashboard features. You will learn how to vis
To create new dashboards for your app metrics, navigate to **Custom Monitoring** under **Monitoring & Alerting** in a project. There are three ways to create monitoring dashboards: built-in templates, blank templates for customization and YAML files.
There are three available built-in templates for MySQL, Elasticsearch, and Redis respectively. These templates are for demonstration purposes and are updated with KubeSphere releases. In addition, you can customize monitoring dashboards.
Built-in templates include MySQL, Elasticsearch, Redis, and more. These templates are for demonstration purposes and are updated with KubeSphere releases. In addition, you can customize monitoring dashboards.
A KubeSphere custom monitoring dashboard can be seen as simply a YAML configuration file. The data model is heavily inspired by [Grafana](https://github.com/grafana/grafana), an open-source tool for monitoring and observability. Please find KubeSphere monitoring dashboard data model design in [kubesphere/monitoring-dashboard](https://github.com/kubesphere/monitoring-dashboard). The configuration file is portable and sharable. You are welcome to contribute dashboard templates to the KubeSphere community via [Monitoring Dashboards Gallery](https://github.com/kubesphere/monitoring-dashboard/tree/master/contrib/gallery).

View File

@ -145,7 +145,7 @@ Perform the following steps to download KubeKey.
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or run the following command:
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{</ tab >}}
@ -161,7 +161,7 @@ export KKZONE=cn
Run the following command to download KubeKey:
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{< notice note >}}
@ -176,7 +176,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n
{{< notice note >}}
The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version.
The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version.
{{</ notice >}}
@ -197,12 +197,12 @@ You only need to run one command for all-in-one installation. The template is as
To create a Kubernetes cluster with KubeSphere installed, refer to the following command as an example:
```bash
./kk create cluster --with-kubernetes v1.22.10 --with-kubesphere v3.3.0
./kk create cluster --with-kubernetes v1.22.10 --with-kubesphere v3.3.1
```
{{< notice note >}}
- Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey installs Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../installing-on-linux/introduction/kubekey/#support-matrix).
- Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey installs Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../installing-on-linux/introduction/kubekey/#support-matrix).
- For all-in-one installation, you do not need to change any configuration.
- If you do not add the flag `--with-kubesphere` in the command in this step, KubeSphere will not be deployed. KubeKey will install Kubernetes only. If you add the flag `--with-kubesphere` without specifying a KubeSphere version, the latest version of KubeSphere will be installed.
- KubeKey will install [OpenEBS](https://openebs.io/) to provision LocalPV for the development and testing environment by default, which is convenient for new users. For other storage classes, see [Persistent Storage Configurations](../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).
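Once `kk` finishes, a quick sanity check might look like this (a sketch; Pod names vary):

```bash
# All KubeSphere system Pods should eventually reach the Running state.
kubectl get pods -n kubesphere-system
```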

View File

@ -24,7 +24,7 @@ You can create multiple workspaces within a KubeSphere cluster. Under each works
### Step 1: Create a user
After KubeSphere is installed, you need to add different users with varied roles to the platform so that they can work at different levels on various resources. Initially, you only have one default user, which is `admin`, granted the role `platform-admin`. In the first step, you create a sample user `user-manager` and further create more users as `user-manager`.
After KubeSphere is installed, you need to add different users with varied roles to the platform so that they can work at different levels on various resources. Initially, you only have one default user, which is `admin`, granted the role `platform-admin`. In the first step, you create a sample user `user-manager`.
1. Log in to the web console as `admin` with the default user and password (`admin/P@88w0rd`).
@ -32,7 +32,7 @@ After KubeSphere is installed, you need to add different users with varied roles
For account security, it is highly recommended that you change your password the first time you log in to the console. To change your password, select **User Settings** in the drop-down list in the upper-right corner. In **Password Settings**, set a new password. You also can change the console language in **User Settings**.
{{</ notice >}}
2. Click **Platform** in the upper-left corner, and then select **Access Control**. In the left navigation pane, select **Platform Roles**. There are four built-in roles, as shown in the following table.
2. Click **Platform** in the upper-left corner, and then select **Access Control**. In the left navigation pane, select **Platform Roles**. The built-in roles are shown in the following table.
<table>
<tbody>
@ -41,21 +41,16 @@ After KubeSphere is installed, you need to add different users with varied roles
<th>Description</th>
</tr>
<tr>
<td><code>workspaces-manager</code></td>
<td>Workspace manager who can manage all workspaces on the platform.</td>
</tr>
</tr>
<tr>
<td><code>users-manager</code></td>
<td>User manager who can manage all users on the platform.</td>
<td><code>platform-self-provisioner</code></td>
<td>Create workspaces and become the admin of the created workspaces.</td>
</tr>
<tr>
<td><code>platform-regular</code></td>
<td>Regular user who has no access to any resources before joining a workspace or cluster.</td>
<td>Has no access to any resources before joining a workspace or cluster.</td>
</tr>
<tr>
<td><code>platform-admin</code></td>
<td>Administrator who can manage all resources on the platform.</td>
<td>Manage all resources on the platform.</td>
</tr>
</tbody>
</table>
@ -64,11 +59,15 @@ After KubeSphere is installed, you need to add different users with varied roles
Built-in roles are created automatically by KubeSphere and cannot be edited or deleted.
{{</ notice >}}
3. In **Users**, click **Create**. In the displayed dialog box, provide all the necessary information (marked with *) and select `users-manager` for **Platform Role**.
3. In **Users**, click **Create**. In the displayed dialog box, provide all the necessary information (marked with *) and select `platform-self-provisioner` for **Platform Role**.
Click **OK** after you finish. The new user will display on the **Users** page.
4. Log out of the console and log back in with user `user-manager` to create four users that will be used in other tutorials.
{{< notice note >}}
If you have not specified a platform role, the created user cannot perform any operations. In this case, you need to create a workspace and invite the created user to the workspace.
{{</ notice >}}
4. Repeat the previous steps to create other users that will be used in other tutorials.
{{< notice tip >}}
- To log out, click your username in the upper-right corner and select **Log Out**.
@ -82,11 +81,6 @@ After KubeSphere is installed, you need to add different users with varied roles
<th width='180'>Assigned Platform Role</th>
<th>User Permissions</th>
</tr>
<tr>
<td><code>ws-manager</code></td>
<td><code>workspaces-manager</code></td>
<td>Create and manage all workspaces.</td>
</tr>
<tr>
<td><code>ws-admin</code></td>
<td><code>platform-regular</code></td>
@ -103,7 +97,7 @@ After KubeSphere is installed, you need to add different users with varied roles
</tbody>
</table>
5. On the **Users** page, verify the four users created.
5. On the **Users** page, view the created users.
{{< notice note >}}
@ -112,11 +106,13 @@ After KubeSphere is installed, you need to add different users with varied roles
{{</ notice >}}
### Step 2: Create a workspace
In this step, you create a workspace using user `ws-manager` created in the previous step. As the basic logic unit for the management of projects, DevOps projects and organization members, workspaces underpin the multi-tenant system of KubeSphere.
As the basic logic unit for the management of projects, DevOps projects and organization members, workspaces underpin the multi-tenant system of KubeSphere.
1. Log in to KubeSphere as `ws-manager`. Click **Platform** in the upper-left corner and select **Access Control**. In **Workspaces**, you can see there is only one default workspace `system-workspace`, where system-related components and services run. Deleting this workspace is not allowed.
1. In the navigation pane on the left, click **Workspaces**. You can see there is only one default workspace `system-workspace`, where system-related components and services run. Deleting this workspace is not allowed.
2. Click **Create** on the right, set a name for the new workspace (for example, `demo-workspace`) and set user `ws-admin` as the workspace manager. Click **Create** after you finish.
2. On the **Workspaces** page on the right, click **Create**, set a name for the new workspace (for example, `demo-workspace`) and set user `ws-admin` as the workspace manager.
3. Click **Create** after you finish.
{{< notice note >}}
@ -124,9 +120,9 @@ In this step, you create a workspace using user `ws-manager` created in the prev
{{</ notice >}}
3. Log out of the console and log back in as `ws-admin`. In **Workspace Settings**, select **Workspace Members** and click **Invite**.
4. Log out of the console and log back in as `ws-admin`. In **Workspace Settings**, select **Workspace Members** and click **Invite**.
4. Invite both `project-admin` and `project-regular` to the workspace. Assign them the role `workspace-self-provisioner` and `workspace-viewer` respectively and click **OK**.
5. Invite both `project-admin` and `project-regular` to the workspace. Assign them the role `workspace-self-provisioner` and `workspace-viewer` respectively and click **OK**.
{{< notice note >}}
The actual role name follows a naming convention: `<workspace name>-<role name>`. For example, in this workspace named `demo-workspace`, the actual role name of the role `viewer` is `demo-workspace-viewer`.

View File

@ -62,7 +62,7 @@ If you adopt [All-in-one Installation](../../quick-start/all-in-one-on-linux/),
When you install KubeSphere on Kubernetes, you need to use [ks-installer](https://github.com/kubesphere/ks-installer/) by applying two YAML files as below.
1. First download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/cluster-configuration.yaml) and edit it.
1. First download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) and edit it.
```bash
vi cluster-configuration.yaml
@ -73,7 +73,7 @@ When you install KubeSphere on Kubernetes, you need to use [ks-installer](https:
3. Save this local file and execute the following commands to start the installation.
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```

View File

@ -11,7 +11,7 @@ In addition to installing KubeSphere on a Linux machine, you can also deploy it
## Prerequisites
- To install KubeSphere 3.3.0 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, or v1.23.x (experimental support).
- To install KubeSphere 3.3 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, or v1.23.x (experimental support); a quick pre-flight check is sketched after this list.
- Make sure your machine meets the minimal hardware requirement: CPU > 1 Core, Memory > 2 GB.
- A **default** Storage Class in your Kubernetes cluster needs to be configured before the installation.
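A pre-flight check might look like this (a sketch; output formats vary by kubectl version):

```bash
# Confirm the cluster version and that a default StorageClass exists
# (the default one is marked "(default)" in its NAME column).
kubectl version --short
kubectl get storageclass
```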
@ -33,9 +33,9 @@ After you make sure your machine meets the conditions, perform the following ste
1. Run the following commands to start installation:
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml
```
2. After KubeSphere is successfully installed, you can run the following command to view the installation logs:
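A sketch of such a log-watch command (the label selector is assumed from common KubeSphere setups):

```bash
kubectl logs -n kubesphere-system \
  $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
```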

View File

@ -1,12 +1,12 @@
---
title: "Logging"
keywords: 'Kubernetes, KubeSphere, API, Logging'
description: 'The API changes of the component **logging** in KubeSphere v3.3.0.'
description: 'The API changes of the component **logging** in KubeSphere 3.3.'
linkTitle: "Logging"
weight: 17310
---
The API changes of the component **logging** in KubeSphere v3.3.0.
The API changes of the component **logging** in KubeSphere 3.3.
## Time Format
@ -22,6 +22,6 @@ The following APIs are removed:
- GET /namespaces/{namespace}/pods/{pod}
- The whole log setting API group
## Fluent Bit Operator
## Fluent Operator
In KubeSphere 3.3.0, the whole log setting APIs are removed from the KubeSphere core since the project Fluent Bit Operator is refactored in an incompatible way. Please refer to [Fluent Bit Operator docs](https://github.com/kubesphere/fluentbit-operator) for how to configure log collection in KubeSphere 3.3.0.
In KubeSphere 3.3, the whole log setting APIs are removed from the KubeSphere core because the Fluent Operator project has been refactored in an incompatible way. Refer to the [Fluent Operator docs](https://github.com/fluent/fluent-operator) for how to configure log collection in KubeSphere 3.3.

View File

@ -1,7 +1,7 @@
---
title: "Monitoring"
keywords: 'Kubernetes, KubeSphere, API, Monitoring'
description: 'The API changes of the component **monitoring** in KubeSphere v3.3.0.'
description: 'The API changes of the component **monitoring** in KubeSphere 3.3.'
linkTitle: "Monitoring"
weight: 17320
---
@ -16,9 +16,9 @@ The time format of query parameters must be in Unix timestamps (the number of se
## Deprecated Metrics
In KubeSphere 3.3.0, the metrics on the left have been renamed to the ones on the right.
In KubeSphere 3.3, the metrics on the left have been renamed to the ones on the right.
|V2.0|V3.0|
|V2.0|V3.3|
|---|---|
|workload_pod_cpu_usage | workload_cpu_usage|
|workload_pod_memory_usage| workload_memory_usage|
@ -48,7 +48,7 @@ The following metrics have been deprecated and removed.
|prometheus_up_sum|
|prometheus_tsdb_head_samples_appended_rate|
New metrics are introduced in 3.3.0.
New metrics are introduced in KubeSphere 3.3.
|New Metrics|
|---|
@ -59,7 +59,7 @@ New metrics are introduced in 3.3.0.
## Response Fields
In KubeSphere 3.3.0, the response fields `metrics_level`, `status` and `errorType` are removed.
In KubeSphere 3.3, the response fields `metrics_level`, `status` and `errorType` are removed.
In addition, the field name `resource_name` has been replaced with the specific resource type names. These types are `node`, `workspace`, `namespace`, `workload`, `pod`, `container` and `persistentvolumeclaim`. For example, instead of `resource_name: node1`, you will get `node: node1`. See the example response below:
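A sketch of such a response, assuming the API wraps standard Prometheus query results (the metric name and values here are hypothetical):

```json
{
  "results": [
    {
      "metric_name": "node_cpu_usage",
      "data": {
        "resultType": "vector",
        "result": [
          {
            "metric": { "node": "node1" },
            "value": [1658801234, "0.15"]
          }
        ]
      }
    }
  ]
}
```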

View File

@ -114,7 +114,7 @@ Replace `[node ip]` with your actual IP address.
## API Reference
The KubeSphere API swagger JSON file can be found in the repository https://github.com/kubesphere/kubesphere/tree/release-3.1/api.
The KubeSphere API swagger JSON file can be found in the repository https://github.com/kubesphere/kubesphere/tree/release-3.3/api.
- KubeSphere provides the API [swagger json](https://github.com/kubesphere/kubesphere/blob/release-3.3/api/ks-openapi-spec/swagger.json) file. It contains all the APIs that apply only to KubeSphere.
- KubeSphere provides the CRD [swagger json](https://github.com/kubesphere/kubesphere/blob/release-3.3/api/openapi-spec/swagger.json) file. It contains the generated API documentation for all CRDs, in the same form as Kubernetes API objects.
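For example, to fetch the KubeSphere-specific spec locally (the raw URL below is assumed from the repository layout):

```bash
curl -LO https://raw.githubusercontent.com/kubesphere/kubesphere/release-3.3/api/ks-openapi-spec/swagger.json
```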

View File

@ -13,7 +13,7 @@ Once your NFS server machine is ready, you can use [KubeKey](../../../installing
{{< notice note >}}
- You can also create the storage class of NFS-client after you install a KubeSphere cluster.
- NFS is incompatible with some applications, for example, Prometheus, which may result in pod creation failures. If you need to use NFS in the production environment, ensure that you have understood the risks. For more information, contact support@kubesphere.cloud.
- It is not recommended that you use NFS storage for production (especially on Kubernetes version 1.20 or later) as some issues may occur, such as `failed to obtain lock` and `input/output error`, resulting in Pod `CrashLoopBackOff`. Besides, some apps may not be compatible with NFS, including [Prometheus](https://github.com/prometheus/prometheus/blob/03b354d4d9386e4b3bfbcd45da4bb58b182051a5/docs/storage.md#operational-aspects).
{{</ notice >}}

View File

@ -1,8 +1,8 @@
---
title: "Release Notes for 3.3.0"
title: "Release Notes for 3.3"
keywords: "Kubernetes, KubeSphere, Release Notes"
description: "KubeSphere 3.3.0 Release Notes"
linkTitle: "Release Notes - 3.3.0"
description: "KubeSphere 3.3 Release Notes"
linkTitle: "Release Notes - 3.3"
weight: 18098
---
@ -13,13 +13,19 @@ weight: 18098
- Add support for importing and managing code repositories.
- Add support for built-in CRD-based pipeline templates and parameter customization.
- Add support for viewing pipeline events.
### Enhancements & Updates
- Add support for editing the binding mode of the pipeline's kubeconfig file on the UI.
### Bug Fixes
- Fix an issue where users fail to check the CI/CD template.
- Remove the `Deprecated` tag from the CI/CD template and replace `kubernetesDeploy` with `kubeconfig binding` at the deployment phase.
## Storage
### Features
- Add support for tenant-level storage class permission management.
- Add the volume snapshot content management and volume snapshot class management features.
- Add support for automatic restart of deployments and statefulsets after a PVC has been changed.
- Add the PVC auto expansion feature, which automatically expands PVCs when remaining capacity is insufficient.
### Bug Fixes
- Set `hostpath` as a required option when users are mounting volumes.
## Multi-tenancy and Multi-cluster
### Features
@ -61,7 +67,7 @@ weight: 18098
- Integrate OpenELB with KubeSphere for exposing LoadBalancer services.
### Bug Fixes
- Fix an issue where the gateway of a project is not deleted after the project is deleted.
- Fix an issue where users fail to create routing rules in IPv6 and IPv4 dual-stack environments.
## App Store
### Bug Fixes
- Fix a ks-controller-manager crash caused by Helm controller NPE errors.
@ -69,7 +75,10 @@ weight: 18098
## Authentication & Authorization
### Features
- Add support for manually disabling and enabling users.
### Bug Fixes
- Delete roles `users-manager` and `workspace-manager`.
- Add role `platform-self-provisioner`.
- Block some permissions of user-defined roles.
## User Experience
- Add a prompt when the audit log of Kubernetes has been enabled.
- Add the lifecycle management feature for containers.
@ -87,6 +96,7 @@ weight: 18098
- Prevent ks-apiserver and ks-controller-manager from restarting when the cluster configuration is changed.
- Optimize some UI texts.
- Optimize display of the service topology on the **Service** page.
- Add support for changing the number of items displayed on each page of a table.
- Add support for batch stopping workloads.
For more information about issues and contributors of KubeSphere 3.3.0, see [GitHub](https://github.com/kubesphere/kubesphere/blob/master/CHANGELOG/CHANGELOG-3.3.md).
For more information about issues and contributors of KubeSphere 3.3, see [GitHub](https://github.com/kubesphere/kubesphere/blob/master/CHANGELOG/CHANGELOG-3.3.md).

View File

@ -11,4 +11,4 @@ icon: "/images/docs/v3.3/docs.svg"
---
This chapter demonstrates how cluster operators can upgrade KubeSphere to 3.3.0.
This chapter demonstrates how cluster operators can upgrade KubeSphere to 3.3.1.

View File

@ -1,6 +1,6 @@
---
title: "Air-Gapped Upgrade with ks-installer"
keywords: "Air-Gapped, upgrade, kubesphere, 3.3.0"
keywords: "Air-Gapped, upgrade, kubesphere, 3.3"
description: "Use ks-installer and offline package to upgrade KubeSphere."
linkTitle: "Air-Gapped Upgrade with ks-installer"
weight: 7500
@ -12,11 +12,22 @@ ks-installer is recommended for users whose Kubernetes clusters were not set up
## Prerequisites
- You need to have a KubeSphere cluster running v3.2.x. If your KubeSphere version is v3.1.x or earlier, upgrade to v3.2.x first.
- Read [Release Notes for 3.3.0](../../../v3.3/release/release-v330/) carefully.
- Read [Release Notes for 3.3](../../../v3.3/release/release-v330/) carefully.
- Back up any important component beforehand.
- A Docker registry. You need to have a Harbor or other Docker registries. For more information, see [Prepare a Private Image Registry](../../installing-on-linux/introduction/air-gapped-installation/#step-2-prepare-a-private-image-registry).
- Supported Kubernetes versions of KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
- Supported Kubernetes versions of KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
## Major Updates
In KubeSphere 3.3.1, some changes have been made to built-in roles and to the permissions of custom roles. Therefore, before you upgrade KubeSphere to 3.3.1, note the following:
- Changes to built-in roles: The platform-level built-in roles `users-manager` and `workspace-manager` are removed. If an existing user was bound to `users-manager` or `workspace-manager`, its role will be changed to `platform-regular` after the upgrade is complete. The role `platform-self-provisioner` is added. For more information about built-in roles, refer to [Create a user](../../quick-start/create-workspace-and-project).
- Some permissions of custom roles are removed:
- Removed permissions of platform-level custom roles: user management, role management, and workspace management.
- Removed permissions of workspace-level custom roles: user management, role management, and user group management.
- Removed permissions of namespace-level custom roles: user management and role management.
- After you upgrade KubeSphere to 3.3.1, custom roles will be retained, but the removed permissions will be revoked from them.
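Before upgrading, you may want to check which users are bound to the roles being removed. A sketch, assuming the `iam.kubesphere.io` GlobalRoleBinding CRD:

```bash
# List global role bindings and look for the roles that 3.3.1 removes
kubectl get globalrolebindings.iam.kubesphere.io -o yaml | grep -B 5 -E 'users-manager|workspace-manager'
```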
## Step 1: Prepare Installation Images
As you install KubeSphere in an air-gapped environment, you need to prepare an image package containing all the necessary images in advance.
@ -24,7 +35,7 @@ As you install KubeSphere in an air-gapped environment, you need to prepare an i
1. Download the image list file `images-list.txt` from a machine that has access to the Internet through the following command:
```bash
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/images-list.txt
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/images-list.txt
```
{{< notice note >}}
@ -36,7 +47,7 @@ As you install KubeSphere in an air-gapped environment, you need to prepare an i
2. Download `offline-installation-tool.sh`.
```bash
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/offline-installation-tool.sh
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/offline-installation-tool.sh
```
3. Make the `.sh` file executable.
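For example:

```bash
chmod +x offline-installation-tool.sh
```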
@ -96,7 +107,7 @@ Similar to installing KubeSphere on an existing Kubernetes cluster in an online
1. Execute the following command to download ks-installer and transfer it to your machine that serves as the taskbox for installation.
```bash
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
```
2. Verify that you have specified your private image registry in `spec.local_registry` in `cluster-configuration.yaml`. Note that if your existing cluster was installed in an air-gapped environment, you may already have this field specified. Otherwise, run the following command to edit `cluster-configuration.yaml` of your existing KubeSphere v3.2.x cluster and add the private image registry:
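The field itself looks like the following sketch (the registry address is an example; use your own):

```yaml
spec:
  local_registry: dockerhub.kubekey.local
```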

View File

@ -1,6 +1,6 @@
---
title: "Air-Gapped Upgrade with KubeKey"
keywords: "Air-Gapped, kubernetes, upgrade, kubesphere, 3.3.0"
keywords: "Air-Gapped, kubernetes, upgrade, kubesphere, 3.3.1"
description: "Use the offline package to upgrade Kubernetes and KubeSphere."
linkTitle: "Air-Gapped Upgrade with KubeKey"
weight: 7400
@ -11,11 +11,22 @@ Air-gapped upgrade with KubeKey is recommended for users whose KubeSphere and Ku
- You need to have a KubeSphere cluster running v3.2.x. If your KubeSphere version is v3.1.x or earlier, upgrade to v3.2.x first.
- Your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
- Read [Release Notes for 3.3.0](../../../v3.3/release/release-v330/) carefully.
- Read [Release Notes for 3.3](../../../v3.3/release/release-v330/) carefully.
- Back up any important component beforehand.
- A Docker registry. You need to have a Harbor or other Docker registries.
- Make sure every node can push images to and pull images from the Docker registry.
## Major Updates
In KubeSphere 3.3.1, some changes have been made to built-in roles and to the permissions of custom roles. Therefore, before you upgrade KubeSphere to 3.3.1, note the following:
- Changes to built-in roles: The platform-level built-in roles `users-manager` and `workspace-manager` are removed. If an existing user was bound to `users-manager` or `workspace-manager`, its role will be changed to `platform-regular` after the upgrade is complete. The role `platform-self-provisioner` is added. For more information about built-in roles, refer to [Create a user](../../quick-start/create-workspace-and-project).
- Some permissions of custom roles are removed:
- Removed permissions of platform-level custom roles: user management, role management, and workspace management.
- Removed permissions of workspace-level custom roles: user management, role management, and user group management.
- Removed permissions of namespace-level custom roles: user management and role management.
- After you upgrade KubeSphere to 3.3.1, custom roles will be retained, but the removed permissions will be revoked from them.
## Upgrade KubeSphere and Kubernetes
@ -46,7 +57,7 @@ KubeKey upgrades Kubernetes from one MINOR version to the next MINOR version unt
### Step 1: Download KubeKey
1. Run the following commands to download KubeKey v2.2.2.
1. Run the following commands to download KubeKey v2.3.0.
{{< tabs >}}
{{< tab "Good network connections to GitHub/Googleapis" >}}
@ -54,7 +65,7 @@ KubeKey upgrades Kubernetes from one MINOR version to the next MINOR version unt
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{</ tab >}}
@ -70,7 +81,7 @@ KubeKey upgrades Kubernetes from one MINOR version to the next MINOR version unt
Run the following command to download KubeKey:
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{</ tab >}}
@ -89,7 +100,7 @@ As you install KubeSphere and Kubernetes on Linux, you need to prepare an image
1. Download the image list file `images-list.txt` from a machine that has access to the Internet through the following command:
```bash
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/images-list.txt
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/images-list.txt
```
{{< notice note >}}
@ -101,7 +112,7 @@ As you install KubeSphere and Kubernetes on Linux, you need to prepare an image
2. Download `offline-installation-tool.sh`.
```bash
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/offline-installation-tool.sh
curl -L -O https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/offline-installation-tool.sh
```
3. Make the `.sh` file executable.
@ -142,7 +153,7 @@ As you install KubeSphere and Kubernetes on Linux, you need to prepare an image
{{< notice note >}}
- You can change the Kubernetes version downloaded based on your needs. Recommended Kubernetes versions for KubeSphere 3.3.0 are v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../installing-on-linux/introduction/kubekey/#support-matrix).
- You can change the Kubernetes version downloaded based on your needs. Recommended Kubernetes versions for KubeSphere 3.3 are v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support). If you do not specify a Kubernetes version, KubeKey will install Kubernetes v1.23.7 by default. For more information about supported Kubernetes versions, see [Support Matrix](../../installing-on-linux/introduction/kubekey/#support-matrix).
- You can upgrade Kubernetes from v1.16.13 to v1.17.9 by downloading the v1.17.9 Kubernetes binary file, but for cross-version upgrades, all intermediate versions need to be downloaded in advance. For example, if you want to upgrade Kubernetes from v1.15.12 to v1.18.6, you need to download Kubernetes v1.16.13 and v1.17.9, and the v1.18.6 binary file.
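For a cross-version upgrade, that means one download per intermediate version. A sketch, assuming `offline-installation-tool.sh` accepts `-b` (download binaries) and `-v` (Kubernetes version):

```bash
# Hypothetical example: collect every version on the v1.15.12 -> v1.18.6 upgrade path
./offline-installation-tool.sh -b -v v1.16.13
./offline-installation-tool.sh -b -v v1.17.9
./offline-installation-tool.sh -b -v v1.18.6
```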
@ -189,7 +200,7 @@ Transfer your packaged image file to your local machine and execute the followin
| | Kubernetes | KubeSphere |
| ------ | ---------- | ---------- |
| Before | v1.18.6 | v3.2.x |
| After | v1.22.10 | 3.3.0 |
| After | v1.22.10 | 3.3.1 |
#### Upgrade a cluster
@ -206,7 +217,7 @@ Execute the following command to generate an example configuration file for inst
For example:
```bash
./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 -f config-sample.yaml
./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.1 -f config-sample.yaml
```
{{< notice note >}}
@ -247,7 +258,7 @@ Set `privateRegistry` of your `config-sample.yaml` file:
privateRegistry: dockerhub.kubekey.local
```
#### Upgrade your single-node cluster to KubeSphere 3.3.0 and Kubernetes v1.22.10
#### Upgrade your single-node cluster to KubeSphere 3.3 and Kubernetes v1.22.10
```bash
./kk upgrade -f config-sample.yaml
@ -271,7 +282,7 @@ To upgrade Kubernetes to a specific version, explicitly provide the version afte
| | Kubernetes | KubeSphere |
| ------ | ---------- | ---------- |
| Before | v1.18.6 | v3.2.x |
| After | v1.22.10 | 3.3.0 |
| After | v1.22.10 | 3.3.1 |
#### Upgrade a cluster
@ -288,7 +299,7 @@ In this example, KubeSphere is installed on multiple nodes, so you need to speci
For example:
```bash
./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 -f config-sample.yaml
./kk create config --with-kubernetes v1.22.10 --with-kubesphere v3.3.1 -f config-sample.yaml
```
{{< notice note >}}
@ -331,7 +342,7 @@ Set `privateRegistry` of your `config-sample.yaml` file:
privateRegistry: dockerhub.kubekey.local
```
#### Upgrade your multi-node cluster to KubeSphere 3.3.0 and Kubernetes v1.22.10
#### Upgrade your multi-node cluster to KubeSphere 3.3 and Kubernetes v1.22.10
```bash
./kk upgrade -f config-sample.yaml

View File

@ -1,6 +1,6 @@
---
title: "Upgrade — Overview"
keywords: "Kubernetes, upgrade, KubeSphere, 3.3.0, upgrade"
keywords: "Kubernetes, upgrade, KubeSphere, 3.3, upgrade"
description: "Understand what you need to pay attention to before the upgrade, such as versions, and upgrade tools."
linkTitle: "Overview"
weight: 7100
@ -8,10 +8,10 @@ weight: 7100
## Make Your Upgrade Plan
KubeSphere 3.3.0 is compatible with Kubernetes 1.19.x, 1.20.x, 1.21.x, 1.22.x, and 1.23.x (experimental support):
KubeSphere 3.3 is compatible with Kubernetes 1.19.x, 1.20.x, 1.21.x, 1.22.x, and 1.23.x (experimental support):
- Before you upgrade your cluster to KubeSphere 3.3.0, you need to have a KubeSphere cluster running v3.2.x.
- If your existing KubeSphere v3.1.x cluster is installed on Kubernetes 1.19.x+, you can choose to only upgrade KubeSphere to 3.3.0 or upgrade Kubernetes (to a higher version) and KubeSphere (to 3.3.0) at the same time.
- Before you upgrade your cluster to KubeSphere 3.3, you need to have a KubeSphere cluster running v3.2.x.
- If your existing KubeSphere v3.2.x cluster is installed on Kubernetes 1.19.x+, you can choose to only upgrade KubeSphere to 3.3 or upgrade Kubernetes (to a higher version) and KubeSphere (to 3.3) at the same time.
## Before the Upgrade

View File

@ -1,6 +1,6 @@
---
title: "Upgrade with ks-installer"
keywords: "Kubernetes, upgrade, KubeSphere, v3.3.0"
keywords: "Kubernetes, upgrade, KubeSphere, v3.3.1"
description: "Use ks-installer to upgrade KubeSphere."
linkTitle: "Upgrade with ks-installer"
weight: 7300
@ -11,19 +11,31 @@ ks-installer is recommended for users whose Kubernetes clusters were not set up
## Prerequisites
- You need to have a KubeSphere cluster running v3.2.x. If your KubeSphere version is v3.1.x or earlier, upgrade to v3.2.x first.
- Read [Release Notes for 3.3.0](../../../v3.3/release/release-v330/) carefully.
- Read [Release Notes for 3.3](../../../v3.3/release/release-v330/) carefully.
- Back up any important component beforehand.
- Supported Kubernetes versions of KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
- Supported Kubernetes versions of KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
## Major Updates
In KubeSphere 3.3.1, some changes have been made to built-in roles and to the permissions of custom roles. Therefore, before you upgrade KubeSphere to 3.3.1, note the following:
- Changes to built-in roles: The platform-level built-in roles `users-manager` and `workspace-manager` are removed. If an existing user was bound to `users-manager` or `workspace-manager`, its role will be changed to `platform-regular` after the upgrade is complete. The role `platform-self-provisioner` is added. For more information about built-in roles, refer to [Create a user](../../quick-start/create-workspace-and-project).
- Some permissions of custom roles are removed:
- Removed permissions of platform-level custom roles: user management, role management, and workspace management.
- Removed permissions of workspace-level custom roles: user management, role management, and user group management.
- Removed permissions of namespace-level custom roles: user management and role management.
- After you upgrade KubeSphere to 3.3.1, custom roles will be retained, but the removed permissions will be revoked from them.
## Apply ks-installer
Run the following command to upgrade your cluster.
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml --force
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml --force
```
## Enable Pluggable Components
You can [enable new pluggable components](../../pluggable-components/overview/) of KubeSphere 3.3.0 after the upgrade to explore more features of the container platform.
You can [enable new pluggable components](../../pluggable-components/overview/) of KubeSphere 3.3 after the upgrade to explore more features of the container platform.

View File

@ -1,6 +1,6 @@
---
title: "Upgrade with KubeKey"
keywords: "Kubernetes, upgrade, KubeSphere, 3.3.0, KubeKey"
keywords: "Kubernetes, upgrade, KubeSphere, 3.3, KubeKey"
description: "Use KubeKey to upgrade Kubernetes and KubeSphere."
linkTitle: "Upgrade with KubeKey"
weight: 7200
@ -12,10 +12,22 @@ This tutorial demonstrates how to upgrade your cluster using KubeKey.
## Prerequisites
- You need to have a KubeSphere cluster running v3.2.x. If your KubeSphere version is v3.1.x or earlier, upgrade to v3.2.x first.
- Read [Release Notes for 3.3.0](../../../v3.3/release/release-v330/) carefully.
- Read [Release Notes for 3.3](../../../v3.3/release/release-v330/) carefully.
- Back up any important component beforehand.
- Make your upgrade plan. Two scenarios are provided in this document for [all-in-one clusters](#all-in-one-cluster) and [multi-node clusters](#multi-node-cluster) respectively.
## Major Updates
In KubeSphere 3.3.1, some changes have been made to built-in roles and to the permissions of custom roles. Therefore, before you upgrade KubeSphere to 3.3.1, note the following:
- Changes to built-in roles: The platform-level built-in roles `users-manager` and `workspace-manager` are removed. If an existing user was bound to `users-manager` or `workspace-manager`, its role will be changed to `platform-regular` after the upgrade is complete. The role `platform-self-provisioner` is added. For more information about built-in roles, refer to [Create a user](../../quick-start/create-workspace-and-project).
- Some permissions of custom roles are removed:
- Removed permissions of platform-level custom roles: user management, role management, and workspace management.
- Removed permissions of workspace-level custom roles: user management, role management, and user group management.
- Removed permissions of namespace-level custom roles: user management and role management.
- After you upgrade KubeSphere to 3.3.1, custom roles will be retained, but the removed permissions will be revoked from them.
## Download KubeKey
Follow the steps below to download KubeKey before you upgrade your cluster.
@ -27,7 +39,7 @@ Follow the steps below to download KubeKey before you upgrade your cluster.
Download KubeKey from its [GitHub Release Page](https://github.com/kubesphere/kubekey/releases) or use the following command directly.
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{</ tab >}}
@ -43,7 +55,7 @@ export KKZONE=cn
Run the following command to download KubeKey:
```bash
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.2.2 sh -
curl -sfL https://get-kk.kubesphere.io | VERSION=v2.3.0 sh -
```
{{< notice note >}}
@ -58,7 +70,7 @@ After you download KubeKey, if you transfer it to a new machine also with poor n
{{< notice note >}}
The commands above download the latest release (v2.2.2) of KubeKey. You can change the version number in the command to download a specific version.
The commands above download the latest release (v2.3.0) of KubeKey. You can change the version number in the command to download a specific version.
{{</ notice >}}
@ -80,10 +92,10 @@ When upgrading Kubernetes, KubeKey will upgrade from one MINOR version to the ne
### All-in-one cluster
Run the following command to use KubeKey to upgrade your single-node cluster to KubeSphere 3.3.0 and Kubernetes v1.22.10:
Run the following command to use KubeKey to upgrade your single-node cluster to KubeSphere 3.3 and Kubernetes v1.22.10:
```bash
./kk upgrade --with-kubernetes v1.22.10 --with-kubesphere v3.3.0
./kk upgrade --with-kubernetes v1.22.10 --with-kubesphere v3.3.1
```
To upgrade Kubernetes to a specific version, explicitly provide the version after the flag `--with-kubernetes`. Available versions are v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
@ -120,16 +132,16 @@ For more information, see [Edit the configuration file](../../installing-on-linu
{{</ notice >}}
#### Step 3: Upgrade your cluster
The following command upgrades your cluster to KubeSphere 3.3.0 and Kubernetes v1.22.10:
The following command upgrades your cluster to KubeSphere 3.3 and Kubernetes v1.22.10:
```bash
./kk upgrade --with-kubernetes v1.22.10 --with-kubesphere v3.3.0 -f sample.yaml
./kk upgrade --with-kubernetes v1.22.10 --with-kubesphere v3.3.1 -f sample.yaml
```
To upgrade Kubernetes to a specific version, explicitly provide the version after the flag `--with-kubernetes`. Available versions are v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
{{< notice note >}}
To use new features of KubeSphere 3.3.0, you may need to enable some pluggable components after the upgrade.
To use new features of KubeSphere 3.3, you may need to enable some pluggable components after the upgrade.
{{</ notice >}}

View File

@ -1,12 +1,12 @@
---
title: "Changes after Upgrade"
keywords: "Kubernetes, upgrade, KubeSphere, 3.3.0"
keywords: "Kubernetes, upgrade, KubeSphere, 3.3"
description: "Understand what will be changed after the upgrade."
linkTitle: "Changes after Upgrade"
weight: 7600
---
This section covers the changes after upgrade for existing settings in previous versions. If you want to know all the new features and enhancements in KubeSphere 3.3.0, see [Release Notes for 3.3.0](../../../v3.3/release/release-v330/).
This section covers the changes after upgrade for existing settings in previous versions. If you want to know all the new features and enhancements in KubeSphere 3.3, see [Release Notes for 3.3](../../../v3.3/release/release-v330/).

View File

@ -19,7 +19,7 @@ A department in a workspace is a logical unit used for permission control. You c
1. Log in to the KubeSphere web console as `ws-admin` and go to the `demo-ws` workspace.
2. On the left navigation bar, choose **Department Management** under **Workspace Settings**, and click **Set Departments** on the right.
2. On the left navigation bar, choose **Departments** under **Workspace Settings**, and click **Set Departments** on the right.
3. In the **Set Departments** dialog box, set the following parameters and click **OK** to create a department.
@ -36,11 +36,11 @@ A department in a workspace is a logical unit used for permission control. You c
* **Project Role**: Role of all department members in a project. You can click **Add Project** to specify multiple project roles. Only one role can be specified for each project.
* **DevOps Project Role**: Role of all department members in a DevOps project. You can click **Add DevOps Project** to specify multiple DevOps project roles. Only one role can be specified for each DevOps project.
4. Click **OK** after the department is created, and then click **Close**. On the **Department Management** page, the created department is displayed in a department tree on the left.
4. Click **OK** after the department is created, and then click **Close**. On the **Departments** page, the created department is displayed in a department tree on the left.
## Assign a User to a Department
1. On the **Department Management** page, select a department in the department tree on the left and click **Not Assigned** on the right.
1. On the **Departments** page, select a department in the department tree on the left and click **Not Assigned** on the right.
2. In the user list, click <img src="/images/docs/v3.3/workspace-administration/department-management/assign.png" height="20px"> on the right of a user, and click **OK** for the displayed message to assign the user to the department.
@ -53,12 +53,12 @@ A department in a workspace is a logical unit used for permission control. You c
## Remove a User from a Department
1. On the **Department Management** page, select a department in the department tree on the left and click **Assigned** on the right.
1. On the **Departments** page, select a department in the department tree on the left and click **Assigned** on the right.
2. In the assigned user list, click <img src="/images/docs/v3.3/workspace-administration/department-management/remove.png" height="20px"> on the right of a user, enter the username in the displayed dialog box, and click **OK** to remove the user.
## Delete and Edit a Department
1. On the **Department Management** page, click **Set Departments**.
1. On the **Departments** page, click **Set Departments**.
2. In the **Set Departments** dialog box, on the left, click the upper level of the department to be edited or deleted.

View File

@ -21,11 +21,6 @@ You have a user granted the role of `workspaces-manager`, such as `ws-manager` i
1. Log in to the web console of KubeSphere as `ws-manager`. Click **Platform** on the upper-left corner, and then select **Access Control**. On the **Workspaces** page, click **Create**.
{{< notice note >}}
By default, you have at least one workspace `system-workspace` in the list which contains all system projects.
{{</ notice >}}
2. For a single-node cluster, on the **Basic Information** page, specify a name for the workspace and select an administrator from the drop-down list. Click **Create**.

View File

@ -1,61 +0,0 @@
---
title: "CAS 身份提供者"
keywords: "CAS, 身份提供者"
description: "如何使用外部 CAS 身份提供者。"
linkTitle: "CAS 身份提供者"
weight: 12223
---
## CAS Identity Provider
CAS (Central Authentication Service) is an open-source Java project initiated by Yale University that aims to provide a reliable single sign-on (web SSO) solution for web application systems. CAS has the following characteristics:
- An open-source, enterprise-grade single sign-on solution.
- The CAS server is a standalone web application (cas.war) that must be deployed independently.
- CAS clients are available for many client types (the individual web applications in a single sign-on system), including Java, .NET, PHP, Perl, and more.
## Prerequisites
You need to deploy a Kubernetes cluster and install KubeSphere in the cluster. For details, see [Installing on Linux](../../../installing-on-linux/) and [Installing on Kubernetes](../../../installing-on-kubernetes/).
## Steps
1. Log in to KubeSphere as `admin`, move the cursor to <img src="/images/docs/v3.3/access-control-and-account-management/external-authentication/set-up-external-authentication/toolbox.png" width="20px" height="20px" alt="icon"> in the lower-right corner, click **kubectl**, and run the following command to edit `ks-installer` in the CRD `ClusterConfiguration`:
```bash
kubectl -n kubesphere-system edit cc ks-installer
```
2. Add the following fields under the `spec.authentication.jwtSecret` field.
```yaml
spec:
authentication:
jwtSecret: ''
authenticateRateLimiterMaxTries: 10
authenticateRateLimiterDuration: 10m0s
oauthOptions:
accessTokenMaxAge: 1h
accessTokenInactivityTimeout: 30m
identityProviders:
- name: cas
type: CASIdentityProvider
mappingMethod: auto
provider:
redirectURL: "https://ks-console:30880/oauth/redirect/cas"
casServerURL: "https://cas.example.org/cas"
insecureSkipVerify: true
```
The fields are described as follows:
| Parameter | Description |
| -------------------- | ------------------------------------------------------------ |
| redirectURL | The URL for redirecting to ks-console, in the format `https://<domain name>/oauth/redirect/<identity provider name>`. `<identity provider name>` in the URL corresponds to the value of `oauthOptions:identityProviders:name`. |
| casServerURL | The URL of the CAS authentication server. |
| insecureSkipVerify | Turns off TLS certificate verification. |

View File

@ -105,7 +105,7 @@ KubeSphere provides the following types of identity providers by default:
* GitHub Identity Provider
* [CAS Identity Provider](../cas-identity-provider)
* CAS Identity Provider
* Aliyun IDaaS Provider

View File

@ -7,7 +7,7 @@ weight: 8630
---
KubeSphere v3.3.0 provides cluster-scope gateways so that all projects share one global gateway. This document describes how to set a cluster gateway on KubeSphere.
KubeSphere 3.3 provides cluster-scope gateways so that all projects share one global gateway. This document describes how to set a cluster gateway on KubeSphere.
## Prerequisites
@ -17,7 +17,7 @@ KubeSphere v3.3.0 provides cluster-scope gateways so that all projects share one
1. Log in to the web console as `admin`, click **Platform** in the upper-left corner, and select **Cluster Management**.
2. In the navigation pane, click **Gateway Settings** under **Cluster Settings**, select the **Cluster Gateway** tab, and click **Enable Gateway**.
2. In the navigation pane, click **Gateway Settings** under **Cluster Settings**, select the **Cluster Gateway** tab, and click **Enable Gateway**.
3. In the displayed dialog box, select the gateway access mode from the following two options:

View File

@ -6,7 +6,7 @@ linkTitle: "Introduction"
weight: 8621
---
KubeSphere provides a flexible log receiver configuration method. Powered by [FluentBit Operator](https://github.com/kubesphere/fluentbit-operator/), users can easily add, modify, delete, enable, or disable Elasticsearch, Kafka, and Fluentd receivers. Once a receiver is added, logs will be sent to it.
KubeSphere provides a flexible log receiver configuration method. Powered by [Fluent Operator](https://github.com/fluent/fluent-operator), users can easily add, modify, delete, enable, or disable Elasticsearch, Kafka, and Fluentd receivers. Once a receiver is added, logs will be sent to it.
This tutorial briefly describes the general steps for adding log receivers in KubeSphere.
@ -45,7 +45,7 @@ KubeSphere provides a flexible log receiver configuration method. Powered by [FluentBit Operat
If `logging`, `events`, or `auditing` is enabled in [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md), a default Elasticsearch receiver will be added with its service address set to an Elasticsearch cluster.
When `logging`, `events`, or `auditing` is enabled, an internal Elasticsearch cluster will be deployed to the Kubernetes cluster if neither `externalElasticsearchHost` nor `externalElasticsearchPort` is specified in [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md). The internal Elasticsearch cluster is for testing and development only. It is recommended that you integrate an external Elasticsearch cluster for production.
When `logging`, `events`, or `auditing` is enabled, an internal Elasticsearch cluster will be deployed to the Kubernetes cluster if neither `externalElasticsearchUrl` nor `externalElasticsearchPort` is specified in [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md). The internal Elasticsearch cluster is for testing and development only. It is recommended that you integrate an external Elasticsearch cluster for production.
Log search relies on the internal or external Elasticsearch cluster configured.
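For reference, a minimal sketch of where these fields sit in the `ClusterConfiguration` (placement assumed from the config example linked above; the address is hypothetical):

```yaml
spec:
  common:
    es:
      externalElasticsearchUrl: es.example.com
      externalElasticsearchPort: 9200
```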

View File

@ -16,7 +16,7 @@ Alertmanager handles alerts sent by client applications such as the Prometheus server
Alerting with Prometheus consists of two parts. The Prometheus server sends alerts to Alertmanager based on alerting rules. Alertmanager then manages these alerts, including silencing, inhibition, and aggregation, and sends notifications through different channels such as email, on-call notification systems, and chat platforms.
Starting from version 3.0, KubeSphere adds popular alerting rules from the open-source community to Prometheus as built-in alerting rules. By default, Prometheus in KubeSphere 3.3.0 continuously evaluates these built-in alerting rules and then sends alerts to Alertmanager.
Starting from version 3.0, KubeSphere adds popular alerting rules from the open-source community to Prometheus as built-in alerting rules. By default, Prometheus in KubeSphere 3.3 continuously evaluates these built-in alerting rules and then sends alerts to Alertmanager.
## Manage Kubernetes Event Alerts with Alertmanager

View File

@ -62,7 +62,7 @@ table th:nth-of-type(2) {
| Parameter | Description |
| :---- | :---- |
| Volume Expansion | Specified by `allowVolumeExpansion` in the YAML file. |
| Volume Expansion | Specified by `allowVolumeExpansion` in the YAML file. |
| Reclaim Policy | Specified by `reclaimPolicy` in the YAML file. |
| Access Mode | Specified by `.metadata.annotations.storageclass.kubesphere.io/supported-access-modes` in the YAML file. `ReadWriteOnce`, `ReadOnlyMany`, and `ReadWriteMany` are all selected by default. |
| Provisioner | Specified by `provisioner` in the YAML file. If you use the [chart of NFS-client](https://github.com/kubesphere/helm-charts/tree/master/src/main/nfs-client-provisioner) to install the storage class, it can be set to `cluster.local/nfs-client-nfs-client-provisioner`. |
@ -144,17 +144,17 @@ Ceph RBD is also an in-tree storage plugin on Kubernetes, that is, in Kubernetes
| Parameter | Description |
| :---- | :---- |
| monitors | IP addresses of the Ceph cluster monitors. |
| adminId | ID of a user in the Ceph cluster that can create volumes. |
| adminSecretName | Name of the Secret of `adminId`. |
| adminSecretNamespace | Project (namespace) where `adminSecret` resides. |
| pool | Name of the Ceph RBD pool. |
| userId | ID of a user in the Ceph cluster that can mount volumes. |
| userSecretName | Name of the Secret of `userId`. |
| userSecretNamespace | Project (namespace) where `userSecret` resides. |
| MONITORS | IP addresses of the Ceph cluster monitors. |
| ADMINID | ID of a user in the Ceph cluster that can create volumes. |
| ADMINSECRETNAME | Name of the Secret of `adminId`. |
| ADMINSECRETNAMESPACE | Project (namespace) where `adminSecret` resides. |
| POOL | Name of the Ceph RBD pool. |
| USERID | ID of a user in the Ceph cluster that can mount volumes. |
| USERSECRETNAME | Name of the Secret of `userId`. |
| USERSECRETNAMESPACE | Project (namespace) where `userSecret` resides. |
| File System Type | File system type of the volume. |
| imageFormat | Option of the Ceph volume. The value can be `1` or `2`; `imageFeatures` is required when you select `2`. |
| imageFeatures | Additional features of the Ceph cluster. The value is required only when `imageFormat` is set to `2`. |
| IMAGEFORMAT | Option of the Ceph volume. The value can be `1` or `2`; `imageFeatures` is required when you select `2`. |
| IMAGEFEATURES | Additional features of the Ceph cluster. The value is required only when `imageFormat` is set to `2`. |
For more information about storage class parameters, see [Ceph RBD in the Kubernetes documentation](https://kubernetes.io/zh/docs/concepts/storage/storage-classes/#ceph-rbd).
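Put together, a StorageClass using these parameters might look like the following sketch (values adapted from the Kubernetes documentation example; all names and addresses are placeholders):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd                       # placeholder name
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.16.153.105:6789         # Ceph monitor address
  adminId: kube
  adminSecretName: ceph-secret
  adminSecretNamespace: kube-system
  pool: kube
  userId: kube
  userSecretName: ceph-secret-user
  userSecretNamespace: default
  fsType: ext4
  imageFormat: "2"
  imageFeatures: "layering"            # required because imageFormat is "2"
```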
@ -168,7 +168,7 @@ NFS (Network File System) is widely used with [NFS-Client](https://github.com/ku
{{< notice note >}}
NFS is incompatible with some applications (for example, Prometheus), which may result in pod creation failures. If you do need to use NFS in a production environment, make sure you understand the risks or contact KubeSphere technical support at support@kubesphere.cloud.
It is not recommended that you use NFS storage in a production environment (especially on Kubernetes 1.20 or later), as it may cause issues such as `failed to obtain lock` and `input/output error`, resulting in pod `CrashLoopBackOff`. In addition, some applications are not compatible with NFS, such as [Prometheus](https://github.com/prometheus/prometheus/blob/03b354d4d9386e4b3bfbcd45da4bb58b182051a5/docs/storage.md#operational-aspects).
{{</ notice >}}

View File

@ -40,7 +40,7 @@ weight: 11440
{{< notice note >}}
These Kubernetes clusters can be hosted by different cloud providers and can use different Kubernetes versions. Recommended Kubernetes versions for KubeSphere 3.3.0: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
These Kubernetes clusters can be hosted by different cloud providers and can use different Kubernetes versions. Recommended Kubernetes versions for KubeSphere 3.3: v1.19.x, v1.20.x, v1.21.x, v1.22.x, and v1.23.x (experimental support).
{{</ notice >}}

View File

@ -7,7 +7,7 @@ weight: 11231
---
KubeSphere 3.3.0 allows you to import GitHub, GitLab, Bitbucket, or other Git-based code repositories such as Gitee. The following uses a GitHub repository as an example to show how to import a code repository.
KubeSphere 3.3 allows you to import GitHub, GitLab, Bitbucket, or other Git-based code repositories such as Gitee. The following uses a GitHub repository as an example to show how to import a code repository.
## Prerequisites

View File

@ -6,7 +6,7 @@ linkTitle: "Implement Continuous Deployment of Applications with GitOps"
weight: 11221
---
KubeSphere 3.3.0 introduces GitOps, an approach to implementing continuous deployment for cloud-native applications. The core idea of GitOps is to have a Git repository that stores the declarative infrastructure and the applications of a system under version control. Combined with Kubernetes, GitOps can use automated delivery pipelines to apply changes to any number of specified clusters, thereby solving the consistency problem of cross-cloud deployments.
KubeSphere 3.3 introduces GitOps, an approach to implementing continuous deployment for cloud-native applications. The core idea of GitOps is to have a Git repository that stores the declarative infrastructure and the applications of a system under version control. Combined with Kubernetes, GitOps can use automated delivery pipelines to apply changes to any number of specified clusters, thereby solving the consistency problem of cross-cloud deployments.
This example demonstrates how to create a continuous deployment to deploy an application.

View File

@ -5,7 +5,7 @@ description: 'Describes how to add an allowlist for continuous deployment in KubeSphere.'
linkTitle: "Add a Continuous Deployment Allowlist"
weight: 11243
---
In KubeSphere 3.3.0, you can set an allowlist to restrict the target locations for continuous deployment of resources.
In KubeSphere 3.3, you can set an allowlist to restrict the target locations for continuous deployment of resources.
## Prerequisites

View File

@ -288,7 +288,7 @@ The graphical editing panel in KubeSphere includes Jenkins [stages](https:/
{{< notice note >}}
In KubeSphere 3.3.0, an account that can run a pipeline can also proceed with or terminate that pipeline. In addition, the pipeline creator, users with the administrator role of the project, or an account you specify also has permission to proceed with or terminate the pipeline.
In KubeSphere 3.3, an account that can run a pipeline can also proceed with or terminate that pipeline. In addition, the pipeline creator, users with the administrator role of the project, or an account you specify also has permission to proceed with or terminate the pipeline.
{{</ notice >}}

View File

@ -219,7 +219,7 @@ Two types of pipelines can be created in KubeSphere: one is the type introduced in this tutorial
{{< notice note >}}
In KubeSphere 3.3.0, if no reviewer is specified, an account that can run a pipeline can also proceed with or terminate that pipeline. The pipeline creator, users with the `admin` role in the project, or an account you specify also has permission to proceed with or terminate the pipeline.
In KubeSphere 3.3, if no reviewer is specified, an account that can run a pipeline can also proceed with or terminate that pipeline. The pipeline creator, users with the `admin` role in the project, or an account you specify also has permission to proceed with or terminate the pipeline.
{{</ notice >}}

View File

@ -8,7 +8,7 @@ weight: 11215
[GitLab](https://about.gitlab.com/) is an open-source code repository platform that provides public and private repositories. It is also a complete DevOps platform that professionals can use to perform tasks in projects.
In KubeSphere 3.3.0 and later, you can use GitLab to create multi-branch pipelines in DevOps projects. This tutorial describes how to create a multi-branch pipeline with GitLab.
In KubeSphere 3.3, you can use GitLab to create multi-branch pipelines in DevOps projects. This tutorial describes how to create a multi-branch pipeline with GitLab.
## Prerequisites

View File

@ -6,7 +6,7 @@ linkTitle: "Use Pipeline Templates"
weight: 11213
---
KubeSphere provides a graphical editing panel where you can define the stages and steps of a Jenkins pipeline through interactive operations. KubeSphere 3.3.0 provides built-in pipeline templates, such as Node.js, Maven, and Golang, so that users can quickly create pipelines from the corresponding templates. KubeSphere 3.3.0 also supports custom pipeline templates to meet the different needs of enterprises.
KubeSphere provides a graphical editing panel where you can define the stages and steps of a Jenkins pipeline through interactive operations. KubeSphere 3.3 provides built-in pipeline templates, such as Node.js, Maven, and Golang, so that users can quickly create pipelines from the corresponding templates. KubeSphere 3.3 also supports custom pipeline templates to meet the different needs of enterprises.
This document demonstrates how to use pipeline templates on KubeSphere.

View File

@ -78,7 +78,7 @@ kubectl -n kubesphere-system rollout restart deploy ks-controller-manager
If you use the wrong version of ks-installer, the component versions will not match after installation.
Check whether the component versions are consistent in the following way. The correct image tag should be v3.3.0:
Check whether the component versions are consistent in the following way. The correct image tag should be v3.3.1:
```
kubectl -n kubesphere-system get deploy ks-installer -o jsonpath='{.spec.template.spec.containers[0].image}'
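# A sketch of the same check for the other core components (same jsonpath, different Deployments):
kubectl -n kubesphere-system get deploy ks-apiserver -o jsonpath='{.spec.template.spec.containers[0].image}'
kubectl -n kubesphere-system get deploy ks-controller-manager -o jsonpath='{.spec.template.spec.containers[0].image}'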

View File

@ -31,9 +31,9 @@ Weight: 16520
```yaml
client:
version:
kubesphere: v3.3.0
kubernetes: v1.22.10
openpitrix: v3.3.0
kubesphere: v3.3.1
kubernetes: v1.21.5
openpitrix: v3.3.1
enableKubeConfig: true
systemWorkspace: "$" # Add this line manually.
```
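A sketch of how to open this configuration for editing, assuming the settings live in the `kubesphere-config` ConfigMap in `kubesphere-system`:

```bash
kubectl -n kubesphere-system edit cm kubesphere-config
```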

View File

@ -10,7 +10,7 @@ weight: 16200
## Obtain an Accelerator Address
You need to obtain a mirror address of the registry to configure the accelerator. You can refer to how to [obtain an accelerator address from Alibaba Cloud](https://www.alibabacloud.com/help/zh/doc-detail/60750.htm?spm=a2c63.p38356.b99.18.4f4133f0uTKb8S).
You need to obtain a mirror address of the registry to configure the accelerator. You can refer to how to [obtain an accelerator address from Alibaba Cloud](https://help.aliyun.com/document_detail/60750.html).
## Configure the Registry Mirror Address

View File

@ -29,7 +29,7 @@ Telemetry collects the size of installed KubeSphere clusters and the KubeSphere and Kubernetes
### Disable Telemetry Before Installation
When you install KubeSphere on an existing Kubernetes cluster, you need to download the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file to configure the cluster. To disable Telemetry, do not run the `kubectl apply -f` command directly to apply this file.
When you install KubeSphere on an existing Kubernetes cluster, you need to download the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file to configure the cluster. To disable Telemetry, do not run the `kubectl apply -f` command directly to apply this file.
{{< notice note >}}
@ -37,7 +37,7 @@ Telemetry collects the size of installed KubeSphere clusters and the KubeSphere
{{</ notice >}}
1. Download the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/cluster-configuration.yaml) file and edit it.
1. Download the [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/cluster-configuration.yaml) file and edit it.
```bash
vi cluster-configuration.yaml
@ -57,7 +57,7 @@ Telemetry collects the size of installed KubeSphere clusters and the KubeSphere
3. Save the file and run the following commands to start installation:
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.3.1/kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```
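The edit in step 2 adds the telemetry switch. A sketch, assuming the `telemetry_enabled` field recognized by ks-installer:

```yaml
spec:
  telemetry_enabled: false   # assumed field name; disables Telemetry collection
```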

View File

@ -6,12 +6,20 @@ linkTitle: "Integrate Your Own Prometheus"
Weight: 16330
---
KubeSphere comes with several pre-installed customized monitoring components, including Prometheus Operator, Prometheus, Alertmanager, Grafana (optional), various ServiceMonitors, node-exporter, and kube-state-metrics. These components may already exist before you install KubeSphere. In KubeSphere 3.3.0, you can use your own Prometheus stack setup.
KubeSphere comes with several pre-installed customized monitoring components, including Prometheus Operator, Prometheus, Alertmanager, Grafana (optional), various ServiceMonitors, node-exporter, and kube-state-metrics. These components may already exist before you install KubeSphere. In KubeSphere 3.3, you can use your own Prometheus stack setup.
## Integrate Your Own Prometheus
## Steps to Integrate Your Own Prometheus
To use your own Prometheus stack setup, perform the following steps:
1. Uninstall the customized Prometheus stack of KubeSphere
2. Install your own Prometheus stack
3. Install the KubeSphere customized components in your Prometheus stack
4. Change the `monitoring endpoint` of KubeSphere
### Step 1: Uninstall the Customized Prometheus Stack of KubeSphere
1. Run the following commands to uninstall the stack:
@ -41,13 +49,13 @@ KubeSphere comes with several pre-installed customized monitoring components, including Prometheus Operat
{{< notice note >}}
KubeSphere 3.3.0 has been certified to work with the following Prometheus stack components:
KubeSphere 3.3 has been certified to work with the following Prometheus stack components:
- Prometheus Operator **v0.55.1+**
- Prometheus **v2.34.0+**
- Alertmanager **v0.23.0+**
- kube-state-metrics **v2.5.0**
- node-exporter **v1.3.1**
- Prometheus Operator **v0.38.3+**
- Prometheus **v2.20.1+**
- Alertmanager **v0.21.0+**
- kube-state-metrics **v1.9.6**
- node-exporter **v0.18.1**
Make sure your Prometheus stack component versions meet the above requirements, especially **node-exporter** and **kube-state-metrics**.
@ -57,97 +65,92 @@ KubeSphere 3.3.0 has been certified to work with the following Prometheus stack compo
{{</ notice >}}
The Prometheus stack can be installed in many ways. The following steps demonstrate how to install the Prometheus stack in the `monitoring` namespace using `ks-prometheus` (based on the upstream `kube-prometheus` project).
The Prometheus stack can be installed in many ways. The following steps demonstrate how to install the Prometheus stack in the `monitoring` namespace using the **upstream `kube-prometheus`**.
1. Obtain the `ks-prometheus` used by KubeSphere 3.3.0:
1. Obtain kube-prometheus v0.6.0, whose node-exporter version, v0.18.1, matches the one used by KubeSphere 3.3:
```bash
cd ~ && git clone -b release-3.3 https://github.com/kubesphere/ks-prometheus.git && cd ks-prometheus
cd ~ && git clone https://github.com/prometheus-operator/kube-prometheus.git && cd kube-prometheus && git checkout tags/v0.6.0 -b v0.6.0
```
2. Set the namespace:
2. Set the namespace to `monitoring` and install Prometheus Operator and the corresponding roles:
```bash
sed -i 's/kubesphere-monitoring-system/monitoring/g' kustomization.yaml
kubectl apply -f manifests/setup/
```
3. (Optional) Remove unnecessary components. For example, if Grafana is not enabled in KubeSphere, you can delete the `grafana` section in `kustomization.yaml`:
3. Wait for a while until Prometheus Operator is up and running.
```bash
sed -i '/manifests\/grafana\//d' kustomization.yaml
kubectl -n monitoring get pod --watch
```
4. Install the stack:
4. Remove unnecessary components, such as Prometheus Adapter:
```bash
kubectl apply -k .
rm -rf manifests/prometheus-adapter-*.yaml
```
5. Change the version of kube-state-metrics to v1.9.6, which is used by KubeSphere 3.3.
```bash
sed -i 's/v1.9.5/v1.9.6/g' manifests/kube-state-metrics-deployment.yaml
```
6. Install Prometheus, Alertmanager, Grafana, kube-state-metrics, and node-exporter. You can apply only the YAML files `kube-state-metrics-*.yaml` and `node-exporter-*.yaml` to install kube-state-metrics or node-exporter separately.
```bash
kubectl apply -f manifests/
```
### Step 3: Install the KubeSphere Customized Components in Your Prometheus Stack
{{< notice note >}}
If your Prometheus stack was installed through `ks-prometheus`, you can skip this step.
KubeSphere 3.3 uses Prometheus Operator to manage the Prometheus/Alertmanager configuration and lifecycle, ServiceMonitors (for managing scrape configurations), and PrometheusRules (for managing Prometheus recording/alerting rules).
KubeSphere 3.3.0 uses Prometheus Operator to manage the Prometheus/Alertmanager configuration and lifecycle, ServiceMonitors (for managing scrape configurations), and PrometheusRules (for managing Prometheus recording/alerting rules).
The [KubeSphere kustomization](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/kustomization.yaml) lists a number of items, among which `prometheus-rules.yaml` and `prometheus-rulesEtcd.yaml` are required for KubeSphere 3.3 to work properly, while the rest are optional. If you don't want your existing Alertmanager configuration to be overwritten, you can remove `alertmanager-secret.yaml`. If you don't want your own ServiceMonitors to be overwritten (the ServiceMonitors customized by KubeSphere drop many irrelevant metrics so that Prometheus stores only the most useful ones), you can remove `xxx-serviceMonitor.yaml`.
If your Prometheus stack is not managed by Prometheus Operator, you can skip this step. However, make sure that:
- You must copy the recording/alerting rules in [PrometheusRule](https://github.com/kubesphere/ks-prometheus/blob/release-3.3/manifests/kubernetes/kubernetes-prometheusRule.yaml) and [PrometheusRule for etcd](https://github.com/kubesphere/ks-prometheus/blob/release-3.3/manifests/etcd/prometheus-rulesEtcd.yaml) to your Prometheus configuration so that KubeSphere 3.3.0 can work properly.
- You must copy the recording/alerting rules in [PrometheusRule](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/prometheus-rules.yaml) and [PrometheusRule for etcd](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/prometheus-rulesEtcd.yaml) to your Prometheus configuration so that KubeSphere 3.3 can work properly.
- Configure your Prometheus so that its scrape targets are the same as those listed in the [serviceMonitor](https://github.com/kubesphere/ks-prometheus/tree/release-3.3/manifests) files of each component.
- Configure your Prometheus so that its scrape targets are the same as those of the ServiceMonitors listed in the [KubeSphere kustomization](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/kustomization.yaml).
{{</ notice >}}
1. Obtain the `ks-prometheus` used by KubeSphere 3.3.0:
1. Obtain the customized kube-prometheus of KubeSphere 3.3:
```bash
cd ~ && git clone -b release-3.3 https://github.com/kubesphere/ks-prometheus.git && cd ks-prometheus
cd ~ && mkdir kubesphere && cd kubesphere && git clone https://github.com/kubesphere/kube-prometheus.git && cd kube-prometheus/kustomize
```
2. Set up `kustomization.yaml` and keep only the following content:
2. Change the namespace to the one in which you deployed your own Prometheus stack. For example, if you installed Prometheus in the `monitoring` namespace following Step 2, it is `monitoring` here:
```yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: <your own namespace>
resources:
- ./manifests/alertmanager/alertmanager-secret.yaml
- ./manifests/etcd/prometheus-rulesEtcd.yaml
- ./manifests/kube-state-metrics/kube-state-metrics-serviceMonitor.yaml
- ./manifests/kubernetes/kubernetes-prometheusRule.yaml
- ./manifests/kubernetes/kubernetes-serviceKubeControllerManager.yaml
- ./manifests/kubernetes/kubernetes-serviceKubeScheduler.yaml
- ./manifests/kubernetes/kubernetes-serviceMonitorApiserver.yaml
- ./manifests/kubernetes/kubernetes-serviceMonitorCoreDNS.yaml
- ./manifests/kubernetes/kubernetes-serviceMonitorKubeControllerManager.yaml
- ./manifests/kubernetes/kubernetes-serviceMonitorKubeScheduler.yaml
- ./manifests/kubernetes/kubernetes-serviceMonitorKubelet.yaml
- ./manifests/node-exporter/node-exporter-serviceMonitor.yaml
- ./manifests/prometheus/prometheus-clusterRole.yaml
```bash
sed -i 's/my-namespace/<your own namespace>/g' kustomization.yaml
```
{{< notice note >}}
- Set the value of `namespace` here to your own namespace. For example, if you installed Prometheus in the `monitoring` namespace in Step 2, it is `monitoring` here.
- If you have enabled KubeSphere alerting, you also need to keep the `thanos-ruler` section in `kustomization.yaml`.
{{</ notice >}}
3. Install the required KubeSphere components above.
3. Apply the KubeSphere customized components, including Prometheus rules, the Alertmanager configuration, and various ServiceMonitors:
```bash
kubectl apply -k .
```
4. Find the Prometheus CR in your own namespace, which is usually named `k8s`.
4. Configure Services to expose the kube-scheduler and kube-controller-manager metrics:
```bash
kubectl apply -f ./prometheus-serviceKubeScheduler.yaml
kubectl apply -f ./prometheus-serviceKubeControllerManager.yaml
```
5. Find the Prometheus CR in your own namespace, which is usually named `k8s`.
```bash
kubectl -n <your own namespace> get prometheus
```
5. Set the Prometheus rule evaluation interval to 1m to be consistent with the customized ServiceMonitors of KubeSphere 3.3.0. The rule evaluation interval should be greater than or equal to the scrape interval.
6. Set the Prometheus rule evaluation interval to 1m to be consistent with the customized ServiceMonitors of KubeSphere 3.3. The rule evaluation interval should be greater than or equal to the scrape interval.
```bash
kubectl -n <your own namespace> patch prometheus k8s --patch '{
@ -161,40 +164,34 @@ KubeSphere 3.3.0 uses Prometheus Operator to manage the Prometheus/Alertmanager
Your own Prometheus stack is now up and running. You can change the monitoring endpoint of KubeSphere to use your own Prometheus.
1. Run the following command to edit `kubesphere-config`:
1. Run the following command to edit `kubesphere-config`:
```bash
kubectl edit cm -n kubesphere-system kubesphere-config
```
2. Search for the `monitoring endpoint` section, as shown below.
2. Search for the `monitoring endpoint` section, as shown below:
```yaml
```bash
monitoring:
endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
```
3. Change the value of `endpoint` to your own Prometheus.
3. Change the `monitoring endpoint` to your own Prometheus:
```yaml
```bash
monitoring:
endpoint: http://prometheus-operated.monitoring.svc:9090
```
4. If you have enabled the KubeSphere alerting component, search for `prometheusEndpoint` and `thanosRulerEndpoint` under `alerting` and modify them as shown in the following example. The KubeSphere API server will restart automatically for the settings to take effect.
4. Run the following command to restart the KubeSphere API server:
```yaml
...
alerting:
...
prometheusEndpoint: http://prometheus-operated.monitoring.svc:9090
thanosRulerEndpoint: http://thanos-ruler-operated.monitoring.svc:10902
...
...
```bash
kubectl -n kubesphere-system rollout restart deployment/ks-apiserver
```
{{< notice warning >}}
If you enable or disable KubeSphere pluggable components by following [this guide](../../../pluggable-components/overview/), the `monitoring endpoint` will be reset to its initial value. In this case, you need to change it to your own Prometheus again.
If you enable or disable KubeSphere pluggable components by following [this guide](../../../pluggable-components/overview/), the `monitoring endpoint` will be reset to its initial value. In this case, you need to change it to your own Prometheus again and restart the KubeSphere API server.
{{</ notice >}}

Some files were not shown because too many files have changed in this diff.