diff --git a/content/en/docs/installing-on-kubernetes/on-prem-kubernetes/installing-kubesphere-on-minikube.md b/content/en/docs/installing-on-kubernetes/on-prem-kubernetes/installing-kubesphere-on-minikube.md index 46478a0cc..7c02e6564 100644 --- a/content/en/docs/installing-on-kubernetes/on-prem-kubernetes/installing-kubesphere-on-minikube.md +++ b/content/en/docs/installing-on-kubernetes/on-prem-kubernetes/installing-kubesphere-on-minikube.md @@ -116,7 +116,7 @@ After you make sure your machine meets the conditions, perform the following ste 3. After KubeSphere is successfully installed, you can run the following command to view the installation logs: ```bash - kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f + kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f ``` 4. Use `kubectl get pod --all-namespaces` to see whether all Pods are running normally in relevant namespaces of KubeSphere. If they are, check the port (`30880` by default) of the console by running the following command: diff --git a/content/en/docs/v3.3/cluster-administration/cluster-settings/log-collections/introduction.md b/content/en/docs/v3.3/cluster-administration/cluster-settings/log-collections/introduction.md index 3d97e57b7..833e9c790 100644 --- a/content/en/docs/v3.3/cluster-administration/cluster-settings/log-collections/introduction.md +++ b/content/en/docs/v3.3/cluster-administration/cluster-settings/log-collections/introduction.md @@ -45,7 +45,7 @@ To add a log receiver: A default Elasticsearch receiver will be added with its service address set to an Elasticsearch cluster if `logging`, `events`, or `auditing` is enabled in [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md). -An internal Elasticsearch cluster will be deployed to the Kubernetes cluster if neither `externalElasticsearchUrl` nor `externalElasticsearchPort` is specified in [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md) when `logging`, `events`, or `auditing` is enabled. The internal Elasticsearch cluster is for testing and development only. It is recommended that you configure an external Elasticsearch cluster for production. +An internal Elasticsearch cluster will be deployed to the Kubernetes cluster if neither `externalElasticsearchHost` nor `externalElasticsearchPort` is specified in [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md) when `logging`, `events`, or `auditing` is enabled. The internal Elasticsearch cluster is for testing and development only. It is recommended that you configure an external Elasticsearch cluster for production. Log searching relies on the internal or external Elasticsearch cluster configured. 
diff --git a/content/en/docs/v3.3/cluster-administration/storageclass.md b/content/en/docs/v3.3/cluster-administration/storageclass.md index 94f27321e..5ccc1ac37 100644 --- a/content/en/docs/v3.3/cluster-administration/storageclass.md +++ b/content/en/docs/v3.3/cluster-administration/storageclass.md @@ -146,7 +146,7 @@ NFS (Net File System) is widely used on Kubernetes with the external-provisioner {{< notice note >}} -It is not recommended that you use NFS storage for production (especially on Kubernetes version 1.20 or later) as some issues may occur, such as `failed to obtain lock` and `input/output error`, resulting in Pod `CrashLoopBackOff`. Besides, some apps may not be compatible with NFS, including [Prometheus](https://github.com/prometheus/prometheus/blob/03b354d4d9386e4b3bfbcd45da4bb58b182051a5/docs/storage.md#operational-aspects). +NFS is incompatible with some applications, for example, Prometheus, which may result in pod creation failures. If you need to use NFS in the production environment, ensure that you have understood the risks. For more information, contact support@kubesphere.cloud. {{}} diff --git a/content/en/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-graphical-editing-panel.md b/content/en/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-graphical-editing-panel.md index b1fcdc9ba..5640d4f8a 100644 --- a/content/en/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-graphical-editing-panel.md +++ b/content/en/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-graphical-editing-panel.md @@ -288,7 +288,7 @@ This stage uses SonarQube to test your code. You can skip this stage if you do n {{< notice note >}} - In KubeSphere 3.3, the account that can run a pipeline will be able to continue or terminate the pipeline if there is no reviewer specified. Pipeline creators, accounts with the role of `admin` in a project, or the account you specify will be able to continue or terminate a pipeline. + In KubeSphere 3.3, the account that can run a pipeline will be able to continue or terminate the pipeline if there is no reviewer specified. Pipeline creators and the account you specify will be able to continue or terminate a pipeline. {{}} diff --git a/content/en/docs/v3.3/faq/observability/byop.md b/content/en/docs/v3.3/faq/observability/byop.md index 2a31b9186..3906bd245 100644 --- a/content/en/docs/v3.3/faq/observability/byop.md +++ b/content/en/docs/v3.3/faq/observability/byop.md @@ -6,19 +6,9 @@ linkTitle: "Bring Your Own Prometheus" Weight: 16330 --- -KubeSphere comes with several pre-installed customized monitoring components including Prometheus Operator, Prometheus, Alertmanager, Grafana (Optional), various ServiceMonitors, node-exporter, and kube-state-metrics. These components might already exist before you install KubeSphere. It is possible to use your own Prometheus stack setup in KubeSphere 3.3. +KubeSphere comes with several pre-installed customized monitoring components, including Prometheus Operator, Prometheus, Alertmanager, Grafana (Optional), various ServiceMonitors, node-exporter, and kube-state-metrics. These components might already exist before you install KubeSphere. It is possible to use your own Prometheus stack setup in KubeSphere v3.3. -## Steps to Bring Your Own Prometheus - -To use your own Prometheus stack setup, perform the following steps: - -1. Uninstall the customized Prometheus stack of KubeSphere - -2. Install your own Prometheus stack - -3. 
Install KubeSphere customized stuff to your Prometheus stack - -4. Change KubeSphere's `monitoring endpoint` +## Bring Your Own Prometheus ### Step 1. Uninstall the customized Prometheus stack of KubeSphere @@ -39,7 +29,7 @@ To use your own Prometheus stack setup, perform the following steps: # kubectl -n kubesphere-system exec $(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}') -- kubectl delete -f /kubesphere/kubesphere/prometheus/init/ 2>/dev/null ``` -2. Delete the PVC that Prometheus used. +2. Delete the PVC that Prometheus uses. ```bash kubectl -n kubesphere-monitoring-system delete pvc `kubectl -n kubesphere-monitoring-system get pvc | grep -v VOLUME | awk '{print $1}' | tr '\n' ' '` @@ -51,106 +41,110 @@ To use your own Prometheus stack setup, perform the following steps: KubeSphere 3.3 was certified to work well with the following Prometheus stack components: -- Prometheus Operator **v0.38.3+** -- Prometheus **v2.20.1+** -- Alertmanager **v0.21.0+** -- kube-state-metrics **v1.9.6** -- node-exporter **v0.18.1** +- Prometheus Operator **v0.55.1+** +- Prometheus **v2.34.0+** +- Alertmanager **v0.23.0+** +- kube-state-metrics **v2.5.0** +- node-exporter **v1.3.1** -Make sure your Prometheus stack components' version meets these version requirements especially **node-exporter** and **kube-state-metrics**. +Make sure your Prometheus stack components' version meets these version requirements, especially **node-exporter** and **kube-state-metrics**. -Make sure you install **node-exporter** and **kube-state-metrics** if only **Prometheus Operator** and **Prometheus** were installed. **node-exporter** and **kube-state-metrics** are required for KubeSphere to work properly. +Make sure you install **node-exporter** and **kube-state-metrics** if only **Prometheus Operator** and **Prometheus** are installed. **node-exporter** and **kube-state-metrics** are required for KubeSphere to work properly. **If you've already had the entire Prometheus stack up and running, you can skip this step.** {{}} -The Prometheus stack can be installed in many ways. The following steps show how to install it into the namespace `monitoring` using **upstream `kube-prometheus`**. +The Prometheus stack can be installed in many ways. The following steps show how to install it into the namespace `monitoring` using `ks-prometheus` (based on the **upstream `kube-prometheus`** project). -1. Get kube-prometheus version v0.6.0 whose node-exporter's version v0.18.1 matches the one KubeSphere 3.3 is using. +1. Obtain `ks-prometheus` that KubeSphere v3.3.0 uses. ```bash - cd ~ && git clone https://github.com/prometheus-operator/kube-prometheus.git && cd kube-prometheus && git checkout tags/v0.6.0 -b v0.6.0 + cd ~ && git clone -b release-3.3 https://github.com/kubesphere/ks-prometheus.git && cd ks-prometheus ``` -2. Setup the `monitoring` namespace, and install Prometheus Operator and corresponding roles: +2. Set up the `monitoring` namespace. ```bash - kubectl apply -f manifests/setup/ + sed -i 's/kubesphere-monitoring-system/monitoring/g' kustomization.yaml ``` -3. Wait until Prometheus Operator is up and running. +3. Remove unnecessary components. For example, if Grafana is not enabled in KubeSphere, you can run the following command to delete the Grafana section in `kustomization.yaml`. ```bash - kubectl -n monitoring get pod --watch + sed -i '/manifests\/grafana\//d' kustomization.yaml ``` -4. Remove unnecessary components such as Prometheus Adapter. +4. Install the stack. 
```bash - rm -rf manifests/prometheus-adapter-*.yaml - ``` - -5. Change kube-state-metrics to the same version v1.9.6 as KubeSphere 3.3 is using. - - ```bash - sed -i 's/v1.9.5/v1.9.6/g' manifests/kube-state-metrics-deployment.yaml - ``` - -6. Install Prometheus, Alertmanager, Grafana, kube-state-metrics, and node-exporter. You can only install kube-state-metrics or node-exporter by only applying the yaml file `kube-state-metrics-*.yaml` or `node-exporter-*.yaml`. - - ```bash - kubectl apply -f manifests/ + kubectl apply -k . ``` ### Step 3. Install KubeSphere customized stuff to your Prometheus stack {{< notice note >}} -KubeSphere 3.3 uses Prometheus Operator to manage Prometheus/Alertmanager config and lifecycle, ServiceMonitor (to manage scrape config), and PrometheusRule (to manage Prometheus recording/alert rules). +If your Prometheus stack is not installed using `ks-prometheus`, skip this step. -There are a few items listed in [KubeSphere kustomization](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/kustomization.yaml), among which `prometheus-rules.yaml` and `prometheus-rulesEtcd.yaml` are required for KubeSphere 3.3 to work properly and others are optional. You can remove `alertmanager-secret.yaml` if you don't want your existing Alertmanager's config to be overwritten. You can remove `xxx-serviceMonitor.yaml` if you don't want your own ServiceMonitors to be overwritten (KubeSphere customized ServiceMonitors discard many irrelevant metrics to make sure Prometheus only stores the most useful metrics). +KubeSphere 3.3.0 uses Prometheus Operator to manage Prometheus/Alertmanager config and lifecycle, ServiceMonitor (to manage scrape config), and PrometheusRule (to manage Prometheus recording/alert rules). If your Prometheus stack setup isn't managed by Prometheus Operator, you can skip this step. But you have to make sure that: -- You must copy the recording/alerting rules in [PrometheusRule](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/prometheus-rules.yaml) and [PrometheusRule for etcd](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/prometheus-rulesEtcd.yaml) to your Prometheus config for KubeSphere 3.3 to work properly. +- You must copy the recording/alerting rules in [PrometheusRule](https://github.com/kubesphere/ks-prometheus/blob/release-3.3/manifests/kubernetes/kubernetes-prometheusRule.yaml) and [PrometheusRule for etcd](https://github.com/kubesphere/ks-prometheus/blob/release-3.3/manifests/etcd/prometheus-rulesEtcd.yaml) to your Prometheus config for KubeSphere v3.3.0 to work properly. -- Configure your Prometheus to scrape metrics from the same targets as the ServiceMonitors listed in [KubeSphere kustomization](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/kustomization.yaml). +- Configure your Prometheus to scrape metrics from the same targets as that in [serviceMonitor](https://github.com/kubesphere/ks-prometheus/tree/release-3.3/manifests) of each component. {{}} -1. Get KubeSphere 3.3 customized kube-prometheus. +1. Obtain `ks-prometheus` that KubeSphere v3.3.0 uses. ```bash - cd ~ && mkdir kubesphere && cd kubesphere && git clone https://github.com/kubesphere/kube-prometheus.git && cd kube-prometheus/kustomize + cd ~ && git clone -b release-3.3 https://github.com/kubesphere/ks-prometheus.git && cd ks-prometheus ``` -2. Change the namespace to your own in which the Prometheus stack is deployed. 
For example, it is `monitoring` if you install Prometheus in the `monitoring` namespace following Step 2. +2. Configure `kustomization.yaml` and retain the following content only. - ```bash - sed -i 's/my-namespace//g' kustomization.yaml + ```yaml + apiVersion: kustomize.config.k8s.io/v1beta1 + kind: Kustomization + namespace: + resources: + - ./manifests/alertmanager/alertmanager-secret.yaml + - ./manifests/etcd/prometheus-rulesEtcd.yaml + - ./manifests/kube-state-metrics/kube-state-metrics-serviceMonitor.yaml + - ./manifests/kubernetes/kubernetes-prometheusRule.yaml + - ./manifests/kubernetes/kubernetes-serviceKubeControllerManager.yaml + - ./manifests/kubernetes/kubernetes-serviceKubeScheduler.yaml + - ./manifests/kubernetes/kubernetes-serviceMonitorApiserver.yaml + - ./manifests/kubernetes/kubernetes-serviceMonitorCoreDNS.yaml + - ./manifests/kubernetes/kubernetes-serviceMonitorKubeControllerManager.yaml + - ./manifests/kubernetes/kubernetes-serviceMonitorKubeScheduler.yaml + - ./manifests/kubernetes/kubernetes-serviceMonitorKubelet.yaml + - ./manifests/node-exporter/node-exporter-serviceMonitor.yaml + - ./manifests/prometheus/prometheus-clusterRole.yaml ``` -3. Apply KubeSphere customized stuff including Prometheus rules, Alertmanager config, and various ServiceMonitors. + {{< notice note >}} + + - Set the value of `namespace` to your own namespace in which the Prometheus stack is deployed. For example, it is `monitoring` if you install Prometheus in the `monitoring` namespace in Step 2. + - If you have enabled the alerting component for KubeSphere, retain `thanos-ruler` in the `kustomization.yaml` file. + + {{}} + +3. Install the required components of KubeSphere. ```bash kubectl apply -k . ``` -4. Setup Services for kube-scheduler and kube-controller-manager metrics exposure. - - ```bash - kubectl apply -f ./prometheus-serviceKubeScheduler.yaml - kubectl apply -f ./prometheus-serviceKubeControllerManager.yaml - ``` - -5. Find the Prometheus CR which is usually Kubernetes in your own namespace. +4. Find the Prometheus CR which is usually `k8s` in your own namespace. ```bash kubectl -n get prometheus ``` -6. Set the Prometheus rule evaluation interval to 1m to be consistent with the KubeSphere 3.3 customized ServiceMonitor. The Rule evaluation interval should be greater or equal to the scrape interval. +5. Set the Prometheus rule evaluation interval to 1m to be consistent with the KubeSphere v3.3.0 customized ServiceMonitor. The Rule evaluation interval should be greater than or equal to the scrape interval. ```bash kubectl -n patch prometheus k8s --patch '{ @@ -164,13 +158,13 @@ If your Prometheus stack setup isn't managed by Prometheus Operator, you can ski Now that your own Prometheus stack is up and running, you can change KubeSphere's monitoring endpoint to use your own Prometheus. -1. Edit `kubesphere-config` by running the following command: +1. Run the following command to edit `kubesphere-config`. ```bash kubectl edit cm -n kubesphere-system kubesphere-config ``` -2. Navigate to the `monitoring endpoint` section as below: +2. Navigate to the `monitoring endpoint` section, as shown in the following: ```bash monitoring: @@ -184,14 +178,20 @@ Now that your own Prometheus stack is up and running, you can change KubeSphere' endpoint: http://prometheus-operated.monitoring.svc:9090 ``` -4. Run the following command to restart the KubeSphere APIServer. +4. 
If you have enabled the alerting component of KubeSphere, navigate to `prometheusEndpoint` and `thanosRulerEndpoint` of `alerting`, and change the values according to the following sample. KubeSphere APIServer will restart automatically to make your configurations take effect. - ```bash - kubectl -n kubesphere-system rollout restart deployment/ks-apiserver + ```yaml + ... + alerting: + ... + prometheusEndpoint: http://prometheus-operated.monitoring.svc:9090 + thanosRulerEndpoint: http://thanos-ruler-operated.monitoring.svc:10902 + ... + ... ``` {{< notice warning >}} -If you enable/disable KubeSphere pluggable components following [this guide](../../../pluggable-components/overview/) , the `monitoring endpoint` will be reset to the original one. In this case, you have to change it to the new one and then restart the KubeSphere APIServer again. +If you enable/disable KubeSphere pluggable components following [this guide](../../../pluggable-components/overview/) , the `monitoring endpoint` will be reset to the original value. In this case, you need to change it to the new one. -{{}} +{{}} \ No newline at end of file diff --git a/content/en/docs/v3.3/faq/observability/logging.md b/content/en/docs/v3.3/faq/observability/logging.md index 792124ad4..6d483a355 100644 --- a/content/en/docs/v3.3/faq/observability/logging.md +++ b/content/en/docs/v3.3/faq/observability/logging.md @@ -27,7 +27,7 @@ If you are using the KubeSphere internal Elasticsearch and want to change it to kubectl edit cc -n kubesphere-system ks-installer ``` -2. Comment out `es.elasticsearchDataXXX`, `es.elasticsearchMasterXXX` and `status.logging`, and set `es.externalElasticsearchUrl` to the address of your Elasticsearch and `es.externalElasticsearchPort` to its port number. Below is an example for your reference. +2. Comment out `es.elasticsearchDataXXX`, `es.elasticsearchMasterXXX` and `status.logging`, and set `es.externalElasticsearchHost` to the address of your Elasticsearch and `es.externalElasticsearchPort` to its port number. Below is an example for your reference. ```yaml apiVersion: installer.kubesphere.io/v1alpha1 @@ -46,7 +46,7 @@ If you are using the KubeSphere internal Elasticsearch and want to change it to # elasticsearchMasterVolumeSize: 4Gi elkPrefix: logstash logMaxAge: 7 - externalElasticsearchUrl: <192.168.0.2> + externalElasticsearchHost: <192.168.0.2> externalElasticsearchPort: <9200> ... status: diff --git a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md index f1e85d322..a7019e12d 100644 --- a/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md +++ b/content/en/docs/v3.3/installing-on-kubernetes/hosted-kubernetes/install-kubesphere-on-do.md @@ -30,7 +30,7 @@ You need to select: - To install KubeSphere 3.3 on Kubernetes, your Kubernetes version must be v1.19.x, v1.20.x, v1.21.x, * v1.22.x, * v1.23.x, and * v1.24.x. For Kubernetes versions with an asterisk, some features of edge nodes may be unavailable due to incompatability. Therefore, if you want to use edge nodes, you are advised to install Kubernetes v1.21.x or earlier. - 2 nodes are included in this example. You can add more nodes based on your own needs, especially in a production environment. -- The machine type Standard / 4 GB / 2 vCPUs is for minimal installation. 
If you plan to enable several pluggable components or use the cluster for production, you can upgrade your nodes to a more powerful type (such as CPU-Optimized / 8 GB / 4 vCPUs). It seems that DigitalOcean provisions the control plane nodes based on the type of the worker nodes, and for Standard ones the API server can become unresponsive quite soon.
+- The machine type Standard/4 GB/2 vCPUs is for minimal installation. If you plan to enable several pluggable components or use the cluster for production, you can upgrade your nodes to a more powerful type (such as CPU-Optimized / 8 GB / 4 vCPUs). It seems that DigitalOcean provisions the control plane nodes based on the type of the worker nodes, and for Standard ones the API server can become unresponsive quite soon.
 
 {{}}
 
diff --git a/content/en/docs/v3.3/installing-on-linux/cluster-operation/add-edge-nodes.md b/content/en/docs/v3.3/installing-on-linux/cluster-operation/add-edge-nodes.md
index 6f3d41de6..4f10b8e15 100644
--- a/content/en/docs/v3.3/installing-on-linux/cluster-operation/add-edge-nodes.md
+++ b/content/en/docs/v3.3/installing-on-linux/cluster-operation/add-edge-nodes.md
@@ -21,12 +21,55 @@ This tutorial demonstrates how to add an edge node to your cluster.
 ## Prerequisites
 
 - You have enabled [KubeEdge](../../../pluggable-components/kubeedge/).
+- To prevent compatibility issues, you are advised to install Kubernetes v1.21.x or earlier.
 - You have an available node to serve as an edge node. The node can run either Ubuntu (recommended) or CentOS. This tutorial uses Ubuntu 18.04 as an example.
 - Edge nodes, unlike Kubernetes cluster nodes, should work in a separate network.
 
+## Prevent non-edge workloads from being scheduled to edge nodes
+
+Due to the tolerations that some DaemonSets (for example, Calico) have, you need to run the following script to manually patch them so that non-edge workloads are not scheduled to edge nodes and the newly added edge nodes can work properly.
+
+```bash
+#!/bin/bash
+
+NoShedulePatchJson='{"spec":{"template":{"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}}}}'
+
+ns="kube-system"
+
+DaemonSets=("nodelocaldns" "kube-proxy" "calico-node")
+
+length=${#DaemonSets[@]}
+
+for((i=0;i<$length;i++));
+do
+        ds=${DaemonSets[$i]}
+        echo "Patching DaemonSet/${ds} in ns:${ns}"
+        kubectl -n $ns patch DaemonSet/${ds} --type merge --patch "$NoShedulePatchJson"
+        sleep 1
+done
+```
+
+## Create Firewall Rules and Port Forwarding Rules
+
+To make sure edge nodes can successfully talk to your cluster, you must forward ports for outside traffic to get into your network. Specifically, map an external port to the corresponding internal IP address (control plane node) and port based on the table below. Besides, you also need to create firewall rules to allow traffic to these ports (`10000` to `10004`).
+
+ {{< notice note >}}
+ In `ClusterConfiguration` of the ks-installer, if you set an internal IP address, you need to set the forwarding rule. If you have not set the forwarding rule, you can directly connect to ports 30000 to 30004.
+ {{}}
+
+| Fields | External Ports | Fields | Internal Ports |
| ------------------- | -------------- | ----------------------- | -------------- |
| `cloudhubPort` | `10000` | `cloudhubNodePort` | `30000` |
| `cloudhubQuicPort` | `10001` | `cloudhubQuicNodePort` | `30001` |
| `cloudhubHttpsPort` | `10002` | `cloudhubHttpsNodePort` | `30002` |
| `cloudstreamPort` | `10003` | `cloudstreamNodePort` | `30003` |
| `tunnelPort` | `10004` | `tunnelNodePort` | `30004` |
+
 ## Configure an Edge Node
 
-You need to install a container runtime and configure EdgeMesh on your edge node.
+You need to configure the edge node as follows.
### Install a container runtime @@ -72,22 +115,6 @@ Perform the following steps to configure [EdgeMesh](https://kubeedge.io/en/docs/ net.ipv4.ip_forward = 1 ``` -## Create Firewall Rules and Port Forwarding Rules - -To make sure edge nodes can successfully talk to your cluster, you must forward ports for outside traffic to get into your network. Specifically, map an external port to the corresponding internal IP address (control plane node) and port based on the table below. Besides, you also need to create firewall rules to allow traffic to these ports (`10000` to `10004`). - - {{< notice note >}} - In `ClusterConfiguration` of the ks-installer, if you set an internal IP address, you need to set the forwarding rule. If you have not set the forwarding rule, you can directly connect to ports 30000 to 30004. - {{}} - -| Fields | External Ports | Fields | Internal Ports | -| ------------------- | -------------- | ----------------------- | -------------- | -| `cloudhubPort` | `10000` | `cloudhubNodePort` | `30000` | -| `cloudhubQuicPort` | `10001` | `cloudhubQuicNodePort` | `30001` | -| `cloudhubHttpsPort` | `10002` | `cloudhubHttpsNodePort` | `30002` | -| `cloudstreamPort` | `10003` | `cloudstreamNodePort` | `30003` | -| `tunnelPort` | `10004` | `tunnelNodePort` | `30004` | - ## Add an Edge Node 1. Log in to the console as `admin` and click **Platform** in the upper-left corner. @@ -102,6 +129,8 @@ To make sure edge nodes can successfully talk to your cluster, you must forward 3. Click **Add**. In the dialog that appears, set a node name and enter an internal IP address of your edge node. Click **Validate** to continue. + ![add-edge-node](/images/docs/v3.3/installing-on-linux/add-and-delete-nodes/add-edge-nodes/add-edge-node.png) + {{< notice note >}} - The internal IP address is only used for inter-node communication and you do not necessarily need to use the actual internal IP address of the edge node. As long as the IP address is successfully validated, you can use it. @@ -111,6 +140,8 @@ To make sure edge nodes can successfully talk to your cluster, you must forward 4. Copy the command automatically created under **Edge Node Configuration Command** and run it on your edge node. + ![edge-command](/images/docs/v3.3/installing-on-linux/add-and-delete-nodes/add-edge-nodes/edge-command.png) + {{< notice note >}} Make sure `wget` is installed on your edge node before you run the command. @@ -169,38 +200,7 @@ To collect monitoring information on edge node, you need to enable `metrics_serv systemctl restart edgecore.service ``` -9. After an edge node joins your cluster, some Pods may be scheduled to it while they remain in the `Pending` state on the edge node. Due to the tolerations some DaemonSets (for example, Calico) have, you need to manually patch some Pods so that they will not be scheduled to the edge node. 
- - ```bash - #!/bin/bash - - NodeSelectorPatchJson='{"spec":{"template":{"spec":{"nodeSelector":{"node-role.kubernetes.io/master": "","node-role.kubernetes.io/worker": ""}}}}}' - - NoShedulePatchJson='{"spec":{"template":{"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}}}}' - - edgenode="edgenode" - if [ $1 ]; then - edgenode="$1" - fi - - - namespaces=($(kubectl get pods -A -o wide |egrep -i $edgenode | awk '{print $1}' )) - pods=($(kubectl get pods -A -o wide |egrep -i $edgenode | awk '{print $2}' )) - length=${#namespaces[@]} - - - for((i=0;i<$length;i++)); - do - ns=${namespaces[$i]} - pod=${pods[$i]} - resources=$(kubectl -n $ns describe pod $pod | grep "Controlled By" |awk '{print $3}') - echo "Patching for ns:"${namespaces[$i]}",resources:"$resources - kubectl -n $ns patch $resources --type merge --patch "$NoShedulePatchJson" - sleep 1 - done - ``` - -10. If you still cannot see the monitoring data, run the following command: +9. If you still cannot see the monitoring data, run the following command: ```bash journalctl -u edgecore.service -b -r @@ -256,4 +256,4 @@ Before you remove an edge node, delete all your workloads running on it. After uninstallation, you will not be able to add edge nodes to your cluster. - {{}} + {{}} \ No newline at end of file diff --git a/content/en/docs/v3.3/installing-on-linux/introduction/air-gapped-installation.md b/content/en/docs/v3.3/installing-on-linux/introduction/air-gapped-installation.md index f4ec106d5..7ce399d51 100644 --- a/content/en/docs/v3.3/installing-on-linux/introduction/air-gapped-installation.md +++ b/content/en/docs/v3.3/installing-on-linux/introduction/air-gapped-installation.md @@ -15,12 +15,12 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides |Host IP| Host Name | Usage | | ---------------- | ---- | ---------------- | -|192.168.0.2 | node1 | Online host for packaging the source cluster with Kubernetes v1.22.12 and KubeSphere v3.3.1 installed | +|192.168.0.2 | node1 | Online host for packaging the source cluster | |192.168.0.3 | node2 | Control plane node of the air-gapped environment | |192.168.0.4 | node3 | Image registry node of the air-gapped environment | ## Preparations -1. Run the following commands to download KubeKey v3.0.2 . +1. Run the following commands to download KubeKey v3.0.2. {{< tabs >}} {{< tab "Good network connections to GitHub/Googleapis" >}} @@ -50,18 +50,8 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides {{}} -2. In the source cluster, use KubeKey to create a manifest. The following two methods are supported: +2. On the online host, run the following command and copy content in the [manifest-example](https://github.com/kubesphere/kubekey/blob/master/docs/manifest-example.md). - - (Recommended) In the created cluster, run the following command to create a manifest file: - - ```bash - ./kk create manifest - ``` - - - Create and compile the manifest file manually according to the template. For more information, see [ manifest-example ](https://github.com/kubesphere/kubekey/blob/master/docs/manifest-example.md). - -3. Run the following command to modify the manifest configurations in the source cluster. - ```bash vim manifest.yaml ``` @@ -269,7 +259,14 @@ In KubeKey v2.1.0, we bring in concepts of manifest and artifact, which provides {{}} -4. Export the artifact from the source cluster. +3. 
If you have already deployed a cluster, you can run the following command in the cluster to create a manifest file and configure the file according to the sample in Step 2.
+
+   ```bash
+   ./kk create manifest
+   ```
+
+4. Export the artifact.
+
 {{< tabs >}}
 
 {{< tab "Good network connections to GitHub/Googleapis" >}}
diff --git a/content/en/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md b/content/en/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md
index 9b97c0328..4b5fe9615 100644
--- a/content/en/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md
+++ b/content/en/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md
@@ -454,7 +454,7 @@ spec:
     elasticsearchDataVolumeSize: 20Gi   # Volume size of Elasticsearch data nodes
     logMaxAge: 7                        # Log retention time in built-in Elasticsearch, it is 7 days by default.
     elkPrefix: logstash                 # The string making up index names. The index name will be formatted as ks--log
-    # externalElasticsearchUrl:
+    # externalElasticsearchHost:
     # externalElasticsearchPort:
   console:
     enableMultiLogin: false             # enable/disable multiple sign on, which allows an account to be used by different users at the same time.
diff --git a/content/en/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-nfs-client.md b/content/en/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-nfs-client.md
index 5ba10324f..c0d57c0e2 100644
--- a/content/en/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-nfs-client.md
+++ b/content/en/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-nfs-client.md
@@ -11,7 +11,7 @@ This tutorial demonstrates how to set up a KubeSphere cluster and configure NFS
 {{< notice note >}}
 
 - Ubuntu 16.04 is used as an example in this tutorial.
-- It is not recommended that you use NFS storage for production (especially on Kubernetes version 1.20 or later) as some issues may occur, such as `failed to obtain lock` and `input/output error`, resulting in Pod `CrashLoopBackOff`. Besides, some apps may not be compatible with NFS, including [Prometheus](https://github.com/prometheus/prometheus/blob/03b354d4d9386e4b3bfbcd45da4bb58b182051a5/docs/storage.md#operational-aspects).
+- NFS is incompatible with some applications, for example, Prometheus, which may result in pod creation failures. If you need to use NFS in the production environment, ensure that you have understood the risks. For more information, contact support@kubesphere.cloud.
 
 {{}}
diff --git a/content/en/docs/v3.3/pluggable-components/auditing-logs.md b/content/en/docs/v3.3/pluggable-components/auditing-logs.md
index 47c4ffcad..5bbaa262d 100644
--- a/content/en/docs/v3.3/pluggable-components/auditing-logs.md
+++ b/content/en/docs/v3.3/pluggable-components/auditing-logs.md
@@ -34,7 +34,7 @@ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/),
    ```
 
 {{< notice note >}}
-By default, KubeKey will install Elasticsearch internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Auditing, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one.
+By default, KubeKey will install Elasticsearch internally if Auditing is enabled.
For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Auditing, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one. {{}} ```yaml @@ -45,7 +45,7 @@ By default, KubeKey will install Elasticsearch internally if Auditing is enabled elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchUrl: # The Host of external Elasticsearch. + externalElasticsearchHost: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` @@ -73,7 +73,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu ``` {{< notice note >}} -By default, ks-installer will install Elasticsearch internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in `cluster-configuration.yaml` if you want to enable Auditing, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, ks-installer will integrate your external Elasticsearch directly instead of installing an internal one. +By default, ks-installer will install Elasticsearch internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in `cluster-configuration.yaml` if you want to enable Auditing, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information before installation, ks-installer will integrate your external Elasticsearch directly instead of installing an internal one. {{}} ```yaml @@ -84,7 +84,7 @@ By default, ks-installer will install Elasticsearch internally if Auditing is en elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchUrl: # The Host of external Elasticsearch. + externalElasticsearchHost: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` @@ -116,7 +116,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource ``` {{< notice note >}} -By default, Elasticsearch will be installed internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Auditing, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one. +By default, Elasticsearch will be installed internally if Auditing is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Auditing, especially `externalElasticsearchHost` and `externalElasticsearchPort`. 
Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one. {{}} ```yaml @@ -127,7 +127,7 @@ By default, Elasticsearch will be installed internally if Auditing is enabled. F elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchUrl: # The Host of external Elasticsearch. + externalElasticsearchHost: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` diff --git a/content/en/docs/v3.3/pluggable-components/events.md b/content/en/docs/v3.3/pluggable-components/events.md index 9d53eb3ca..202b5026b 100644 --- a/content/en/docs/v3.3/pluggable-components/events.md +++ b/content/en/docs/v3.3/pluggable-components/events.md @@ -36,7 +36,7 @@ If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), ``` {{< notice note >}} -By default, KubeKey will install Elasticsearch internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Events, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one. +By default, KubeKey will install Elasticsearch internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Events, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one. {{}} ```yaml @@ -47,7 +47,7 @@ By default, KubeKey will install Elasticsearch internally if Events is enabled. elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchUrl: # The Host of external Elasticsearch. + externalElasticsearchHost: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` @@ -75,7 +75,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu ``` {{< notice note >}} -By default, ks-installer will install Elasticsearch internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in `cluster-configuration.yaml` if you want to enable Events, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, ks-installer will integrate your external Elasticsearch directly instead of installing an internal one. +By default, ks-installer will install Elasticsearch internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in `cluster-configuration.yaml` if you want to enable Events, especially `externalElasticsearchHost` and `externalElasticsearchPort`. 
Once you provide the following information before installation, ks-installer will integrate your external Elasticsearch directly instead of installing an internal one. {{}} ```yaml @@ -86,7 +86,7 @@ By default, ks-installer will install Elasticsearch internally if Events is enab elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchUrl: # The Host of external Elasticsearch. + externalElasticsearchHost: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` @@ -121,7 +121,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource {{< notice note >}} -By default, Elasticsearch will be installed internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Events, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one. +By default, Elasticsearch will be installed internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Events, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one. {{}} ```yaml @@ -132,7 +132,7 @@ By default, Elasticsearch will be installed internally if Events is enabled. For elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchUrl: # The Host of external Elasticsearch. + externalElasticsearchHost: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` diff --git a/content/en/docs/v3.3/pluggable-components/logging.md b/content/en/docs/v3.3/pluggable-components/logging.md index 7fc81460c..2db149180 100644 --- a/content/en/docs/v3.3/pluggable-components/logging.md +++ b/content/en/docs/v3.3/pluggable-components/logging.md @@ -42,7 +42,7 @@ When you install KubeSphere on Linux, you need to create a configuration file, w {{}} - {{< notice note >}}By default, KubeKey will install Elasticsearch internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Logging, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one. + {{< notice note >}}By default, KubeKey will install Elasticsearch internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in `config-sample.yaml` if you want to enable Logging, especially `externalElasticsearchHost` and `externalElasticsearchPort`. 
Once you provide the following information before installation, KubeKey will integrate your external Elasticsearch directly instead of installing an internal one. {{}} ```yaml @@ -53,7 +53,7 @@ When you install KubeSphere on Linux, you need to create a configuration file, w elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchUrl: # The Host of external Elasticsearch. + externalElasticsearchHost: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` @@ -85,7 +85,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu {{}} - {{< notice note >}}By default, ks-installer will install Elasticsearch internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in `cluster-configuration.yaml` if you want to enable Logging, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information before installation, ks-installer will integrate your external Elasticsearch directly instead of installing an internal one. + {{< notice note >}}By default, ks-installer will install Elasticsearch internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in `cluster-configuration.yaml` if you want to enable Logging, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information before installation, ks-installer will integrate your external Elasticsearch directly instead of installing an internal one. {{}} ```yaml @@ -96,7 +96,7 @@ As you [install KubeSphere on Kubernetes](../../installing-on-kubernetes/introdu elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchUrl: # The Host of external Elasticsearch. + externalElasticsearchHost: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` @@ -134,7 +134,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource {{}} - {{< notice note >}}By default, Elasticsearch will be installed internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Logging, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one. + {{< notice note >}}By default, Elasticsearch will be installed internally if Logging is enabled. For a production environment, it is highly recommended that you set the following values in this yaml file if you want to enable Logging, especially `externalElasticsearchHost` and `externalElasticsearchPort`. Once you provide the following information, KubeSphere will integrate your external Elasticsearch directly instead of installing an internal one. 
{{}} ```yaml @@ -145,7 +145,7 @@ A Custom Resource Definition (CRD) allows users to create a new type of resource elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchUrl: # The Host of external Elasticsearch. + externalElasticsearchHost: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` diff --git a/content/en/docs/v3.3/project-user-guide/application-workloads/routes.md b/content/en/docs/v3.3/project-user-guide/application-workloads/routes.md index b3cdf96d4..586f5cee2 100644 --- a/content/en/docs/v3.3/project-user-guide/application-workloads/routes.md +++ b/content/en/docs/v3.3/project-user-guide/application-workloads/routes.md @@ -50,10 +50,6 @@ A Route on KubeSphere is the same as an [Ingress](https://kubernetes.io/docs/con * **Auto Generate**: KubeSphere automatically generates a domain name in the `...nip.io` format and the domain name is automatically resolved by [nip.io](https://nip.io/) into the gateway address. This mode supports only HTTP. - * **Paths**: Map each Service to a path. You can click **Add** to add multiple paths. - - * **Specify Domain**: A user-defined domain name is used. This mode supports both HTTP and HTTPS. - * **Domain Name**: Set a domain name for the Route. * **Protocol**: Select `http` or `https`. If `https` is selected, you need to select a Secret that contains the `tls.crt` (TLS certificate) and `tls.key` (TLS private key) keys used for encryption. * **Paths**: Map each Service to a path. You can click **Add** to add multiple paths. diff --git a/content/en/docs/v3.3/quick-start/all-in-one-on-linux.md b/content/en/docs/v3.3/quick-start/all-in-one-on-linux.md index 092b3881f..b0242177f 100644 --- a/content/en/docs/v3.3/quick-start/all-in-one-on-linux.md +++ b/content/en/docs/v3.3/quick-start/all-in-one-on-linux.md @@ -26,7 +26,7 @@ To get started with all-in-one installation, you only need to prepare one host a Minimum Requirements - Ubuntu 16.04, 18.04 + Ubuntu 16.04, 18.04, 20.04, 22.04 2 CPU cores, 4 GB memory, and 40 GB disk space diff --git a/content/en/docs/v3.3/reference/storage-system-installation/nfs-server.md b/content/en/docs/v3.3/reference/storage-system-installation/nfs-server.md index 44dfa83a0..b69cec05e 100644 --- a/content/en/docs/v3.3/reference/storage-system-installation/nfs-server.md +++ b/content/en/docs/v3.3/reference/storage-system-installation/nfs-server.md @@ -13,7 +13,7 @@ Once your NFS server machine is ready, you can use [KubeKey](../../../installing {{< notice note >}} - You can also create the storage class of NFS-client after you install a KubeSphere cluster. -- It is not recommended that you use NFS storage for production (especially on Kubernetes version 1.20 or later) as some issues may occur, such as `failed to obtain lock` and `input/output error`, resulting in Pod `CrashLoopBackOff`. Besides, some apps may not be compatible with NFS, including [Prometheus](https://github.com/prometheus/prometheus/blob/03b354d4d9386e4b3bfbcd45da4bb58b182051a5/docs/storage.md#operational-aspects). +- NFS is incompatible with some applications, for example, Prometheus, which may result in pod creation failures. If you need to use NFS in the production environment, ensure that you have understood the risks. 
For more information, contact support@kubesphere.cloud. {{}} diff --git a/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/cas-identity-provider.md b/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/cas-identity-provider.md new file mode 100644 index 000000000..eb05799c6 --- /dev/null +++ b/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/cas-identity-provider.md @@ -0,0 +1,58 @@ +--- +title: "CAS 身份提供者" +keywords: "CAS, 身份提供者" +description: "如何使用外部 CAS 身份提供者。" + +linkTitle: "CAS 身份提供者" +weight: 12223 +--- + +CAS (Central Authentication Service) 是耶鲁 Yale 大学发起的一个java开源项目,旨在为 Web应用系统提供一种可靠的 单点登录 解决方案( Web SSO ), CAS 具有以下特点: + +- 开源的企业级单点登录解决方案 +- CAS Server 为需要独立部署的 Web 应用----一个独立的Web应用程序(cas.war)。 +- CAS Client 支持非常多的客户端 ( 指单点登录系统中的各个 Web 应用 ) ,包括 Java, .Net, PHP, Perl, 等。 + + +## 准备工作 + +您需要部署一个 Kubernetes 集群,并在集群中安装 KubeSphere。有关详细信息,请参阅[在 Linux 上安装](../../../installing-on-linux/)和[在 Kubernetes 上安装](../../../installing-on-kubernetes/)。 + +## 步骤 + +1. 以 `admin` 身份登录 KubeSphere,将光标移动到右下角 icon ,点击 **kubectl**,然后执行以下命令来编辑 CRD `ClusterConfiguration` 中的 `ks-installer`: + + ```bash + kubectl -n kubesphere-system edit cc ks-installer + ``` + +2. 在 `spec.authentication.jwtSecret` 字段下添加以下字段。 + + ```yaml + spec: + authentication: + jwtSecret: '' + authenticateRateLimiterMaxTries: 10 + authenticateRateLimiterDuration: 10m0s + oauthOptions: + accessTokenMaxAge: 1h + accessTokenInactivityTimeout: 30m + identityProviders: + - name: cas + type: CASIdentityProvider + mappingMethod: auto + provider: + redirectURL: "https://ks-console:30880/oauth/redirect/cas" + casServerURL: "https://cas.example.org/cas" + insecureSkipVerify: true + ``` + + 字段描述如下: + + | 参数 | 描述 | + | -------------------- | ------------------------------------------------------------ | + | redirectURL | 重定向到 ks-console 的 URL,格式为:`https://<域名>/oauth/redirect/<身份提供者名称>`。URL 中的 `<身份提供者名称>` 对应 `oauthOptions:identityProviders:name` 的值。 | + | casServerURL | 定义cas 认证的url 地址 | + | insecureSkipVerify | 关闭 TLS 证书验证。 | + + diff --git a/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/set-up-external-authentication.md b/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/set-up-external-authentication.md index ee3d826a2..bc880d62b 100644 --- a/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/set-up-external-authentication.md +++ b/content/zh/docs/v3.3/access-control-and-account-management/external-authentication/set-up-external-authentication.md @@ -105,7 +105,7 @@ KubeSphere 默认提供了以下几种类型的身份提供者: * GitHub Identity Provider -* CAS Identity Provider +* [CAS Identity Provider](../cas-identity-provider) * Aliyun IDaaS Provider diff --git a/content/zh/docs/v3.3/cluster-administration/cluster-settings/log-collections/introduction.md b/content/zh/docs/v3.3/cluster-administration/cluster-settings/log-collections/introduction.md index cae95cd58..1d6aca019 100644 --- a/content/zh/docs/v3.3/cluster-administration/cluster-settings/log-collections/introduction.md +++ b/content/zh/docs/v3.3/cluster-administration/cluster-settings/log-collections/introduction.md @@ -45,7 +45,7 @@ KubeSphere 提供灵活的日志接收器配置方式。基于 [Fluent Operator] 如果 [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md) 中启用了 `logging`、`events` 或 `auditing`,则会添加默认的 Elasticsearch 接收器,服务地址会设为 Elasticsearch 集群。 -当 `logging`、`events` 或 `auditing` 启用时,如果 
[ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md) 中未指定 `externalElasticsearchUrl` 和 `externalElasticsearchPort`,则内置 Elasticsearch 集群会部署至 Kubernetes 集群。内置 Elasticsearch 集群仅用于测试和开发。生产环境下,建议您集成外置 Elasticsearch 集群。 +当 `logging`、`events` 或 `auditing` 启用时,如果 [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/release-2.2/docs/config-example.md) 中未指定 `externalElasticsearchHost` 和 `externalElasticsearchPort`,则内置 Elasticsearch 集群会部署至 Kubernetes 集群。内置 Elasticsearch 集群仅用于测试和开发。生产环境下,建议您集成外置 Elasticsearch 集群。 日志查询需要依靠所配置的内置或外置 Elasticsearch 集群。 diff --git a/content/zh/docs/v3.3/cluster-administration/storageclass.md b/content/zh/docs/v3.3/cluster-administration/storageclass.md index 7b1257ad0..27b975334 100644 --- a/content/zh/docs/v3.3/cluster-administration/storageclass.md +++ b/content/zh/docs/v3.3/cluster-administration/storageclass.md @@ -168,7 +168,7 @@ NFS(网络文件系统)广泛用于带有 [nfs-subdir-external-provisioner]( {{< notice note >}} -不建议您在生产环境中使用 NFS 存储(尤其是在 Kubernetes 1.20 或以上版本),这可能会引起 `failed to obtain lock` 和 `input/output error` 等问题,从而导致容器组 `CrashLoopBackOff`。此外,部分应用不兼容 NFS,例如 [Prometheus](https://github.com/prometheus/prometheus/blob/03b354d4d9386e4b3bfbcd45da4bb58b182051a5/docs/storage.md#operational-aspects) 等。 +NFS 与部分应用不兼容(例如 Prometheus),可能会导致容器组创建失败。如果确实需要在生产环境中使用 NFS,请确保您了解相关风险或咨询 KubeSphere 技术支持 support@kubesphere.cloud。 {{}} diff --git a/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-graphical-editing-panel.md b/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-graphical-editing-panel.md index 83ff24749..f712d34c9 100644 --- a/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-graphical-editing-panel.md +++ b/content/zh/docs/v3.3/devops-user-guide/how-to-use/pipelines/create-a-pipeline-using-graphical-editing-panel.md @@ -288,7 +288,7 @@ KubeSphere 中的图形编辑面板包含用于 Jenkins [阶段 (Stage)](https:/ {{< notice note >}} - 在 KubeSphere 3.3 中,能够运行流水线的帐户也能够继续或终止该流水线。此外,流水线创建者、拥有该项目管理员角色的用户或者您指定的帐户也有权限继续或终止流水线。 + 在 KubeSphere 3.3 中,能够运行流水线的帐户也能够继续或终止该流水线。此外,流水线创建者或者您指定的帐户也有权限继续或终止流水线。 {{}} diff --git a/content/zh/docs/v3.3/faq/observability/byop.md b/content/zh/docs/v3.3/faq/observability/byop.md index 46fa8ee58..86ce3729b 100644 --- a/content/zh/docs/v3.3/faq/observability/byop.md +++ b/content/zh/docs/v3.3/faq/observability/byop.md @@ -8,18 +8,10 @@ Weight: 16330 KubeSphere 自带一些预装的自定义监控组件,包括 Prometheus Operator、Prometheus、Alertmanager、Grafana(可选)、各种 ServiceMonitor、node-exporter 和 kube-state-metrics。在您安装 KubeSphere 之前,这些组件可能已经存在。在 KubeSphere 3.3 中,您可以使用自己的 Prometheus 堆栈设置。 -## 集成您自己的 Prometheus 的步骤 +## 集成您自己的 Prometheus 要使用您自己的 Prometheus 堆栈设置,请执行以下步骤: -1. 卸载 KubeSphere 的自定义 Prometheus 堆栈 - -2. 安装您自己的 Prometheus 堆栈 - -3. 将 KubeSphere 自定义组件安装至您的 Prometheus 堆栈 - -4. 更改 KubeSphere 的 `monitoring endpoint` - ### 步骤 1:卸载 KubeSphere 的自定义 Prometheus 堆栈 1. 
执行以下命令,卸载堆栈: @@ -51,11 +43,11 @@ KubeSphere 自带一些预装的自定义监控组件,包括 Prometheus Operat KubeSphere 3.3 已经过认证,可以与以下 Prometheus 堆栈组件搭配使用: -- Prometheus Operator **v0.38.3+** -- Prometheus **v2.20.1+** -- Alertmanager **v0.21.0+** -- kube-state-metrics **v1.9.6** -- node-exporter **v0.18.1** +- Prometheus Operator **v0.55.1+** +- Prometheus **v2.34.0+** +- Alertmanager **v0.23.0+** +- kube-state-metrics **v2.5.0** +- node-exporter **v1.3.1** 请确保您的 Prometheus 堆栈组件版本符合上述版本要求,尤其是 **node-exporter** 和 **kube-state-metrics**。 @@ -65,92 +57,97 @@ KubeSphere 3.3 已经过认证,可以与以下 Prometheus 堆栈组件搭配 {{}} -Prometheus 堆栈可以通过多种方式进行安装。下面的步骤演示如何使用**上游 `kube-prometheus`** 将 Prometheus 堆栈安装至命名空间 `monitoring` 中。 +Prometheus 堆栈可以通过多种方式进行安装。下面的步骤演示如何使用 `ks-prometheus`(基于上游的 `kube-prometheus` 项目) 将 Prometheus 堆栈安装至命名空间 `monitoring` 中。 -1. 获取 v0.6.0 版 kube-prometheus,它的 node-exporter 版本为 v0.18.1,与 KubeSphere 3.3 所使用的版本相匹配。 +1. 获取 KubeSphere 3.3.0 所使用的 `ks-prometheus`。 ```bash - cd ~ && git clone https://github.com/prometheus-operator/kube-prometheus.git && cd kube-prometheus && git checkout tags/v0.6.0 -b v0.6.0 + cd ~ && git clone -b release-3.3 https://github.com/kubesphere/ks-prometheus.git && cd ks-prometheus ``` -2. 设置命名空间 `monitoring`,安装 Prometheus Operator 和相应角色: +2. 设置命名空间。 ```bash - kubectl apply -f manifests/setup/ + sed -i 's/kubesphere-monitoring-system/monitoring/g' kustomization.yaml ``` -3. 稍等片刻待 Prometheus Operator 启动并运行。 +3. (可选)移除不必要的组件。例如,KubeSphere 未启用 Grafana 时,可以删除 `kustomization.yaml` 中的 `grafana` 部分: ```bash - kubectl -n monitoring get pod --watch + sed -i '/manifests\/grafana\//d' kustomization.yaml ``` -4. 移除不必要组件,例如 Prometheus Adapter。 +4. 安装堆栈。 ```bash - rm -rf manifests/prometheus-adapter-*.yaml - ``` - -5. 将 kube-state-metrics 的版本变更为 KubeSphere 3.3 所使用的 v1.9.6。 - - ```bash - sed -i 's/v1.9.5/v1.9.6/g' manifests/kube-state-metrics-deployment.yaml - ``` - -6. 安装 Prometheus、Alertmanager、Grafana、kube-state-metrics 以及 node-exporter。您可以只应用 YAML 文件 `kube-state-metrics-*.yaml` 或 `node-exporter-*.yaml` 来分别安装 kube-state-metrics 或 node-exporter。 - - ```bash - kubectl apply -f manifests/ + kubectl apply -k . 
``` ### 步骤 3:将 KubeSphere 自定义组件安装至您的 Prometheus 堆栈 {{< notice note >}} -KubeSphere 3.3 使用 Prometheus Operator 来管理 Prometheus/Alertmanager 配置和生命周期、ServiceMonitor(用于管理抓取配置)和 PrometheusRule(用于管理 Prometheus 记录/告警规则)。 +如果您的 Prometheus 堆栈是通过 `ks-prometheus` 进行安装,您可以跳过此步骤。 -[KubeSphere kustomization](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/kustomization.yaml) 中列出了一些条目,其中 `prometheus-rules.yaml` 和 `prometheus-rulesEtcd.yaml` 是 KubeSphere 3.3 正常运行的必要条件,其他均为可选。如果您不希望现有 Alertmanager 的配置被覆盖,您可以移除 `alertmanager-secret.yaml`。如果您不希望自己的 ServiceMonitor 被覆盖(KubeSphere 自定义的 ServiceMonitor 弃用许多无关指标,以便 Prometheus 只存储最有用的指标),您可以移除 `xxx-serviceMonitor.yaml`。 +KubeSphere 3.3.0 使用 Prometheus Operator 来管理 Prometheus/Alertmanager 配置和生命周期、ServiceMonitor(用于管理抓取配置)和 PrometheusRule(用于管理 Prometheus 记录/告警规则)。 如果您的 Prometheus 堆栈不是由 Prometheus Operator 进行管理,您可以跳过此步骤。但请务必确保: -- 您必须将 [PrometheusRule](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/prometheus-rules.yaml) 和 [PrometheusRule for etcd](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/prometheus-rulesEtcd.yaml) 中的记录/告警规则复制至您的 Prometheus 配置中,以便 KubeSphere 3.3 能够正常运行。 +- 您必须将 [PrometheusRule](https://github.com/kubesphere/ks-prometheus/blob/release-3.3/manifests/kubernetes/kubernetes-prometheusRule.yaml) 和 [PrometheusRule for etcd](https://github.com/kubesphere/ks-prometheus/blob/release-3.3/manifests/etcd/prometheus-rulesEtcd.yaml) 中的记录/告警规则复制至您的 Prometheus 配置中,以便 KubeSphere 3.3.0 能够正常运行。 -- 配置您的 Prometheus,使其抓取指标的目标 (Target) 与 [KubeSphere kustomization](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/kustomization.yaml) 中列出的 ServiceMonitor 的目标相同。 +- 配置您的 Prometheus,使其抓取指标的目标 (Target) 与 各组件的 [serviceMonitor](https://github.com/kubesphere/ks-prometheus/tree/release-3.3/manifests) 文件中列出的目标相同。 {{}} -1. 获取 KubeSphere 3.3 的自定义 kube-prometheus。 +1. 获取 KubeSphere 3.3.0 所使用的 `ks-prometheus`。 ```bash - cd ~ && mkdir kubesphere && cd kubesphere && git clone https://github.com/kubesphere/kube-prometheus.git && cd kube-prometheus/kustomize + cd ~ && git clone -b release-3.3 https://github.com/kubesphere/ks-prometheus.git && cd ks-prometheus ``` -2. 将命名空间更改为您自己部署 Prometheus 堆栈的命名空间。例如,如果您按照步骤 2 将 Prometheus 安装在命名空间 `monitoring` 中,这里即为 `monitoring`。 +2. 设置 `kustomization.yaml`,仅保留如下内容。 - ```bash - sed -i 's/my-namespace//g' kustomization.yaml + ```yaml + apiVersion: kustomize.config.k8s.io/v1beta1 + kind: Kustomization + namespace: + resources: + - ./manifests/alertmanager/alertmanager-secret.yaml + - ./manifests/etcd/prometheus-rulesEtcd.yaml + - ./manifests/kube-state-metrics/kube-state-metrics-serviceMonitor.yaml + - ./manifests/kubernetes/kubernetes-prometheusRule.yaml + - ./manifests/kubernetes/kubernetes-serviceKubeControllerManager.yaml + - ./manifests/kubernetes/kubernetes-serviceKubeScheduler.yaml + - ./manifests/kubernetes/kubernetes-serviceMonitorApiserver.yaml + - ./manifests/kubernetes/kubernetes-serviceMonitorCoreDNS.yaml + - ./manifests/kubernetes/kubernetes-serviceMonitorKubeControllerManager.yaml + - ./manifests/kubernetes/kubernetes-serviceMonitorKubeScheduler.yaml + - ./manifests/kubernetes/kubernetes-serviceMonitorKubelet.yaml + - ./manifests/node-exporter/node-exporter-serviceMonitor.yaml + - ./manifests/prometheus/prometheus-clusterRole.yaml ``` -3. 
应用 KubeSphere 自定义组件,包括 Prometheus 规则、Alertmanager 配置和各种 ServiceMonitor 等。 + {{< notice note >}} + + - 将此处 `namespace` 的值设置为您自己的命名空间。例如,如果您在步骤 2 将 Prometheus 安装在命名空间 `monitoring` 中,这里即为 `monitoring`。 + - 如果您启用了 KubeSphere 的告警,还需要保留 `kustomization.yaml` 中的 `thanos-ruler` 部分。 + + {{}} + + +3. 安装以上 KubeSphere 必要组件。 ```bash kubectl apply -k . ``` -4. 配置服务 (Service) 用于暴露 kube-scheduler 和 kube-controller-manager 指标。 - - ```bash - kubectl apply -f ./prometheus-serviceKubeScheduler.yaml - kubectl apply -f ./prometheus-serviceKubeControllerManager.yaml - ``` - -5. 在您自己的命名空间中查找 Prometheus CR,通常为 Kubernetes。 +4. 在您自己的命名空间中查找 Prometheus CR,通常为 k8s。 ```bash kubectl -n get prometheus ``` -6. 将 Prometheus 规则评估间隔设置为 1m,与 KubeSphere 3.3 的自定义 ServiceMonitor 保持一致。规则评估间隔应大于或等于抓取间隔。 +5. 将 Prometheus 规则评估间隔设置为 1m,与 KubeSphere 3.3.0 的自定义 ServiceMonitor 保持一致。规则评估间隔应大于或等于抓取间隔。 ```bash kubectl -n patch prometheus k8s --patch '{ @@ -164,34 +161,40 @@ KubeSphere 3.3 使用 Prometheus Operator 来管理 Prometheus/Alertmanager 配 您自己的 Prometheus 堆栈现在已启动并运行,您可以更改 KubeSphere 的监控 Endpoint 来使用您自己的 Prometheus。 -1. 运行以下命令,编辑 `kubesphere-config`: +1. 运行以下命令,编辑 `kubesphere-config`。 ```bash kubectl edit cm -n kubesphere-system kubesphere-config ``` -2. 搜寻到 `monitoring endpoint` 部分,如下所示: +2. 搜索 `monitoring endpoint` 部分,如下所示。 - ```bash + ```yaml monitoring: endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090 ``` -3. 将 `monitoring endpoint` 更改为您自己的 Prometheus: +3. 将 `endpoint` 的值更改为您自己的 Prometheus。 - ```bash + ```yaml monitoring: endpoint: http://prometheus-operated.monitoring.svc:9090 ``` -4. 运行以下命令,重启 KubeSphere APIserver。 +4. 如果您启用了 KubeSphere 的告警组件,请搜索 `alerting` 的 `prometheusEndpoint` 和 `thanosRulerEndpoint`,并参照如下示例修改。KubeSphere Apiserver 将自动重启使设置生效。 - ```bash - kubectl -n kubesphere-system rollout restart deployment/ks-apiserver + ```yaml + ... + alerting: + ... + prometheusEndpoint: http://prometheus-operated.monitoring.svc:9090 + thanosRulerEndpoint: http://thanos-ruler-operated.monitoring.svc:10902 + ... + ... ``` {{< notice warning >}} -如果您按照[此指南](../../../pluggable-components/overview/)启用/禁用 KubeSphere 可插拔组件,`monitoring endpoint` 会重置为初始值。此时,您需要再次将其更改为您自己的 Prometheus 并重启 KubeSphere APIserver。 +如果您按照[此指南](../../../pluggable-components/overview/)启用/禁用 KubeSphere 可插拔组件,`monitoring endpoint` 会重置为初始值。此时,您需要再次将其更改为您自己的 Prometheus。 {{}} \ No newline at end of file diff --git a/content/zh/docs/v3.3/faq/observability/logging.md b/content/zh/docs/v3.3/faq/observability/logging.md index 7886122bd..a29fe0e44 100644 --- a/content/zh/docs/v3.3/faq/observability/logging.md +++ b/content/zh/docs/v3.3/faq/observability/logging.md @@ -28,7 +28,7 @@ weight: 16310 kubectl edit cc -n kubesphere-system ks-installer ``` -2. 将 `es.elasticsearchDataXXX`、`es.elasticsearchMasterXXX` 和 `status.logging` 的注释取消,将 `es.externalElasticsearchUrl` 设置为 Elasticsearch 的地址,将 `es.externalElasticsearchPort` 设置为其端口号。以下示例供您参考: +2. 将 `es.elasticsearchDataXXX`、`es.elasticsearchMasterXXX` 和 `status.logging` 的注释取消,将 `es.externalElasticsearchHost` 设置为 Elasticsearch 的地址,将 `es.externalElasticsearchPort` 设置为其端口号。以下示例供您参考: ```yaml apiVersion: installer.kubesphere.io/v1alpha1 @@ -47,7 +47,7 @@ weight: 16310 # elasticsearchMasterVolumeSize: 4Gi elkPrefix: logstash logMaxAge: 7 - externalElasticsearchUrl: <192.168.0.2> + externalElasticsearchHost: <192.168.0.2> externalElasticsearchPort: <9200> ... 
status: diff --git a/content/zh/docs/v3.3/installing-on-linux/cluster-operation/add-edge-nodes.md b/content/zh/docs/v3.3/installing-on-linux/cluster-operation/add-edge-nodes.md index b4a3e0d1f..e36b29056 100644 --- a/content/zh/docs/v3.3/installing-on-linux/cluster-operation/add-edge-nodes.md +++ b/content/zh/docs/v3.3/installing-on-linux/cluster-operation/add-edge-nodes.md @@ -21,9 +21,52 @@ KubeSphere 利用 [KubeEdge](https://kubeedge.io/zh/) 将原生容器化应用 ## 准备工作 - 您需要启用 [KubeEdge](../../../pluggable-components/kubeedge/)。 +- 为了避免兼容性问题,建议安装 v1.21.x 及以下版本的 Kubernetes。 - 您有一个可用节点作为边缘节点,该节点可以运行 Ubuntu(建议)或 CentOS。本教程以 Ubuntu 18.04 为例。 - 与 Kubernetes 集群节点不同,边缘节点应部署在单独的网络中。 +## 防止非边缘工作负载调度到边缘节点 + +由于部分守护进程集(例如,Calico)有强容忍度,为了避免影响边缘节点的正常工作,您需要手动 Patch Pod 以防止非边缘工作负载调度至边缘节点。 + +```bash +#!/bin/bash + + +NoShedulePatchJson='{"spec":{"template":{"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}}}}' + +ns="kube-system" + + +DaemonSets=("nodelocaldns" "kube-proxy" "calico-node") + +length=${#DaemonSets[@]} + +for((i=0;i<length;i++)); +do + ds=${DaemonSets[$i]} + echo "Patching for ns:"${ns}",DaemonSet:"${ds} + kubectl -n ${ns} patch daemonset ${ds} --type merge --patch "${NoShedulePatchJson}" + sleep 1 +done +``` + +## 创建防火墙规则和端口转发规则 + +若要确保边缘节点可以成功地与集群通信,您必须转发端口,以便外部流量进入您的网络。您可以根据下表将外网端口映射到相应的内网 IP 地址(主节点)和端口。此外,您还需要创建防火墙规则以允许流量进入这些端口(`10000` 至 `10004`)。 + + {{< notice note >}} + 在 ks-installer 的 `ClusterConfiguration`中,如果您设置的是局域网地址,那么需要配置转发规则。如果您未配置转发规则,直接连接 30000 – 30004 端口即可。 + {{}} + +| 字段 | 外网端口 | 字段 | 内网端口 | +| ------------------- | -------- | ----------------------- | -------- | +| `cloudhubPort` | `10000` | `cloudhubNodePort` | `30000` | +| `cloudhubQuicPort` | `10001` | `cloudhubQuicNodePort` | `30001` | +| `cloudhubHttpsPort` | `10002` | `cloudhubHttpsNodePort` | `30002` | +| `cloudstreamPort` | `10003` | `cloudstreamNodePort` | `30003` | +| `tunnelPort` | `10004` | `tunnelNodePort` | `30004` | + ## 配置边缘节点 您需要在边缘节点上安装容器运行时并配置 EdgeMesh。 @@ -72,22 +115,6 @@ KubeSphere 利用 [KubeEdge](https://kubeedge.io/zh/) 将原生容器化应用 net.ipv4.ip_forward = 1 ``` -## 创建防火墙规则和端口转发规则 - -若要确保边缘节点可以成功地与集群通信,您必须转发端口,以便外部流量进入您的网络。您可以根据下表将外网端口映射到相应的内网 IP 地址(主节点)和端口。此外,您还需要创建防火墙规则以允许流量进入这些端口(`10000` 至 `10004`)。 - - {{< notice note >}} - 在 ks-installer 的 `ClusterConfiguration`中,如果您设置的是局域网地址,那么需要配置转发规则。如果您未配置转发规则,直接连接 30000 – 30004 端口即可。 - {{}} - -| 字段 | 外网端口 | 字段 | 内网端口 | -| ------------------- | -------- | ----------------------- | -------- | -| `cloudhubPort` | `10000` | `cloudhubNodePort` | `30000` | -| `cloudhubQuicPort` | `10001` | `cloudhubQuicNodePort` | `30001` | -| `cloudhubHttpsPort` | `10002` | `cloudhubHttpsNodePort` | `30002` | -| `cloudstreamPort` | `10003` | `cloudstreamNodePort` | `30003` | -| `tunnelPort` | `10004` | `tunnelNodePort` | `30004` | - ## 添加边缘节点 1. 使用 `admin` 用户登录控制台,点击左上角的**平台管理**。 @@ -101,6 +128,8 @@ KubeSphere 利用 [KubeEdge](https://kubeedge.io/zh/) 将原生容器化应用 {{}} 3. 点击**添加**。在出现的对话框中,设置边缘节点的节点名称并输入其内网 IP 地址。点击**验证**以继续。 + + ![add-edge-node](/images/docs/v3.3/zh-cn/installing-on-linux/add-and-delete-nodes/add-edge-nodes/add-edge-node.png) {{< notice note >}} @@ -111,6 +140,8 @@ KubeSphere 利用 [KubeEdge](https://kubeedge.io/zh/) 将原生容器化应用 4. 复制**边缘节点配置命令**下自动创建的命令,并在您的边缘节点上运行该命令。 + ![edge-command](/images/docs/v3.3/zh-cn/installing-on-linux/add-and-delete-nodes/add-edge-nodes/edge-command.png) + {{< notice note >}} 在运行该命令前,请确保您的边缘节点上已安装 `wget`。 @@ -170,39 +201,7 @@ KubeSphere 利用 [KubeEdge](https://kubeedge.io/zh/) 将原生容器化应用 systemctl restart edgecore.service ``` -9. 
边缘节点加入集群后,部分 Pod 在调度至该边缘节点上后可能会一直处于 `Pending` 状态。由于部分守护进程集(例如,Calico)有强容忍度,您需要手动 Patch Pod 以防止它们调度至该边缘节点。 - - - ```bash - #!/bin/bash - - NodeSelectorPatchJson='{"spec":{"template":{"spec":{"nodeSelector":{"node-role.kubernetes.io/master": "","node-role.kubernetes.io/worker": ""}}}}}' - - NoShedulePatchJson='{"spec":{"template":{"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}}}}' - - edgenode="edgenode" - if [ $1 ]; then - edgenode="$1" - fi - - - namespaces=($(kubectl get pods -A -o wide |egrep -i $edgenode | awk '{print $1}' )) - pods=($(kubectl get pods -A -o wide |egrep -i $edgenode | awk '{print $2}' )) - length=${#namespaces[@]} - - - for((i=0;i<$length;i++)); - do - ns=${namespaces[$i]} - pod=${pods[$i]} - resources=$(kubectl -n $ns describe pod $pod | grep "Controlled By" |awk '{print $3}') - echo "Patching for ns:"${namespaces[$i]}",resources:"$resources - kubectl -n $ns patch $resources --type merge --patch "$NoShedulePatchJson" - sleep 1 - done - ``` - -10. 如果仍然无法显示监控数据,执行以下命令: +9. 如果仍然无法显示监控数据,执行以下命令: ```bash journalctl -u edgecore.service -b -r ``` @@ -256,4 +255,4 @@ KubeSphere 利用 [KubeEdge](https://kubeedge.io/zh/) 将原生容器化应用 卸载完成后,您将无法为集群添加边缘节点。 - {{}} + {{}} \ No newline at end of file diff --git a/content/zh/docs/v3.3/installing-on-linux/introduction/air-gapped-installation.md b/content/zh/docs/v3.3/installing-on-linux/introduction/air-gapped-installation.md index 77ef56980..bb9c46bab 100644 --- a/content/zh/docs/v3.3/installing-on-linux/introduction/air-gapped-installation.md +++ b/content/zh/docs/v3.3/installing-on-linux/introduction/air-gapped-installation.md @@ -17,7 +17,7 @@ KubeKey v2.1.0 版本新增了清单(manifest)和制品(artifact)的概 | 主机 IP | 主机名称 | 角色 | | ---------------- | ---- | ---------------- | -| 192.168.0.2 | node1 | 联网主机用于源集群打包使用。已部署 Kubernetes v1.22.12 和 KubeSphere v3.3.1 | +| 192.168.0.2 | node1 | 联网主机用于制作离线包 | | 192.168.0.3 | node2 | 离线环境主节点 | | 192.168.0.4 | node3 | 离线环境镜像仓库节点 | @@ -54,17 +54,7 @@ KubeKey v2.1.0 版本新增了清单(manifest)和制品(artifact)的概 {{}} -2. 在源集群中使用 KubeKey 创建 manifest。支持下面 2 种方式: - - - (推荐)在已创建的集群中执行 KubeKey 命令生成该文件。生成的yaml只是提供一个示例(镜像列表不完整),需要自行补充修改,第一次离线部署推荐复制下方第三点的配置内容。 - - ```bash - ./kk create manifest - ``` - - - 根据模板手动创建并编写该文件(需要一定的基础推荐使用第一种方式)。关于更多信息,请参阅 [manifest-example](https://github.com/kubesphere/kubekey/blob/master/docs/manifest-example.md)。 - -3. 执行以下命令在源集群中修改 manifest 配置: +2. 在联网主机上执行以下命令,并复制示例中的 manifest 内容。关于更多信息,请参阅 [manifest-example](https://github.com/kubesphere/kubekey/blob/master/docs/manifest-example.md)。 ```bash vim manifest.yaml @@ -273,7 +263,11 @@ KubeKey v2.1.0 版本新增了清单(manifest)和制品(artifact)的概 {{}} -4. 从源集群中导出制品 artifact。 +3. (可选)如果您已经拥有集群,那么可以在已有集群中执行 KubeKey 命令生成 manifest 文件,并参照步骤 2 中的示例配置 manifest 文件内容。 + ```bash + ./kk create manifest + ``` +4. 
导出制品 artifact。 {{< tabs >}} diff --git a/content/zh/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md b/content/zh/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md index 56952b2eb..c08563a91 100644 --- a/content/zh/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md +++ b/content/zh/docs/v3.3/installing-on-linux/on-premises/install-kubesphere-on-vmware-vsphere.md @@ -241,14 +241,14 @@ track_script { ```bash systemctl restart keepalived && systemctl enable keepalived -systemctl stop keepaliv +systemctl stop keepalived ``` 开启 keepalived服务 ```bash -systemctl start keepalivedb +systemctl start keepalived ``` ### 验证可用性 diff --git a/content/zh/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-nfs-client.md b/content/zh/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-nfs-client.md index ed346cda0..bcb2bfbba 100644 --- a/content/zh/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-nfs-client.md +++ b/content/zh/docs/v3.3/installing-on-linux/persistent-storage-configurations/install-nfs-client.md @@ -11,7 +11,7 @@ weight: 3330 {{< notice note >}} - 本教程以 Ubuntu 16.04 为例。 -- 不建议您在生产环境中使用 NFS 存储(尤其是在 Kubernetes 1.20 或以上版本),这可能会引起 `failed to obtain lock` 和 `input/output error` 等问题,从而导致 Pod `CrashLoopBackOff`。此外,部分应用不兼容 NFS,例如 [Prometheus](https://github.com/prometheus/prometheus/blob/03b354d4d9386e4b3bfbcd45da4bb58b182051a5/docs/storage.md#operational-aspects) 等。 +- NFS 与部分应用不兼容(例如 Prometheus),可能会导致容器组创建失败。如果确实需要在生产环境中使用 NFS,请确保您了解相关风险或咨询 KubeSphere 技术支持 support@kubesphere.cloud。 {{}} diff --git a/content/zh/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-huaweicloud-ecs.md b/content/zh/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-huaweicloud-ecs.md index f64f9738b..47f605cde 100644 --- a/content/zh/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-huaweicloud-ecs.md +++ b/content/zh/docs/v3.3/installing-on-linux/public-cloud/install-kubesphere-on-huaweicloud-ecs.md @@ -227,7 +227,7 @@ spec: elasticsearchDataVolumeSize: 20Gi # Volume size of Elasticsearch data nodes logMaxAge: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log - # externalElasticsearchUrl: + # externalElasticsearchHost: # externalElasticsearchPort: console: enableMultiLogin: false # enable/disable multiple sing on, it allows a user can be used by different users at the same time. 
diff --git a/content/zh/docs/v3.3/pluggable-components/auditing-logs.md b/content/zh/docs/v3.3/pluggable-components/auditing-logs.md index 3e1190b7a..fafe75cb6 100644 --- a/content/zh/docs/v3.3/pluggable-components/auditing-logs.md +++ b/content/zh/docs/v3.3/pluggable-components/auditing-logs.md @@ -34,7 +34,7 @@ KubeSphere 审计日志系统提供了一套与安全相关并按时间顺序排 ``` {{< notice note >}} -默认情况下,如果启用了审计功能,KubeKey 将安装内置 Elasticsearch。对于生产环境,如果您想启用审计功能,强烈建议在 `config-sample.yaml` 中设置以下值,尤其是 `externalElasticsearchUrl` 和 `externalElasticsearchPort`。在安装前提供以下信息后,KubeKey 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 +默认情况下,如果启用了审计功能,KubeKey 将安装内置 Elasticsearch。对于生产环境,如果您想启用审计功能,强烈建议在 `config-sample.yaml` 中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。在安装前提供以下信息后,KubeKey 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 {{}} ```yaml @@ -45,7 +45,7 @@ KubeSphere 审计日志系统提供了一套与安全相关并按时间顺序排 elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchUrl: # The Host of external Elasticsearch. + externalElasticsearchHost: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` @@ -73,7 +73,7 @@ KubeSphere 审计日志系统提供了一套与安全相关并按时间顺序排 ``` {{< notice note >}} -默认情况下,如果启用了审计功能,ks-installer 会安装内置 Elasticsearch。对于生产环境,如果您想启用审计功能,强烈建议在 `cluster-configuration.yaml` 中设置以下值,尤其是 `externalElasticsearchUrl` 和 `externalElasticsearchPort`。在安装前提供以下信息后,ks-installer 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 +默认情况下,如果启用了审计功能,ks-installer 会安装内置 Elasticsearch。对于生产环境,如果您想启用审计功能,强烈建议在 `cluster-configuration.yaml` 中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。在安装前提供以下信息后,ks-installer 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 {{}} ```yaml @@ -84,7 +84,7 @@ KubeSphere 审计日志系统提供了一套与安全相关并按时间顺序排 elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchUrl: # The Host of external Elasticsearch. + externalElasticsearchHost: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` @@ -116,7 +116,7 @@ KubeSphere 审计日志系统提供了一套与安全相关并按时间顺序排 ``` {{< notice note >}} -默认情况下,如果启用了审计功能,将安装内置 Elasticsearch。对于生产环境,如果您想启用审计功能,强烈建议在该 YAML 文件中设置以下值,尤其是 `externalElasticsearchUrl` 和 `externalElasticsearchPort`。提供以下信息后,KubeSphere 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 +默认情况下,如果启用了审计功能,将安装内置 Elasticsearch。对于生产环境,如果您想启用审计功能,强烈建议在该 YAML 文件中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。提供以下信息后,KubeSphere 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 {{}} ```yaml @@ -127,7 +127,7 @@ KubeSphere 审计日志系统提供了一套与安全相关并按时间顺序排 elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchUrl: # The Host of external Elasticsearch. + externalElasticsearchHost: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. 
``` diff --git a/content/zh/docs/v3.3/pluggable-components/events.md b/content/zh/docs/v3.3/pluggable-components/events.md index 025ccb9c4..9fa7bb078 100644 --- a/content/zh/docs/v3.3/pluggable-components/events.md +++ b/content/zh/docs/v3.3/pluggable-components/events.md @@ -36,7 +36,7 @@ KubeSphere 事件系统使用户能够跟踪集群内部发生的事件,例如 ``` {{< notice note >}} -默认情况下,如果启用了事件系统,KubeKey 将安装内置 Elasticsearch。对于生产环境,如果您想启用事件系统,强烈建议在 `config-sample.yaml` 中设置以下值,尤其是 `externalElasticsearchUrl` 和 `externalElasticsearchPort`。在安装前提供以下信息后,KubeKey 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 +默认情况下,如果启用了事件系统,KubeKey 将安装内置 Elasticsearch。对于生产环境,如果您想启用事件系统,强烈建议在 `config-sample.yaml` 中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。在安装前提供以下信息后,KubeKey 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 {{}} ```yaml @@ -47,7 +47,7 @@ KubeSphere 事件系统使用户能够跟踪集群内部发生的事件,例如 elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchUrl: # The Host of external Elasticsearch. + externalElasticsearchHost: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` @@ -75,7 +75,7 @@ KubeSphere 事件系统使用户能够跟踪集群内部发生的事件,例如 ``` {{< notice note >}} -对于生产环境,如果您想启用事件系统,强烈建议在 `cluster-configuration.yaml` 中设置以下值,尤其是 `externalElasticsearchUrl` 和 `externalElasticsearchPort`。在安装前提供以下信息后,ks-installer 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 +对于生产环境,如果您想启用事件系统,强烈建议在 `cluster-configuration.yaml` 中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。在安装前提供以下信息后,ks-installer 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 {{}} ```yaml @@ -86,7 +86,7 @@ KubeSphere 事件系统使用户能够跟踪集群内部发生的事件,例如 elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchUrl: # The Host of external Elasticsearch. + externalElasticsearchHost: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` @@ -121,7 +121,7 @@ KubeSphere 事件系统使用户能够跟踪集群内部发生的事件,例如 {{< notice note >}} -默认情况下,如果启用了事件系统,将会安装内置 Elasticsearch。对于生产环境,如果您想启用事件系统,强烈建议在该 YAML 文件中设置以下值,尤其是 `externalElasticsearchUrl` 和 `externalElasticsearchPort`。在文件中提供以下信息后,KubeSphere 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 +默认情况下,如果启用了事件系统,将会安装内置 Elasticsearch。对于生产环境,如果您想启用事件系统,强烈建议在该 YAML 文件中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。在文件中提供以下信息后,KubeSphere 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 {{}} ```yaml @@ -132,7 +132,7 @@ KubeSphere 事件系统使用户能够跟踪集群内部发生的事件,例如 elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchUrl: # The Host of external Elasticsearch. + externalElasticsearchHost: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. 
``` diff --git a/content/zh/docs/v3.3/pluggable-components/logging.md b/content/zh/docs/v3.3/pluggable-components/logging.md index 07f0171c1..b6a8fefc5 100644 --- a/content/zh/docs/v3.3/pluggable-components/logging.md +++ b/content/zh/docs/v3.3/pluggable-components/logging.md @@ -42,7 +42,7 @@ KubeSphere 为日志收集、查询和管理提供了一个强大的、全面的 {{}} - {{< notice note >}}默认情况下,如果启用了日志系统,KubeKey 将安装内置 Elasticsearch。对于生产环境,如果您想启用日志系统,强烈建议在 `config-sample.yaml` 中设置以下值,尤其是 `externalElasticsearchUrl` 和 `externalElasticsearchPort`。在安装前提供以下信息后,KubeKey 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 + {{< notice note >}}默认情况下,如果启用了日志系统,KubeKey 将安装内置 Elasticsearch。对于生产环境,如果您想启用日志系统,强烈建议在 `config-sample.yaml` 中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。在安装前提供以下信息后,KubeKey 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 {{}} ```yaml @@ -53,7 +53,7 @@ KubeSphere 为日志收集、查询和管理提供了一个强大的、全面的 elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchUrl: # The Host of external Elasticsearch. + externalElasticsearchHost: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` @@ -85,7 +85,7 @@ KubeSphere 为日志收集、查询和管理提供了一个强大的、全面的 {{}} - {{< notice note >}}默认情况下,如果启用了日志系统,ks-installer 将安装内置 Elasticsearch。对于生产环境,如果您想启用日志系统,强烈建议在 `cluster-configuration.yaml` 中设置以下值,尤其是 `externalElasticsearchUrl` 和 `externalElasticsearchPort`。在安装前提供以下信息后,ks-installer 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 + {{< notice note >}}默认情况下,如果启用了日志系统,ks-installer 将安装内置 Elasticsearch。对于生产环境,如果您想启用日志系统,强烈建议在 `cluster-configuration.yaml` 中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。在安装前提供以下信息后,ks-installer 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 {{}} ```yaml @@ -96,7 +96,7 @@ KubeSphere 为日志收集、查询和管理提供了一个强大的、全面的 elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchUrl: # The Host of external Elasticsearch. + externalElasticsearchHost: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. ``` @@ -134,7 +134,7 @@ KubeSphere 为日志收集、查询和管理提供了一个强大的、全面的 {{}} - {{< notice note >}}默认情况下,如果启用了日志系统,将会安装内置 Elasticsearch。对于生产环境,如果您想启用日志系统,强烈建议在该 YAML 文件中设置以下值,尤其是 `externalElasticsearchUrl` 和 `externalElasticsearchPort`。在文件中提供以下信息后,KubeSphere 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 + {{< notice note >}}默认情况下,如果启用了日志系统,将会安装内置 Elasticsearch。对于生产环境,如果您想启用日志系统,强烈建议在该 YAML 文件中设置以下值,尤其是 `externalElasticsearchHost` 和 `externalElasticsearchPort`。在文件中提供以下信息后,KubeSphere 将直接对接您的外部 Elasticsearch,不再安装内置 Elasticsearch。 {{}} ```yaml @@ -145,7 +145,7 @@ KubeSphere 为日志收集、查询和管理提供了一个强大的、全面的 elasticsearchDataVolumeSize: 20Gi # The volume size of Elasticsearch data nodes. logMaxAge: 7 # Log retention day in built-in Elasticsearch. It is 7 days by default. elkPrefix: logstash # The string making up index names. The index name will be formatted as ks--log. - externalElasticsearchUrl: # The Host of external Elasticsearch. + externalElasticsearchHost: # The Host of external Elasticsearch. externalElasticsearchPort: # The port of external Elasticsearch. 
``` diff --git a/content/zh/docs/v3.3/project-user-guide/application-workloads/routes.md b/content/zh/docs/v3.3/project-user-guide/application-workloads/routes.md index 28d429058..7d09c1690 100644 --- a/content/zh/docs/v3.3/project-user-guide/application-workloads/routes.md +++ b/content/zh/docs/v3.3/project-user-guide/application-workloads/routes.md @@ -50,10 +50,6 @@ KubeSphere 上的应用路由和 Kubernetes 上的 [Ingress](https://kubernetes. * **自动生成**:KubeSphere 自动以`<服务名称>.<项目名称>.<网关地址>.nip.io` 格式生成域名,该域名由 [nip.io](https://nip.io/) 自动解析为网关地址。该模式仅支持 HTTP。 - * **路径**:将每个服务映射到一条路径。您可以点击**添加**来添加多条路径。 - - * **指定域名**:使用用户定义的域名。此模式同时支持 HTTP 和 HTTPS。 - * **域名**:为应用路由设置域名。 * **协议**:选择 `http` 或 `https`。如果选择了 `https`,则需要选择包含 `tls.crt`(TLS 证书)和 `tls.key`(TLS 私钥)的密钥用于加密。 * **路径**:将每个服务映射到一条路径。您可以点击**添加**来添加多条路径。 diff --git a/content/zh/docs/v3.3/quick-start/all-in-one-on-linux.md b/content/zh/docs/v3.3/quick-start/all-in-one-on-linux.md index 63462d4b5..e6142f423 100644 --- a/content/zh/docs/v3.3/quick-start/all-in-one-on-linux.md +++ b/content/zh/docs/v3.3/quick-start/all-in-one-on-linux.md @@ -27,7 +27,7 @@ weight: 2100 最低配置 - Ubuntu 16.04, 18.04 + Ubuntu 16.04, 18.04, 20.04, 22.04 2 核 CPU,4 GB 内存,40 GB 磁盘空间 diff --git a/content/zh/docs/v3.3/reference/api-docs.md b/content/zh/docs/v3.3/reference/api-docs.md index d5e1f68dd..3b5f22a88 100644 --- a/content/zh/docs/v3.3/reference/api-docs.md +++ b/content/zh/docs/v3.3/reference/api-docs.md @@ -47,7 +47,7 @@ curl -X POST -H 'Content-Type: application/x-www-form-urlencoded' \ 'http://[node ip]:31407/oauth/token' \ --data-urlencode 'grant_type=password' \ --data-urlencode 'username=admin' \ - --data-urlencode 'password=P#$$w0rd' + --data-urlencode 'password=P#$$w0rd' \ --data-urlencode 'client_id=kubesphere' \ --data-urlencode 'client_secret=kubesphere' ``` diff --git a/content/zh/docs/v3.3/reference/storage-system-installation/nfs-server.md b/content/zh/docs/v3.3/reference/storage-system-installation/nfs-server.md index b1b03b3b0..7b919484c 100644 --- a/content/zh/docs/v3.3/reference/storage-system-installation/nfs-server.md +++ b/content/zh/docs/v3.3/reference/storage-system-installation/nfs-server.md @@ -13,7 +13,7 @@ NFS 服务器机器就绪后,您可以使用 [KubeKey](../../../installing-on- {{< notice note >}} - 您也可以在安装 KubeSphere 集群后创建 NFS-client 的存储类型。 -- 不建议您在生产环境中使用 NFS 存储(尤其是在 Kubernetes 1.20 或以上版本),这可能会引起 `failed to obtain lock` 和 `input/output error` 等问题,从而导致 Pod `CrashLoopBackOff`。此外,部分应用不兼容 NFS,例如 [Prometheus](https://github.com/prometheus/prometheus/blob/03b354d4d9386e4b3bfbcd45da4bb58b182051a5/docs/storage.md#operational-aspects) 等。 +- NFS 与部分应用不兼容(例如 Prometheus),可能会导致容器组创建失败。如果确实需要在生产环境中使用 NFS,请确保您了解相关风险或咨询 KubeSphere 技术支持 support@kubesphere.cloud。 {{}}