mirror of https://github.com/kubesphere/website.git (synced 2025-12-26 00:12:48 +00:00)

replace e.g. with for example

Signed-off-by: Sherlock113 <sherlockxu@yunify.com>

parent 30f4eee52f
commit de488994dd
@@ -45,7 +45,7 @@ dependencies: (Optional) A list of the chart requirements.
 - name: The name of the chart, such as nginx.
   version: The version of the chart, such as "1.2.3".
   repository: The repository URL ("https://example.com/charts") or alias ("@repo-name").
-  condition: (Optional) A yaml path that resolves to a boolean, used for enabling/disabling charts (e.g. subchart1.enabled ).
+  condition: (Optional) A yaml path that resolves to a boolean, used for enabling/disabling charts (for example, subchart1.enabled).
   tags: (Optional)
   - Tags can be used to group charts for enabling/disabling together.
   import-values: (Optional)
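For readers skimming the hunk above, a minimal `dependencies` block in `Chart.yaml` might look like this sketch (the chart name, version, and repository URL are illustrative):

```yaml
# Chart.yaml — hypothetical dependency entry
dependencies:
  - name: nginx
    version: "1.2.3"
    repository: "https://example.com/charts"
    condition: nginx.enabled   # boolean path resolved against values.yaml
    tags:
      - frontend               # charts sharing this tag can be toggled together
```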
@@ -71,6 +71,6 @@ After the app is deployed, you can use etcdctl, a command-line tool for interact
 

-4. For clients within the KubeSphere cluster, the etcd service can be accessed through `<app name>.<project name>.svc.<K8s domain>:2379` (e.g. `etcd-bqe0g4.demo-project.svc.cluster.local:2379` in this guide).
+4. For clients within the KubeSphere cluster, the etcd service can be accessed through `<app name>.<project name>.svc.<K8s domain>:2379` (for example, `etcd-bqe0g4.demo-project.svc.cluster.local:2379` in this guide).

 5. For more information, see [the official documentation of etcd](https://etcd.io/docs/v3.4.0/).
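As a quick illustration of step 4, a client Pod in the same cluster could exercise the service with etcdctl roughly as follows (the key and value are made up; the endpoint is the example address from this guide):

```bash
# etcd v3 API; run from a Pod inside the cluster
export ETCDCTL_API=3
etcdctl --endpoints=http://etcd-bqe0g4.demo-project.svc.cluster.local:2379 put /demo/key "hello"
etcdctl --endpoints=http://etcd-bqe0g4.demo-project.svc.cluster.local:2379 get /demo/key
```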
@@ -51,7 +51,7 @@ Click a node from the list and you can go to its detail page.
 

 - **Cordon/Uncordon**: Marking a node as unschedulable is very useful during a node reboot or other maintenance. The Kubernetes scheduler will not schedule new Pods to this node if it's been marked unschedulable. Besides, this does not affect existing workloads already on the node. In KubeSphere, you mark a node as unschedulable by clicking **Cordon** on the node detail page. The node will be schedulable if you click the button (**Uncordon**) again.
-- **Labels**: Node labels can be very useful when you want to assign Pods to specific nodes. Label a node first (e.g. label GPU nodes with `node-role.kubernetes.io/gpu-node`), and then add the label in **Advanced Settings** [when you create a workload](../../project-user-guide/application-workloads/deployments/#step-5-configure-advanced-settings) so that you can allow Pods to run on GPU nodes explicitly. To add node labels, click **More** and select **Edit Labels**.
+- **Labels**: Node labels can be very useful when you want to assign Pods to specific nodes. Label a node first (for example, label GPU nodes with `node-role.kubernetes.io/gpu-node`), and then add the label in **Advanced Settings** [when you create a workload](../../project-user-guide/application-workloads/deployments/#step-5-configure-advanced-settings) so that you can allow Pods to run on GPU nodes explicitly. To add node labels, click **More** and select **Edit Labels**.

 
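For reference, the console actions described in this hunk map to these kubectl equivalents (the node name is illustrative):

```bash
kubectl cordon node1      # mark node1 unschedulable; existing Pods keep running
kubectl uncordon node1    # make node1 schedulable again
# Label a GPU node so workloads can target it via Advanced Settings
kubectl label node node1 node-role.kubernetes.io/gpu-node=
```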
@@ -24,7 +24,7 @@ The table below summarizes common volume plugins for various provisioners (stora
 | -------------------- | ------------------------------------------------------------ |
 | In-tree | Built-in and run as part of Kubernetes, such as [RBD](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd) and [Glusterfs](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs). For more plugins of this kind, see [Provisioner](https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner). |
 | External-provisioner | Deployed independently from Kubernetes, but works like an in-tree plugin, such as [nfs-client](https://github.com/kubernetes-retired/external-storage/tree/master/nfs-client). For more plugins of this kind, see [External Storage](https://github.com/kubernetes-retired/external-storage). |
-| CSI | Container Storage Interface, a standard for exposing storage resources to workloads on COs (e.g. Kubernetes), such as [QingCloud-csi](https://github.com/yunify/qingcloud-csi) and [Ceph-CSI](https://github.com/ceph/ceph-csi). For more plugins of this kind, see [Drivers](https://kubernetes-csi.github.io/docs/drivers.html). |
+| CSI | Container Storage Interface, a standard for exposing storage resources to workloads on COs (for example, Kubernetes), such as [QingCloud-csi](https://github.com/yunify/qingcloud-csi) and [Ceph-CSI](https://github.com/ceph/ceph-csi). For more plugins of this kind, see [Drivers](https://kubernetes-csi.github.io/docs/drivers.html). |

 ## Prerequisites
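Whichever plugin type provisions your storage, it is consumed through a StorageClass. A minimal sketch for a CSI driver follows; the provisioner name is a placeholder for your driver's:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-demo
provisioner: csi.example.com        # replace with your CSI driver's provisioner name
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```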
@@ -190,7 +190,7 @@ To integrate SonarQube into your pipeline, you must install SonarQube Server fir
    http://192.168.0.4:30180
    ```

-3. Access Jenkins with the address `http://{$Public IP}:30180`. When KubeSphere is installed, the Jenkins dashboard is also installed by default. Besides, Jenkins is configured with KubeSphere LDAP, which means you can log in to Jenkins with KubeSphere accounts (e.g. `admin/P@88w0rd`) directly. For more information about configuring Jenkins, see [Jenkins System Settings](../../../devops-user-guide/how-to-use/jenkins-setting/).
+3. Access Jenkins with the address `http://{$Public IP}:30180`. When KubeSphere is installed, the Jenkins dashboard is also installed by default. Besides, Jenkins is configured with KubeSphere LDAP, which means you can log in to Jenkins with KubeSphere accounts (for example, `admin/P@88w0rd`) directly. For more information about configuring Jenkins, see [Jenkins System Settings](../../../devops-user-guide/how-to-use/jenkins-setting/).

 
@@ -247,7 +247,7 @@ The account `project-admin` needs to be created in advance since it is the revie

 

-In a development or production environment, it requires someone who has higher authority (e.g. release manager) to review the pipeline, images, as well as the code analysis result. They have the authority to determine whether the pipeline can go to the next stage. In the Jenkinsfile, you use the section `input` to specify who reviews the pipeline. If you want to specify a user (e.g. `project-admin`) to review it, you can add a field in the Jenkinsfile. If there are multiple users, you need to use commas to separate them as follows:
+In a development or production environment, it requires someone who has higher authority (for example, release manager) to review the pipeline, images, as well as the code analysis result. They have the authority to determine whether the pipeline can go to the next stage. In the Jenkinsfile, you use the section `input` to specify who reviews the pipeline. If you want to specify a user (for example, `project-admin`) to review it, you can add a field in the Jenkinsfile. If there are multiple users, you need to use commas to separate them as follows:

 ```groovy
 ···
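// Editor's sketch (not part of the commit): the elided `input` section above
// might look like this for multiple reviewers; the stage name and reviewer IDs
// are illustrative.
stage('deploy to production') {
    input {
        message 'Deploy to production?'
        submitter 'project-admin,release-manager'   // comma-separated reviewers
    }
    steps {
        echo 'Deploying...'
    }
}
```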
@@ -48,7 +48,7 @@ Log in to the console of KubeSphere as `project-regular`. Navigate to your DevOp

 ### Create GitHub credentials

-Similarly, follow the same steps above to create GitHub credentials. Set a different Credential ID (e.g. `github-id`) and also select **Account Credentials** for **Type**. Enter your GitHub username and password for **Username** and **Token/Password** respectively.
+Similarly, follow the same steps above to create GitHub credentials. Set a different Credential ID (for example, `github-id`) and also select **Account Credentials** for **Type**. Enter your GitHub username and password for **Username** and **Token/Password** respectively.

 {{< notice note >}}
@@ -58,7 +58,7 @@ If there are any special characters such as `@` and `$` in your account or passw

 ### Create kubeconfig credentials

-Similarly, follow the same steps above to create kubeconfig credentials. Set a different Credential ID (e.g. `demo-kubeconfig`) and select **kubeconfig**.
+Similarly, follow the same steps above to create kubeconfig credentials. Set a different Credential ID (for example, `demo-kubeconfig`) and select **kubeconfig**.

 {{< notice info >}}
@@ -39,7 +39,7 @@ The built-in Jenkins cannot share the same email configuration with the platform
 | Environment Variable Name | Description |
 | ------------------------- | -------------------------------- |
 | EMAIL\_SMTP\_HOST | SMTP server address |
-| EMAIL\_SMTP\_PORT | SMTP server port (e.g. 25) |
+| EMAIL\_SMTP\_PORT | SMTP server port (for example, 25) |
 | EMAIL\_FROM\_ADDR | Email sender address |
 | EMAIL\_FROM\_NAME | Email sender name |
 | EMAIL\_FROM\_PASS | Email sender password |
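Filled in, these variables might look like the following sketch in the `jenkins-casc-config` ConfigMap (all values are placeholders):

```yaml
EMAIL_SMTP_HOST: "smtp.example.com"
EMAIL_SMTP_PORT: "25"
EMAIL_FROM_ADDR: "ci@example.com"
EMAIL_FROM_NAME: "KubeSphere CI"
EMAIL_FROM_PASS: "your-smtp-password"
```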
@@ -56,7 +56,7 @@ After you modified `jenkins-casc-config`, you need to reload your updated system
    http://192.168.0.4:30180
    ```

-3. Access Jenkins at `http://Node IP:Port Number`. When KubeSphere is installed, the Jenkins dashboard is also installed by default. Besides, Jenkins is configured with KubeSphere LDAP, which means you can log in to Jenkins with KubeSphere accounts (e.g. `admin/P@88w0rd`) directly.
+3. Access Jenkins at `http://Node IP:Port Number`. When KubeSphere is installed, the Jenkins dashboard is also installed by default. Besides, Jenkins is configured with KubeSphere LDAP, which means you can log in to Jenkins with KubeSphere accounts (for example, `admin/P@88w0rd`) directly.

 
@@ -158,7 +158,7 @@ You can select a pipeline from the drop-down list for **When Create Pipeline** a

 

-**Webhook Push** is an efficient way to allow pipelines to discover changes in the remote code repository and automatically trigger a new running. Webhook should be the primary method to trigger Jenkins automatic scanning for GitHub and Git (e.g. GitLab).
+**Webhook Push** is an efficient way to allow pipelines to discover changes in the remote code repository and automatically trigger a new running. Webhook should be the primary method to trigger Jenkins automatic scanning for GitHub and Git (for example, GitLab).

 ### Advanced Settings with No Code Repository Selected
@@ -48,7 +48,7 @@ A DevOps project user with required permissions can configure credentials for pi

 ### Members and roles

-Similar to a project, a DevOps project also requires users to be granted different roles before they can work in the DevOps project. Project administrators (e.g. `project-admin`) are responsible for inviting tenants and granting them different roles. For more information, see [Role and Member Management](../role-and-member-management/).
+Similar to a project, a DevOps project also requires users to be granted different roles before they can work in the DevOps project. Project administrators (for example, `project-admin`) are responsible for inviting tenants and granting them different roles. For more information, see [Role and Member Management](../role-and-member-management/).

 ## Edit or Delete a DevOps Project
@@ -17,7 +17,7 @@ In DevOps project scope, you can grant the following resources' permissions to a

 ## Prerequisites

-At least one DevOps project has been created, such as `demo-devops`. Besides, you need an account of the `admin` role (e.g. `devops-admin`) at the DevOps project level.
+At least one DevOps project has been created, such as `demo-devops`. Besides, you need an account of the `admin` role (for example, `devops-admin`) at the DevOps project level.

 ## Built-in Roles
@@ -31,7 +31,7 @@ In **Project Roles**, there are three available built-in roles as shown below. B

 ## Create a DevOps Project Role

-1. Log in to the console as `devops-admin` and select a DevOps project (e.g. `demo-devops`) under **DevOps Projects** list.
+1. Log in to the console as `devops-admin` and select a DevOps project (for example, `demo-devops`) under **DevOps Projects** list.

 {{< notice note >}}
@@ -10,7 +10,7 @@ As an open-source and app-centric container platform, KubeSphere integrates 16 b

 ## Prerequisites

-- You need to use an account with the role of `platform-admin` (e.g. `admin`) for this tutorial.
+- You need to use an account with the role of `platform-admin` (for example, `admin`) for this tutorial.
 - You need to [enable the App Store](../../../pluggable-components/app-store/).

 ## Remove a Built-in App
@@ -30,7 +30,7 @@ You need to enable [the KubeSphere DevOps system](../../../pluggable-components/
    echo http://$NODE_IP:$NODE_PORT
    ```

-2. You can get the output similar to the following. You can access the Jenkins dashboard through the address with your own KubeSphere account and password (e.g. `admin/P@88w0rd`).
+2. You can get the output similar to the following. You can access the Jenkins dashboard through the address with your own KubeSphere account and password (for example, `admin/P@88w0rd`).

    ```
    http://192.168.0.4:30180
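# Editor's note (not part of the commit): NODE_IP and NODE_PORT used above are
# typically derived like this; the Service and namespace names are assumptions
# that may differ across KubeSphere versions. Verify with: kubectl get svc -A | grep jenkins
export NODE_PORT=$(kubectl get svc devops-jenkins -n kubesphere-devops-system -o jsonpath='{.spec.ports[0].nodePort}')
export NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[0].address}')
echo http://$NODE_IP:$NODE_PORT
```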
@@ -78,7 +78,7 @@ Docker needs to be installed in advance for this method.
   registry:
     registryMirrors: [] # For users who need to speed up downloads
     insecureRegistries: [] # Set an address of insecure image registry. See https://docs.docker.com/registry/insecure/
-    privateRegistry: "" # Configure a private image registry for air-gapped installation (e.g. docker local registry or Harbor)
+    privateRegistry: "" # Configure a private image registry for air-gapped installation (for example, docker local registry or Harbor)
   ```

 2. Input the registry mirror address above and save the file. For more information about the installation process, see [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/).
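Filled in for step 2, the block might read as follows (both addresses are placeholders):

```yaml
registry:
  registryMirrors: ["https://mirror.example.com"]   # a registry mirror to speed up pulls
  insecureRegistries: ["192.168.0.2:5000"]          # a local registry served over HTTP
  privateRegistry: ""
```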
@@ -18,11 +18,11 @@ A Kubernetes cluster in DO is a prerequisite for installing KubeSphere. Go to yo

 You need to select:

-1. Kubernetes version (e.g. *1.18.6-do.0*)
-2. Datacenter region (e.g. *Frankfurt*)
-3. VPC network (e.g. *default-fra1*)
-4. Cluster capacity (e.g. 2 standard nodes with 2 vCPUs and 4GB of RAM each)
-5. A name for the cluster (e.g. *kubesphere-3*)
+1. Kubernetes version (for example, *1.18.6-do.0*)
+2. Datacenter region (for example, *Frankfurt*)
+3. VPC network (for example, *default-fra1*)
+4. Cluster capacity (for example, 2 standard nodes with 2 vCPUs and 4GB of RAM each)
+5. A name for the cluster (for example, *kubesphere-3*)

 
@@ -8,7 +8,7 @@ weight: 4110

 

-As part of KubeSphere's commitment to provide a plug-and-play architecture for users, it can be easily installed on existing Kubernetes clusters. More specifically, KubeSphere can be deployed on Kubernetes either hosted on clouds (e.g. AWS EKS, QingCloud QKE and Google GKE) or on-premises. This is because KubeSphere does not hack Kubernetes itself. It only interacts with the Kubernetes API to manage Kubernetes cluster resources. In other words, KubeSphere can be installed on any native Kubernetes cluster and Kubernetes distribution.
+As part of KubeSphere's commitment to provide a plug-and-play architecture for users, it can be easily installed on existing Kubernetes clusters. More specifically, KubeSphere can be deployed on Kubernetes either hosted on clouds (for example, AWS EKS, QingCloud QKE and Google GKE) or on-premises. This is because KubeSphere does not hack Kubernetes itself. It only interacts with the Kubernetes API to manage Kubernetes cluster resources. In other words, KubeSphere can be installed on any native Kubernetes cluster and Kubernetes distribution.

 This section gives you an overview of the general steps of installing KubeSphere on Kubernetes. For more information about the specific way of installation in different environments, see Installing on Hosted Kubernetes and Installing on On-premises Kubernetes.
@@ -76,7 +76,7 @@ You can skip this step if you already have the configuration file on your machin

 ## Add Master Nodes for High Availability

-The steps of adding master nodes are generally the same as adding worker nodes while you need to configure a load balancer for your cluster. You can use any cloud load balancers or hardware load balancers (e.g. F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating highly available clusters.
+The steps of adding master nodes are generally the same as adding worker nodes while you need to configure a load balancer for your cluster. You can use any cloud load balancers or hardware load balancers (for example, F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating highly available clusters.

 1. Create a configuration file using KubeKey.
@@ -6,7 +6,7 @@ linkTitle: "Set up an HA Cluster Using a Load Balancer"
 weight: 3210
 ---

-You can set up a single-master Kubernetes cluster with KubeSphere installed based on the tutorial of [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/). Single-master clusters may be sufficient for development and testing in most cases. For a production environment, however, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (e.g. F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
+You can set up a single-master Kubernetes cluster with KubeSphere installed based on the tutorial of [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/). Single-master clusters may be sufficient for development and testing in most cases. For a production environment, however, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (for example, F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.

 This tutorial demonstrates the general configurations of a high-availability cluster as you install KubeSphere on Linux.
@@ -163,7 +163,7 @@ For more information about different fields in this configuration file, see [Kub

 ### Persistent storage plugin configurations

-For a production environment, you need to prepare persistent storage and configure the storage plugin (e.g. CSI) in `config-sample.yaml` to define which storage service you want to use. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).
+For a production environment, you need to prepare persistent storage and configure the storage plugin (for example, CSI) in `config-sample.yaml` to define which storage service you want to use. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).

 ### Enable pluggable components (Optional)
@@ -16,7 +16,7 @@ This section gives you an overview of a single-master multi-node installation, i

 ## Concept

-A multi-node cluster is composed of at least one master node and one worker node. You can use any node as the **taskbox** to carry out the installation task. You can add additional nodes based on your needs (e.g. for high availability) both before and after the installation.
+A multi-node cluster is composed of at least one master node and one worker node. You can use any node as the **taskbox** to carry out the installation task. You can add additional nodes based on your needs (for example, for high availability) both before and after the installation.

 - **Master**. A master node generally hosts the control plane that controls and manages the whole system.
 - **Worker**. Worker nodes run the actual applications deployed on them.
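For a concrete picture of the master/worker split, the `hosts` and `roleGroups` sections of a KubeKey `config-sample.yaml` might look like this sketch (names, addresses, and credentials are placeholders; key names can vary by KubeKey version):

```yaml
spec:
  hosts:
  - {name: master, address: 192.168.0.2, internalAddress: 192.168.0.2, user: root, password: Testing123}
  - {name: node1, address: 192.168.0.3, internalAddress: 192.168.0.3, user: root, password: Testing123}
  roleGroups:
    etcd:
    - master        # etcd colocated with the control plane
    master:
    - master        # hosts the control plane
    worker:
    - node1         # runs application workloads
```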
@@ -177,7 +177,7 @@ Here are some examples for your reference:
    ./kk create config [-f ~/myfolder/abc.yaml]
    ```

-- You can specify a KubeSphere version that you want to install (e.g. `--with-kubesphere v3.1.0`).
+- You can specify a KubeSphere version that you want to install (for example, `--with-kubesphere v3.1.0`).

    ```bash
    ./kk create config --with-kubesphere [version]
@@ -278,7 +278,7 @@ The `controlPlaneEndpoint` is where you provide your external load balancer info

 #### addons

-You can customize persistent storage plugins (e.g. NFS Client, Ceph RBD, and GlusterFS) by specifying storage under the field `addons` in `config-sample.yaml`. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).
+You can customize persistent storage plugins (for example, NFS Client, Ceph RBD, and GlusterFS) by specifying storage under the field `addons` in `config-sample.yaml`. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).

 KubeKey will install [OpenEBS](https://openebs.io/) to provision [LocalPV](https://kubernetes.io/docs/concepts/storage/volumes/#local) for development and testing environment by default, which is convenient for new users. In this example of multi-node installation, the default storage class (local volume) is used. For production, you can use NFS/Ceph/GlusterFS/CSI or commercial products as persistent storage solutions.
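A sketch of what an `addons` entry can look like, using an NFS client provisioner as the storage plugin (the chart name, repo, and values path are assumptions based on common usage):

```yaml
addons:
- name: nfs-client
  namespace: kube-system
  sources:
    chart:
      name: nfs-client-provisioner
      repo: https://charts.kubesphere.io/main
      valuesFile: /root/nfs-client.yaml   # your local chart values
```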
@@ -244,7 +244,7 @@ chmod +x kk

 With KubeKey, you can install Kubernetes and KubeSphere together. You have the option to create a multi-node cluster by customizing parameters in the configuration file.

-Create a Kubernetes cluster with KubeSphere installed (e.g. `--with-kubesphere v3.1.0`):
+Create a Kubernetes cluster with KubeSphere installed (for example, `--with-kubesphere v3.1.0`):

 ```bash
 ./kk create config --with-kubernetes v1.20.4 --with-kubesphere v3.1.0
@@ -9,7 +9,7 @@ weight: 3510

 ## Introduction

-For a production environment, we need to consider the high availability of the cluster. If the key components (e.g. kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, we need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (e.g. F5). In addition, Keepalived and [HAProxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
+For a production environment, we need to consider the high availability of the cluster. If the key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, we need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (for example, F5). In addition, Keepalived and [HAProxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.

 This tutorial walks you through an example of how to create Keepalived and HAProxy, and implement high availability of master and etcd nodes using the load balancers on VMware vSphere.
@@ -345,7 +345,7 @@ chmod +x kk

 With KubeKey, you can install Kubernetes and KubeSphere together. You have the option to create a multi-node cluster by customizing parameters in the configuration file.

-Create a Kubernetes cluster with KubeSphere installed (e.g. `--with-kubesphere v3.1.0`):
+Create a Kubernetes cluster with KubeSphere installed (for example, `--with-kubesphere v3.1.0`):

 ```bash
 ./kk create config --with-kubernetes v1.19.8 --with-kubesphere v3.1.0
@@ -77,7 +77,7 @@ mountOptions:

 #### Add-on configurations

-Save the above chart config and StorageClass locally (e.g. `/root/ceph-csi-rbd.yaml` and `/root/ceph-csi-rbd-sc.yaml`). The add-on configuration can be set like:
+Save the above chart config and StorageClass locally (for example, `/root/ceph-csi-rbd.yaml` and `/root/ceph-csi-rbd-sc.yaml`). The add-on configuration can be set like:

 ```yaml
 addons:
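# Editor's sketch (not part of the commit): the elided entries under `addons:`
# might continue as follows; the chart repo and structure are assumptions, so
# check the surrounding page for the authoritative version.
- name: ceph-csi-rbd
  namespace: kube-system
  sources:
    chart:
      name: ceph-csi-rbd
      repo: https://ceph.github.io/csi-charts
      valuesFile: /root/ceph-csi-rbd.yaml
- name: ceph-csi-rbd-sc
  sources:
    yaml:
      path:
      - /root/ceph-csi-rbd-sc.yaml   # the StorageClass manifest saved earlier
```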
@@ -115,7 +115,7 @@ If you want to configure more values, see [chart configuration for rbd-provision

 #### Add-on configurations

-Save the above chart config locally (e.g. `/root/rbd-provisioner.yaml`). The add-on config for rbd provisioner cloud be like:
+Save the above chart config locally (for example, `/root/rbd-provisioner.yaml`). The add-on config for the rbd provisioner could be like:

 ```yaml
 - name: rbd-provisioner
@@ -8,7 +8,7 @@ Weight: 3420

 ## Introduction

-For a production environment, you need to consider the high availability of the cluster. If key components (e.g. kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (e.g. F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
+For a production environment, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (for example, F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.

 This tutorial walks you through an example of how to create two [QingCloud load balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal load balancer and external load balancer respectively, and of how to implement high availability of master and etcd nodes using the load balancers.
@@ -253,7 +253,7 @@ Kubekey provides some fields and parameters to allow the cluster administrator t

 ### Step 6: Persistent storage plugin configurations

-Considering data persistence in a production environment, you need to prepare persistent storage and configure the storage plugin (e.g. CSI) in `config-sample.yaml` to define which storage service you want.
+Considering data persistence in a production environment, you need to prepare persistent storage and configure the storage plugin (for example, CSI) in `config-sample.yaml` to define which storage service you want.

 {{< notice note >}}
@@ -54,7 +54,7 @@ Automation represents a key part of implementing DevOps. With automatic, streaml

 **Jenkins-powered**. The KubeSphere DevOps system is built with Jenkins as the engine, which is abundant in plugins. On top of that, Jenkins provides an enabling environment for extension development, making it possible for the DevOps team to work smoothly across the whole process (developing, testing, building, deploying, monitoring, logging, notifying, etc.) in a unified platform. The KubeSphere account can also be used for the built-in Jenkins, meeting the demand of enterprises for multi-tenant isolation of CI/CD pipelines and unified authentication.

-**Convenient built-in tools**. Users can easily take advantage of automation tools (e.g. Binary-to-Image and Source-to-Image) even without a thorough understanding of how Docker or Kubernetes works. They only need to submit a registry address or upload binary files (e.g. JAR/WAR/Binary). Ultimately, services will be released to Kubernetes automatically without any coding in a Dockerfile.
+**Convenient built-in tools**. Users can easily take advantage of automation tools (for example, Binary-to-Image and Source-to-Image) even without a thorough understanding of how Docker or Kubernetes works. They only need to submit a registry address or upload binary files (for example, JAR/WAR/Binary). Ultimately, services will be released to Kubernetes automatically without any coding in a Dockerfile.

 For more information, see [DevOps User Guide](../../devops-user-guide/).
@@ -85,7 +85,7 @@ The KubeSphere community has the capabilities and technical know-how to help you

 **Partners**. KubeSphere partners play a critical role in KubeSphere's go-to-market strategy. They can be app developers, technology companies, cloud providers or go-to-market partners, all of whom drive the community ahead in their respective aspects.

-**Ambassadors**. As community representatives, ambassadors promote KubeSphere in a variety of ways (e.g. activities, blogs and user cases) so that more people can join the community.
+**Ambassadors**. As community representatives, ambassadors promote KubeSphere in a variety of ways (for example, activities, blogs and user cases) so that more people can join the community.

 **Contributors**. KubeSphere contributors help the whole community by contributing to code or documentation. You don't need to be an expert while you can still make a difference even if it is a minor code fix or language improvement.
@@ -39,7 +39,7 @@ The next-gen installer [KubeKey](https://github.com/kubesphere/kubekey) provides

 As the IT world sees a growing number of cloud-native applications reshaping software portfolios for enterprises, users tend to deploy their clusters across locations, geographies, and clouds. Against this backdrop, KubeSphere has undergone a significant upgrade to address the pressing need of users with its brand-new multi-cluster feature.

-With KubeSphere, users can manage the infrastructure underneath, such as adding or deleting clusters. Heterogeneous clusters deployed on any infrastructure (e.g. Amazon EKS and Google Kubernetes Engine) can be managed in a unified way. This is made possible by a central control plane of KubeSphere with two efficient management approaches available.
+With KubeSphere, users can manage the infrastructure underneath, such as adding or deleting clusters. Heterogeneous clusters deployed on any infrastructure (for example, Amazon EKS and Google Kubernetes Engine) can be managed in a unified way. This is made possible by a central control plane of KubeSphere with two efficient management approaches available.

 - **Solo**. Independently deployed Kubernetes clusters can be maintained and managed together in KubeSphere container platform.
 - **Federation**. Multiple Kubernetes clusters can be aggregated together as a Kubernetes resource pool. When users deploy applications, replicas can be deployed on different Kubernetes clusters in the pool. In this regard, high availability is achieved across zones and clusters.
@@ -72,7 +72,7 @@ S2I allows you to publish your service to Kubernetes without writing a Dockerfil

 ### Binary-to-Image

-Similar to S2I, Binary-to-Image (B2I) is a toolkit and automated workflow for building reproducible container images from binary (e.g. Jar, War, Binary package).
+Similar to S2I, Binary-to-Image (B2I) is a toolkit and automated workflow for building reproducible container images from binary (for example, Jar, War, Binary package).

 You just need to upload your application binary package, and specify the image registry to which you want to push. The rest is exactly the same as S2I.
@@ -103,7 +103,7 @@ Based on Jaeger, KubeSphere service mesh enables users to track how services int

 ## Multi-tenant Management

-In KubeSphere, resources (e.g. clusters) can be shared between tenants. First, administrators or managers need to set different account roles with different authorizations. After that, members in the platform can be assigned with these roles to perform specific actions on varied resources. Meanwhile, as KubeSphere completely isolates tenants, they will not affect each other at all.
+In KubeSphere, resources (for example, clusters) can be shared between tenants. First, administrators or managers need to set different account roles with different authorizations. After that, members in the platform can be assigned with these roles to perform specific actions on varied resources. Meanwhile, as KubeSphere completely isolates tenants, they will not affect each other at all.

 - **Multi-tenancy**. It provides role-based fine-grained authentication in a unified way and a three-tier authorization system.
 - **Unified authentication**. For enterprises, KubeSphere is compatible with their central authentication system that is based on LDAP or AD protocol. Single sign-on (SSO) is also supported to achieve unified authentication of tenant identity.
@@ -6,7 +6,7 @@ titleLink: "Agent Connection"
 weight: 5220
 ---

-The component [Tower](https://github.com/kubesphere/tower) of KubeSphere is used for agent connection. Tower is a tool for network connection between clusters through the agent. If the Host Cluster (H Cluster) cannot access the Member Cluster (M Cluster) directly, you can expose the proxy service address of the H cluster. This enables the M Cluster to connect to the H Cluster through the agent. This method is applicable when the M Cluster is in a private environment (e.g. IDC) and the H Cluster is able to expose the proxy service. The agent connection is also applicable when your clusters are distributed across different cloud providers.
+The component [Tower](https://github.com/kubesphere/tower) of KubeSphere is used for agent connection. Tower is a tool for network connection between clusters through the agent. If the Host Cluster (H Cluster) cannot access the Member Cluster (M Cluster) directly, you can expose the proxy service address of the H cluster. This enables the M Cluster to connect to the H Cluster through the agent. This method is applicable when the M Cluster is in a private environment (for example, IDC) and the H Cluster is able to expose the proxy service. The agent connection is also applicable when your clusters are distributed across different cloud providers.

 To use the multi-cluster feature using an agent, you must have at least two clusters serving as the H Cluster and the M Cluster respectively. A cluster can be defined as the H Cluster or the M Cluster either before or after you install KubeSphere. For more information about installing KubeSphere, refer to [Installing on Linux](../../../installing-on-linux/) and [Installing on Kubernetes](../../../installing-on-kubernetes/).
@@ -21,7 +21,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
    ```

 {{< notice note >}}
-If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Alerting in this mode (e.g. for testing purposes), refer to [the following section](#enable-alerting-after-installation) to see how Alerting can be enabled after installation.
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Alerting in this mode (for example, for testing purposes), refer to [the following section](#enable-alerting-after-installation) to see how Alerting can be enabled after installation.
 {{</ notice >}}

 2. In this file, navigate to `alerting` and change `false` to `true` for `enabled`. Save the file after you finish.
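The edit described in step 2 leaves a snippet like this minimal sketch in `config-sample.yaml` (only the relevant keys shown):

```yaml
alerting:
  enabled: true   # was false
```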
@@ -27,7 +27,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
    ```

 {{< notice note >}}
-If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the App Store in this mode (e.g. for testing purposes), refer to [the following section](#enable-app-store-after-installation) to see how the App Store can be installed after installation.
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the App Store in this mode (for example, for testing purposes), refer to [the following section](#enable-app-store-after-installation) to see how the App Store can be installed after installation.
 {{</ notice >}}

 2. In this file, navigate to `openpitrix` and change `false` to `true` for `enabled`. Save the file after you finish.
@@ -23,7 +23,7 @@ When you implement multi-node installation KubeSphere on Linux, you need to crea
    ```

 {{< notice note >}}
-If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Auditing in this mode (e.g. for testing purposes), refer to [the following section](#enable-auditing-logs-after-installation) to see how Auditing can be installed after installation.
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Auditing in this mode (for example, for testing purposes), refer to [the following section](#enable-auditing-logs-after-installation) to see how Auditing can be installed after installation.
 {{</ notice >}}

 2. In this file, navigate to `auditing` and change `false` to `true` for `enabled`. Save the file after you finish.
@@ -8,7 +8,7 @@ weight: 6300

 The KubeSphere DevOps System is designed for CI/CD workflows in Kubernetes. Based on [Jenkins](https://jenkins.io/), it provides one-stop solutions to help both development and Ops teams build, test and publish apps to Kubernetes in a straight-forward way. It also features plugin management, [Binary-to-Image (B2I)](../../project-user-guide/image-builder/binary-to-image/), [Source-to-Image (S2I)](../../project-user-guide/image-builder/source-to-image/), code dependency caching, code quality analysis, pipeline logging, etc.

-The DevOps System offers an enabling environment for users as apps can be automatically released to the same platform. It is also compatible with third-party private image registries (e.g. Harbor) and code repositories (e.g. GitLab/GitHub/SVN/BitBucket). As such, it creates excellent user experiences by providing users with comprehensive, visualized CI/CD pipelines which are extremely useful in air-gapped environments.
+The DevOps System offers an enabling environment for users as apps can be automatically released to the same platform. It is also compatible with third-party private image registries (for example, Harbor) and code repositories (for example, GitLab/GitHub/SVN/BitBucket). As such, it creates excellent user experiences by providing users with comprehensive, visualized CI/CD pipelines which are extremely useful in air-gapped environments.

 For more information, see [DevOps User Guide](../../devops-user-guide/).
@@ -25,7 +25,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
    ```

 {{< notice note >}}
-If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable DevOps in this mode (e.g. for testing purposes), refer to [the following section](#enable-devops-after-installation) to see how DevOps can be installed after installation.
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable DevOps in this mode (for example, for testing purposes), refer to [the following section](#enable-devops-after-installation) to see how DevOps can be installed after installation.
 {{</ notice >}}

 2. In this file, navigate to `devops` and change `false` to `true` for `enabled`. Save the file after you finish.
@@ -24,7 +24,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c

 {{< notice note >}}

-If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Events in this mode (e.g. for testing purposes), refer to [the following section](#enable-events-after-installation) to see how Events can be [installed after installation](#enable-events-after-installation).
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Events in this mode (for example, for testing purposes), refer to [the following section](#enable-events-after-installation) to see how Events can be [installed after installation](#enable-events-after-installation).

 {{</ notice >}}
@@ -27,7 +27,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
    ```

 {{< notice note >}}
-If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable KubeEdge in this mode (e.g. for testing purposes), refer to [the following section](#enable-kubeedge-after-installation) to see how KubeEdge can be installed after installation.
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable KubeEdge in this mode (for example, for testing purposes), refer to [the following section](#enable-kubeedge-after-installation) to see how KubeEdge can be installed after installation.
 {{</ notice >}}

 2. In this file, navigate to `kubeedge.enabled` and change `false` to `true`.
@@ -24,7 +24,7 @@ When you install KubeSphere on Linux, you need to create a configuration file, w

 {{< notice note >}}

-- If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Logging in this mode (e.g. for testing purposes), refer to [the following section](#enable-logging-after-installation) to see how Logging can be installed after installation.
+- If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Logging in this mode (for example, for testing purposes), refer to [the following section](#enable-logging-after-installation) to see how Logging can be installed after installation.

 - If you adopt [Multi-node Installation](../../installing-on-linux/introduction/multioverview/) and are using symbolic links for docker root directory, make sure all nodes follow exactly the same symbolic links. Logging agents are deployed in DaemonSets onto nodes. Any discrepancy in container log path may cause collection failures on that node.
@@ -21,7 +21,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
    ```

 {{< notice note >}}
-If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the Metrics Server in this mode (e.g. for testing purposes), refer to [the following section](#enable-devops-after-installation) to see how the Metrics Server can be installed after installation.
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the Metrics Server in this mode (for example, for testing purposes), refer to [the following section](#enable-devops-after-installation) to see how the Metrics Server can be installed after installation.
 {{</ notice >}}

 2. In this file, navigate to `metrics_server` and change `false` to `true` for `enabled`. Save the file after you finish.
@@ -30,7 +30,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
    ```

 {{< notice note >}}
-If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the Network Policy in this mode (e.g. for testing purposes), refer to [the following section](#enable-network-policy-after-installation) to see how the Network Policy can be installed after installation.
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the Network Policy in this mode (for example, for testing purposes), refer to [the following section](#enable-network-policy-after-installation) to see how the Network Policy can be installed after installation.
 {{</ notice >}}

 2. In this file, navigate to `network.networkpolicy` and change `false` to `true` for `enabled`. Save the file after you finish.
@@ -21,7 +21,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
    ```

 {{< notice note >}}
-If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Pod IP Pools in this mode (e.g. for testing purposes), refer to [the following section](#enable-pod-ip-pools-after-installation) to see how Pod IP Pools can be installed after installation.
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Pod IP Pools in this mode (for example, for testing purposes), refer to [the following section](#enable-pod-ip-pools-after-installation) to see how Pod IP Pools can be installed after installation.
 {{</ notice >}}

 2. In this file, navigate to `network.ippool.type` and change `none` to `calico`. Save the file after you finish.
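Step 2 here follows the same pattern as the other components but switches a type rather than a boolean; the resulting snippet is sketched below (relevant keys only):

```yaml
network:
  ippool:
    type: calico   # was none
```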
@@ -23,7 +23,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
    ```

 {{< notice note >}}
-If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable KubeSphere Service Mesh in this mode (e.g. for testing purposes), refer to [the following section](#enable-service-mesh-after-installation) to see how KubeSphere Service Mesh can be installed after installation.
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable KubeSphere Service Mesh in this mode (for example, for testing purposes), refer to [the following section](#enable-service-mesh-after-installation) to see how KubeSphere Service Mesh can be installed after installation.
 {{</ notice >}}

 2. In this file, navigate to `servicemesh` and change `false` to `true` for `enabled`. Save the file after you finish.
@@ -21,7 +21,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
    ```

 {{< notice note >}}
-If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Service Topology in this mode (e.g. for testing purposes), refer to [the following section](#enable-service-topology-after-installation) to see how Service Topology can be installed after installation.
+If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Service Topology in this mode (for example, for testing purposes), refer to [the following section](#enable-service-topology-after-installation) to see how Service Topology can be installed after installation.
 {{</ notice >}}

 2. In this file, navigate to `network.topology.type` and change `none` to `weave-scope`. Save the file after you finish.
@@ -6,7 +6,7 @@ linkTitle: "Container Limit Ranges"
 weight: 13400
 ---

-A container can use as much CPU and memory as set by [the resource quota for a project](../../workspace-administration/project-quotas/). At the same time, KubeSphere uses requests and limits to control resource (e.g. CPU and memory) usage for a container, also known as [LimitRanges](https://kubernetes.io/docs/concepts/policy/limit-range/) in Kubernetes. Requests make sure the container can get the resources it needs as they are specifically guaranteed and reserved. On the contrary, limits ensure that container can never use resources above a certain value.
+A container can use as much CPU and memory as set by [the resource quota for a project](../../workspace-administration/project-quotas/). At the same time, KubeSphere uses requests and limits to control resource (for example, CPU and memory) usage for a container, also known as [LimitRanges](https://kubernetes.io/docs/concepts/policy/limit-range/) in Kubernetes. Requests make sure the container can get the resources it needs as they are specifically guaranteed and reserved. On the contrary, limits ensure that the container can never use resources above a certain value.

 When you create a workload, such as a Deployment, you configure resource requests and limits for the container. To make these request and limit fields pre-populated with values, you can set default limit ranges.
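In Kubernetes terms, the default limit ranges mentioned above are a LimitRange object; a minimal sketch with illustrative values:

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: demo-limit-range
spec:
  limits:
  - type: Container
    defaultRequest:   # pre-populates container requests (guaranteed/reserved)
      cpu: 100m
      memory: 128Mi
    default:          # pre-populates container limits (hard ceiling)
      cpu: 500m
      memory: 512Mi
```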
@ -27,7 +27,7 @@ This tutorial demonstrates how to collect disk logs for an example app.
|
|||
|
||||
1. From the left navigation bar, select **Workloads** in **Application Workloads**. Under the **Deployments** tab, click **Create**.
|
||||
|
||||
2. In the dialog that appears, set a name for the Deployment (e.g. `demo-deployment`) and click **Next**.
|
||||
2. In the dialog that appears, set a name for the Deployment (for example, `demo-deployment`) and click **Next**.
3. Under **Container Image**, click **Add Container Image**.
@ -61,7 +61,7 @@ This tutorial demonstrates how to collect disk logs for an example app.

7. On the **Temporary Volume** tab, input a name for the volume (e.g. `demo-disk-log-collection`) and set the access mode and path. Refer to the image below as an example.
7. On the **Temporary Volume** tab, input a name for the volume (for example, `demo-disk-log-collection`) and set the access mode and path. Refer to the image below as an example.

@ -30,7 +30,7 @@ You need to create a workspace, a project and an account (`project-admin`). The
**LoadBalancer**: You can access Services with a single IP address through the gateway.
3. You can also enable **Application Governance** on the **Set Gateway** page. You need to enable **Application Governance** so that you can use the Tracing feature and use [different grayscale release strategies](../../project-user-guide/grayscale-release/overview/). Once it is enabled, check whether an annotation (e.g. `nginx.ingress.kubernetes.io/service-upstream: true`) is added for your route (Ingress) if the route is inaccessible.
3. You can also enable **Application Governance** on the **Set Gateway** page. You need to enable **Application Governance** so that you can use the Tracing feature and use [different grayscale release strategies](../../project-user-guide/grayscale-release/overview/). Once it is enabled, check whether an annotation (for example, `nginx.ingress.kubernetes.io/service-upstream: true`) is added for your route (Ingress) if the route is inaccessible.
4. After you select an access method, click **Save**.
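For reference, the annotation mentioned in step 3 lives in the Ingress metadata. A minimal sketch, with hypothetical resource names:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-route                    # hypothetical
  annotations:
    nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  rules:
    - host: demo.example.com          # hypothetical
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service    # hypothetical
                port:
                  number: 80
```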
@ -20,7 +20,7 @@ In project scope, you can grant the following resources' permissions to a role:
## Prerequisites
At least one project has been created, such as `demo-project`. Besides, you need an account of the `admin` role (e.g. `project-admin`) at the project level. See [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/) if it is not ready yet.
At least one project has been created, such as `demo-project`. Besides, you need an account of the `admin` role (for example, `project-admin`) at the project level. See [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/) if it is not ready yet.
## Built-in Roles
@ -40,7 +40,7 @@ In **Project Roles**, there are three available built-in roles as shown below. B
## Create a Project Role
1. Log in to the console as `project-admin` and select a project (e.g. `demo-project`) from the **Projects** list.
1. Log in to the console as `project-admin` and select a project (for example, `demo-project`) from the **Projects** list.
{{< notice note >}}
@ -32,7 +32,7 @@ Log in to the console as `project-regular`. Go to **Application Workloads** of a
### Step 2: Input basic information
Specify a name for the DaemonSet (e.g. `demo-daemonset`) and click **Next** to continue.
Specify a name for the DaemonSet (for example, `demo-daemonset`) and click **Next** to continue.

@ -25,7 +25,7 @@ Log in to the console as `project-regular`. Go to **Application Workloads** of a
### Step 2: Input basic information
Specify a name for the Deployment (e.g. `demo-deployment`) and click **Next** to continue.
Specify a name for the Deployment (for example, `demo-deployment`) and click **Next** to continue.

@ -145,7 +145,7 @@ You can rerun the Job if it fails, the reason of which displays under **Messages
{{< notice tip >}}
- In **Resource Status**, the Pod list provides the Pod's detailed information (e.g. creation time, node, Pod IP and monitoring data).
- In **Resource Status**, the Pod list provides the Pod's detailed information (for example, creation time, node, Pod IP and monitoring data).
- You can view the container information by clicking the Pod.
- Click the container log icon to view the output logs of the container.
- You can view the Pod detail page by clicking the Pod name.
@ -37,7 +37,7 @@ Log in to the console as `project-regular`. Go to **Application Workloads** of a
### Step 2: Input basic information
Specify a name for the StatefulSet (e.g. `demo-stateful`) and click **Next** to continue.
Specify a name for the StatefulSet (for example, `demo-stateful`) and click **Next** to continue.

@ -6,7 +6,7 @@ linkTitle: "App Templates"
weight: 10110
---
An app template serves as a way for users to upload, deliver and manage apps. Generally, an app is composed of one or more Kubernetes workloads (e.g. [Deployments](../../../project-user-guide/application-workloads/deployments/), [StatefulSets](../../../project-user-guide/application-workloads/statefulsets/) and [DaemonSets](../../../project-user-guide/application-workloads/daemonsets/)) and [Services](../../../project-user-guide/application-workloads/services/) based on how it functions and communicates with the external environment. Apps that are uploaded as app templates are built based on a [Helm](https://helm.sh/) package.
An app template serves as a way for users to upload, deliver and manage apps. Generally, an app is composed of one or more Kubernetes workloads (for example, [Deployments](../../../project-user-guide/application-workloads/deployments/), [StatefulSets](../../../project-user-guide/application-workloads/statefulsets/) and [DaemonSets](../../../project-user-guide/application-workloads/daemonsets/)) and [Services](../../../project-user-guide/application-workloads/services/) based on how it functions and communicates with the external environment. Apps that are uploaded as app templates are built based on a [Helm](https://helm.sh/) package.
## How App Templates Work
@ -30,7 +30,7 @@ KubeSphere deploys app repository services based on [OpenPitrix](https://github.
## Why App Templates
App templates enable users to deploy and manage apps in a visualized way. Internally, they play an important role as shared resources (e.g. databases, middleware and operating systems) created by enterprises for the coordination and cooperation within teams. Externally, app templates set industry standards of building and delivery. Users can take advantage of app templates in different scenarios to meet their own needs through one-click deployment.
App templates enable users to deploy and manage apps in a visualized way. Internally, they play an important role as shared resources (for example, databases, middleware and operating systems) created by enterprises for the coordination and cooperation within teams. Externally, app templates set industry standards of building and delivery. Users can take advantage of app templates in different scenarios to meet their own needs through one-click deployment.
In addition, as OpenPitrix is integrated into KubeSphere to provide application management across the entire lifecycle, the platform allows ISVs, developers and regular users to all participate in the process. Backed by the multi-tenant system of KubeSphere, each tenant is only responsible for their own part, such as app uploading, app review, release, test, and version management. Ultimately, enterprises can build their own App Store and enrich their application pools with their customized standards. As such, apps can also be delivered in a standardized fashion.
@ -19,7 +19,7 @@ This tutorial demonstrates how to create a microservices-based app Bookinfo, whi
1. Log in to the web console of KubeSphere and navigate to **Apps** in **Application Workloads** of your project. On the **Composing Apps** tab, click **Create Composing App**.
2. Set a name for the app (e.g. `bookinfo`) and click **Next**.
2. Set a name for the app (for example, `bookinfo`) and click **Next**.
3. On the **Components** page, you need to create microservices that compose the app. Click **Add Service** and select **Stateless Service**.
@ -55,7 +55,7 @@ This tutorial demonstrates how to create a microservices-based app Bookinfo, whi
10. When you finish adding microservices, click **Next**.
11. On the **Internet Access** page, click **Add Route Rule**. On the **Specify Domain** tab, set a domain name for your app (e.g. `demo.bookinfo`) and select `http` in the **Protocol** field. For `Paths`, select the Service `productpage` and port `9080`. Click **OK** to continue.
11. On the **Internet Access** page, click **Add Route Rule**. On the **Specify Domain** tab, set a domain name for your app (for example, `demo.bookinfo`) and select `http` in the **Protocol** field. For `Paths`, select the Service `productpage` and port `9080`. Click **OK** to continue.

@ -22,7 +22,7 @@ You need to create a workspace, a project and an account (`project-regular`). Th
1. Log in to the console as `project-regular`. Go to **Configurations** of a project, choose **ConfigMaps** and click **Create**.
2. In the dialog that appears, specify a name for the ConfigMap (e.g. `demo-configmap`) and click **Next** to continue.
2. In the dialog that appears, specify a name for the ConfigMap (for example, `demo-configmap`) and click **Next** to continue.
{{< notice tip >}}
@ -26,7 +26,7 @@ Log in to the web console of KubeSphere as `project-regular`. Go to **Configurat
### Step 2: Input basic information
Specify a name for the Secret (e.g. `demo-registry-secret`) and click **Next** to continue.
Specify a name for the Secret (for example, `demo-registry-secret`) and click **Next** to continue.
{{< notice tip >}}
@ -28,7 +28,7 @@ Log in to the console as `project-regular`. Go to **Configurations** of a projec
### Step 2: Input basic information
Specify a name for the Secret (e.g. `demo-secret`) and click **Next** to continue.
Specify a name for the Secret (for example, `demo-secret`) and click **Next** to continue.
{{< notice tip >}}
@ -11,7 +11,7 @@ This section walks you through monitoring a sample web application. The applicat
## Prerequisites
- Please make sure you [enable the OpenPitrix system](../../../../pluggable-components/app-store/).
- You need to create a workspace, a project, and a user account for this tutorial. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../../quick-start/create-workspace-and-project/). The account needs to be a platform regular user and to be invited to the workspace with the `self-provisioner` role. Namely, create an account `workspace-self-provisioner` of the `self-provisioner` role, and use this account to create a project (e.g. `test`). In this tutorial, you log in as `workspace-self-provisioner` and work in the project `test` in the workspace `demo-workspace`.
- You need to create a workspace, a project, and a user account for this tutorial. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../../quick-start/create-workspace-and-project/). The account needs to be a platform regular user and to be invited to the workspace with the `self-provisioner` role. Namely, create an account `workspace-self-provisioner` of the `self-provisioner` role, and use this account to create a project (for example, `test`). In this tutorial, you log in as `workspace-self-provisioner` and work in the project `test` in the workspace `demo-workspace`.
- Knowledge of Helm charts and [PromQL](https://prometheus.io/docs/prometheus/latest/querying/examples/).
@ -87,11 +87,11 @@ This section guides you on how to create a dashboard from scratch. You will crea

2. Set a name (e.g. `sample-web`) and click **Create**.
2. Set a name (for example, `sample-web`) and click **Create**.

3. Enter a title in the top left corner (e.g. `Sample Web Overview`).
3. Enter a title in the top left corner (for example, `Sample Web Overview`).

@ -99,7 +99,7 @@ This section guides you on how to create a dashboard from scratch. You will crea

5. Type the PromQL expression `myapp_processed_ops_total` in the field **Monitoring Metrics** and give a chart name (e.g. `Operation Count`). Click **√** in the bottom right corner to continue.
5. Type the PromQL expression `myapp_processed_ops_total` in the field **Monitoring Metrics** and give a chart name (for example, `Operation Count`). Click **√** in the bottom right corner to continue.

@ -37,7 +37,7 @@ This method serves as an efficient way to test performance and reliability of a
{{</ notice >}}
5. You send traffic to these two versions (`v1` and `v2`) either by a specific percentage or by the request content such as `Http Header`, `Cookie` and `URI`. Select **Forward by traffic ratio** and drag the icon in the middle to change the percentage of traffic sent to these two versions respectively (e.g. set 50% for either one). When you finish, click **Create**.
5. You send traffic to these two versions (`v1` and `v2`) either by a specific percentage or by the request content such as `Http Header`, `Cookie` and `URI`. Select **Forward by traffic ratio** and drag the icon in the middle to change the percentage of traffic sent to these two versions respectively (for example, set 50% for either one). When you finish, click **Create**.

@ -114,7 +114,7 @@ Now that you have two available app versions, access the app to verify the canar

3. Click a component (e.g. **reviews**) and you can see the information of traffic monitoring on the right, displaying real-time data of **Traffic**, **Success rate** and **Duration**.
3. Click a component (for example, **reviews**) and you can see the information of traffic monitoring on the right, displaying real-time data of **Traffic**, **Success rate** and **Duration**.

@ -22,7 +22,7 @@ Traffic mirroring, also called shadowing, is a powerful, risk-free method of tes
3. On the **Grayscale Release Components** tab, select your app from the drop-down list and the Service of which you want to mirror the traffic. If you also use the sample app Bookinfo, select **reviews** and click **Next**.
4. On the **Grayscale Release Version** tab, add another version of it (e.g. `v2`) as shown in the image below and click **Next**:
4. On the **Grayscale Release Version** tab, add another version of it (for example, `v2`) as shown in the image below and click **Next**:

@ -26,7 +26,7 @@ All the volumes that are created on the **Volumes** page are PersistentVolumeCla
2. To create a volume, click **Create** on the **Volumes** page.
3. In the dialog that appears, set a name (e.g. `demo-volume`) for the volume and click **Next**.
3. In the dialog that appears, set a name (for example, `demo-volume`) for the volume and click **Next**.
{{< notice note >}}
@ -126,7 +126,7 @@ In this step, you create a project using the account `project-admin` created in

2. Enter the project name (e.g. `demo-project`) and click **OK** to finish. You can also add an alias and description for the project.
2. Enter the project name (for example, `demo-project`) and click **OK** to finish. You can also add an alias and description for the project.

@ -134,7 +134,7 @@ In this step, you create a project using the account `project-admin` created in

4. On the **Overview** page of the project, the project quota remains unset by default. You can click **Set** and specify [resource requests and limits](../../workspace-administration/project-quotas/) as needed (e.g. 1 core for CPU and 1000Gi for memory).
4. On the **Overview** page of the project, the project quota remains unset by default. You can click **Set** and specify [resource requests and limits](../../workspace-administration/project-quotas/) as needed (for example, 1 core for CPU and 1000Gi for memory).

@ -214,7 +214,7 @@ To create a DevOps project, you must install the KubeSphere DevOps system in adv

2. Enter the DevOps project name (e.g. `demo-devops`) and click **OK**. You can also add an alias and description for the project.
2. Enter the DevOps project name (for example, `demo-devops`) and click **OK**. You can also add an alias and description for the project.

@ -23,7 +23,7 @@ To provide consistent user experiences of managing microservices, KubeSphere int
Log in to the console as `project-admin` and go to your project. Navigate to **Advanced Settings** under **Project Settings**, click **Edit**, and select **Edit Gateway**. In the dialog that appears, flip on the toggle switch next to **Application Governance**.
{{< notice note >}}
You need to enable **Application Governance** so that you can use the Tracing feature. Once it is enabled, check whether an annotation (e.g. `nginx.ingress.kubernetes.io/service-upstream: true`) is added for your Route (Ingress) if the Route is inaccessible.
You need to enable **Application Governance** so that you can use the Tracing feature. Once it is enabled, check whether an annotation (for example, `nginx.ingress.kubernetes.io/service-upstream: true`) is added for your Route (Ingress) if the Route is inaccessible.
{{</ notice >}}
## What is Bookinfo
@ -47,7 +47,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
```
{{< notice note >}}
If you adopt [All-in-one Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable pluggable components in this mode (e.g. for testing purposes), refer to the [following section](#enable-pluggable-components-after-installation) to see how pluggable components can be installed after installation.
If you adopt [All-in-one Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable pluggable components in this mode (for example, for testing purposes), refer to the [following section](#enable-pluggable-components-after-installation) to see how pluggable components can be installed after installation.
{{</ notice >}}
2. In this file, enable the pluggable components you want to install by changing `false` to `true` for `enabled`. Here is [the complete file](https://github.com/kubesphere/kubekey/blob/release-1.1/docs/config-example.md) for your reference. Save the file after you finish.
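For example, enabling the DevOps and App Store components would look roughly like the excerpt below (a sketch; see the complete file linked above for the authoritative layout):

```yaml
# Excerpt from config-sample.yaml (sketch)
devops:
  enabled: true       # changed from false
openpitrix:
  enabled: true       # changed from false
```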
@ -36,7 +36,7 @@ The environment variable `WORDPRESS_DB_PASSWORD` is the password to connect to t

2. Enter the basic information (e.g. name it `mysql-secret`) and click **Next**. On the next page, select **Opaque (Default)** for **Type** and click **Add Data** to add a key-value pair. Input the Key (`MYSQL_ROOT_PASSWORD`) and Value (`123456`) as below and click **√** in the bottom-right corner to confirm. When you finish, click **Create** to continue.
2. Enter the basic information (for example, name it `mysql-secret`) and click **Next**. On the next page, select **Opaque (Default)** for **Type** and click **Add Data** to add a key-value pair. Input the Key (`MYSQL_ROOT_PASSWORD`) and Value (`123456`) as below and click **√** in the bottom-right corner to confirm. When you finish, click **Create** to continue.

@ -52,7 +52,7 @@ Follow the same steps above to create a WordPress Secret `wordpress-secret` with

2. Enter the basic information of the volume (e.g. name it `wordpress-pvc`) and click **Next**.
2. Enter the basic information of the volume (for example, name it `wordpress-pvc`) and click **Next**.
3. In **Volume Settings**, you need to choose an available **Storage Class**, and set **Access Mode** and **Volume Capacity**. You can use the default value directly as shown below. Click **Next** to continue.
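Behind this dialog, the volume is an ordinary PersistentVolumeClaim, roughly like the sketch below; the storage class and capacity are illustrative assumptions that depend on your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-pvc
spec:
  storageClassName: local       # assumption; pick a class available in your cluster
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi             # illustrative capacity
```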
@ -68,7 +68,7 @@ Follow the same steps above to create a WordPress Secret `wordpress-secret` with

2. Enter the basic information (e.g. `wordpress` for **App Name**) and click **Next**.
2. Enter the basic information (for example, `wordpress` for **App Name**) and click **Next**.

@ -78,7 +78,7 @@ Follow the same steps above to create a WordPress Secret `wordpress-secret` with
4. Define a service type for the component. Select **Stateful Service** here.
5. Enter the name for the stateful service (e.g. **mysql**) and click **Next**.
5. Enter the name for the stateful service (for example, **mysql**) and click **Next**.

@ -15,7 +15,7 @@ weight: 18100
### Multi-cluster management
- Simplified the steps to import Member Clusters with configuration validation (e.g. `jwtSecret`) added. ([#3232](https://github.com/kubesphere/kubesphere/issues/3232))
- Simplified the steps to import Member Clusters with configuration validation (for example, `jwtSecret`) added. ([#3232](https://github.com/kubesphere/kubesphere/issues/3232))
- Refactored the cluster controller and optimized the logic. ([#3234](https://github.com/kubesphere/kubesphere/issues/3234))
- Upgraded the built-in web Kubectl, the version of which is now consistent with your Kubernetes cluster version. ([#3103](https://github.com/kubesphere/kubesphere/issues/3103))
- Supported a customized resynchronization period for the cluster controller. ([#3213](https://github.com/kubesphere/kubesphere/issues/3213))
@ -6,7 +6,7 @@ linkTitle: "Project Quotas"
weight: 9600
---
KubeSphere uses requests and limits to control resource (e.g. CPU and memory) usage in a project, also known as [ResourceQuotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) in Kubernetes. Requests make sure a project can get the resources it needs as they are specifically guaranteed and reserved. On the contrary, limits ensure that a project can never use resources above a certain value.
KubeSphere uses requests and limits to control resource (for example, CPU and memory) usage in a project, also known as [ResourceQuotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/) in Kubernetes. Requests make sure a project can get the resources it needs as they are specifically guaranteed and reserved. On the contrary, limits ensure that a project can never use resources above a certain value.
Besides CPU and memory, you can also set resource quotas for other objects separately such as Pods, [Deployments](../../project-user-guide/application-workloads/deployments/), [Jobs](../../project-user-guide/application-workloads/jobs/), [Services](../../project-user-guide/application-workloads/services/) and [ConfigMaps](../../project-user-guide/configuration/configmaps/) in a project.
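In Kubernetes terms, such a project quota is a ResourceQuota in the project's namespace. A hedged sketch with illustrative values:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-project-quota      # hypothetical name
  namespace: demo-project
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
    pods: "50"                  # object-count quotas work the same way
    configmaps: "20"
```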
@ -16,7 +16,7 @@ This tutorial demonstrates how to manage roles and members in a workspace. At th
## Prerequisites
At least one workspace has been created, such as `demo-workspace`. Besides, you need an account of the `workspace-admin` role (e.g. `ws-admin`) at the workspace level. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/).
At least one workspace has been created, such as `demo-workspace`. Besides, you need an account of the `workspace-admin` role (for example, `ws-admin`) at the workspace level. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/).
{{< notice note >}}