Merge branch 'update-app-store-docs' of https://github.com/Felixnoo/website into update-app-store-docs

This commit is contained in:
Felixnoo 2021-06-03 18:56:49 +08:00
commit 46a0ee14bb
366 changed files with 36734 additions and 942 deletions

View File

@ -2,15 +2,15 @@
This style guide provides a set of editorial guidelines for those who are writing documentation for KubeSphere.
## **Basic Rules**
## Basic Rules
- Write clearly, concisely and precisely.
- English is the preferred language to use when you write documentation. If you are not sure whether you are writing correctly, you can use grammar checkers (e.g. [grammarly](https://www.grammarly.com/)). Although they are not 100% accurate, they can help you get rid of most of the wording issues. That said, Chinese is also acceptable if you really don't know how to express your meaning in English.
- It is recommended that you use more images or diagrams to show UI functions and logical relations with tools such as [draw.io](https://draw.io).
- English is the preferred language to use when you write documentation. If you are not sure whether you are writing correctly, you can use grammar checkers (for example, [Grammarly](https://www.grammarly.com/)). Although they are not 100% accurate, they can help you get rid of most wording issues. That said, Chinese is also acceptable if you really don't know how to express your meaning in English.
- Recommended image or diagram tools: [draw.io](https://draw.io) and [Visio](https://www.microsoft.com/en-ww/microsoft-365/visio/flowchart-software/).
## Preparation Notice
Before you start writing the specific steps for a feature, state clearly what should be ready in advance, such as necessary components, accounts or roles (do not tell readers to use `admin` for all the operations, which is unreasonable in reality for different tenants), or a specific environment. You can add this part at the beginning of a tutorial or put it in a separate part (e.g. **Prerequisites**).
Before you start writing the specific steps for a feature, state clearly what should be ready in advance, such as necessary components, accounts or roles (do not tell readers to use `admin` for all the operations, which is unreasonable in reality for different tenants), or a specific environment. You can add this part at the beginning of a tutorial or put it in a separate part (for example, **Prerequisites**).
## Paragraphs
@ -19,7 +19,7 @@ Before you start writing the specific steps for a feature, state clearly what sh
- It is recommended that you use an ordered list to organize your paragraphs for a specific operation. This tells your readers which step they are in and gives them a clear view of the overall process. For example:
1. Go to **Application Workloads** and click **Workloads**.
2. Click **Create** on the right to create a deployment.
2. Click **Create** on the right to create a Deployment.
3. Enter the basic information and click **Next**.
## Titles
@ -34,7 +34,7 @@ Give a title first before you write a paragraph. It can be grouped into differen
```
- Heading 1: The title of a tutorial. You do not need to add this type of title in the main body as it is already defined at the beginning in the value `title` (see the front matter sketch after this list).
- Heading 2: The title of a major part in the tutorial. Make sure you capitalize each word in Heading 2, except prepositions, articles, conjunctions and words that are commonly written with a lower case letter at the beginning (e.g. macOS).
- Heading 2: The title of a major part in the tutorial. Make sure you capitalize each word in Heading 2, except prepositions, articles, conjunctions and words that are commonly written with a lowercase letter at the beginning (for example, macOS).
- Heading 3: A subtitle under Heading 2. You only need to capitalize the first word for Heading 3.
- Heading 4: This is rarely used as Heading 2 and Heading 3 will do in most cases. Make sure Heading 4 is really needed before you use it.
- Do not add any periods after each heading.
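For reference, here is a minimal front matter sketch. The field names match those used elsewhere in this repository; the values are illustrative only:
```yaml
---
title: "Deploy a Sample App"            # rendered as Heading 1
keywords: "KubeSphere, Kubernetes"
description: "Learn how to deploy a sample app."
linkTitle: "Deploy a Sample App"
weight: 10000
---
```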
@ -42,7 +42,7 @@ Give a title first before you write a paragraph. It can be grouped into differen
## Images
- When you submit your Markdown files to GitHub, make sure you also include in the pull request any image files they reference. Save your image files in `static/images/docs`. You can create a folder in the directory to save your images.
- If you want to add remarks (e.g. put a box on a UI button), use the color **green**. As some screenshot apps does not support the color picking function for a specific color code, as long as the color is **similar** to #09F709, #00FF00, #09F709 or #09F738, it is acceptable.
- If you want to add remarks (for example, put a box on a UI button), use the color **green**. As some screenshot apps do not support picking a specific color code, any color **similar** to #09F709, #00FF00 or #09F738 is acceptable.
- Image format: PNG.
- Make sure images in your guide match the content. For example, if you mention that users need to log in to KubeSphere with an account that has a specific role, the account displayed in your image is expected to be that one. It confuses your readers if the content you are describing is not consistent with the image used.
- Recommended: [Xnip](https://xnipapp.com/) for macOS and [Sniptool](https://www.reasyze.com/sniptool/) for Windows.
@ -51,11 +51,12 @@ Give a title first before you write a paragraph. It can be grouped into differen
## Tone
- Do not use “we”. Address the reader as “you” directly. Using “we” in a sentence can be confusing, because the reader might not know whether they are part of the “we” you are describing. You can also use words like users, developers, administrators and engineers, depending on the feature you are describing.
- Do not use words which can imply a specific gender, including he, him, his, himself, she, her, hers and herself.
| Do | Don't |
| ------------------------------------------------------------ | ------------------------------------------------------------ |
| The component has been installed. You can now use the feature. | The component has been installed. We can now use the feature. |
| Do | Don't |
| ------------------------------------------------------------ | ------------------------------------------------------------ |
| The component has been installed. You can now use the feature. | The component has been installed. We can now use the feature. |
- Do not use words which can imply a specific gender, including he, him, his, himself, she, her, hers and herself.
## Format
@ -69,85 +70,127 @@ Use a **period** or a **conjunction** between two **complete** sentences.
| Check the status of the component. You can see it is running normally. | Check the status of the component, you can see it is running normally. |
| Check the status of the component, and you can see it is running normally. | Check the status of the component, you can see it is running normally. |
### **Bold**
### Bold
- Mark any UI text (e.g. a button) in bold.
- Mark any UI text (for example, a button) in bold.
| Do | Don't |
| ------------------------------------------------------------ | ------------------------------------------------------------ |
| In the top-right corner of this page, click **Save**. | In the top-right corner of this page, click Save. |
| In **Workspaces**, you can see all your workspaces listed. | In Workspaces, you can see all your workspaces listed. |
| On the **Create Project** Page, click **OK** in the bottom-right corner to continue. | On the Create Project Page, click OK in the bottom-right corner to continue. |
| Do | Don't |
| ------------------------------------------------------------ | ------------------------------------------------------------ |
| In the top-right corner of this page, click **Save**. | In the top-right corner of this page, click Save. |
| In **Workspaces**, you can see all your workspaces listed. | In Workspaces, you can see all your workspaces listed. |
| On the **Create Project** Page, click **OK** in the bottom-right corner to continue. | On the Create Project Page, click OK in the bottom-right corner to continue. |
- Mark content that is of great importance or deserves special attention from readers in bold. For example:
KubeSphere is a **distributed operating system managing cloud-native applications** with Kubernetes as its kernel.
### **Code**
### Prepositions
When describing the UI, you can use the following prepositions.
<table>
<thead>
<tr>
<th width="15%">Preposition</th>
<th width="15%">UI element</th>
<th width="70%">Recommended</th>
</tr>
</thead>
<tbody>
<tr>
<td>in</td>
<td>
<p>dialogs</p>
<p>fields</p>
<p>lists</p>
<p>menus</p>
<p>sidebars</p>
<p>windows</p>
</td>
<td>
<p>In the <b>Delete User</b> dialog, enter the name and click <b>OK</b>.</p>
<p>In the <b>Name</b> field, enter <code>demo-name</code>.</p>
<p>In the <b>Language</b> drop-down list, select a desired language.</p>
<p>In the <b>More</b> menu, click <b>Delete</b>.</p>
<p>Click <b>Volumes</b> under <b>Storage</b> in the sidebar.</p>
<p>In the <b>Metering and Billing</b> window, click <b>View Consumption</b>.</p>
</td>
</tr>
<tr>
<td>on</td>
<td>
<p>pages</p>
<p>tabs</p>
</td>
<td>
<p>On the <b>Volumes</b> page, click <b>Create</b>.</p>
<p>On the <b>Deployments</b> tab, click <b>Create</b>.</p>
</td>
</tr>
</tbody>
</table>
### Code
- For short commands, you can just put them within backticks. This is often used when you only need to tell readers about a short command or it is sufficient to express your meaning just with the command in a sentence.
| Do | Don't |
| ------------------------------------------------- | ----------------------------------------------- |
| You can use `kubectl get pods` to list your pods. | You can use kubectl get pods to list your pods. |
| Do | Don't |
| ------------------------------------------------- | ----------------------------------------------- |
| You can use `kubectl get pods` to list your Pods. | You can use kubectl get pods to list your Pods. |
- Alternatively, you can use code fences so that readers can copy them directly, especially for long commands. For example:
Run the following command to edit the configuration of `ks-console`:
Execute the following command to edit the configuration of `ks-console`:
```bash
kubectl edit svc ks-console -o yaml -n kubesphere-system
```
```bash
kubectl edit svc ks-console -o yaml -n kubesphere-system
```
- For values, strings, fields or parameters in YAML files, put them within backticks (see the fragment after the table below).
| Do | Don't |
| ------------------------------------------------------------ | ------------------------------------------------------------ |
| Change the value of `auditing.enabled` to `false` to stop receiving auditing logs from KubeSphere. | Change the value of auditing.enabled to false to stop receiving auditing logs from KubeSphere. |
| Do | Don't |
| ------------------------------------------------------------ | ------------------------------------------------------------ |
| Change the value of `auditing.enabled` to `false` to stop receiving auditing logs from KubeSphere. | Change the value of auditing.enabled to false to stop receiving auditing logs from KubeSphere. |
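For context, the `auditing.enabled` value in the example above lives in the installer's ClusterConfiguration. The nesting below is an illustrative sketch, not an authoritative layout:
```yaml
# Illustrative ClusterConfiguration fragment; check your installer manifest for the exact layout
spec:
  auditing:
    enabled: false  # stop receiving auditing logs from KubeSphere
```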
- Put all paths and file names within backticks.
However, if the file name itself contains a link, do not put it within backticks.
| Do | Don't |
| -------------------------- | ------------------------ |
| `/root/csi-qingcloud.yaml` | /root/csi-qingcloud.yaml |
| `config-sample.yaml` | config-sample.yaml |
| `/var/lib/docker` | /var/lib/docker |
| Do | Don't |
| -------------------------- | ------------------------ |
| `/root/csi-qingcloud.yaml` | /root/csi-qingcloud.yaml |
| `config-sample.yaml` | config-sample.yaml |
| `/var/lib/docker` | /var/lib/docker |
- Put workspace names, project names, account names, and role names within ``.
- Put account names or role names within backticks.
| Do | Don't |
| ------------------------------------------------------ | ---------------------------------------------------- |
| Log in to the console as `admin`. | Log in to the console as admin. |
| The account will be assigned the role `users-manager`. | The account will be assigned the role users-manager. |
| Do | Don't |
| ------------------------------------------------------ | ---------------------------------------------------- |
| Log in to the console as `admin`. | Log in to the console as admin. |
| The account will be assigned the role `users-manager`. | The account will be assigned the role users-manager. |
### Code Comments
- If the comment is used only for a specific value, put the comment on the same line as the code. However, if the line is too long and an inline comment would hurt readability, you can put the comment above the code. For example:
```yaml
registry:
registryMirrors: [] # For users who need to speed up downloads.
```
```yaml
registry:
registryMirrors: [] # For users who need to speed up downloads.
```
```bash
# Assume your original Kubernetes cluster is v1.17.9
./kk create config --with-kubesphere --with-kubernetes v1.17.9
```
```bash
# Assume your original Kubernetes cluster is v1.17.9
./kk create config --with-kubesphere --with-kubernetes v1.17.9
```
- If the comment is used for all the code (e.g. serving as a header for explanations), put the comment at the beginning above the code. For example:
- If the comment is used for all the code (for example, serving as a header for explanations), put the comment at the beginning above the code. For example:
```yaml
# Internal LB config example
controlPlaneEndpoint:
domain: lb.kubesphere.local
address: "192.168.0.253"
port: "6443"
```
```yaml
# Internal LB config example
controlPlaneEndpoint:
domain: lb.kubesphere.local
address: "192.168.0.253"
port: "6443"
```
### Variables

View File

@ -12,12 +12,8 @@ html {
.title-div {
position: relative;
padding-bottom: 68px;
h1 {
width: 1060px;
}
p {
width: 1060px;
margin-top: 25px;
letter-spacing: -0.04px;
color: #ffffff;
@ -34,6 +30,13 @@ html {
bottom: 64px;
}
.center {
position: relative;
margin-top: 32px;
text-align: center;
bottom: 0;
}
@media only screen and (max-width: $mobile-max-width) {
width: 100%;
h1 {

View File

@ -3,6 +3,6 @@ title: KubeSphere Api Documents
description: KubeSphere API Documents
keywords: KubeSphere, KubeSphere Documents, Kubernetes
disallow: true
swaggerUrl: json/crd.json
swaggerUrl: json/crd-3.1.json
---

View File

@ -3,5 +3,5 @@ title: KubeSphere Api Documents
description: KubeSphere API Documents
keywords: KubeSphere, KubeSphere Documents, Kubernetes
disallow: true
swaggerUrl: json/kubesphere.json
swaggerUrl: json/kubesphere-3.1.json
---

View File

@ -105,7 +105,7 @@ Now that you have Helm charts ready, you can upload them to KubeSphere as app te
You can release apps you have uploaded to KubeSphere to the public repository, also known as the App Store. In this way, all tenants on the platform can see these apps and deploy them if they have the necessary permissions, regardless of the workspace they belong to.
1. Click **Platform** in the top left corner and select **Access Control**.
1. Click **Platform** in the top-left corner and select **Access Control**.
2. On the **Workspaces** page, click the workspace where you have uploaded the Helm charts above.
@ -119,7 +119,7 @@ You can release apps you have uploaded to KubeSphere to the public repository, a
![detail-page](https://ap3.qingstor.com/kubesphere-website/docs/20201201150948.png)
5. After the app is submitted for review, I need to approve it before it can be released to the App Store. Click **Platform** in the top left corner and select **App Store Management**.
5. After the app is submitted for review, I need to approve it before it can be released to the App Store. Click **Platform** in the top-left corner and select **App Store Management**.
![app-store-management](https://ap3.qingstor.com/kubesphere-website/docs/20201201152220.png)
@ -131,7 +131,7 @@ You can release apps you have uploaded to KubeSphere to the public repository, a
![approve-app](https://ap3.qingstor.com/kubesphere-website/docs/20201201152734.png)
8. After the app is approved, you can release it to the App Store. Click **Platform** in the top left corner, select **Access Control**, and go back to your workspace. Select **App Templates** from the navigation bar and click **tidb-operator**.
8. After the app is approved, you can release it to the App Store. Click **Platform** in the top-left corner, select **Access Control**, and go back to your workspace. Select **App Templates** from the navigation bar and click **tidb-operator**.
![tidb-operator-app-template](https://ap3.qingstor.com/kubesphere-website/docs/20201201153102.png)
@ -141,7 +141,7 @@ You can release apps you have uploaded to KubeSphere to the public repository, a
![release-prompt](https://ap3.qingstor.com/kubesphere-website/docs/20201201153423.png)
11. To view the app released, click **App Store** in the top left corner and you can see it in the App Store. Likewise, you can deploy **tidb-cluster** to the App Store by following the same step.
11. To view the released app, click **App Store** in the top-left corner and you can see it in the App Store. Likewise, you can release **tidb-cluster** to the App Store by following the same steps.
![tidb-operator](https://ap3.qingstor.com/kubesphere-website/docs/20201201154211.png)

View File

@ -14,7 +14,7 @@ In a world where Kubernetes has become the de facto standard to build applicatio
![tidb-architecture](https://ap3.qingstor.com/kubesphere-website/docs/tidb-architecture.png)
In addition to TiDB, I am also using [KubeSphere](https://kubesphere.io/), an open-source distributed operating system that manages cloud-native applications with [Kubernetes](https://kubernetes.io/) as its kernel. It provides a plug-and-play architecture for the seamless integration of third-party applications to boost its ecosystem. [KubeSphere can be run anywhere](https://kubesphere.io/docs/introduction/what-is-kubesphere/#run-kubesphere-everywhere) as it is highly pluggable without any hacking into Kubernetes.
In addition to TiDB, I am also using KubeSphere [Container Platform](https://kubesphere.io/), an open-source distributed operating system that manages cloud-native applications with [Kubernetes](https://kubernetes.io/) as its kernel. It provides a plug-and-play architecture for the seamless integration of third-party applications to boost its ecosystem. [KubeSphere can be run anywhere](https://kubesphere.io/docs/introduction/what-is-kubesphere/#run-kubesphere-everywhere) as it is highly pluggable without any hacking into Kubernetes.
![KubeSphere-structure-comp](https://ap3.qingstor.com/kubesphere-website/docs/KubeSphere-structure-comp.png)
@ -40,7 +40,7 @@ Therefore, I select QingCloud Kubernetes Engine (QKE) to prepare the environment
![cluster-management](https://ap3.qingstor.com/kubesphere-website/docs/20201026175447.png)
3. Use the built-in **Web Kubectl** from the Toolkit in the bottom right corner to execute the following command to install TiDB Operator CRD:
3. Use the built-in **Web Kubectl** from the Toolkit in the bottom-right corner to execute the following command to install TiDB Operator CRD:
```bash
kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.6/manifests/crd.yaml
@ -243,4 +243,4 @@ If you have any questions, don't hesitate to contact us in [Slack](https://join.
**KubeSphere Introduction**: https://kubesphere.io/docs/introduction/what-is-kubesphere/
**KubeSphere Documentation**: https://kubesphere.io/docs/
**KubeSphere Documentation**: https://kubesphere.io/docs/

View File

@ -56,7 +56,7 @@ This is basically the same as what I did last time as we need to make sure all o
- As NFS itself does not have an internal provisioner, I will be using NFS-client Provisioner for dynamic provisioning of volumes.
- `kubectl` is integrated into the console of KubeSphere. You can run commands with it from **Toolbox** in the bottom right corner of the KubeSphere dashboard.
- `kubectl` is integrated into the console of KubeSphere. You can run commands with it from **Toolbox** in the bottom-right corner of the KubeSphere dashboard.
{{</ notice >}}
@ -113,7 +113,7 @@ To mount a volume to your workload, you need to create a [PersistentVolumeClaim]
{{< notice note >}}
To create workloads in KubeSphere, you can create and apply YAML files just as what you did before (**Edit Mode** in the top right corner). At the same time, you can also set parameters for your workloads on the KubeSphere dashboard one by one. I will not talk about the whole process in detail as this article is mainly about how to configure storage and create volumes. Have a look at [the KubeSphere documentation](https://kubesphere.io/docs/project-user-guide/application-workloads/deployments/) to learn more about how to create workloads.
To create workloads in KubeSphere, you can create and apply YAML files just as you did before (**Edit Mode** in the top-right corner). At the same time, you can also set parameters for your workloads on the KubeSphere dashboard one by one. I will not cover the whole process in detail as this article is mainly about how to configure storage and create volumes. Have a look at [the KubeSphere documentation](https://kubesphere.io/docs/project-user-guide/application-workloads/deployments/) to learn more about how to create workloads.
{{</ notice >}}
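To make the PVC step concrete, here is a minimal manifest sketch. It assumes the NFS-client provisioner registered a StorageClass named `nfs-client`; both that name and `demo-pvc` are illustrative:
```yaml
# Minimal PVC sketch; adjust the StorageClass name to match your NFS-client setup
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```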

View File

@ -169,7 +169,7 @@ We surely can create an application through the Web UI, but let's use the CLI to
![argocd-service](/images/blogs/en/argo-cd-a-tool-for-devops/argocd-service.png)
3. Click it to see its details. You can also click the icon in the upper right corner to view its topology diagram.
3. Click it to see its details. You can also click the icon in the upper-right corner to view its topology diagram.
![argocd-topology](/images/blogs/en/argo-cd-a-tool-for-devops//argocd-topology.png)

View File

@ -317,7 +317,7 @@ You can verify that NFS-client has been successfully installed either from the c
1. The `ks-console` Service is being exposed through a NodePort. Log in to the console at `<node IP>:30880` with the default account and password (`admin/P@88w0rd`). You may need to open the port in your security groups and configure relevant port forwarding rules depending on your environment.
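If you prefer to confirm the NodePort from the CLI first, one way is the following command (assuming the default namespace layout):
```bash
# Show the ks-console Service and the NodePort it is exposed on
kubectl get svc ks-console -n kubesphere-system
```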
2. Click **Platform** in the top left corner and go to **Cluster Management**. In **Storage Classes** under **Storage**, you can see two storage classes:
2. Click **Platform** in the top-left corner and go to **Cluster Management**. In **Storage Classes** under **Storage**, you can see two storage classes:
![nfs-storage-class](/images/blogs/en/install-nfs-server-client-for-kubesphere-cluster/nfs-storage-class.png)

View File

@ -1,49 +1,55 @@
---
title: "DevOps With Kubernetes And KubeSphere"
description: KubeSphere DevOps offers powerful CI/CD features with excellent scalability and observability on top of Kubernetes for DevOps-oriented teams.
layout: "scenario"
css: "scss/scenario.scss"
section1:
title: KubeSphere DevOps offers end-to-end workflows and integrates popular CI/CD tools to boost delivery.
content: KubeSphere DevOps provides CI/CD pipelines based on Jenkins with automated workflows including Binary-to-Image (B2I) and Source-to-Image (S2I). It helps organizations accelerate the time to market for products.
image: /images/devops/banner.jpg
title: "KubeSphere DevOps: A Powerful CI/CD Platform Built on Top of Kubernetes for DevOps-oriented Teams."
content: KubeSphere DevOps integrates popular CI/CD tools, provides CI/CD pipelines based on Jenkins, offers automation toolkits including Binary-to-Image (B2I) and Source-to-Image (S2I), and boosts continuous delivery across Kubernetes clusters.
content2: With the container orchestration capability of Kubernetes, KubeSphere DevOps scales Jenkins Agents dynamically, improves CI/CD workflow efficiency, and helps organizations accelerate the time to market for products.
image: /images/devops/banner.png
showDownload: true
inCenter: true
image: /images/devops/dev-ops.png
section2:
title: Automatically Checkout Code, Test, Analyse, Build, Deploy and Release
title: Run CI/CD Pipelines in Kubernetes Clusters to Implement Automated Code Checkout, Testing, Code Analysis, Building, Deploying and Releasing
list:
- title: Out-of-the-Box CI/CD Pipelines
image: /images/devops/CD-pipeline.png
contentList:
- content: <span>Easy to integrate with your SCM,</span> supporting GitLab / GitHub / BitBucket / SVN
- content: <span>Design graphical editing panels</span> to create CI/CD pipelines without writing any Jenkinsfile
- content: <span>Integrate SonarQube</span> to implement source code quality analysis
- content: <span>Support dependency cache</span> to accelerate build and deployment
- content: <span>Provide dynamic build agents</span> to automatically spin up Pods as necessary
- content: <span>Easy integration with SCM</span> including GitLab/GitHub/BitBucket/SVN to simplify continuous integration
- content: <span>Graphical editing panels</span> designed to visualize and simplify CI/CD pipeline creation without writing any Jenkinsfile
- content: <span>Easy SonarQube Integration</span> to implement source code quality analysis and view results on the KubeSphere console
- content: <span>Dependency cache available</span> for tools like Maven running in Kubernetes Pods to accelerate image building and workload deployment across Kubernetes clusters
- title: Built-in Automated Toolkits
- title: Built-in Automation Toolkits for DevOps with Kubernetes
image: /images/devops/Built-in-automated-toolkits.png
contentList:
- content: <span>Source-to-Image</span> builds reproducible container images from source code without writing any Dockerfile
- content: <span>Binary-to-Image</span> is the bridge between your artifact and a runnable image
- content: <span>Support automatically building and pushing</span> images to any registry, and finally deploying them to Kubernetes
- content: <span>Provide excellent recoverability and flexibility</span> as you can rebuild and rerun S2I / B2I whenever a patch is needed
- content: <span>Source-to-Image</span> builds reproducible container images from source code without writing any Dockerfile and deploys workloads to Kubernetes clusters
- content: <span>Binary-to-Image</span> builds your artifacts into runnable images and deploys workloads to Kubernetes clusters
- content: <span>Automating image building and pushing</span> to any registry and achieving continuous deployment to Kubernetes clusters
- content: <span>Excellent resiliency and recoverability</span> as you can copy pipelines and run them concurrently as well as rebuild and rerun S2I/B2I whenever a patch is needed
- title: Use GitOps to Implement DevOps
- title: Use GitOps to Implement DevOps on Top of Kubernetes
image: /images/devops/Clear-insight.png
contentList:
- content: <span>Combine Git with Kubernetes, automating cloud-native app delivery</span>
- content: <span>Designed for DevOps teamwork on the basis of the multi-tenant system of KubeSphere</span>
- content: <span>Powerful observability,</span> providing dynamic logs for S2I / B2I builds and pipelines
- content: Provide auditing, alerting and notifications in pipelines, ensuring issues can be quickly located and solved
- content: Support adding Git SCM webhooks to trigger a Jenkins build when new commits are submitted to the branch
- content: <span>Kubernetes combined with Git</span> to facilitate continuous integration with code repositories and boost continuous delivery of cloud-native applications
- content: <span>Efficient DevOps teamwork</span> through the KubeSphere multi-tenant system on the basis of Kubernetes RBAC to achieve better access control in CI/CD workflows
- content: <span>Powerful DevOps observability</span> with dynamic logs for S2I/B2I builds and pipelines to help you manage Kubernetes DevOps resources with ease
- content: <span>Auditing, alerting and notifications</span> available for pipelines to ensure quick identification and resolution of issues throughout CI/CD workflows
- content: <span>Git webhooks for SCM pipelines</span> to automatically trigger a Jenkins build when new commits are submitted to a branch
section3:
title: See the KubeSphere One-Stop DevOps Workflow in Action
videoLink: https://www.youtube.com/embed/c3V-2RX9yGY
image: /images/service-mesh/15.jpg
showDownload: true
content: Want to get started in action by following the hands-on lab?
btnContent: Start Hands-on Lab
link: docs/pluggable-components/devops/

View File

@ -1,5 +1,5 @@
---
title: "Multi-tenancy in KubeSphere"
title: "Kubernetes Multi-tenancy in KubeSphere"
keywords: "Kubernetes, Kubesphere, multi-tenancy"
description: "Understand the multi-tenant architecture in KubeSphere."
linkTitle: "Multi-tenancy in KubeSphere"
@ -12,7 +12,7 @@ The first and foremost challenge is how to define multi-tenancy in an enterprise
## Challenges in Kubernetes Multi-tenancy
Multi-tenancy is a common software architecture. Resources in a multi-tenant environment are shared by multiple users, also known as "tenants", with their respective data isolated from each other. The administrator of a multi-tenant cluster must minimize the damage that a compromised or malicious tenant can do to others and make sure resources are fairly allocated.
Multi-tenancy is a common software architecture. Resources in a multi-tenant environment are shared by multiple users, also known as "tenants", with their respective data isolated from each other. The administrator of a multi-tenant Kubernetes cluster must minimize the damage that a compromised or malicious tenant can do to others and make sure resources are fairly allocated.
No matter how an enterprise multi-tenant system is structured, it always comes with the following two building blocks: logical resource isolation and physical resource isolation.
@ -20,7 +20,7 @@ Logically, resource isolation mainly entails API access control and tenant-based
The isolation of physical resources includes nodes and networks, while it also relates to container runtime security. For example, you can create [NetworkPolicy](../../pluggable-components/network-policy/) resources to control traffic flow and use PodSecurityPolicy objects to control container behavior. [Kata Containers](https://katacontainers.io/) provides a more secure container runtime.
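As a concrete illustration of traffic isolation, here is a minimal default-deny NetworkPolicy sketch. It is generic Kubernetes, not a KubeSphere-specific policy, and the `demo` namespace is hypothetical:
```yaml
# Deny all ingress traffic to every Pod in the demo namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo
spec:
  podSelector: {}   # an empty selector matches all Pods in the namespace
  policyTypes:
    - Ingress       # no ingress rules are listed, so all ingress is denied
```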
## Multi-tenancy in KubeSphere
## Kubernetes Multi-tenancy in KubeSphere
To solve the issues above, KubeSphere provides a multi-tenant management solution based on Kubernetes.

View File

@ -130,7 +130,7 @@ This tutorial demonstrates how to deploy GitLab on KubeSphere.
### Step 5: Access GitLab
1. Go to **Services** under **Application Workloads**, input `nginx-ingress-controller` in the search bar, and then press **Enter** on your keyboard to search the Service. You can see the Service is being exposed through port `32618`, which you can use to access GitLab.
1. Go to **Services** under **Application Workloads**, enter `nginx-ingress-controller` in the search bar, and then press **Enter** on your keyboard to search the Service. You can see the Service is being exposed through port `32618`, which you can use to access GitLab.
![search-service](/images/docs/appstore/external-apps/deploy-gitlab/search-service.PNG)

View File

@ -15,7 +15,7 @@ You need an account granted a role including the authorization of **Cluster Mana
## Resource Usage
1. Click **Platform** in the top left corner and select **Cluster Management**.
1. Click **Platform** in the top-left corner and select **Cluster Management**.
![Platform](/images/docs/cluster-administration/cluster-status-monitoring/platform.png)

View File

@ -20,7 +20,7 @@ This guide demonstrates how to set cluster visibility.
1. Log in to KubeSphere with an account that has the permission to create a workspace, such as `ws-manager`.
2. Click **Platform** in the top left corner and select **Access Control**. In **Workspaces** from the navigation bar, click **Create**.
2. Click **Platform** in the top-left corner and select **Access Control**. In **Workspaces** from the navigation bar, click **Create**.
![create-workspace](/images/docs/cluster-administration/cluster-settings/cluster-visibility-and-authorization/create-workspace.jpg)
@ -46,7 +46,7 @@ After a workspace is created, you can allocate additional clusters to the worksp
1. Log in to KubeSphere with an account that has the permission to manage clusters, such as `admin`.
2. Click **Platform** in the top left corner and select **Cluster Management**. Select a cluster from the list to view cluster information.
2. Click **Platform** in the top-left corner and select **Cluster Management**. Select a cluster from the list to view cluster information.
3. In **Cluster Settings** from the navigation bar, select **Cluster Visibility**.

View File

@ -1,5 +1,5 @@
---
linkTitle: "Log Collections"
linkTitle: "Log Collection"
weight: 8620
_build:

View File

@ -1,5 +1,5 @@
---
title: "Add Elasticsearch as a Receiver (i.e. Collector)"
title: "Add Elasticsearch as a Receiver"
keywords: 'Kubernetes, log, elasticsearch, pod, container, fluentbit, output'
description: 'Learn how to add Elasticsearch to receive logs, events or auditing logs.'
linkTitle: "Add Elasticsearch as a Receiver"
@ -9,29 +9,29 @@ You can use Elasticsearch, Kafka and Fluentd as log receivers in KubeSphere. Thi
## Prerequisites
- You need an account granted a role including the authorization of **Cluster Management**. For example, you can log in to the console as `admin` directly or create a new role with the authorization and assign it to an account.
- You need an account granted a role that includes the **Cluster Management** permission. For example, you can log in to the console as `admin` directly or create a new role with the permission and assign it to an account.
- Before adding a log receiver, you need to enable any of the `logging`, `events` or `auditing` components. For more information, see [Enable Pluggable Components](../../../../pluggable-components/). `logging` is enabled as an example in this tutorial.
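For reference, enabling a component such as `logging` is typically done by editing the installer's ClusterConfiguration; the command below assumes the default installer name and namespace:
```bash
# Open the ClusterConfiguration, then set spec.logging.enabled
# (or events/auditing) to true
kubectl edit clusterconfiguration ks-installer -n kubesphere-system
```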
## Add Elasticsearch as a Receiver
1. Log in to KubeSphere as `admin`. Click **Platform** in the top left corner and select **Cluster Management**.
1. Log in to KubeSphere as `admin`. Click **Platform** in the top-left corner and select **Cluster Management**.
2. If you have enabled the [multi-cluster feature](../../../../multicluster-management/), you can select a specific cluster. If you have not enabled the feature, refer to the next step directly.
{{< notice note >}}
3. On the **Cluster Management** page, go to **Log Collections** in **Cluster Settings**.
If you have enabled the [multi-cluster feature](../../../../multicluster-management/), you can select a specific cluster.
4. Click **Add Log Collector** and choose **Elasticsearch**.
{{</ notice >}}
![add-receiver](/images/docs/cluster-administration/cluster-settings/log-collections/add-es-as-receiver/add-receiver.png)
2. On the **Cluster Management** page, go to **Log Collection** in **Cluster Settings**.
5. Provide the Elasticsearch service address and port as below:
3. Click **Add Log Receiver** and choose **Elasticsearch**.
4. Provide the Elasticsearch service address and port as below:
![add-es](/images/docs/cluster-administration/cluster-settings/log-collections/add-es-as-receiver/add-es.png)
6. Elasticsearch will appear in the receiver list on the **Log Collections** page, the status of which is **Collecting**.
5. Elasticsearch will appear in the receiver list on the **Log Collection** page with the status **Collecting**.
![receiver-list](/images/docs/cluster-administration/cluster-settings/log-collections/add-es-as-receiver/receiver-list.png)
7. To verify whether Elasticsearch is receiving logs sent from Fluent Bit, click **Log Search** in the **Toolbox** in the bottom right corner and search logs on the console. For more information, read [Log Query](../../../../toolbox/log-query/).
6. To verify whether Elasticsearch is receiving logs sent from Fluent Bit, click **Log Search** in the **Toolbox** in the bottom-right corner and search logs on the console. For more information, read [Log Query](../../../../toolbox/log-query/).

View File

@ -1,5 +1,5 @@
---
title: "Add Fluentd as a Receiver (i.e. Collector)"
title: "Add Fluentd as a Receiver"
keywords: 'Kubernetes, log, fluentd, pod, container, fluentbit, output'
description: 'Learn how to add Fluentd to receive logs, events or auditing logs.'
linkTitle: "Add Fluentd as a Receiver"
@ -13,7 +13,7 @@ You can use Elasticsearch, Kafka and Fluentd as log receivers in KubeSphere. Thi
## Prerequisites
- You need an account granted a role including the authorization of **Cluster Management**. For example, you can log in to the console as `admin` directly or create a new role with the authorization and assign it to an account.
- You need an account granted a role that includes the **Cluster Management** permission. For example, you can log in to the console as `admin` directly or create a new role with the permission and assign it to an account.
- Before adding a log receiver, you need to enable any of the `logging`, `events` or `auditing` components. For more information, see [Enable Pluggable Components](../../../../pluggable-components/). `logging` is enabled as an example in this tutorial.
@ -120,29 +120,32 @@ spec:
EOF
```
## Step 2: Add Fluentd as a Log Receiver (i.e. Collector)
## Step 2: Add Fluentd as a Log Receiver
1. Log in to KubeSphere as `admin`. Click **Platform** in the top left corner and select **Cluster Management**.
2. If you have enabled the [multi-cluster feature](../../../../multicluster-management/), you can select a specific cluster. If you have not enabled the feature, refer to the next step directly.
3. On the **Cluster Management** page, go to **Log Collections** in **Cluster Settings**.
1. Log in to KubeSphere as `admin`. Click **Platform** in the top-left corner and select **Cluster Management**.
4. Click **Add Log Collector** and choose **Fluentd**.
{{< notice note >}}
![add-receiver](/images/docs/cluster-administration/cluster-settings/log-collections/add-fluentd-as-receiver/add-receiver.png)
If you have enabled the [multi-cluster feature](../../../../multicluster-management/), you can select a specific cluster.
5. Provide the Fluentd service address and port as below:
{{</ notice >}}
2. On the **Cluster Management** page, go to **Log Collection** in **Cluster Settings**.
3. Click **Add Log Receiver** and choose **Fluentd**.
4. Provide the Fluentd service address and port as below:
![add-fluentd](/images/docs/cluster-administration/cluster-settings/log-collections/add-fluentd-as-receiver/add-fluentd.png)
6. Fluentd will appear in the receiver list on the **Log Collections** page, the status of which is **Collecting**.
5. Fluentd will appear in the receiver list on the **Log Collection** page with the status **Collecting**.
![receiver-list](/images/docs/cluster-administration/cluster-settings/log-collections/add-fluentd-as-receiver/receiver-list.png)
## Step 3: Verify Fluentd is Receiving Logs Sent from Fluent Bit
1. Click **Application Workloads** on the **Cluster Management** page.
2. Select **Workloads** and then select the `default` project from the drop-down list in the **Deployments** tab.
2. Select **Workloads** and then select the `default` project from the drop-down list on the **Deployments** tab.
3. Click the **fluentd** item and then select the **fluentd-xxxxxxxxx-xxxxx** Pod.

View File

@ -1,5 +1,5 @@
---
title: "Add Kafka as a Receiver (i.e. Collector)"
title: "Add Kafka as a Receiver"
keywords: 'Kubernetes, log, kafka, pod, container, fluentbit, output'
description: 'Learn how to add Kafka to receive logs, events or auditing logs.'
linkTitle: "Add Kafka as a Receiver"
@ -13,7 +13,7 @@ You can use Elasticsearch, Kafka and Fluentd as log receivers in KubeSphere. Thi
## Prerequisites
- You need an account granted a role including the authorization of **Cluster Management**. For example, you can log in to the console as `admin` directly or create a new role with the authorization and assign it to an account.
- You need an account granted a role that includes the **Cluster Management** permission. For example, you can log in to the console as `admin` directly or create a new role with the permission and assign it to an account.
- Before adding a log receiver, you need to enable any of the `logging`, `events` or `auditing` components. For more information, see [Enable Pluggable Components](../../../../pluggable-components/). `logging` is enabled as an example in this tutorial.
## Step 1: Create a Kafka Cluster and a Kafka Topic
@ -101,23 +101,27 @@ You can use [strimzi-kafka-operator](https://github.com/strimzi/strimzi-kafka-op
## Step 2: Add Kafka as a Log Receiver
1. Log in to KubeSphere as `admin`. Click **Platform** in the top left corner and select **Cluster Management**.
1. Log in to KubeSphere as `admin`. Click **Platform** in the top-left corner and select **Cluster Management**.
2. If you have enabled the [multi-cluster feature](../../../../multicluster-management/), you can select a specific cluster. If you have not enabled the feature, refer to the next step directly.
{{< notice note >}}
3. On the **Cluster Management** page, go to **Log Collections** in **Cluster Settings**.
If you have enabled the [multi-cluster feature](../../../../multicluster-management/), you can select a specific cluster.
4. Click **Add Log Collector** and select **Kafka**. Input the Kafka broker address and port as below, and then click **OK** to continue.
{{</ notice >}}
```bash
my-cluster-kafka-0.my-cluster-kafka-brokers.default.svc 9092
my-cluster-kafka-1.my-cluster-kafka-brokers.default.svc 9092
my-cluster-kafka-2.my-cluster-kafka-brokers.default.svc 9092
```
2. On the **Cluster Management** page, go to **Log Collection** in **Cluster Settings**.
3. Click **Add Log Receiver** and select **Kafka**. Enter the Kafka broker address and port as below, and then click **OK** to continue.
| Address | Port |
| ------------------------------------------------------- | ---- |
| my-cluster-kafka-0.my-cluster-kafka-brokers.default.svc | 9092 |
| my-cluster-kafka-1.my-cluster-kafka-brokers.default.svc | 9092 |
| my-cluster-kafka-2.my-cluster-kafka-brokers.default.svc | 9092 |
![add-kafka](/images/docs/cluster-administration/cluster-settings/log-collections/add-kafka-as-receiver/add-kafka.png)
5. Run the following commands to verify whether the Kafka cluster is receiving logs sent from Fluent Bit:
4. Run the following commands to verify whether the Kafka cluster is receiving logs sent from Fluent Bit:
```bash
# Start a util container

View File

@ -1,7 +1,7 @@
---
title: "Introduction to Log Collections"
title: "Introduction to Log Collection"
keywords: 'Kubernetes, log, elasticsearch, kafka, fluentd, pod, container, fluentbit, output'
description: 'Learn the basics of cluster log collections, including tools and general steps.'
description: 'Learn the basics of cluster log collection, including tools and general steps.'
linkTitle: "Introduction"
weight: 8621
---
@ -12,25 +12,27 @@ This tutorial gives a brief introduction about the general steps of adding log r
## Prerequisites
- You need an account granted a role including the authorization of **Cluster Management**. For example, you can log in to the console as `admin` directly or create a new role with the authorization and assign it to an account.
- You need an account granted a role that includes the **Cluster Management** permission. For example, you can log in to the console as `admin` directly or create a new role with the permission and assign it to an account.
- Before adding a log receiver, you need to enable any of the `logging`, `events` or `auditing` components. For more information, see [Enable Pluggable Components](../../../../pluggable-components/).
## Add a Log Receiver (i.e. Collector) for Container Logs
## Add a Log Receiver for Container Logs
To add a log receiver:
1. Log in to the web console of KubeSphere as `admin`.
2. Click **Platform** in the top left corner and select **Cluster Management**.
2. Click **Platform** in the top-left corner and select **Cluster Management**.
3. If you have enabled the [multi-cluster feature](../../../../multicluster-management/), you can select a specific cluster. If you have not enabled the feature, refer to the next step directly.
{{< notice note >}}
4. Go to **Log Collections** in **Cluster Settings**.
If you have enabled the [multi-cluster feature](../../../../multicluster-management/), you can select a specific cluster.
5. Click **Add Log Collector** in the **Logging** tab.
{{</ notice >}}
![log-collections](/images/docs/cluster-administration/cluster-settings/log-collections/introduction/log-collections.png)
3. Go to **Log Collection** under **Cluster Settings** in the sidebar.
4. Click **Add Log Receiver** on the **Logging** tab.
{{< notice note >}}
@ -57,11 +59,9 @@ Kafka is often used to receive logs and serves as a broker to other processing s
If you need to output logs to more places other than Elasticsearch or Kafka, you can add Fluentd as a log receiver. Fluentd has numerous output plugins which can forward logs to various destinations such as S3, MongoDB, Cassandra, MySQL, syslog, and Splunk. [Add Fluentd as a Receiver](../add-fluentd-as-receiver/) demonstrates how to add Fluentd to receive Kubernetes logs.
## Add a Log Receiver (i.e. Collector) for Events or Auditing Logs
## Add a Log Receiver for Events or Auditing Logs
Starting from KubeSphere v3.0.0, the logs of Kubernetes events and the auditing logs of Kubernetes and KubeSphere can be archived in the same way as container logs. The tab **Events** or **Auditing** on the **Log Collections** page will appear if `events` or `auditing` is enabled accordingly in [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/release-1.1/docs/config-example.md). You can go to the corresponding tab to configure log receivers for Kubernetes events or Kubernetes and KubeSphere auditing logs.
![log-collections-events](/images/docs/cluster-administration/cluster-settings/log-collections/introduction/log-collections-events.png)
Starting from KubeSphere v3.0.0, the logs of Kubernetes events and the auditing logs of Kubernetes and KubeSphere can be archived in the same way as container logs. The tab **Events** or **Auditing** on the **Log Collection** page will appear if `events` or `auditing` is enabled accordingly in [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/release-1.1/docs/config-example.md). You can go to the corresponding tab to configure log receivers for Kubernetes events or Kubernetes and KubeSphere auditing logs.
Container logs, Kubernetes events and Kubernetes and KubeSphere auditing logs should be stored in different Elasticsearch indices to be searched in KubeSphere. The index prefixes are:
@ -73,26 +73,19 @@ Container logs, Kubernetes events and Kubernetes and KubeSphere auditing logs sh
You can turn a log receiver on or off without adding or deleting it. To turn a log receiver on or off:
1. On the **Log Collections** page, click a log receiver and go to the receiver's detail page.
1. On the **Log Collection** page, click a log receiver and go to the receiver's detail page.
2. Click **More** and select **Change Status**.
![more](/images/docs/cluster-administration/cluster-settings/log-collections/introduction/more.png)
3. Select **Activate** or **Close** to turn the log receiver on or off.
![change-status](/images/docs/cluster-administration/cluster-settings/log-collections/introduction/change-status.png)
4. A log receiver's status will be changed to **Close** if you turn it off, otherwise the status will be **Collecting** on the **Log Collection** page.
4. A log receiver's status changes to **Close** if you turn it off; otherwise, the status is **Collecting**.
![receiver-status](/images/docs/cluster-administration/cluster-settings/log-collections/introduction/receiver-status.png)
## Modify or Delete a Log Receiver
You can modify a log receiver or delete it:
1. On the **Log Collections** page, click a log receiver and go to the receiver's detail page.
1. On the **Log Collection** page, click a log receiver and go to the receiver's detail page.
2. Edit a log receiver by clicking **Edit** or **Edit YAML** from the drop-down list.
![more](/images/docs/cluster-administration/cluster-settings/log-collections/introduction/more.png)
3. Delete a log receiver by clicking **Delete Log Collector**.
3. Delete a log receiver by clicking **Delete Log Receiver**.

View File

@ -14,7 +14,7 @@ You need an account granted a role including the authorization of **Cluster Mana
## Cluster Status Monitoring
1. Click **Platform** in the top left corner and select **Cluster Management**.
1. Click **Platform** in the top-left corner and select **Cluster Management**.
![Platform](/images/docs/cluster-administration/cluster-status-monitoring/platform.png)
@ -41,7 +41,7 @@ You need an account granted a role including the authorization of **Cluster Mana
![Monitoring](/images/docs/cluster-administration/cluster-status-monitoring/monitoring.png)
{{< notice tip >}}
You can customize the time range from the drop-down list in the top right corner to view historical data.
You can customize the time range from the drop-down list in the top-right corner to view historical data.
{{</ notice >}}
### Component status

View File

@ -19,7 +19,7 @@ KubeSphere also has built-in policies which will trigger alerts if conditions de
## Create an Alerting Policy
1. Log in to the console as `cluster-admin`. Click **Platform** in the top left corner, and then click **Cluster Management**.
1. Log in to the console as `cluster-admin`. Click **Platform** in the top-left corner, and then click **Cluster Management**.
2. Navigate to **Alerting Policies** under **Monitoring & Alerting**, and then click **Create**.

View File

@ -19,7 +19,7 @@ You need an account granted a role including the authorization of **Cluster Mana
Cluster nodes are only accessible to cluster administrators. Some node metrics are very important to clusters. Therefore, it is the administrator's responsibility to watch over these numbers and make sure nodes are available. Follow the steps below to view node status.
1. Click **Platform** in the top left corner and select **Cluster Management**.
1. Click **Platform** in the top-left corner and select **Cluster Management**.
![clusters-management-select](/images/docs/cluster-administration/node-management/clusters-management-select.jpg)
@ -51,7 +51,7 @@ Click a node from the list and you can go to its detail page.
![Node Detail](/images/docs/cluster-administration/node-management/node_detail.png)
- **Cordon/Uncordon**: Marking a node as unschedulable is very useful during a node reboot or other maintenance. The Kubernetes scheduler will not schedule new Pods to this node if it has been marked unschedulable. This does not affect existing workloads already on the node. In KubeSphere, you mark a node as unschedulable by clicking **Cordon** on the node detail page. The node becomes schedulable again when you click the same button (**Uncordon**).
- **Labels**: Node labels can be very useful when you want to assign Pods to specific nodes. Label a node first (e.g. label GPU nodes with `node-role.kubernetes.io/gpu-node`), and then add the label in **Advanced Settings** [when you create a workload](../../project-user-guide/application-workloads/deployments/#step-5-configure-advanced-settings) so that you can allow Pods to run on GPU nodes explicitly. To add node labels, click **More** and select **Edit Labels**.
- **Labels**: Node labels can be very useful when you want to assign Pods to specific nodes. Label a node first (for example, label GPU nodes with `node-role.kubernetes.io/gpu-node`), and then add the label in **Advanced Settings** [when you create a workload](../../project-user-guide/application-workloads/deployments/#step-5-configure-advanced-settings) so that you can allow Pods to run on GPU nodes explicitly. To add node labels, click **More** and select **Edit Labels**.
![drop-down-list-node](/images/docs/cluster-administration/node-management/drop-down-list-node.jpg)
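The labeling step described above can also be done from the CLI; a sketch, assuming a node named `node1` and an illustrative label value:
```bash
# Label a GPU node so workloads can target it explicitly
kubectl label node node1 node-role.kubernetes.io/gpu-node=true
```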

View File

@ -24,7 +24,7 @@ The table below summarizes common volume plugins for various provisioners (stora
| -------------------- | ------------------------------------------------------------ |
| In-tree | Built-in and run as part of Kubernetes, such as [RBD](https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd) and [Glusterfs](https://kubernetes.io/docs/concepts/storage/storage-classes/#glusterfs). For more plugins of this kind, see [Provisioner](https://kubernetes.io/docs/concepts/storage/storage-classes/#provisioner). |
| External-provisioner | Deployed independently from Kubernetes, but works like an in-tree plugin, such as [nfs-client](https://github.com/kubernetes-retired/external-storage/tree/master/nfs-client). For more plugins of this kind, see [External Storage](https://github.com/kubernetes-retired/external-storage). |
| CSI | Container Storage Interface, a standard for exposing storage resources to workloads on COs (e.g. Kubernetes), such as [QingCloud-csi](https://github.com/yunify/qingcloud-csi) and [Ceph-CSI](https://github.com/ceph/ceph-csi). For more plugins of this kind, see [Drivers](https://kubernetes-csi.github.io/docs/drivers.html). |
| CSI | Container Storage Interface, a standard for exposing storage resources to workloads on container orchestrators (COs), for example, Kubernetes. Plugins of this kind include [QingCloud-csi](https://github.com/yunify/qingcloud-csi) and [Ceph-CSI](https://github.com/ceph/ceph-csi). For more, see [Drivers](https://kubernetes-csi.github.io/docs/drivers.html). |
## Prerequisites
@ -32,7 +32,7 @@ You need an account granted a role including the authorization of **Cluster Mana
## Manage Storage Classes
1. Click **Platform** in the top left corner and select **Cluster Management**.
1. Click **Platform** in the top-left corner and select **Cluster Management**.
![clusters-management-select](/images/docs/cluster-administration/persistent-volume-and-storage-class/clusters-management-select.jpg)
@ -54,7 +54,7 @@ You need an account granted a role including the authorization of **Cluster Mana
### Common settings
Some settings are commonly used and shared among storage classes. You can find them as dashboard properties on the console, which are also indicated by fields or annotations in the StorageClass manifest. You can see the manifest file in YAML format by enabling **Edit Mode** in the top right corner.
Some settings are commonly used and shared among storage classes. You can find them as dashboard properties on the console, which are also indicated by fields or annotations in the StorageClass manifest. You can see the manifest file in YAML format by enabling **Edit Mode** in the top-right corner.
Here are property descriptions of some commonly used fields in KubeSphere.
| Property | Description |
@ -118,7 +118,7 @@ Ceph RBD is also an in-tree storage plugin on Kubernetes. The volume plugin is a
but the storage server must be installed before you create the storage class of Ceph RBD.
As **hyperkube** images have been [deprecated since 1.17](https://github.com/kubernetes/kubernetes/pull/85094), in-tree Ceph RBD may not work without **hyperkube**.
Nevertheless, you can use [rbd provisioner](https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/rbd) as a substitute, whose format is the same as in-tree Ceph RBD. The only different parameter is `provisioner` (i.e **Storage System** on the KubeSphere console). If you want to use rbd-provisioner, the value of `provisioner` must be `ceph.com/rbd` (Input this value in **Storage System** in the image below). If you use in-tree Ceph RBD, the value must be `kubernetes.io/rbd`.
Nevertheless, you can use [rbd provisioner](https://github.com/kubernetes-incubator/external-storage/tree/master/ceph/rbd) as a substitute, whose format is the same as in-tree Ceph RBD. The only different parameter is `provisioner` (that is, **Storage System** on the KubeSphere console). If you want to use rbd-provisioner, the value of `provisioner` must be `ceph.com/rbd` (enter this value in **Storage System** in the image below). If you use in-tree Ceph RBD, the value must be `kubernetes.io/rbd`.
![storage-system](/images/docs/cluster-administration/persistent-volume-and-storage-class/storage-system.png)
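In manifest form, the difference is just the `provisioner` field. Below is an illustrative fragment only; a working StorageClass also needs `parameters` such as the Ceph monitors and pool, which are omitted here:
```yaml
# StorageClass fragment: ceph.com/rbd for rbd-provisioner,
# kubernetes.io/rbd for the in-tree plugin
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: ceph.com/rbd
```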

View File

@ -6,5 +6,5 @@ linkTitle: "Customize Basic Information"
weight: 8710
---
KubeSphere is an open-source enterprise-grade container platform based on Kubernetes, while it also provides customization services, including customized platform logo and name. For customization services, contact support@kubesphere.cloud.
KubeSphere is an open-source, enterprise-grade container platform based on Kubernetes. It also provides customization services, including a customized platform logo and name. For customization services, contact support@kubesphere.cloud or visit https://kubesphere.cloud/en/.

View File

@ -12,7 +12,7 @@ This tutorial demonstrates how to configure your email server and add recipients
1. Log in to the web console with an account granted the role `platform-admin`.
2. Click **Platform** in the top left corner and select **Platform Settings**.
2. Click **Platform** in the top-left corner and select **Platform Settings**.
3. Navigate to **Email** under **Notification Management**.

View File

@ -36,7 +36,7 @@ You must provide the Slack token on the console for authentication so that KubeS
1. Log in to the web console with an account granted the role `platform-admin`.
2. Click **Platform** in the top left corner and select **Platform Settings**.
2. Click **Platform** in the top-left corner and select **Platform Settings**.
3. Navigate to **Slack** under **Notification Management**.

View File

@ -1,5 +1,5 @@
---
title: "Cluster Shutdown and Restart"
title: "Kubernetes Cluster Shutdown and Restart"
description: "Learn how to gracefully shut down your cluster and restart it."
layout: "single"
@ -8,7 +8,7 @@ weight: 8800
icon: "/images/docs/docs.svg"
---
This document describes the process of gracefully shutting down your cluster and how to restart it. You might need to temporarily shut down your cluster for maintenance reasons.
This document describes the process of gracefully shutting down your Kubernetes cluster and how to restart it. You might need to temporarily shut down your cluster for maintenance reasons.
{{< notice warning >}}
Shutting down a cluster is very dangerous. You must fully understand the operation and its consequences. Please make an etcd backup before you proceed.
@ -42,7 +42,7 @@ done
Then you can shut down other cluster dependencies, such as external storage.
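For reference, the node shutdown loop that ends with `done` above might look like the following sketch; the node selector, SSH user, and one-minute delay are assumptions, not values from this document.

```bash
# Gracefully shut down every node in the cluster over SSH (hedged sketch).
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
  ssh root@"$node" shutdown -h 1   # assumed SSH user; halt after a 1-minute delay
done
```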
## Restart a Cluster Gracefully
You can restart a cluster gracefully after shutting down the cluster gracefully.
You can restart a Kubernetes cluster gracefully after shutting down the cluster gracefully.
### Prerequisites
You have shut down your cluster gracefully.

View File

@ -243,7 +243,7 @@ You must create the projects as shown in the table below in advance. Make sure y
![pipeline-success](/images/docs/devops-user-guide/examples/create-multi-cluster-pipeline/pipeline-success.png)
3. Check the pipeline running logs by clicking **Show Logs** in the upper right corner. For each stage, you click it to inspect logs, which can be downloaded to your local machine for further analysis.
3. Check the pipeline running logs by clicking **Show Logs** in the upper-right corner. For each stage, you can click it to inspect logs, which can be downloaded to your local machine for further analysis.
![pipeline-logs](/images/docs/devops-user-guide/examples/create-multi-cluster-pipeline/pipeline-logs.png)

View File

@ -14,7 +14,7 @@ weight: 11410
## Create a Docker Hub Access Token
1. Log in to [Docker Hub](https://hub.docker.com/) and select **Account Settings** from the menu in the top right corner.
1. Log in to [Docker Hub](https://hub.docker.com/) and select **Account Settings** from the menu in the top-right corner.
![dockerhub-settings](/images/docs/devops-user-guide/examples/compile-and-deploy-a-go-project/dockerhub-settings.jpg)

View File

@ -15,7 +15,7 @@ weight: 11420
## Create a Docker Hub Access Token
1. Log in to [Docker Hub](https://hub.docker.com/) and select **Account Settings** from the menu in the top right corner.
1. Log in to [Docker Hub](https://hub.docker.com/) and select **Account Settings** from the menu in the top-right corner.
![dockerhub-settings](/images/docs/devops-user-guide/examples/compile-and-deploy-a-go-multi-cluster-project/dockerhub-settings.jpg)

View File

@ -83,7 +83,7 @@ You have to configure Docker to disregard security for your Harbor registry.
![create-credentials](/images/docs/devops-user-guide/tool-integration/integrate-harbor-into-pipeline/create-credentials.png)
2. On the **Create Credentials** page, set a credential ID (`robot-test`) and select **Account Credentials** for **Type**. The **Username** field must be the same as the value of `name` in the JSON file you just downloaded and input the value of `token` in the file for **Token/Password**.
2. On the **Create Credentials** page, set a credential ID (`robot-test`) and select **Account Credentials** for **Type**. The **Username** field must be the same as the value of `name` in the JSON file you just downloaded. Enter the value of `token` in the file for **Token/Password**.
![credentials-page](/images/docs/devops-user-guide/tool-integration/integrate-harbor-into-pipeline/credentials-page.png)
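For reference, the downloaded robot account file is a small JSON document along the following lines; both values are placeholders here, and the `robot$` prefix is Harbor's usual naming convention rather than something configured in this tutorial.

```json
{
  "name": "robot$robot-test",
  "token": "<token-string>"
}
```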

View File

@ -6,7 +6,7 @@ linkTitle: "Integrate SonarQube into Pipelines"
weight: 11310
---
[SonarQube](https://www.sonarqube.org/) is a popular continuous inspection tool for code quality. You can use it for static and dynamic analysis of a codebase. After it is integrated into pipelines in KubeSphere, you can view common code issues such as bugs and vulnerabilities directly on the dashboard as SonarQube detects issues in a running pipeline.
[SonarQube](https://www.sonarqube.org/) is a popular continuous inspection tool for code quality. You can use it for static and dynamic analysis of a codebase. After it is integrated into pipelines on the KubeSphere [Container Platform](https://kubesphere.io/), you can view common code issues such as bugs and vulnerabilities directly on the dashboard as SonarQube detects issues in a running pipeline.
This tutorial demonstrates how you can integrate SonarQube into pipelines. Refer to the following steps first before you [create a pipeline using a Jenkinsfile](../../../devops-user-guide/how-to-use/create-a-pipeline-using-jenkinsfile/).
@ -90,7 +90,7 @@ To integrate SonarQube into your pipeline, you must install SonarQube Server fir
![access-sonarqube-console](/images/docs/devops-user-guide/tool-integration/integrate-sonarqube-into-pipeline/access-sonarqube-console.jpg)
3. Click **Log in** in the top right corner and use the default account `admin/admin`.
3. Click **Log in** in the top-right corner and use the default account `admin/admin`.
![log-in-page](/images/docs/devops-user-guide/tool-integration/integrate-sonarqube-into-pipeline/log-in-page.jpg)
@ -106,7 +106,7 @@ To integrate SonarQube into your pipeline, you must install SonarQube Server fir
![sonarqube-config-1](/images/docs/devops-user-guide/tool-integration/integrate-sonarqube-into-pipeline/sonarqube-config-1.jpg)
2. Click **Security** and input a token name, such as `kubesphere`.
2. Click **Security** and enter a token name, such as `kubesphere`.
![sonarqube-config-2](/images/docs/devops-user-guide/tool-integration/integrate-sonarqube-into-pipeline/sonarqube-config-2.jpg)
@ -144,7 +144,7 @@ To integrate SonarQube into your pipeline, you must install SonarQube Server fir
![sonarqube-webhook-3](/images/docs/devops-user-guide/tool-integration/integrate-sonarqube-into-pipeline/sonarqube-webhook-3.jpg)
5. Input **Name** and **Jenkins Console URL** (i.e. the SonarQube Webhook address) in the dialog that appears. Click **Create** to finish.
5. Enter **Name** and **Jenkins Console URL** (that is, the SonarQube Webhook address) in the dialog that appears. Click **Create** to finish.
![webhook-page-info](/images/docs/devops-user-guide/tool-integration/integrate-sonarqube-into-pipeline/webhook-page-info.jpg)
@ -190,7 +190,7 @@ To integrate SonarQube into your pipeline, you must install SonarQube Server fir
http://192.168.0.4:30180
```
3. Access Jenkins with the address `http://{$Public IP}:30180`. When KubeSphere is installed, the Jenkins dashboard is also installed by default. Besides, Jenkins is configured with KubeSphere LDAP, which means you can log in to Jenkins with KubeSphere accounts (e.g. `admin/P@88w0rd`) directly. For more information about configuring Jenkins, see [Jenkins System Settings](../../../devops-user-guide/how-to-use/jenkins-setting/).
3. Access Jenkins with the address `http://{$Public IP}:30180`. When KubeSphere is installed, the Jenkins dashboard is also installed by default. Besides, Jenkins is configured with KubeSphere LDAP, which means you can log in to Jenkins with KubeSphere accounts (for example, `admin/P@88w0rd`) directly. For more information about configuring Jenkins, see [Jenkins System Settings](../../../devops-user-guide/how-to-use/jenkins-setting/).
![jenkins-login-page](/images/docs/devops-user-guide/tool-integration/integrate-sonarqube-into-pipeline/jenkins-login-page.jpg)
@ -289,4 +289,4 @@ You need a SonarQube token so that your pipeline can communicate with SonarQube
After you [create a pipeline using the graphical editing panel](../../how-to-use/create-a-pipeline-using-graphical-editing-panel/) or [create a pipeline using a Jenkinsfile](../../how-to-use/create-a-pipeline-using-jenkinsfile/), you can view the result of code quality analysis. For example, you may see a page similar to the image below if SonarQube runs successfully.
![sonarqube-view-result](/images/docs/devops-user-guide/tool-integration/integrate-sonarqube-into-pipeline/sonarqube-view-result.jpg)
![sonarqube-view-result](/images/docs/devops-user-guide/tool-integration/integrate-sonarqube-into-pipeline/sonarqube-view-result.jpg)

View File

@ -127,7 +127,7 @@ Pipelines include [declarative pipelines](https://www.jenkins.io/doc/book/pipeli
{{</ notice >}}
1. On the graphical editing panel, select **node** from the **Type** drop-down list and input `maven` for **label**.
1. On the graphical editing panel, select **node** from the **Type** drop-down list and enter `maven` for **label**.
{{< notice note >}}
@ -191,7 +191,7 @@ This stage uses SonarQube to test your code. You can skip this stage if you do n
![maven-container](/images/docs/devops-user-guide/using-devops/create-a-pipeline-using-graphical-editing-panels/maven-container.jpg)
3. Click **Add nesting steps** under the `maven` container to add a nested step. Click **withCredentials** and select the SonarQube token (`sonar-token`) from the **Credential ID** list. Input `SONAR_TOKEN` for **Text Variable**, then click **OK**.
3. Click **Add nesting steps** under the `maven` container to add a nested step. Click **withCredentials** and select the SonarQube token (`sonar-token`) from the **Credential ID** list. Enter `SONAR_TOKEN` for **Text Variable**, then click **OK**.
![sonarqube-credentials](/images/docs/devops-user-guide/using-devops/create-a-pipeline-using-graphical-editing-panels/sonarqube-credentials.jpg)
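In the generated Jenkinsfile, this panel step roughly corresponds to a `withCredentials` block like the following sketch (the surrounding stage and container are omitted):

```groovy
withCredentials([string(credentialsId: 'sonar-token', variable: 'SONAR_TOKEN')]) {
    // nested steps that read the token from the SONAR_TOKEN variable go here
}
```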
@ -215,7 +215,7 @@ This stage uses SonarQube to test your code. You can skip this stage if you do n
![sonarqube-shell-new](/images/docs/devops-user-guide/using-devops/create-a-pipeline-using-graphical-editing-panels/sonarqube-shell-new.jpg)
8. Click **Add nesting steps** (the third one) for the **container** step directly and select **timeout**. Input `1` for time and select **Hours** for unit. Click **OK** to finish.
8. Click **Add nesting steps** (the third one) for the **container** step directly and select **timeout**. Enter `1` for time and select **Hours** for unit. Click **OK** to finish.
![add-nested-step-2](/images/docs/devops-user-guide/using-devops/create-a-pipeline-using-graphical-editing-panels/add-nested-step-2.jpg)
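Similarly, the **timeout** step configured here roughly corresponds to the following Jenkinsfile sketch:

```groovy
timeout(time: 1, unit: 'HOURS') {
    // steps that must finish within one hour go here
}
```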
@ -330,7 +330,7 @@ This stage uses SonarQube to test your code. You can skip this stage if you do n
{{</ notice >}}
5. When you finish the steps above, click **Confirm** and **Save** in the bottom right corner. You can see the pipeline now has a complete workflow with each stage clearly listed on the pipeline. When you define a pipeline using the graphical editing panel, KubeSphere automatically creates its corresponding Jenkinsfile. Click **Edit Jenkinsfile** to view the Jenkinsfile.
5. When you finish the steps above, click **Confirm** and **Save** in the bottom-right corner. You can see the pipeline now has a complete workflow with each stage clearly listed on the pipeline. When you define a pipeline using the graphical editing panel, KubeSphere automatically creates its corresponding Jenkinsfile. Click **Edit Jenkinsfile** to view the Jenkinsfile.
![pipeline-done](/images/docs/devops-user-guide/using-devops/create-a-pipeline-using-graphical-editing-panels/pipeline-done.jpg)
@ -362,7 +362,7 @@ This stage uses SonarQube to test your code. You can skip this stage if you do n
![complete](/images/docs/devops-user-guide/using-devops/create-a-pipeline-using-graphical-editing-panels/complete.jpg)
3. Click **Show Logs** in the top right corner to inspect all the logs. Click each stage to see detailed logs of it. You can debug any problems based on the logs which also can be downloaded locally for further analysis.
3. Click **Show Logs** in the top-right corner to inspect all the logs. Click each stage to see its detailed logs. You can debug any problems based on the logs, which can also be downloaded to your local machine for further analysis.
![inspect-logs](/images/docs/devops-user-guide/using-devops/create-a-pipeline-using-graphical-editing-panels/inspect-logs.jpg)

View File

@ -247,7 +247,7 @@ The account `project-admin` needs to be created in advance since it is the revie
![pipeline-proceed](/images/docs/devops-user-guide/using-devops/create-a-pipeline-using-a-jenkinsfile/pipeline-proceed.png)
In a development or production environment, it requires someone who has higher authority (e.g. release manager) to review the pipeline, images, as well as the code analysis result. They have the authority to determine whether the pipeline can go to the next stage. In the Jenkinsfile, you use the section `input` to specify who reviews the pipeline. If you want to specify a user (e.g. `project-admin`) to review it, you can add a field in the Jenkinsfile. If there are multiple users, you need to use commas to separate them as follows:
In a development or production environment, someone with higher authority (for example, a release manager) is required to review the pipeline, images, and the code analysis result. They have the authority to determine whether the pipeline can go to the next stage. In the Jenkinsfile, you use the section `input` to specify who reviews the pipeline. If you want to specify a user (for example, `project-admin`) to review it, you can add a field in the Jenkinsfile. If there are multiple users, you need to use commas to separate them as follows:
```groovy
···
@ -267,7 +267,7 @@ The account `project-admin` needs to be created in advance since it is the revie
![inspect-pipeline-log-1](/images/docs/devops-user-guide/using-devops/create-a-pipeline-using-a-jenkinsfile/inspect-pipeline-log-1.png)
2. Check the pipeline running logs by clicking **Show Logs** in the top right corner. You can see the dynamic log output of the pipeline, including any errors that may stop the pipeline from running. For each stage, you click it to inspect logs, which can be downloaded to your local machine for further analysis.
2. Check the pipeline running logs by clicking **Show Logs** in the top-right corner. You can see the dynamic log output of the pipeline, including any errors that may stop the pipeline from running. For each stage, you can click it to inspect logs, which can be downloaded to your local machine for further analysis.
![inspect-pipeline-log-2](/images/docs/devops-user-guide/using-devops/create-a-pipeline-using-a-jenkinsfile/inspect-pipeline-log-2.jpg)
@ -316,7 +316,7 @@ The account `project-admin` needs to be created in advance since it is the revie
![access-endpoint](/images/docs/devops-user-guide/using-devops/create-a-pipeline-using-a-jenkinsfile/access-endpoint.png)
2. Use the **web kubectl** from **Toolbox** in the bottom right corner by executing the following command:
2. Use the **web kubectl** from **Toolbox** in the bottom-right corner by executing the following command:
```bash
curl 10.233.120.230:8080

View File

@ -48,7 +48,7 @@ Log in to the console of KubeSphere as `project-regular`. Navigate to your DevOp
### Create GitHub credentials
Similarly, follow the same steps above to create GitHub credentials. Set a different Credential ID (e.g. `github-id`) and also select **Account Credentials** for **Type**. Enter your GitHub username and password for **Username** and **Token/Password** respectively.
Similarly, follow the same steps above to create GitHub credentials. Set a different Credential ID (for example, `github-id`) and also select **Account Credentials** for **Type**. Enter your GitHub username and password for **Username** and **Token/Password** respectively.
{{< notice note >}}
@ -58,7 +58,7 @@ If there are any special characters such as `@` and `$` in your account or passw
### Create kubeconfig credentials
Similarly, follow the same steps above to create kubeconfig credentials. Set a different Credential ID (e.g. `demo-kubeconfig`) and select **kubeconfig**.
Similarly, follow the same steps above to create kubeconfig credentials. Set a different Credential ID (for example, `demo-kubeconfig`) and select **kubeconfig**.
{{< notice info >}}

View File

@ -84,7 +84,7 @@ You need to create two projects, such as `kubesphere-sample-dev` and `kubesphere
![create-pipeline](/images/docs/devops-user-guide/using-devops/gitlab-multibranch-pipeline/create-pipeline.png)
3. In the **GitLab** tab, select the default option `https://gitlab.com` for GitLab Server, enter the username of the GitLab project owner for **Owner**, and then select the `devops-java-sample` repository from the drop-down list for **Repository Name**. Click the tick icon in the bottom right corner and then click **Next**.
3. In the **GitLab** tab, select the default option `https://gitlab.com` for GitLab Server, enter the username of the GitLab project owner for **Owner**, and then select the `devops-java-sample` repository from the drop-down list for **Repository Name**. Click the tick icon in the bottom-right corner and then click **Next**.
![select-gitlab](/images/docs/devops-user-guide/using-devops/gitlab-multibranch-pipeline/select-gitlab.png)
@ -122,7 +122,7 @@ You need to create two projects, such as `kubesphere-sample-dev` and `kubesphere
### Step 6: Check the pipeline status
1. In the **Task Status** tab, you can see how a pipeline is running. Check the pipeline running logs by clicking **Show Logs** in the top right corner.
1. In the **Task Status** tab, you can see how a pipeline is running. Check the pipeline running logs by clicking **Show Logs** in the top-right corner.
![check-log](/images/docs/devops-user-guide/using-devops/gitlab-multibranch-pipeline/check-log.png)

View File

@ -16,7 +16,7 @@ The built-in Jenkins cannot share the same email configuration with the platform
## Set the Email Server
1. Click **Platform** in the top left corner and select **Cluster Management**.
1. Click **Platform** in the top-left corner and select **Cluster Management**.
![clusters-management](/images/docs/devops-user-guide/using-devops/jenkins-email/clusters-management.jpg)
@ -39,7 +39,7 @@ The built-in Jenkins cannot share the same email configuration with the platform
| Environment Variable Name | Description |
| ------------------------- | -------------------------------- |
| EMAIL\_SMTP\_HOST | SMTP server address |
| EMAIL\_SMTP\_PORT | SMTP server port (e.g. 25) |
| EMAIL\_SMTP\_PORT | SMTP server port (for example, 25) |
| EMAIL\_FROM\_ADDR | Email sender address |
| EMAIL\_FROM\_NAME | Email sender name |
| EMAIL\_FROM\_PASS | Email sender password |
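As a sketch of how these variables might be set in the container spec of the Jenkins deployment (the host and addresses below are placeholders, and where exactly they are set depends on your installation):

```yaml
env:
  - name: EMAIL_SMTP_HOST
    value: "mail.example.com"   # placeholder SMTP server address
  - name: EMAIL_SMTP_PORT
    value: "25"
  - name: EMAIL_FROM_ADDR
    value: "sender@example.com" # placeholder sender address
```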

View File

@ -20,7 +20,7 @@ You have enabled [the KubeSphere DevOps System](../../../pluggable-components/de
It is recommended that you configure Jenkins in KubeSphere through Configuration-as-Code (CasC). The built-in Jenkins CasC file is stored as a [ConfigMap](../../../project-user-guide/configuration/configmaps/).
1. Log in to KubeSphere as `admin`. Click **Platform** in the top left corner and select **Cluster Management**.
1. Log in to KubeSphere as `admin`. Click **Platform** in the top-left corner and select **Cluster Management**.
![cluster-management](/images/docs/devops-user-guide/using-devops/jenkins-system-settings/cluster-management.jpg)
@ -56,7 +56,7 @@ After you modified `jenkins-casc-config`, you need to reload your updated system
http://192.168.0.4:30180
```
3. Access Jenkins at `http://Node IP:Port Number`. When KubeSphere is installed, the Jenkins dashboard is also installed by default. Besides, Jenkins is configured with KubeSphere LDAP, which means you can log in to Jenkins with KubeSphere accounts (e.g. `admin/P@88w0rd`) directly.
3. Access Jenkins at `http://Node IP:Port Number`. When KubeSphere is installed, the Jenkins dashboard is also installed by default. Besides, Jenkins is configured with KubeSphere LDAP, which means you can log in to Jenkins with KubeSphere accounts (for example, `admin/P@88w0rd`) directly.
![jenkins-dashboard](/images/docs/devops-user-guide/using-devops/jenkins-system-settings/jenkins-dashboard.jpg)

View File

@ -158,7 +158,7 @@ You can select a pipeline from the drop-down list for **When Create Pipeline** a
![webhook-push](/images/docs/devops-user-guide/using-devops/pipeline-settings/webhook-push.png)
**Webhook Push** is an efficient way to allow pipelines to discover changes in the remote code repository and automatically trigger a new running. Webhook should be the primary method to trigger Jenkins automatic scanning for GitHub and Git (e.g. GitLab).
**Webhook Push** is an efficient way to allow pipelines to discover changes in the remote code repository and automatically trigger a new run. Webhook should be the primary method to trigger automatic Jenkins scanning for GitHub and Git (for example, GitLab).
### Advanced Settings with No Code Repository Selected

View File

@ -16,7 +16,7 @@ You need an account granted a role including the authorization of **Cluster Mana
## Label a CI Node
1. Click **Platform** in the top left corner and select **Cluster Management**.
1. Click **Platform** in the top-left corner and select **Cluster Management**.
![clusters-management](/images/docs/devops-user-guide/using-devops/set-ci-node-for-dependency-cache/clusters-management.jpg)
@ -36,7 +36,7 @@ You need an account granted a role including the authorization of **Cluster Mana
{{< notice note >}}
The node may already have the key without a value. You can input the value `ci` directly.
The node may already have the key without a value. You can enter the value `ci` directly.
{{</ notice >}}
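If you prefer the command line, labeling the node is roughly equivalent to the following sketch. The label key shown is an assumption based on typical worker-node labels, so check the key displayed on your console first:

```bash
# Hedged equivalent of the console operation; verify the actual label key first.
kubectl label node <node-name> node-role.kubernetes.io/worker=ci --overwrite
```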

View File

@ -48,7 +48,7 @@ A DevOps project user with required permissions can configure credentials for pi
### Members and roles
Similar to a project, a DevOps project also requires users to be granted different roles before they can work in the DevOps project. Project administrators (e.g. `project-admin`) are responsible for inviting tenants and granting them different roles. For more information, see [Role and Member Management](../role-and-member-management/).
Similar to a project, a DevOps project also requires users to be granted different roles before they can work in the DevOps project. Project administrators (for example, `project-admin`) are responsible for inviting tenants and granting them different roles. For more information, see [Role and Member Management](../role-and-member-management/).
## Edit or Delete a DevOps Project

View File

@ -17,7 +17,7 @@ In DevOps project scope, you can grant the following resources' permissions to a
## Prerequisites
At least one DevOps project has been created, such as `demo-devops`. Besides, you need an account of the `admin` role (e.g. `devops-admin`) at the DevOps project level.
At least one DevOps project has been created, such as `demo-devops`. Besides, you need an account with the `admin` role (for example, `devops-admin`) at the DevOps project level.
## Built-in Roles
@ -31,7 +31,7 @@ In **Project Roles**, there are three available built-in roles as shown below. B
## Create a DevOps Project Role
1. Log in to the console as `devops-admin` and select a DevOps project (e.g. `demo-devops`) under **DevOps Projects** list.
1. Log in to the console as `devops-admin` and select a DevOps project (for example, `demo-devops`) from the **DevOps Projects** list.
{{< notice note >}}

View File

@ -10,12 +10,12 @@ As an open-source and app-centric container platform, KubeSphere integrates 16 b
## Prerequisites
- You need to use an account with the role of `platform-admin` (e.g. `admin`) for this tutorial.
- You need to use an account with the role of `platform-admin` (for example, `admin`) for this tutorial.
- You need to [enable the App Store](../../../pluggable-components/app-store/).
## Remove a Built-in App
1. Log in to the web console of KubeSphere as `admin`, click **Platform** in the upper left corner, and then select **App Store Management**.
1. Log in to the web console of KubeSphere as `admin`, click **Platform** in the upper-left corner, and then select **App Store Management**.
![click-platform](/images/docs/faq/applications/remove-built-in-apps/click-platform.PNG)

View File

@ -27,11 +27,11 @@ To deploy an app in KubeSphere, tenants can go to the App Store and select the a
### Reuse the same app name
1. If you try to deploy a new Redis app with the same app name as `redis-1`, you can see the following error prompt in the upper right corner.
1. If you try to deploy a new Redis app with the same app name as `redis-1`, you can see the following error prompt in the upper-right corner.
![error-prompt](/images/docs/faq/applications/use-the-same-app-name-after-deletion/error-prompt.PNG)
2. In your project, go to **Secrets** under **Configurations**, and input `redis-1` in the search bar to search the Secret.
2. In your project, go to **Secrets** under **Configurations**, and enter `redis-1` in the search bar to search the Secret.
![search-secret](/images/docs/faq/applications/use-the-same-app-name-after-deletion/search-secret.PNG)
@ -39,7 +39,7 @@ To deploy an app in KubeSphere, tenants can go to the App Store and select the a
![delete-secret](/images/docs/faq/applications/use-the-same-app-name-after-deletion/delete-secret.PNG)
4. In the dialog that appears, input the Secret name and click **OK** to delete it.
4. In the dialog that appears, enter the Secret name and click **OK** to delete it.
![confirm-delete](/images/docs/faq/applications/use-the-same-app-name-after-deletion/confirm-delete.PNG)
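If you prefer the command line, deleting the leftover Secret is roughly equivalent to the following sketch (the project name is a placeholder):

```bash
# Delete the Secret left behind by the removed app (hedged sketch).
kubectl delete secret redis-1 -n <your-project>
```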

View File

@ -16,7 +16,7 @@ You have installed KubeSphere.
## Change the Console Language
1. Log in to KubeSphere with your account and click the account name in the top right corner.
1. Log in to KubeSphere with your account and click the account name in the top-right corner.
2. Select **User Settings**.

View File

@ -18,7 +18,7 @@ Editing resources in `system-workspace` may cause unexpected results, such as Ku
## Edit the Console Configuration
1. Log in to KubeSphere as `admin`. Click the hammer icon in the bottom right corner and select **Kubectl**.
1. Log in to KubeSphere as `admin`. Click the hammer icon in the bottom-right corner and select **Kubectl**.
2. Execute the following command:

View File

@ -30,7 +30,7 @@ You need to enable [the KubeSphere DevOps system](../../../pluggable-components/
echo http://$NODE_IP:$NODE_PORT
```
2. You can get the output similar to the following. You can access the Jenkins dashboard through the address with your own KubeSphere account and password (e.g. `admin/P@88w0rd`).
2. You will get output similar to the following. You can access the Jenkins dashboard through the address with your own KubeSphere account and password (for example, `admin/P@88w0rd`).
```
http://192.168.0.4:30180
@ -52,7 +52,7 @@ You need to enable [the KubeSphere DevOps system](../../../pluggable-components/
![click-manage-plugins](/images/docs/faq/devops/install-plugins-to-jenkins/click-manage-plugins.png)
3. Select the **Available** tab and you can see all the available plugins listed on the page. You can also use the **Filter** in the upper right corner to search for the plugins you need. Check the checkbox next to the plugin you need, and then click **Install without restart** or **Download now and install after restart** based on your needs.
3. Select the **Available** tab and you can see all the available plugins listed on the page. You can also use the **Filter** in the upper-right corner to search for the plugins you need. Check the checkbox next to the plugin you need, and then click **Install without restart** or **Download now and install after restart** based on your needs.
![available-plugins](/images/docs/faq/devops/install-plugins-to-jenkins/available-plugins.png)

View File

@ -36,7 +36,7 @@ Docker needs to be installed in advance for this method.
{{</ notice >}}
1. Execute the following commands:
1. Run the following commands:
```bash
sudo mkdir -p /etc/docker
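# A hedged sketch of the remaining configuration; the mirror URL below is a
# placeholder, so replace it with your own Booster URL.
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://<your-booster-id>.mirror.aliyuncs.com"]
}
EOF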
@ -58,7 +58,7 @@ Docker needs to be installed in advance for this method.
Make sure you replace the address within the quotation mark above with your own Booster URL.
{{</ notice >}}
{{</ notice >}}
3. Save the file and reload Docker by executing the following commands so that the change can take effect.
@ -76,12 +76,18 @@ Docker needs to be installed in advance for this method.
```yaml
registry:
registryMirrors: [] # For users who need to speed up downloads
insecureRegistries: [] # Set an address of insecure image registry. See https://docs.docker.com/registry/insecure/
privateRegistry: "" # Configure a private image registry for air-gapped installation (e.g. docker local registry or Harbor)
registryMirrors: []
insecureRegistries: []
privateRegistry: ""
```
2. Input the registry mirror address above and save the file. For more information about the installation process, see [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/).
{{< notice note >}}
For more information about each parameter under the `registry` section, see [Kubernetes Cluster Configurations](../../../installing-on-linux/introduction/vars/).
{{</ notice >}}
2. Provide the registry mirror address as the value of `registryMirrors` and save the file. For more information about installation, see [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/).
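   For instance, a filled-in `registry` section might look like the following sketch (the mirror address is a placeholder):

   ```yaml
   registry:
     registryMirrors: ["https://<your-mirror-address>"]
     insecureRegistries: []
     privateRegistry: ""
   ```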
{{< notice note >}}

View File

@ -6,7 +6,7 @@ linkTitle: "Telemetry in KubeSphere"
Weight: 16300
---
Telemetry collects aggregate information about the size of KubeSphere clusters installed, KubeSphere and Kubernetes versions, components enabled, cluster running time, error logs, etc. KubeSphere promises that the information is only used by the KubeSphere community to improve products and will not be shared with any third parties.
Telemetry collects aggregate information about the size of installed KubeSphere clusters, KubeSphere and Kubernetes versions, enabled components, cluster running time, error logs, and so on. KubeSphere promises that the information is only used by the KubeSphere community to improve products and will not be shared with any third parties.
## What Information Is Collected
@ -29,7 +29,7 @@ Telemetry is enabled by default when you install KubeSphere, while you also have
### Disable Telemetry before installation
When you install KubeSphere on existing Kubernetes clusters, you need to download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/cluster-configuration.yaml) for cluster setting. If you want to disable Telemetry, do not use `kubectl apply -f` directly for this file.
When you install KubeSphere on an existing Kubernetes cluster, you need to download the file [cluster-configuration.yaml](https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/cluster-configuration.yaml) for cluster settings. If you want to disable Telemetry, do not run `kubectl apply -f` directly for this file.
{{< notice note >}}
@ -43,27 +43,28 @@ If you install KubeSphere on Linux, see [Disable Telemetry after Installation](.
vi cluster-configuration.yaml
```
2. In this local cluster-configuration.yaml file, scroll down to the bottom of the file and add the value `telemetry_enabled: false` as follows:
2. In this local `cluster-configuration.yaml` file, scroll down to the bottom of the file and add `telemetry_enabled: false` as follows:
```yaml
openpitrix:
enabled: false
store:
enabled: false
servicemesh:
enabled: false
telemetry_enabled: false # Add this line here to disable Telemetry.
telemetry_enabled: false # Add this line manually to disable Telemetry.
```
3. Save the file after you finish and execute the following commands to start installation.
3. Save the file and run the following commands to start installation.
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.1.0/kubesphere-installer.yaml
kubectl apply -f cluster-configuration.yaml
```
### Disable Telemetry after installation
1. Log in to the console as `admin` and click **Platform** in the top left corner.
1. Log in to the console as `admin` and click **Platform** in the top-left corner.
2. Select **Cluster Management** and navigate to **CRDs**.
@ -71,20 +72,15 @@ If you install KubeSphere on Linux, see [Disable Telemetry after Installation](.
If you have enabled [the multi-cluster feature](../../../multicluster-management/), you need to select a cluster first.
{{</ notice >}}
3. Input `clusterconfiguration` in the search bar and click the result to go to its detail page.
3. Enter `clusterconfiguration` in the search bar and click the result to go to its detail page.
![edit-crd](/images/docs/faq/telemetry-in-kubesphere/edit-crd.jpg)
4. Click <img src="/images/docs/faq/installation/telemetry-in-kubesphere/three-dots.png" height="20px"> on the right of `ks-installer` and select **Edit YAML**.
4. Click the three dots on the right of `ks-installer` and select **Edit YAML**.
5. Scroll down to the bottom of the file, add `telemetry_enabled: false`, and then click **Update**.
![edit-ks-installer](/images/docs/faq/telemetry-in-kubesphere/edit-ks-installer.jpg)
5. Scroll down to the bottom of the file and add the value `telemetry_enabled: false`. When you finish, click **Update**.
![enable-telemetry](/images/docs/faq/telemetry-in-kubesphere/enable-telemetry.jpg)
{{< notice note >}}
If you want to enable Telemetry again, you can update `ks-installer` by deleting the value `telemetry_enabled: false` or changing it to `telemetry_enabled: true`.
If you want to enable Telemetry again, you can update `ks-installer` by deleting `telemetry_enabled: false` or changing it to `telemetry_enabled: true`.
{{</ notice >}}

View File

@ -194,4 +194,4 @@ Now that your own Prometheus stack is up and running, you can change KubeSphere'
If you enable or disable KubeSphere pluggable components following [this guide](https://kubesphere.io/docs/pluggable-components/overview/), the `monitoring endpoint` will be reset to the original one. In this case, you have to change it to the new one and then restart the KubeSphere APIServer.
{{</ notice >}}
{{</ notice >}}

View File

@ -14,7 +14,7 @@ Azure can help you implement infrastructure as code by providing resource deploy
### Use Azure Cloud Shell
You don't have to install Azure CLI on your machine as Azure provides a web-based terminal. Click the Cloud Shell button on the menu bar at the upper right corner in Azure portal.
You don't have to install Azure CLI on your machine as Azure provides a web-based terminal. Click the Cloud Shell button on the menu bar in the upper-right corner of the Azure portal.
![Cloud Shell](/images/docs/aks/aks-launch-icon.png)

View File

@ -18,11 +18,11 @@ A Kubernetes cluster in DO is a prerequisite for installing KubeSphere. Go to yo
You need to select:
1. Kubernetes version (e.g. *1.18.6-do.0*)
2. Datacenter region (e.g. *Frankfurt*)
3. VPC network (e.g. *default-fra1*)
4. Cluster capacity (e.g. 2 standard nodes with 2 vCPUs and 4GB of RAM each)
5. A name for the cluster (e.g. *kubesphere-3*)
1. Kubernetes version (for example, *1.18.6-do.0*)
2. Datacenter region (for example, *Frankfurt*)
3. VPC network (for example, *default-fra1*)
4. Cluster capacity (for example, 2 standard nodes with 2 vCPUs and 4GB of RAM each)
5. A name for the cluster (for example, *kubesphere-3*)
![config-cluster-do](/images/docs/do/config-cluster-do.png)
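If you prefer the DigitalOcean CLI to the web console, the same selections could be made with a `doctl` command along the following lines; this is a hedged sketch whose flags mirror the choices above:

```bash
# Create the cluster from the command line (hedged sketch).
doctl kubernetes cluster create kubesphere-3 \
  --region fra1 \
  --version 1.18.6-do.0 \
  --count 2 \
  --size s-2vcpu-4gb
```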

View File

@ -8,7 +8,7 @@ weight: 4110
![kubesphere+k8s](/images/docs/installing-on-kubernetes/introduction/overview/kubesphere+k8s.png)
As part of KubeSphere's commitment to provide a plug-and-play architecture for users, it can be easily installed on existing Kubernetes clusters. More specifically, KubeSphere can be deployed on Kubernetes either hosted on clouds (e.g. AWS EKS, QingCloud QKE and Google GKE) or on-premises. This is because KubeSphere does not hack Kubernetes itself. It only interacts with the Kubernetes API to manage Kubernetes cluster resources. In other words, KubeSphere can be installed on any native Kubernetes cluster and Kubernetes distribution.
As part of KubeSphere's commitment to providing a plug-and-play architecture for users, it can be easily installed on existing Kubernetes clusters. More specifically, KubeSphere can be deployed on Kubernetes either hosted on clouds (for example, AWS EKS, QingCloud QKE, and Google GKE) or on-premises. This is because KubeSphere does not hack Kubernetes itself. It only interacts with the Kubernetes API to manage Kubernetes cluster resources. In other words, KubeSphere can be installed on any native Kubernetes cluster and Kubernetes distribution.
This section gives you an overview of the general steps of installing KubeSphere on Kubernetes. For more information about the specific way of installation in different environments, see Installing on Hosted Kubernetes and Installing on On-premises Kubernetes.

View File

@ -90,7 +90,7 @@ To make sure edge nodes can successfully talk to your cluster, you must forward
## Add an Edge Node
1. Log in to the console as `admin` and click **Platform** in the top left corner.
1. Log in to the console as `admin` and click **Platform** in the top-left corner.
2. Select **Cluster Management** and navigate to **Edge Nodes** under **Node Management**.

View File

@ -76,7 +76,7 @@ You can skip this step if you already have the configuration file on your machin
## Add Master Nodes for High Availability
The steps of adding master nodes are generally the same as adding worker nodes while you need to configure a load balancer for your cluster. You can use any cloud load balancers or hardware load balancers (e.g. F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating highly available clusters.
The steps for adding master nodes are generally the same as for adding worker nodes, except that you need to configure a load balancer for your cluster. You can use any cloud load balancer or hardware load balancer (for example, F5). In addition, Keepalived with [HAProxy](https://www.haproxy.com/) or Nginx is also an alternative for creating highly available clusters.
1. Create a configuration file using KubeKey.

View File

@ -6,7 +6,7 @@ linkTitle: "Set up an HA Cluster Using a Load Balancer"
weight: 3210
---
You can set up a single-master Kubernetes cluster with KubeSphere installed based on the tutorial of [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/). Single-master clusters may be sufficient for development and testing in most cases. For a production environment, however, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (e.g. F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
You can set up a single-master Kubernetes cluster with KubeSphere installed based on the tutorial of [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/). Single-master clusters may be sufficient for development and testing in most cases. For a production environment, however, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer or any hardware load balancer (for example, F5). In addition, Keepalived with [HAProxy](https://www.haproxy.com/) or Nginx is also an alternative for creating high-availability clusters.
This tutorial demonstrates the general configurations of a high-availability cluster as you install KubeSphere on Linux.
@ -163,7 +163,7 @@ For more information about different fields in this configuration file, see [Kub
### Persistent storage plugin configurations
For a production environment, you need to prepare persistent storage and configure the storage plugin (e.g. CSI) in `config-sample.yaml` to define which storage service you want to use. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).
For a production environment, you need to prepare persistent storage and configure the storage plugin (for example, CSI) in `config-sample.yaml` to define which storage service you want to use. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).
### Enable pluggable components (Optional)

View File

@ -168,7 +168,7 @@ Keepalived must be installed on both machines while the configuration of them is
- For the `interface` field, you must provide your own network card information. You can run `ifconfig` on your machine to get the value.
- The IP address provided for `unicast_src_ip` is the IP address of your current machine. For other machines where HAproxy and Keepalived are also installed for load balancing, their IP address must be input for the field `unicast_peer`.
- The IP address provided for `unicast_src_ip` is the IP address of your current machine. For other machines where HAproxy and Keepalived are also installed for load balancing, their IP address must be provided for the field `unicast_peer`.
{{</ notice >}}
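Putting these fields together, the relevant part of `keepalived.conf` might look like the following sketch; the interface name and all IP addresses are placeholders for your own environment:

```
vrrp_instance haproxy-vip {
    interface eth0                # your own network card, from ifconfig
    unicast_src_ip 172.16.0.2     # IP address of the current machine
    unicast_peer {
        172.16.0.3                # IP addresses of the other load balancer machines
        172.16.0.4
    }
}
```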

View File

@ -16,7 +16,7 @@ This section gives you an overview of a single-master multi-node installation, i
## Concept
A multi-node cluster is composed of at least one master node and one worker node. You can use any node as the **taskbox** to carry out the installation task. You can add additional nodes based on your needs (e.g. for high availability) both before and after the installation.
A multi-node cluster is composed of at least one master node and one worker node. You can use any node as the **taskbox** to carry out the installation task. You can add additional nodes based on your needs (for example, for high availability) both before and after the installation.
- **Master**. A master node generally hosts the control plane that controls and manages the whole system.
- **Worker**. Worker nodes run the actual applications deployed on them.
@ -177,7 +177,7 @@ Here are some examples for your reference:
./kk create config [-f ~/myfolder/abc.yaml]
```
- You can specify a KubeSphere version that you want to install (e.g. `--with-kubesphere v3.1.0`).
- You can specify a KubeSphere version that you want to install (for example, `--with-kubesphere v3.1.0`).
```bash
./kk create config --with-kubesphere [version]
@ -219,7 +219,7 @@ List all your machines under `hosts` and add their detailed information as above
`name`: The hostname of the instance.
`address`: The IP address you use for the connection between the taskbox and other instances through SSH. This can be either the public IP address or the private IP address depending on your environment. For example, some cloud platforms provide every instance with a public IP address which you use to access instances through SSH. In this case, you can input the public IP address for this field.
`address`: The IP address you use for the connection between the taskbox and other instances through SSH. This can be either the public IP address or the private IP address depending on your environment. For example, some cloud platforms provide every instance with a public IP address which you use to access instances through SSH. In this case, you can provide the public IP address for this field.
`internalAddress`: The private IP address of the instance.
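Putting these fields together, a single `hosts` entry might look like the following sketch; the addresses and credentials are placeholders:

```yaml
spec:
  hosts:
  - {name: master, address: 203.0.113.10, internalAddress: 192.168.0.2, user: ubuntu, password: Testing123}
```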
@ -278,7 +278,7 @@ The `controlPlaneEndpoint` is where you provide your external load balancer info
#### addons
You can customize persistent storage plugins (e.g. NFS Client, Ceph RBD, and GlusterFS) by specifying storage under the field `addons` in `config-sample.yaml`. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).
You can customize persistent storage plugins (for example, NFS Client, Ceph RBD, and GlusterFS) by specifying storage under the field `addons` in `config-sample.yaml`. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/persistent-storage-configurations/understand-persistent-storage/).
KubeKey will install [OpenEBS](https://openebs.io/) to provision [LocalPV](https://kubernetes.io/docs/concepts/storage/volumes/#local) for development and testing environments by default, which is convenient for new users. In this example of multi-node installation, the default storage class (local volume) is used. For production, you can use NFS/Ceph/GlusterFS/CSI or commercial products as persistent storage solutions.

View File

@ -244,7 +244,7 @@ chmod +x kk
With KubeKey, you can install Kubernetes and KubeSphere together. You have the option to create a multi-node cluster by customizing parameters in the configuration file.
Create a Kubernetes cluster with KubeSphere installed (e.g. `--with-kubesphere v3.1.0`):
Create a Kubernetes cluster with KubeSphere installed (for example, `--with-kubesphere v3.1.0`):
```bash
./kk create config --with-kubernetes v1.20.4 --with-kubesphere v3.1.0

View File

@ -9,7 +9,7 @@ weight: 3510
## Introduction
For a production environment, we need to consider the high availability of the cluster. If the key components (e.g. kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, we need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (e.g. F5). In addition, Keepalived and [HAProxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
For a production environment, we need to consider the high availability of the cluster. If the key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, we need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer or any hardware load balancer (for example, F5). In addition, Keepalived with [HAProxy](https://www.haproxy.com/) or Nginx is also an alternative for creating high-availability clusters.
This tutorial walks you through an example of how to create Keepalived and HAProxy, and implement high availability of master and etcd nodes using the load balancers on VMware vSphere.
@ -77,7 +77,7 @@ You can follow the New Virtual Machine wizard to create a virtual machine to pla
![kubesphereOnVsphere-en-0-1-7-hardware-4](/images/docs/vsphere/kubesphereOnVsphere-en-0-1-7-hardware-4.png)
6. In **Ready to complete** page, you review the configuration selections that you have made for the virtual machine. Click **Finish** at the bottom right corner to continue.
6. On the **Ready to complete** page, review the configuration selections that you have made for the virtual machine. Click **Finish** at the bottom-right corner to continue.
![kubesphereOnVsphere-en-0-1-8](/images/docs/vsphere/kubesphereOnVsphere-en-0-1-8.png)
@ -345,7 +345,7 @@ chmod +x kk
With KubeKey, you can install Kubernetes and KubeSphere together. You have the option to create a multi-node cluster by customizing parameters in the configuration file.
Create a Kubernetes cluster with KubeSphere installed (e.g. `--with-kubesphere v3.1.0`):
Create a Kubernetes cluster with KubeSphere installed (for example, `--with-kubesphere v3.1.0`):
```bash
./kk create config --with-kubernetes v1.19.8 --with-kubesphere v3.1.0

View File

@ -77,7 +77,7 @@ mountOptions:
#### Add-on configurations
Save the above chart config and StorageClass locally (e.g. `/root/ceph-csi-rbd.yaml` and `/root/ceph-csi-rbd-sc.yaml`). The add-on configuration can be set like:
Save the above chart config and StorageClass locally (for example, `/root/ceph-csi-rbd.yaml` and `/root/ceph-csi-rbd-sc.yaml`). The add-on configuration can be set like:
```yaml
addons:
@ -115,7 +115,7 @@ If you want to configure more values, see [chart configuration for rbd-provision
#### Add-on configurations
Save the above chart config locally (e.g. `/root/rbd-provisioner.yaml`). The add-on config for rbd provisioner cloud be like:
Save the above chart config locally (for example, `/root/rbd-provisioner.yaml`). The add-on configuration for rbd-provisioner could be as follows:
```yaml
- name: rbd-provisioner

View File

@ -284,7 +284,7 @@ glusterfs (default) kubernetes.io/glusterfs Delete Immediate
### KubeSphere console
1. Log in to the web console with the default account and password (`admin/P@88w0rd`) at `<NodeIP>:30880`. Click **Platform** in the top left corner and select **Cluster Management**.
1. Log in to the web console with the default account and password (`admin/P@88w0rd`) at `<NodeIP>:30880`. Click **Platform** in the top-left corner and select **Cluster Management**.
3. Go to **Volumes** under **Storage**, and you can see PVCs in use.

View File

@ -53,7 +53,7 @@ Install `nfs-common` on all of the clients. It provides necessary NFS functions
{{< notice note >}}
- If you want to configure more values, see [chart configurations for NFS-client](https://github.com/kubesphere/helm-charts/tree/master/src/main/nfs-client-provisioner#configuration).
- The `storageClass.defaultClass` field controls whether you want to set the storage class of NFS-client Provisioner as the default one. If you input `false` for it, KubeKey will install [OpenEBS](https://github.com/openebs/openebs) to provide local volumes, while they are not provisioned dynamically as you create workloads on your cluster. After you install KubeSphere, you can change the default storage class on the console directly.
- The `storageClass.defaultClass` field controls whether you want to set the storage class of NFS-client Provisioner as the default one. If you enter `false` for it, KubeKey will install [OpenEBS](https://github.com/openebs/openebs) to provide local volumes, but they are not provisioned dynamically as you create workloads on your cluster. After you install KubeSphere, you can change the default storage class on the console directly.
{{</ notice >}}
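For instance, the relevant part of the chart configuration might look like the following sketch:

```yaml
storageClass:
  defaultClass: true   # set to false to keep the OpenEBS local volume as the default
```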
@ -256,7 +256,7 @@ You can verify that NFS-client has been successfully installed either from the c
### KubeSphere console
1. Log in to the web console as `admin` with the default account and password at `<NodeIP>:30880`. Click **Platform** in the top left corner and select **Cluster Management**.
1. Log in to the web console as `admin` with the default account and password at `<NodeIP>:30880`. Click **Platform** in the top-left corner and select **Cluster Management**.
2. Go to **Pods** in **Application Workloads** and select `kube-system` from the project drop-down list. You can see that the Pod of `nfs-client` is up and running.

View File

@ -18,7 +18,7 @@ Your cluster nodes are created on [QingCloud Platform](https://intl.qingcloud.co
To make sure the platform can create cloud disks for your cluster, you need to provide the access key (`qy_access_key_id` and `qy_secret_access_key`) in a separate configuration file of QingCloud CSI.
1. Log in to the web console of [QingCloud](https://console.qingcloud.com/login) and select **Access Key** from the drop-down list in the top right corner.
1. Log in to the web console of [QingCloud](https://console.qingcloud.com/login) and select **Access Key** from the drop-down list in the top-right corner.
![access-key](/images/docs/installing-on-linux/introduction/persistent-storage-configuration/access-key.jpg)
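In the separate QingCloud CSI configuration file, the two keys are provided roughly as follows; the values and the surrounding structure are placeholders, so adapt them to the actual CSI configuration file you create:

```yaml
config:
  qy_access_key_id: "KEY_ID"          # placeholder access key ID
  qy_secret_access_key: "SECRET_KEY"  # placeholder secret access key
```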
@ -261,7 +261,7 @@ You can verify that QingCloud CSI has been successfully installed either from th
### KubeSphere console
1. Log in to the web console with the default account and password (`admin/P@88w0rd`) at `<NodeIP>:30880`. Click **Platform** in the top left corner and select **Cluster Management**.
1. Log in to the web console with the default account and password (`admin/P@88w0rd`) at `<NodeIP>:30880`. Click **Platform** in the top-left corner and select **Cluster Management**.
2. Go to **Pods** in **Application Workloads** and select `kube-system` from the project drop-down list. You can see that the Pods of `csi-qingcloud` are up and running.

View File

@ -29,7 +29,7 @@ KubeKey creates [a configuration file](../../../installing-on-linux/introduction
There are generally two ways for you to let KubeKey apply configurations of the storage system to be installed.
1. Input necessary parameters under the `addons` field directly in `config-sample.yaml`.
1. Enter necessary parameters under the `addons` field directly in `config-sample.yaml`.
2. Create a separate configuration file for your add-on to list all the necessary parameters and provide the path of the file in `config-sample.yaml` so that KubeKey can reference it during installation.
For more information, see [add-ons](https://github.com/kubesphere/kubekey/blob/master/docs/addons.md).
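As a sketch of the second approach, an `addons` entry that references a separate values file might look like the following; the chart name, repo, and file path are placeholders:

```yaml
addons:
- name: csi-qingcloud
  sources:
    chart:
      name: csi-qingcloud
      repo: https://charts.kubesphere.io/test
      valuesFile: /root/csi-qingcloud.yaml
```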

View File

@ -8,7 +8,7 @@ Weight: 3420
## Introduction
For a production environment, you need to consider the high availability of the cluster. If key components (e.g. kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer, or any hardware load balancer (e.g. F5). In addition, Keepalived and [HAproxy](https://www.haproxy.com/), or Nginx is also an alternative for creating high-availability clusters.
For a production environment, you need to consider the high availability of the cluster. If key components (for example, kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer or any hardware load balancer (for example, F5). In addition, Keepalived with [HAProxy](https://www.haproxy.com/) or Nginx is also an alternative for creating high-availability clusters.
This tutorial walks you through an example of how to create two [QingCloud load balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal load balancer and external load balancer respectively, and of how to implement high availability of master and etcd nodes using the load balancers.
@ -253,7 +253,7 @@ Kubekey provides some fields and parameters to allow the cluster administrator t
### Step 6: Persistent storage plugin configurations
Considering data persistence in a production environment, you need to prepare persistent storage and configure the storage plugin (e.g. CSI) in `config-sample.yaml` to define which storage service you want.
Considering data persistence in a production environment, you need to prepare persistent storage and configure the storage plugin (for example, CSI) in `config-sample.yaml` to define which storage service you want.
{{< notice note >}}

View File

@ -54,7 +54,7 @@ Automation represents a key part of implementing DevOps. With automatic, streaml
**Jenkins-powered**. The KubeSphere DevOps system is built with Jenkins as the engine, which offers a rich plugin ecosystem. On top of that, Jenkins provides an enabling environment for extension development, making it possible for the DevOps team to work smoothly across the whole process (developing, testing, building, deploying, monitoring, logging, notifying, etc.) in a unified platform. The KubeSphere account can also be used for the built-in Jenkins, meeting the demand of enterprises for multi-tenant isolation of CI/CD pipelines and unified authentication.
**Convenient built-in tools**. Users can easily take advantage of automation tools (e.g. Binary-to-Image and Source-to-Image) even without a thorough understanding of how Docker or Kubernetes works. They only need to submit a registry address or upload binary files (e.g. JAR/WAR/Binary). Ultimately, services will be released to Kubernetes automatically without any coding in a Dockerfile.
**Convenient built-in tools**. Users can easily take advantage of automation tools (for example, Binary-to-Image and Source-to-Image) even without a thorough understanding of how Docker or Kubernetes works. They only need to submit a registry address or upload binary files (for example, JAR/WAR/Binary). Ultimately, services will be released to Kubernetes automatically without any coding in a Dockerfile.
For more information, see [DevOps User Guide](../../devops-user-guide/).
@ -85,7 +85,7 @@ The KubeSphere community has the capabilities and technical know-how to help you
**Partners**. KubeSphere partners play a critical role in KubeSphere's go-to-market strategy. They can be app developers, technology companies, cloud providers or go-to-market partners, all of whom drive the community ahead in their respective aspects.
**Ambassadors**. As community representatives, ambassadors promote KubeSphere in a variety of ways (e.g. activities, blogs and user cases) so that more people can join the community.
**Ambassadors**. As community representatives, ambassadors promote KubeSphere in a variety of ways (for example, activities, blogs and use cases) so that more people can join the community.
**Contributors**. KubeSphere contributors help the whole community by contributing to code or documentation. You don't need to be an expert to make a difference; even a minor code fix or language improvement counts.

View File

@ -39,7 +39,7 @@ The next-gen installer [KubeKey](https://github.com/kubesphere/kubekey) provides
As the IT world sees a growing number of cloud-native applications reshaping software portfolios for enterprises, users tend to deploy their clusters across locations, geographies, and clouds. Against this backdrop, KubeSphere has undergone a significant upgrade to address the pressing need of users with its brand-new multi-cluster feature.
With KubeSphere, users can manage the infrastructure underneath, such as adding or deleting clusters. Heterogeneous clusters deployed on any infrastructure (e.g. Amazon EKS and Google Kubernetes Engine) can be managed in a unified way. This is made possible by a central control plane of KubeSphere with two efficient management approaches available.
With KubeSphere, users can manage the infrastructure underneath, such as adding or deleting clusters. Heterogeneous clusters deployed on any infrastructure (for example, Amazon EKS and Google Kubernetes Engine) can be managed in a unified way. This is made possible by a central control plane of KubeSphere with two efficient management approaches available.
- **Solo**. Independently deployed Kubernetes clusters can be maintained and managed together in KubeSphere container platform.
- **Federation**. Multiple Kubernetes clusters can be aggregated together as a Kubernetes resource pool. When users deploy applications, replicas can be deployed on different Kubernetes clusters in the pool. In this regard, high availability is achieved across zones and clusters.
@ -72,7 +72,7 @@ S2I allows you to publish your service to Kubernetes without writing a Dockerfil
### Binary-to-Image
Similar to S2I, Binary-to-Image (B2I) is a toolkit and automated workflow for building reproducible container images from binary (e.g. Jar, War, Binary package).
Similar to S2I, Binary-to-Image (B2I) is a toolkit and automated workflow for building reproducible container images from binary artifacts (for example, JAR, WAR, and binary packages).
You just need to upload your application binary package, and specify the image registry to which you want to push. The rest is exactly the same as S2I.
@ -103,7 +103,7 @@ Based on Jaeger, KubeSphere service mesh enables users to track how services int
## Multi-tenant Management
In KubeSphere, resources (e.g. clusters) can be shared between tenants. First, administrators or managers need to set different account roles with different authorizations. After that, members in the platform can be assigned with these roles to perform specific actions on varied resources. Meanwhile, as KubeSphere completely isolates tenants, they will not affect each other at all.
In KubeSphere, resources (for example, clusters) can be shared between tenants. First, administrators or managers need to set different account roles with different authorizations. After that, members in the platform can be assigned with these roles to perform specific actions on varied resources. Meanwhile, as KubeSphere completely isolates tenants, they will not affect each other at all.
- **Multi-tenancy**. It provides role-based fine-grained authentication in a unified way and a three-tier authorization system.
- **Unified authentication**. For enterprises, KubeSphere is compatible with their central authentication system that is based on the LDAP or AD protocol. Single sign-on (SSO) is also supported to achieve unified authentication of tenant identity.

View File

@ -6,7 +6,7 @@ titleLink: "Agent Connection"
weight: 5220
---
The component [Tower](https://github.com/kubesphere/tower) of KubeSphere is used for agent connection. Tower is a tool for network connection between clusters through the agent. If the Host Cluster (H Cluster) cannot access the Member Cluster (M Cluster) directly, you can expose the proxy service address of the H cluster. This enables the M Cluster to connect to the H Cluster through the agent. This method is applicable when the M Cluster is in a private environment (e.g. IDC) and the H Cluster is able to expose the proxy service. The agent connection is also applicable when your clusters are distributed across different cloud providers.
The component [Tower](https://github.com/kubesphere/tower) of KubeSphere is used for agent connection. Tower is a tool for network connection between clusters through the agent. If the Host Cluster (H Cluster) cannot access the Member Cluster (M Cluster) directly, you can expose the proxy service address of the H Cluster. This enables the M Cluster to connect to the H Cluster through the agent. This method is applicable when the M Cluster is in a private environment (for example, an IDC) and the H Cluster is able to expose the proxy service. The agent connection is also applicable when your clusters are distributed across different cloud providers.
To use the multi-cluster feature using an agent, you must have at least two clusters serving as the H Cluster and the M Cluster respectively. A cluster can be defined as the H Cluster or the M Cluster either before or after you install KubeSphere. For more information about installing KubeSphere, refer to [Installing on Linux](../../../installing-on-linux/) and [Installing on Kubernetes](../../../installing-on-kubernetes/).
@ -113,7 +113,7 @@ Generally, there is always a LoadBalancer solution in the public cloud, and the
tower LoadBalancer 10.233.63.191 <pending> 8080:30721/TCP 16h
```
2. Add the value of `proxyPublishAddress` to the configuration file of `ks-installer` and input the public IP address (`139.198.120.120` in this tutorial) and port number as follows.
2. Add the value of `proxyPublishAddress` to the configuration file of `ks-installer` and provide the public IP address (`139.198.120.120` in this tutorial) and port number as follows.
- Option A - Use the web console:
@ -173,7 +173,7 @@ If you already have a standalone KubeSphere cluster installed, you can set the v
kubectl edit cc ks-installer -n kubesphere-system
```
In the YAML file of `ks-installer`, input the corresponding `jwtSecret` shown above:
In the YAML file of `ks-installer`, enter the corresponding `jwtSecret` shown above:
```yaml
authentication:
@ -193,7 +193,7 @@ You need to **wait for a while** so that the change can take effect.
{{< tab "KubeSphere has not been installed" >}}
You can define a member cluster before you install KubeSphere either on Linux or on an existing Kubernetes cluster. If you want to [install KubeSphere on Linux](../../../installing-on-linux/introduction/multioverview/#1-create-an-example-configuration-file), you use a `config-sample.yaml` file. If you want to [install KubeSphere on an existing Kubernetes cluster](../../../installing-on-kubernetes/introduction/overview/#deploy-kubesphere), you use two YAML files, one of which is `cluster-configuration.yaml`. To set a member cluster, input the value of `jwtSecret` shown above and change the value of `clusterRole` to `member` in `config-sample.yaml` or `cluster-configuration.yaml` accordingly before you install KubeSphere.
You can define a member cluster before you install KubeSphere either on Linux or on an existing Kubernetes cluster. If you want to [install KubeSphere on Linux](../../../installing-on-linux/introduction/multioverview/#1-create-an-example-configuration-file), you use a `config-sample.yaml` file. If you want to [install KubeSphere on an existing Kubernetes cluster](../../../installing-on-kubernetes/introduction/overview/#deploy-kubesphere), you use two YAML files, one of which is `cluster-configuration.yaml`. To set a member cluster, enter the value of `jwtSecret` shown above and change the value of `clusterRole` to `member` in `config-sample.yaml` or `cluster-configuration.yaml` accordingly before you install KubeSphere.
```yaml
authentication:
@ -227,7 +227,7 @@ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=
![add-cluster](/images/docs/multicluster-management/enable-multicluster-management-in-kubesphere/agent-connection/add-cluster.png)
2. Enter the basic information of the cluster to be imported on the **Import Cluster** page. You can also click **Edit Mode** in the top right corner to view and edit the basic information in YAML format. After you finish editing, click **Next**.
2. Enter the basic information of the cluster to be imported on the **Import Cluster** page. You can also click **Edit Mode** in the top-right corner to view and edit the basic information in YAML format. After you finish editing, click **Next**.
![cluster-info](/images/docs/multicluster-management/enable-multicluster-management-in-kubesphere/agent-connection/cluster-info.png)

View File

@ -100,7 +100,7 @@ If you already have a standalone KubeSphere cluster installed, you can set the v
kubectl edit cc ks-installer -n kubesphere-system
```
In the YAML file of `ks-installer`, input the corresponding `jwtSecret` shown above:
In the YAML file of `ks-installer`, enter the corresponding `jwtSecret` shown above:
```yaml
authentication:
@ -120,7 +120,7 @@ You need to **wait for a while** so that the change can take effect.
{{< tab "KubeSphere has not been installed" >}}
You can define a member cluster before you install KubeSphere either on Linux or on an existing Kubernetes cluster. If you want to [install KubeSphere on Linux](../../../installing-on-linux/introduction/multioverview/#1-create-an-example-configuration-file), you use a `config-sample.yaml` file. If you want to [install KubeSphere on an existing Kubernetes cluster](../../../installing-on-kubernetes/introduction/overview/#deploy-kubesphere), you use two YAML files, one of which is `cluster-configuration.yaml`. To set a member cluster, input the value of `jwtSecret` shown above and change the value of `clusterRole` to `member` in `config-sample.yaml` or `cluster-configuration.yaml` accordingly before you install KubeSphere.
You can define a member cluster before you install KubeSphere either on Linux or on an existing Kubernetes cluster. If you want to [install KubeSphere on Linux](../../../installing-on-linux/introduction/multioverview/#1-create-an-example-configuration-file), you use a `config-sample.yaml` file. If you want to [install KubeSphere on an existing Kubernetes cluster](../../../installing-on-kubernetes/introduction/overview/#deploy-kubesphere), you use two YAML files, one of which is `cluster-configuration.yaml`. To set a member cluster, enter the value of `jwtSecret` shown above and change the value of `clusterRole` to `member` in `config-sample.yaml` or `cluster-configuration.yaml` accordingly before you install KubeSphere.
```yaml
authentication:
@ -154,11 +154,11 @@ kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=
![add-cluster](/images/docs/multicluster-management/enable-multicluster-management-in-kubesphere/direct-connection/add-cluster.png)
2. Enter the basic information of the cluster to be imported on the **Import Cluster** page. You can also click **Edit Mode** in the top right corner to view and edit the basic information in YAML format. After you finish editing, click **Next**.
2. Enter the basic information of the cluster to be imported on the **Import Cluster** page. You can also click **Edit Mode** in the top-right corner to view and edit the basic information in YAML format. After you finish editing, click **Next**.
![cluster-info](/images/docs/multicluster-management/enable-multicluster-management-in-kubesphere/direct-connection/cluster-info.png)
3. In **Connection Method**, select **Direct Connection**, and copy the kubeconfig of the Member Cluster and paste it into the box. You can also click **Edit Mode** in the top right corner to edit the kubeconfig of the Member Cluster in YAML format.
3. In **Connection Method**, select **Direct Connection**, and copy the kubeconfig of the Member Cluster and paste it into the box. You can also click **Edit Mode** in the top-right corner to edit the kubeconfig of the Member Cluster in YAML format.
{{< notice note >}}

View File

@ -29,9 +29,9 @@ This tutorial demonstrates how to import an Aliyun ACK cluster through the [dire
jwtSecret: "QVguGh7qnURywHn2od9IiOX6X8f8wK8g"
```
2. Log in to the KubeSphere console of the ACK cluster as `admin`. Click **Platform** in the upper left corner and then select **Cluster Management**.
2. Log in to the KubeSphere console of the ACK cluster as `admin`. Click **Platform** in the upper-left corner and then select **Cluster Management**.
3. Go to **CRDs**, input `ClusterConfiguration` in the search bar, and then press **Enter** on your keyboard. Click **ClusterConfiguration** to go to its detail page.
3. Go to **CRDs**, enter `ClusterConfiguration` in the search bar, and then press **Enter** on your keyboard. Click **ClusterConfiguration** to go to its detail page.
![search-config](/images/docs/multicluster-management/import-cloud-hosted-k8s/import-ack/search-config.png)
@ -65,11 +65,11 @@ Log in to the web console of Aliyun. Go to **Clusters** under **Container Servic
### Step 3: Import the ACK Member Cluster
1. Log in to the KubeSphere console on your Host Cluster as `admin`. Click **Platform** in the upper left corner and then select **Cluster Management**. On the **Cluster Management** page, click **Add Cluster**.
1. Log in to the KubeSphere console on your Host Cluster as `admin`. Click **Platform** in the upper-left corner and then select **Cluster Management**. On the **Cluster Management** page, click **Add Cluster**.
![click-add-cluster](/images/docs/multicluster-management/import-cloud-hosted-k8s/import-ack/click-add-cluster.png)
2. Input the basic information based on your needs and click **Next**.
2. Enter the basic information based on your needs and click **Next**.
![input-info](/images/docs/multicluster-management/import-cloud-hosted-k8s/import-ack/input-info.png)

View File

@ -33,9 +33,9 @@ You need to deploy KubeSphere on your EKS cluster first. For more information ab
jwtSecret: "QVguGh7qnURywHn2od9IiOX6X8f8wK8g"
```
2. Log in to the KubeSphere console of the EKS cluster as `admin`. Click **Platform** in the upper left corner and then select **Cluster Management**.
2. Log in to the KubeSphere console of the EKS cluster as `admin`. Click **Platform** in the upper-left corner and then select **Cluster Management**.
3. Go to **CRDs**, input `ClusterConfiguration` in the search bar, and then press **Enter** on your keyboard. Click **ClusterConfiguration** to go to its detail page.
3. Go to **CRDs**, enter `ClusterConfiguration` in the search bar, and then press **Enter** on your keyboard. Click **ClusterConfiguration** to go to its detail page.
![search-config](/images/docs/multicluster-management/import-cloud-hosted-k8s/import-eks/search-config.png)
@ -166,11 +166,11 @@ You need to deploy KubeSphere on your EKS cluster first. For more information ab
### Step 4: Import the EKS Member Cluster
1. Log in to the KubeSphere console on your Host Cluster as `admin`. Click **Platform** in the upper left corner and then select **Cluster Management**. On the **Cluster Management** page, click **Add Cluster**.
1. Log in to the KubeSphere console on your Host Cluster as `admin`. Click **Platform** in the upper-left corner and then select **Cluster Management**. On the **Cluster Management** page, click **Add Cluster**.
![click-add-cluster](/images/docs/multicluster-management/import-cloud-hosted-k8s/import-eks/click-add-cluster.png)
2. Input the basic information based on your needs and click **Next**.
2. Enter the basic information based on your needs and click **Next**.
![input-info](/images/docs/multicluster-management/import-cloud-hosted-k8s/import-eks/input-info.png)

View File

@ -33,9 +33,9 @@ You need to deploy KubeSphere on your GKE cluster first. For more information ab
jwtSecret: "QVguGh7qnURywHn2od9IiOX6X8f8wK8g"
```
2. Log in to the KubeSphere console on GKE as `admin`. Click **Platform** in the upper left corner and then select **Cluster Management**.
2. Log in to the KubeSphere console on GKE as `admin`. Click **Platform** in the upper-left corner and then select **Cluster Management**.
3. Go to **CRDs**, input `ClusterConfiguration` in the search bar, and then press **Enter** on your keyboard. Click **ClusterConfiguration** to go to its detail page.
3. Go to **CRDs**, enter `ClusterConfiguration` in the search bar, and then press **Enter** on your keyboard. Click **ClusterConfiguration** to go to its detail page.
![search-config](/images/docs/multicluster-management/import-cloud-hosted-k8s/import-gke/search-config.png)
@ -111,11 +111,11 @@ You need to deploy KubeSphere on your GKE cluster first. For more information ab
### Step 4: Import the GKE Member Cluster
1. Log in to the KubeSphere console on your Host Cluster as `admin`. Click **Platform** in the upper left corner and then select **Cluster Management**. On the **Cluster Management** page, click **Add Cluster**.
1. Log in to the KubeSphere console on your Host Cluster as `admin`. Click **Platform** in the upper-left corner and then select **Cluster Management**. On the **Cluster Management** page, click **Add Cluster**.
![click-add-cluster](/images/docs/multicluster-management/import-cloud-hosted-k8s/import-gke/click-add-cluster.png)
2. Input the basic information based on your needs and click **Next**.
2. Enter the basic information based on your needs and click **Next**.
![input-info](/images/docs/multicluster-management/import-cloud-hosted-k8s/import-gke/input-info.png)

View File

@ -15,7 +15,7 @@ This tutorial demonstrates how to unbind a cluster from the central control plan
## Unbind a Cluster
1. Click **Platform** in the top left corner and select **Cluster Management**.
1. Click **Platform** in the top-left corner and select **Cluster Management**.
2. On the **Cluster Management** page, click the cluster that you want to remove from the central control plane.

View File

@ -21,7 +21,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
```
{{< notice note >}}
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Alerting in this mode (e.g. for testing purposes), refer to [the following section](#enable-alerting-after-installation) to see how Alerting can be enabled after installation.
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Alerting in this mode (for example, for testing purposes), refer to [the following section](#enable-alerting-after-installation) to see how Alerting can be enabled after installation.
{{</ notice >}}
2. In this file, navigate to `alerting` and change `false` to `true` for `enabled`. Save the file after you finish.
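For reference, the edited section of `config-sample.yaml` would look roughly like this:

```yaml
alerting:
  enabled: true # change false to true to enable Alerting
```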

View File

@ -27,7 +27,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
```
{{< notice note >}}
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the App Store in this mode (e.g. for testing purposes), refer to [the following section](#enable-app-store-after-installation) to see how the App Store can be installed after installation.
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the App Store in this mode (for example, for testing purposes), refer to [the following section](#enable-app-store-after-installation) to see how the App Store can be installed after installation.
{{</ notice >}}
2. In this file, navigate to `openpitrix` and change `false` to `true` for `enabled`. Save the file after you finish.
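The corresponding snippet would then look similar to the sketch below (the exact nesting of `enabled` under `openpitrix` may differ between KubeSphere versions):

```yaml
openpitrix:
  store:
    enabled: true # change false to true to enable the App Store
```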
@ -106,7 +106,7 @@ You can find the web kubectl tool by clicking <img src="/images/docs/enable-plug
## Verify the Installation of the Component
After you log in to the console, if you can see **App Store** in the top left corner and 16 built-in apps in it, it means the installation is successful.
After you log in to the console, if you can see **App Store** in the top-left corner and 16 built-in apps in it, it means the installation is successful.
![app-store](/images/docs/enable-pluggable-components/kubesphere-app-store/app-store.png)

View File

@ -23,7 +23,7 @@ When you implement multi-node installation KubeSphere on Linux, you need to crea
```
{{< notice note >}}
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Auditing in this mode (e.g. for testing purposes), refer to [the following section](#enable-auditing-logs-after-installation) to see how Auditing can be installed after installation.
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Auditing in this mode (for example, for testing purposes), refer to [the following section](#enable-auditing-logs-after-installation) to see how Auditing can be installed after installation.
{{</ notice >}}
2. In this file, navigate to `auditing` and change `false` to `true` for `enabled`. Save the file after you finish.
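After the change, the relevant part of the file should read roughly as follows:

```yaml
auditing:
  enabled: true # change false to true to enable Auditing
```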
@ -148,7 +148,7 @@ You can find the web kubectl tool by clicking <img src="/images/docs/enable-plug
{{< tab "Verify the component on the dashboard" >}}
Verify that you can use the **Auditing Operating** function from the **Toolbox** in the bottom right corner.
Verify that you can use the **Auditing Operating** function from the **Toolbox** in the bottom-right corner.
![auditing-operating](/images/docs/enable-pluggable-components/kubesphere-auditing-logs/auditing-operating.png)

View File

@ -8,7 +8,7 @@ weight: 6300
The KubeSphere DevOps System is designed for CI/CD workflows in Kubernetes. Based on [Jenkins](https://jenkins.io/), it provides one-stop solutions to help both development and Ops teams build, test and publish apps to Kubernetes in a straightforward way. It also features plugin management, [Binary-to-Image (B2I)](../../project-user-guide/image-builder/binary-to-image/), [Source-to-Image (S2I)](../../project-user-guide/image-builder/source-to-image/), code dependency caching, code quality analysis, pipeline logging, etc.
The DevOps System offers an enabling environment for users as apps can be automatically released to the same platform. It is also compatible with third-party private image registries (e.g. Harbor) and code repositories (e.g. GitLab/GitHub/SVN/BitBucket). As such, it creates excellent user experiences by providing users with comprehensive, visualized CI/CD pipelines which are extremely useful in air-gapped environments.
The DevOps System offers an enabling environment for users as apps can be automatically released to the same platform. It is also compatible with third-party private image registries (for example, Harbor) and code repositories (for example, GitLab/GitHub/SVN/BitBucket). As such, it creates excellent user experiences by providing users with comprehensive, visualized CI/CD pipelines which are extremely useful in air-gapped environments.
For more information, see [DevOps User Guide](../../devops-user-guide/).
@ -25,7 +25,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
```
{{< notice note >}}
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable DevOps in this mode (e.g. for testing purposes), refer to [the following section](#enable-devops-after-installation) to see how DevOps can be installed after installation.
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable DevOps in this mode (for example, for testing purposes), refer to [the following section](#enable-devops-after-installation) to see how DevOps can be installed after installation.
{{</ notice >}}
2. In this file, navigate to `devops` and change `false` to `true` for `enabled`. Save the file after you finish.
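The edited field would look roughly like this:

```yaml
devops:
  enabled: true # change false to true to enable DevOps
```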

View File

@ -24,7 +24,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
{{< notice note >}}
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Events in this mode (e.g. for testing purposes), refer to [the following section](#enable-events-after-installation) to see how Events can be [installed after installation](#enable-events-after-installation).
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Events in this mode (for example, for testing purposes), refer to [the following section](#enable-events-after-installation) to see how Events can be installed after installation.
{{</ notice >}}
@ -154,7 +154,7 @@ You can find the web kubectl tool by clicking <img src="/images/docs/enable-plug
{{< tab "Verify the component on the dashboard" >}}
Verify that you can use the **Event Search** function from the **Toolbox** in the bottom right corner.
Verify that you can use the **Event Search** function from the **Toolbox** in the bottom-right corner.
![event-search](/images/docs/enable-pluggable-components/kubesphere-events/event-search.png)

View File

@ -27,7 +27,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
```
{{< notice note >}}
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable KubeEdge in this mode (e.g. for testing purposes), refer to [the following section](#enable-kubeedge-after-installation) to see how KubeEdge can be installed after installation.
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable KubeEdge in this mode (for example, for testing purposes), refer to [the following section](#enable-kubeedge-after-installation) to see how KubeEdge can be installed after installation.
{{</ notice >}}
2. In this file, navigate to `kubeedge.enabled` and change `false` to `true`.
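For example:

```yaml
kubeedge:
  enabled: true # change false to true to enable KubeEdge
```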

View File

@ -24,7 +24,7 @@ When you install KubeSphere on Linux, you need to create a configuration file, w
{{< notice note >}}
- If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Logging in this mode (e.g. for testing purposes), refer to [the following section](#enable-logging-after-installation) to see how Logging can be installed after installation.
- If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Logging in this mode (for example, for testing purposes), refer to [the following section](#enable-logging-after-installation) to see how Logging can be installed after installation.
- If you adopt [Multi-node Installation](../../installing-on-linux/introduction/multioverview/) and are using symbolic links for the Docker root directory, make sure all nodes follow exactly the same symbolic links. Logging agents are deployed in DaemonSets onto nodes. Any discrepancy in container log paths may cause collection failures on that node.

View File

@ -21,7 +21,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
```
{{< notice note >}}
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the Metrics Server in this mode (e.g. for testing purposes), refer to [the following section](#enable-devops-after-installation) to see how the Metrics Server can be installed after installation.
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the Metrics Server in this mode (for example, for testing purposes), refer to [the following section](#enable-metrics-server-after-installation) to see how the Metrics Server can be installed after installation.
{{</ notice >}}
2. In this file, navigate to `metrics_server` and change `false` to `true` for `enabled`. Save the file after you finish.
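The resulting snippet should look roughly like this:

```yaml
metrics_server:
  enabled: true # change false to true to enable the Metrics Server
```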

View File

@ -30,7 +30,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
```
{{< notice note >}}
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the Network Policy in this mode (e.g. for testing purposes), refer to [the following section](#enable-network-policy-after-installation) to see how the Network Policy can be installed after installation.
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable the Network Policy in this mode (for example, for testing purposes), refer to [the following section](#enable-network-policy-after-installation) to see how the Network Policy can be installed after installation.
{{</ notice >}}
2. In this file, navigate to `network.networkpolicy` and change `false` to `true` for `enabled`. Save the file after you finish.
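A sketch of the edited field:

```yaml
network:
  networkpolicy:
    enabled: true # change false to true to enable the Network Policy
```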

View File

@ -21,7 +21,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
```
{{< notice note >}}
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Pod IP Pools in this mode (e.g. for testing purposes), refer to [the following section](#enable-pod-ip-pools-after-installation) to see how Pod IP Pools can be installed after installation.
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Pod IP Pools in this mode (for example, for testing purposes), refer to [the following section](#enable-pod-ip-pools-after-installation) to see how Pod IP Pools can be installed after installation.
{{</ notice >}}
2. In this file, navigate to `network.ippool.type` and change `none` to `calico`. Save the file after you finish.
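For example:

```yaml
network:
  ippool:
    type: calico # change none to calico to enable Pod IP Pools
```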

View File

@ -23,7 +23,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
```
{{< notice note >}}
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable KubeSphere Service Mesh in this mode (e.g. for testing purposes), refer to [the following section](#enable-service-mesh-after-installation) to see how KubeSphere Service Mesh can be installed after installation.
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable KubeSphere Service Mesh in this mode (for example, for testing purposes), refer to [the following section](#enable-service-mesh-after-installation) to see how KubeSphere Service Mesh can be installed after installation.
{{</ notice >}}
2. In this file, navigate to `servicemesh` and change `false` to `true` for `enabled`. Save the file after you finish.
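The edited section would look roughly as follows:

```yaml
servicemesh:
  enabled: true # change false to true to enable KubeSphere Service Mesh
```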

View File

@ -21,7 +21,7 @@ When you implement multi-node installation of KubeSphere on Linux, you need to c
```
{{< notice note >}}
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Service Topology in this mode (e.g. for testing purposes), refer to [the following section](#enable-service-topology-after-installation) to see how Service Topology can be installed after installation.
If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Service Topology in this mode (for example, for testing purposes), refer to [the following section](#enable-service-topology-after-installation) to see how Service Topology can be installed after installation.
{{</ notice >}}
2. In this file, navigate to `network.topology.type` and change `none` to `weave-scope`. Save the file after you finish.
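A sketch of the result:

```yaml
network:
  topology:
    type: weave-scope # change none to weave-scope to enable Service Topology
```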

View File

@ -6,7 +6,7 @@ linkTitle: "Container Limit Ranges"
weight: 13400
---
A container can use as much CPU and memory as set by [the resource quota for a project](../../workspace-administration/project-quotas/). At the same time, KubeSphere uses requests and limits to control resource (e.g. CPU and memory) usage for a container, also known as [LimitRanges](https://kubernetes.io/docs/concepts/policy/limit-range/) in Kubernetes. Requests make sure the container can get the resources it needs as they are specifically guaranteed and reserved. On the contrary, limits ensure that container can never use resources above a certain value.
A container can use as much CPU and memory as set by [the resource quota for a project](../../workspace-administration/project-quotas/). At the same time, KubeSphere uses requests and limits to control resource (for example, CPU and memory) usage for a container, also known as [LimitRanges](https://kubernetes.io/docs/concepts/policy/limit-range/) in Kubernetes. Requests make sure the container can get the resources it needs, as they are specifically guaranteed and reserved. In contrast, limits ensure that the container can never use resources above a certain value.
When you create a workload, such as a Deployment, you configure resource requests and limits for the container. To make these request and limit fields pre-populated with values, you can set default limit ranges.
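Behind the scenes, these project defaults map to a Kubernetes LimitRange object. A minimal sketch with illustrative values (not defaults that KubeSphere ships with):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limit-range
spec:
  limits:
    - type: Container
      defaultRequest:   # pre-populates the container's requests
        cpu: 100m
        memory: 256Mi
      default:          # pre-populates the container's limits
        cpu: 500m
        memory: 512Mi
```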
@ -18,13 +18,11 @@ You have an available workspace, a project and an account (`project-admin`). The
## Set Default Limit Ranges
1. Log in to the console as `project-admin` and go to a project. On the **Overview** page, you can see default limit ranges remain unset if the project is newly created. Click **Set** to configure limit ranges.
1. Log in to the console as `project-admin` and go to a project. On the **Overview** page, you can see default limit ranges remain unset if the project is newly created. Click **Set** next to **Resource Default Request Not Set** to configure limit ranges.
![limit-ranges](/images/docs/project-administration/container-limit-ranges/limit-ranges.jpg)
2. In the dialog that appears, you can see that KubeSphere does not set any requests or limits by default. To set requests and limits to control CPU and memory resources, use the slider to move to a desired value or enter numbers directly. Leaving a field blank means you do not set any requests or limits.
2. In the dialog that appears, you can see that KubeSphere does not set any requests or limits by default. To set requests and limits to control CPU and memory resources, use the slider to move to a desired value or enter numbers directly. Leaving a field blank means you do not set any requests or limits.
![default-limit-range](/images/docs/project-administration/container-limit-ranges/default-limit-range.jpg)
![default-limit-range](/images/docs/project-administration/container-limit-ranges/default-limit-range.png)
{{< notice note >}}
@ -34,19 +32,17 @@ You have an available workspace, a project and an account (`project-admin`). The
3. Click **OK** to finish setting limit ranges.
4. Go to **Basic Info** in **Project Settings**, and you can see default limit ranges for containers in a project.
4. Go to **Basic Information** in **Project Settings**, and you can see default limit ranges for containers in a project.
![view-limit-ranges](/images/docs/project-administration/container-limit-ranges/view-limit-ranges.jpg)
![view-limit-ranges](/images/docs/project-administration/container-limit-ranges/view-limit-ranges.png)
5. To change default limit ranges, click **Manage Project** on the **Basic Info** page and select **Edit Resource Default Request**.
![change-limit-range](/images/docs/project-administration/container-limit-ranges/change-limit-range.jpg)
5. To change default limit ranges, click **Manage Project** on the **Basic Information** page and select **Edit Resource Default Request**.
6. Change limit ranges directly in the dialog and click **OK**.
7. When you create a workload, requests and limits of the container will be pre-populated with values.
![workload-values](/images/docs/project-administration/container-limit-ranges/workload-values.jpg)
![workload-values](/images/docs/project-administration/container-limit-ranges/workload-values.png)
{{< notice note >}}

View File

@ -27,7 +27,7 @@ This tutorial demonstrates how to collect disk logs for an example app.
1. From the left navigation bar, select **Workloads** in **Application Workloads**. Under the **Deployments** tab, click **Create**.
2. In the dialog that appears, set a name for the Deployment (e.g. `demo-deployment`) and click **Next**.
2. In the dialog that appears, set a name for the Deployment (for example, `demo-deployment`) and click **Next**.
3. Under **Container Image**, click **Add Container Image**.
@ -35,7 +35,7 @@ This tutorial demonstrates how to collect disk logs for an example app.
![alpine-image](/images/docs/project-administration/disk-log-collection/alpine-image.png)
5. Scroll down to **Start Command** and check it. Input the following values for **Run Command** and **Parameters** respectively, click **√**, and then click **Next**.
5. Scroll down to **Start Command** and select the checkbox. Enter the following values for **Run Command** and **Parameters** respectively, click **√**, and then click **Next**.
**Run Command**
@ -61,7 +61,7 @@ This tutorial demonstrates how to collect disk logs for an example app.
![mount-volumes](/images/docs/project-administration/disk-log-collection/mount-volumes.png)
7. On the **Temporary Volume** tab, input a name for the volume (e.g. `demo-disk-log-collection`) and set the access mode and path. Refer to the image below as an example.
7. On the **Temporary Volume** tab, enter a name for the volume (for example, `demo-disk-log-collection`) and set the access mode and path. Refer to the image below as an example.
![volume-example](/images/docs/project-administration/disk-log-collection/volume-example.png)
@ -85,7 +85,7 @@ This tutorial demonstrates how to collect disk logs for an example app.
![inspect-logs](/images/docs/project-administration/disk-log-collection/inspect-logs.png)
3. Alternatively, you can also use the **Log Search** function from **Toolbox** in the bottom right corner to view stdout logs. For example, use the Pod name of the Deployment for a fuzzy query:
3. Alternatively, you can also use the **Log Search** function from **Toolbox** in the bottom-right corner to view stdout logs. For example, use the Pod name of the Deployment for a fuzzy query:
![fuzzy-match](/images/docs/project-administration/disk-log-collection/fuzzy-match.png)

View File

@ -2,7 +2,6 @@
title: "Projects and Multi-cluster Projects"
keywords: 'KubeSphere, Kubernetes, project, multicluster-project'
description: 'Learn how to create different types of projects.'
linkTitle: "Projects and Multi-cluster Projects"
weight: 13100
---
@ -11,116 +10,91 @@ A project in KubeSphere is a Kubernetes [namespace](https://kubernetes.io/docs/c
A multi-cluster project runs across clusters, empowering users to achieve high availability and isolate occurring issues to a certain cluster while not affecting your business. For more information, see [Multi-cluster Management](../../multicluster-management/).
This chapter demonstrates the basic operations of project administration, such as creation and deletion.
This tutorial demonstrates how to manage projects and multi-cluster projects.
## Prerequisites
- You have an available workspace.
- You must have the authorization of **Projects Management**, which is included in the built-in role `workspace-self-provisioner`.
- You need to create a workspace and an account (`project-admin`). The account must be invited to the workspace with the role of `workspace-self-provisioner`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../docs/quick-start/create-workspace-and-project/).
- You must enable the multi-cluster feature through [Direction Connection](../../multicluster-management/enable-multicluster/direct-connection/) or [Agent Connection](../../multicluster-management/enable-multicluster/agent-connection/) before you create a multi-cluster project.
## Projects
### Create a project
1. Go to the **Projects** page of a workspace and click **Create**.
![create-project](/images/docs/project-admin/create-project.jpg)
1. Go to the **Projects** page of a workspace and click **Create** on the **Projects** tab.
{{< notice note >}}
- You can change the cluster where the project will be created on the **Cluster** drop-down list. The list is only visible after you enable the multi-cluster feature.
- If you cannot see the **Create** button, it means no cluster is available to use for your workspace. You need to contact the platform administrator or cluster administrator so that workspace resources can be created in the cluster. To assign a cluster to a workspace, the platform administrator or cluster administrator needs to edit [**Cluster Visibility**](../../cluster-administration/cluster-settings/cluster-visibility-and-authorization/) on the **Cluster Management** page.
- If you cannot see the **Create** button, it means no cluster is available to use for your workspace. You need to contact the platform administrator or cluster administrator so that workspace resources can be created in the cluster. [To assign a cluster to a workspace](../../cluster-administration/cluster-settings/cluster-visibility-and-authorization/), the platform administrator or cluster administrator needs to edit **Cluster Visibility** on the **Cluster Management** page.
{{</ notice >}}
2. In the **Create Project** window that appears, enter a project name and add an alias or description if necessary. Select the cluster where the project will be created (this option does not appear if the multi-cluster feature is not enabled), and click **OK** to finish.
![create-project-page](/images/docs/project-admin/create-project-page.jpg)
2. In the **Create Project** window that appears, enter a project name and add an alias or description if necessary. Under **Cluster Settings**, select the cluster where the project will be created (this option does not appear if the multi-cluster feature is not enabled), and click **OK**.
3. A project created will display in the list as shown below. You can click the project name to go to its **Overview** page.
![project-list](/images/docs/project-admin/project-list.jpg)
![project-list](/images/docs/project-administration/project-and-multicluster-project/project-list.png)
### Edit project information
### Edit a project
1. Navigate to **Basic Info** under **Project Settings** and click **Manage Project** on the right.
1. Go to your project, navigate to **Basic Information** under **Project Settings** and click **Manage Project** on the right.
![basic-info-page](/images/docs/project-admin/basic-info-page.jpg)
2. Choose **Edit Info** from the drop-down menu.
2. Choose **Edit Information** from the drop-down menu.
![project-basic-information](/images/docs/project-administration/project-and-multicluster-project/project-basic-information.png)
{{< notice note >}}
The project name cannot be edited. If you want to change other information, see relevant chapters in the documentation.
The project name cannot be edited. If you want to change other information, see relevant tutorials in the documentation.
{{</ notice >}}
{{</ notice >}}
### Delete a project
3. To delete a project, choose **Delete Project** from the drop-down menu. In the dialog that appears, enter the project name and click **OK** to confirm the deletion.
1. Navigate to **Basic Info** under **Project Settings** and click **Manage Project** on the right.
{{< notice warning >}}
![basic-info-page](/images/docs/project-admin/basic-info-page.jpg)
A project cannot be recovered once deleted and resources in the project will be removed.
2. Choose **Delete Project** from the drop-down menu.
3. In the dialog that appears, enter the project name and click **OK** to confirm the deletion.
{{< notice warning >}}
A project cannot be recovered once deleted and resources in the project will be removed as well.
{{</ notice >}}
{{</ notice >}}
## Multi-cluster Projects
### Create a multi-cluster project
1. Go to the **Projects** page of a workspace, choose **Multi-cluster Projects** and click **Create**.
![create-multicluster-project](/images/docs/project-admin/create-multicluster-project.jpg)
1. Go to the **Projects** page of a workspace, click the **Multi-cluster Projects** tab and click **Create**.
{{< notice note >}}
- If you cannot see the **Create** button, it means no cluster is available to use for your workspace. You need to contact the platform administrator or cluster administrator so that workspace resources can be created in the cluster. To assign a cluster to a workspace, the platform administrator or cluster administrator needs to edit [**Cluster Visibility**](../../cluster-administration/cluster-settings/cluster-visibility-and-authorization/) on the **Cluster Management** page.
- If you cannot see the **Create** button, it means no cluster is available to use for your workspace. You need to contact the platform administrator or cluster administrator so that workspace resources can be created in the cluster. [To assign a cluster to a workspace](../../cluster-administration/cluster-settings/cluster-visibility-and-authorization/), the platform administrator or cluster administrator needs to edit **Cluster Visibility** on the **Cluster Management** page.
- Make sure at least two clusters are assigned to your workspace.
{{</ notice >}}
2. In the **Create Multi-cluster Project** window that appears, enter a project name and add an alias or description if necessary. Select multiple clusters for your project by clicking **Add Cluster**, and click **OK** to finish.
![create-multicluster-project-page](/images/docs/project-admin/create-multicluster-project-page.jpg)
2. In the **Create Multi-cluster Project** window that appears, enter a project name and add an alias or description if necessary. Under **Cluster Settings**, select multiple clusters for your project by clicking **Add Cluster**, and click **OK**.
3. A multi-cluster project created will display in the list as shown below. You can click the project name to go to its **Overview** page.
![multicluster-project-list](/images/docs/project-admin/multicluster-project-list.jpg)
![multi-cluster-list](/images/docs/project-administration/project-and-multicluster-project/multi-cluster-list.png)
### Edit multi-cluster project information
### Edit a multi-cluster project
1. Navigate to **Basic Info** under **Project Settings** and click **Manage Project** on the right.
1. Go to your multi-cluster project, navigate to **Basic Information** under **Project Settings** and click **Manage Project** on the right.
![basic-info-multicluster](/images/docs/project-admin/basic-info-multicluster.jpg)
2. Choose **Edit Information** from the drop-down menu.
2. Choose **Edit Info** from the drop-down menu.
![multi-cluster-basic-information](/images/docs/project-administration/project-and-multicluster-project/multi-cluster-basic-information.png)
{{< notice note >}}
The project name cannot be edited. If you want to change other information, see relevant chapters in the documentation.
The project name cannot be edited. If you want to change other information, see relevant tutorials in the documentation.
{{</ notice >}}
{{</ notice >}}
### Delete a multi-cluster project
3. To delete a multi-cluster project, choose **Delete Project** from the drop-down menu. In the dialog that appears, enter the project name and click **OK** to confirm the deletion.
1. Navigate to **Basic Info** under **Project Settings** and click **Manage Project** on the right.
{{< notice warning >}}
![basic-info-multicluster](/images/docs/project-admin/basic-info-multicluster.jpg)
A multi-cluster project cannot be recovered once deleted and resources in the project will be removed.
2. Choose **Delete Project** from the drop-down menu.
3. In the dialog that appears, enter the project name and click **OK** to confirm the deletion.
{{< notice warning >}}
A multi-cluster project cannot be recovered once deleted and resources in the project will be removed as well.
{{</ notice >}}
{{</ notice >}}

View File

@ -30,7 +30,7 @@ You need to create a workspace, a project and an account (`project-admin`). The
**LoadBalancer**: You can access Services with a single IP address through the gateway.
3. You can also enable **Application Governance** on the **Set Gateway** page. You need to enable **Application Governance** so that you can use the Tracing feature and use [different grayscale release strategies](../../project-user-guide/grayscale-release/overview/). Once it is enabled, check whether an annotation (e.g. `nginx.ingress.kubernetes.io/service-upstream: true`) is added for your route (Ingress) if the route is inaccessible.
3. You can also enable **Application Governance** on the **Set Gateway** page. You need to enable **Application Governance** so that you can use the Tracing feature and use [different grayscale release strategies](../../project-user-guide/grayscale-release/overview/). Once it is enabled, check whether an annotation (for example, `nginx.ingress.kubernetes.io/service-upstream: true`) is added for your route (Ingress) if the route is inaccessible.
4. After you select an access method, click **Save**.
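For step 3 above, the annotation goes under `metadata.annotations` of the route. A hypothetical fragment of an Ingress manifest:

```yaml
metadata:
  name: demo-ingress # hypothetical route name
  annotations:
    nginx.ingress.kubernetes.io/service-upstream: "true"
```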

View File

@ -1,92 +1,81 @@
---
title: "Role and Member Management In Your Project"
title: "Project Role and Member Management"
keywords: 'KubeSphere, Kubernetes, role, member, management, project'
description: 'Learn how to manage access control for a project.'
linkTitle: "Role and Member Management"
linkTitle: "Project Role and Member Management"
weight: 13200
---
This guide demonstrates how to manage roles and members in your project. For more information about KubeSphere roles, see Overview of Role Management.
This tutorial demonstrates how to manage roles and members in a project. At the project level, you can grant permissions in the following modules to a role:
In project scope, you can grant the following resources' permissions to a role:
- **Application Workloads**
- **Storage**
- **Configurations**
- **Monitoring & Alerting**
- **Access Control**
- **Project Settings**
## Prerequisites
At least one project has been created, such as `demo-project`. Besides, you need an account of the `admin` role (for example, `project-admin`) at the project level. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/).
## Built-in Roles
In **Project Roles**, there are three available built-in roles as shown below. Built-in roles are created automatically by KubeSphere when a project is created and they cannot be edited or deleted. You can only view permissions included in a built-in role or assign it to a user.
<table>
<tr>
<th width="17%">Built-in Roles</th>
<th width="83%">Description</th>
</tr>
<tr>
<td><code>viewer</code></td>
<td>The viewer who can view all resources in the project.</td>
</tr>
<tr>
<td><code>operator</code></td>
<td>The maintainer of the project who can manage resources other than users and roles in the project.</td>
</tr>
<tr>
<td><code>admin</code></td>
<td>The administrator in the project who can perform any action on any resource. It gives full control over all resources in the project.</td>
</tr>
</table>
To view the permissions that a role contains:
1. Log in to the console as `project-admin`. In **Project Roles**, click a role (for example, `admin`) and you can see role details as shown below.
![project-role-details](/images/docs/project-administration/role-and-member-management/project-role-details.png)
2. Click the **Authorized Users** tab to see all the users that are granted the role.
## Create a Project Role
1. Navigate to **Project Roles** under **Project Settings**.
2. In **Project Roles**, click **Create** and set a role **Name** (for example, `project-monitor`). Click **Edit Permissions** to continue.
3. In the pop-up window, permissions are categorized into different **Modules**. In this example, select **Application Workload Viewing** in **Application Workloads**, and **Alerting Message Viewing** and **Alerting Policy Viewing** in **Monitoring & Alerting**. Click **OK** to finish creating the role.
{{< notice note >}}
The account `project-admin` is used as an example. Any account can create a project role as long as it is granted a role that includes the permissions **Project Members View**, **Project Roles Management** and **Project Roles View** in **Access Control** at the project level.
**Depends on** means the major permission (the one listed after **Depends on**) needs to be selected first so that the affiliated permission can be assigned.
{{</ notice >}}
4. Newly-created roles will be listed in **Project Roles**. To edit an existing role, click <img src="/images/docs/project-administration/role-and-member-management/three-dots.png" height="20px"> on the right.
![project-role-list](/images/docs/project-administration/role-and-member-management/project-role-list.png)
{{< notice note >}}
The role `project-monitor` is only granted limited permissions in **Monitoring & Alerting**, which may not meet your requirements. This example is only for demonstration purposes; you can create customized roles based on your needs.
{{</ notice >}}
## Invite a New Member
1. Navigate to **Project Members** under **Project Settings**, and click **Invite Member**.
2. Invite a user to the project by clicking <img src="/images/docs/project-administration/role-and-member-management/add.png" height="20px"> on the right of the user and assigning a role to them.
{{< notice note >}}
The user must be invited to the project's workspace first.
{{</ notice >}}
3. After you add a user to the project, click **OK**. In **Project Members**, you can see the newly invited member listed.
4. To edit the role of an existing user or remove the user from the project, click <img src="/images/docs/project-administration/role-and-member-management/three-dots.png" height="20px"> on the right and select the corresponding operation.
![edit-project-account](/images/docs/project-administration/role-and-member-management/edit-project-account.png)
@ -10,7 +10,7 @@ When you create Deployments, StatefulSets or DaemonSets, you need to specify a c
{{< notice tip >}}
You can enable **Edit Mode** in the top-right corner to see the values in the manifest file (YAML format) that correspond to the properties on the dashboard.
{{</ notice >}}
@ -30,17 +30,17 @@ After you click **Add Container Image**, you will see an image as below.
#### Image Search Bar
You can click the cube icon on the right to select an image from the list or enter an image name to search for it. KubeSphere provides Docker Hub images and your private image repository. If you want to use your private image repository, you need to create an Image Registry Secret first in **Secrets** under **Configurations**.
{{< notice note >}}
Remember to press **Enter** on your keyboard after you enter an image name in the search bar.
{{</ notice >}}
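Under the hood, an Image Registry Secret is a standard Secret of type `kubernetes.io/dockerconfigjson`. A minimal sketch, with a hypothetical registry address and placeholder credentials that you would replace with your own:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: my-registry-secret
type: kubernetes.io/dockerconfigjson
stringData:
  # Replace the registry address, username and password with your own values.
  .dockerconfigjson: '{"auths":{"registry.example.com":{"username":"<username>","password":"<password>"}}}'
```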
#### Image Tag
You can enter a tag like `imagename:tag`. If you do not specify a tag, it defaults to `latest`.
#### Container Name
@ -274,7 +274,7 @@ A security context defines privilege and access control settings for a Pod or Co
### Deployment Mode
You can select different deployment modes to switch between inter-pod affinity and inter-pod anti-affinity. In Kubernetes, inter-pod affinity is specified as field `podAffinity` of field `affinity` while inter-pod anti-affinity is specified as field `podAntiAffinity` of field `affinity`. In KubeSphere, both `podAffinity` and `podAntiAffinity` are set to `preferredDuringSchedulingIgnoredDuringExecution`. You can enable **Edit Mode** in the top-right corner to see field details.
- **Pod Decentralized Deployment** represents anti-affinity.
- **Pod Aggregation Deployment** represents affinity.
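For reference, **Pod Decentralized Deployment** roughly corresponds to a Pod spec fragment like the following sketch (the label key and value are placeholders):

```yaml
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: demo-app
            # Spread Pods across nodes; use a zone topology key to spread across zones.
            topologyKey: kubernetes.io/hostname
```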
@ -22,7 +22,7 @@ Log in to the console as `project-regular`. Go to **Jobs** of a project, choose
![cronjob-list](/images/docs/project-user-guide/application-workloads/cronjobs/cronjob-list.jpg)
### Step 2: Enter basic information
Enter the basic information. You can refer to the image below for each field. When you finish, click **Next**.
@ -30,7 +30,7 @@ Enter the basic information. You can refer to the image below for each field. Wh
- **Name**: The name of the CronJob, which is also the unique identifier.
- **Alias**: The alias name of the CronJob, making resources easier to identify.
- **Schedule**: It runs a Job periodically on a given time-based schedule. See [CRON](https://en.wikipedia.org/wiki/Cron) for a syntax reference. Some preset CRON statements are provided in KubeSphere to simplify the input. This field is specified by `.spec.schedule`. For this CronJob, enter `*/1 * * * *`, which means it runs once per minute. A manifest sketch is provided after the note below.
| Type | CRON |
| ----------- | ----------- |
@ -51,7 +51,7 @@ Enter the basic information. You can refer to the image below for each field. Wh
{{< notice note >}}
You can enable **Edit Mode** in the top-right corner to see the YAML manifest of this CronJob.
{{</ notice >}}
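For reference, the schedule lands in `.spec.schedule` of the CronJob manifest. A minimal sketch, assuming the `busybox` image that is set in a later step:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: demo-cronjob
spec:
  schedule: '*/1 * * * *'        # run the Job once per minute
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: container-busybox
              image: busybox
          restartPolicy: OnFailure   # Pods of a (Cron)Job must not restart Always
```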
@ -61,7 +61,7 @@ Please refer to [Jobs](../jobs/#step-3-job-settings-optional).
### Step 4: Set an image
1. Click **Add Container Image** in **Container Image** and enter `busybox` in the search bar.
![input-busybox](/images/docs/project-user-guide/application-workloads/cronjobs/input-busybox.jpg)
@ -30,9 +30,9 @@ Log in to the console as `project-regular`. Go to **Application Workloads** of a
![daemonsets](/images/docs/project-user-guide/workloads/daemonsets.jpg)
### Step 2: Enter basic information
Specify a name for the DaemonSet (for example, `demo-daemonset`) and click **Next** to continue.
![daemonsets](/images/docs/project-user-guide/workloads/daemonsets_form_1.jpg)
@ -42,13 +42,13 @@ Specify a name for the DaemonSet (e.g. `demo-daemonset`) and click **Next** to c
![daemonsets](/images/docs/project-user-guide/workloads/daemonsets_form_2_container_btn.jpg)
2. Enter an image name from public Docker Hub or from a [private repository](../../configuration/image-registry/) you specified. For example, enter `fluentd` in the search bar and press **Enter**.
![daemonsets](/images/docs/project-user-guide/workloads/daemonsets_form_2_container_1.jpg)
{{< notice note >}}
- Remember to press **Enter** on your keyboard after you enter an image name in the search bar.
- If you want to use your private image repository, you should [create an Image Registry Secret](../../configuration/image-registry/) first in **Secrets** under **Configurations**.
{{</ notice >}}
@ -61,7 +61,7 @@ Specify a name for the DaemonSet (e.g. `demo-daemonset`) and click **Next** to c
5. Select a policy for image pulling from the drop-down menu. For more information, see [Image Pull Policy in Container Image Settings](../container-image-settings/#add-container-image).
6. For other settings (**Health Checker**, **Start Command**, **Environment Variables**, **Container Security Context** and **Sync Host Timezone**), you can configure them on the dashboard as well. For more information, see detailed explanations of these properties in [Container Image Settings](../container-image-settings/#add-container-image). When you finish, click **√** in the bottom-right corner to continue.
7. Select an update strategy from the drop-down menu. It is recommended you choose **RollingUpdate**. For more information, see [Update Strategy](../container-image-settings/#update-strategy).
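Put together, the settings in this example correspond roughly to the following manifest, a minimal sketch assuming the `fluentd` image and placeholder labels:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: demo-daemonset
spec:
  selector:
    matchLabels:
      app: demo-daemonset
  template:
    metadata:
      labels:
        app: demo-daemonset    # must match spec.selector
    spec:
      containers:
        - name: container-fluentd
          image: fluentd
  updateStrategy:
    type: RollingUpdate        # replace Pods gradually on updates
```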
@ -23,9 +23,9 @@ Log in to the console as `project-regular`. Go to **Application Workloads** of a
![deployments](/images/docs/project-user-guide/workloads/deployments.png)
### Step 2: Enter basic information
Specify a name for the Deployment (for example, `demo-deployment`) and click **Next** to continue.
![deployments](/images/docs/project-user-guide/workloads/deployments_form_1.jpg)
@ -34,7 +34,7 @@ Specify a name for the Deployment (e.g. `demo-deployment`) and click **Next** to
1. Before you set an image, define the number of replicated Pods in **Pod Replicas** by clicking the **plus** or **minus** icon, which is indicated by the `.spec.replicas` field in the manifest file.
{{< notice tip >}}
You can see the Deployment manifest file in YAML format by enabling **Edit Mode** in the top-right corner. KubeSphere allows you to edit the manifest file directly to create a Deployment. Alternatively, you can follow the steps below to create a Deployment via the dashboard.
{{</ notice >}}
![deployments](/images/docs/project-user-guide/workloads/deployments_form_2.jpg)
@ -43,13 +43,13 @@ You can see the Deployment manifest file in YAML format by enabling **Edit Mode*
![deployments](/images/docs/project-user-guide/workloads/deployments_form_2_container_btn.jpg)
3. Enter an image name from public Docker Hub or from a [private repository](../../configuration/image-registry/) you specified. For example, enter `nginx` in the search bar and press **Enter**.
![deployments](/images/docs/project-user-guide/workloads/deployments_form_2_container_1.jpg)
{{< notice note >}}
- Remember to press **Enter** on your keyboard after you enter an image name in the search bar.
- If you want to use your private image repository, you should [create an Image Registry Secret](../../configuration/image-registry/) first in **Secrets** under **Configurations**.
{{</ notice >}}
@ -62,7 +62,7 @@ You can see the Deployment manifest file in YAML format by enabling **Edit Mode*
6. Select a policy for image pulling from the drop-down menu. For more information, see [Image Pull Policy in Container Image Settings](../container-image-settings/#add-container-image).
7. For other settings (**Health Checker**, **Start Command**, **Environment Variables**, **Container Security Context** and **Sync Host Timezone**), you can configure them on the dashboard as well. For more information, see detailed explanations of these properties in [Container Image Settings](../container-image-settings/#add-container-image). When you finish, click **√** in the bottom-right corner to continue.
8. Select an update strategy from the drop-down menu. It is recommended you choose **RollingUpdate**. For more information, see [Update Strategy](../container-image-settings/#update-strategy).
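For comparison with what **Edit Mode** displays, the choices above boil down to roughly this manifest, a minimal sketch with placeholder labels:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 1                  # set by the plus/minus icons in Pod Replicas
  selector:
    matchLabels:
      app: demo-deployment
  template:
    metadata:
      labels:
        app: demo-deployment   # must match spec.selector
    spec:
      containers:
        - name: container-nginx
          image: nginx
  strategy:
    type: RollingUpdate
```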
@ -1,14 +1,14 @@
---
title: "Horizontal Pod Autoscaling"
title: "Kubernetes HPA (Horizontal Pod Autoscaling) on KubeSphere"
keywords: "Horizontal, Pod, Autoscaling, Autoscaler"
description: "How to configure Horizontal Pod Autoscaling on KubeSphere."
description: "How to configure Kubernetes Horizontal Pod Autoscaling on KubeSphere."
weight: 10290
---
This document describes how to configure Horizontal Pod Autoscaling (HPA) on KubeSphere.
The Kubernetes HPA feature automatically adjusts the number of Pods to maintain average resource usage (CPU and memory) of Pods around preset values. For details about how HPA functions, see the [official Kubernetes document](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/).
This document uses HPA based on CPU usage as an example. Operations for HPA based on memory usage are similar.
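Under the hood, the configuration creates a standard `HorizontalPodAutoscaler` object. A minimal CPU-based sketch, assuming a target Deployment named `hpa-v1` and placeholder replica limits:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-v1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-v1
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50   # keep average CPU usage of Pods around 50%
```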
@ -50,7 +50,7 @@ This document uses HPA based on CPU usage as an example. Operations for HPA base
7. Click **Next** on the **Mount Volumes** tab and click **Create** on the **Advanced Settings** tab.
## Configure Kubernetes HPA
1. Choose **Deployments** in **Workloads** on the left navigation bar and click the HPA Deployment (for example, `hpa-v1`) on the right.
@ -25,7 +25,7 @@ Log in to the console as `project-regular`. Go to **Jobs** under **Application W
![create-job](/images/docs/project-user-guide/application-workloads/jobs/create-job.jpg)
### Step 2: Enter basic information
Enter the basic information. Refer to the image below as an example.
@ -62,7 +62,7 @@ You can set the values in this step as below or click **Next** to use the defaul
![add-container-image-job](/images/docs/project-user-guide/application-workloads/jobs/add-container-image-job.png)
3. On the same page, scroll down to **Start Command**. Enter the following command in the box, which computes pi to 2000 places and prints it. Click **√** in the bottom-right corner and select **Next** to continue.
```bash
perl,-Mbignum=bpi,-wle,print bpi(2000)
```
@ -76,7 +76,7 @@ For more information about setting images, see [Container Image Settings](../con
### Step 5: Inspect the Job manifest (optional)
1. Enable **Edit Mode** in the top-right corner which displays the manifest file of the Job. You can see all the values are set based on what you have specified in the previous steps.
```yaml
apiVersion: batch/v1
kind: Job
```
@ -145,7 +145,7 @@ You can rerun the Job if it fails, the reason of which displays under **Messages
{{< notice tip >}}
- In **Resource Status**, the Pod list provides the Pod's detailed information (for example, creation time, node, Pod IP and monitoring data).
- You can view the container information by clicking the Pod.
- Click the container log icon to view the output logs of the container.
- You can view the Pod detail page by clicking the Pod name.
@ -78,7 +78,7 @@ The steps of creating a stateful Service and a stateless Service are basically t
{{</ notice >}}
### Step 2: Enter basic information
1. In the dialog that appears, you can see the field **Version** prepopulated with `v1`. You need to define a name for the Service, such as `demo-service`. When you finish, click **Next** to continue.
@ -90,7 +90,7 @@ The steps of creating a stateful Service and a stateless Service are basically t
{{< notice tip >}}
The value of **Name** is used in both configurations, one for Deployment and the other for Service. You can see the manifest file of the Deployment and the Service by enabling **Edit Mode** in the top-right corner. Below is an example file for your reference.
{{</ notice >}}
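As a reference, the Service half of such a pairing looks roughly like the following sketch, assuming the `demo-service` name from this example and a container listening on port 80:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  selector:
    app: demo-service    # must match the Pod labels of the Deployment
  ports:
    - protocol: TCP
      port: 80           # port exposed by the Service
      targetPort: 80     # port the container listens on
```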
@ -35,9 +35,9 @@ Log in to the console as `project-regular`. Go to **Application Workloads** of a
![statefulsets](/images/docs/project-user-guide/workloads/statefulsets.jpg)
### Step 2: Enter basic information
Specify a name for the StatefulSet (for example, `demo-stateful`) and click **Next** to continue.
![statefulsets](/images/docs/project-user-guide/workloads/statefulsets_form_1.jpg)
@ -47,7 +47,7 @@ Specify a name for the StatefulSet (e.g. `demo-stateful`) and click **Next** to
{{< notice tip >}}
You can see the StatefulSet manifest file in YAML format by enabling **Edit Mode** in the top-right corner. KubeSphere allows you to edit the manifest file directly to create a StatefulSet. Alternatively, you can follow the steps below to create a StatefulSet via the dashboard.
{{</ notice >}}
@ -57,13 +57,13 @@ You can see the StatefulSet manifest file in YAML format by enabling **Edit Mode
![statefulsets](/images/docs/project-user-guide/workloads/statefulsets_form_2_container_btn.jpg)
3. Enter an image name from public Docker Hub or from a [private repository](../../configuration/image-registry/) you specified. For example, enter `nginx` in the search bar and press **Enter**.
![statefulsets](/images/docs/project-user-guide/workloads/statefulsets_form_2_container_1.jpg)
{{< notice note >}}
- Remember to press **Enter** on your keyboard after you enter an image name in the search bar.
- If you want to use your private image repository, you should [create an Image Registry Secret](../../configuration/image-registry/) first in **Secrets** under **Configurations**.
{{</ notice >}}
@ -76,7 +76,7 @@ You can see the StatefulSet manifest file in YAML format by enabling **Edit Mode
6. Select a policy for image pulling from the drop-down menu. For more information, see [Image Pull Policy in Container Image Settings](../container-image-settings/#add-container-image).
7. For other settings (**Health Checker**, **Start Command**, **Environment Variables**, **Container Security Context** and **Sync Host Timezone**), you can configure them on the dashboard as well. For more information, see detailed explanations of these properties in [Container Image Settings](../container-image-settings/#add-container-image). When you finish, click **√** in the bottom-right corner to continue.
8. Select an update strategy from the drop-down menu. It is recommended you choose **RollingUpdate**. For more information, see [Update Strategy](../container-image-settings/#update-strategy).
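Unlike a Deployment, a StatefulSet keeps a stable identity for each Pod and is governed by a headless Service. A minimal sketch, assuming the names used in this example (`serviceName` is a placeholder for the headless Service created alongside it):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-stateful
spec:
  serviceName: demo-stateful   # headless Service that governs this StatefulSet
  replicas: 1
  selector:
    matchLabels:
      app: demo-stateful
  template:
    metadata:
      labels:
        app: demo-stateful
    spec:
      containers:
        - name: container-nginx
          image: nginx
```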
@ -6,7 +6,7 @@ linkTitle: "App Templates"
weight: 10110
---
An app template serves as a way for users to upload, deliver and manage apps. Generally, an app is composed of one or more Kubernetes workloads (for example, [Deployments](../../../project-user-guide/application-workloads/deployments/), [StatefulSets](../../../project-user-guide/application-workloads/statefulsets/) and [DaemonSets](../../../project-user-guide/application-workloads/daemonsets/)) and [Services](../../../project-user-guide/application-workloads/services/) based on how it functions and communicates with the external environment. Apps that are uploaded as app templates are built based on a [Helm](https://helm.sh/) package.
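A Helm package is a directory (or its `.tgz` archive) containing a `Chart.yaml` that describes the app and a `templates/` directory holding the workload and Service manifests. A minimal sketch of the metadata file, with a placeholder name and version:

```yaml
# Chart.yaml
apiVersion: v2          # Helm 3 chart format
name: demo-app
version: 0.1.0          # chart version
description: A sample application packaged for delivery as an app template
```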
## How App Templates Work
@ -30,7 +30,7 @@ KubeSphere deploys app repository services based on [OpenPitrix](https://github.
## Why App Templates
App templates enable users to deploy and manage apps in a visualized way. Internally, they play an important role as shared resources (for example, databases, middleware and operating systems) created by enterprises for the coordination and cooperation within teams. Externally, app templates set industry standards of building and delivery. Users can take advantage of app templates in different scenarios to meet their own needs through one-click deployment.
In addition, as OpenPitrix is integrated into KubeSphere to provide application management across the entire lifecycle, the platform allows ISVs, developers and regular users to all participate in the process. Backed by the multi-tenant system of KubeSphere, each tenant is only responsible for their own part, such as app uploading, app review, release, testing, and version management. Ultimately, enterprises can build their own App Store and enrich their application pools with their customized standards. As such, apps can also be delivered in a standardized fashion.