fix conflicts

Signed-off-by: FeynmanZhou <pengfeizhou@yunify.com>
FeynmanZhou 2021-01-11 11:13:57 +08:00
commit e70a7e7cee
622 changed files with 4250 additions and 3172 deletions

View File

@ -44,7 +44,7 @@ Give a title first before you write a paragraph. It can be grouped into differen
- When you submit your md files to GitHub, make sure you add related image files that appear in md files in the pull request as well. Please save your image files in static/images/docs. You can create a folder in the directory to save your images.
- If you want to add remarks (e.g. put a box on a UI button), use the color **green**. As some screenshot apps do not support the color picking function for a specific color code, as long as the color is **similar** to #09F709, #00FF00 or #09F738, it is acceptable.
- Make sure images in your guide match the content. For example, you mention that users need to log in KubeSphere using an account of a role; this means the account that displays in your image is expected to be the one you are talking about. It confuses your readers if the content you are describing is not consistent with the image used.
- Make sure images in your guide match the content. For example, you mention that users need to log in to KubeSphere using an account of a role; this means the account that displays in your image is expected to be the one you are talking about. It confuses your readers if the content you are describing is not consistent with the image used.
- Recommended: [Xnip](https://xnipapp.com/) for Mac and [Sniptool](https://www.reasyze.com/sniptool/) for Windows.
@ -122,7 +122,7 @@ kubectl edit svc ks-console -o yaml -n kubesphere-system
| Do | Don't |
| ------------------------------------------------------ | ---------------------------------------------------- |
| Log in the console as `admin`. | Log in the console as admin. |
| Log in to the console as `admin`. | Log in to the console as admin. |
| The account will be assigned the role `users-manager`. | The account will be assigned the role users-manager. |
### Code Comments

View File

@ -69,7 +69,7 @@ As I will upload individual Helm charts of TiDB later, I need to first download
Now that you have Helm charts ready, you can upload them to KubeSphere as app templates.
1. Log in the web console of KubeSphere. As I described in my last blog, you need to create a workspace before you create any resources in it. You can see [the official documentation of KubeSphere](https://kubesphere.io/docs/quick-start/create-workspace-and-project/) to learn how to create a workspace.
1. Log in to the web console of KubeSphere. As I described in my last blog, you need to create a workspace before you create any resources in it. You can see [the official documentation of KubeSphere](https://kubesphere.io/docs/quick-start/create-workspace-and-project/) to learn how to create a workspace.
![create-workspace](https://ap3.qingstor.com/kubesphere-website/docs/20201026192648.png)

View File

@ -26,7 +26,7 @@ As you can imagine, the very first thing to consider is to have a Kubernetes clu
Therefore, I select QingCloud Kubernetes Engine (QKE) to prepare the environment. In fact, you can also use instances on the platform directly and [deploy a highly-available Kubernetes cluster with KubeSphere installed](https://kubesphere.io/docs/installing-on-linux/public-cloud/kubesphere-on-qingcloud-instance/). Here is how I deploy the cluster and TiDB:
1. Log in the [web console of QingCloud](https://console.qingcloud.com/). Simply select **KubeSphere (QKE)** from the menu and create a Kubernetes cluster with KubeSphere installed. The platform allows you to install different components of KubeSphere. Here, we need to enable [OpenPitrix](https://github.com/openpitrix/openpitrix), which powers the app management feature in KubeSphere.
1. Log in to the [web console of QingCloud](https://console.qingcloud.com/). Simply select **KubeSphere (QKE)** from the menu and create a Kubernetes cluster with KubeSphere installed. The platform allows you to install different components of KubeSphere. Here, we need to enable [OpenPitrix](https://github.com/openpitrix/openpitrix), which powers the app management feature in KubeSphere.
{{< notice note >}}
@ -36,7 +36,7 @@ Therefore, I select QingCloud Kubernetes Engine (QKE) to prepare the environment
![qingcloud-kubernetes-engine](https://ap3.qingstor.com/kubesphere-website/docs/20201026173924.png)
2. The cluster will be up and running in around 10 minutes. In this example, I select 3 working nodes to make sure I have enough resources for the deployment later. You can also customize configurations based on your needs. When the cluster is ready, log in the web console of KubeSphere with the default account and password (`admin/P@88w0rd`). Here is the cluster **Overview** page:
2. The cluster will be up and running in around 10 minutes. In this example, I select 3 working nodes to make sure I have enough resources for the deployment later. You can also customize configurations based on your needs. When the cluster is ready, log in to the web console of KubeSphere with the default account and password (`admin/P@88w0rd`). Here is the cluster **Overview** page:
![cluster-management](https://ap3.qingstor.com/kubesphere-website/docs/20201026175447.png)

View File

@ -74,4 +74,5 @@ Three grayscale strategies are provided by KubeSphere based on Istio: blue-green
## KubeSphere Installation
KubeSphere can be deployed and run on any infrastructure, including public clouds, private clouds, virtual machines, bare metals and Kubernetes. It can be installed either online or offline. Please refer to [KubeSphere Installation Guide](https://kubesphere.io/docs/installation/intro/) for installation.
KubeSphere can be deployed and run on any infrastructure, including public clouds, private clouds, virtual machines, bare metal, and Kubernetes. It can be installed either online or offline. For more information, refer to [Installing on Linux](https://kubesphere.io/docs/installing-on-linux/) and [Installing on Kubernetes](https://kubesphere.io/docs/installing-on-kubernetes/).

View File

@ -1,7 +1,7 @@
---
title: "Configure Authentication"
keywords: "LDAP, identity provider"
description: "How to configure identity provider"
description: "How to configure authentication"
linkTitle: "Configure Authentication"
weight: 12200
@ -21,18 +21,19 @@ KubeSphere includes a built-in OAuth server. Users obtain OAuth access tokens to
As an administrator, you can configure OAuth by editing the ConfigMap to specify an identity provider.
## Identity Providers
KubeSphere has an internal account management system.
You can modify the kubesphere authentication configuration using your desired identity provider by the following command:
## Authentication Configuration
KubeSphere has an internal account management system. You can modify the KubeSphere authentication configuration with the following command:
*Example Configuration*:
```bash
kubectl -n kubesphere-system edit cm kubesphere-config
```
*Example Configuration*:
```yaml
apiVersion: v1
data:
@ -51,7 +52,19 @@ data:
...
```
You can define additional authentication configuration in the `identityProviders `section.
For the above example:
| Parameter | Description |
|-----------|-------------|
| authenticateRateLimiterMaxTries | The maximum number of consecutive failed login attempts allowed before a user is blocked. |
| authenticateRateLimiterDuration | The period within which failed login attempts are counted; a user who reaches AuthenticateRateLimiterMaxTries failed attempts within this period is blocked for about the same duration. |
| loginHistoryRetentionPeriod | The retention period for login history; records older than this period are deleted. |
| maximumClockSkew | Controls the maximum allowed clock skew when performing time-sensitive operations, such as validating the expiration time of a user token. The default value is `10s`. |
| multipleLogin | Whether to allow users to log in from different locations at the same time. The default value is `true`. |
| jwtSecret | The secret used to sign user tokens. Multi-cluster environments [need to use the same secret](../../multicluster-management/enable-multicluster/direct-connection/#prepare-a-member-cluster). |
| accessTokenMaxAge | Controls the lifetime of access tokens. The default lifetime is 2 hours. Setting `accessTokenMaxAge` to 0 means tokens never expire; the value is set to 0 when the cluster role is member. |
| accessTokenInactivityTimeout | The inactivity timeout for tokens, that is, the maximum amount of time allowed between consecutive uses of a token. A token becomes invalid if it is not used within this window, and the user must acquire a new token to regain access. |
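Putting these parameters together, the `authentication` section of `kubesphere-config` might look like the following sketch (values are illustrative; they mirror the defaults shown elsewhere in this guide):

```yaml
authentication:
  authenticateRateLimiterMaxTries: 10
  authenticateRateLimiterDuration: 10m0s
  loginHistoryRetentionPeriod: 7d
  maximumClockSkew: 10s
  multipleLogin: true
  jwtSecret: "jwt secret"
  oauthOptions:
    accessTokenMaxAge: 1h
    accessTokenInactivityTimeout: 30m
```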
After modifying the identity provider configuration, you need to restart the ks-apiserver.
@ -59,7 +72,11 @@ After modifying the identity provider configuration, you need to restart the ks-
kubectl -n kubesphere-system rollout restart deploy/ks-apiserver
```
## LDAP Authentication
## Identity Providers
You can define additional authentication configuration in the `identityProviders` section.
### LDAP Authentication
Set `LDAPIdentityProvider` in the `identityProviders` section to validate usernames and passwords against an LDAPv3 server using simple bind authentication.
@ -70,7 +87,7 @@ There are four parameters common to all identity providers:
| Parameter | Description |
|-----------|-------------|
| name | The name of the identity provider is associated with the user label. |
| mappingMethod | Defines how new identities are mapped to users when they log in. |
| mappingMethod | The account mapping configuration. You can use different mapping methods, such as:<br/>- `auto`: The default value. The user account will be automatically created and mapped if the login is successful. <br/>- `lookup`: Using this method requires you to manually provision accounts. |
*Example Configuration Using LDAPIdentityProvider*:
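A minimal sketch of such a configuration, assuming the common LDAPIdentityProvider fields (the host, bind DN, and search base below are placeholders for your own LDAP server):

```yaml
- name: ldap
  type: LDAPIdentityProvider
  mappingMethod: auto
  provider:
    host: 192.168.0.2:389
    managerDN: uid=root,cn=users,dc=example,dc=org
    managerPassword: "********"
    userSearchBase: ou=Users,dc=example,dc=org
    loginAttribute: uid
    mailAttribute: mail
```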

View File

@ -0,0 +1,89 @@
---
title: "OAuth2 Identity Providers"
keywords: 'Kubernetes, KubeSphere, OAuth2, Identity Provider'
description: 'OAuth2 Identity Provider'
linkTitle: "OAuth2 Identity Providers"
weight: 12200
---
## Overview
You can integrate external OAuth2 providers with KubeSphere using the standard OAuth2 protocol. After accounts are authenticated by an external OAuth2 server, they can be associated with KubeSphere.
![oauth2](/images/docs/access-control-and-account-management/oauth2-identity-provider/oauth2.svg)
## GitHubIdentityProvider
KubeSphere provides you with an example of configuring GitHubIdentityProvider for OAuth2 authentication.
### Parameter Settings
To set IdentityProvider parameters, edit the `kubesphere-config` ConfigMap in the `kubesphere-system` namespace.
1. Execute the following command.
```bash
kubectl -n kubesphere-system edit cm kubesphere-config
```
2. This is an example configuration for your reference.
```yaml
apiVersion: v1
data:
  kubesphere.yaml: |
    authentication:
      authenticateRateLimiterMaxTries: 10
      authenticateRateLimiterDuration: 10m0s
      loginHistoryRetentionPeriod: 7d
      maximumClockSkew: 10s
      multipleLogin: true
      kubectlImage: kubesphere/kubectl:v1.0.0
      jwtSecret: "jwt secret"
      oauthOptions:
        accessTokenMaxAge: 1h
        accessTokenInactivityTimeout: 30m
        identityProviders:
          - name: github
            type: GitHubIdentityProvider
            mappingMethod: auto
            provider:
              clientID: 'Iv1.547165ce1cf2f590'
              clientSecret: 'c53e80ab92d48ab12f4e7f1f6976d1bdc996e0d7'
              endpoint:
                authURL: 'https://github.com/login/oauth/authorize'
                tokenURL: 'https://github.com/login/oauth/access_token'
              redirectURL: 'https://ks-console/oauth/redirect'
              scopes:
                - user
    ...
```
3. Add the configuration block for GitHubIdentityProvider in `authentication.oauthOptions.identityProviders`. See the following table for more information about different fields.
| Field | Description |
| --------------- | ------------------------------------------------------------ |
| `name` | The unique name of IdentityProvider. |
| `type` | The type of IdentityProvider plugin. GitHubIdentityProvider is a default implementation type. |
| `mappingMethod` | The account mapping configuration. You can use different mapping methods, such as:<br/>- `auto`: The default value. The user account will be automatically created and mapped if the login is successful. <br/>- `lookup`: Using this method requires you to manually provision accounts. <br/>For more information, see [the parameters in GitHub](https://github.com/kubesphere/kubesphere/blob/master/pkg/apiserver/authentication/oauth/oauth_options.go#L37-L44). |
| `clientID` | The OAuth2 client ID. |
| `clientSecret` | The OAuth2 client secret. |
| `authURL`       | The OAuth2 authorization endpoint. |
| `tokenURL`      | The OAuth2 token endpoint. |
| `redirectURL`   | The redirect URL back to ks-console. |
4. Restart `ks-apiserver` to update the configuration.
```bash
kubectl -n kubesphere-system rollout restart deploy ks-apiserver
```
5. Access the login page of the KubeSphere console and you can see the option **Log in with GitHub**.
![github-login-page](/images/docs/access-control-and-account-management/oauth2-identity-provider/github-login-page.png)
![github-authentication](/images/docs/access-control-and-account-management/oauth2-identity-provider/github-authentication.jpg)
![logged-in](/images/docs/access-control-and-account-management/oauth2-identity-provider/logged-in.png)
6. After you log in to the console, the account [can be invited to a workspace](../../workspace-administration/role-and-member-management/) to work in one or more projects.

View File

@ -1,8 +1,157 @@
---
title: "Helm Developer Guide"
keywords: 'kubernetes, kubesphere'
description: ''
keywords: 'Kubernetes, KubeSphere, helm, development'
description: 'Helm developer guide'
linkTitle: "Helm Developer Guide"
weight: 14410
---
You can upload the Helm chart of an app to KubeSphere so that tenants with necessary permissions can deploy it. This tutorial demonstrates how to prepare Helm charts, using NGINX as an example.
## Install Helm
If you have already installed KubeSphere, then Helm is deployed in your environment. Otherwise, refer to the [Helm documentation](https://helm.sh/docs/intro/install/) to install Helm first.
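For example, one common way to install Helm is through the project's installer script (a sketch; check the [Helm documentation](https://helm.sh/docs/intro/install/) for platform-specific alternatives):

```bash
# Download and run the Helm installer script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

# Verify the installation
helm version
```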
## Create a Local Repository
Execute the following commands to create a repository on your machine.
```bash
mkdir helm-repo
```
```bash
cd helm-repo
```
## Create an App
Use `helm create` to create a folder named `nginx`, which automatically creates YAML templates and directories for your app. Generally, it is not recommended to change the names of files and directories in the top-level directory.
```bash
$ helm create nginx
$ tree nginx/
nginx/
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── ingress.yaml
│   ├── NOTES.txt
│   └── service.yaml
└── values.yaml
```
`Chart.yaml` is used to define the basic information of the chart, including name, API, and app version. For more information, see [Chart.yaml File](../helm-specification/#chartyaml-file).
An example of the `Chart.yaml` file:
```yaml
apiVersion: v1
appVersion: "1.0"
description: A Helm chart for Kubernetes
name: nginx
version: 0.1.0
```
When you deploy Helm-based apps to Kubernetes, you can edit the `values.yaml` file on the KubeSphere console directly.
An example of the `values.yaml` file:
```yaml
# Default values for test.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: nginx
  tag: stable
  pullPolicy: IfNotPresent

nameOverride: ""
fullnameOverride: ""

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: false
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  path: /
  hosts:
    - chart-example.local
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}
```
Refer to [Helm Specifications](../helm-specification/) to edit files in the `nginx` folder and save them when you finish editing.
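Optionally, you can run a quick sanity check on the chart before moving on; `helm lint` examines it for syntax errors and convention violations:

```bash
# Lint the chart in the nginx directory
helm lint nginx
```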
## Create an Index File (Optional)
To add a repository with an HTTP or HTTPS URL in KubeSphere, you need to upload an `index.yaml` file to the object storage in advance. Use Helm to create the index file by executing the following command in the parent directory of `nginx`.
```bash
helm repo index .
```
```bash
$ ls
index.yaml nginx
```
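Once the chart has been packaged (see the next section), the generated `index.yaml` lists each chart version in the repository. An abridged sketch of its structure (fields such as `created` and `digest`, which Helm fills in automatically, are omitted):

```yaml
apiVersion: v1
entries:
  nginx:
    - apiVersion: v1
      appVersion: "1.0"
      description: A Helm chart for Kubernetes
      name: nginx
      urls:
        - nginx-0.1.0.tgz
      version: 0.1.0
```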
{{< notice note >}}
- If the repository URL is S3-styled, an index file will be created automatically in the object storage when you add apps to the repository.
- For more information about how to add repositories to KubeSphere, see [Import a Helm Repository](../../../workspace-administration/app-repository/import-helm-repository/).
{{</ notice >}}
## Package the Chart
Go to the parent directory of `nginx` and execute the following command to package your chart, which creates a `.tgz` package.
```bash
helm package nginx
```
```bash
$ ls
nginx nginx-0.1.0.tgz
```
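Optionally, you can verify that the package installs cleanly before uploading it (a sketch using Helm 3 syntax; the release name `my-nginx` is illustrative):

```bash
# Render and validate the package against the cluster without installing it
helm install my-nginx nginx-0.1.0.tgz --dry-run
```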
## Upload Your App
Now that you have your Helm-based app ready, you can upload it to KubeSphere and test it on the platform.
## See Also
[Helm Specifications](../helm-specification/)
[Import a Helm Repository](../../../workspace-administration/app-repository/import-helm-repository/)

View File

@ -1,8 +1,130 @@
---
title: "Helm Specification"
keywords: 'kubernetes, kubesphere'
description: 'Helm Specification'
title: "Helm Specifications"
keywords: 'Kubernetes, KubeSphere, Helm, specifications'
description: 'Helm Specifications'
linkTitle: "Helm Specifications"
weight: 14420
---
Helm charts serve as a packaging format. A chart is a collection of files that describe a related set of Kubernetes resources. For more information, see the [Helm documentation](https://helm.sh/docs/topics/charts/).
## Structure
All related files of a chart are stored in a directory, which generally contains:
```text
chartname/
  Chart.yaml          # A YAML file containing basic information about the chart, such as version and name.
  LICENSE             # (Optional) A plain text file containing the license for the chart.
  README.md           # (Optional) The description of the app and how-to guide.
  values.yaml         # The default configuration values for this chart.
  values.schema.json  # (Optional) A JSON Schema for imposing a structure on the values.yaml file.
  charts/             # A directory containing any charts upon which this chart depends.
  crds/               # Custom Resource Definitions.
  templates/          # A directory of templates that will generate valid Kubernetes configuration files with corresponding values provided.
  templates/NOTES.txt # (Optional) A plain text file with usage notes.
```
## Chart.yaml File
You must provide the `Chart.yaml` file for a chart. Here is an example of the file with explanations for each field.
```yaml
apiVersion: (Required) The chart API version.
name: (Required) The name of the chart.
version: (Required) The version, following the SemVer 2 standard.
kubeVersion: (Optional) The compatible Kubernetes version, following the SemVer 2 standard.
description: (Optional) A single-sentence description of the app.
type: (Optional) The type of the chart.
keywords:
  - (Optional) A list of keywords about the app.
home: (Optional) The URL of the app.
sources:
  - (Optional) A list of URLs to source code for this app.
dependencies: (Optional) A list of the chart requirements.
  - name: The name of the chart, such as nginx.
    version: The version of the chart, such as "1.2.3".
    repository: The repository URL ("https://example.com/charts") or alias ("@repo-name").
    condition: (Optional) A yaml path that resolves to a boolean, used for enabling/disabling charts (e.g. subchart1.enabled).
    tags: (Optional)
      - Tags can be used to group charts for enabling/disabling together.
    import-values: (Optional)
      - ImportValues holds the mapping of source values to parent key to be imported. Each item can be a string or pair of child/parent sublist items.
    alias: (Optional) Alias to be used for the chart. It is useful when you have to add the same chart multiple times.
maintainers: (Optional)
  - name: (Required) The maintainer name.
    email: (Optional) The maintainer email.
    url: (Optional) A URL for the maintainer.
icon: (Optional) A URL to an SVG or PNG image to be used as an icon.
appVersion: (Optional) The app version. This needn't be SemVer.
deprecated: (Optional, boolean) Whether this chart is deprecated.
annotations:
  example: (Optional) A list of annotations keyed by name.
```
{{< notice note >}}
- The field `dependencies` is used to define chart dependencies, which were located in a separate `requirements.yaml` file for `v1` charts. For more information, see [Chart Dependencies](https://helm.sh/docs/topics/charts/#chart-dependencies).
- The field `type` is used to define the type of chart. Allowed values are `application` and `library`. For more information, see [Chart Types](https://helm.sh/docs/topics/charts/#chart-types).
{{</ notice >}}
## Values.yaml and Templates
Written in the [Go template language](https://golang.org/pkg/text/template/), Helm chart templates are stored in the `templates` folder of a chart. There are two ways to provide values for the templates:
1. Make a `values.yaml` file inside of a chart with default values that can be referenced.
2. Make a YAML file that contains necessary values and use the file through the command line with `helm install`.
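In practice, the second approach looks like this (a sketch using Helm 3 syntax; the release name and file name are illustrative):

```bash
# Install the chart with values from a custom file overriding the defaults
helm install my-release ./chartname --values custom-values.yaml
```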
Here is an example of the template in the `templates` folder.
```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: deis-database
  namespace: deis
  labels:
    app.kubernetes.io/managed-by: deis
spec:
  replicas: 1
  selector:
    app.kubernetes.io/name: deis-database
  template:
    metadata:
      labels:
        app.kubernetes.io/name: deis-database
    spec:
      serviceAccount: deis-database
      containers:
        - name: deis-database
          image: {{.Values.imageRegistry}}/postgres:{{.Values.dockerTag}}
          imagePullPolicy: {{.Values.pullPolicy}}
          ports:
            - containerPort: 5432
          env:
            - name: DATABASE_STORAGE
              value: {{default "minio" .Values.storage}}
```
The above example defines a ReplicationController template in Kubernetes. There are some values referenced in it, which are defined in `values.yaml`:
- `imageRegistry`: The Docker image registry.
- `dockerTag`: The Docker image tag.
- `pullPolicy`: The image pulling policy.
- `storage`: The storage backend. It defaults to `minio`.
An example `values.yaml` file:
```text
imageRegistry: "quay.io/deis"
dockerTag: "latest"
pullPolicy: "Always"
storage: "s3"
```
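To preview how these values are substituted into the template, you can render the chart locally without installing anything (a sketch using Helm 3 syntax; the release and chart names are illustrative):

```bash
# Render the templates to stdout using the values file
helm template my-release ./chartname --values values.yaml
```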
## Reference
[Helm Documentation](https://helm.sh/docs/)
[Charts](https://helm.sh/docs/topics/charts/)

View File

@ -18,7 +18,7 @@ Using [Redis](https://redis.io/) as an example application, this tutorial demons
## Prerequisites
- You need to enable [KubeSphere App Store (OpenPitrix)](../../pluggable-components/app-store/).
- You need to create a workspace, a project and an account (`project-regular`). For more information, see [Create Workspace, Project, Account and Role](../../quick-start/create-workspace-and-project/).
- You need to create a workspace, a project and an account (`project-regular`). For more information, see [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/).
## Hands-on Lab
@ -26,7 +26,7 @@ Using [Redis](https://redis.io/) as an example application, this tutorial demons
You need to create two accounts first, one for ISVs (`isv`) and the other (`reviewer`) for app technical reviewers.
1. Log in the KubeSphere console with the account `admin`. Click **Platform** in the top left corner and select **Access Control**. In **Account Roles**, click **Create**.
1. Log in to the KubeSphere console with the account `admin`. Click **Platform** in the top left corner and select **Access Control**. In **Account Roles**, click **Create**.
![create-role](/images/docs/appstore/application-lifecycle-management/create-role.jpg)
@ -56,7 +56,7 @@ You need to create two accounts first, one for ISVs (`isv`) and the other (`revi
### Step 2: Upload and submit an application
1. Log in KubeSphere as `isv` and go to your workspace. You need to upload the example app Redis to this workspace so that it can be used later. First, download the app [Redis 11.3.4](https://github.com/kubesphere/tutorial/raw/master/tutorial%205%20-%20app-store/redis-11.3.4.tgz) and click **Upload Template** in **App Templates**.
1. Log in to KubeSphere as `isv` and go to your workspace. You need to upload the example app Redis to this workspace so that it can be used later. First, download the app [Redis 11.3.4](https://github.com/kubesphere/tutorial/raw/master/tutorial%205%20-%20app-store/redis-11.3.4.tgz) and click **Upload Template** in **App Templates**.
![upload-app](/images/docs/appstore/application-lifecycle-management/upload-app.jpg)
@ -175,7 +175,7 @@ After the app is approved, `isv` can release the Redis application to the App St
`reviewer` can create multiple categories for different types of applications based on their function and usage. It is similar to setting tags, and categories can be used in the App Store as filters, such as Big Data, Middleware, and IoT.
1. Log in KubeSphere as `reviewer`. To create a category, go to the **App Store Management** page and click the plus icon in **App Categories**.
1. Log in to KubeSphere as `reviewer`. To create a category, go to the **App Store Management** page and click the plus icon in **App Categories**.
![app-category](/images/docs/appstore/application-lifecycle-management/app-category.jpg)
@ -205,7 +205,7 @@ After the app is approved, `isv` can release the Redis application to the App St
To allow workspace users to upgrade apps, you need to add new app versions to KubeSphere first. Follow the steps below to add a new version for the example app.
1. Log in KubeSphere as `isv` again and navigate to **App Templates**. Click the app Redis in the list.
1. Log in to KubeSphere as `isv` again and navigate to **App Templates**. Click the app Redis in the list.
![redis-new-version](/images/docs/appstore/application-lifecycle-management/redis-new-version.jpg)
@ -233,7 +233,7 @@ To follow the steps below, you must deploy an app of one of its old versions fir
{{</ notice >}}
1. Log in KubeSphere as `project-regular`, navigate to the **Applications** page of the project, and click the app to be upgraded.
1. Log in to KubeSphere as `project-regular`, navigate to the **Applications** page of the project, and click the app to be upgraded.
![app-to-be-upgraded](/images/docs/appstore/application-lifecycle-management/app-to-be-upgraded.jpg)
@ -257,7 +257,7 @@ To follow the steps below, you must deploy an app of one of its old versions fir
You can choose to remove an app entirely from the App Store or suspend a specific app version.
1. Log in KubeSphere as `reviewer`. Click **Platform** in the top left corner and go to **App Store Management**. On the **App Store** page, click Redis.
1. Log in to KubeSphere as `reviewer`. Click **Platform** in the top left corner and go to **App Store Management**. On the **App Store** page, click Redis.
![remove-app](/images/docs/appstore/application-lifecycle-management/remove-app.jpg)

View File

@ -13,11 +13,11 @@ This tutorial walks you through an example of deploying etcd from the App Store
## Prerequisites
- Please make sure you [enable the OpenPitrix system](https://kubesphere.io/docs/pluggable-components/app-store/).
- You need to create a workspace, a project, and a user account (`project-regular`) for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project/).
- You need to create a workspace, a project, and a user account (`project-regular`) for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
## Hands-on Lab
### Step 1: Deploy etcd from App Store
### Step 1: Deploy etcd from the App Store
1. On the **Overview** page of the project `demo-project`, click **App Store** in the top left corner.
@ -47,11 +47,11 @@ This tutorial walks you through an example of deploying etcd from the App Store
![etcd-running](/images/docs/appstore/built-in-apps/etcd-app/etcd-running.jpg)
### Step 2: Access etcd Service
### Step 2: Access the etcd Service
After the app is deployed, you can use etcdctl, a command-line tool for interacting with the etcd server, to access etcd on the KubeSphere console directly.
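For example, once you open the terminal of an etcd Pod in the steps below, a quick read/write test generally looks like this (assuming the etcd v3 API):

```bash
# Write a key, then read it back
etcdctl put /test "Hello etcd"
etcdctl get /test
```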
1. Navigate to **StatefulSets** in **Workloads**, click the service name of etcd.
1. Navigate to **StatefulSets** in **Workloads**, and click the service name of etcd.
![etcd-statefulset](/images/docs/appstore/built-in-apps/etcd-app/etcd-statefulset.jpg)

View File

@ -12,11 +12,11 @@ This tutorial walks you through an example of deploying Memcached from the App S
## Prerequisites
- Please make sure you [enable the OpenPitrix system](https://kubesphere.io/docs/pluggable-components/app-store/).
- You need to create a workspace, a project, and a user account (`project-regular`) for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project/).
- You need to create a workspace, a project, and a user account (`project-regular`) for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
## Hands-on Lab
### Step 1: Deploy Memcached from App Store
### Step 1: Deploy Memcached from the App Store
1. On the **Overview** page of the project `demo-project`, click **App Store** in the top left corner.
@ -42,7 +42,7 @@ This tutorial walks you through an example of deploying Memcached from the App S
### Step 2: Access Memcached
1. Navigate to **Services**, click the service name of Memcached.
1. Navigate to **Services**, and click the service name of Memcached.
![memcached-service](/images/docs/appstore/built-in-apps/memcached-app/memcached-service.jpg)

View File

@ -3,7 +3,6 @@ title: "Deploy MinIO on KubeSphere"
keywords: 'Kubernetes, KubeSphere, MinIO, app-store'
description: 'How to deploy MinIO on KubeSphere from the App Store of KubeSphere'
linkTitle: "Deploy MinIO on KubeSphere"
weight: 14240
---
[MinIO](https://min.io/) object storage is designed for high performance and the S3 API. It is ideal for large, private cloud environments with stringent security requirements and delivers mission-critical availability across a diverse range of workloads.
@ -13,11 +12,11 @@ This tutorial walks you through an example of deploying MinIO from the App Store
## Prerequisites
- Please make sure you [enable the OpenPitrix system](../../../pluggable-components/app-store/).
- You need to create a workspace, a project, and a user account (`project-regular`) for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project/).
- You need to create a workspace, a project, and a user account (`project-regular`) for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
## Hands-on Lab
### Step 1: Deploy MinIO from App Store
### Step 1: Deploy MinIO from the App Store
1. On the **Overview** page of the project `demo-project`, click **App Store** in the top left corner.
@ -41,7 +40,7 @@ This tutorial walks you through an example of deploying MinIO from the App Store
![minio-in-list](/images/docs/appstore/built-in-apps/minio-app/minio-in-list.jpg)
### Step 2: Access MinIO Browser
### Step 2: Access the MinIO Browser
To access MinIO outside the cluster, you need to expose the app through NodePort first.

View File

@ -13,11 +13,11 @@ This tutorial walks you through an example of deploying MongoDB from the App Sto
## Prerequisites
- Please make sure you [enable the OpenPitrix system](../../../pluggable-components/app-store/).
- You need to create a workspace, a project, and a user account (`project-regular`) for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project/).
- You need to create a workspace, a project, and a user account (`project-regular`) for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
## Hands-on Lab
### Step 1: Deploy MongoDB from App Store
### Step 1: Deploy MongoDB from the App Store
1. On the **Overview** page of the project `demo-project`, click **App Store** in the top left corner.
@ -47,7 +47,7 @@ This tutorial walks you through an example of deploying MongoDB from the App Sto
![mongodb-running](/images/docs/appstore/built-in-apps/mongodb-app/mongodb-running.jpg)
### Step 2: Access MongoDB Terminal
### Step 2: Access the MongoDB Terminal
1. Go to **Services** and click the service name of MongoDB.

View File

@ -13,11 +13,11 @@ This tutorial walks you through an example of deploying MySQL from the App Store
## Prerequisites
- Please make sure you [enable the OpenPitrix system](https://kubesphere.io/docs/pluggable-components/app-store/).
- You need to create a workspace, a project, and a user account for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project/).
- You need to create a workspace, a project, and a user account for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
## Hands-on Lab
### Step 1: Deploy MySQL from App Store
### Step 1: Deploy MySQL from the App Store
1. On the **Overview** page of the project `demo-project`, click **App Store** in the top left corner.
@ -33,7 +33,7 @@ This tutorial walks you through an example of deploying MySQL from the App Store
![deploy-mysql](/images/docs/appstore/built-in-apps/mysql-app/deploy-mysql.jpg)
4. In **App Config**, uncomment the `mysqlRootPassword` field or customize the password. Click **Deploy** to continue.
4. In **App Config**, uncomment the `mysqlRootPassword` field and customize the password. Click **Deploy** to continue.
![uncomment-password](/images/docs/appstore/built-in-apps/mysql-app/uncomment-password.jpg)
@ -41,7 +41,7 @@ This tutorial walks you through an example of deploying MySQL from the App Store
![mysql-running](/images/docs/appstore/built-in-apps/mysql-app/mysql-running.jpg)
### Step 2: Access MySQL Terminal
### Step 2: Access the MySQL Terminal
1. Go to **Workloads** and click the service name of MySQL.
@ -51,11 +51,11 @@ This tutorial walks you through an example of deploying MySQL from the App Store
![mysql-teminal](/images/docs/appstore/built-in-apps/mysql-app/mysql-teminal.jpg)
3. In the terminal, execute `mysql -uroot -ptesting` to log in MySQL as the root user.
3. In the terminal, execute `mysql -uroot -ptesting` to log in to MySQL as the root user.
![log-in-mysql](/images/docs/appstore/built-in-apps/mysql-app/log-in-mysql.jpg)
### Step 3: Access MySQL Database outside Cluster
### Step 3: Access the MySQL Database outside the Cluster
To access MySQL outside the cluster, you need to expose the app through NodePort first.
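After the Service is exposed, connecting from outside the cluster generally takes this form (`<NodeIP>` and `<NodePort>` are placeholders for your own values):

```bash
# Connect to MySQL through the node address and the exposed NodePort
mysql -h <NodeIP> -P <NodePort> -uroot -p
```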
@ -88,4 +88,3 @@ To access MySQL outside the cluster, you need to expose the app through NodePort
{{</ notice >}}
6. For more information about MySQL, refer to [the official documentation of MySQL](https://dev.mysql.com/doc/).

View File

@ -13,11 +13,11 @@ This tutorial walks you through an example of deploying NGINX from the App Store
## Prerequisites
- Please make sure you [enable the OpenPitrix system](../../../pluggable-components/app-store/).
- You need to create a workspace, a project, and a user account (`project-regular`) for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project/).
- You need to create a workspace, a project, and a user account (`project-regular`) for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
## Hands-on Lab
### Step 1: Deploy NGINX from App Store
### Step 1: Deploy NGINX from the App Store
1. On the **Overview** page of the project `demo-project`, click **App Store** in the top left corner.

View File

@ -13,11 +13,11 @@ This tutorial walks you through an example of how to deploy PostgreSQL from the
## Prerequisites
- Please make sure you [enable the OpenPitrix system](../../../pluggable-components/app-store/).
- You need to create a workspace, a project, and a user account (`project-regular`) for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project/).
- You need to create a workspace, a project, and a user account (`project-regular`) for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
## Hands-on Lab
### Step 1: Deploy PostgreSQL from App Store
### Step 1: Deploy PostgreSQL from the App Store
1. On the **Overview** page of the project `demo-project`, click **App Store** in the top left corner.
@ -47,9 +47,9 @@ This tutorial walks you through an example of how to deploy PostgreSQL from the
![postgresql-ready](/images/docs/appstore/built-in-apps/postgresql-app/postgresql-ready.jpg)
### Step 2: Access PostgreSQL Database
### Step 2: Access the PostgreSQL Database
To access MySQL outside the cluster, you need to expose the app through NodePort first.
To access PostgreSQL outside the cluster, you need to expose the app through NodePort first.
1. Go to **Services** and click the service name of PostgreSQL.

View File

@ -2,8 +2,7 @@
title: "Deploy RabbitMQ on KubeSphere"
keywords: 'KubeSphere, RabbitMQ, Kubernetes, Installation'
description: 'How to deploy RabbitMQ on KubeSphere through App Store'
link title: "Deploy RabbitMQ"
linkTitle: "Deploy RabbitMQ on KubeSphere"
weight: 14290
---
[RabbitMQ](https://www.rabbitmq.com/) is the most widely deployed open-source message broker. It is lightweight and easy to deploy on premises and in the cloud. It supports multiple messaging protocols. RabbitMQ can be deployed in distributed and federated configurations to meet high-scale, high-availability requirements.
@ -13,11 +12,11 @@ This tutorial walks you through an example of how to deploy RabbitMQ from the Ap
## Prerequisites
- Please make sure you [enable the OpenPitrix system](https://kubesphere.io/docs/pluggable-components/app-store/).
- You need to create a workspace, a project, and a user account for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project/).
- You need to create a workspace, a project, and a user account for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
## Hands-on Lab
### Step 1: Deploy RabbitMQ from App Store
### Step 1: Deploy RabbitMQ from the App Store
1. On the **Overview** page of the project `demo-project`, click **App Store** in the top left corner.
@ -49,7 +48,7 @@ This tutorial walks you through an example of how to deploy RabbitMQ from the Ap
![check-if-rabbitmq-is-running](/images/docs/appstore/built-in-apps/rabbitmq-app/rabbitmq05.jpg)
### Step 2: Access RabbitMQ Dashboard
### Step 2: Access the RabbitMQ Dashboard
To access RabbitMQ outside the cluster, you need to expose the app through NodePort first.

View File

@ -13,11 +13,11 @@ This tutorial walks you through an example of deploying Redis from the App Store
## Prerequisites
- Please make sure you [enable the OpenPitrix system](../../../pluggable-components/app-store/).
- You need to create a workspace, a project, and a user account (`project-regular`) for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project/).
- You need to create a workspace, a project, and a user account (`project-regular`) for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
## Hands-on Lab
### Step 1: Deploy Redis from App Store
### Step 1: Deploy Redis from the App Store
1. On the **Overview** page of the project `demo-project`, click **App Store** in the top left corner.
@ -47,7 +47,7 @@ This tutorial walks you through an example of deploying Redis from the App Store
![redis-running](/images/docs/appstore/built-in-apps/redis-app/redis-running.jpg)
### Step 2: Access Redis Terminal
### Step 2: Access the Redis Terminal
1. Go to **Services** and click the service name of Redis.

View File

@ -2,8 +2,7 @@
title: "Deploy Tomcat on KubeSphere"
keywords: 'KubeSphere, Kubernetes, Installation, Tomcat'
description: 'How to deploy Tomcat on KubeSphere through App Store'
link title: "Deploy Tomcat"
linkTitle: "Deploy Tomcat on KubeSphere"
weight: 14292
---
[Apache Tomcat](https://tomcat.apache.org/index.html) powers numerous large-scale, mission-critical web applications across a diverse range of industries and organizations. Tomcat provides a pure Java HTTP web server environment in which Java code can run.
@ -13,11 +12,11 @@ This tutorial walks you through an example of deploying Tomcat from the App Stor
## Prerequisites
- Please make sure you [enable the OpenPitrix system](../../../pluggable-components/app-store/).
- You need to create a workspace, a project, and a user account for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project/).
- You need to create a workspace, a project, and a user account for this tutorial. The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-regular` and work in the project `demo-project` in the workspace `demo-workspace`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
## Hands-on Lab
### Step 1: Deploy Tomcat from App Store
### Step 1: Deploy Tomcat from the App Store
1. On the **Overview** page of the project `demo-project`, click **App Store** in the top left corner.
@ -41,7 +40,7 @@ This tutorial walks you through an example of deploying Tomcat from the App Stor
![tomcat-running](/images/docs/appstore/built-in-apps/tomcat-app/tomcat-running.jpg)
### Step 2: Access Tomcat Terminal
### Step 2: Access the Tomcat Terminal
1. Go to **Services** and click the service name of Tomcat.
@ -55,9 +54,9 @@ This tutorial walks you through an example of deploying Tomcat from the App Stor
![view-project](/images/docs/appstore/built-in-apps/tomcat-app/view-project.jpg)
### Step 3: Access Tomcat Project from Browser
### Step 3: Access a Tomcat Project from Your Browser
To access Tomcat projects outside the cluster, you need to expose the app through NodePort first.
To access a Tomcat project outside the cluster, you need to expose the app through NodePort first.
1. Go to **Services** and click the service name of Tomcat.

View File

@ -14,7 +14,7 @@ This tutorial shows you how to quickly deploy a [GitLab](https://gitlab.com/gitl
## Prerequisites
- You have enabled [OpenPitrix](/docs/pluggable-components/app-store/).
- You have completed the tutorial [Create Workspace, Project, Account and Role](/docs/quick-start/create-workspace-and-project/). The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, we'll work in the project `apps` of the workspace `apps`.
- You have completed the tutorial [Create Workspaces, Projects, Accounts and Roles](/docs/quick-start/create-workspace-and-project/). The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, we'll work in the project `apps` of the workspace `apps`.
## Hands-on Lab

View File

@ -11,7 +11,7 @@ In addition to monitoring data at the physical resource level, cluster administr
## Prerequisites
You need an account granted a role including the authorization of **Clusters Management**. For example, you can log in the console as `admin` directly or create a new role with the authorization and assign it to an account.
You need an account granted a role including the authorization of **Clusters Management**. For example, you can log in to the console as `admin` directly or create a new role with the authorization and assign it to an account.
## Resource Usage

View File

@ -12,13 +12,13 @@ This guide demonstrates how to set cluster visibility.
## Prerequisites
* You need to enable the [multi-cluster feature](../../../multicluster-management/).
* You need to have a workspace and an account that has the permission to create workspaces, such as `ws-manager`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project/).
* You need to have a workspace and an account that has the permission to create workspaces, such as `ws-manager`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
## Set Cluster Visibility
### Select available clusters when you create a workspace
1. Log in KubeSphere with an account that has the permission to create a workspace, such as `ws-manager`.
1. Log in to KubeSphere with an account that has the permission to create a workspace, such as `ws-manager`.
2. Click **Platform** in the top left corner and select **Access Control**. In **Workspaces** from the navigation bar, click **Create**.
@ -44,7 +44,7 @@ Try not to create resources on the host cluster to avoid excessive loads, which
After a workspace is created, you can allocate additional clusters to the workspace through authorization or unbind a cluster from the workspace. Follow the steps below to adjust the visibility of a cluster.
1. Log in KubeSphere with an account that has the permission to manage clusters, such as `admin`.
1. Log in to KubeSphere with an account that has the permission to manage clusters, such as `admin`.
2. Click **Platform** in the top left corner and select **Clusters Management**. Select a cluster from the list to view cluster information.
@ -52,12 +52,12 @@ After a workspace is created, you can allocate additional clusters to the worksp
4. You can see the list of authorized workspaces, which means the current cluster is available to resources in all these workspaces.
![cluster-visibility-settings-1](/images/docs/cluster-administration/cluster-settings/cluster-visibility-and-authorization/cluster-visibility-settings-1.png)
![workspace-list](/images/docs/cluster-administration/cluster-settings/cluster-visibility-and-authorization/workspace-list.jpg)
5. Click **Edit Visibility** to set the cluster authorization. You can select new workspaces that will be able to use the cluster or unbind it from a workspace.
![cluster-visibility-settings-2](/images/docs/cluster-administration/cluster-settings/cluster-visibility-and-authorization/cluster-visibility-settings-2.png)
![assign-workspace](/images/docs/cluster-administration/cluster-settings/cluster-visibility-and-authorization/assign-workspace.jpg)
### Make a cluster public
You can check **Set as public cluster** so that all platform users can access the cluster, in which they are able to create and schedule resources.
You can check **Set as public cluster** so that platform users can access the cluster, where they are able to create and schedule resources.

View File

@ -9,7 +9,7 @@ weight: 8630
## Objective
This guide demonstrates email notification settings (customized settings supported) for alert policies. You can specify user email addresses to receive alert messages.
This guide demonstrates email notification settings (customized settings supported) for alerting policies. You can specify user email addresses to receive alerting messages.
## Prerequisites
@ -17,7 +17,7 @@ This guide demonstrates email notification settings (customized settings support
## Hands-on Lab
1. Log in the web console with one account granted the role `platform-admin`.
1. Log in to the web console with one account granted the role `platform-admin`.
2. Click **Platform** in the top left corner and select **Clusters Management**.
![mail_server_guide](/images/docs/alerting/mail_server_guide.png)

View File

@ -10,7 +10,7 @@ KubeSphere provides monitoring of related metrics such as CPU, memory, network,
## Prerequisites
You need an account granted a role including the authorization of **Clusters Management**. For example, you can log in the console as `admin` directly or create a new role with the authorization and assign it to an account.
You need an account granted a role including the authorization of **Clusters Management**. For example, you can log in to the console as `admin` directly or create a new role with the authorization and assign it to an account.
## Cluster Status Monitoring

View File

@ -6,17 +6,17 @@ linkTitle: "Alerting Messages (Node Level)"
weight: 8540
---
Alert messages record detailed information of alerts triggered based on alert rules, including monitoring targets, alert policies, recent notifications and comments.
Alerting messages record detailed information of alerts triggered based on alert rules, including monitoring targets, alerting policies, recent notifications and comments.
## Prerequisites
You have created a node-level alert policy and received alert notifications of it. If it is not ready, please refer to [Alert Policy (Node Level)](../alerting-policy/) to create one first.
You have created a node-level alerting policy and received alert notifications of it. If it is not ready, please refer to [Alerting Policy (Node Level)](../alerting-policy/) to create one first.
## Hands-on Lab
### Task 1: View alert messages
### Task 1: View alerting messages
1. Log in the console with one account granted the role `platform-admin`.
1. Log in to the console with one account granted the role `platform-admin`.
2. Click **Platform** in the top left corner and select **Clusters Management**.
@ -24,17 +24,17 @@ You have created a node-level alert policy and received alert notifications of i
3. Select a cluster from the list and enter it (If you do not enable the [multi-cluster feature](../../../multicluster-management/), you will directly go to the **Overview** page).
4. Navigate to **Alerting Messages** under **Monitoring & Alerting**, and you can see alert messages in the list. In the example of [Alert Policy (Node Level)](../alerting-policy/), you set one node as the monitoring target, and its memory utilization rate is higher than the threshold of `50%`, so you can see an alert message of it.
4. Navigate to **Alerting Messages** under **Monitoring & Alerting**, and you can see alerting messages in the list. In the example of [Alerting Policy (Node Level)](../alerting-policy/), you set one node as the monitoring target, and its memory utilization rate is higher than the threshold of `50%`, so you can see an alerting message of it.
![alerting_message_node_level_list](/images/docs/alerting/alerting_message_node_level_list.png)
5. Click the alert message to enter the detail page. In **Alerting Detail**, you can see the graph of memory utilization rate of the node over time, which has been continuously higher than the threshold of `50%` set in the alert rule, so the alert was triggered.
5. Click the alerting message to enter the detail page. In **Alerting Detail**, you can see the graph of memory utilization rate of the node over time, which has been continuously higher than the threshold of `50%` set in the alert rule, so the alert was triggered.
![alerting_message_node_level_detail](/images/docs/alerting/alerting_message_node_level_detail.png)
### Task 2: View alert policies
### Task 2: View alerting policies
Switch to **Alerting Policy** to view the alert policy corresponding to this alert message, and you can see the triggering rule of it set in the example of [Alert Policy (Node Level)](../alerting-policy/).
Switch to **Alerting Policy** to view the alerting policy corresponding to this alerting message, and you can see the triggering rule of it set in the example of [Alerting Policy (Node Level)](../alerting-policy/).
![alerting_message_node_level_policy](/images/docs/alerting/alerting_message_node_level_policy.png)
@ -44,10 +44,10 @@ Switch to **Alerting Policy** to view the alert policy corresponding to this ale
![alerting_message_node_level_notification](/images/docs/alerting/alerting_message_node_level_notification.png)
2. Log in your email to see alert notification mails sent by the KubeSphere mail server. You have received a total of 3 emails.
2. Log in to your email to see alert notification mails sent by the KubeSphere mail server. You have received a total of 3 emails.
### Task 4: Add comments
Click **Comment** to add comments to current alert messages. For example, as memory utilization rate of the node is higher than the threshold set based on the alert rule, you can add a comment in the dialog below: `The node needs to be tainted and new pod is not allowed to be scheduled to it`.
Click **Comment** to add comments to current alerting messages. For example, as memory utilization rate of the node is higher than the threshold set based on the alert rule, you can add a comment in the dialog below: `The node needs to be tainted and new pod is not allowed to be scheduled to it`.
![alerting_message_node_level_comment](/images/docs/alerting/alerting_message_node_level_comment.png)

View File

@ -8,7 +8,7 @@ weight: 8530
## Objective
KubeSphere provides alert policies for nodes and workloads. This guide demonstrates how you can create alert policies for nodes in the cluster and configure mail notifications. See [Alerting Policy (Workload Level)](../../../project-user-guide/alerting/alerting-policy/) to learn how to configure alert policies for workloads.
KubeSphere provides alerting policies for nodes and workloads. This guide demonstrates how you can create alerting policies for nodes in the cluster and configure mail notifications. See [Alerting Policy (Workload Level)](../../../project-user-guide/alerting/alerting-policy/) to learn how to configure alerting policies for workloads.
## Prerequisites
@ -17,9 +17,9 @@ KubeSphere provides alert policies for nodes and workloads. This guide demonstra
## Hands-on Lab
### Task 1: Create an alert policy
### Task 1: Create an alerting policy
1. Log in the console with one account granted the role `platform-admin`.
1. Log in to the console with one account granted the role `platform-admin`.
2. Click **Platform** in the top left corner and select **Clusters Management**.
@ -36,8 +36,8 @@ KubeSphere provides alert policies for nodes and workloads. This guide demonstra
In the dialog that appears, fill in the basic information as follows. Click **Next** after you finish.
- **Name**: a concise and clear name as its unique identifier, such as `alert-demo`.
- **Alias**: to help you distinguish alert policies better. Chinese is supported.
- **Description**: a brief introduction to the alert policy.
- **Alias**: to help you distinguish alerting policies better. Chinese is supported.
- **Description**: a brief introduction to the alerting policy.
![alerting_policy_node_level_basic_info](/images/docs/alerting/alerting_policy_node_level_basic_info.png)
@ -55,7 +55,7 @@ You can sort nodes in the node list from the drop-down menu through the followin
### Task 4: Add alerting rules
1. Click **Add Rule** to begin to create an alerting rule. The rule defines parameters such as metric type, check period, consecutive times, metric threshold and alert level to provide rich configurations. The check period (the second field under **Rule**) means the time interval between 2 consecutive checks of the metric. For example, `2 minutes/period` means the metric is checked every two minutes. The consecutive times (the third field under **Rule**) means the number of consecutive times that the metric meets the threshold when checked. An alert is only triggered when the actual time is equal to or is greater than the number of consecutive times set in the alert policy.
1. Click **Add Rule** to begin to create an alerting rule. The rule defines parameters such as metric type, check period, consecutive times, metric threshold and alert level to provide rich configurations. The check period (the second field under **Rule**) is the time interval between two consecutive checks of the metric. For example, `2 minutes/period` means the metric is checked every two minutes. The consecutive times (the third field under **Rule**) is the number of consecutive checks in which the metric meets the threshold. An alert is only triggered when the actual count is equal to or greater than the number of consecutive times set in the alerting policy.
![alerting_policy_node_level_alerting_rule](/images/docs/alerting/alerting_policy_node_level_alerting_rule.png)
@ -65,7 +65,7 @@ You can sort nodes in the node list from the drop-down menu through the followin
{{< notice note >}}
You can create node-level alert policies for the following metrics:
You can create node-level alerting policies for the following metrics:
- CPU: `cpu utilization rate`, `cpu load average 1 minute`, `cpu load average 5 minutes`, `cpu load average 15 minutes`
- Memory: `memory utilization rate`, `memory available`
@ -83,17 +83,17 @@ You can create node-level alert policies for the following metrics:
![alerting_policy_node_level_notification_rule](/images/docs/alerting/alerting_policy_node_level_notification_rule.png)
3. Click **Create**, and you can see that the alert policy is successfully created.
3. Click **Create**, and you can see that the alerting policy is successfully created.
{{< notice note >}}
*Waiting Time for Alerting* **=** *Check Period* **x** *Consecutive Times*. For example, if the check period is 1 minute/period, and the number of consecutive times is 2, you need to wait for 2 minutes before the alert message appears.
*Waiting Time for Alerting* **=** *Check Period* **x** *Consecutive Times*. For example, if the check period is 1 minute/period, and the number of consecutive times is 2, you need to wait for 2 minutes before the alerting message appears.
{{</ notice >}}
### Task 6: View alert policies
### Task 6: View alerting policies
After an alert policy is successfully created, you can enter its detail information page to view the status, alert rules, monitoring targets, notification rule, alert history, etc. Click **More** and select **Change Status** from the drop-down menu to enable or disable this alert policy.
After an alerting policy is successfully created, you can enter its detail information page to view the status, alert rules, monitoring targets, notification rule, alert history, etc. Click **More** and select **Change Status** from the drop-down menu to enable or disable this alerting policy.
![alerting-policy-node-level-detail-page](/images/docs/alerting/alerting-policy-node-level-detail-page.png)

View File

@ -30,7 +30,7 @@ In KubeSphere v3.0 and above, users can also use Alertmanager to manage alerts t
Generally, to receive notifications for Alertmanager alerts, users have to edit Alertmanager's configuration files manually to configure receiver settings such as Email and Slack.
This is not convenient for Kubernetes users and it breaks the multi-tenant principle/architecture of KubeSphere. More specifically, alerts triggered by workloads in different namespaces belonging to different users might be sent to the same user.
This is not convenient for Kubernetes users and it breaks the multi-tenant principle/architecture of KubeSphere. More specifically, alerts triggered by workloads in different namespaces, which should have been sent to different tenants, might be sent to the same tenant.
To use Alertmanager to manage alerts on the platform, KubeSphere offers [Notification Manager](https://github.com/kubesphere/notification-manager), a Kubernetes native notification management tool, which is completely open source. It complies with the multi-tenancy principle, providing user-friendly experiences of Kubernetes notifications. It's installed by default in KubeSphere v3.0 and above.
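If you want to confirm that Notification Manager is in place on your cluster, you can check its CRDs and workloads from the command line. A minimal sketch, assuming a default KubeSphere v3.0 installation where the Notification Manager workloads run in `kubesphere-monitoring-system`:

```bash
# List the Notification Manager CRDs registered in the cluster
kubectl get crd | grep notification.kubesphere.io

# Check that the Notification Manager pods are running
kubectl -n kubesphere-monitoring-system get pods | grep notification-manager
```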

View File

@ -13,7 +13,7 @@ This tutorial demonstrates what a cluster administrator can view and do for node
## Prerequisites
You need an account granted a role including the authorization of **Clusters Management**. For example, you can log in the console as `admin` directly or create a new role with the authorization and assign it to an account.
You need an account granted a role including the authorization of **Clusters Management**. For example, you can log in to the console as `admin` directly or create a new role with the authorization and assign it to an account.
## Node Status

View File

@ -28,22 +28,26 @@ The table below summarizes common volume plugins for various provisioners (stora
## Prerequisites
You need an account granted a role including the authorization of **Clusters Management**. For example, you can log in the console as `admin` directly or create a new role with the authorization and assign it to an account.
You need an account granted a role including the authorization of **Clusters Management**. For example, you can log in to the console as `admin` directly or create a new role with the authorization and assign it to an account.
## Manage Storage Classes
1. Click **Platform** in the top left corner and select **Clusters Management**.
![clusters-management-select](/images/docs/cluster-administration/persistent-volume-and-storage-class/clusters-management-select.jpg)
2. If you have enabled the [multi-cluster feature](../../multicluster-management) with member clusters imported, you can select a specific cluster. If you have not enabled the feature, refer to the next step directly.
3. On the **Cluster Management** page, navigate to **Storage Classes** under **Storage**, where you can create, update and delete a storage class.
![storage-class](/images/docs/cluster-administration/persistent-volume-and-storage-class/storage-class.jpg)
4. To create a storage class, click **Create** and enter the basic information in the pop-up window. When you finish, click **Next**.
![create-storage-class-basic-info](/images/docs/cluster-administration/persistent-volume-and-storage-class/create-storage-class-basic-info.png)
5. In KubeSphere, you can create storage classes for `QingCloud-CSI`, `Glusterfs` and `Ceph RBD` directly. Alternatively, you can also create customized storage classes for other storage systems based on your needs. Select a type and click **Next**.
![create-storage-class-storage-system](/images/docs/cluster-administration/persistent-volume-and-storage-class/create-storage-class-storage-system.png)
![create-storage-class-settings](/images/docs/cluster-administration/persistent-volume-and-storage-class/create-storage-class-settings.png)

View File

@ -10,11 +10,11 @@ weight: 11410
- You need to [enable KubeSphere DevOps System](../../../../docs/pluggable-components/devops/).
- You need to have a [Docker Hub](https://hub.docker.com/) account.
- You need to create a workspace, a DevOps project, a project, and an account (`project-regular`). This account needs to be invited to the DevOps project and the project for deploying your workload with the role `operator`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project).
- You need to create a workspace, a DevOps project, a project, and an account (`project-regular`). This account needs to be invited to the DevOps project and the project for deploying your workload with the role `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project).
## Create a Docker Hub Access Token
1. Sign in [Docker Hub](https://hub.docker.com/) and select **Account Settings** from the menu in the top right corner.
1. Log in to [Docker Hub](https://hub.docker.com/) and select **Account Settings** from the menu in the top right corner.
![dockerhub-settings](/images/docs/devops-user-guide/examples/compile-and-deploy-a-go-project/dockerhub-settings.jpg)
@ -34,7 +34,7 @@ weight: 11410
You need to create credentials in KubeSphere for the access token created so that the pipeline can interact with Docker Hub for image pushing. Besides, you also need to create kubeconfig credentials for access to the Kubernetes cluster.
1. Log in the web console of KubeSphere as `project-regular`. Go to your DevOps project and click **Create** in **Credentials**.
1. Log in to the web console of KubeSphere as `project-regular`. Go to your DevOps project and click **Create** in **Credentials**.
![create-dockerhub-id](/images/docs/devops-user-guide/examples/compile-and-deploy-a-go-project/create-dockerhub-id.jpg)

View File

@ -11,11 +11,11 @@ weight: 11420
- You need to [enable the multi-cluster feature](../../../../docs/multicluster-management/).
- You need to have a [Docker Hub](https://hub.docker.com/) account.
- You need to [enable KubeSphere DevOps System](../../../../docs/pluggable-components/devops/) on your host cluster.
- You need to create a workspace with multiple clusters, a DevOps project on your **host** cluster, a multi-cluster project (in this tutorial, this multi-cluster project is created on the host cluster and one member cluster), and an account (`project-regular`). This account needs to be invited to the DevOps project and the multi-cluster project with the role `operator`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project), [Multi-cluster Management](../../../multicluster-management) and [Multi-cluster Projects](../../../project-administration/project-and-multicluster-project/#multi-cluster-projects).
- You need to create a workspace with multiple clusters, a DevOps project on your **host** cluster, a multi-cluster project (in this tutorial, this multi-cluster project is created on the host cluster and one member cluster), and an account (`project-regular`). This account needs to be invited to the DevOps project and the multi-cluster project with the role `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project), [Multi-cluster Management](../../../multicluster-management) and [Multi-cluster Projects](../../../project-administration/project-and-multicluster-project/#multi-cluster-projects).
## Create a Docker Hub Access Token
1. Sign in [Docker Hub](https://hub.docker.com/) and select **Account Settings** from the menu in the top right corner.
1. Log in to [Docker Hub](https://hub.docker.com/) and select **Account Settings** from the menu in the top right corner.
![dockerhub-settings](/images/docs/devops-user-guide/examples/compile-and-deploy-a-go-multi-cluster-project/dockerhub-settings.jpg)
@ -35,7 +35,7 @@ weight: 11420
You need to create credentials in KubeSphere for the access token created so that the pipeline can interact with Docker Hub for image pushing. Besides, you also need to create kubeconfig credentials for access to the Kubernetes cluster.
1. Log in the web console of KubeSphere as `project-regular`. Go to your DevOps project and click **Create** in **Credentials**.
1. Log in to the web console of KubeSphere as `project-regular`. Go to your DevOps project and click **Create** in **Credentials**.
![create-dockerhub-id](/images/docs/devops-user-guide/examples/compile-and-deploy-a-go-multi-cluster-project/create-dockerhub-id.jpg)

View File

@ -49,7 +49,7 @@ Click **EXPORT TO FILE** to save the credential.
### Create Credentials
Log into KubeSphere, enter into the created DevOps project and create the following credential under **Project Management → Credentials**:
Log in to KubeSphere, enter the DevOps project you created, and create the following credential under **Project Management → Credentials**:
![](/images/devops/ks-console-create-credential.png)

View File

@ -184,7 +184,7 @@ To integrate SonarQube into your pipeline, you must install SonarQube Server fir
http://192.168.0.4:30180
```
3. Access Jenkins with the address `http://Public IP:30180`. When KubeSphere is installed, the Jenkins dashboard is also installed by default. Besides, Jenkins is configured with KubeSphere LDAP, which means you can log in Jenkins with KubeSphere accounts (e.g. `admin/P@88w0rd`) directly. For more information about configuring Jenkins, see [Jenkins System Settings](../../../devops-user-guide/how-to-use/jenkins-setting/).
3. Access Jenkins with the address `http://Public IP:30180`. When KubeSphere is installed, the Jenkins dashboard is also installed by default. Besides, Jenkins is configured with KubeSphere LDAP, which means you can log in to Jenkins with KubeSphere accounts (e.g. `admin/P@88w0rd`) directly. For more information about configuring Jenkins, see [Jenkins System Settings](../../../devops-user-guide/how-to-use/jenkins-setting/).
![jenkins-login-page](/images/docs/devops-user-guide/tool-integration/integrate-sonarqube-into-pipeline/jenkins-login-page.jpg)

View File

@ -14,7 +14,7 @@ This tutorial demonstrates how to create a pipeline through graphical editing pa
- You need to [enable the KubeSphere DevOps System](../../../../docs/pluggable-components/devops/).
- You need to have a [Docker Hub](http://www.dockerhub.com/) account.
- You need to create a workspace, a DevOps project, and an account (`project-regular`). This account must be invited to the DevOps project with the `operator` role. See [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project/) if they are not ready.
- You need to create a workspace, a DevOps project, and an account (`project-regular`). This account must be invited to the DevOps project with the `operator` role. See [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/) if they are not ready.
- Set CI dedicated nodes to run the pipeline. For more information, see [Set CI Node for Dependency Cache](../set-ci-node/).
- Configure your email server for pipeline notifications (Optional). For more information, see [Set Email Server for KubeSphere Pipelines](../../how-to-use/jenkins-email/).
- Configure SonarQube to include code analysis as part of the pipeline (Optional). For more information, see [Integrate SonarQube into Pipelines](../../../devops-user-guide/how-to-integrate/sonarqube/).
@ -40,7 +40,7 @@ This example pipeline includes the following six stages.
### Step 1: Create credentials
1. Log in the KubeSphere console as `project-regular`. Go to your DevOps project and create the following credentials in **Credentials** under **Project Management**. For more information about how to create credentials, see [Credential Management](../credential-management/).
1. Log in to the KubeSphere console as `project-regular`. Go to your DevOps project and create the following credentials in **Credentials** under **Project Management**. For more information about how to create credentials, see [Credential Management](../credential-management/).
{{< notice note >}}
@ -65,7 +65,7 @@ This example pipeline includes the following six stages.
In this tutorial, the example pipeline will deploy the [sample](https://github.com/kubesphere/devops-java-sample/tree/sonarqube) app to a project. Hence, you must create the project (for example, `kubesphere-sample-dev`) in advance. The Deployment and Service of the app will be created automatically in the project once the pipeline runs successfully.
You can use the account `project-admin` to create the project. Besides, this account is also the reviewer of the CI/CD pipeline. Make sure the account `project-regular` is invited to the project with the role of `operator`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project/).
You can use the account `project-admin` to create the project. Besides, this account is also the reviewer of the CI/CD pipeline. Make sure the account `project-regular` is invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
### Step 3: Create a pipeline
@ -266,7 +266,7 @@ This stage uses SonarQube to test your code. You can skip this stage if you do n
![docker-credential](/images/docs/devops-user-guide/using-devops/create-a-pipeline-using-graphical-editing-panels/docker-credential.jpg)
6. Click **Add nesting steps** (the first one) in the **withCredentials** step created above. Select **shell** and enter the following command in the pop-up window, which is used to log in Docker Hub. Click **OK** to confirm.
6. Click **Add nesting steps** (the first one) in the **withCredentials** step created above. Select **shell** and enter the following command in the pop-up window, which is used to log in to Docker Hub. Click **OK** to confirm.
```shell
echo "$DOCKER_PASSWORD" | docker login $REGISTRY -u "$DOCKER_USERNAME" --password-stdin
@ -379,7 +379,7 @@ On the **Code Quality** page, view the code analysis result of this example pipe
{{</ notice >}}
4. Now that the pipeline has run successfully, an image will be pushed to Docker Hub. Log in Docker Hub and check the result.
4. Now that the pipeline has run successfully, an image will be pushed to Docker Hub. Log in to Docker Hub and check the result.
![dockerhub-image](/images/docs/devops-user-guide/using-devops/create-a-pipeline-using-graphical-editing-panels/dockerhub-image.jpg)

View File

@ -20,7 +20,7 @@ Two types of pipelines can be created in KubeSphere: Pipelines created based on
- You need to have a [Docker Hub](https://hub.docker.com/) account and a [GitHub](https://github.com/) account.
- You need to [enable KubeSphere DevOps system](../../../pluggable-components/devops/).
- You need to create a workspace, a DevOps project, and an account (`project-regular`). This account needs to be invited to the DevOps project with the `operator` role. See [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project/) if they are not ready.
- You need to create a workspace, a DevOps project, and an account (`project-regular`). This account needs to be invited to the DevOps project with the `operator` role. See [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/) if they are not ready.
- You need to set a CI dedicated node for running pipelines. Refer to [Set a CI Node for Dependency Caching](../../how-to-use/set-ci-node/).
- You need to install and configure SonarQube. Refer to [Integrate SonarQube into Pipeline](../../../devops-user-guide/how-to-integrate/sonarqube/). If you skip this part, there is no **SonarQube Analysis** below.
@ -47,7 +47,7 @@ There are eight stages as shown below in this example pipeline.
### Step 1: Create credentials
1. Log in the KubeSphere console as `project-regular`. Go to your DevOps project and create the following credentials in **Credentials** under **Project Management**. For more information about how to create credentials, see [Credential Management](../../../devops-user-guide/how-to-use/credential-management/).
1. Log in to the KubeSphere console as `project-regular`. Go to your DevOps project and create the following credentials in **Credentials** under **Project Management**. For more information about how to create credentials, see [Credential Management](../../../devops-user-guide/how-to-use/credential-management/).
{{< notice note >}}
@ -71,7 +71,7 @@ There are eight stages as shown below in this example pipeline.
### Step 2: Modify the Jenkinsfile in your GitHub repository
1. Log in GitHub. Fork [devops-java-sample](https://github.com/kubesphere/devops-java-sample) from the GitHub repository to your own GitHub account.
1. Log in to GitHub. Fork [devops-java-sample](https://github.com/kubesphere/devops-java-sample) from the GitHub repository to your own GitHub account.
![fork-github-repo](/images/docs/devops-user-guide/using-devops/create-a-pipeline-using-a-jenkinsfile/fork-github-repo.jpg)
@ -110,11 +110,11 @@ You need to create two projects, such as `kubesphere-sample-dev` and `kubesphe
{{< notice note >}}
The account `project-admin` needs to be created in advance since it is the reviewer of the CI/CD Pipeline. See [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project/) for more information.
The account `project-admin` needs to be created in advance since it is the reviewer of the CI/CD Pipeline. See [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/) for more information.
{{</ notice >}}
1. Use the account `project-admin` to log in KubeSphere. In the same workspace where you create the DevOps project, create two projects as below. Make sure you invite `project-regular` to these two projects with the role of `operator`.
1. Use the account `project-admin` to log in to KubeSphere. In the same workspace where you create the DevOps project, create two projects as below. Make sure you invite `project-regular` to these two projects with the role of `operator`.
| Project Name | Alias |
| ---------------------- | ----------------------- |
@ -283,7 +283,7 @@ The account `project-admin` needs to be created in advance since it is the revie
### Step 8: Access the example Service
1. To access the Service, log in KubeSphere as `admin` to use the **web kubectl** from **Toolbox**. Go to the project `kubesphere-sample-dev`, and select `ks-sample-dev` in **Services** under **Application Workloads**. The endpoint can be used to access the Service.
1. To access the Service, log in to KubeSphere as `admin` to use the **web kubectl** from **Toolbox**. Go to the project `kubesphere-sample-dev`, and select `ks-sample-dev` in **Services** under **Application Workloads**. The endpoint can be used to access the Service.
![sample-app-result-check](/images/docs/devops-user-guide/tool-integration/integrate-sonarqube-into-pipeline/sample-app-result-check.jpg)

View File

@ -24,11 +24,11 @@ This tutorial demonstrates how to create and manage credentials in a DevOps proj
## Prerequisites
- You have enabled [KubeSphere DevOps System](../../../pluggable-components/devops/).
- You have a workspace, a DevOps project and an account (`project-regular`) invited to the DevOps project with the `operator` role. If they are not ready yet, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project/).
- You have a workspace, a DevOps project and an account (`project-regular`) invited to the DevOps project with the `operator` role. If they are not ready yet, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
## Create Credentials
Log in the console of KubeSphere as `project-regular`. Navigate to your DevOps project, choose **Credentials** and click **Create**.
Log in to the console of KubeSphere as `project-regular`. Navigate to your DevOps project, choose **Credentials** and click **Create**.
![create-credential-step1](/images/docs/devops-user-guide/using-devops/credential-management/create-credential-step1.jpg)

View File

@ -12,7 +12,7 @@ The built-in Jenkins cannot share the same email configuration with the platform
## Prerequisites
- You need to enable [KubeSphere DevOps System](../../../pluggable-components/devops/).
- You need an account granted a role including the authorization of **Clusters Management**. For example, you can log in the console as `admin` directly or create a new role with the authorization and assign it to an account.
- You need an account granted a role including the authorization of **Clusters Management**. For example, you can log in to the console as `admin` directly or create a new role with the authorization and assign it to an account.
## Set the Email Server

View File

@ -8,7 +8,7 @@ Weight: 11240
Jenkins is powerful and flexible, and it has become the de facto standard for CI/CD workflows. Nevertheless, many plugins require users to set system-level configurations before they can be put to use.
The KubeSphere DevOps System offers containerized CI/CD functions based on Jenkins. To provide users with a schedulable Jenkins environment, KubeSphere uses **Configuration-as-Code** for Jenkins system settings, which requires users to log in the Jenkins dashboard and reload the configuration after it is modified. In the current release, Jenkins system settings are not available on the KubeSphere console, which will be supported in upcoming releases.
The KubeSphere DevOps System offers containerized CI/CD functions based on Jenkins. To provide users with a schedulable Jenkins environment, KubeSphere uses **Configuration-as-Code** for Jenkins system settings, which requires users to log in to the Jenkins dashboard and reload the configuration after it is modified. In the current release, Jenkins system settings are not available on the KubeSphere console, which will be supported in upcoming releases.
This tutorial demonstrates how to set up Jenkins and reload configurations on the Jenkins dashboard.
@ -20,7 +20,7 @@ You have enabled [the KubeSphere DevOps System](../../../pluggable-components/de
It is recommended that you configure Jenkins in KubeSphere through Configuration-as-Code (CasC). The built-in Jenkins CasC file is stored as a [ConfigMap](../../../project-user-guide/configuration/configmaps/).
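If you prefer the command line, you can also print that ConfigMap directly. A sketch, assuming the default DevOps namespace `kubesphere-devops-system`:

```bash
# Print the built-in Jenkins CasC configuration stored as a ConfigMap
kubectl -n kubesphere-devops-system get cm jenkins-casc-config -o yaml
```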
1. Log in KubeSphere as `admin`. Click **Platform** in the top left corner and select **Clusters Management**.
1. Log in to KubeSphere as `admin`. Click **Platform** in the top left corner and select **Clusters Management**.
![cluster-management](/images/docs/devops-user-guide/using-devops/jenkins-system-settings/cluster-management.jpg)
@ -38,7 +38,7 @@ It is recommended that you configure Jenkins in KubeSphere through Configuration
![edit-jenkins](/images/docs/devops-user-guide/using-devops/jenkins-system-settings/edit-jenkins.jpg)
## Log in Jenkins to Reload Configurations
## Log in to Jenkins to Reload Configurations
After you modify `jenkins-casc-config`, you need to reload your updated system configuration on the **Configuration as Code** page on the Jenkins dashboard. This is because system settings configured directly through the Jenkins dashboard may be overwritten by the CasC configuration after Jenkins is rescheduled.
@ -56,7 +56,7 @@ After you modified `jenkins-casc-config`, you need to reload your updated system
http://192.168.0.4:30180
```
3. Access Jenkins at `http://Node IP:Port Number`. When KubeSphere is installed, the Jenkins dashboard is also installed by default. Besides, Jenkins is configured with KubeSphere LDAP, which means you can log in Jenkins with KubeSphere accounts (e.g. `admin/P@88w0rd`) directly.
3. Access Jenkins at `http://Node IP:Port Number`. When KubeSphere is installed, the Jenkins dashboard is also installed by default. Besides, Jenkins is configured with KubeSphere LDAP, which means you can log in to Jenkins with KubeSphere accounts (e.g. `admin/P@88w0rd`) directly.
![jenkins-dashboard](/images/docs/devops-user-guide/using-devops/jenkins-system-settings/jenkins-dashboard.jpg)
@ -66,7 +66,7 @@ After you modified `jenkins-casc-config`, you need to reload your updated system
{{</ notice >}}
4. After you log in the dashboard, click **Manage Jenkins** from the navigation bar.
4. After you log in to the dashboard, click **Manage Jenkins** from the navigation bar.
![manage-jenkins](/images/docs/devops-user-guide/using-devops/jenkins-system-settings/manage-jenkins.jpg)

View File

@ -12,7 +12,7 @@ This tutorial demonstrates how you set CI nodes so that KubeSphere schedules tas
## Prerequisites
You need an account granted a role including the authorization of **Clusters Management**. For example, you can log in the console as `admin` directly or create a new role with the authorization and assign it to an account.
You need an account granted a role including the authorization of **Clusters Management**. For example, you can log in to the console as `admin` directly or create a new role with the authorization and assign it to an account.
## Label a CI Node

View File

@ -10,12 +10,12 @@ This tutorial demonstrates how to create and manage DevOps projects.
## Prerequisites
- You need to create a workspace and an account (`project-admin`). The account must be invited to the workspace with the role of `workspace-self-provisioner`. For more information, refer to [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project/).
- You need to create a workspace and an account (`project-admin`). The account must be invited to the workspace with the role of `workspace-self-provisioner`. For more information, refer to [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
- You need to enable the [KubeSphere DevOps system](../../../pluggable-components/devops/).
## Create a DevOps Project
1. Log in the console of KubeSphere as `project-admin`. Go to **DevOps Projects** and click **Create**.
1. Log in to the console of KubeSphere as `project-admin`. Go to **DevOps Projects** and click **Create**.
![devops-project-create](/images/docs/devops-user-guide/understand-and-manage-devops-projects/devops-project-management/devops-project-create.jpg)

View File

@ -31,7 +31,7 @@ In **Project Roles**, there are three available built-in roles as shown below. B
## Create a DevOps Project Role
1. Log in the console as `devops-admin` and select a DevOps project (e.g. `demo-devops`) under **DevOps Projects** list.
1. Log in to the console as `devops-admin` and select a DevOps project (e.g. `demo-devops`) under the **DevOps Projects** list.
{{< notice note >}}

View File

@ -47,6 +47,10 @@ Use your own Prometheus stack setup in KubeSphere.
Reset the password of any account.
### [Session Timeout](../access-control/session-timeout/)
Understand session timeout and customize the timeout period.
## KubeSphere Web Console
### [Supported Browsers](../faq/console/console-web-browser/)

View File

@ -0,0 +1,21 @@
---
title: "Session Timeout"
keywords: "Session timeout, KubeSphere, Kubernetes"
description: "How to solve the session timeout problem."
linkTitle: "Session Timeout"
weight: 16420
---
A session starts when a user logs in to the console of KubeSphere. You may see the message "**Session timeout or this account is logged in elsewhere, please login again**" when the session expires.
## Inactivity Session Timeout
You can control when an inactive user session expires. The default session timeout is two hours of inactivity, which means the user is automatically logged out of the console once the timeout is reached. You can [configure accessTokenMaxAge and accessTokenInactivityTimeout](../../../access-control-and-account-management/configuring-authentication/#authentication-configuration) for the session timeout.
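Both values live in the `kubesphere-config` ConfigMap. The sketch below shows one way to change them, assuming a default KubeSphere v3.0 installation; the field layout follows the authentication configuration linked above:

```bash
# Open the KubeSphere configuration for editing
kubectl -n kubesphere-system edit cm kubesphere-config

# In the data key "kubesphere.yaml", adjust the values, for example:
#   authentication:
#     oauthOptions:
#       accessTokenMaxAge: 2h
#       accessTokenInactivityTimeout: 30m

# Restart ks-apiserver so the new values take effect
kubectl -n kubesphere-system rollout restart deployment ks-apiserver
```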
## JWT Signature Verification Failed
In a [multi-cluster environment](../../../multicluster-management/enable-multicluster/direct-connection/#prepare-a-member-cluster), `clusterRole` and `jwtSecret` must be set correctly.
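A quick way to inspect both fields on a member cluster, assuming the default ConfigMap name; `clusterRole` should read `member` and `jwtSecret` must match the host cluster's:

```bash
kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -E "clusterRole|jwtSecret"
```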
## Node Clock Skew
Node clock skew affects time-sensitive operations such as validating the expiration time of a user token. You can synchronize the server time with an NTP server. [MaximumClockSkew](../../../access-control-and-account-management/configuring-authentication/#authentication-configuration) can also be set, which defaults to 10 seconds.
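To verify a node's clock against its time sources, assuming `chrony` is the time daemon in use (as on most recent CentOS/RHEL systems):

```bash
# Show the current offset and synchronization status
chronyc tracking
```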

View File

@ -16,7 +16,7 @@ You have installed KubeSphere.
## Change the Console Language
1. Log in KubeSphere with your account and click the account name in the top right corner.
1. Log in to KubeSphere with your account and click the account name in the top right corner.
2. Select **User Settings**.

View File

@ -18,7 +18,7 @@ Editing resources in `system-workspace` may cause unexpected results, such as Ku
## Edit the Console Configuration
1. Log in KubeSphere as `admin`. Click the hammer icon in the bottom right corner and select **Kubectl**.
1. Log in to KubeSphere as `admin`. Click the hammer icon in the bottom right corner and select **Kubectl**.
2. Execute the following command:

View File

@ -12,7 +12,7 @@ If you have trouble downloading images from `dockerhub.io`, it is highly recomme
To configure the booster, you need a registry mirror address. The following example shows how to get a booster URL from Alibaba Cloud.
1. Log in the console of Alibaba Cloud and enter "container registry" in the search bar. Click **Container Registry** in the drop-down list as below.
1. Log in to the console of Alibaba Cloud and enter "container registry" in the search bar. Click **Container Registry** in the drop-down list as below.
![container-registry](/images/docs/installing-on-linux/faq/configure-booster/container-registry.png)
@ -81,7 +81,7 @@ Docker needs to be installed in advance for this method.
privateRegistry: "" # Configure a private image registry for air-gapped installation (e.g. docker local registry or Harbor)
```
2. Input the registry mirror address above and save the file. For more information about the installation process, see [Multi-Node Installation](../../../installing-on-linux/introduction/multioverview/).
2. Input the registry mirror address above and save the file. For more information about the installation process, see [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/).
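After saving, the registry section of the configuration file should look roughly like the sketch below, which assumes the KubeKey v1 schema and a file named `config-sample.yaml`; the mirror URL is a placeholder for the booster address you obtained:

```bash
# Verify the registry section of the KubeKey configuration file
grep -A 3 "registry:" config-sample.yaml

# Expected output, roughly:
#   registry:
#     registryMirrors: ["https://xxxxxxxx.mirror.aliyuncs.com"]
#     insecureRegistries: []
#     privateRegistry: ""
```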
{{< notice note >}}

View File

@ -63,7 +63,7 @@ If you install KubeSphere on Linux, see [Disable Telemetry after Installation](.
### Disable Telemetry after installation
1. Log in the console as `admin` and click **Platform** in the top left corner.
1. Log in to the console as `admin` and click **Platform** in the top left corner.
2. Select **Clusters Management** and navigate to **CRDs**.
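The same change can also be made with kubectl by editing the `ks-installer` ClusterConfiguration. A sketch; the exact field name for the telemetry switch may differ across versions, so inspect the spec before changing it:

```bash
# Edit the ClusterConfiguration that drives ks-installer
kubectl -n kubesphere-system edit clusterconfiguration ks-installer

# Under spec, look for the telemetry setting and disable it, e.g. (hypothetical):
#   telemetry_enabled: false
```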

View File

@ -106,7 +106,7 @@ Now that KubeSphere is installed, you can access the web console of KubeSphere b
{{</ notice >}}
- Log in the console with the default account and password (`admin/P@88w0rd`). In the cluster overview page, you can see the dashboard as shown in the following image.
- Log in to the console with the default account and password (`admin/P@88w0rd`). In the cluster overview page, you can see the dashboard as shown in the following image.
![doks-cluster](/images/docs/do/doks-cluster.png)

View File

@ -173,7 +173,7 @@ Now that KubeSphere is installed, you can access the web console of KubeSphere b
- Access the web console of KubeSphere using the external IP generated by EKS.
- Log in the console with the default account and password (`admin/P@88w0rd`). In the cluster overview page, you can see the dashboard as shown in the following image.
- Log in to the console with the default account and password (`admin/P@88w0rd`). In the cluster overview page, you can see the dashboard as shown in the following image.
![eks-cluster](https://ap3.qingstor.com/kubesphere-website/docs/gke-cluster.png)
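To find the external IP that EKS generated for the console, you can query the Service directly:

```bash
# The EXTERNAL-IP column shows the address provisioned by the load balancer
kubectl get svc/ks-console -n kubesphere-system
```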

View File

@ -99,7 +99,7 @@ Now that KubeSphere is installed, you can access the web console of KubeSphere b
{{</ notice >}}
- Log in the console with the default account and password (`admin/P@88w0rd`). In the cluster overview page, you can see the dashboard as shown in the following image.
- Log in to the console with the default account and password (`admin/P@88w0rd`). In the cluster overview page, you can see the dashboard as shown in the following image.
![gke-cluster](https://ap3.qingstor.com/kubesphere-website/docs/gke-cluster.png)

View File

@ -99,7 +99,7 @@ Default settings are OK for other detailed configurations. You can also set them
After you set a LoadBalancer for the KubeSphere console, you can visit it via the given address. Go to the KubeSphere login page and use the default account (username `admin` and password `P@88w0rd`) to log in.
![Log in KubeSphere Console](/images/docs/huawei-cce/en/login-ks-console.png)
![Log in to KubeSphere Console](/images/docs/huawei-cce/en/login-ks-console.png)
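If you prefer to set this from the command line instead of the CCE console, the Service type can be patched directly. A sketch:

```bash
# Switch ks-console to a LoadBalancer Service and read back the external address
kubectl -n kubesphere-system patch svc ks-console -p '{"spec":{"type":"LoadBalancer"}}'
kubectl -n kubesphere-system get svc ks-console
```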
## Enable Pluggable Components (Optional)

View File

@ -140,7 +140,7 @@ Now that KubeSphere is installed, you can access the web console of KubeSphere e
![console-service](https://ap3.qingstor.com/kubesphere-website/docs/console-service.png)
- Log in the console through the external IP address with the default account and password (`admin/P@88w0rd`). In the cluster overview page, you can see the dashboard shown below:
- Log in to the console through the external IP address with the default account and password (`admin/P@88w0rd`). In the cluster overview page, you can see the dashboard shown below:
![kubesphere-oke-dashboard](https://ap3.qingstor.com/kubesphere-website/docs/kubesphere-oke-dashboard.png)

View File

@ -2,12 +2,11 @@
title: "Overview"
keywords: "KubeSphere, Kubernetes, Installation"
description: "Overview of KubeSphere Installation on Kubernetes"
linkTitle: "Overview"
weight: 4110
---
![KubeSphere+K8s](https://pek3b.qingstor.com/kubesphere-docs/png/20191123144507.png)
![kubesphere+k8s](/images/docs/installing-on-kubernetes/introduction/overview/kubesphere+k8s.png)
As part of KubeSphere's commitment to providing a plug-and-play architecture for users, it can be easily installed on existing Kubernetes clusters. More specifically, KubeSphere can be deployed on Kubernetes either hosted on clouds (e.g. AWS EKS, QingCloud QKE and Google GKE) or on-premises. This is because KubeSphere does not hack Kubernetes itself. It only interacts with the Kubernetes API to manage Kubernetes cluster resources. In other words, KubeSphere can be installed on any native Kubernetes cluster and Kubernetes distribution.
@ -15,37 +14,37 @@ This section gives you an overview of the general steps of installing KubeSphere
{{< notice note >}}
Please read [Prerequisites](../prerequisites/) before you install KubeSphere on existing Kubernetes clusters.
Read [Prerequisites](../prerequisites/) before you install KubeSphere on existing Kubernetes clusters.
{{</ notice >}}
## Deploy KubeSphere
After you make sure your existing Kubernetes cluster meets all the requirements, you can use kubectl to trigger the default minimal installation of KubeSphere.
After you make sure your existing Kubernetes cluster meets all the requirements, you can use kubectl to install KubeSphere with the default minimal package.
- Execute the following commands to start installation:
1. Execute the following commands to start installation:
```bash
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/kubesphere-installer.yaml
kubectl apply -f https://github.com/kubesphere/ks-installer/releases/download/v3.0.0/cluster-configuration.yaml
```
- Inspect the logs of installation:
2. Inspect the logs of installation:
```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```
- Use `kubectl get pod --all-namespaces` to see whether all pods are running normally in relevant namespaces of KubeSphere. If they are, check the port (30880 by default) of the console through the following command:
3. Use `kubectl get pod --all-namespaces` to see whether all pods are running normally in relevant namespaces of KubeSphere. If they are, check the port (30880 by default) of the console through the following command:
```bash
kubectl get svc/ks-console -n kubesphere-system
```
- Make sure port 30880 is opened in security groups and access the web console through the NodePort (`IP:30880`) with the default account and password (`admin/P@88w0rd`).
4. Make sure port 30880 is opened in security groups and access the web console through the NodePort (`IP:30880`) with the default account and password (`admin/P@88w0rd`).
![kubesphere-console](https://ap3.qingstor.com/kubesphere-website/docs/login.png)
![login](/images/docs/installing-on-kubernetes/introduction/overview/login.png)
## Enable Pluggable Components (Optional)

View File

@ -2,18 +2,15 @@
title: "Prerequisites"
keywords: "KubeSphere, Kubernetes, Installation, Prerequisites"
description: "The prerequisites of installing KubeSphere on existing Kubernetes"
linkTitle: "Prerequisites"
weight: 4120
---
You can install KubeSphere on virtual machines and bare metal with Kubernetes also provisioned. In addition, KubeSphere can also be deployed on cloud-hosted and on-premises Kubernetes clusters as long as your Kubernetes cluster meets the prerequisites below.
Not only can KubeSphere be installed on virtual machines and bare metal with provisioned Kubernetes, but it can also be installed on existing cloud-hosted and on-premises Kubernetes clusters, as long as your Kubernetes cluster meets the prerequisites below.
- Kubernetes version: `1.15.x, 1.16.x, 1.17.x, 1.18.x`.
- Kubernetes version: 1.15.x, 1.16.x, 1.17.x, 1.18.x.
- Available CPU > 1 core and available memory > 2 GB.
- A **default** Storage Class in your Kubernetes cluster is configured; use `kubectl get sc` to verify it.
- A **default** StorageClass in your Kubernetes cluster is configured; use `kubectl get sc` to verify it.
- The CSR signing feature is activated in kube-apiserver when it is started with the `--cluster-signing-cert-file` and `--cluster-signing-key-file` parameters. See [RKE installation issue](https://github.com/kubesphere/kubesphere/issues/1925#issuecomment-591698309).
## Pre-checks
@ -27,7 +24,7 @@ Not only can KubeSphere be installed on virtual machines and bare metal with pro
```
{{< notice note >}}
Pay attention to the `Server Version` line. If `GitVersion` shows an older one, you need to upgrade Kubernetes first. Please refer to [Upgrading kubeadm clusters from v1.14 to v1.15](https://v1-15.docs.kubernetes.io/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade-1-15/).
Pay attention to the `Server Version` line. If `GitVersion` shows an older one, you need to upgrade Kubernetes first.
{{</ notice >}}
2. Check if the available resources in your cluster meet the minimum requirements.
@ -39,7 +36,7 @@ Pay attention to the `Server Version` line. If `GitVersion` shows an older one,
Swap: 0 0 0
```
3. Check if there is a **default** Storage Class in your cluster. An existing default Storage Class is a prerequisite for KubeSphere installation.
3. Check if there is a **default** StorageClass in your cluster. An existing default StorageClass is a prerequisite for KubeSphere installation.
```bash
$ kubectl get sc

View File

@ -1,16 +1,15 @@
---
title: "Uninstall KubeSphere from Kubernetes"
keywords: 'kubernetes, kubesphere, uninstall, remove-cluster'
keywords: 'Kubernetes, KubeSphere, uninstall, remove-cluster'
description: 'How to uninstall KubeSphere from Kubernetes'
linkTitle: "Uninstall KubeSphere from Kubernetes"
weight: 4400
---
You can uninstall KubeSphere from your existing Kubernetes cluster as follows.
You can uninstall KubeSphere from your existing Kubernetes cluster by using [kubesphere-delete.sh](https://github.com/kubesphere/ks-installer/blob/master/scripts/kubesphere-delete.sh). Copy it from the [GitHub source file](https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/kubesphere-delete.sh) and execute this script on your local machine.
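For example, a sketch; review the script before you run it, since the removal is irreversible:

```bash
# Download the uninstall script and run it against the current kubectl context
curl -sSL https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/kubesphere-delete.sh -o kubesphere-delete.sh
bash kubesphere-delete.sh
```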
{{< notice tip >}}
Uninstall will remove KubeSphere from your Kubernetes cluster. This operation is irreversible and does not have any backup. Please be cautious with this operation.
{{</ notice >}}
{{< notice warning >}}
You can use [kubesphere-delete.sh](https://github.com/kubesphere/ks-installer/blob/master/scripts/kubesphere-delete.sh) to uninstall KubeSphere from Kubernetes. Copy it from [GitHub source file](https://raw.githubusercontent.com/kubesphere/ks-installer/master/scripts/kubesphere-delete.sh) and execute this script in your local.
Uninstalling will remove KubeSphere from your Kubernetes cluster. This operation is irreversible and does not have any backup. Please be cautious with this operation.
{{</ notice >}}

View File

@ -69,7 +69,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh -
{{< notice note >}}
After you download KubeKey, If you transfer it to a new machine also with poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.
After you download KubeKey, if you transfer it to a new machine that also has a poor network connection to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.
{{</ notice >}}

View File

@ -14,7 +14,7 @@ As an open-source project on [GitHub](https://github.com/kubesphere), KubeSphere
Users are provided with multiple installation options. Please note that not all options are mutually exclusive. For instance, you can deploy KubeSphere with minimal packages on multiple nodes in an air-gapped environment.
- [All-in-One](../../../quick-start/all-in-one-on-linux/): Install KubeSphere on a single node. It is only for users to quickly get familiar with KubeSphere.
- [Multi-Node](../multioverview/): Install KubeSphere on multiple nodes. It is for testing or development.
- [Multi-node](../multioverview/): Install KubeSphere on multiple nodes. It is for testing or development.
- [Air-gapped Installation on Linux](../air-gapped-installation): All images of KubeSphere have been encapsulated into a package. It is convenient for air-gapped installation on Linux machines.
- [High Availability Installation](../ha-configuration/): Install high availability KubeSphere on multiple nodes which is used for the production environment.
- Minimal Packages: Only install the minimum required system components of KubeSphere. Here is the minimum resource requirement:

View File

@ -1,8 +1,8 @@
---
title: "Multi-Node Installation"
title: "Multi-node Installation"
keywords: 'Multi-node, Installation, KubeSphere'
description: 'Explain how to install KubeSphere on multiple nodes'
linkTitle: "Multi-Node Installation"
linkTitle: "Multi-node Installation"
weight: 3120
---
@ -126,7 +126,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh -
{{< notice note >}}
After you download KubeKey, If you transfer it to a new machine also with poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.
After you download KubeKey, if you transfer it to a new machine that also has a poor network connection to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.
{{</ notice >}}
@ -221,7 +221,7 @@ List all your machines under `hosts` and add their detailed information as above
`internalAddress`: The private IP address of the instance.
- In this tutorial, port 22 is the default port of SSH so you do not need to add it in the yaml file. Otherwise, you need to add the port number after the IP address. For example:
- In this tutorial, port 22 is the default port of SSH so you do not need to add it in the YAML file. Otherwise, you need to add the port number after the IP address. For example:
```yaml
hosts:

View File

@ -46,7 +46,7 @@ You need to create this file of chart configurations and input the values above
#### Key
To get values for `qy_access_key_id` and `qy_secret_access_key`, log in the web console of [QingCloud](https://console.qingcloud.com/login) and refer to the image below to create a key first. Download the key after it is created, which is stored in a csv file.
To get values for `qy_access_key_id` and `qy_secret_access_key`, log in to the web console of [QingCloud](https://console.qingcloud.com/login) and refer to the image below to create a key first. Download the key after it is created, which is stored in a CSV file.
![access-key](/images/docs/installing-on-linux/introduction/persistent-storage-configuration/access-key.jpg)
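With the key pair from the downloaded CSV file, the chart configuration file might look like the sketch below. Only the two key fields come from this guide; the file name and overall layout are hypothetical, so follow the chart's own `values.yaml` for the exact structure:

```bash
# Write a minimal chart configuration (hypothetical file name and layout)
cat > csi-values.yaml <<'EOF'
qy_access_key_id: "<access key ID from the CSV file>"
qy_secret_access_key: "<secret access key from the CSV file>"
EOF
```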

View File

@ -2,25 +2,25 @@
title: "Deploy KubeSphere on Bare Metal"
keywords: 'Kubernetes, KubeSphere, bare-metal'
description: 'How to install KubeSphere on bare metal.'
linkTitle: "Deploy KubeSphere on Bare Metal"
weight: 3320
---
## Introduction
In addition to the deployment on cloud, KubeSphere can also be installed on bare metal. As the virtualization layer is removed, the infrastructure overhead is drastically reduced, which brings more compute and storage resources to app deployments. As a result, hardware efficiency is improved. Refer to the example below of how to deploy KubeSphere on bare metal.
In addition to the deployment on cloud, KubeSphere can also be installed on bare metal. As the virtualization layer is removed, the infrastructure overhead is drastically reduced, which brings more compute and storage resources to app deployments. As a result, hardware efficiency is improved. Refer to the example below to deploy KubeSphere on bare metal.
## Prerequisites
- Please make sure that you already know how to install KubeSphere with a multi-node cluster based on the tutorial [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/).
- Make sure you already know how to install KubeSphere on a multi-node cluster based on the tutorial [Multi-node Installation](../../../installing-on-linux/introduction/multioverview/).
- Server and network redundancy in your environment.
- Considering data persistence, for a production environment, it is recommended you prepare persistent storage and create a StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.
- For a production environment, it is recommended that you prepare persistent storage and create a StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.
## Prepare Linux Hosts
This tutorial uses 3 physical machines of **DELL 620, Intel(R) Xeon(R) CPU E5-2640 v2 @ 2.00GHz (32 GB memory)**, on which **CentOS Linux release 7.6.1810 (Core)** will be installed for the minimal deployment of KubeSphere.
### CentOS Installation
### Install CentOS
Download and install the [image](http://mirror1.es.uci.edu/centos/7.6.1810/isos/x86_64/CentOS-7-x86_64-DVD-1810.iso) first. Make sure you allocate at least 200 GB to the root directory, where Docker images are stored (you can skip this if you are installing KubeSphere for testing).
@ -35,108 +35,108 @@ Here is a list of the three hosts for your reference.
|192.168.60.153|worker1|worker|
|192.168.60.154|worker2|worker|
### NIC Setting
### NIC settings
1. Clear NIC configurations.
```bash
ifdown em1
```
```bash
ifdown em2
```
```bash
rm -rf /etc/sysconfig/network-scripts/ifcfg-em1
```
```bash
rm -rf /etc/sysconfig/network-scripts/ifcfg-em2
```
```bash
ifdown em1
```
```bash
ifdown em2
```
```bash
rm -rf /etc/sysconfig/network-scripts/ifcfg-em1
```
```bash
rm -rf /etc/sysconfig/network-scripts/ifcfg-em2
```
2. Create the NIC bonding.
```bash
nmcli con add type bond con-name bond0 ifname bond0 mode 802.3ad ip4 192.168.60.152/24 gw4 192.168.60.254
```
```bash
nmcli con add type bond con-name bond0 ifname bond0 mode 802.3ad ip4 192.168.60.152/24 gw4 192.168.60.254
```
3. Set the bonding mode.
```bash
nmcli con mod id bond0 bond.options mode=802.3ad,miimon=100,lacp_rate=fast,xmit_hash_policy=layer2+3
```
```bash
nmcli con mod id bond0 bond.options mode=802.3ad,miimon=100,lacp_rate=fast,xmit_hash_policy=layer2+3
```
4. Bind the physical NIC.
```bash
nmcli con add type bond-slave ifname em1 con-name em1 master bond0
```
```bash
nmcli con add type bond-slave ifname em1 con-name em1 master bond0
```
```bash
nmcli con add type bond-slave ifname em2 con-name em2 master bond0
```
```bash
nmcli con add type bond-slave ifname em2 con-name em2 master bond0
```
5. Change the NIC mode.
```bash
vi /etc/sysconfig/network-scripts/ifcfg-bond0
BOOTPROTO=static
```
```bash
vi /etc/sysconfig/network-scripts/ifcfg-bond0
BOOTPROTO=static
```
6. Restart Network Manager.
```bash
systemctl restart NetworkManager
```
```bash
systemctl restart NetworkManager
```
```bash
nmcli con # Display NIC information
```
```bash
nmcli con # Display NIC information
```
7. Change the host name and DNS.
```bash
hostnamectl set-hostname worker-1
```
```bash
hostnamectl set-hostname worker-1
```
```bash
vim /etc/resolv.conf
```
```bash
vim /etc/resolv.conf
```
### Time Setting
### Time settings
1. Synchronize time.
```bash
yum install -y chrony
```
```bash
systemctl enable chronyd
```
```bash
systemctl start chronyd
```
```bash
timedatectl set-ntp true
```
2. Set the time zone.
```bash
timedatectl set-timezone Asia/Shanghai
```
3. Check whether the NTP server is available.
```bash
chronyc activity -v
```
### Firewall settings
Execute the following commands to stop and disable the FirewallD service:
@ -156,7 +156,7 @@ systemctl stop firewalld
systemctl disable firewalld
```
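You can then confirm that the service is stopped, for example:

```bash
systemctl status firewalld   # The Active field should show inactive (dead)
```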
### Package updates and dependencies
Execute the following commands to update system packages and install dependencies.
@ -180,10 +180,6 @@ yum install epel-release
yum install conntrack-tools
```
{{< notice note >}}
You may not need to install all the dependencies depending on the Kubernetes version to be installed. For more information, see [Dependency Requirements](../../../installing-on-linux/introduction/multioverview/).
@ -224,7 +220,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh -
{{< notice note >}}
After you download KubeKey, if you transfer it to a new machine also with poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.
{{</ notice >}}
@ -298,7 +294,7 @@ Create a cluster using the configuration file you customized above:
./kk create cluster -f config-sample.yaml
```
#### Verify the installation
After the installation finishes, you can inspect the installation logs by executing the command below:
@ -306,7 +302,7 @@ After the installation finishes, you can inspect the logs of installation by exe
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```
If you can see the welcome log return, it means the installation is successful.
```bash
**************************************************
@ -328,74 +324,74 @@ https://kubesphere.io 20xx-xx-xx xx:xx:xx
#####################################################
```
#### Log in to the console
You can use the default account and password (`admin/P@88w0rd`) to log in to the console at `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after login.
#### Enable pluggable components (Optional)
The example above demonstrates the process of a default minimal installation. To enable other components in KubeSphere, see [Enable Pluggable Components](../../../pluggable-components/) for more details.
## System Improvements
- Update your system.
```bash
yum update
```
- Add the required options to the kernel boot arguments:
```bash
sudo /sbin/grubby --update-kernel=ALL --args='cgroup_enable=memory cgroup.memory=nokmem swapaccount=1'
```
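These options only take effect after the reboot at the end of this section. Afterwards, you can verify that they were applied, for example:

```bash
cat /proc/cmdline   # The cgroup_enable and swapaccount options should appear here
```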
- Enable the `overlay` kernel module, which is required by the `overlay2` storage driver.

```bash
echo "overlay" | sudo tee -a /etc/modules-load.d/overlay.conf
```
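To confirm that the module is loaded (after a reboot, or after running `sudo modprobe overlay` manually), you can run:

```bash
lsmod | grep overlay
```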
- Set the default grub2 boot entry so that the updated kernel configuration is used.
```bash
sudo grub2-set-default 0
```
- Adjust kernel parameters and make the change effective.
```bash
cat <<EOF | sudo tee -a /etc/sysctl.conf
vm.max_map_count = 262144
fs.may_detach_mounts = 1
net.ipv4.ip_forward = 1
vm.swappiness = 1
kernel.pid_max = 1000000
fs.inotify.max_user_instances = 524288
EOF
sudo sysctl -p
```
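You can spot-check that the new values are active, for example:

```bash
sysctl vm.max_map_count net.ipv4.ip_forward
```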
- Adjust system limits.
```bash
vim /etc/security/limits.conf
# Append the following lines to the file:
* soft nofile 1024000
* hard nofile 1024000
* soft memlock unlimited
* hard memlock unlimited
root soft nofile 1024000
root hard nofile 1024000
root soft memlock unlimited
```
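The new limits apply to new login sessions. After logging in again, you can check them, for example:

```bash
ulimit -n   # Expected: 1024000
ulimit -l   # Expected: unlimited
```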
- Remove the previous limit configuration.
```bash
sudo rm /etc/security/limits.d/20-nproc.conf
```
- Reboot the system.
```bash
reboot
```

View File

@ -321,7 +321,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh -
{{< notice note >}}
After you download KubeKey, if you transfer it to a new machine also with poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.
{{</ notice >}}
@ -531,9 +531,9 @@ https://kubesphere.io 2020-08-15 23:32:12
#####################################################
```
### Log in to the Console
You can use the default account and password (`admin/P@88w0rd`) to log in to the console at `http://{$IP}:30880` to take a tour of KubeSphere. Please change the default password after login.
## Enable Pluggable Components (Optional)

View File

@ -124,7 +124,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh -
{{< notice note >}}
After you download KubeKey, if you transfer it to a new machine also with poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.
{{</ notice >}}
@ -206,9 +206,9 @@ The public load balancer is used directly instead of an internal load balancer d
{{</ notice >}}
### Persistent Storage Plugin Configurations
See [Persistent Storage Configurations](../../../installing-on-linux/introduction/storage-configuration/) for details.
### Configure the Network Plugin

View File

@ -1,121 +1,121 @@
---
title: "Deploy KubeSphere on QingCloud Instance"
title: "Deploy KubeSphere on QingCloud Instances"
keywords: "KubeSphere, Installation, HA, High-availability, LoadBalancer"
description: "The tutorial is for installing a high-availability cluster."
description: "The tutorial is for installing a high-availability cluster on QingCloud instances."
linkTitle: "Deploy KubeSphere on QingCloud Instances"
weight: 3220
---
## Introduction
For a production environment, you need to consider the high availability of the cluster. If key components (e.g. kube-apiserver, kube-scheduler, and kube-controller-manager) are all running on the same master node, Kubernetes and KubeSphere will be unavailable once the master node goes down. Therefore, you need to set up a high-availability cluster by provisioning load balancers with multiple master nodes. You can use any cloud load balancer or any hardware load balancer (e.g. F5). In addition, Keepalived with [HAProxy](https://www.haproxy.com/) or Nginx is also an alternative for creating high-availability clusters.
This tutorial walks you through an example of how to create two [QingCloud load balancers](https://docs.qingcloud.com/product/network/loadbalancer), serving as the internal load balancer and external load balancer respectively, and of how to implement high availability of master and etcd nodes using the load balancers.
## Prerequisites
- Make sure you already know how to install KubeSphere on a multi-node cluster by following the [guide](../../../installing-on-linux/introduction/multioverview/). For detailed information about the configuration file that is used for installation, see [Edit the configuration file](../../../installing-on-linux/introduction/multioverview/#2-edit-the-configuration-file). This tutorial focuses more on how to configure load balancers.
- You need a [QingCloud](https://console.qingcloud.com/login) account to create load balancers, or follow the guide of any other cloud provider to create load balancers.
- For a production environment, it is recommended that you prepare persistent storage and create a StorageClass in advance. For development and testing, you can use the integrated OpenEBS to provision LocalPV as the storage service directly.
## Architecture
This example prepares six machines of **Ubuntu 16.04.6**. You will create two load balancers, and deploy three master and etcd nodes on three of the machines. You can configure these master and etcd nodes in `config-sample.yaml` created by KubeKey (note that this is the default file name, which you can change).
![ha-architecture](/images/docs/installing-on-linux/installing-on-public-cloud/deploy-kubesphere-on-qingcloud-instances/ha-architecture.png)
{{< notice note >}}
The Kubernetes document [Options for Highly Available topology](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/) demonstrates that there are two options for configuring the topology of a highly available (HA) Kubernetes cluster, i.e. stacked etcd topology and external etcd topology. You should carefully consider the advantages and disadvantages of each topology before setting up an HA cluster according to [this document](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/ha-topology/). This tutorial adopts stacked etcd topology to bootstrap an HA cluster for demonstration purposes.
{{</ notice >}}
## Install an HA Cluster
### Step 1: Create load balancers
This step demonstrates how to create load balancers on the QingCloud platform.
#### Create an internal load balancer
1. Log in to the [QingCloud console](https://console.qingcloud.com/login). In the menu on the left, under **Network & CDN**, select **Load Balancers**. Click **Create** to create a load balancer.
![create-lb](/images/docs/installing-on-linux/installing-on-public-cloud/deploy-kubesphere-on-qingcloud-instances/create-lb.png)
2. In the pop-up window, set a name for the load balancer. Choose the VxNet where your machines are created from the **Network** drop-down list (in this example, it is `pn`). Other fields can use the default values as shown below. Click **Submit** to finish.
![qingcloud-lb](/images/docs/installing-on-linux/installing-on-public-cloud/deploy-kubesphere-on-qingcloud-instances/qingcloud-lb.png)
3. Click the load balancer. On the detail page, create a listener that listens on port `6443` with the **Listener Protocol** set to `TCP`.
![listener](/images/docs/installing-on-linux/installing-on-public-cloud/deploy-kubesphere-on-qingcloud-instances/listener.png)
- **Name**: Define a name for this Listener
- **Listener Protocol**: Select `TCP` protocol
- **Port**: `6443`
- **Balance mode**: `Poll`
Click **Submit** to continue.
{{< notice note >}}

After you create the listener, check the firewall rules of the load balancer. Make sure that port `6443` has been added to the firewall rules and that external traffic is allowed to port `6443`. Otherwise, the installation will fail. You can find the information in **Security Groups** under **Security** on the QingCloud platform.

{{</ notice >}}

4. Click **Add Backend**, and choose the VxNet you just selected (in this example, it is `pn`). Click **Advanced Search**, choose the three master nodes, and set the port to `6443`, which is the default secure port of the api-server.

![3-master](/images/docs/installing-on-linux/installing-on-public-cloud/deploy-kubesphere-on-qingcloud-instances/3-master.png)

Click **Submit** when you finish.

5. Click **Apply Changes** to make the configurations take effect. At this point, you can find that the three masters have been added as the backend servers of the listener that is behind the internal load balancer.

{{< notice note >}}

The status of all masters might show **Not Available** after you add them as backends. This is normal since port `6443` of the api-server is not active on the master nodes yet. The status will change to **Active** and the port of the api-server will be exposed after the installation finishes, which means the internal load balancer you configured works as expected.

{{</ notice >}}

![apply-change](/images/docs/installing-on-linux/installing-on-public-cloud/deploy-kubesphere-on-qingcloud-instances/apply-change.png)

Record the Intranet VIP shown under **Networks**. The IP address will be added later to the configuration file.
#### Create an external load balancer
You need to create an EIP in advance. To create an EIP, go to **Elastic IPs** under **Networks & CDN**.
{{< notice note >}}
Two elastic IPs are needed for this tutorial, one for the VPC network and the other for the external load balancer created in this step. You cannot associate the same EIP to the VPC network and the load balancer at the same time.
{{</ notice >}}
1. Similarly, create an external load balancer, but do not select a VxNet for the **Network** field. Bind the EIP that you created to this load balancer by clicking **Add IPv4**.
![bind-eip](/images/docs/installing-on-linux/installing-on-public-cloud/deploy-kubesphere-on-qingcloud-instances/bind-eip.png)
2. On the load balancer's detail page, create a listener that listens on port `30880` (NodePort of KubeSphere console) with **Listener Protocol** set to `HTTP`.
{{< notice note >}}

After you create the listener, check the firewall rules of the load balancer. Make sure that port `30880` has been added to the firewall rules and that external traffic is allowed to port `30880`. Otherwise, the installation will fail. You can find the information in **Security Groups** under **Security** on the QingCloud platform.

{{</ notice >}}
![listener2](/images/docs/installing-on-linux/installing-on-public-cloud/deploy-kubesphere-on-qingcloud-instances/listener2.png)
3. Click **Add Backend**. In **Advanced Search**, choose the six machines on which you are going to install KubeSphere within the VxNet `pn`, and set the port to `30880`.
![six-instances](/images/docs/installing-on-linux/installing-on-public-cloud/deploy-kubesphere-on-qingcloud-instances/six-instances.png)
Click **Submit** when you finish.
4. Click **Apply Changes** to make the configurations take effect. At this point, you can find that the six machines have been added as the backend servers of the listener that is behind the external load balancer.
### Step 2: Download KubeKey
[Kubekey](https://github.com/kubesphere/kubekey) is the next-gen installer which provides an easy, fast and flexible way to install Kubernetes and KubeSphere.
Follow the step below to download KubeKey.
@ -147,7 +147,7 @@ curl -sfL https://get-kk.kubesphere.io | VERSION=v1.0.1 sh -
{{< notice note >}}
After you download KubeKey, if you transfer it to a new machine also with poor network connections to Googleapis, you must run `export KKZONE=cn` again before you proceed with the steps below.
{{</ notice >}}
@ -182,9 +182,9 @@ Create an example configuration file with default configurations. Here Kubernete
{{</ notice >}}
### Step 3: Set cluster nodes
As you adopt the HA topology with stacked control plane nodes, the master nodes and etcd nodes are on the same three machines.
| **Property** | **Description** |
| :----------- | :-------------------------------- |
@ -193,7 +193,7 @@ As we adopt the HA topology with stacked control plane nodes, where etcd nodes a
| `master` | Master node names |
| `worker` | Worker node names |
Put the master node names (`master1`, `master2` and `master3`) under `etcd` and `master` respectively as below, which means these three machines will serve as both the master and etcd nodes. Note that the number of etcd nodes needs to be odd. Meanwhile, it is not recommended that you install etcd on worker nodes since the memory consumption of etcd is very high.
#### config-sample.yaml Example
@ -221,11 +221,13 @@ spec:
- node3
```
For a complete configuration sample explanation, see [this file](https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md).
### Step 4: Configure the load balancer
In addition to the node information, you need to provide the load balancer information in the same YAML file. For the Intranet VIP address, you can find it in the last part of [Step 1](#step-1-create-load-balancers), where you create an internal load balancer. Assume the VIP address and listening port of the **internal load balancer** are `192.168.0.253` and `6443` respectively, and you can refer to the following example.
#### The configuration example in config-sample.yaml
@ -241,19 +243,17 @@ In addition to the node information, you need to provide the load balancer infor
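For reference, the load balancer section of `config-sample.yaml` looks roughly like the sketch below, using the example VIP and port above (`lb.kubesphere.local` is the default domain name):

```yaml
controlPlaneEndpoint:
  domain: lb.kubesphere.local
  address: "192.168.0.253"
  port: 6443
```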
{{< notice note >}}
- The address and port should be indented by two spaces in `config-sample.yaml`, and the address should be the VIP.
- The domain name of the load balancer is `lb.kubesphere.local` by default for internal access. If you need to change the domain name, uncomment and modify it.
{{</ notice >}}
After that, you can enable any components you need by following [Enable Pluggable Components](../../../pluggable-components/) and start your HA cluster installation.
### Step 5: Kubernetes cluster configurations (Optional)
Kubekey provides some fields and parameters to allow the cluster administrator to customize Kubernetes installation, including Kubernetes version, network plugins and image registry. There are some default values provided in `config-sample.yaml`. You can modify Kubernetes-related configurations in the file based on your needs. For more information, see [Kubernetes Cluster Configurations](../../../installing-on-linux/introduction/vars/).
### Step 6: Persistent storage plugin configurations
Considering data persistence in a production environment, you need to prepare persistent storage and configure the storage plugin (e.g. CSI) in `config-sample.yaml` to define which storage service you want.
{{< notice note >}}
@ -261,24 +261,24 @@ For testing or development, you can skip this part. KubeKey will use the integra
{{</ notice >}}
**Available storage plugins and clients**
- Ceph RBD & CephFS
- GlusterFS
- NFS
- QingCloud CSI
- QingStor CSI
- More plugins will be supported in future releases
Make sure you have configured the storage plugin before you get started. KubeKey will create a StorageClass and persistent volumes for related workloads during the installation. For more information, see [Persistent Storage Configurations](../../../installing-on-linux/introduction/storage-configuration/).
### Step 7: Enable pluggable components (Optional)
KubeSphere has decoupled some core feature components since v2.1.0. These components are designed to be pluggable which means you can enable them either before or after installation. By default, KubeSphere will be installed with the minimal package if you do not enable them.
You can enable any of them according to your demands. It is highly recommended that you install these pluggable components to discover the full-stack features and capabilities provided by KubeSphere. Make sure your machines have sufficient CPU and memory before you enable them. See [Enable Pluggable Components](../../../pluggable-components/) for details.
### Step 8: Start to bootstrap a cluster
After you complete the configuration, you can execute the following command to start the installation:
@ -286,9 +286,9 @@ After you complete the configuration, you can execute the following command to s
./kk create cluster -f config-sample.yaml
```
### Step 9: Verify the installation
Inspect the installation logs. When you see output logs as follows, it means KubeSphere has been successfully deployed.
```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
@ -316,18 +316,28 @@ https://kubesphere.io 2020-08-13 10:50:24
#####################################################
```
### Step 10: Verify the HA cluster
Now that you have finished the installation, go back to the detail page of both the internal and external load balancers to see the status.
![active](/images/docs/installing-on-linux/installing-on-public-cloud/deploy-kubesphere-on-qingcloud-instances/active.png)
Both listeners show that the status is **Active**, meaning nodes are up and running.
![active-listener](/images/docs/installing-on-linux/installing-on-public-cloud/deploy-kubesphere-on-qingcloud-instances/active-listener.png)
In the web console of KubeSphere, you can also see that all the nodes are functioning well.
![cluster-node](/images/docs/installing-on-linux/installing-on-public-cloud/deploy-kubesphere-on-qingcloud-instances/cluster-node.png)
To verify if the cluster is highly available, you can turn off an instance on purpose. For example, the above console is accessed through the address `EIP:30880`, where the EIP is the one bound to the external load balancer. If the cluster is highly available, the console will still work well even if you shut down a master node.
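For example, after powering off one master instance on the cloud platform, you can check node status from any remaining master node:

```bash
kubectl get nodes   # The stopped master turns NotReady while the console stays reachable
```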
## See Also
[Multi-node Installation](../../../installing-on-linux/introduction/multioverview/)
[Kubernetes Cluster Configurations](../../../installing-on-linux/introduction/vars/)
[Persistent Storage Configurations](../../../installing-on-linux/introduction/storage-configuration/)
[Enable Pluggable Components](../../../pluggable-components/)

View File

@ -41,6 +41,6 @@ KubeSphere separates [frontend](https://github.com/kubesphere/console) from [bac
## Service Components
Each component has many services. See [Overview](../../pluggable-components/overview/) for more details.
![Service Components](https://pek3b.qingstor.com/kubesphere-docs/png/20191017163549.png)

View File

@ -1,5 +1,5 @@
---
linkTitle: "Enable Multi-cluster in KubeSphere"
linkTitle: "Enable Multi-cluster Management in KubeSphere"
weight: 5200
_build:

View File

@ -1,34 +1,28 @@
---
title: "Agent Connection"
keywords: 'Kubernetes, KubeSphere, multicluster, agent-connection'
description: 'How to manage multiple clusters using an agent.'
titleLink: "Agent Connection"
weight: 5220
---
## Prerequisites
To use the multi-cluster feature through agent connection, you must have at least two clusters, serving as the H Cluster and the M Cluster respectively. A cluster can be defined as the H Cluster or the M Cluster either before or after you install KubeSphere. For more information about installing KubeSphere, refer to [Installing on Linux](../../../installing-on-linux) and [Installing on Kubernetes](../../../installing-on-kubernetes).
{{< notice note >}}
Multi-cluster management requires KubeSphere to be installed on the target clusters. If you have an existing cluster, you can deploy KubeSphere on it with a minimal installation so that it can be imported. See [Minimal KubeSphere on Kubernetes](../../../quick-start/minimal-kubesphere-on-k8s/) for details.
{{</ notice >}}
## Agent Connection
The component [Tower](https://github.com/kubesphere/tower) of KubeSphere is used for agent connection. Tower is a tool for network connection between clusters through the agent. If the Host Cluster (hereafter referred to as **H** Cluster) cannot access the Member Cluster (hereafter referred to as **M** Cluster) directly, you can expose the proxy service address of the H cluster. This enables the M Cluster to connect to the H cluster through the agent. This method is applicable when the M Cluster is in a private environment (e.g. IDC) and the H Cluster is able to expose the proxy service. The agent connection is also applicable when your clusters are distributed across different cloud providers.
### Prepare a Host Cluster
A host cluster provides you with the central control plane and you can only define one host cluster.
{{< tabs >}}
{{< tab "KubeSphere has been installed" >}}
If you already have a standalone KubeSphere cluster installed, you can set the value of `clusterRole` to `host` by editing the cluster configuration.
- Option A - Use the web console:
Use the `admin` account to log in to the console and go to **CRDs** on the **Cluster Management** page. Enter the keyword `ClusterConfiguration` and go to its detail page. Edit the YAML of `ks-installer`, which is similar to [Enable Pluggable Components](../../../pluggable-components/).
- Option B - Use Kubectl:
@ -36,24 +30,32 @@ If you already have a standalone KubeSphere installed, you can set the value of
kubectl edit cc ks-installer -n kubesphere-system
```
In the YAML file of `ks-installer`, navigate to `multicluster`, set the value of `clusterRole` to `host`, then click **Update** (if you use the web console) to make it effective:
```yaml
multicluster:
clusterRole: host
```
You need to **wait for a while** so that the change can take effect.
{{</ tab >}}
{{< tab "KubeSphere has not been installed" >}}
You can define a host cluster before you install KubeSphere either on Linux or on an existing Kubernetes cluster. If you want to [install KubeSphere on Linux](../../../installing-on-linux/introduction/multioverview/#1-create-an-example-configuration-file), you use a `config-sample.yaml` file. If you want to [install KubeSphere on an existing Kubernetes cluster](../../../installing-on-kubernetes/introduction/overview/#deploy-kubesphere), you use two YAML files, one of which is `cluster-configuration.yaml`. To set a host cluster, change the value of `clusterRole` to `host` in `config-sample.yaml` or `cluster-configuration.yaml` accordingly before you install KubeSphere.
```yaml
multicluster:
clusterRole: host
```
{{< notice note >}}
If you install KubeSphere on a single-node cluster ([All-in-One](../../../quick-start/all-in-one-on-linux/)), you do not need to create a `config-sample.yaml` file. In this case, you can set a host cluster after KubeSphere is installed.
{{</ notice >}}
{{</ tab >}}
{{</ tabs >}}
@ -64,21 +66,21 @@ You can use **kubectl** to retrieve the installation logs to verify the status b
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```
## Set the Proxy Service Address
After the installation of the host cluster, a proxy service called `tower` will be created in `kubesphere-system`, whose type is `LoadBalancer`.
{{< tabs >}}
{{< tab "A LoadBalancer available in your cluster" >}}
If a LoadBalancer plugin is available for the cluster, you can see a corresponding address for `EXTERNAL-IP` of tower, which will be acquired by KubeSphere. In this case, the proxy service is set automatically. That means you can skip the step to set the proxy. Execute the following command to verify if you have a LoadBalancer.
```bash
kubectl -n kubesphere-system get svc
```
The output is similar to this:
```shell
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
@ -95,19 +97,23 @@ Generally, there is always a LoadBalancer solution in the public cloud, and the
{{< tab "No LoadBalancer available in your cluster" >}}
1. If you cannot see a corresponding address displayed (`EXTERNAL-IP` is `pending`), you need to manually set the proxy address. For example, you have an available public IP address `139.198.120.120`, and port `8080` of **this IP address has been forwarded to port** `30721` of the cluster. Execute the following command to check the service.
```shell
kubectl -n kubesphere-system get svc
```
The output is similar to this:
```
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
tower LoadBalancer 10.233.63.191 <pending> 8080:30721/TCP 16h
```
2. Add the value of `proxyPublishAddress` to the configuration file of `ks-installer` and input the public IP address (`139.198.120.120` in this tutorial) and port number as follows.
- Option A - Use the web console:
Use the `admin` account to log in to the console and go to **CRDs** on the **Cluster Management** page. Enter the keyword `ClusterConfiguration` and go to its detail page. Edit the YAML of `ks-installer`, which is similar to [Enable Pluggable Components](../../../pluggable-components/).
- Option B - Use Kubectl:
@ -115,7 +121,7 @@ Generally, there is always a LoadBalancer solution in the public cloud, and the
kubectl -n kubesphere-system edit clusterconfiguration ks-installer
```
Navigate to `multicluster` and add a new line for `proxyPublishAddress` to define the IP address to access tower.
```yaml
multicluster:
@ -133,9 +139,9 @@ Generally, there is always a LoadBalancer solution in the public cloud, and the
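The resulting section would look roughly like the sketch below, assuming the example public IP address and port above:

```yaml
multicluster:
  clusterRole: host
  proxyPublishAddress: http://139.198.120.120:8080
```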
{{</ tabs >}}
## Prepare a Member Cluster
In order to manage the member cluster from the **host cluster**, you need to make `jwtSecret` the same between them. Therefore, get it first by executing the following command on the **host cluster**.
```bash
kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret
@ -151,11 +157,11 @@ jwtSecret: "gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU"
{{< tab "KubeSphere has been installed" >}}
If you already have a standalone KubeSphere cluster installed, you can set the value of `clusterRole` to `member` by editing the cluster configuration.
- Option A - Use the web console:
Use the `admin` account to log in to the console and go to **CRDs** on the **Cluster Management** page. Enter the keyword `ClusterConfiguration` and go to its detail page. Edit the YAML of `ks-installer`, which is similar to [Enable Pluggable Components](../../../pluggable-components/).
- Option B - Use Kubectl:
@ -163,7 +169,7 @@ If you already have a standalone KubeSphere installed, you can set the value of
kubectl edit cc ks-installer -n kubesphere-system
```
In the YAML file of `ks-installer`, input the corresponding `jwtSecret` shown above:
```yaml
authentication:
@ -177,40 +183,56 @@ multicluster:
clusterRole: member
```
You need to **wait for a while** so that the change can take effect.
{{</ tab >}}
{{< tab "KubeSphere has not been installed" >}}
You can define a member cluster before you install KubeSphere either on Linux or on an existing Kubernetes cluster. If you want to [install KubeSphere on Linux](../../../installing-on-linux/introduction/multioverview/#1-create-an-example-configuration-file), you use a `config-sample.yaml` file. If you want to [install KubeSphere on an existing Kubernetes cluster](../../../installing-on-kubernetes/introduction/overview/#deploy-kubesphere), you use two YAML files, one of which is `cluster-configuration.yaml`. To set a member cluster, input the value of `jwtSecret` shown above and change the value of `clusterRole` to `member` in `config-sample.yaml` or `cluster-configuration.yaml` accordingly before you install KubeSphere.
```yaml
authentication:
jwtSecret: gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU
```
Scroll down and set the value of `clusterRole` to `member`:
```yaml
multicluster:
clusterRole: member
```
{{< notice note >}}
If you install KubeSphere on a single-node cluster ([All-in-One](../../../quick-start/all-in-one-on-linux/)), you do not need to create a `config-sample.yaml` file. In this case, you can set a member cluster after KubeSphere is installed.
{{</ notice >}}
{{</ tab >}}
{{</ tabs >}}
You can use **kubectl** to retrieve the installation logs to verify the status by running the following command. Wait for a while, and you will be able to see the successful log return if the member cluster is ready.

```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```

## Import a Member Cluster

1. Log in to the KubeSphere console as `admin` and click **Add Cluster** on the **Clusters Management** page.

![add-cluster](/images/docs/multicluster-management/enable-multicluster-management-in-kubesphere/agent-connection/add-cluster.png)
2. Enter the basic information of the cluster to be imported and click **Next**.
![cluster-info](/images/docs/multicluster-management/enable-multicluster-management-in-kubesphere/agent-connection/cluster-info.png)
3. In **Connection Method**, select **Cluster connection agent** and click **Import**. The console will show the agent deployment generated by the H Cluster.
![agent-en](/images/docs/multicluster-management/enable-multicluster-management-in-kubesphere/agent-connection/agent-en.png)

4. Create an `agent.yaml` file on the M Cluster based on the instruction, then copy and paste the agent deployment to the file. Execute `kubectl create -f agent.yaml` on the node and wait for the agent to be up and running. Please make sure the proxy address is accessible to the M Cluster.
5. You can see the cluster you have imported in the H Cluster when the cluster agent is up and running.
![cluster-imported](/images/docs/multicluster-management/enable-multicluster-management-in-kubesphere/agent-connection/cluster-imported.png)

View File

@ -1,34 +1,28 @@
---
title: "Direct Connection"
keywords: 'Kubernetes, KubeSphere, multicluster, hybrid-cloud, direct-connection'
description: 'How to manage multiple clusters using direct connection.'
titleLink: "Direct Connection"
weight: 5210
---
## Prerequisites
To use the multi-cluster feature through direct connection, you must have at least two clusters, serving as the H Cluster and the M Cluster respectively. A cluster can be defined as the H Cluster or the M Cluster either before or after you install KubeSphere. For more information about installing KubeSphere, refer to [Installing on Linux](../../../installing-on-linux) and [Installing on Kubernetes](../../../installing-on-kubernetes).
{{< notice note >}}
Multi-cluster management requires KubeSphere to be installed on the target clusters. If you have an existing cluster, you can deploy KubeSphere on it with a minimal installation so that it can be imported. See [Minimal KubeSphere on Kubernetes](../../../quick-start/minimal-kubesphere-on-k8s/) for details.
{{</ notice >}}
## Direct Connection

If the kube-apiserver address of the Member Cluster (hereafter referred to as **M** Cluster) is accessible on any node of the Host Cluster (hereafter referred to as **H** Cluster), you can adopt **Direct Connection**. This method is applicable when the kube-apiserver address of the M Cluster can be exposed, or when the H Cluster and the M Cluster are in the same private network or subnet.
### Prepare a Host Cluster
A host cluster provides you with the central control plane and you can only define one host cluster.
{{< tabs >}}
{{< tab "KubeSphere has been installed" >}}
If you already have a standalone KubeSphere cluster installed, you can set the value of `clusterRole` to `host` by editing the cluster configuration.
- Option A - Use the web console:
Use the `admin` account to log in to the console and go to **CRDs** on the **Cluster Management** page. Enter the keyword `ClusterConfiguration` and go to its detail page. Edit the YAML of `ks-installer`, which is similar to [Enable Pluggable Components](../../../pluggable-components/).
- Option B - Use Kubectl:
@ -36,24 +30,32 @@ If you already have a standalone KubeSphere installed, you can set the value of
kubectl edit cc ks-installer -n kubesphere-system
```
In the YAML file of `ks-installer`, navigate to `multicluster`, set the value of `clusterRole` to `host`, then click **Update** (if you use the web console) to make it effective:
```yaml
multicluster:
clusterRole: host
```
You need to **wait for a while** so that the change can take effect.
{{</ tab >}}
{{< tab "KubeSphere has not been installed" >}}
You can define a host cluster before you install KubeSphere either on Linux or on an existing Kubernetes cluster. If you want to [install KubeSphere on Linux](../../../installing-on-linux/introduction/multioverview/#1-create-an-example-configuration-file), you use a `config-sample.yaml` file. If you want to [install KubeSphere on an existing Kubernetes cluster](../../../installing-on-kubernetes/introduction/overview/#deploy-kubesphere), you use two YAML files, one of which is `cluster-configuration.yaml`. To set a host cluster, change the value of `clusterRole` to `host` in `config-sample.yaml` or `cluster-configuration.yaml` accordingly before you install KubeSphere.
```yaml
multicluster:
clusterRole: host
```
{{< notice note >}}
If you install KubeSphere on a single-node cluster ([All-in-One](../../../quick-start/all-in-one-on-linux/)), you do not need to create a `config-sample.yaml` file. In this case, you can set a host cluster after KubeSphere is installed.
{{</ notice >}}
{{</ tab >}}
{{</ tabs >}}
@ -64,9 +66,9 @@ You can use **kubectl** to retrieve the installation logs to verify the status b
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```
## Prepare a Member Cluster

In order to manage the member cluster from the **host cluster**, you need to make `jwtSecret` the same between them. Therefore, get it first by executing the following command on the **host cluster**.
```bash
kubectl -n kubesphere-system get cm kubesphere-config -o yaml | grep -v "apiVersion" | grep jwtSecret
@ -82,11 +84,11 @@ jwtSecret: "gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU"
{{< tab "KubeSphere has been installed" >}}
If you already have a standalone KubeSphere cluster installed, you can set the value of `clusterRole` to `member` by editing the cluster configuration.
- Option A - Use Web Console:
- Option A - Use the web console:
Use `admin` account to log in the console and go to **CRDs** on the **Cluster Management** page. Enter the keyword `ClusterConfiguration` and go to its detail page. Edit the YAML of `ks-installer`, which is similar to [Enable Pluggable Components](../../../pluggable-components/).
Use the `admin` account to log in to the console and go to **CRDs** on the **Cluster Management** page. Enter the keyword `ClusterConfiguration` and go to its detail page. Edit the YAML of `ks-installer`, which is similar to [Enable Pluggable Components](../../../pluggable-components/).
- Option B - Use Kubectl:
@ -94,7 +96,7 @@ If you already have a standalone KubeSphere installed, you can set the value of
kubectl edit cc ks-installer -n kubesphere-system
```
Input the corresponding `jwtSecret` shown above:
In the YAML file of `ks-installer`, input the corresponding `jwtSecret` shown above:
```yaml
authentication:
@ -108,24 +110,30 @@ multicluster:
clusterRole: member
```
You need to **wait for a while** so that the change can take effect.
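Equivalently, a non-interactive `kubectl patch` can set both fields at once (a sketch; substitute your own `jwtSecret` value, and verify the field paths against your ClusterConfiguration first):

```bash
# Set jwtSecret and the member role in one patch of the ClusterConfiguration.
kubectl -n kubesphere-system patch cc ks-installer --type merge \
  -p '{"spec":{"authentication":{"jwtSecret":"gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU"},"multicluster":{"clusterRole":"member"}}}'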
{{</ tab >}}
{{< tab "KubeSphere has not been installed" >}}
There is no big difference than installing a standalone KubeSphere if you define a member cluster before installation. Please note that the `clusterRole` in `config-sample.yaml` or `cluster-configuration.yaml` has to be set as follows:
You can define a member cluster before you install KubeSphere either on Linux or on an existing Kubernetes cluster. If you want to [install KubeSphere on Linux](../../../installing-on-linux/introduction/multioverview/#1-create-an-example-configuration-file), you use a `config-sample.yaml` file. If you want to [install KubeSphere on an existing Kubernetes cluster](../../../installing-on-kubernetes/introduction/overview/#deploy-kubesphere), you use two YAML files, one of which is `cluster-configuration.yaml`. To set a member cluster, input the value of `jwtSecret` shown above and change the value of `clusterRole` to `member` in `config-sample.yaml` or `cluster-configuration.yaml` accordingly before you install KubeSphere.
```yaml
authentication:
jwtSecret: gfIwilcc0WjNGKJ5DLeksf2JKfcLgTZU
```
Scroll down and set the value of `clusterRole` to `member`:
```yaml
multicluster:
clusterRole: member
```
{{< notice note >}}
If you install KubeSphere on a single-node cluster ([All-in-One](../../../quick-start/all-in-one-on-linux/)), you do not need to create a `config-sample.yaml` file. In this case, you can set a member cluster after KubeSphere is installed.
{{</ notice >}}
{{</ tab >}}
{{</ tabs >}}
@ -136,21 +144,26 @@ You can use **kubectl** to retrieve the installation logs to verify the status b
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```
### Import Cluster
## Import a Member Cluster
1. Open the H Cluster dashboard and click **Add Cluster**.
![Add Cluster](https://ap3.qingstor.com/kubesphere-website/docs/20200827231611.png)
1. Log in to the KubeSphere console as `admin` and click **Add Cluster** on the **Clusters Management** page.
![add-cluster](/images/docs/multicluster-management/enable-multicluster-management-in-kubesphere/direct-connection/add-cluster.png)
2. Enter the basic information of the cluster to be imported and click **Next**.
![Import Cluster](https://ap3.qingstor.com/kubesphere-website/docs/20200827211842.png)
3. In **Connection Method**, select **Direct Connection to Kubernetes cluster**.
![cluster-info](/images/docs/multicluster-management/enable-multicluster-management-in-kubesphere/direct-connection/cluster-info.png)
4. [Retrieve the KubeConfig](../retrieve-kubeconfig), copy the KubeConfig of the Member Cluster and paste it into the box.
{{< notice tip >}}
Please make sure the `server` address in KubeConfig is accessible on any node of the H Cluster.
{{</ notice >}}
![import a cluster - direct connection](/images/docs/direct_import_en.png)
3. In **Connection Method**, select **Direct Connection to Kubernetes cluster**, copy the KubeConfig of the member cluster and paste it into the box.
5. Click **Import** and wait for cluster initialization to finish.
![Azure AKS](https://ap3.qingstor.com/kubesphere-website/docs/20200827231650.png)
{{< notice note >}}
Make sure the `server` address in KubeConfig is accessible on any node of the host cluster.
{{</ notice >}}
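A quick way to verify reachability is to query the API server address from a host-cluster node (a sketch; the address below is hypothetical — use the `server` field from the member cluster's kubeconfig):

```bash
# Expect a JSON version response if the member API server is reachable.
curl -k https://192.168.0.80:6443/version
```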
![kubeconfig](/images/docs/multicluster-management/enable-multicluster-management-in-kubesphere/direct-connection/kubeconfig.jpg)
4. Click **Import** and wait for cluster initialization to finish.
![cluster-imported](/images/docs/multicluster-management/enable-multicluster-management-in-kubesphere/direct-connection/cluster-imported.png)
View File
@ -1,18 +1,20 @@
---
title: "Retrieve KubeConfig"
keywords: 'Kubernetes, KubeSphere, multicluster, hybrid-cloud, kubeconfig'
description: 'Describe how to retrieve kubeconfig from a Kuberenetes cluster'
description: 'How to retrieve kubeconfig from a Kubernetes cluster.'
titleLink: "Retrieve KubeConfig"
weight: 5230
---
You need to provide the kubeconfig of a member cluster if you import it using [direct connection](../direct-connection/).
## Prerequisites
You have a Kubernetes cluster.
## Explore KubeConfig File
## Get KubeConfig
Go to `$HOME/.kube`, and check the file in the directory where, normally, a file named **config** exists. Use the following command to retrieve the KubeConfig file:
Go to `$HOME/.kube` and check the files in the directory; normally, a file named `config` exists there. Use the following command to retrieve the KubeConfig file:
```bash
cat $HOME/.kube/config
View File
@ -55,6 +55,10 @@ Let's say you have an EKS cluster which already has KubeSphere installed. And yo
```shell
~:# kubectl get node
```
The output is similar to this:
```
NAME STATUS ROLES AGE VERSION
ip-10-0-47-38.cn-north-1.compute.internal Ready <none> 11h v1.18.8-eks-7c9bda
ip-10-0-8-148.cn-north-1.compute.internal Ready <none> 78m v1.18.8-eks-7c9bda
@ -111,8 +115,12 @@ users:
```
Double-check that the new kubeconfig does have access to EKS.
```
```shell
~:# kubectl get nodes
```
The output is similar to this:
```
NAME STATUS ROLES AGE VERSION
ip-10-0-47-38.cn-north-1.compute.internal Ready <none> 11h v1.18.8-eks-7c9bda
ip-10-0-8-148.cn-north-1.compute.internal Ready <none> 78m v1.18.8-eks-7c9bda
View File
@ -2,12 +2,22 @@
title: "KubeSphere Federation"
keywords: 'Kubernetes, KubeSphere, federation, multicluster, hybrid-cloud'
description: 'Overview'
linkTitle: "KubeSphere Federation"
weight: 5120
---
The multi-cluster feature relates to the network connection among multiple clusters. Therefore, it is important to understand the topological relations of clusters as the workload can be reduced.
The multi-cluster feature relates to the network connection among multiple clusters. Therefore, it is important to understand the topological relations of clusters.
Before you use the multi-cluster feature, you need to create a Host Cluster (hereafter referred to as **H** Cluster), which is actually a KubeSphere cluster with the multi-cluster feature enabled. All the clusters managed by the H Cluster are called Member Cluster (hereafter referred to as **M** Cluster). They are common KubeSphere clusters that do not have the multi-cluster feature enabled. There can only be one H Cluster while multiple M Clusters can exist at the same time. In a multi-cluster architecture, the network between the H Cluster and the M Cluster can be connected directly or through an agent. The network between M Clusters can be set in a completely isolated environment.
## How the Multi-cluster Architecture Works
![Kubernetes Federation in KubeSphere](https://ap3.qingstor.com/kubesphere-website/docs/20200907232319.png)
Before you use the central control plane of KubeSphere to manage multiple clusters, you need to create a Host Cluster, also known as **H** Cluster. The H Cluster, essentially, is a KubeSphere cluster with the multi-cluster feature enabled. It provides you with the control plane for unified management of Member Clusters, also known as **M** Clusters. M Clusters are common KubeSphere clusters without the central control plane. Tenants with the necessary permissions (usually cluster administrators) can access the control plane from the H Cluster to manage all M Clusters, for example, to view and edit resources on them. Conversely, if you access the web console of any M Cluster separately, you cannot see any resources on other clusters.
![centrol-control-plane](/images/docs/multicluster-management/introduction/kubesphere-federation/centrol-control-plane.png)
There can only be one H Cluster while multiple M Clusters can exist at the same time. In a multi-cluster architecture, the network between the H Cluster and M Clusters can be connected directly or through an agent. The network between M Clusters can be set in a completely isolated environment.
![kubesphere-federation](/images/docs/multicluster-management/introduction/kubesphere-federation/kubesphere-federation.png)
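Once member clusters are imported, they are represented on the H Cluster as custom resources, which you can inspect with kubectl (a sketch; the exact resource definition depends on your KubeSphere version):

```bash
# On the host cluster, list the registered clusters.
kubectl get clusters.cluster.kubesphere.io
```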
## Vendor Agnostic
KubeSphere features a powerful, inclusive central control plane so that you can manage any KubeSphere clusters in a unified way regardless of deployment environments or cloud providers.
View File
@ -2,7 +2,7 @@
title: "Overview"
keywords: 'Kubernetes, KubeSphere, multicluster, hybrid-cloud'
description: 'Overview'
linkTitle: "Overview"
weight: 5110
---
@ -10,6 +10,6 @@ Today, it's very common for organizations to run and manage multiple Kubernetes
The most common use cases of multi-cluster management include service traffic load balancing, development and production isolation, decoupling of data processing and data storage, cross-cloud backup and disaster recovery, flexible allocation of computing resources, low latency access with cross-region services, and vendor lock-in avoidance.
KubeSphere is developed to address multi-cluster and multi-cloud management challenges and implement the proceeding user scenarios, providing users with a unified control plane to distribute applications and its replicas to multiple clusters from public cloud to on-premises environments. KubeSphere also provides rich observability across multiple clusters including centralized monitoring, logging, events, and auditing logs.
KubeSphere is developed to address multi-cluster and multi-cloud management challenges, including the scenarios mentioned above. It provides users with a unified control plane to distribute applications and their replicas to multiple clusters from public cloud to on-premises environments. KubeSphere also boasts rich observability across multiple clusters, including centralized monitoring, logging, events, and auditing logs.
![KubeSphere Multi-cluster Management](/images/docs/multi-cluster-overview.jpg)
![multi-cluster-overview](/images/docs/multicluster-management/introduction/overview/multi-cluster-overview.jpg)
View File
@ -8,7 +8,7 @@ weight: 6600
## What are KubeSphere Alerting and Notification
Alerting and Notification are two important building blocks of observability, closely related to monitoring and logging. The alerting system in KubeSphere, coupled with the proactive failure notification system, allows users to know activities of interest based on alert policies. When a predefined threshold of a certain metric is reached, an alert will be sent to preconfigured recipients, the notification method of which can be set by yourself, including Email, WeChat Work and Slack. With a highly functional alerting and notification system in place, you can quickly identify and resolve potential issues in advance before they affect your business.
Alerting and Notification are two important building blocks of observability, closely related to monitoring and logging. The alerting system in KubeSphere, coupled with the proactive failure notification system, allows users to know activities of interest based on alerting policies. When a predefined threshold of a certain metric is reached, an alert will be sent to preconfigured recipients through notification methods you set yourself, including Email, WeChat Work and Slack. With a highly functional alerting and notification system in place, you can quickly identify and resolve potential issues before they affect your business.
For more information, see [Alerting Policy](../../project-user-guide/alerting/alerting-policy) and [Alerting Message](../../project-user-guide/alerting/alerting-message).
@ -78,7 +78,7 @@ The process of installing KubeSphere on Kubernetes is same as stated in the tuto
## Enable Alerting and Notification after Installation
1. Log in the console as `admin`. Click **Platform** in the top-left corner and select **Clusters Management**.
1. Log in to the console as `admin`. Click **Platform** in the top-left corner and select **Clusters Management**.
![clusters-management](/images/docs/enable-pluggable-components/kubesphere-alerting-and-notification/clusters-management.png)
View File
@ -72,7 +72,7 @@ The process of installing KubeSphere on Kubernetes is same as stated in the tuto
## Enable the App Store after Installation
1. Log in the console as `admin`. Click **Platform** in the top-left corner and select **Clusters Management**.
1. Log in to the console as `admin`. Click **Platform** in the top-left corner and select **Clusters Management**.
![clusters-management](/images/docs/enable-pluggable-components/kubesphere-app-store/clusters-management.png)
View File
@ -100,7 +100,7 @@ By default, ks-installer will install Elasticsearch internally if Auditing is en
## Enable Auditing Logs after Installation
1. Log in the console as `admin`. Click **Platform** in the top-left corner and select **Clusters Management**.
1. Log in to the console as `admin`. Click **Platform** in the top-left corner and select **Clusters Management**.
![clusters-management](/images/docs/enable-pluggable-components/kubesphere-auditing-logs/clusters-management.png)
View File
@ -70,7 +70,7 @@ The process of installing KubeSphere on Kubernetes is same as stated in the tuto
## Enable DevOps after Installation
1. Log in the console as `admin`. Click **Platform** in the top-left corner and select **Clusters Management**.
1. Log in to the console as `admin`. Click **Platform** in the top-left corner and select **Clusters Management**.
![clusters-management](/images/docs/enable-pluggable-components/kubesphere-devops-system/clusters-management.png)
View File
@ -102,7 +102,7 @@ By default, ks-installer will install Elasticsearch internally if Events is enab
## Enable Events after Installation
1. Log in the console as `admin`. Click **Platform** in the top-left corner and select **Clusters Management**.
1. Log in to the console as `admin`. Click **Platform** in the top-left corner and select **Clusters Management**.
![clusters-management](/images/docs/enable-pluggable-components/kubesphere-events/clusters-management.png)
View File
@ -28,7 +28,7 @@ When you install KubeSphere on Linux, you need to create a configuration file, w
- If you adopt [All-in-One Installation](../../quick-start/all-in-one-on-linux/), you do not need to create a `config-sample.yaml` file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and look to get familiar with the system. If you want to enable Logging in this mode (e.g. for testing purposes), refer to [the following section](#enable-logging-after-installation) to see how Logging can be installed after installation.
- If you adopt [Multi-Node Installation](../../installing-on-linux/introduction/multioverview/) and are using symbolic links for docker root directory, make sure all nodes follow the exactly same symbolic links. Logging agents are deployed in DaemonSets onto nodes. Any discrepancy in container log path may cause collection failures on that node.
- If you adopt [Multi-node Installation](../../installing-on-linux/introduction/multioverview/) and are using symbolic links for the Docker root directory, make sure all nodes follow exactly the same symbolic links. Logging agents are deployed as DaemonSets onto nodes. Any discrepancy in the container log path may cause collection failures on that node.
{{</ notice >}}
@ -104,7 +104,7 @@ By default, ks-installer will install Elasticsearch internally if Logging is ena
## Enable Logging after Installation
1. Log in the console as `admin`. Click **Platform** in the top-left corner and select **Clusters Management**.
1. Log in to the console as `admin`. Click **Platform** in the top-left corner and select **Clusters Management**.
![clusters-management](/images/docs/enable-pluggable-components/kubesphere-logging-system/clusters-management.png)
View File
@ -75,7 +75,7 @@ The process of installing KubeSphere on Kubernetes is same as stated in the tuto
## Enable the Network Policy after Installation
1. Log in the console as `admin`. Click **Platform** in the top-left corner and select **Clusters Management**.
1. Log in to the console as `admin`. Click **Platform** in the top-left corner and select **Clusters Management**.
![clusters-management](/images/docs/enable-pluggable-components/network-policies/clusters-management.png)
View File
@ -69,7 +69,7 @@ The process of installing KubeSphere on Kubernetes is same as stated in the tuto
## Enable the Service Mesh after Installation
1. Log in the console as `admin`. Click **Platform** in the top-left corner and select **Clusters Management**.
1. Log in to the console as `admin`. Click **Platform** in the top-left corner and select **Clusters Management**.
![clusters-management](/images/docs/enable-pluggable-components/kubesphere-service-mesh/clusters-management.png)
View File
@ -14,11 +14,11 @@ This tutorial demonstrates how to set a gateway in KubeSphere for the external a
## Prerequisites
You need to create a workspace, a project and an account (`project-admin`). The account must be invited to the project with the role of `admin` at the project level. For more information, see [Create Workspace, Project, Account and Role](../../../docs/quick-start/create-workspace-and-project).
You need to create a workspace, a project and an account (`project-admin`). The account must be invited to the project with the role of `admin` at the project level. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../docs/quick-start/create-workspace-and-project).
## Set a Gateway
1. Log in the KubeSphere web console as `project-admin` and go to your project. In **Project Settings** from the navigation bar, select **Advanced Settings** and click **Set Gateway**.
1. Log in to the KubeSphere web console as `project-admin` and go to your project. In **Project Settings** from the navigation bar, select **Advanced Settings** and click **Set Gateway**.
![set-project-gateway](/images/docs/project-administration/project-gateway/set-project-gateway.jpg)
View File
@ -12,7 +12,7 @@ KubeSphere project network isolation lets project administrators enforce which n
## Prerequisites
- You have already enabled Network Policy. Please refer to [network-policy](../../pluggable-components/network-policy) if it is not ready yet.
- Use an account of the `admin` role at the project level. For example, use the account `project-admin` created in [Create Workspace, Project, Account and Role](../../quick-start/create-workspace-and-project/).
- Use an account of the `admin` role at the project level. For example, use the account `project-admin` created in [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/).
{{< notice note >}}
For the implementation of the Network Policy, you can refer to [kubesphere-network-policy](https://github.com/kubesphere/community/blob/master/sig-network/concepts-and-designs/kubesphere-network-policy.md).
View File
@ -20,7 +20,7 @@ In project scope, you can grant the following resources' permissions to a role:
## Prerequisites
At least one project has been created, such as `demo-project`. Besides, you need an account of the `admin` role (e.g. `project-admin`) at the project level. See [Create Workspace, Project, Account and Role](../../quick-start/create-workspace-and-project/) if it is not ready yet.
At least one project has been created, such as `demo-project`. Besides, you need an account of the `admin` role (e.g. `project-admin`) at the project level. See [Create Workspaces, Projects, Accounts and Roles](../../quick-start/create-workspace-and-project/) if it is not ready yet.
## Built-in Roles
@ -40,7 +40,7 @@ In **Project Roles**, there are three available built-in roles as shown below. B
## Create a Project Role
1. Log in the console as `project-admin` and select a project (e.g. `demo-project`) under **Projects** list.
1. Log in to the console as `project-admin` and select a project (e.g. `demo-project`) under **Projects** list.
{{< notice note >}}
View File
@ -2,45 +2,49 @@
title: "Alerting Message (Workload Level)"
keywords: 'KubeSphere, Kubernetes, Workload, Alerting, Message, Notification'
description: 'How to view alerting messages at the workload level.'
linkTitle: "Alerting Message (Workload Level)"
weight: 10720
---
Alert messages record detailed information of alerts triggered based on alert rules, including monitoring targets, alert policies, recent notifications and comments.
Alerting messages record detailed information of alerts triggered based on alert rules, including monitoring targets, alerting policies, recent notifications and comments.
This tutorial demonstrates how to view alerting messages at the workload level.
## Prerequisites
You have created a workload-level alert policy and received alert notifications of it. If it is not ready, please refer to [Alert Policy (Workload Level)](../alerting-policy/) to create one first.
- You have enabled [KubeSphere Alerting and Notification](../../../pluggable-components/alerting-notification/).
- You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project).
- You have created a workload-level alerting policy and received alert notifications of it. If it is not ready, refer to [Alerting Policy (Workload Level)](../alerting-policy/) to create one first.
## Hands-on Lab
### Task 1: View Alert Message
### Step 1: View alerting messages
1. Log in the console and go to your project. Navigate to **Alerting Message** under **Monitoring & Alerting**, and you can see alert messages in the list. In the example of [Alert Policy (Workload Level)](../alerting-policy/), you set two monitoring targets (`reviews-v1` and `details-v1`), and both of their memory usage are higher than the threshold, so you can see two alert messages corresponding to them.
1. Log in to the console as `project-regular` and go to your project. Navigate to **Alerting Message** under **Monitoring & Alerting**, and you can see alerting messages in the list. In the example of [Alerting Policy (Workload Level)](../alerting-policy/), you set two monitoring targets (`reviews-v1` and `details-v1`), and the memory usage of both is higher than the threshold, so you can see two corresponding alerting messages.
![alerting_message_workload_level_list](/images/docs/alerting/alerting_message_workload_level_list.png)
![alerting_message_workload_level_list](/images/docs/alerting/alerting_message_workload_level_list.png)
2. Select one of the alert messages to enter the detail page. In **Alerting Detail**, you can see the graph of the memory usage of the monitored workload over time, which has been continuously higher than the threshold of 20 MiB set in the alert rule, so the alert was triggered.
2. Select one of the alerting messages to enter the detail page. In **Alerting Detail**, you can see the graph of the memory usage of the monitored workload over time, which has been continuously higher than the threshold of 20 MiB set in the alert rule, so the alert was triggered.
![alerting_message_workload_level_detail](/images/docs/alerting/alerting_message_workload_level_detail.png)
![alerting_message_workload_level_detail](/images/docs/alerting/alerting_message_workload_level_detail.png)
### Task 2: View Alert Policy
### Step 2: View the alerting policy
Switch to **Alerting Policy** to view the alert policy corresponding to this alert message, and you can see the triggering rule of it set in the example of [Alert Policy (Workload Level)](../alerting-policy/).
Switch to **Alerting Policy** to view the alerting policy corresponding to this alerting message, and you can see the triggering rule of it set in the example of [Alerting Policy (Workload Level)](../alerting-policy/).
![alerting_message_workload_level_policy](/images/docs/alerting/alerting_message_workload_level_policy.png)
### Task 3: View Recent Notification
### Step 3: View recent notifications
1. Switch to **Recent Notification**. You can see that 3 notifications have been received because the notification rule was set with a repetition period of `Alert once every 5 minutes` and retransmission of `Resend up to 3 times`.
![alerting_message_workload_level_notification](/images/docs/alerting/alerting_message_workload_level_notification.png)
![alerting_message_workload_level_notification](/images/docs/alerting/alerting_message_workload_level_notification.png)
2. Log in your email to see alert notification mails sent by the KubeSphere mail server. You have received a total of 6 emails. This is because the memory usage of two monitored workloads (**Deployments**) has exceeded the threshold of `20 MiB` continuously, and the notification email is sent every 5 minutes for 3 consecutive times based on the notification rule.
2. Log in to your email to see alert notification emails sent by the KubeSphere mail server. You have received a total of 6 emails. This is because the memory usage of two monitored workloads (**Deployments**) has exceeded the threshold of `20 MiB` continuously, and the notification email is sent every 5 minutes for 3 consecutive times based on the notification rule.
### Task 4: Add Comment
### Step 4: Add comments
Click **Comment** to add comments to current alert messages. For example, as the memory usage of workload is higher than the threshold set based on the alert rule, you can add a comment in the dialog: `Default maximum memory usage quota needs to be increased for this workload`.
Click **Comment** to add comments to the current alerting message. For example, as the memory usage of the workload is higher than the threshold set based on the alert rule, you can add a comment in the dialog: `Default maximum memory usage quota needs to be increased for this workload`.
![alerting_message_workload_level_comment](/images/docs/alerting/alerting_message_workload_level_comment.png)
View File
@ -2,74 +2,72 @@
title: "Alerting Policy (Workload Level)"
keywords: 'KubeSphere, Kubernetes, Workload, Alerting, Policy, Notification'
description: 'How to set alerting policies at the workload level.'
linkTitle: "Alerting Policy (Workload Level)"
weight: 10710
---
## Objective
KubeSphere provides alert policies for nodes and workloads. This guide demonstrates how you, as a project member, can create alert policies for workloads in the project and configure mail notifications. See [Alerting Policy (Node Level)](../../../cluster-administration/alerting/alerting-policy/) to learn how to configure alert policies for nodes.
KubeSphere provides alerting policies for nodes and workloads. This tutorial demonstrates how to create alerting policies for workloads in the project and configure mail notifications. See [Alerting Policy (Node Level)](../../../cluster-administration/alerting/alerting-policy/) to learn how to configure alerting policies for nodes.
## Prerequisites
- [KubeSphere Alerting and Notification](../../../pluggable-components/alerting-notification/) needs to be enabled by an account granted the role `platform-admin`.
- [Mail Server](../../../cluster-administration/cluster-settings/mail-server/) needs to be configured by an account granted the role `platform-admin`.
- You have been invited to one project with an `operator` role.
- You have enabled [KubeSphere Alerting and Notification](../../../pluggable-components/alerting-notification/).
- You have configured the [Mail server](../../../cluster-administration/cluster-settings/mail-server/).
- You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project).
- You have workloads in this project. If they are not ready, go to **Applications** under **Application Workloads**, and click **Deploy Sample Application** to deploy an application quickly. For more information, see [Deploy Bookinfo and Manage Traffic](../../../quick-start/deploy-bookinfo-to-k8s/).
## Hands-on Lab
### Task 1: Create an Alert Policy
### Step 1: Open the dashboard
Log in the console and go to your project. Navigate to **Alerting Policy** under **Monitoring & Alerting**, then click **Create**.
Log in to the console as `project-regular` and go to your project. Navigate to **Alerting Policy** under **Monitoring & Alerting**, then click **Create**.
![alerting_policy_workload_level_create](/images/docs/alerting/alerting_policy_workload_level_create.png)
### Task 2: Provide Basic Information
### Step 2: Provide basic information
In the dialog that appears, fill in the basic information as follows. Click **Next** after you finish.
- **Name**: a concise and clear name as its unique identifier, such as `alert-demo`.
- **Alias**: to help you distinguish alert policies better. Chinese is supported.
- **Description**: a brief introduction to the alert policy.
- **Alias**: to help you distinguish alerting policies better. Chinese is supported.
- **Description**: a brief introduction to the alerting policy.
![alerting_policy_workload_level_basic_info](/images/docs/alerting/alerting_policy_workload_level_basic_info.png)
### Task 3: Select Monitoring Targets
### Step 3: Select monitoring targets
You can select three types of workloads as the monitoring targets: **Deployments**, **StatefulSets** and **DaemonSets**. Select **Deployments** as the type and `reviews-v1` and `details-v1` as monitoring targets, then click **Next**.
![alerting_policy_workload_level_monitoring_target](/images/docs/alerting/alerting_policy_workload_level_monitoring_target.png)
### Task 4: Add Alerting Rules
### Step 4: Add an alerting rule
1. Click **Add Rule** to begin to create an alerting rule. The rule defines parameters such as metric type, check period, consecutive times, metric threshold and alert level to provide rich configurations. The check period (the second field under **Rule**) means the time interval between 2 consecutive checks of the metric. For example, `2 minutes/period` means the metric is checked every two minutes. The consecutive times (the third field under **Rule**) means the number of consecutive times that the metric meets the threshold when checked. An alert is only triggered when the actual time is equal to or is greater than the number of consecutive times set in the alert policy.
1. Click **Add Rule** to begin to create an alerting rule. The rule defines parameters such as metric type, check period, consecutive times, metric threshold and alert level to provide rich configurations. The check period (the second field under **Rule**) means the time interval between 2 consecutive checks of the metric. For example, `2 minutes/period` means the metric is checked every two minutes. The consecutive times (the third field under **Rule**) means the number of consecutive times that the metric meets the threshold when checked. An alert is only triggered when the actual number of consecutive times is equal to or greater than the number set in the alerting policy.
![alerting_policy_workload_level_alerting_rule](/images/docs/alerting/alerting_policy_workload_level_alerting_rule.png)
![alerting_policy_workload_level_alerting_rule](/images/docs/alerting/alerting_policy_workload_level_alerting_rule.png)
2. In this example, set those parameters to `memory usage`, `1 minute/period`, `2 consecutive times`, `>` and `20` MiB for threshold and `Major Alert` for alert level. It means KubeSphere checks the memory usage every minute, and a major alert is triggered if it is larger than 20 MiB for 2 consecutive times.
3. Click **√** to save the rule when you finish and click **Next** to continue.
{{< notice note >}}
{{< notice note >}}
- You can create workload-level alert policies for the following metrics:
- CPU: `cpu usage`
- Memory: `memory usage (including cache)`, `memory usage`
- Network: `network data transmitting rate`, `network data receiving rate`
- Workload Metric: `unavailable deployment replicas ratio`
You can create workload-level alerting policies for the following metrics:
- CPU: `cpu usage`
- Memory: `memory usage (including cache)`, `memory usage`
- Network: `network data transmitting rate`, `network data receiving rate`
- Workload Metric: `unavailable deployment replicas ratio`
{{</ notice >}}
### Task 5: Set Notification Rule
### Step 5: Set a notification rule
1. **Effective Notification Time Range** is used to set the sending time of notification emails, such as `09:00 ~ 19:00`. **Notification Channel** currently only supports **Email**. You can add email addresses of members to be notified to **Notification List**.
2. **Customize Repetition Rules** defines the sending period and the number of retransmissions of notification emails. If an alert has not been resolved, the notification will be sent repeatedly after a certain period of time. Different repetition rules can also be set for different levels of alerts. Since the alert level set in the previous step is `Major Alert`, select `Alert once every 5 minutes` (sending period) in the second field for **Major Alert** and `Resend up to 3 times` (retransmission times) in the third field. Refer to the following image to set notification rules:
![alerting_policy_workload_level_notification_rule](/images/docs/alerting/alerting_policy_workload_level_notification_rule.png)
![alerting_policy_workload_level_notification_rule](/images/docs/alerting/alerting_policy_workload_level_notification_rule.png)
3. Click **Create**, and you can see that the alert policy is successfully created.
3. Click **Create**, and you can see that the alerting policy is successfully created.
### Task 6: View Alert Policy
### Step 6: View the alerting policy
After an alert policy is successfully created, you can enter its detail information page to view the status, alert rules, monitoring targets, notification rule, alert history, etc. Click **More** and select **Change Status** from the drop-down menu to enable or disable this alert policy.
After an alerting policy is successfully created, you can go to its detail page to view the status, alert rules, monitoring targets, notification rule, alert history, etc. Click **More** and select **Change Status** from the drop-down menu to enable or disable this alerting policy.
View File
@ -30,7 +30,7 @@ After you click **Add Container Image**, you will see an image as below.
#### Image Search Bar
You can click the cube icon on the right to select an image from the list or input an image name to search it. KubeSphere provides Docker Hub images and your private image repository. If you want to use your private image repository, you need to create a Docker Hub secret first in **Secrets** under **Configurations**.
You can click the cube icon on the right to select an image from the list or input an image name to search for it. KubeSphere supports images from Docker Hub and your private image repository. If you want to use your private image repository, you need to create an Image Registry Secret first in **Secrets** under **Configurations**.
{{< notice note >}}
@ -68,7 +68,7 @@ You can specify the upper limit of the resources that the application can use, i
{{< notice note >}}
The CPU resource is measure in CPU units, or **Core** in KubeSphere. The memory resource is measured in bytes, or **Mi** in KubeSphere.
The CPU resource is measured in CPU units, shown as **Core** in KubeSphere. The memory resource is measured in bytes, shown as **Mi** (mebibytes) in KubeSphere.
{{</ notice >}}
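For reference, these settings correspond to the standard `resources` fields in the container spec (an illustrative fragment; the values are examples, not defaults):

```yaml
resources:
  limits:
    cpu: "1"        # upper limit: 1 Core
    memory: 512Mi
  requests:
    cpu: 250m       # reserved minimum: 0.25 Core
    memory: 256Mi
```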
@ -92,13 +92,13 @@ This value is indicated by the `imagePullPolicy` field. On the dashboard, you ca
- The default value is `IfNotPresent`, but the value of images tagged with `:latest` is `Always` by default.
- Docker will check the image when pulling it. If the MD5 checksum has not changed, it will not pull the image again.
- The `:latest` should be avoided as much as possible in the production environment, and the latest image can be automatically pulled by the `:latest` in the development environment.
- The `:latest` tag should be avoided as much as possible in production environments; in development environments, the `:latest` tag can be used to automatically pull the latest image.
{{< /notice >}}
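These options map to the `imagePullPolicy` field in the container spec (an illustrative fragment, not exact console output):

```yaml
containers:
  - name: app
    image: nginx:1.19            # a fixed tag; preferred in production
    imagePullPolicy: IfNotPresent
```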
#### **Health Checker**
Support **Liveness**, **Readiness**, and **Startup**. The survival check is used to detect when to restart the container.
KubeSphere supports **Liveness**, **Readiness**, and **Startup** probes.
![container-health-check](/images/docs/project-user-guide/workloads/container-health-check.jpg)
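Behind the dashboard, these checks map to Kubernetes probes. A minimal liveness probe might look like this (a sketch; the path and port are assumptions):

```yaml
livenessProbe:
  httpGet:
    path: /healthz   # assumed health endpoint
    port: 8080       # assumed container port
  initialDelaySeconds: 5
  periodSeconds: 10
```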
@ -210,7 +210,7 @@ The drop-down menu under **Update Strategy** is indicated by the `.spec.updateSt
- **RollingUpdate (Recommended)**
If `.spec.template` is updated, the Pods in the StatefulSet will be automatically deleted with new pods created as replacements. Pods are updated in reserve ordinal order, sequentially deleted and created. A new Pod update will not begin until the previous Pod becomes up and running after it is updated.
If `.spec.template` is updated, the Pods in the StatefulSet will be automatically deleted with new pods created as replacements. Pods are updated in reverse ordinal order, sequentially deleted and created. A new Pod update will not begin until the previous Pod becomes up and running after it is updated.
- **OnDelete**
View File
@ -13,13 +13,13 @@ For more information, see [the official documentation of Kubernetes](https://kub
## Prerequisites
You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project).
You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project).
## Create a CronJob
### Step 1: Open Dashboard
Log in the console as `project-regular`. Go to **Jobs** of a project, choose **CronJobs** and click **Create**.
Log in to the console as `project-regular`. Go to **Jobs** of a project, choose **CronJobs** and click **Create**.
![cronjob-list](/images/docs/project-user-guide/application-workloads/cronjobs/cronjob-list.jpg)
View File
@ -3,7 +3,6 @@ title: "DaemonSets"
keywords: 'KubeSphere, Kubernetes, DaemonSet, workload'
description: 'Kubernetes DaemonSets'
linkTitle: "DaemonSets"
weight: 10230
---
@ -21,23 +20,23 @@ DaemonSets are very helpful in cases where you want to deploy ongoing background
## Prerequisites
You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project).
You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project).
## Create a DaemonSet
### Step 1: Open Dashboard
### Step 1: Open the dashboard
Log in the console as `project-regular`. Go to **Application Workloads** of a project, select **Workloads**, and click **Create** under the tab **DaemonSets**.
Log in to the console as `project-regular`. Go to **Application Workloads** of a project, select **Workloads**, and click **Create** under the tab **DaemonSets**.
![daemonsets](/images/docs/project-user-guide/workloads/daemonsets.jpg)
### Step 2: Input Basic Information
### Step 2: Input basic information
Specify a name for the DaemonSet (e.g. `demo-daemonset`) and click **Next** to continue.
![daemonsets](/images/docs/project-user-guide/workloads/daemonsets_form_1.jpg)
### Step 3: Set Image
### Step 3: Set an image
1. Click the **Add Container Image** box.
@ -68,9 +67,9 @@ Specify a name for the DaemonSet (e.g. `demo-daemonset`) and click **Next** to c
8. Select a deployment mode. For more information, see [Deployment Mode](../container-image-settings/#deployment-mode).
9. Click **Next** to go to the next step when you finish setting the container image.
9. Click **Next** to continue when you finish setting the container image.
### Step 4: Mount Volumes
### Step 4: Mount volumes
You can add a volume directly or mount a ConfigMap or Secret. Alternatively, click **Next** directly to skip this step. For more information about volumes, visit [Volumes](../../storage/volumes/#mount-a-volume).
@ -82,7 +81,7 @@ DaemonSets can't use a volume template, which is used by StatefulSets.
{{</ notice>}}
### Step 5: Configure Advanced Settings
### Step 5: Configure advanced settings
You can add metadata in this section. When you finish, click **Create** to complete the whole process of creating a DaemonSet.
@ -94,7 +93,7 @@ You can add metadata in this section. When you finish, click **Create** to compl
## Check DaemonSet Details
### Detail Page
### Detail page
1. After a DaemonSet is created, it displays in the list as below. You can click the three dots on the right and select the operation from the menu to modify a DaemonSet.
@ -133,6 +132,6 @@ You can add metadata in this section. When you finish, click **Create** to compl
- Click the container log icon to view output logs of the container.
- You can view the Pod detail page by clicking the Pod name.
### Revision Records
### Revision records
After the resource template of a workload is changed, a new revision record will be generated and Pods will be rescheduled for a version update. The latest 10 versions are saved by default, and you can redeploy the workload based on a revision record.
View File
@ -13,23 +13,23 @@ For more information, see the [official documentation of Kubernetes](https://kub
## Prerequisites
You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project).
You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project).
## Create a Deployment
### Step 1: Open Dashboard
### Step 1: Open the dashboard
Log in the console as `project-regular`. Go to **Application Workloads** of a project, select **Workloads**, and click **Create** under the tab **Deployments**.
Log in to the console as `project-regular`. Go to **Application Workloads** of a project, select **Workloads**, and click **Create** under the tab **Deployments**.
![deployments](/images/docs/project-user-guide/workloads/deployments.png)
### Step 2: Input Basic Information
### Step 2: Input basic information
Specify a name for the Deployment (e.g. `demo-deployment`) and click **Next** to continue.
![deployments](/images/docs/project-user-guide/workloads/deployments_form_1.jpg)
### Step 3: Set Image
### Step 3: Set an image
1. Before you set an image, define the number of replicated Pods in **Pod Replicas** by clicking the **plus** or **minus** icon, which is indicated by the `.spec.replicas` field in the manifest file.
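    For example, a replica count of 3 appears in the manifest as follows (an illustrative fragment):

    ```yaml
    spec:
      replicas: 3
    ```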
@ -68,9 +68,9 @@ You can see the Deployment manifest file in YAML format by enabling **Edit Mode*
9. Select a deployment mode. For more information, see [Deployment Mode](../container-image-settings/#deployment-mode).
10. Click **Next** to go to the next step when you finish setting the container image.
10. Click **Next** to continue when you finish setting the container image.
### Step 4: Mount Volumes
### Step 4: Mount volumes
You can add a volume directly or mount a ConfigMap or Secret. Alternatively, click **Next** directly to skip this step. For more information about volumes, visit [Volumes](../../storage/volumes/#mount-a-volume).
@ -82,7 +82,7 @@ Deployments can't use a volume template, which is used by StatefulSets.
{{</ notice>}}
### Step 5: Configure Advanced Settings
### Step 5: Configure advanced settings
You can set a policy for node scheduling and add metadata in this section. When you finish, click **Create** to complete the whole process of creating a Deployment.
@ -98,7 +98,7 @@ You can set a policy for node scheduling and add metadata in this section. When
## Check Deployment Details
### Detail Page
### Detail page
1. After a Deployment is created, it displays in the list as below. You can click the three dots on the right and select actions from the menu to modify your Deployment.
@ -138,6 +138,6 @@ You can set a policy for node scheduling and add metadata in this section. When
- Click the container log icon to view output logs of the container.
- You can view the Pod detail page by clicking the Pod name.
### Revision Records
### Revision records
After the resource template of a workload is changed, a new revision record will be generated and Pods will be rescheduled for a version update. The latest 10 versions are saved by default, and you can redeploy the workload based on a revision record.
View File
@ -2,7 +2,7 @@
title: "Jobs"
keywords: "KubeSphere, Kubernetes, docker, jobs"
description: "Create a Kubernetes Job"
CronJobs: "Jobs"
linkTitle: "Jobs"
weight: 10250
---
@ -15,17 +15,17 @@ The following example demonstrates specific steps of creating a Job (computing
## Prerequisites
You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project).
You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project).
## Create a Job
### Step 1: Open Dashboard
### Step 1: Open the dashboard
Log in the console as `project-regular`. Go to **Application Workloads** and click **Jobs**. Click **Create** to open the modal.
Log in to the console as `project-regular`. Go to **Jobs** under **Application Workloads** and click **Create**.
![create-job](/images/docs/project-user-guide/application-workloads/jobs/create-job.jpg)
### Step 2: Input Basic Information
### Step 2: Input basic information
Enter the basic information. Refer to the image below as an example.
@ -35,7 +35,7 @@ Enter the basic information. Refer to the image below as an example.
![job-create-basic-info](/images/docs/project-user-guide/application-workloads/jobs/job-create-basic-info.png)
### Step 3: Job Settings (Optional)
### Step 3: Job settings (optional)
You can set the values in this step as below or click **Next** to use the default values. Refer to the table below for detailed explanations of each field.
@ -48,7 +48,7 @@ You can set the values in this step as below or click **Next** to use the defaul
| Parallelism | `spec.parallelism` | It specifies the maximum desired number of Pods the Job should run at any given time. The actual number of Pods running in a steady state will be less than this number when the work left to do is less than max parallelism ((`.spec.completions - .status.successful`) < `.spec.parallelism`). For more information, see [Jobs](https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/). |
| Active Deadline Seconds | `spec.activeDeadlineSeconds` | It specifies the duration in seconds relative to the startTime that the Job may be active before the system tries to terminate it; the value must be a positive integer. |
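Put together, these fields appear in the Job manifest roughly as follows (an illustrative fragment; the values are examples):

```yaml
spec:
  completions: 4              # total number of successful Pods required
  parallelism: 2              # at most 2 Pods run at the same time
  activeDeadlineSeconds: 300  # terminate the Job after 300 seconds
```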
### Step 4: Set Image
### Step 4: Set an image
1. Select **Never** for **Restart Policy**. You can only specify **Never** or **OnFailure** for **Restart Policy** when the Job is not completed:
@ -74,7 +74,7 @@ You can set the values in this step as below or click **Next** to use the defaul
For more information about setting images, see [Container Image Settings](../container-image-settings/).
{{</ notice >}}
### Step 5: Inspect Job Manifest (Optional)
### Step 5: Inspect the Job manifest (optional)
1. Enable **Edit Mode** in the top right corner which displays the manifest file of the Job. You can see all the values are set based on what you have specified in the previous steps.
@ -120,16 +120,16 @@ For more information about setting images, see [Container Image Settings](../con
2. You can make adjustments in the manifest directly and click **Create** or disable the **Edit Mode** and get back to the **Create Job** page.
{{< notice note >}}
You can skip **Mount Volumes** and **Advanced Settings** for this tutorial. For more information, see [Pod Volumes](../deployments/#step-4-mount-volumes) and [Deployment Advanced Settings](../deployments/#step-5-configure-advanced-settings).
You can skip **Mount Volumes** and **Advanced Settings** for this tutorial. For more information, see [Mount volumes](../deployments/#step-4-mount-volumes) and [Configure advanced settings](../deployments/#step-5-configure-advanced-settings).
{{</ notice >}}
### Step 6: Check Result
### Step 6: Check the result
1. In the final step of **Advanced Settings**, click **Create** to finish. A new item will be added to the Job list if the creation is successful.
![job-list-new](/images/docs/project-user-guide/application-workloads/jobs/job-list-new.png)
2. Click this Job and go to **Execution Records** tab where you can see the information of each execution record. There are four completed Pods since **Completions** was set to `4` in Step 3.
2. Click this Job and go to **Execution Records** where you can see the information of each execution record. There are four completed Pods since **Completions** was set to `4` in Step 3.
![execution-record](/images/docs/project-user-guide/application-workloads/jobs/execution-record.jpg)
View File
@ -3,7 +3,6 @@ title: "Services"
keywords: 'KubeSphere, Kubernetes, services, workloads'
description: 'Create a KubeSphere Service'
linkTitle: "Services"
weight: 10240
---
@ -13,7 +12,7 @@ With Kubernetes, you don't need to modify your application to use an unfamiliar
For more information, see the [official documentation of Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/).
## Access Type
## Access Types
- **Virtual IP**: It is based on a unique IP generated by the cluster. A Service can be accessed through this IP inside the cluster. This type is suitable for most Services. Alternatively, a Service can also be accessed through a NodePort or LoadBalancer outside the cluster.
@ -27,9 +26,9 @@ In KubeSphere, stateful and stateless Services are created with a virtual IP by
## Prerequisites
You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project).
You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project).
## Service Type
## Service Types
As shown in the image below, KubeSphere provides three basic methods to create a Service: **Stateless Service**, **Stateful Service**, and **External Service**. Besides, you can also customize a Service through **Specify Workloads** and **Edit by YAML** under **Custom Creation**.
@ -63,7 +62,7 @@ The value of `annotations:kubesphere.io/serviceType` keywords can be defined as:
## Create a Stateless Service
### Step 1: Open Dashboard
### Step 1: Open the dashboard
1. Go to **Services** under **Application Workloads** of a project and click **Create**.
@ -77,11 +76,11 @@ The value of `annotations:kubesphere.io/serviceType` keywords can be defined as:
The steps of creating a stateful Service and a stateless Service are basically the same. This example only goes through the process of creating a stateless Service for demonstration purposes.
{{</ notice >}}
{{</ notice >}}
### Step 2: Input Basic Information
### Step 2: Input basic information
1. In the dialog that appears, you can see the filed **Version** prepopulated with `v1`. You need to define a name for the Service, such as `demo-service`. When you finish, click **Next** to continue.
1. In the dialog that appears, you can see the field **Version** prepopulated with `v1`. You need to define a name for the Service, such as `demo-service`. When you finish, click **Next** to continue.
![stateless_form_1](/images/docs/project-user-guide/workloads/stateless_form_1.jpg)
@ -126,9 +125,9 @@ The value of **Name** is used in both configurations, one for Deployment and the
app: xxx
```
### Step 3: Set Image
### Step 3: Set an image
To add a container image for the Service, see [Set Image](../deployments/#step-3-set-image) for details.
To add a container image for the Service, see [Set an image](../deployments/#step-3-set-an-image) for details.
![stateless_form_2.png](/images/docs/project-user-guide/workloads/stateless_form_2.jpg)
@ -138,13 +137,13 @@ For more information about explanations of dashboard properties, see [Container
{{</ notice >}}
### Step 4: Mount Volumes
### Step 4: Mount volumes
To mount a volume for the Service, see [Mount Volumes](../deployments/#step-4-mount-volumes) for details.
![stateless_form_3](/images/docs/project-user-guide/workloads/stateless_form_3.jpg)
### Step 5: Configure Advanced Settings
### Step 5: Configure advanced settings
You can set a policy for node scheduling and add metadata which is the same as explained in [Deployments](../deployments/#step-5-configure-advanced-settings). For a Service, you can see two additional options available, **Internet Access** and **Enable Sticky Session**.
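Roughly speaking, these two options translate to the following Service fields (an illustrative fragment; the exact manifest the console produces may differ):

```yaml
spec:
  type: NodePort              # Internet Access: NodePort or LoadBalancer
  sessionAffinity: ClientIP   # Enable Sticky Session
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800   # maximum session sticky time
```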
@ -172,7 +171,7 @@ This value is specified by `.spec.type`. If you select **LoadBalancer**, you nee
## Check Service Details
### Detail Page
### Detail page
1. After a Service is created, you can click the three dots on the right to further edit it, such as its metadata (excluding **Name**), YAML, port, and Internet access.
View File
@ -3,13 +3,12 @@ title: "StatefulSets"
keywords: 'KubeSphere, Kubernetes, StatefulSets, dashboard, service'
description: 'Kubernetes StatefulSets'
linkTitle: "StatefulSets"
weight: 10220
---
As workload API object, a StatefulSet is used to manage stateful applications. It is responsible for the deploying, scaling of a set of Pods, and guarantees the ordering and uniqueness of these Pods.
As a workload API object, a StatefulSet is used to manage stateful applications. It is responsible for the deployment and scaling of a set of Pods, and guarantees the ordering and uniqueness of these Pods.
Like a Deployment, a StatefulSet manages Pods that are based on an identical container specification. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same specification, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
Like a Deployment, a StatefulSet manages Pods that are based on an identical container specification. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These Pods are created from the same specification, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
If you want to use storage volumes to provide persistence for your workload, you can use a StatefulSet as part of the solution. Although individual Pods in a StatefulSet are susceptible to failure, the persistent Pod identifiers make it easier to match existing volumes to the new Pods that replace any that have failed.
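For reference, a minimal StatefulSet sketch (names, image, and storage size assumed) that ties these pieces together: stable Pod names derived from the StatefulSet name, a governing headless Service, and one PersistentVolumeClaim per Pod:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-stateful
spec:
  serviceName: demo-stateful        # the governing headless Service
  replicas: 3                       # Pods are named demo-stateful-0, -1, -2
  selector:
    matchLabels:
      app: demo-stateful
  template:
    metadata:
      labels:
        app: demo-stateful
    spec:
      containers:
        - name: app
          image: nginx:1.19         # assumed image
          volumeMounts:
            - name: data
              mountPath: /var/lib/data
  volumeClaimTemplates:             # one PVC per Pod, reattached across rescheduling
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```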
@ -24,25 +23,25 @@ For more information, see the [official documentation of Kubernetes](https://kub
## Prerequisites
You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project).
You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project).
## Create a StatefulSet
In KubeSphere, a **Headless** service is also created when you create a StatefulSet. You can find the headless service in [Services](../services/) under **Application Workloads** in a project.
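A headless Service is an ordinary Service whose `clusterIP` is set to `None`, so DNS resolves the Service name to the individual Pod IPs. A minimal sketch (names assumed):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-stateful
spec:
  clusterIP: None        # headless: no virtual IP, DNS returns Pod IPs
  selector:
    app: demo-stateful
  ports:
    - port: 80
```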
### Step 1: Open Dashboard
### Step 1: Open the dashboard
Log in the console as `project-regular`. Go to **Application Workloads** of a project, select **Workloads**, and click **Create** under the tab **StatefulSets**.
Log in to the console as `project-regular`. Go to **Application Workloads** of a project, select **Workloads**, and click **Create** under the tab **StatefulSets**.
![statefulsets](/images/docs/project-user-guide/workloads/statefulsets.jpg)
### Step 2: Input Basic Information
### Step 2: Input basic information
Specify a name for the StatefulSet (e.g. `demo-stateful`) and click **Next** to continue.
![statefulsets](/images/docs/project-user-guide/workloads/statefulsets_form_1.jpg)
### Step 3: Set Image
### Step 3: Set an image
1. Before you set an image, define the number of replicated Pods in **Pod Replicas** by clicking the **plus** or **minus** icon. The number is recorded in the `.spec.replicas` field of the manifest file.
@ -83,15 +82,15 @@ You can see the StatefulSet manifest file in YAML format by enabling **Edit Mode
9. Select a deployment mode. For more information, see [Deployment Mode](../container-image-settings/#deployment-mode).
10. Click **Next** to go to the next step when you finish setting the container image.
10. Click **Next** to continue when you finish setting the container image.
### Step 4: Mount Volumes
### Step 4: Mount volumes
StatefulSets can use volume templates, but you must create them in **Storage** in advance. For more information about volumes, see [Volumes](../../storage/volumes/#mount-a-volume). When you finish, click **Next** to continue.
![statefulsets](/images/docs/project-user-guide/workloads/statefulsets_form_3.jpg)
### Step 5: Configure Advanced Settings
### Step 5: Configure advanced settings
You can set a policy for node scheduling and add metadata in this section. When you finish, click **Create** to complete the whole process of creating a StatefulSet.
@ -107,7 +106,7 @@ You can set a policy for node scheduling and add metadata in this section. When
## Check StatefulSet Details
### Detail Page
### Detail page
1. After a StatefulSet is created, it appears in the list as shown below. You can click the three dots on the right to select actions from the menu to modify your StatefulSet.
@ -147,6 +146,6 @@ You can set a policy for node scheduling and add metadata in this section. When
- Click the container log icon to view output logs of the container.
- You can view the Pod detail page by clicking the Pod name.
### Revision Records
### Revision records
After the resource template of a workload is changed, a new revision record will be generated and Pods will be rescheduled for a version update. The latest 10 versions will be saved by default. You can implement a redeployment based on a revision record.

View File

@ -13,7 +13,7 @@ This tutorial demonstrates how to quickly deploy [NGINX](https://www.nginx.com/)
## Prerequisites
- You have enabled [OpenPitrix (App Store)](../../../pluggable-components/app-store).
- You need to create a workspace, a project, and a user account (`project-regular`) for this tutorial. The account needs to be a platform regular user invited to the project with the `operator` role. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project/).
- You need to create a workspace, a project, and a user account (`project-regular`) for this tutorial. The account needs to be a platform regular user invited to the project with the `operator` role. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/).
## Hands-on Lab

View File

@ -13,13 +13,13 @@ This tutorial demonstrates how to quickly deploy [Grafana](https://grafana.com/)
## Prerequisites
- You have enabled [OpenPitrix (App Store)](../../../pluggable-components/app-store).
- You have completed the tutorial of [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project/). Namely, you must have a workspace, a project and two user accounts (`ws-admin` and `project-regular`). `ws-admin` must be granted the role of `workspace-admin` in the workspace and `project-regular` must be granted the role of `operator` in the project.
- You have completed the tutorial of [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project/). Namely, you must have a workspace, a project and two user accounts (`ws-admin` and `project-regular`). `ws-admin` must be granted the role of `workspace-admin` in the workspace and `project-regular` must be granted the role of `operator` in the project.
## Hands-on Lab
### Step 1: Add an app repository
1. Log in the web console of KubeSphere as `ws-admin`. In your workspace, go to **App Repos** under **Apps Management**, and then click **Add Repo**.
1. Log in to the web console of KubeSphere as `ws-admin`. In your workspace, go to **App Repos** under **Apps Management**, and then click **Add Repo**.
![add-app-repo](/images/docs/project-user-guide/applications/deploy-apps-from-app-templates/add-app-repo.jpg)

View File

@ -16,13 +16,13 @@ This tutorial demonstrates how to create a ConfigMap in KubeSphere.
## Prerequisites
You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project).
You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project).
## Create a ConfigMap
### Step 1: Open the dashboard
Log in the console as `project-regular`. Go to **Configurations** of a project, choose **ConfigMaps** and click **Create**.
Log in to the console as `project-regular`. Go to **Configurations** of a project, choose **ConfigMaps** and click **Create**.
![create-configmap](/images/docs/project-user-guide/configurations/configmaps/create-configmap.jpg)
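For reference, the form fields map to an ordinary Kubernetes ConfigMap. A minimal sketch, with assumed keys and values:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-configmap
data:
  LOG_LEVEL: info          # a plain key-value pair
  app.properties: |        # a file-like value
    color=blue
    mode=production
```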

View File

@ -12,7 +12,7 @@ This tutorial demonstrates how to create Secrets for different image registries.
## Prerequisites
You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project).
You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project).
## Create a Secret
@ -20,7 +20,7 @@ When you create workloads, [Services](../../../project-user-guide/application-wo
### Step 1: Open the dashboard
Log in the web console of KubeSphere as `project-regular`. Go to **Configurations** of a project, choose **Secrets** and click **Create**.
Log in to the web console of KubeSphere as `project-regular`. Go to **Configurations** of a project, choose **Secrets** and click **Create**.
![open-dashboard](/images/docs/project-user-guide/configurations/image-registries/open-dashboard.jpg)
@ -41,8 +41,8 @@ You can see the Secret's manifest file in YAML format by enabling **Edit Mode**
Select **Image Registry Secret** for **Type**. To use images from your private registry as you create application workloads, you need to specify the following fields.
- **Registry Address**. The address of the image registry that stores images for you to use when creating application workloads.
- **User Name**. The account name you use to log in the registry.
- **Password**. The password you use to log in the registry.
- **User Name**. The account name you use to log in to the registry.
- **Password**. The password you use to log in to the registry.
- **Email** (Optional). Your email address.
![image-registry-info](/images/docs/project-user-guide/configurations/image-registries/image-registry-info.jpg)
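Under the hood, these fields are encoded into a Secret of type `kubernetes.io/dockerconfigjson`. A minimal sketch (name and value are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: demo-registry-secret
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded Docker config holding the registry address,
  # user name, and password entered in the form
  .dockerconfigjson: eyJhdXRocyI6ey4uLn19
```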

View File

@ -16,13 +16,13 @@ This tutorial demonstrates how to create a Secret in KubeSphere.
## Prerequisites
You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspace, Project, Account and Role](../../../quick-start/create-workspace-and-project).
You need to create a workspace, a project and an account (`project-regular`). The account must be invited to the project with the role of `operator`. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../quick-start/create-workspace-and-project).
## Create a Secret
### Step 1: Open the dashboard
Log in the console as `project-regular`. Go to **Configurations** of a project, choose **Secrets** and click **Create**.
Log in to the console as `project-regular`. Go to **Configurations** of a project, choose **Secrets** and click **Create**.
![create-secrets](/images/docs/project-user-guide/configurations/secrets/create-secrets.jpg)
@ -124,7 +124,7 @@ This section shows how to create Secrets from your Docker Hub account and GitHub
### Create the Docker Hub Secret
1. Log in KubeSphere as `project-regular` and go to your project. Select **Secrets** from the navigation bar and click **Create** on the right.
1. Log in to KubeSphere as `project-regular` and go to your project. Select **Secrets** from the navigation bar and click **Create** on the right.
![secret-create](/images/docs/project-user-guide/configurations/secrets/secret-create.jpg)
@ -144,7 +144,7 @@ This section shows how to create Secrets from your Docker Hub account and GitHub
### Create the GitHub Secret
1. Log in KubeSphere as `project-regular` and go to your project. Select **Secrets** from the navigation bar and click **Create** on the right.
1. Log in to KubeSphere as `project-regular` and go to your project. Select **Secrets** from the navigation bar and click **Create** on the right.
![secret-create](/images/docs/project-user-guide/configurations/secrets/secret-create.jpg)

View File

@ -1,8 +1,7 @@
---
title: "Monitor MySQL"
keywords: 'monitoring, prometheus, prometheus operator'
keywords: 'monitoring, Prometheus, Prometheus operator'
description: 'Monitor MySQL'
linkTitle: "Monitor MySQL"
weight: 10812
---
@ -13,15 +12,15 @@ This tutorial walks you through an example of how to monitor MySQL metrics and v
## Prerequisites
- Please make sure you [enable the OpenPitrix system](https://kubesphere.io/docs/pluggable-components/app-store/). MySQL and MySQL exporter will be deployed from the App Store.
- You need to create a workspace, a project, and a user account for this tutorial. For more information, see [Create Workspace, Project, Account and Role](../../../../quick-start/create-workspace-and-project/). The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-operator` and work in the project `demo` in the workspace `demo-workspace`.
- You need to create a workspace, a project, and a user account for this tutorial. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../../quick-start/create-workspace-and-project/). The account needs to be a platform regular user and to be invited as the project operator with the `operator` role. In this tutorial, you log in as `project-operator` and work in the project `demo` in the workspace `demo-workspace`.
## Hands-on Lab
### Step 1: Deploy MySQL
To begin with, you [deploy MySQL from the App Store](../../../../application-store/built-in-apps/mysql-app/) and set the root password to `testing`. Please make sure you are landing on the **Overview** page of the project `demo`.
To begin with, you [deploy MySQL from the App Store](../../../../application-store/built-in-apps/mysql-app/) and set the root password to `testing`.
1. Go to **App Store**.
1. Go to the project `demo` and click **App Store** in the top left corner.
![go-to-app-store](/images/docs/project-user-guide/custom-application-monitoring/go-to-app-store.jpg)
@ -62,7 +61,7 @@ You need to deploy MySQL exporter in `demo` on the same cluster. MySQL exporter
![set-servicemonitor-to-true](/images/docs/project-user-guide/custom-application-monitoring/set-servicemonitor-to-true.jpg)
{{< notice warning >}}
Don't forget to enable the SericeMonitor CRD if you are using external exporter helm charts. Those charts usually disable ServiceMonitor by default and require manual modification.
Don't forget to enable the ServiceMonitor CRD if you are using external exporter Helm charts. Those charts usually disable ServiceMonitor by default and require manual modification.
{{</ notice >}}
4. Modify MySQL connection parameters. MySQL exporter needs to connect to the target MySQL. In this tutorial, MySQL is installed with the service name `mysql-a8xgvx`. Set `mysql.host` to `mysql-a8xgvx`, `mysql.pass` to `testing`, and `user` to `root` as below. Note that your MySQL service may be created with **a different name**.
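Putting the values in this step together, the exporter's settings would look roughly like the following Helm values (key names assumed from the chart used in this tutorial):

```yaml
mysql:
  host: mysql-a8xgvx     # the MySQL Service name; yours may differ
  user: root
  pass: testing
serviceMonitor:
  enabled: true          # required so the KubeSphere monitoring engine can discover the exporter
```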
@ -75,7 +74,7 @@ Don't forget to enable the SericeMonitor CRD if you are using external exporter
![exporter-is-running](/images/docs/project-user-guide/custom-application-monitoring/exporter-is-running.jpg)
### Step 3: Create Dashboard
### Step 3: Create a monitoring dashboard
After about two minutes, you can create a monitoring dashboard for MySQL and visualize metrics in real time.
@ -83,7 +82,7 @@ After about two minutes, you can create a monitoring dashboard for MySQL and vis
![navigate-to-custom-monitoring](/images/docs/project-user-guide/custom-application-monitoring/navigate-to-custom-monitoring.jpg)
2. In the dialog that appears, name the dashboard as `mysql-overview` and choose **MySQL template**. Click **Create** to continue.
2. In the dialog that appears, name the dashboard `mysql-overview` and choose **MySQL template**. Click **Create** to continue.
![create-mysql-dashboard](/images/docs/project-user-guide/custom-application-monitoring/create-mysql-dashboard.jpg)

View File

@ -1,9 +1,8 @@
---
title: "Monitor Sample Web"
title: "Monitor a Sample Web Application"
keywords: 'monitoring, Prometheus, Prometheus operator'
description: 'Monitor Sample Web'
linkTitle: "Monitor Sample Web"
description: 'Monitor a Sample Web Application'
linkTitle: "Monitor a Sample Web Application"
weight: 10813
---
@ -12,27 +11,27 @@ This section walks you through monitoring a sample web application. The applicat
## Prerequisites
- Please make sure you [enable the OpenPitrix system](../../../../pluggable-components/app-store/).
- You need to create a workspace, a project, and a user account for this tutorial. For more information, see [Create Workspace, Project, Account and Role](../../../../quick-start/create-workspace-and-project/). The account needs to be a platform regular user and to be invited as the workspace self provisioner with the `self-provisioner` role. Namely, create an account `workspace-self-provisioner` of the `self-provisioner` role, and use this account to create a project (e.g. `test`). In this tutorial, you log in as `workspace-self-provisioner` and work in the project `test` in the workspace `demo-workspace`.
- You need to create a workspace, a project, and a user account for this tutorial. For more information, see [Create Workspaces, Projects, Accounts and Roles](../../../../quick-start/create-workspace-and-project/). The account needs to be a platform regular user and to be invited as the workspace self provisioner with the `self-provisioner` role. Namely, create an account `workspace-self-provisioner` of the `self-provisioner` role, and use this account to create a project (e.g. `test`). In this tutorial, you log in as `workspace-self-provisioner` and work in the project `test` in the workspace `demo-workspace`.
- Knowledge of Helm Chart and [PromQL](https://prometheus.io/docs/prometheus/latest/querying/examples/).
- Knowledge of Helm charts and [PromQL](https://prometheus.io/docs/prometheus/latest/querying/examples/).
## Hands-on Lab
### Step 1: Prepare Sample Web Application Image
### Step 1: Prepare the image of a sample web application
First, prepare the sample web application image. The sample web application exposes a user-defined metric called `myapp_processed_ops_total`. It is a counter type metric that counts the number of operations that have been processed by far. The counter increases automatically by one every 2 seconds.
The sample web application exposes a user-defined metric called `myapp_processed_ops_total`. It is a counter-type metric that counts the number of operations that have been processed. The counter increases automatically by one every 2 seconds.
This sample application exposes application-specific metrics via the endpoint `http://localhost:2112/metrics`.
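Requesting that endpoint returns the metric in the Prometheus text exposition format, roughly as follows (the HELP text is assumed):

```
# HELP myapp_processed_ops_total The total number of processed operations
# TYPE myapp_processed_ops_total counter
myapp_processed_ops_total 42
```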
In this tutorial, you use the ready-made image `kubespheredev/promethues-example-app`. The source code can be found in [kubesphere/prometheus-example-app](https://github.com/kubesphere/prometheus-example-app). You can also follow [Instrument A Go Application For Prometheus](https://prometheus.io/docs/guides/go-application/) in the official documentation of Prometheus.
### Step 2: Pack the Application into a Helm Chart
### Step 2: Pack the application into a Helm chart
Pack the Deployment, Service, and ServiceMonitor YAML template into a helm chat for reuse. In the Deployment and Service template, you define sample web container and the port for the metrics endpoint. ServiceMonitor is a custom resource defined and used by Prometheus Operator. It connects your application and KubeSphere monitoring engine (Prometheus) so that the engine knows where and how to scrape metrics. In future releases, KubeSphere will provide a graphical user interface for easy operation.
Pack the Deployment, Service, and ServiceMonitor YAML templates into a Helm chart for reuse. In the Deployment and Service template, you define the sample web container and the port for the metrics endpoint. ServiceMonitor is a custom resource defined and used by Prometheus Operator. It connects your application and the KubeSphere monitoring engine (Prometheus) so that the engine knows where and how to scrape metrics. In future releases, KubeSphere will provide a graphical user interface for easy operation.
Find the source code in the folder `helm` in [kubesphere/prometheus-example-app](https://github.com/kubesphere/prometheus-example-app). The helm chart package is made ready and is named as `prometheus-example-app-0.1.0.tgz`. Please download the .tgz file and you will use it in the next step.
Find the source code in the folder `helm` in [kubesphere/prometheus-example-app](https://github.com/kubesphere/prometheus-example-app). The Helm chart package is ready-made and named `prometheus-example-app-0.1.0.tgz`. Download the .tgz file; you will use it in the next step.
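The key piece in that chart is the ServiceMonitor object. A minimal sketch, with assumed label and port names:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: prometheus-example-app
  labels:
    app: prometheus-example-app
spec:
  selector:
    matchLabels:
      app: prometheus-example-app   # must match the labels of the app's Service
  endpoints:
    - port: metrics                 # the named Service port serving /metrics
      interval: 1m
```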
### Step 3: Upload the Helm Chart
### Step 3: Upload the Helm chart
1. Go to the workspace **Overview** page of `demo-workspace` and navigate to **App Templates**.
@ -52,7 +51,7 @@ Find the source code in the folder `helm` in [kubesphere/prometheus-example-app]
![click-upload-app-template-6](/images/docs/project-user-guide/custom-application-monitoring/click-upload-app-template-6.jpg)
### Step 4: Deploy Sample Web Application
### Step 4: Deploy the sample web application
You need to deploy the sample web application into `demo`. For demonstration purposes, you can simply run a test deployment.
@ -80,7 +79,7 @@ You need to deploy the sample web application into `demo`. For demonstration pur
![create-dashboard-1](/images/docs/project-user-guide/custom-application-monitoring/create-dashboard-1.jpg)
### Step 5: Create Dashboard
### Step 5: Create a monitoring dashboard
This section guides you on how to create a dashboard from scratch. You will create a text chart showing the total number of processed operations and a line chart for displaying the operation rate.

View File

@ -2,28 +2,27 @@
title: "Introduction"
keywords: 'monitoring, Prometheus, Prometheus operator'
description: 'Introduction to KubeSphere custom application monitoring.'
linkTitle: "Introduction"
weight: 10810
---
Custom monitoring allows you to monitor and visualize custom application metrics in KubeSphere. The application can be either a third-party application, such as MySQL, Redis, and Elasticsearch, or your own applications. This section introduces the workflow of this feature.
Custom monitoring allows you to monitor and visualize custom application metrics in KubeSphere. The application can be either a third-party application, such as MySQL, Redis, and Elasticsearch, or your own application. This section introduces the workflow of this feature.
The KubeSphere monitoring engine is powered by Prometheus and Prometheus Operator. To integrate custom application metrics into KubeSphere, you generally need to go through the following steps.
- [Expose Prometheus-Formatted Metrics](#step-1-expose-prometheus-formatted-metrics) of your application.
- [Expose Prometheus-formatted metrics](#step-1-expose-prometheus-formatted-metrics) of your application.
- [Apply ServiceMonitor CRD](#step-2-apply-servicemonitor-crd) to hook your application up to the monitoring target.
- [Visualize Metrics](#step-3-visualize-metrics) to compose a dashboard for viewing the custom metrics trend.
- [Visualize metrics](#step-3-visualize-metrics) on the dashboard to view the trend of custom metrics.
### Step 1: Expose Prometheus-Formatted Metrics
### Step 1: Expose Prometheus-formatted metrics
First of all, your application must expose Prometheus-formatted metrics. Prometheus exposition format is the de-facto format in the realm of cloud-native monitoring. Prometheus uses a [text-based exposition format](https://prometheus.io/docs/instrumenting/exposition_formats/). Depending on your application and use case, there are two ways to expose metrics:
First of all, your application must expose Prometheus-formatted metrics. The Prometheus exposition format is the de-facto format in the realm of cloud-native monitoring. Prometheus uses a [text-based exposition format](https://prometheus.io/docs/instrumenting/exposition_formats/). Depending on your application and use case, there are two ways to expose metrics:
#### Direct exposing
Directly exposing Prometheus metrics from applications is common among cloud-native applications. It requires developers to import Prometheus client libraries in their code and expose metrics at a specific endpoint. Many applications, such as etcd, CoreDNS, and Istio, adopt this method.
The Prometheus community offers client libraries for most programming languages. Please find your language on the [Prometheus Client Libraries](https://prometheus.io/docs/instrumenting/clientlibs/) page. For Go developers, read [Instrumenting a Go application](https://prometheus.io/docs/guides/go-application/) to learn how to write a Prometheus-compliant application.
The Prometheus community offers client libraries for most programming languages. Find your language on the [Prometheus Client Libraries](https://prometheus.io/docs/instrumenting/clientlibs/) page. For Go developers, read [Instrumenting a Go application](https://prometheus.io/docs/guides/go-application/) to learn how to write a Prometheus-compliant application.
The [sample web application](../examples/monitor-sample-web) is an example demonstrating how an application exposes Prometheus-formatted metrics directly.
@ -31,7 +30,7 @@ The [sample web application](../examples/monitor-sample-web) is an example demon
If you don't want to modify your code or you cannot do so because the application is provided by a third party, you can deploy an exporter that serves as an agent to scrape metric data and translate it into the Prometheus format.
For most third-party applications, such as MySQL, the Prometheus community provides production-ready exporters. Please refer to [Exporters and Integrations](https://prometheus.io/docs/instrumenting/exporters/) for available exporters. In KubeSphere, it is recommended to [enable OpenPitrix](../../../pluggable-components/app-store/) and deploy exporters from the App Store. Exporters for MySQL, Elasticsearch, and Redis are all built-in items in the App Store.
For most third-party applications, such as MySQL, the Prometheus community provides production-ready exporters. Refer to [Exporters and Integrations](https://prometheus.io/docs/instrumenting/exporters/) for available exporters. In KubeSphere, it is recommended to [enable OpenPitrix](../../../pluggable-components/app-store/) and deploy exporters from the App Store. Exporters for MySQL, Elasticsearch, and Redis are all built-in apps in the App Store.
Please read [Monitor MySQL](../examples/monitor-mysql) to learn how to deploy a MySQL exporter and monitor MySQL metrics.
@ -43,9 +42,9 @@ In the previous step, you expose metric endpoints in a Kubernetes Service object
The ServiceMonitor CRD is defined by [Prometheus Operator](https://github.com/prometheus-operator/prometheus-operator). ServiceMonitor contains information about the metrics endpoints. With ServiceMonitor objects, the KubeSphere monitoring engine knows where and how to scrape metrics. For each monitoring target, you apply a ServiceMonitor object to hook your application (or exporters) up to KubeSphere.
In KubeSphere v3.0.0, you need to pack ServiceMonitor with your applications (or exporters) into a helm chart for reuse. In future releases, KubeSphere will provide graphical interfaces for easy operation.
In KubeSphere v3.0.0, you need to pack ServiceMonitor with your applications (or exporters) into a Helm chart for reuse. In future releases, KubeSphere will provide graphical interfaces for easy operation.
Please read [Monitor Sample Web Application](../examples/monitor-sample-web) to learn how to pack ServiceMonitor with your application.
Please read [Monitor a Sample Web Application](../examples/monitor-sample-web) to learn how to pack ServiceMonitor with your application.
### Step 3: Visualize metrics

View File

@ -2,14 +2,13 @@
title: "Overview"
keywords: 'monitoring, Prometheus, Prometheus operator'
description: 'Overview'
linkTitle: "Overview"
weight: 10815
---
This section introduces dashboard features. You will learn how to visualize metric data in KubeSphere for your custom applications. If you do not know how to integrate your application metrics into the KubeSphere monitoring system, read [Introduction](../../introduction) first.
## Create Dashboard
## Create a Monitoring Dashboard
To create new dashboards for your application metrics, navigate to **Custom Monitoring** on the project **Overview** page. There are three ways to create dashboards. For MySQL, Elasticsearch, and Redis, you can use built-in templates. These templates are for demonstration purposes and are updated with KubeSphere releases. Alternatively, you can customize dashboards from scratch.
@ -25,7 +24,7 @@ For a quickstart, KubeSphere provides built-in templates for MySQL, Elasticsearc
To start with a blank template, click **Create**.
### From YAML file
### From YAML
Switch to **Edit Mode** in the top right corner, and then paste your dashboard YAML files.
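The expected content is a dashboard definition as modeled by the [kubesphere/monitoring-dashboard](https://github.com/kubesphere/monitoring-dashboard) project. A rough, assumption-heavy sketch of such a file (check the gallery linked at the end of this page for the exact schema):

```yaml
apiVersion: monitoring.kubesphere.io/v1alpha1
kind: Dashboard
metadata:
  name: mysql-overview
spec:
  title: MySQL Overview
  panels:
    - title: Uptime                           # a single-stat panel
      type: singlestat
      targets:
        - expr: mysql_global_status_uptime    # PromQL query (assumed metric name)
```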
@ -59,20 +58,20 @@ You can view chart details in the right-most column. It shows the **max**, **min
![dashboard-layout-4](/images/docs/project-user-guide/custom-application-monitoring/dashboard-layout-4.jpg)
## Edit Dashboard
## Edit the Monitoring Dashboard
You can edit an existing template by clicking **Edit Template** in the top right corner.
### Add panel
### Add a panel
To add text charts, click the **add icon** in the left column. To add charts, click **Add Monitoring Item** in the bottom right corner.
![edit-dashboard](/images/docs/project-user-guide/custom-application-monitoring/edit-dashboard.jpg)
### Add group
### Add a group
To group monitoring items, drag an item into the target group. To add a new group, click **Add Monitoring Group**.
## Open Dashboard
## Dashboard Templates
Find and share dashboard templates in the [Monitoring Dashboards Gallery](https://github.com/kubesphere/monitoring-dashboard/tree/master/contrib/gallery), where KubeSphere community users contribute their own dashboards.

Some files were not shown because too many files have changed in this diff.