Update index of account management, project admin, app store, toolbox, faq and reference in cn

Signed-off-by: Sherlock113 <sherlockxu@yunify.com>
This commit is contained in:
Sherlock113 2020-12-08 16:01:04 +08:00
parent 585a00781d
commit ea3162f143
62 changed files with 207 additions and 554 deletions

View File

@ -4,5 +4,5 @@ keywords: 'kubernetes, kubesphere'
description: ''
weight: 2240
weight: 14410
---

View File

@ -4,5 +4,5 @@ keywords: 'kubernetes, kubesphere'
description: 'Helm Specification'
weight: 2240
weight: 14420
---

View File

@ -3,7 +3,7 @@ title: "Deploy etcd on KubeSphere"
keywords: 'Kubernetes, KubeSphere, etcd, app-store'
description: 'How to deploy etcd on KubeSphere.'
linkTitle: "Deploy etcd on KubeSphere"
weight: 2240
weight: 14210
---
Written in Go, [etcd](https://etcd.io/) is a distributed key-value store to store data that needs to be accessed by a distributed system or cluster of machines. In Kubernetes, it is the backend for service discovery and stores cluster states and configurations.

View File

@ -4,7 +4,7 @@ keywords: 'Kubernetes, KubeSphere, Harbor, app-store'
description: 'How to use built-in Harbor registry'
weight: 2242
weight: 14220
---
From the [Introduction](../../_index) section, you know that countless applications can be installed with Helm. [KubeSphere's App Store](https://charts.kubesphere.io/main/) also includes some popular applications.

View File

@ -3,7 +3,7 @@ title: "Deploy Memcached on KubeSphere"
keywords: 'Kubernetes, KubeSphere, Memcached, app-store'
description: 'How to deploy Memcached on KubeSphere through App Store'
linkTitle: "Deploy Memcached on KubeSphere"
weight: 2242
weight: 14230
---
[Memcached](https://memcached.org/) is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering. Its API is available for the majority of popular languages.

View File

@ -4,7 +4,7 @@ keywords: 'Kubernetes, KubeSphere, Minio, app-store'
description: 'How to deploy Minio on KubeSphere from the App Store of KubeSphere'
linkTitle: "Deploy MinIO on KubeSphere"
weight: 2242
weight: 14240
---
[MinIO](https://min.io/) object storage is designed for high performance and the S3 API. It is ideal for large, private cloud environments with stringent security requirements and delivers mission-critical availability across a diverse range of workloads.

View File

@ -3,7 +3,7 @@ title: "Deploy MongoDB on KubeSphere"
keywords: 'KubeSphere, Kubernetes, Installation, MongoDB'
description: 'How to deploy MongoDB from the App Store of KubeSphere'
linkTitle: "Deploy MongoDB on KubeSphere"
weight: 261
weight: 14250
---
[MongoDB](https://www.mongodb.com/) is a general purpose, document-based, distributed database built for modern application developers and for the cloud era.

View File

@ -4,7 +4,7 @@ keywords: 'KubeSphere, Kubernetes, Installation, MySQL'
description: 'How to deploy MySQL on KubeSphere through App Store'
linkTitle: "Deploy MySQL"
weight: 345
weight: 14260
---
[MySQL](https://www.mysql.com/) is an open-source relational database management system (RDBMS) that uses the most commonly used database management language, Structured Query Language (SQL). It provides a fully managed database service for deploying cloud-native applications using the world's most popular open-source database.

View File

@ -3,7 +3,7 @@ title: "Deploy NGINX on KubeSphere"
keywords: 'KubeSphere, Kubernetes, Installation, NGINX'
description: 'How to deploy NGINX from the App Store of KubeSphere'
linkTitle: "Deploy NGINX on KubeSphere"
weight: 261
weight: 14270
---
[NGINX](https://www.nginx.com/) is an open-source software application for web serving, reverse proxying, caching, load balancing, media streaming, and more.

View File

@ -3,7 +3,7 @@ title: "Deploy PostgreSQL on KubeSphere"
keywords: 'Kubernetes, KubeSphere, PostgreSQL, app-store'
description: 'How to deploy PostgreSQL from the App Store of KubeSphere'
linkTitle: "Deploy PostgreSQL on KubeSphere"
weight: 2242
weight: 14280
---
[PostgreSQL](https://www.postgresql.org/) is a powerful, open-source object-relational database system which is famous for reliability, feature robustness, and performance.

View File

@ -4,7 +4,7 @@ keywords: 'KubeSphere, RabbitMQ, Kubernetes, Installation'
description: 'How to deploy RabbitMQ on KubeSphere through App Store'
linkTitle: "Deploy RabbitMQ"
weight: 251
weight: 14290
---
[RabbitMQ](https://www.rabbitmq.com/) is the most widely deployed open-source message broker. It is lightweight and easy to deploy on premises and in the cloud. It supports multiple messaging protocols. RabbitMQ can be deployed in distributed and federated configurations to meet high-scale, high-availability requirements.

View File

@ -3,7 +3,7 @@ title: "Deploy Redis on KubeSphere"
keywords: 'KubeSphere, Kubernetes, Installation, Redis'
description: 'How to deploy Redis from the App Store of KubeSphere'
linkTitle: "Deploy Redis on KubeSphere"
weight: 261
weight: 14291
---
[Redis](https://redis.io/) is an open-source (BSD licensed), in-memory data structure store, used as a database, cache and message broker.

View File

@ -4,7 +4,7 @@ keywords: 'KubeSphere, Kubernetes, Installation, Tomcat'
description: 'How to deploy Tomcat on KubeSphere through App Store'
linkTitle: "Deploy Tomcat"
weight: 261
weight: 14292
---
[Apache Tomcat](https://tomcat.apache.org/index.html) powers numerous large-scale, mission-critical web applications across a diverse range of industries and organizations. Tomcat provides a pure Java HTTP web server environment in which Java code can run.

View File

@ -4,7 +4,7 @@ keywords: 'kubernetes, kubesphere, gitlab, app-store'
description: 'How to deploy GitLab'
weight: 2240
weight: 14310
---
## Objective

View File

@ -1,10 +1,10 @@
---
title: "Account Management and Access Control"
description: "Account Management and Access Control"
title: "Account Management and Access Control"
description: "Account Management and Access Control"
layout: "single"
linkTitle: "Account Management and Access Control"
weight: 13000
linkTitle: "Account Management and Access Control"
weight: 12000
icon: "/images/docs/docs.svg"

View File

@ -4,7 +4,7 @@ keywords: "kubernetes, kubesphere, multi-tenancy"
description: "Multi-tenancy in KubeSphere"
linkTitle: "Multi-tenancy in KubeSphere"
weight: 2230
weight: 12100
---

View File

@ -3,7 +3,7 @@ title: "OAuth2 Identity Provider"
keywords: 'kubernetes, kubesphere, OAuth2, Identity Provider'
description: 'OAuth2 Identity Provider'
weight: 2240
weight: 12200
---
## Overview

View File

@ -5,7 +5,7 @@ layout: "single"
linkTitle: "Reference"
weight: 18000
weight: 17000
icon: "/images/docs/docs.svg"

View File

@ -5,7 +5,7 @@ layout: "single"
linkTitle: "API Changes"
weight: 8200
weight: 17300
icon: "/images/docs/docs.svg"

View File

@ -4,12 +4,14 @@ keywords: 'Kubernetes, KubeSphere, API, Logging'
description: 'Logging'
weight: 250
weight: 17310
---
The API changes of the logging component in KubeSphere v3.0.0.
## Time format
The time format for query parameters must be in Unix timestamp, which is the number of seconds that have elapsed since the Unix epoch. Millisecond is no longer allowed. The change affects the parameters `start_time` and `end_time`.
The time format for query parameters must be a Unix timestamp, which is the number of seconds that have elapsed since the Unix epoch. Milliseconds are no longer allowed. The change affects the parameters `start_time` and `end_time`.
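For instance, whole-second timestamps for a one-hour query window can be computed as follows (a minimal Python sketch; the window endpoint is an arbitrary illustrative value, and the parameter names match the ones above):

```python
from datetime import datetime, timezone

# Build whole-second Unix timestamps for a one-hour window ending
# at 2020-12-08 08:00:00 UTC; fractional or millisecond values are rejected.
end_time = int(datetime(2020, 12, 8, 8, 0, tzinfo=timezone.utc).timestamp())
start_time = end_time - 3600

params = {"start_time": start_time, "end_time": end_time}
print(params)  # {'start_time': 1607410800, 'end_time': 1607414400}
```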
## Deprecated APIs

View File

@ -4,7 +4,7 @@ keywords: 'Kubernetes, KubeSphere, API, Monitoring'
description: 'Monitoring'
weight: 260
weight: 17320
---
## API Version
@ -13,7 +13,7 @@ The monitoring API version is bumped to `v1alpha3`.
## Time format
The time format for query parameters must be in Unix timestamp, which is the number of seconds that have elapsed since the Unix epoch. Decimal is no longer allowed. The change affects the parameters `start`, `end` and `time`.
The time format for query parameters must be a Unix timestamp, which is the number of seconds that have elapsed since the Unix epoch. Decimals are no longer allowed. The change affects the parameters `start`, `end` and `time`.
## Deprecated Metrics

View File

@ -4,20 +4,22 @@ keywords: 'Kubernetes, KubeSphere, API'
description: 'KubeSphere API documentation'
weight: 240
weight: 17100
---
In KubeSphere v3.0, we moved the functionalities of _ks-apigateway_ and _ks-account_ into _ks-apiserver_ to make the architecture more compact and straightforward. To use the KubeSphere API, you need to expose _ks-apiserver_ to your client.
## Expose KubeSphere API service
If you are going to access KubeSphere inside the cluster, you can skip the following section and just using the KubeSphere API server endpoint **`http://ks-apiserver.kubesphere-system.svc`**.
## Step 1: Expose KubeSphere API service
But if not, you need to expose the KubeSphere API server endpoint to the outside of the cluster first.
If you are going to access KubeSphere inside the cluster, you can skip the following section and just use the KubeSphere API server endpoint **`http://ks-apiserver.kubesphere-system.svc`**.
Otherwise, you need to expose the KubeSphere API server endpoint outside the cluster first.
There are many ways to expose a Kubernetes service. For simplicity, we use _NodePort_ in our case. Change service `ks-apiserver` type to NodePort by using the following command.
There are many ways to expose a Kubernetes service. For simplicity, we use _NodePort_ in our case. Change the service type of `ks-apiserver` to NodePort with the following command, and then you are done.
```bash
root@master:~# kubectl -n kubesphere-system patch service ks-apiserver -p '{"spec":{"type":"NodePort"}}'
root@master:~# kubectl -n kubesphere-system get svc
$ kubectl -n kubesphere-system patch service ks-apiserver -p '{"spec":{"type":"NodePort"}}'
$ kubectl -n kubesphere-system get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
etcd ClusterIP 10.233.34.220 <none> 2379/TCP 44d
ks-apiserver NodePort 10.233.15.31 <none> 80:31407/TCP 49d
@ -26,18 +28,26 @@ ks-console NodePort 10.233.3.45 <none> 80:30880/TCP
Now, you can access `ks-apiserver` outside the cluster through URL like `http://[node ip]:31407`, where `[node ip]` means IP of any node in your cluster.
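If you need the node port programmatically, it can be parsed from the `PORT(S)` column shown above (a small illustrative Python helper, not part of KubeSphere; the node IP is an example value):

```python
def node_port(ports: str) -> int:
    """Extract the node port from a kubectl PORT(S) value such as '80:31407/TCP'."""
    mapping, _, _ = ports.partition("/")   # drop the '/TCP' suffix
    return int(mapping.split(":")[1])      # take the port after the colon

# Build the external URL for ks-apiserver from the service listing above.
print(f"http://192.168.0.1:{node_port('80:31407/TCP')}")  # http://192.168.0.1:31407
```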
## Generate a token
There is one more thing to do before calling the API, authorization. Any clients that talk to the KubeSphere API server need to identify themselves first, only after successful authorization will the server respond to the call.
## Step 2: Generate a token
You need to identify yourself before making any call to the API server. Let's take a user `jeff` with password `P#$$w0rd` as an example. He needs to issue a request to generate a token like the following:
Let's say a user `jeff` with the password `P#$$w0rd` wants to generate a token. They can issue a request like the following:
```bash
root@master:~# curl -X POST -H 'Content-Type: application/x-www-form-urlencoded' \
curl -X POST -H 'Content-Type: application/x-www-form-urlencoded' \
'http://[node ip]:31407/oauth/token' \
--data-urlencode 'grant_type=password' \
--data-urlencode 'username=admin' \
--data-urlencode 'password=P#$$w0rd'
```
If the identity is correct, the server will response something like the following. `access_token` is the token what we need to access the KubeSphere API Server.
```
{{< notice note >}}
Please substitute `[node ip]` with the real IP address.
{{</ notice >}}
If the identity is correct, the server will respond with something like the following. `access_token` is the token used to access the KubeSphere API server.
```json
{
"access_token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6ImFkbWluIiwidWlkIjoiYTlhNjJmOTEtYWQ2Yi00MjRlLWIxNWEtZTFkOTcyNmUzNDFhIiwidG9rZW5fdHlwZSI6ImFjY2Vzc190b2tlbiIsImV4cCI6MTYwMDg1MjM5OCwiaWF0IjoxNjAwODQ1MTk4LCJpc3MiOiJrdWJlc3BoZXJlIiwibmJmIjoxNjAwODQ1MTk4fQ.Hcyf-CPMeq8XyQQLz5PO-oE1Rp1QVkOeV_5J2oX1hvU",
@ -46,15 +56,15 @@ If the identity is correct, the server will response something like the followin
"expires_in": 7200
}
```
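The `access_token` field can then be lifted out of the response and placed in an `Authorization` header for subsequent calls (a minimal sketch; the response body below is an abbreviated copy of the one above):

```python
import json

# Abbreviated copy of the token response shown above.
response_body = '{"access_token": "eyJhbGciOiJIUzI1NiIs...", "token_type": "Bearer", "expires_in": 7200}'

token = json.loads(response_body)["access_token"]
headers = {"Authorization": f"Bearer {token}"}
print(headers["Authorization"][:13])  # Bearer eyJhbG
```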
> **Note**: Please substitue `[node ip]:31407` with the real ip address.
## Make the call
## Step 3: Make the call
Now you got everything you need to access KubeSphere API server. Make the call using the access token acquired above as the following to get node list:
Now you have everything you need to access the API server. Make the call using the access token just acquired:
```bash
root@master1:~# curl -X GET -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6ImFkbWluIiwidWlkIjoiYTlhNjJmOTEtYWQ2Yi00MjRlLWIxNWEtZTFkOTcyNmUzNDFhIiwidG9rZW5fdHlwZSI6ImFjY2Vzc190b2tlbiIsImV4cCI6MTYwMDg1MjM5OCwiaWF0IjoxNjAwODQ1MTk4LCJpc3MiOiJrdWJlc3BoZXJlIiwibmJmIjoxNjAwODQ1MTk4fQ.Hcyf-CPMeq8XyQQLz5PO-oE1Rp1QVkOeV_5J2oX1hvU" \
$ curl -X GET -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ1c2VybmFtZSI6ImFkbWluIiwidWlkIjoiYTlhNjJmOTEtYWQ2Yi00MjRlLWIxNWEtZTFkOTcyNmUzNDFhIiwidG9rZW5fdHlwZSI6ImFjY2Vzc190b2tlbiIsImV4cCI6MTYwMDg1MjM5OCwiaWF0IjoxNjAwODQ1MTk4LCJpc3MiOiJrdWJlc3BoZXJlIiwibmJmIjoxNjAwODQ1MTk4fQ.Hcyf-CPMeq8XyQQLz5PO-oE1Rp1QVkOeV_5J2oX1hvU" \
-H 'Content-Type: application/json' \
'http://10.233.15.31/kapis/resources.kubesphere.io/v1alpha3/nodes'
'http://[node ip]:31407/kapis/resources.kubesphere.io/v1alpha3/nodes'
{
"items": [
@ -86,8 +96,15 @@ root@master1:~# curl -X GET -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cC
....
```
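The access token is a standard JWT, so its payload (user name, issuer, expiry) can be inspected without any KubeSphere-specific tooling. A short Python sketch using the example token from above:

```python
import base64
import json

# The example access token from the response above.
token = ("eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9."
         "eyJ1c2VybmFtZSI6ImFkbWluIiwidWlkIjoiYTlhNjJmOTEtYWQ2Yi00MjRlLWIxNWEtZTFkOTcyNmUzNDFhIiwi"
         "dG9rZW5fdHlwZSI6ImFjY2Vzc190b2tlbiIsImV4cCI6MTYwMDg1MjM5OCwiaWF0IjoxNjAwODQ1MTk4LCJpc3Mi"
         "OiJrdWJlc3BoZXJlIiwibmJmIjoxNjAwODQ1MTk4fQ."
         "Hcyf-CPMeq8XyQQLz5PO-oE1Rp1QVkOeV_5J2oX1hvU")

# Decode the middle (payload) segment; base64url needs '=' padding restored.
seg = token.split(".")[1]
payload = json.loads(base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4)))
print(payload["username"], payload["iss"], payload["exp"])
```

Note that `exp - iat` equals the `expires_in` value (7200 seconds) returned with the token.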
## API Reference
KubeSpehre API swagger json can be found in repo https://github.com/kubesphere/kubesphere/blob/master/api/
{{< notice note >}}
- KubeSphere specified API [swagger json](https://github.com/kubesphere/kubesphere/blob/master/api/ks-openapi-spec/swagger.json). It contains all the API that only applied to KubeSphere.
- KubeSphere specified CRD [swagger json](https://github.com/kubesphere/kubesphere/blob/master/api/openapi-spec/swagger.json). Contains all the generated CRD api documentation, it's same with Kubernetes api objects.
Please substitute `[node ip]` with the real IP address.
{{</ notice >}}
## API Reference
The KubeSphere API swagger JSON can be found in the repo https://github.com/kubesphere/kubesphere/tree/release-3.0/api
- KubeSphere-specific API [swagger json](https://github.com/kubesphere/kubesphere/blob/release-3.0/api/ks-openapi-spec/swagger.json). It contains all the APIs that only apply to KubeSphere.
- KubeSphere-specific CRD [swagger json](https://github.com/kubesphere/kubesphere/blob/release-3.0/api/openapi-spec/swagger.json). It contains all the generated CRD API documentation, which is consistent with Kubernetes API objects.

View File

@ -4,7 +4,7 @@ keywords: 'kubernetes, kubesphere, devops, docker, helm, jenkins, istio, prometh
description: 'KubeSphere Glossary documentation'
weight: 240
weight: 17200
---
This glossary includes technical terms that are specific to KubeSphere, as well as more general terms that provide useful context.

View File

@ -5,7 +5,7 @@ layout: "single"
linkTitle: "App Store"
weight: 15000
weight: 14000
icon: "/images/docs/docs.svg"

View File

@ -1,6 +1,6 @@
---
linkTitle: "Application Developer Guide"
weight: 2200
weight: 14400
_build:
render: false

View File

@ -1,224 +1,8 @@
---
title: "Air-Gapped Installation"
title: "Helm Developer Guide"
keywords: 'kubernetes, kubesphere, air gapped, installation'
description: 'How to install KubeSphere on air-gapped Linux machines'
description: ''
weight: 2240
---
The air-gapped installation is almost the same as the online installation, except that it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes in an air-gapped environment.
> Note: The dependencies in different operating systems may cause unexpected problems. If you encounter any installation problems in an air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
## Prerequisites
- If your machine is behind a firewall, you need to open the required ports; see [Ports Requirements](../port-firewall) for more information.
> - The installer uses `/var/lib/docker` as the default directory where all Docker-related files, including the images, are stored. We recommend adding extra storage with at least 100G mounted at `/var/lib/docker` and `/mnt/registry` respectively; see the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure supported persistent storage service](../storage-configuration) and prepare [high availability configuration](../master-ha) before installation.
- Since air-gapped machines cannot connect to an apt or yum source, please use a clean Linux machine to avoid this problem.
## Step 1: Prepare Linux Hosts
The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
- Time synchronization is required across all nodes, otherwise the installation may not succeed;
- For `Ubuntu 16.04` OS, it is recommended to select `16.04.5`;
- If you are using `Ubuntu 18.04`, you need to use the user `root`.
- Ensure your disk of each node is at least 100G.
- CPU and memory in total of all machines: 2 cores and 4 GB for minimal installation; 8 cores and 16 GB for complete installation.
The following section uses an example to introduce multi-node installation. This example shows a three-host installation, with the `master` host serving as the taskbox to execute the installation. The cluster consists of one Master and two Nodes.
> Note: KubeSphere supports the high-availability configuration of the Master and etcd nodes. Please refer to [Creating a High Availability KubeSphere Cluster](../master-ha) for guidance.
| Host IP | Host Name | Role |
| --- | --- | --- |
|192.168.0.1|master|master, etcd|
|192.168.0.2|node1|node|
|192.168.0.3|node2|node|
### Cluster Architecture
#### Single Master, Single Etcd, Two Nodes
![Architecture](/cluster-architecture.svg)
## Step 2: Download Installer Package
Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
```bash
curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
```
## Step 3: Configure Host Template
> This step is only for multi-node installation; you can skip it if you choose the all-in-one installation.
Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere as the root user. The following is an example configuration for `CentOS 7.5` using the root user. Note: do not manually wrap any line in the file.
> Note:
>
> - If you use a non-root user with sudo access to install KubeSphere, refer to the example block that is commented out in `conf/hosts.ini`.
> - If the `root` user of the taskbox machine cannot establish SSH connections with the rest of the machines, refer to the `non-root` user example at the top of `conf/hosts.ini`, but it is recommended to switch to the `root` user when executing `install.sh`.
> - master, node1, and node2 are the host names of the nodes, and all host names should be in lowercase.
### hosts.ini
```ini
[all]
master ansible_connection=local ip=192.168.0.1
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
[local-registry]
master
[kube-master]
master
[kube-node]
node1
node2
[etcd]
master
[k8s-cluster:children]
kube-node
kube-master
```
> Note:
>
> - You need to replace each node's information, such as the IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
> - The installer uses one node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
> - The "master" node also takes the roles of master and etcd, so "master" is filled in under the groups `[kube-master]` and `[etcd]` respectively.
> - "node1" and "node2" both serve the role of `Node`, so they are filled in under the group `[kube-node]`.
>
> Parameters Specification:
>
> - `ansible_connection`: Connection type to the host, "local" in the example above means local connection.
> - `ansible_host`: The name of the host to be connected.
> - `ip`: The ip of the host to be connected.
> - `ansible_user`: The default ssh user name to use.
> - `ansible_become_pass`: Allows you to set the privilege escalation password.
> - `ansible_ssh_pass`: The password of the host to be connected using root.
## Step 4: Enable All Components
> This step is for a complete installation. You can skip it if you choose a minimal installation.
Edit `conf/common.yaml` and change the following values to `true` (they are `false` by default).
```yaml
# LOGGING CONFIGURATION
# logging is an optional component when installing KubeSphere, and
# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
# Builtin logging only provides limited functions, so it is recommended to enable logging.
logging_enabled: true # Whether to install logging system
elasticsearch_master_replica: 1 # total number of master nodes; an even number is not allowed
elasticsearch_data_replica: 2 # total number of data nodes
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
elk_prefix: logstash # the string making up index names. The index name will be formatted as ks-<elk_prefix>-log
kibana_enabled: false # Whether to install built-in Kibana
#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address. KubeSphere supports integrating with Elasticsearch outside the cluster, which can reduce resource consumption.
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
#DevOps Configuration
devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image)
jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default
jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default
jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default
jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters
jenkinsJavaOpts_Xmx: 6g
jenkinsJavaOpts_MaxRAM: 8g
sonarqube_enabled: true # Whether to install built-in SonarQube
#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address. KubeSphere supports integrating with SonarQube outside the cluster, which can reduce resource consumption.
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
# The following components are all optional for KubeSphere,
# and can be turned on before installation or later by updating their values to true
openpitrix_enabled: true # KubeSphere application store
metrics_server_enabled: true # For KubeSphere HPA to use
servicemesh_enabled: true # KubeSphere service mesh system(Istio-based)
notification_enabled: true # KubeSphere notification system
alerting_enabled: true # KubeSphere alerting system
```
## Step 5: Install KubeSphere to Linux Machines
> Note:
>
> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
> - If you want to enable the pluggable components, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
> - The installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For a production environment, please [configure a supported persistent storage service](../storage-configuration) before installation.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
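The subnet rule above can be checked mechanically. A small illustrative Python check (not part of the installer), assuming the default subnets:

```python
import ipaddress

# Default subnets from conf/common.yaml (kube_service_addresses / kube_pods_subnet).
service_cidr = ipaddress.ip_network("10.233.0.0/18")
pod_cidr = ipaddress.ip_network("10.233.64.0/18")

def conflicts(node_ip: str) -> bool:
    """Return True if a node IP falls inside either default subnet."""
    ip = ipaddress.ip_address(node_ip)
    return ip in service_cidr or ip in pod_cidr

print(conflicts("192.168.0.1"))  # False: safe to use as a node IP
print(conflicts("10.233.70.5"))  # True: inside the default Pod subnet
```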
**1.** Enter the `scripts` folder and execute `install.sh` as the `root` user:
```bash
cd ../scripts
./install.sh
```
**2.** Type `2` to select the multi-node mode and start the installation. The installer will ask whether you have set up a persistent storage service. Just type `yes`, since we are going to use a local volume.
```bash
################################################
KubeSphere Installer Menu
################################################
* 1) All-in-one
* 2) Multi-node
* 3) Quit
################################################
https://kubesphere.io/ 2020-02-24
################################################
Please input an option: 2
```
**3.** Verify the multi-node installation.
**(1).** If "Successful" is returned after the `install.sh` process completes, then congratulations! You are ready to go.
```bash
successsful!
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.0.1:30880
Account: admin
Password: P@88w0rd
NOTE: Please modify the default password after login.
#####################################################
```
> Note: The information above is saved in a log file; you can view it by following the [guide](../verify-components).
**(2).** You can use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
<font color=red>Note: After logging in to the console, please verify the monitoring status of service components in "Cluster Status". If any service is not ready, please wait patiently until all components are up and running.</font>
![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
## Enable Pluggable Components
If you have already set up a minimal installation, you can still edit the ConfigMap of ks-installer using the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
```bash
kubectl edit cm -n kubesphere-system ks-installer
```
## FAQ
If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).
weight: 14410
---

View File

@ -1,224 +1,8 @@
---
title: "Air-Gapped Installation"
title: "Helm Specification"
keywords: 'kubernetes, kubesphere, air gapped, installation'
description: 'How to install KubeSphere on air-gapped Linux machines'
description: ''
weight: 2240
---
The air-gapped installation is almost the same as the online installation except it creates a local registry to host the Docker images. We will demonstrate how to install KubeSphere and Kubernetes on air-gapped environment.
> Note: The dependencies in different operating systems may cause upexpected problems. If you encounter any installation problems on air-gapped environment, please describe your OS information and error logs on [GitHub](https://github.com/kubesphere/kubesphere/issues).
## Prerequisites
- If your machine is behind a firewall, you need to open the ports by following the document [Ports Requirements](../port-firewall) for more information.
> - Installer will use `/var/lib/docker` as the default directory where all Docker related files, including the images, are stored. We recommend you to add additional storage to a disk with at least 100G mounted at `/var/lib/docker` and `/mnt/registry` respectively, use the [fdisk](https://www.computerhope.com/unix/fdisk.htm) command for reference.
- Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. It is convenient for testing and development. For production, please [configure supported persistent storage service](../storage-configuration) and prepare [high availability configuration](../master-ha) before installation.
- Since the air-gapped machines cannot connect to apt or yum source, please use clean Linux machine to avoid this problem.
## Step 1: Prepare Linux Hosts
The following describes the requirements of hardware and operating system. To get started with multi-node installation, you need to prepare at least `three` hosts according to the following requirements.
- Supported OSes: CentOS 7.4 ~ 7.7 (64-bit), Ubuntu 16.04.5/16.04.6/18.04.1/18.04.2/18.04.3 LTS (64-bit)
- Time synchronization is required across all nodes, otherwise the installation may not succeed;
- For `Ubuntu 16.04` OS, it is recommended to select `16.04.5`;
- If you are using `Ubuntu 18.04`, you need to use the user `root`.
- Ensure your disk of each node is at least 100G.
- CPU and memory in total of all machines: 2 cores and 4 GB for minimal installation; 8 cores and 16 GB for complete installation.
The following section describes an example to introduce multi-node installation. This example shows three hosts installation by taking the `master` serving as the taskbox to execute the installation. The following cluster consists of one Master and two Nodes.
> Note: KubeSphere supports the high-availability configuration of the Masters and Etcd nodes. Please refer to [Creating High Availability KubeSphere Cluster](../master-ha) for guide.
| Host IP | Host Name | Role |
| --- | --- | --- |
|192.168.0.1|master|master, etcd|
|192.168.0.2|node1|node|
|192.168.0.3|node2|node|
### Cluster Architecture
#### Single Master, Single Etcd, Two Nodes
![Architecture](/cluster-architecture.svg)
## Step 2: Download Installer Package
Download `KubeSphere 2.1.1` to your taskbox machine, then unpack it and go to the folder `conf`.
```bash
curl -L https://kubesphere.io/download/offline/latest > kubesphere-all-offline-v2.1.1.tar.gz \
&& tar -zxf kubesphere-all-offline-v2.1.1.tar.gz && cd kubesphere-all-offline-v2.1.1/conf
```
## Step 3: Configure Host Template
> This step is only for multi-node installation, you can skip this step if you choose all-in-one installation.
Please refer to the following sample to configure all hosts in `hosts.ini`. It is recommended to install KubeSphere using root user. The following is an example configuration for `CentOS 7.5` using root user. Note do not manually wrap any line in the file.
> Note:
>
> - If you use non-root user with sudo access to install KubeSphere, you need to refer to the example block that is commented out in `conf/hosts.ini`.
> - If the `root` user of that taskbox machine cannot establish SSH connection with the rest of machines, you need to refer to the `non-root` user example at the top of the `conf/hosts.ini`, but it is recommended to switch `root` user when executing `install.sh`.
> - master, node1 and node2 are the host names of each node and all host names should be in lowercase.
### hosts.ini
```ini
[all]
master ansible_connection=local ip=192.168.0.1
node1 ansible_host=192.168.0.2 ip=192.168.0.2 ansible_ssh_pass=PASSWORD
node2 ansible_host=192.168.0.3 ip=192.168.0.3 ansible_ssh_pass=PASSWORD
[local-registry]
master
[kube-master]
master
[kube-node]
node1
node2
[etcd]
master
[k8s-cluster:children]
kube-node
kube-master
```
> Note:
>
> - You need to replace each node's information, such as IP and password, with real values in the group `[all]`. The master node is the taskbox, so you do not need to add a password field for it.
> - The installer will use a node as the local registry for Docker images; it defaults to "master" in the group `[local-registry]`.
> - The "master" node also takes the roles of master and etcd, so "master" is filled under the groups `[kube-master]` and `[etcd]` respectively.
> - "node1" and "node2" both serve the role of `Node`, so they are filled under the group `[kube-node]`.
>
> Parameters Specification:
>
> - `ansible_connection`: Connection type to the host; "local" in the example above means a local connection.
> - `ansible_host`: The name of the host to connect to.
> - `ip`: The IP address of the host to connect to.
> - `ansible_user`: The default SSH user name to use.
> - `ansible_become_pass`: Allows you to set the privilege escalation password.
> - `ansible_ssh_pass`: The `root` password of the host to connect to.
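As a quick sanity check of the inventory before installing, the host names in any group can be listed with a short `awk` helper (a sketch; `list_group_hosts` is an illustrative name, not part of the installer):

```shell
# Print the first field (the host name) of every entry in the given
# group of an Ansible-style inventory file such as conf/hosts.ini.
list_group_hosts() {  # usage: list_group_hosts <group> <inventory-file>
  awk -v g="[$1]" '$0 == g {f=1; next} /^\[/ {f=0} f && NF {print $1}' "$2"
}

# list_group_hosts all conf/hosts.ini   # prints master, node1, node2 (one per line)
```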
## Step 4: Enable All Components
> This step is for a complete installation. You can skip it if you choose a minimal installation.
Edit `conf/common.yaml` with reference to the following changes, where the values that are `false` by default are set to `true`.
```yaml
# LOGGING CONFIGURATION
# logging is an optional component when installing KubeSphere, and
# Kubernetes builtin logging APIs will be used if logging_enabled is set to false.
# Built-in logging only provides limited functions, so it is recommended to enable logging.
logging_enabled: true # Whether to install logging system
elasticsearch_master_replica: 1 # total number of master nodes; an even number is not allowed
elasticsearch_data_replica: 2 # total number of data nodes
elasticsearch_volume_size: 20Gi # Elasticsearch volume size
log_max_age: 7 # Log retention time in built-in Elasticsearch, it is 7 days by default.
elk_prefix: logstash # the string making up index names. The index name will be formatted as ks-<elk_prefix>-log
kibana_enabled: false # Whether to install built-in Kibana
#external_es_url: SHOULD_BE_REPLACED # External Elasticsearch address; KubeSphere supports integrating with an Elasticsearch deployment outside the cluster, which can reduce resource consumption.
#external_es_port: SHOULD_BE_REPLACED # External Elasticsearch service port
#DevOps Configuration
devops_enabled: true # Whether to install built-in DevOps system (Supports CI/CD pipeline, Source/Binary to image)
jenkins_memory_lim: 8Gi # Jenkins memory limit, it is 8 Gi by default
jenkins_memory_req: 4Gi # Jenkins memory request, it is 4 Gi by default
jenkins_volume_size: 8Gi # Jenkins volume size, it is 8 Gi by default
jenkinsJavaOpts_Xms: 3g # Following three are JVM parameters
jenkinsJavaOpts_Xmx: 6g
jenkinsJavaOpts_MaxRAM: 8g
sonarqube_enabled: true # Whether to install built-in SonarQube
#sonar_server_url: SHOULD_BE_REPLACED # External SonarQube address; KubeSphere supports integrating with a SonarQube instance outside the cluster, which can reduce resource consumption.
#sonar_server_token: SHOULD_BE_REPLACED # SonarQube token
# The following components are all optional for KubeSphere,
# and can be turned on before installation, or later by updating their values to true
openpitrix_enabled: true # KubeSphere application store
metrics_server_enabled: true # For KubeSphere HPA to use
servicemesh_enabled: true # KubeSphere service mesh system (Istio-based)
notification_enabled: true # KubeSphere notification system
alerting_enabled: true # KubeSphere alerting system
```
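The edits above can also be scripted. A minimal sketch, assuming each flag sits on its own top-level `key: false` line in `conf/common.yaml` as in the excerpt above (`enable_components` is an illustrative helper, not part of the installer):

```shell
# Flip the listed component flags from false to true in a common.yaml file.
enable_components() {  # usage: enable_components <path-to-common.yaml>
  local f=$1 key
  for key in logging_enabled devops_enabled sonarqube_enabled openpitrix_enabled \
             metrics_server_enabled servicemesh_enabled notification_enabled alerting_enabled; do
    sed -i "s/^${key}: false/${key}: true/" "$f"
  done
}

# enable_components conf/common.yaml
```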
## Step 5: Install KubeSphere to Linux Machines
> Note:
>
> - Generally, you can install KubeSphere without any modification; it will start with a minimal installation by default.
> - If you want to enable pluggable feature components installation, modify common.yaml and refer to [Enable Pluggable Components Installation](../pluggable-components) for instructions.
> - Installer uses [Local volume](https://kubernetes.io/docs/concepts/storage/volumes/#local) based on [OpenEBS](https://openebs.io/) to provide storage service with dynamic provisioning. For a production environment, please [configure a supported persistent storage service](../storage-configuration) before installation.
> - Since the default subnet for Cluster IPs is 10.233.0.0/18 and the default subnet for Pod IPs is 10.233.64.0/18, the node IPs must not fall within these two ranges. You can modify the default subnets `kube_service_addresses` or `kube_pods_subnet` in the file `conf/common.yaml` to avoid conflicts.
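To verify that a planned node IP does not overlap the default Service or Pod subnets, the check can be sketched in pure bash arithmetic (the helper names are illustrative, not part of the installer):

```shell
# Convert a dotted IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# Succeed when the given address falls inside the given CIDR block.
ip_in_cidr() {  # usage: ip_in_cidr <ip> <cidr>
  local ip net bits mask
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "${2%/*}")
  bits=${2#*/}
  mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

# A node IP must be outside both default subnets:
for cidr in 10.233.0.0/18 10.233.64.0/18; do
  ip_in_cidr 192.168.0.2 "$cidr" && echo "conflict with $cidr" || echo "ok against $cidr"
done
```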
**1.** Enter the `scripts` folder, and execute `install.sh` as the `root` user:
```bash
cd ../scripts
./install.sh
```
**2.** Type `2` to select the multi-node mode and start the installation. The installer will ask whether you have set up a persistent storage service. Type `yes`, since we are going to use local volumes.
```bash
################################################
KubeSphere Installer Menu
################################################
* 1) All-in-one
* 2) Multi-node
* 3) Quit
################################################
https://kubesphere.io/ 2020-02-24
################################################
Please input an option: 2
```
**3.** Verify the multi-node installation
**(1).** If "successful" is returned after the `install.sh` process completes, congratulations! You are ready to go.
```bash
successsful!
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.0.1:30880
Account: admin
Password: P@88w0rd
NOTE: Please modify the default password after login.
#####################################################
```
> Note: The information above is saved in a log file that you can view by following the [guide](../verify-components).
**(2).** You will be able to use the default account and password `admin / P@88w0rd` to log in to the console at `http://{$IP}:30880` and take a tour of KubeSphere. Please change the default password after logging in.
![Login](https://pek3b.qingstor.com/kubesphere-docs/png/20191017172215.png)
<font color=red>Note: After logging in to the console, please verify the monitoring status of service components in "Cluster Status". If any service is not ready, please wait patiently until all components are up and running.</font>
![Landing Page](https://pek3b.qingstor.com/kubesphere-docs/png/20191125003158.png)
## Enable Pluggable Components
If you have already set up a minimal installation, you can still edit the ConfigMap of ks-installer using the following command. Please make sure there are enough resources on your machines; see [Pluggable Components Overview](/en/installation/pluggable-components/).
```bash
kubectl edit cm -n kubesphere-system ks-installer
```
## FAQ
If you have further questions, please do not hesitate to raise issues on [GitHub](https://github.com/kubesphere/kubesphere/issues).

View File

@ -1,9 +1,9 @@
---
title: "应用程序生命周期管理"
keywords: 'Kubernetes, KubeSphere, app-store'
description: 'App Lifecycle Management'
keywords: 'Kubernetes, KubeSphere, 应用商店'
description: '应用程序生命周期管理'
linkTitle: '应用程序生命周期管理'
weight: 2240
weight: 14100
---
KubeSphere integrates [OpenPitrix](https://github.com/openpitrix/openpitrix), an open-source multi-cloud application management platform, to set up the App Store, managing applications throughout their entire lifecycle. The App Store supports two kinds of application deployment:

View File

@ -1,6 +1,6 @@
---
linkTitle: "内置应用"
weight: 2200
weight: 14200
_build:
render: false

View File

@ -3,7 +3,7 @@ title: "Deploy etcd on KubeSphere"
keywords: 'Kubernetes, KubeSphere, etcd, app-store'
description: 'How to deploy etcd on KubeSphere.'
linkTitle: "Deploy etcd on KubeSphere"
weight: 2240
weight: 14210
---
Written in Go, [etcd](https://etcd.io/) is a distributed key-value store to store data that needs to be accessed by a distributed system or cluster of machines. In Kubernetes, it is the backend for service discovery and stores cluster states and configurations.

View File

@ -4,7 +4,7 @@ keywords: 'Kubernetes, KubeSphere, Harbor, app-store'
description: 'How to use built-in Harbor registry'
weight: 2242
weight: 14220
---
From the [Introduction](../../_index) section, you know that countless applications can be installed by Helm. [KubeSphere's App Store](https://charts.kubesphere.io/main/) also includes some popular applications.

View File

@ -3,7 +3,7 @@ title: "Deploy Memcached on KubeSphere"
keywords: 'Kubernetes, KubeSphere, Memcached, app-store'
description: 'How to deploy Memcached on KubeSphere through App Store'
linkTitle: "Deploy Memcached on KubeSphere"
weight: 2242
weight: 14230
---
[Memcached](https://memcached.org/) is an in-memory key-value store for small chunks of arbitrary data (strings, objects) from results of database calls, API calls, or page rendering. Its API is available for the majority of popular languages.

View File

@ -4,7 +4,7 @@ keywords: 'Kubernetes, KubeSphere, Minio, app-store'
description: 'How to deploy Minio on KubeSphere from the App Store of KubeSphere'
linkTitle: "Deploy MinIO on KubeSphere"
weight: 2242
weight: 14240
---
[MinIO](https://min.io/) object storage is designed for high performance and the S3 API. It is ideal for large, private cloud environments with stringent security requirements and delivers mission-critical availability across a diverse range of workloads.

View File

@ -4,7 +4,7 @@ keywords: 'KubeSphere, Kubernetes, Installation, MySQL'
description: 'How to deploy MySQL on KubeSphere through App Store'
link title: "Deploy MySQL"
weight: 345
weight: 14260
---
[MySQL](https://www.mysql.com/) is an open-source relational database management system (RDBMS), which uses the most commonly used database management language - Structured Query Language (SQL) for database management. It provides a fully managed database service to deploy cloud-native applications using the worlds most popular open-source database.

View File

@ -3,7 +3,7 @@ title: "Deploy PostgreSQL on KubeSphere"
keywords: 'Kubernetes, KubeSphere, PostgreSQL, app-store'
description: 'How to deploy PostgreSQL from the App Store of KubeSphere'
linkTitle: "Deploy PostgreSQL on KubeSphere"
weight: 2242
weight: 14280
---
[PostgreSQL](https://www.postgresql.org/) is a powerful, open-source object-relational database system which is famous for reliability, feature robustness, and performance.

View File

@ -4,7 +4,7 @@ keywords: 'KubeSphere, RabbitMQ, Kubernetes, Installation'
description: 'How to deploy RabbitMQ on KubeSphere through App Store'
link title: "Deploy RabbitMQ"
weight: 251
weight: 14290
---
[RabbitMQ](https://www.rabbitmq.com/) is the most widely deployed open-source message broker. It is lightweight and easy to deploy on premises and in the cloud. It supports multiple messaging protocols. RabbitMQ can be deployed in distributed and federated configurations to meet high-scale, high-availability requirements.

View File

@ -4,7 +4,7 @@ keywords: 'KubeSphere, Kubernetes, Installation, Tomcat'
description: 'How to deploy Tomcat on KubeSphere through App Store'
link title: "Deploy Tomcat"
weight: 261
weight: 14292
---
[Apache Tomcat](https://tomcat.apache.org/index.html) powers numerous large-scale, mission-critical web applications across a diverse range of industries and organizations. Tomcat provides a pure Java HTTP web server environment in which Java code can run.

View File

@ -4,7 +4,7 @@ keywords: 'Nginx应用商店OpenPitrixKubernetes'
description: '使用 KubeSphere 应用商店部署 Nginx 至 Kubernetes'
weight: 2240
weight: 14270
---
本文介绍在 **kubesphere** 中通过应用商店部署 **Nginx** 的操作步骤。

View File

@ -4,7 +4,7 @@ keywords: 'redis应用商店OpenPitrixKubernetes'
description: '使用 KubeSphere 应用商店部署 Redis 至 Kubernetes'
weight: 2240
weight: 14291
---
本文介绍在 **kubesphere** 中通过应用商店部署 **Redis** 的操作步骤。

View File

@ -4,7 +4,7 @@ keywords: 'mongodb应用商店OpenPitrixKubernetes'
description: '使用 KubeSphere 应用商店部署 MongoDB 至 Kubernetes'
weight: 2240
weight: 14250
---
本文介绍在 **kubesphere** 中通过应用商店部署 **MongoDB** 的操作步骤。

View File

@ -1,6 +1,6 @@
---
linkTitle: "External Applications"
weight: 2200
weight: 14300
_build:
render: false

View File

@ -4,7 +4,7 @@ keywords: 'kubernetes, kubesphere, gitlab, app-store'
description: 'How to deploy GitLab'
weight: 2240
weight: 14310
---
## Objective

View File

@ -4,7 +4,7 @@ description: "FAQ is designed to answer and summarize the questions users ask mo
layout: "single"
linkTitle: "FAQ"
weight: 17000
weight: 16000
icon: "/images/docs/docs.svg"
---

View File

@ -3,7 +3,7 @@ title: "Bring your own Prometheus"
keywords: "Monitoring, Prometheus, node-exporter, kube-state-metrics, KubeSphere, Kubernetes"
description: "Use your own Prometheus stack for KubeSphere monitoring"
Weight: 7100
Weight: 16100
---
KubeSphere comes with several pre-installed customized monitoring components including Prometheus Operator, Prometheus, Alertmanager, Grafana (optional), various service monitors, node-exporter, and kube-state-metrics. These components might already exist before you install KubeSphere; it is possible to use your own Prometheus stack setup in KubeSphere v3.0.0.
@ -25,21 +25,18 @@ To use your own Prometheus stack setup, the steps are as below:
You can uninstall KubeSphere customized Prometheus stack as below:
```bash
# Enter ks-installer pod
kubectl -n kubesphere-system exec -it `kubectl -n kubesphere-system get pod|grep ks-installer|awk '{print $1}'` -- /bin/sh
# Execute the following commands inside ks-installer pod to uninstall, pls ignore errors like below:
# Error from server (NotFound): error when deleting "/kubesphere/kubesphere/prometheus/xx/xxx.yaml": xxx "xxx" not found
kubectl delete -f /kubesphere/kubesphere/prometheus/alertmanager/
kubectl delete -f /kubesphere/kubesphere/prometheus/devops/
kubectl delete -f /kubesphere/kubesphere/prometheus/etcd/
kubectl delete -f /kubesphere/kubesphere/prometheus/grafana/
kubectl delete -f /kubesphere/kubesphere/prometheus/kube-state-metrics/
kubectl delete -f /kubesphere/kubesphere/prometheus/node-exporter/
kubectl delete -f /kubesphere/kubesphere/prometheus/upgrade/
kubectl delete -f /kubesphere/kubesphere/prometheus/prometheus-rules-v1.16\+.yaml
kubectl delete -f /kubesphere/kubesphere/prometheus/prometheus-rules.yaml
kubectl delete -f /kubesphere/kubesphere/prometheus/prometheus
kubectl delete -f /kubesphere/kubesphere/prometheus/init/
# Execute the following commands to uninstall:
kubectl -n kubesphere-system exec $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -- kubectl delete -f /kubesphere/kubesphere/prometheus/alertmanager/ 2>/dev/null
kubectl -n kubesphere-system exec $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -- kubectl delete -f /kubesphere/kubesphere/prometheus/devops/ 2>/dev/null
kubectl -n kubesphere-system exec $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -- kubectl delete -f /kubesphere/kubesphere/prometheus/etcd/ 2>/dev/null
kubectl -n kubesphere-system exec $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -- kubectl delete -f /kubesphere/kubesphere/prometheus/grafana/ 2>/dev/null
kubectl -n kubesphere-system exec $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -- kubectl delete -f /kubesphere/kubesphere/prometheus/kube-state-metrics/ 2>/dev/null
kubectl -n kubesphere-system exec $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -- kubectl delete -f /kubesphere/kubesphere/prometheus/node-exporter/ 2>/dev/null
kubectl -n kubesphere-system exec $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -- kubectl delete -f /kubesphere/kubesphere/prometheus/upgrade/ 2>/dev/null
kubectl -n kubesphere-system exec $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -- kubectl delete -f /kubesphere/kubesphere/prometheus/prometheus-rules-v1.16\+.yaml 2>/dev/null
kubectl -n kubesphere-system exec $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -- kubectl delete -f /kubesphere/kubesphere/prometheus/prometheus-rules.yaml 2>/dev/null
kubectl -n kubesphere-system exec $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -- kubectl delete -f /kubesphere/kubesphere/prometheus/prometheus 2>/dev/null
kubectl -n kubesphere-system exec $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -- kubectl delete -f /kubesphere/kubesphere/prometheus/init/ 2>/dev/null
# Delete pvc Prometheus used
kubectl -n kubesphere-monitoring-system delete pvc `kubectl -n kubesphere-monitoring-system get pvc | grep -v VOLUME | awk '{print $1}' | tr '\n' ' '`
```
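The repeated per-directory deletes above can be collapsed into a loop. A sketch using the same manifest paths (`ks_prom_paths` is an illustrative helper; the `kubectl` part is guarded so the path list can be inspected on its own):

```shell
# Print the manifest paths KubeSphere's customized Prometheus stack was installed from.
ks_prom_paths() {
  for d in alertmanager/ devops/ etcd/ grafana/ kube-state-metrics/ node-exporter/ upgrade/ \
           'prometheus-rules-v1.16+.yaml' prometheus-rules.yaml prometheus init/; do
    echo "/kubesphere/kubesphere/prometheus/$d"
  done
}

if command -v kubectl >/dev/null 2>&1; then
  POD=$(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}')
  for f in $(ks_prom_paths); do
    kubectl -n kubesphere-system exec "$POD" -- kubectl delete -f "$f" 2>/dev/null
  done
fi
```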
@ -48,13 +45,23 @@ kubectl -n kubesphere-monitoring-system delete pvc `kubectl -n kubesphere-monito
{{< notice note >}}
KubeSphere 3.0 was certified to work well with Prometheus Operator **v0.38.3+**, Prometheus **v2.20.1+**, Alertmanager **v0.21.0+**,kube-state-metrics **v1.9.6**, node-exporter **v0.18.1**, so please be aware that your Prometheus stack components' version meets these version requirements especially node-exporter and kube-state-metrics.
KubeSphere 3.0.0 was certified to work well with the following Prometheus stack components:
If you've already had a Prometheus stack up and running, you can skip this step.
- Prometheus Operator **v0.38.3+**
- Prometheus **v2.20.1+**
- Alertmanager **v0.21.0+**
- kube-state-metrics **v1.9.6**
- node-exporter **v0.18.1**
Please make sure that your Prometheus stack components' versions meet these requirements, especially `node-exporter` and `kube-state-metrics`.
**Make sure to install `node-exporter` and `kube-state-metrics` if only `Prometheus Operator` and `Prometheus` were installed. These two components are required for KubeSphere to work properly.**
**If you've already had the entire Prometheus stack up and running, you can skip this step.**
{{</ notice >}}
Promethes stack can be installed in many ways, the following steps show how to install using `kube-prometheus`.
The Prometheus stack can be installed in many ways; the following steps show how to install it using **upstream `kube-prometheus`** into the namespace `monitoring`.
```bash
# Get kube-prometheus version v0.6.0 whose node-exporter's version v0.18.1 matches the one KubeSphere v3.0.0 used
@ -68,6 +75,7 @@ rm -rf manifests/prometheus-adapter-*.yaml
# Change kube-state-metrics to the same version v1.9.6 as KubeSphere v3.0.0 used
sed -i 's/v1.9.5/v1.9.6/g' manifests/kube-state-metrics-deployment.yaml
# Install Prometheus, Alertmanager, Grafana, kube-state-metrics, node-exporter
# You can only install kube-state-metrics or node-exporter by only applying yaml files kube-state-metrics-*.yaml or node-exporter-*.yaml
kubectl apply -f manifests/
```
@ -75,7 +83,7 @@ kubectl apply -f manifests/
{{< notice note >}}
KubeSphere 3.0 uses Prometheus Operator to manage Prometheus/Alertmanager config and lifecycle, ServiceMonitor (to manage scrape config), PrometheusRule (to manage Prometheus recording/alert rules).
KubeSphere 3.0.0 uses Prometheus Operator to manage Prometheus/Alertmanager config and lifecycle, ServiceMonitor (to manage scrape config), PrometheusRule (to manage Prometheus recording/alert rules).
There are a few items listed in [KubeSphere kustomization](https://github.com/kubesphere/kube-prometheus/blob/ks-v3.0/kustomize/kustomization.yaml), among which `prometheus-rules.yaml` and `prometheus-rulesEtcd.yaml` are required for KubeSphere v3.0.0 to work properly, while the others are optional. You can remove `alertmanager-secret.yaml` if you don't want your existing Alertmanager's config to be overwritten. You can remove `xxx-serviceMonitor.yaml` if you don't want your own `ServiceMonitors` to be overwritten (KubeSphere customized ServiceMonitors discard many irrelevant metrics to make sure Prometheus only stores the most useful metrics).
@ -91,16 +99,18 @@ If your Prometheus stack setup isn't managed by Prometheus Operator, you can ski
# Get KubeSphere v3.0.0 customized kube-prometheus
cd ~ && mkdir kubesphere && cd kubesphere && git clone https://github.com/kubesphere/kube-prometheus.git && cd kube-prometheus/kustomize
# Change to your own namespace in which Prometheus stack is deployed
sed -i 's/my-namespace/<replace-me-with-new-ns>/g' kustomization.yaml
# For example, use 'monitoring' if you installed Prometheus into the monitoring namespace following Step 2.
sed -i 's/my-namespace/<your own namespace>/g' kustomization.yaml
# Apply KubeSphere customized stuff including Promethues rules, Alertmanager config, various ServiceMonitors.
kubectl apply -k .
# Setup service for kube-scheduler and kube-controller-manager metrics exposure
kubectl apply -f https://raw.githubusercontent.com/kubesphere/kube-prometheus/ks-v3.0/kustomize/prometheus-serviceKubeScheduler.yaml
kubectl apply -f https://raw.githubusercontent.com/kubesphere/kube-prometheus/ks-v3.0/kustomize/prometheus-serviceKubeControllerManager.yaml
# Find Prometheus CR which is usually k8s
kubectl -n monitoring get prometheus
# Set Prometheus rule evaluation interval to 1m to be consistent with KubeSphere v3.0.0 customized ServiceMonitor, rule evaluation interval should be greater or equal to scrape interval.
kubectl -n monitoring patch prometheus k8s --patch '{
kubectl apply -f ./prometheus-serviceKubeScheduler.yaml
kubectl apply -f ./prometheus-serviceKubeControllerManager.yaml
# Find Prometheus CR which is usually k8s in your own namespace
kubectl -n <your own namespace> get prometheus
# Set Prometheus rule evaluation interval to 1m to be consistent with KubeSphere v3.0.0 customized ServiceMonitor
# The rule evaluation interval should be greater than or equal to the scrape interval.
kubectl -n <your own namespace> patch prometheus k8s --patch '{
"spec": {
"evaluationInterval": "1m"
}
@ -125,4 +135,10 @@ Change monitoring endpoint to your own Prometheus:
endpoint: http://prometheus-operated.monitoring.svc:9090
```
Restart KubeSphere APIServer by running `kubectl -n kubesphere-system rollout restart deployment/ks-apiserver`
Restart KubeSphere APIServer by running `kubectl -n kubesphere-system rollout restart deployment/ks-apiserver`
{{< notice warning >}}
If you enable or disable KubeSphere pluggable components following [this guide](https://kubesphere.io/docs/pluggable-components/overview/), the `monitoring endpoint` will be reset to the original one, and you will have to change it to the new one and restart KubeSphere APIServer again.
{{</ notice >}}

View File

@ -3,7 +3,7 @@ title: "Questions about KubeSphere Console"
keywords: "FAQ, console, KubeSphere, Kubernetes"
description: "FAQ is designed to answer and summarize the questions our users most frequently ask about KubeSphere Console."
Weight: 7200
Weight: 16200
---
**What kind of browsers does KubeSphere support?**

View File

@ -0,0 +1,50 @@
---
title: "Edit System Resources on the Console"
keywords: "system, resources, KubeSphere, Kubernetes"
description: "How to edit system resources on the console."
linkTitle: 'Edit System Resources on the Console'
Weight: 16300
---
When you install KubeSphere, the workspace `system-workspace` is created where all KubeSphere system projects and Kubernetes system projects run. To avoid any misoperation on both systems, you are not allowed to edit resources in the workspace directly on the console. However, you can still make adjustments to resources using `kubectl`.
This tutorial demonstrates how to edit `system-workspace` resources on the console directly.
{{< notice warning >}}
Editing resources in `system-workspace` may cause unexpected results, such as KubeSphere system and node failures, and your business may be affected. Please be extremely careful about the operation.
{{</ notice >}}
## Edit the Console Configuration
1. Log in to KubeSphere as `admin`. Click the hammer icon in the bottom-right corner and select **Kubectl**.
2. Execute the following command:
```bash
kubectl -n kubesphere-system edit cm ks-console-config
```
3. Add the `systemWorkspace` field under `client` and save the file.
```yaml
client:
  version:
    kubesphere: v3.0.0
    kubernetes: v1.17.9
    openpitrix: v0.3.5
  enableKubeConfig: true
  systemWorkspace: "$" # Add this line manually.
```
4. Redeploy `ks-console` by executing the following command and wait for Pods to be recreated.
```bash
kubectl -n kubesphere-system rollout restart deployment ks-console
```
5. Refresh the KubeSphere console and you can see that the editing buttons for projects in `system-workspace` appear.
6. If you want to disable the editing function on the console, delete the field `systemWorkspace` by following the same steps above.

View File

@ -3,7 +3,7 @@ title: "Telemetry in KubeSphere"
keywords: "Installer, Telemetry, KubeSphere, Kubernetes"
description: "Telemetry collects aggregate information of KubeSphere installation."
Weight: 7300
Weight: 16400
---
Telemetry collects aggregate information about the size of KubeSphere clusters installed, KubeSphere and Kubernetes versions, components enabled, cluster running time, error logs, etc. KubeSphere promises that the information is only used by the KubeSphere community to improve products and will not be shared with any third parties.

View File

@ -4,7 +4,7 @@ description: "Help you to better manage KubeSphere projects"
layout: "single"
linkTitle: "Project Administration"
weight: 14000
weight: 13000
icon: "/images/docs/docs.svg"

View File

@ -4,7 +4,7 @@ keywords: 'KubeSphere, Kubernetes, project, multicluster-project'
description: 'This tutorial introduces projects and multi-cluster projects.'
linkTitle: "Projects and Multi-cluster Projects"
weight: 2100
weight: 13100
---
A project in KubeSphere is a Kubernetes [namespace](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/), which is used to organize resources into non-overlapping groups. It represents a logical partitioning capability as it divides cluster resources between multiple tenants.

View File

@ -4,7 +4,7 @@ keywords: 'KubeSphere, kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Project Gateway'
linkTitle: "Project Gateway"
weight: 2130
weight: 13500
---
TBD

View File

@ -4,7 +4,7 @@ keywords: 'KubeSphere, kubernetes, Calico, Network Policy'
description: 'Project Network Isolation'
linkTitle: "Project Network Isolation"
weight: 2130
weight: 13300
---
KubeSphere project network isolation lets project administrators enforce which network traffic is allowed using rules.

View File

@ -4,7 +4,7 @@ keywords: 'kubernetes, docker, helm, jenkins, istio, prometheus'
description: 'Project Quotas'
linkTitle: "Project Quotas"
weight: 2110
weight: 13400
---
TBD

View File

@ -4,7 +4,7 @@ keywords: 'KubeSphere, Kubernetes, role, member, management, project'
description: 'Role and Member Management in a Project'
linkTitle: "Role and Member Management"
weight: 2130
weight: 13200
---
This guide demonstrates how to manage roles and members in your project. For more information about KubeSphere roles, see Overview of Role Management.

View File

@ -5,7 +5,7 @@ layout: "single"
linkTitle: "Toolbox"
weight: 16000
weight: 15000
icon: "/images/docs/docs.svg"
---

View File

@ -1,6 +1,6 @@
---
linkTitle: "Auditing"
weight: 5510
weight: 15300
_build:
render: false

View File

@ -4,7 +4,7 @@ keywords: "Kubernetes, KubeSphere, auditing, log, query"
description: "How to perform queries of auditing logs in KubeSphere."
linkTitle: "Auditing Log Query"
weight: 4914
weight: 15330
---
KubeSphere supports the query of auditing logs among isolated tenants. In this tutorial, you will learn how to use the query function, including the interface, search parameters and detail pages.

View File

@ -4,7 +4,7 @@ keywords: "Kubernetes, KubeSphere, auditing, log, customize, receive"
description: "How to receive and customize KubeSphere and Kubernetes auditing logs."
linkTitle: "Receive and Customize Auditing Logs"
weight: 4910
weight: 15310
---
KubeSphere Auditing Logs provide a security-relevant chronological set of records documenting the sequence of activities that have affected the system by individual users, administrators, or other components of the system. Each request to KubeSphere generates an event that is then written to a webhook and processed according to a certain rule. The event will be ignored, stored, or generate an alert based on different rules.

View File

@ -4,7 +4,7 @@ keywords: "Kubernetes, docker, kubesphere, auditing"
description: "Kubernetes and KubeSphere operation auditing"
linkTitle: "Auditing Rule"
weight: 4912
weight: 15320
---
An auditing rule defines the policy for processing auditing logs. KubeSphere Auditing Logs provide users with two CRD rules (`archiving-rule` and `alerting-rule`) for customization.

View File

@ -4,7 +4,7 @@ keywords: 'KubeSphere, Kubernetes, Event, Query'
description: 'How to perform event query in KubeSphere.'
linkTitle: "Event Query"
weight: 4900
weight: 15200
---
## Objective

View File

@ -4,7 +4,7 @@ keywords: 'KubeSphere, Kubernetes, history'
description: 'Use browser history from toolbox'
linkTitle: "History"
weight: 5520
weight: 15500
---
When you work in multiple workspaces or projects, your web browser will record the latest paths you visited. You can check your history using F1, Win+K, or Command+K, which helps you quickly switch between the resources you access.

View File

@ -4,7 +4,7 @@ keywords: 'KubeSphere, Kubernetes, log'
description: 'Query Kubernetes logs from toolbox'
linkTitle: "Log Query"
weight: 4190
weight: 15100
---
The logs of applications and systems can help you better understand what is happening inside your cluster and workloads. The logs are particularly useful for debugging problems and monitoring cluster activities. KubeSphere provides a powerful and easy-to-use logging system which offers users the capabilities of log collection, query and management from the perspective of tenants. The tenant-based logging system is much more useful than Kibana since different tenants can only view their own logs, leading to better security. Moreover, KubeSphere logging system filters out some redundant information so that tenants can only focus on logs that are useful to them.

View File

@ -4,7 +4,7 @@ keywords: 'KubeSphere, Kubernetes, kubectl, cli'
description: 'Use kubectl from toolbox'
linkTitle: "Web Kubectl"
weight: 5515
weight: 15400
---
The Kubernetes command-line tool, kubectl, allows you to run commands on Kubernetes clusters. You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs.