Merge pull request #197 from Sherlock113/evnet

Add enabling events in components
pengfei 2020-09-01 23:41:19 +08:00 committed by GitHub
commit f713515ec9
2 changed files with 202 additions and 92 deletions


@@ -0,0 +1,202 @@
---
title: "KubeSphere Events"
keywords: "Kubernetes, events, KubeSphere, k8s-events"
description: "How to enable KubeSphere Events"
linkTitle: "KubeSphere Events"
weight: 3530
---
## What are KubeSphere Events
KubeSphere events allow users to keep track of what is happening inside a cluster, such as node scheduling status and image pulling results. Events are accurately recorded, with the specific reason, status and message displayed in the web console. To query events, users can quickly launch the web Toolkit and enter related information in the search bar, with different filters (e.g. keyword and project) available. Events can also be archived to third-party tools, such as Elasticsearch, Kafka or Fluentd.
For more information, see Logging, Events and Auditing.
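KubeSphere Events builds on the event mechanism that Kubernetes itself provides. As a quick point of reference, you can inspect the raw Kubernetes events with `kubectl` (assuming you have command-line access to the cluster); KubeSphere collects and indexes these events so they can be searched and archived from the console:
```bash
# List recent events in a namespace, sorted by time (raw Kubernetes events).
kubectl get events -n default --sort-by=.lastTimestamp

# Show the events recorded for a specific Pod, such as scheduling and image pulls.
kubectl describe pod <pod-name> -n default
```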
## Enable Events before Installation
### Installing on Linux
When you install KubeSphere on Linux, you need to create a configuration file, which lists all KubeSphere components.
1. In the tutorial [Installing KubeSphere on Linux](https://kubesphere.io/docs/installing-on-linux/introduction/multioverview/), you create a default file **config-sample.yaml** (a typical KubeKey command for generating it is sketched after the note below). Open the file for editing with the following command:
```bash
vi config-sample.yaml
```
{{< notice note >}}
If you adopt [All-in-one Installation](https://kubesphere.io/docs/quick-start/all-in-one-on-linux/), you do not need to create a config-sample.yaml file as you can create a cluster directly. Generally, the all-in-one mode is for users who are new to KubeSphere and want to get familiar with the system. If you want to enable Events in this mode (e.g. for testing purposes), refer to the following section to see how Events can be enabled after installation.
{{</ notice >}}
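If you have not generated **config-sample.yaml** yet, the linked tutorial explains how to create it with KubeKey. A typical invocation looks roughly like the following; the version flags are only illustrative and depend on the releases you intend to install:
```bash
# Generate a default configuration file with KubeKey (versions shown are examples).
./kk create config --with-kubernetes v1.18.6 --with-kubesphere v3.0.0 -f config-sample.yaml
```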
2. In this file, navigate to `events` and change `false` to `true` for `enabled`. Save the file after you finish.
```yaml
events:
  enabled: true # Change "false" to "true"
```
{{< notice note >}}
By default, KubeKey will install Elasticsearch internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in **config-sample.yaml** if you want to enable Events, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. If you provide this information before installation, KubeKey will integrate with your external Elasticsearch directly instead of installing an internal one. A filled-in example follows the code block below.
{{</ notice >}}
```yaml
es: # Storage backend for logging, tracing, events and auditing.
  elasticsearchMasterReplicas: 1      # Total number of master nodes. Even numbers are not allowed.
  elasticsearchDataReplicas: 1        # Total number of data nodes.
  elasticsearchMasterVolumeSize: 4Gi  # Volume size of Elasticsearch master nodes.
  elasticsearchDataVolumeSize: 20Gi   # Volume size of Elasticsearch data nodes.
  logMaxAge: 7                        # Log retention period in days for the built-in Elasticsearch (7 by default).
  elkPrefix: logstash                 # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
  externalElasticsearchUrl:           # The URL of the external Elasticsearch.
  externalElasticsearchPort:          # The port of the external Elasticsearch.
```
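For reference, here is a minimal sketch of how the external Elasticsearch fields might be filled in; the address and port below are placeholders, not real endpoints:
```yaml
es:
  externalElasticsearchUrl: 192.168.0.100   # Placeholder: address of your own Elasticsearch
  externalElasticsearchPort: 9200           # Placeholder: port of your own Elasticsearch
```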
3. Create a cluster using the configuration file:
```bash
./kk create cluster -f config-sample.yaml
```
### Installing on Kubernetes
When you install KubeSphere on Kubernetes, you need to download the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) to configure the cluster. If you want to enable Events, do not apply this file directly with `kubectl apply -f`; edit it first as described below.
1. In the tutorial [Installing KubeSphere on Kubernetes](https://kubesphere.io/docs/installing-on-kubernetes/introduction/overview/), you first execute `kubectl apply -f` for the file [kubesphere-installer.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/kubesphere-installer.yaml). After that, to enable Events, create a local file named cluster-configuration.yaml:
```bash
vi cluster-configuration.yaml
```
2. Copy all the content of the file [cluster-configuration.yaml](https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml) and paste it into the local file you just created. Alternatively, you can download the file directly, as shown below.
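For example, assuming `curl` is available on the machine where you run `kubectl`, the following command downloads the file in one step:
```bash
# Download the default cluster configuration for local editing (requires curl).
curl -L -o cluster-configuration.yaml https://raw.githubusercontent.com/kubesphere/ks-installer/master/deploy/cluster-configuration.yaml
```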
3. In this local cluster-configuration.yaml file, navigate to `events` and enable Events by changing `false` to `true` for `enabled`. Save the file after you finish.
```yaml
events:
  enabled: true # Change "false" to "true"
```
{{< notice note >}}
By default, ks-installer will install Elasticsearch internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in **cluster-configuration.yaml** if you want to enable Events, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. If you provide this information before installation, ks-installer will integrate with your external Elasticsearch directly instead of installing an internal one.
{{</ notice >}}
```yaml
es: # Storage backend for logging, tracing, events and auditing.
  elasticsearchMasterReplicas: 1      # Total number of master nodes. Even numbers are not allowed.
  elasticsearchDataReplicas: 1        # Total number of data nodes.
  elasticsearchMasterVolumeSize: 4Gi  # Volume size of Elasticsearch master nodes.
  elasticsearchDataVolumeSize: 20Gi   # Volume size of Elasticsearch data nodes.
  logMaxAge: 7                        # Log retention period in days for the built-in Elasticsearch (7 by default).
  elkPrefix: logstash                 # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
  externalElasticsearchUrl:           # The URL of the external Elasticsearch.
  externalElasticsearchPort:          # The port of the external Elasticsearch.
```
4. Execute the following command to start installation:
```bash
kubectl apply -f cluster-configuration.yaml
```
## Enable Events after Installation
1. Log in to the console as `admin`. Click **Platform** at the top left corner and select **Clusters Management**.
![clusters-management](https://ap3.qingstor.com/kubesphere-website/docs/20200828111130.png)
2. Click **CRDs** and enter `clusterconfiguration` in the search bar. Click the result to view its detail page.
{{< notice info >}}
A Custom Resource Definition (CRD) allows users to create a new type of resource without adding another API server. Users can work with these resources in the same way as any other native Kubernetes object.
{{</ notice >}}
3. In **Resource List**, click the three dots on the right of `ks-installer` and select **Edit YAML**.
![edit-yaml](https://ap3.qingstor.com/kubesphere-website/docs/20200827182002.png)
4. In this YAML file, navigate to `events` and change `false` to `true` for `enabled`. After you finish, click **Update** at the bottom right corner to save the configuration. (A command-line alternative to this console-based procedure is sketched after these steps.)
```yaml
events:
  enabled: true # Change "false" to "true"
```
{{< notice note >}}
By default, Elasticsearch will be installed internally if Events is enabled. For a production environment, it is highly recommended that you set the following values in this YAML file if you want to enable Events, especially `externalElasticsearchUrl` and `externalElasticsearchPort`. If you provide this information, KubeSphere will integrate with your external Elasticsearch directly instead of installing an internal one.
{{</ notice >}}
```yaml
es: # Storage backend for logging, tracing, events and auditing.
  elasticsearchMasterReplicas: 1      # Total number of master nodes. Even numbers are not allowed.
  elasticsearchDataReplicas: 1        # Total number of data nodes.
  elasticsearchMasterVolumeSize: 4Gi  # Volume size of Elasticsearch master nodes.
  elasticsearchDataVolumeSize: 20Gi   # Volume size of Elasticsearch data nodes.
  logMaxAge: 7                        # Log retention period in days for the built-in Elasticsearch (7 by default).
  elkPrefix: logstash                 # The string making up index names. The index name will be formatted as ks-<elk_prefix>-log.
  externalElasticsearchUrl:           # The URL of the external Elasticsearch.
  externalElasticsearchPort:          # The port of the external Elasticsearch.
```
5. You can use the web kubectl to check the installation progress by executing the following command:
```bash
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
```
{{< notice tip >}}
You can find the web kubectl tool by clicking the hammer icon at the bottom right corner of the console.
{{</ notice >}}
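If you prefer the command line to the web console, steps 1 to 4 above can be replaced by editing the same ClusterConfiguration object directly with kubectl. A roughly equivalent approach, assuming kubectl access to the cluster, looks like this:
```bash
# Open the ks-installer ClusterConfiguration in your default editor,
# then set events.enabled to true and save to trigger the update.
kubectl edit clusterconfiguration ks-installer -n kubesphere-system
```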
## Verify the Installation of the Component
{{< tabs >}}
{{< tab "Verify the Component in Dashboard" >}}
If you enable both Logging and Events, you can check the status of Events under **Logging** in **Components**. The page may look as follows:
![events](https://ap3.qingstor.com/kubesphere-website/docs/events.png)
If you enable Events without installing Logging, you will not see the page above, as the **Logging** button does not display.
{{</ tab >}}
{{< tab "Verify the Component through kubectl" >}}
Execute the following command to check the status of pods:
```bash
kubectl get pod -n kubesphere-logging-system
```
The output may look as follows if the component runs successfully:
```bash
NAME READY STATUS RESTARTS AGE
elasticsearch-logging-data-0 1/1 Running 0 11m
elasticsearch-logging-data-1 1/1 Running 0 6m48s
elasticsearch-logging-discovery-0 1/1 Running 0 11m
fluent-bit-ljlsl 1/1 Running 0 6m30s
fluentbit-operator-5bf7687b88-85vxv 1/1 Running 0 11m
ks-events-exporter-5cb959c74b-rc4lm 2/2 Running 0 7m1s
ks-events-operator-7d46fcccc9-8vvsh 1/1 Running 0 10m
ks-events-ruler-97f756879-lg65t 2/2 Running 0 7m1s
ks-events-ruler-97f756879-ptbkr 2/2 Running 0 7m1s
```
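If Logging is enabled as well, the namespace also contains logging-related Pods. To narrow the output down to the Events components only, you can filter by name, for example:
```bash
# Show only the Events-related Pods in the logging namespace.
kubectl get pod -n kubesphere-logging-system | grep ks-events
```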
{{</ tab >}}
{{</ tabs >}}


@@ -1,92 +0,0 @@
---
title: "KubeSphere Events System"
keywords: "kubernetes, events, kubesphere, k8s-events"
description: "How to enable KubeSphere events system"
linkTitle: "KubeSphere Events System"
weight: 700
---
KubeSphere 2.0.0 was released on **May 18th, 2019**.
## What's New in 2.0.0
### Component Upgrades
- Support [Kubernetes 1.13.5](https://github.com/kubernetes/kubernetes/releases/tag/v1.13.5).
- Integrate the [QingCloud Cloud Controller](https://github.com/yunify/qingcloud-cloud-controller-manager). After the controller is installed, QingCloud load balancers can be created through the KubeSphere console, and backend workloads are bound automatically.
- Integrate the [QingStor CSI v0.3.0](https://github.com/yunify/qingstor-csi/tree/v0.3.0) storage plugin and support the physical NeonSAN storage system, providing SAN storage services with high availability and high performance.
- Integrate the [QingCloud CSI v0.2.1](https://github.com/yunify/qingcloud-csi/tree/v0.2.1) storage plugin and support multiple volume types for creating QingCloud block storage services.
- Harbor is upgraded to 1.7.5.
- GitLab is upgraded to 11.8.1.
- Prometheus is upgraded to 2.5.0.
### Microservice Governance
- Integrate Istio 1.1.1 and support visualization of service mesh management.
- Enable external access to projects and application traffic governance.
- Provide built-in sample microservice [Bookinfo Application](https://istio.io/docs/examples/bookinfo/).
- Support traffic governance.
- Support traffic mirroring.
- Provide Istio-based load balancing for microservices.
- Support canary release.
- Enable blue-green deployment.
- Enable circuit breaking.
- Enable microservice tracing.
### DevOps (CI/CD Pipeline)
- CI/CD pipelines provide email notifications, including notifications during builds.
- Enhance graphical editing of CI/CD pipelines, with more common plugins and execution conditions supported.
- Provide source code vulnerability scanning based on SonarQube 7.4.
- Support [Source to Image](https://github.com/kubesphere/s2ioperator) feature.
### Monitoring
- Provide independent monitoring pages for Kubernetes components, including etcd, kube-apiserver and kube-scheduler.
- Optimize several monitoring algorithms.
- Optimize monitoring resource usage, reducing Prometheus storage and disk usage by up to 80%.
### Logging
- Provide a unified, tenant-aware log console.
- Support exact and fuzzy log search.
- Support real-time and history logs.
- Support combined log queries based on namespace, workload, Pod, container, keywords and time range.
- Provide a detail page for individual logs, with switching between Pods and containers.
- [FluentBit Operator](https://github.com/kubesphere/fluentbit-operator) supports log collection settings: Elasticsearch, Kafka and Fluentd can be added, activated or disabled as log collectors. Filtering conditions can be configured so that only the needed logs are sent to collectors.
### Alerting and Notifications
- Email notifications are available for cluster nodes and workload resources. 
- Notification rules can combine multiple monitoring metrics; different alert levels, detection cycles, push times and thresholds can be configured.
- Notification times and notifiers can be set.
- Enable repeat-notification rules for different alert levels.
### Security Enhancement
- Fix RunC Container Escape Vulnerability [Runc container breakout](https://log.qingcloud.com/archives/5127)
- Fix Alpine Docker image vulnerability [Alpine container shadow breakout](https://www.alpinelinux.org/posts/Docker-image-vulnerability-CVE-2019-5021.html)
- Support configuration items for single and multiple concurrent logins.
- A verification code is required after multiple failed login attempts.
- Enhance the password policy and prevent weak passwords.
- Other security enhancements.
### Interface Optimization
- Optimize multiple aspects of the console user experience, such as switching between DevOps projects and other projects.
- Optimize many Chinese and English web pages.
### Others
- Support Etcd backup and recovery.
- Support regular cleanup of Docker images.
## Bug Fixes
- Fix delayed updates on resource and deletion pages.
- Fix dirty data left behind after deleting an HPA workload.
- Fix incorrect Job status display.
- Correct the algorithms for resource quota, Pod usage and storage metrics.
- Adjust CPU usage percentages.
- Many more bug fixes.