Merge pull request #489 from rayzhou2017/master

format log connections and mail server
This commit is contained in:
KubeSphere CI Bot 2020-11-09 09:45:45 +08:00 committed by GitHub
commit 369237420f
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
5 changed files with 122 additions and 122 deletions


@ -15,23 +15,25 @@ Before adding a log receiver, you need to enable any of the `logging`, `events`
1. To add a log receiver:
- Log in to KubeSphere with an account granted the ***platform-admin*** role
- Click ***Platform*** -> ***Clusters Management***
- Select a cluster if multiple clusters exist
- Click ***Cluster Settings*** -> ***Log Collections***
- Click ***Add Log Collector*** to add a log receiver
![Add receiver](/images/docs/cluster-administration/cluster-settings/log-collections/add-receiver.png)
2. Choose ***Elasticsearch*** and fill in the Elasticsearch service address and port as below:
![Add Elasticsearch](/images/docs/cluster-administration/cluster-settings/log-collections/add-es.png)
3. Elasticsearch appears in the receiver list on the ***Log Collections*** page and its status becomes ***Collecting***.
![Receiver List](/images/docs/cluster-administration/cluster-settings/log-collections/receiver-list.png)
4. Verify whether Elasticsearch is receiving logs sent from Fluent Bit:
- Click ***Log Search*** in the ***Toolbox*** in the bottom-right corner.
- You can search logs in the logging console that appears.
You can read [Log Query](../../../../toolbox/log-query/) to learn how to use the tool.
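Besides the console, you can check from a shell whether Fluent Bit has created log indices in Elasticsearch. This is only a sketch: the service address below is a hypothetical in-cluster endpoint, and the index prefix `ks-logstash-log` is an assumed default — substitute the values from your own receiver configuration.

```shell
# Hypothetical Elasticsearch endpoint; replace with the address you entered for the receiver.
ES_HOST="elasticsearch-logging-data.kubesphere-logging-system.svc:9200"
# List indices matching the assumed KubeSphere log index prefix; non-empty output
# with growing docs.count indicates logs are arriving.
curl -s "http://${ES_HOST}/_cat/indices/ks-logstash-log*?v"
```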


@ -125,31 +125,30 @@ EOF
1. To add a log receiver:
- Log in to KubeSphere with an account granted the ***platform-admin*** role
- Click ***Platform*** -> ***Clusters Management***
- Select a cluster if multiple clusters exist
- Click ***Cluster Settings*** -> ***Log Collections***
- Click ***Add Log Collector*** to add a log receiver
![Add receiver](/images/docs/cluster-administration/cluster-settings/log-collections/add-receiver.png)
2. Choose ***Fluentd*** and fill in the Fluentd service address and port as below:
![Add Fluentd](/images/docs/cluster-administration/cluster-settings/log-collections/add-fluentd.png)
3. Fluentd appears in the receiver list on the ***Log Collections*** page and its status shows ***Collecting***.
![Receiver List](/images/docs/cluster-administration/cluster-settings/log-collections/receiver-list.png)
4. Verify whether Fluentd is receiving logs sent from Fluent Bit:
- Click ***Application Workloads*** in the ***Cluster Management*** UI.
- Select ***Workloads*** and then select the `default` namespace on the ***Workload*** - ***Deployments*** tab.
- Click the ***fluentd*** item and then click the ***fluentd-xxxxxxxxx-xxxxx*** pod.
- Click the ***fluentd*** container.
- On the ***fluentd*** container page, select the ***Container Logs*** tab.
You will see logs scrolling up continuously.
![Container Logs](/images/docs/cluster-administration/cluster-settings/log-collections/container-logs.png)
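The same check can be done from the command line instead of the web console. A minimal sketch, assuming the Fluentd Deployment is named `fluentd` and lives in the `default` namespace as in the steps above:

```shell
# Tail the last 20 lines from the fluentd Deployment's pods; continuously
# scrolling output indicates Fluent Bit is forwarding logs successfully.
kubectl -n default logs deployment/fluentd --tail=20
```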


@ -10,8 +10,8 @@ KubeSphere supports using Elasticsearch, Kafka and Fluentd as log receivers.
This doc will demonstrate:
- Deploy [strimzi-kafka-operator](https://github.com/strimzi/strimzi-kafka-operator) and then create a Kafka cluster and a Kafka topic by creating `Kafka` and `KafkaTopic` CRDs.
- Add a Kafka log receiver to receive logs sent from Fluent Bit.
- Verify whether the Kafka cluster is receiving logs using [Kafkacat](https://github.com/edenhill/kafkacat).
## Prerequisites
@ -29,105 +29,104 @@ You can use [strimzi-kafka-operator](https://github.com/strimzi/strimzi-kafka-op
1. Install [strimzi-kafka-operator](https://github.com/strimzi/strimzi-kafka-operator) to the `default` namespace:
```bash
helm repo add strimzi https://strimzi.io/charts/
helm install --name kafka-operator -n default strimzi/strimzi-kafka-operator
```
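After the installation, it is worth checking that the operator pod is up before creating the Kafka cluster. A sketch; the label selector `name=strimzi-cluster-operator` is an assumption based on the Strimzi chart and may differ across chart versions:

```shell
# The operator pod should report STATUS Running before you apply Kafka CRDs.
kubectl -n default get pods -l name=strimzi-cluster-operator
```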
2. Create a Kafka cluster and a Kafka topic in the `default` namespace:
To deploy a Kafka cluster and create a Kafka topic, open the ***kubectl*** console in the ***KubeSphere Toolbox*** and run the following command:
{{< notice note >}}
The following creates Kafka and ZooKeeper clusters with storage type `ephemeral` (i.e. `emptyDir`) for demonstration purposes. For production, use other storage types; please refer to [kafka-persistent](https://github.com/strimzi/strimzi-kafka-operator/blob/0.19.0/examples/kafka/kafka-persistent.yaml).
{{</ notice >}}
```yaml
cat <<EOF | kubectl apply -f -
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
  namespace: default
spec:
  kafka:
    version: 2.5.0
    replicas: 3
    listeners:
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      log.message.format.version: '2.5'
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
---
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: my-topic
  namespace: default
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824
EOF
```
3. Run the following command and wait until the Kafka and ZooKeeper pods are all up and running:
```bash
kubectl -n default get pod
NAME                                         READY   STATUS    RESTARTS   AGE
my-cluster-entity-operator-f977bf457-s7ns2   3/3     Running   0          69m
my-cluster-kafka-0                           2/2     Running   0          69m
my-cluster-kafka-1                           2/2     Running   0          69m
my-cluster-kafka-2                           2/2     Running   0          69m
my-cluster-zookeeper-0                       1/1     Running   0          71m
my-cluster-zookeeper-1                       1/1     Running   1          71m
my-cluster-zookeeper-2                       1/1     Running   1          71m
strimzi-cluster-operator-7d6cd6bdf7-9cf6t    1/1     Running   0          104m
```
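Instead of polling `kubectl get pod` manually, you can optionally block until Strimzi reports the cluster ready. A sketch, assuming Strimzi sets the standard `Ready` condition on the `Kafka` resource:

```shell
# Wait up to 5 minutes for the my-cluster Kafka resource to reach condition Ready.
kubectl -n default wait kafka/my-cluster --for=condition=Ready --timeout=300s
```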
Then run the following command to find out the metadata of the Kafka cluster:
```bash
kafkacat -L -b my-cluster-kafka-0.my-cluster-kafka-brokers.default.svc:9092,my-cluster-kafka-1.my-cluster-kafka-brokers.default.svc:9092,my-cluster-kafka-2.my-cluster-kafka-brokers.default.svc:9092
```
4. Add Kafka as a log receiver:
Click ***Add Log Collector***, select ***Kafka***, and input the Kafka broker addresses and ports as below:
```bash
my-cluster-kafka-0.my-cluster-kafka-brokers.default.svc 9092
my-cluster-kafka-1.my-cluster-kafka-brokers.default.svc 9092
my-cluster-kafka-2.my-cluster-kafka-brokers.default.svc 9092
```
![Add Kafka](/images/docs/cluster-administration/cluster-settings/log-collections/add-kafka.png)
5. Run the following command to verify whether the Kafka cluster is receiving logs sent from Fluent Bit:
```bash
# Start a utility container
kubectl run --rm utils -it --generator=run-pod/v1 --image arunvelsriram/utils bash
# Install Kafkacat in the utility container
apt-get install kafkacat
# Consume log messages from the Kafka topic my-topic
kafkacat -C -b my-cluster-kafka-0.my-cluster-kafka-brokers.default.svc:9092,my-cluster-kafka-1.my-cluster-kafka-brokers.default.svc:9092,my-cluster-kafka-2.my-cluster-kafka-brokers.default.svc:9092 -t my-topic
```
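To inspect a single log record instead of streaming the whole topic, Kafkacat's `-c` flag (exit after consuming N messages) can be combined with `jq` for pretty-printing. A sketch, assuming `jq` is installed in the utility container:

```shell
# One broker address is enough for bootstrapping; metadata lists the rest.
BROKERS=my-cluster-kafka-0.my-cluster-kafka-brokers.default.svc:9092
# -C consume, -c 1 stop after one message, -e exit at end of partition.
kafkacat -C -b "$BROKERS" -t my-topic -c 1 -e | jq .
```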


@ -34,7 +34,7 @@ To add a log receiver:
### Add Elasticsearch as a log receiver
A default Elasticsearch receiver will be added with its service address set to an Elasticsearch cluster if logging/events/auditing is enabled in [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md).
An internal Elasticsearch cluster will be deployed into the Kubernetes cluster if neither ***externalElasticsearchUrl*** nor ***externalElasticsearchPort*** is specified in [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md) when logging/events/auditing is enabled.
@ -72,15 +72,15 @@ To turn a log receiver on or off:
- Click a log receiver and enter the receiver details page.
- Click ***More*** -> ***Change Status***.
![more](/images/docs/cluster-administration/cluster-settings/log-collections/more.png)
- Select ***Activate*** or ***Close*** to turn the log receiver on or off.
![Change Status](/images/docs/cluster-administration/cluster-settings/log-collections/change-status.png)
- The log receiver's status will change to ***Close*** if you turn it off; otherwise the status will be ***Collecting***.
![receiver-status](/images/docs/cluster-administration/cluster-settings/log-collections/receiver-status.png)
## Modify or delete a log receiver
@ -89,6 +89,6 @@ You can modify a log receiver or delete it:
- Click a log receiver and enter the receiver details page.
- You can edit a log receiver by clicking ***Edit*** or ***Edit Yaml***.
![more](/images/docs/cluster-administration/cluster-settings/log-collections/more.png)
- A log receiver can be deleted by clicking ***Delete Log Collector***.


@ -18,9 +18,9 @@ This guide demonstrates email notification settings (customized settings support
## Hands-on Lab
1. Log in to the web console with an account granted the role `platform-admin`.
2. Click **Platform** in the top-left corner and select **Clusters Management**.
![mail_server_guide](/images/docs/alerting/mail_server_guide.png)
3. Select a cluster from the list and enter it (if you have not enabled the [multi-cluster feature](../../../multicluster-management/), you will go directly to the **Overview** page).
4. Select **Mail Server** under **Cluster Settings**. In the page, provide your mail server configuration and SMTP authentication information as follows:
@ -28,6 +28,6 @@ This guide demonstrates email notification settings (customized settings support
- **Use SSL Secure Connection**: SSL can be used to encrypt mails, improving the security of information transmitted by mail. Usually you have to configure the certificate for the mail server.
- SMTP authentication information: Fill in **SMTP User**, **SMTP Password**, **Sender Email Address**, etc. as below:
![mail_server_config](/images/docs/alerting/mail_server_config.png)
5. After you complete the above settings, click **Save**. You can send a test email to verify the server configuration.
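If the test email fails, it can help to verify from a shell that the mail server is reachable and accepts TLS before revisiting the settings. A sketch; `smtp.example.com:587` is a placeholder, substitute your own server's address and port:

```shell
# -starttls smtp upgrades the plaintext SMTP connection to TLS and prints the
# server certificate chain; a handshake error here points at the SSL settings.
openssl s_client -connect smtp.example.com:587 -starttls smtp </dev/null
```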