Merge pull request #458 from shenhonglei/zh-mail-server

Translate the mail server documentation
This commit is contained in:
pengfei 2020-11-04 15:37:25 +08:00 committed by GitHub
commit e6faab1809
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
10 changed files with 521 additions and 0 deletions


@@ -0,0 +1,7 @@
---
linkTitle: "Cluster Settings"
weight: 4180
_build:
render: false
---


@@ -0,0 +1,54 @@
---
title: "Cluster Visibility and Authorization"
keywords: "Cluster Visibility, Cluster Management"
description: "Cluster Visibility"
linkTitle: "Cluster Visibility and Authorization"
weight: 200
---
## Objective
This guide demonstrates how to set up cluster visibility. With cluster visibility settings, you can limit which clusters a workspace can use.
## Prerequisites
* You need to enable [Multi-cluster Management](/docs/multicluster-management/enable-multicluster/direct-connection/).
* You need to create at least one workspace.
## Set cluster visibility
In KubeSphere, a cluster can be authorized to multiple workspaces, and a workspace can also be associated with multiple clusters.
### Set up available clusters when creating a workspace
1. Log in to an account that has permission to create a workspace, such as `ws-manager`.
2. Open the **Platform** menu to enter the **Access Control** page, and then enter the **Workspaces** list page from the sidebar.
3. Click the **Create** button.
4. Fill in the form and click the **Next** button.
5. A list of clusters then appears; check the clusters that the workspace can use.
![create-workspace.png](/images/docs/cluster-administration/create-workspace.png)
6. After the workspace is created, its members can use the resources in the associated clusters.
![create-project.png](/images/docs/cluster-administration/create-project.png)
{{< notice warning >}}
Try not to create resources on the host cluster, as excessive load on it can decrease stability across clusters.
{{</ notice >}}
### Set cluster visibility after the workspace is created
After the workspace is created, you can still grant or revoke cluster authorization. Follow the steps below to adjust the visibility of a cluster.
1. Log in to an account that has permission to manage clusters, such as `cluster-manager`.
2. Open the **Platform** menu to enter the **Clusters Management** page, and then click a cluster to enter its **Cluster Management** page.
3. Expand the **Cluster Settings** sidebar and click on the **Cluster Visibility** menu.
4. You can see the list of authorized workspaces.
5. Click the **Edit Visibility** button to set the cluster's authorization scope by moving workspaces between the **Authorized** and **Unauthorized** lists.
![cluster-visibility-settings-1.png](/images/docs/cluster-administration/cluster-visibility-settings-1.png)
![cluster-visibility-settings-2.png](/images/docs/cluster-administration/cluster-visibility-settings-2.png)
### Public cluster
You can check **Set as public cluster** when setting cluster visibility.
A public cluster can be accessed by all platform users, who are able to create and schedule resources in it.
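Under the hood, the clusters a workspace is authorized to use are recorded in the workspace's resource spec. The sketch below is purely illustrative, assuming the `WorkspaceTemplate` resource and `spec.placement.clusters` field layout used by KubeSphere's multi-cluster mode; verify the exact schema against your KubeSphere version.

```yaml
# Illustrative only: a workspace authorized to use two clusters
apiVersion: tenant.kubesphere.io/v1alpha2
kind: WorkspaceTemplate
metadata:
  name: demo-workspace
spec:
  placement:
    clusters:
      - name: cluster-a   # hypothetical cluster name
      - name: cluster-b   # hypothetical cluster name
  template:
    spec:
      manager: ws-manager
```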


@@ -0,0 +1,7 @@
---
linkTitle: "Log collection"
weight: 2000
_build:
render: false
---


@@ -0,0 +1,37 @@
---
title: "Add Elasticsearch as Receiver (aka Collector)"
keywords: 'kubernetes, log, elasticsearch, pod, container, fluentbit, output'
description: 'Add Elasticsearch as log receiver to receive container logs'
linkTitle: "Add Elasticsearch as Receiver"
weight: 2200
---
KubeSphere supports using Elasticsearch, Kafka and Fluentd as log receivers.
This doc will demonstrate how to add an Elasticsearch receiver.
## Prerequisite
Before adding a log receiver, you need to enable any of the `logging`, `events` or `auditing` components following [Enable Pluggable Components](https://kubesphere.io/docs/pluggable-components/). The `logging` component is enabled as an example in this doc.
1. To add a log receiver:
- Log in to KubeSphere with an account of the ***platform-admin*** role.
- Click ***Platform*** -> ***Clusters Management***.
- Select a cluster if multiple clusters exist.
- Click ***Cluster Settings*** -> ***Log Collections***.
- Click ***Add Log Collector*** to add a log receiver.
![Add receiver](/images/docs/cluster-administration/cluster-settings/log-collections/add-receiver.png)
2. Choose ***Elasticsearch*** and fill in the Elasticsearch service address and port as below:
![Add Elasticsearch](/images/docs/cluster-administration/cluster-settings/log-collections/add-es.png)
3. Elasticsearch appears in the receiver list on the ***Log Collections*** page and its status becomes ***Collecting***.
![Receiver List](/images/docs/cluster-administration/cluster-settings/log-collections/receiver-list.png)
4. Verify whether Elasticsearch is receiving logs sent from Fluent Bit:
- Click ***Log Search*** in the ***Toolbox*** in the bottom right corner.
- You can search logs in the logging console that appears.
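Behind the UI, FluentBit Operator turns the receiver into a Fluent Bit output configuration. The sketch below is an assumption based on the fluentbit-operator project (API group, kind and field names may differ across versions) and only illustrates what the generated Elasticsearch output roughly looks like:

```yaml
# Illustrative only: an Output custom resource directing Fluent Bit logs to Elasticsearch
apiVersion: logging.kubesphere.io/v1alpha2
kind: Output
metadata:
  name: es
  namespace: kubesphere-logging-system
  labels:
    logging.kubesphere.io/enabled: "true"
spec:
  match: kube.*
  es:
    host: elasticsearch-logging-data.kubesphere-logging-system.svc   # assumed service name
    port: 9200
    logstashFormat: true
    logstashPrefix: ks-logstash-log
```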


@@ -0,0 +1,155 @@
---
title: "Add Fluentd as Receiver (aka Collector)"
keywords: 'kubernetes, log, fluentd, pod, container, fluentbit, output'
description: 'Add Fluentd as log receiver to receive container logs'
linkTitle: "Add Fluentd as Receiver"
weight: 2400
---
KubeSphere supports using Elasticsearch, Kafka and Fluentd as log receivers.
This doc will demonstrate:
- How to deploy Fluentd as a Deployment and create the corresponding Service and ConfigMap.
- How to add Fluentd as a log receiver to receive logs sent from Fluent Bit and then output them to stdout.
- How to verify whether Fluentd receives logs successfully.
## Prerequisites
- Before adding a log receiver, you need to enable any of the `logging`, `events` or `auditing` components following [Enable Pluggable Components](https://kubesphere.io/docs/pluggable-components/). The `logging` component is enabled as an example in this doc.
- To configure log collection, use an account of the ***platform-admin*** role.
## Step 1: Deploy Fluentd as a deployment
Usually, Fluentd is deployed as a DaemonSet in K8s to collect container logs on each node. KubeSphere instead chooses Fluent Bit for this purpose because of its low memory footprint. Fluentd, however, features numerous output plugins. Hence, KubeSphere deploys Fluentd as a Deployment to forward the logs it receives from Fluent Bit to more destinations such as S3, MongoDB, Cassandra, MySQL, syslog and Splunk.
To deploy Fluentd as a Deployment, simply open the ***kubectl*** console in the ***KubeSphere Toolbox*** and run the following command:
{{< notice note >}}
- The following command deploys the Fluentd Deployment, Service and ConfigMap into the `default` namespace, and adds a filter to the Fluentd ConfigMap that excludes logs from the `default` namespace to avoid a log collection loop between Fluent Bit and Fluentd.
- If you want to deploy to a different namespace, change every occurrence of `default` to that namespace.
{{</ notice >}}
```yaml
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: default
data:
  fluent.conf: |-
    # Receive logs sent from Fluent Bit on port 24224
    <source>
      @type forward
      port 24224
    </source>
    # Because this will send logs Fluentd received to stdout,
    # to avoid Fluent Bit and Fluentd loop logs collection,
    # add a filter here to avoid sending logs from the default namespace to stdout again
    <filter **>
      @type grep
      <exclude>
        key $.kubernetes.namespace_name
        pattern /^default$/
      </exclude>
    </filter>
    # Send received logs to stdout for demo/test purpose only
    # Various output plugins are supported to output logs to S3, MongoDB, Cassandra, MySQL, syslog and Splunk etc.
    <match **>
      @type stdout
    </match>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: fluentd
  name: fluentd
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - image: fluentd:v1.9.1-1.0
          imagePullPolicy: IfNotPresent
          name: fluentd
          ports:
            - containerPort: 24224
              name: forward
              protocol: TCP
            - containerPort: 5140
              name: syslog
              protocol: TCP
          volumeMounts:
            - mountPath: /fluentd/etc
              name: config
              readOnly: true
      volumes:
        - configMap:
            defaultMode: 420
            name: fluentd-config
          name: config
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: fluentd-svc
  name: fluentd-svc
  namespace: default
spec:
  ports:
    - name: forward
      port: 24224
      protocol: TCP
      targetPort: forward
  selector:
    app: fluentd
  sessionAffinity: None
  type: ClusterIP
EOF
```
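The `<match **>` stdout block above is for demo purposes only. To forward logs to a real destination instead, you would replace it with one of Fluentd's output plugins. The following is a hedged sketch using the `fluent-plugin-s3` output; the bucket, region and credentials are placeholders, and the plugin must be installed in the Fluentd image.

```
# Illustrative only: replace the stdout match with an S3 output (requires fluent-plugin-s3)
<match **>
  @type s3
  aws_key_id YOUR_AWS_KEY_ID        # placeholder credential
  aws_sec_key YOUR_AWS_SECRET_KEY   # placeholder credential
  s3_bucket your-log-bucket         # placeholder bucket name
  s3_region us-east-1
  path logs/
  <buffer time>
    @type file
    path /fluentd/buffer/s3
    timekey 3600          # flush a chunk every hour
    timekey_wait 10m
  </buffer>
</match>
```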
## Step 2: Add Fluentd as log receiver (aka collector)
1. To add a log receiver:
- Log in to KubeSphere with an account of the ***platform-admin*** role.
- Click ***Platform*** -> ***Clusters Management***.
- Select a cluster if multiple clusters exist.
- Click ***Cluster Settings*** -> ***Log Collections***.
- Click ***Add Log Collector*** to add a log receiver.
![Add receiver](/images/docs/cluster-administration/cluster-settings/log-collections/add-receiver.png)
2. Choose ***Fluentd*** and fill in the Fluentd service address and port as below:
![Add Fluentd](/images/docs/cluster-administration/cluster-settings/log-collections/add-fluentd.png)
3. Fluentd appears in the receiver list on the ***Log Collections*** page and its status shows ***Collecting***.
![Receiver List](/images/docs/cluster-administration/cluster-settings/log-collections/receiver-list.png)
4. Verify whether Fluentd is receiving logs sent from Fluent Bit:
- Click ***Application Workloads*** in the ***Cluster Management*** UI.
- Select ***Workloads*** and then select the `default` namespace on the ***Workload*** - ***Deployments*** tab.
- Click the ***fluentd*** item and then click the ***fluentd-xxxxxxxxx-xxxxx*** pod.
- Click the ***fluentd*** container.
- On the ***fluentd*** container page, select the ***Container Logs*** tab.
You'll see logs begin to scroll up continuously.
![Container Logs](/images/docs/cluster-administration/cluster-settings/log-collections/container-logs.png)
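For reference, the Fluentd receiver added in the UI above roughly corresponds to a Fluent Bit `forward` output managed by FluentBit Operator. The sketch below is an assumption based on the fluentbit-operator project (API group and field names may vary by version):

```yaml
# Illustrative only: a forward output pointing at the fluentd-svc created in Step 1
apiVersion: logging.kubesphere.io/v1alpha2
kind: Output
metadata:
  name: fluentd-forward
  namespace: kubesphere-logging-system
  labels:
    logging.kubesphere.io/enabled: "true"
spec:
  match: kube.*
  forward:
    host: fluentd-svc.default.svc
    port: 24224
```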


@@ -0,0 +1,133 @@
---
title: "Add Kafka as Receiver (aka Collector)"
keywords: 'kubernetes, log, kafka, pod, container, fluentbit, output'
description: 'Add Kafka as log receiver to receive container logs'
linkTitle: "Add Kafka as Receiver"
weight: 2300
---
KubeSphere supports using Elasticsearch, Kafka and Fluentd as log receivers.
This doc will demonstrate:
- How to deploy [strimzi-kafka-operator](https://github.com/strimzi/strimzi-kafka-operator) and then create a Kafka cluster and a Kafka topic by creating `Kafka` and `KafkaTopic` custom resources.
- How to add Kafka as a log receiver to receive logs sent from Fluent Bit.
- How to verify whether the Kafka cluster is receiving logs using [Kafkacat](https://github.com/edenhill/kafkacat).
## Prerequisite
Before adding a log receiver, you need to enable any of the `logging`, `events` or `auditing` components following [Enable Pluggable Components](https://kubesphere.io/docs/pluggable-components/). The `logging` component is enabled as an example in this doc.
## Step 1: Create a Kafka cluster and a Kafka topic
{{< notice note >}}
If you already have a Kafka cluster, you can start from Step 2.
{{</ notice >}}
You can use [strimzi-kafka-operator](https://github.com/strimzi/strimzi-kafka-operator) to create a Kafka cluster and a Kafka topic.
1. Install [strimzi-kafka-operator](https://github.com/strimzi/strimzi-kafka-operator) to the `default` namespace:
```bash
helm repo add strimzi https://strimzi.io/charts/
helm install --name kafka-operator -n default strimzi/strimzi-kafka-operator
```
2. Create a Kafka cluster and a Kafka topic in the `default` namespace:
To deploy a Kafka cluster and create a Kafka topic, simply open the ***kubectl*** console in the ***KubeSphere Toolbox*** and run the following command:
{{< notice note >}}
The following creates Kafka and ZooKeeper clusters with storage type `ephemeral` (i.e. `emptyDir`) for demo purposes. For production, you should use persistent storage; please refer to [kafka-persistent](https://github.com/strimzi/strimzi-kafka-operator/blob/0.19.0/examples/kafka/kafka-persistent.yaml).
{{</ notice >}}
```yaml
cat <<EOF | kubectl apply -f -
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
  namespace: default
spec:
  kafka:
    version: 2.5.0
    replicas: 3
    listeners:
      plain: {}
      tls: {}
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      log.message.format.version: '2.5'
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
---
apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaTopic
metadata:
  name: my-topic
  namespace: default
  labels:
    strimzi.io/cluster: my-cluster
spec:
  partitions: 3
  replicas: 3
  config:
    retention.ms: 7200000
    segment.bytes: 1073741824
EOF
```
3. Run the following command and wait until the Kafka and ZooKeeper pods are all up and running:
```bash
kubectl -n default get pod
NAME READY STATUS RESTARTS AGE
my-cluster-entity-operator-f977bf457-s7ns2 3/3 Running 0 69m
my-cluster-kafka-0 2/2 Running 0 69m
my-cluster-kafka-1 2/2 Running 0 69m
my-cluster-kafka-2 2/2 Running 0 69m
my-cluster-zookeeper-0 1/1 Running 0 71m
my-cluster-zookeeper-1 1/1 Running 1 71m
my-cluster-zookeeper-2 1/1 Running 1 71m
strimzi-cluster-operator-7d6cd6bdf7-9cf6t 1/1 Running 0 104m
```
Then run the following command to print the metadata of the Kafka cluster:
```bash
kafkacat -L -b my-cluster-kafka-0.my-cluster-kafka-brokers.default.svc:9092,my-cluster-kafka-1.my-cluster-kafka-brokers.default.svc:9092,my-cluster-kafka-2.my-cluster-kafka-brokers.default.svc:9092
```
4. Add Kafka as a log receiver:
Click ***Add Log Collector***, select ***Kafka***, and input the Kafka broker addresses and ports as below:
```bash
my-cluster-kafka-0.my-cluster-kafka-brokers.default.svc 9092
my-cluster-kafka-1.my-cluster-kafka-brokers.default.svc 9092
my-cluster-kafka-2.my-cluster-kafka-brokers.default.svc 9092
```
![Add Kafka](/images/docs/cluster-administration/cluster-settings/log-collections/add-kafka.png)
5. Run the following command to verify whether the Kafka cluster is receiving logs sent from Fluent Bit:
```bash
# Start a util container
kubectl run --rm utils -it --generator=run-pod/v1 --image arunvelsriram/utils bash
# Install Kafkacat in the util container
apt-get install kafkacat
# Run the following command to consume log messages from kafka topic: my-topic
kafkacat -C -b my-cluster-kafka-0.my-cluster-kafka-brokers.default.svc:9092,my-cluster-kafka-1.my-cluster-kafka-brokers.default.svc:9092,my-cluster-kafka-2.my-cluster-kafka-brokers.default.svc:9092 -t my-topic
```
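Similarly, the Kafka receiver configured in step 4 roughly corresponds to a Fluent Bit `kafka` output managed by FluentBit Operator. The sketch below is an assumption based on the fluentbit-operator project (API group and field names may vary by version); the broker address assumes Strimzi's bootstrap service naming.

```yaml
# Illustrative only: a kafka output sending logs to the my-topic topic created in Step 1
apiVersion: logging.kubesphere.io/v1alpha2
kind: Output
metadata:
  name: kafka
  namespace: kubesphere-logging-system
  labels:
    logging.kubesphere.io/enabled: "true"
spec:
  match: kube.*
  kafka:
    brokers: my-cluster-kafka-bootstrap.default.svc:9092   # bootstrap service assumed from Strimzi naming
    topics: my-topic
```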


@@ -0,0 +1,94 @@
---
title: "Introduction"
keywords: 'kubernetes, log, elasticsearch, kafka, fluentd, pod, container, fluentbit, output'
description: 'Add log receivers to receive container logs'
linkTitle: "Introduction"
weight: 2100
---
KubeSphere provides a flexible log collection configuration method. Powered by [FluentBit Operator](https://github.com/kubesphere/fluentbit-operator/), users can easily add, modify, delete, enable or disable Elasticsearch, Kafka and Fluentd receivers. Once a receiver is added, logs will be sent to it.
## Prerequisite
Before adding a log receiver, you need to enable any of the `logging`, `events` or `auditing` components following [Enable Pluggable Components](https://kubesphere.io/docs/pluggable-components/).
## Add Log Receiver (aka Collector) for container logs
To add a log receiver:
- Log in with an account of the ***platform-admin*** role.
- Click ***Platform*** -> ***Clusters Management***.
- Select a cluster if multiple clusters exist.
- Click ***Cluster Settings*** -> ***Log Collections***.
- Click ***Add Log Collector*** to add a log receiver.
![Log collection](/images/docs/cluster-administration/cluster-settings/log-collections/log-collections.png)
{{< notice note >}}
- At most one receiver can be added for each receiver type.
- Different types of receivers can be added simultaneously.
{{</ notice >}}
### Add Elasticsearch as log receiver
A default Elasticsearch receiver is added, with its service address set to an Elasticsearch cluster, if logging/events/auditing is enabled in [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md).
An internal Elasticsearch cluster is deployed into the K8s cluster if neither ***externalElasticsearchUrl*** nor ***externalElasticsearchPort*** is specified in [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md) when logging/events/auditing is enabled.
Configuring an external Elasticsearch cluster is recommended for production usage; the internal Elasticsearch cluster is for test/development/demo purposes only.
Log search relies on the internal/external Elasticsearch cluster configured.
Please refer to [Add Elasticsearch as receiver](../add-es-as-receiver) to add a new Elasticsearch log receiver if the default one is deleted.
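For reference, an external Elasticsearch cluster is typically specified in the ClusterConfiguration before enabling the `logging` component. The fragment below is an illustrative sketch based on the ks-installer configuration example; the exact field layout may differ between KubeSphere versions, so verify it against your release.

```yaml
# Illustrative ClusterConfiguration fragment (field layout assumed; verify for your version)
spec:
  common:
    es:
      externalElasticsearchUrl: 192.168.0.2   # placeholder: your external Elasticsearch address
      externalElasticsearchPort: 9200         # placeholder: your external Elasticsearch port
  logging:
    enabled: true
```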
### Add Kafka as log receiver
Kafka is often used to receive logs and serve as a broker to other processing systems like Spark. [Add Kafka as receiver](../add-kafka-as-receiver) demonstrates how to add Kafka to receive Kubernetes logs.
### Add Fluentd as log receiver
If you need to output logs to destinations other than Elasticsearch or Kafka, you'll need to add Fluentd as a log receiver. Fluentd has numerous output plugins that can forward logs to various destinations such as S3, MongoDB, Cassandra, MySQL, syslog, Splunk, etc. [Add Fluentd as receiver](../add-fluentd-as-receiver) demonstrates how to add Fluentd to receive Kubernetes logs.
## Add Log Receiver (aka Collector) for events/auditing logs
Starting from KubeSphere v3.0.0, K8s events and K8s/KubeSphere auditing logs can be archived in the same way as container logs. An ***Events*** or ***Auditing*** tab appears on the ***Log Collections*** page if the ***events*** or ***auditing*** component is enabled in [ClusterConfiguration](https://github.com/kubesphere/kubekey/blob/master/docs/config-example.md). Log receivers for K8s events or K8s/KubeSphere auditing can be configured after switching to the corresponding tab.
![events](/images/docs/cluster-administration/cluster-settings/log-collections/log-collections-events.png)
Container logs, K8s events and K8s/KubeSphere auditing logs should be stored in different Elasticsearch indices to be searchable in KubeSphere. The index prefixes are:
- ***ks-logstash-log*** for container logs
- ***ks-logstash-events*** for K8s events
- ***ks-logstash-auditing*** for K8s/KubeSphere auditing
## Turn a log receiver on or off
KubeSphere supports turning a log receiver on or off without adding/deleting it.
To turn a log receiver on or off:
- Click a log receiver to enter the receiver details page.
- Click ***More*** -> ***Change Status***.
![more](/images/docs/cluster-administration/cluster-settings/log-collections/more.png)
- Select ***Activate*** or ***Close*** to turn the log receiver on or off.
![Change Status](/images/docs/cluster-administration/cluster-settings/log-collections/change-status.png)
- The log receiver's status changes to ***Close*** if you turn it off; otherwise it shows ***Collecting***.
![receiver-status](/images/docs/cluster-administration/cluster-settings/log-collections/receiver-status.png)
## Modify or delete a log receiver
You can modify or delete a log receiver:
- Click a log receiver to enter the receiver details page.
- Edit the log receiver by clicking ***Edit*** or ***Edit Yaml***.
![more](/images/docs/cluster-administration/cluster-settings/log-collections/more.png)
- Delete the log receiver by clicking ***Delete Log Collector***.


@@ -0,0 +1,34 @@
---
title: "Mail Server"
keywords: 'KubeSphere, Kubernetes, Notification, Mail Server'
description: 'Mail Server'
linkTitle: "Mail Server"
weight: 4190
---
## Objective
This guide demonstrates the email notification settings for alerting policies (custom settings are supported). You can specify user email addresses to receive alert messages.
## Prerequisites
[KubeSphere Alerting and Notification](../../../pluggable-components/alerting-notification/) needs to be enabled.
## Hands-on Lab
1. Log in to the web console with an account of the `platform-admin` role.
2. Click **Platform** in the upper-left corner and select **Clusters Management**.
![mail_server_guide](/images/docs/alerting/mail_server_guide-zh.png)
3. Select a cluster from the list and enter it (if the [multi-cluster feature](../../../multicluster-management/) is not enabled, you will go directly to the **Overview** page).
4. Select **Mail Server** under **Cluster Settings**. On this page, provide your mail server configuration and SMTP authentication information as follows:
- **SMTP Server Address**: fill in the address of an SMTP server that can provide mail services. The port is usually 25.
- **Use SSL Secure Connection**: SSL can be used to encrypt mail, improving the security of the information transmitted. Usually, you have to configure a certificate for the mail server.
- SMTP authentication information: fill in the **SMTP User**, **SMTP Password**, **Sender Email Address**, etc.
![mail_server_config](/images/docs/alerting/mail_server_config-zh.png)
5. After completing the settings above, click **Save**. You can send a test email to verify whether the server configuration is successful.
