mirror of https://github.com/haiwen/seafile-admin-docs.git
synced 2025-12-26 02:32:50 +00:00

wip: k8s supports log routes

This commit is contained in:
parent a25ec0aec6
commit 266a99baf5

@@ -5,7 +5,7 @@ metadata:
 data:
   # for Seafile server
   TIME_ZONE: "UTC"
-  SEAFILE_LOG_TO_STDOUT: "true"
+  SEAFILE_LOG_TO_STDOUT: "false"
   SITE_ROOT: "/"
   ENABLE_SEADOC: "false"
   SEADOC_SERVER_URL: "https://seafile.example.com/sdoc-server" # only valid in ENABLE_SEADOC = true

@@ -5,7 +5,7 @@ metadata:
 data:
   # for Seafile server
   TIME_ZONE: "UTC"
-  SEAFILE_LOG_TO_STDOUT: "true"
+  SEAFILE_LOG_TO_STDOUT: "false"
   SITE_ROOT: "/"
   ENABLE_SEADOC: "false"
   SEADOC_SERVER_URL: "https://seafile.example.com/sdoc-server" # only valid in ENABLE_SEADOC = true

@@ -241,3 +241,150 @@ Finally, you should modify the related URLs in `seahub_settings.py`, from `http:
SERVICE_URL = "https://seafile.example.com"
FILE_SERVER_ROOT = 'https://seafile.example.com/seafhttp'
```

## Log routing and aggregation system

Similar to [Single-pod Seafile](./k8s_single_node.md), you can browse the log files of Seafile directly in the persistent volume directory. The difference is that when using K8S to deploy a Seafile cluster (especially in a cloud environment), the persistent volume is usually shared and synchronized across all nodes. However, ***the logs generated by the Seafile service do not record which node produced them***, so when browsing the files in that folder it can be difficult to tell which node a given log came from. Therefore, one solution proposed here is:

1. Record the generated logs to standard output. This way the logs can be distinguished per node, but all types of logs are merged together. You can enable this feature (**it should be enabled by default in a K8S Seafile cluster, but not in K8S single-pod Seafile**) by modifying `SEAFILE_LOG_TO_STDOUT` to `true` in `seafile-env.yaml`:

    ```yaml
    ...
    data:
      ...
      SEAFILE_LOG_TO_STDOUT: "true"
      ...
    ```

    Then restart the Seafile server:

    ```sh
    kubectl delete -f /opt/seafile-k8s-yaml/
    kubectl apply -f /opt/seafile-k8s-yaml/
    ```
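
    With the logs on standard output, each pod's (and thus each node's) logs can be inspected directly with kubectl. A quick sketch (the `app=seafile` label selector is an assumption; match it to your own manifests):

    ```sh
    # Show which node each Seafile pod is scheduled on
    kubectl get pods -l app=seafile -o wide

    # Follow the stdout logs of all matching pods, prefixed with the pod name
    kubectl logs -l app=seafile --prefix --follow
    ```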

2. Route the standard output logs and re-record them in a new file, or upload them to a log aggregation system (e.g., [*Loki*](https://grafana.com/oss/loki/)).

Currently, the commonly used log routing agents in the K8S environment are:

- [*Fluent Bit*](https://fluentbit.io/)
- [*Fluentd*](https://www.fluentd.org/)
- [*Logstash*](https://www.elastic.co/logstash/)
- [*Promtail*](https://grafana.com/loki/docs/sources/promtail/) (also a part of Loki)

***Fluent Bit*** and ***Promtail*** are more lightweight (i.e., they consume fewer system resources), but *Promtail* only supports shipping logs to *Loki*. Therefore, this document mainly introduces log routing through ***Fluent Bit***, a fast, lightweight logs and metrics agent. It is a CNCF graduated sub-project under the umbrella of *Fluentd* and is licensed under the Apache License v2.0. First, deploy *Fluent Bit* in your K8S cluster by following the [official document](https://docs.fluentbit.io/manual/installation/kubernetes). Then modify the Fluent Bit DaemonSet to mount a new directory from which the configuration files are loaded:

```yaml
# kubectl edit ds fluent-bit

...
spec:
  ...
  spec:
    ...
    containers:
      - name: fluent-bit
        volumeMounts:
          ...
          - mountPath: /fluent-bit/etc/seafile
            name: fluent-bit-seafile
          - mountPath: /
          ...
      ...
    volumes:
      ...
      - hostPath:
          path: /opt/fluent-bit
        name: fluent-bit-seafile
```

and

```yaml
# kubectl edit cm fluent-bit

data:
  ...
  fluent-bit.conf: |
    [SERVICE]
        ...
        Parsers_File /fluent-bit/etc/seafile/confs/parsers.conf
    ...
    @INCLUDE /fluent-bit/etc/seafile/confs/*-log.conf
```

For example, here we use `/opt/fluent-bit/confs` on the host. The parsers are defined in `/opt/fluent-bit/confs/parsers.conf`, and each type of log (e.g., *seahub*'s log, *seafevents*'s log) is defined in a `/opt/fluent-bit/confs/*-log.conf` file. Each `.conf` file defines several Fluent Bit data pipeline components:
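
With the host mount from the DaemonSet above (`/opt/fluent-bit` mapped to `/fluent-bit/etc/seafile` in the pod), the host-side layout would look roughly like this (file names follow the conventions described here):

```
/opt/fluent-bit/
└── confs/
    ├── parsers.conf       # all [PARSER] definitions
    └── seafile-log.conf   # [INPUT] / [FILTER] / [OUTPUT] for Seafile logs
```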

| **Pipeline** | **Description** | **Required/Optional** |
| ------------- | --------------- | --------------------- |
| **INPUT** | Specifies where and how Fluent Bit gets the original log records, and assigns a tag to each record after it is read. | Required |
| **PARSER** | Parses the log records that were read. For K8S Docker-runtime logs, they are usually in JSON format. | Required |
| **PROCESSOR** | Processes the log records with a specified tag (such as removing unnecessary parts), and assigns a new tag to the processed records. | Optional |
| **FILTER** | Filters and selects log records with a specified tag, and assigns a new tag to the new records. | Optional |
| **OUTPUT** | Tells Fluent Bit what format the log records with the specified tag will be in and where to output them (such as a file, *Elasticsearch*, *Loki*, etc.). | Required |

!!! warning
    ***PARSER*** components can only be stored in `/opt/fluent-bit/confs/parsers.conf`; otherwise Fluent Bit cannot start up normally.

### Input

As mentioned above, each container generates a log file (usually in `/var/log/containers/<container-name>-xxxxxx.log`), so you need to prepare inputs by adding the following to `/opt/fluent-bit/confs/seafile-log.conf`:

```conf
[INPUT]
    Name               tail
    Path               /var/log/containers/seafile-frontend-*.log
    Buffer_Chunk_Size  2MB
    Buffer_Max_Size    10MB
    Docker_Mode        On
    Docker_Mode_Flush  5
    Tag                seafile.*
    Parser             Docker

[INPUT]
    Name               tail
    Path               /var/log/containers/seafile-backend-*.log
    Buffer_Chunk_Size  2MB
    Buffer_Max_Size    10MB
    Docker_Mode        On
    Docker_Mode_Flush  5
    Tag                seafile.*
    Parser             Docker
```

The above defines two inputs, which monitor the seafile-frontend and seafile-backend services respectively. They are written together because, for a given node, you may not know in advance when it will run the frontend service and when it will run the backend service.

### Parser

Each input uses a parser to parse the logs and pass them to the filter. Here, a parser named Docker is created to parse the logs generated by K8S Docker-runtime containers. The parser is placed in `/opt/fluent-bit/confs/parsers.conf`:

```conf
[PARSER]
    Name         Docker
    Format       json
    Time_Key     time
    Time_Format  %Y-%m-%dT%H:%M:%S.%LZ
```
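
To make the parsed format concrete, here is a small Python sketch of what this parser does with a K8S Docker-runtime log line (the sample line itself is hypothetical); Python's `%f` plays the role of Fluent Bit's `%L`:

```python
import json
from datetime import datetime

# A hypothetical Docker-runtime log line, as found under /var/log/containers/
raw = '{"log": "[INFO] seahub started\\n", "stream": "stdout", "time": "2024-05-01T12:34:56.789Z"}'

# Format json: decode the record
record = json.loads(raw)

# Time_Key time + Time_Format %Y-%m-%dT%H:%M:%S.%LZ: extract the timestamp
# (Python's strptime uses %f for the fractional-seconds part)
timestamp = datetime.strptime(record["time"], "%Y-%m-%dT%H:%M:%S.%fZ")

print(record["stream"])  # stdout
print(timestamp.year)    # 2024
```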

### Filter

Add a filter in `/opt/fluent-bit/confs/seafile-log.conf` for log routing. Here, the `rewrite_tag` filter is used to route logs according to specific rules:

```conf
[FILTER]
    Name   rewrite_tag
    Match  seafile.*
```

Routing is defined through `Rule` entries (for details, please refer to the [Fluent Bit documentation](https://docs.fluentbit.io/manual/pipeline/filters/rewrite-tag)), such as:

```conf
[FILTER]
    ...
    Rule  $key_name  regex  new_tag  keep_old_tag_or_not
```
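
As an illustration of the rule semantics (not of how Fluent Bit is implemented internally), here is a Python sketch with a hypothetical record and tag names:

```python
import re

def rewrite_tag(record, key, regex, new_tag):
    """Mimic a rewrite_tag Rule: if the value under `key` matches `regex`,
    the record is re-emitted under `new_tag`; otherwise the rule does not apply."""
    value = str(record.get(key.lstrip("$"), ""))
    return new_tag if re.search(regex, value) else None

# Hypothetical parsed log record (originally tagged e.g. seafile.*)
record = {"log": "[INFO] seahub.views: request handled"}

# Route records whose log field mentions seahub to a dedicated tag
print(rewrite_tag(record, "$log", r"seahub", "seahub.log"))      # seahub.log
print(rewrite_tag(record, "$log", r"seafevents", "events.log"))  # None
```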

### Output logs to an aggregation system (e.g., *Loki*)
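
This section is still a work in progress. As a sketch, two hedged `[OUTPUT]` examples for `/opt/fluent-bit/confs/seafile-log.conf` — the `seahub.*` tag, the Loki host `loki.example.com`, and the label values are assumptions to adapt to your own setup:

```conf
[OUTPUT]
    Name   file
    Match  seahub.*
    Path   /fluent-bit/log

[OUTPUT]
    Name   loki
    Match  seahub.*
    Host   loki.example.com
    Port   3100
    Labels job=seafile
```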
@@ -133,4 +133,11 @@ kubectl exec -it seafile-748b695648-d6l4g -- bash

## HTTPS

Please refer to [here](./cluster_deploy_with_k8s.md#load-balance-and-https) for suggestions on enabling HTTPS in K8S.

## Seafile directory structure

Please refer to [here](./setup_pro_by_docker.md#seafile-directory-structure) for the details.

!!! tip "Send logs to Loki"
    You can directly view the log files of single-pod Seafile in the persistent volume directory, as the log files remain identifiable even if the pod's node changes (because only one node runs Seafile at a time), so by default single-pod Seafile logs are not output to standard output. If you need to send these log files to a log server (e.g., [*Loki*](https://grafana.com/oss/loki/)), you can refer to [here](./cluster_deploy_with_k8s.md#log-routing-and-aggregation-system) for more information.
@@ -228,7 +228,7 @@ docker compose up -d

Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container; in our case, we keep various log files and the upload directory outside. This allows you to rebuild containers easily without losing important information.

* /opt/seafile-data/seafile: This is the directory for seafile server configuration, logs and data.
* /opt/seafile-data/seafile/logs: This is the directory that contains the log files of seafile server processes. For example, you can find seaf-server logs in `/opt/seafile-data/seafile/logs/seafile.log`.
* /opt/seafile-data/logs: This is the directory for operating system and Nginx logs.
* /opt/seafile-data/logs/var-log: This is the directory that is mounted as `/var/log` inside the container. For example, you can find the nginx logs in `/opt/seafile-data/logs/var-log/nginx/`.