deployment of seafile-ai::face-embedding

Junxiang Huang 2025-06-27 16:59:49 +08:00
parent 2d2169d2a6
commit ac90891e5c
9 changed files with 239 additions and 68 deletions


@ -0,0 +1,168 @@
# Seafile AI extension
Starting with Seafile 13 Pro, users can enable ***Seafile AI*** to support the following features:
- File tags, file and image summaries, text translation, sdoc writing assistance
- Given an image, generate its corresponding tags (including objects, weather, color, etc.)
- Detect faces in images and encode them
- Detect text in images (OCR)
## Deploy Seafile AI basic service
The Seafile AI basic service calls an external large language model (LLM) service via its API to provide file tags, file and image summaries, text translation, and sdoc writing assistance.
1. Download `seafile-ai.yml`
```sh
wget https://manual.seafile.com/13.0/repo/docker/seafile-ai.yml
```
!!! note "Deploy in a cluster or on a separate host"
If you deploy Seafile in a cluster, or run Seafile AI on a different host from the Seafile server, please expose port `8888` in `seafile-ai.yml`:
```yml
services:
  seafile-ai:
    ...
    ports:
      - 8888:8888
```
In a cluster deployment, Seafile AI should run on one of the cluster nodes.
2. Modify `.env`, insert or modify the following fields:
```
COMPOSE_FILE='...,seafile-ai.yml' # add seafile-ai.yml
ENABLE_SEAFILE_AI=true
SEAFILE_AI_LLM_KEY=<your LLM access key>
```
!!! note "Deploy in a cluster or on a separate host"
Please also specify the following items in `.env`:
- In the `.env` on the host where the Seafile server is deployed:
    - `SEAFILE_AI_SERVER_URL`: the service URL of Seafile AI (e.g., `http://seafile-ai.example.com:8888`)
- In the `.env` on the host where Seafile AI is deployed:
    - `SEAFILE_SERVER_URL`: your Seafile server's URL (e.g., `https://seafile.example.com`)
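For example, with the hostnames used above, the two `.env` files would contain:
```sh
# .env on the Seafile server host
SEAFILE_AI_SERVER_URL=http://seafile-ai.example.com:8888

# .env on the Seafile AI host
SEAFILE_SERVER_URL=https://seafile.example.com
```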
!!! tip "About LLM configs"
By default, Seafile uses the ***GPT-4o-mini*** model from *OpenAI*, so you only need to provide your ***OpenAI API Key***. If you want to use another LLM (including a self-deployed LLM service), you also need to specify the following in `.env`:
```sh
SEAFILE_AI_LLM_TYPE=<your LLM type>
SEAFILE_AI_LLM_URL=<your LLM endpoint>
SEAFILE_AI_LLM_KEY=<your LLM API key>
```
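As an illustration, a self-deployed service exposing an OpenAI-compatible API might be configured as follows; the endpoint, key, and the assumption that the default `openai` type also covers OpenAI-compatible endpoints are placeholders, not values prescribed by this manual:
```sh
SEAFILE_AI_LLM_TYPE=openai                         # assumption: the default type also serves OpenAI-compatible endpoints
SEAFILE_AI_LLM_URL=http://llm.example.com:8000/v1  # placeholder URL of your self-deployed service
SEAFILE_AI_LLM_KEY=sk-xxxxxxxx                     # placeholder API key
```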
3. Restart Seafile server:
```sh
docker compose down
docker compose up -d
```
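After the restart, you can check that the Seafile AI container came up correctly (`seafile-ai` is the service name defined in `seafile-ai.yml`):
```sh
docker compose ps seafile-ai        # the service should be listed as running
docker compose logs -f seafile-ai   # watch the startup log for errors
```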
## Deploy face embedding service (Optional)
The face embedding service is used to detect and encode faces in images. We generally **recommend** deploying it on a machine with a **GPU** and a graphics driver supported by [ONNX Runtime](https://onnxruntime.ai/docs/) (it can therefore also be deployed on a different machine from the Seafile AI basic service). Currently, the Seafile AI face embedding service only supports the following modes:
- *Nvidia* GPU, which uses the ***CUDA 12.4*** acceleration environment (requires at least the Nvidia GeForce 531.18 driver) and the installation of the [Nvidia container toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).
<!-- - *AMD* GPU, which will use the ***ROCm 6.4.1*** acceleration environment.-->
- Pure *CPU* mode
If you plan to deploy the face embedding service in a GPU environment, make sure your graphics card is **supported by the acceleration environment** and **correctly mapped into the `/dev/dri` directory** (cloud servers and [WSL](https://learn.microsoft.com/en-us/windows/wsl/install) with certain driver versions are therefore not supported).
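For example, on an Nvidia host the following quick checks can be run before choosing the CUDA compose file (assuming a standard driver and container toolkit installation):
```sh
nvidia-smi                       # driver version must satisfy the CUDA 12.4 requirement
ls /dev/dri                      # the GPU should be mapped here (e.g. card0, renderD128)
docker info | grep -i runtimes   # the nvidia runtime should be listed once the container toolkit is installed
```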
1. Download Docker compose files
=== "CUDA"
```sh
wget -O face-embedding.yml https://manual.seafile.com/13.0/repo/docker/face-embedding/cuda.yml
```
=== "CPU"
```sh
wget -O face-embedding.yml https://manual.seafile.com/13.0/repo/docker/face-embedding/cpu.yml
```
<!--
=== "ROCM"
```sh
wget -O face-embedding.yml https://manual.seafile.com/13.0/repo/docker/face-embedding/rocm.yml
```
-->
2. Modify `.env`, insert or modify the following fields:
```
COMPOSE_FILE='...,face-embedding.yml' # add face-embedding.yml
FACE_EMBEDDING_VOLUME=/opt/face_embedding
```
3. Restart Seafile server
```sh
docker compose down
docker compose up -d
```
4. Enable face recognition in the library settings:
![Enable face recognition](../images/face-embedding.png)
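After the restart, you can check that the face embedding container started and fetched its models into the persistent volume:
```sh
docker compose ps face-embedding                         # the service should be listed as running
docker compose logs face-embedding                       # startup log, including the model download
ls ${FACE_EMBEDDING_VOLUME:-/opt/face_embedding}/models  # model files appear here after the first start
```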
### Deploy the face embedding service on a different machine from the Seafile AI basic service
Since the face embedding service may need to run on a host with a GPU, it may not be deployed together with the Seafile AI basic service. In this case, you have to make some changes to the Docker compose files so that the service remains reachable.
1. Modify `face-embedding.yml` and uncomment the lines that expose the service port:
```yml
services:
  face-embedding:
    ...
    ports:
      - 8886:8886
```
2. Modify the `.env` of the Seafile AI basic service:
```
FACE_EMBEDDING_SERVICE_URL=http://<your face embedding service host>:8886
```
3. Make sure `JWT_PRIVATE_KEY` is set in the `.env` on the face embedding host and matches the one used by the Seafile server (see the example after these steps)
4. Restart Seafile server
```sh
docker compose down
docker compose up -d
```
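For step 3, the same `JWT_PRIVATE_KEY` value simply has to appear in the `.env` of both hosts, for example:
```sh
# .env on both the Seafile server host and the face embedding host
JWT_PRIVATE_KEY=<the same random string on both hosts>
```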
### Persistent volume and model management
By default, the persistent volume is `/opt/face_embedding`. It contains two subdirectories:
- `/opt/face_embedding/logs`: Contains the startup log and access log of the face embedding service
- `/opt/face_embedding/models`: Contains the model files of the face embedding service. The service automatically fetches the latest applicable models at each startup. These models are hosted in [our Hugging Face repository](https://huggingface.co/Seafile/face-embedding). **If the automatic download fails, you can also download the models into this directory manually** (before the first startup or later), as shown below.
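One way to fetch the models manually is the `huggingface-cli` tool from the `huggingface_hub` package; the target path below assumes the default volume location:
```sh
pip install -U huggingface_hub
huggingface-cli download Seafile/face-embedding --local-dir /opt/face_embedding/models
```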
### Customizing the face embedding service access key
By default, the face embedding service uses the same access key as the Seafile server, namely `JWT_PRIVATE_KEY`. You may want to change this for security reasons. To customize the access key for the face embedding service, follow these steps (a key generation example follows the steps):
1. Modify the `.env` files of both the face embedding service and Seafile AI:
```
FACE_EMBEDDING_SERVICE_KEY=<your custom access key>
```
2. Restart Seafile server
```sh
docker compose down
docker compose up -d
```
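Any sufficiently long random string should work as the custom key (the exact requirements are not specified in this manual); you can generate one, for example, with:
```sh
openssl rand -hex 32
```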


@ -1,54 +0,0 @@
# Seafile AI extension
From Seafile 13 Pro, users can enable ***Seafile AI*** to support the following features:
- File tags, file and image summaries, text translation, sdoc writing assistance
- Given an image, generate its corresponding tags (including objects, weather, color, etc.)
- Detect faces in images and encode them
- Detect text in images (OCR)
## Deploy Seafile AI basic service
The Seafile AI basic service will use API calls to an external large language model service (e.g., *GPT-4o-mini*) to implement file labeling, file and image summaries, text translation, and sdoc writing assistance.
1. Download `seafile-ai.yml`
```sh
wget https://manual.seafile.com/13.0/repo/docker/seafile-ai.yml
```
!!! note "Deploy in a cluster"
If you deploy Seafile in a cluster and would like to deploy Seafile AI, please expose port `8888` in `seafile-ai.yml`:
```yml
services:
  seafile-ai:
    ...
    ports:
      - 8888:8888
```
At the same time, Seafile AI should be deployed on one of the cluster nodes.
2. Modify `.env`, insert or modify the following fields:
```
COMPOSE_FILE='...,seafile-ai.yml' # add seafile-ai.yml
ENABLE_SEAFILE_AI=true
SEAFILE_AI_LLM_TYPE=open-ai
SEAFILE_AI_LLM_URL=<your LLM endpoint URL>
SEAFILE_AI_LLM_KEY=<your LLM access key>
```
!!! note "Deploy in a cluster"
Please also specify `SEAFILE_AI_SERVER_URL` in `.env`, pointing to the host where your Seafile AI basic service is deployed, if you deploy Seafile in a cluster.
3. Restart Seafile server:
```sh
docker compose down
docker compose up -d
```

Binary image file added (not shown). After: 86 KiB

Binary image file removed (not shown). Before: 97 KiB


@ -0,0 +1,16 @@
services:
  face-embedding:
    image: ${FACE_EMBEDDING_IMAGE:-seafileltd/face_embedding:latest-cpu}
    container_name: face-embedding
    # ports:
    #   - 8886:8886
    volumes:
      - ${FACE_EMBEDDING_VOLUME:-/opt/face_embedding}/models:/models
      - ${FACE_EMBEDDING_VOLUME:-/opt/face_embedding}/logs:/logs
    environment:
      - FACE_EMBEDDING_SERVICE_KEY=${FACE_EMBEDDING_SERVICE_KEY:-${JWT_PRIVATE_KEY:?JWT_PRIVATE_KEY is not set or empty}}
    networks:
      - seafile-net
networks:
  seafile-net:
    name: seafile-net


@ -0,0 +1,26 @@
services:
  face-embedding:
    image: ${FACE_EMBEDDING_IMAGE:-seafileltd/face_embedding:latest-cuda}
    container_name: face-embedding
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    # ports:
    #   - 8886:8886
    devices:
      - "/dev/dri:/dev/dri"
    volumes:
      - ${FACE_EMBEDDING_VOLUME:-/opt/face_embedding}/models:/models
      - ${FACE_EMBEDDING_VOLUME:-/opt/face_embedding}/logs:/logs
    environment:
      - FACE_EMBEDDING_SERVICE_KEY=${FACE_EMBEDDING_SERVICE_KEY:-${JWT_PRIVATE_KEY:?Variable is not set or empty}}
    networks:
      - seafile-net
networks:
  seafile-net:
    name: seafile-net


@ -0,0 +1,19 @@
services:
  face-embedding:
    image: ${FACE_EMBEDDING_IMAGE:-seafileltd/face_embedding:latest-rocm}
    container_name: face-embedding
    ports:
      - 8886:8886
    devices:
      - "/dev/kfd:/dev/kfd"
    volumes:
      - ${FACE_EMBEDDING_VOLUME:-/opt/face_embedding}/models:/models
      - ${FACE_EMBEDDING_VOLUME:-/opt/face_embedding}/logs:/logs
    environment:
      - FACE_EMBEDDING_SERVICE_KEY=${FACE_EMBEDDING_SERVICE_KEY:-${JWT_PRIVATE_KEY:?Variable is not set or empty}}
    networks:
      - seafile-net
networks:
  seafile-net:
    name: seafile-net


@ -5,23 +5,19 @@ services:
    volumes:
      - ${SEAFILE_VOLUME:-/opt/seafile-data}:/shared
    # ports:
    #   - 8888:8888
    #   - 8888:8888
    environment:
      - JWT_PRIVATE_KEY=${JWT_PRIVATE_KEY:?Variable is not set or empty}
      - IMAGE_TAGS_SERVICE_KEY=${IMAGE_TAGS_SERVICE_KEY:-}
      - FACE_EMBEDDING_SERVICE_KEY=${FACE_EMBEDDING_SERVICE_KEY:-}
      - OCR_SERVICE_KEY=${OCR_SERVICE_KEY:-}
      - SEAFILE_AI_LLM_TYPE=${SEAFILE_AI_LLM_TYPE:-open-ai}
      - SEAFILE_AI_LLM_URL=$SEAFILE_AI_LLM_URL
      - SEAFILE_AI_LLM_KEY=$SEAFILE_AI_LLM_KEY
      - IMAGE_TAGS_SERVICE_URL=${IMAGE_TAGS_SERVICE_URL:-http://image-tags:8885}
      - SEAFILE_AI_LLM_TYPE=${SEAFILE_AI_LLM_TYPE:-openai}
      - SEAFILE_AI_LLM_URL=${SEAFILE_AI_LLM_URL:-}
      - SEAFILE_AI_LLM_KEY=${SEAFILE_AI_LLM_KEY:-}
      - FACE_EMBEDDING_SERVICE_URL=${FACE_EMBEDDING_SERVICE_URL:-http://face-embedding:8886}
      - OCR_SERVICE_URL=${OCR_SERVICE_URL:-http://ocr:8887}
      - SEAFILE_SERVER_HOSTNAME=${SEAFILE_SERVER_HOSTNAME:?Variable is not set or empty}
      - SEAFILE_SERVER_PROTOCOL=${SEAFILE_SERVER_PROTOCOL:-http}
      - FACE_EMBEDDING_SERVICE_KEY=${FACE_EMBEDDING_SERVICE_KEY:-${JWT_PRIVATE_KEY:?Variable is not set or empty}}
      - SEAFILE_SERVER_URL=${SEAFILE_SERVER_URL:-http://seafile}
      - JWT_PRIVATE_KEY=${JWT_PRIVATE_KEY:?Variable is not set or empty}
      - SEAFILE_AI_LOG_LEVEL=${SEAFILE_AI_LOG_LEVEL:-info}
    networks:
      - seafile-net
networks:
  seafile-net:
    name: seafile-net
    name: seafile-net


@ -141,7 +141,7 @@ nav:
- Thumbnail Server: extension/thumbnail-server.md
- WebDAV extension: extension/webdav.md
- FUSE extension: extension/fuse.md
- Seafile AI extension: extension/seafile_ai.md
- Seafile AI extension: extension/seafile-ai.md
- Online Office:
- Collabora Online Integration: extension/libreoffice_online.md
- OnlyOffice Integration: extension/only_office.md