  • Detect text in images (OCR)
  • Deploy Seafile AI basic service


    Deploy Seafile AI on the host with Seafile

The Seafile AI basic service uses API calls to an external large language model (LLM) service to implement file labeling, file and image summaries, text translation, and SDoc writing assistance.

    Seafile AI requires Redis cache

To deploy Seafile AI correctly, you must use Redis as the cache. Set CACHE_PROVIDER=redis in .env and fill in the Redis-related configuration correctly.
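As a minimal sketch, the Redis-related part of .env could look like the following; the host, port, and password values are placeholders you must replace with your own:

```ini
CACHE_PROVIDER=redis
REDIS_HOST=redis              # placeholder: hostname of your Redis container or server
REDIS_PORT=6379
REDIS_PASSWORD=<your redis password>
```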

    1. Download seafile-ai.yml

      wget https://manual.seafile.com/13.0/repo/docker/seafile-ai.yml
       

      Deploy in a cluster or standalone deployment


      If you deploy Seafile in a cluster and would like to deploy Seafile AI, please expose port 8888 in seafile-ai.yml:

      services:
        seafile-ai:
          ...
          ports:
            - 8888:8888

      At the same time, Seafile AI should be deployed on one of the cluster nodes.

    2. Modify .env, insert or modify the following fields:

      ENABLE_SEAFILE_AI=true
      SEAFILE_AI_LLM_KEY=<your LLM access key>

      Deploy in a cluster or standalone deployment

      Please also specify the following items in .env:

      • .env on the host that deploys the Seafile server:
        • SEAFILE_AI_SERVER_URL: the service URL of Seafile AI (e.g., http://seafile-ai.example.com:8888)
      • .env on the host that deploys Seafile AI:
        • SEAFILE_SERVER_URL: your Seafile server's URL (e.g., https://seafile.example.com)
        • REDIS_HOST: your Redis host
        • REDIS_PORT: your Redis port
        • REDIS_PASSWORD: your Redis password
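As a sketch, the split across the two .env files could look like this; every hostname and the password are placeholders to replace with your own values:

```ini
# .env on the Seafile server host (placeholder hostname)
SEAFILE_AI_SERVER_URL=http://seafile-ai.example.com:8888

# .env on the Seafile AI host (placeholder values)
SEAFILE_SERVER_URL=https://seafile.example.com
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_PASSWORD=<your redis password>
```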

      About LLM configs

By default, Seafile uses the GPT-4o-mini model from OpenAI, so you only need to provide your OpenAI API key. If you want to use another LLM (including a self-deployed LLM service), you also need to specify the following in .env:
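A hedged sketch of such a configuration, using the LLM-related variables defined in the table later on this page (the endpoint URL here is a hypothetical OpenAI-compatible example, not a real service):

```ini
SEAFILE_AI_LLM_TYPE=openai                      # API type, e.g. an OpenAI-compatible API
SEAFILE_AI_LLM_URL=https://llm.example.com/v1   # hypothetical endpoint; leave blank for official OpenAI
SEAFILE_AI_LLM_KEY=<your LLM access key>
```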

      docker compose up

    Deploy Seafile AI on a different host from Seafile

    1. Download seafile-ai.yml and .env:

      wget https://manual.seafile.com/13.0/repo/docker/seafile-ai/seafile-ai.yml
      wget -O .env https://manual.seafile.com/13.0/repo/docker/seafile-ai/env
    2. Modify .env on the host that will deploy Seafile AI according to the following table:

      | variable | description |
      | --- | --- |
      | SEAFILE_VOLUME | The volume directory of Seafile AI data |
      | JWT_PRIVATE_KEY | JWT key, the same as the config in the Seafile .env file |
      | INNER_SEAHUB_SERVICE_URL | Intranet URL for accessing the Seahub component, like http://<your Seafile server intranet IP> |
      | REDIS_HOST | Redis server host |
      | REDIS_PORT | Redis server port |
      | REDIS_PASSWORD | Redis server password |
      | SEAFILE_AI_LLM_TYPE | Large Language Model (LLM) API type (e.g., openai) |
      | SEAFILE_AI_LLM_URL | LLM API URL (leave blank to use the official OpenAI API endpoint) |
      | SEAFILE_AI_LLM_KEY | LLM API key |
      | FACE_EMBEDDING_SERVICE_URL | Face embedding service URL |
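For instance, a filled-in .env on the Seafile AI host might look like the following sketch; every value is a placeholder to replace with your own, and the JWT key must match the JWT_PRIVATE_KEY in the Seafile server's .env:

```ini
SEAFILE_VOLUME=/opt/seafile-data                 # placeholder path
JWT_PRIVATE_KEY=<same value as in Seafile's .env>
INNER_SEAHUB_SERVICE_URL=http://192.0.2.10       # placeholder intranet IP
REDIS_HOST=192.0.2.11                            # placeholder
REDIS_PORT=6379
REDIS_PASSWORD=<your redis password>
SEAFILE_AI_LLM_TYPE=openai
SEAFILE_AI_LLM_URL=                              # blank = official OpenAI endpoint
SEAFILE_AI_LLM_KEY=<your LLM access key>
FACE_EMBEDDING_SERVICE_URL=                      # optional; set if you deploy face embedding
```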

      then start your Seafile AI server:

      docker compose up -d
    3. Modify .env on the host where Seafile is deployed:

      SEAFILE_AI_SERVER_URL=http://<your seafile ai host>:8888

      then restart your Seafile server:

      docker compose down && docker compose up -d

    Deploy face embedding service (Optional)

    The face embedding service detects and encodes faces in images and is an extension component of Seafile AI. We generally recommend deploying it on a machine with a GPU and a graphics driver supported by ONNX Runtime (so it can also be deployed on a different machine from the Seafile AI basic service). Currently, the Seafile AI face embedding service only supports the following modes: