update 12.0 cluster by binary

This commit is contained in:
Junxiang Huang 2024-12-25 15:03:49 +08:00
parent 88869ebb20
commit ab6a6babd1
16 changed files with 826 additions and 700 deletions

View File

@ -9,7 +9,7 @@ Since version 7.0.8 pro, Seafile provides commands to export reports via command
```
cd seafile-server-latest
./seahub.sh python-env python seahub/manage.py export_user_traffic_report --date 201906
./seahub.sh python-env python3 seahub/manage.py export_user_traffic_report --date 201906
```
@ -17,7 +17,7 @@ cd seafile-server-latest
```
cd seafile-server-latest
./seahub.sh python-env python seahub/manage.py export_user_storage_report
./seahub.sh python-env python3 seahub/manage.py export_user_storage_report
```
@ -25,7 +25,7 @@ cd seafile-server-latest
```
cd seafile-server-latest
./seahub.sh python-env python seahub/manage.py export_file_access_log --start-date 2019-06-01 --end-date 2019-07-01
./seahub.sh python-env python3 seahub/manage.py export_file_access_log --start-date 2019-06-01 --end-date 2019-07-01
```

View File

@ -436,7 +436,7 @@ SSO_LDAP_USE_SAME_UID = True
!!! note "Here the UID means the unique user ID, in LDAP it is the attribute you use for `LDAP_LOGIN_ATTR` (not `LDAP_UID_ATTR`), in ADFS it is `uid` attribute. You need make sure you use the same attribute for the two settings"
On this basis, if you only want users to login using OSS and not through LDAP, you can set
On this basis, if you only want users to login using SSO and not through LDAP, you can set
```python
USE_LDAP_SYNC_ONLY = True

View File

@ -59,29 +59,30 @@ Seafile uses a system trash, where deleted libraries will be moved to. In this w
Seafile Pro Edition uses memory caches in various cases to improve performance. Some session information is also saved into the memory cache to be shared among the cluster nodes. Memcached or Redis can be used for the memory cache.
If you use memcached:
!!! tip
Redis support is added in version 11.0. Currently only single-node Redis is supported. Redis Sentinel or Cluster is not supported yet.
```
[memcached]
# Replace `localhost` with the memcached address:port if you're using remote memcached
# POOL-MIN and POOL-MAX is used to control connection pool size. Usually the default is good enough.
memcached_options = --SERVER=localhost --POOL-MIN=10 --POOL-MAX=100
=== "memcached"
```
```
[memcached]
# Replace `localhost` with the memcached address:port if you're using remote memcached
# POOL-MIN and POOL-MAX is used to control connection pool size. Usually the default is good enough.
memcached_options = --SERVER=localhost --POOL-MIN=10 --POOL-MAX=100
If you use redis:
```
```
[redis]
# your redis server address
redis_host = 127.0.0.1
# your redis server port
redis_port = 6379
# size of connection pool to redis, default is 100
max_connections = 100
```
=== "Redis"
Redis support is added in version 11.0. Currently only single-node Redis is supported. Redis Sentinel or Cluster is not supported yet.
```
[redis]
# your redis server address
redis_host = 127.0.0.1
# your redis server port
redis_port = 6379
# size of connection pool to redis, default is 100
max_connections = 100
```
## Seafile fileserver configuration

View File

@ -63,7 +63,7 @@ Notification server is enabled on the remote server xxxx
There are no additional features for the notification server in the Pro Edition. It works the same as in the Community Edition.
If you enable [clustering](../setup_binary/deploy_in_a_cluster.md), You need to deploy notification server on one of the servers, or a separate server. The load balancer should forward websockets requests to this node.
If you enable [clustering](../setup_binary/cluster_deployment.md), you need to deploy the notification server on one of the servers, or on a separate server. The load balancer should forward websocket requests to this node.
Download `.env` and `notification-server.yml` to notification server directory:

View File

@ -1,18 +1,50 @@
# Seafile Docker Cluster Deployment
Seafile Docker cluster deployment requires "sticky session" settings in the load balancer. Otherwise, folder downloads from the web UI sometimes do not work properly. Read the [Load Balancer Setting](../setup_binary/cluster_deployment.md#load-balancer-setting) section for details.
## Architecture
The Seafile cluster solution employs a 3-tier architecture:
* Load balancer tier: Distribute incoming traffic to Seafile servers. HA can be achieved by deploying multiple load balancer instances.
* Seafile server cluster: a cluster of Seafile server instances. If one instance fails, the load balancer stops directing traffic to it, so HA is achieved.
* Backend storage: Distributed storage cluster, e.g. S3, Openstack Swift or Ceph.
This architecture scales horizontally. That means, you can handle more traffic by adding more machines. The architecture is visualized in the following picture.
![seafile-cluster](../images/seafile-cluster-arch.png)
There are two main components on the Seafile server node: web server (Nginx/Apache) and Seafile app server. The web server passes requests from the clients to Seafile app server. The Seafile app servers work independently. They don't know about each other's state. That means each app server can fail independently without affecting other app server instances. The load balancer is responsible for detecting failure and re-routing requests.
Even though Seafile app servers work independently, they still have to share some session information. All shared session information is stored in memory cache. Thus, all Seafile app servers have to connect to the same memory cache server (cluster). Since Pro Edition 11.0, both memcached and Redis can be used as memory cache. Before 11.0, only memcached is supported. More details about memory cache configuration are available later.
The background server is the workhorse for various background tasks, including full-text indexing, office file preview, virus scanning, and LDAP syncing. It should usually be run on a dedicated server for better performance. Currently only one background task server can be running in the entire cluster. If more than one background server is running, they may conflict with each other when doing some tasks. If you need HA for the background task server, you can consider using [Keepalived](http://www.keepalived.org/) to build a hot backup for it.
In the seafile cluster, **only one server** should run the background tasks, including:
* indexing files for search
* email notification
* office document conversion service (starting from version 9.0, the office conversion service has been moved to a separate Docker component)
* LDAP sync
* virus scan
Let's assume you have three nodes in your cluster: A, B, and C.
* Node A is the backend node that runs background tasks.
* Nodes B and C are frontend nodes that serve requests from clients.
![cluster-nodes](../images/cluster-nodes.png)
## Environment
!!! success
Since version 11.0, Redis can also be used as memory cache server. But currently only single-node Redis is supported.
!!! note "Prerequisites"
- We assume you have already deployed ***memcache***, ***MariaDB***, ***ElasticSearch*** in separate machines and use ***S3*** like object storage.
- We assume you have already deployed a memory cache server (e.g., ***Memcached***), ***MariaDB*** and ***ElasticSearch*** on separate machines, and that you use ***S3***-like object storage.
- Usually, each node of the Seafile cluster should have at least **2 cores** and **2G memory**. If the above services are deployed together with a node of the Seafile cluster, we recommend that you prepare **4 cores** and **4G memory** for that node (especially if ElasticSearch is also deployed on it).
System: Ubuntu 24.04
Seafile Server: 2 frontend nodes, 1 backend node
Seafile Server: 2 frontend nodes, 1 backend node (Virtual machines are sufficient for most cases)
## Deploy Seafile service
@ -230,13 +262,29 @@ Seafile Server: 2 frontend nodes, 1 backend node
## Deploy load balance (Optional)
!!! note
Since Seafile Pro server 6.0.0, cluster deployment requires "sticky session" settings in the load balancer. Otherwise, folder downloads from the web UI sometimes do not work properly. Read the "Load Balancer Setting" section below for details.
Generally speaking, we recommend that you use a load balancing service to access the Seafile cluster and bind your domain name (such as `seafile.cluster.com`) to it. Usually, you can use:
- Cloud service provider's load balancing service
- Cloud service provider's load balancing service (e.g., ***AWS Elastic Load Balancer***)
- Deploy your own load balancing service; this document covers two common load balancers:
- Nginx
- HAproxy
- ***Nginx***
- ***HAproxy***
### AWS Elastic Load Balancer (ELB)
In the AWS ELB management console, after you've added the Seafile server instances to the instance list, you should make two more configurations.
First, set up HTTP(S) listeners: ports 443 and 80 of the ELB should be forwarded to port 80 or 443 of the Seafile servers.
Then set up the health check:
![elb-health-check](../images/elb-health-check.png)
Refer to [AWS documentation](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-sticky-sessions.html) about how to setup sticky sessions.
### Nginx

View File

@ -0,0 +1,698 @@
# Cluster Deployment
!!! tip "Since version 8.0, the recommend way to install Seafile clsuter is using [*Docker*](../setup/cluster_deploy_with_docker.md)"
## Environment
!!! success "About Redis"
Since version 11.0, Redis can also be used as memory cache server. But currently only single-node Redis is supported.
!!! note "Prerequisites"
- We assume you have already deployed a memory cache server (e.g., ***Memcached***), ***MariaDB*** and ***ElasticSearch*** on separate machines, and that you use ***S3***-like object storage.
- Usually, each node of the Seafile cluster should have at least **2 cores** and **2G memory**. If the above services are deployed together with a node of the Seafile cluster, we recommend that you prepare **4 cores** and **4G memory** for that node (especially if ElasticSearch is also deployed on it).
System: Ubuntu 24.04/22.04, Debian 12/11
Seafile Server: 2 frontend nodes, 1 backend node (Virtual machines are sufficient for most cases)
## Preparation
### Install prerequisites for all nodes
!!! tip
The standard directory `/opt/seafile` is assumed for the rest of this manual. If you decide to put Seafile in another directory, some commands need to be modified accordingly
=== "Ubuntu 24.04"
!!! note
Debian 12 and Ubuntu 24.04 now discourage system-wide installation of python modules with pip. It is preferred to install modules into a virtual environment, which keeps them separate from the files installed by the system package manager and enables different versions to be installed for different applications. For these python virtual environments (venv for short) to work, you have to activate the venv to make the packages installed in it available to the programs you run. That is done here with `source python-venv/bin/activate`.
```sh
sudo apt-get update
sudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmysqlclient-dev ldap-utils libldap2-dev python3.12-venv
sudo apt-get install -y memcached libmemcached-dev
# create the data directory
mkdir /opt/seafile
cd /opt/seafile
# create the virtual environment in the python-venv directory
python3 -m venv python-venv
# activate the venv
source python-venv/bin/activate
# Notice that this will usually change your prompt so you know the venv is active
# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).
pip3 install --timeout=3600 django==4.2.* future==1.0.* mysqlclient==2.2.* \
pymysql pillow==10.4.* pylibmc captcha==0.6.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.* \
psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.9.* pysaml2==7.3.* pycryptodome==3.20.* cffi==1.17.0 lxml python-ldap==3.4.* gevent==24.2.*
```
=== "Debian 12"
!!! note
Debian 12 and Ubuntu 24.04 now discourage system-wide installation of python modules with pip. It is preferred to install modules into a virtual environment, which keeps them separate from the files installed by the system package manager and enables different versions to be installed for different applications. For these python virtual environments (venv for short) to work, you have to activate the venv to make the packages installed in it available to the programs you run. That is done here with `source python-venv/bin/activate`.
```sh
sudo apt-get update
sudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmariadb-dev-compat ldap-utils libldap2-dev libsasl2-dev python3.11-venv
sudo apt-get install -y memcached libmemcached-dev
# create the data directory
mkdir /opt/seafile
cd /opt/seafile
# create the virtual environment in the python-venv directory
python3 -m venv python-venv
# activate the venv
source python-venv/bin/activate
# Notice that this will usually change your prompt so you know the venv is active
# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).
pip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* pymysql pillow==10.0.* pylibmc captcha==0.4 markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 psd-tools django-pylibmc django_simple_captcha==0.5.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml python-ldap==3.4.3
```
=== "Ubuntu 22.04/Debian 11"
```sh
# on Ubuntu 22.04 (on Debian 11, it is almost the same)
apt-get update
apt-get install -y python3 python3-dev python3-setuptools python3-pip python3-ldap libmysqlclient-dev ldap-utils libldap2-dev dnsutils
apt-get install -y memcached libmemcached-dev
apt-get install -y poppler-utils
# create the data directory
mkdir /opt/seafile
cd /opt/seafile
sudo pip3 install --timeout=3600 django==4.2.* future==1.0.* mysqlclient==2.2.* \
pymysql pillow==10.4.* pylibmc captcha==0.6.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.* \
psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.95.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 python-ldap==3.4.3 lxml gevent==24.2.*
```
### Create user `seafile` for all nodes
Create a new user and follow the instructions on the screen:
```
adduser seafile
```
Change ownership of the created directory to the new user:
```
chown -R seafile: /opt/seafile
```
All the following steps are done as user seafile.
Change to user seafile:
```
su seafile
```
### Placing the Seafile PE license in `/opt/seafile` in all nodes
Save the license file in Seafile's program directory `/opt/seafile`. Make sure that the name is `seafile-license.txt`.
!!! danger "If the license file has a different name or cannot be read, Seafile server will not start"
## Setup the first frontend Node
### Downloading the install package
The install packages for Seafile PE are available for download in the [Seafile Customer Center](https://customer.seafile.com). To access the Customer Center, a user account is necessary. The registration is free.
Beginning with Seafile PE 7.0.17, the Seafile Customer Center provides two install packages for every version (using Seafile PE 12.0.6 as an example):
* _seafile-pro-server_12.0.6_x86-64_Ubuntu.tar.gz_, compiled in an Ubuntu environment
* _seafile-pro-server_12.0.6_x86-64_CentOS.tar.gz_, compiled in a CentOS environment
The former is suitable for installation on Ubuntu/Debian servers, the latter for CentOS servers.
Download the install package using wget (replace the x.x.x with the version you wish to download):
```
# Debian/Ubuntu
wget -O 'seafile-pro-server_x.x.x_x86-64_Ubuntu.tar.gz' 'VERSION_SPECIFIC_LINK_FROM_SEAFILE_CUSTOMER_CENTER'
```
We use Seafile version 12.0.6 as an example in the remainder of these instructions.
### Uncompressing the package
The install package is downloaded as a compressed tarball which needs to be uncompressed.
Uncompress the package using tar:
```
# Debian/Ubuntu
tar xf seafile-pro-server_12.0.6_x86-64_Ubuntu.tar.gz
```
Now you have:
```
$ tree -L 2 /opt/seafile
.
├── seafile-license.txt
├── python-venv # you will not see this directory if you use ubuntu 22/debian 11
│   ├── bin
│   ├── include
│   ├── lib
│   ├── lib64 -> lib
│   └── pyvenv.cfg
├── seafile-pro-server-12.0.6
│   ├── check_init_admin.py
│   ├── index_op.py
│   ├── migrate-repo.py
│   ├── migrate-repo.sh
│   ├── migrate.py
│   ├── migrate.sh
│   ├── migrate_ldapusers.py
│   ├── parse_seahub_db.py
│   ├── pro
│   ├── remove-objs.py
│   ├── remove-objs.sh
│   ├── reset-admin.sh
│   ├── run_index_master.sh
│   ├── run_index_worker.sh
│   ├── runtime
│   ├── seaf-backup-cmd.py
│   ├── seaf-backup-cmd.sh
│   ├── seaf-encrypt.sh
│   ├── seaf-fsck.sh
│   ├── seaf-fuse.sh
│   ├── seaf-gc.sh
│   ├── seaf-import.sh
│   ├── seafile
│   ├── seafile-background-tasks.sh
│   ├── seafile-monitor.sh
│   ├── seafile.sh
│   ├── seahub
│   ├── seahub.sh
│   ├── setup-seafile-mysql.py
│   ├── setup-seafile-mysql.sh
│   ├── setup-seafile.sh
│   ├── sql
│   └── upgrade
└── seafile-pro-server_12.0.6_x86-64_Ubuntu.tar.gz
```
### Setup Seafile databases
The install package comes with a script that sets Seafile up for you. Specifically, the script creates the required directories and extracts all files in the right place. It can also create a MySQL user and the three databases that [Seafile's components](../introduction/components.md) require:
* ccnet server
* seafile server
* seahub
!!! note "While ccnet server was merged into the seafile-server in Seafile 8.0, the corresponding database is still required for the time being"
Run the script as user seafile:
!!! note
For installations using python virtual environment, activate it if it isn't already active
```sh
source python-venv/bin/activate
```
```
cd seafile-pro-server-12.0.6
./setup-seafile-mysql.sh
```
Configure your Seafile Server by specifying the following three parameters:
| Option | Description | Note |
| --------------------- | ---------------------------------------------------- | ------------------------------------------------------------ |
| server name | Name of the Seafile Server | 3-15 characters, only English letters, digits and underscore ('\_') are allowed |
| server's ip or domain | IP address or domain name used by the Seafile Server | Seafile client program will access the server using this address |
| fileserver port | TCP port used by the Seafile fileserver | Default port is 8082; it is recommended to use this port and to change it only if it is used by another service |
In the next step, choose whether to create new databases for Seafile or to use existing databases. The creation of new databases requires the root password for the SQL server.
![grafik](../images/seafile-setup-database.png)
!!! note
If you don't have the root password, you need someone who has the privileges, e.g., the database admin, to create the three databases required by Seafile, as well as a MySQL user who can access the databases. For example, to create three databases `ccnet_db` / `seafile_db` / `seahub_db` for ccnet/seafile/seahub respectively, and a MySQL user "seafile" to access these databases run the following SQL queries:
```
create database `ccnet_db` character set = 'utf8';
create database `seafile_db` character set = 'utf8';
create database `seahub_db` character set = 'utf8';
create user 'seafile'@'localhost' identified by 'seafile';
GRANT ALL PRIVILEGES ON `ccnet_db`.* to `seafile`@localhost;
GRANT ALL PRIVILEGES ON `seafile_db`.* to `seafile`@localhost;
GRANT ALL PRIVILEGES ON `seahub_db`.* to `seafile`@localhost;
```
=== "\[1] Create new ccnet/seafile/seahub databases"
The script creates these databases and a MySQL user that Seafile Server will use to access them. To this effect, you need to answer these questions:
| Question | Description | Note |
| ------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| mysql server host | Host address of the MySQL server | Default is localhost |
| mysql server port | TCP port used by the MySQL server | Default port is 3306; almost every MySQL server uses this port |
| mysql root password | Password of the MySQL root account | The root password is required to create new databases and a MySQL user |
| mysql user for Seafile | MySQL user created by the script, used by Seafile's components to access the databases | Default is seafile; the user is created unless it exists |
| mysql password for Seafile user | Password for the user above, written in Seafile's config files | Percent sign ('%') is not allowed |
| ccnet database name | Name of the database used by ccnet | Default is "ccnet_db", the database is created if it does not exist |
| seafile database name | Name of the database used by Seafile | Default is "seafile_db", the database is created if it does not exist |
| seahub database name | Name of the database used by seahub | Default is "seahub_db", the database is created if it does not exist |
=== "\[2] Use existing ccnet/seafile/seahub databases"
The prompts you need to answer:
| Question | Description | Note |
| ------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------ |
| mysql server host | Host address of the MySQL server | Default is localhost |
| mysql server port | TCP port used by MySQL server | Default port is 3306; almost every MySQL server uses this port |
| mysql user for Seafile | User used by Seafile's components to access the databases | The user must exist |
| mysql password for Seafile user | Password for the user above | |
| ccnet database name | Name of the database used by ccnet, default is "ccnet_db" | The database must exist |
| seafile database name | Name of the database used by Seafile, default is "seafile_db" | The database must exist |
| seahub database name | Name of the database used by Seahub, default is "seahub_db" | The database must exist |
If the setup is successful, you see the following output:
![grafik](../images/seafile-setup-output.png)
The directory layout then looks as follows:
```
/opt/seafile
├── seafile-license.txt
├── ccnet
├── conf
│   ├── gunicorn.conf.py
│   ├── seafdav.conf
│   ├── seafevents.conf
│   ├── seafile.conf
│   └── seahub_settings.py
├── pro-data
├── python-venv # you will not see this directory if you use ubuntu 22/debian 11
│   ├── bin
│   ├── include
│   ├── lib
│   ├── lib64 -> lib
│   └── pyvenv.cfg
├── seafile-data
│   └── library-template
├── seafile-pro-server-12.0.6
│   ├── check_init_admin.py
│   ├── index_op.py
│   ├── migrate-repo.py
│   ├── migrate-repo.sh
│   ├── migrate.py
│   ├── migrate.sh
│   ├── migrate_ldapusers.py
│   ├── parse_seahub_db.py
│   ├── pro
│   ├── remove-objs.py
│   ├── remove-objs.sh
│   ├── reset-admin.sh
│   ├── run_index_master.sh
│   ├── run_index_worker.sh
│   ├── runtime
│   ├── seaf-backup-cmd.py
│   ├── seaf-backup-cmd.sh
│   ├── seaf-encrypt.sh
│   ├── seaf-fsck.sh
│   ├── seaf-fuse.sh
│   ├── seaf-gc.sh
│   ├── seaf-import.sh
│   ├── seafile
│   ├── seafile-background-tasks.sh
│   ├── seafile-monitor.sh
│   ├── seafile.sh
│   ├── seahub
│   ├── seahub.sh
│   ├── setup-seafile-mysql.py
│   ├── setup-seafile-mysql.sh
│   ├── setup-seafile.sh
│   ├── sql
│   └── upgrade
├── seafile-pro-server_12.0.6_x86-64_Ubuntu.tar.gz
├── seafile-server-latest -> seafile-pro-server-12.0.6
└── seahub-data
└── avatars
```
The folder `seafile-server-latest` is a symbolic link to the current Seafile Server folder. When later you upgrade to a new version, the upgrade scripts update this link to point to the latest Seafile Server folder.
### Create and Modify configuration files in `/opt/seafile/conf`
#### .env
!!! tip
`JWT_PRIVATE_KEY`: a random string with a length of no less than 32 characters, which can be generated with:
```sh
pwgen -s 40 1
```
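If `pwgen` is not installed, a suitable key can also be generated with OpenSSL (a sketch; any random string of at least 32 characters works):
```sh
# 36 random bytes, base64-encoded, yield a 48-character key
openssl rand -base64 36
```
Then put the following values into `/opt/seafile/conf/.env`: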
```sh
JWT_PRIVATE_KEY=<Your jwt private key>
SEAFILE_SERVER_PROTOCOL=https
SEAFILE_SERVER_HOSTNAME=seafile.example.com
SEAFILE_MYSQL_DB_HOST=<your database host>
SEAFILE_MYSQL_DB_PORT=3306
SEAFILE_MYSQL_DB_USER=seafile
SEAFILE_MYSQL_DB_PASSWORD=<your MySQL password>
SEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db
SEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db
SEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db
```
#### seafile.conf
1. Add or modify the following configuration to `seafile.conf`:
=== "Memcached"
```
[memcached]
memcached_options = --SERVER=<your memcached ip>[:<your memcached port>] --POOL-MIN=10 --POOL-MAX=100
```
=== "Redis"
```conf
[redis]
redis_host = <your redis ip>
redis_port = <your redis port, default 6379>
max_connections = 100
```
2. Enable cluster mode
```conf
[cluster]
enabled = true
```
!!! tip "More options in `cluster` section"
The Seafile server also opens a port for the load balancers to run health checks. Seafile by default uses port `11001`. You can change this by adding the following config:
```conf
[cluster]
health_check_port = 12345
```
3. Enable backend storage:
- [S3](../setup/setup_with_s3.md)
- [OpenStack Swift](../setup/setup_with_swift.md)
- [Ceph](../setup/setup_with_ceph.md)
#### seahub_settings.py
1. You must set up and use a memory cache when deploying a Seafile cluster. Please add or modify the following configuration in `seahub_settings.py`:
=== "Memcached"
```py
CACHES = {
'default': {
'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',
'LOCATION': '<your Memcached host>:<your Memcached port, default 11211>',
},
}
```
=== "Redis"
Please refer to [Django's documentation about using Redis cache](https://docs.djangoproject.com/en/4.2/topics/cache/#redis) to add Redis configurations to `seahub_settings.py`.
2. Add the following option to `seahub_settings.py`, which tells Seahub to store avatars in the database and cache them in the memory cache, and to store the CSS CACHE in local memory:
```
AVATAR_FILE_STORAGE = 'seahub.base.database_storage.DatabaseStorage'
```
#### seafevents.conf
Modify the `[INDEX FILES]` section to enable full-text search. We take *ElasticSearch* as an example:
```
[INDEX FILES]
enabled = true
interval = 10m
highlight = fvh
index_office_pdf = true
es_host = <your ElasticSearch host>
es_port = <your ElasticSearch port, default 9200>
```
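Before relying on indexing, you may want to check that the node can actually reach ElasticSearch (a sketch; depending on your ElasticSearch setup, TLS and authentication options may be required):
```sh
# should return a JSON document with the cluster name and version
curl -s http://<your ElasticSearch host>:9200
```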
### Update Seahub Database
In a cluster environment, avatars have to be stored in the database instead of on local disk.
```
mysql -h<your MySQL host> -P<your MySQL port> -useafile -p<user seafile's password>
# enter MySQL environment
USE seahub_db;
CREATE TABLE `avatar_uploaded` (`filename` TEXT NOT NULL, `filename_md5` CHAR(32) NOT NULL PRIMARY KEY, `data` MEDIUMTEXT NOT NULL, `size` INTEGER NOT NULL, `mtime` datetime NOT NULL);
```
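You can verify that the table was created with a one-off query (a sketch; connection parameters as above):
```sh
mysql -h<your MySQL host> -P<your MySQL port> -useafile -p<user seafile's password> seahub_db -e "SHOW TABLES LIKE 'avatar_uploaded';"
```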
### Setup Nginx/Apache and HTTP
Nginx/Apache with HTTP needs to be set up on each machine running the Seafile server. This makes sure that only port 80 needs to be exposed to the load balancer. (HTTPS should be set up at the load balancer.)
Please check the following document on how to set up HTTP with [Nginx](./https_with_nginx.md). (HTTPS is not needed.)
### Run and Test the Single Node
Once you have finished configuring this single node, start it to test if it runs properly:
!!! note
For installations using python virtual environment, activate it if it isn't already active
```sh
source python-venv/bin/activate
```
```
cd /opt/seafile/seafile-server-latest
su seafile
./seafile.sh start
./seahub.sh start
```
!!! success
The first time you start Seahub, the script will prompt you to create an admin account for your Seafile server. You can then visit `http://ip-address-of-this-node:80` and log in with the admin account to check whether this node is working properly.
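Once both services are up, a quick sanity check from the node itself can confirm that Seahub and the cluster health check port respond (a sketch; the ports are the defaults used in this guide):
```sh
# Seahub behind the local Nginx (expect an HTTP response, e.g. a redirect to the login page)
curl -I http://127.0.0.1:80
# cluster health check port used by the load balancer (default 11001)
nc -zv 127.0.0.1 11001
```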
## Configure other nodes
If the first node works fine, you can compress the whole directory `/opt/seafile` into a tarball and copy it to all other Seafile server nodes, where you simply uncompress it.
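The copy step might look like this (a sketch; the hostname `node-b` is a placeholder, and it is assumed that the services are stopped while packing and that user `seafile` can write to `/opt` on the target node):
```sh
# on the first frontend node
cd /opt
tar czf seafile-node.tar.gz seafile
scp seafile-node.tar.gz seafile@node-b:/opt/
# on each other frontend node
cd /opt && tar xzf seafile-node.tar.gz
```
Then start the server on each of the other frontend nodes: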
!!! note
For installations using python virtual environment, activate it if it isn't already active
```sh
source python-venv/bin/activate
```
```sh
cd /opt/seafile/seafile-server-latest
su seafile
./seafile.sh start
./seahub.sh start
```
### backend node
On the backend node, you need to execute the following commands to start Seafile server. **CLUSTER_MODE=backend** means this node is the Seafile backend server.
!!! note
For installations using python virtual environment, activate it if it isn't already active
```sh
source python-venv/bin/activate
```
```bash
export CLUSTER_MODE=backend
cd /opt/seafile/seafile-server-latest
su seafile
./seafile.sh start
./seafile-background-tasks.sh start
```
## Start Seafile Service on boot
It is convenient to set up the Seafile service to start on system boot. Follow [this documentation](./start_seafile_at_system_bootup.md) to set it up on **all nodes**.
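If you have created systemd unit files as described in the linked document, enabling them is a single command per node (a sketch; the unit names are assumptions based on that document):
```sh
# on the frontend nodes
sudo systemctl enable seafile.service seahub.service
# on the backend node, also enable the background tasks unit if you created one
sudo systemctl enable seafile-background-tasks.service
```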
## Firewall Settings
There are 2 firewall rule changes for Seafile cluster:
* On each Seafile server machine, you should open the health check port (default 11001).
* On the cache and ElasticSearch servers, for security reasons, only the Seafile servers should be allowed to access the corresponding service ports (as sketched below).
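With `ufw`, for example, these two rules might look like the following (a sketch; all IP addresses and ports are placeholders for your own topology):
```sh
# on each Seafile server: allow the load balancer to reach the health check port
sudo ufw allow from 192.168.1.10 to any port 11001 proto tcp
# on the memcached/Redis host: allow only the Seafile nodes
sudo ufw allow from 192.168.1.165 to any port 11211 proto tcp
# on the ElasticSearch host: allow only the Seafile nodes
sudo ufw allow from 192.168.1.165 to any port 9200 proto tcp
```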
## Load Balancer Setting
!!! note
Since Seafile Pro server 6.0.0, cluster deployment requires "sticky session" settings in the load balancer. Otherwise, folder downloads from the web UI sometimes do not work properly. The subsections below describe how to set this up for common load balancers.
Generally speaking, we recommend that you use a load balancing service to access the Seafile cluster and bind your domain name (such as `seafile.cluster.com`) to it. Usually, you can use:
- Cloud service provider's load balancing service (e.g., ***AWS Elastic Load Balancer***)
- Deploy your own load balancing service; this document covers two common load balancers:
- ***Nginx***
- ***HAproxy***
### AWS Elastic Load Balancer (ELB)
In the AWS ELB management console, after you've added the Seafile server instances to the instance list, you should make two more configurations.
First, set up HTTP(S) listeners: ports 443 and 80 of the ELB should be forwarded to port 80 or 443 of the Seafile servers.
Then set up the health check:
![elb-health-check](../images/elb-health-check.png)
Refer to [AWS documentation](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-sticky-sessions.html) about how to setup sticky sessions.
### Nginx
1. Install Nginx on the host where you would like to deploy the load balancing service:
```sh
sudo apt update
sudo apt install nginx
```
2. Create the configuration file for the Seafile cluster:
```sh
sudo nano /etc/nginx/sites-available/seafile-cluster
```
and add the following content to this file:
```nginx
upstream seafile_cluster {
server <IP: your frontend node 1>:80;
server <IP: your frontend node 2>:80;
...
}
server {
listen 80;
server_name <your domain>;
location / {
proxy_pass http://seafile_cluster;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_next_upstream error timeout http_502 http_503 http_504;
}
}
```
3. Link the configuration file into the `sites-enabled` directory:
```sh
sudo ln -s /etc/nginx/sites-available/seafile-cluster /etc/nginx/sites-enabled/
```
4. Test and reload the configuration:
```sh
sudo nginx -t
sudo nginx -s reload
```
### HAProxy
This is a sample `/etc/haproxy/haproxy.cfg`:
(Assume your health check port is `11001`)
```
global
log 127.0.0.1 local1 notice
maxconn 4096
user haproxy
group haproxy
defaults
log global
mode http
retries 3
maxconn 2000
timeout connect 10000
timeout client 300000
timeout server 36000000
listen seafile 0.0.0.0:80
mode http
option httplog
option dontlognull
option forwardfor
cookie SERVERID insert indirect nocache
server seafileserver01 192.168.1.165:80 check port 11001 cookie seafileserver01
server seafileserver02 192.168.1.200:80 check port 11001 cookie seafileserver02
```
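You can validate the file and apply it like this (a sketch; assumes the distribution's haproxy package managed by systemd):
```sh
# check the configuration for syntax errors, then restart the service
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl restart haproxy
```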
## See how it runs
Now you should be able to test your cluster. Open <https://seafile.example.com> in your browser and enjoy. You can also synchronize files with Seafile clients.
## The final configuration of the front-end nodes
Here is a summary of the configurations on the front-end nodes that are related to the cluster setup (for version 7.1+).
For **seafile.conf**:
```
[cluster]
enabled = true
memcached_options = --SERVER=<IP of memcached node> --POOL-MIN=10 --POOL-MAX=100
```
The `enabled` option will prevent the start of background tasks by `./seafile.sh start` in the front-end node. The tasks should be explicitly started by `./seafile-background-tasks.sh start` at the back-end node.
For **seahub_settings.py**:
```
AVATAR_FILE_STORAGE = 'seahub.base.database_storage.DatabaseStorage'
```
For **seafevents.conf**:
```
[INDEX FILES]
enabled = true
interval = 10m
highlight = fvh # This configuration is for improving searching speed
es_host = <IP of background node>
es_port = 9200
```
The `[INDEX FILES]` section is needed to let the front-end node know the file search feature is enabled.
## HTTPS
You can enable HTTPS in your load balancing service, e.g., by using a certificate manager (such as [Certbot](https://certbot.eff.org)) to acquire a certificate and enable HTTPS for your Seafile cluster. After enabling HTTPS, you have to change the relevant URLs in `seahub_settings.py` and `.env` from the `http://` prefix to `https://`.
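If you run your own Nginx load balancer, for example, a certificate can be obtained with Certbot roughly like this (a sketch; assumes the `python3-certbot-nginx` plugin and the example domain used above):
```sh
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d seafile.cluster.com
```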
## (Optional) Deploy SeaDoc server
You can follow [this document](../extension/setup_seadoc.md) to deploy the SeaDoc server, and then modify `SEADOC_SERVER_URL` in your `.env` file.
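The entry in `.env` would then look roughly like this (a sketch; the exact URL depends on how SeaDoc is exposed behind your load balancer):
```sh
SEADOC_SERVER_URL=https://seafile.example.com/sdoc-server
```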

View File

@ -1,341 +0,0 @@
# Deploy in a cluster
!!! tip
Since Seafile Pro server 6.0.0, cluster deployment requires "sticky session" settings in the load balancer. Otherwise sometimes folder download on the web UI can't work properly. Read the "Load Balancer Setting" section below for details
## Architecture
The Seafile cluster solution employs a 3-tier architecture:
* Load balancer tier: Distribute incoming traffic to Seafile servers. HA can be achieved by deploying multiple load balancer instances.
* Seafile server cluster: a cluster of Seafile server instances. If one instance fails, the load balancer will stop handing traffic to it. So HA is achieved.
* Backend storage: Distributed storage cluster, e.g. S3, Openstack Swift or Ceph.
This architecture scales horizontally. That means, you can handle more traffic by adding more machines. The architecture is visualized in the following picture.
![seafile-cluster](../images/seafile-cluster-arch.png)
There are two main components on the Seafile server node: web server (Nginx/Apache) and Seafile app server. The web server passes requests from the clients to Seafile app server. The Seafile app servers work independently. They don't know about each other's state. That means each app server can fail independently without affecting other app server instances. The load balancer is responsible for detecting failure and re-routing requests.
Even though Seafile app servers work independently, they still have to share some session information. All shared session information is stored in memory cache. Thus, all Seafile app servers have to connect to the same memory cache server (cluster). Since Pro Edition 11.0, both memcached and Redis can be used as memory cache. Before 11.0, only memcached is supported. More details about memory cache configuration is available later.
The background server is the workhorse for various background tasks, including full-text indexing, office file preview, virus scanning, LDAP syncing. It should usually be run on a dedicated server for better performance. Currently only one background task server can be running in the entire cluster. If more than one background servers are running, they may conflict with each others when doing some tasks. If you need HA for background task server, you can consider using [Keepalived](http://www.keepalived.org/) to build a hot backup for it. More details can be found in [background server setup](enable_search_and_background_tasks_in_a_cluster.md).
All Seafile app servers access the same set of user data. The user data has two parts: One in the MySQL database and the other one in the backend storage cluster (S3, Ceph etc.). All app servers serve the data equally to the clients.
All app servers have to connect to the same database or database cluster. We recommend to use MariaDB Galera Cluster if you need a database cluster.
There are a few steps to deploy a Seafile cluster:
1. Prepare hardware, operating systems, memory cache and database
2. Setup a single Seafile server node
3. Copy the deployment to other Seafile nodes
4. Setup Nginx/Apache and firewall rules
5. Setup load balancer
6. [Setup backgroup task node](enable_search_and_background_tasks_in_a_cluster.md)
## Preparation
### Hardware, Database, Memory Cache
At least 3 Linux server with at least 4 cores, 8GB RAM. Two servers work as frontend servers, while one server works as background task server. Virtual machines are sufficient for most cases.
In small cluster, you can re-use the 3 Seafile servers to run memcached cluster and MariaDB cluster. For larger clusters, you can have 3 more dedicated server to run memcached cluster and MariaDB cluster. Because the load on these two clusters are not high, they can share the hardware to save cost. Documentation about how to setup memcached cluster and MariaDB cluster can be found [here](memcached_mariadb_cluster.md).
Since version 11.0, Redis can also be used as memory cache server. But currently only single-node Redis is supported.
### Install Python libraries
On each mode, you need to install some python libraries.
First make sure your have installed Python 2.7, then:
```
sudo easy_install pip
sudo pip install boto
```
If you receive an error stating "Wheel installs require setuptools >= ...", run this between the pip and boto lines above
```
sudo pip install setuptools --no-use-wheel --upgrade
```
## Configure a Single Node
You should make sure the config files on every Seafile server are consistent.
### Get the license
Put the license you get under the top level directory. In our wiki, we use the directory `/data/haiwen/` as the top level directory.
### Download/Uncompress Seafile Professional Server
```
tar xf seafile-pro-server_8.0.0_x86-64.tar.gz
```
Now you have:
```
haiwen
├── seafile-license.txt
└── seafile-pro-server-8.0.0/
```
### Setup Seafile
Please follow [Download and Setup Seafile Professional Server With MySQL](./installation_pro.md) to setup a single Seafile server node.
!!! note "Use the load balancer's address or domain name for the server address. Don't use the local IP address of each Seafile server machine. This assures the user will always access your service via the load balancers"
After the setup process is done, you still have to do a few manual changes to the config files.
#### seafile.conf
If you use a single memcached server, you have to add the following configuration to `seafile.conf`
```
[cluster]
enabled = true
[memcached]
memcached_options = --SERVER=192.168.1.134 --POOL-MIN=10 --POOL-MAX=100
```
If you use memcached cluster, the recommended way to setup memcached clusters can be found [here](memcached_mariadb_cluster.md).
You'll setup two memcached server, in active/standby mode. A floating IP address will be assigned to the current active memcached node. So you have to configure the address in seafile.conf accordingly.
```
[cluster]
enabled = true
[memcached]
memcached_options = --SERVER=<floating IP address> --POOL-MIN=10 --POOL-MAX=100
```
If you are using Redis as cache, add following configurations:
```
[cluster]
enabled = true
[redis]
# your redis server address
redis_server = 127.0.0.1
# your redis server port
redis_port = 6379
# size of connection pool to redis, default is 100
max_connections = 100
```
Currently only single-node Redis is supported. Redis Sentinel or Cluster is not supported yet.
(Optional) The Seafile server also opens a port for the load balancers to run health checks. Seafile by default uses port 11001. You can change this by adding the following config option to `seafile.conf`
```
[cluster]
health_check_port = 12345
```
#### seahub_settings.py
You must setup and use memory cache when deploying Seafile cluster. Refer to ["memory cache"](../config/seahub_settings_py.md#cache) to configure memory cache in Seahub.
Also add following options to seahub_setting.py. These settings tell Seahub to store avatar in database and cache avatar in memcached, and store css CACHE to local memory.
```
AVATAR_FILE_STORAGE = 'seahub.base.database_storage.DatabaseStorage'
```
#### seafevents.conf
Here is an example `[INDEX FILES]` section:
```
[INDEX FILES]
enabled = true
interval = 10m
highlight = fvh # This configuration is only available for Seafile 6.3.0 pro and above.
index_office_pdf = true
es_host = background.seafile.com
es_port = 9200
```
!!! tip
`enable = true` should be left unchanged. It means the file search feature is enabled.
### Update Seahub Database
In cluster environment, we have to store avatars in the database instead of in a local disk.
```
CREATE TABLE `avatar_uploaded` (`filename` TEXT NOT NULL, `filename_md5` CHAR(32) NOT NULL PRIMARY KEY, `data` MEDIUMTEXT NOT NULL, `size` INTEGER NOT NULL, `mtime` datetime NOT NULL);
```
### Backend Storage Settings
You also need to add the settings for backend cloud storage systems to the config files.
* For NFS: [Setup Seafile cluster with NFS](setup_seafile_cluster_with_nfs.md)
* For S3: [Setup With Amazon S3](../setup/setup_with_s3.md)
* For OpenStack Swift: [Setup With OpenStackSwift](../setup/setup_with_swift.md)
### Setup Nginx/Apache and HTTP
Nginx/Apache with HTTP needs to be set up on each machine running the Seafile server. This makes sure that only port 80 needs to be exposed to the load balancer. (HTTPS should be set up at the load balancer.)
Please check the following document on how to set up HTTP with [Nginx](./https_with_nginx.md). (HTTPS is not needed.)
### Run and Test the Single Node
Once you have finished configuring this single node, start it to test if it runs properly:
```
cd /data/haiwen/seafile-server-latest
./seafile.sh start
./seahub.sh start
```
!!! success
The first time you start seahub, the script would prompt you to create an admin account for your Seafile server.
Open your browser, visit `http://ip-address-of-this-node:80` and login with the admin account.
## Configure other nodes
Now you have one node working fine, let's continue to configure more nodes.
### Copy the config to all Seafile servers
Supposed your Seafile installation directory is `/data/haiwen`, compress this whole directory into a tarball and copy the tarball to all other Seafile server machines. You can simply uncompress the tarball and use it.
On each node, run `./seafile.sh` and `./seahub.sh` to start Seafile server.
### backend node
In the backend node, you need to execute the following command to start Seafile server. **CLUSTER_MODE=backend** means this node is seafile backend server.
```bash
export CLUSTER_MODE=backend
./seafile.sh start
./seafile-background-tasks.sh start
```
## Start Seafile Service on boot
It would be convenient to setup Seafile service to start on system boot. Follow [this documentation](./start_seafile_at_system_bootup.md) to set it up on **all nodes**.
## Firewall Settings
<!--Beside [standard ports of a seafile server](../deploy/using_firewall.md), t--> There are 2 firewall rule changes for Seafile cluster:
* On each Seafile server machine, you should open the health check port (default 11001);
* On the memcached server, you should open the port 11211. For security reasons only the Seafile servers should be allowed to access this port.
## Load Balancer Setting
Now that your cluster is already running, fire up the load balancer and welcome your users. Since version 6.0.0, Seafile Pro requires "sticky session" settings in the load balancer. You should refer to the manual of your load balancer for how to set up sticky sessions.
### AWS Elastic Load Balancer (ELB)
In the AWS ELB management console, after you've added the Seafile server instances to the instance list, you should do two more configurations.
First you should setup HTTP(S) listeners. Ports 443 and 80 of ELB should be forwarded to the ports 80 or 443 of the Seafile servers.
Then you setup health check
![elb-health-check](../images/elb-health-check.png)
Refer to [AWS documentation](http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-sticky-sessions.html) about how to setup sticky sessions.
### HAProxy
This is a sample `/etc/haproxy/haproxy.cfg`:
(Assume your health check port is `11001`)
```
global
log 127.0.0.1 local1 notice
maxconn 4096
user haproxy
group haproxy
defaults
log global
mode http
retries 3
maxconn 2000
timeout connect 10000
timeout client 300000
timeout server 36000000
listen seafile 0.0.0.0:80
mode http
option httplog
option dontlognull
option forwardfor
cookie SERVERID insert indirect nocache
server seafileserver01 192.168.1.165:80 check port 11001 cookie seafileserver01
server seafileserver02 192.168.1.200:80 check port 11001 cookie seafileserver02
```
## See how it runs
Now you should be able to test your cluster. Open <https://seafile.example.com> in your browser and enjoy. You can also synchronize files with Seafile clients.
If the above works, the next step would be [Enable search and background tasks in a cluster](enable_search_and_background_tasks_in_a_cluster.md).
## The final configuration of the front-end nodes
Here is the summary of configurations at the front-end node that related to cluster setup. (for version 7.1+)
For **seafile.conf**:
```
[cluster]
enabled = true
memcached_options = --SERVER=<IP of memcached node> --POOL-MIN=10 --POOL-MAX=100
```
The `enabled` option will prevent the start of background tasks by `./seafile.sh start` in the front-end node. The tasks should be explicitly started by `./seafile-background-tasks.sh start` at the back-end node.
For **seahub_settings.py**:
```
AVATAR_FILE_STORAGE = 'seahub.base.database_storage.DatabaseStorage'
```
For **seafevents.conf**:
```
[INDEX FILES]
enabled = true
interval = 10m
highlight = fvh # This configuration is for improving searching speed
es_host = <IP of background node>
es_port = 9200
```
The `[INDEX FILES]` section is needed to let the front-end node know the file search feature is enabled.

View File

@ -1,121 +0,0 @@
_Note:_ Before you try to deploy file search and office document preview, make sure other parts of your Seafile cluster are already working, e.g. uploading/downloading files in a web browser. Make sure the memory cache is configured as described in ["Deploy in a cluster"](deploy_in_a_cluster.md).
# Enable search and background tasks in a cluster
In the seafile cluster, only one server should run the background tasks, including:
* indexing files for search
* email notification
* office documents converts service (Start from 9.0 version, office converts service is moved to a separate docker component)
* LDAP sync
* virus scan
Let's assume you have three nodes in your cluster: A, B, and C.
* Node A is backend node that run background tasks.
* Node B and C are frontend nodes that serving requests from clients.
![cluster-nodes](../images/cluster-nodes.png)
## Configuring Node A (the backend node)
If you followed the steps for setting up a cluster, nodes B and C should already be configured as frontend nodes. You can copy the configuration of node B as a base for node A. Then do the following steps:
Since 9.0, the ElasticSearch program is not part of the Seafile package. You should deploy the ElasticSearch service separately. Then edit `seafevents.conf` and add the following lines:
```
[INDEX FILES]
enabled = true
es_host = <ip of elastic search service>
es_port = 9200
interval = 10m
highlight = fvh # this is for improving the search speed
```
Edit **seafile.conf** to enable virus scan according to [virus scan document](../extension/virus_scan.md)
## Configure Other Nodes
On nodes B and C, you need to:
Edit `seafevents.conf`, add the following lines:
```
[INDEX FILES]
enabled = true
es_host = <ip of elastic search service>
es_port = 9200
```
## Start the background node
Type the following commands to start the background node (Note, one additional command `seafile-background-tasks.sh` is needed)
```shell
export CLUSTER_MODE=backend
./seafile.sh start
./seafile-background-tasks.sh start
```
To stop the background node, type:
```shell
./seafile-background-tasks.sh stop
./seafile.sh stop
```
You should also configure Seafile background tasks to start on system bootup. For systemd based OS, you can add `/etc/systemd/system/seafile-background-tasks.service`:
```
[Unit]
Description=Seafile Background Tasks Server
After=network.target seafile.service
[Service]
Type=forking
ExecStart=/opt/seafile/seafile-server-latest/seafile-background-tasks.sh start
ExecStop=/opt/seafile/seafile-server-latest/seafile-background-tasks.sh stop
User=root
Group=root
[Install]
WantedBy=multi-user.target
```
Then enable this task in systemd:
```
systemctl enable seafile-background-tasks.service
```
## The final configuration of the background node
Here is the summary of configurations at the background node that related to clustering setup.
For **seafile.conf**:
```
[cluster]
enabled = true
[memcached]
memcached_options = --SERVER=<you memcached server host> --POOL-MIN=10 --POOL-MAX=100
```
For **seafevents.conf**:
```
[INDEX FILES]
enabled = true
es_host = <ip of elastic search service>
es_port = 9200
interval = 10m
highlight = fvh # this is for improving the search speed
```

View File

@ -44,9 +44,9 @@ Seafile uses the mysql_native_password plugin for authentication. The versions o
# Notice that this will usually change your prompt so you know the venv is active
# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).
pip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \
pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \
psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.16.0 lxml python-ldap==3.4.3
pip3 install --timeout=3600 django==4.2.* future==1.0.* mysqlclient==2.2.* \
pymysql pillow==10.4.* pylibmc captcha==0.6.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.* \
psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.9.* pysaml2==7.3.* pycryptodome==3.20.* cffi==1.17.0 lxml python-ldap==3.4.* gevent==24.2.*
```
=== "Debian 12"
!!! note
@ -82,9 +82,9 @@ Seafile uses the mysql_native_password plugin for authentication. The versions o
sudo mkdir /opt/seafile
cd /opt/seafile
sudo pip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \
pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \
psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml python-ldap==3.4.3
sudo pip3 install --timeout=3600 django==4.2.* future==1.0.* mysqlclient==2.2.* \
pymysql pillow==10.4.* pylibmc captcha==0.6.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.* \
psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.95.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 python-ldap==3.4.3 lxml gevent==24.2.*
```
@ -328,7 +328,7 @@ Seahub caches items(avatars, profiles, etc) on file system by default(/tmp/seahu
1. Install Redis with package installers in your OS.
2. refer to [Django's documentation about using Redis cache](https://docs.djangoproject.com/en/4.2/topics/cache/#redis) to add Redis configurations to `seahub_settings.py`.
2. Refer to [Django's documentation about using Redis cache](https://docs.djangoproject.com/en/4.2/topics/cache/#redis) to add Redis configurations to `seahub_settings.py`.
### Tweaking conf files
@ -352,7 +352,7 @@ nano /opt/seafile/conf/.env
pwgen -s 40 1
```
```env
```sh
JWT_PRIVATE_KEY=<Your jwt private key>
SEAFILE_SERVER_PROTOCOL=https
SEAFILE_SERVER_HOSTNAME=seafile.example.com

View File

@ -44,9 +44,9 @@ Seafile uses the `mysql_native_password` plugin for authentication. The versions
# Notice that this will usually change your prompt so you know the venv is active
# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).
pip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \
pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \
psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.16.0 lxml python-ldap==3.4.3
pip3 install --timeout=3600 django==4.2.* future==1.0.* mysqlclient==2.2.* \
pymysql pillow==10.4.* pylibmc captcha==0.6.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.* \
psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.9.* pysaml2==7.3.* pycryptodome==3.20.* cffi==1.17.0 lxml python-ldap==3.4.* gevent==24.2.*
```
=== "Debian 12"
!!! note
@ -80,9 +80,9 @@ Seafile uses the `mysql_native_password` plugin for authentication. The versions
# create the data directory
mkdir /opt/seafile
cd /opt/seafile
sudo pip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \
pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \
psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 python-ldap==3.4.3 lxml
sudo pip3 install --timeout=3600 django==4.2.* future==1.0.* mysqlclient==2.2.* \
pymysql pillow==10.4.* pylibmc captcha==0.6.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.* \
psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.95.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 python-ldap==3.4.3 lxml gevent==24.2.*
```
### Creating user seafile
@ -111,7 +111,9 @@ su seafile
### Placing the Seafile PE license
Save the license file in Seafile's programm directory `/opt/seafile`. Make sure that the name is `seafile-license.txt`. (If the file has a different name or cannot be read, Seafile PE will not start.)
Save the license file in Seafile's program directory `/opt/seafile`. Make sure that the name is `seafile-license.txt`.
!!! danger "If the license file has a different name or cannot be read, Seafile server will not start"
### Downloading the install package
@ -200,7 +202,7 @@ $ tree -L 2 /opt/seafile
* Seafile CE: `seafile-server_12.0.6_x86-86.tar.gz`; uncompressing into folder `seafile-server-12.0.6`
* Seafile PE: `seafile-pro-server_12.0.6_x86-86.tar.gz`; uncompressing into folder `seafile-pro-server-12.0.6`
### Setting up Seafile Pro
### Setting up Seafile Pro databases
The install package comes with a script that sets Seafile up for you. Specifically, the script creates the required directories and extracts all files in the right place. It can also create a MySQL user and the three databases that [Seafile's components](../introduction/components.md) require:
@ -366,9 +368,9 @@ Memory cache is mandatory for pro edition. You may use Memcached or Reids as cac
```
Add the following configuration to `seahub_settings.py`.
Add or modify the following configuration to `seahub_settings.py`:
```
```py
CACHES = {
'default': {
'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',
@ -377,13 +379,30 @@ Memory cache is mandatory for pro edition. You may use Memcached or Reids as cac
}
```
Add or modify the following configuration to `seafile.conf`:
```
[memcached]
memcached_options = --SERVER=127.0.0.1 --POOL-MIN=10 --POOL-MAX=100
```
=== "Redis"
!!! success "Redis is supported since version 11.0"
1. Install Redis with package installers in your OS.
2. refer to [Django's documentation about using Redis cache](https://docs.djangoproject.com/en/4.2/topics/cache/#redis) to add Redis configurations to `seahub_settings.py`.
2. Refer to [Django's documentation about using Redis cache](https://docs.djangoproject.com/en/4.2/topics/cache/#redis) to add Redis configurations to `seahub_settings.py`.
3. Add or modify the following configuration to `seafile.conf`:
```
[redis]
redis_host = 127.0.0.1
redis_port = 6379
max_connections = 100
```
### Enabling HTTP/HTTPS (Optional but Recommended)
@ -403,7 +422,7 @@ nano /opt/seafile/conf/.env
pwgen -s 40 1
```
```env
```sh
JWT_PRIVATE_KEY=<Your jwt private key>
SEAFILE_SERVER_PROTOCOL=https
SEAFILE_SERVER_HOSTNAME=seafile.example.com

View File

@ -1,144 +0,0 @@
# Setup Memcached Cluster and MariaDB Galera Cluster
For high availability, it is recommended to set up a memcached cluster and MariaDB Galera cluster for Seafile cluster. This documentation will provide information on how to do this with 3 servers. You can either use 3 dedicated servers or use the 3 Seafile server nodes.
## Setup Memcached Cluster
Seafile servers share session information within memcached. So when you set up a Seafile cluster, there needs to be a memcached server (cluster) running.
The simplest way is to use a single-node memcached server. But when this server fails, some functions in the web UI of Seafile cannot work. So for HA, it's usually desirable to have more than one memcached server.
We recommend setting up two independent memcached servers in active/standby mode. A floating IP address (or virtual IP address, in some contexts) is assigned to the currently active node. When the active node goes down, Keepalived migrates the virtual IP to the standby node. So you actually use a single-node memcached, but rely on Keepalived (or an alternative) to provide high availability.
After installing memcached on each server, you need to make some modifications to the memcached config file.
```
# Under Ubuntu the config file is /etc/memcached.conf

# Start with a cap of 64 megs of memory. It's reasonable, and the daemon default
# Note that the daemon will grow to this size, but does not start out holding this much
# memory
# -m 64
-m 256

# Specify which IP address to listen on. The default is to listen on all IP addresses
# This parameter is one of the only security measures that memcached has, so make sure
# it's listening on a firewalled interface.
-l 0.0.0.0
```

Then restart memcached:

```
service memcached restart
```
!!! tip "Please configure memcached to start on system startup"
Install and configure Keepalived.
```
# For Ubuntu
sudo apt-get install keepalived -y
```
Modify the Keepalived config file `/etc/keepalived/keepalived.conf`.
On the active node:
```
cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node1
vrrp_mcast_group4 224.0.100.19
}
vrrp_script chk_memcached {
script "killall -0 memcached && exit 0 || exit 1"
interval 1
weight -5
}
vrrp_instance VI_1 {
state MASTER
interface ens33
virtual_router_id 51
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass hello123
}
virtual_ipaddress {
192.168.1.113/24 dev ens33
}
track_script {
chk_memcached
}
}
```
On the standby node:
```
cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
notification_email {
root@localhost
}
notification_email_from keepalived@localhost
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id node2
vrrp_mcast_group4 224.0.100.19
}
vrrp_script chk_memcached {
script "killall -0 memcached && exit 0 || exit 1"
interval 1
weight -5
}
vrrp_instance VI_1 {
state BACKUP
interface ens33
virtual_router_id 51
priority 98
advert_int 1
authentication {
auth_type PASS
auth_pass hello123
}
virtual_ipaddress {
192.168.1.113/24 dev ens33
}
track_script {
chk_memcached
}
}
```
!!! tip "Please adjust the network device names accordingly. virtual_ipaddress is the floating IP address in use"
## Setup MariaDB Cluster
A MariaDB cluster helps you remove the single point of failure from the cluster architecture. Every update in the database cluster is synchronously replicated to all instances.
You can choose between two different setups:
- For a small cluster with 3 nodes, you can run the MariaDB cluster directly on the Seafile server nodes. Each Seafile server accesses its local instance of MariaDB.
- For larger clusters, it's preferable to have 3 dedicated MariaDB nodes form a cluster. You have to set up HAProxy in front of the MariaDB cluster. Seafile will access the database via HAProxy.
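For the larger setup, a minimal HAProxy sketch might look like the following (node addresses, port, and check options are assumptions to adapt to your environment):

```
listen mariadb-cluster
    bind 0.0.0.0:3306
    mode tcp
    balance leastconn
    server db1 192.168.1.21:3306 check
    server db2 192.168.1.22:3306 check
    server db3 192.168.1.23:3306 check
```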
We refer to the documentation from MariaDB team:
- [Setting up MariaDB cluster on CentOS 7](https://mariadb.com/resources/blog/setting-mariadb-enterprise-cluster-part-2-how-set-mariadb-cluster)
- [Setting up HAProxy for MariaDB Galera Cluster](https://mariadb.com/resources/blog/setup-mariadb-enterprise-cluster-part-3-setup-ha-proxy-load-balancer-read-and-write-pools).
!!! tip
Seafile doesn't use read/write isolation techniques, so you don't need to set up read and write pools.
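Once the Galera cluster is running, you can verify that all nodes have joined, for example (use your own credentials):

```
mysql -u root -p -e "SHOW STATUS LIKE 'wsrep_cluster_size';"
```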

View File

@ -18,9 +18,7 @@ There are two ways to deploy Seafile Pro Edition. Since version 8.0, the recomme
## Cluster
- [Deploy seafile servers in a cluster](./deploy_in_a_cluster.md)
- [Enable search and background tasks in a cluster](./enable_search_and_background_tasks_in_a_cluster.md)
- [Setup Seafile cluster with NFS](./setup_seafile_cluster_with_nfs.md)
- [Deploy seafile servers in a cluster](./cluster_deployment.md)

View File

@ -3,7 +3,12 @@
Seafile Professional Edition
SOFTWARE LICENSE AGREEMENT
NOTICE: READ THE FOLLOWING TERMS AND CONDITIONS CAREFULLY BEFORE YOU DOWNLOAD, INSTALL OR USE Seafile Ltd.'S PROPRIETARY SOFTWARE. BY INSTALLING OR USING THE SOFTWARE, YOU AGREE TO BE BOUND BY THE FOLLOWING TERMS AND CONDITIONS. IF YOU DO NOT AGREE TO THE FOLLOWING TERMS AND CONDITIONS, DO NOT INSTALL OR USE THE SOFTWARE.
!!! danger "Important"
- READ THE FOLLOWING TERMS AND CONDITIONS CAREFULLY **BEFORE** YOU DOWNLOAD, INSTALL OR USE Seafile Ltd.'S PROPRIETARY SOFTWARE.
- BY INSTALLING OR USING THE SOFTWARE, YOU **AGREE** TO BE BOUND BY THE FOLLOWING TERMS AND CONDITIONS.
- IF YOU **DO NOT** AGREE TO THE FOLLOWING TERMS AND CONDITIONS, **DO NOT** INSTALL OR USE THE SOFTWARE.
## 1. DEFINITIONS

View File

@ -1,29 +0,0 @@
# Setup Seafile cluster with NFS
In a Seafile cluster, one common way to share data among the Seafile server instances is to use NFS. You should only share the file objects (located in the `seafile-data` folder) and the user avatars and thumbnails (located in the `seahub-data` folder) on NFS. Here we'll explain how and what to share.
How to set up an NFS server and client is beyond the scope of this wiki. Here are a few references:
* Ubuntu: https://help.ubuntu.com/community/SettingUpNFSHowTo
* CentOS: http://www.centos.org/docs/5/html/Deployment_Guide-en-US/ch-nfs.html
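Assuming the NFS export is already configured, mounting it on each Seafile node might look like this (server name and export path are placeholders):

```
sudo mkdir -p /seafile-nfs
sudo mount -t nfs nfs-server:/export/seafile /seafile-nfs
```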
Suppose your Seafile server installation directory is `/data/haiwen`; after you run the setup script, there should be a `seafile-data` and a `seahub-data` directory in it. Assuming you mount the NFS drive at `/seafile-nfs`, follow these steps:
* Move the `seafile-data` and `seahub-data` folder to `/seafile-nfs`:
```
mv /data/haiwen/seafile-data /seafile-nfs/
mv /data/haiwen/seahub-data /seafile-nfs/
```
* On every node in the cluster, make a symbolic link to the shared `seafile-data` and `seahub-data` folder
```
cd /data/haiwen
ln -s /seafile-nfs/seafile-data /data/haiwen/seafile-data
ln -s /seafile-nfs/seahub-data /data/haiwen/seahub-data
```
This way the instances will share the same `seafile-data` and `seahub-data` folders. All other config files and log files remain independent.
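To make the mount persistent across reboots, you would typically also add an `/etc/fstab` entry on each node, something like (placeholders again):

```
nfs-server:/export/seafile  /seafile-nfs  nfs  defaults,_netdev  0  0
```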

View File

@ -1,7 +1,3 @@
---
status: new
---
# Upgrade a Seafile cluster (Docker)
## Major and minor version upgrade

View File

@ -111,13 +111,9 @@ nav:
- HTTPS with Nginx: setup_binary/https_with_nginx.md
- Seafile Professional Setup:
- Outline: setup_binary/outline_pro.md
- Installation: setup_binary/installation_pro.md
- Cluster deployment:
- Deploy in a cluster: setup_binary/deploy_in_a_cluster.md
- Search and background tasks in a cluster: setup_binary/enable_search_and_background_tasks_in_a_cluster.md
- Memcache and MariaDB Cluster: setup_binary/memcached_mariadb_cluster.md
- Setup Seafile cluster with NFS: setup_binary/setup_seafile_cluster_with_nfs.md
- License: setup_binary/seafile_professional_sdition_software_license_agreement.md
- Installation: setup_binary/installation_pro.md
- Cluster Deployment: setup_binary/cluster_deployment.md
- Other deployment notes:
- Start Seafile at System Bootup: setup_binary/start_seafile_at_system_bootup.md
- Logrotate: setup_binary/using_logrotate.md