Since Seafile 9.0.7, you can enable the profiling function of the go fileserver by adding the following configuration options:
# profile_password is required, change it for your need
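A minimal sketch of the relevant seafile.conf section is below. The option names (enable_profiling, profile_password) are taken from the Seafile manual as best recalled; treat them as assumptions, verify against your version, and replace the placeholder password:

```ini
[fileserver]
# Assumed option names -- verify against your Seafile version's manual.
enable_profiling = true
# profile_password is required, change it for your need
profile_password = change-this-strong-password
```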
diff --git a/13.0/repo/docker/seadoc/0.8/docker-compose.yml b/13.0/repo/docker/seadoc/0.8/docker-compose.yml
new file mode 100644
index 00000000..726e086f
--- /dev/null
+++ b/13.0/repo/docker/seadoc/0.8/docker-compose.yml
@@ -0,0 +1,22 @@
+services:
+
+ sdoc-server:
+ image: seafileltd/sdoc-server:0.8.0
+ container_name: sdoc-server
+ ports:
+ - 80:80
+ # - 443:443
+ # - 7070:7070
+ # - 8888:8888
+ volumes:
+ - /opt/seadoc-data/:/shared
+ environment:
+ - DB_HOST=192.168.0.2
+ - DB_PORT=3306
+ - DB_USER=user
+ - DB_PASSWD=password # Required, password of MySQL service.
+ - DB_NAME=sdoc_db
+ - TIME_ZONE=Etc/UTC # Optional, default is UTC. Uncomment and set to your local time zone.
+ - SDOC_SERVER_LETSENCRYPT=false # Whether to use https or not.
+ - SDOC_SERVER_HOSTNAME=sdoc-server.example.com # Specifies your host name if https is enabled.
+ - SEAHUB_SERVICE_URL=http://seafile.example.com
diff --git a/13.0/search/search_index.json b/13.0/search/search_index.json
index eeb0f1f1..e2719c7c 100644
--- a/13.0/search/search_index.json
+++ b/13.0/search/search_index.json
@@ -1 +1 @@
-{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":"
Seafile 13.0
Our documentation for Seafile 13.0 is still in progress; updates to some key components have not been completed yet. Please refer to the Seafile 12.0 documentation for more stable support.
Seafile is an open source cloud storage system for file syncing, sharing and document collaboration. SeaDoc is an extension of Seafile that provides a lightweight online collaborative document feature.
"},{"location":"administration/","title":"Administration","text":""},{"location":"administration/#enter-the-admin-panel","title":"Enter the admin panel","text":"
As the system admin, you can enter the admin panel by clicking System Admin in the avatar popup menu.
When you set up the Seahub website, you should have created an admin account. After you log in as an admin, you may add/delete users and file libraries.
"},{"location":"administration/account/#how-to-change-a-users-id","title":"How to change a user's ID","text":"
Since version 11.0, if you need to change a user's external ID, you can manually modify the database table social_auth_usersocialauth to map the new external ID to the internal ID.
"},{"location":"administration/account/#resetting-user-password","title":"Resetting User Password","text":"
Administrators can reset the password for a user on the \"System Admin\" page.
In a private server, the default settings don't allow users to reset their password by email. If you want to enable this, you first have to set up notification email.
"},{"location":"administration/account/#forgot-admin-account-or-password","title":"Forgot Admin Account or Password?","text":"
You may run the reset-admin.sh script under the seafile-server-latest directory. This script helps you reset the admin account and password. No data will be deleted from the admin account; this only unlocks the account and changes its password.
Tip
Enter the docker container, then go to /opt/seafile/seafile-server-latest
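Putting the tip together with the script above, the reset flow sketched as shell commands (container name and paths as used elsewhere in this manual):

```shell
# For Docker deployments, enter the container first:
docker exec -it seafile bash
# Then run the reset script from the latest server directory:
cd /opt/seafile/seafile-server-latest
./reset-admin.sh
```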
Under the seafile-server-latest directory, run ./seahub.sh python-env python seahub/manage.py check_user_quota; an email will be sent when a user's quota exceeds 90%. If you want to enable this, you first have to set up notification email.
"},{"location":"administration/auditing/","title":"Access log and auditing (Pro)","text":"
In the Pro Edition, Seafile offers four audit logs in system admin panel:
Login log
File access log (including access to shared files)
File update log
Permission change log
The audit log data is saved in seahub_db.
"},{"location":"administration/backup_recovery/","title":"Backup and Recovery","text":""},{"location":"administration/backup_recovery/#overview","title":"Overview","text":"
There are generally two parts of data to back up:
Seafile library data
Databases
There are 3 databases:
ccnet_db: contains user and group information
seafile_db: contains library metadata
seahub_db: contains tables used by the web front end (seahub)
"},{"location":"administration/backup_recovery/#backup-order-database-first-or-data-directory-first","title":"Backup Order: Database First or Data Directory First","text":"
backup data directory first, SQL later: while you're backing up the data directory, some new objects are written that don't make it into the backup. Those new objects may be referenced in the SQL database, so when you restore, some records in the database cannot find their objects and the library is corrupted.
backup SQL first, data directory later: since you back up the database first, all records in the database have valid objects to reference, so the libraries won't be corrupted. But new objects written to storage while you're backing up are not referenced by database records, so some libraries are out of date and some new data is lost when you restore.
The second sequence is better in the sense that it avoids library corruption. Like other backup solutions, some new data can be lost in recovery. There is always a backup window. However, if your storage backup mechanism can finish quickly enough, using the first sequence can retain more data.
We assume your seafile data directory is in /opt/seafile for binary package based deployment (or /opt/seafile-data for docker based deployment). And you want to backup to /backup directory. The /backup can be an NFS or Windows share mount exported by another machine, or just an external disk. You can create a layout similar to the following in /backup directory:
/backup\n---- databases/ contains database backup files\n---- data/ contains backups of the data directory\n
"},{"location":"administration/backup_recovery/#backup-and-restore-for-binary-package-based-deployment","title":"Backup and restore for binary package based deployment","text":""},{"location":"administration/backup_recovery/#backing-up-databases","title":"Backing up Databases","text":"
It's recommended to backup the database to a separate file each time. Don't overwrite older database backups for at least a week.
Assume your database names are ccnet_db, seafile_db and seahub_db. mysqldump automatically locks the tables so you don't need to stop Seafile server when backing up MySQL databases. Since the database tables are usually very small, it won't take long to dump.
On some machines with a minimal (from 10.5) or a newer (from 11.0) MariaDB server installed, the mysql* series of commands has been gradually deprecated and mysqldump may be missing. If you encounter such an error, use the mariadb-dump command, such as:
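For example (a sketch assuming a local MariaDB reachable as root; the timestamped filenames keep older backups around, per the recommendation above):

```shell
cd /backup/databases
ts=$(date +"%Y-%m-%d-%H-%M-%S")
mariadb-dump -h127.0.0.1 -uroot -p --opt ccnet_db   > "ccnet_db.$ts.sql"
mariadb-dump -h127.0.0.1 -uroot -p --opt seafile_db > "seafile_db.$ts.sql"
mariadb-dump -h127.0.0.1 -uroot -p --opt seahub_db  > "seahub_db.$ts.sql"
```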
"},{"location":"administration/backup_recovery/#backing-up-seafile-library-data","title":"Backing up Seafile library data","text":"
The data files are all stored in the /opt/seafile directory, so just back up the whole directory. You can directly copy the whole directory to the backup destination, or you can use rsync to do incremental backup.
This produces a separate copy of the data directory each time. You can delete older backup copies after a new one is completed.
If you have a lot of data, copying the whole data directory would take long. You can use rsync to do incremental backup.
rsync -az /opt/seafile /backup/data\n
This command backs up the data directory to /backup/data/seafile.
"},{"location":"administration/backup_recovery/#restore-from-backup","title":"Restore from backup","text":"
Now suppose your primary Seafile server is broken and you're switching to a new machine. Use the backup data to restore your Seafile instance:
Copy /backup/data/seafile to the new machine. Let's assume the Seafile deployment location on the new machine is also /opt/seafile.
Restore the database.
Since database and data are backed up separately, they may become a little inconsistent with each other. To correct the potential inconsistency, run seaf-fsck tool to check data integrity on the new machine. See seaf-fsck documentation.
"},{"location":"administration/backup_recovery/#restore-the-databases","title":"Restore the databases","text":"
Now with the latest valid database backup files at hand, you can restore them.
On some machines with a minimal (from 10.5) or a newer (from 11.0) MariaDB server installed, the mysql* series of commands has been gradually deprecated and mysql may be missing. If you encounter such an error, use the mariadb command, such as:
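For example (mariadb is the drop-in replacement for the mysql client; [username] and [password] are placeholders, and the .sql files are the backups made earlier):

```shell
mariadb -u[username] -p[password] ccnet_db   < ccnet_db.sql
mariadb -u[username] -p[password] seafile_db < seafile_db.sql
mariadb -u[username] -p[password] seahub_db  < seahub_db.sql
```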
"},{"location":"administration/backup_recovery/#backup-and-restore-for-docker-based-deployment","title":"Backup and restore for Docker based deployment","text":""},{"location":"administration/backup_recovery/#structure","title":"Structure","text":"
We assume your Seafile volumes path is /opt/seafile-data, and you want to back up to the /backup directory.
The data files to be backed up:
/opt/seafile-data/seafile/conf # configuration files\n/opt/seafile-data/seafile/seafile-data # data of seafile\n/opt/seafile-data/seafile/seahub-data # data of seahub\n
"},{"location":"administration/backup_recovery/#backing-up-database","title":"Backing up Database","text":"
# It's recommended to backup the database to a separate file each time. Don't overwrite older database backups for at least a week.\ncd /backup/databases\ndocker exec -it seafile-mysql mariadb-dump -u[username] -p[password] --opt ccnet_db > ccnet_db.sql\ndocker exec -it seafile-mysql mariadb-dump -u[username] -p[password] --opt seafile_db > seafile_db.sql\ndocker exec -it seafile-mysql mariadb-dump -u[username] -p[password] --opt seahub_db > seahub_db.sql\n
Tip
Since Seafile 12, the default database image is MariaDB 10.11, and you may not find the mysql* commands in the container (e.g. mysqldump: command not found), since the mysql* series of commands has been gradually deprecated. We therefore recommend the mariadb* series of commands.
However, if you still use the MySQL docker image, you should continue to use mysqldump here:
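That is, the same commands as above with mysqldump substituted for mariadb-dump:

```shell
cd /backup/databases
docker exec -it seafile-mysql mysqldump -u[username] -p[password] --opt ccnet_db > ccnet_db.sql
docker exec -it seafile-mysql mysqldump -u[username] -p[password] --opt seafile_db > seafile_db.sql
docker exec -it seafile-mysql mysqldump -u[username] -p[password] --opt seahub_db > seahub_db.sql
```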
"},{"location":"administration/backup_recovery/#backing-up-seafile-library-data_1","title":"Backing up Seafile library data","text":""},{"location":"administration/backup_recovery/#to-directly-copy-the-whole-data-directory","title":"To directly copy the whole data directory","text":"
cp -R /opt/seafile-data/seafile /backup/data/\n
"},{"location":"administration/backup_recovery/#use-rsync-to-do-incremental-backup","title":"Use rsync to do incremental backup","text":"
"},{"location":"administration/backup_recovery/#recovery","title":"Recovery","text":""},{"location":"administration/backup_recovery/#restore-the-databases_1","title":"Restore the databases","text":"
Since Seafile 12, the default database image is MariaDB 10.11, and you may not find the mysql* commands in the container (e.g. mysql: command not found), since the mysql* series of commands has been gradually deprecated. We therefore recommend the mariadb* series of commands.
However, if you still use the MySQL docker image, you should continue to use mysql here:
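A sketch of the restore commands; use mariadb on the default MariaDB image, or mysql if you run a MySQL image. Note docker exec -i (without -t) so the input redirection works:

```shell
docker exec -i seafile-mysql mariadb -u[username] -p[password] ccnet_db   < /backup/databases/ccnet_db.sql
docker exec -i seafile-mysql mariadb -u[username] -p[password] seafile_db < /backup/databases/seafile_db.sql
docker exec -i seafile-mysql mariadb -u[username] -p[password] seahub_db  < /backup/databases/seahub_db.sql
```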
Use the following command to clear expired session records in Seahub database:
cd seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py clearsessions\n
Tip
Enter the docker container, then go to /opt/seafile/seafile-server-latest
"},{"location":"administration/clean_database/#use-clean_db_records-command-to-clean-seahub_db","title":"Use clean_db_records command to clean seahub_db","text":"
Use the following command to clean up, in one pass, records older than 90 days from the Activity, sysadmin_extra_userloginlog, FileAudit, FileUpdate, FileHistory, PermAudit and FileTrash tables:
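Presumably (command name taken from this section's title; invoked like the other manage.py commands in this manual):

```shell
cd /opt/seafile/seafile-server-latest
./seahub.sh python-env python3 seahub/manage.py clean_db_records
```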
Use the following command to clear the activity records:
use seahub_db;\nDELETE FROM Activity WHERE to_days(now()) - to_days(timestamp) > 90;\nDELETE FROM UserActivity WHERE to_days(now()) - to_days(timestamp) > 90;\n
Since version 6.2, we offer a command to clear outdated library records in the Seafile database, e.g. records that are not deleted after a library is deleted. (Users can restore a deleted library, so we can't delete these records at library deletion time.)
There are two tables in Seafile db that are related to library sync tokens.
RepoUserToken contains the authentication tokens used for library syncing. Note that a separate token is created for every client (including the sync client and SeaDrive).
RepoTokenPeerInfo contains more information about each client token, such as client name, IP address, last sync time etc.
When you have many sync clients connected to the server, these two tables can have a large number of rows, many of which are no longer actively used. You may clean the tokens that haven't been used recently with the following SQL query:
delete t,i from RepoUserToken t, RepoTokenPeerInfo i where t.token=i.token and sync_time < xxxx;\n
xxxx is the UNIX timestamp for the time before which tokens will be deleted.
To be safe, you can first check how many tokens will be removed:
select * from RepoUserToken t, RepoTokenPeerInfo i where t.token=i.token and sync_time < xxxx;\n
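One way to compute the xxxx cutoff, e.g. the UNIX timestamp for 90 days ago (GNU date syntax):

```shell
# UNIX timestamp for "90 days ago"; substitute it for xxxx in the queries above
cutoff=$(date -d "90 days ago" +%s)
echo "$cutoff"
```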
cd seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py export_file_access_log --start-date 2019-06-01 --end-date 2019-07-01\n
"},{"location":"administration/logs/","title":"Seafile server logs","text":""},{"location":"administration/logs/#log-files-of-seafile-server","title":"Log files of seafile server","text":"
seafile.log: logs of seaf-server
seahub.log: logs from Django framework
fileserver.log: logs of the golang file server component
seafevents.log: logs for background tasks and office file conversion
seahub_email_sender.log: logs for periodic email sending by background tasks
"},{"location":"administration/logs/#log-files-for-seafile-background-node-in-cluster-mode","title":"Log files for seafile background node in cluster mode","text":"
seafile.log: logs of seaf-server
seafevents.log: Empty
seafile-background-tasks.log: logs for background tasks and office file conversion
seahub_email_sender.log: logs for periodic email sending by background tasks
On the server side, Seafile stores the files in the libraries in an internal format. Seafile has its own representation of directories and files (similar to Git).
With the default installation, these internal objects are stored directly in the server's file system (such as Ext4, NTFS). But most file systems don't guarantee the integrity of file contents after a hard shutdown or system crash. So if new Seafile internal objects are being written when the system crashes, they can be corrupted after the system reboots, which makes part of the corresponding library inaccessible.
Warning
If you store the seafile-data directory in a battery-backed NAS (like EMC or NetApp), or use S3 backend available in the Pro edition, the internal objects won't be corrupt.
Note
If your Seafile server is deployed with Docker, make sure you have entered the container before executing the following commands in this manual:
docker exec -it seafile bash\n
This is also required for the other scripts in this document.
We provide a seaf-fsck.sh script to check the integrity of libraries. The seaf-fsck tool accepts the following arguments:
cd /opt/seafile/seafile-server-latest\n./seaf-fsck.sh [--repair|-r] [--export|-E export_path] [repo_id_1 [repo_id_2 ...]]\n
There are three modes of operation for seaf-fsck:
checking integrity of libraries.
repairing corrupted libraries.
exporting libraries.
"},{"location":"administration/seafile_fsck/#checking-integrity-of-libraries","title":"Checking Integrity of Libraries","text":"
Running seaf-fsck.sh without any arguments will run a read-only integrity check for all libraries.
./seaf-fsck.sh\n
If you want to check the integrity of specific libraries, just append the library ids as arguments:
./seaf-fsck.sh [library-id1] [library-id2] ...\n
The output looks like:
[02/13/15 16:21:07] fsck.c(470): Running fsck for repo ca1a860d-e1c1-4a52-8123-0bf9def8697f.\n[02/13/15 16:21:07] fsck.c(413): Checking file system integrity of repo fsck(ca1a860d)...\n[02/13/15 16:21:07] fsck.c(35): Dir 9c09d937397b51e1283d68ee7590cd9ce01fe4c9 is missing.\n[02/13/15 16:21:07] fsck.c(200): Dir /bf/pk/(9c09d937) is corrupted.\n[02/13/15 16:21:07] fsck.c(105): Block 36e3dd8757edeb97758b3b4d8530a4a8a045d3cb is corrupted.\n[02/13/15 16:21:07] fsck.c(178): File /bf/02.1.md(ef37e350) is corrupted.\n[02/13/15 16:21:07] fsck.c(85): Block 650fb22495b0b199cff0f1e1ebf036e548fcb95a is missing.\n[02/13/15 16:21:07] fsck.c(178): File /01.2.md(4a73621f) is corrupted.\n[02/13/15 16:21:07] fsck.c(514): Fsck finished for repo ca1a860d.\n
The corrupted files and directories are reported in the above message. By the way, you may also see output like the following:
[02/13/15 16:36:11] Commit 6259251e2b0dd9a8e99925ae6199cbf4c134ec10 is missing\n[02/13/15 16:36:11] fsck.c(476): Repo ca1a860d HEAD commit is corrupted, need to restore to an old version.\n[02/13/15 16:36:11] fsck.c(314): Scanning available commits...\n[02/13/15 16:36:11] fsck.c(376): Find available commit 1b26b13c(created at 2015-02-13 16:10:21) for repo ca1a860d.\n
This means the head commit (current state of the library) recorded in the database is not consistent with the library data. In such cases, fsck will try to find the last consistent state and check the integrity of that state.
Tip
If you have many libraries, it's helpful to save the fsck output into a log file for later analysis.
Corruption repair in seaf-fsck basically works in two steps:
If the library state (commit) recorded in database is not found in data directory, find the last available state from data directory.
Check data integrity in that specific state. If files or directories are corrupted, set them to empty files or empty directories. The corrupted paths will be reported, so that the user can recover them from somewhere else.
Running the following command repairs all the libraries:
./seaf-fsck.sh --repair\n
Most of the time you run the read-only integrity check first to find out which libraries are corrupted. Then you repair specific libraries with the following command:
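Presumably the repair takes the same library-id arguments as the check shown earlier:

```shell
./seaf-fsck.sh --repair [library-id1] [library-id2] ...
```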
After repairing, seaf-fsck records in the library history the list of files and folders that were corrupted, so it's much easier to locate the corrupted paths.
"},{"location":"administration/seafile_fsck/#best-practice-for-repairing-a-library","title":"Best Practice for Repairing a Library","text":"
To check all libraries and find out which ones are corrupted, the system admin can run seaf-fsck.sh without any argument and save the output to a log file. Search for the keyword \"Fail\" in the log file to locate corrupted libraries. You can run seaf-fsck to check all libraries while your Seafile server is running; it won't damage or change any files.
When the system admin finds a corrupted library, he/she should run seaf-fsck.sh with \"--repair\" for that library. After the command fixes the library, the admin should ask the user to recover files from other places. There are two ways:
Upload corrupted files or folders via the web interface
If the library was synced to a desktop computer, and that computer has a correct version of a corrupted file, resyncing the library on that computer will upload the correct version to the server.
"},{"location":"administration/seafile_fsck/#speeding-up-fsck-by-not-checking-file-contents","title":"Speeding up FSCK by not checking file contents","text":"
Starting from Pro edition 7.1.5, an option was added to speed up fsck. Most of the running time of seaf-fsck is spent on calculating hashes of file contents; each hash is compared with the block object ID, and if they're not consistent, the block is detected as corrupted.
In many cases the file contents aren't actually corrupted; some objects are just missing from the system. So it's enough to only check for object existence, which greatly speeds up the fsck process.
To skip checking file contents, add the --shallow or -s option to seaf-fsck.
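For example:

```shell
./seaf-fsck.sh --shallow             # check all libraries, object existence only
./seaf-fsck.sh -s [library-id1] ...  # or only specific libraries
```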
"},{"location":"administration/seafile_fsck/#exporting-libraries-to-file-system","title":"Exporting Libraries to File System","text":"
You can use seaf-fsck to export all the files in libraries to an external file system (such as Ext4). This procedure doesn't rely on the Seafile database; as long as you have your seafile-data directory, you can always export your files from Seafile to an external file system. The command for this operation is:
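Based on the usage line shown earlier in this section:

```shell
cd /opt/seafile/seafile-server-latest
./seaf-fsck.sh --export top_export_path [repo_id_1 [repo_id_2 ...]]
```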
The argument top_export_path is a directory to place the exported files. Each library will be exported as a sub-directory of the export path. If you don't specify library ids, all libraries will be exported.
Note
Currently only un-encrypted libraries can be exported. Encrypted libraries will be skipped.
Seafile uses storage de-duplication technology to reduce storage usage. The underlying data blocks will not be removed immediately after you delete a file or a library. As a result, the number of unused data blocks will increase on Seafile server.
To release the storage space occupied by unused blocks, you have to run a \"garbage collection\" program to clean up unused blocks on your server.
The GC program cleans up two types of unused blocks:
Blocks that no library references, i.e. blocks belonging to deleted libraries;
If you set a history length limit on some libraries, the outdated blocks in those libraries will also be removed.
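To see how many blocks can be removed without actually deleting anything, run GC with the --dry-run option first; the sample output below comes from such a run:

```shell
cd /opt/seafile/seafile-server-latest
./seaf-gc.sh --dry-run [repo-id1] [repo-id2] ...
```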
[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo My Library(ffa57d93)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 265.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo ffa57d93.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 5 commits, 265 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 265 blocks total, about 265 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo aa(f3d0a8d0)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 5.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo f3d0a8d0.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 8 commits, 5 blocks.\n[03/19/15 19:41:49] gc-core.c(264): Populating index for sub-repo 9217622a.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 4 commits, 4 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 5 blocks total, about 9 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo test2(e7d26d93)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 507.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo e7d26d93.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 577 commits, 507 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 
507 blocks total, about 507 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:50] seafserv-gc.c(124): === Repos deleted by users ===\n[03/19/15 19:41:50] seafserv-gc.c(145): === GC is finished ===\n\n[03/19/15 19:41:50] Following repos have blocks to be removed:\nrepo-id1\nrepo-id2\nrepo-id3\n
If you give specific library ids, only those libraries will be checked; otherwise all libraries will be checked.
repos have blocks to be removed
Notice that at the end of the output there is a \"repos have blocks to be removed\" section. It contains the list of libraries that have garbage blocks. Later, when you run GC without the --dry-run option, you can use these library ids as input arguments to the GC program.
To actually remove garbage blocks, run without the --dry-run option:
./seaf-gc.sh [repo-id1] [repo-id2] ...\n
If libraries ids are specified, only those libraries will be checked for garbage.
As described before, there are two types of garbage blocks to be removed. Sometimes just removing the first type of blocks (those that belong to deleted libraries) is good enough. In this case, the GC program won't bother to check the libraries for outdated historic blocks. The \"-r\" option implements this feature:
./seaf-gc.sh -r\n
Success
Libraries deleted by the users are not immediately removed from the system. Instead, they're moved into a \"trash\" in the system admin page. Before they're cleared from the trash, their blocks won't be garbage collected.
Since Pro server 8.0.6 and community edition 9.0, you can remove garbage fs objects. It should be run without the --dry-run option:
./seaf-gc.sh --rm-fs\n
Bug reports
This command had a bug before Pro Edition 10.0.15 and Community Edition 11.0.7: it could cause virtual libraries (e.g. shared folders) to fail to merge into their parent libraries. Please avoid using this option in the affected versions, and contact our support team if you are affected by this bug.
"},{"location":"administration/seafile_gc/#using-multiple-threads-in-gc","title":"Using Multiple Threads in GC","text":"
You can specify the thread number in GC. By default,
If the storage backend is S3/Swift/Ceph, 10 threads are started to do the GC work.
If the storage backend is the file system, only 1 thread is started.
You can specify the thread number with the \"-t\" option, which can be used together with all other options. Each thread does GC on one library. For example, the following command will use 20 threads to GC all libraries:
./seaf-gc.sh -t 20\n
Since the threads run concurrently, the output of each thread may mix with the others'. The library ID is printed in each line of output.
"},{"location":"administration/seafile_gc/#run-gc-based-on-library-id-prefix","title":"Run GC based on library ID prefix","text":"
GC usually runs quite slowly, as it needs to traverse the entire library history. You can use multiple threads to run GC in parallel. For even larger deployments, it's also desirable to run GC on multiple servers in parallel.
A simple pattern to divide the workload among multiple GC servers is to assign libraries to servers based on library ID. Since Pro edition 7.1.5, this is supported: you can add the \"--id-prefix\" option to seaf-gc.sh to specify a library ID prefix. For example, the command below will only process libraries whose ID starts with \"a123\".
./seaf-gc.sh --id-prefix a123\n
"},{"location":"administration/security_features/","title":"Security Questions","text":""},{"location":"administration/security_features/#how-is-the-connection-between-client-and-server-encrypted","title":"How is the connection between client and server encrypted?","text":"
Seafile uses HTTP(S) to sync files between client and server (since version 4.1.0).
Seafile provides a feature called encrypted library to protect your privacy. The file encryption/decryption is performed on client-side when using the desktop client for file synchronization. The password of an encrypted library is not stored on the server. Even the system admin of the server can't view the file contents.
There are a few limitations of this feature:
File metadata is NOT encrypted. The metadata includes: the complete list of directory and file names, every file's size, the history of editors, and when and what byte ranges were altered.
Client-side encryption currently does NOT work in the web browser or in the cloud file explorer of the desktop client. When you browse encrypted libraries via the web browser or the cloud file explorer, you need to input the password; the server uses the password to decrypt the \"file key\" for the library (see description below) and caches the password in memory for one hour. The plain-text password is never stored or cached on the server.
If you create an encrypted library on the web interface, the library password and encryption keys will pass through the server. If you want end-to-end protection, you should create encrypted libraries from desktop client only.
For encryption protocol version 4, each library uses its own salt to derive key/iv pairs. However, all files within a library share the same salt; likewise, all files within a library are encrypted with the same key/iv pair. With encryption protocol version 2, all libraries use the same salt, but separate key/iv pairs.
An encrypted library doesn't ensure file integrity. For example, the server admin can still partially change the contents of files in an encrypted library, and the client is not able to detect such changes.
Client-side encryption has worked on the iOS client since version 2.1.6 and on the Android client since version 2.1.0. But since version 3.0.0, the iOS and Android clients have dropped support for client-side encryption; you need to send the password to the server to encrypt/decrypt files.
"},{"location":"administration/security_features/#how-does-an-encrypted-library-work","title":"How does an encrypted library work?","text":"
When you create an encrypted library, you'll need to provide a password for it. All the data in that library will be encrypted with the password before uploading it to the server (see limitations above).
There are currently two supported encryption protocol versions for encrypted libraries, version 2 and version 4. The two versions share the same basic procedure, so we first describe that procedure.
Generate a 32-byte long cryptographically strong random number. This will be used as the file encryption key (\"file key\").
Encrypt the file key with the user provided password. We first use a secure hash algorithm to derive a key/iv pair from the password, then use AES 256/CBC to encrypt the file key. The result is called the \"encrypted file key\". This encrypted file key will be sent to and stored on the server. When you need to access the data, you can decrypt the file key from the encrypted file key.
A \"magic token\" is derived from the password and library id with the same secure hash algorithm. This token is stored with the library and will be used to check passwords before decrypting data later.
All file data is encrypted by the file key with AES 256/CBC. We use PBKDF2-SHA256 with 1000 iterations to derive key/iv pair from the file key. After encryption, the data is uploaded to the server.
The only difference between version 2 and version 4 is the usage of salt in the secure hash algorithm. In version 2, all libraries share the same fixed salt. In version 4, each library uses a separate, randomly generated salt.
"},{"location":"administration/security_features/#secure-hash-algorithms-for-password-verification","title":"Secure hash algorithms for password verification","text":"
A secure hash algorithm is used to derive key/iv pair for encrypting the file key. So it's critical to choose a relatively costly algorithm to prevent brute-force guessing for the password.
Before version 12, a fixed secure hash algorithm (PBKDF2-SHA256 with 1000 iterations) was used, which is far from secure by today's standards.
Since Seafile server version 12, we allow the admin to choose proper secure hash algorithms. Currently two hash algorithms are supported.
PBKDF2: The only available parameter is the number of iterations. You need to increase the number of iterations over time, as GPUs are more and more used for such calculations. The default number of iterations is 1000; as of 2023, the recommended number is 600,000.
Argon2id: A secure hash algorithm that has a high cost even on GPUs. There are 3 parameters that can be set: time cost, memory cost, and parallelism degree. The parameters are separated by commas, e.g. \"2,102400,8\", which are the default parameters used in Seafile. Learn more about this algorithm at https://github.com/P-H-C/phc-winner-argon2 .
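In seahub_settings.py this could look like the following. The setting names here are assumptions (not confirmed by this document), so verify them against your version's configuration reference:

```python
# Assumed setting names -- verify against the Seafile 12 configuration docs.
ENCRYPTED_LIBRARY_PWD_HASH_ALGO = "argon2id"
# time cost, memory cost, parallelism degree (the defaults mentioned above)
ENCRYPTED_LIBRARY_PWD_HASH_PARAMS = "2,102400,8"
```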
"},{"location":"administration/security_features/#client-side-encryption-and-decryption","title":"Client-side encryption and decryption","text":"
The above encryption procedure can be executed on the desktop and mobile clients. The Seahub browser client uses a different encryption procedure that happens on the server. Because of this, your password will be transferred to the server.
When you sync an encrypted library to the desktop, the client needs to verify your password. When you create the library, a \"magic token\" is derived from the password and library id. This token is stored with the library on the server side. The client uses this token to check whether your password is correct before you sync the library. The magic token is generated by the secure hash algorithm chosen when the library was created.
For maximum security, the plain-text password won't be saved on the client side either. The client only saves the key/iv pair derived from the \"file key\", which is used to decrypt the data. So if you forget the password, you won't be able to recover it or access your data on the server.
"},{"location":"administration/security_features/#why-fileserver-delivers-every-content-to-everybody-knowing-the-content-url-of-an-unshared-private-file","title":"Why fileserver delivers every content to everybody knowing the content URL of an unshared private file?","text":"
When a file download link is clicked, a random URL is generated for the user to access the file from the fileserver. This URL can only be accessed once; after that, all access to it is denied. So even if someone else happens to learn the URL, they can't access it anymore.
This was changed in Seafile server version 12. Instead of a random URL, a URL like 'https://yourserver.com/seafhttp/repos/{library id}/file_path' is used for downloading the file. Authorization will be done by checking cookies or API tokens on the server side. This makes the URL more cache-friendly while still being secure.
"},{"location":"administration/security_features/#how-does-seafile-store-user-login-password","title":"How does Seafile store user login password?","text":"
User login passwords are stored in hash form only. Note that user login password is different from the passwords used in encrypted libraries. In the database, its format is
PBKDF2SHA256$iterations$salt$hash\n
The record is divided into 4 parts by the $ sign.
The first part is the hash algorithm used. Currently we use PBKDF2 with SHA256. It can be changed to an even stronger algorithm if needed.
The second part is the number of iterations of the hash algorithm
The third part is the random salt used to generate the hash
The fourth part is the final hash generated from the password
To calculate the hash:
First, generate a 32-byte long cryptographically strong random number, use it as the salt.
Calculate the hash with PBKDF2(password, salt, iterations). The number of iterations is currently 10000.
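The record format and the two steps above can be sketched with Python's standard library. This is an illustrative sketch only: the hex encoding of the salt and hash is an assumption, and the real implementation should use a constant-time comparison for verification.

```python
import hashlib
import os

def make_password_record(password: str, iterations: int = 10000) -> str:
    salt = os.urandom(32)  # 32-byte cryptographically strong random salt
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Four parts joined by "$": algorithm, iterations, salt, hash.
    return f"PBKDF2SHA256${iterations}${salt.hex()}${dk.hex()}"

def check_password(password: str, record: str) -> bool:
    _algo, iters, salt_hex, hash_hex = record.split("$")
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(),
                             bytes.fromhex(salt_hex), int(iters))
    return dk.hex() == hash_hex
```

Because the salt is random, two users with the same password get different records, which defeats precomputed rainbow-table attacks.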
After that, there will be a \"Two-Factor Authentication\" section in the user profile page.
Users can use the Google Authenticator app on their smart-phone to scan the QR code.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/","title":"Seafile Professional Server Changelog (old)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#44","title":"4.4","text":"
Note: Two new options are added in version 4.4, both are in seahub_settings.py
SHOW_TRAFFIC: default is True, set to False if you want to hide public link traffic in the profile
[fix] Fix support for syncing old formatted libraries
Remove commit and fs objects in GC for deleted libraries
Add \"transfer\" operation to library list in \"admin panel->a single user\"
[fix] Fix the showing of the folder name for upload link generated from the root of a library
[fix] Add access log for online file preview
[fix] Fix permission settings for a sub-folder of a shared sub-folder
LDAP improvements and fixes
Only import LDAP users to Seafile internal database upon login
Only list imported LDAP users in \"organization->members\"
Add option to not import users via LDAP Sync (only update information for already imported users). The option name is IMPORT_NEW_USER. See document http://manual.seafile.com/deploy/ldap_user_sync.html (URL may be deprecated)
[security] Check validity of file object id to avoid a potential attack
[fix] Check the validity of system default library template, if it is broken, recreate a new one.
[fix] After transferring a library, remove the original sharing information
[security] Fix possibility to bypass Captcha check
[security] More security fixes.
[pro] Enable syncing a sub-sub-folder of a shared sub-folder (For example, if you share library-A/sub-folder-B to a group, other group members can selectively sync sub-folder-B/sub-sub-folder-C)
[fix, office preview] Handle the case that \"/tmp/seafile-office-output\" is removed by the operating system
[ui] Improve UI for sharing link page, login page, file upload link page
[security] Clean web sessions when resetting a user's password
Delete the user's libraries when deleting a user
Show link expiring date in sharing link management page
[admin] In a user's admin page, show each library's size and last modified time
[fix, api] Fix star file API
[pro, beta] Add \"Open via Client\" to enable calling local program to open a file at the web
About \"Open via Client\": The web interface will call the Seafile desktop client via the \"seafile://\" protocol to use a local program to open a file. If the file is already synced, the local file will be opened. Otherwise it is downloaded, and uploaded after modification. Needs client version 4.3.0+
Improve preview for office files (doc/docx/ppt/pptx)
In the old way, the whole file was converted to HTML5 before being returned to the client. By converting an office file to HTML5 page by page, the first page is displayed faster. By displaying each page in a separate frame, the quality of some files is improved too.
Add global address book and remove the contacts module (You can disable it if you use CLOUD_MODE by adding ENABLE_GLOBAL_ADDRESSBOOK = False in seahub_settings.py)
List users imported from LDAP
[guest] Enable guest user by default
[guest] Guest user can't generate share link
Don't count inactive users as licensed users
Important
[fix] Fix viewing sub-folders for password protected sharing
[fix] Fix viewing starred files
[fix] Fix support of uploading multiple files in clients' cloud file browser
Improve security of password resetting link
Remove user private message feature
New features
Enable syncing any folder for an encrypted library
Add open file locally (open file via desktop client)
Others
[fix] Fix permission checking for sub-folder permissions
Change \"quit\" to \"Leave group\"
Clean inline CSS
Use image gallery module in sharing link for folders containing images
[api] Update file details api, fix error
Enable share link file download token available for multiple downloads
[fix] Fix visiting share link whose original path is deleted
Hide the enable sub-library option since it is meaningless for the Pro edition
Support syncing any sub-folder in the desktop client
Add audit log, see http://manual.seafile.com/security/auditing.html (URL may be deprecated). This feature is turned off by default. To turn it on, see http://manual.seafile.com/deploy_pro/configurable_options.html (URL may be deprecated)
Syncing LDAP groups
Add permission setting for a sub-folder (beta)
Updates in community edition too
[fix] Fix image thumbnail in sharing link
Show detailed time when mouse over a relative time
Add trashed libraries (deleted libraries will first be put into trashed libraries where system admin can restore)
Improve seaf-gc.sh
Redesign fsck.
Add API to support logout/login an account in the desktop client
Add API to generate thumbnails for images files
Clean syncing tokens after deleting an account
Change permission of seahub_settings.py, ccnet.conf, seafile.conf to 0600
"},{"location":"changelog/changelog-for-seafile-professional-server/","title":"Seafile Professional Server Changelog","text":"
You can check Seafile release table to find the lifetime of each release and current supported OS: https://cloud.seatable.io/dtable/external-links/a85d4221e41344c19566/?tid=0000&vid=0000
SeaDoc is now stable, providing online notes and documents feature
A new wiki module
A new trash mechanism: deleted files are recorded in the database for fast listing
The password strength level is now calculated by an algorithm. The old USER_PASSWORD_MIN_LENGTH and USER_PASSWORD_STRENGTH_LEVEL options are removed. Only USER_STRONG_PASSWORD_REQUIRED is still used.
ADDITIONAL_APP_BOTTOM_LINKS is removed, because there is no bottom bar in the navigation sidebar now.
SERVICE_URL and FILE_SERVER_ROOT are removed. SERVICE_URL is calculated from SEAFILE_SERVER_PROTOCOL and SEAFILE_SERVER_HOSTNAME in the .env file.
ccnet.conf is removed. Some of its configuration items are moved to the .env file; others are read from items with the same name in seafile.conf.
For security reasons, WebDAV no longer supports login with an LDAP account; users with an LDAP account must generate a WebDAV token on the profile page
[File tags] The current file tags feature is deprecated. We will re-implement a new one in version 13.0 with a new general metadata management module.
For ElasticSearch based search, full text search of doc/xls/ppt file types is no longer supported. This enables us to remove the Java dependency on the Seafile side.
Forbid generating share links for a library if the user has invisible/cloud-read-only permission on the library
[fix] Fix a configuration error for Ceph storage (if you don't use S3 interface)
[fix] Fix a bug in traffic statistic in golang file server
Support use different index names for ElasticSearch
Fix column view being limited to 100 items
Fix LDAP user login for WebDAV
Remove the configuration item \"ENABLE_FILE_COMMENT\" as it is no longer needed
Enable copy/move files between encrypted and non-encrypted libraries
Forbid creating libraries with Emoji in name
Fix some letters in file names not fitting in height in some dialogs
Fix a performance issue in sending file update reports
Some other UI fixes and improvements
SDoc editor 0.6
Support converting docx files to sdoc files
Support Markdown format in comments
Support drag rows/columns in table element and other improvements for table elements
Other UI fixes and improvements
"},{"location":"changelog/changelog-for-seafile-professional-server/#1104-beta-and-sdoc-editor-05-2024-02-01","title":"11.0.4 beta and SDoc editor 0.5 (2024-02-01)","text":"
Major changes
Use a virtual ID to identify a user
LDAP login update
SAML/Shibboleth/OAuth login update
Update Django to version 4.2
Update SQLAlchemy to version 2.x
Add SeaDoc
UI Improvements
Improve UI of PDF view page
Update folder icon
The activities page supports filtering records by modifier
Add an indicator for folders that have been shared out
Use file type icon as favicon for file preview pages
Support preview of JFIF image format
Pro edition only changes
Support S3 SSE-C encryption
Support a new invisible sub-folder permission
Update of online read-write permission: it now allows the shared user to update/rename/delete files online, making it consistent with the normal read-write permission
Other changes
Remove file comment features as they are used very little (except for SeaDoc)
Add move dir/file, copy dir/file, delete dir/file, rename dir/file APIs for library token based API
Use the user's current language when creating Office files in OnlyOffice
Please check our document for how to upgrade to 10.0.
Note
If you upgrade to version 10.0.18+ from 10.0.16 or below and use the binary based installation, you need to upgrade SQLAlchemy to version 1.4.44+. Otherwise the \"activities\" page will not work.
[security] Upgrade pillow dependency from 9.0 to 10.0.
Note, after upgrading to this version, you need to upgrade the Python libraries in your server \"pillow==10.2.* captcha==0.5.* django_simple_captcha==0.5.20\"
Note: the included lxml library is removed for compatibility reasons. The library is used by the published libraries feature and the WebDAV feature. You need to install lxml manually after upgrading to 9.0.7. Use the command pip3 install lxml to install it.
A page in published libraries is rendered at the server side to improve loading speed.
Upgrade Django from 3.2.6 to 3.2.14
Fix a bug in collaboration notice sending via email to users' contact email
Support OnlyOffice oform/docxf files
Improve user search when sharing a library
Admin panel support searching a library via ID prefix
[fix] Fix preview PSD images
[fix] Fix a bug that office files can't be opened in sharing links via OnlyOffice
[fix] Go fileserver: Folder or File is not deletable when there is a spurious utf-8 char inside the filename
[fix] Fix file moving in WebDAV
ElasticSearch now supports HTTPS
Support advanced permissions like cloud-preview-only and cloud-read-write-only when sharing a department library
[fix] Fix a bug in getting library sharing info in multi-tenancy mode
[fix] Fix a bug in library list cache used by syncing client
[fix] Fix another bug in uploading files to a sharing link with upload permission
Potential breaking change in Seafile Pro 8.0.3: You can set the maximum number of files contained in a library that can be synced by the Seafile client. The default is 100000. When you download a repo, Seafile client will request fs id list, and you can control the timeout period of this request through fs_id_list_request_timeout configuration, which defaults to 5 minutes. These two options are added to prevent long fs-id-list requests from overloading the server. If you have large libraries on the server, this can cause \"internal server error\" returned to the client. You have to set a large enough limit for these two options.
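The note above names the fs_id_list_request_timeout option explicitly; the corresponding file-count limit is sketched below under an assumed option name (max_sync_file_count) and an assumed seconds unit for the timeout, as a seafile.conf fragment, so check your version's documentation before copying it:

```ini
[fileserver]
# Assumed option name for the per-library file-count sync limit (default 100000).
max_sync_file_count = 200000
# Timeout for the fs id list request; assumed to be in seconds (default 5 minutes).
fs_id_list_request_timeout = 600
```

Raising both values lets clients sync very large libraries at the cost of longer-running fs-id-list requests on the server.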
[fix] Fix deleting libraries without owner in admin panel
Add an API to change a user's email
[fix] Fix a bug in storage migration script
[fix] Fix a bug that will cause fsck crash
[fix] Fix a XSS problem in notification
Potential breaking change in Seafile Pro 7.1.16: You can set the maximum number of files contained in a library that can be synced by the Seafile client. The default is 100000. When you download a repo, Seafile client will request fs id list, and you can control the timeout period of this request through fs_id_list_request_timeout configuration, which defaults to 5 minutes. These two options are added to prevent long fs-id-list requests from overloading the server. If you have large libraries on the server, this can cause \"internal server error\" returned to the client. You have to set a large enough limit for these two options.
Since seafile-pro 7.0.0, we have upgraded Elasticsearch to 5.6. As Elasticsearch 5.6 relies on the Java 8 environment and can't run with root, you need to run Seafile with a non-root user and upgrade the Java version.
In version 6.3, Django is upgraded to version 1.11. Django 1.8, which is used in version 6.2, is deprecated in 2018 April.
With this upgrade, the fast-cgi mode is no longer supported. You need to configure Seafile behind Nginx/Apache in WSGI mode.
The way to run Seahub in another port is also changed. You need to modify the configuration file conf/gunicorn.conf instead of running ./seahub.sh start <another-port>.
Version 6.3 also changed the database table for file comments. If you have used this feature, you need to migrate old file comments using the following commands after upgrading to 6.3:
Note, this command should be run while Seafile server is running.
Version 6.3 changed '/shib-login' to '/sso'. If you use Shibboleth, you need to update your Apache/Nginx config. Please check the updated document: shibboleth config v6.3
Version 6.3 add a new option for file search (seafevents.conf):
[INDEX FILES]\n...\nhighlight = fvh\n...\n
This option will improve search speed significantly (10x) when the search result contains large pdf/doc files. But you need to rebuild the search index if you want to add this option.
From 6.2, it is recommended to use proxy mode for communication between Seahub and Nginx/Apache. Two steps are needed if you'd like to switch to WSGI mode:
Change the config file of Nginx/Apache.
Restart Seahub with ./seahub.sh start instead of ./seahub.sh start-fastcgi
[fix] Fix a bug when concurrent uploading/creating files (in the old version, when a user uploading/deleting multiple files in cloud file browser, it had a high chance to get \u201cinternal server error\u201d message)
[fix] Fix thumbnails for some images that 90 degrees rotated
[fix] Fix support for resumable file upload
[fix] Fix MySQL connection pool in Ccnet
[fix] Use the original GIF file when viewing GIF files
[fix, api] Check if name is valid when creating folder/file
Remove deleted libraries in search index
Use 30MB as the default value of THUMBNAIL_IMAGE_SIZE_LIMIT
[api] Improve performance when move or copy multiple files/folders
[admin] Support syncing user role from AD/LDAP attribute (ldap role sync)
[admin] Support deleting all outdated invitations at once
[admin] Improve access log
[admin] Support upload seafile-license.txt via web interface (only for single machine deployment)
[admin] Admin can cancel two-factor authentication of a user
[admin, role] Show user\u2019s role in LDAP(Imported) table
[admin, role] Add wildcard support in role mapping for Shibboleth login
[admin] Improve performance in getting total file number, used space and total number of devices
[admin] Admin can add users to an institution via Web UI
[admin] Admin can choose a user\u2019s role when creating a user
In version 6.0, the folder download mechanism has been updated. This requires that, in a cluster deployment, seafile-data/httptemp folder must be in an NFS share. You can make this folder a symlink to the NFS share.
cd /data/haiwen/\nln -s /nfs-share/seafile-httptemp seafile-data/httptemp\n
The httptemp folder only contains temp files for downloading/uploading files on the web UI. So there is no reliability requirement for the NFS share. You can export it from any node in the cluster.
Force users to change password if imported via CSV
Support setting a user's quota and name when importing users via CSV
Set user's quota in user list page
Add search group by group name
Use ajax when deleting a user's library in admin panel
Support logrotate for controller.log
Add a log when a user can't be found in LDAP during login, so that the system admin can know whether the failure is caused by a password error or by the user not being found
Delete shared libraries information when deleting a user
Add admin API to create default library for a user
[ldap-sync] Support syncing users from AD/LDAP as inactive user
Other
[fix] Fix user search when global address book is disabled in CLOUD_MODE
[fix] Avoid timeout in some cases when showing a library trash
Show \"the account is inactive\" when an inactive account tries to log in
[security] Remove viewer.js to show open document files (ods, odt) because viewer.js is not actively maintained and may have potential security bugs
[fix] Exclude virtual libraries from storage size statistics
[fix] Fix mysql gone away problem in seafevents
Add region config option for Swift storage backend
[anti-virus] Send notification to the library owner if a virus is found
Guest invitation: Prevent the same address from being invited multiple times by the same inviter or by multiple inviters
Guest invitation: Add a regex to prevent certain email addresses from being invited (see roles permissions)
Office online: support co-authoring
Admin can set users' department and name when creating users
Show total number of files and storage in admin info page
Show total number of devices and recently connected devices in admin info page
Delete shared libraries information when deleting a user
Upgrade Django to 1.8.17
Admin can create group in admin panel
[fix] Fix quota check: users can't upload a file if the quota will be exceeded after uploading the file
[fix] Fix quota check when copy file from one library to another
Add # -*- coding: utf-8 -*- to seahub_settings.py, so that admin can use non-ascii characters in the file.
[fix] Prevent admin from access group's wiki
[fix] Prevent transferring libraries to guest accounts
[fix] Prevent guest accounts from creating share links via API v2
Add a log when a user can't be found in LDAP during login, so that the system admin can know whether the failure is caused by a password error or by the user not being found
Ignore whitespace characters at the end of lines in ccnet.conf
[fix] Virus scan fails when the keystone token has expired https://github.com/haiwen/seafile/issues/1737
[fix] If you share a sub-folder to a group, the sub-folder will appear as a library on that group page. Don't show the "permission" menu item for such a shared sub-folder on the group page, because setting permissions on this shared sub-folder does not work. The user should set permissions on the original library directly.
[fix] Fix API for uploading file by blocks (Used by iOS client when uploading a large file)
[fix] Fix a database connection problem in ccnet-server
[fix] Fix moved files are still present in local folder until refresh
[fix] Fix admin panel can't show deleted libraries
[admin] Add group transfer function in admin panel
[admin] Admin can set library permissions in admin panel
Improve the check that the user running Seafile is the owner of seafile-data. If seafile-data is a symbolic link, check the destination folder instead of the symbolic link.
[ui] Improve rename operation
Show name/contact email in admin panel and enable search user by name/contact email
Add printing style for markdown and doc/pdf
The \u201cSeafile\u201d in \"Welcome to Seafile\" message can be customised by SITE_NAME
Improve sorting of files with numbers
[api] Add admin API to only return LDAP imported user list
Code clean and update Web APIs
Remove the number of synced libraries in the devices page to simplify the interface and concept
Update help pages
[online preview] The online preview size limit setting FILE_PREVIEW_MAX_SIZE no longer affects videos and audio files, so videos and audio of any size can be previewed online.
[online preview] Add printing style for markdown
Pro only features
Support LibreOffice online/Collabora Office online
Add two-factor authentication
Remote wipe (need desktop client 6.0.0)
[anti-virus] Support parallel scan
[anti-virus] Add option to only scan a file with size less than xx MB
[anti-virus] Add option to specify which file types to scan
[anti-virus] Scan for viruses instantly when a user uploads files via an upload link
[online preview] Add printing style for doc/pdf
[online preview] Warn the user if online preview only shows 50 pages for doc/pdf files with more than 50 pages
[fix] Fix search only work on the first page of search result pages
Add \u201cGroups\u201d category in the client\u2019s library view
Clicking a notification popup now opens the exact folder containing the modified file.
Change \"Get Seafile Share Link\" to \"Get Seafile Download Link\"
[fix] Use case-insensitive sorting in cloud file browser
[fix] Don't sync a folder in Windows if it contains invalid characters instead of creating an empty folder with invalid name
[fix] Fix a rare bug where sometimes files are synced as zero length files. This happens when another software doesn't change the file timestamp after changing the content of the file.
Fix popup two password input dialogs when visit an encrypted library
Popup a tip when file conflicts happen
Don't send the password to server when creating an encrypted library
[mac] Fix support for TLS 1.2
[win, extension] Add context menu \"get internal link\"
Enable uploading of an empty folder in cloud file browser
[pro] Enable customization of app name and logo for the main window (See https://github.com/haiwen/seafile-docs/blob/master/config/seahub_customization.md#customize-the-logo-and-name-displayed-on-seafile-desktop-clients-seafile-professional-only)
[fix, windows] Fix a bug that causes freeze of Seafile UI
[sync] Improve index performance after a file is modified
[sync] Use multi-threads to upload/download file blocks
[admin] Enable config Seafile via seafile.rc in Mac/Linux or seafile.ini in Windows (https://github.com/haiwen/seafile-user-manual/blob/master/en/faq.md)
[admin] Enable uninstalling Seafile without the \"deleting config files\" popup dialog
Add file lock
[mac, extension] Add getting Seafile internal link
[mac, extension] Improve performance of showing sync status
[win] Support file paths longer than 260 characters.
In the old version, you would sometimes see strange directories such as "Documents~1" synced to the server; this is because the old version did not handle long paths correctly.
[mac] Fix a syncing problem when library name contains \"\u00e8\" characters
[windows] Gracefully handle file lock issue.
In the previous version, when you open an office file in Windows, it is locked by the operating system. If another person modify this file in another computer, the syncing will be stopped until you close the locked file. In this new version, the syncing process will continue. The locked file will not be synced to local computer, but other files will not be affected.
[fix] Fix \"sometimes deleted folder reappearing problem\" on Windows.
You have to update the clients on all the PCs. If one PC does not use v3.1.11, when the "deleting folder" information is synced to this PC, it will fail to delete the folder completely, and the folder will be synced back to the other PCs. So the other PCs will see the folder reappear.
"},{"location":"changelog/server-changelog-old/","title":"Seafile Server Changelog (old)","text":""},{"location":"changelog/server-changelog-old/#50","title":"5.0","text":"
Note when upgrade to 5.0 from 4.4
You can follow the document on major upgrade (http://manual.seafile.com/deploy/upgrade.html) (url might deprecated)
In Seafile 5.0, we have moved all config files to folder conf, including:
If you want to downgrade from v5.0 to v4.4, you should manually copy these files back to the original place, then run minor_upgrade.sh to upgrade symbolic links back to version 4.4.
The 5.0 server is compatible with v4.4 and v4.3 desktop clients.
Common issues (solved) when upgrading to v5.0:
DatabaseError after Upgrade to 5.0 https://github.com/haiwen/seafile/issues/1429#issuecomment-153695240
Get name, institution, contact_email field from Shibboleth
[webdav] Don't show sub-libraries
Enable LOGIN_URL to be configured; users need to add LOGIN_URL to seahub_settings.py explicitly if deployed at a non-root domain, e.g. LOGIN_URL = '//accounts/login/'.
Add ENABLE_USER_CREATE_ORG_REPO to enable/disable organization repo creation.
Change the Chinese translation of \"organization\"
Use GB/MB/KB instead of GiB/MiB/KiB in quota calculation and quota setting (1GB = 1000MB = 1,000,000KB)
Show detailed message if sharing a library failed.
[fix] Fix JPG Preview in IE11
[fix] Show \"out of quota\" instead of \"DERP\" in the case of out of quota when uploading files via web interface
[fix] Fix empty nickname during shibboleth login.
[fix] Fix default repo re-creation bug when web login after desktop.
[fix] Don't show sub-libraries at choose default library page, seafadmin page and save shared file to library page
[fix] Seafile server daemon: write PID file before connecting to database to avoid a problem when the database connection is slow
[fix] Don't redirect to old library page when restoring a folder in snapshot page
[fix] Fix start up parameters for seaf-fuse, seaf-server, seaf-fsck
Update Markdown editor and viewer. The update of the markdown editor and parser removed support for the Seafile-specific wiki syntax: Linking to other wikipages isn't possible anymore using [[ Pagename]].
Add tooltip in admin panel->library->Trash: \"libraries deleted 30 days before will be cleaned automatically\"
Don't open a new page when click the settings, trash and history icons in the library page
other small UI improvements
Config changes:
Move all config files to folder conf
Add web UI to config the server. The config items are saved in a database table (seahub-db/constance_config). They take priority over the items in config files.
Trash:
A trash for every folder, showing deleted items in the folder and sub-folders. Other changes
Admin:
Admin can see the file numbers of a library
Admin can disable the creation of encrypted library
Security:
Change most GET requests to POST to increase security
Add global address book and remove the contacts module (You can disable it if you use CLOUD_MODE by adding ENABLE_GLOBAL_ADDRESSBOOK = False in seahub_settings.py)
Use image gallery module in sharing link for folders containing images
[fix] Fix missing library names (show as none) in 32bit version
[fix] Fix viewing sub-folders for password protected sharing
[fix] Fix viewing starred files
[fix] Fix support for uploading multiple files in clients' cloud file browser
Use unix domain socket in ccnet to listen for local connections. This isolates the access to ccnet daemon for different users. Thanks to Kimmo Huoman and Henri Salo for reporting this issue.
[fix] Handle loading avatar exceptions to avoid 500 error
Platform
Use random salt and the PBKDF2 algorithm to store users' passwords. (You need to manually upgrade the database if you are using 3.0.0 beta2 with the MySQL backend.)
Syncing and sharing a sub-directory of an existing library.
Directly sharing files between two users (instead of generating public links)
User can save shared files to one's own library
[wiki] Add frame and max-width to images
Use 127.0.0.1 to read files (markdown, txt, pdf) in file preview
[bugfix] Fix pagination in library snapshot page
Increase the max length of a message reply from 128 characters to 2000 characters.
Improved performance for home page and group page
[admin] Add administration of public links
API
Add creating/deleting library API
Platform
Improve HTTPS support; an HTTPS reverse proxy is now the recommended way.
Add LDAP filter and multiple DN
Case insensitive login
Move log files to a single directory
[security] Add salt when saving user's password
[bugfix] Fix a bug in handling client connection
"},{"location":"changelog/server-changelog-old/#17","title":"1.7","text":""},{"location":"changelog/server-changelog-old/#1702-for-linux-32-bit","title":"1.7.0.2 for Linux 32 bit","text":"
[bugfix] Fix \"Page Unavailable\" when view doc/docx/ppt.
"},{"location":"changelog/server-changelog-old/#1701-for-linux-32-bit","title":"1.7.0.1 for Linux 32 bit","text":"
Video/Audio playback with MediaElement.js (Contributed by Phillip Thelen)
Edit library title/description
Public Info & Public Library page are combined into one
Support selection of file encoding when viewing online
Improved online picture view (Switch to prev/next picture with keyboard)
Fixed a bug when doing diff for a newly created file.
Sort starred files by last-modification time.
Seafile Daemon
Fixed bugs for using httpserver under https
Fixed a performance bug when checking a client's credentials during sync.
LDAP support
Enable setting of the size of the thread pool.
API
Add listing of shared libraries
Add unsharing of a library.
"},{"location":"changelog/server-changelog/","title":"Seafile Server Changelog","text":"
You can check Seafile release table to find the lifetime of each release and current supported OS: https://cloud.seatable.io/dtable/external-links/a85d4221e41344c19566/?tid=0000&vid=0000
SeaDoc is now stable, providing online notes and documents feature
A new wiki module
A new trash mechanism, that deleted files will be recorded in database for fast listing
The password strength level is now calculated by algorithm. The old USER_PASSWORD_MIN_LENGTH, USER_PASSWORD_STRENGTH_LEVEL is removed. Only USER_STRONG_PASSWORD_REQUIRED is still used.
ADDITIONAL_APP_BOTTOM_LINKS is removed. Because there is no buttom bar in the navigation side bar now.
SERVICE_URL and FILE_SERVER_ROOT are removed. SERVICE_URL will be calculated from SEAFILE_SERVER_PROTOCOL and SEAFILE_SERVER_HOSTNAME in .env file.
ccnet.conf is removed. Some of its configuration items are moved to the .env file; others are read from items with the same name in seafile.conf.
For security reasons, WebDAV no longer supports login with an LDAP account; users with LDAP accounts must generate a WebDAV token on the profile page
[File tags] The current file tags feature is deprecated. We will re-implement a new one in version 13.0 with a new general metadata management module.
Note: the bundled lxml library is removed for compatibility reasons. The library is used by the published libraries feature and the WebDAV feature. You need to install lxml manually after upgrading to 9.0.7. Use the command pip3 install lxml to install it.
A page in published libraries is rendered at the server side to improve loading speed.
Upgrade Django from 3.2.6 to 3.2.14
Fix a bug in sending collaboration notifications via email to users' contact email
Support OnlyOffice oform/docxf files
Improve user search when sharing a library
Admin panel support searching a library via ID prefix
[fix] Fix preview PSD images
[fix] Fix a bug that office files can't be opened in sharing links via OnlyOffice
[fix] Go fileserver: a folder or file is not deletable when there is a spurious UTF-8 character in the filename
In version 6.3, users can create public or private Wikis. In version 7.0, private Wikis is replaced by column mode view. Every library has a column mode view. So users don't need to explicitly create private Wikis.
Public Wikis are now renamed to published libraries.
Upgrade
Just follow our document on major version upgrade. No special steps are needed.
In version 6.3, Django is upgraded to version 1.11. Django 1.8, which is used in version 6.2, is deprecated in 2018 April.
With this upgrade, the fast-cgi mode is no longer supported. You need to configure Seafile behind Nginx/Apache in WSGI mode.
The way to run Seahub in another port is also changed. You need to modify the configuration file conf/gunicorn.conf instead of running ./seahub.sh start <another-port>.
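For illustration, such a port change in conf/gunicorn.conf might look like this (bind is a standard Gunicorn setting; the address and port are examples):

```python
# conf/gunicorn.conf -- Gunicorn settings file used by Seahub.
# "bind" is a standard Gunicorn setting; change the port here
# instead of passing it to ./seahub.sh start.
bind = "127.0.0.1:8001"
```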
Version 6.3 also changed the database table for file comments. If you have used this feature, you need to migrate old file comments using the following commands after upgrading to 6.3:
From 6.2, it is recommended to use WSGI mode for communication between Seahub and Nginx/Apache. Two steps are needed if you'd like to switch to WSGI mode:
Change the config file of Nginx/Apache.
Restart Seahub with ./seahub.sh start instead of ./seahub.sh start-fastcgi
Enable fixing the language of share link emails (option SHARE_LINK_EMAIL_LANGUAGE in seahub_settings.py), so the admin can force share link emails to always be in English, regardless of the language the sender is using.
The language of the interface of CollaboraOffice/OnlyOffice will be determined by the language of the current user.
Display the correct image thumbnails in favorites instead of the generic one
Note: If you ever used 6.0.0 or 6.0.1 or 6.0.2 with SQLite as database and encountered a problem with desktop/mobile client login, follow https://github.com/haiwen/seafile/pull/1738 to fix the problem.
Show total storage, total number of files, total number of connected devices in the info page of admin panel
Force users to change their password if imported via CSV
Support setting a user's quota and name when importing users via CSV
Set user's quota in user list page
Add search group by group name
Use ajax when deleting a user's library in admin panel
Support logrotate for controller.log
Add # -*- coding: utf-8 -*- to seahub_settings.py, so that admin can use non-ascii characters in the file.
Ignore whitespace characters at the end of lines in ccnet.conf
Add a log entry when a user can't be found in LDAP during login, so that the system admin can tell whether a login failure is caused by a password error or by the user not being found
Delete shared libraries information when deleting a user
Other
[fix] Uploading files with special names crashes seaf-server
[fix] Fix user search when global address book is disabled in CLOUD_MODE
[fix] Avoid timeout in some cases when showing a library trash
Show \"the account is inactive\" when an inactive account tries to log in
[security] Remove viewer.js to show open document files (ods, odt) because viewer.js is not actively maintained and may have potential security bugs (Thanks to Lukas Reschke from Nextcloud GmbH to report the issue)
[fix] Fix PostgreSQL support
Update Django to 1.8.17
Change time_zone to UTC as default
[fix] Fix quota check: users can't upload a file if the quota will be exceeded after uploading the file
[fix] Fix quota check when copy file from one library to another
[fix] Fix default value of created_at in table api2_tokenv2. This bug leads to login problems for desktop and mobile clients.
[fix] Fix a bug in generating a password protected share link
Improve the check that the user running Seafile is the owner of seafile-data. If seafile-data is a symbolic link, check the destination folder instead of the symbolic link.
[ui] Improve rename operation
Admin can set library permissions in admin panel
Show name/contact email in admin panel and enable search user by name/contact email
Add printing style for markdown
The \u201cSeafile\u201d in \"Welcome to Seafile\" message can be customised by SITE_NAME
Improve sorting of files with numbers
[fix] Fix can't view more than 100 files
[api] Add admin API to only return LDAP imported user list
[fix] Fix seaf-fsck.sh --export fails without database
[fix] Fix users with umlauts in their display name breaking group management and api2/account/info on some Linux distributions
Remove user from groups when a user is deleted.
[fix] Fix can't generate shared link for read-only shared library
[fix] Fix can still view file history after library history is set to \"no history\".
[fix] Fix after moving or deleting multiple selected items in the webinterface, the buttons are lost until reloading
Check user before start seafile. The user must be the owner of seafile-data directory
Don't allow registration with emails containing very special characters that may include XSS strings
[fix] During downloading of multiple files/folders, show \"Total size exceeds limits\" instead of \"internal server error\" when the selected items exceed the limit.
[fix] When deleting a share, only check whether the shared-to user exists. This avoids the situation where a share to a user can't be deleted after that user has been deleted.
Add a notification to a user if he/she is added to a group
Improve UI for password change page when forcing password change after admin reset a user's password
[fix] Fix duplicated files show in Firefox if the folder name contains single quote '
Note: in this version, group discussion is not re-implemented yet. It will be available when the stable version is released.
Redesign navigation
Rewrite group management
Improve sorting for large folder
Remember the sorting option for folder
Improve devices page
Update icons for libraries and files
Remove library settings page, re-implement them with dialogs
Remove group avatar
Don't show share menu in top bar when multiple item selected
Auto-focus on username field when loading the login page
Remove self-introduction in user profile
Upgrade to django 1.8
Force the user to change password if adding by admin or password reset by admin
Disable adding non-existing users to a group
"},{"location":"config/","title":"Server Configuration and Customization","text":""},{"location":"config/#config-files","title":"Config Files","text":"
The config files used in Seafile include:
environment variables: contains environment variables; the items here are shared between different components. Newly introduced components, like sdoc-server and notification server, read configurations from environment variables and have no config files.
seafile.conf: contains settings for seafile daemon and fileserver.
seahub_settings.py: contains settings for Seahub
seafevents.conf: contains settings for background tasks and file search.
You can also modify most of the config items via the web interface. These config items are saved in a database table (seahub-db/constance_config) and have a higher priority than the items in config files.
"},{"location":"config/#the-design-of-configure-options","title":"The design of configure options","text":"
There are now three places where you can configure Seafile server:
environment variables
config files
via web interface
The web interface has the highest priority. It contains a subset of end-user oriented settings. In practice, you can disable the settings page in the web interface for simplicity.
Environment variables contain system-level settings that are needed when initializing or running Seafile server. Environment variables also fall into three categories:
Initialization variables that are used to generate config files when Seafile server runs for the first time.
Variables that are shared and used by multiple components of Seafile server.
Variables that are used to generate config files and are later also needed by components that have no corresponding config files.
The variables in the first category can be deleted after initialization. In the future, we will make more components read their config from environment variables, so that the third category is no longer needed.
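The priority between the web interface and config files described above can be sketched as a small lookup (purely illustrative; this is not Seafile's actual code):

```python
def resolve_setting(name, web_overrides, config_file):
    """Illustrative sketch of Seafile's setting priority:
    items saved via the web interface (the constance_config table)
    override items with the same name in config files."""
    if name in web_overrides:
        return web_overrides[name]
    return config_file[name]

# The web-interface value wins over the config-file value.
value = resolve_setting(
    "ENABLE_SIGNUP",
    web_overrides={"ENABLE_SIGNUP": False},
    config_file={"ENABLE_SIGNUP": True},
)
```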
"},{"location":"config/admin_roles_permissions/","title":"Roles and Permissions Support","text":"
You can add/edit roles and permissions for administrators. Seafile has four built-in admin roles:
default_admin, has all permissions.
system_admin, can only view system info and configure the system.
daily_admin, can only view system info, view statistic, manage library/user/group, view user log.
audit_admin, can only view system info and admin log.
All administrators have the default_admin role with all permissions by default. If you assign an administrator another admin role, the administrator will only have the permissions you configured to True.
Seafile supports eight permissions for now. Their configuration is very similar to that of common user roles; you can customize it by adding the following settings to seahub_settings.py.
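As a sketch, such a configuration in seahub_settings.py could look like the following (the setting name ENABLED_ADMIN_ROLE_PERMISSIONS and the permission keys follow Seafile's admin-role convention; verify them against your Seafile version):

```python
# seahub_settings.py -- custom admin role permissions (sketch;
# verify the setting name and permission keys for your version).
ENABLED_ADMIN_ROLE_PERMISSIONS = {
    'daily_admin': {
        'can_view_system_info': True,
        'can_view_statistic': True,
        'can_manage_library': True,
        'can_manage_user': True,
        'can_manage_group': True,
        'can_view_user_log': True,
    },
    'audit_admin': {
        'can_view_system_info': True,
        'can_view_admin_log': True,
    },
}
```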
Seafile Server supports the following external authentication types:
LDAP (Auth and Sync)
OAuth
Shibboleth
SAML
Since 11.0 version, switching between the types is possible, but any switch requires modifications of Seafile's databases.
Note
Before manually manipulating your database, make a database backup, so you can restore your system if anything goes wrong!
See more about make a database backup.
"},{"location":"config/auth_switch/#migrating-from-local-user-database-to-external-authentication","title":"Migrating from local user database to external authentication","text":"
As an organisation grows and its IT infrastructure matures, migrating from local authentication to an external authentication like LDAP, SAML or OAuth is a common requirement. Fortunately, the switch is comparatively simple.
Configure and test the desired external authentication. Note the name of the provider you use in the config file. The user to be migrated should already be able to log in with this new authentication type, but he will be created as a new user with a new unique identifier, so he will not have access to his existing libraries. Note the uid from the social_auth_usersocialauth table. Delete this new, still empty user again.
Determine the ID of the user to be migrated in ccnet_db.EmailUser. For users created before version 11, the ID should be the user's email; for users created with version 11 or later, the ID should be a string like xxx@auth.local.
Replace the password hash with an exclamation mark.
Create a new entry in social_auth_usersocialauth with the xxx@auth.local, your provider and the uid.
The login with the password stored in the local database is not possible anymore. After logging in via external authentication, the user has access to all his previous libraries.
This example shows how to migrate the user with the username 12ae56789f1e4c8d8e1c31415867317c@auth.local from local database authentication to OAuth. The OAuth authentication is configured in seahub_settings.py with the provider name authentik-oauth. The uid of the user inside the Identity Provider is HR12345.
This is what the database looks like before these commands must be executed:
mysql> select email,left(passwd,25) from EmailUser where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+------------------------------+\n| email | left(passwd,25) |\n+---------------------------------------------+------------------------------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | PBKDF2SHA256$10000$4cdda6... |\n+---------------------------------------------+------------------------------+\n\nmysql> update EmailUser set passwd = '!' where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n\nmysql> insert into `social_auth_usersocialauth` (`username`, `provider`, `uid`, `extra_data`) values ('12ae56789f1e4c8d8e1c31415867317c@auth.local', 'authentik-oauth', 'HR12345', '');\n
Note
The extra_data field stores the user's information returned from the provider. For most providers, the extra_data field is usually an empty string. Since version 11.0.3-Pro, the default value of the extra_data field is NULL.
Afterwards the databases should look like this:
mysql> select email,passwd from EmailUser where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+------- +\n| email | passwd |\n+---------------------------------------------+--------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | ! |\n+---------------------------------------------+--------+\n\nmysql> select username,provider,uid from social_auth_usersocialauth where username = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+-----------------+---------+\n| username | provider | uid |\n+---------------------------------------------+-----------------+---------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | authentik-oauth | HR12345 |\n+---------------------------------------------+-----------------+---------+\n
"},{"location":"config/auth_switch/#migrating-from-one-external-authentication-to-another","title":"Migrating from one external authentication to another","text":"
First configure the two external authentications and test them with a dummy user. Then, to migrate all the existing users you only need to make changes to the social_auth_usersocialauth table. No entries need to be deleted or created. You only need to modify the existing ones. The xxx@auth.local remains the same, you only need to replace the provider and the uid.
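For example, assuming the same user as above is being switched to a new OAuth provider named new-oauth whose IdP reports the uid NEW12345 (both values are placeholders), the change could look like:

```sql
-- Switch the external authentication provider for one user.
-- Provider name and uid are placeholders; adjust to your setup.
UPDATE social_auth_usersocialauth
SET provider = 'new-oauth', uid = 'NEW12345'
WHERE username = '12ae56789f1e4c8d8e1c31415867317c@auth.local';
```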
"},{"location":"config/auth_switch/#migrating-from-external-authentication-to-local-user-database","title":"Migrating from external authentication to local user database","text":"
First, delete the entry in the social_auth_usersocialauth table that belongs to the particular user.
Then you can reset the user's password, e.g. via the web interface. The user will be assigned a local password and from now on the authentication against the local database of Seafile will be done.
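Using the example user from the previous section, the first step could look like this (sketch):

```sql
-- Remove the external-authentication mapping for one user;
-- afterwards the password can be reset via the web interface.
DELETE FROM social_auth_usersocialauth
WHERE username = '12ae56789f1e4c8d8e1c31415867317c@auth.local';
```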
More details about this option will follow soon.
"},{"location":"config/auto_login_seadrive/","title":"Auto Login to SeaDrive on Windows","text":"
Kerberos is a widely used single sign-on (SSO) protocol. Auto login relies on a Kerberos service. For server configuration, please read the remote user authentication documentation. You have to configure Apache to authenticate with Kerberos; this is out of the scope of this documentation, but you can, for example, refer to this webpage.
The client machine has to join the AD domain. In a Windows domain, the Kerberos Key Distribution Center (KDC) is implemented on the domain service. Since the client machine has been authenticated by the KDC when a Windows user logs in, a Kerberos ticket will be generated for the current user without the need for another login in the browser.
When a program using the WinHttp API tries to connect to a server, it can log in automatically through Integrated Windows Authentication. Internet Explorer and SeaDrive both use this mechanism.
The details of Integrated Windows Authentication are described below:
Decide whether or not to use IWA according to the address and Internet Options. (more in next section)
Send a request to the server (e.g. http://test.seafile.com/sso)
The server returns an HTTP 401 unauthorized response with the Negotiate header which includes an authentication protocol.
The WinHttp API will try to use Kerberos first, if there is a valid ticket from KDC. The request will be sent again, together with the ticket in an HTTP header.
Then, Apache can check the ticket with KDC, and extract the username from it. The username will be passed to SeaHub for a successful auto login.
If the WinHttp API fails to get a ticket, it will then try the NTLM protocol by sending an HTTP request with a Negotiate NTLMSSP token in the header. Since Apache does not support the NTLM protocol, it returns an HTTP 401 unauthorized response and stops the negotiation. At this point, the browser pops up a login dialog, which means auto login has failed.
In short:
The client machine has to join the AD domain.
The Internet Options has to be configured properly.
The WinHttp API should be able to get a valid ticket from KDC. Make sure you use the correct server address (e.g. test.seafile.com) when you generate keytab file on the domain controller.
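On the domain controller, a keytab for the server address is typically generated with the ktpass tool; a sketch (realm, account name and output file are placeholders; check the ktpass options for your Windows Server version):

```
ktpass -princ HTTP/test.seafile.com@EXAMPLE.COM ^
  -mapuser EXAMPLE\seafile-sso -crypto AES256-SHA1 ^
  -ptype KRB5_NT_PRINCIPAL -pass * -out seafile.keytab
```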
"},{"location":"config/auto_login_seadrive/#auto-login-on-internet-explorer","title":"Auto Login on Internet Explorer","text":"
The Internet Options have to be configured as follows:
Open \"Internet Options\", select \"Security\" tab, select \"Local Intranet\" zone.
\"Sites\" -> \"Advanced\" -> \"Add this website to zone\". This is the place where we fill the address (e.g. http://test.seafile.com)
\"Security level for this zone\" -> \"Custom level...\" -> \"Automatic log-on with current username and password\".
Note
The above configuration requires a reboot to take effect.
Next, test the auto login function in Internet Explorer: visit the website and click the \"Single Sign-On\" link. It should log in directly; otherwise, auto login is malfunctioning.
Note
The address used in the test must be the same as the address specified in the keytab file. Otherwise, the client machine can't get a valid ticket from Kerberos.
"},{"location":"config/auto_login_seadrive/#auto-login-on-seadrive","title":"Auto Login on SeaDrive","text":"
SeaDrive will use the Kerberos login configuration from the Windows Registry under HKEY_CURRENT_USER/SOFTWARE/SeaDrive.
Key : PreconfigureServerAddr\nType : REG_SZ\nValue : <the url of seafile server>\n\nKey : PreconfigureUseKerberosLogin\nType : REG_SZ\nValue : <0|1> // 0 for normal login, 1 for SSO login\n
The system wide configuration path is located at HKEY_LOCAL_MACHINE/SOFTWARE/Wow6432Node/SeaDrive.
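These registry values can also be set from a command prompt with the standard reg add tool, for example (the URL is a placeholder):

```
reg add "HKCU\SOFTWARE\SeaDrive" /v PreconfigureServerAddr /t REG_SZ /d "https://seafile.example.com"
reg add "HKCU\SOFTWARE\SeaDrive" /v PreconfigureUseKerberosLogin /t REG_SZ /d "1"
```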
Ccnet is the internal RPC framework used by Seafile server and also manages the user database. A few useful options are in ccnet.conf.
ccnet.conf is removed in version 12.0
"},{"location":"config/ccnet-conf/#options-that-moved-to-env-file","title":"Options that moved to .env file","text":"
Because ccnet.conf was removed in version 12.0, the following information is read from the .env file
SEAFILE_MYSQL_DB_USER: The database user, the default is seafile\nSEAFILE_MYSQL_DB_PASSWORD: The database password\nSEAFILE_MYSQL_DB_HOST: The database host\nSEAFILE_MYSQL_DB_CCNET_DB_NAME: The database name for ccnet db, the default is ccnet_db\n
"},{"location":"config/ccnet-conf/#changing-mysql-connection-pool-size","title":"Changing MySQL Connection Pool Size","text":"
In version 12.0, the following information is read from the same option in seafile.conf
When you configure ccnet to use MySQL, the default connection pool size is 100, which should be enough for most use cases. You can change this value by adding following options to ccnet.conf:
[Database]\n......\n# Use larger connection pool\nMAX_CONNECTIONS = 200\n
When use_ssl is set to true and skip_verify to false, the MySQL server certificate is verified against the CA configured in ca_path. ca_path is the path to a trusted CA certificate used to sign MySQL server certificates. When skip_verify is true, there is no need to add the ca_path option, and the MySQL server certificate won't be verified.
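Put together, a [Database] section of seafile.conf with verified TLS might look like this (the CA path is a placeholder):

```
[Database]
......
use_ssl = true
skip_verify = false
ca_path = /etc/mysql/certs/ca.pem
```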
"},{"location":"config/config_seafile_with_ADFS/","title":"config seafile with ADFS","text":""},{"location":"config/config_seafile_with_ADFS/#requirements","title":"Requirements","text":"
To use ADFS to log in to your Seafile, you need the following components:
A Windows Server with ADFS installed. For configuring and installing ADFS, you can see this article.
A valid SSL certificate for ADFS server, and here we use adfs-server.adfs.com as the domain name example.
A valid SSL certificate for Seafile server, and here we use demo.seafile.com as the domain name example.
These x.509 certs are used to sign and encrypt elements like NameID and Metadata for SAML. \n\n Then copy these two files to **<seafile-install-path>/seahub-data/certs**. (if the certs folder not exists, create it.)\n\n2. x.509 cert from IdP (Identity Provider)\n\n 1. Log into the ADFS server and open the ADFS management.\n\n 1. Double click **Service** and choose **Certificates**.\n\n 1. Export the **Token-Signing** certificate:\n\n 1. Right-click the certificate and select **View Certificate**.\n 1. Select the **Details** tab.\n 1. Click **Copy to File** (select **DER encoded binary X.509**).\n\n 1. Convert this certificate to PEM format, rename it to **idp.crt**\n\n 1. Then copy it to **<seafile-install-path>/seahub-data/certs**.\n\n### Prepare IdP Metadata File\n\n1. Open https://adfs-server.adfs.com/federationmetadata/2007-06/federationmetadata.xml\n\n1. Save this xml file, rename it to **idp_federation_metadata.xml**\n\n1. Copy it to **<seafile-install-path>/seahub-data/certs**.\n\n### Install Requirements on Seafile Server\n\n- For Ubuntu 16.04\n
### Config Seafile\n\nAdd the following lines to **seahub_settings.py**\n
from os import path import saml2 import saml2.saml"},{"location":"config/config_seafile_with_ADFS/#update-following-lines-according-to-your-situation","title":"update following lines according to your situation","text":"
CERTS_DIR = '/seahub-data/certs' SP_SERVICE_URL = 'https://demo.seafile.com' XMLSEC_BINARY = '/usr/local/bin/xmlsec1' ATTRIBUTE_MAP_DIR = '/seafile-server-latest/seahub-extra/seahub_extra/adfs_auth/attribute-maps' SAML_ATTRIBUTE_MAPPING = { 'DisplayName': ('display_name', ), 'ContactEmail': ('contact_email', ), 'Deparment': ('department', ), 'Telephone': ('telephone', ), }"},{"location":"config/config_seafile_with_ADFS/#update-the-idp-section-in-sampl_config-according-to-your-situation-and-leave-others-as-default","title":"update the 'idp' section in SAMPL_CONFIG according to your situation, and leave others as default","text":"
ENABLE_ADFS_LOGIN = True EXTRA_AUTHENTICATION_BACKENDS = ( 'seahub_extra.adfs_auth.backends.Saml2Backend', ) SAML_USE_NAME_ID_AS_USERNAME = True LOGIN_REDIRECT_URL = '/saml2/complete/' SAML_CONFIG = { # full path to the xmlsec1 binary programm 'xmlsec_binary': XMLSEC_BINARY,
'allow_unknown_attributes': True,\n\n# your entity id, usually your subdomain plus the url to the metadata view\n'entityid': SP_SERVICE_URL + '/saml2/metadata/',\n\n# directory with attribute mapping\n'attribute_map_dir': ATTRIBUTE_MAP_DIR,\n\n# this block states what services we provide\n'service': {\n # we are just a lonely SP\n 'sp' : {\n \"allow_unsolicited\": True,\n 'name': 'Federated Seafile Service',\n 'name_id_format': saml2.saml.NAMEID_FORMAT_EMAILADDRESS,\n 'endpoints': {\n # url and binding to the assetion consumer service view\n # do not change the binding or service name\n 'assertion_consumer_service': [\n (SP_SERVICE_URL + '/saml2/acs/',\n saml2.BINDING_HTTP_POST),\n ],\n # url and binding to the single logout service view\n # do not change the binding or service name\n 'single_logout_service': [\n (SP_SERVICE_URL + '/saml2/ls/',\n saml2.BINDING_HTTP_REDIRECT),\n (SP_SERVICE_URL + '/saml2/ls/post',\n saml2.BINDING_HTTP_POST),\n ],\n },\n\n # attributes that this project need to identify a user\n 'required_attributes': [\"uid\"],\n\n # attributes that may be useful to have but not required\n 'optional_attributes': ['eduPersonAffiliation', ],\n\n # in this section the list of IdPs we talk to are defined\n 'idp': {\n # we do not need a WAYF service since there is\n # only an IdP defined here. 
This IdP should be\n # present in our metadata\n\n # the keys of this dictionary are entity ids\n 'https://adfs-server.adfs.com/federationmetadata/2007-06/federationmetadata.xml': {\n 'single_sign_on_service': {\n saml2.BINDING_HTTP_REDIRECT: 'https://adfs-server.adfs.com/adfs/ls/idpinitiatedsignon.aspx',\n },\n 'single_logout_service': {\n saml2.BINDING_HTTP_REDIRECT: 'https://adfs-server.adfs.com/adfs/ls/?wa=wsignout1.0',\n },\n },\n },\n },\n},\n\n# where the remote metadata is stored\n'metadata': {\n 'local': [path.join(CERTS_DIR, 'idp_federation_metadata.xml')],\n},\n\n# set to 1 to output debugging information\n'debug': 1,\n\n# Signing\n'key_file': '', \n'cert_file': path.join(CERTS_DIR, 'idp.crt'), # from IdP\n\n# Encryption\n'encryption_keypairs': [{\n 'key_file': path.join(CERTS_DIR, 'sp.key'), # private part\n 'cert_file': path.join(CERTS_DIR, 'sp.crt'), # public part\n}],\n\n'valid_for': 24, # how long is our metadata valid\n
Relying Party Trust is the connection between Seafile and ADFS.
Log into the ADFS server and open the ADFS management.
Double click Trust Relationships, then right click Relying Party Trusts, select Add Relying Party Trust\u2026.
Select Import data about the relying party published online or on a local network, and input https://demo.seafile.com/saml2/metadata/ as the Federation metadata address.
Then click Next until Finish.
Add Relying Party Claim Rules
Relying Party Claim Rules is used for attribute communication between Seafile and users in Windows Domain.
Important: Users in the Windows domain must have the E-mail value set.
Right-click on the relying party trust and select Edit Claim Rules...
On the Issuance Transform Rules tab select Add Rules...
Select Send LDAP Attribute as Claims as the claim rule template to use.
Give the claim a name such as LDAP Attributes.
Set the Attribute Store to Active Directory, the LDAP Attribute to E-Mail-Addresses, and the Outgoing Claim Type to E-mail Address.
Select Finish.
Click Add Rule... again.
Select Transform an Incoming Claim.
Give it a name such as Email to Name ID.
Incoming claim type should be E-mail Address (it must match the Outgoing Claim Type in rule #1).
The Outgoing claim type is Name ID (this is required in Seafile settings policy 'name_id_format': saml2.saml.NAMEID_FORMAT_EMAILADDRESS).
Note: You can copy password_reset_email.html to seahub-data/custom/templates/registration/password_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
"},{"location":"config/customize_email_notifications/#system-admin-add-new-member","title":"System admin add new member","text":"
Note: You can copy user_add_email.html to seahub-data/custom/templates/sysadmin/user_add_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
"},{"location":"config/customize_email_notifications/#system-admin-reset-user-password","title":"System admin reset user password","text":"
Note: You can copy user_reset_email.html to seahub-data/custom/templates/sysadmin/user_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
"},{"location":"config/details_about_file_search/","title":"Details about File Search","text":""},{"location":"config/details_about_file_search/#search-options","title":"Search Options","text":"
The following options can be set in seafevents.conf to control the behaviors of file search. You need to restart seafile and seahub to make them take effect.
[INDEX FILES]\n## must be \"true\" to enable search\nenabled = true\n\n## The interval the search index is updated. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval=10m\n\n## this is for improving the search speed\nhighlight = fvh \n\n## If true, indexes the contents of office/pdf files while updating search index\n## Note: If you change this option from \"false\" to \"true\", then you need to clear the search index and update the index again.\nindex_office_pdf=false\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch through username and password, you need to configure username and password for the Elasticsearch server\nusername = elastic # username to connect to Elasticsearch\npassword = elastic_password # password to connect to Elasticsearch\n\n## From 9.0.7 pro, Seafile supports connecting to elasticsearch via HTTPS, you need to configure HTTPS for the Elasticsearch server\nscheme = https # The default is http. If the Elasticsearch server is not configured with HTTPS, the scheme and cafile do not need to be configured\ncafile = path/to/cert.pem # The certificate path for user authentication. If the Elasticsearch server does not enable certificate authentication, do not need to be configured\n\n## From version 11.0.5 Pro, you can custom ElasticSearch index names for distinct instances when intergrating multiple Seafile servers to a single ElasticSearch Server.\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\n
"},{"location":"config/details_about_file_search/#enable-full-text-search-for-officepdf-files","title":"Enable full text search for Office/PDF files","text":"
Full text search is not enabled by default to save system resources. If you want to enable it, you need to follow the instructions below.
"},{"location":"config/details_about_file_search/#modify-seafeventsconf","title":"Modify seafevents.conf","text":"Deploy in DockerDeploy from binary packages
cd /opt/seafile-data/seafile/conf\nnano seafevents.conf\n
"},{"location":"config/details_about_file_search/#restart-seafile-server","title":"Restart Seafile server","text":"Deploy in DockerDeploy from binary packages
docker exec -it seafile bash\ncd /scripts\n./seafile.sh restart\n\n# delete the existing search index and recreate it\n./pro/pro.py search --clear\n./pro/pro.py search --update\n
cd /opt/seafile/seafile-server-latest\n./seafile.sh restart\n\n# delete the existing search index and recreate it\n./pro/pro.py search --clear\n./pro/pro.py search --update\n
"},{"location":"config/details_about_file_search/#common-problems","title":"Common problems","text":""},{"location":"config/details_about_file_search/#how-to-rebuild-the-index-if-something-went-wrong","title":"How to rebuild the index if something went wrong","text":"
cd /opt/seafile/seafile-server-latest\n./pro/pro.py search --clear\n./pro/pro.py search --update\n
Tip
If this does not work, you can try the following steps:
Stop Seafile
Remove the old search index rm -rf pro-data/search
Restart Seafile
Wait one minute then run ./pro/pro.py search --update
"},{"location":"config/details_about_file_search/#access-the-aws-elasticsearch-service-using-https","title":"Access the AWS elasticsearch service using HTTPS","text":"
Create an elasticsearch service on AWS according to the documentation.
Configure the seafevents.conf:
[INDEX FILES]\nenabled = true\ninterval = 10m\nindex_office_pdf=true\nes_host = your domain endpoint(for example, https://search-my-domain.us-east-1.es.amazonaws.com)\nes_port = 443\nscheme = https\nusername = master user\npassword = password\nhighlight = fvh\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\n
Note
The version of the Python third-party package elasticsearch cannot be greater than 7.14.0, otherwise the elasticsearch service cannot be accessed: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/samplecode.html#client-compatibility, https://github.com/elastic/elasticsearch-py/pull/1623.
"},{"location":"config/details_about_file_search/#i-get-no-result-when-i-search-a-keyword","title":"I get no result when I search a keyword","text":"
The search index is updated every 10 minutes by default. So before the first index update is performed, you get nothing no matter what you search.
COMPOSE_FILE: The .yml files for the components of Seafile-docker; each .yml file must be separated by the symbol defined in COMPOSE_PATH_SEPARATOR. The core components are defined in seafile-server.yml and caddy.yml, which must always be included in this variable.
COMPOSE_PATH_SEPARATOR: The symbol used to separate the .yml files in the COMPOSE_FILE variable; default is ','.
CACHE_PROVIDER: The type of cache server used for Seafile. The available options are redis and memcached. Since Seafile 13, it is recommended to use redis as the cache service to support new features, and memcached is no longer integrated into Seafile Docker by default. Default is redis.
SEAF_SERVER_STORAGE_TYPE: The storage backend for Seafile data. Available options are disk (i.e., local disk), s3 and multiple (see the details of multiple storage backends).
S3_SS_BUCKET: S3 storage bucket for SeaSearch data (valid when the service is enabled)
S3_MD_BUCKET: S3 storage bucket for metadata-server data (valid when the service is available)
S3_KEY_ID: S3 storage backend key ID
S3_SECRET_KEY: S3 storage backend secret key
S3_USE_V4_SIGNATURE: Use the v4 protocol of S3 if enabled, default is true
S3_AWS_REGION: Region of your buckets (AWS only), default is us-east-1.
S3_HOST: Host of your buckets (required when not using AWS).
S3_USE_HTTPS: Use HTTPS connections to S3 if enabled, default is true
S3_PATH_STYLE_REQUEST: This option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is virtual host style, such as https://bucketname.s3.amazonaws.com/object, but this style relies on advanced DNS server setup, so most self-hosted storage systems only implement the path-style format. Default is false.
S3_SSE_C_KEY: A 32-character random string used as the encryption key; it can be generated with openssl rand -base64 24, but any 32-character random string works. If you enable SSE-C, the V4 authentication protocol and HTTPS are required.
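As an illustration, the same kind of 32-character key that `openssl rand -base64 24` produces can be generated in Python. This is just a sketch for generating a suitable SSE-C key; it is not part of Seafile itself:

```python
import base64
import os

def make_sse_c_key() -> str:
    # 24 random bytes encode to exactly 32 base64 characters (no padding),
    # matching the output shape of `openssl rand -base64 24`.
    return base64.b64encode(os.urandom(24)).decode("ascii")

key = make_sse_c_key()
print(len(key))  # 32
```

Any source of 32 random characters is acceptable; base64 of 24 random bytes is simply a convenient way to get one.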
Easier to configure S3 for Seafile and its components
Since Seafile Pro 13.0, to make it easier to deploy Seafile's related extension components and other services in the future, a section is provided in .env that stores the S3 configurations for Seafile and some extension components (such as SeaSearch and the metadata server). You can locate it by the title bar Storage configurations for S3.
S3 configurations in .env only support single S3 storage backend mode
The Seafile server only supports configuring S3 in .env for the single S3 storage backend mode (i.e., when SEAF_SERVER_STORAGE_TYPE=s3). If you would like to use another storage backend (e.g., Ceph, Swift) or other settings that can only be set in seafile.conf (like multiple storage backends), please set SEAF_SERVER_STORAGE_TYPE to multiple, and set MD_STORAGE_TYPE and SS_STORAGE_TYPE according to your configurations.
The S3 configurations are only valid when at least one STORAGE_TYPE is set to s3
There are three STORAGE_TYPE variables provided in .env: - SEAF_SERVER_STORAGE_TYPE (pro & cluster) - MD_STORAGE_TYPE (pro, see the Metadata server section for the details) - SS_STORAGE_TYPE (pro, see the SeaSearch section for the details)
You have to specify at least one of them as s3 for the above configuration to take effect.
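The gating rule above can be sketched as a small check. This is illustrative only: the variable names come from the .env description above, and the function itself is hypothetical, not Seafile code:

```python
# The S3 section in .env only takes effect if at least one of these
# three STORAGE_TYPE variables is set to "s3".
STORAGE_TYPE_VARS = ("SEAF_SERVER_STORAGE_TYPE", "MD_STORAGE_TYPE", "SS_STORAGE_TYPE")

def s3_env_config_active(env: dict) -> bool:
    """Return True if the shared S3 configuration in .env applies."""
    return any(env.get(var) == "s3" for var in STORAGE_TYPE_VARS)

print(s3_env_config_active({"SEAF_SERVER_STORAGE_TYPE": "s3"}))    # True
print(s3_env_config_active({"SEAF_SERVER_STORAGE_TYPE": "disk"}))  # False
```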
NOTIFICATION_SERVER_URL: The notification server url, leave blank to disable it (default).
In Seafile cluster or standalone-deployment notification server
In addition to NOTIFICATION_SERVER_URL, you also need to specify INNER_NOTIFICATION_SERVER_URL=$NOTIFICATION_SERVER_URL, which will be used for the connection between Seafile server and notification server.
CLUSTER_INIT_MODE: (only valid in pro edition at first deployment). Cluster initialization mode, in which the configuration files necessary for the service to run are generated (but the service itself is not started). If the configuration files already exist, no operation is performed. The default value is true. Once the configuration files have been generated, be sure to set this item to false.
CLUSTER_INIT_ES_HOST: (only valid in pro edition at deploying first time). Your cluster Elasticsearch server host.
CLUSTER_INIT_ES_PORT: (only valid in pro edition at deploying first time). Your cluster Elasticsearch server port. Default is 9200.
CLUSTER_MODE: Seafile service node type, i.e., frontend (default) or backend.
"},{"location":"config/ldap_in_ce/","title":"Configure Seafile to use LDAP","text":"
This documentation is for the Community Edition. If you're using Pro Edition, please refer to the Seafile Pro documentation
"},{"location":"config/ldap_in_ce/#how-does-ldap-user-management-work-in-seafile","title":"How does LDAP User Management work in Seafile","text":"
When Seafile is integrated with LDAP, users in the system can be divided into two tiers:
Users within Seafile's internal user database. Some attributes are attached to these users, such as whether the user is a system admin and whether the account is activated.
Users in the LDAP server. These are all the intended users of Seafile inside the LDAP server. Seafile doesn't manipulate these users directly; it has to import them into its internal database before setting attributes on them.
When Seafile counts the number of users in the system, it only counts the activated users in its internal database.
The only requirement for Seafile to use LDAP for authentication is that there must be a unique identifier for each user in the LDAP server. This ID should also be user-friendly, as users will use it as their username when logging in. Below are some usual options for this unique identifier:
Email address: this is the most common choice. Most organizations assign unique email address for each member.
UserPrincipalName: this is a user attribute only available in Active Directory. Its format is user-login-name@domain-name, e.g. john@example.com. It's not a real email address, but it works fine as the unique identifier.
The identifier is stored in the table social_auth_usersocialauth to map the identifier to the internal user ID in Seafile. When this ID is changed in LDAP for a user, you only need to update the social_auth_usersocialauth table.
variable description LDAP_SERVER_URL The URL of LDAP server LDAP_BASE_DN The root node of users who can log in to Seafile in the LDAP server LDAP_ADMIN_DN DN of the administrator used to query the LDAP server for information. For OpenLDAP, it may be cn=admin,dc=example,dc=com LDAP_ADMIN_PASSWORD Password of LDAP_ADMIN_DN LDAP_PROVIDER Identify the source of the user, used in the table social_auth_usersocialauth, defaults to 'ldap' LDAP_LOGIN_ATTR User's attribute used to log in to Seafile. It should be a unique identifier for the user in the LDAP server. Learn more about this ID from the descriptions at the beginning of this section. LDAP_CONTACT_EMAIL_ATTR LDAP user's contact_email attribute LDAP_USER_ROLE_ATTR LDAP user's role attribute LDAP_USER_FIRST_NAME_ATTR Attribute for user's first name. It's \"givenName\" by default. LDAP_USER_LAST_NAME_ATTR Attribute for user's last name. It's \"sn\" by default. LDAP_USER_NAME_REVERSE In some languages, such as Chinese, the display order of the first and last name is reversed. Set this option if you need it. LDAP_FILTER Additional filter conditions. Users who meet the filter conditions can log in, otherwise they cannot log in.
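As a worked example of the options in the table above, a minimal LDAP block for seahub_settings.py might look like the following. Every value here is a hypothetical placeholder that you must adjust for your own directory:

```python
# Hypothetical example values for illustration only -- adjust for your server.
LDAP_SERVER_URL = 'ldap://192.168.0.1'            # URL of your LDAP server
LDAP_BASE_DN = 'ou=users,dc=example,dc=com'       # root node of Seafile users
LDAP_ADMIN_DN = 'cn=admin,dc=example,dc=com'      # admin DN used for queries
LDAP_ADMIN_PASSWORD = 'secret'                    # password of LDAP_ADMIN_DN
LDAP_PROVIDER = 'ldap'
LDAP_LOGIN_ATTR = 'mail'                          # unique login identifier
LDAP_USER_FIRST_NAME_ATTR = 'givenName'
LDAP_USER_LAST_NAME_ATTR = 'sn'
LDAP_USER_NAME_REVERSE = False
LDAP_FILTER = ''                                  # no extra filter
```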
Tips for choosing LDAP_BASE_DN and LDAP_ADMIN_DN:
To determine the LDAP_BASE_DN, you first have to navigate your organization hierarchy on the domain controller GUI.
If you want to allow all users to use Seafile, you can use cn=users,dc=yourdomain,dc=com as LDAP_BASE_DN (with proper adjustment for your own needs).
If you want to limit users to a certain OU (Organization Unit), you can run the dsquery command on the domain controller to find out the DN for this OU. For example, if the OU is staff, you can run dsquery ou -name staff. More information can be found here.
AD supports the user@domain.name format for the LDAP_ADMIN_DN option. For example, you can use administrator@example.com for LDAP_ADMIN_DN. Sometimes the domain controller doesn't recognize this format. You can still use the dsquery command to find out the user's DN. For example, if the user name is 'seafileuser', run dsquery user -name seafileuser. More information here.
Multiple base DNs are useful when your company has more than one OU using Seafile. You can specify a list of base DNs in the LDAP_BASE_DN option. The DNs are separated by \";\", e.g.
Search filter is very useful when you have a large organization but only a portion of people want to use Seafile. The filter can be given by setting LDAP_FILTER option. The value of this option follows standard LDAP search filter syntax (https://msdn.microsoft.com/en-us/library/aa746475(v=vs.85).aspx).
The final filter used for searching for users is (&($LOGIN_ATTR=*)($LDAP_FILTER)). $LOGIN_ATTR and $LDAP_FILTER will be replaced by your option values.
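The filter composition described above can be sketched as a small helper. This is illustrative only; `compose_user_filter` is a hypothetical name, not a Seafile function:

```python
def compose_user_filter(login_attr: str, ldap_filter: str) -> str:
    """Build the final user-search filter from the two option values."""
    base = f"({login_attr}=*)"
    if ldap_filter:
        # With a filter set, both conditions are AND-ed together.
        return f"(&{base}({ldap_filter}))"
    return base

print(compose_user_filter("mail", "memberOf=CN=group,CN=developers,DC=example,DC=com"))
# (&(mail=*)(memberOf=CN=group,CN=developers,DC=example,DC=com))
```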
For example, with LDAP_FILTER = 'memberOf=CN=group,CN=developers,DC=example,DC=com' set in seahub_settings.py:
The final search filter would be (&(mail=*)(memberOf=CN=group,CN=developers,DC=example,DC=com))
Note that the case of attribute names in the above example is significant. The memberOf attribute is only available in Active Directory.
"},{"location":"config/ldap_in_ce/#limiting-seafile-users-to-a-group-in-active-directory","title":"Limiting Seafile Users to a Group in Active Directory","text":"
You can use the LDAP_FILTER option to limit user scope to a certain AD group.
First, you should find out the DN for the group. Again, we'll use the dsquery command on the domain controller. For example, if group name is 'seafilegroup', run dsquery group -name seafilegroup.
Add below option to seahub_settings.py:
LDAP_FILTER = 'memberOf={output of dsquery command}'\n
"},{"location":"config/ldap_in_ce/#using-tls-connection-to-ldap-server","title":"Using TLS connection to LDAP server","text":"
If your LDAP service supports TLS connections, you can configure LDAP_SERVER_URL as the access address of the ldaps protocol to use TLS to connect to the LDAP service, for example:
LDAP_SERVER_URL = 'ldaps://192.168.0.1:636/'\n
"},{"location":"config/ldap_in_pro/","title":"Configure Seafile Pro Edition to use LDAP","text":""},{"location":"config/ldap_in_pro/#how-does-ldap-user-management-work-in-seafile","title":"How does LDAP User Management work in Seafile","text":"
When Seafile is integrated with LDAP, users in the system can be divided into two tiers:
Users within Seafile's internal user database. Some attributes are attached to these users, such as whether the user is a system admin and whether the account is activated.
Users in the LDAP server. These are all the intended users of Seafile inside the LDAP server. Seafile doesn't manipulate these users directly; it has to import them into its internal database before setting attributes on them.
When Seafile counts the number of users in the system, it only counts the activated users in its internal database.
The only requirement for Seafile to use LDAP for authentication is that there must be a unique identifier for each user in the LDAP server. This ID should also be user-friendly, as users will use it as their username when logging in. Below are some usual options for this unique identifier:
Email address: this is the most common choice. Most organizations assign unique email address for each member.
UserPrincipalName: this is a user attribute only available in Active Directory. Its format is user-login-name@domain-name, e.g. john@example.com. It's not a real email address, but it works fine as the unique identifier.
The identifier is stored in the table social_auth_usersocialauth to map the identifier to the internal user ID in Seafile. When this ID is changed in LDAP for a user, you only need to update the social_auth_usersocialauth table.
variable description LDAP_SERVER_URL The URL of LDAP server LDAP_BASE_DN The root node of users who can log in to Seafile in the LDAP server LDAP_ADMIN_DN DN of the administrator used to query the LDAP server for information. For OpenLDAP, it may be cn=admin,dc=example,dc=com LDAP_ADMIN_PASSWORD Password of LDAP_ADMIN_DN LDAP_PROVIDER Identify the source of the user, used in the table social_auth_usersocialauth, defaults to 'ldap' LDAP_LOGIN_ATTR User's attribute used to log in to Seafile. It should be a unique identifier for the user in the LDAP server. Learn more about this ID from the descriptions at the beginning of this section. LDAP_CONTACT_EMAIL_ATTR LDAP user's contact_email attribute LDAP_USER_ROLE_ATTR LDAP user's role attribute LDAP_USER_FIRST_NAME_ATTR Attribute for user's first name. It's \"givenName\" by default. LDAP_USER_LAST_NAME_ATTR Attribute for user's last name. It's \"sn\" by default. LDAP_USER_NAME_REVERSE In some languages, such as Chinese, the display order of the first and last name is reversed. Set this option if you need it. LDAP_FILTER Additional filter conditions. Users who meet the filter conditions can log in, otherwise they cannot log in.
Tips for choosing LDAP_BASE_DN and LDAP_ADMIN_DN:
To determine the LDAP_BASE_DN, you first have to navigate your organization hierarchy on the domain controller GUI.
If you want to allow all users to use Seafile, you can use cn=users,dc=yourdomain,dc=com as LDAP_BASE_DN (with proper adjustment for your own needs).
If you want to limit users to a certain OU (Organization Unit), you can run the dsquery command on the domain controller to find out the DN for this OU. For example, if the OU is staff, you can run dsquery ou -name staff. More information can be found here.
AD supports the user@domain.name format for the LDAP_ADMIN_DN option. For example, you can use administrator@example.com for LDAP_ADMIN_DN. Sometimes the domain controller doesn't recognize this format. You can still use the dsquery command to find out the user's DN. For example, if the user name is 'seafileuser', run dsquery user -name seafileuser. More information here.
"},{"location":"config/ldap_in_pro/#setting-up-ldap-user-sync-optional","title":"Setting Up LDAP User Sync (optional)","text":"
In Seafile Pro, in addition to importing users into the internal database when they log in, you can also configure Seafile to periodically sync user information from the LDAP server into the internal database.
A user's full name, department and contact email address can be synced to the internal database, which makes it easier for users to search for a specific user. A user's Windows or Unix login ID can also be synced, allowing them to log in with their familiar login ID. When a user is removed from LDAP, the corresponding user in Seafile is deactivated; otherwise, they could still sync files with the Seafile client or access the web interface. After synchronization completes, you can see a user's full name, department and contact email on their profile page.
Variable Description LDAP_SYNC_INTERVAL The interval to sync. Unit is minutes. Defaults to 60 minutes. ENABLE_LDAP_USER_SYNC set to \"true\" if you want to enable ldap user synchronization LDAP_USER_OBJECT_CLASS This is the name of the class used to search for user objects. In Active Directory, it's usually \"person\". The default value is \"person\". LDAP_DEPT_ATTR Attribute for department info. LDAP_UID_ATTR Attribute for Windows login name. If this is synchronized, users can also log in with their Windows login name. In AD, the attribute sAMAccountName can be used as UID_ATTR. The attribute will be stored as login_id in Seafile (in seahub_db.profile_profile table). LDAP_AUTO_REACTIVATE_USERS Whether to auto activate deactivated users, defaults to 'true' LDAP_USE_PAGED_RESULT Whether to use the pagination extension. It is useful when you have more than 1000 users in the LDAP server. IMPORT_NEW_USER Whether to import new users when syncing users. ACTIVE_USER_WHEN_IMPORT Whether to activate the user automatically when imported. DEACTIVE_USER_IF_NOTFOUND set to \"true\" if you want to deactivate a user when they are deleted from the AD server. ENABLE_EXTRA_USER_INFO_SYNC Enable synchronization of additional user information, including the user's full name, department, and Windows login name, etc."},{"location":"config/ldap_in_pro/#importing-users-without-activating-them","title":"Importing Users without Activating Them","text":"
The users imported with the above configuration will be activated by default. For some organizations with a large number of users, it may be desirable to import user information (such as full names) without activating the imported users, since activating all imported users would require licenses for all users in LDAP, which may not be affordable.
Seafile provides a combination of options for such use case. You can modify below option in seahub_settings.py:
ACTIVATE_USER_WHEN_IMPORT = False\n
This prevents Seafile from activating imported users. Then, add below option to seahub_settings.py:
ACTIVATE_AFTER_FIRST_LOGIN = True\n
This option will automatically activate users when they log in to Seafile for the first time.
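Combined, the two options give the following decision logic. This is an illustrative sketch, not Seafile's actual code; `should_activate` is a hypothetical helper:

```python
def should_activate(event: str, activate_when_import: bool,
                    activate_after_first_login: bool) -> bool:
    """Decide whether a user becomes active at a given point in their lifecycle."""
    if event == "import":
        return activate_when_import
    if event == "first_login":
        return activate_after_first_login
    return False

# With ACTIVATE_USER_WHEN_IMPORT = False and ACTIVATE_AFTER_FIRST_LOGIN = True:
print(should_activate("import", False, True))       # False (imported inactive)
print(should_activate("first_login", False, True))  # True (activated at first login)
```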
When you set the DEACTIVE_USER_IF_NOTFOUND option, a user will be deactivated when they are not found in the LDAP server. By default, even after this user reappears in the LDAP server, they won't be reactivated automatically. This is to prevent automatically reactivating a user that was manually deactivated by the system admin.
However, sometimes it's desirable to auto reactivate such users. To do so, set LDAP_AUTO_REACTIVATE_USERS = True in seahub_settings.py.
"},{"location":"config/ldap_in_pro/#setting-up-ldap-group-sync-optional","title":"Setting Up LDAP Group Sync (optional)","text":""},{"location":"config/ldap_in_pro/#how-it-works","title":"How It Works","text":"
The importing or syncing process maps groups from LDAP directory server to groups in Seafile's internal database. This process is one-way.
Any changes to groups in the database won't propagate back to LDAP;
Any changes to groups in the database, except for \"setting a member as group admin\", will be overwritten in the next LDAP sync operation. If you want to add or delete members, you can only do that on LDAP server.
The creator of imported groups will be set to the system admin.
There are two modes of operation:
Periodical: the syncing process will be executed in a fixed interval
Manual: there is a script you can run to trigger the syncing once
Before enabling LDAP group sync, you should have configured LDAP authentication. See Basic LDAP Integration for details.
The following are LDAP group sync related options:
# ldap group sync options.\nENABLE_LDAP_GROUP_SYNC = True # Whether to enable group sync\nLDAP_GROUP_OBJECT_CLASS = 'group' # This is the name of the class used to search for group objects.\nLDAP_GROUP_MEMBER_ATTR = 'member' # The attribute field to use when loading the group's members. \n # For most directory servers, the attribute is \"member\" \n # which is the default value. For \"posixGroup\", it should be set to \"memberUid\".\nLDAP_USER_ATTR_IN_MEMBERUID = 'uid' # The user attribute set in 'memberUid' option, \n # which is used in \"posixGroup\". The default value is \"uid\".\nLDAP_GROUP_UUID_ATTR = 'objectGUID' # Used to uniquely identify groups in LDAP\nLDAP_GROUP_FILTER = '' # An additional filter to use when searching group objects.\n # If it's set, the final filter used to run search is \"(&(objectClass=GROUP_OBJECT_CLASS)(GROUP_FILTER))\";\n # otherwise the final filter would be \"(objectClass=GROUP_OBJECT_CLASS)\".\nLDAP_USE_GROUP_MEMBER_RANGE_QUERY = False # When a group contains too many members, \n # AD will only return part of them. Set this option to TRUE\n # to make LDAP sync work with large groups.\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\", sync process will delete the group if it is not found in the LDAP server.\nLDAP_SYNC_GROUP_AS_DEPARTMENT = False # Whether to sync groups as top-level departments in Seafile.\n # Learn more about departments in Seafile [here](https://help.seafile.com/sharing_collaboration/departments/).\nLDAP_DEPT_NAME_ATTR = '' # Used to get the department name.\n
Meaning of some options:
variable description ENABLE_LDAP_GROUP_SYNC Whether to enable group sync. LDAP_GROUP_OBJECT_CLASS This is the name of the class used to search for group objects. LDAP_GROUP_MEMBER_ATTR The attribute field to use when loading the group's members. For most directory servers, the attribute is \"member\" which is the default value. For \"posixGroup\", it should be set to \"memberUid\". LDAP_USER_ATTR_IN_MEMBERUID The user attribute set in 'memberUid' option, which is used in \"posixGroup\". The default value is \"uid\". LDAP_GROUP_UUID_ATTR Used to uniquely identify groups in LDAP. LDAP_GROUP_FILTER An additional filter to use when searching group objects. If it's set, the final filter used to run search is (&(objectClass=GROUP_OBJECT_CLASS)(GROUP_FILTER)); otherwise the final filter would be (objectClass=GROUP_OBJECT_CLASS). LDAP_USE_GROUP_MEMBER_RANGE_QUERY When a group contains too many members, AD will only return part of them. Set this option to TRUE to make LDAP sync work with large groups. DEL_GROUP_IF_NOT_FOUND Set to \"true\", sync process will delete the group if it is not found in the LDAP server. LDAP_SYNC_GROUP_AS_DEPARTMENT Whether to sync groups as top-level departments in Seafile. Learn more about departments in Seafile here. LDAP_DEPT_NAME_ATTR Used to get the department name.
Tip
The search base for groups is the option LDAP_BASE_DN.
Some LDAP servers, such as Active Directory, allow a group to be a member of another group. This is called \"group nesting\". If we find a nested group B in group A, we recursively add all the members of group B into group A, and group B is still imported as a separate group. That is, all members of group B are also members of group A.
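The nested-group rule above can be sketched as a recursive flattening. This is illustrative only; the `directory` structure is a made-up stand-in for LDAP query results:

```python
def flatten_members(name, groups, _seen=None):
    """Recursively collect the user members of group `name`, following nesting."""
    seen = _seen if _seen is not None else set()
    if name in seen:          # guard against membership cycles
        return set()
    seen.add(name)
    info = groups[name]
    members = set(info["users"])
    for sub in info["groups"]:           # nested groups contribute their members too
        members |= flatten_members(sub, groups, seen)
    return members

directory = {
    "A": {"users": ["alice"], "groups": ["B"]},   # A nests B
    "B": {"users": ["bob", "carol"], "groups": []},
}
print(sorted(flatten_members("A", directory)))  # ['alice', 'bob', 'carol']
```

B is still imported as its own group; the flattening only adds B's members to A.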
In some LDAP servers, such as OpenLDAP, it's common practice to use Posix groups to store group membership. To import Posix groups as Seafile groups, set the LDAP_GROUP_OBJECT_CLASS option to posixGroup. A posixGroup object in LDAP usually contains a multi-value attribute for the list of member UIDs. The name of this attribute can be set with the LDAP_GROUP_MEMBER_ATTR option; it's memberUid by default. The value of the memberUid attribute is an ID that can be used to identify a user, which corresponds to an attribute in the user object. The name of this ID attribute is usually uid, but can be set via the LDAP_USER_ATTR_IN_MEMBERUID option. Note that posixGroup doesn't support nested groups.
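Resolving posixGroup members can be sketched as follows. This is illustrative; `resolve_posix_members` is a hypothetical helper, and the user dicts stand in for LDAP user entries:

```python
def resolve_posix_members(member_uids, users, uid_attr="uid"):
    """Match each memberUid value against the user attribute named by
    LDAP_USER_ATTR_IN_MEMBERUID ('uid' here)."""
    by_uid = {u[uid_attr]: u for u in users}
    return [by_uid[m] for m in member_uids if m in by_uid]

users = [
    {"uid": "jdoe", "mail": "jdoe@example.com"},
    {"uid": "asmith", "mail": "asmith@example.com"},
]
print([u["mail"] for u in resolve_posix_members(["jdoe"], users)])
# ['jdoe@example.com']
```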
"},{"location":"config/ldap_in_pro/#sync-ou-as-departments","title":"Sync OU as Departments","text":"
A department in Seafile is a special group. In addition to what you can do with a group, there are two key new features for departments:
Departments support hierarchy. A department can have any number of levels of sub-departments.
A department can have a storage quota.
Seafile supports syncing OU (Organizational Units) from AD/LDAP to departments. The sync process keeps the hierarchical structure of the OUs.
Options for syncing departments from OU:
LDAP_SYNC_DEPARTMENT_FROM_OU = True # Whether to enable syncing departments from OU.\nLDAP_DEPT_NAME_ATTR = 'description' # Used to get the department name.\nLDAP_CREATE_DEPARTMENT_LIBRARY = False # If you decide to sync the group as a department,\n # you can set this option to \"true\". In this way, when \n # the group is synchronized for the first time, a library\n # is automatically created for the department, and the \n # library's name is the department's name.\nLDAP_DEPT_REPO_PERM = 'rw' # Set the permissions of the department repo, default permission is 'rw'.\nLDAP_DEFAULT_DEPARTMENT_QUOTA = -2 # You can set a default space quota for each department\n # when you synchronize a group for the first time. The \n # quota is set to unlimited if this option is not set.\n # Unit is MB.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\", sync process will delete the department if it is not found in the LDAP server.\n
"},{"location":"config/ldap_in_pro/#periodical-and-manual-sync","title":"Periodical and Manual Sync","text":"
Periodical sync won't happen immediately after you restart the Seafile server; it is scheduled after the first sync interval. For example, if you set the sync interval to 30 minutes, the first auto sync will happen 30 minutes after you restart. To sync immediately, you need to trigger it manually.
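In other words, the first automatic sync time is simply the restart time plus one interval; a minimal sketch:

```python
from datetime import datetime, timedelta

def first_sync_time(restart_time: datetime, interval_minutes: int) -> datetime:
    # The scheduler waits one full interval before the first run.
    return restart_time + timedelta(minutes=interval_minutes)

restart = datetime(2023, 3, 30, 18, 0)
print(first_sync_time(restart, 30))  # 2023-03-30 18:30:00
```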
After the sync is run, you should see log messages like the following in logs/seafevents.log. And you should be able to see the groups in system admin page.
[2023-03-30 18:15:05,109] [DEBUG] create group 1, and add dn pair CN=DnsUpdateProxy,CN=Users,DC=Seafile,DC=local<->1 success.\n[2023-03-30 18:15:05,145] [DEBUG] create group 2, and add dn pair CN=Domain Computers,CN=Users,DC=Seafile,DC=local<->2 success.\n[2023-03-30 18:15:05,154] [DEBUG] create group 3, and add dn pair CN=Domain Users,CN=Users,DC=Seafile,DC=local<->3 success.\n[2023-03-30 18:15:05,164] [DEBUG] create group 4, and add dn pair CN=Domain Admins,CN=Users,DC=Seafile,DC=local<->4 success.\n[2023-03-30 18:15:05,176] [DEBUG] create group 5, and add dn pair CN=RAS and IAS Servers,CN=Users,DC=Seafile,DC=local<->5 success.\n[2023-03-30 18:15:05,186] [DEBUG] create group 6, and add dn pair CN=Enterprise Admins,CN=Users,DC=Seafile,DC=local<->6 success.\n[2023-03-30 18:15:05,197] [DEBUG] create group 7, and add dn pair CN=dev,CN=Users,DC=Seafile,DC=local<->7 success.\n
Multiple base DNs are useful when your company has more than one OU using Seafile. You can specify a list of base DNs in the LDAP_BASE_DN option. The DNs are separated by \";\", e.g.
Search filter is very useful when you have a large organization but only a portion of people want to use Seafile. The filter can be given by setting LDAP_FILTER option. The value of this option follows standard LDAP search filter syntax (https://msdn.microsoft.com/en-us/library/aa746475(v=vs.85).aspx).
The final filter used for searching for users is (&($LOGIN_ATTR=*)($LDAP_FILTER)). $LOGIN_ATTR and $LDAP_FILTER will be replaced by your option values.
For example, with LDAP_FILTER = 'memberOf=CN=group,CN=developers,DC=example,DC=com' set in seahub_settings.py:
The final search filter would be (&(mail=*)(memberOf=CN=group,CN=developers,DC=example,DC=com))
The case of attribute names in the above example is significant. The memberOf attribute is only available in Active Directory.
"},{"location":"config/ldap_in_pro/#limiting-seafile-users-to-a-group-in-active-directory","title":"Limiting Seafile Users to a Group in Active Directory","text":"
You can use the LDAP_FILTER option to limit user scope to a certain AD group.
First, you should find out the DN for the group. Again, we'll use the dsquery command on the domain controller. For example, if group name is 'seafilegroup', run dsquery group -name seafilegroup.
Add below option to seahub_settings.py:
LDAP_FILTER = 'memberOf={output of dsquery command}'\n
"},{"location":"config/ldap_in_pro/#using-tls-connection-to-ldap-server","title":"Using TLS connection to LDAP server","text":"
If your LDAP service supports TLS connections, you can configure LDAP_SERVER_URL as an ldaps:// address to use TLS to connect to the LDAP service, for example LDAP_SERVER_URL = 'ldaps://192.168.0.1:636/'.
LDAP protocol version 3 supports the \"paged results\" (PR) extension. When you have a large number of users, this option can greatly improve the performance of listing users. Most directory servers nowadays support this extension.
In Seafile Pro Edition, add LDAP_USE_PAGED_RESULT = True to seahub_settings.py to enable PR.
Seafile Pro Edition supports automatically following referrals in LDAP search. This is useful for partitioned LDAP or AD servers, where users may be spread across multiple directory servers. For more information about referrals, you can refer to this article.
To configure, add below option to seahub_settings.py, e.g.:
Seafile Pro Edition supports multiple LDAP servers; you can configure two LDAP servers to work with Seafile. With multiple LDAP servers, when getting or searching for an LDAP user, Seafile iterates over all configured LDAP servers until a match is found; when listing all LDAP users, it iterates over all LDAP servers to get all users; for LDAP sync, it syncs all user/group info from all configured LDAP servers to Seafile.
Currently, only two LDAP servers are supported.
If you want to use multi-ldap servers, please replace LDAP in the options with MULTI_LDAP_1, and then add them to seahub_settings.py, for example:
!!! note: There are still some shared config options that are used for all LDAP servers, as follows:
```python\n# Common user sync options\nLDAP_SYNC_INTERVAL = 60\nIMPORT_NEW_USER = True # Whether to import new users when syncing users\nACTIVATE_USER_WHEN_IMPORT = True # Whether to activate the user when importing a new user\nDEACTIVE_USER_IF_NOTFOUND = False # Set to \"true\" if you want to deactivate a user \n # when they are deleted from the AD server.\n\n# Common group sync options\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\", sync process will delete the group if it is not found in the LDAP server.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\", sync process will delete the department if it is not found in the LDAP server.\n```\n
"},{"location":"config/ldap_in_pro/#sso-and-ldap-users-use-the-same-uid","title":"SSO and LDAP users use the same uid","text":"
If you sync users from LDAP to Seafile and want Seafile to find the existing account when a user logs in via SSO (ADFS, OAuth or Shibboleth) instead of creating a new one, you can set
SSO_LDAP_USE_SAME_UID = True\n
Here the UID means the unique user ID; in LDAP it is the attribute you use for LDAP_LOGIN_ATTR (not LDAP_UID_ATTR), and in ADFS it is the uid attribute. You need to make sure you use the same attribute for the two settings.
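The lookup behavior can be sketched like this. It is an illustrative model of the social_auth_usersocialauth mapping, not Seafile's actual code:

```python
def find_existing_account(uid, social_auth_rows):
    """social_auth_rows: list of (provider, uid, internal_user_id) tuples.
    With SSO_LDAP_USE_SAME_UID enabled, an SSO login matches on the uid
    regardless of which provider originally created the row."""
    for provider, row_uid, user_id in social_auth_rows:
        if row_uid == uid:
            return user_id
    return None  # no match -> a new account would be created instead

rows = [("ldap", "jdoe@example.com", "internal-id-1")]
print(find_existing_account("jdoe@example.com", rows))  # internal-id-1
```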
On this basis, if you only want users to log in using SSO and not through LDAP, you can set
USE_LDAP_SYNC_ONLY = True\n
"},{"location":"config/ldap_in_pro/#importing-roles-from-ldap","title":"Importing Roles from LDAP","text":"
Seafile Pro Edition supports syncing roles from LDAP or Active Directory.
To enable this feature, add below option to seahub_settings.py, e.g.
LDAP_USER_ROLE_ATTR = 'title'\n
LDAP_USER_ROLE_ATTR is the attribute field to configure roles in LDAP. You can write a custom function to map the role by creating a file seahub_custom_functions.py under conf/ and edit it like:
# -*- coding: utf-8 -*-\n\n# The AD roles attribute returns a list of roles (role_list).\n# The following function uses the first entry in the list.\ndef ldap_role_mapping(role):\n if 'staff' in role:\n return 'Staff'\n if 'guest' in role:\n return 'Guest'\n if 'manager' in role:\n return 'Manager'\n\n# From version 11.0.11-pro, you can define the following function\n# to calculate a role from the role_list.\ndef ldap_role_list_mapping(role_list):\n if not role_list:\n return ''\n for role in role_list:\n if 'staff' in role:\n return 'Staff'\n if 'guest' in role:\n return 'Guest'\n if 'manager' in role:\n return 'Manager'\n
You should only define one of the two functions
You can rewrite the functions (in Python) to make your own mapping rules. If the file or function doesn't exist, the first entry in role_list will be synced.
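The fallback described above can be sketched as follows. This is illustrative; `map_role` is a hypothetical helper showing the behavior, with a simplified custom mapping function standing in for the one defined in conf/seahub_custom_functions.py:

```python
def map_role(role_list, custom_mapping=None):
    """Use a custom mapping function when one is defined;
    otherwise fall back to the first entry of role_list."""
    if not role_list:
        return ''
    if custom_mapping is not None:
        return custom_mapping(role_list)
    return role_list[0]

def ldap_role_list_mapping(role_list):  # simplified custom function
    for role in role_list:
        if 'staff' in role:
            return 'Staff'
    return ''

print(map_role(['senior staff', 'other']))                          # senior staff
print(map_role(['senior staff', 'other'], ldap_role_list_mapping))  # Staff
```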
"},{"location":"config/multi_institutions/","title":"Multiple Organization/Institution User Management","text":"
Starting from version 5.1, you can add institutions into Seafile and assign users to institutions. Each institution can have one or more administrators. This feature is to ease user administration when multiple organizations (universities) share a single Seafile instance. Unlike multi-tenancy, the users are not isolated: a user from one institution can share files with another institution.
"},{"location":"config/multi_institutions/#turn-on-the-feature","title":"Turn on the feature","text":"
In seahub_settings.py, add MULTI_INSTITUTION = True to enable multi-institution feature, and add
Please replace += with = if EXTRA_MIDDLEWARE_CLASSES or EXTRA_MIDDLEWARE is not defined.
"},{"location":"config/multi_institutions/#add-institutions-and-institution-admins","title":"Add institutions and institution admins","text":"
After restarting Seafile, a system admin can add institutions by entering the institution name in the admin panel. The admin can also click into an institution, which will list all users whose profile.institution matches the name.
"},{"location":"config/multi_institutions/#assign-users-to-institutions","title":"Assign users to institutions","text":"
If you are using Shibboleth, you can map a Shibboleth attribute into institution. For example, the following configuration maps organization attribute to institution.
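As an illustrative sketch, assuming your Shibboleth setup exposes an organization attribute, the seahub_settings.py mapping could look like this (the exact attribute names depend on your IdP configuration and are assumptions here):

```python
# seahub_settings.py -- illustrative; attribute names depend on your IdP.
SHIBBOLETH_ATTRIBUTE_MAP = {
    'givenName': (False, 'givenname'),
    'sn': (False, 'surname'),
    'mail': (False, 'contact_email'),
    'organization': (False, 'institution'),  # fills profile.institution
}
```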
The multi-tenancy feature is designed for hosting providers that want to host several customers in a single Seafile instance. You can create multiple organizations. Organizations are separated from each other, and users can't share libraries between organizations.
CLOUD_MODE = True\nMULTI_TENANCY = True\n\nORG_MEMBER_QUOTA_ENABLED = True\n\nORG_ENABLE_ADMIN_CUSTOM_NAME = True # Default is True, meaning organization name can be customized\nORG_ENABLE_ADMIN_CUSTOM_LOGO = False # Default is False, if set to True, organization logo can be customized\n\nENABLE_MULTI_ADFS = True # Default is False, if set to True, support per organization custom ADFS/SAML2 login\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n ...\n}\n
An organization can be created via system admin in \u201cadmin panel->organization->Add organization\u201d.
Every organization has a URL prefix. This field is reserved for future use. When a user creates an organization, a URL prefix like org1 will be automatically assigned.
After creating an organization, the first user will become the admin of that organization. The organization admin can add other users. Note, the system admin can't add users.
"},{"location":"config/multi_tenancy/#adfssaml-single-sign-on-integration-in-multi-tenancy","title":"ADFS/SAML single sign-on integration in multi-tenancy","text":""},{"location":"config/multi_tenancy/#preparation-for-adfssaml","title":"Preparation for ADFS/SAML","text":"
1) Prepare the SP (Seafile) certificate directory and SP certificates:
The SP certificate can be generated with the openssl command, or you can request one from a certificate authority; it is up to you. For example, generate the SP certs with the following command:
The days option indicates the validity period of the generated certificate, in days. The system admin needs to renew the certificate regularly
Note
If certificates are not placed in /opt/seafile-data/seafile/seahub-data/certs, you need to add the following configuration in seahub_settings.py:
SAML_CERTS_DIR = '/path/to/certs'\n
2) Add the following configuration to seahub_settings.py and then restart Seafile:
Before using OAuth, you should first register an OAuth2 client application on your authorization server, then add some configurations to seahub_settings.py.
"},{"location":"config/oauth/#register-an-oauth2-client-application","title":"Register an OAuth2 client application","text":"
Here we use Github as an example. First you should register an OAuth2 client application on Github; the official documentation from Github is very detailed.
Add the following configurations to seahub_settings.py:
ENABLE_OAUTH = True\n\n# If create new user when he/she logs in Seafile for the first time, defalut `True`.\nOAUTH_CREATE_UNKNOWN_USER = True\n\n# If active new user when he/she logs in Seafile for the first time, defalut `True`.\nOAUTH_ACTIVATE_USER_AFTER_CREATION = True\n\n# Usually OAuth works through SSL layer. If your server is not parametrized to allow HTTPS, some method will raise an \"oauthlib.oauth2.rfc6749.errors.InsecureTransportError\". Set this to `True` to avoid this error.\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\n# Client id/secret generated by authorization server when you register your client application.\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\n\n# Callback url when user authentication succeeded. Note, the redirect url you input when you register your client application MUST be exactly the same as this value.\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\n# The following should NOT be changed if you are using Github as OAuth provider.\nOAUTH_PROVIDER_DOMAIN = 'github.com' \nOAUTH_PROVIDER = 'github.com'\n\nOAUTH_AUTHORIZATION_URL = 'https://github.com/login/oauth/authorize'\nOAUTH_TOKEN_URL = 'https://github.com/login/oauth/access_token'\nOAUTH_USER_INFO_URL = 'https://api.github.com/user'\nOAUTH_SCOPE = [\"user\",]\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # Please keep the 'email' option unchanged to be compatible with the login of users of version 11.0 and earlier.\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n \"uid\": (True, \"uid\"), # Seafile v11.0 + \n}\n
"},{"location":"config/oauth/#more-explanations-about-the-settings","title":"More explanations about the settings","text":"
OAUTH_PROVIDER / OAUTH_PROVIDER_DOMAIN
OAUTH_PROVIDER_DOMAIN will be deprecated and can be replaced by OAUTH_PROVIDER. This variable is used in the database to identify third-party providers, either as a domain or as an easy-to-remember string of fewer than 32 characters.
OAUTH_ATTRIBUTE_MAP
This variable describes which claims from the response of the user info endpoint are to be filled into which attributes of the new Seafile user. The format is as follows:
OAUTH_ATTRIBUTE_MAP = {\n <:Attribute in the OAuth provider>: (<:Is required or not in Seafile?>, <:Attribute in Seafile >)\n }\n
If the remote resource server, like Github, also uses email to identify a unique user, Seafile will use the Github id directly. The OAUTH_ATTRIBUTE_MAP setting for Github should look like this:
OAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # it is deprecated\n \"uid / id / username\": (True, \"uid\") \n\n # extra infos you want to update to Seafile\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"), \n }\n
The key part id stands for a unique identifier of the user in Github; it tells Seafile which attribute the remote resource server uses to identify its user. The value part True indicates whether this field is mandatory in Seafile.
Since version 11.0, Seafile uses uid as the external unique identifier of the user. It stores uid in the table social_auth_usersocialauth and maps it to the internal unique identifier used in Seafile. Different OAuth systems have different attributes, which may be id, uid, username, etc. The id/email config id: (True, email) is deprecated.
If you upgrade from a version below 11.0, you need to have both fields configured, i.e., your configuration should be like:
In this way, when a user logs in, Seafile will first use the \"id -> email\" map to find the old user and then create a \"uid -> uid\" map for this old user. After all users have logged in once, you can delete the configuration \"id\": (True, \"email\").
If you use a newly deployed 11.0+ Seafile instance, you don't need the \"id\": (True, \"email\") item. Your configuration should be like:
ENABLE_OAUTH = True\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\n# The following shoud NOT be changed if you are using Google as OAuth provider.\nOAUTH_PROVIDER_DOMAIN = 'google.com'\nOAUTH_AUTHORIZATION_URL = 'https://accounts.google.com/o/oauth2/v2/auth'\nOAUTH_TOKEN_URL = 'https://www.googleapis.com/oauth2/v4/token'\nOAUTH_USER_INFO_URL = 'https://www.googleapis.com/oauth2/v1/userinfo'\nOAUTH_SCOPE = [\n \"openid\",\n \"https://www.googleapis.com/auth/userinfo.email\",\n \"https://www.googleapis.com/auth/userinfo.profile\",\n]\nOAUTH_ATTRIBUTE_MAP = {\n \"sub\": (True, \"uid\"),\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n
Note
For Github, email is not the unique identifier for a user, but id is in most cases, so we use id in the settings example in our manual. As Seafile uses email to identify a unique user account for now, we combine id and OAUTH_PROVIDER_DOMAIN, which is github.com in your case, into an email-format string and then create this account if it does not exist.
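The combination described above can be sketched as a tiny helper (illustrative only; build_internal_email is not a real Seafile function):

```python
def build_internal_email(provider_id: str, provider_domain: str) -> str:
    """Combine the provider's user id with OAUTH_PROVIDER_DOMAIN
    into an email-format string, as described above (illustrative)."""
    return f"{provider_id}@{provider_domain}"

# A Github numeric id plus the provider domain yields the account name:
print(build_internal_email("1234567", "github.com"))  # 1234567@github.com
```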
For users of Azure Cloud: as there is no id field returned from Azure Cloud's user info endpoint, we use a special configuration for the OAUTH_ATTRIBUTE_MAP setting (others are the same as Github/Google). Please see this tutorial for the complete deployment process of OAuth against Azure Cloud.
Add the following configuration to seahub_settings.py.
Sharing between Seafile serversSharing from NextCloud to Seafile
# Enable OCM\nENABLE_OCM = True\nOCM_PROVIDER_ID = '71687320-6219-47af-82f3-32012707a5ae' # the unique id of this server\nOCM_REMOTE_SERVERS = [\n {\n \"server_name\": \"dev\",\n \"server_url\": \"https://seafile-domain-1/\", # should end with '/'\n },\n {\n \"server_name\": \"download\",\n \"server_url\": \"https://seafile-domain-2/\", # should end with '/'\n },\n]\n
# Enable OCM\nENABLE_OCM_VIA_WEBDAV = True\nOCM_PROVIDER_ID = '71687320-6219-47af-82f3-32012707a5ae' # the unique id of this server\nOCM_REMOTE_SERVERS = [\n {\n \"server_name\": \"nextcloud\",\n \"server_url\": \"https://nextcloud-domain-1/\", # should end with '/'\n }\n]\n
OCM_REMOTE_SERVERS is a list of servers that you allow your users to share libraries with.
"},{"location":"config/ocm/#usage","title":"Usage","text":""},{"location":"config/ocm/#share-library-to-other-server","title":"Share library to other server","text":"
In the library sharing dialog, switch to \"Share to other server\"; there you can share this library with users of another server with \"Read-Only\" or \"Read-Write\" permission. You can also view sharing records and cancel sharing.
"},{"location":"config/ocm/#view-be-shared-libraries","title":"View shared libraries","text":"
You can jump to \"Shared from other servers\" page to view the libraries shared by other servers and cancel the sharing.
You can also enter a library to view, download, or upload files.
"},{"location":"config/remote_user/","title":"SSO using Remote User","text":"
Starting from 7.0.0, Seafile can integrate with various Single Sign On systems via a proxy server. Examples include Apache as Shibboleth proxy, or LemonLdap as a proxy to LDAP servers, or Apache as Kerberos proxy. Seafile can retrieve user information from special request headers (HTTP_REMOTE_USER, HTTP_X_AUTH_USER, etc.) set by the proxy servers.
After the proxy server (Apache/Nginx) successfully authenticates the user, it sets the user information in the request headers; Seafile then creates and logs in the user based on this information.
Make sure that the proxy server has a corresponding security mechanism to protect against forged request header attacks
Please add the following settings to conf/seahub_settings.py to enable this feature.
ENABLE_REMOTE_USER_AUTHENTICATION = True\n\n# Optional, HTTP header, which is configured in your web server conf file,\n# used for Seafile to get user's unique id, default value is 'HTTP_REMOTE_USER'.\nREMOTE_USER_HEADER = 'HTTP_REMOTE_USER'\n\n# Optional, when the value of HTTP_REMOTE_USER is not a valid email address\uff0c\n# Seafile will build a email-like unique id from the value of 'REMOTE_USER_HEADER'\n# and this domain, e.g. user1@example.com.\nREMOTE_USER_DOMAIN = 'example.com'\n\n# Optional, whether to create new user in Seafile system, default value is True.\n# If this setting is disabled, users doesn't preexist in the Seafile DB cannot login.\n# The admin has to first import the users from external systems like LDAP.\nREMOTE_USER_CREATE_UNKNOWN_USER = True\n\n# Optional, whether to activate new user in Seafile system, default value is True.\n# If this setting is disabled, user will be unable to login by default.\n# the administrator needs to manually activate this user.\nREMOTE_USER_ACTIVATE_USER_AFTER_CREATION = True\n\n# Optional, map user attribute in HTTP header and Seafile's user attribute.\nREMOTE_USER_ATTRIBUTE_MAP = {\n 'HTTP_DISPLAYNAME': 'name',\n 'HTTP_MAIL': 'contact_email',\n\n # for user info\n \"HTTP_GIVENNAME\": 'givenname',\n \"HTTP_SN\": 'surname',\n \"HTTP_ORGANIZATION\": 'institution',\n\n # for user role\n 'HTTP_SHIBBOLETH_AFFILIATION': 'affiliation',\n}\n\n# Map affiliation to user role. Though the config name is SHIBBOLETH_AFFILIATION_ROLE_MAP,\n# it is not restricted to Shibboleth\nSHIBBOLETH_AFFILIATION_ROLE_MAP = {\n 'employee@uni-mainz.de': 'staff',\n 'member@uni-mainz.de': 'staff',\n 'student@uni-mainz.de': 'student',\n 'employee@hu-berlin.de': 'guest',\n 'patterns': (\n ('*@hu-berlin.de', 'guest1'),\n ('*@*.de', 'guest2'),\n ('*', 'guest'),\n ),\n}\n
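The 'patterns' entries in SHIBBOLETH_AFFILIATION_ROLE_MAP above are wildcard fallbacks tried after exact matches. A simplified sketch of this matching logic (an illustration, not Seafile's actual implementation):

```python
from fnmatch import fnmatch

# Subset of the example map above (illustrative).
ROLE_MAP = {
    'employee@uni-mainz.de': 'staff',
    'patterns': (
        ('*@hu-berlin.de', 'guest1'),
        ('*@*.de', 'guest2'),
        ('*', 'guest'),
    ),
}

def map_affiliation(affiliation: str) -> str:
    # Exact matches win over wildcard patterns.
    if affiliation != 'patterns' and affiliation in ROLE_MAP:
        return ROLE_MAP[affiliation]
    # Patterns are tried in order; first match wins.
    for pattern, role in ROLE_MAP.get('patterns', ()):
        if fnmatch(affiliation, pattern):
            return role
    return ''

print(map_affiliation('employee@uni-mainz.de'))  # staff
print(map_affiliation('someone@hu-berlin.de'))   # guest1
print(map_affiliation('someone@tu-berlin.de'))   # guest2
```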
Then restart Seafile.
"},{"location":"config/roles_permissions/","title":"Roles and Permissions Support","text":"
You can add/edit roles and permissions for users. A role is just a group of users with some pre-defined permissions; you can toggle user roles on the user list page in the admin panel. For most permissions, the meaning can be easily deduced from the variable name. The following is a more detailed introduction to some variables.
role_quota is used to set a quota for a certain role of users. For example, we can set the quota of employee to 100G by adding 'role_quota': '100g', and leave users with other roles at the default quota.
After setting role_quota, it will take effect once a user with such a role logs into Seafile. You can also manually change seafile-db.RoleQuota if you want to see the effect immediately.
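The quota string format ('100g', etc.) can be sketched as follows; this is an illustrative parser, not Seafile's own code, and it assumes binary units:

```python
def parse_quota(value: str) -> int:
    """Parse a role_quota string like '100g' into bytes.
    Illustrative only; assumes binary (power-of-two) units."""
    units = {'k': 2**10, 'm': 2**20, 'g': 2**30, 't': 2**40}
    value = value.strip().lower()
    if value[-1] in units:
        return int(value[:-1]) * units[value[-1]]
    return int(value)  # bare number: already bytes

print(parse_quota('100g'))  # 107374182400
```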
can_add_public_repo sets whether a role can create a public library (shared with all logged-in users); the default is False.
Since version 11.0.9 pro, can_share_repo is added to limit users' ability to share a library.
The can_add_public_repo option will not take effect if you configure global CLOUD_MODE = True
can_create_wiki and can_publish_wiki control whether a role can create a Wiki and publish a Wiki. (A published Wiki has a special URL and can be visited by anonymous users.)
storage_ids permission is used for assigning storage backends to users with specific role. More details can be found in multiple storage backends.
upload_rate_limit and download_rate_limit are added to limit upload and download speed for users with different roles.
Note
After configuring the rate limit, run the following command in the seafile-server-latest directory to make the configuration take effect:
If you want to edit the permissions of built-in roles, e.g. default users can invite guests, guest users can view repos in the organization, you can add the following lines to seahub_settings.py with the corresponding permissions set to True.
After that, the email address \"a@a.com\", any email address ending with \"@a-a-a.com\", and any email address ending with \"@foo.com\" or \"@bar.com\" will not be allowed.
If you want to add a new role and assign some users this role, e.g. a new role employee that can invite guests, can create public libraries, and has all other permissions a default user has, you can add the following lines to seahub_settings.py
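As a sketch, such a role definition in seahub_settings.py could look like the following (the ENABLED_ROLE_PERMISSIONS name and the permission keys shown should be checked against the built-in role definitions of your Seafile version):

```python
# seahub_settings.py -- illustrative sketch of a custom role.
ENABLED_ROLE_PERMISSIONS = {
    'employee': {
        'can_invite_guest': True,      # employee can invite guests
        'can_add_public_repo': True,   # employee can create public libraries
        'can_add_repo': True,
        'can_generate_share_link': True,
        # ... plus all other permissions a default user has
    },
}
```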
"},{"location":"config/saml2/","title":"SAML 2.0 in version 10.0+","text":"
In this document, we use the Microsoft Azure SAML single sign-on app and Microsoft on-premise ADFS to show how Seafile integrates SAML 2.0. Other SAML 2.0 providers should be similar.
"},{"location":"config/saml2/#preparations-for-saml-20","title":"Preparations for SAML 2.0","text":"
Second, prepare the SP (Seafile) certificate directory and SP certificates:
Create certs dir
$ mkdir -p /opt/seafile/seahub-data/certs\n
The SP certificate can be generated with the openssl command, or you can request one from a certificate authority; it is up to you. For example, generate the SP certs with the following command:
The days option indicates the validity period of the generated certificate, in days. The system admin needs to renew the certificate regularly
"},{"location":"config/saml2/#integration-with-adfssaml-single-sign-on","title":"Integration with ADFS/SAML single sign-on","text":""},{"location":"config/saml2/#microsoft-azure-saml-single-sign-on-app","title":"Microsoft Azure SAML single sign-on app","text":"
If you use Microsoft Azure SAML app to achieve single sign-on, please follow the steps below:
First, add SAML single sign-on app and assign users, refer to: add an Azure AD SAML application, create and assign users.
Second, set up the Identifier, Reply URL, and Sign on URL of the SAML app based on your service URL, refer to: enable single sign-on for saml app. The formats of the Identifier, Reply URL, and Sign on URL are: https://example.com/saml2/metadata/, https://example.com/saml2/acs/, https://example.com/, e.g.:
Next, edit the SAML attributes & claims. Keep the default attributes & claims of the SAML app unchanged; the uid attribute must be added, while the mail and name attributes are optional, e.g.:
Next, download the base64 format SAML app's certificate and rename to idp.crt:
and put it under the certs directory (/opt/seafile/seahub-data/certs).
Next, copy the metadata URL of the SAML app:
and paste it into the SAML_REMOTE_METADATA_URL option in seahub_settings.py, e.g.:
SAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx' # copy from SAML app\n
Next, add ENABLE_ADFS_LOGIN, LOGIN_REDIRECT_URL and SAML_ATTRIBUTE_MAPPING options to seahub_settings.py, and then restart Seafile, e.g:
ENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n 'seafile_groups': ('', ), # Optional, set this attribute if you need to synchronize groups/departments.\n ...\n\n}\nSAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx' # copy from SAML app\n
Note
If the xmlsec1 binary is not located in /usr/bin/xmlsec1, you need to add the following configuration in seahub_settings.py:
SAML_XMLSEC_BINARY_PATH = '/path/to/xmlsec1'\n
View where the xmlsec1 binary is located:
$ which xmlsec1\n
If certificates are not placed in /opt/seafile/seahub-data/certs, you need to add the following configuration in seahub_settings.py:
SAML_CERTS_DIR = '/path/to/certs'\n
Finally, open the browser and enter the Seafile login page, click Single Sign-On, and use the user assigned to SAML app to perform a SAML login test.
If you use Microsoft ADFS to achieve single sign-on, please follow the steps below:
First, please make sure the following preparations are done:
A Windows Server with ADFS installed. For configuring and installing ADFS you can see this article.
A valid SSL certificate for ADFS server, and here we use temp.adfs.com as the domain name example.
A valid SSL certificate for Seafile server, and here we use demo.seafile.com as the domain name example.
Second, download the base64 format certificate and upload it:
Navigate to the AD FS management window. In the left sidebar menu, navigate to Services > Certificates.
Locate the Token-signing certificate. Right-click the certificate and select View Certificate.
In the dialog box, select the Details tab.
Click Copy to File.
In the Certificate Export Wizard that opens, click Next.
Select Base-64 encoded X.509 (.CER), then click Next.
Name it idp.crt, then click Next.
Click Finish to complete the download.
And then put it under the certs directory (/opt/seafile/seahub-data/certs).
Next, add the following configurations to seahub_settings.py and then restart Seafile:
ENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n 'seafile_groups': ('', ), # Optional, set this attribute if you need to synchronize groups/departments.\n ...\n}\nSAML_REMOTE_METADATA_URL = 'https://temp.adfs.com/federationmetadata/2007-06/federationmetadata.xml' # The format of the ADFS federation metadata URL is: `https://{your ADFS domain name}/federationmetadata/2007-06/federationmetadata.xml`\n
Next, add relying party trust:
Log into the ADFS server and open the ADFS management.
Under Actions, click Add Relying Party Trust.
On the Welcome page, choose Claims aware and click Start.
Select Import data about the relying party published online or on a local network, type your metadata URL in Federation metadata address (host name or URL), and then click Next. Your metadata URL format is: https://example.com/saml2/metadata/, e.g.:
On the Specify Display Name page, type a name in Display name, e.g. Seafile; under Notes, type a description for this relying party trust, and then click Next.
In the Choose an access control policy window, select Permit everyone, then click Next.
Review your settings, then click Next.
Click Close.
Next, create claims rules:
Open the ADFS management, click Relying Party Trusts.
Right-click your trust, and then click Edit Claim Issuance Policy.
On the Issuance Transform Rules tab click Add Rules.
Click the Claim rule template dropdown menu and select Send LDAP Attributes as Claims, and then click Next.
In the Claim rule name field, type the display name for this rule, such as Seafile Claim rule. Click the Attribute store dropdown menu and select Active Directory. In the LDAP Attribute column, click the dropdown menu and select User-Principal-Name. In the Outgoing Claim Type column, click the dropdown menu and select UPN. And then click Finish.
Click Add Rule again.
Click the Claim rule template dropdown menu and select Transform an Incoming Claim, and then click Next.
In the Claim rule name field, type the display name for this rule, such as UPN to Name ID. Click the Incoming claim type dropdown menu and select UPN (it must match the Outgoing Claim Type in the Seafile Claim rule). Click the Outgoing claim type dropdown menu and select Name ID. Click the Outgoing name ID format dropdown menu and select Email. And then click Finish.
Click OK to add both new rules.
When creating claim rules, you can also select other LDAP Attributes, such as E-Mail-Addresses, depending on your ADFS service
Finally, open the browser and enter the Seafile login page, click Single Sign-On to perform ADFS login test.
[DATABASE]\ntype = mysql\nhost = 192.168.0.2\nport = 3306\nusername = seafile\npassword = password\nname = seahub_db\n\n[STATISTICS]\n## must be \"true\" to enable statistics\nenabled = false\n\n[SEAHUB EMAIL]\n## must be \"true\" to enable user email notifications when there are new unread notifications\nenabled = true\n\n## interval of sending Seahub email. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval = 30m\n\n[FILE HISTORY]\nenabled = true\nthreshold = 5\nsuffix = md,txt,...\n\n## From seafile 7.0.0\n## Recording file history to database for fast access is enabled by default for 'Markdown, .txt, ppt, pptx, doc, docx, xls, xlsx'. \n## After enabling the feature, the old history versions for markdown, doc, docx files will not be listed in the history page.\n## (Only new histories stored in the database will be listed.) But users can still access the old versions in the library snapshots.\n## For file types not listed in the suffix, history versions will be scanned from the library history as before.\n## The feature is enabled by default. You can set 'enabled = false' to disable it.\n\n## The 'threshold' is the time threshold for recording the historical version of a file, in minutes; the default is 5 minutes. \n## This means that if the interval between two adjacent file saves is less than 5 minutes, the two file changes will be merged and recorded as a historical version. \n## When set to 0, there is no time limit, which means that each save will generate a separate historical version.\n\n## If you need to modify the list of file types, you can adjust the 'suffix = md, txt, ...' configuration item.\n\n# From Seafile 13.0, Redis is also supported in CE and is the default cache server\n[REDIS]\n## redis uses database 0 and the \"repo_update\" channel\nserver = 192.168.1.1\nport = 6379\npassword = q!1w@#123\n
"},{"location":"config/seafevents-conf/#the-following-configurations-for-pro-edition-only","title":"The following configurations for Pro Edition only","text":"
[AUDIT]\n## Audit log is disabled by default.\n## Leads to additional SQL tables being filled up; make sure your SQL server is able to handle it.\nenabled = true\n\n[INDEX FILES]\n## must be \"true\" to enable search\nenabled = true\n\n## The interval the search index is updated. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval=10m\n\n## From Seafile 6.3.0 pro, in order to speed up full-text search, you should set\nhighlight = fvh\n\n## If true, indexes the contents of office/pdf files while updating search index\n## Note: If you change this option from \"false\" to \"true\", then you need to clear the search index and update the index again.\n## Refer to the file search manual for details.\nindex_office_pdf=false\n\n## The default size limit for doc, docx, ppt, pptx, xls, xlsx and pdf files. Files larger than this will not be indexed.\n## Since version 6.2.0\n## Unit: MB\noffice_file_size_limit = 10\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch through username and password; you need to configure a username and password for the Elasticsearch server\nusername = elastic # username to connect to Elasticsearch\npassword = elastic_password # password to connect to Elasticsearch\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch via HTTPS; you need to configure HTTPS for the Elasticsearch server\nscheme = https # The default is http. If the Elasticsearch server is not configured with HTTPS, the scheme and cafile do not need to be configured\ncafile = path/to/cert.pem # The certificate path for user authentication. If the Elasticsearch server does not enable certificate authentication, this does not need to be configured\n\n## From version 11.0.5 Pro, you can customize Elasticsearch index names for distinct instances when integrating multiple Seafile servers with a single Elasticsearch server.\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\n\n## The default loglevel is `warning`.\n## Since version 11.0.4\nloglevel = info\n\n[EVENTS PUBLISH]\n## must be \"true\" to enable publishing event messages\nenabled = false\n## message format: repo-update\\t{{repo_id}}\\t{{commit_id}}\n## Currently only the redis message queue is supported\nmq_type = redis\n\n[AUTO DELETION]\nenabled = true # Default is false; when enabled, users can use the file auto deletion feature\ninterval = 86400 # The unit is seconds; the default frequency is once a day\n\n[SEASEARCH]\nenabled = true # Default is false; when enabled, Seafile can use SeaSearch as the search engine\nseasearch_url = http://seasearch:4080 # If your SeaSearch server is deployed on another machine, replace this with the actual address\nseasearch_token = <your auth token> # base64 encoding of `username:password`\ninterval = 10m # The interval the search index is updated. Can be s(seconds), m(minutes), h(hours), d(days)\n
You may set a default quota (e.g. 2GB) for all users. To do this, just add the following lines to seafile.conf file
[quota]\n# default user quota in GB, integer only\ndefault = 2\n
This setting applies to all users. If you want to set quota for a specific user, you may log in to seahub website as administrator, then set it in \"System Admin\" page.
Since Pro version 10.0.9, you can set the maximum number of files allowed in a library; when this limit is exceeded, files cannot be uploaded to the library. There is no limit by default.
[quota]\nlibrary_file_limit = 100000\n
"},{"location":"config/seafile-conf/#default-history-length-limit","title":"Default history length limit","text":"
If you don't want to keep all file revision history, you may set a default history length limit for all libraries.
The seaf-server component in Seafile Pro Edition uses memory caches in various cases to improve performance. (The seaf-server component in the Community Edition does not use a cache.) Some session information is also saved in the memory cache to be shared among cluster nodes. Memcached or Redis can be used for the memory cache.
Tip
Redis support is added in version 11.0 and is the default cache server from Seafile 13.0. Currently only single-node Redis is supported. Redis Sentinel or Cluster is not supported yet.
memcachedRedis
[memcached]\n# Replace `localhost` with the memcached address:port if you're using remote memcached\n# POOL-MIN and POOL-MAX is used to control connection pool size. Usually the default is good enough.\nmemcached_options = --SERVER=localhost --POOL-MIN=10 --POOL-MAX=100\n
[redis]\n# your redis server address\nredis_host = 127.0.0.1\n# your redis server port\nredis_port = 6379\n# size of connection pool to redis, default is 100\nmax_connections = 100\n
The configuration of seafile fileserver is in the [fileserver] section of the file seafile.conf
[fileserver]\n# bind address for fileserver\n# default to 0.0.0.0, if deployed without proxy: no access restriction\n# set to 127.0.0.1, if used with local proxy: only access by local\nhost = 127.0.0.1\n# tcp port for fileserver\nport = 8082\n
Since Community Edition 6.2 and Pro Edition 6.1.9, you can set the number of worker threads used to serve HTTP requests. The default value is 10, which is a good value for most use cases.
[fileserver]\nworker_threads = 15\n
Change upload/download settings.
[fileserver]\n# Set maximum upload file size to 200M.\n# If not configured, there is no file size limit for uploading.\nmax_upload_size=200\n\n# Set maximum download directory size to 200M.\n# Default is 100M.\nmax_download_dir_size=200\n
After a file is uploaded via the web interface, or the cloud file browser in the client, it needs to be divided into fixed size blocks and stored into storage backend. We call this procedure \"indexing\". By default, the file server uses 1 thread to sequentially index the file and store the blocks one by one. This is suitable for most cases. But if you're using S3/Ceph/Swift backends, you may have more bandwidth in the storage backend for storing multiple blocks in parallel. We provide an option to define the number of concurrent threads in indexing:
[fileserver]\nmax_indexing_threads = 10\n
When users upload files in the web interface (seahub), the file server divides the file into fixed-size blocks. The default block size for web-uploaded files is 8MB. The block size can be set here.
[fileserver]\n#Set block size to 2MB\nfixed_block_size=2\n
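To make the block arithmetic concrete, a quick sketch: with the default 8 MB block size, a 20 MB upload is split into three blocks (two full, one partial); with fixed_block_size = 2 it becomes ten blocks.

```python
import math

def block_count(file_size_mb: int, block_size_mb: int = 8) -> int:
    # Number of fixed-size blocks a web-uploaded file is divided into.
    return math.ceil(file_size_mb / block_size_mb)

print(block_count(20))     # 3 (default 8 MB block size)
print(block_count(20, 2))  # 10 (fixed_block_size = 2)
```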
When users upload files in the web interface, the file server assigns a token to authorize the upload operation. This token is valid for 1 hour by default. When uploading a large file over a WAN, the upload time can be longer than 1 hour. You can change the token expiry time to a larger value.
[fileserver]\n#Set uploading time limit to 3600s\nweb_token_expire_time=3600\n
You can download a folder as a zip archive from seahub, but some zip software on Windows doesn't support UTF-8, in which case you can use the \"windows_encoding\" setting to solve it.
[zip]\n# The file name encoding of the downloaded zip file.\nwindows_encoding = iso-8859-1\n
The \"httptemp\" directory contains temporary files created during file uploads and zip downloads. In some cases the temporary files are not cleaned up after a file transfer is interrupted. Starting from version 7.1.5, the file server regularly scans the \"httptemp\" directory to remove files created long ago.
[fileserver]\n# After how much time a temp file will be removed. The unit is in seconds. Default to 3 days.\nhttp_temp_file_ttl = x\n# File scan interval. The unit is in seconds. Default to 1 hour.\nhttp_temp_scan_interval = x\n
New in Seafile Pro 7.1.16 and Pro 8.0.3: you can set the maximum number of files contained in a library that can be synced by the Seafile client. The default is 100000. When you download a repo, the Seafile client will request the fs id list, and you can control the timeout of this request through the fs_id_list_request_timeout configuration, which defaults to 5 minutes. These two options are added to prevent long fs-id-list requests from overloading the server.
Since Pro 8.0.4 version, you can set both options to -1, to allow unlimited size and timeout.
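The two options above can be sketched as the following seafile.conf fragment (the values are illustrative; the defaults described above are 100000 files and 300 seconds):

```ini
[fileserver]
# maximum number of files in a library that clients may sync (default 100000)
max_sync_file_count = 1000000
# timeout of the fs id list request, in seconds (default 300, i.e. 5 minutes)
fs_id_list_request_timeout = 600
# since Pro 8.0.4, -1 removes both limits:
# max_sync_file_count = -1
# fs_id_list_request_timeout = -1
```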
If you use object storage as the storage backend, when a large file is frequently downloaded, the same blocks need to be fetched from the storage backend to the Seafile server repeatedly. This may waste bandwidth and cause high load on the internal network. Since Seafile Pro 8.0.5, block caching is available to improve the situation.
To enable this feature, set use_block_cache option in the [fileserver] group. It's not enabled by default.
The block_cache_size_limit option limits the size of the cache. Its default value is 10GB. The blocks are cached in the seafile-data/block-cache directory. When the total size of cached files exceeds the limit, seaf-server cleans up older files until the size drops to 70% of the limit. The cleanup interval is 5 minutes. You should have a good estimate of how much space you need for the cache directory; otherwise, frequent downloads can quickly fill it up.
The block_cache_file_types option selects the file types that are cached. Its default value is mp4;mov.
[fileserver]\nuse_block_cache = true\n# Set block cache size limit to 100MB\nblock_cache_size_limit = 100\nblock_cache_file_types = mp4;mov\n
When a large number of files are uploaded through the web page and API, it will be expensive to calculate block IDs based on the block contents. Since Seafile-pro-9.0.6, you can add the skip_block_hash option to use a random string as block ID.
Warning
This option will prevent fsck from checking block content integrity. You should specify --shallow option to fsck to not check content integrity.
[fileserver]\nskip_block_hash = true\n
If you want to restrict the types of files that can be uploaded, since Seafile Pro 10.0.0 you can set the file_ext_white_list option in the [fileserver] group. This option is a list of file extensions; only files with these extensions are allowed to be uploaded. It's not enabled by default.
[fileserver]\nfile_ext_white_list = md;mp4;mov\n
Since Seafile 10.0.1, when you use the go fileserver, you can set the upload_limit and download_limit options in the [fileserver] group to limit the speed of file upload and download. It's not enabled by default.
[fileserver]\n# The unit is in KB/s.\nupload_limit = 100\ndownload_limit = 100\n
Since Seafile 11.0.7 Pro, you can ask the file server to scan every file uploaded via the web APIs for viruses. Find more options about virus scanning at virus scan.
[fileserver]\n# default is false\ncheck_virus_on_web_upload = true\n
When use_ssl is set to true and skip_verify to false, the MySQL server certificate is verified against the CA configured in ca_path. ca_path is the path of a trusted CA certificate used for signing MySQL server certificates. When skip_verify is true, the ca_path option is not needed and the MySQL server certificate won't be verified.
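As a sketch, these options belong to the [database] section of seafile.conf (the host and CA path below are placeholders):

```ini
[database]
type = mysql
host = 192.168.0.2
use_ssl = true
skip_verify = false
# trusted CA certificate used to verify the MySQL server certificate;
# not needed when skip_verify = true
ca_path = /etc/mysql/ca.pem
```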
The Seafile Pro server automatically expires file locks after some time, to prevent a file from staying locked for too long. The expiration time can be tuned in the seafile.conf file.
[file_lock]\ndefault_expire_hours = 6\n
The default is 12 hours.
Since Seafile-pro-9.0.6, you can add cache for getting locked files (reduce server load caused by sync clients). Since Pro Edition 12, this option is enabled by default.
[file_lock]\nuse_locked_file_cache = true\n
At the same time, you also need to configure the following memcache options for the cache to take effect:
[memcached]\nmemcached_options = --SERVER=<the IP of Memcached Server> --POOL-MIN=10 --POOL-MAX=100\n
Since Seafile-pro-6.3.10, you can enable seaf-server's RPC slow log for performance analysis. The slow log is enabled by default.
If you want to configure related options, add the options to seafile.conf:
[slow_log]\n# default to true\nenable_slow_log = true\n# the unit of all slow log thresholds is millisecond.\n# default to 5000 milliseconds, only RPC queries processed for longer than 5000 milliseconds will be logged.\nrpc_slow_threshold = 5000\n
You can find seafile_slow_rpc.log in logs/slow_logs. You can also use log rotate to rotate the log files: just send SIGUSR2 to the seaf-server process, and the slow log file will be closed and reopened.
Since 9.0.2 Pro, the signal to trigger log rotation has been changed to SIGUSR1. This signal will trigger rotation for all log files opened by seaf-server. You should change your log rotate settings accordingly.
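A minimal logrotate stanza following this scheme might look like the sketch below. The log path and rotation schedule are assumptions for a default installation; the postrotate signal matches the SIGUSR1 behavior described above for 9.0.2 Pro and later:

```
/opt/seafile/logs/seafile.log {
    weekly
    rotate 4
    compress
    missingok
    postrotate
        killall -USR1 seaf-server 2>/dev/null || true
    endscript
}
```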
Even though Nginx logs all requests with certain details, such as url, response code, upstream process time, it's sometimes desirable to have more context about the requests, such as the user id for each request. Such information can only be logged from file server itself. Since 9.0.2 Pro, access log feature is added to fileserver.
To enable access log, add below options to seafile.conf:
[fileserver]\n# default to false. If enabled, fileserver-access.log will be written to log directory.\nenable_access_log = true\n
The log format is as follows:
start time - user id - url - response code - process time\n
Seafile 9.0 introduces a new fileserver implemented in Go programming language. To enable it, you can set the options below in seafile.conf:
[fileserver]\nuse_go_fileserver = true\n
Go fileserver has 3 advantages over the traditional fileserver implemented in C language:
Better performance when syncing libraries with a large number of files. With C fileserver, syncing large libraries may consume all the worker threads on the server and make the service slow. There is a config option max_sync_file_count to limit the size of libraries that can be synced. The default is 100K. With Go fileserver you can set this option to a much higher number, such as 1 million.
Downloading zipped folders on the fly, with no limit on the size of the downloaded folder. With C fileserver, the server has to first create a zip file for the downloaded folder and then send it to the client. With Go fileserver, the zip file is created while it is being transferred to the client. The option max_download_dir_size is thus no longer needed by Go fileserver.
Since version 10.0 you can also set upload/download rate limits.
Go fileserver caches fs objects in memory. On the one hand, this avoids repeated creation and destruction of frequently accessed objects; on the other hand, it slows down the rate at which objects are released, which prevents Go's GC mechanism from consuming too much CPU time. You can set the amount of memory used by the fs cache with the following option.
[fileserver]\n# The unit is in M. Default to 2G.\nfs_cache_limit = 100\n
"},{"location":"config/seafile-conf/#profiling-go-fileserver-performance","title":"Profiling Go Fileserver Performance","text":"
Since Seafile 9.0.7, you can enable the profile function of go fileserver by adding the following configuration options:
# profile_password is required, change it for your need\n[fileserver]\nenable_profiling = true\nprofile_password = 8kcUz1I2sLaywQhCRtn2x1\n
This interface can be used through the pprof tool provided by Go language. See https://pkg.go.dev/net/http/pprof for details. Note that you have to first install Go on the client that issues the below commands. The password parameter should match the one you set in the configuration.
go tool pprof http://localhost:8082/debug/pprof/heap?password=8kcUz1I2sLaywQhCRtn2x1\ngo tool pprof http://localhost:8082/debug/pprof/profile?password=8kcUz1I2sLaywQhCRtn2x1\n
"},{"location":"config/seafile-conf/#notification-server-configuration","title":"Notification server configuration","text":"
Since Seafile 10.0.0, you can ask Seafile server to send notifications (file changes, lock changes and folder permission changes) to Notification Server component.
[notification]\nenabled = true\n# IP address of the server running notification server\n# or \"notification-server\" if you are running notification server container on the same host as Seafile server\nhost = 192.168.0.83\n# the port of notification server\nport = 8083\n
Tip
The configuration here only works for version >= 12.0. The configuration for notification server was changed in 12.0 to make it clearer. The new configuration is not compatible with older versions.
"},{"location":"config/seahub_customization/","title":"Seahub customization","text":""},{"location":"config/seahub_customization/#customize-seahub-logo-and-css","title":"Customize Seahub Logo and CSS","text":"
For example, modify the templates/help/base.html file and save it. You will see the new help page.
Note
There are more help pages available for modification; you can find the list of HTML files here
"},{"location":"config/seahub_customization/#add-an-extra-note-in-sharing-dialog","title":"Add an extra note in sharing dialog","text":"
You can add an extra note to the sharing dialog in seahub_settings.py
ADDITIONAL_SHARE_DIALOG_NOTE = {\n 'title': 'Attention! Read before sharing files:',\n 'content': 'Do not share personal or confidential official data with **.'\n}\n
Since Pro 7.0.9, Seafile supports adding custom navigation entries to the home page for quick access. This requires you to add the corresponding configuration to the conf/seahub_settings.py configuration file:
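A minimal sketch of such an entry, assuming the CUSTOM_NAV_ITEMS option with illustrative icon, label, and link values (the URL and label below are placeholders, not defaults):

```python
# Each entry is a dict with an icon class, a display label and a target URL.
CUSTOM_NAV_ITEMS = [
    {
        'icon': 'sf2-icon-star',             # icon shown next to the entry
        'desc': 'Internal wiki',             # label displayed on the home page
        'link': 'https://wiki.example.com',  # where the entry points (placeholder)
    },
]
```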
You can also modify most of the config items via web interface. The config items are saved in database table (seahub-db/constance_config). They have a higher priority over the items in config files. If you want to disable settings via web interface, you can add ENABLE_SETTINGS_VIA_WEB = False to seahub_settings.py.
"},{"location":"config/seahub_settings_py/#sending-email-notifications-on-seahub","title":"Sending Email Notifications on Seahub","text":"
# For security consideration, please set to match the host/domain of your site, e.g., ALLOWED_HOSTS = ['.example.com'].\n# Please refer https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts for details.\nALLOWED_HOSTS = ['.myseafile.com']\n\n\n# Whether to use a secure cookie for the CSRF cookie\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-cookie-secure\nCSRF_COOKIE_SECURE = True\n\n# The value of the SameSite flag on the CSRF cookie\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-cookie-samesite\nCSRF_COOKIE_SAMESITE = 'Strict'\n\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-trusted-origins\nCSRF_TRUSTED_ORIGINS = ['https://www.myseafile.com']\n
The following options affect user registration, password and session.
# Enable or disable registration on web. Default is `False`.\nENABLE_SIGNUP = False\n\n# Activate or deactivate user when registration completes. Default is `True`.\n# If set to `False`, new users need to be activated by admin in admin panel.\nACTIVATE_AFTER_REGISTRATION = False\n\n# Whether to send email when a system admin adds a new member. Default is `True`.\nSEND_EMAIL_ON_ADDING_SYSTEM_MEMBER = True\n\n# Whether to send email when a system admin resets a user's password. Default is `True`.\nSEND_EMAIL_ON_RESETTING_USER_PASSWD = True\n\n# Send system admin a notify email when user registration is complete. Default is `False`.\nNOTIFY_ADMIN_AFTER_REGISTRATION = True\n\n# Remember days for login. Default is 7\nLOGIN_REMEMBER_DAYS = 7\n\n# Attempt limit before showing a captcha at login.\nLOGIN_ATTEMPT_LIMIT = 3\n\n# Deactivate user account when login attempts exceed the limit\n# Since version 5.1.2 or pro 5.1.3\nFREEZE_USER_ON_LOGIN_FAILED = False\n\n# Minimum length for user's password\nUSER_PASSWORD_MIN_LENGTH = 6\n\n# LEVEL based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above.\nUSER_PASSWORD_STRENGTH_LEVEL = 3\n\n# Default False, only check USER_PASSWORD_MIN_LENGTH\n# when True, check password strength level, STRONG (or above) is allowed\nUSER_STRONG_PASSWORD_REQUIRED = False\n\n# Force user to change password when admin adds/resets a user.\n# Added in 5.1.1, defaults to True.\nFORCE_PASSWORD_CHANGE = True\n\n# Age of cookie, in seconds (default: 2 weeks).\nSESSION_COOKIE_AGE = 60 * 60 * 24 * 7 * 2\n\n# Whether a user's session cookie expires when the Web browser is closed.\nSESSION_EXPIRE_AT_BROWSER_CLOSE = False\n\n# Whether to save the session data on every request. Default is `False`\nSESSION_SAVE_EVERY_REQUEST = False\n\n# Whether to enable the feature \"published library\". 
Default is `False`\n# Since 6.1.0 CE\nENABLE_WIKI = True\n\n# In old versions, if you use Single Sign On, the password is not saved in Seafile.\n# Users can't use WebDAV because Seafile can't check whether the password is correct.\n# Since version 6.3.8, you can enable this option to let users set a specific password for WebDAV login.\n# Users who log in via SSO can use this password to log in to WebDAV.\n# Enable the feature. pycryptodome should be installed first.\n# sudo pip install pycryptodome==3.12.0\nENABLE_WEBDAV_SECRET = True\nWEBDAV_SECRET_MIN_LENGTH = 8\n\n# LEVEL for the password, based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above.\nWEBDAV_SECRET_STRENGTH_LEVEL = 1\n\n\n# Since version 7.0.9, you can force all users to log in with two factor authentication.\n# The prerequisite is that the administrator should 'enable two factor authentication' in the 'System Admin -> Settings' page.\n# Then you can add the following configuration information to the configuration file.\nENABLE_FORCE_2FA_TO_ALL_USERS = True\n
# Whether to allow creating encrypted libraries\nENABLE_ENCRYPTED_LIBRARY = True\n\n# version for encrypted library\n# should only be `2` or `4`.\n# version 3 is insecure (using AES128 encryption) so it's not supported any more.\n# refer to https://manual.seafile.com/latest/administration/security_features/#how-does-an-encrypted-library-work\n# for the difference between version 2 and 4.\nENCRYPTED_LIBRARY_VERSION = 2\n\n# Since version 12, you can choose the password hash algorithm for new encrypted libraries.\n# The password is used to encrypt the encryption key. So using a secure password hash algorithm to\n# prevent brute-force password guessing is important.\n# Before version 12, a fixed algorithm (PBKDF2-SHA256 with 1000 iterations) is used.\n#\n# Currently two hash algorithms are supported.\n# - PBKDF2: The only available parameter is the number of iterations. You need to increase the\n# number of iterations over time, as GPUs are more and more used for such calculation.\n# The default number of iterations is 1000. As of 2023, the recommended number of iterations is 600,000.\n# - Argon2id: Secure hash algorithm that has high cost even for GPUs. There are 3 parameters that\n# can be set: time cost, memory cost, and parallelism degree. The parameters are separated by commas,\n# e.g. \"2,102400,8\", which are the default parameters used in Seafile. 
Learn more about this algorithm\n# on https://github.com/P-H-C/phc-winner-argon2 .\n#\n# Note that only sync client >= 9.0.9 and SeaDrive >= 3.0.12 support syncing libraries created with these algorithms.\nENCRYPTED_LIBRARY_PWD_HASH_ALGO = \"argon2id\"\nENCRYPTED_LIBRARY_PWD_HASH_PARAMS = \"2,102400,8\"\n# ENCRYPTED_LIBRARY_PWD_HASH_ALGO = \"pbkdf2_sha256\"\n# ENCRYPTED_LIBRARY_PWD_HASH_PARAMS = \"600000\"\n\n# Minimum length for password of encrypted library\nREPO_PASSWORD_MIN_LENGTH = 8\n\n# Force use of a password when generating a share/upload link (since version 8.0.9)\nSHARE_LINK_FORCE_USE_PASSWORD = False\n\n# Minimum length of password for share link (since version 4.4)\nSHARE_LINK_PASSWORD_MIN_LENGTH = 8\n\n# LEVEL for the password of a share/upload link\n# based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above. (since version 8.0.9)\nSHARE_LINK_PASSWORD_STRENGTH_LEVEL = 3\n\n# Default expire days for share link (since version 6.3.8)\n# Once this value is configured, the user can no longer generate a share link with no expiration time.\n# If the expiration value is not set when the share link is generated, the value configured here will be used.\nSHARE_LINK_EXPIRE_DAYS_DEFAULT = 5\n\n# Minimum expire days for share link (since version 6.3.6)\n# SHARE_LINK_EXPIRE_DAYS_MIN should be less than SHARE_LINK_EXPIRE_DAYS_DEFAULT (if the latter is set).\nSHARE_LINK_EXPIRE_DAYS_MIN = 3 # default is 0, no limit.\n\n# Maximum expire days for share link (since version 6.3.6)\n# SHARE_LINK_EXPIRE_DAYS_MAX should be greater than SHARE_LINK_EXPIRE_DAYS_DEFAULT (if the latter is set).\nSHARE_LINK_EXPIRE_DAYS_MAX = 8 # default is 0, no limit.\n\n# Default expire days for upload link (since version 7.1.6)\n# Once this value is configured, the user can no longer generate an upload link with no expiration time.\n# If the expiration value is not set when the upload link is generated, the value 
configured here will be used.\nUPLOAD_LINK_EXPIRE_DAYS_DEFAULT = 5\n\n# minimum expire days for upload link (since version 7.1.6)\n# UPLOAD_LINK_EXPIRE_DAYS_MIN should be less than UPLOAD_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nUPLOAD_LINK_EXPIRE_DAYS_MIN = 3 # default is 0, no limit.\n\n# maximum expire days for upload link (since version 7.1.6)\n# UPLOAD_LINK_EXPIRE_DAYS_MAX should be greater than UPLOAD_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nUPLOAD_LINK_EXPIRE_DAYS_MAX = 8 # default is 0, no limit.\n\n# force user login when view file/folder share link (since version 6.3.6)\nSHARE_LINK_LOGIN_REQUIRED = True\n\n# enable water mark when view(not edit) file in web browser (since version 6.3.6)\nENABLE_WATERMARK = True\n\n# Disable sync with any folder. Default is `False`\n# NOTE: since version 4.2.4\nDISABLE_SYNC_WITH_ANY_FOLDER = True\n\n# Enable or disable library history setting\nENABLE_REPO_HISTORY_SETTING = True\n\n# Enable or disable user share library to any group\n# Since version 6.2.0\nENABLE_SHARE_TO_ALL_GROUPS = True\n\n# Enable or disable user to clean trash (default is True)\n# Since version 6.3.6\nENABLE_USER_CLEAN_TRASH = True\n\n# Add a report abuse button on download links. (since version 7.1.0)\n# Users can report abuse on the share link page, fill in the report type, contact information, and description.\n# Default is false.\nENABLE_SHARE_LINK_REPORT_ABUSE = True\n
Options for online file preview:
# Online preview maximum file size, defaults to 30M.\nFILE_PREVIEW_MAX_SIZE = 30 * 1024 * 1024\n\n# Extensions of previewed text files.\n# NOTE: since version 6.1.1\nTEXT_PREVIEW_EXT = \"\"\"ac, am, bat, c, cc, cmake, cpp, cs, css, diff, el, h, html,\nhtm, java, js, json, less, make, org, php, pl, properties, py, rb,\nscala, script, sh, sql, txt, text, tex, vi, vim, xhtml, xml, log, csv,\ngroovy, rst, patch, go\"\"\"\n\n\n# Seafile only generates thumbnails for images smaller than the following size.\n# Since version 6.3.8 pro, PSD online preview is supported.\nTHUMBNAIL_IMAGE_SIZE_LIMIT = 30 # MB\n\n# Enable or disable thumbnails for video. ffmpeg and moviepy should be installed first.\n# For details, please refer to https://manual.seafile.com/deploy/video_thumbnails.html\n# NOTE: this option is deprecated in version 7.1\nENABLE_VIDEO_THUMBNAIL = False\n\n# Use the frame at 5 seconds as the thumbnail\n# NOTE: this option is deprecated in version 7.1\nTHUMBNAIL_VIDEO_FRAME_TIME = 5\n\n# Absolute filesystem path to the directory that will hold thumbnail files.\nTHUMBNAIL_ROOT = '/haiwen/seahub-data/thumbnail/thumb/'\n\n# Default size for picture preview. Enlarging this size can improve the preview quality.\n# NOTE: since version 6.1.1\nTHUMBNAIL_SIZE_FOR_ORIGINAL = 1024\n
You should enable cloud mode if you use Seafile with an unknown user base. It disables the organization tab in Seahub's website so that users can't see the user list. Cloud mode provides some nice features like sharing content with unregistered users and sending invitations to them; therefore you may also want to enable user registration. Through the global address book (since version 4.2.3) anyone can search for every user account, so you probably want to disable it.
# Enable cloud mode and hide `Organization` tab.\nCLOUD_MODE = True\n\n# Disable global address book\nENABLE_GLOBAL_ADDRESSBOOK = False\n
# Enable authentication with ADFS\n# Default is False\n# Since 6.0.9\nENABLE_ADFS_LOGIN = True\n\n# Force user login through ADFS/OAuth instead of email and password\n# Default is False\n# Since 11.0.7, in version 12.0, it also controls users via OAuth\nDISABLE_ADFS_USER_PWD_LOGIN = True\n\n# Enable authentication with Kerberos\n# Default is False\nENABLE_KRB5_LOGIN = True\n\n# Enable authentication with Shibboleth\n# Default is False\nENABLE_SHIBBOLETH_LOGIN = True\n\n# Enable client to open an external browser for single sign on\n# When it is false, the old built-in browser is opened for single sign on\n# When it is true, the default browser of the operating system is opened\n# The benefit of using the system browser is that it can support hardware 2FA\n# Since 11.0.0, and sync client 9.0.5, drive client 3.0.8\nCLIENT_SSO_VIA_LOCAL_BROWSER = True # default is False\nCLIENT_SSO_UUID_EXPIRATION = 5 * 60 # in seconds\n
# This is outside URL for Seahub(Seafile Web). \n# The domain part (i.e., www.example.com) will be used in generating share links and download/upload file via web.\n# Note: SERVICE_URL is moved to seahub_settings.py since 9.0.0\n# Note: SERVICE_URL is no longer used since version 12.0\n# SERVICE_URL = 'https://seafile.example.com:'\n\n# Disable settings via Web interface in system admin->settings\n# Default is True\n# Since 5.1.3\nENABLE_SETTINGS_VIA_WEB = False\n\n# Choices can be found here:\n# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name\n# although not all choices may be available on all operating systems.\n# If running in a Windows environment this must be set to the same as your\n# system time zone.\nTIME_ZONE = 'UTC'\n\n# Language code for this installation. All choices can be found here:\n# http://www.i18nguy.com/unicode/language-identifiers.html\n# Default language for sending emails.\nLANGUAGE_CODE = 'en'\n\n# Custom language code choice.\nLANGUAGES = (\n ('en', 'English'),\n ('zh-cn', '\u7b80\u4f53\u4e2d\u6587'),\n ('zh-tw', '\u7e41\u9ad4\u4e2d\u6587'),\n)\n\n# Set this to your website/company's name. This is contained in email notifications and welcome message when user login for the first time.\nSITE_NAME = 'Seafile'\n\n# Browser tab's title\nSITE_TITLE = 'Private Seafile'\n\n# If you don't want to run seahub website on your site's root path, set this option to your preferred path.\n# e.g. setting it to '/seahub/' would run seahub on http://example.com/seahub/.\nSITE_ROOT = '/'\n\n# Max number of files when user upload file/folder.\n# Since version 6.0.4\nMAX_NUMBER_OF_FILES_FOR_FILEUPLOAD = 500\n\n# Control the language that send email. 
Default to user's current language.\n# Since version 6.1.1\nSHARE_LINK_EMAIL_LANGUAGE = ''\n\n# Interval at which the browser requests unread notifications\n# Since PRO 6.1.4 or CE 6.1.2\nUNREAD_NOTIFICATIONS_REQUEST_INTERVAL = 3 * 60 # seconds\n\n# Whether to allow a user to delete their account, change login password or update basic user\n# info on profile page.\n# Since PRO 6.3.10\nENABLE_DELETE_ACCOUNT = False\nENABLE_UPDATE_USER_INFO = False\nENABLE_CHANGE_PASSWORD = False\n\n# Get web api auth token on profile page.\nENABLE_GET_AUTH_TOKEN_BY_SESSION = True\n\n# Since 8.0.6 CE/PRO version.\n# URL redirected to after a user logs out of Seafile.\n# Usually configured as Single Logout url.\nLOGOUT_REDIRECT_URL = 'http{s}://www.example-url.com'\n\n\n# Enable system admin to add T&C; all users need to accept the terms before using. Defaults to `False`.\n# Since version 6.0\nENABLE_TERMS_AND_CONDITIONS = True\n\n# Enable two factor authentication for accounts. Defaults to `False`.\n# Since version 6.0\nENABLE_TWO_FACTOR_AUTH = True\n\n# Enable users to select a template when creating a library.\n# When a user selects a template, Seafile will automatically create the folders related to the pattern.\n# Since version 6.0\nLIBRARY_TEMPLATES = {\n 'Technology': ['/Develop/Python', '/Test'],\n 'Finance': ['/Current assets', '/Fixed assets/Computer']\n}\n\n# Enable a user to change password in 'settings' page. Defaults to `True`\n# Since version 6.2.11\nENABLE_CHANGE_PASSWORD = True\n\n# Whether to show contact email when searching users.\nENABLE_SHOW_CONTACT_EMAIL_WHEN_SEARCH_USER = True\n
"},{"location":"config/seahub_settings_py/#pro-edition-only-options","title":"Pro edition only options","text":"
# Whether to show the used traffic in user's profile popup dialog. Default is True\nSHOW_TRAFFIC = True\n\n# Allow administrator to view user's files in UNENCRYPTED libraries\n# through Libraries page in System Admin. Default is False.\nENABLE_SYS_ADMIN_VIEW_REPO = True\n\n# Require non-logged-in users to provide an email before downloading or uploading on a shared link page.\n# Since version 5.1.4\nENABLE_SHARE_LINK_AUDIT = True\n\n# Check for viruses after files are uploaded to shared upload links. Defaults to `False`.\n# Since version 6.0\nENABLE_UPLOAD_LINK_VIRUS_CHECK = True\n\n# Send email to these email addresses when a virus is detected.\n# This list can contain any valid email address, not necessarily the emails of Seafile users.\n# Since version 6.0.8\nVIRUS_SCAN_NOTIFY_LIST = ['user_a@seafile.com', 'user_b@seafile.com']\n
# API throttling related settings. Enlarge the rates if you get 429 response codes during API calls.\nREST_FRAMEWORK = {\n 'DEFAULT_THROTTLE_RATES': {\n 'ping': '600/minute',\n 'anon': '5/minute',\n 'user': '300/minute',\n },\n 'UNICODE_JSON': False,\n}\n\n# Throttling whitelist used to disable throttling for certain IPs.\n# e.g. REST_FRAMEWORK_THROTTING_WHITELIST = ['127.0.0.1', '192.168.1.1']\n# Please make sure `REMOTE_ADDR` header is configured in Nginx conf according to https://manual.seafile.com/13.0/setup_binary/ce/deploy_with_nginx.html.\nREST_FRAMEWORK_THROTTING_WHITELIST = []\n
Since version 6.2, you can define a custom function to modify the result of user search function.
For example, if you want to let users search only for users in the same institution, you can define a custom_search_user function in {seafile install path}/conf/seahub_custom_functions/__init__.py
Code example:
import os\nimport sys\n\ncurrent_path = os.path.dirname(os.path.abspath(__file__))\nseahub_dir = os.path.join(current_path, \\\n '../../seafile-server-latest/seahub/seahub')\nsys.path.append(seahub_dir)\n\nfrom seahub.profile.models import Profile\ndef custom_search_user(request, emails):\n\n institution_name = ''\n\n username = request.user.username\n profile = Profile.objects.get_profile_by_user(username)\n if profile:\n institution_name = profile.institution\n\n inst_users = [p.user for p in\n Profile.objects.filter(institution=institution_name)]\n\n filtered_emails = []\n for email in emails:\n if email in inst_users:\n filtered_emails.append(email)\n\n return filtered_emails\n
You should NOT change the name of custom_search_user and seahub_custom_functions/__init__.py
Since version 6.2.5 pro, if you enable the ENABLE_SHARE_TO_ALL_GROUPS feature on the sysadmin settings page, you can also define a custom function to return the groups a user can share a library to.
For example, if you want to let a user share a library to both their own groups and the groups of user test@test.com, you can define a custom_get_groups function in {seafile install path}/conf/seahub_custom_functions/__init__.py
Code example:
import os\nimport sys\n\ncurrent_path = os.path.dirname(os.path.abspath(__file__))\nseaserv_dir = os.path.join(current_path, \\\n '../../seafile-server-latest/seafile/lib64/python2.7/site-packages')\nsys.path.append(seaserv_dir)\n\ndef custom_get_groups(request):\n\n from seaserv import ccnet_api\n\n groups = []\n username = request.user.username\n\n # for current user\n groups += ccnet_api.get_groups(username)\n\n # for 'test@test.com' user\n groups += ccnet_api.get_groups('test@test.com')\n\n return groups\n
You should NOT change the name of custom_get_groups and seahub_custom_functions/__init__.py
Tip
You need to restart seahub so that your changes take effect.
Deploy in Docker / Deploy from binary packages
docker compose restart\n
cd /opt/seafile/seafile-server-latest\n./seahub.sh restart\n
If your changes don't take effect, you may need to delete 'seahub_settings.pyc' (a cache file).
"},{"location":"config/sending_email/","title":"Sending Email Notifications on Seahub","text":""},{"location":"config/sending_email/#types-of-email-sending-in-seafile","title":"Types of Email Sending in Seafile","text":"
There are currently five types of emails sent in Seafile:
User resets his/her password
System admin adds new member
System admin resets user password
User sends file/folder share and upload link
Reminder of unread notifications
The first four types of email are sent immediately. The last type is sent by a background task running periodically.
"},{"location":"config/sending_email/#options-of-email-sending","title":"Options of Email Sending","text":"
Please add the following lines to seahub_settings.py to enable email sending.
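A typical set of options looks like the sketch below. These are standard Django email settings; the SMTP host, account, and password are placeholders for your own provider:

```python
EMAIL_USE_TLS = True                      # STARTTLS, usually together with port 587
EMAIL_HOST = 'smtp.example.com'           # your SMTP server (placeholder)
EMAIL_HOST_USER = 'username@example.com'  # SMTP account (placeholder)
EMAIL_HOST_PASSWORD = 'password'          # SMTP password (placeholder)
EMAIL_PORT = 587
DEFAULT_FROM_EMAIL = EMAIL_HOST_USER      # From: address of outgoing mail
SERVER_EMAIL = EMAIL_HOST_USER            # sender address for error reports
```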
If your email service still does not work, you can check the log file logs/seahub.log to see what may cause the problem. For a complete email notification list, please refer to email notification list.
If you want to use the email service without authentication, leave EMAIL_HOST_USER and EMAIL_HOST_PASSWORD blank (''). (But notice that the emails will then be sent without a From: address.)
About using SSL connection (using port 465)
Port 587 is used to establish a connection with STARTTLS, while port 465 is used to establish an SSL connection. Starting from Django 1.8, both are supported.
If you want to use SSL on port 465, set EMAIL_USE_SSL = True instead of EMAIL_USE_TLS.
"},{"location":"config/sending_email/#change-reply-to-of-email","title":"Change reply to of email","text":"
You can change the reply-to field of email by adding the following settings to seahub_settings.py. This only affects email sending for file share links.
# Set reply-to header to user's email or not, defaults to ``False``. For details,\n# please refer to http://www.w3.org/Protocols/rfc822/\nADD_REPLY_TO_HEADER = True\n
The background task runs periodically to check whether a user has new unread notifications. If there are any, it sends a reminder email to that user. The background email sending task is controlled by seafevents.conf.
[SEAHUB EMAIL]\n\n## must be \"true\" to enable user email notifications when there are new unread notifications\nenabled = true\n\n## interval of sending seahub email. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval = 30m\n
"},{"location":"config/sending_email/#add-smime-signature-to-email","title":"Add S/MIME signature to email","text":"
If you want the email signed by S/MIME, please add the config in seahub_settings.py
ENABLE_SMIME = True\nSMIME_CERTS_DIR = /opt/seafile/seahub-data/smime-certs # including cert.pem and private_key.pem\n
The certificate can be generated with the openssl command, or you can obtain one from a certificate vendor; it is up to you. For example, generate the certs using the following command:
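As an illustration, one way to create the two files expected in SMIME_CERTS_DIR is the self-signed sketch below (the subject is a placeholder and should match your sending address; a vendor-issued certificate works the same way):

```shell
# Create a self-signed cert and unencrypted key, named as expected
# by SMIME_CERTS_DIR (cert.pem and private_key.pem).
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
    -subj "/CN=admin@example.com" \
    -keyout private_key.pem -out cert.pem
```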
The simplest way to customize the email messages is setting the SITE_NAME variable in seahub_settings.py. If it is not enough for your case, you can customize the email templates.
Tip
Subject line may vary between different releases, this is based on Release 5.0.0. Restart Seahub so that your changes take effect.
"},{"location":"config/sending_email/#the-email-base-template","title":"The email base template","text":"
seahub/seahub/templates/email_base.html
Tip
You can copy email_base.html to seahub-data/custom/templates/email_base.html and modify the new one. In this way, the customization will be maintained after upgrade.
You can copy password_reset_email.html to seahub-data/custom/templates/registration/password_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
"},{"location":"config/sending_email/#system-admin-adds-new-member","title":"System admin adds new member","text":"
Subject
seahub/seahub/views/sysadmin.py line:424
send_html_email(_(u'Password has been reset on %s') % SITE_NAME,\n 'sysadmin/user_reset_email.html', c, None, [email])\n
You can copy user_add_email.html to seahub-data/custom/templates/sysadmin/user_add_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
"},{"location":"config/sending_email/#system-admin-resets-user-password","title":"System admin resets user password","text":"
Subject
seahub/seahub/views/sysadmin.py line:1224
send_html_email(_(u'Password has been reset on %s') % SITE_NAME,\n 'sysadmin/user_reset_email.html', c, None, [email])\n
You can copy user_reset_email.html to seahub-data/custom/templates/sysadmin/user_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
try:\n if file_shared_type == 'f':\n c['file_shared_type'] = _(u\"file\")\n send_html_email(_(u'A file is shared to you on %s') % SITE_NAME,\n 'shared_link_email.html',\n c, from_email, [to_email],\n reply_to=reply_to\n )\n else:\n c['file_shared_type'] = _(u\"directory\")\n send_html_email(_(u'A directory is shared to you on %s') % SITE_NAME,\n 'shared_link_email.html',\n c, from_email, [to_email],\n reply_to=reply_to)\n
You can copy shared_link_email.html to seahub-data/custom/templates/shared_link_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
"},{"location":"config/sending_email/#reminder-of-unread-notifications","title":"Reminder of unread notifications","text":"
Subject
send_html_email(_('New notice on %s') % settings.SITE_NAME,\n 'notifications/notice_email.html', c,\n None, [to_user])\n
Shibboleth is a widely used single sign on (SSO) protocol. Seafile supports authentication via Shibboleth. It allows users from another organization to log in to Seafile without registering an account on the service provider.
In this documentation, we assume the reader is familiar with Shibboleth installation and configuration. For introduction to Shibboleth concepts, please refer to https://shibboleth.atlassian.net/wiki/spaces/CONCEPT/overview .
Shibboleth Service Provider (SP) should be installed on the same server as the Seafile server. The official SP from https://shibboleth.net/ is implemented as an Apache module. The module handles all Shibboleth authentication details. Seafile server receives authentication information (username) from HTTP request. The username then can be used as login name for the user.
Seahub provides a special URL to handle Shibboleth login. The URL is https://your-seafile-domain/sso. Only this URL needs to be configured under Shibboleth protection. All other URLs don't go through the Shibboleth module. The overall workflow for a user to login with Shibboleth is as follows:
In the Seafile login page, there is a separate \"Single Sign-On\" login button. When the user clicks the button, she/he will be redirected to https://your-seafile-domain/sso.
Since that URL is controlled by Shibboleth, the user will be redirected to IdP for login. After the user logs in, she/he will be redirected back to https://your-seafile-domain/sso.
This time the Shibboleth module passes the request on to Seahub. Seahub reads the user information from the request (HTTP_REMOTE_USER header) and brings the user to her/his home page.
All later access to Seahub will not pass through the Shibboleth module. Since Seahub keeps session information internally, the user doesn't need to login again until the session expires.
Since Shibboleth support requires Apache, if you want to use Nginx, you need two servers: one for non-Shibboleth access, and another configured with Apache to allow Shibboleth login. In a cluster environment, you can configure your load balancer to direct traffic to different servers according to the URL. Only the URL https://your-seafile-domain/sso needs to be directed to Apache.
The configuration includes 3 steps:
Install and configure Shibboleth Service Provider;
Configure Apache;
Configure Seahub.
"},{"location":"config/shibboleth_authentication/#install-and-configure-shibboleth-service-provider","title":"Install and Configure Shibboleth Service Provider","text":"
<!-- The ApplicationDefaults element is where most of Shibboleth's SAML bits are defined. -->\n<ApplicationDefaults entityID=\"https://your-seafile-domain/sso\"\n REMOTE_USER=\"mail\"\n cipherSuites=\"DEFAULT:!EXP:!LOW:!aNULL:!eNULL:!DES:!IDEA:!SEED:!RC4:!3DES:!kRSA:!SSLv2:!SSLv3:!TLSv1:!TLSv1.1\">\n
Seahub extracts the username from the REMOTE_USER environment variable. So you should modify your SP's shibboleth2.xml config file, so that Shibboleth translates your desired attribute into REMOTE_USER environment variable.
In Seafile, only one of the following two attributes can be used for username: eppn, and mail. eppn stands for \"Edu Person Principal Name\". It is usually the UserPrincipalName attribute in Active Directory. It's not necessarily a valid email address. mail is the user's email address. You should set REMOTE_USER to either one of these attributes.
<!--\nConfigures SSO for a default IdP. To properly allow for >1 IdP, remove\nentityID property and adjust discoveryURL to point to discovery service.\nYou can also override entityID on /Login query string, or in RequestMap/htaccess.\n-->\n<SSO entityID=\"https://your-IdP-domain\">\n <!--discoveryProtocol=\"SAMLDS\" discoveryURL=\"https://wayf.ukfederation.org.uk/DS\"-->\n SAML2\n</SSO>\n
After restarting Apache, you should be able to get the Service Provider metadata by accessing https://your-seafile-domain/Shibboleth.sso/Metadata. This metadata should be uploaded to the Identity Provider (IdP) server.
Seahub can process additional user attributes from Shibboleth. These attributes are saved into Seahub's database as user properties. None of them is mandatory. The internal user properties Seahub currently supports are:
givenname
surname
contact_email: used for sending notification email to user if username is not a valid email address (like eppn).
institution: used to identify user's institution
You can specify the mapping between Shibboleth attributes and Seahub's user properties in seahub_settings.py:
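A mapping might look like the sketch below. The keys must match the attribute names your SP exposes (see attribute-map.xml), and the boolean marks whether the attribute is required; treat the exact entries as examples, not a canonical list:

```python
# Example mapping in seahub_settings.py: Shibboleth attribute -> (required, Seahub property)
SHIBBOLETH_ATTRIBUTE_MAP = {
    "givenname": (False, "givenname"),
    "sn": (False, "surname"),
    "mail": (False, "contact_email"),
    "organization": (False, "institution"),
}
```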
In the above config, the hash key is the Shibboleth attribute name, and the second element of the hash value is Seahub's property name. You can adjust the Shibboleth attribute names for your own needs.
You may have to change attribute-map.xml in your Shibboleth SP so that the desired attributes are passed to Seahub. You also have to make sure the IdP sends these attributes to the SP.
We also added an option SHIB_ACTIVATE_AFTER_CREATION (defaults to True) which controls the user status after the Shibboleth connection. If this option is set to False, users will be inactive after connection, and system admins will be notified by email to activate the account.
"},{"location":"config/shibboleth_authentication/#affiliation-and-user-role","title":"Affiliation and user role","text":"
Shibboleth has a field called affiliation. It is a list like: employee@uni-mainz.de;member@uni-mainz.de;faculty@uni-mainz.de;staff@uni-mainz.de.
We are able to set the user role from Shibboleth. For details about user roles, please refer to Roles and Permissions.
To enable this, modify SHIBBOLETH_ATTRIBUTE_MAP above and add a Shibboleth-affiliation field. You may need to change Shibboleth-affiliation according to your Shibboleth SP attributes.
After Shibboleth login, Seafile calculates the user's role from the affiliation and SHIBBOLETH_AFFILIATION_ROLE_MAP.
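A sketch of such a map in seahub_settings.py; the domains and role names below are examples for illustration, with the 'patterns' entry matching wildcard affiliations:

```python
# Example affiliation -> role map; exact entries take precedence over patterns.
SHIBBOLETH_AFFILIATION_ROLE_MAP = {
    'employee@uni-mainz.de': 'staff',
    'member@uni-mainz.de': 'staff',
    'patterns': (
        ('*@uni-mainz.de', 'guest'),
        ('*', 'guest'),
    ),
}
```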
"},{"location":"config/shibboleth_authentication/#custom-set-user-role","title":"Custom set user role","text":"
If you are unable to set user roles by obtaining affiliation information, or if you wish to have a more customized way of setting user roles, you can add the following configuration to achieve this.
For example, set all users whose email addresses end with @seafile.com as default, and set other users as guest.
First, update the SHIBBOLETH_ATTRIBUTE_MAP configuration in seahub_settings.py, and add HTTP_REMOTE_USER.
Then, create the file /opt/seafile/conf/seahub_custom_functions/__init__.py and add the following code.
# function name `custom_shibboleth_get_user_role` should NOT be changed\ndef custom_shibboleth_get_user_role(shib_meta):\n\n remote_user = shib_meta.get('remote_user', '')\n if not remote_user:\n return ''\n\n remote_user = remote_user.lower()\n if remote_user.endswith('@seafile.com'):\n return 'default'\n else:\n return 'guest'\n
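The function can be sanity-checked outside Seafile. The block below is a self-contained copy for illustration, assuming shib_meta is a plain dict with a remote_user key, as in the code above:

```python
def custom_shibboleth_get_user_role(shib_meta):
    """Map a Shibboleth remote_user to a Seafile role (copy for illustration)."""
    remote_user = shib_meta.get('remote_user', '')
    if not remote_user:
        return ''
    remote_user = remote_user.lower()
    if remote_user.endswith('@seafile.com'):
        return 'default'
    return 'guest'
```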
Open seafile-server-latest/seahub/thirdpart/shibboleth/middleware.py
Insert the following code at line 59
assert False\n
Insert the following code at line 65
if not username:\n assert False\n
The complete code after these changes is as follows:
#Locate the remote user header.\n# import pprint; pprint.pprint(request.META)\ntry:\n username = request.META[SHIB_USER_HEADER]\nexcept KeyError:\n assert False\n # If specified header doesn't exist then return (leaving\n # request.user set to AnonymousUser by the\n # AuthenticationMiddleware).\n return\n\nif not username:\n assert False\n\np_id = ccnet_api.get_primary_id(username)\nif p_id is not None:\n username = p_id\n
Then restart Seafile and log in again; you will see the debug info on the web page.
"},{"location":"config/single_sign_on/","title":"Single Sign On support in Seafile","text":"
Seafile supports most of the popular single sign-on authentication protocols. Some are included in the Community Edition; some are available only in the Pro Edition.
In the Community Edition:
Shibboleth
OAuth
Remote User (Proxy Server)
Auto Login to SeaDrive on Windows
Kerberos authentication can be integrated by using Apache as a proxy server and following the instructions in Remote User Authentication and Auto Login SeaDrive on Windows.
Seafile internally uses a data model similar to Git's. It consists of Repo, Commit, FS, and Block.
Seafile's high performance comes from its architectural design: file data and file system metadata are stored in object storage (or on the file system), while only a small amount of metadata about the libraries is stored in the relational database. An overview of the architecture can be depicted as below. We'll describe the data model in more detail.
Commit objects save the change history of a repo. Each update from the web interface or upload from the sync client creates a new commit object. A commit object contains the following information: commit ID, library name, creator of this commit (a.k.a. the modifier), creation time of this commit (a.k.a. modification time), root fs object ID, and parent commit ID.
The root fs object ID points to the root FS object, from which we can traverse a file system snapshot for the repo.
The parent commit ID points to the last commit previous to the current commit. The RepoHead table contains the latest head commit ID for each repo. From this head commit, we can traverse the repo history.
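The head-to-root traversal described above can be sketched as follows; commit objects are modeled here as plain dicts, not Seafile's actual API:

```python
def walk_history(head_commit_id, load_commit):
    """Yield commits from the head back to the root, following parent IDs."""
    commit_id = head_commit_id
    while commit_id is not None:
        commit = load_commit(commit_id)
        yield commit
        commit_id = commit["parent_id"]

# Example: a three-commit history kept in a dict keyed by commit ID.
commits = {
    "c3": {"id": "c3", "root_fs": "f3", "parent_id": "c2"},
    "c2": {"id": "c2", "root_fs": "f2", "parent_id": "c1"},
    "c1": {"id": "c1", "root_fs": "f1", "parent_id": None},
}
history = [c["id"] for c in walk_history("c3", commits.__getitem__)]
```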
If you use file system as storage backend, commit objects are stored in the path seafile-data/storage/commits/<repo_id>. If you use object storage, commit objects are stored in the commits bucket.
There are two types of FS objects, SeafDir Object and Seafile Object. SeafDir Object represents a directory, and Seafile Object represents a file.
The SeafDir object contains metadata for each file/sub-folder, which includes name, last modification time, last modifier, size, and object ID. The object ID points to another SeafDir or Seafile object. The Seafile object contains a block list, which is a list of block IDs for the file.
The FS object IDs are calculated from the contents of the objects. That means if a folder or a file is not changed, the same objects are reused across multiple commits. This allows us to create snapshots very efficiently.
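A toy illustration of content-addressed IDs (a SHA-1 over a serialized object; Seafile's real serialization format differs):

```python
import hashlib

def object_id(serialized: bytes) -> str:
    """Toy content-addressed ID: identical content yields an identical ID."""
    return hashlib.sha1(serialized).hexdigest()

# An unchanged folder serializes to the same bytes across commits,
# so both commits reference one and the same stored object.
id_in_commit_a = object_id(b"dir: photos -> [img1, img2]")
id_in_commit_b = object_id(b"dir: photos -> [img1, img2]")
```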
If you use file system as storage backend, fs objects are stored in the path seafile-data/storage/fs/<repo_id>. If you use object storage, fs objects are stored in the fs bucket.
A file is further divided into blocks of variable length. We use a Content Defined Chunking algorithm to divide files into blocks. A clear overview of this algorithm can be found at http://pdos.csail.mit.edu/papers/lbfs:sosp01/lbfs.pdf. On average, a block's size is around 8MB.
This mechanism makes it possible to deduplicate data between different versions of frequently updated files, improving storage efficiency. It also enables transferring data to/from multiple servers in parallel.
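A toy content-defined chunker illustrates the idea: chunk boundaries depend on the local content rather than absolute offsets, so an insertion only changes nearby chunks while the rest deduplicate. The rolling hash and the size parameters here are illustrative, not Seafile's:

```python
def cdc_chunks(data: bytes, mask=0x0FFF, min_size=1024, max_size=16384):
    """Cut a chunk whenever the rolling fingerprint matches the mask pattern."""
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) ^ byte) & 0xFFFFFFFF  # toy rolling fingerprint
        size = i - start + 1
        if (size >= min_size and (h & mask) == mask) or size >= max_size:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])  # trailing chunk may be shorter than min_size
    return chunks
```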
If you use file system as storage backend, block objects are stored in the path seafile-data/storage/blocks/<repo_id>. If you use object storage, block objects are stored in the blocks bucket.
A \"virtual repo\" is a special repo that will be created in the cases below:
A folder in a library is shared.
A folder in a library is synced selectively from the sync client.
A virtual repo can be understood as a view of part of the data in its parent library. For example, when sharing a folder, the virtual repo only provides access to the shared folder in that library. A virtual repo uses the same underlying data as its parent library, so virtual repos share the same fs and blocks storage locations as their parents.
A virtual repo has its own change history, so it has a commits storage location separate from its parent's. Changes in the virtual repo and its parent repo are merged bidirectionally, so changes on each side are visible to the other.
There is a VirtualRepo table in seafile_db database. It contains the folder path in the parent repo for each virtual repo.
The following setups are required for building and packaging Sync Client on macOS:
XCode 13.2 (or later)
After installing XCode, you can start XCode once so that it automatically installs the rest of the components.
Qt 6.2
MacPorts
Modify /opt/local/etc/macports/macports.conf to add the configuration universal_archs arm64 x86_64. This specifies the architectures for which MacPorts compiles.
Modify /opt/local/etc/macports/variants.conf to add configuration +universal. MacPorts installs universal versions of all ports.
Install other dependencies: sudo port install autoconf automake pkgconfig libtool glib2 libevent vala openssl git jansson cmake libwebsockets argon2.
Certificates
Create certificate signing requests for certification, see https://developer.apple.com/help/account/create-certificates/create-a-certificate-signing-request.
Create a Developer ID Application certificate and a Developer ID Installer certificate, see https://developer.apple.com/help/account/create-certificates/create-developer-id-certificates. Install them to the login keychain.
Install the Developer ID Intermediate Certificate (Developer ID - G2), from https://www.apple.com/certificateauthority/
Update the CERT_ID in seafile-workspace/seafile/scripts/build/build-mac-local-py3.py to the ID of Developer ID Application.
Run the packaging script: python3 build-mac-local-py3.py --brand=\"\" --version=1.0.0 --nostrip --universal
"},{"location":"develop/rpi/","title":"How to Build Seafile Server Release Package","text":"
From Seafile 11.0, you can build the Seafile release package with the seafile-build script. You can check the README.md file in the same folder for detailed instructions.
The seafile-build.sh script is compatible with more platforms, including Raspberry Pi, ARM64, and x86-64.
Old version is below:
Table of contents:
Setup the build environment
Install packages
Compile development libraries
Install Python libraries
Prepare source code
Fetch git tags and prepare source tarballs
Run the packaging script
Test the built package
Test a fresh install
Test upgrading
"},{"location":"develop/rpi/#setup-the-build-environment","title":"Setup the build environment","text":"
Requirements:
A Raspberry Pi with the Raspbian distribution installed.
"},{"location":"develop/rpi/#compile-development-libraries","title":"Compile development libraries","text":""},{"location":"develop/rpi/#libevhtp","title":"libevhtp","text":"
libevhtp is an HTTP server library built on top of libevent. It is used in the Seafile file server.
git clone https://www.github.com/haiwen/libevhtp.git\ncd libevhtp\ncmake -DEVHTP_DISABLE_SSL=ON -DEVHTP_BUILD_SHARED=OFF .\nmake\nsudo make install\n
After compiling all the libraries, run ldconfig to update the system libraries cache:
After the script finishes, you will get a seafile-server_6.0.1_pi.tar.gz in the ~/seafile-server-pkgs folder.
"},{"location":"develop/rpi/#test-the-built-package","title":"Test the built package","text":""},{"location":"develop/rpi/#test-a-fresh-install","title":"Test a fresh install","text":"
The test should cover these steps at least:
The setup process is ok
After seafile.sh start and seahub.sh start, you can login from a browser.
Uploading/Downloading files through a web browser works correctly.
Seafile WebDAV server works correctly
"},{"location":"develop/rpi/#test-upgrading-from-a-previous-version","title":"Test upgrading from a previous version","text":"
Download the package of the previous Seafile server version and set it up.
Upgrade according to the manual
After the upgrade, check that the functionality is OK:
Uploading/Downloading files through a web browser works correctly.
mysql -uroot -pyour_password -e \"CREATE DATABASE ccnet CHARACTER SET utf8;\"\nmysql -uroot -pyour_password -e \"CREATE DATABASE seafile CHARACTER SET utf8;\"\nmysql -uroot -pyour_password -e \"CREATE DATABASE seahub CHARACTER SET utf8;\"\n
Then, you can visit http://127.0.0.1:8000/ to use Seafile.
"},{"location":"develop/server/#the-final-directory-structure","title":"The Final Directory Structure","text":""},{"location":"develop/server/#more","title":"More","text":""},{"location":"develop/server/#deploy-frontend-development-environment","title":"Deploy Frontend Development Environment","text":"
To deploy the frontend development environment, you need to:
1, check out seahub to the master branch
cd /root/dev/source-code/seahub\n\ngit fetch origin master:master\ngit checkout master\n
2, add the following configuration to /root/dev/conf/seahub_settings.py
cd /root/dev/source-code/seahub/frontend\n\nnpm install\n
4, npm run dev
cd /root/dev/source-code/seahub/frontend\n\nnpm run dev\n
5, start seaf-server and seahub
"},{"location":"develop/translation/","title":"Translation","text":""},{"location":"develop/translation/#seahub-seafile-server-71-and-above","title":"Seahub (Seafile Server 7.1 and above)","text":""},{"location":"develop/translation/#translate-and-try-locally","title":"Translate and try locally","text":"
1. Locate the translation files in the seafile-server-latest/seahub directory:
For Seahub (except Markdown editor): /locale/<lang-code>/LC_MESSAGES/django.po\u00a0 and \u00a0/locale/<lang-code>/LC_MESSAGES/djangojs.po
For Markdown editor: /media/locales/<lang-code>/seafile-editor.json
For example, if you want to improve the Russian translation, find the corresponding strings to be edited in any of the following three files:
If there is no translation for your language, create a new folder matching your language code and copy-paste the contents of another language folder into your newly created one. (Don't copy from the 'en' folder, because the files therein do not contain the strings to be translated.)
2. Edit the files using a UTF-8 editor.
3. Save your changes.
4. (Only necessary when you created a new language code folder) Add a new entry for your language to the language block in the /seafile-server-latest/seahub/seahub/settings.py file and save it.
LANGUAGES = (\n ...\n ('ru', '\u0420\u0443\u0441\u0441\u043a\u0438\u0439'),\n ...\n)\n
5. (Only necessary when you edited either django.po or djangojs.po) Apply the changes made in django.po and djangojs.po by running the following two commands in /seafile-server-latest/seahub/locale/<lang-code>/LC_MESSAGES:
msgfmt -o django.mo django.po
msgfmt -o djangojs.mo djangojs.po
Note: msgfmt is included in the gettext package.
Additionally, run the following two commands in the seafile-server-latest directory:
6. Restart Seahub to load changes made in django.po and djangojs.po; reload the Markdown editor to check your modifications in the seafile-editor.json file.
"},{"location":"develop/translation/#submit-your-translation","title":"Submit your translation","text":"
Please submit translations via Transifex: https://www.transifex.com/projects/p/seahub/
Steps:
Create a free account on Transifex (https://www.transifex.com/).
Send a request to join the language translation.
After being accepted by the project maintainer, you can upload your file or translate online.
FileNotFoundError occurred when executing the command manage.py collectstatic.
FileNotFoundError: [Errno 2] No such file or directory: '/opt/seafile/seafile-server-latest/seahub/frontend/build'\n
Steps:
Modify STATICFILES_DIRS in /opt/seafile/seafile-server-latest/seahub/seahub/settings.py manually
STATICFILES_DIRS = (\n # Put strings here, like \"/home/html/static\" or \"C:/www/django/static\".\n # Always use forward slashes, even on Windows.\n # Don't forget to use absolute paths, not relative paths.\n '%s/static' % PROJECT_ROOT,\n# '%s/frontend/build' % PROJECT_ROOT,\n)\n
Restart Seahub
./seahub.sh restart\n
This issue has been fixed since version 11.0
"},{"location":"develop/web_api_v2.1/","title":"Web API","text":""},{"location":"develop/web_api_v2.1/#seafile-web-api","title":"Seafile Web API","text":"
The API document can be accessed in the following location:
$ git clone --depth=1 git@github.com:google/breakpad.git\n$ cd breakpad\n$ git clone https://github.com/google/googletest.git testing\n$ cd ..\n# Create the VS solution. This may throw the error \"module collections.abc has no attribute OrderedDict\"; if so, open msvs.py and replace 'collections.abc' with 'collections'.\n$ gyp --no-circular-check breakpad\\src\\client\\windows\\breakpad_client.gyp\n
Open breakpad_client.sln, set C++ Language Standard to C++17, and set C/C++ ---> Code Generation ---> Runtime Library to Multi-threaded DLL (/MD).
The source code of these projects can be downloaded at github.com/haiwen/libsearpc, github.com/haiwen/seafile, github.com/haiwen/seafile-client, and github.com/haiwen/seafile-shell-ext.
If you use a cluster to deploy Seafile, you can use distributed indexing to realize real-time indexing and improve indexing efficiency. The indexing process is as follows:
"},{"location":"extension/distributed_indexing/#install-redis-and-modify-configuration-files","title":"Install redis and modify configuration files","text":""},{"location":"extension/distributed_indexing/#1-install-redis-on-all-frontend-nodes","title":"1. Install redis on all frontend nodes","text":"
Tip
If you use a Redis cloud service, skip this step and modify the configuration files directly
UbuntuCentOS
$ apt install redis-server\n
$ yum install redis\n
"},{"location":"extension/distributed_indexing/#2-install-python-redis-third-party-package-on-all-frontend-nodes","title":"2. Install python redis third-party package on all frontend nodes","text":"
$ pip install redis\n
"},{"location":"extension/distributed_indexing/#3-modify-the-seafeventsconf-on-all-frontend-nodes","title":"3. Modify the seafevents.conf on all frontend nodes","text":"
Add the following config items
[EVENTS PUBLISH]\nmq_type=redis # must be redis\nenabled=true\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password, if not password, do not set this item\n
"},{"location":"extension/distributed_indexing/#4-modify-the-seafeventsconf-on-the-backend-node","title":"4. Modify the seafevents.conf on the backend node","text":"
Disable the scheduled indexing task, because the scheduled indexing task and the distributed indexing task conflict.
First, prepare an index-server master node and several index-server slave nodes; the number of slave nodes depends on your needs. Copy seafile.conf and seafevents.conf in the conf directory from the Seafile frontend nodes to /opt/seafile-data/seafile/conf on the index-server nodes. The master node and slave nodes need to read the configuration files to obtain the necessary information.
CLUSTER_MODE needs to be configured as master on the master node, and needs to be configured as worker on the slave nodes.
Next, create a configuration file index-master.conf in the conf directory of the master node, e.g.
[DEFAULT]\nmq_type=redis # must be redis\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password, if not password, do not set this item\n
Start master node.
docker compose up -d\n
Next, create a configuration file index-worker.conf in the conf directory of all slave nodes, e.g.
[DEFAULT]\nmq_type=redis # must be redis\nindex_workers=2 # number of threads to create/update indexes, you can increase this value according to your needs\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password, if not password, do not set this item\n
Start all slave nodes.
docker compose up -d\n
"},{"location":"extension/distributed_indexing/#some-commands-in-distributed-indexing","title":"Some commands in distributed indexing","text":"
Rebuild search index, first execute the command in the Seafile node:
cd /opt/seafile/seafile-server-latest/\n./pro/pro.py search --clear\n
Then execute the command in the index-server master node:
Files in the Seafile system are split into blocks, which means what is stored on your Seafile server are not complete files, but blocks. This design facilitates effective data deduplication.
However, administrators sometimes want to access the files directly on the server. You can use seaf-fuse to do this.
Seaf-fuse is an implementation of the FUSE virtual filesystem. In a word, it mounts all the Seafile files to a folder (which is called the '''mount point'''), so that you can access all the files managed by the Seafile server just as you would access a normal folder on your server.
Note
Encrypted folders can't be accessed by seaf-fuse.
Currently the implementation is '''read-only''', which means you can't modify the files through the mounted folder.
On Debian/CentOS systems, you need to be in the "fuse" group to have the permission to mount a FUSE folder.
"},{"location":"extension/fuse/#use-seaf-fuse-in-docker-based-deployment","title":"Use seaf-fuse in Docker based deployment","text":"
Assume we want to mount to /opt/seafile-fuse in host.
Seaf-fuse enables the block cache function by default to cache block objects, thereby reducing access to the backend storage, but this function occupies local disk space. Since Seafile Pro 10.0.0, you can disable the block cache by adding the following options:
"},{"location":"extension/fuse/#contents-of-the-mounted-folder","title":"Contents of the mounted folder","text":""},{"location":"extension/fuse/#the-top-level-folder","title":"The top level folder","text":"
Now you can list the content of /data/seafile-fuse.
$ ls -lhp /data/seafile-fuse\n\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 abc@abc.com/\ndrwxr-xr-x 2 root root 4.0K Jan 4 2015 foo@foo.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 plus@plus.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 sharp@sharp.com/\ndrwxr-xr-x 2 root root 4.0K Jan 3 2015 test@test.com/\n
The top level folder contains many subfolders, each of which corresponds to a user.
"},{"location":"extension/fuse/#the-folder-for-each-user","title":"The folder for each user","text":"
$ ls -lhp /data/seafile-fuse/abc@abc.com\n\ndrwxr-xr-x 2 root root 924 Jan 1 1970 5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\ndrwxr-xr-x 2 root root 1.6K Jan 1 1970 a09ab9fc-7bd0-49f1-929d-6abeb8491397_My Notes/\n
As the above list shows, under the folder of a user there are subfolders, each of which represents a library of that user and has a name of this format: '''{library_id}_{library_name}'''.
"},{"location":"extension/fuse/#the-folder-for-a-library","title":"The folder for a library","text":"
$ ls -lhp /data/seafile-fuse/abc@abc.com/5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\n\n-rw-r--r-- 1 root root 501K Jan 1 2015 image.png\n-rw-r--r-- 1 root root 501K Jan 1 2015 sample.jpg\n
"},{"location":"extension/fuse/#if-you-get-a-permission-denied-error","title":"If you get a \"Permission denied\" error","text":"
If you get an error message saying \"Permission denied\" when running ./seaf-fuse.sh start, most likely you are not in the \"fuse group\". You should:
Add yourself to the fuse group
sudo usermod -a -G fuse <your-user-name>\n
Log out of your shell and log in again
Now try ./seaf-fuse.sh start <path> again.
"},{"location":"extension/libreoffice_online/","title":"Integrate Seafile with Collabora Online (LibreOffice Online)","text":""},{"location":"extension/libreoffice_online/#setup-collaboraonline","title":"Setup CollaboraOnline","text":"
Deployment Tips
The steps in this guide only cover installing Collabora as another container on the same Docker host that your Seafile Docker container runs on. Please make sure your host has sufficient cores and RAM.
If you want to install it on another host, please refer to the Collabora documentation for instructions. Then you should follow the steps here to configure seahub_settings.py to enable online office.
Note
To integrate LibreOffice with Seafile, you have to enable HTTPS in your Seafile server:
Add following config option to seahub_settings.py:
OFFICE_SERVER_TYPE = 'CollaboraOffice'\nENABLE_OFFICE_WEB_APP = True\nOFFICE_WEB_APP_BASE_URL = 'http://collabora:9980/hosting/discovery'\n\n# Expiration of the WOPI access token\n# The WOPI access token is a string used by Seafile to determine the file's\n# identity and permissions when LibreOffice Online views it online.\n# For security reasons, this token should expire after a set time period.\nWOPI_ACCESS_TOKEN_EXPIRATION = 30 * 60 # seconds\n\n# List of file formats that you want to view through LibreOffice Online\n# You can change this value according to your preferences,\n# and of course you should make sure your LibreOffice Online supports previewing\n# the files with the specified extensions.\nOFFICE_WEB_APP_FILE_EXTENSION = ('odp', 'ods', 'odt', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt', 'pptm', 'pptx', 'doc', 'docm', 'docx')\n\n# Enable editing files through LibreOffice Online\nENABLE_OFFICE_WEB_APP_EDIT = True\n\n# Types of files that should be editable through LibreOffice Online\nOFFICE_WEB_APP_EDIT_FILE_EXTENSION = ('odp', 'ods', 'odt', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt', 'pptm', 'pptx', 'doc', 'docm', 'docx')\n
Then restart Seafile.
Click an office file in Seafile web interface, you will see the online preview rendered by CollaboraOnline. Here is an example:
The CollaboraOnline container outputs its logs to stdout; you can use the following command to access them
docker logs seafile-collabora\n
If you would like to use a file to save the logs (i.e., a .log file), you can modify .env with the following statement, and uncomment the corresponding lines in collabora.yml
# .env\nCOLLABORA_ENABLE_FILE_LOGGING=True\nCOLLABORA_PATH=/opt/collabora # path of the collabora logs\n
# collabora.yml\n# remove the following notes\n...\nservices:\n collabora:\n ...\n volumes:\n - \"${COLLABORA_PATH:-/opt/collabora}/logs:/opt/cool/logs/\" # chmod 777 needed\n ...\n...\n
Create the logs directory, and restart Seafile server
mkdir -p /opt/collabora\nchmod 777 /opt/collabora\ndocker compose down\ndocker compose up -d\n
"},{"location":"extension/libreoffice_online/#collaboraonline-server-on-a-separate-host","title":"CollaboraOnline server on a separate host","text":"
If your CollaboraOnline server is on a separate host, you just need to modify seahub_settings.py similarly to deploying on the same host. The only difference is that you have to change the field OFFICE_WEB_APP_BASE_URL to point to your CollaboraOnline host (e.g., https://collabora-online.seafile.com/hosting/discovery).
The Metadata server requires Redis as the cache server (Redis is the default cache server in Seafile 13.0). So you must deploy Redis for Seafile and modify seafile.conf, seahub_settings.py and seafevents.conf to enable it before deploying the Metadata server.
Warning
Please make sure your Seafile service has been deployed before deploying the Metadata server. This is because the Metadata server needs to read Seafile's configuration file seafile.conf. If you deploy the Metadata server before or at the same time as Seafile, it may not be able to detect seafile.conf and will fail to start.
The Metadata server reads all its configuration from the environment and does not need a dedicated configuration file. You don't need to add additional variables to your .env (except for standalone deployment) to get the Metadata server started, because it reads the exact same configuration as the Seafile server (including JWT_PRIVATE_KEY) and keeps the repository metadata locally (default /opt/seafile-data/seafile/md-data). You still need to modify the COMPOSE_FILE list in .env and add md-server.yml to enable the Metadata server:
COMPOSE_FILE='...,md-server.yml'\n
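The COMPOSE_FILE change can be scripted as follows (a sketch assuming GNU sed, shown here against a scratch copy of .env; the existing list contents are illustrative):

```shell
# Scratch copy of .env; the yml files already listed are examples only
ENV_FILE=./env_demo
echo "COMPOSE_FILE='seafile-server.yml,caddy.yml'" > "$ENV_FILE"

# Append md-server.yml inside the quoted list if it is not already there
grep -q 'md-server.yml' "$ENV_FILE" || \
  sed -i "s/^COMPOSE_FILE='\(.*\)'/COMPOSE_FILE='\1,md-server.yml'/" "$ENV_FILE"

cat "$ENV_FILE"   # COMPOSE_FILE='seafile-server.yml,caddy.yml,md-server.yml'
```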
To facilitate your deployment, we provide two different configuration examples for your reference:
"},{"location":"extension/metadata-server/#example-env-for-seafile-data-is-stored-locally","title":"Example .env for Seafile data is stored locally","text":"
In this case you don't need to add any additional configuration to your .env. You can also specify image version, maximum local cache size, etc.
"},{"location":"extension/metadata-server/#example-env-for-seafile-data-is-stored-in-the-storage-backend-eg-s3","title":"Example .env for Seafile data is stored in the storage backend (e.g., S3)","text":"
First, you need to create a bucket for metadata with your S3 storage provider. Then add or modify the following information in .env:
Data for the Seafile server must be accessible to the Metadata server
In order to obtain metadata correctly, you must ensure that the data of your Seafile server can be accessed. When the Metadata server is deployed together with the Seafile server, it automatically picks up the Seafile server's configuration, so you don't need to worry about this. But if your Metadata server is deployed standalone (usually in a cluster environment), you need to ensure that the storage-related settings in the .env used by the Metadata server are consistent with the .env used by the Seafile server (e.g., SEAF_SERVER_STORAGE_TYPE), and that the Metadata server can access the Seafile server's configuration files (e.g., seafile.conf), so that it can read data from the Seafile server correctly.
"},{"location":"extension/metadata-server/#list-of-environment-variables-for-metadata-server","title":"List of environment variables for Metadata server","text":"
The following table lists all environment variables related to the Metadata server:
Variables Description Required JWT_PRIVATE_KEY The JWT key used to connect with Seafile server. Required MD_MAX_CACHE_SIZE The maximum cache size. Optional, default 1GB REDIS_HOST Your Redis service host. Optional, default redis REDIS_PORT Your Redis service port. Optional, default 6379 REDIS_PASSWORD Your Redis access password. Optional MD_STORAGE_TYPE Where the metadata is stored. Available options are disk (local storage) and s3. Optional, default disk S3_MD_BUCKET Your S3 bucket name for storing metadata. Required when using S3 (MD_STORAGE_TYPE=s3)
In addition, there are some environment variables related to S3 authorization; please refer to the entries with the S3_ prefix in this table (the bucket names for Seafile are also needed).
Metadata server supports Redis only
To enable the metadata feature, you have to use Redis for caching: CACHE_PROVIDER must be set to redis in your .env
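Before starting, a quick pre-flight check of .env can catch a wrong cache provider early (a sketch; the file name and contents are illustrative):

```shell
# Scratch .env for demonstration; in practice point ENV_FILE at your real .env
ENV_FILE=./metadata_env_demo
echo "CACHE_PROVIDER=redis" > "$ENV_FILE"

# Fail fast if Redis is not the configured cache provider
if grep -q '^CACHE_PROVIDER=redis' "$ENV_FILE"; then
    echo "cache provider OK"
else
    echo "CACHE_PROVIDER must be set to redis for the metadata server" >&2
    exit 1
fi
```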
You can use the following command to start the metadata server (the Seafile service also has to be restarted):
docker compose down\ndocker compose up -d\n
"},{"location":"extension/metadata-server/#verify-metadata-server-and-enable-it-in-the-seafile","title":"Verify Metadata server and enable it in the Seafile","text":"
Check the container log for seafile-md-server; you will see the following message if it is running fine:
When you deploy the Seafile server and the Metadata server on the same machine, the Metadata server uses the same persistence directory (e.g. /opt/seafile-data) as the Seafile server. The Metadata server uses the following directories and files:
/opt/seafile-data/seafile/md-data: Metadata server data and cache
/opt/seafile-data/seafile/logs/seaf-md-server: The logs directory of Metadata server, consisting of a running log and an access log.
"},{"location":"extension/notification-server/","title":"Notification Server Overview","text":"
Currently, the status updates of files and libraries on the client and web interface are based on polling the server. The latest status cannot be reflected in real time on the client due to polling delays. The client needs to periodically refresh library modifications, file locking, subdirectory permissions and other information, which causes additional performance overhead on the server.
When a directory is opened on the web interface, the lock status of the file cannot be updated in real time, and the page needs to be refreshed.
The notification server uses the WebSocket protocol and maintains a two-way communication channel with the client or the web interface. When the above changes occur, seaf-server notifies the notification server of the changes, and the notification server then notifies the client or the web interface in real time. This not only improves real-time responsiveness, but also reduces the performance overhead on the server.
NOTIFICATION_SERVER_URL=<your notification server URL>\nINNER_NOTIFICATION_SERVER_URL=$NOTIFICATION_SERVER_URL\n
Difference between NOTIFICATION_SERVER_URL and INNER_NOTIFICATION_SERVER_URL
NOTIFICATION_SERVER_URL: used for the connection between the client (i.e., the user's browser) and the notification server
INNER_NOTIFICATION_SERVER_URL: used for the connection between the Seafile server and the notification server
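As an illustration, a typical pair of values might look like this (the hostname and the internal service name notification-server are assumptions; adjust them to your deployment and proxy path):

```env
# .env -- illustrative values only
NOTIFICATION_SERVER_URL=https://seafile.example.com/notification
INNER_NOTIFICATION_SERVER_URL=http://notification-server:8083
```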
Finally, you can run the notification server with the following command:
docker compose down\ndocker compose up -d\n
"},{"location":"extension/notification-server/#checking-notification-server-status","title":"Checking notification server status","text":"
When the notification server is working, you can access http://127.0.0.1:8083/ping from your browser, which will answer {\"ret\": \"pong\"}. If you have a proxy configured, you can access https://seafile.example.com/notification/ping from your browser instead.
If the client works with notification server, there should be a log message in seafile.log or seadrive.log.
Notification server is enabled on the remote server xxxx\n
"},{"location":"extension/notification-server/#notification-server-in-seafile-cluster","title":"Notification Server in Seafile cluster","text":"
There are no additional features for the notification server in the Pro Edition. It works the same as in the Community Edition.
If you enable clustering, you need to deploy the notification server on one of the servers or on a separate server. The load balancer should forward WebSocket requests to this node.
Download .env and notification-server.yml to notification server directory:
Then modify the .env file according to your environment. The following fields need to be modified:
variable description SEAFILE_MYSQL_DB_HOST Seafile MySQL host SEAFILE_MYSQL_DB_USER Seafile MySQL user, default is seafile SEAFILE_MYSQL_DB_PASSWORD Seafile MySQL password TIME_ZONE Time zone JWT_PRIVATE_KEY JWT key, the same as the config in Seafile .env file SEAFILE_SERVER_HOSTNAME Seafile host name SEAFILE_SERVER_PROTOCOL http or https
You can run the notification server with the following command:
docker compose up -d\n
And you need to add the following configuration to seafile.conf and restart the Seafile server:
[notification]\nenabled = true\n# the ip of notification server.\nhost = 192.168.0.83\n# the port of notification server\nport = 8083\n
You need to configure load balancer according to the following forwarding rules:
Forward /notification/ping requests to notification server via http protocol.
Forward websockets requests with URL prefix /notification to notification server.
Here is a configuration that uses HAProxy to support the notification server. The HAProxy version needs to be >= 2.0. You should use similar configurations for other load balancers.
#/etc/haproxy/haproxy.cfg\n\n# Other existing haproxy configurations\n......\n\nfrontend seafile\n bind 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n acl notif_ping_request url_sub -i /notification/ping\n acl ws_requests url -i /notification\n acl hdr_connection_upgrade hdr(Connection) -i upgrade\n acl hdr_upgrade_websocket hdr(Upgrade) -i websocket\n use_backend ws_backend if hdr_connection_upgrade hdr_upgrade_websocket\n use_backend notif_ping_backend if notif_ping_request\n use_backend ws_backend if ws_requests\n default_backend backup_nodes\n\nbackend backup_nodes\n cookie SERVERID insert indirect nocache\n server seafileserver01 192.168.0.137:80\n\nbackend notif_ping_backend\n option forwardfor\n server ws 192.168.0.137:8083\n\nbackend ws_backend\n option forwardfor # This sets X-Forwarded-For\n server ws 192.168.0.137:8083\n
In Seafile Professional Server Version 4.4.0 (or above), you can use Microsoft Office Online Server (formerly named Office Web Apps) to preview documents online. Office Online Server provides the best preview for all Office format files. It also supports collaborative editing of Office files directly in the web browser. For organizations with a Microsoft Office Volume License, it's free to use Office Online Server. For more information about Office Online Server and how to deploy it, please refer to https://technet.microsoft.com/en-us/library/jj219455(v=office.16).aspx.
Seafile only supports Office Online Server 2016 and above
To use Office Online Server for preview, please add the following config options to seahub_settings.py.
# Enable Office Online Server\nENABLE_OFFICE_WEB_APP = True\n\n# Url of Office Online Server's discovery page\n# The discovery page tells Seafile how to interact with Office Online Server when view file online\n# You should change `http://example.office-web-app.com` to your actual Office Online Server server address\nOFFICE_WEB_APP_BASE_URL = 'http://example.office-web-app.com/hosting/discovery'\n\n# Expiration of WOPI access token\n# WOPI access token is a string used by Seafile to determine the file's\n# identity and permissions when use Office Online Server view it online\n# And for security reason, this token should expire after a set time period\nWOPI_ACCESS_TOKEN_EXPIRATION = 60 * 60 * 24 # seconds\n\n# List of file formats that you want to view through Office Online Server\n# You can change this value according to your preferences\n# And of course you should make sure your Office Online Server supports to preview\n# the files with the specified extensions\nOFFICE_WEB_APP_FILE_EXTENSION = ('ods', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt',\n 'pptm', 'pptx', 'doc', 'docm', 'docx')\n\n# Enable edit files through Office Online Server\nENABLE_OFFICE_WEB_APP_EDIT = True\n\n# types of files should be editable through Office Online Server\n# Note, Office Online Server 2016 is needed for editing docx\nOFFICE_WEB_APP_EDIT_FILE_EXTENSION = ('xlsx', 'pptx', 'docx')\n\n\n# HTTPS authentication related (optional)\n\n# Server certificates\n# Path to a CA_BUNDLE file or directory with certificates of trusted CAs\n# NOTE: If set this setting to a directory, the directory must have been processed using the c_rehash utility supplied with OpenSSL.\nOFFICE_WEB_APP_SERVER_CA = '/path/to/certfile'\n\n\n# Client certificates\n# You can specify a single file (containing the private key and the certificate) to use as client side certificate\nOFFICE_WEB_APP_CLIENT_PEM = 'path/to/client.pem'\n\n# or you can specify these two file path to use as client side 
certificate\nOFFICE_WEB_APP_CLIENT_CERT = 'path/to/client.cert'\nOFFICE_WEB_APP_CLIENT_KEY = 'path/to/client.key'\n
Then restart
./seafile.sh restart\n./seahub.sh restart\n
After you click the document you specified in seahub_settings.py, you will see the new preview page.
Understanding how the web app integration works will help you debug problems. When a user visits a file page:
(seahub->browser) Seahub will generate a page containing an iframe and send it to the browser
(browser->office online server) With the iframe, the browser will try to load the file preview page from the office online server
(office online server->seahub) office online server receives the request and sends a request to Seahub to get the file content
(office online server->browser) office online server sends the file preview page to the browser.
Please check the Nginx log for Seahub (for step 3) and Office Online Server to see which step is wrong.
Warning
You should make sure you have configured at least a few GB of paging files in your Windows system. Otherwise the IIS worker processes may die randomly when handling Office Online requests.
Seafile supports OnlyOffice to view/edit office files online. In order to use OnlyOffice, you must first deploy an OnlyOffice server.
Deployment Tips
You can deploy OnlyOffice on the same machine as Seafile (only supported when deploying with Docker, and with sufficient cores and RAM) using the onlyoffice.yml provided by Seafile according to this document, or you can deploy it on a different machine according to the OnlyOffice official document.
"},{"location":"extension/only_office/#deployment-of-onlyoffice","title":"Deployment of OnlyOffice","text":"
Insert onlyoffice.yml into the COMPOSE_FILE list (i.e., COMPOSE_FILE='...,onlyoffice.yml'), and add the following OnlyOffice configurations to the .env file.
# OnlyOffice image\nONLYOFFICE_IMAGE=onlyoffice/documentserver:8.1.0.1\n\n# Persistent storage directory of OnlyOffice\nONLYOFFICE_VOLUME=/opt/onlyoffice\n\n# OnlyOffice document server port\nONLYOFFICE_PORT=6233\n\n# jwt secret, generated by `pwgen -s 40 1` \nONLYOFFICE_JWT_SECRET=<your jwt secret>\n
Note
From Seafile 12.0, OnlyOffice's JWT verification is forcibly enabled. Secure communication between Seafile and OnlyOffice is guaranteed by a shared secret. You can generate the JWT secret with the following command:
pwgen -s 40 1\n
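If pwgen is not installed, openssl (assumed to be available on your system) can generate a comparable 40-character secret, since base64-encoding 30 random bytes yields exactly 40 characters:

```shell
# Generate a 40-character random secret with openssl instead of pwgen
SECRET=$(openssl rand -base64 30)
echo "length: ${#SECRET}"   # length: 40
```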
Also modify seahub_settings.py
ENABLE_ONLYOFFICE = True\nONLYOFFICE_APIJS_URL = 'https://seafile.example.com:6233/web-apps/apps/api/documents/api.js'\nONLYOFFICE_JWT_SECRET = '<your jwt secret>'\n\n# NOTE\n# The following two configurations, do NOT need to configure them explicitly.\n# The default values are as follows.\n# If you have custom needs, you can also configure them, which will override the default values.\nONLYOFFICE_FILE_EXTENSION = ('doc', 'docx', 'ppt', 'pptx', 'xls', 'xlsx', 'odt', 'fodt', 'odp', 'fodp', 'ods', 'fods', 'ppsx', 'pps', 'csv')\nONLYOFFICE_EDIT_FILE_EXTENSION = ('docx', 'pptx', 'xlsx', 'csv')\n
Tip
By default, OnlyOffice uses port 6233 for communication between Seafile and the Document Server. You can modify the bound port by setting ONLYOFFICE_PORT; the port in ONLYOFFICE_APIJS_URL in seahub_settings.py should be modified accordingly.
"},{"location":"extension/only_office/#advanced-custom-settings-of-onlyoffice","title":"Advanced: Custom settings of OnlyOffice","text":"
The following configuration options are only for OnlyOffice experts. You can create and mount a custom configuration file called local-production-linux.json to force some settings.
nano local-production-linux.json\n
For example, you can configure OnlyOffice to automatically save by copying the following code block in this file:
For more information you can check the official documentation: https://api.onlyoffice.com/editors/signature/ and https://github.com/ONLYOFFICE/Docker-DocumentServer#available-configuration-parameters
"},{"location":"extension/only_office/#restart-seafile-docker-instance-and-test-that-onlyoffice-is-running","title":"Restart Seafile-docker instance and test that OnlyOffice is running","text":"
docker compose down\ndocker compose up -d\n
Success
After the installation process is finished, visit this page to make sure you have deployed OnlyOffice successfully: http{s}://{your Seafile server's domain or IP}:6233/welcome. You will see the Document Server is running message on this page.
First, run docker logs -f seafile-onlyoffice, then open an office file. After the \"Download failed.\" error appears on the page, check the logs for the following error:
==> /var/log/onlyoffice/documentserver/converter/out.log <==\n...\nError: DNS lookup {local IP} (family:undefined, host:undefined) is not allowed. Because, It is a private IP address.\n...\n
If you see this error message and you haven't enabled JWT while using a local network, it is likely an error triggered proactively by the OnlyOffice server for enhanced security. (https://github.com/ONLYOFFICE/DocumentServer/issues/2268#issuecomment-1600787905)
So, as mentioned in the post, we highly recommend enabling JWT in your integration to fix this problem.
"},{"location":"extension/only_office/#the-document-security-token-is-not-correctly-formed","title":"The document security token is not correctly formed","text":"
Starting from OnlyOffice Docker-DocumentServer version 7.2, JWT is enabled by default on OnlyOffice server.
So, for security reasons, please Configure OnlyOffice to use JWT Secret.
"},{"location":"extension/only_office/#onlyoffice-on-a-separate-host-and-url","title":"OnlyOffice on a separate host and URL","text":"
In general, you only need to specify the values of the following fields in seahub_settings.py and then restart the service.
For deployments using the onlyoffice.yml file in this document, SSL is primarily handled by Caddy. If the OnlyOffice document server and the Seafile server are not on the same machine, please refer to the official document to configure SSL for OnlyOffice.
"},{"location":"extension/seafile_ai/","title":"Seafile AI extension","text":"
From Seafile 13 Pro, users can enable Seafile AI to support the following features:
File tags, file and image summaries, text translation, sdoc writing assistance
Given an image, generate its corresponding tags (including objects, weather, color, etc.)
Detect faces in images and encode them
Detect text in images (OCR)
"},{"location":"extension/seafile_ai/#deploy-seafile-ai-basic-service","title":"Deploy Seafile AI basic service","text":"
The Seafile AI basic service uses API calls to an external large language model service (e.g., GPT-4o-mini) to implement file tagging, file and image summaries, text translation, and sdoc writing assistance.
Here is the workflow when a user opens a sdoc file in the browser:
When a user opens a sdoc file in the browser, a file loading request is sent to Caddy, and Caddy proxies the request to the SeaDoc server (see Seafile instance architecture for details).
The SeaDoc server sends the file's content back if it is already cached; otherwise, the SeaDoc server sends a request to the Seafile server.
The Seafile server loads the content, sends it to the SeaDoc server, and writes it to the cache at the same time.
After the SeaDoc server receives the content, it sends it to the browser.
This extension is already installed by default when deploying Seafile (single-node mode) by Docker.
If you would like to remove it, you can undo the steps in this section (i.e., remove seadoc.yml from the COMPOSE_FILE field and set ENABLE_SEADOC to false)
The easiest way to deploy SeaDoc is together with the Seafile server on the same host, using the same Docker network. If you need to deploy SeaDoc standalone, you can follow the next section.
Then modify the .env file according to your environment. The following fields need to be modified:
variable description SEADOC_VOLUME The volume directory of SeaDoc data SEAFILE_MYSQL_DB_HOST Seafile MySQL host SEAFILE_MYSQL_DB_USER Seafile MySQL user, default is seafile SEAFILE_MYSQL_DB_PASSWORD Seafile MySQL password TIME_ZONE Time zone JWT_PRIVATE_KEY JWT key, the same as the config in Seafile .env file SEAFILE_SERVER_HOSTNAME Seafile host name SEAFILE_SERVER_PROTOCOL http or https
(Optional) By default, SeaDoc server will bind to port 80 on the host machine. If the port is already taken by another service, you have to change the listening port of SeaDoc:
Modify seadoc.yml
services:\n seadoc:\n ...\n ports:\n - \"<your SeaDoc server port>:80\"\n...\n
Add a reverse proxy for the SeaDoc server. In a cluster environment, this means you need to add reverse proxy rules at the load balancer. Here, we use Apache as an example (please replace 127.0.0.1:80 with the host:port of your SeaDoc server)
<Location /sdoc-server/>\n ProxyPass \"http://127.0.0.1:80/\"\n ProxyPassReverse \"http://127.0.0.1:80/\"\n </Location>\n\n <Location /socket.io/>\n # Since Apache HTTP Server 2.4.47\n ProxyPass \"http://127.0.0.1:80/socket.io/\" upgrade=websocket\n </Location>\n
Start the SeaDoc server with the following command
docker compose up -d\n
Modify Seafile server's configuration and start SeaDoc server
Warning
After using a reverse proxy, your SeaDoc service will be located at the /sdoc-server path of your reverse proxy (i.e. xxx.example.com/sdoc-server). For example:
Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container, in our case we keep various log files outside. This allows you to rebuild containers easily without losing important information.
/opt/seadoc-data/logs: This is the directory for SeaDoc logs.
"},{"location":"extension/setup_seadoc/#database-used-by-seadoc","title":"Database used by SeaDoc","text":"
SeaDoc uses one database table, seahub_db.sdoc_operation_log, to store operation logs. The table is cleaned automatically.
"},{"location":"extension/setup_seadoc/#common-issues-when-settings-up-seadoc","title":"Common issues when setting up SeaDoc","text":""},{"location":"extension/setup_seadoc/#server-is-disconnected-reconnecting-error-when-open-a-sdoc","title":"\"Server is disconnected. Reconnecting\u2026\" error when opening a sdoc","text":"
This is because websocket for sdoc-server has not been properly configured. If you use the default Caddy proxy, it should be setup correctly.
But if you use your own proxy, you need to make sure it properly proxies your-sdoc-server-domain/socket.io to sdoc-server-docker-image-address/socket.io
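If you proxy SeaDoc with Nginx, a sketch of a WebSocket-capable location block might look like the following (the upstream address 127.0.0.1:80 is an assumption; use the address of your sdoc-server):

```nginx
location /socket.io {
    proxy_pass http://127.0.0.1:80;
    # WebSocket upgrade headers
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```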
"},{"location":"extension/setup_seadoc/#load-doc-content-error-when-open-a-sdoc","title":"\"Load doc content error\" when opening a sdoc","text":"
This is because the browser cannot correctly load content from sdoc-server. Make sure
SEADOC_SERVER_URL is correctly set in .env
Make sure sdoc-server can be accessed via the browser.
You can open developer console of the browser to further debug the issue.
"},{"location":"extension/thumbnail-server/","title":"Thumbnail Server Overview","text":"
Since Seafile 13.0, a new component, the thumbnail server, has been added. The thumbnail server can create thumbnails for images, videos, PDFs and other file types. It uses a task-queue-based architecture, so it handles heavy workloads better than generating thumbnails inside the Seahub component.
Use this feature by forwarding thumbnail requests directly to the thumbnail server via Caddy or a reverse proxy.
"},{"location":"extension/thumbnail-server/#how-to-configure-and-run","title":"How to configure and run","text":"
First download thumbnail-server.yml to Seafile directory:
Add following configuration in seahub_settings.py to enable thumbnail for videos:
# video thumbnails (disabled by default)\nENABLE_VIDEO_THUMBNAIL = True\n
Finally, you can run the thumbnail server with the following command:
docker compose down\ndocker compose up -d\n
"},{"location":"extension/thumbnail-server/#thumbnail-server-in-seafile-cluster","title":"Thumbnail Server in Seafile cluster","text":"
There are no additional features for the thumbnail server in the Pro Edition. It works the same as in the Community Edition.
If you enable clustering, you need to deploy the thumbnail server on one of the servers or on a separate server. The load balancer should forward thumbnail requests to this node.
Download .env and thumbnail-server.yml to thumbnail server directory:
Then modify the .env file according to your environment. The following fields need to be modified:
variable description SEAFILE_VOLUME The volume directory of thumbnail server data SEAFILE_MYSQL_DB_HOST Seafile MySQL host SEAFILE_MYSQL_DB_USER Seafile MySQL user, default is seafile SEAFILE_MYSQL_DB_PASSWORD Seafile MySQL password TIME_ZONE Time zone JWT_PRIVATE_KEY JWT key, the same as the config in Seafile .env file INNER_SEAHUB_SERVICE_URL Inner Seafile url SEAF_SERVER_STORAGE_TYPE What kind of storage the Seafile data uses. Available options are disk (i.e., local disk), s3 and multiple (see the details of multiple storage backends) S3_COMMIT_BUCKET S3 storage backend commit objects bucket S3_FS_BUCKET S3 storage backend fs objects bucket S3_BLOCK_BUCKET S3 storage backend block objects bucket S3_KEY_ID S3 storage backend key ID S3_SECRET_KEY S3 storage backend secret key S3_AWS_REGION Region of your buckets S3_HOST Host of your buckets S3_USE_HTTPS Use HTTPS connections to S3 if enabled S3_USE_V4_SIGNATURE Use the v4 protocol of S3 if enabled S3_PATH_STYLE_REQUEST This option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is in virtual host style, such as https://bucketname.s3.amazonaws.com/object. But this style relies on advanced DNS server setup. So most self-hosted storage systems only implement the path style format. S3_SSE_C_KEY A string of 32 characters can be generated by openssl rand -base64 24. It can be any 32-character long random string. It's required to use V4 authentication protocol and https if you enable SSE-C.
Then you can run thumbnail server with the following command:
docker compose up -d\n
You need to configure load balancer according to the following forwarding rules:
Forward /thumbnail requests to thumbnail server via http protocol.
Here is a configuration that uses HAProxy to support the thumbnail server. The HAProxy version needs to be >= 2.0. You should use similar configurations for other load balancers.
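A sketch of such an HAProxy addition, modeled on the notification server example above (the thumbnail backend address 192.168.0.137:8088 is an assumption; use your thumbnail server's actual host and port):

```haproxy
#/etc/haproxy/haproxy.cfg

frontend seafile
    bind 0.0.0.0:80
    mode http
    option forwardfor
    # route thumbnail requests to the thumbnail server
    acl thumbnail_requests path_beg -i /thumbnail
    use_backend thumbnail_backend if thumbnail_requests
    default_backend backup_nodes

backend backup_nodes
    cookie SERVERID insert indirect nocache
    server seafileserver01 192.168.0.137:80

backend thumbnail_backend
    option forwardfor
    server thumb 192.168.0.137:8088
```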
The thumbnail server needs to access Seafile storage.
If you use local storage, you need to mount the /opt/seafile-data directory of the Seafile node to the thumbnail node, and set SEAFILE_VOLUME to the mounted directory correctly.
If you use a single-backend S3 storage, please correctly set the relevant environment variables in .env.
If you are using multiple storage backends, you have to copy the seafile.conf of the Seafile node to the /opt/seafile-data/seafile/conf directory of the thumbnail node, and set SEAF_SERVER_STORAGE_TYPE=multiple in .env.
"},{"location":"extension/thumbnail-server/#thumbnail-server-directory-structure","title":"Thumbnail server directory structure","text":"
/opt/seafile-data
Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container, in our case we keep various log files outside. This allows you to rebuild containers easily without losing important information.
/opt/seafile-data/conf: This is the directory for config files.
/opt/seafile-data/logs: This is the directory for logs.
/opt/seafile-data/seafile-data: This is the directory for seafile storage (if you use local storage).
/opt/seafile-data/seahub-data/thumbnail: This is the directory for thumbnail files.
Seafile can scan uploaded files for malicious content in the background. When configured to run periodically, the scan process scans all existing libraries on the server. In each scan, the process only scans files newly uploaded or updated since the last scan. For each file, the process executes a user-specified virus scan command to check whether the file contains a virus. Most anti-virus programs provide a command line utility for Linux.
To enable this feature, add the following options to seafile.conf:
[virus_scan]\nscan_command = (command for checking virus)\nvirus_code = (command exit codes when file is virus)\nnonvirus_code = (command exit codes when file is not virus)\nscan_interval = (scanning interval, in unit of minutes, default to 60 minutes)\n
More details about the options:
On Linux/Unix, most virus scan commands return specific exit codes for virus and non-virus results. You should consult the manual of your anti-virus program for more information.
An example for ClamAV (http://www.clamav.net/) is provided below:
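A minimal sketch of such a configuration, based on clamscan's documented exit codes (0 = no virus found, 1 = virus(es) found):

```ini
[virus_scan]
scan_command = clamscan
virus_code = 1
nonvirus_code = 0
scan_interval = 60
```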
To test whether your configuration works, you can trigger a scan manually:
cd seafile-server-latest\n./pro/pro.py virus_scan\n
If a virus was detected, you can see scan records and delete infected files on the Virus Scan page in the admin area.
Note
If you directly use the ClamAV command line tool to scan files, scanning takes a lot of time. If you want to speed it up, we recommend running ClamAV as a daemon. Please refer to Run ClamAV as a Daemon
When running ClamAV as a daemon, the scan_command in seafile.conf should be clamdscan. An example for clamav-daemon is provided below:
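A minimal sketch for clamav-daemon, based on clamdscan's documented exit codes (0 = no virus found, 1 = virus(es) found):

```ini
[virus_scan]
scan_command = clamdscan
virus_code = 1
nonvirus_code = 0
scan_interval = 60
```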
Since Pro edition 6.0.0, a few more options are added to provide finer grained control for virus scan.
[virus_scan]\n......\nscan_size_limit = (size limit for files to be scanned) # The unit is MB.\nscan_skip_ext = (a comma (',') separated list of file extensions to be ignored)\nthreads = (number of concurrent threads for scan, one thread for one file, default to 4)\n
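For illustration, the finer-grained options with concrete values might look like this (the values shown are examples, not recommendations):

```ini
[virus_scan]
scan_command = clamdscan
virus_code = 1
nonvirus_code = 0
scan_size_limit = 100       # skip files larger than 100 MB
scan_skip_ext = .mp4,.mkv   # skip common video containers
threads = 2
```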
The file extensions should start with '.'. The extensions are case-insensitive. By default, files with the following extensions are ignored:
"},{"location":"extension/virus_scan/#scanning-files-on-upload","title":"Scanning Files on Upload","text":"
You may also configure Seafile to scan files for viruses when they are uploaded. This only works for files uploaded via the web interface or web APIs. Files uploaded with the syncing or SeaDrive clients cannot be scanned on upload due to performance considerations.
You may scan files uploaded from shared upload links by adding the option below to seahub_settings.py:
ENABLE_UPLOAD_LINK_VIRUS_CHECK = True\n
Since Pro Edition 11.0.7, you may scan all uploaded files via web APIs by adding the option below to seafile.conf:
[fileserver]\ncheck_virus_on_web_upload = true\n
"},{"location":"extension/virus_scan_with_clamav/","title":"Deploy ClamAV with Seafile","text":""},{"location":"extension/virus_scan_with_clamav/#deploy-with-docker","title":"Deploy with Docker","text":"
If your Seafile server is deployed using Docker, we recommend that you also use Docker to deploy ClamAV by following the steps below; otherwise you can deploy it from the ClamAV binary package.
"},{"location":"extension/virus_scan_with_clamav/#download-clamavyml-and-insert-to-docker-compose-lists-in-env","title":"Download clamav.yml and insert to Docker-compose lists in .env","text":"
Wait a few minutes until ClamAV finishes initializing.
Now ClamAV can be used.
"},{"location":"extension/virus_scan_with_clamav/#use-clamav-in-binary-based-deployment","title":"Use ClamAV in binary based deployment","text":""},{"location":"extension/virus_scan_with_clamav/#install-clamav-daemon-clamav-freshclam","title":"Install clamav-daemon & clamav-freshclam","text":"
apt-get install clamav-daemon clamav-freshclam\n
You should run clamd with root permission so it can scan any file. Edit the config /etc/clamav/clamd.conf and change the following lines:
LocalSocketGroup root\nUser root\n
"},{"location":"extension/virus_scan_with_clamav/#start-the-clamav-daemon","title":"Start the clamav-daemon","text":"
"},{"location":"extension/virus_scan_with_kav4fs/","title":"Virus Scan with kav4fs","text":""},{"location":"extension/virus_scan_with_kav4fs/#prerequisite","title":"Prerequisite","text":"
Assume you have installed Kaspersky Anti-Virus for Linux File Server on the Seafile Server machine.
If the user that runs Seafile Server is not root, it should have sudo privileges so that it can run kav4fs-control without entering a password. Add the following content to /etc/sudoers:
<user of running seafile server> ALL=(ALL:ALL) ALL\n<user of running seafile server> ALL=NOPASSWD: /opt/kaspersky/kav4fs/bin/kav4fs-control\n
Because the return code of kav4fs cannot reflect the file scan result, we use a shell wrapper script that parses the scan output and returns different exit codes based on the result.
Save the following contents to a file such as kav4fs_scan.sh:
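A minimal sketch of such a wrapper. The summary line "Threats detected: N" and the --scan-file flag are assumptions; check the actual output and options of your kav4fs version and adjust the pattern accordingly. The exit codes match virus_code = 1 and nonvirus_code = 0 in the [virus_scan] configuration:

```shell
#!/bin/bash
# Hedged sketch of kav4fs_scan.sh: scan the file given as $1 and
# exit 1 if a virus was found, 0 otherwise.
KAV=/opt/kaspersky/kav4fs/bin/kav4fs-control

parse_scan_output () {
    # Return 1 (virus_code) if a non-zero threat count appears in the
    # scan output, else 0 (nonvirus_code). The marker string is an
    # assumption -- adjust it to your kav4fs version's real output.
    if echo "$1" | grep -qE 'Threats detected:[[:space:]]*[1-9]'; then
        return 1
    fi
    return 0
}

if [ -n "$1" ]; then
    output=$(sudo "$KAV" --scan-file "$1" 2>&1)
    parse_scan_output "$output"
    exit $?
fi
```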
[virus_scan]\nscan_command = <absolute path of kav4fs_scan.sh>\nvirus_code = 1\nnonvirus_code = 0\nscan_interval = <scanning interval, in unit of minutes, default to 60 minutes>\n
The configuration file is /opt/seafile-data/seafile/conf/seafdav.conf (for deployment from binary packages, it is /opt/seafile/conf/seafdav.conf). If it does not exist yet, just create it.
[WEBDAV]\n\n# Default is false. Change it to true to enable SeafDAV server.\nenabled = true\n\nport = 8080\ndebug = true\n\n# If you deploy seafdav behind nginx/apache, you need to modify \"share_name\".\nshare_name = /seafdav\n\n# SeafDAV uses Gunicorn as web server.\n# This option maps to Gunicorn's 'workers' setting. https://docs.gunicorn.org/en/stable/settings.html?#workers\n# By default it's set to 5 processes.\nworkers = 5\n\n# This option maps to Gunicorn's 'timeout' setting. https://docs.gunicorn.org/en/stable/settings.html?#timeout\n# By default it's set to 1200 seconds, to support large file uploads.\ntimeout = 1200\n
Every time the configuration is modified, you need to restart the Seafile server for the changes to take effect.
Deploy in Docker / Deploy from binary packages
docker compose restart\n
cd /opt/seafile/seafile-server-latest/\n./seafile.sh restart\n
Your WebDAV client can access the Seafile WebDAV server at http{s}://example.com/seafdav/ (for deployment from binary packages, it is http{s}://example.com:8080/seafdav/).
Since Pro Edition 7.1.8 and Community Edition 7.1.5, an option is available to append the library ID to the library name returned by SeafDAV.
show_repo_id=true\n
"},{"location":"extension/webdav/#proxy-only-for-deploying-from-binary-packages","title":"Proxy (only for deploying from binary packages)","text":"
Tip
For deployment in Docker, the WebDAV server is already proxied at /seafdav/*, so you can skip this step.
Nginx / Apache
For SeafDAV, the Nginx configuration is as follows:
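A minimal sketch of such a location block, assuming SeafDAV listens on local port 8080 with share_name = /seafdav as in the sample seafdav.conf above (the timeout, body-size limit and log paths are illustrative choices, not mandatory values):

```nginx
location /seafdav {
    proxy_pass         http://127.0.0.1:8080/seafdav;
    proxy_set_header   Host $host;
    proxy_set_header   X-Real-IP $remote_addr;
    proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header   X-Forwarded-Proto $scheme;

    # Match the generous SeafDAV timeout to support large uploads
    proxy_read_timeout  1200s;

    # Do not limit upload size at the proxy
    client_max_body_size 0;

    access_log      /var/log/nginx/seafdav.access.log;
    error_log       /var/log/nginx/seafdav.error.log;
}
```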
"},{"location":"extension/webdav/#notes-on-clients","title":"Notes on Clients","text":"
Please note first that there are some known performance limitations when you map a Seafile WebDAV server as a local file system (or network drive).
Uploading a large number of files at once is usually much slower than with the syncing client, because each file needs to be committed separately.
Access to the WebDAV server may sometimes be slow, because the local file system driver sends many unnecessary requests to get the files' attributes.
So WebDAV is more suitable for infrequent file access. If you want better performance, please use the sync client instead.
Windows / Linux / Mac OS X
Windows Explorer supports HTTPS connections, but it requires a valid certificate on the server. It's generally recommended to use Windows Explorer to map a WebDAV server as a network drive. If you use a self-signed certificate, you have to add the certificate's CA to Windows' system CA store.
On Linux you have more choices. You can use file manager such as Nautilus to connect to webdav server. Or you can use davfs2 from the command line.
The -o option sets the owner of the mounted directory so that it's writable for non-root users.
It's recommended to disable the LOCK operation for davfs2. You have to edit /etc/davfs2/davfs2.conf:
use_locks 0\n
Finder's support for WebDAV is not very stable and is slow, so it is recommended to use WebDAV client software such as Cyberduck.
"},{"location":"extension/webdav/#frequently-asked-questions","title":"Frequently Asked Questions","text":""},{"location":"extension/webdav/#clients-cant-connect-to-seafdav-server","title":"Clients can't connect to seafdav server","text":"
By default, SeafDAV is disabled. Check whether you have enabled = true in seafdav.conf. If not, modify it and restart the Seafile server.
"},{"location":"extension/webdav/#the-client-gets-error-404-not-found","title":"The client gets \"Error: 404 Not Found\"","text":"
If you deploy SeafDAV behind Nginx/Apache, make sure to change the value of share_name as in the sample configuration above. Restart your Seafile server and try again.
First, check seafdav.log to see whether there is a log entry like the following.
\"MOVE ... -> 502 Bad Gateway\n
If you have enabled debug, there will also be the following log.
09:47:06.533 - DEBUG : Raising DAVError 502 Bad Gateway: Source and destination must have the same scheme.\nIf you are running behind a reverse proxy, you may have to rewrite the 'Destination' header.\n(See https://github.com/mar10/wsgidav/issues/183)\n\n09:47:06.533 - DEBUG : Caught (502, \"Source and destination must have the same scheme.\\nIf you are running behind a reverse proxy, you may have to rewrite the 'Destination' header.\\n(See https://github.com/mar10/wsgidav/issues/183)\")\n
This issue usually occurs when you have configured HTTPS, but the request was forwarded, resulting in the HTTP_X_FORWARDED_PROTO value in the request received by Seafile not being HTTPS.
You can solve this by manually changing the value of HTTP_X_FORWARDED_PROTO. For example, in nginx, change
proxy_set_header X-Forwarded-Proto $scheme;\n
to
proxy_set_header X-Forwarded-Proto https;\n
"},{"location":"extension/webdav/#windows-explorer-reports-file-size-exceeds-the-limit-allowed-and-cannot-be-saved","title":"Windows Explorer reports \"file size exceeds the limit allowed and cannot be saved\"","text":"
This happens when you map WebDAV as a network drive and try to copy a file larger than about 50MB from the network drive to a local folder.
This is because Windows Explorer limits the size of files downloaded from a WebDAV server. To raise this limit, change a registry entry on the client machine: there is a registry key named FileSizeLimitInBytes under HKEY_LOCAL_MACHINE -> SYSTEM -> CurrentControlSet -> Services -> WebClient -> Parameters.
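For example, from an elevated command prompt. The value 4294967295 (the 4 GB maximum for a REG_DWORD) is one common choice, not a required value; pick a limit that suits you:

```bat
reg add "HKLM\SYSTEM\CurrentControlSet\Services\WebClient\Parameters" /v FileSizeLimitInBytes /t REG_DWORD /d 4294967295 /f

rem Restart the WebClient service so the new limit takes effect
net stop WebClient && net start WebClient
```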
Seafile Server consists of the following two components:
Seahub (django): the web frontend. The Seafile server package contains a lightweight Python HTTP server, Gunicorn, that serves the website. By default, Seahub runs as an application within Gunicorn.
Seafile server (seaf-server): the data service daemon; it handles raw file upload, download and synchronization. The Seafile server listens on port 8082 by default. You can configure Nginx/Apache to proxy traffic to the local 8082 port.
The picture below shows how Seafile clients access files when you configure Seafile behind Nginx/Apache.
Tip
All access to the Seafile service (including Seahub and Seafile server) can be configured behind Nginx or Apache web server. This way all network traffic to the service can be encrypted with HTTPS.
Seafile manages files using libraries. Every library has an owner, who can share the library to other users or share it with groups. The sharing can be read-only or read-write.
Read-only libraries can be synced to the local desktop, but modifications at the client will not be synced back. If a user has modified some file contents, they can use \"resync\" to revert the modifications.
Sharing controls whether a user or group can see a library, while sub-folder permissions are used to modify permissions on specific folders.
Suppose you share a library as read-only to a group and then want specific sub-folders to be read-write for a few users: you can set read-write permissions on those sub-folders for those users and groups.
Note
Setting sub-folder permission for a user without sharing the folder or parent folder to that user will have no effect.
Sharing a library read-only to a user and then sharing a sub-folder read-write to that user will create two shared items for that user, which causes confusion. Use sub-folder permissions instead.
"},{"location":"setup/caddy/","title":"HTTPS and Caddy","text":"
Note
Since Seafile Docker 12.0, HTTPS is handled by Caddy. The default Caddy image used by Seafile Docker is lucaslorentz/caddy-docker-proxy:2.9-alpine.
Caddy is a modern open-source web server that binds external traffic to the internal services in Seafile Docker. In addition to the advantages of traditional proxy components (e.g., Nginx), Caddy makes it easier for users to acquire and renew HTTPS certificates through a simpler configuration.
To enable HTTPS, you only need to correctly configure the following fields in .env:
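For illustration, the relevant .env fields might look like this (the host name is a placeholder; use your own domain):

```ini
SEAFILE_SERVER_PROTOCOL=https
SEAFILE_SERVER_HOSTNAME=seafile.example.com
```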
The Seafile cluster solution employs a 3-tier architecture:
Load balancer tier: distributes incoming traffic to the Seafile servers. HA can be achieved by deploying multiple load balancer instances.
Seafile server cluster: a cluster of Seafile server instances. If one instance fails, the load balancer stops forwarding traffic to it, so HA is achieved.
Backend storage: Distributed storage cluster, e.g. S3, Openstack Swift or Ceph.
This architecture scales horizontally. That means, you can handle more traffic by adding more machines. The architecture is visualized in the following picture.
There are two main components on the Seafile server node: web server (Nginx/Apache) and Seafile app server. The web server passes requests from the clients to Seafile app server. The Seafile app servers work independently. They don't know about each other's state. That means each app server can fail independently without affecting other app server instances. The load balancer is responsible for detecting failure and re-routing requests.
Even though Seafile app servers work independently, they still have to share some session information. All shared session information is stored in memory cache. Thus, all Seafile app servers have to connect to the same memory cache server (cluster). Since Pro Edition 11.0, both memcached and Redis can be used as memory cache. Before 11.0, only memcached is supported. More details about memory cache configuration is available later.
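Since the shared session information lives in the memory cache, every app server must point at the same cache server. A sketch of the corresponding cache section in seahub_settings.py, assuming memcached and a host reachable as "memcached" (both the backend choice and the host name are assumptions; substitute your own cache server address):

```python
# Shared cache configuration for seahub_settings.py.
# All Seafile app servers in the cluster must use the SAME
# memcached server (or cluster) so session data is shared.
CACHES = {
    'default': {
        'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',
        'LOCATION': 'memcached:11211',  # host:port of the shared cache
    },
}
```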
The background server is the workhorse for various background tasks, including full-text indexing, office file preview, virus scanning and LDAP syncing. It should usually run on a dedicated server for better performance. Currently only one background task server can run in the entire cluster. If more than one background server is running, they may conflict with each other when performing some tasks. If you need HA for the background task server, consider using Keepalived to build a hot backup for it.
In the Seafile cluster, only one server should run the background tasks, including:
indexing files for search
email notification
LDAP sync
virus scan
Let's assume you have three nodes in your cluster: A, B, and C.
Node A is the backend node that runs background tasks.
Nodes B and C are frontend nodes that serve requests from clients.
Please refer here for the details about the cluster requirements for all nodes in Seafile cluster. In general, we recommend that each node should have at least 2G RAM and a 2-core CPU (> 2GHz).
"},{"location":"setup/cluster_deploy_with_docker/#deploy-seafile-service","title":"Deploy Seafile service","text":""},{"location":"setup/cluster_deploy_with_docker/#deploy-the-first-seafile-frontend-node","title":"Deploy the first Seafile frontend node","text":"
Create the mount directory
mkdir -p /opt/seafile/shared\n
Pull the Seafile image
Tip
Since v12.0, Seafile PE images are hosted on DockerHub and do not require a username and password to download.
Modify the variables in .env (especially the terms like <...>).
Tip
If you have already deployed S3 storage backend and plan to apply it to Seafile cluster, you can modify the variables in .env to set them synchronously during initialization.
The current Seafile cluster only supports Memcached as the cache, but the cache can still be configured through .env. You therefore do not need to set CACHE_PROVIDER; just set MEMCACHED_HOST and MEMCACHED_PORT correctly in .env.
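For example, the relevant part of .env might look like the following (the host address is a placeholder; point it at your shared Memcached server):

```ini
CLUSTER_INIT_MODE=true
MEMCACHED_HOST=192.168.0.2
MEMCACHED_PORT=11211
```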
Place the license file
If you have a seafile-license.txt license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile/shared. If you have modified the path, save the license file under your custom path.
If the license file has a different name or cannot be read, the Seafile server will start in trial mode with at most THREE users.
Start the Seafile docker
docker compose up -d\n
Cluster init mode
Because CLUSTER_INIT_MODE is true in the .env file, Seafile Docker will start in init mode and generate configuration files. As a result, you will see the following lines if you trace the Seafile container (i.e., docker logs seafile):
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n\n\n[2024-11-21 02:22:37] Updating version stamp\nStart init\n\nInit success\n
In initialization mode, the service will not be started. During this time you can check the generated configuration files (e.g., the settings for MySQL, Memcached and Elasticsearch):
seafevents.conf
seafile.conf
seahub_settings.py
After initializing the cluster, the following fields can be removed from .env:
CLUSTER_INIT_MODE (must be removed from the .env file)
CLUSTER_INIT_ES_HOST
CLUSTER_INIT_ES_PORT
Tip
We recommend that you check that the relevant configuration files are correct and copy the SEAFILE_VOLUME directory before the service is officially started, because only the configuration files are generated during initialization. You can later migrate the entire copied SEAFILE_VOLUME directly to other nodes:
Restart the container to start the service on the frontend node
docker compose down\ndocker compose up -d\n
Frontend node starts successfully
After executing the above command, you can trace the logs of container seafile (i.e., docker logs seafile). You can see the following message if the frontend node starts successfully:
*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 20\n*** Running /scripts/enterpoint.sh...\n2024-11-21 03:02:35 Nginx ready \n\n2024-11-21 03:02:35 This is an idle script (infinite loop) to keep container running. \n---------------------------------\n\nSeafile cluster frontend mode\n\n---------------------------------\n\n\nStarting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\nSeahub is started\n\nDone.\n
"},{"location":"setup/cluster_deploy_with_docker/#deploy-the-others-seafile-frontend-nodes","title":"Deploy the other Seafile frontend nodes","text":"
Create the mount directory
$ mkdir -p /opt/seafile/shared\n
Pull Seafile image
Copy seafile-server.yml, .env and configuration files from the first frontend node
Copy seafile-server.yml, .env and configuration files from frontend node
Note
The configuration files from the frontend node have to be put in the same path as on the frontend node, i.e., /opt/seafile/shared/seafile/conf/*
Modify .env, set CLUSTER_MODE to backend
Start the service in the backend node
docker compose up -d\n
Backend node starts successfully
After executing the above command, you can trace the logs of container seafile (i.e., docker logs seafile). You can see the following message if the backend node starts successfully:
*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 21\n*** Running /scripts/enterpoint.sh...\n2024-11-21 03:11:59 Nginx ready \n2024-11-21 03:11:59 This is an idle script (infinite loop) to keep container running. \n\n---------------------------------\n\nSeafile cluster backend mode\n\n---------------------------------\n\n\nStarting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seafile background tasks ...\nDone.\n
Since Seafile Pro Server 6.0.0, cluster deployment requires \"sticky session\" settings in the load balancer; otherwise folder downloads on the web UI sometimes won't work properly. Read the \"Load Balancer Setting\" section below for details.
Generally speaking, in order to access the Seafile service reliably, we recommend that you use a load balancing service in front of the Seafile cluster and bind your domain name (such as seafile.cluster.com) to it. Usually, you can use:
Cloud service provider's load balancing service (e.g., AWS Elastic Load Balancer)
Deploy your own load balancing service; this document covers two common load balancing services:
"},{"location":"setup/cluster_deploy_with_docker/#haproxy-and-keepalived-services","title":"HAproxy and Keepalived services","text":"
Execute the following commands on the two Seafile frontend servers:
$ apt install haproxy keepalived -y\n\n$ mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak\n\n$ cat > /etc/haproxy/haproxy.cfg << 'EOF'\nglobal\n log 127.0.0.1 local1 notice\n maxconn 4096\n user haproxy\n group haproxy\n\ndefaults\n log global\n mode http\n retries 3\n timeout connect 10000\n timeout client 300000\n timeout server 36000000\n\nlisten seafile 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n cookie SERVERID insert indirect nocache\n server seafile01 Front-End01-IP:8001 check port 11001 cookie seafile01\n server seafile02 Front-End02-IP:8001 check port 11001 cookie seafile02\nEOF\n
Warning
Please correctly modify the IP addresses (Front-End01-IP and Front-End02-IP) of the frontend servers in the above configuration file. Otherwise it cannot work properly.
Choose one of the above two servers as the master node, and the other as the slave node.
Perform the following operations on the master node:
$ cat > /etc/keepalived/keepalived.conf << 'EOF'\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node1\n vrrp_mcast_group4 224.0.100.18\n}\n\nvrrp_instance VI_1 {\n state MASTER\n interface eno1 # Set to the device name of a valid network interface on the current server, and the virtual IP will be bound to the network interface\n virtual_router_id 50\n priority 100\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass seafile123\n }\n virtual_ipaddress {\n 172.26.154.45/24 dev eno1 # Configure to the correct virtual IP and network interface device name\n }\n}\nEOF\n
Warning
Please correctly configure the virtual IP address and network interface device name in the above file. Otherwise it cannot work properly.
Perform the following operations on the standby node:
$ cat > /etc/keepalived/keepalived.conf << 'EOF'\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node2\n vrrp_mcast_group4 224.0.100.18\n}\n\nvrrp_instance VI_1 {\n state BACKUP\n interface eno1 # Set to the device name of a valid network interface on the current server, and the virtual IP will be bound to the network interface\n virtual_router_id 50\n priority 98\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass seafile123\n }\n virtual_ipaddress {\n 172.26.154.45/24 dev eno1 # Configure to the correct virtual IP and network interface device name\n }\n}\nEOF\n
Finally, run the following commands on the two Seafile frontend servers to start the corresponding services:
You can enable HTTPS on your load balancing service, e.g., by using a certificate manager (such as Certbot) to acquire certificates and enable HTTPS for your Seafile cluster. After enabling HTTPS, you have to change the relevant URLs in seahub_settings.py and .env from the http:// prefix to https://.
Please refer here for the details about the cluster requirements for all nodes in Seafile cluster. In general, we recommend that each node should have at least 2G RAM and a 2-core CPU (> 2GHz).
Two tools are suggested and can be installed with official installation guide on all nodes:
kubectl
k8s control plane tool (e.g., kubeadm)
After installation, you need to start the k8s control plane service on each node and refer to the k8s official manual for creating a cluster.
Tip
Although we recommend installing the k8s control plane tool on each node, this does not mean every node will be used as a control plane node; it is simply a necessary tool for creating or joining a K8S cluster. For details, please refer to the above link about creating or joining a cluster.
"},{"location":"setup/cluster_deploy_with_k8s/#download-k8s-yaml-files-for-seafile-cluster-without-frontend-node","title":"Download K8S YAML files for Seafile cluster (without frontend node)","text":"
Here we assume you download the YAML files to /opt/seafile-k8s-yaml. They mainly include:
seafile-xx-deployment.yaml for creating and managing the frontend and backend service pods,
seafile-service.yaml for exposing Seafile services to the external network,
seafile-persistentVolume.yaml for defining the location of a volume used for persistent storage on the host
seafile-persistentvolumeclaim.yaml for declaring the use of persistent storage in the container.
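As an illustration, a minimal seafile-service.yaml might look like the following sketch (the metadata name, selector label and ports are assumptions; match them to the labels and ports in your deployment YAML):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: seafile-frontend
  namespace: seafile
spec:
  type: NodePort            # expose the service outside the cluster
  selector:
    app: seafile-frontend   # must match the pod labels in the deployment YAML
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
```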
For further configuration details, you can refer to the official documents.
"},{"location":"setup/cluster_deploy_with_k8s/#modify-seafile-envyaml-and-seafile-secretyaml","title":"Modify seafile-env.yaml and seafile-secret.yaml","text":"
Similar to Docker-based deployment, a Seafile cluster deployed in K8S also supports using files to configure the startup process. You can modify common environment variables with
nano /opt/seafile-k8s-yaml/seafile-env.yaml\n
and sensitive information (e.g., password) by
nano /opt/seafile-k8s-yaml/seafile-secret.yaml\n
For seafile-secret.yaml
To modify sensitive information (e.g., passwords), you need to convert the value into base64 encoding before writing it into the seafile-secret.yaml file:
echo -n '<your-value>' | base64\n
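For example, encoding a placeholder password MySecretPass (the value is purely illustrative; printf '%s' avoids the trailing newline, equivalent to echo -n):

```shell
# Encode a placeholder password for seafile-secret.yaml
encoded=$(printf '%s' 'MySecretPass' | base64)
echo "$encoded"   # prints TXlTZWNyZXRQYXNz

# Verify by decoding it back
printf '%s' "$encoded" | base64 -d   # prints MySecretPass
```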
Warning
Fields marked with <...> are required. Please make sure these items are filled in, otherwise the Seafile server may not run properly.
You can now use the following command to initialize the Seafile cluster (Seafile's K8S resources will be created in the namespace seafile for easier management):
While the Seafile cluster is initializing, it runs under the following conditions:
Only the backend service exists (i.e., only the Seafile backend K8S resource files are applied)
CLUSTER_INIT_MODE=true
Success
You can check the following output through kubectl logs seafile-xxxx -n seafile to see whether the initialization process has finished:
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n\n\n[2024-11-21 02:22:37] Updating version stamp\nStart init\n\nInit success\n
When the initialization is complete, the server will stop automatically (because no operations are performed after initialization finishes).
We recommend that you check whether the contents of the configuration files in /opt/seafile/shared/seafile/conf, which are automatically generated during the initialization process, are correct before going to the next step.
"},{"location":"setup/cluster_deploy_with_k8s/#put-the-license-into-optseafileshared","title":"Put the license into /opt/seafile/shared","text":"
If you have a seafile-license.txt license file, first locate the /opt/seafile/shared directory generated during initialization, then simply put the license file in this path.
Finally, you can use the tar -zcvf and tar -zxvf commands to package the entire /opt/seafile/shared directory of the current node, copy it to the other nodes, and unpack it to the same directory so that it takes effect on all nodes.
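A worked example of this packaging step, using a throwaway directory under /tmp in place of /opt/seafile/shared (all paths and the file contents are placeholders for illustration):

```shell
# Stand-in for the shared directory on the initialized node
mkdir -p /tmp/shared-demo/seafile/conf
echo "demo" > /tmp/shared-demo/seafile/conf/seafile.conf

# Package it, as you would with the real /opt/seafile/shared
tar -zcvf /tmp/shared-demo.tar.gz -C /tmp shared-demo

# On the target node: unpack to the same directory
mkdir -p /tmp/other-node
tar -zxvf /tmp/shared-demo.tar.gz -C /tmp/other-node
```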
If the license file has a different name or cannot be read, the Seafile server will start in trial mode with at most THREE users.
"},{"location":"setup/cluster_deploy_with_k8s/#download-frontend-services-yaml-and-restart-pods-to-start-seafile-server","title":"Download frontend service's YAML and restart pods to start Seafile server","text":"
Modify seafile-env.yaml, and set CLUSTER_INIT_MODE to false (i.e., disable initialization mode)
Run the following command to restart the pods and restart the Seafile cluster:
Tip
If you modify configurations in /opt/seafile/shared/seafile/conf or YAML files in /opt/seafile-k8s-yaml/, you need to restart the services for the modifications to take effect.
You can view the pod's log to check whether the startup proceeds normally. You will see the following message if the server is running normally:
*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 20\n*** Running /scripts/enterpoint.sh...\n2024-11-21 03:02:35 Nginx ready \n\n2024-11-21 03:02:35 This is an idle script (infinite loop) to keep container running. \n---------------------------------\n\nSeafile cluster frontend mode\n\n---------------------------------\n\n\nStarting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\nSeahub is started\n\nDone.\n
Please refer here for further advanced operations.
"},{"location":"setup/helm_chart_cluster/","title":"Deploy Seafile cluster with Kubernetes (K8S) by Seafile Helm Chart","text":"
This manual explains how to deploy and run a Seafile cluster on a Linux server using Seafile Helm Chart (chart hereafter). You can also refer to here to use K8S resource files to deploy a Seafile cluster in your K8S cluster.
Please refer here for the details about the cluster requirements for all nodes in Seafile cluster. In general, we recommend that each node should have at least 2G RAM and a 2-core CPU (> 2GHz).
Two tools are suggested and can be installed with official installation guide on all nodes:
kubectl
k8s control plane tool (e.g., kubeadm)
After installation, you need to start the k8s control plane service on each node and refer to the k8s official manual for creating a cluster.
Tip
Although we recommend installing the k8s control plane tool on each node, this does not mean every node will be used as a control plane node; it is simply a necessary tool for creating or joining a K8S cluster. For details, please refer to the above link about creating or joining a cluster.
It is not necessary to use the my-values.yaml we provide (i.e., you can create an empty my-values.yaml and add only the required fields, as the others have default values in our chart), since mandating it would reduce the flexibility of deploying with Helm. However, it documents the formats in which Seafile Helm Chart reads these configurations, as well as all the environment variables and secret variables that can be set directly.
In addition, you can also define a custom storageClassName for the persistence directory used by Seafile. You only need to specify storageClassName in the seafile.configs.seafileDataVolume object in my-values.yaml:
seafile:\n configs:\n seafileDataVolume:\n storageClassName: <your seafile storage class name>\n ...\n
You can check any frontend node in the Seafile cluster. If the following information is output, the Seafile cluster is running normally:
Defaulted container \"seafile-frontend\" out of: seafile-frontend, set-ownership (init)\n*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 21\n*** Running /scripts/enterpoint.sh...\n2025-02-13 09:23:49 Nginx ready \n2025-02-13 09:23:49 This is an idle script (infinite loop) to keep container running. \n\n---------------------------------\n\nSeafile cluster frontend mode\n\n---------------------------------\n\n\nStarting seafile server, please wait ...\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(86): fileserver: web_token_expire_time = 3600\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(98): fileserver: max_index_processing_threads= 3\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(111): fileserver: fixed_block_size = 8388608\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(123): fileserver: max_indexing_threads = 1\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(138): fileserver: put_head_commit_request_timeout = 10\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(150): fileserver: skip_block_hash = 0\n[seaf-server] [2025-02-13 09:23:50] [INFO] ../common/seaf-utils.c(581): Use database Mysql\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(243): fileserver: worker_threads = 10\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(256): fileserver: backlog = 32\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(267): fileserver: verify_client_blocks = 1\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(289): fileserver: cluster_shared_temp_file_mode = 600\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(336): fileserver: check_virus_on_web_upload = 0\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(362): fileserver: enable_async_indexing = 0\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(374): fileserver: async_indexing_threshold = 700\n[seaf-server] [2025-02-13 
09:23:50] [INFO] http-server.c(386): fileserver: fs_id_list_request_timeout = 300\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(399): fileserver: max_sync_file_count = 100000\n[seaf-server] [2025-02-13 09:23:50] [WARNING] ../common/license.c(716): License file /opt/seafile/seafile-license.txt does not exist, allow at most 3 trial users\nLicense file /opt/seafile/seafile-license.txt does not exist, allow at most 3 trial users\n[seaf-server] [2025-02-13 09:23:50] [INFO] filelock-mgr.c(1397): Cleaning expired file locks.\n[2025-02-13 09:23:52] Start Monitor \n[2025-02-13 09:23:52] Start seafevents.main \n/opt/seafile/seafile-pro-server-12.0.9/seahub/seahub/settings.py:1101: SyntaxWarning: invalid escape sequence '\\w'\nmatch = re.search('^EXTRA_(\\w+)', attr)\n/opt/seafile/seafile-pro-server-12.0.9/seahub/thirdpart/seafobj/mc.py:13: SyntaxWarning: invalid escape sequence '\\S'\nmatch = re.match('--SERVER\\\\s*=\\\\s*(\\S+)', mc_options)\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\n\n\n----------------------------------------\nSuccessfully created seafile admin\n----------------------------------------\n\n[seafevents] [2025-02-13 09:23:55] [INFO] root:82 LDAP is not set, disable ldap sync.\n[seafevents] [2025-02-13 09:23:55] [INFO] virus_scan:51 [virus_scan] scan_command option is not found in seafile.conf, disable virus scan.\n[seafevents] [2025-02-13 09:23:55] [INFO] seafevents.app.mq_handler:127 Subscribe to channels: {'seaf_server.stats', 'seahub.stats', 'seaf_server.event', 'seahub.audit'}\n[seafevents] [2025-02-13 09:23:55] [INFO] root:534 Start counting user activity info..\n[seafevents] [2025-02-13 09:23:55] [INFO] root:547 [UserActivityCounter] update 0 items.\n[seafevents] [2025-02-13 09:23:55] [INFO] root:240 Start counting traffic info..\n[seafevents] [2025-02-13 09:23:55] [INFO] root:268 Traffic counter finished, total time: 0.0003578662872314453 seconds.\n[seafevents] [2025-02-13 09:23:55] [INFO] root:23 Start file 
updates sender, interval = 300 sec\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:57 Can not start work weixin notice sender: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [INFO] root:131 search indexer is started, interval = 600 sec\n[seafevents] [2025-02-13 09:23:55] [INFO] root:56 seahub email sender is started, interval = 1800 sec\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:17 Can not start ldap syncer: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:18 Can not start virus scanner: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [INFO] root:35 Start data statistics..\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:40 Can not start content scanner: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:46 Can not scan repo old files auto del days: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [INFO] root:182 Start counting total storage..\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:78 Can not start filename index updater: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [INFO] root:113 search wiki indexer is started, interval = 600 sec\n[seafevents] [2025-02-13 09:23:55] [INFO] root:87 Start counting file operations..\n[seafevents] [2025-02-13 09:23:55] [INFO] root:403 Start counting monthly traffic info..\n[seafevents] [2025-02-13 09:23:55] [INFO] root:491 Monthly traffic counter finished, update 0 user items, 0 org items, total time: 0.0905158519744873 seconds.\n[seafevents] [2025-02-13 09:23:55] [INFO] root:203 [TotalStorageCounter] No results from seafile-db.\n[seafevents] [2025-02-13 09:23:55] [INFO] root:169 [FileOpsCounter] Finish counting file operations in 0.09510159492492676 seconds, 0 added, 0 deleted, 0 visited, 0 modified\n\nSeahub is started\n\nDone.\n
If you have a seafile-license.txt license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile/shared. If you have modified the path, save the license file under your custom path.
If the license file has a different name or cannot be read, the Seafile server will start in trial mode with at most THREE users
Please refer here for further advanced operations.
"},{"location":"setup/helm_chart_single_node/","title":"Setup Seafile with a single K8S pod with Seafile Helm Chart","text":"
This manual explains how to deploy and run Seafile server on a Linux server using Seafile Helm Chart (chart hereafter) in a single pod (i.e., single-node mode). Compared to Setup by K8S resource files, deployment with a Helm chart simplifies the deployment process and provides more flexible deployment control, which is the way we recommend for deploying with K8S.
For specific environment and configuration requirements, please refer to the description of the Docker-based Seafile single-node deployment. Please also refer to the description of the K8S tools section here.
The directory used for persisting data in the Docker-based deployment, /opt/seafile-data, is still adopted in this manual. In addition, all K8S YAML files will be placed in /opt/seafile-k8s-yaml (replace it when following these instructions if you would like to use another path).
Note that we don't provide deployment methods for the basic services (e.g., Redis, MySQL and Elasticsearch) or Seafile-compatible components (e.g., SeaDoc) on K8S in our documentation. If you need to install these services on K8S, you can adapt the approach described in this document.
Please refer here for the details of the system requirements of the Seafile service. Note that these apply to all nodes where Seafile pods may be scheduled in your K8S cluster. In general, we recommend that each node have at least 2G RAM and a 2-core CPU (> 2GHz).
It is not necessary to use the my-values.yaml we provide, as relying on it reduces the flexibility of deploying with Helm; you can create an empty my-values.yaml and add only the required fields, since the others have default values defined in our chart. However, our my-values.yaml shows the formats in which Seafile Helm Chart reads these configurations, as well as all the environment variables and secret variables that can be set directly.
In addition, you can also create a custom storageClassName for the persistence directory used by Seafile. You only need to specify storageClassName in the seafile.configs.seafileDataVolume object in my-values.yaml:
seafile:\n configs:\n seafileDataVolume:\n storageClassName: <your seafile storage class name>\n ...\n
After installing the chart, the Seafile pod should start up automatically.
About Seafile service
The default service type of Seafile is LoadBalancer. You should specify a K8S load balancer for Seafile, or specify at least one external IP that can be accessed from external networks.
Important for deployment
By default, Seafile will access Redis (the default cache since Seafile 13) and Elasticsearch (Pro only) at the following service names:
Redis: redis with port 6379
Elasticsearch: elasticsearch with port 9200
If the above services are:
Not in your K8S pods (including using an external service)
With different service name
With different server port
Please modify the files in /opt/seafile-data/seafile/conf (especially seafevents.conf, seafile.conf and seahub_settings.py) to correct the configurations for the above services, otherwise the Seafile server cannot start normally. Then restart the Seafile server:
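As a sketch, assuming the Seafile deployment is named seafile in the seafile namespace (adjust both names to your setup), the restart can be done with:

```shell
# Restart the Seafile pods and wait for the rollout to finish
kubectl rollout restart deployment/seafile -n seafile
kubectl rollout status deployment/seafile -n seafile
```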
"},{"location":"setup/helm_chart_single_node/#activating-the-seafile-license-pro","title":"Activating the Seafile License (Pro)","text":"
If you have a seafile-license.txt license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile-data. If you have modified the path, save the license file under your custom path.
If the license file has a different name or cannot be read, the Seafile server will start in trial mode with at most THREE users
This document mainly describes how to manage and maintain Seafile deployed through our K8S deployment document. At the same time, if you are already proficient in using kubectl commands to manage K8S resources, you can also customize the deployment solutions we provide.
Namespaces for Seafile K8S deployment
Our documentation provides two deployment solutions for both single-node and cluster deployment (via Seafile Helm Chart and K8S resource files), both of which can be highly customized.
Regardless of which deployment method you use, in our newer manuals (usually for versions after Seafile 12.0.9), Seafile-related K8S resources (including related pods, services, persistent volumes, etc.) are defined in the seafile namespace. In previous versions, you may have deployed Seafile in the default namespace; in that case, when referring to this document for Seafile K8S resource management, be sure to remove -n seafile from the commands.
Similar to a Docker installation, you can manage containers through kubectl commands. For example, you can use the following commands to check whether the relevant resources started successfully and whether the relevant services can be accessed normally. First, execute the following command and note the pod name prefixed with seafile- (such as seafile-748b695648-d6l4g)
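A minimal sketch of such checks (the pod name is the example from above; yours will differ):

```shell
# List pods in the seafile namespace and note the seafile-* pod name
kubectl get pods -n seafile

# Inspect events and recent logs for that pod
kubectl describe pod seafile-748b695648-d6l4g -n seafile
kubectl logs seafile-748b695648-d6l4g -n seafile --tail=50
```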
"},{"location":"setup/k8s_advanced_management/#k8s-gateway-and-https","title":"K8S Gateway and HTTPS","text":"
Since the Ingress feature is no longer the focus of new versions of K8S (even the commonly used Nginx-Ingress is no longer deployed after 1.24), this article introduces how to use the newer K8S Gateway feature to expose the Seafile service and implement load balancing.
Still use Nginx-Ingress
If your K8S cluster is still running an old version and still using Nginx-Ingress, you can follow here to set up the ingress controller and HTTPS. We sincerely thank Datamate for providing an example of this configuration.
For the details and features of the K8S Gateway API, please refer to the K8S official document. You can simply install it by
The Gateway API requires configuration of three API categories in its resource model: - GatewayClass:\u00a0Defines a group of gateways with the same configuration, managed by the controller that implements the class. - Gateway:\u00a0Defines an instance of traffic handling infrastructure, which can be thought of as a load balancer. - HTTPRoute:\u00a0Defines HTTP-specific rules for mapping traffic from gateway listeners to representations of backend network endpoints. These endpoints are typically represented as\u00a0Services.
The GatewayClass resource serves the same purpose as the IngressClass in the older Ingress API, similar to the StorageClass in the Storage API. It defines the categories of Gateways that can be created. Typically, this resource is provided by your infrastructure platform, such as EKS or GKE. It can also be provided by a third-party controller, such as Nginx-gateway or Istio-gateway.
Here, we take Nginx-gateway as the example; you can install it following the official document. After installation, you can view the installation status with the following command:
# `gc` means the `gatewayclass`, and it is the same as `kubectl get gatewayclass`\nkubectl get gc \n\n#NAME CONTROLLER ACCEPTED AGE\n#nginx gateway.nginx.org/nginx-gateway-controller True 22s\n
Typically, after you install the GatewayClass, your cloud provider will provide you with a load-balancing IP, which is visible in the GatewayClass. If this IP is not assigned, you can manually bind it to an IP that can be accessed from external networks.
A Gateway is used to describe an instance of traffic-processing infrastructure. Usually, a Gateway defines a network endpoint that can be used to process traffic, that is, to filter, balance and split traffic to Services and other backends. For example, it can represent a cloud load balancer, or a cluster proxy server configured to accept HTTP traffic. As above, please refer to the official documentation for a detailed description of Gateway. Here is only a simple reference configuration for Seafile:
The HTTPRoute category specifies the routing behavior of HTTP requests from a Gateway listener to the backend network endpoints. For Service backends, the implementation can represent the backend network endpoint as a service IP or as the backing endpoints of the service. An HTTPRoute represents configuration that is applied to the underlying Gateway implementation; for example, defining a new HTTPRoute may result in configuring additional traffic routes in a cloud load balancer or in-cluster proxy server. As above, please refer to the official documentation for a detailed description of the HTTPRoute resource. Here is only a reference configuration that applies to this document.
After installing or defining GatewayClass, Gateway and HTTPRoute, you can now enable this feature by following command and view your Seafile server by the URL http://seafile.example.com/:
When using K8S Gateway, a common way to enable HTTPS is to add the relevant information about the TLS listener to the Gateway resource. You can refer here for further details. We provide a simple way here so that you can quickly enable HTTPS for your Seafile K8S.
Create a secret resource (seafile-tls-cert) for your TLS certificates:
kubectl create secret tls seafile-tls-cert \\\n--cert=<your path to fullchain.pem> \\\n--key=<your path to privkey.pem>\n
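If you just want to test the TLS setup before obtaining a real certificate, you can generate a throwaway self-signed pair with openssl first (a sketch; the CN is a placeholder, replace it with your domain):

```shell
# Generate a self-signed certificate valid for one day (testing only)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout privkey.pem -out fullchain.pem \
  -subj "/CN=seafile.example.com"

# Inspect the subject before creating the secret from these files
openssl x509 -in fullchain.pem -noout -subject
```

Browsers will warn about self-signed certificates; use a CA-issued pair in production.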
2. Use the TLS secret in your Gateway resource and enable HTTPS:
Now you can access your Seafile service at https://<your domain>/
"},{"location":"setup/k8s_advanced_management/#log-routing-and-aggregating-system","title":"Log routing and aggregating system","text":"
Similar to single-node deployment, you can browse Seafile's log files directly in the persistent volume directory (i.e., <path>/seafile/logs). The difference is that when using K8S to deploy a Seafile cluster (especially in a cloud environment), the persistent volume created is usually shared and synchronized across all nodes. However, the logs generated by the Seafile service do not record which node they came from, so browsing the files in the above folder may make it difficult to identify which node generated them. Therefore, the solution proposed here is:
Record the generated logs to standard output. In this way, the logs can be distinguished per node with kubectl logs (but all types of logs will be output together). You can enable this feature (it should be enabled by default in a K8S Seafile cluster, but not in a K8S single-pod Seafile) by setting SEAFILE_LOG_TO_STDOUT to true in seafile-env.yaml:
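As a sketch, the relevant fragment of seafile-env.yaml could look like this (the ConfigMap metadata shown here is an assumption; only the SEAFILE_LOG_TO_STDOUT entry matters):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: seafile-env
  namespace: seafile
data:
  # Route all Seafile logs to the container's standard output
  SEAFILE_LOG_TO_STDOUT: "true"
```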
The logs from step 1 can be distinguished between nodes, but they are aggregated and output together, which is not convenient for log retrieval. So you have to route the standard-output logs (i.e., distinguish logs by the corresponding component name) and re-record them in a new file or upload them to a log aggregation system (e.g., Loki).
Currently in the K8S environment, the commonly used log routing plugins are:
Fluent Bit
Fluentd
Logstash
Promtail (also a part of Loki)
Fluent Bit and Promtail are more lightweight (i.e., they consume fewer system resources), while Promtail only supports transferring logs to Loki. Therefore, this document mainly introduces log routing through Fluent Bit, a fast, lightweight logs and metrics agent. It is also a CNCF graduated sub-project under the umbrella of Fluentd, licensed under the terms of the Apache License v2.0. You should first deploy Fluent Bit in your K8S cluster by following the official document, then modify the Fluent-Bit pod settings to mount a new directory to load the configuration files:
For example, here we use /opt/fluent-bit/confs (it has to be non-shared). The parsers will be defined in /opt/fluent-bit/confs/parsers.conf, and each log type (e.g., seahub's log, seafevents' log) will be defined in /opt/fluent-bit/confs/*-log.conf. Each .conf file defines several Fluent-Bit data pipeline components:
Pipeline Description Required/Optional INPUT Specifies where and how Fluent-Bit gets the original log information, and assigns a tag to each log record after reading. Required PARSER Parses the read log records. For K8S Docker-runtime logs, they are usually in JSON format. Required FILTER Filters and selects log records with a specified tag, and assigns a new tag to new records. Optional OUTPUT Tells Fluent-Bit what format the log records for the specified tag will be in and where to output them (such as a file, Elasticsearch, Loki, etc.). Required
Warning
PARSER sections can only be stored in /opt/fluent-bit/confs/parsers.conf; otherwise Fluent-Bit cannot start up normally.
As described above, each container generates a log file (usually in /var/log/containers/<container-name>-xxxxxx.log), so you need to define an input and add the following (for more details, please refer to the official document about the tail input plugin) in /opt/fluent-bit/confs/seafile-log.conf:
[INPUT]\n Name tail\n Path /var/log/containers/seafile-frontend-*.log\n Buffer_Chunk_Size 2MB\n Buffer_Max_Size 10MB\n Docker_Mode On\n Docker_Mode_Flush 5\n Tag seafile.*\n Parser Docker # for definition, please see the next section as well\n\n[INPUT]\n Name tail\n Path /var/log/containers/seafile-backend-*.log\n Buffer_Chunk_Size 2MB\n Buffer_Max_Size 10MB\n Docker_Mode On\n Docker_Mode_Flush 5\n Tag seafile.*\n Parser Docker\n
The above defines two inputs, which monitor the seafile-frontend and seafile-backend services respectively. They are written together here because, for a given node, you may not know when it will run the frontend service and when the backend service, but both share the same tag prefix seafile..
Each input has to use a parser to parse the logs and pass them to the filter. Here, a parser named Docker is created to parse the logs generated by the K8S Docker-runtime container. The parser is placed in /opt/fluent-bit/confs/parsers.conf (for more details, please refer to the official document about the JSON parser):
[PARSER]\n Name Docker\n Format json\n Time_Key time\n Time_Format %Y-%m-%dT%H:%M:%S.%LZ\n
Log records after parsing
The logs of the Docker container are saved in /var/log/containers in JSON format (see the sample below), which is why we use the JSON format in the above parser.
When these logs are obtained by the input and parsed by the parser, they become independent log records with the following fields:
log: The original log content (i.e., the same as you see in kubectl logs seafile-xxx -n seafile) plus an extra line break at the end (i.e., \\n). This is also the field we need to save or upload to the log aggregation system in the end.
stream: The stream the original log came from. stdout means standard output.
time: The time when the log is recorded in the corresponding stream (ISO 8601 format).
Add two filters to /opt/fluent-bit/confs/seafile-log.conf for record filtering and routing. Here, the record_modifier filter selects useful keys in the log records (see the contents of the tip above; only the log field is what we need) and the rewrite_tag filter routes logs according to specific rules:
[FILTER] \n Name record_modifier\n Match seafile.*\n Allowlist_key log\n\n\n[FILTER]\n Name rewrite_tag\n Match seafile.*\n Rule $log ^.*\\[seaf-server\\].*$ seaf-server false # for seafile's logs\n Rule $log ^.*\\[seahub\\].*$ seahub false # for seahub's logs\n Rule $log ^.*\\[seafevents\\].*$ seafevents false # for seafevents' logs\n Rule $log ^.*\\[seafile-slow-rpc\\].*$ seafile-slow-rpc false # for slow-rpc's logs\n
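The rewrite_tag rules are plain regular expressions; you can check them against sample lines with grep before deploying (the sample lines below are invented for illustration):

```shell
line1='[seaf-server] [2025-02-13 09:23:50] [INFO] filelock-mgr.c(1397): Cleaning expired file locks.'
line2='[seafevents] [2025-02-13 09:23:55] [INFO] root:82 LDAP is not set, disable ldap sync.'

# Same patterns as the rewrite_tag rules above
echo "$line1" | grep -Eq '^.*\[seaf-server\].*$' && echo 'routed to seaf-server'
echo "$line2" | grep -Eq '^.*\[seafevents\].*$'  && echo 'routed to seafevents'
```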
"},{"location":"setup/k8s_advanced_management/#output-logs-to-loki","title":"Output logs to Loki","text":"
Loki is a multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost-effective and easy to operate. The Fluent-Bit built-in loki output plugin allows you to send your logs or events to a Loki service. It supports data enrichment with Kubernetes labels, custom label keys and Tenant ID, among others.
Alternative Fluent-Bit Loki plugin by Grafana
For sending logs to Loki, there are two plugins for Fluent-Bit:
The built-in loki plugin, maintained officially by Fluent-Bit; we will use it in this part because it provides the most complete features.
Grafana-loki plugin maintained by Grafana Labs.
Since each output does not carry a distinguishing mark in the configuration files (Fluent-Bit treats each plugin as part of a tag workflow), define one output per log type:
Seaf-server log: Add an output to /opt/fluent-bit/confs/seaf-server-log.conf:
[OUTPUT]\n Name loki\n Match seaf-server\n Host <your Loki's host>\n port <your Loki's port>\n labels job=fluentbit, node_name=<your-node-name>, node_id=<your-node-id> # node_name and node_id is optional, but recommended for identifying the source node\n
seahub log: Add an output to /opt/fluent-bit/confs/seahub-log.conf:
[OUTPUT]\n Name loki\n Match seahub\n Host <your Loki's host>\n port <your Loki's port>\n labels job=fluentbit, node_name=<your-node-name>, node_id=<your-node-id> # node_name and node_id is optional, but recommended for identifying the source node\n
seafevents log: Add an output to /opt/fluent-bit/confs/seafevents-log.conf:
[OUTPUT]\n Name loki\n Match seafevents\n Host <your Loki's host>\n port <your Loki's port>\n labels job=fluentbit, node_name=<your-node-name>, node_id=<your-node-id> # node_name and node_id is optional, but recommended for identifying the source node\n
seafile-slow-rpc log: Add an output to /opt/fluent-bit/confs/seafile-slow-rpc-log.conf:
[OUTPUT]\n Name loki\n Match seafile-slow-rpc\n Host <your Loki's host>\n port <your Loki's port>\n labels job=fluentbit, node_name=<your-node-name>, node_id=<your-node-id> # node_name and node_id is optional, but recommended for identifying the source node\n
Cloud Loki instance
If you are using a cloud Loki instance, you can follow the Fluent-Bit Loki plugin document to fill in all necessary fields. Usually, the following additional fields are needed for a cloud Loki service:
tls
tls.verify
http_user
http_passwd
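Putting these together, a cloud-Loki output might look like the following sketch (host, port and credentials are placeholders; check the plugin document for the exact option names and values your provider requires):

```
[OUTPUT]
    Name        loki
    Match       seaf-server
    Host        <your cloud Loki host>
    Port        443
    tls         on
    tls.verify  on
    http_user   <your Loki user>
    http_passwd <your Loki API key>
    labels      job=fluentbit
```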
"},{"location":"setup/k8s_single_node/","title":"Setup Seafile with a single K8S pod with K8S resources files","text":"
This manual explains how to deploy and run Seafile server on a Linux server using Kubernetes (K8S hereafter) in a single pod (i.e., single-node mode). This document is essentially an extended description of the Docker-based Seafile single-node deployment (supporting both CE and Pro).
For specific environment and configuration requirements, please refer to the description of the Docker-based Seafile single-node deployment. Please also refer to the description of the K8S tools section here.
Please refer here for the details of the system requirements of the Seafile service. Note that these apply to all nodes where Seafile pods may be scheduled in your K8S cluster. In general, we recommend that each node have at least 2G RAM and a 2-core CPU (> 2GHz).
The directory used for persisting data in the Docker-based deployment, /opt/seafile-data, is still adopted in this manual. In addition, all K8S YAML files will be placed in /opt/seafile-k8s-yaml (replace it when following these instructions if you would like to use another path).
Note that we don't provide deployment methods for the basic services (e.g., Memcached, MySQL and Elasticsearch) or Seafile-compatible components (e.g., SeaDoc) on K8S in our documentation. If you need to install these services on K8S, you can adapt the approach described in this document.
"},{"location":"setup/k8s_single_node/#down-load-the-yaml-files-for-seafile-server","title":"Download the YAML files for Seafile Server","text":"Pro editionCommunity edition
Here we assume you download the YAML files to /opt/seafile-k8s-yaml. They mainly include:
seafile-deployment.yaml for Seafile server pod management and creation,
seafile-service.yaml for exposing Seafile services to the external network,
seafile-persistentVolume.yaml for defining the location of a volume used for persistent storage on the host
seafile-persistentvolumeclaim.yaml for declaring the use of persistent storage in the container.
For further configuration details, you can refer to the official documents.
"},{"location":"setup/k8s_single_node/#modify-seafile-envyaml-and-seafile-secretyaml","title":"Modify seafile-env.yaml and seafile-secret.yaml","text":"
Similar to the Docker-based deployment, Seafile in a K8S deployment also supports using files to configure the startup process. You can modify common environment variables by
nano /opt/seafile-k8s-yaml/seafile-env.yaml\n
and sensitive information (e.g., password) by
nano /opt/seafile-k8s-yaml/seafile-secret.yaml\n
For seafile-secret.yaml
To modify sensitive information (e.g., password), you need to convert the password into base64 encoding before writing it into the seafile-secret.yaml file:
echo -n '<your-value>' | base64\n
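For example, encoding a hypothetical password and decoding it back to verify the round-trip:

```shell
# Encode (printf avoids the trailing newline that a bare echo would add)
printf '%s' 'my-password' | base64
# → bXktcGFzc3dvcmQ=

# Decode to verify
printf '%s' 'bXktcGFzc3dvcmQ=' | base64 -d
# → my-password
```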
Warning
Fields marked with <...> are required. Please make sure these items are filled in, otherwise the Seafile server may not run properly.
By default, Seafile (Pro) will access Memcached and Elasticsearch at the following service names:
Memcached: memcached with port 11211
Elasticsearch: elasticsearch with port 9200
If the above services are:
Not in your K8S pods (including using an external service)
With different service name
With different server port
Please modify the files in /opt/seafile-data/seafile/conf (especially seafevents.conf, seafile.conf and seahub_settings.py) to correct the configurations for the above services, otherwise the Seafile server cannot start normally. Then restart the Seafile server:
"},{"location":"setup/k8s_single_node/#activating-the-seafile-license-pro","title":"Activating the Seafile License (Pro)","text":"
If you have a seafile-license.txt license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile-data. If you have modified the path, save the license file under your custom path.
If the license file has a different name or cannot be read, the Seafile server will start in trial mode with at most THREE users
Please refer here for further advanced operations.
"},{"location":"setup/migrate_backends_data/","title":"Migrate data between different backends","text":"
Seafile supports data migration between filesystem, S3, Ceph, Swift and Alibaba OSS via a built-in script. Before migration, you have to ensure that both storage backends can be accessed normally.
Migration to or from S3
Since version 11, when you migrate from S3 to other storage servers or from other storage servers to S3, you have to use the V4 authentication protocol. This is because version 11 upgrades to the Boto3 library, which fails to list objects from S3 when it is configured to use the V2 authentication protocol.
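For reference, an S3 backend section in seafile.conf with V4 authentication enabled typically looks like the following sketch (bucket name, keys and region are placeholders; similar [fs_object_backend] and [block_backend] sections are also needed — see the S3 backend documentation for all options):

```ini
[commit_object_backend]
name = s3
bucket = my-commit-objects
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
aws_region = eu-central-1
```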
"},{"location":"setup/migrate_backends_data/#copy-seafileconf-and-use-new-s3-configurations","title":"Copy seafile.conf and use new S3 configurations","text":"
During the migration process, Seafile needs to know where the data will be migrated to. The easiest way is to copy the original seafile.conf to a new path, and then use the new S3 configurations in this file.
Deploy with DockerDeploy from binary package
Warning
For deployment with Docker, the new seafile.conf has to be put in the persistent directory (e.g., /opt/seafile-data/seafile.conf) used by the Seafile service. Otherwise the script cannot locate the new configuration file.
Then you can follow here to use the new S3 configurations in the new seafile.conf. By the way, if you want to migrate to a local file system, an example of the new seafile.conf is as follows:
Since the data migration process does not stop the Seafile service, if the original S3 data is modified during this process, it may get out of sync with the migrated data. Therefore, we recommend that you stop the Seafile service before executing the migration procedure.
cd /opt/seafile/seafile-server-latest\n./seahub.sh stop\n./seafile.sh stop\n
"},{"location":"setup/migrate_backends_data/#run-migratesh-to-initially-migrate-objects","title":"Run migrate.sh to initially migrate objects","text":"
This step will migrate most of the objects from the source storage to the destination storage. You don't need to stop the Seafile service at this stage, as it may take quite a long time to finish. Since the service is not stopped, some new objects may be added to the source storage during migration. Those objects will be handled in the next step:
Speed-up migrating large number of objects
If you have millions of objects in the storage (especially fs objects), it may take quite a long time to migrate all objects, and more than half of that time is spent checking whether an object already exists in the destination storage. In this situation, you can increase the nworker and maxsize variables in migrate.py:
However, if the two values (i.e., nworker and maxsize) are too large, the improvement in data migration speed may not be obvious because the disk I/O bottleneck has been reached.
Encrypted storage backend data (deprecated)
If you have an encrypted storage backend, you can use this script to migrate and decrypt the data from that backend to a new one. You can add the --decrypt option when calling the script, which will decrypt the data while reading it, and then write the unencrypted data to the new backend:
./migrate.sh /opt --decrypt\n
Deploy with DockerDeploy from binary package
# make sure you are in the container and in directory `/opt/seafile/seafile-server-latest`\n./migrate.sh /shared\n\n# exit container and stop it\nexit\ndocker compose down\n
# make sure you are in the directory `/opt/seafile/seafile-server-latest`\n./migrate.sh /opt\n
Success
You can see the following message if the migration process is done:
2025-01-15 05:49:39,408 Start to fetch [commits] object from destination\n2025-01-15 05:49:39,422 Start to fetch [fs] object from destination\n2025-01-15 05:49:39,442 Start to fetch [blocks] object from destination\n2025-01-15 05:49:39,677 [commits] [0] objects exist in destination\n2025-01-15 05:49:39,677 Start to migrate [commits] object\n2025-01-15 05:49:39,749 [blocks] [0] objects exist in destination\n2025-01-15 05:49:39,755 Start to migrate [blocks] object\n2025-01-15 05:49:39,752 [fs] [0] objects exist in destination\n2025-01-15 05:49:39,762 Start to migrate [fs] object\n2025-01-15 05:49:40,602 Complete migrate [commits] object\n2025-01-15 05:49:40,626 Complete migrate [blocks] object\n2025-01-15 05:49:40,790 Complete migrate [fs] object\nDone.\n
"},{"location":"setup/migrate_backends_data/#replace-the-original-seafileconf-and-start-seafile","title":"Replace the original seafile.conf and start Seafile","text":"
After running the script, we recommend that you check whether your data already exists on the new storage backend (i.e., the migration succeeded, and the number and size of files are the same). Then you can remove the files from the old storage backend and replace the original seafile.conf with the new one:
# make sure you are in the directory `/opt/seafile/seafile-server-latest`\n./seahub.sh start\n./seafile.sh start\n
"},{"location":"setup/migrate_ce_to_pro_with_docker/","title":"Migrate CE to Pro with Docker","text":""},{"location":"setup/migrate_ce_to_pro_with_docker/#preparation","title":"Preparation","text":"
Make sure you are running a Seafile Community edition that matches the latest version of the Pro edition. For example, if the latest Pro edition is version 13.0, you should first upgrade the Community edition to version 13.0.
Purchase Seafile Professional license file.
Download the .env and seafile-server.yml of Seafile Pro.
"},{"location":"setup/migrate_ce_to_pro_with_docker/#migrate","title":"Migrate","text":""},{"location":"setup/migrate_ce_to_pro_with_docker/#stop-the-seafile-ce","title":"Stop the Seafile CE","text":"
docker compose down\n
Tip
To ensure data security, it is recommended that you back up your MySQL data
"},{"location":"setup/migrate_ce_to_pro_with_docker/#put-your-licence-file","title":"Put your licence file","text":"
Copy seafile-license.txt to the volume directory of the Seafile CE's data. If the directory is /opt/seafile-data, you should put it in /opt/seafile-data/seafile/.
"},{"location":"setup/migrate_ce_to_pro_with_docker/#modify-the-new-seafile-serveryml-and-env","title":"Modify the new seafile-server.yml and .env","text":"
Modify .env based on the old configurations from the old .env file. The following fields deserve special attention; others should be the same as the old configurations:
Variable Description Default Value SEAFILE_IMAGE The Seafile Pro docker image, whose tag must be equal to or newer than the old Seafile CE docker tag seafileltd/seafile-pro-mc:13.0-latest SEAFILE_ELASTICSEARCH_VOLUME The volume directory of Elasticsearch data /opt/seafile-elasticsearch/data
Other fields (e.g., SEAFILE_VOLUME, SEAFILE_MYSQL_VOLUME, SEAFILE_MYSQL_DB_USER, SEAFILE_MYSQL_DB_PASSWORD) must be consistent with the old configurations.
Tip
The configurations used for initialization (e.g., INIT_SEAFILE_ADMIN_EMAIL, INIT_SEAFILE_MYSQL_ROOT_PASSWORD) can be removed from .env as well
"},{"location":"setup/migrate_ce_to_pro_with_docker/#replace-seafile-serveryml-and-env","title":"Replace seafile-server.yml and .env","text":"
Replace the old seafile-server.yml and .env with the new, modified files, i.e. (if your old seafile-server.yml and .env are in /opt)
Add the [INDEX FILES] section to /opt/seafile-data/seafile/conf/seafevents.conf manually:
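A minimal [INDEX FILES] section could look like the following sketch (the values shown are typical defaults; adjust the indexing interval to your needs):

```ini
[INDEX FILES]
enabled = true
interval = 10m
```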
Additional system resource requirements
Seafile PE docker requires a minimum of 4 cores and 4GB RAM because Elasticsearch is deployed alongside it. If you do not have enough system resources, you can use an alternative search engine, SeaSearch, a more lightweight search engine built on the open source search engine ZincSearch, as the indexer.
Run the following command to start the Seafile Pro container:
docker compose up -d\n
Now you have a Seafile Professional service.
"},{"location":"setup/migrate_non_docker_to_docker/","title":"Migrate from non-docker Seafile deployment to docker","text":"
Note
This document describes migration on a single node. If you are using a Seafile cluster, you have to perform the following operations (except migrating the database) on all nodes.
Normally, we recommend that you perform the migration on two different machines, following this document. If you decide to perform the operation on the same machine, please pay attention to the corresponding tips in the document.
The recommended steps to migrate from non-docker deployment to docker deployment on two different machines are:
Upgrade your Seafile server to the latest version.
Shut down Seafile, Nginx and Memcached according to your situation.
Back up the MySQL database and Seafile library data.
Deploy the Seafile Docker in the new machine.
Recover the Seafile libraries and MySQL database in the new machine.
Start Seafile Docker and shut down the old MySQL (or MariaDB) service according to your situation.
"},{"location":"setup/migrate_non_docker_to_docker/#upgrade-your-seafile-server","title":"Upgrade your Seafile server","text":"
You have to upgrade the binary-package deployment to the latest version before the migration, and ensure that the system is running normally.
Tip
If you are running a very old version of Seafile, you can follow the FAQ item to migrate to the latest version.
"},{"location":"setup/migrate_non_docker_to_docker/#backup-mysql-database-and-seafile-server","title":"Backup MySQL database and Seafile server","text":"
Please follow here to backup:
Backing up MySQL databases
Backing up Seafile library data
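As a sketch, the two backup steps could look like this (the database names are the defaults; the /backup directory and the Seafile installation path are assumptions, adjust them to your deployment):

```
# Dump the three Seafile databases (default names)
mysqldump -u root -p --opt ccnet_db > /backup/ccnet_db.sql
mysqldump -u root -p --opt seafile_db > /backup/seafile_db.sql
mysqldump -u root -p --opt seahub_db > /backup/seahub_db.sql

# Copy the library data (assumed installation path)
cp -a /opt/seafile/seafile-data /backup/data/
```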
"},{"location":"setup/migrate_non_docker_to_docker/#deploy-the-seafile-docker","title":"Deploy the Seafile Docker","text":"
You can follow here to deploy Seafile with Docker, please use your old configurations when modifying .env, and make sure the Seafile server is running normally after deployment.
Use external MySQL service or the old MySQL service
This document describes migrating from the non-Docker version to the Docker version of Seafile between two different machines. We suggest using the Docker Compose MariaDB service (version 10.11 by default) as the database service after migration. If you would like to use an existing MySQL service, which is usually the case when you migrate on the same host or when the old MySQL service is a dependency of other services, you have to follow here to deploy Seafile.
"},{"location":"setup/migrate_non_docker_to_docker/#recovery-libraries-data-for-seafile-docker","title":"Recovery libraries data for Seafile Docker","text":"
Firstly, you should stop the Seafile server before recovering Seafile libraries data:
docker compose down\n
Then recover the data from the backup files:
cp /backup/data/* /opt/seafile-data/seafile\n
"},{"location":"setup/migrate_non_docker_to_docker/#recover-the-database-only-for-the-new-mysql-service-used-in-seafile-docker","title":"Recover the Database (only for the new MySQL service used in Seafile docker)","text":"
Start the database service only:
docker compose up -d --no-deps db\n
Follow here to recover the database data.
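As a sketch for one database (the container name seafile-mysql and the backup path are the defaults used above; repeat for seafile_db and seahub_db):

```
# copy the dump into the database container, then load it
docker cp /backup/ccnet_db.sql seafile-mysql:/tmp/
docker exec -it seafile-mysql /bin/bash
mysql -u root -p ccnet_db < /tmp/ccnet_db.sql   # run inside the container
```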
Exit the container and stop the MariaDB service:
docker compose down\n
"},{"location":"setup/migrate_non_docker_to_docker/#restart-the-services","title":"Restart the services","text":"
Finally, the migration is complete. You can start the Docker-based Seafile server by restarting the service:
docker compose up -d\n
By the way, you can shut down the old MySQL service if it is not a dependency of other services.
Add restart: unless-stopped to the service definition, and the Seafile container will start automatically when Docker starts. If the Seafile container has been removed (e.g., after docker compose down), it will not start automatically.
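In seafile-server.yml, the restart policy is set on the service itself, e.g. (a fragment; only the restart line is added, everything else stays as is):

```yaml
services:
  seafile:
    # ... existing settings unchanged ...
    restart: unless-stopped
```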
"},{"location":"setup/setup_ce_by_docker/","title":"Installation of Seafile Server Community Edition with Docker","text":""},{"location":"setup/setup_ce_by_docker/#system-requirements","title":"System requirements","text":"
Please refer here for system requirements about Seafile CE. In general, we recommend that you have at least 2G RAM and a 2-core CPU (> 2GHz).
The following assumptions and conventions are used in the rest of this document:
/opt/seafile is the directory for storing the Seafile Docker Compose files. If you decide to put Seafile in a different directory (which you can), adjust all paths accordingly.
Seafile uses two Docker volumes for persisting data generated in its database and Seafile Docker container. The volumes' host paths are /opt/seafile-mysql and /opt/seafile-data, respectively. It is not recommended to change these paths. If you do, account for it when following these instructions.
All configuration and log files for Seafile and the webserver Nginx are stored in the volume of the Seafile container.
Variable Description Default Value SEAFILE_VOLUME The volume directory of Seafile data /opt/seafile-dataSEAFILE_MYSQL_VOLUME The volume directory of MySQL data /opt/seafile-mysql/dbSEAFILE_CADDY_VOLUME The volume directory of Caddy data used to store certificates obtained from Let's Encrypt's /opt/seafile-caddyINIT_SEAFILE_MYSQL_ROOT_PASSWORD The root password of MySQL (Only required on first deployment) SEAFILE_MYSQL_DB_HOST The host of MySQL dbSEAFILE_MYSQL_DB_PORT The port of MySQL 3306SEAFILE_MYSQL_DB_USER The user of MySQL (database - user can be found in conf/seafile.conf) seafileSEAFILE_MYSQL_DB_PASSWORD The user seafile password of MySQL (required) SEAFILE_MYSQL_DB_CCNET_DB_NAME The database name of ccnet ccnet_dbSEAFILE_MYSQL_DB_SEAFILE_DB_NAME The database name of seafile seafile_dbSEAFILE_MYSQL_DB_SEAHUB_DB_NAME The database name of seahub seahub_dbJWT_PRIVATE_KEY JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters is required for Seafile, which can be generated by using pwgen -s 40 1 (required) SEAFILE_SERVER_HOSTNAME Seafile server hostname or domain (required) SEAFILE_SERVER_PROTOCOL Seafile server protocol (http or https) httpCACHE_PROVIDER The type of cache server used for Seafile. The available options are redis and memcached. Since Seafile 13, it is recommended to use redis as the cache service to support new features, and memcached will no longer be integrated into Seafile Docker by default. 
redisREDIS_HOST Redis server host redisREDIS_PORT Redis server port 6379REDIS_PASSWORD Redis server password (none) MEMCACHED_HOST Memcached server host memcachedMEMCACHED_PORT Memcached server port 11211TIME_ZONE Time zone UTCNOTIFICATION_SERVER_URL The notification server url, leave blank to disable it (none) INIT_SEAFILE_ADMIN_EMAIL Admin username me@example.com (Recommend modifications) INIT_SEAFILE_ADMIN_PASSWORD Admin password asecret (Recommend modifications) NON_ROOT Run Seafile container without a root user false"},{"location":"setup/setup_ce_by_docker/#start-seafile-server","title":"Start Seafile server","text":"
Start Seafile server with the following command
docker compose up -d\n
ERROR: Named volume \"xxx\" is used in service \"xxx\" but no declaration was found in the volumes section
You may encounter this problem when your Docker (or docker-compose) version is out of date. You can upgrade or reinstall the Docker service to solve this problem according to the Docker official documentation.
Note
You must run the above command in the directory with the .env. If .env file is elsewhere, please run
docker compose -f /path/to/.env up -d\n
Success
After starting the services, you can see the initialization progress by tracing the logs of container seafile (i.e., docker logs seafile -f)
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n----------------------------------------\nNow creating seafevents database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating ccnet database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seafile database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seahub database tables ...\n\n----------------------------------------\n\ncreating seafile-server-latest symbolic link ... done\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n
And then you can see the following messages which the Seafile server starts successfully:
Starting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\n----------------------------------------\nSuccessfully created seafile admin\n----------------------------------------\n\nSeahub is started\n\nDone.\n
Finally, you can go to http://seafile.example.com to use Seafile.
Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container, in our case we keep various log files and upload directory outside. This allows you to rebuild containers easily without losing important information.
/opt/seafile-data/seafile: This is the directory for seafile server configuration and data.
/opt/seafile-data/seafile/logs: This is the directory that would contain the log files of seafile server processes. For example, you can find seaf-server logs in /opt/seafile-data/seafile/logs/seafile.log.
/opt/seafile-data/logs: This is the directory for operating system and Nginx logs.
/opt/seafile-data/logs/var-log: This is the directory that would be mounted as /var/log inside the container. /opt/seafile-data/logs/var-log/nginx contains the logs of Nginx in the Seafile container.
To monitor container logs (from outside of the container), please use the following commands:
# if the `.env` file is in current directory:\ndocker compose logs --follow\n# if the `.env` file is elsewhere:\ndocker compose -f /path/to/.env logs --follow\n\n# you can also specify container name:\ndocker compose logs seafile --follow\n# or, if the `.env` file is elsewhere:\ndocker compose -f /path/to/.env logs seafile --follow\n
The Seafile logs are under /shared/logs/seafile inside the container, or /opt/seafile-data/logs/seafile on the host that runs the container.
The system logs are under /shared/logs/var-log inside the container, or /opt/seafile-data/logs/var-log on the host that runs the container.
To monitor all Seafile logs simultaneously (from outside of the container), run
sudo tail -f $(find /opt/seafile-data/ -type f -name "*.log" 2>/dev/null)\n
When files are deleted, the blocks comprising those files are not immediately removed, as there may be other files that reference those blocks (due to the magic of deduplication). To remove them, Seafile requires a 'garbage collection' process to be run, which detects which blocks are no longer used and purges them.
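For the Docker deployment, garbage collection can be run with the bundled seaf-gc.sh script (a sketch; the path below is the standard server directory inside the container, verify it in your deployment, and stop the Seafile service before running GC in production):

```
docker exec seafile /opt/seafile/seafile-server-latest/seaf-gc.sh
```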
"},{"location":"setup/setup_ce_by_docker/#faq","title":"FAQ","text":""},{"location":"setup/setup_ce_by_docker/#seafile-service-and-container-maintenance","title":"Seafile service and container maintenance","text":"
Q: If I want to enter the Docker container, which command can I use?
A: You can enter the Docker container using the command:
docker exec -it seafile /bin/bash\n
Q: I forgot the Seafile admin email address/password, how do I create a new admin account?
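One way to create a new admin account is the reset-admin.sh helper bundled with the Seafile server (a sketch; verify the container name and server path in your deployment):

```
docker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh
```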
A: Seafile uses a cache to improve performance in many situations. The cached content includes, but is not limited to, user session information, avatars, profiles, records from the database, etc. Since Seafile Docker 13, Redis is the default cache server to support the new features (please refer to the upgrade notes). It is integrated into Seafile Docker 13 and can be configured directly via environment variables in .env (no additional settings are required by default).
Q: Is the Redis integrated in Seafile Docker safe? Does it have an access password?
A: Although the Redis integrated by Seafile Docker does not have a password set by default, it can only be accessed through the Docker private network and will not expose the service port externally. Of course, you can also set a password for it if necessary. You can set REDIS_PASSWORD in .env and remove the following comment markers in seafile-server.yml to set the integrated Redis' password:
services:\n ...\n redis:\n image: ${SEAFILE_REDIS_IMAGE:-redis}\n container_name: seafile-redis\n # remove the following comment markers\n command:\n - /bin/sh\n - -c\n - redis-server --requirepass \"$${REDIS_PASSWORD:?Variable is not set or empty}\"\n networks:\n - seafile-net\n ...\n
Q: For some reason, I still have to use Memcached as my cache server. How can I do this?
A: If you still want to use Memcached (no longer provided since Seafile Docker 13), just follow the steps below:
Set CACHE_PROVIDER to memcached and modify MEMCACHED_xxx in .env
Remove the redis part and the redis dependency in the seafile service section in seafile-server.yml.
By the way, you can change the cache server after the service has started (by setting environment variables in .env), but the corresponding configuration files (e.g., seahub_settings.py, seafile.conf and seafevents.conf) will not be updated automatically. To avoid ambiguity, we recommend that you also update these configuration files.
"},{"location":"setup/setup_pro_by_docker/","title":"Installation of Seafile Server Professional Edition with Docker","text":"
This manual explains how to deploy and run Seafile Server Professional Edition (Seafile PE) on a Linux server using Docker and Docker Compose. The deployment has been tested for Debian/Ubuntu and CentOS, but Seafile PE should also work on other Linux distributions.
Please refer here for system requirements about Seafile PE. In general, we recommend that you have at least 4G RAM and a 4-core CPU (> 2GHz).
About license
Seafile PE can be used without a paid license with up to three users. Licenses for more users can be purchased in the Seafile Customer Center, or contact Seafile Sales at sales@seafile.com. For further details, please refer to the license page of Seafile PE.
The following assumptions and conventions are used in the rest of this document:
/opt/seafile is the directory for storing the Seafile Docker files. If you decide to put Seafile in a different directory, adjust all paths accordingly.
Seafile uses two Docker volumes for persisting data generated in its database and Seafile Docker container. The volumes' host paths are /opt/seafile-mysql and /opt/seafile-data, respectively. It is not recommended to change these paths. If you do, account for it when following these instructions.
All configuration and log files for Seafile and the webserver Nginx are stored in the volume of the Seafile container.
Use the official installation guide for your OS to install Docker.
"},{"location":"setup/setup_pro_by_docker/#downloading-the-seafile-image","title":"Downloading the Seafile Image","text":"
Success
Since v12.0, Seafile PE images are hosted on Docker Hub and do not require a username and password to download. Older Seafile PE versions (back to Seafile 7.0) are available in a private Docker repository. You can get the username and password on the download page in the Customer Center.
Variable Description Default Value SEAFILE_VOLUME The volume directory of Seafile data /opt/seafile-dataSEAFILE_MYSQL_VOLUME The volume directory of MySQL data /opt/seafile-mysql/dbSEAFILE_CADDY_VOLUME The volume directory of Caddy data used to store certificates obtained from Let's Encrypt's /opt/seafile-caddySEAFILE_ELASTICSEARCH_VOLUME The volume directory of Elasticsearch data /opt/seafile-elasticsearch/dataINIT_SEAFILE_MYSQL_ROOT_PASSWORD The root password of MySQL (Only required on first deployment) SEAFILE_MYSQL_DB_HOST The host of MySQL dbSEAFILE_MYSQL_DB_PORT The port of MySQL 3306SEAFILE_MYSQL_DB_USER The user of MySQL (database - user can be found in conf/seafile.conf) seafileSEAFILE_MYSQL_DB_PASSWORD The user seafile password of MySQL (required) SEAFILE_MYSQL_DB_CCNET_DB_NAME The database name of ccnet ccnet_dbSEAFILE_MYSQL_DB_SEAFILE_DB_NAME The database name of seafile seafile_dbSEAFILE_MYSQL_DB_SEAHUB_DB_NAME The database name of seahub seahub_dbJWT_PRIVATE_KEY JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters is required for Seafile, which can be generated by using pwgen -s 40 1 (required) SEAFILE_SERVER_HOSTNAME Seafile server hostname or domain (required) SEAFILE_SERVER_PROTOCOL Seafile server protocol (http or https) httpCACHE_PROVIDER The type of cache server used for Seafile. The available options are redis and memcached. Since Seafile 13, it is recommended to use redis as the cache service to support new features, and memcached will no longer be integrated into Seafile Docker by default. 
redisREDIS_HOST Redis server host redisREDIS_PORT Redis server port 6379REDIS_PASSWORD Redis server password (none) MEMCACHED_HOST Memcached server host memcachedMEMCACHED_PORT Memcached server port 11211TIME_ZONE Time zone UTCINIT_SEAFILE_ADMIN_EMAIL Synchronously set admin username during initialization me@example.com INIT_SEAFILE_ADMIN_PASSWORD Synchronously set admin password during initialization asecret SEAF_SERVER_STORAGE_TYPE What kind of the Seafile data for storage. Available options are disk (i.e., local disk), s3 and multiple (see the details of multiple storage backends) diskS3_COMMIT_BUCKET S3 storage backend commit objects bucket (required when SEAF_SERVER_STORAGE_TYPE=s3) S3_FS_BUCKET S3 storage backend fs objects bucket (required when SEAF_SERVER_STORAGE_TYPE=s3) S3_BLOCK_BUCKET S3 storage backend block objects bucket (required when SEAF_SERVER_STORAGE_TYPE=s3) S3_KEY_ID S3 storage backend key ID (required when SEAF_SERVER_STORAGE_TYPE=s3) S3_SECRET_KEY S3 storage backend secret key (required when SEAF_SERVER_STORAGE_TYPE=s3) S3_AWS_REGION Region of your buckets us-east-1S3_HOST Host of your buckets (required when not use AWS) S3_USE_HTTPS Use HTTPS connections to S3 if enabled trueS3_USE_V4_SIGNATURE Use the v4 protocol of S3 if enabled trueS3_PATH_STYLE_REQUEST This option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is in virtual host style, such as https://bucketname.s3.amazonaws.com/object. But this style relies on advanced DNS server setup. So most self-hosted storage systems only implement the path style format. falseS3_SSE_C_KEY A string of 32 characters can be generated by openssl rand -base64 24. It can be any 32-character long random string. It's required to use V4 authentication protocol and https if you enable SSE-C. 
(none) NOTIFICATION_SERVER_URL The notification server url, leave blank to disable it (none) NON_ROOT Run Seafile container without a root user false
Easier to configure S3 for Seafile and its components
Since Seafile Pro 13.0, to make it easier to deploy Seafile's related extension components and other services in the future, a section is provided in .env to store the S3 configurations for Seafile and some extension components (such as SeaSearch and the metadata server). You can locate it by its title bar, Storage configurations for S3.
S3 configurations in .env only support single S3 storage backend mode
The Seafile server only supports configuring S3 in .env for single S3 storage backend mode (i.e., when SEAF_SERVER_STORAGE_TYPE=s3). If you would like to use another storage backend (e.g., Ceph, Swift) or other settings that can only be made in seafile.conf (like multiple storage backends), please set SEAF_SERVER_STORAGE_TYPE to multiple, and set MD_STORAGE_TYPE and SS_STORAGE_TYPE according to your configuration.
To conclude, set the directory permissions of the Elasticsearch volume:
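For example (the host path is the default SEAFILE_ELASTICSEARCH_VOLUME; adjust it if you changed the value in .env):

```
mkdir -p /opt/seafile-elasticsearch/data
chmod -R 777 /opt/seafile-elasticsearch/data
```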
"},{"location":"setup/setup_pro_by_docker/#starting-the-docker-containers","title":"Starting the Docker Containers","text":"
Run docker compose in detached mode:
docker compose up -d\n
ERROR: Named volume \"xxx\" is used in service \"xxx\" but no declaration was found in the volumes section
You may encounter this problem when your Docker (or docker-compose) version is out of date. You can upgrade or reinstall the Docker service to solve this problem according to the Docker official documentation.
Note
You must run the above command in the directory with the .env. If .env file is elsewhere, please run
docker compose -f /path/to/.env up -d\n
Success
After starting the services, you can see the initialization progress by tracing the logs of container seafile (i.e., docker logs seafile -f)
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n----------------------------------------\nNow creating seafevents database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating ccnet database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seafile database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seahub database tables ...\n\n----------------------------------------\n\ncreating seafile-server-latest symbolic link ... done\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n
And then you can see the following messages which the Seafile server starts successfully:
Starting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\n----------------------------------------\nSuccessfully created seafile admin\n----------------------------------------\n\nSeahub is started\n\nDone.\n
Finally, you can go to http://seafile.example.com to use Seafile.
A 502 Bad Gateway error means that the system has not yet completed initialization.
To view Seafile docker logs, please use the following command
docker compose logs -f\n
The Seafile logs are under /shared/logs/seafile inside the container, or /opt/seafile-data/logs/seafile on the host that runs the container.
The system logs are under /shared/logs/var-log inside the container, or /opt/seafile-data/logs/var-log on the host that runs the container.
"},{"location":"setup/setup_pro_by_docker/#activating-the-seafile-license","title":"Activating the Seafile License","text":"
If you have a seafile-license.txt license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile-data. If you have modified the path, save the license file under your custom path.
If the license file has a different name or cannot be read, Seafile server will start in trial mode, limited to at most three users.
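As a sketch (the volume path is the default; adjust it if you changed SEAFILE_VOLUME in .env, and restart so the license is picked up):

```
cp seafile-license.txt /opt/seafile-data/seafile/
docker compose restart
```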
Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container, in our case we keep various log files and upload directory outside. This allows you to rebuild containers easily without losing important information.
/opt/seafile-data/seafile: This is the directory for seafile server configuration, logs and data.
/opt/seafile-data/seafile/logs: This is the directory that would contain the log files of seafile server processes. For example, you can find seaf-server logs in /opt/seafile-data/seafile/logs/seafile.log.
/opt/seafile-data/logs: This is the directory for operating system and Nginx logs.
/opt/seafile-data/logs/var-log: This is the directory that would be mounted as /var/log inside the container. For example, you can find the nginx logs in /opt/seafile-data/logs/var-log/nginx/.
"},{"location":"setup/setup_pro_by_docker/#reviewing-the-deployment","title":"Reviewing the Deployment","text":"
The command docker container list should list the containers specified in the .env.
The directory layout of the Seafile container's volume should look as follows:
When files are deleted, the blocks comprising those files are not immediately removed, as there may be other files that reference those blocks (due to the magic of deduplication). To remove them, Seafile requires a 'garbage collection' process to be run, which detects which blocks are no longer used and purges them.
"},{"location":"setup/setup_pro_by_docker/#faq","title":"FAQ","text":""},{"location":"setup/setup_pro_by_docker/#seafile-service-and-container-maintenance","title":"Seafile service and container maintenance","text":"
Q: If I want to enter the Docker container, which command can I use?
A: You can enter the Docker container using the command:
docker exec -it seafile /bin/bash\n
Q: I forgot the Seafile admin email address/password, how do I create a new admin account?
A: Seafile uses a cache to improve performance in many situations. The cached content includes, but is not limited to, user session information, avatars, profiles, records from the database, etc. Since Seafile Docker 13, Redis is the default cache server to support the new features (please refer to the upgrade notes). It is integrated into Seafile Docker 13 and can be configured directly via environment variables in .env (no additional settings are required by default).
Q: Is the Redis integrated in Seafile Docker safe? Does it have an access password?
A: Although the Redis integrated by Seafile Docker does not have a password set by default, it can only be accessed through the Docker private network and will not expose the service port externally. Of course, you can also set a password for it if necessary. You can set REDIS_PASSWORD in .env and remove the following comment markers in seafile-server.yml to set the integrated Redis' password:
services:\n ...\n redis:\n image: ${SEAFILE_REDIS_IMAGE:-redis}\n container_name: seafile-redis\n # remove the following comment markers\n command:\n - /bin/sh\n - -c\n - redis-server --requirepass \"$${REDIS_PASSWORD:?Variable is not set or empty}\"\n networks:\n - seafile-net\n ...\n
Q: For some reason, I still have to use Memcached as my cache server. How can I do this?
A: If you still want to use Memcached (no longer provided since Seafile Docker 13), just follow the steps below:
Set CACHE_PROVIDER to memcached and modify MEMCACHED_xxx in .env
Remove the redis part and the redis dependency in the seafile service section in seafile-server.yml.
By the way, you can change the cache server after the service has started (by setting environment variables in .env), but the corresponding configuration files (e.g., seahub_settings.py, seafile.conf and seafevents.conf) will not be updated automatically. To avoid ambiguity, we recommend that you also update these configuration files.
"},{"location":"setup/setup_with_an_existing_mysql_server/","title":"Deploy with an existing MySQL server","text":"
The entire db service needs to be removed (or commented out) in seafile-server.yml if you would like to use an existing MySQL server; otherwise a redundant database service will be running
services:\n\n # comment out or remove the entire `db` service\n #db:\n #image: ${SEAFILE_DB_IMAGE:-mariadb:10.11}\n #container_name: seafile-mysql\n # ... other parts in service `db`\n\n # do not change other services\n...\n
What's more, you have to modify .env to correctly set the MySQL-related fields:
SEAFILE_MYSQL_DB_HOST=192.168.0.2\nSEAFILE_MYSQL_DB_PORT=3306\nINIT_SEAFILE_MYSQL_ROOT_PASSWORD=ROOT_PASSWORD\nSEAFILE_MYSQL_DB_USER=seafile # the user name of the user you like to use for Seafile server\nSEAFILE_MYSQL_DB_PASSWORD=PASSWORD # the password of the user you like to use for Seafile server\n
Tip
INIT_SEAFILE_MYSQL_ROOT_PASSWORD is only needed during installation (i.e., the first deployment). After Seafile is installed, the user seafile will be used to connect to the MySQL server (with SEAFILE_MYSQL_DB_PASSWORD), and you can then remove INIT_SEAFILE_MYSQL_ROOT_PASSWORD.
"},{"location":"setup/setup_with_ceph/","title":"Setup With Ceph","text":"
Ceph is a scalable distributed storage system. It's recommended to use Ceph's S3 Gateway (RGW) to integrate with Seafile. Seafile can also use Ceph's RADOS object storage layer as a storage backend, but using RADOS requires linking with the librados library, which may introduce library incompatibility issues during deployment. Furthermore, the S3 Gateway provides an easier-to-manage HTTP-based interface. If you want to integrate with the S3 gateway, please refer to the "Use S3-compatible Object Storage" section in this documentation. The documentation below is for integrating with RADOS.
"},{"location":"setup/setup_with_ceph/#copy-ceph-conf-file-and-client-keyring","title":"Copy ceph conf file and client keyring","text":"
Seafile acts as a client to Ceph/RADOS, so it needs to access ceph cluster's conf file and keyring. You have to copy these files from a ceph admin node's /etc/ceph directory to the seafile machine.
Since version 8.0, Seafile bundles librados from Ceph 16. On some systems you may find that Seafile fails to connect to your Ceph cluster. In such cases, you can usually solve it by removing the bundled librados libraries and using the ones installed in the OS.
To do this, you have to remove a few bundled libraries:
cd seafile-server-latest/seafile/lib\nrm librados.so.2 libstdc++.so.6 libnspr4.so\n
The above configuration will use the default (client.admin) user to connect to Ceph. You may want to use some other Ceph user to connect. This is supported in Seafile. To specify the Ceph user, you have to add a ceph_client_id option to seafile.conf, as follows:
[block_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Specify Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-blocks\n\n[commit_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Specify Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-commits\n\n[fs_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Specify Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-fs\n\n# Memcached or Redis configs\n......\n
You can create a ceph user for seafile on your ceph cluster like this:
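A sketch of such a command (the user and pool names follow the seafile.conf example above; adjust the capabilities to your cluster's policy):

```
ceph auth get-or-create client.seafile \
  mon 'allow r' \
  osd 'allow rwx pool=seafile-blocks, allow rwx pool=seafile-commits, allow rwx pool=seafile-fs'
```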
As Seafile server versions before 6.3 don't support multiple storage classes, you have to explicitly enable this new feature and define storage classes with a different syntax than the one previously used to define storage backends.
By default, Seafile does not enable multiple storage classes. So you have to create a configuration file for the storage classes, then specify it and enable the feature in seafile.conf:
Create the storage classes file:
nano /opt/seafile-data/seafile/conf\n
For an example of this file, please refer to the next section.
enable_storage_classes: If this is set to true, the storage class feature is enabled. You must define the storage classes in a JSON file provided in the next configuration option.
storage_classes_file: Specifies the path of the JSON file that contains the storage class definitions.
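The two options above go into seafile.conf, e.g. (a fragment; the [storage] section name follows the multiple-storage-backend feature, and the file name storage_classes.json is an assumption, use whatever name you created):

```ini
[storage]
enable_storage_classes = true
storage_classes_file = /shared/conf/storage_classes.json
```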
Tip
Make sure you have added memory cache configurations to seafile.conf
Due to Docker's persistence strategy, the path of storage_classes_file inside the Seafile container usually differs from the host path, so we suggest you put this file into Seafile's configuration directory and use /shared/conf instead of /opt/seafile-data/seafile/conf. Otherwise you have to add another persistent volume mapping in seafile-server.yml. If your Seafile server is not deployed with Docker, we still suggest you put this file into the Seafile configuration directory.
"},{"location":"setup/setup_with_multiple_storage_backends/#exmaple-of-storage-classes-file","title":"Example of storage classes file","text":"
The storage classes JSON file is an array of objects, each of which defines a storage class. The fields in each definition correspond to the information we need to specify for a storage class:
| Variable | Description |
| --- | --- |
| storage_id | A unique internal string ID used to identify the storage class. It is not visible to users. For example, "primary storage". |
| name | A user-visible name for the storage class. |
| is_default | Indicates whether this storage class is the default one. |
| commits | The storage used for storing commit objects for this class. |
| fs | The storage used for storing fs objects for this class. |
| blocks | The storage used for storing block objects for this class. |
Note
is_default is effective in two cases:
When a user can choose a storage class for a library but does not choose one, this storage class is used;
For other mapping policies, this option only takes effect for libraries that existed before the multiple storage backend feature was enabled; those libraries are automatically mapped to the default storage backend.
commits, fs, and blocks can be stored in different storages. This provides the most flexible way to define storage classes (e.g., a file system, Ceph, or S3).
Here is an example, which uses local file system, S3 (default), Swift and Ceph at the same time.
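A shortened sketch of such a file, showing a local file system class alongside a default S3 class (the field names follow the table above; the backend-specific options mirror the corresponding seafile.conf options, and all IDs, paths, and keys here are placeholders):

```json
[
  {
    "storage_id": "local_storage",
    "name": "Local Storage",
    "is_default": false,
    "commits": {"backend": "fs", "dir": "/opt/seafile/seafile-data"},
    "fs": {"backend": "fs", "dir": "/opt/seafile/seafile-data"},
    "blocks": {"backend": "fs", "dir": "/opt/seafile/seafile-data"}
  },
  {
    "storage_id": "s3_storage",
    "name": "S3 Storage",
    "is_default": true,
    "commits": {"backend": "s3", "bucket": "my-commit-objects", "key_id": "your-key-id", "key": "your-secret-key"},
    "fs": {"backend": "s3", "bucket": "my-fs-objects", "key_id": "your-key-id", "key": "your-secret-key"},
    "blocks": {"backend": "s3", "bucket": "my-block-objects", "key_id": "your-key-id", "key": "your-secret-key"}
  }
]
```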
As you may have seen, the commits, fs and blocks information syntax is similar to what is used in [commit_object_backend], [fs_object_backend] and [block_backend] section of seafile.conf for a single backend storage. You can refer to the detailed syntax in the documentation for the storage you use (e.g., S3 Storage for S3).
If you use file system as storage for fs, commits or blocks, you must explicitly provide the path for the seafile-data directory. The objects will be stored in storage/commits, storage/fs, storage/blocks under this path.
Library mapping policies decide the storage class a library uses. Currently we provide 3 policies for 3 different use cases:
User Chosen
Role-based Mapping
Library ID Based Mapping
The storage class of a library is decided on creation and stored in a database table. The storage class of a library won't change if the mapping policy is changed later.
Before choosing your mapping policy, you need to enable the storage classes feature in seahub_settings.py:
This policy lets the users choose which storage class to use when creating a new library. The users can select any storage class that's been defined in the JSON file.
To use this policy, add following options in seahub_settings.py:
STORAGE_CLASS_MAPPING_POLICY = 'USER_SELECT'\n
If you enable storage class support but don't explicitly set STORAGE_CLASS_MAPPING_POLICY in seahub_settings.py, this policy is used by default.
Due to storage cost or management considerations, a system admin sometimes wants different types of users to use different storage backends (or classes). You can configure a user's storage classes based on their role.
A new option storage_ids is added to the role configuration in seahub_settings.py to assign storage classes to each role. If only one storage class is assigned to a role, users with this role cannot choose a storage class for their libraries; if more than one class is assigned, they can choose among them. If no storage class is assigned to a role, the default class specified in the JSON file is used.
Here are the sample options in seahub_settings.py to use this policy:
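A minimal sketch of those options (the storage IDs are examples and must match storage_id values defined in your storage classes JSON file; the exact keys of ENABLED_ROLE_PERMISSIONS depend on your existing role configuration):

```python
# seahub_settings.py -- role-based storage class mapping (sketch)
ENABLE_STORAGE_CLASSES = True
STORAGE_CLASS_MAPPING_POLICY = 'ROLE_BASED'

ENABLED_ROLE_PERMISSIONS = {
    'default': {
        # two classes assigned: users with this role may choose between them
        'storage_ids': ['primary_storage', 'archive_storage'],
    },
    'guest': {
        # only one class assigned: no choice is offered
        'storage_ids': ['primary_storage'],
    },
}
```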
"},{"location":"setup/setup_with_multiple_storage_backends/#library-id-based-mapping","title":"Library ID Based Mapping","text":"
This policy maps libraries to storage classes based on their library IDs. The ID of a library is a UUID, so the data in the system can be evenly distributed among the storage classes.
Note
This policy is not designed to be a complete distributed storage solution. It doesn't handle automatic migration of library data between storage classes. If you add more storage classes to the configuration, existing libraries stay in their original storage classes, while new libraries can be distributed among the new storage classes (backends). You still have to plan the total storage capacity of your system at the beginning.
To use this policy, first add the following options in seahub_settings.py:
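A sketch of those options (the policy name is an assumption based on Seafile's usual naming; verify it against your Seafile version):

```python
# seahub_settings.py -- library-ID-based storage class mapping (sketch)
ENABLE_STORAGE_CLASSES = True
STORAGE_CLASS_MAPPING_POLICY = 'REPO_ID_MAPPING'
```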
"},{"location":"setup/setup_with_multiple_storage_backends/#multiple-storage-backend-data-migration","title":"Multiple Storage Backend Data Migration","text":"
Migration from S3
Since version 11, when you migrate from S3 to other storage servers, you have to use the V4 authentication protocol. This is because version 11 upgraded to the Boto3 library, which fails to list objects from S3 when configured to use the V2 authentication protocol.
Run the migrate-repo.sh script to migrate library data between different storage backends.
destination_storage_id: the ID of the destination storage class
repo_id is optional; if not specified, all libraries will be migrated.
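A hedged usage sketch (the argument order is assumed from the parameter descriptions above; origin_storage_id names the storage class to migrate from):

```shell
# migrate one library; omit repo_id to migrate all libraries
./migrate-repo.sh [repo_id] origin_storage_id destination_storage_id
```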
Specify a path prefix
You can set the OBJECT_LIST_FILE_PATH environment variable to specify a path prefix for storing the migrated object lists before running the migration script.
For example:
export OBJECT_LIST_FILE_PATH=/opt/test\n
This will create three files in the specified path (/opt):
test_4c731e5c-f589-4eaa-889f-14c00d4893cb.fs
test_4c731e5c-f589-4eaa-889f-14c00d4893cb.commits
test_4c731e5c-f589-4eaa-889f-14c00d4893cb.blocks
Setting the OBJECT_LIST_FILE_PATH environment variable has two purposes:
If the migrated library is very large, you may need to run the migration script multiple times. Setting this environment variable lets the script skip previously migrated objects.
After the migration is complete, if you need to delete the objects in the original storage, you must have set this environment variable.
"},{"location":"setup/setup_with_multiple_storage_backends/#delete-all-objects-in-a-library-in-the-specified-storage-backend","title":"Delete All Objects In a Library In The Specified Storage Backend","text":"
Run the remove-objs.sh script to delete all objects of a library in the specified storage backend (you need to have set the OBJECT_LIST_FILE_PATH environment variable before the migration).
./remove-objs.sh repo_id storage_id\n
"},{"location":"setup/setup_with_s3/","title":"Setup With S3 Storage","text":"
From Seafile 13, there are two ways to configure S3 storage (single S3 storage backend) for Seafile server:
Environment variables (recommended since Seafile 13)
Config file (seafile.conf)
Setup note for binary packages deployment (Pro)
If your Seafile server is deployed from binary packages, you have to complete the following steps before deploying:
Install boto3 on your machine:
sudo pip install boto3\n
Install and configure memcached or Redis.
For best performance, Seafile requires a memory cache for objects. We recommend allocating at least 128MB of memory for memcached or Redis.
The configuration options differ for different S3 storage providers. We'll describe the configurations in separate sections. You also need to add memory cache configurations.
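A sketch of the memory-cache part of seafile.conf (the section and option names below are assumptions based on typical Seafile setups; the hosts are examples — use one of the two sections, not both):

```ini
# seafile.conf -- memory cache (sketch); choose ONE backend
[memcached]
memcached_options = --SERVER=127.0.0.1 --POOL-MIN=10 --POOL-MAX=100

# or, for Redis:
[redis]
redis_host = 127.0.0.1
redis_port = 6379
```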
Since Seafile 13, configuring S3 via environment variables is supported and provides a more convenient way. You can refer to the detailed description of this part in the introduction of the .env file. Generally,
Prepare at least 3 buckets for Seafile (S3_COMMIT_BUCKET, S3_FS_BUCKET and S3_BLOCK_BUCKET).
Set SEAF_SERVER_STORAGE_TYPE to s3
Fill in the corresponding variable values in .env according to the following table:
| Variable | Description | Default Value |
| --- | --- | --- |
| S3_COMMIT_BUCKET | S3 storage backend commit objects bucket | (required) |
| S3_FS_BUCKET | S3 storage backend fs objects bucket | (required) |
| S3_BLOCK_BUCKET | S3 storage backend block objects bucket | (required) |
| S3_KEY_ID | S3 storage backend key ID | (required) |
| S3_SECRET_KEY | S3 storage backend secret key | (required) |
| S3_AWS_REGION | Region of your buckets | us-east-1 |
| S3_HOST | Host of your buckets | (required when not using AWS) |
| S3_USE_HTTPS | Use HTTPS connections to S3 if enabled | true |
| S3_USE_V4_SIGNATURE | Use the v4 protocol of S3 if enabled | true |
| S3_PATH_STYLE_REQUEST | This option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is in virtual host style, such as https://bucketname.s3.amazonaws.com/object. But this style relies on advanced DNS server setup. So most self-hosted storage systems only implement the path style format. | false |
| S3_SSE_C_KEY | A 32-character random string, e.g. generated by openssl rand -base64 24. If you enable SSE-C, the V4 authentication protocol and https are required. | (none) |
Bucket naming conventions
Whether you are using AWS or any other S3-compatible object storage, we recommend that you follow the S3 naming rules. Before you create buckets on S3, please read the S3 naming rules first. In particular, do not use capital letters in bucket names (no camel-case naming such as MyCommitObjects).
Good naming of a bucket:
seafile-commit-object
seafile-fs-object
seafile-block-object
Bad naming of a bucket:
SeafileCommitObject
seafileFSObject
seafile block object
About S3_SSE_C_KEY
S3_SSE_C_KEY is a string of 32 characters.
You can generate sse_c_key with the following command. Note that the key doesn't have to be base64-encoded; it can be any 32-character random string. The example just shows one possible way to generate such a key.
openssl rand -base64 24\n
However, if you have existing data in your S3 bucket, turning on the above configuration will make that data inaccessible, because Seafile server doesn't support mixing encrypted and non-encrypted objects in the same bucket. You have to create a new bucket and migrate your data to it by following the storage backend migration documentation.
For other S3 support extensions
In addition to Seafile server, the following extensions (if already installed) will share the same S3 authorization information in .env with Seafile server:
SeaSearch: Enable the feature by specifying SS_STORAGE_TYPE=s3 and S3_SS_BUCKET
Metadata server: Enable the feature by specifying MD_STORAGE_TYPE=s3 and S3_MD_BUCKET
"},{"location":"setup/setup_with_s3/#example-configurations","title":"Example configurations","text":"AWSExoscaleHetznerOther Public Hosted S3 StorageSelf-hosted S3 Storage
SEAF_SERVER_STORAGE_TYPE=s3\nS3_COMMIT_BUCKET=my-commit-objects\nS3_FS_BUCKET=my-fs-objects\nS3_BLOCK_BUCKET=my-block-objects\nS3_KEY_ID=your-key-id\nS3_SECRET_KEY=your-secret-key\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=true\nS3_AWS_REGION=eu-central-1 # will be ignored when S3_HOST is specified\nS3_HOST=sos-de-fra-1.exo.io\nS3_USE_HTTPS=true\n
SEAF_SERVER_STORAGE_TYPE=s3\nS3_COMMIT_BUCKET=my-commit-objects\nS3_FS_BUCKET=my-fs-objects\nS3_BLOCK_BUCKET=my-block-objects\nS3_KEY_ID=your-key-id\nS3_SECRET_KEY=your-secret-key\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=true\nS3_AWS_REGION=eu-central-1 # will be ignored when S3_HOST is specified\nS3_HOST=fsn1.your-objectstorage.com\nS3_USE_HTTPS=true\n
There are other S3-compatible cloud storage providers on the market, such as Backblaze and Wasabi. Their configuration is just a bit different from AWS. We can't assure that the following configuration works for all providers; if you run into problems, please contact our support.
SEAF_SERVER_STORAGE_TYPE=s3\nS3_COMMIT_BUCKET=my-commit-objects\nS3_FS_BUCKET=my-fs-objects\nS3_BLOCK_BUCKET=my-block-objects\nS3_KEY_ID=your-key-id\nS3_SECRET_KEY=your-secret-key\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=true\nS3_AWS_REGION=eu-central-1 # will be ignored when S3_HOST is specified\nS3_HOST=<access endpoint for storage provider>\nS3_USE_HTTPS=true\n
Many self-hosted object storage systems are now compatible with the S3 API, such as OpenStack Swift, Ceph's RADOS Gateway and Minio. You can use these S3-compatible storage systems as backend for Seafile. Here is an example config:
SEAF_SERVER_STORAGE_TYPE=s3\nS3_COMMIT_BUCKET=my-commit-objects\nS3_FS_BUCKET=my-fs-objects\nS3_BLOCK_BUCKET=my-block-objects\nS3_KEY_ID=your-key-id\nS3_SECRET_KEY=your-secret-key\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=true\nS3_AWS_REGION=eu-central-1 # will be ignored when S3_HOST is specified\nS3_HOST=<your s3 api endpoint host>:<your s3 api endpoint port>\nS3_USE_HTTPS=true # according to your S3 configuration\n
"},{"location":"setup/setup_with_s3/#setup-with-config-file","title":"Setup with config file","text":"
Seafile configures S3 storage by adding or modifying the following section in seafile.conf:
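A minimal sketch of that section for AWS S3 (bucket names and credentials are placeholders; each option is described in the table below):

```ini
# seafile.conf -- single S3 backend (sketch)
[commit_object_backend]
name = s3
bucket = my-commit-objects
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
aws_region = us-east-1
use_https = true

# [fs_object_backend] and [block_backend] take the same options,
# each with its own bucket.
```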
Similar to the .env configuration, you have to create at least 3 buckets for Seafile, corresponding to the sections commit_object_backend, fs_object_backend and block_backend. For the options of each backend section, please refer to the following table:
| Variable | Description |
| --- | --- |
| bucket | Bucket name for commit, fs, or block objects. Make sure it follows the S3 naming rules (see the notes above). |
| key_id | Required to authenticate you to S3. You can find it in the "security credentials" section of your AWS account page or from your storage provider. |
| key | Required to authenticate you to S3. You can find it in the "security credentials" section of your AWS account page or from your storage provider. |
| use_v4_signature | There are two versions of authentication protocols for S3: Version 2 (older, may still be supported by some regions) and Version 4 (current, used by most regions). If you don't set this option, Seafile uses the v2 protocol. We suggest the v4 protocol. |
| use_https | Use https to connect to S3. Recommended. |
| aws_region | (Optional) If you use the v4 protocol with AWS S3, set this to the region you chose when creating the buckets. If unset with the v4 protocol, Seafile uses us-east-1 as the default. Ignored for the v2 protocol. |
| host | (Optional) The endpoint by which you access the storage service, usually starting with the region name. Required for providers other than AWS; otherwise Seafile uses AWS's address (i.e., s3.us-east-1.amazonaws.com). |
| sse_c_key | (Optional) A 32-character random string, e.g. generated by openssl rand -base64 24. Using it requires the V4 authentication protocol and https. |
| path_style_request | (Optional) Asks Seafile to use path-style URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is virtual-host style, such as https://bucketname.s3.amazonaws.com/object, but this style relies on advanced DNS setup, so most self-hosted storage systems only implement the path style. We recommend setting this option to true for self-hosted storage. |
"},{"location":"setup/setup_with_s3/#example-configurations_1","title":"Example configurations","text":"AWSExoscaleHetznerOther Public Hosted S3 StorageSelf-hosted S3 Storage
There are other S3-compatible cloud storage providers on the market, such as Backblaze and Wasabi. Their configuration is just a bit different from AWS. We can't assure that the following configuration works for all providers; if you run into problems, please contact our support.
[commit_object_backend]\nname = s3\nbucket = my-commit-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\n# v2 authentication protocol will be used if not set\nuse_v4_signature = true\n# required for v4 protocol. ignored for v2 protocol.\naws_region = <region name for storage provider>\nuse_https = true\n\n[fs_object_backend]\nname = s3\nbucket = my-fs-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = <region name for storage provider>\nuse_https = true\n\n[block_backend]\nname = s3\nbucket = my-block-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = <region name for storage provider>\nuse_https = true\n
Many self-hosted object storage systems are now compatible with the S3 API, such as OpenStack Swift, Ceph's RADOS Gateway and Minio. You can use these S3-compatible storage systems as backend for Seafile. Here is an example config:
Install and configure memcached or Redis. For best performance, Seafile requires a memory cache for objects. We recommend allocating at least 128MB of memory for memcached.
The above config is just an example. You should replace the options according to your own environment.
Seafile supports Swift with Keystone as the authentication mechanism. The auth_host option is the address and port of the Keystone service. The region option is used to select the publicURL; if you don't configure it, the first publicURL in the returned authentication information is used.
Seafile also supports Tempauth and Swauth since Professional Edition 6.2.1. The auth_ver option should be set to v1.0; tenant and region are no longer needed.
It's required to create separate containers for commit, fs, and block objects.
"},{"location":"setup/setup_with_swift/#use-https-connections-to-swift","title":"Use HTTPS connections to Swift","text":"
Since Pro 5.0.4, you can use HTTPS connections to Swift. Add the following options to seafile.conf:
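As a sketch (the option is assumed to go into each Swift backend section, mirroring the use_https option of the S3 backend):

```ini
# seafile.conf -- add to each Swift backend section
[commit_object_backend]
name = swift
# ... your existing Swift options ...
use_https = true
```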
Because the server package is built on CentOS 6, if you're using Debian/Ubuntu you have to copy the system CA bundle to CentOS's CA bundle path. Otherwise Seafile can't find the CA bundle and the SSL connection will fail.
This page shows the minimal requirements of Seafile.
About the system requirements
The system requirements in this document are the minimum hardware requirements suggested for smooth operation of Seafile (network connectivity is not discussed here). Unless otherwise specified, they apply to all deployment scenarios; for binary installations, however, the libraries we provide only support the following operating systems:
Ubuntu 24.04
Ubuntu 22.04
Debian 12
Debian 11
Important: Information about Docker-based deployment integrated services
For each scenario, we list the services integrated by a standard Docker installation under Docker-based deployment integrated services. If some of these services are already installed or you do not need them in your deployment, refer to the corresponding documentation and disable them in the Docker compose files. However, we do not recommend reducing the system resource requirements below our suggestions, unless otherwise specified.
However, if you use other installation methods (e.g., binary deployment, K8S deployment), you have to make sure these services are installed, because those methods do not include them.
If you need to install other extensions not included here (e.g., OnlyOffice), you should increase the system requirements appropriately above our recommendations.
| Deployment Scenarios | CPU Requirements | Memory Requirements | Indexer / Search Engine |
| --- | --- | --- | --- |
| Docker deployment | 4 Cores | 4G | Default |
| All | 4 Cores | 4G | With existing ElasticSearch service, but on the same machine / node |
| All | 2 Cores | 2G | With existing ElasticSearch service, and on another machine / node |
| All | 2 Cores | 2G | Use SeaSearch as the search engine, instead of ElasticSearch |
Hard disk requirements: more than 50G is recommended
Docker-based deployment integrated services:
Seafile
Redis
MariaDB
ElasticSearch
Seadoc
Caddy
More details of files indexer used in Seafile PE
By default, Seafile Pro uses ElasticSearch as the file indexer.
Please make sure the mmapfs counts do not cause exceptions like out of memory. The limit can be increased with the following command (see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html for further details):
sysctl -w vm.max_map_count=262144 #run as root\n
or modify /etc/sysctl.conf and reboot to set this value permanently:
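The corresponding sysctl.conf line:

```shell
# append to /etc/sysctl.conf, then reboot (or run `sysctl -p` as root)
vm.max_map_count=262144
```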
Node requirements: minimal 2 nodes (one frontend and one backend), but we recommend 3 or more nodes (two frontend and one backend)
More details about the number of nodes
If your number of nodes does not meet our recommendation (i.e., 3 nodes), please adjust according to the following strategies:
2 nodes: A frontend service and a backend service on the same node
1 node: Please deploy Seafile on a single node instead of as a cluster.
If you have more nodes available for the Seafile server, assign them to the Seafile frontend service and make sure only one backend service is running. Here is the simple relationship between the number of Seafile frontend services (\(N_f\)) and total nodes (\(N_t\)): $$ N_f = N_t - 1, $$ where the 1 is the single node for the Seafile backend service.
Other system requirements: similar to Seafile Pro, but make sure that all nodes meet these conditions.
Docker-based deployment integrated services: Seafile only
More suggestions in Seafile cluster
We assume you have already deployed Memcached (Redis is not supported in cluster mode), MariaDB and a file indexer (e.g., ElasticSearch) on separate machines, and that you use S3-like object storage.
Generally, when deploying Seafile in a cluster, we recommend that you use a storage backend (such as AWS S3) to store Seafile data. However, according to the Seafile image startup rules and K8S persistent storage strategy, you still need to prepare a persistent directory for configuring the startup of the Seafile container.
"},{"location":"setup/use_other_reverse_proxy/","title":"Use other reverse proxy","text":"
Since Seafile 12.0, all reverse proxying, HTTPS handling, etc. for single-node Docker-based deployments is done by Caddy. If you need to use another reverse proxy service, you can refer to this document to modify the relevant configuration files.
"},{"location":"setup/use_other_reverse_proxy/#services-that-require-reverse-proxy","title":"Services that require reverse proxy","text":"
Before making changes to the configuration files, you have to know the services used by Seafile and related components (Table 1 hereafter).
Tip
The services shown in the table below are all based on the single-node integrated deployment in accordance with the Seafile official documentation.
If these services are deployed in standalone mode (such as seadoc and notification-server), or deployed following the official documentation of third-party plugins (such as onlyoffice and collabora), you can skip modifying the configuration files of these services (because Caddy is not used as a reverse proxy in such deployments).
If you have not integrated the services in Table 1, please choose standalone deployment or refer to the official documentation of the third-party plugins to install them when you need these services.
| YML | Service | Suggested exposed port | Service listen port | Requires WebSocket |
| --- | --- | --- | --- | --- |
| seafile-server.yml | seafile | 80 | 80 | No |
| seadoc.yml | seadoc | 8888 | 80 | Yes |
| notification-server.yml | notification-server | 8083 | 8083 | Yes |
| collabora.yml | collabora | 6232 | 9980 | No |
| onlyoffice.yml | onlyoffice | 6233 | 80 | No |
"},{"location":"setup/use_other_reverse_proxy/#modify-yml-files","title":"Modify YML files","text":"
Refer to Table 1 for the exposed ports of the related services, then add a ports section for the corresponding services:
services:\n <the service need to be modified>:\n ...\n ports:\n - \"<Suggest exposed port>:<Service listen port>\"\n
Delete all fields related to the Caddy reverse proxy (the labels section)
Tip
Some .yml files (e.g., collabora.yml) also contain Caddy port-exposing information at the top of the file, which also needs to be removed.
We take seafile-server.yml as an example (Pro edition):
services:\n # ... other services\n\n seafile:\n image: ${SEAFILE_IMAGE:-seafileltd/seafile-pro-mc:13.0-latest}\n container_name: seafile\n ports:\n - \"80:80\"\n volumes:\n - ${SEAFILE_VOLUME:-/opt/seafile-data}:/shared\n environment:\n ... # environment variables map, do not change\n\n # please remove the `labels` section\n #labels: ... <- remove this section\n\n depends_on:\n ... # dependencies, do not change\n ...\n\n# ... other options\n
"},{"location":"setup/use_other_reverse_proxy/#add-reverse-proxy-for-related-services","title":"Add reverse proxy for related services","text":"
Modify nginx.conf and add reverse proxy for services seafile and seadoc:
Note
If your proxy server's host is not the same as the host Seafile is deployed on, please replace 127.0.0.1 with your Seafile server's host.
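A hedged nginx sketch for the two services (server name is a placeholder; the ports come from Table 1; the /sdoc-server/ location path is an assumption — check your seadoc configuration; if nginx runs on the same host as Seafile, expose the seafile container on a port other than 80, e.g. 8080, and adjust proxy_pass accordingly):

```nginx
server {
    listen 80;
    server_name seafile.example.com;

    client_max_body_size 0;  # allow large file uploads

    location / {
        proxy_pass http://127.0.0.1:80;  # seafile exposed port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /sdoc-server/ {
        proxy_pass http://127.0.0.1:8888/;  # seadoc exposed port
        proxy_set_header Host $host;
        # seadoc requires WebSocket support
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```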
"},{"location":"setup/use_other_reverse_proxy/#restart-services-and-nginx","title":"Restart services and nginx","text":"
docker compose down\ndocker compose up -d\nsystemctl restart nginx\n
"},{"location":"setup/use_seasearch/","title":"Use SeaSearch as search engine (Pro)","text":"
SeaSearch, a file indexer that is more lightweight and efficient than ElasticSearch, is supported since Seafile 12.
For Seafile deploy from binary package
We currently only support Docker-based deployment for the SeaSearch server, so this document describes the configuration for a Docker-based Seafile server deployment.
If your Seafile server is deployed from binary packages, please refer here for how to start or stop the Seafile server.
For Seafile cluster
Theoretically, if your Seafile server is deployed in cluster mode, only the backend node has to be restarted, but we still suggest you configure and restart all nodes to ensure consistency and synchronization across the cluster.
The SeaSearch service is currently mainly deployed via Docker. We have integrated it into the relevant docker-compose file. You only need to download it to the same directory as seafile-server.yml:
We have configured the relevant variables in .env. Pay special attention to the following variables, which affect the SeaSearch initialization process. For the variables in .env of the SeaSearch service, please refer here for details. We use /opt/seasearch-data as the persistent directory of SeaSearch (since Seafile 13 the administrator credentials are the same as Seafile's admin by default):
For Apple's Chips
Since Apple's chips (such as M2) do not support MKL, you need to set the relevant image to xxx-nomkl:latest, e.g.:
COMPOSE_FILE='...,seasearch.yml' # ... means other docker-compose files\n\n#SEASEARCH_IMAGE=seafileltd/seasearch-nomkl:1.0-latest # for Apple's Chip\nSEASEARCH_IMAGE=seafileltd/seasearch:1.0-latest\n\nSS_DATA_PATH=/opt/seasearch-data\nINIT_SS_ADMIN_USER=<admin-username> \nINIT_SS_ADMIN_PASSWORD=<admin-password>\n\n\n# if you would like to use S3 for saving seasearch data\nSS_STORAGE_TYPE=s3\nS3_SS_BUCKET=...\nS3_KEY_ID=<your-key-id>\nS3_SECRET_KEY=<your-secret-key>\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=false\nS3_AWS_REGION=us-east-1\nS3_HOST=\nS3_USE_HTTPS=true\nS3_SSE_C_KEY=\n
"},{"location":"setup/use_seasearch/#modify-seafile-serveryml-to-disable-elasticsearch-service","title":"Modify seafile-server.yml to disable elasticSearch service","text":"
If you would like to use SeaSearch as the search engine, the ElasticSearch service can be removed since it is no longer used: remove elasticsearch.yml from the COMPOSE_FILE list variable in the .env file.
First, get your authorization token by Base64-encoding INIT_SS_ADMIN_USER and INIT_SS_ADMIN_PASSWORD defined in .env; this token is used to authorize calls to the SeaSearch API:
echo -n 'username:password' | base64\n\n# example output\nYWRtaW46YWRtaW5fcGFzc3dvcmQ=\n
Add the following section in seafevents to enable seafile backend service to access SeaSearch APIs
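A hedged sketch of what that section might look like (the section and option names below are assumptions, not taken from this page — verify them against the seafevents.conf reference for your Seafile version; the URL is grounded in the note below):

```ini
# seafevents.conf -- SeaSearch access (sketch; option names are assumptions)
[SEASEARCH]
enabled = true
seasearch_url = http://seasearch:4080
seasearch_token = <the Base64 token generated above>
```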
SeaSearch server deploy on a different machine with Seafile
If your SeaSearch server is deployed on a different machine from Seafile, please replace http://seasearch:4080 with the URL <scheme>://<address>:<port> of your SeaSearch server.
After starting the SeaSearch service, you can check the following logs to verify that SeaSearch runs normally and is called by Seafile successfully:
container logs by command docker logs -f seafile-seasearch
/opt/seasearch-data/log/seafevents.log
After first time start SeaSearch Server
You can remove the initial admin account information from .env (e.g., INIT_SS_ADMIN_USER, INIT_SS_ADMIN_PASSWORD); it is only used during SeaSearch initialization (i.e., the first time the services start). But make sure you have recorded it somewhere else in case you forget the password.
By default, SeaSearch uses a word-based tokenizer designed for English/German/French. You can add the following configuration to use a tokenizer designed for Chinese.
Please refer here for details about the requirements for all nodes in a Seafile cluster. In general, we recommend that each node have at least 2G RAM and a 2-core CPU (> 2GHz).
The cache server (the first step) is not necessary if you do not wish to deploy it on this node.
"},{"location":"setup_binary/cluster_deployment/#create-user-seafile","title":"Create user seafile","text":"
Create a new user and follow the instructions on the screen:
adduser seafile\n
Change ownership of the created directory to the new user:
chown -R seafile: /opt/seafile\n
All the following steps are done as user seafile.
Change to user seafile:
su seafile\n
"},{"location":"setup_binary/cluster_deployment/#placing-the-seafile-pe-license-in-optseafile","title":"Placing the Seafile PE license in /opt/seafile","text":"
Save the license file in Seafile's program directory /opt/seafile. Make sure that the name is seafile-license.txt.
If the license file has a different name or cannot be read, Seafile server will start in trial mode with at most THREE users.
"},{"location":"setup_binary/cluster_deployment/#setup-and-configure-nginx-only-for-frontend-nodes","title":"Setup and configure Nginx (only for frontend nodes)","text":"
For security reasons, the Seafile frontend service only listens on the local port 8000. You need to use Nginx to reverse proxy this port to port 80 for external access:
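A minimal nginx sketch for that reverse proxy (the server name is a placeholder):

```nginx
# /etc/nginx/conf.d/seafile.conf -- proxy port 8000 to port 80 (sketch)
server {
    listen 80;
    server_name seafile.example.com;

    client_max_body_size 0;  # allow large file uploads

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```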
There are 2 firewall rule changes for Seafile cluster:
On each node, you should open the health check port (default 11001);
On the cache and ElasticSearch servers, please only allow the Seafile servers to access these ports, for security reasons.
"},{"location":"setup_binary/cluster_deployment/#setup-the-first-frontend-node","title":"Setup the first frontend Node","text":""},{"location":"setup_binary/cluster_deployment/#setup-seafile-server-pro","title":"Setup Seafile server Pro","text":"
Please follow Installation of Seafile Server Professional Edition to setup:
Download the install package
Uncompress the package
Set up Seafile Pro databases
"},{"location":"setup_binary/cluster_deployment/#create-and-modify-configuration-files-in-optseafileconf","title":"Create and Modify configuration files in /opt/seafile/conf","text":""},{"location":"setup_binary/cluster_deployment/#env","title":".env","text":"
Tip
JWT_PRIVATE_KEY: a random string with a length of no less than 32 characters, which can be generated by:
pwgen -s 40 1\n
JWT_PRIVATE_KEY=<Your jwt private key>\nSEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=seafile.example.com\nSEAFILE_MYSQL_DB_HOST=<your database host>\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=<your MySQL password>\nSEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db\n
The Seafile server also opens a port for the load balancers to run health checks. Seafile by default uses port 11001. You can change this by adding the following config:
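A sketch of that config (the section and option names are assumptions based on typical seafile.conf cluster setups; the port is an example):

```ini
# seafile.conf -- change the health check port (sketch)
[cluster]
health_check_port = 12001
```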
Please refer to Django's documentation about using Redis cache to add Redis configurations to seahub_settings.py.
Add the following options to seahub_settings.py, which tell Seahub to store avatars in the database, cache avatars in memcached, and store the CSS CACHE in local memory.
In a cluster environment, we have to store avatars in the database instead of on a local disk.
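A sketch of the avatar part of those options (the storage backend path is taken from Seafile's bundled DatabaseStorage; verify it against your Seafile version):

```python
# seahub_settings.py -- store avatars in the database (sketch)
AVATAR_FILE_STORAGE = 'seahub.base.database_storage.DatabaseStorage'
```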
mysql -h<your MySQL host> -P<your MySQL port> -useafile -p<user seafile's password>\n\n# enter MySQL environment\nUSE seahub_db;\n\nCREATE TABLE `avatar_uploaded` (`filename` TEXT NOT NULL, `filename_md5` CHAR(32) NOT NULL PRIMARY KEY, `data` MEDIUMTEXT NOT NULL, `size` INTEGER NOT NULL, `mtime` datetime NOT NULL);\n
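A sketch of the corresponding options in seahub_settings.py (option names as used by Seafile's cluster guide; treat them as assumptions and verify against your version):

```python
# Store avatars in the seahub_db database instead of on the local disk
# (required in a cluster, so all front-end nodes see the same avatars)
AVATAR_FILE_STORAGE = 'seahub.base.database_storage.DatabaseStorage'

# Keep the compressed CSS cache in local memory
COMPRESS_CACHE_BACKEND = 'locmem'
```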
"},{"location":"setup_binary/cluster_deployment/#run-and-test-the-single-node","title":"Run and Test the Single Node","text":"
Once you have finished configuring this single node, start it to test if it runs properly:
Note
For installations using a python virtual environment, activate it if it isn't already active
source python-venv/bin/activate\n
cd /opt/seafile/seafile-server-latest\nsu seafile\n./seafile.sh start\n./seahub.sh start\n
Success
The first time you start Seahub, the script will prompt you to create an admin account for your Seafile server. Then you will see the following message in your console:
Starting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\nSeahub is started\n\nDone.\n
Finally, visit http://ip-address-of-this-node:80 and log in with the admin account to check that this node is working properly.
"},{"location":"setup_binary/cluster_deployment/#configure-other-frontend-nodes","title":"Configure other frontend nodes","text":"
If the first frontend node works fine, you can compress the whole directory /opt/seafile into a tarball and copy it to all other Seafile server nodes. You can simply uncompress it and start the server by:
Note
For installations using a python virtual environment, activate it if it isn't already active
source python-venv/bin/activate\n
cd /opt/seafile/seafile-server-latest\nsu seafile\n./seafile.sh start\n./seahub.sh start\n
On the backend node, you need to execute the following command to start the Seafile server. CLUSTER_MODE=backend marks this node as a Seafile backend server.
Note
For installations using a python virtual environment, activate it if it isn't already active
Since Seafile Pro server 6.0.0, cluster deployment requires \"sticky session\" settings in the load balancer. Otherwise, folder downloads from the web UI sometimes fail. Read the \"Load Balancer Setting\" section below for details.
Generally speaking, for better access to the Seafile service, we recommend that you use a load balancing service in front of the Seafile cluster and bind your domain name (such as seafile.cluster.com) to it. Usually, you can use:
Cloud service provider's load balancing service (e.g., AWS Elastic Load Balancer)
Deploy your own load balancing service; this document covers two common load balancers:
global\n log 127.0.0.1 local1 notice\n maxconn 4096\n user haproxy\n group haproxy\n\ndefaults\n log global\n mode http\n retries 3\n maxconn 2000\n timeout connect 10000\n timeout client 300000\n timeout server 36000000\n\nlisten seafile 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n cookie SERVERID insert indirect nocache\n server seafileserver01 192.168.1.165:80 check port 11001 cookie seafileserver01\n server seafileserver02 192.168.1.200:80 check port 11001 cookie seafileserver02\n
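For an Nginx-based load balancer, a minimal sketch might look like the following (this is an illustration, not taken from this manual; the IPs are placeholders, and ip_hash is one simple way to get the sticky sessions mentioned above):

```nginx
upstream seafile_cluster {
    ip_hash;                      # sticky sessions: one client always hits the same node
    server 192.168.1.165:80;
    server 192.168.1.200:80;
}

server {
    listen 80;
    server_name seafile.example.com;

    location / {
        proxy_pass http://seafile_cluster;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```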
"},{"location":"setup_binary/cluster_deployment/#see-how-it-runs","title":"See how it runs","text":"
Now you should be able to test your cluster. Open https://seafile.example.com in your browser and enjoy. You can also synchronize files with Seafile clients.
"},{"location":"setup_binary/cluster_deployment/#the-final-configuration-of-the-front-end-nodes","title":"The final configuration of the front-end nodes","text":"
Here is a summary of the cluster-related configuration on the front-end nodes (for version 7.1+).
For seafile.conf:
[cluster]\nenabled = true\nmemcached_options = --SERVER=<IP of memcached node> --POOL-MIN=10 --POOL-MAX=100\n
The enabled option will prevent the start of background tasks by ./seafile.sh start in the front-end node. The tasks should be explicitly started by ./seafile-background-tasks.sh start at the back-end node.
You can enable HTTPS in your load balancing service, e.g., by using a certificate manager (such as Certbot) to acquire and install certificates for your Seafile cluster. After enabling HTTPS, you have to change the relevant URLs in seahub_settings.py and .env from the http:// prefix to https://.
You can follow here to deploy the SeaDoc server, and then modify SEADOC_SERVER_URL in your .env file.
"},{"location":"setup_binary/https_with_nginx/","title":"Enabling HTTPS with Nginx","text":"
After completing the installation of Seafile Server Professional Edition, communication between the Seafile server and clients runs over (unencrypted) HTTP. While HTTP is ok for testing purposes, switching to HTTPS is imperative for production use.
HTTPS requires an SSL certificate from a Certificate Authority (CA). Unless you already have an SSL certificate, we recommend that you get your SSL certificate from Let\u2019s Encrypt using Certbot. If you have an SSL certificate from another CA, skip the section \"Getting a Let's Encrypt certificate\".
A second requirement is a reverse proxy supporting SSL. Nginx, a popular and resource-friendly web server and reverse proxy, is a good option. Nginx's documentation is available at http://nginx.org/en/docs/.
Copy the following sample Nginx config file into the just created seafile.conf (i.e., nano /etc/nginx/sites-available/seafile.conf) and modify the content to fit your needs:
The following options must be modified in the CONF file:
Server name (server_name)
Optional customizable options in the seafile.conf are:
Server listening port (listen) - if Seafile server should be available on a non-standard port
Proxy pass for location / - if Seahub is configured to start on a different port than 8000
Proxy pass for location /seafhttp - if seaf-server is configured to start on a different port than 8082
Maximum allowed size of the client request body (client_max_body_size)
The default value for client_max_body_size is 1M. Uploading larger files will result in the error HTTP 413 (\"Request Entity Too Large\"). It is recommended to synchronize the value of client_max_body_size with the parameter max_upload_size in the [fileserver] section of seafile.conf. Optionally, the value can also be set to 0 to disable this limit. Sync client uploads are only partly affected by this limit: even with a limit of 100 MiB they can safely upload files of any size, because they upload in chunks.
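For example, to allow 100 MiB uploads through the web interface, the two values could be kept in sync like this (a sketch; adjust the path and size to your setup):

```nginx
# Inside the server block of /etc/nginx/sites-available/seafile.conf.
# Keep this equal to max_upload_size (in MB) in the [fileserver]
# section of /opt/seafile/conf/seafile.conf.
client_max_body_size 100m;
```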
Finally, make sure your seafile.conf does not contain syntax errors and restart Nginx for the configuration changes to take effect:
nginx -t\nnginx -s reload\n
"},{"location":"setup_binary/https_with_nginx/#getting-a-lets-encrypt-certificate","title":"Getting a Let's Encrypt certificate","text":"
Getting a Let's Encrypt certificate is straightforward thanks to Certbot. Certbot is a free, open source software tool for requesting, receiving, and renewing Let's Encrypt certificates.
First, go to the Certbot website and choose your webserver and OS.
Second, follow the detailed instructions then shown.
We recommend that you get just a certificate and that you modify the Nginx configuration yourself:
sudo certbot certonly --nginx\n
Follow the instructions on the screen.
Upon successful verification, Certbot saves the certificate files in a directory named after the host name in /etc/letsencrypt/live. For the host name seafile.example.com, the files are stored in /etc/letsencrypt/live/seafile.example.com.
Normally, your Nginx configuration can be managed automatically by a certificate manager (e.g., Certbot) after you install the certificate. If you find that Nginx is already listening on port 443 via the certificate manager after installing the certificate, you can skip this step.
Add a server block for port 443 and an HTTP-to-HTTPS redirect to the seafile.conf configuration file in /etc/nginx.
This is a (shortened) sample configuration for the host name seafile.example.com:
log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $upstream_response_time';\n\nserver {\n listen 80;\n server_name seafile.example.com;\n rewrite ^ https://$http_host$request_uri? permanent; # Forced redirect from HTTP to HTTPS\n\n server_tokens off; # Prevents the Nginx version from being displayed in the HTTP response header\n}\n\nserver {\n listen 443 ssl;\n ssl_certificate /etc/letsencrypt/live/seafile.example.com/fullchain.pem; # Path to your fullchain.pem\n ssl_certificate_key /etc/letsencrypt/live/seafile.example.com/privkey.pem; # Path to your privkey.pem\n server_name seafile.example.com;\n server_tokens off;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n proxy_set_header X-Forwarded-Proto https;\n\n... # No changes beyond this point compared to the Nginx configuration without HTTPS\n
Finally, make sure your seafile.conf does not contain syntax errors and restart Nginx for the configuration changes to take effect:
Tip for uploading very large files (> 4GB): By default Nginx buffers a large request body in a temp file. After the body is completely received, Nginx sends the body to the upstream server (seaf-server in our case). But when the file size is very large, this buffering mechanism doesn't work well; it may stop proxying the body in the middle. So if you want to support uploading files larger than 4GB, we suggest you install Nginx version >= 1.8.0 and add the following options to the Nginx config file:
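The option usually used for this is Nginx's proxy_request_buffering directive (available since Nginx 1.7.11); a sketch, to be verified against the current Seafile manual:

```nginx
location /seafhttp {
    # Stream the request body to seaf-server instead of buffering it
    # in a temp file first.
    proxy_request_buffering off;
    # ... existing proxy_pass and other directives unchanged ...
}
```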
To improve security, the file server should only be accessible via Nginx.
Add the following line to the [fileserver] block of seafile.conf in /opt/seafile/conf:
host = 127.0.0.1 ## default is 0.0.0.0\n
After this change, the file server only accepts requests from Nginx.
"},{"location":"setup_binary/https_with_nginx/#starting-seafile-and-seahub","title":"Starting Seafile and Seahub","text":"
Restart the seaf-server and Seahub for the config changes to take effect:
su seafile\ncd /opt/seafile/seafile-server-latest\n./seafile.sh restart\n./seahub.sh restart # or \"./seahub.sh start-fastcgi\" if you're using fastcgi\n
"},{"location":"setup_binary/https_with_nginx/#additional-modern-settings-for-nginx-optional","title":"Additional modern settings for Nginx (optional)","text":""},{"location":"setup_binary/https_with_nginx/#activating-ipv6","title":"Activating IPv6","text":"
IPv6 must be enabled on the server, otherwise Nginx will not start! An AAAA DNS record is also required for IPv6 usage.
Activate HTTP/2 for better performance. It is only available with SSL and Nginx version >= 1.9.5. Simply add http2 to the listen directives:
listen 443 ssl http2;\nlisten [::]:443 ssl http2;\n
"},{"location":"setup_binary/https_with_nginx/#advanced-tls-configuration-for-nginx-optional","title":"Advanced TLS configuration for Nginx (optional)","text":"
The TLS configuration in the sample Nginx configuration file above receives a B overall rating on SSL Labs. By modifying the TLS configuration in seafile.conf, this rating can be significantly improved.
The following sample Nginx configuration file for the host name seafile.example.com contains additional security-related directives. (Note that this sample file uses a generic path for the SSL certificate files.) Some of the directives require further steps as explained below.
server {\n listen 80;\n server_name seafile.example.com;\n rewrite ^ https://$http_host$request_uri? permanent; # Forced redirect from HTTP to HTTPS\n server_tokens off;\n }\n server {\n listen 443 ssl;\n ssl_certificate /etc/ssl/cacert.pem; # Path to your cacert.pem\n ssl_certificate_key /etc/ssl/privkey.pem; # Path to your privkey.pem\n server_name seafile.example.com;\n server_tokens off;\n\n # HSTS for protection against man-in-the-middle-attacks\n add_header Strict-Transport-Security \"max-age=31536000; includeSubDomains\";\n\n # DH parameters for Diffie-Hellman key exchange\n ssl_dhparam /etc/nginx/dhparam.pem;\n\n # Supported protocols and ciphers for general purpose server with good security and compatability with most clients\n ssl_protocols TLSv1.2 TLSv1.3;\n ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;\n ssl_prefer_server_ciphers off;\n\n # Supported protocols and ciphers for server when clients > 5years (i.e., Windows Explorer) must be supported\n #ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;\n #ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA;\n #ssl_prefer_server_ciphers on;\n\n ssl_session_timeout 5m;\n ssl_session_cache shared:SSL:5m;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n 
proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto https;\n\n access_log /var/log/nginx/seahub.access.log;\n error_log /var/log/nginx/seahub.error.log;\n\n proxy_read_timeout 1200s;\n\n client_max_body_size 0;\n }\n\n location /seafhttp {\n rewrite ^/seafhttp(.*)$1 break;\n proxy_pass http://127.0.0.1:8082;\n client_max_body_size 0;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n\n proxy_read_timeout 36000s;\n proxy_send_timeout 36000s;\n send_timeout 36000s;\n }\n\n location /media {\n root /home/user/haiwen/seafile-server-latest/seahub;\n }\n }\n
"},{"location":"setup_binary/https_with_nginx/#enabling-http-strict-transport-security","title":"Enabling HTTP Strict Transport Security","text":"
Enable HTTP Strict Transport Security (HSTS) to prevent man-in-the-middle-attacks by adding this directive:
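The directive in question, as it also appears in the sample configuration above:

```nginx
# Inside the HTTPS server block
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
```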
HSTS instructs web browsers to automatically use HTTPS. That means, after the first visit to the HTTPS version of Seahub, the browser will only use HTTPS to access the site.
The generation of the DH parameters may take some time depending on the server's processing power.
Add the following directive in the HTTPS server block:
ssl_dhparam /etc/nginx/dhparam.pem;\n
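The parameters themselves can be generated with OpenSSL; a sketch (DHPARAM_OUT is an assumed variable, and the default path here is for illustration only — point it at the same file your ssl_dhparam line references):

```shell
# Generate DH parameters for the ssl_dhparam directive.
# For production, write to e.g. /etc/nginx/dhparam.pem instead.
OUT="${DHPARAM_OUT:-/tmp/dhparam.pem}"
openssl dhparam -out "$OUT" 2048   # 4096 is stronger, but generation takes much longer
```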
"},{"location":"setup_binary/https_with_nginx/#restricting-tls-protocols-and-ciphers","title":"Restricting TLS protocols and ciphers","text":"
Disallow the use of old TLS protocols and ciphers. Mozilla provides a configuration generator for balancing the conflicting objectives of security and compatibility. Visit https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx for more information.
"},{"location":"setup_binary/installation_pro/","title":"Installation of Seafile Server Professional Edition","text":"
This manual explains how to deploy and run Seafile Server Professional Edition (Seafile PE) on a Linux server from a pre-built package using MySQL/MariaDB as database. The deployment has been tested for Debian/Ubuntu.
Please refer here for the system requirements of Seafile PE. In general, we recommend at least 4 GB of RAM and a 4-core CPU (> 2GHz).
Seafile PE can be used without a paid license with up to three users. Licenses for more users can be purchased in the Seafile Customer Center, or by contacting Seafile Sales at sales@seafile.com or one of our partners.
"},{"location":"setup_binary/installation_pro/#setup","title":"Setup","text":""},{"location":"setup_binary/installation_pro/#installing-and-preparing-the-sql-database","title":"Installing and preparing the SQL database","text":"
Seafile supports MySQL and MariaDB. We recommend that you use the preferred SQL database management engine included in the package repositories of your distribution.
You can find step-by-step how-tos for installing MySQL and MariaDB in the tutorials on the Digital Ocean website.
Seafile uses the mysql_native_password plugin for authentication. The versions of MySQL and MariaDB installed on CentOS 8, Debian 10, and Ubuntu 20.04 use a different authentication plugin by default. It is therefore required to change the authentication plugin to mysql_native_password for the root user prior to the installation of Seafile. The above-mentioned tutorials explain how to do it.
The standard directory /opt/seafile is assumed for the rest of this manual. If you decide to put Seafile in another directory, some commands need to be modified accordingly
Debian 12 and Ubuntu 24.04 now discourage system-wide installation of python modules with pip. It is preferred to install modules into a virtual environment, which keeps them separate from the files installed by the system package manager and enables different versions to be installed for different applications. For these python virtual environments (venv for short) to work, you have to activate the venv to make the packages installed in it available to the programs you run. That is done here with source python-venv/bin/activate.
sudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmysqlclient-dev ldap-utils libldap2-dev python3.12-venv default-libmysqlclient-dev build-essential pkg-config libmemcached-dev\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the virtual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 django==4.2.* future==1.0.* mysqlclient==2.2.* \\\n pymysql pillow==10.4.* pylibmc captcha==0.6.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.* \\\n psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.9.* pysaml2==7.3.* pycryptodome==3.20.* cffi==1.17.0 lxml python-ldap==3.4.* gevent==24.2.*\n
Note
Debian 12 and Ubuntu 24.04 now discourage system-wide installation of python modules with pip. It is preferred to install modules into a virtual environment, which keeps them separate from the files installed by the system package manager and enables different versions to be installed for different applications. For these python virtual environments (venv for short) to work, you have to activate the venv to make the packages installed in it available to the programs you run. That is done here with source python-venv/bin/activate.
sudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmariadb-dev-compat ldap-utils libldap2-dev libsasl2-dev python3.11-venv \n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the virtual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* pymysql pillow==10.0.* pylibmc captcha==0.4 markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 psd-tools django-pylibmc django_simple_captcha==0.5.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml python-ldap==3.4.3\n
"},{"location":"setup_binary/installation_pro/#creating-user-seafile","title":"Creating user seafile","text":"
Elasticsearch, the indexing server, cannot be run as root. More generally, it is good practice not to run applications as root.
Create a new user and follow the instructions on the screen:
Ubuntu 24.04/22.04Debian 12/11
adduser seafile\n
/usr/sbin/adduser seafile\n
Change ownership of the created directory to the new user:
chown -R seafile: /opt/seafile\n
All the following steps are done as user seafile.
Change to user seafile:
su seafile\n
"},{"location":"setup_binary/installation_pro/#placing-the-seafile-pe-license","title":"Placing the Seafile PE license","text":"
Save the license file in Seafile's program directory /opt/seafile. Make sure that the name is seafile-license.txt.
If the license file has a different name or cannot be read, Seafile server will start in trial mode, which supports at most THREE users.
"},{"location":"setup_binary/installation_pro/#downloading-the-install-package","title":"Downloading the install package","text":"
The install packages for Seafile PE are available for download in the Seafile Customer Center. To access the Customer Center, a user account is necessary. The registration is free.
Beginning with Seafile PE 7.0.17, the Seafile Customer Center provides two install packages for every version (using Seafile PE 12.0.6 as an example):
seafile-pro-server_12.0.6_x86-64_Ubuntu.tar.gz, compiled in Ubuntu environment
The former is suitable for installation on Ubuntu/Debian servers.
Download the install package using wget (replace the x.x.x with the version you wish to download):
The names of the install packages differ for Seafile CE and Seafile PE. Using Seafile CE and Seafile PE 12.0.6 as an example, the names are as follows:
Seafile CE: seafile-server_12.0.6_x86-64.tar.gz; uncompressing into folder seafile-server-12.0.6
Seafile PE: seafile-pro-server_12.0.6_x86-64.tar.gz; uncompressing into folder seafile-pro-server-12.0.6
"},{"location":"setup_binary/installation_pro/#setting-up-seafile-pro-databases","title":"Setting up Seafile Pro databases","text":"
The install package comes with a script that sets Seafile up for you. Specifically, the script creates the required directories and extracts all files in the right place. It can also create a MySQL user and the three databases that Seafile's components require:
ccnet server
seafile server
seahub
While ccnet server was merged into the seafile-server in Seafile 8.0, the corresponding database is still required for the time being
Run the script as user seafile:
Note
For installations using a python virtual environment, activate it if it isn't already active
source python-venv/bin/activate\n
cd seafile-pro-server-12.0.6\n./setup-seafile-mysql.sh\n
Configure your Seafile Server by specifying the following three parameters:
Option Description Note server name Name of the Seafile Server 3-15 characters, only English letters, digits and underscore ('_') are allowed server's ip or domain IP address or domain name used by the Seafile Server Seafile client program will access the server using this address fileserver port TCP port used by the Seafile fileserver Default port is 8082; it is recommended to use this port and to only change it if it is used by another service
In the next step, choose whether to create new databases for Seafile or to use existing databases. The creation of new databases requires the root password for the SQL server.
Note
If you don't have the root password, you need someone who has the privileges, e.g., the database admin, to create the three databases required by Seafile, as well as a MySQL user who can access the databases. For example, to create three databases ccnet_db / seafile_db / seahub_db for ccnet/seafile/seahub respectively, and a MySQL user \"seafile\" to access these databases run the following SQL queries:
create database `ccnet_db` character set = 'utf8';\ncreate database `seafile_db` character set = 'utf8';\ncreate database `seahub_db` character set = 'utf8';\n\ncreate user 'seafile'@'localhost' identified by 'seafile';\n\nGRANT ALL PRIVILEGES ON `ccnet_db`.* to `seafile`@localhost;\nGRANT ALL PRIVILEGES ON `seafile_db`.* to `seafile`@localhost;\nGRANT ALL PRIVILEGES ON `seahub_db`.* to `seafile`@localhost;\n
[1] Create new ccnet/seafile/seahub databases[2] Use existing ccnet/seafile/seahub databases
The script creates these databases and a MySQL user that Seafile Server will use to access them. To this effect, you need to answer these questions:
Question Description Note mysql server host Host address of the MySQL server Default is localhost mysql server port TCP port used by the MySQL server Default port is 3306; almost every MySQL server uses this port mysql root password Password of the MySQL root account The root password is required to create new databases and a MySQL user mysql user for Seafile MySQL user created by the script, used by Seafile's components to access the databases Default is seafile; the user is created unless it exists mysql password for Seafile user Password for the user above, written in Seafile's config files Percent sign ('%') is not allowed ccnet database name Name of the database used by ccnet Default is \"ccnet_db\", the database is created if it does not exist seafile database name Name of the database used by Seafile Default is \"seafile_db\", the database is created if it does not exist seahub database name Name of the database used by seahub Default is \"seahub_db\", the database is created if it does not exist
The prompts you need to answer:
Question Description Note mysql server host Host address of the MySQL server Default is localhost mysql server port TCP port used by MySQL server Default port is 3306; almost every MySQL server uses this port mysql user for Seafile User used by Seafile's components to access the databases The user must exist mysql password for Seafile user Password for the user above ccnet database name Name of the database used by ccnet, default is \"ccnet_db\" The database must exist seafile database name Name of the database used by Seafile, default is \"seafile_db\" The database must exist seahub database name Name of the database used by Seahub, default is \"seahub_db\" The database must exist
If the setup is successful, you see the following output:
The folder seafile-server-latest is a symbolic link to the current Seafile Server folder. When later you upgrade to a new version, the upgrade scripts update this link to point to the latest Seafile Server folder.
"},{"location":"setup_binary/installation_pro/#enabling-httphttps-optional-but-recommended","title":"Enabling HTTP/HTTPS (Optional but Recommended)","text":"
You need to set up at least HTTP to make Seafile's web interface work. This manual provides instructions for enabling HTTP/HTTPS with popular web servers and reverse proxies (e.g., Nginx).
"},{"location":"setup_binary/installation_pro/#create-the-env-file-in-conf-directory","title":"Create the .env file in conf/ directory","text":"
nano /opt/seafile/conf/.env\n
Tip
JWT_PRIVATE_KEY: a random string of at least 32 characters, which can be generated with:
pwgen -s 40 1\n
JWT_PRIVATE_KEY=<Your jwt private key>\nSEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=seafile.example.com\nSEAFILE_MYSQL_DB_HOST=<your database host>\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=<your MySQL password>\nSEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db\n
Run the following commands in /opt/seafile/seafile-server-latest:
Note
For installations using a python virtual environment, activate it if it isn't already active
source python-venv/bin/activate\n
su seafile\n./seafile.sh start # Start Seafile service\n./seahub.sh start # Start seahub website, port defaults to 127.0.0.1:8000\n
Success
The first time you start Seahub, the script prompts you to create an admin account for your Seafile Server. Enter the email address of the admin user followed by the password, i.e.:
What is the email for the admin account?\n[ admin email ] <please input your admin's email>\n\nWhat is the password for the admin account?\n[ admin password ] <please input your admin's password>\n\nEnter the password again:\n[ admin password again ] <please input your admin's password again>\n
Now you can access Seafile via the web interface at the host address (e.g., https://seafile.example.com).
"},{"location":"setup_binary/installation_pro/#enabling-full-text-search","title":"Enabling full text search","text":"
Seafile uses the indexing server ElasticSearch to enable full text search.
Our recommendation for deploying ElasticSearch is using Docker. Detailed information about installing Docker on various Linux distributions is available at Docker Docs.
Seafile PE 9.0 only supports ElasticSearch 7.x. Seafile PE 10.0, 11.0 and 12.0 only support ElasticSearch 8.x.
We use ElasticSearch version 8.15.0 as an example in this section. Version 8.15.0 and newer versions have been successfully tested with Seafile.
Pull the Docker image:
sudo docker pull elasticsearch:8.15.0\n
Create a folder for persistent data created by ElasticSearch and change its permission:
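A sketch of that step (ES_DATA_DIR and the default path are assumptions — adjust them to your environment; for a production host you would typically use a path under /opt and run the commands with sudo):

```shell
# Directory for ElasticSearch's persistent data.
ES_DATA_DIR="${ES_DATA_DIR:-/tmp/seafile-elasticsearch/data}"
mkdir -p "$ES_DATA_DIR"
# The container runs ElasticSearch as a non-root user, so the
# directory must be writable by it.
chmod -R 777 "$ES_DATA_DIR"
```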
We sincerely thank Mohammed Adel of Safe Decision Co. for suggesting this notice.
By default, Elasticsearch only listens on 127.0.0.1, but this restriction may no longer apply once Docker exposes the service port. Exposure to the external network would make your Elasticsearch service vulnerable to attackers accessing and extracting sensitive data. We recommend that you manually configure the Docker firewall, for example:
sudo iptables -A INPUT -p tcp -s <your seafile server ip> --dport 9200 -j ACCEPT\nsudo iptables -A INPUT -p tcp --dport 9200 -j DROP\n
The above command only allows the host where your Seafile service is located to connect to Elasticsearch; other addresses are blocked. If you deploy Elasticsearch from binary packages, refer to the official documentation to set the address that Elasticsearch binds to.
Add the following configuration to seafevents.conf:
[INDEX FILES]\nes_host = <your elasticsearch server's IP, e.g., 127.0.0.1> # IP address of ElasticSearch host\nes_port = 9200 # port of ElasticSearch host\n
Finally, restart Seafile:
su seafile\n./seafile.sh restart && ./seahub.sh restart \n
"},{"location":"setup_binary/migrate_from_sqlite_to_mysql/","title":"Migrate From SQLite to MySQL","text":"
Note
The tutorial is only related to Seafile CE edition.
First make sure the python module for MySQL is installed. On Ubuntu/Debian, use sudo apt-get install python-mysqldb or sudo apt-get install python3-mysqldb to install it.
Steps to migrate Seafile from SQLite to MySQL:
Stop Seafile and Seahub.
Download sqlite2mysql.sh and sqlite2mysql.py to the top directory of your Seafile installation path. For example, /opt/seafile.
Run sqlite2mysql.sh:
chmod +x sqlite2mysql.sh\n./sqlite2mysql.sh\n
This script will produce three files: ccnet-db.sql, seafile-db.sql, seahub-db.sql.
Then create the three databases ccnet_db, seafile_db and seahub_db, as well as a seafile user.
mysql> create database ccnet_db character set = 'utf8';\nmysql> create database seafile_db character set = 'utf8';\nmysql> create database seahub_db character set = 'utf8';\n
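The seafile user mentioned above can be created and granted access to the three databases in the same way as in the Pro installation section (the password and host here are placeholders — change them for your setup):

```sql
create user 'seafile'@'localhost' identified by 'seafile';

GRANT ALL PRIVILEGES ON `ccnet_db`.* to `seafile`@localhost;
GRANT ALL PRIVILEGES ON `seafile_db`.* to `seafile`@localhost;
GRANT ALL PRIVILEGES ON `seahub_db`.* to `seafile`@localhost;
```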
Import ccnet data to MySQL.
mysql> use ccnet_db;\nmysql> source ccnet-db.sql;\n
Import seafile data to MySQL.
mysql> use seafile_db;\nmysql> source seafile-db.sql;\n
Import seahub data to MySQL.
mysql> use seahub_db;\nmysql> source seahub-db.sql;\n
ccnet.conf has been removed since Seafile 12.0
Modify the configuration files: append the following lines to seahub_settings.py:
DATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.mysql',\n 'USER' : 'root',\n 'PASSWORD' : 'root',\n 'NAME' : 'seahub_db',\n 'HOST' : '127.0.0.1',\n 'PORT': '3306',\n # This is only needed for MySQL older than 5.5.5.\n # For MySQL newer than 5.5.5 INNODB is the default already.\n 'OPTIONS': {\n \"init_command\": \"SET storage_engine=INNODB\",\n }\n }\n}\n
Restart seafile and seahub
Note
User notifications will be cleared during migration due to a slight difference between MySQL and SQLite. If you only see the busy icon when clicking the notifications button beside your avatar, please clear the notifications_usernotification table manually by:
use seahub_db;\ndelete from notifications_usernotification;\n
"},{"location":"setup_binary/migrate_from_sqlite_to_mysql/#faq","title":"FAQ","text":""},{"location":"setup_binary/migrate_from_sqlite_to_mysql/#encountered-errno-150-foreign-key-constraint-is-incorrectly-formed","title":"Encountered errno: 150 \"Foreign key constraint is incorrectly formed\"","text":"
This error typically occurs because the current table being created contains a foreign key that references a table whose primary key has not yet been created. Therefore, please check the database table creation order in the SQL file. The correct order is:
\"You and Your\" means the party licensing the Software hereunder.
\"Software\" means the computer programs provided under the terms of this license by Seafile Ltd. together with any documentation provided therewith.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#2-grant-of-rights","title":"2. GRANT OF RIGHTS","text":""},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#21-general","title":"2.1 General","text":"
The License granted for Software under this Agreement authorizes You on a non-exclusive basis to use the Software. The Software is licensed, not sold to You and Seafile Ltd. reserves all rights not expressly granted to You in this Agreement. The License is personal to You and may not be assigned by You to any third party.
Subject to the receipt by Seafile Ltd. of the applicable license fees, You have the right to use the Software as follows:
You may use and install the Software on an unlimited number of computers that are owned, leased, or controlled by you.
Nothing in this Agreement shall permit you, or any third party to disclose or otherwise make available to any third party the licensed Software, source code or any portion thereof.
You agree to indemnify, hold harmless and defend Seafile Ltd. from and against any claims or lawsuits, including attorney's fees, that arise as a result from the use of the Software;
You do not permit further redistribution of the Software by Your end-user customers
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#3-no-derivative-works","title":"3. NO DERIVATIVE WORKS","text":"
The inclusion of source code with the License is explicitly not for your use to customize a solution or re-use in your own projects or products. The benefit of including the source code is for purposes of security auditing. You may modify the code only for emergency bug fixes that impact security or performance and only for use within your enterprise. You may not create or distribute derivative works based on the Software or any part thereof. If you need enhancements to the software features, you should suggest them to Seafile Ltd. for version improvements.
You acknowledge that all copies of the Software in any form are the sole property of Seafile Ltd.. You have no right, title or interest to any such Software or copies thereof except as provided in this Agreement.
You hereby acknowledge and agree that the Software constitutes and contains valuable proprietary products and trade secrets of Seafile Ltd., embodying substantial creative efforts and confidential information, ideas, and expressions. You agree to treat, and take precautions to ensure that your employees and other third parties treat, the Software as confidential in accordance with the confidentiality requirements herein.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#6-disclaimer-of-warranties","title":"6. DISCLAIMER OF WARRANTIES","text":"
EXCEPT AS OTHERWISE SET FORTH IN THIS AGREEMENT THE SOFTWARE IS PROVIDED TO YOU \"AS IS\", AND Seafile Ltd. MAKES NO EXPRESS OR IMPLIED WARRANTIES WITH RESPECT TO ITS FUNCTIONALITY, CONDITION, PERFORMANCE, OPERABILITY OR USE. WITHOUT LIMITING THE FOREGOING, Seafile Ltd. DISCLAIMS ALL IMPLIED WARRANTIES INCLUDING, WITHOUT LIMITATION, ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR FREEDOM FROM INFRINGEMENT. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO THE ABOVE EXCLUSIONS MAY NOT APPLY TO YOU. THE LIMITED WARRANTY HEREIN GIVES YOU SPECIFIC LEGAL RIGHTS, AND YOU MAY ALSO HAVE OTHER RIGHTS THAT VARY FROM ONE JURISDICTION TO ANOTHER.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#7-limitation-of-liability","title":"7. LIMITATION OF LIABILITY","text":"
YOU ACKNOWLEDGE AND AGREE THAT THE CONSIDERATION WHICH Seafile Ltd. IS CHARGING HEREUNDER DOES NOT INCLUDE ANY CONSIDERATION FOR ASSUMPTION BY Seafile Ltd. OF THE RISK OF YOUR CONSEQUENTIAL OR INCIDENTAL DAMAGES WHICH MAY ARISE IN CONNECTION WITH YOUR USE OF THE SOFTWARE. ACCORDINGLY, YOU AGREE THAT Seafile Ltd. SHALL NOT BE RESPONSIBLE TO YOU OR ANY THIRD PARTY FOR ANY LOSS-OF-PROFIT, LOST SAVINGS, INDIRECT, INCIDENTAL, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF A LICENSING OR USE OF THE SOFTWARE.
You agree to defend, indemnify and hold Seafile Ltd. and its employees, agents, representatives and assigns harmless from and against any claims, proceedings, damages, injuries, liabilities, costs, attorney's fees relating to or arising out of Your use of the Software or any breach of this Agreement.
Your license is effective until terminated. You may terminate it at any time by destroying the Software or returning all copies of the Software to Seafile Ltd.. Your license will terminate immediately without notice if You breach any of the terms and conditions of this Agreement, including non or incomplete payment of the license fee. Upon termination of this Agreement for any reason: You will uninstall all copies of the Software; You will immediately cease and desist all use of the Software; and will destroy all copies of the software in your possession.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#10-updates-and-support","title":"10. UPDATES AND SUPPORT","text":"
Seafile Ltd. has the right, but no obligation, to periodically update the Software, at its complete discretion, without the consent or obligation to You or any licensee or user.
YOU HEREBY ACKNOWLEDGE THAT YOU HAVE READ THIS AGREEMENT, UNDERSTAND IT AND AGREE TO BE BOUND BY ITS TERMS AND CONDITIONS.
"},{"location":"setup_binary/start_seafile_at_system_bootup/","title":"Start Seafile at System Bootup","text":""},{"location":"setup_binary/start_seafile_at_system_bootup/#for-systems-running-systemd-and-python-virtual-environments","title":"For systems running systemd and python virtual environments","text":"
For example Debian 12
Create the systemd service files, change ${seafile_dir} to your Seafile installation location, and change seafile to the user that runs Seafile (if appropriate). Then reload the systemd daemon: systemctl daemon-reload.
Firstly, you should create a script to activate the python virtual environment, which goes in the ${seafile_dir} directory. Put another way, it does not go in \"seafile-server-latest\", but the directory above that. Throughout this manual the examples use /opt/seafile for this directory, but you might have chosen to use a different directory.
sudo vim /opt/seafile/run_with_venv.sh\n
The content of the file is:
#!/bin/bash\n# Activate the python virtual environment (venv) before starting one of the seafile scripts\n\ndir_name=\"$(dirname $0)\"\nsource \"${dir_name}/python-venv/bin/activate\"\nscript=\"$1\"\nshift 1\n\necho \"${dir_name}/seafile-server-latest/${script}\" \"$@\"\n\"${dir_name}/seafile-server-latest/${script}\" \"$@\"\n
[Unit]\nDescription=Seafile\n# add mysql.service or postgresql.service depending on your database to the line below\nAfter=network.target\n\n[Service]\nType=forking\nExecStart=bash ${seafile_dir}/run_with_venv.sh seafile.sh start\nExecStop=bash ${seafile_dir}/seafile-server-latest/seafile.sh stop\nLimitNOFILE=infinity\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"setup_binary/start_seafile_at_system_bootup/#for-systems-running-systemd-without-python-virtual-environment","title":"For systems running systemd without python virtual environment","text":"
For example Debian 8 through Debian 11, Linux Ubuntu 15.04 and newer
Create the systemd service files, change ${seafile_dir} to your Seafile installation location, and change seafile to the user that runs Seafile (if appropriate). Then reload the systemd daemon: systemctl daemon-reload.
[Unit]\nDescription=Seafile\n# add mysql.service or postgresql.service depending on your database to the line below\nAfter=network.target\n\n[Service]\nType=forking\nExecStart=${seafile_dir}/seafile-server-latest/seafile.sh start\nExecStop=${seafile_dir}/seafile-server-latest/seafile.sh stop\nLimitNOFILE=infinity\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n
Create systemd service file /etc/systemd/system/seafile-client.service
You need to create this service file only if you have seafile console client and you want to run it on system boot.
sudo vim /etc/systemd/system/seafile-client.service\n
The content of the file is:
[Unit]\nDescription=Seafile client\n# Uncomment the next line if you are running the seafile client on the same computer as the server\n# After=seafile.service\n# Otherwise, uncomment the next one\n# After=network.target\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/seaf-cli start\nExecStop=/usr/bin/seaf-cli stop\nRemainAfterExit=yes\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"setup_binary/start_seafile_at_system_bootup/#enable-service-start-on-system-boot","title":"Enable service start on system boot","text":"
"},{"location":"setup_binary/using_logrotate/","title":"Set up logrotate for server","text":""},{"location":"setup_binary/using_logrotate/#how-it-works","title":"How it works","text":"
seaf-server supports reopening its log file upon receiving a SIGUSR1 signal.
This feature is very useful when you need to rotate log files without shutting down the server: you can rotate the log file on the fly.
Assuming your seaf-server's log file is set to /opt/seafile/logs/seafile.log and its pid file is set to /opt/seafile/pids/seaf-server.pid:
The configuration for logrotate could be like this:
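A sketch of such a logrotate configuration (e.g., in /etc/logrotate.d/seafile; the paths match the assumptions above, so adjust them to your installation):

```
/opt/seafile/logs/seafile.log
{
        daily
        missingok
        rotate 7
        compress
        dateext
        sharedscripts
        postrotate
                # ask seaf-server to reopen its log file after rotation
                [ ! -f /opt/seafile/pids/seaf-server.pid ] || kill -USR1 `cat /opt/seafile/pids/seaf-server.pid`
        endscript
}
```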
There are three types of upgrade, i.e., major version upgrade, minor version upgrade and maintenance version upgrade. This page contains general instructions for the three types of upgrade.
After upgrading, you may need to clean the Seahub cache if it doesn't behave as expected.
If you are using a Docker based deployment, please read upgrade a Seafile docker instance
If you are running a cluster, please read upgrade a Seafile cluster.
If you are using a binary package based deployment, please read instructions below.
Please check the upgrade notes for any special configuration or changes before/while upgrading.
Upgrade notes for 7.1.x
Upgrade notes for 8.0.x
Upgrade notes for 9.0.x
Upgrade notes for 10.0.x
Upgrade notes for 11.0.x
Upgrade notes for 12.0.x
"},{"location":"upgrade/upgrade/#upgrade-a-binary-package-based-deployment","title":"Upgrade a binary package based deployment","text":""},{"location":"upgrade/upgrade/#major-version-upgrade-eg-from-5xx-to-6yy","title":"Major version upgrade (e.g. from 5.x.x to 6.y.y)","text":"
Suppose you are using version 5.1.0 and would like to upgrade to version 6.1.0. First download and extract the new version. You should have a directory layout similar to this:
cd seafile/seafile-server-latest/\n./seafile.sh start\n./seahub.sh start # or \"./seahub.sh start-fastcgi\" if you're using fastcgi\n# or via service\n/etc/init.d/seafile-server start\n
If the new version works fine, the old version can be removed
rm -rf seafile-server-5.1.0/\n
"},{"location":"upgrade/upgrade/#minor-version-upgrade-eg-from-61x-to-62y","title":"Minor version upgrade (e.g. from 6.1.x to 6.2.y)","text":"
Suppose you are using version 6.1.0 and would like to upgrade to version 6.2.0. First download and extract the new version. You should have a directory layout similar to this:
Starting from your current version, run the upgrade scripts one by one:
upgrade/upgrade_6.1_6.2.sh\n
Start Seafile server
./seafile.sh start\n./seahub.sh start\n# or via service\n/etc/init.d/seafile-server start\n
If the new version works, the old version can be removed
rm -rf seafile-server-6.1.0/\n
"},{"location":"upgrade/upgrade/#maintenance-version-upgrade-eg-from-622-to-623","title":"Maintenance version upgrade (e.g. from 6.2.2 to 6.2.3)","text":"
A maintenance upgrade is for example an upgrade from 6.2.2 to 6.2.3.
Shutdown Seafile server if it's running
For this type of upgrade, you only need to update the symbolic links (for avatar and a few other folders). A script to perform a minor upgrade is provided with Seafile server (for historical reasons, the script is called minor-upgrade.sh):
cd seafile-server-6.2.3/upgrade/ && ./minor-upgrade.sh\n
Start Seafile
If the new version works, the old version can be removed
rm -rf seafile-server-6.2.2/\n
"},{"location":"upgrade/upgrade_a_cluster/","title":"Upgrade a Seafile cluster","text":""},{"location":"upgrade/upgrade_a_cluster/#major-and-minor-version-upgrade","title":"Major and minor version upgrade","text":"
Seafile adds new features in major and minor versions. It is likely that some database tables need to be modified or the search index need to be updated. In general, upgrading a cluster contains the following steps:
Update Seafile image
Upgrade the database
Update configuration files at each node
Update search index in the backend node
In general, to upgrade a cluster, you need:
Download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version. Start with docker compose up.
Run the upgrade script in container (for example, /opt/seafile/seafile-server-latest/upgrade/upgrade_x_x_x_x.sh) in one frontend node
Update configuration files at each node according to the documentation for each version
Delete old search index in the backend node if needed
"},{"location":"upgrade/upgrade_a_cluster/#upgrade-a-cluster-from-seafile-11-to-12","title":"Upgrade a cluster from Seafile 11 to 12","text":"
Fill in the following fields according to the configuration you used in Seafile 11:
SEAFILE_SERVER_HOSTNAME=<your loadbalance's host>\nSEAFILE_SERVER_PROTOCOL=https # or http\nSEAFILE_MYSQL_DB_HOST=<your mysql host>\nSEAFILE_MYSQL_DB_USER=seafile # if you don't use `seafile` as your Seafile server's account, please correct it\nSEAFILE_MYSQL_DB_PASSWORD=<your mysql password for user `seafile`>\nJWT_PRIVATE_KEY=<your JWT key generated in Sec. 3.1>\n
Remove the variables used in cluster initialization
Since Seafile has been initialized in Seafile 11, the variables related to Seafile cluster initialization can be removed from .env:
INIT_SEAFILE_MYSQL_ROOT_PASSWORD
CLUSTER_INIT_MODE
CLUSTER_INIT_MEMCACHED_HOST
CLUSTER_INIT_ES_HOST
CLUSTER_INIT_ES_PORT
INIT_S3_STORAGE_BACKEND_CONFIG
INIT_S3_COMMIT_BUCKET
INIT_S3_FS_BUCKET
INIT_S3_BLOCK_BUCKET
INIT_S3_KEY_ID
INIT_S3_USE_V4_SIGNATURE
INIT_S3_SECRET_KEY
INIT_S3_AWS_REGION
INIT_S3_HOST
INIT_S3_USE_HTTPS
Start Seafile on one node
Note
According to this upgrade document, a frontend service will be started here. If you plan to use this node as a backend node, you need to modify this item in .env and set it to backend:
CLUSTER_MODE=backend\n
docker compose up -d\n
Upgrade Seafile
docker exec -it seafile bash\n# enter the container `seafile`\n\n# stop servers\ncd /opt/seafile/seafile-server-latest\n./seafile.sh stop\n./seahub.sh stop\n\n# upgrade seafile\ncd upgrade\n./upgrade_11.0_12.0.sh\n
Success
After upgrading the Seafile, you can see the following messages in your console:
Updating seafile/seahub database ...\n\n[INFO] You are using MySQL\n[INFO] updating seafile database...\n[INFO] updating seahub database...\n[INFO] updating seafevents database...\nDone\n\nmigrating avatars ...\n\nDone\n\nupdating /opt/seafile/seafile-server-latest symbolic link to /opt/seafile/seafile-pro-server-12.0.6 ...\n\n\n\n-----------------------------------------------------------------\nUpgraded your seafile server successfully.\n-----------------------------------------------------------------\n
Then you can exit the container by typing exit
Restart current node
docker compose down\n docker compose up -d\n
Tip
You can use docker logs -f seafile to check whether the current node service is running normally
Operations for other nodes
Download and modify .env similar to the first node (for backend node, you should set CLUSTER_MODE=backend)
Start the Seafile server:
docker compose up -d\n
"},{"location":"upgrade/upgrade_a_cluster_binary/","title":"Upgrade a Seafile cluster (binary)","text":""},{"location":"upgrade/upgrade_a_cluster_binary/#major-and-minor-version-upgrade","title":"Major and minor version upgrade","text":"
Seafile adds new features in major and minor versions. It is likely that some database tables need to be modified or the search index need to be updated. In general, upgrading a cluster contains the following steps:
Upgrade the database
Update symbolic link at frontend and backend nodes to point to the newest version
Update configuration files at each node
Update search index in the backend node
In general, to upgrade a cluster, you need:
Run the upgrade script (for example, ./upgrade/upgrade_4_0_4_1.sh) in one frontend node
Run the minor upgrade script (./upgrade/minor_upgrade.sh) in all other nodes to update symbolic link
Update configuration files at each node according to the documentation for each version
Delete old search index in the backend node if needed
For maintenance upgrade, like from version 10.0.1 to version 10.0.4, just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
For major version upgrade, like from 10.0 to 11.0, see instructions below.
Please check the upgrade notes for any special configuration or changes before/while upgrading.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-120-to-130","title":"Upgrade from 12.0 to 13.0","text":"
From Seafile Docker 13.0, elasticsearch.yml has been separated from seafile-server.yml, and Seafile supports reading its cache configuration from environment variables
From Seafile Docker 13.0 (Pro), the ElasticSearch service will be controlled by a separate resource file (i.e., elasticsearch.yml). If you are using Seafile Pro and still plan to use ElasticSearch, please download the elasticsearch.yml
Modify .env, update image version and add cache configurations:
Variables change logs for .env in Seafile 13
The database and cache configurations can now be read directly from environment variables (you can define them in .env). What's more, Redis is recommended as the primary cache server, since it is required by some new features (please refer to the upgrade notes; you can also find more details about Redis in Seafile Docker here), and it is the default cache type in Seafile 13.
The S3 configuration (covering the Seafile server, SeaSearch, and the newly supported Metadata server) uses unified variables (i.e., S3_xxx) for S3 authorization information in new deployments. Please refer to the end of the table in Seafile Pro deployment for details. If you plan to deploy or redeploy these components in the future, please pay attention to the changes in variable names.
The notification server configuration is no longer read from seafile.conf, but from the variable NOTIFICATION_SERVER_URL in .env; leave it blank to disable this feature.
Update image version to Seafile 13
Seafile ProSeafile CE
COMPOSE_FILE='...,elasticsearch.yml' # add `elasticsearch.yml` if you are still using ElasticSearch\nSEAFILE_IMAGE=seafileltd/seafile-pro-mc:13.0-latest\n
SEAFILE_IMAGE=seafileltd/seafile-mc:13.0-latest\n
Add configurations for cache:
## Cache\nCACHE_PROVIDER=redis # or memcached\n\n### Redis\nREDIS_HOST=redis\nREDIS_PORT=6379\nREDIS_PASSWORD=\n\n### Memcached\nMEMCACHED_HOST=memcached\nMEMCACHED_PORT=11211\n
Add the configuration for the notification server (if it was enabled in Seafile 12):
NOTIFICATION_SERVER_URL=<your notification server url>\n
Optional but recommended modifications for further configuration files
Although the configurations in the environment (i.e., .env) have higher priority than those in the config files, we recommend that you remove or modify the cache configuration in the following files to avoid ambiguity:
seafile.conf: remove the [memcached] section
seahub_settings.py: remove the key default in variable CACHES
Start with docker compose up -d.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-110-to-120","title":"Upgrade from 11.0 to 12.0","text":"
Note: If you have a large number of records in the Activity table in MySQL, clear this table first (see Clean Database). Otherwise, the database upgrade will take a long time.
From Seafile Docker 12.0, we recommend that you use .env and seafile-server.yml files for configuration.
"},{"location":"upgrade/upgrade_docker/#backup-the-original-docker-composeyml-file","title":"Backup the original docker-compose.yml file:","text":"
The following fields merit particular attention:\n\n| Variable | Description | Default Value |\n| --- | --- | --- |\n| SEAFILE_VOLUME | The volume directory of Seafile data | /opt/seafile-data |\n| SEAFILE_MYSQL_VOLUME | The volume directory of MySQL data | /opt/seafile-mysql/db |\n| SEAFILE_CADDY_VOLUME | The volume directory of Caddy data, used to store certificates obtained from Let's Encrypt | /opt/seafile-caddy |\n| SEAFILE_MYSQL_DB_USER | The user of MySQL (database - user can be found in conf/seafile.conf) | seafile |\n| SEAFILE_MYSQL_DB_PASSWORD | The password of the MySQL user seafile (required) | |\n| SEAFILE_MYSQL_DB_CCNET_DB_NAME | The database name of ccnet | ccnet_db |\n| SEAFILE_MYSQL_DB_SEAFILE_DB_NAME | The database name of seafile | seafile_db |\n| SEAFILE_MYSQL_DB_SEAHUB_DB_NAME | The database name of seahub | seahub_db |\n| JWT_PRIVATE_KEY | A random string with a length of no less than 32 characters, which can be generated with pwgen -s 40 1 (required) | |\n| SEAFILE_SERVER_HOSTNAME | Seafile server hostname or domain (required) | |\n| SEAFILE_SERVER_PROTOCOL | Seafile server protocol (http or https) | http |\n| TIME_ZONE | Time zone | UTC |
The following fields merit particular attention:\n\n| Variable | Description | Default Value |\n| --- | --- | --- |\n| SEAFILE_VOLUME | The volume directory of Seafile data | /opt/seafile-data |\n| SEAFILE_MYSQL_VOLUME | The volume directory of MySQL data | /opt/seafile-mysql/db |\n| SEAFILE_CADDY_VOLUME | The volume directory of Caddy data, used to store certificates obtained from Let's Encrypt | /opt/seafile-caddy |\n| SEAFILE_ELASTICSEARCH_VOLUME | (Only valid for Seafile PE) The volume directory of Elasticsearch data | /opt/seafile-elasticsearch/data |\n| SEAFILE_MYSQL_DB_USER | The user of MySQL (database - user can be found in conf/seafile.conf) | seafile |\n| SEAFILE_MYSQL_DB_PASSWORD | The password of the MySQL user seafile (required) | |\n| JWT_PRIVATE_KEY | A random string with a length of no less than 32 characters, which can be generated with pwgen -s 40 1 (required) | |\n| SEAFILE_SERVER_HOSTNAME | Seafile server hostname or domain (required) | |\n| SEAFILE_SERVER_PROTOCOL | Seafile server protocol (http or https) | http |\n| TIME_ZONE | Time zone | UTC |
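For example, JWT_PRIVATE_KEY can be generated with pwgen -s 40 1 as noted above; if pwgen is not installed, openssl is an equivalent alternative (not the manual's literal command):

```shell
# Generate a 40-character random string for JWT_PRIVATE_KEY
openssl rand -hex 20
```

Paste the resulting string into the JWT_PRIVATE_KEY line of your .env file.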
Note
The value of the variables in the above table should be identical to your existing installation. You should check them from the existing configuration files (e.g., seafile.conf).
For variables used to initialize configurations (e.g., INIT_SEAFILE_MYSQL_ROOT_PASSWORD, INIT_SEAFILE_ADMIN_EMAIL, INIT_SEAFILE_ADMIN_PASSWORD), you can remove them from the .env file.
SSL is now handled by the Caddy server. If you used SSL before, you will also need to modify seafile.nginx.conf: change server listen 443 to 80.
Backup the original seafile.nginx.conf file:
cp seafile.nginx.conf seafile.nginx.conf.bak\n
Remove the server listen 80 section:
#server {\n# listen 80;\n# server_name _ default_server;\n\n # allow certbot to connect to challenge location via HTTP Port 80\n # otherwise renewal request will fail\n# location /.well-known/acme-challenge/ {\n# alias /var/www/challenges/;\n# try_files $uri =404;\n# }\n\n# location / {\n# rewrite ^ https://seafile.example.com$request_uri? permanent;\n# }\n#}\n
If you have deployed the notification server: it has been moved to its own Docker image, and you need to redeploy it according to the Notification Server document
"},{"location":"upgrade/upgrade_docker/#upgrade-seadoc-from-08-to-10-for-seafile-v120","title":"Upgrade SeaDoc from 0.8 to 1.0 for Seafile v12.0","text":"
If you have deployed SeaDoc v0.8 with Seafile v11.0, you can upgrade it to 1.0 using the following steps:
Delete sdoc_db.
Remove SeaDoc configs in seafile.nginx.conf file.
Re-deploy SeaDoc server. In other words, delete the old SeaDoc deployment and deploy a new SeaDoc server.
From version 1.0, SeaDoc uses the seahub_db database to store its operation logs and no longer needs the extra database sdoc_db. The database tables in seahub_db are created automatically when you upgrade Seafile server from v11.0 to v12.0. You can simply delete sdoc_db.
"},{"location":"upgrade/upgrade_docker/#remove-seadoc-configs-in-seafilenginxconf-file","title":"Remove SeaDoc configs in seafile.nginx.conf file","text":"
If you have deployed an older version of SeaDoc, you should remove the /sdoc-server/ and /socket.io configs from the seafile.nginx.conf file.
"},{"location":"upgrade/upgrade_docker/#supplement-or-remove-allowed_hosts-in-seahub_settingspy","title":"Supplement or remove ALLOWED_HOSTS in seahub_settings.py","text":"
Since version 12.0, the seaf-server component needs to send internal requests to the seahub component to check permissions; a 400 error is reported when downloading files if ALLOWED_HOSTS is set incorrectly. In this case, you can either remove ALLOWED_HOSTS from seahub_settings.py or add 127.0.0.1 to the ALLOWED_HOSTS list:
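For example, the relevant excerpt of seahub_settings.py might look like this (the domain is a placeholder; the point is to keep 127.0.0.1 in the list so seaf-server's internal requests pass the host check):

```python
# seahub_settings.py (excerpt): allow both the public domain and the
# loopback address used by seaf-server's internal requests to seahub
ALLOWED_HOSTS = ['seafile.example.com', '127.0.0.1']
```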
"},{"location":"upgrade/upgrade_docker/#upgrade-from-100-to-110","title":"Upgrade from 10.0 to 11.0","text":"
Download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version. Taking the community edition as an example, you have to modify
It is also recommended that you upgrade mariadb and memcached to newer versions as in the v11.0 docker-compose.yml file. Specifically, in version 11.0, we use the following versions:
MariaDB: 10.11
Memcached: 1.6.18
What's more, you have to migrate configuration for LDAP and OAuth according to here
Start with docker compose up.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-90-to-100","title":"Upgrade from 9.0 to 10.0","text":"
Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
If you are using pro edition with ElasticSearch, SAML SSO and storage backend features, follow the upgrading manual on how to update the configuration for these features.
If you want to use the new notification server and rate control (pro edition only), please refer to the upgrading manual.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-80-to-90","title":"Upgrade from 8.0 to 9.0","text":"
Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
Since version 9.0.6, we use Acme V3 (not acme-tiny) to get certificate.
If there is a certificate generated by an old version, you need to back up and move the old certificate directory and the seafile.nginx.conf before starting.
Starting the new container will automatically apply a certificate.
docker compose down\ndocker compose up -d\n
Please wait a moment for the certificate to be applied, then you can modify the new seafile.nginx.conf as you want. Execute the following command to make the nginx configuration take effect.
docker exec seafile nginx -s reload\n
A cron job inside the container will automatically renew the certificate.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-71-to-80","title":"Upgrade from 7.1 to 8.0","text":"
Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-70-to-71","title":"Upgrade from 7.0 to 7.1","text":"
Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
"},{"location":"upgrade/upgrade_notes_for_10.0.x/","title":"Upgrade notes for 10.0","text":"
These notes give additional information about changes. Please always follow the main upgrade guide.
For docker based version, please check upgrade Seafile Docker image
The notification server enables desktop syncing and drive clients to get notification of library changes immediately using websocket. There are two benefits:
Reduce the time for syncing new changes to local
Reduce the load on the server, as periodic polling is removed. There is a significant reduction in load when you have 1000+ clients.
The notification server works with Seafile syncing client 9.0+ and drive client 3.0+.
Please follow the document to enable notification server
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#memcached-section-in-the-seafileconf-pro-edition-only","title":"Memcached section in the seafile.conf (pro edition only)","text":"
If you use storage backend or cluster, make sure the memcached section is in the seafile.conf.
Since version 10.0, all memcached options are consolidated to the one below.
Modify the seafile.conf:
[memcached]\nmemcached_options = --SERVER=<the IP of Memcached Server> --POOL-MIN=10 --POOL-MAX=100\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#rate-control-in-role-settings-pro-edition-only","title":"Rate control in role settings (pro edition only)","text":"
Starting from version 10.0, Seafile allows administrators to configure upload and download speed limits for users with different roles through the following two steps:
Configuring rate limiting for different roles in seahub_settings.py.
Elasticsearch is upgraded to version 8.x, which fixes and improves some issues with the file search function.
Since Elasticsearch 7.x, the default number of shards has changed from 5 to 1, because too many index shards over-occupy system resources; conversely, when a single shard's data grows too large, search performance also suffers. Starting from version 10.0, Seafile supports customizing the number of shards in the configuration file.
You can use the following command to query the current size of each shard to determine the best number of shards for you:
The official recommendation is that the size of each shard should be between 10G-50G: https://www.elastic.co/guide/en/elasticsearch/reference/8.6/size-your-shards.html#shard-size-recommendation.
Modify the seafevents.conf:
[INDEX FILES]\n...\nshards = 10 # default is 5\n...\n
5. Use the following command to check if the reindex task is complete:
# Get the task_id of the reindex task:\n$ curl 'http{s}://{es server IP}:9200/_tasks?actions=*reindex&pretty'\n# Check to see if the reindex task is complete:\n$ curl 'http{s}://{es server IP}:9200/_tasks/:<task_id>?pretty'\n
6. Reset the refresh_interval and number_of_replicas to the values used in the old index:
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#method-two-rebuild-the-index-and-discard-the-old-index-data","title":"Method two, rebuild the index and discard the old index data","text":"
1. Pull Elasticsearch image:
docker pull elasticsearch:8.5.3\n
Create a new folder to store ES data and give the folder permissions:
[INDEX FILES]\n...\nexternal_es_server = true\nes_host = http{s}://{es server IP}\nes_port = 9200\nshards = 10 # default is 5.\n...\n
Restart Seafile server:
su seafile\ncd seafile-server-latest/\n./seafile.sh stop && ./seahub.sh stop \n./seafile.sh start && ./seahub.sh start\n
3. Delete old index data
rm -rf /opt/seafile-elasticsearch/data/*\n
4. Create new index data:
$ cd /opt/seafile/seafile-server-latest\n$ ./pro/pro.py search --update\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#method-three-if-you-are-in-a-cluster-environment","title":"Method three, if you are in a cluster environment","text":"
1. Deploy Elasticsearch 8.x according to method two. Use Seafile 10.0 to deploy a new backend node and modify the seafevents.conf file. The new backend node does not start the Seafile background service; just manually run the command ./pro/pro.py search --update.
2. Upgrade the other nodes to Seafile 10.0 version and use the new Elasticsearch 8.x server.
3. Then deactivate the old backend node and the old version of Elasticsearch.
"},{"location":"upgrade/upgrade_notes_for_11.0.x/","title":"Upgrade notes for 11.0","text":"
These notes give additional information about changes. Please always follow the main upgrade guide.
For docker based version, please check upgrade Seafile Docker image
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#important-release-changes","title":"Important release changes","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-of-user-identity","title":"Change of user identity","text":"
Previous Seafile versions directly used a user's email address or SSO identity as their internal user ID.
Seafile 11.0 introduces virtual user IDs - random, internal identifiers like \"adc023e7232240fcbb83b273e1d73d36@auth.local\". For new users, a virtual ID will be generated instead of directly using their email. A mapping between the email and virtual ID will be stored in the \"profile_profile\" database table. For SSO users, the mapping between the SSO ID and virtual ID is stored in the \"social_auth_usersocialauth\" table.
Overall, this brings more flexibility for handling user accounts and identity changes. Existing users keep their old IDs.
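The example ID above has the shape of 32 hex characters followed by \"@auth.local\". A sketch of generating an ID of that shape (an assumption for illustration; Seafile's actual generator may differ):

```python
import uuid

def make_virtual_id() -> str:
    # 32 hex chars + the internal domain suffix, matching the example above
    return uuid.uuid4().hex + "@auth.local"

vid = make_virtual_id()
print(vid)  # e.g. 'adc023e7232240fcbb83b273e1d73d36@auth.local'
```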
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#reimplementation-of-ldap-integration","title":"Reimplementation of LDAP Integration","text":"
Previous Seafile versions handled LDAP authentication in the ccnet-server component. In Seafile 11.0, LDAP is reimplemented within the Seahub Python codebase.
LDAP configuration has been moved from ccnet.conf to seahub_settings.py. The ccnet_db.LDAPImported table is no longer used - LDAP users are now stored in ccnet_db.EmailUsers along with other users.
Benefits of this new implementation:
Improved compatibility across different systems. Python code is more portable than the previous C implementation.
Consistent handling of users whether they log in via LDAP or other methods like email/password.
You need to run the migrate_ldapusers.py script to merge the ccnet_db.LDAPImported table into the ccnet_db.EmailUsers table. The settings files need to be changed manually. (See more details below)
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#oauth-authentication-and-other-sso-methods","title":"OAuth authentication and other SSO methods","text":"
If you use OAuth authentication, the configuration needs to be changed slightly.
If you use SAML, you don't need to change the configuration files. For SAML2 in version 10, the name_id field returned from the SAML server is used as the username (the email field in ccnet_db.EmailUsers). In version 11, for old users, Seafile will find the old user and create a name_id to name_id mapping in social_auth_usersocialauth. For new users, Seafile will create a new user with a random ID and add a name_id to random-ID mapping in social_auth_usersocialauth. In addition, we have added a feature where you can disable login with a username and password for SAML users by setting DISABLE_ADFS_USER_PWD_LOGIN = True in seahub_settings.py.
Seafile 11.0 dropped support for SQLite as the database. It is better to migrate from an SQLite database to a MySQL database before upgrading to version 11.0.
There are several reasons driving this change:
Focus on collaborative features - SQLite's limitations make advanced concurrency and locking difficult, which collaborative editing requires. Different Seafile components need simultaneous database access, especially after the seafevents component was added to the community edition in version 11.0.
Docker deployments - Our official Docker images do not support SQLite. MySQL is the preferred option.
Migration difficulties - Migrating SQLite databases to MySQL via SQL translation is unreliable.
To migrate from SQLite database to MySQL database, you can follow the document Migrate from SQLite to MySQL. If you have issues in the migration, just post a thread in our forum. We are glad to help you.
Django 4.* introduced a new check for the Origin HTTP header in CSRF verification. It compares the value of the Origin field in the HTTP header with the value of the Host field; if they differ, an error is triggered.
If you deploy Seafile behind a proxy, use a non-standard port, or deploy Seafile in a cluster, the Origin and Host fields received by Django are likely to differ, because the Host field is typically modified by the proxy. This mismatch results in a CSRF error.
You can add CSRF_TRUSTED_ORIGINS to seahub_settings.py to solve the problem:
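A sketch of the setting (the hostname is an example; use your own service URL, scheme included):

```python
# seahub_settings.py
CSRF_TRUSTED_ORIGINS = ["https://seafile.example.com"]
```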
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#upgrade-to-110x","title":"Upgrade to 11.0.x","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#1-stop-seafile-100x-server","title":"1) Stop Seafile-10.0.x server.","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#2-start-from-seafile-110x-run-the-script","title":"2) Start from Seafile 11.0.x, run the script:","text":"
upgrade/upgrade_10.0_11.0.sh\n
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#3modify-configurations-and-migrate-ldap-records","title":"3\uff09Modify configurations and migrate LDAP records","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-configurations-for-ldap","title":"Change configurations for LDAP","text":"
The configuration items for LDAP login and LDAP sync tasks are migrated from ccnet.conf to seahub_settings.py. The names of the configuration items are based on the 10.0 version, with the prefix 'LDAP_' or 'MULTI_LDAP_1' added. Examples are as follows:
# Basic configuration items for LDAP login\nENABLE_LDAP = True\nLDAP_SERVER_URL = 'ldap://192.168.0.125' # The URL of LDAP server\nLDAP_BASE_DN = 'ou=test,dc=seafile,dc=ren' # The root node of users who can \n # log in to Seafile in the LDAP server\nLDAP_ADMIN_DN = 'administrator@seafile.ren' # DN of the administrator used \n # to query the LDAP server for information\nLDAP_ADMIN_PASSWORD = 'Hello@123' # Password of LDAP_ADMIN_DN\nLDAP_PROVIDER = 'ldap' # Identify the source of the user, used in \n # the table social_auth_usersocialauth, defaults to 'ldap'\nLDAP_LOGIN_ATTR = 'userPrincipalName' # User's attribute used to log in to Seafile, \n # can be mail or userPrincipalName, cannot be changed\nLDAP_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' # Additional filter conditions,\n # users who meet the filter conditions can log in, otherwise they cannot log in\n# For update user info when login\nLDAP_CONTACT_EMAIL_ATTR = '' # For update user's contact_email\nLDAP_USER_ROLE_ATTR = '' # For update user's role\nLDAP_USER_FIRST_NAME_ATTR = 'givenName' # For update user's first name\nLDAP_USER_LAST_NAME_ATTR = 'sn' # For update user's last name\nLDAP_USER_NAME_REVERSE = False # Whether to reverse the user's first and last name\n
The following configuration items are only for Pro Edition:
# Configuration items for LDAP sync tasks.\nLDAP_SYNC_INTERVAL = 60 # LDAP sync task period, in minutes\n\n# LDAP user sync configuration items.\nENABLE_LDAP_USER_SYNC = True # Whether to enable user sync\nLDAP_USER_OBJECT_CLASS = 'person' # This is the name of the class used to search for user objects. \n # In Active Directory, it's usually \"person\". The default value is \"person\".\nLDAP_DEPT_ATTR = '' # LDAP user's department info\nLDAP_UID_ATTR = '' # LDAP user's login_id attribute\nLDAP_AUTO_REACTIVATE_USERS = True # Whether to auto activate deactivated user\nLDAP_USE_PAGED_RESULT = False # Whether to use pagination extension\nIMPORT_NEW_USER = True # Whether to import new users when sync user\nACTIVATE_USER_WHEN_IMPORT = True # Whether to activate the user when importing new user\nENABLE_EXTRA_USER_INFO_SYNC = True # Whether to enable sync of additional user information,\n # including user's full name, contact_email, department, and Windows login name, etc.\nDEACTIVE_USER_IF_NOTFOUND = False # Set to \"true\" if you want to deactivate a user \n # when he/she was deleted in AD server.\n\n# LDAP group sync configuration items.\nENABLE_LDAP_GROUP_SYNC = True # Whether to enable group sync\nLDAP_GROUP_FILTER = '' # Group sync filter\nLDAP_SYNC_DEPARTMENT_FROM_OU = True # Whether to enable sync departments from OU.\nLDAP_GROUP_OBJECT_CLASS = 'group' # This is the name of the class used to search for group objects.\nLDAP_GROUP_MEMBER_ATTR = 'member' # The attribute field to use when loading the group's members. 
\n # For most directory servers, the attribute is \"member\", \n which is the default value. For \"posixGroup\", it should be set to \"memberUid\".\nLDAP_USER_ATTR_IN_MEMBERUID = 'uid' # The user attribute set in 'memberUid' option, \n # which is used in \"posixGroup\". The default value is \"uid\".\nLDAP_GROUP_UUID_ATTR = 'objectGUID' # Used to uniquely identify groups in LDAP\nLDAP_USE_GROUP_MEMBER_RANGE_QUERY = False # When a group contains too many members, \n # AD will only return part of them. Set this option to TRUE\n # to make LDAP sync work with large groups.\nLDAP_SYNC_GROUP_AS_DEPARTMENT = False # Whether to sync groups as top-level departments in Seafile\nLDAP_DEPT_NAME_ATTR = '' # Used to get the department name.\nLDAP_CREATE_DEPARTMENT_LIBRARY = False # If you decide to sync the group as a department,\n # you can set this option to \"true\". In this way, when \n # the group is synchronized for the first time, a library\n # is automatically created for the department, and the \n # library's name is the department's name.\nLDAP_DEPT_REPO_PERM = 'rw' # Set the permissions of the department repo, default permission is 'rw'.\nLDAP_DEFAULT_DEPARTMENT_QUOTA = -2 # You can set a default space quota for each department\n # when you synchronize a group for the first time. The \n # quota is set to unlimited if this option is not set.\n # Unit is MB.\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\", the sync process will delete a group if it is not found in the LDAP server.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\", the sync process will delete a department if it is not found in the LDAP server.\n
If you sync users from LDAP to Seafile and want Seafile to find the existing account when a user logs in via SSO (ADFS, OAuth, or Shibboleth) instead of creating a new one, you can set SSO_LDAP_USE_SAME_UID = True:
SSO_LDAP_USE_SAME_UID = True\n
Note: here the UID means the unique user ID; in LDAP it is the attribute you use for LDAP_LOGIN_ATTR (not LDAP_UID_ATTR), and in ADFS it is the uid attribute. You need to make sure you use the same attribute for the two settings.
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-configuration-for-oauth","title":"Change configuration for OAuth:","text":"
In the new version, the OAuth login configuration should keep the email attribute unchanged to be compatible with logins of both new and old users. In version 11.0, a new uid attribute is added to be used as a user's external unique ID. The uid will be stored in social_auth_usersocialauth to map to the internal virtual ID. For old users, the original email is used as the internal virtual ID. An example is as follows:
# Version 10.0 or earlier\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"),\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n\n# Since 11.0 version, added 'uid' attribute.\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # In the new version, the email attribute configuration should be kept unchanged to be compatible with old and new user logins\n \"uid\": (True, \"uid\"), # Seafile use 'uid' as the external unique identifier of the user. Different OAuth systems have different attributes, which may be: 'uid' or 'username', etc.\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n
When a user logs in, Seafile will first use the \"id -> email\" map to find the old user and then create a \"uid -> uid\" map for this old user. After all users have logged in once, you can delete the configuration \"id\": (True, \"email\"). You can also manually add records in social_auth_usersocialauth to map external uids to old users.
We have documented common issues encountered by users when upgrading to version 11.0 in our FAQ: https://cloud.seatable.io/dtable/external-links/7b976c85f504491cbe8e/?tid=0000&vid=0000.
If you encounter any issues, please check it first.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/","title":"Upgrade notes for 12.0","text":"
These notes give additional information about changes. Please always follow the main upgrade guide.
For docker based version, please check upgrade Seafile Docker image
SeaDoc is now stable, providing an online notes and documents feature
A new wiki module
A new trash mechanism: deleted files are recorded in the database for fast listing. In the old version, deleted files were scanned from library history, which is slow.
Community edition now also supports online GC (because SQLite support is dropped)
Configuration changes:
Notification server is now packaged into its own docker image.
For binary package based installation, a new .env file is needed to contain some configuration items that are shared by different components in Seafile. We name it .env to be consistent with the docker based installation.
The password strength level is now calculated by an algorithm. The old USER_PASSWORD_MIN_LENGTH and USER_PASSWORD_STRENGTH_LEVEL options are removed. Only USER_STRONG_PASSWORD_REQUIRED is still used.
ADDITIONAL_APP_BOTTOM_LINKS is removed, because there is no bottom bar in the navigation sidebar now.
SERVICE_URL and FILE_SERVER_ROOT are removed. SERVICE_URL will be calculated from SEAFILE_SERVER_PROTOCOL and SEAFILE_SERVER_HOSTNAME in .env file.
ccnet.conf is removed. Some of its configuration items are moved to the .env file; others are read from items in seafile.conf with the same names.
Two role permissions are added: can_create_wiki and can_publish_wiki control whether a role can create a Wiki and publish a Wiki. The old role permission can_publish_repo is removed.
The REMOTE_USER header is not passed to Seafile by default; you need to change gunicorn.conf.py if you need the REMOTE_USER header for SSO.
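For the SERVICE_URL change listed above, the value is now derived from the two .env variables; conceptually (a sketch, assuming plain concatenation of scheme and hostname):

```python
# values from the .env file (examples)
SEAFILE_SERVER_PROTOCOL = "https"
SEAFILE_SERVER_HOSTNAME = "seafile.example.com"

SERVICE_URL = f"{SEAFILE_SERVER_PROTOCOL}://{SEAFILE_SERVER_HOSTNAME}"
print(SERVICE_URL)  # https://seafile.example.com
```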
Other changes:
A new lightweight and fast search engine, SeaSearch. SeaSearch is optional, you can still use ElasticSearch.
Breaking changes
For security reasons, WebDAV no longer supports login with an LDAP account; users with LDAP accounts must generate a WebDAV token on the profile page
[File tags] The current file tags feature is deprecated. We will re-implement a new one in version 13.0 with a new general metadata management module.
For ElasticSearch based search, full text search of doc/xls/ppt file types is no longer supported. This enables us to remove the Java dependency on the Seafile side.
Deploying Seafile with the binary package is now deprecated and will probably no longer be supported in version 13.0. We recommend migrating your existing Seafile deployment to a docker based one.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#upgrade-to-120-for-binary-installation","title":"Upgrade to 12.0 (for binary installation)","text":"
The following instructions are for binary package based installations. If you use a Docker based installation, please see Upgrade Docker
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#1-clean-database-tables-before-upgrade","title":"1) Clean database tables before upgrade","text":"
If you have a large number of records in the Activity table in MySQL, clear this table first (see Clean Database). Otherwise, the database upgrade will take a long time.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#2-install-new-system-libraries-and-python-libraries","title":"2) Install new system libraries and Python libraries","text":"
Install the new system libraries and Python libraries for your operating system as documented above.
In the folder of Seafile 11.0.x, run the commands:
./seahub.sh stop\n./seafile.sh stop\n
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#4-run-seafile-120x-upgrade-script","title":"4) Run Seafile 12.0.x upgrade script","text":"
In the folder of Seafile 12.0.x, run the upgrade script
upgrade/upgrade_11.0_12.0.sh\n
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#5-create-the-env-file-in-conf-directory","title":"5) Create the .env file in conf/ directory","text":"
conf/.env
TIME_ZONE=UTC\nJWT_PRIVATE_KEY=xxx\nSEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=seafile.example.com\nSEAFILE_MYSQL_DB_HOST=db # your MySQL host\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=<your MySQL password>\nSEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db\n
Tip
JWT_PRIVATE_KEY: a random string with a length of no less than 32 characters, which can be generated by
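One way to generate such a key (any generator producing at least 32 random characters works; `pwgen -s 40 1` is another option):

```shell
# 48 base64 characters of randomness, newline stripped
JWT_PRIVATE_KEY=$(openssl rand -base64 36 | tr -d '\n')
echo "JWT_PRIVATE_KEY=$JWT_PRIVATE_KEY"
```

Paste the resulting value into the JWT_PRIVATE_KEY line of conf/.env.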
Since Seafile 12.0, we use Docker to deploy the notification server. Please follow the notification server document to re-deploy it.
Note
Notification server is designed to work with Docker based deployments. To make it work with the Seafile binary package on the same server, you will need to add proper Nginx rules for the notification server.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#8-optional-upgrade-seadoc-from-08-to-10","title":"8) (Optional) Upgrade SeaDoc from 0.8 to 1.0","text":"
If you have deployed SeaDoc v0.8 with Seafile v11.0, you can upgrade it to 1.0 with the following two steps:
Delete sdoc_db.
Re-deploy SeaDoc server. In other words, delete the old SeaDoc deployment and re-deploy a new SeaDoc server.
SeaDoc and Seafile binary package
Deploying SeaDoc and Seafile binary package on the same server is no longer officially supported. You will need to add Nginx rules for SeaDoc server properly.
From version 1.0, SeaDoc uses the seahub_db database to store its operation logs and no longer needs the extra database sdoc_db. The database tables in seahub_db are created automatically when you upgrade Seafile server from v11.0 to v12.0. You can simply delete sdoc_db.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#82-deploy-a-new-seadoc-server","title":"8.2) Deploy a new SeaDoc server","text":"
Please see the document Setup SeaDoc to install SeaDoc on a separate machine and integrate with your binary packaged based Seafile server v12.0.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#9-optional-update-gunicornconfpy-file-in-conf-directory","title":"9) (Optional) Update gunicorn.conf.py file in conf/ directory","text":"
If you deployed single sign-on (SSO) via the Shibboleth protocol, the following line should be added to the gunicorn config file.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#10-optional-other-configuration-changes","title":"10) (Optional) Other configuration changes","text":""},{"location":"upgrade/upgrade_notes_for_12.0.x/#enable-passing-of-remote_user","title":"Enable passing of REMOTE_USER","text":"
The REMOTE_USER header is not passed to Seafile by default; you need to change gunicorn.conf.py if you need the REMOTE_USER header for SSO.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#supplement-or-remove-allowed_hosts-in-seahub_settingspy","title":"Supplement or remove ALLOWED_HOSTS in seahub_settings.py","text":"
Since version 12.0, the seaf-server component needs to send internal requests to the seahub component to check permissions; a 400 error is reported when downloading files if ALLOWED_HOSTS is set incorrectly. In this case, you can either remove ALLOWED_HOSTS from seahub_settings.py or add 127.0.0.1 to the ALLOWED_HOSTS list:
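A sketch of the second option (the public hostname is an example; keep whatever hosts you already list):

```python
# seahub_settings.py
ALLOWED_HOSTS = ["seafile.example.com", "127.0.0.1"]
```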
We have documented common issues encountered by users when upgrading to version 12.0 in our FAQ: https://cloud.seatable.io/dtable/external-links/7b976c85f504491cbe8e/?tid=0000&vid=0000.
If you encounter any issues, please check it first.
"},{"location":"upgrade/upgrade_notes_for_13.0.x/","title":"Upgrade notes for 12.0","text":"
These notes give additional information about changes. Please always follow the main upgrade guide.
For docker based version, please check upgrade Seafile Docker image
SeaDoc: SeaDoc is now version 2.0; besides sdoc, it supports whiteboards too
Thumbnail server: A new thumbnail server component is added to improve thumbnail generation performance and support thumbnails for videos
Metadata server: A new metadata server component is available to manage extended file properties
Notification server: The web interface now supports real-time updates when other people add or remove files, if notification-server is enabled
SeaSearch: SeaSearch is now version 1.0 and supports full-text search
Configuration changes:
Database and memcached configurations are added to .env; it is recommended to use environment variables to configure the database and cache
Redis is recommended as the cache server
(Optional) S3 configuration can be done via environment variables and is much simplified
Elasticsearch now has its own .yml file
Breaking changes
For security reasons, WebDAV no longer supports login with an LDAP account; users with LDAP accounts must generate a WebDAV token on the profile page
[File tags] The old file tags feature can no longer be used; the interface provides an upgrade notice for migrating the data to the new file tags feature
Deploying Seafile with the binary package is no longer supported for the community edition. We recommend migrating your existing Seafile deployment to a docker based one.
Elasticsearch version is not changed in Seafile version 13.0
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#new-system-libraries-to-be-updated","title":"New system libraries (TO be updated)","text":"Ubuntu 24.04/22.04Debian 11
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#upgrade-to-130-for-binary-installation","title":"Upgrade to 13.0 (for binary installation)","text":"
The following instructions are for binary package based installations. If you use a Docker based installation, please see Upgrade Docker
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#1-clean-database-tables-before-upgrade","title":"1) Clean database tables before upgrade","text":"
If you have a large number of records in the Activity table in MySQL, clear this table first (see Clean Database). Otherwise, the database upgrade will take a long time.
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#2-install-new-system-libraries-and-python-libraries","title":"2) Install new system libraries and Python libraries","text":"
Install the new system libraries and Python libraries for your operating system as documented above.
In the folder of Seafile 12.0.x, run the commands:
./seahub.sh stop\n./seafile.sh stop\n
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#4-run-seafile-120x-upgrade-script","title":"4) Run Seafile 12.0.x upgrade script","text":"
In the folder of Seafile 13.0.x, run the upgrade script
upgrade/upgrade_12.0_13.0.sh\n
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#5-create-the-env-file-in-conf-directory","title":"5) Create the .env file in conf/ directory","text":"
conf/.env
TIME_ZONE=UTC\nJWT_PRIVATE_KEY=xxx\nSEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=seafile.example.com\nSEAFILE_MYSQL_DB_HOST=db # your MySQL host\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=<your MySQL password>\nSEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db\n
Tip
JWT_PRIVATE_KEY: a random string with a length of no less than 32 characters, which can be generated by
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#7-optional-upgrade-notification-server","title":"7) (Optional) Upgrade notification server","text":""},{"location":"upgrade/upgrade_notes_for_13.0.x/#8-optional-upgrade-seadoc-from-10-to-20","title":"8) (Optional) Upgrade SeaDoc from 1.0 to 2.0","text":""},{"location":"upgrade/upgrade_notes_for_13.0.x/#faq","title":"FAQ","text":"
We have documented common issues encountered by users when upgrading to version 13.0 in our FAQ: https://cloud.seatable.io/dtable/external-links/7b976c85f504491cbe8e/?tid=0000&vid=0000.
If you encounter any issues, please check it first.
"},{"location":"upgrade/upgrade_notes_for_8.0.x/","title":"Upgrade notes for 8.0","text":"
These notes give additional information about changes. Please always follow the main upgrade guide.
SERVICE_URL is moved from ccnet.conf to seahub_settings.py. The upgrade script will read it from ccnet.conf and write it to seahub_settings.py.
(pro edition only) ElasticSearch is upgraded to version 6.8. ElasticSearch needs to be installed and managed individually. (As ElasticSearch changed its license after 6.2, it can no longer be included in the Seafile package.) There are some benefits to managing ElasticSearch individually:
Reduce the size of Seafile package
You can change ElasticSearch settings more easily
(pro edition only) The built-in Office file preview is now implemented by a separate docker image, which makes it easier to maintain. We also suggest using OnlyOffice as an alternative.
Seafile community edition packages for CentOS are no longer maintained (pro edition packages will still be maintained). We suggest users migrate to Docker images.
We rewrote the HTTP service in seaf-server in golang and moved it to a separate component (turned off by default)
The new file-server written in golang serves HTTP requests for uploading/downloading/syncing files. It provides several advantages:
The performance is better in a high-concurrency environment and it can handle long requests.
Now you can sync libraries with a large number of files.
Now file zipping and downloading can be done simultaneously. When zip-downloading a folder, you don't need to wait until zipping is done.
Support rate control for file uploading and downloading.
You can turn the golang file-server on by adding the following configuration to seafile.conf
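A sketch of that configuration, based on the Seafile 9.0 manual (verify the option name against your version's documentation):

```ini
[fileserver]
use_go_fileserver = true
```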
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#upgrade-to-90x","title":"Upgrade to 9.0.x","text":"
Stop Seafile-8.0.x server.
Start from Seafile 9.0.x, run the script:
upgrade/upgrade_8.0_9.0.sh\n
Start Seafile-9.0.x server.
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#update-elasticsearch-pro-edition-only","title":"Update ElasticSearch (pro edition only)","text":""},{"location":"upgrade/upgrade_notes_for_9.0.x/#method-one-rebuild-the-index-and-discard-the-old-index-data","title":"Method one, rebuild the index and discard the old index data","text":"
If your Elasticsearch data volume is not large, it is recommended to deploy the latest 7.x version of ElasticSearch and then rebuild the index. The specific steps are as follows
Download ElasticSearch image
docker pull elasticsearch:7.16.2\n
Create a new folder to store ES data and give the folder permissions
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#method-two-reindex-the-existing-data","title":"Method two, reindex the existing data","text":"
If your data volume is relatively large, it will take a long time to rebuild indexes for all Seafile databases, so you can reindex the existing data. This requires the following steps
Download and start Elasticsearch 7.x
Use the existing data to execute ElasticSearch Reindex in order to build an index that can be used in 7.x
The detailed process is as follows
Download ElasticSearch image:
docker pull elasticsearch:7.16.2\n
Note: For Seafile version 9.0, you need to manually create the Elasticsearch data path on the host machine and give it 777 permissions, otherwise Elasticsearch will report path permission problems when starting. The command is as follows
mkdir -p /opt/seafile-elasticsearch/data\nchmod -R 777 /opt/seafile-elasticsearch/data\n
Move original data to the new folder and give the folder permissions
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#method-three-if-you-are-in-a-cluster-environment","title":"Method three, if you are in a cluster environment","text":"
Deploy a new ElasticSearch 7.x service, use Seafile 9.0 to deploy a new backend node, and connect it to ElasticSearch 7.x. The new backend node does not start the Seafile background service; just manually run the command ./pro/pro.py search --update. Then upgrade the other nodes to Seafile 9.0 and point them to the new ElasticSearch 7.x after the index is created. Finally, deactivate the old backend node and the old version of ElasticSearch.
"}]}
\ No newline at end of file
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Introduction","text":"
Seafile 13.0
Our documentation for Seafile 13.0 is still in progress; the updates of some key components have not been completed yet. Please refer to the Seafile 12.0 documentation for more stable support.
Seafile is an open source cloud storage system for file sync, share, and document collaboration. SeaDoc is an extension of Seafile that provides a lightweight online collaborative document feature.
"},{"location":"administration/","title":"Administration","text":""},{"location":"administration/#enter-the-admin-panel","title":"Enter the admin panel","text":"
As the system admin, you can enter the admin panel by clicking System Admin in the avatar popup.
When you set up the seahub website, you should have created an admin account. After you log in as an admin, you may add/delete users and file libraries.
"},{"location":"administration/account/#how-to-change-a-users-id","title":"How to change a user's ID","text":"
Since version 11.0, if you need to change a user's external ID, you can manually modify the database table social_auth_usersocialauth to map the new external ID to the internal ID.
"},{"location":"administration/account/#resetting-user-password","title":"Resetting User Password","text":"
The administrator can reset a user's password on the \"System Admin\" page.
On a private server, the default settings don't allow users to reset their password by email. If you want to enable this, you first have to set up notification email.
"},{"location":"administration/account/#forgot-admin-account-or-password","title":"Forgot Admin Account or Password?","text":"
You may run the reset-admin.sh script under the seafile-server-latest directory. This script helps you reset the admin account and password. Your data in the admin account will not be deleted; this only unlocks the admin account and changes its password.
Tip
Enter the docker container, then go to /opt/seafile/seafile-server-latest
Under the seafile-server-latest directory, run ./seahub.sh python-env python seahub/manage.py check_user_quota; when a user's quota exceeds 90%, an email will be sent. If you want to enable this, you first have to set up notification email.
"},{"location":"administration/auditing/","title":"Access log and auditing (Pro)","text":"
In the Pro Edition, Seafile offers four audit logs in system admin panel:
Login log
File access log (including access to shared files)
File update log
Permission change log
The audit log data is saved in seahub_db.
"},{"location":"administration/backup_recovery/","title":"Backup and Recovery","text":""},{"location":"administration/backup_recovery/#overview","title":"Overview","text":"
There are generally two parts of data to back up:
Seafile library data
Databases
There are 3 databases:
ccnet_db: contains user and group information
seafile_db: contains library metadata
seahub_db: contains tables used by the web front end (seahub)
"},{"location":"administration/backup_recovery/#backup-order-database-first-or-data-directory-first","title":"Backup Order: Database First or Data Directory First","text":"
backup data directory first, SQL later: While you're backing up the data directory, some new objects are written that don't make it into the backup, yet they may already be referenced in the SQL database. So when you restore, some records in the database cannot find their objects, and the library is corrupted.
backup SQL first, data directory later: Since you back up the database first, all records in the database have valid objects to reference, so the libraries won't be corrupted. But new objects written to storage while you're backing up are not referenced by any database records, so some libraries are out of date; when you restore, that new data is lost.
The second sequence is better in the sense that it avoids library corruption. As with other backup solutions, some new data can be lost in recovery; there is always a backup window. However, if your storage backup mechanism can finish quickly enough, the first sequence can retain more data.
We assume your seafile data directory is in /opt/seafile for binary package based deployment (or /opt/seafile-data for docker based deployment). And you want to backup to /backup directory. The /backup can be an NFS or Windows share mount exported by another machine, or just an external disk. You can create a layout similar to the following in /backup directory:
/backup\n---- databases/ contains database backup files\n---- data/ contains backups of the data directory\n
"},{"location":"administration/backup_recovery/#backup-and-restore-for-binary-package-based-deployment","title":"Backup and restore for binary package based deployment","text":""},{"location":"administration/backup_recovery/#backing-up-databases","title":"Backing up Databases","text":"
It's recommended to backup the database to a separate file each time. Don't overwrite older database backups for at least a week.
Assume your database names are ccnet_db, seafile_db and seahub_db. mysqldump automatically locks the tables so you don't need to stop Seafile server when backing up MySQL databases. Since the database tables are usually very small, it won't take long to dump.
On some machines with a minimal (since 10.5) or newer (since 11.0) MariaDB server installed, the mysql* series of commands has been gradually deprecated and mysqldump may not be found. If you encounter this error, use the mariadb-dump command instead, such as:
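For a binary-package deployment, the dump commands might look like the following sketch; [username]/[password], the host and the backup path are placeholders you must substitute:

```shell
# Back up each database to a separate, timestamped file;
# keep older backups for at least a week.
cd /backup/databases
mariadb-dump -h127.0.0.1 -u[username] -p[password] --opt ccnet_db > ccnet_db.sql.`date +"%Y-%m-%d-%H-%M-%S"`
mariadb-dump -h127.0.0.1 -u[username] -p[password] --opt seafile_db > seafile_db.sql.`date +"%Y-%m-%d-%H-%M-%S"`
mariadb-dump -h127.0.0.1 -u[username] -p[password] --opt seahub_db > seahub_db.sql.`date +"%Y-%m-%d-%H-%M-%S"`
```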
"},{"location":"administration/backup_recovery/#backing-up-seafile-library-data","title":"Backing up Seafile library data","text":"
The data files are all stored in the /opt/seafile directory, so just back up the whole directory. You can directly copy the whole directory to the backup destination, or you can use rsync to do incremental backup.
This produces a separate copy of the data directory each time. You can delete older backup copies after a new one is completed.
If you have a lot of data, copying the whole data directory would take long. You can use rsync to do incremental backup.
rsync -az /opt/seafile /backup/data\n
This command backs up the data directory to /backup/data/seafile.
"},{"location":"administration/backup_recovery/#restore-from-backup","title":"Restore from backup","text":"
Now suppose your primary Seafile server is broken and you're switching to a new machine. Use the backup data to restore your Seafile instance:
Copy /backup/data/seafile to the new machine. Let's assume the Seafile deployment location on the new machine is also /opt/seafile.
Restore the database.
Since database and data are backed up separately, they may become a little inconsistent with each other. To correct the potential inconsistency, run seaf-fsck tool to check data integrity on the new machine. See seaf-fsck documentation.
"},{"location":"administration/backup_recovery/#restore-the-databases","title":"Restore the databases","text":"
Now with the latest valid database backup files at hand, you can restore them.
On some machines with a minimal (since 10.5) or newer (since 11.0) MariaDB server installed, the mysql* series of commands has been gradually deprecated and mysql may not be found. If you encounter this error, use the mariadb command instead, such as:
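A sketch of the restore with the mariadb client; [username]/[password] are placeholders, and the paths assume the backup layout described above:

```shell
cd /backup/databases
mariadb -u[username] -p[password] ccnet_db < ccnet_db.sql
mariadb -u[username] -p[password] seafile_db < seafile_db.sql
mariadb -u[username] -p[password] seahub_db < seahub_db.sql
```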
"},{"location":"administration/backup_recovery/#backup-and-restore-for-docker-based-deployment","title":"Backup and restore for Docker based deployment","text":""},{"location":"administration/backup_recovery/#structure","title":"Structure","text":"
We assume your Seafile volume path is /opt/seafile-data, and you want to back up to the /backup directory.
The data files to be backed up:
/opt/seafile-data/seafile/conf # configuration files\n/opt/seafile-data/seafile/seafile-data # data of seafile\n/opt/seafile-data/seafile/seahub-data # data of seahub\n
"},{"location":"administration/backup_recovery/#backing-up-database","title":"Backing up Database","text":"
# It's recommended to backup the database to a separate file each time. Don't overwrite older database backups for at least a week.\ncd /backup/databases\ndocker exec -it seafile-mysql mariadb-dump -u[username] -p[password] --opt ccnet_db > ccnet_db.sql\ndocker exec -it seafile-mysql mariadb-dump -u[username] -p[password] --opt seafile_db > seafile_db.sql\ndocker exec -it seafile-mysql mariadb-dump -u[username] -p[password] --opt seahub_db > seahub_db.sql\n
Tip
Since Seafile 12, the default database image is MariaDB 10.11. You may not be able to find the mysql* commands in the container (e.g. mysqldump: command not found), since they have been gradually deprecated. So we recommend using the mariadb* series of commands.
However, if you still use the MySQL docker image, you should continue to use mysqldump here:
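A sketch of the same backup with mysqldump, assuming the database container is still named seafile-mysql as above:

```shell
cd /backup/databases
docker exec -it seafile-mysql mysqldump -u[username] -p[password] --opt ccnet_db > ccnet_db.sql
docker exec -it seafile-mysql mysqldump -u[username] -p[password] --opt seafile_db > seafile_db.sql
docker exec -it seafile-mysql mysqldump -u[username] -p[password] --opt seahub_db > seahub_db.sql
```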
"},{"location":"administration/backup_recovery/#backing-up-seafile-library-data_1","title":"Backing up Seafile library data","text":""},{"location":"administration/backup_recovery/#to-directly-copy-the-whole-data-directory","title":"To directly copy the whole data directory","text":"
cp -R /opt/seafile-data/seafile /backup/data/\n
"},{"location":"administration/backup_recovery/#use-rsync-to-do-incremental-backup","title":"Use rsync to do incremental backup","text":"
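Following the rsync pattern shown for the binary-package deployment, an incremental backup of the Docker data directory might look like this (paths assume the layout described above):

```shell
# -a preserves permissions and timestamps, -z compresses during transfer;
# only changed files are copied on subsequent runs.
rsync -az /opt/seafile-data/seafile /backup/data
```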
"},{"location":"administration/backup_recovery/#recovery","title":"Recovery","text":""},{"location":"administration/backup_recovery/#restore-the-databases_1","title":"Restore the databases","text":"
Since Seafile 12, the default database image is MariaDB 10.11. You may not be able to find the mysql* commands in the container (e.g. mysql: command not found), since they have been gradually deprecated. So we recommend using the mariadb* series of commands.
However, if you still use the MySQL docker image, you should continue to use mysql here:
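A sketch of the restore with the mysql client inside the container; note -i instead of -it, since stdin is redirected from the dump file, and the container name and credentials are placeholders:

```shell
cd /backup/databases
docker exec -i seafile-mysql mysql -u[username] -p[password] ccnet_db < ccnet_db.sql
docker exec -i seafile-mysql mysql -u[username] -p[password] seafile_db < seafile_db.sql
docker exec -i seafile-mysql mysql -u[username] -p[password] seahub_db < seahub_db.sql
```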
Use the following command to clear expired session records in Seahub database:
cd seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py clearsessions\n
Tip
Enter into the docker image, then go to /opt/seafile/seafile-server-latest
"},{"location":"administration/clean_database/#use-clean_db_records-command-to-clean-seahub_db","title":"Use clean_db_records command to clean seahub_db","text":"
Use the following command to simultaneously clean up records older than 90 days in the tables Activity, sysadmin_extra_userloginlog, FileAudit, FileUpdate, FileHistory, PermAudit and FileTrash:
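The invocation follows the same pattern as the other manage.py commands in this manual (the section title names the command clean_db_records; the 90-day retention is built in):

```shell
cd seafile-server-latest
./seahub.sh python-env python3 seahub/manage.py clean_db_records
```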
Use the following command to clear the activity records:
use seahub_db;\nDELETE FROM Activity WHERE to_days(now()) - to_days(timestamp) > 90;\nDELETE FROM UserActivity WHERE to_days(now()) - to_days(timestamp) > 90;\n
Since version 6.2, we offer a command to clear outdated library records in the Seafile database, e.g. records that remain after a library is deleted. Users can restore a deleted library, so these records can't be removed at the time the library is deleted.
There are two tables in Seafile db that are related to library sync tokens.
RepoUserToken contains the authentication tokens used for library syncing. Note that a separate token is created for every client (including the sync client and SeaDrive).
RepoTokenPeerInfo contains more information about each client token, such as client name, IP address, last sync time etc.
When many sync clients are connected to the server, these two tables can contain a large number of rows, many of which are no longer actively used. You can clean up tokens that haven't been used recently with the following SQL query:
delete t,i from RepoUserToken t, RepoTokenPeerInfo i where t.token=i.token and sync_time < xxxx;\n
xxxx is the UNIX timestamp for the time before which tokens will be deleted.
To be safe, you can first check how many tokens will be removed:
select * from RepoUserToken t, RepoTokenPeerInfo i where t.token=i.token and sync_time < xxxx;\n
cd seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py export_file_access_log --start-date 2019-06-01 --end-date 2019-07-01\n
"},{"location":"administration/logs/","title":"Seafile server logs","text":""},{"location":"administration/logs/#log-files-of-seafile-server","title":"Log files of seafile server","text":"
seafile.log: logs of seaf-server
seahub.log: logs from Django framework
fileserver.log: logs of the golang file server component
seafevents.log: logs for background tasks and office file conversion
seahub_email_sender.log: logs for periodic email sending by background tasks
"},{"location":"administration/logs/#log-files-for-seafile-background-node-in-cluster-mode","title":"Log files for seafile background node in cluster mode","text":"
seafile.log: logs of seaf-server
seafevents.log: Empty
seafile-background-tasks.log: logs for background tasks and office file conversion
seahub_email_sender.log: logs for periodic email sending by background tasks
On the server side, Seafile stores the files in the libraries in an internal format. Seafile has its own representation of directories and files (similar to Git).
With a default installation, these internal objects are stored directly in the server's file system (such as Ext4 or NTFS). But most file systems don't guarantee the integrity of file contents after a hard shutdown or system crash. So if new Seafile internal objects are being written when the system crashes, they can be corrupted after the system reboots. This will make part of the corresponding library inaccessible.
Warning
If you store the seafile-data directory on a battery-backed NAS (like EMC or NetApp), or use the S3 backend available in the Pro edition, the internal objects won't be corrupted.
Note
If your Seafile server is deployed with Docker, make sure you have entered the container before executing the following commands in this manual:
docker exec -it seafile bash\n
This is also required for the other scripts in this document.
We provide a seaf-fsck.sh script to check the integrity of libraries. The seaf-fsck tool accepts the following arguments:
cd /opt/seafile/seafile-server-latest\n./seaf-fsck.sh [--repair|-r] [--export|-E export_path] [repo_id_1 [repo_id_2 ...]]\n
There are three modes of operation for seaf-fsck:
checking integrity of libraries.
repairing corrupted libraries.
exporting libraries.
"},{"location":"administration/seafile_fsck/#checking-integrity-of-libraries","title":"Checking Integrity of Libraries","text":"
Running seaf-fsck.sh without any arguments will run a read-only integrity check for all libraries.
./seaf-fsck.sh\n
If you want to check the integrity of specific libraries, just append the library ids as arguments:
./seaf-fsck.sh [library-id1] [library-id2] ...\n
The output looks like:
[02/13/15 16:21:07] fsck.c(470): Running fsck for repo ca1a860d-e1c1-4a52-8123-0bf9def8697f.\n[02/13/15 16:21:07] fsck.c(413): Checking file system integrity of repo fsck(ca1a860d)...\n[02/13/15 16:21:07] fsck.c(35): Dir 9c09d937397b51e1283d68ee7590cd9ce01fe4c9 is missing.\n[02/13/15 16:21:07] fsck.c(200): Dir /bf/pk/(9c09d937) is corrupted.\n[02/13/15 16:21:07] fsck.c(105): Block 36e3dd8757edeb97758b3b4d8530a4a8a045d3cb is corrupted.\n[02/13/15 16:21:07] fsck.c(178): File /bf/02.1.md(ef37e350) is corrupted.\n[02/13/15 16:21:07] fsck.c(85): Block 650fb22495b0b199cff0f1e1ebf036e548fcb95a is missing.\n[02/13/15 16:21:07] fsck.c(178): File /01.2.md(4a73621f) is corrupted.\n[02/13/15 16:21:07] fsck.c(514): Fsck finished for repo ca1a860d.\n
The corrupted files and directories are reported in the above message. By the way, you may also see output like the following:
[02/13/15 16:36:11] Commit 6259251e2b0dd9a8e99925ae6199cbf4c134ec10 is missing\n[02/13/15 16:36:11] fsck.c(476): Repo ca1a860d HEAD commit is corrupted, need to restore to an old version.\n[02/13/15 16:36:11] fsck.c(314): Scanning available commits...\n[02/13/15 16:36:11] fsck.c(376): Find available commit 1b26b13c(created at 2015-02-13 16:10:21) for repo ca1a860d.\n
This means the head commit (the current state of the library) recorded in the database is not consistent with the library data. In that case, fsck will try to find the latest consistent state and check integrity in that state.
Tip
If you have many libraries, it's helpful to save the fsck output into a log file for later analysis.
Corruption repair in seaf-fsck basically works in two steps:
If the library state (commit) recorded in the database is not found in the data directory, find the last available state from the data directory.
Check data integrity in that specific state. If files or directories are corrupted, set them to empty files or empty directories. The corrupted paths will be reported, so that the user can recover them from somewhere else.
Running the following command repairs all the libraries:
./seaf-fsck.sh --repair\n
Most of the time, you run the read-only integrity check first to find out which libraries are corrupted, and then repair the specific libraries with the following command:
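Based on the argument list shown earlier, repairing specific libraries looks like this:

```shell
./seaf-fsck.sh --repair [library-id1] [library-id2] ...
```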
After repairing, seaf-fsck records the list of corrupted files and folders in the library history, so it's much easier to locate the corrupted paths.
"},{"location":"administration/seafile_fsck/#best-practice-for-repairing-a-library","title":"Best Practice for Repairing a Library","text":"
To check all libraries and find out which library is corrupted, the system admin can run seaf-fsck.sh without any argument and save the output to a log file. Search for keyword \"Fail\" in the log file to locate corrupted libraries. You can run seaf-fsck to check all libraries when your Seafile server is running. It won't damage or change any files.
When the system admin finds that a library is corrupted, he/she should run seaf-fsck.sh with \"--repair\" for that library. After the command fixes the library, the admin should ask the affected users to recover files from other places. There are two ways:
Upload corrupted files or folders via the web interface
If the library was synced to a desktop computer, and that computer has a correct version of a corrupted file, resyncing the library on that computer will upload the correct version to the server.
"},{"location":"administration/seafile_fsck/#speeding-up-fsck-by-not-checking-file-contents","title":"Speeding up FSCK by not checking file contents","text":"
Starting from Pro edition 7.1.5, an option was added to speed up fsck. Most of seaf-fsck's running time is spent calculating hashes of file contents; each hash is compared with the block object ID, and if they are inconsistent, the block is detected as corrupted.
In many cases, file contents aren't actually corrupted; some objects are simply missing from the system. So it's often enough to check only for object existence, which greatly speeds up the fsck process.
To skip checking file contents, add the --shallow or -s option to seaf-fsck.
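Following the usage shown above, a shallow check might look like this:

```shell
./seaf-fsck.sh --shallow [library-id1] [library-id2] ...
```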
"},{"location":"administration/seafile_fsck/#exporting-libraries-to-file-system","title":"Exporting Libraries to File System","text":"
You can use seaf-fsck to export all the files in libraries to an external file system (such as Ext4). This procedure doesn't rely on the Seafile database. As long as you have your seafile-data directory, you can always export your files from Seafile to an external file system. The command for this operation is:
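Based on the argument list shown earlier, the export invocation looks like this:

```shell
./seaf-fsck.sh --export top_export_path [library-id1] [library-id2] ...
```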
The argument top_export_path is a directory to place the exported files. Each library will be exported as a sub-directory of the export path. If you don't specify library ids, all libraries will be exported.
Note
Currently only un-encrypted libraries can be exported. Encrypted libraries will be skipped.
Seafile uses storage de-duplication technology to reduce storage usage. The underlying data blocks will not be removed immediately after you delete a file or a library. As a result, the number of unused data blocks will increase on Seafile server.
To release the storage space occupied by unused blocks, you have to run a \"garbage collection\" program to clean up unused blocks on your server.
The GC program cleans up two types of unused blocks:
Blocks that no library references, that is, blocks belonging to deleted libraries;
If you set a history length limit on some libraries, the outdated blocks in those libraries will also be removed.
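Before removing anything, you can run GC in dry-run mode to see what would be removed; a sketch of the invocation (library ids are optional), which produces output like the log below:

```shell
cd /opt/seafile/seafile-server-latest
./seaf-gc.sh --dry-run [repo-id1] [repo-id2] ...
```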
[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo My Library(ffa57d93)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 265.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo ffa57d93.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 5 commits, 265 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 265 blocks total, about 265 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo aa(f3d0a8d0)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 5.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo f3d0a8d0.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 8 commits, 5 blocks.\n[03/19/15 19:41:49] gc-core.c(264): Populating index for sub-repo 9217622a.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 4 commits, 4 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 5 blocks total, about 9 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo test2(e7d26d93)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 507.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo e7d26d93.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 577 commits, 507 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 
507 blocks total, about 507 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:50] seafserv-gc.c(124): === Repos deleted by users ===\n[03/19/15 19:41:50] seafserv-gc.c(145): === GC is finished ===\n\n[03/19/15 19:41:50] Following repos have blocks to be removed:\nrepo-id1\nrepo-id2\nrepo-id3\n
If you give specific library ids, only those libraries will be checked; otherwise all libraries will be checked.
repos have blocks to be removed
Notice that at the end of the output there is a \"repos have blocks to be removed\" section. It contains the list of libraries that have garbage blocks. Later, when you run GC without the --dry-run option, you can use these library ids as input arguments to the GC program.
To actually remove garbage blocks, run without the --dry-run option:
./seaf-gc.sh [repo-id1] [repo-id2] ...\n
If libraries ids are specified, only those libraries will be checked for garbage.
As described before, there are two types of garbage blocks to be removed. Sometimes just removing the first type of blocks (those that belong to deleted libraries) is good enough. In this case, the GC program won't bother to check the libraries for outdated historic blocks. The \"-r\" option implements this feature:
./seaf-gc.sh -r\n
Success
Libraries deleted by the users are not immediately removed from the system. Instead, they're moved into a \"trash\" in the system admin page. Before they're cleared from the trash, their blocks won't be garbage collected.
Since Pro server 8.0.6 and community edition 9.0, you can remove garbage fs objects. It should be run without the --dry-run option:
./seaf-gc.sh --rm-fs\n
Bug reports
This command has a bug before Pro Edition 10.0.15 and Community Edition 11.0.7. It could cause virtual libraries (e.g. shared folders) to fail to merge into their parent libraries. Please avoid using this option in the affected versions, and contact our support team if you are affected by this bug.
"},{"location":"administration/seafile_gc/#using-multiple-threads-in-gc","title":"Using Multiple Threads in GC","text":"
You can specify the thread number in GC. By default,
If storage backend is S3/Swift/Ceph, 10 threads are started to do the GC work.
If storage backend is file system, only 1 thread is started.
You can specify the thread number with the \"-t\" option, which can be used together with all other options. Each thread does GC on one library. For example, the following command uses 20 threads to GC all libraries:
./seaf-gc.sh -t 20\n
Since the threads run concurrently, the output of each thread may be interleaved with the others. The library ID is printed on each line of output.
"},{"location":"administration/seafile_gc/#run-gc-based-on-library-id-prefix","title":"Run GC based on library ID prefix","text":"
GC usually runs quite slowly, as it needs to traverse the entire library history. You can use multiple threads to run GC in parallel. For even larger deployments, it's also desirable to run GC on multiple servers in parallel.
A simple pattern to divide the workload among multiple GC servers is to assign libraries to servers based on library ID. Since Pro edition 7.1.5, this is supported. You can add the \"--id-prefix\" option to seaf-gc.sh to specify a library ID prefix. For example, the command below will only process libraries whose ID starts with \"a123\".
./seaf-gc.sh --id-prefix a123\n
"},{"location":"administration/security_features/","title":"Security Questions","text":""},{"location":"administration/security_features/#how-is-the-connection-between-client-and-server-encrypted","title":"How is the connection between client and server encrypted?","text":"
Seafile uses HTTP(S) to sync files between client and server (since version 4.1.0).
Seafile provides a feature called encrypted library to protect your privacy. The file encryption/decryption is performed on client-side when using the desktop client for file synchronization. The password of an encrypted library is not stored on the server. Even the system admin of the server can't view the file contents.
There are a few limitations to this feature:
File metadata is NOT encrypted. The metadata includes: the complete list of directory and file names, every file's size, the history of editors, and when and what byte ranges were altered.
Client-side encryption currently does NOT work in the web browser or in the cloud file explorer of the desktop client. When you browse encrypted libraries via the web browser or the cloud file explorer, you need to input the password; the server uses the password to decrypt the \"file key\" of the library (see the description below) and caches the password in memory for one hour. The plain-text password is never stored or cached on the server.
If you create an encrypted library on the web interface, the library password and encryption keys will pass through the server. If you want end-to-end protection, you should create encrypted libraries from desktop client only.
For encryption protocol version 4, each library uses its own salt to derive key/iv pairs; however, all files within a library share the same salt. Likewise, all files within a library are encrypted with the same key/iv pair. With encryption protocol version 2, all libraries use the same salt, but separate key/iv pairs.
An encrypted library doesn't ensure file integrity. For example, the server admin can still partially change the contents of files in an encrypted library, and the client is not able to detect such changes.
Client-side encryption has worked on the iOS client since version 2.1.6 and on the Android client since version 2.1.0. But since version 3.0.0, the iOS and Android clients dropped support for client-side encryption; you need to send the password to the server to encrypt/decrypt files.
"},{"location":"administration/security_features/#how-does-an-encrypted-library-work","title":"How does an encrypted library work?","text":"
When you create an encrypted library, you'll need to provide a password for it. All the data in that library will be encrypted with the password before uploading it to the server (see limitations above).
There are currently two supported encryption protocol versions for encrypted libraries, version 2 and version 4. The two versions share the same basic procedure, so we describe that procedure first.
Generate a 32-byte long cryptographically strong random number. This will be used as the file encryption key (\"file key\").
Encrypt the file key with the user provided password. We first use a secure hash algorithm to derive a key/iv pair from the password, then use AES 256/CBC to encrypt the file key. The result is called the \"encrypted file key\". This encrypted file key will be sent to and stored on the server. When you need to access the data, you can decrypt the file key from the encrypted file key.
A \"magic token\" is derived from the password and the library id with the same secure hash algorithm. This token is stored with the library and is later used to check the password before decrypting data.
All file data is encrypted by the file key with AES 256/CBC. We use PBKDF2-SHA256 with 1000 iterations to derive key/iv pair from the file key. After encryption, the data is uploaded to the server.
The only difference between version 2 and version 4 is the salt used by the secure hash algorithm. In version 2, all libraries share the same fixed salt. In version 4, each library uses a separate, randomly generated salt.
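The key-derivation steps above can be sketched with Python's standard library. This is an illustration only, not Seafile's actual implementation: the salt value, the 32+16-byte key/iv split and the function names are assumptions, and the AES-256/CBC encryption itself is omitted:

```python
import hashlib
import os

ITERATIONS = 1000  # PBKDF2-SHA256 with 1000 iterations, as in the historical protocol


def derive_key_iv(secret: bytes, salt: bytes):
    # Derive a 32-byte AES-256 key and a 16-byte CBC IV from a secret.
    # (How Seafile splits the derived bytes is an internal detail; this
    # just illustrates the "derive a key/iv pair" pattern.)
    d = hashlib.pbkdf2_hmac("sha256", secret, salt, ITERATIONS, dklen=48)
    return d[:32], d[32:48]


salt = b"fixed-salt"        # version 2: one shared salt; version 4: random per library
file_key = os.urandom(32)   # step 1: cryptographically strong random 32-byte file key
password = b"library-password"

# step 2: key/iv pair derived from the password, used to wrap the file key
wrap_key, wrap_iv = derive_key_iv(password, salt)
# step 4: key/iv pair derived from the file key, used to encrypt file data
data_key, data_iv = derive_key_iv(file_key, salt)

assert len(wrap_key) == 32 and len(wrap_iv) == 16
```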
"},{"location":"administration/security_features/#secure-hash-algorithms-for-password-verification","title":"Secure hash algorithms for password verification","text":"
A secure hash algorithm is used to derive key/iv pair for encrypting the file key. So it's critical to choose a relatively costly algorithm to prevent brute-force guessing for the password.
Before version 12, a fixed secure hash algorithm (PBKDF2-SHA256 with 1000 iterations) was used, which is far from secure by today's standards.
Since Seafile server version 12, we allow the admin to choose proper secure hash algorithms. Currently two hash algorithms are supported.
PBKDF2: The only available parameter is the number of iterations. You need to increase the number of iterations over time, as GPUs are more and more used for such calculations. The default number of iterations is 1000. As of 2023, the recommended iteration count is 600,000.
Argon2id: A secure hash algorithm that has a high cost even on GPUs. There are 3 parameters that can be set: time cost, memory cost, and parallelism degree. The parameters are separated by commas, e.g. \"2,102400,8\", which are the default parameters used in Seafile. Learn more about this algorithm at https://github.com/P-H-C/phc-winner-argon2 .
"},{"location":"administration/security_features/#client-side-encryption-and-decryption","title":"Client-side encryption and decryption","text":"
The above encryption procedure can be executed on the desktop and the mobile client. The Seahub browser client uses a different encryption procedure that happens at the server. Because of this your password will be transferred to the server.
When you sync an encrypted library to the desktop, the client needs to verify your password. When you create the library, a \"magic token\" is derived from the password and the library id. This token is stored with the library on the server side. The client uses this token to check whether your password is correct before you sync the library. The magic token is generated by the secure hash algorithm chosen when the library was created.
For maximum security, the plain-text password isn't saved on the client side either. The client only saves the key/iv pair derived from the \"file key\", which is used to decrypt the data. So if you forget the password, you won't be able to recover it or access your data on the server.
"},{"location":"administration/security_features/#why-fileserver-delivers-every-content-to-everybody-knowing-the-content-url-of-an-unshared-private-file","title":"Why fileserver delivers every content to everybody knowing the content URL of an unshared private file?","text":"
When a file download link is clicked, a random URL is generated for the user to access the file from the fileserver. This URL can only be accessed once; after that, all access to it is denied. So even if someone else happens to learn the URL, they can't access it anymore.
This was changed in Seafile server version 12. Instead of a random URL, a URL like 'https://yourserver.com/seafhttp/repos/{library id}/file_path' is used for downloading the file. Authorization will be done by checking cookies or API tokens on the server side. This makes the URL more cache-friendly while still being secure.
"},{"location":"administration/security_features/#how-does-seafile-store-user-login-password","title":"How does Seafile store user login password?","text":"
User login passwords are stored in hashed form only. Note that the user login password is different from the passwords used for encrypted libraries. In the database, its format is
PBKDF2SHA256$iterations$salt$hash\n
The record is divided into 4 parts by the $ sign.
The first part is the used hash algorithm. Currently we use PBKDF2 with SHA256. It can be changed to an even stronger algorithm if needed.
The second part is the number of iterations of the hash algorithm
The third part is the random salt used to generate the hash
The fourth part is the final hash generated from the password
To calculate the hash:
First, generate a 32-byte long cryptographically strong random number, use it as the salt.
Calculate the hash with PBKDF2(password, salt, iterations). The number of iterations is currently 10000.
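The two steps above can be sketched in Python's standard library. The field encodings here (hex salt, base64 hash) are assumptions for illustration only; Seahub's real record format may encode the salt and hash differently:

```python
import base64
import hashlib
import hmac
import os

ITERATIONS = 10000  # current iteration count described above


def make_record(password: str) -> str:
    # Step 1: 32-byte cryptographically strong random salt.
    salt = os.urandom(32)
    # Step 2: PBKDF2(password, salt, iterations) with SHA256.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # Assemble the four $-separated parts: algorithm, iterations, salt, hash.
    return "$".join([
        "PBKDF2SHA256",
        str(ITERATIONS),
        salt.hex(),                         # hex encoding is an assumption
        base64.b64encode(digest).decode(),  # base64 encoding is an assumption
    ])


def check_password(password: str, record: str) -> bool:
    algo, iters, salt_hex, b64hash = record.split("$")
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(),
                                 bytes.fromhex(salt_hex), int(iters))
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(base64.b64encode(digest).decode(), b64hash)


record = make_record("s3cret")
assert check_password("s3cret", record)
assert not check_password("wrong", record)
```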
After that, there will be a \"Two-Factor Authentication\" section in the user profile page.
Users can use the Google Authenticator app on their smart-phone to scan the QR code.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/","title":"Seafile Professional Server Changelog (old)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#44","title":"4.4","text":"
Note: Two new options are added in version 4.4, both are in seahub_settings.py
SHOW_TRAFFIC: default is True; set to False if you want to hide public link traffic in the profile
[fix] Fix support for syncing old formatted libraries
Remove commit and fs objects in GC for deleted libraries
Add \"transfer\" operation to library list in \"admin panel->a single user\"
[fix] Fix the showing of the folder name for upload link generated from the root of a library
[fix] Add access log for online file preview
[fix] Fix permission settings for a sub-folder of a shared sub-folder
LDAP improvements and fixes
Only import LDAP users to Seafile internal database upon login
Only list imported LDAP users in \"organization->members\"
Add an option to not import users via LDAP sync (only update information for already imported users). The option name is IMPORT_NEW_USER. See document http://manual.seafile.com/deploy/ldap_user_sync.html (URL might be deprecated)
[security] Check validity of file object id to avoid a potential attack
[fix] Check the validity of the system default library template; if it is broken, recreate it.
[fix] After transferring a library, remove the original sharing information
[security] Fix possibility to bypass Captcha check
[security] More security fixes.
[pro] Enable syncing a sub-sub-folder of a shared sub-folder (For example, if you share library-A/sub-folder-B to a group, other group members can selectively sync sub-folder-B/sub-sub-folder-C)
[fix, office preview] Handle the case that \"/tmp/seafile-office-output\" is removed by the operating system
[ui] Improve UI for sharing link page, login page, file upload link page
[security] Clean web sessions when resetting a user's password
Delete the user's libraries when deleting a user
Show link expiring date in sharing link management page
[admin] In a user's admin page, show libraries' size and last modified time
[fix, api] Fix star file API
[pro, beta] Add \"Open via Client\" to enable calling local program to open a file at the web
About \"Open via Client\": The web interface will call the Seafile desktop client via the \"seafile://\" protocol to use a local program to open a file. If the file is already synced, the local file will be opened. Otherwise it is downloaded and uploaded after modification. Requires client version 4.3.0+.
Improve preview for office files (doc/docx/ppt/pptx)
In the old way, the whole file is converted to HTML5 before returning to the client. By converting an office file to HTML5 page by page, the first page will be displayed faster. By displaying each page in a separate frame, the quality for some files is improved too.
Add global address book and remove the contacts module (You can disable it if you use CLOUD_MODE by adding ENABLE_GLOBAL_ADDRESSBOOK = False in seahub_settings.py)
List users imported from LDAP
[guest] Enable guest user by default
[guest] Guest user can't generate share link
Don't count inactive users as licensed users
Important
[fix] Fix viewing sub-folders for password protected sharing
[fix] Fix viewing starred files
[fix] Fix support for uploading multiple files in clients' cloud file browser
Improve security of password resetting link
Remove user private message feature
New features
Enable syncing any folder for an encrypted library
Add open file locally (open file via desktop client)
Others
[fix] Fix permission checking for sub-folder permissions
Change \"quit\" to \"Leave group\"
Clean inline CSS
Use image gallery module in sharing link for folders containing images
[api] Update file details api, fix error
Make the share link file download token available for multiple downloads
[fix] Fix visiting share link whose original path is deleted
Hide the enable sub-library option since it is meaningless for the Pro edition
Support syncing any sub-folder in the desktop client
Add audit log, see http://manual.seafile.com/security/auditing.html (URL might be deprecated). This feature is turned off by default. To turn it on, see http://manual.seafile.com/deploy_pro/configurable_options.html (URL might be deprecated)
Syncing LDAP groups
Add permission setting for a sub-folder (beta)
Updates in community edition too
[fix] Fix image thumbnail in sharing link
Show detailed time when mouse over a relative time
Add trashed libraries (deleted libraries will first be put into trashed libraries where system admin can restore)
Improve seaf-gc.sh
Redesign fsck.
Add API to support logout/login an account in the desktop client
Add API to generate thumbnails for image files
Clean syncing tokens after deleting an account
Change permission of seahub_settings.py, ccnet.conf, seafile.conf to 0600
"},{"location":"changelog/changelog-for-seafile-professional-server/","title":"Seafile Professional Server Changelog","text":"
You can check Seafile release table to find the lifetime of each release and current supported OS: https://cloud.seatable.io/dtable/external-links/a85d4221e41344c19566/?tid=0000&vid=0000
SeaDoc is now stable, providing online notes and documents feature
A new wiki module
A new trash mechanism: deleted files are recorded in the database for fast listing
The password strength level is now calculated by an algorithm. The old USER_PASSWORD_MIN_LENGTH and USER_PASSWORD_STRENGTH_LEVEL options are removed; only USER_STRONG_PASSWORD_REQUIRED is still used.
ADDITIONAL_APP_BOTTOM_LINKS is removed, because there is no bottom bar in the navigation sidebar now.
SERVICE_URL and FILE_SERVER_ROOT are removed. SERVICE_URL will be calculated from SEAFILE_SERVER_PROTOCOL and SEAFILE_SERVER_HOSTNAME in the .env file.
ccnet.conf is removed. Some of its configuration items are moved to the .env file; others are read from items with the same name in seafile.conf.
For security reasons, WebDAV no longer supports login with an LDAP account; users with LDAP accounts must generate a WebDAV token on the profile page
[File tags] The current file tags feature is deprecated. We will re-implement a new one in version 13.0 with a new general metadata management module.
For ElasticSearch based search, full text search of doc/xls/ppt file types is no longer supported. This enables us to remove the Java dependency on the Seafile side.
Forbid generating share links for a library if the user has invisible/cloud-read-only permission on the library
[fix] Fix a configuration error for Ceph storage (if you don't use S3 interface)
[fix] Fix a bug in traffic statistic in golang file server
Support use different index names for ElasticSearch
Fix column view is limited to 100 items
Fix LDAP user login for WebDAV
Remove the configuration item \"ENABLE_FILE_COMMENT\" as it is no longer needed
Enable copy/move files between encrypted and non-encrypted libraries
Forbid creating libraries with Emoji in name
Fix: some letters in file names were clipped vertically in some dialogs
Fix a performance issue in sending file updates report
Some other UI fixes and improvements
SDoc editor 0.6
Support converting docx files to sdoc files
Support Markdown format in comments
Support dragging rows/columns in table elements, and other improvements for table elements
Other UI fixes and improvements
"},{"location":"changelog/changelog-for-seafile-professional-server/#1104-beta-and-sdoc-editor-05-2024-02-01","title":"11.0.4 beta and SDoc editor 0.5 (2024-02-01)","text":"
Major changes
Use a virtual ID to identify a user
LDAP login update
SAML/Shibboleth/OAuth login update
Update Django to version 4.2
Update SQLAlchemy to version 2.x
Add SeaDoc
UI Improvements
Improve UI of PDF view page
Update folder icon
The activities page supports filtering records by modifier
Add indicator for folders that have been shared out
Use file type icon as favicon for file preview pages
Support preview of JFIF image format
Pro edition only changes
Support S3 SSE-C encryption
Support a new invisible sub-folder permission
Update online read-write permission: it now allows the shared user to update/rename/delete files online, making it consistent with the normal read-write permission
Other changes
Remove file comment features as they are used very little (except for SeaDoc)
Add move dir/file, copy dir/file, delete dir/file, rename dir/file APIs for library token based API
Use the user's current language when creating Office files in OnlyOffice
Please check our document for how to upgrade to 10.0.
Note
If you upgrade to version 10.0.18+ from 10.0.16 or below, you need to upgrade SQLAlchemy to version 1.4.44+ if you use a binary based installation. Otherwise the \"activities\" page will not work.
[security] Upgrade pillow dependency from 9.0 to 10.0.
Note, after upgrading to this version, you need to upgrade the Python libraries in your server \"pillow==10.2.* captcha==0.5.* django_simple_captcha==0.5.20\"
Note: the included lxml library is removed for compatibility reasons. The library is used in the published libraries feature and the WebDAV feature. You need to install lxml manually after upgrading to 9.0.7. Use command pip3 install lxml to install it.
A page in published libraries is rendered at the server side to improve loading speed.
Upgrade Django from 3.2.6 to 3.2.14
Fix a bug in sending collaboration notices via email to users' contact email
Support OnlyOffice oform/docxf files
Improve user search when sharing a library
Admin panel supports searching a library via ID prefix
[fix] Fix preview PSD images
[fix] Fix a bug that office files can't be opened in sharing links via OnlyOffice
[fix] Go fileserver: Folder or File is not deletable when there is a spurious utf-8 char inside the filename
[fix] Fix file moving in WebDAV
ElasticSearch now supports HTTPS
Support advanced permissions like cloud-preview only and cloud read-write only when sharing a department library
[fix] Fix a bug in getting library sharing info in multi-tenancy mode
[fix] Fix a bug in the library list cache used by the syncing client
[fix] Fix another bug in uploading files to a sharing link with upload permission
Potential breaking change in Seafile Pro 8.0.3: You can set the maximum number of files contained in a library that can be synced by the Seafile client. The default is 100000. When you download a repo, the Seafile client will request the fs id list, and you can control the timeout period of this request through the fs_id_list_request_timeout configuration, which defaults to 5 minutes. These two options are added to prevent long fs-id-list requests from overloading the server. If you have large libraries on the server, this can cause \"internal server error\" to be returned to the client. You have to set a large enough limit for these two options.
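A sketch of how these two limits might look in seafile.conf. The [fileserver] section and the max_sync_file_count option name are assumptions based on common Seafile server configuration conventions; only fs_id_list_request_timeout is named in the text above, so verify both against your server version.

```ini
[fileserver]
# Maximum number of files in a library that a client may sync (default: 100000)
max_sync_file_count = 500000
# Timeout for the fs id list request, in seconds (default: 300, i.e. 5 minutes)
fs_id_list_request_timeout = 1800
```

Raising both values trades server load for the ability to sync very large libraries.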
[fix] Fix deleting libraries without owner in admin panel
Add an API to change a user's email
[fix] Fix a bug in storage migration script
[fix] Fix a bug that will cause fsck crash
[fix] Fix a XSS problem in notification
Potential breaking change in Seafile Pro 7.1.16: You can set the maximum number of files contained in a library that can be synced by the Seafile client. The default is 100000. When you download a repo, the Seafile client will request the fs id list, and you can control the timeout period of this request through the fs_id_list_request_timeout configuration, which defaults to 5 minutes. These two options are added to prevent long fs-id-list requests from overloading the server. If you have large libraries on the server, this can cause \"internal server error\" to be returned to the client. You have to set a large enough limit for these two options.
Since seafile-pro 7.0.0, we have upgraded Elasticsearch to 5.6. As Elasticsearch 5.6 relies on the Java 8 environment and can't run as root, you need to run Seafile with a non-root user and upgrade the Java version.
In version 6.3, Django is upgraded to version 1.11. Django 1.8, which is used in version 6.2, is deprecated in 2018 April.
With this upgrade, the fast-cgi mode is no longer supported. You need to configure Seafile behind Nginx/Apache in WSGI mode.
The way to run Seahub in another port is also changed. You need to modify the configuration file conf/gunicorn.conf instead of running ./seahub.sh start <another-port>.
Version 6.3 also changed the database table for file comments. If you have used this feature, you need to migrate old file comments using the following commands after upgrading to 6.3:
Note, this command should be run while Seafile server is running.
Version 6.3 changed '/shib-login' to '/sso'. If you use Shibboleth, you need to update your Apache/Nginx config. Please check the updated document: shibboleth config v6.3
Version 6.3 adds a new option for file search (seafevents.conf):
[INDEX FILES]\n...\nhighlight = fvh\n...\n
This option improves search speed significantly (10x) when the search result contains large pdf/doc files. But you need to rebuild the search index if you want to add this option.
From 6.2, it is recommended to use WSGI mode for communication between Seahub and Nginx/Apache. Two steps are needed if you'd like to switch to WSGI mode:
Change the config file of Nginx/Apache.
Restart Seahub with ./seahub.sh start instead of ./seahub.sh start-fastcgi
[fix] Fix a bug in concurrent uploading/creating of files (in the old version, when a user uploaded/deleted multiple files in the cloud file browser, there was a high chance of getting an \u201cinternal server error\u201d message)
[fix] Fix thumbnails for some images that are rotated 90 degrees
[fix] Fix support for resumable file upload
[fix] Fix MySQL connection pool in Ccnet
[fix] Use original GIF file when view GIF files
[fix, api] Check if name is valid when creating folder/file
Remove deleted libraries in search index
Use 30MB as the default value of THUMBNAIL_IMAGE_SIZE_LIMIT
[api] Improve performance when move or copy multiple files/folders
[admin] Support syncing user role from AD/LDAP attribute (ldap role sync)
[admin] Support deleting all outdated invitations at once
[admin] Improve access log
[admin] Support upload seafile-license.txt via web interface (only for single machine deployment)
[admin] Admin can cancel two-factor authentication of a user
[admin, role] Show user\u2019s role in LDAP(Imported) table
[admin, role] Add wildcard support in role mapping for Shibboleth login
[admin] Improve performance in getting total file number, used space and total number of devices
[admin] Admin can add users to an institution via Web UI
[admin] Admin can choose a user\u2019s role when creating a user
In version 6.0, the folder download mechanism has been updated. This requires that, in a cluster deployment, seafile-data/httptemp folder must be in an NFS share. You can make this folder a symlink to the NFS share.
cd /data/haiwen/\nln -s /nfs-share/seafile-httptemp seafile-data/httptemp\n
The httptemp folder only contains temp files for downloading/uploading file on web UI. So there is no reliability requirement for the NFS share. You can export it from any node in the cluster.
Force users to change password if imported via csv
Support setting a user's quota and name when importing users via csv
Set user's quota in user list page
Add searching groups by group name
Use ajax when deleting a user's library in admin panel
Support logrotate for controller.log
Add a log when a user can't be found in LDAP during login, so that the system admin can know whether it is caused by a password error or the user can't be found
Delete shared libraries information when deleting a user
Add admin API to create default library for a user
[ldap-sync] Support syncing users from AD/LDAP as inactive user
Other
[fix] Fix user search when global address book is disabled in CLOUD_MODE
[fix] Avoid timeout in some cases when showing a library trash
Show \"the account is inactive\" when an inactive account tries to log in
[security] Remove viewer.js to show open document files (ods, odt) because viewer.js is not actively maintained and may have potential security bugs
[fix] Exclude virtual libraries from storage size statistics
[fix] Fix mysql gone away problem in seafevents
Add region config option for Swift storage backend
[anti-virus] Send notification to the library owner if a virus is found
Guest invitation: Prevent the same address from being invited multiple times by the same inviter or by multiple inviters
Guest invitation: Add a regex to prevent certain email addresses from being invited (see roles permissions)
Office online: support co-authoring
Admin can set users' department and name when creating users
Show total number of files and storage in admin info page
Show total number of devices and recently connected devices in admin info page
Delete shared libraries information when deleting a user
Upgrade Django to 1.8.17
Admin can create group in admin panel
[fix] Fix quota check: users can't upload a file if the quota will be exceeded after uploading the file
[fix] Fix quota check when copy file from one library to another
Add # -*- coding: utf-8 -*- to seahub_settings.py, so that admin can use non-ascii characters in the file.
[fix] Prevent admin from accessing a group's wiki
[fix] Prevent transferring libraries to guest accounts
[fix] Prevent guest accounts from creating share links via API v2
Add a log when a user can't be found in LDAP during login, so that the system admin can know whether it is caused by a password error or the user can't be found
Ignore whitespace characters at the end of lines in ccnet.conf
[fix] Virus scan fails when the keystone token has expired https://github.com/haiwen/seafile/issues/1737
[fix] If you share a sub-folder to a group, the sub-folder will appear as a library in that group page. Don't show the \"permission\" menu item for such a shared sub-folder on the group page, because setting permissions on this shared sub-folder does not work. The user should set permissions on the original library directly.
[fix] Fix API for uploading file by blocks (Used by iOS client when uploading a large file)
[fix] Fix a database connection problem in ccnet-server
[fix] Fix moved files are still present in local folder until refresh
[fix] Fix admin panel can't show deleted libraries
[admin] Add group transfer function in admin panel
[admin] Admin can set library permissions in admin panel
Improve the check that the user running Seafile must be the owner of seafile-data. If seafile-data is a symbolic link, check the destination folder instead of the symbolic link.
[ui] Improve rename operation
Show name/contact email in admin panel and enable search user by name/contact email
Add printing style for markdown and doc/pdf
The \u201cSeafile\u201d in \"Welcome to Seafile\" message can be customised by SITE_NAME
Improve sorting of files with numbers
[api] Add admin API to only return LDAP imported user list
Code clean and update Web APIs
Remove the number of synced libraries in the devices page to simplify the interface and concept
Update help pages
[online preview] The online preview size limit setting FILE_PREVIEW_MAX_SIZE will not affect videos and audio files, so videos and audio of any size can be previewed online.
[online preview] Add printing style for markdown
Pro only features
Support LibreOffice online/Collabora Office online
Add two-factor authentication
Remote wipe (need desktop client 6.0.0)
[anti-virus] Support parallel scan
[anti-virus] Add option to only scan a file with size less than xx MB
[anti-virus] Add option to specify which file types to scan
[anti-virus] Add scanning virus instantly when user upload files via upload link
[online preview] Add printing style for doc/pdf
[online preview] Warn user if online preview only shows 50 pages for doc/pdf files with more than 50 pages
[fix] Fix search only working on the first page of search result pages
Add \u201cGroups\u201d category in the client\u2019s library view
Clicking a notification popup now opens the exact folder containing the modified file.
Change \"Get Seafile Share Link\" to \"Get Seafile Download Link\"
[fix] Use case-insensitive sorting in cloud file browser
[fix] Don't sync a folder in Windows if it contains invalid characters instead of creating an empty folder with invalid name
[fix] Fix a rare bug where sometimes files are synced as zero length files. This happens when another software doesn't change the file timestamp after changing the content of the file.
Fix two password input dialogs popping up when visiting an encrypted library
Pop up a tip when file conflicts happen
Don't send the password to server when creating an encrypted library
[mac] Fix support for TLS 1.2
[win, extension] Add context menu \"get internal link\"
Enable uploading of an empty folder in cloud file browser
[pro] Enable customization of app name and logo for the main window (See https://github.com/haiwen/seafile-docs/blob/master/config/seahub_customization.md#customize-the-logo-and-name-displayed-on-seafile-desktop-clients-seafile-professional-only)
[fix, windows] Fix a bug that causes freeze of Seafile UI
[sync] Improve index performance after a file is modified
[sync] Use multi-threads to upload/download file blocks
[admin] Enable config Seafile via seafile.rc in Mac/Linux or seafile.ini in Windows (https://github.com/haiwen/seafile-user-manual/blob/master/en/faq.md)
[admin] Enable uninstalling Seafile without the \"deleting config files\" popup dialog
Add file lock
[mac, extension] Add getting Seafile internal link
[mac, extension] Improve performance of showing sync status
[win] Support file paths longer than 260 characters.
In the old version, you would sometimes see strange directories such as \"Documents~1\" synced to the server; this is because the old version did not handle long paths correctly.
[mac] Fix a syncing problem when library name contains \"\u00e8\" characters
[windows] Gracefully handle file lock issue.
In the previous version, when you open an office file in Windows, it is locked by the operating system. If another person modifies this file on another computer, syncing stops until you close the locked file. In this new version, the syncing process will continue. The locked file will not be synced to the local computer, but other files will not be affected.
[fix] Fix \"sometimes deleted folder reappearing problem\" on Windows.
You have to update the clients on all PCs. If one PC does not use v3.1.11, when the \"deleting folder\" information is synced to this PC, it will fail to delete the folder completely, and the folder will be synced back to the other PCs, so they will see the folder reappear.
"},{"location":"changelog/server-changelog-old/","title":"Seafile Server Changelog (old)","text":""},{"location":"changelog/server-changelog-old/#50","title":"5.0","text":"
Note when upgrading to 5.0 from 4.4
You can follow the document on major upgrade (http://manual.seafile.com/deploy/upgrade.html) (URL might be deprecated)
In Seafile 5.0, we have moved all config files to folder conf, including:
If you want to downgrade from v5.0 to v4.4, you should manually copy these files back to the original place, then run minor_upgrade.sh to upgrade symbolic links back to version 4.4.
The 5.0 server is compatible with v4.4 and v4.3 desktop clients.
Common issues (solved) when upgrading to v5.0:
DatabaseError after Upgrade to 5.0 https://github.com/haiwen/seafile/issues/1429#issuecomment-153695240
Get name, institution, contact_email field from Shibboleth
[webdav] Don't show sub-libraries
Enable LOGIN_URL to be configured; users need to add LOGIN_URL to seahub_settings.py explicitly if deployed at a non-root domain, e.g. LOGIN_URL = '//accounts/login/'.
Add ENABLE_USER_CREATE_ORG_REPO to enable/disable organization repo creation.
Change the Chinese translation of \"organization\"
Use GB/MB/KB instead of GiB/MiB/KiB in quota calculation and quota setting (1GB = 1000MB = 1,000,000KB)
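The decimal unit convention above can be illustrated with a tiny helper (a hypothetical function for illustration, not part of Seafile's API):

```python
def quota_gb_to_bytes(gb: int) -> int:
    # Seafile quotas use decimal units:
    # 1 GB = 1000 MB = 1,000,000 KB = 1,000,000,000 bytes
    return gb * 1000 ** 3


print(quota_gb_to_bytes(2))  # 2000000000
```

Note this differs from binary GiB/MiB/KiB, where 1 GiB = 1024 MiB = 2^30 bytes.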
Show detailed message if sharing a library failed.
[fix] Fix JPG Preview in IE11
[fix] Show \"out of quota\" instead of \"DERP\" when uploading files via the web interface while out of quota
[fix] Fix empty nickname during shibboleth login.
[fix] Fix a default repo re-creation bug when logging in via web after desktop login.
[fix] Don't show sub-libraries at choose default library page, seafadmin page and save shared file to library page
[fix] Seafile server daemon: write PID file before connecting to database to avoid a problem when the database connection is slow
[fix] Don't redirect to old library page when restoring a folder in snapshot page
[fix] Fix start up parameters for seaf-fuse, seaf-server, seaf-fsck
Update Markdown editor and viewer. The update of the markdown editor and parser removed support for the Seafile-specific wiki syntax: linking to other wiki pages using [[ Pagename]] is no longer possible.
Add tooltip in admin panel->library->Trash: \"libraries deleted 30 days before will be cleaned automatically\"
Don't open a new page when clicking the settings, trash and history icons in the library page
Other small UI improvements
Config changes:
Move all config files to folder conf
Add a web UI to configure the server. The config items are saved in a database table (seahub-db/constance_config). They have a higher priority than the items in config files.
Trash:
A trash for every folder, showing deleted items in the folder and sub-folders. Other changes
Admin:
Admin can see the file numbers of a library
Admin can disable the creation of encrypted library
Security:
Change most GET requests to POST to increase security
Add global address book and remove the contacts module (You can disable it if you use CLOUD_MODE by adding ENABLE_GLOBAL_ADDRESSBOOK = False in seahub_settings.py)
Use image gallery module in sharing link for folders containing images
[fix] Fix missing library names (shown as none) in the 32-bit version
[fix] Fix viewing sub-folders for password protected sharing
[fix] Fix viewing starred files
[fix] Fix support for uploading multiple files in clients' cloud file browser
Use unix domain socket in ccnet to listen for local connections. This isolates the access to ccnet daemon for different users. Thanks to Kimmo Huoman and Henri Salo for reporting this issue.
[fix] Handle loading avatar exceptions to avoid 500 error
Platform
Use a random salt and the PBKDF2 algorithm to store users' passwords. (You need to manually upgrade the database if you are using 3.0.0 beta2 with the MySQL backend.)
Syncing and sharing a sub-directory of an existing library.
Directly sharing files between two users (instead of generating public links)
User can save shared files to one's own library
[wiki] Add frame and max-width to images
Use 127.0.0.1 to read files (markdown, txt, pdf) in file preview
[bugfix] Fix pagination in library snapshot page
Increase the max length of a message reply from 128 characters to 2000 characters.
Improved performance for home page and group page
[admin] Add administration of public links
API
Add creating/deleting library API
Platform
Improve HTTPS support; HTTPS reverse proxy is now the recommended way.
Add LDAP filter and multiple DN
Case insensitive login
Move log files to a single directory
[security] Add salt when saving user's password
[bugfix] Fix a bug in handling client connection
"},{"location":"changelog/server-changelog-old/#17","title":"1.7","text":""},{"location":"changelog/server-changelog-old/#1702-for-linux-32-bit","title":"1.7.0.2 for Linux 32 bit","text":"
[bugfix] Fix \"Page Unavailable\" when view doc/docx/ppt.
"},{"location":"changelog/server-changelog-old/#1701-for-linux-32-bit","title":"1.7.0.1 for Linux 32 bit","text":"
Video/Audio playback with MediaElement.js (Contributed by Phillip Thelen)
Edit library title/description
Public Info & Public Library page are combined into one
Support selection of file encoding when viewing online
Improved online picture view (Switch to prev/next picture with keyboard)
Fixed a bug when doing diff for a newly created file.
Sort starred files by last-modification time.
Seafile Daemon
Fixed bugs for using httpserver under https
Fixed performance bug when checking client's credential during sync.
LDAP support
Enable setting of the size of the thread pool.
API
Add listing of shared libraries
Add unsharing of a library.
"},{"location":"changelog/server-changelog/","title":"Seafile Server Changelog","text":"
You can check Seafile release table to find the lifetime of each release and current supported OS: https://cloud.seatable.io/dtable/external-links/a85d4221e41344c19566/?tid=0000&vid=0000
SeaDoc is now stable, providing online notes and documents feature
A new wiki module
A new trash mechanism: deleted files are recorded in the database for fast listing
The password strength level is now calculated by an algorithm. The old USER_PASSWORD_MIN_LENGTH and USER_PASSWORD_STRENGTH_LEVEL options are removed; only USER_STRONG_PASSWORD_REQUIRED is still used.
ADDITIONAL_APP_BOTTOM_LINKS is removed, because there is no bottom bar in the navigation sidebar now.
SERVICE_URL and FILE_SERVER_ROOT are removed. SERVICE_URL will be calculated from SEAFILE_SERVER_PROTOCOL and SEAFILE_SERVER_HOSTNAME in the .env file.
ccnet.conf is removed. Some of its configuration items are moved to the .env file; others are read from items with the same name in seafile.conf.
For security reasons, WebDAV no longer supports login with an LDAP account; users with LDAP accounts must generate a WebDAV token on the profile page
[File tags] The current file tags feature is deprecated. We will re-implement a new one in version 13.0 with a new general metadata management module.
Note: the included lxml library is removed for compatibility reasons. The library is used in the published libraries feature and the WebDAV feature. You need to install lxml manually after upgrading to 9.0.7. Use command pip3 install lxml to install it.
A page in published libraries is rendered at the server side to improve loading speed.
Upgrade Django from 3.2.6 to 3.2.14
Fix a bug in sending collaboration notices via email to users' contact email
Support OnlyOffice oform/docxf files
Improve user search when sharing a library
Admin panel supports searching a library via ID prefix
[fix] Fix preview PSD images
[fix] Fix a bug that office files can't be opened in sharing links via OnlyOffice
[fix] Go fileserver: Folder or File is not deletable when there is a spurious utf-8 char inside the filename
In version 6.3, users can create public or private Wikis. In version 7.0, private Wikis are replaced by the column mode view. Every library has a column mode view, so users don't need to explicitly create private Wikis.
Public Wikis are now renamed to published libraries.
Upgrade
Just follow our document on major version upgrade. No special steps are needed.
In version 6.3, Django is upgraded to version 1.11. Django 1.8, which is used in version 6.2, is deprecated in 2018 April.
With this upgrade, the fast-cgi mode is no longer supported. You need to configure Seafile behind Nginx/Apache in WSGI mode.
The way to run Seahub in another port is also changed. You need to modify the configuration file conf/gunicorn.conf instead of running ./seahub.sh start <another-port>.
Version 6.3 also changed the database table for file comments. If you have used this feature, you need to migrate old file comments using the following commands after upgrading to 6.3:
From 6.2, It is recommended to use WSGI mode for communication between Seahub and Nginx/Apache. Two steps are needed if you'd like to switch to WSGI mode:
Change the config file of Nginx/Apache.
Restart Seahub with ./seahub.sh start instead of ./seahub.sh start-fastcgi
Enable fixing the language of share link emails (option SHARE_LINK_EMAIL_LANGUAGE in seahub_settings.py), so the admin can force share link emails to always be in English, regardless of what language the sender is using.
The language of the interface of CollaboraOffice/OnlyOffice will be determined by the language of the current user.
Display the correct image thumbnails in favorites instead of the generic one
Note: If you ever used 6.0.0, 6.0.1 or 6.0.2 with SQLite as the database and encountered a problem with desktop/mobile client login, follow https://github.com/haiwen/seafile/pull/1738 to fix the problem.
Show total storage, total number of files, total number of connected devices in the info page of admin panel
Force users to change password if imported via csv
Support setting a user's quota and name when importing users via csv
Set user's quota in user list page
Add searching groups by group name
Use ajax when deleting a user's library in admin panel
Support logrotate for controller.log
Add # -*- coding: utf-8 -*- to seahub_settings.py, so that admin can use non-ascii characters in the file.
Ignore whitespace characters at the end of lines in ccnet.conf
Add a log when a user can't be found in LDAP during login, so that the system admin can know whether it is caused by a password error or the user can't be found
Delete shared libraries information when deleting a user
Other
[fix] Uploading files with special names makes seaf-server crash
[fix] Fix user search when global address book is disabled in CLOUD_MODE
[fix] Avoid timeout in some cases when showing a library trash
Show \"the account is inactive\" when an inactive account tries to log in
[security] Remove viewer.js, which was used to show OpenDocument files (ods, odt), because viewer.js is not actively maintained and may have potential security bugs (Thanks to Lukas Reschke from Nextcloud GmbH for reporting the issue)
[fix] Fix PostgreSQL support
Update Django to 1.8.17
Change time_zone to UTC as default
[fix] Fix quota check: users can't upload a file if the quota will be exceeded after uploading the file
[fix] Fix quota check when copy file from one library to another
[fix] Fix default value of created_at in table api2_tokenv2. This bug leads to login problems for desktop and mobile clients.
[fix] Fix a bug in generating a password protected share link
Improve the check that the user running Seafile is the owner of seafile-data. If seafile-data is a symbolic link, check the destination folder instead of the symbolic link.
[ui] Improve rename operation
Admin can set library permissions in admin panel
Show name/contact email in admin panel and enable search user by name/contact email
Add printing style for markdown
The \u201cSeafile\u201d in \"Welcome to Seafile\" message can be customised by SITE_NAME
Improve sorting of files with numbers
[fix] Fix can't view more than 100 files
[api] Add admin API to only return LDAP imported user list
[fix] Fix seaf-fsck.sh --export fails without database
[fix] Fix users with umlauts in their display name breaking group management and api2/account/info on some Linux distributions
Remove user from groups when a user is deleted.
[fix] Fix can't generate shared link for read-only shared library
[fix] Fix can still view file history after library history is set to \"no history\".
[fix] Fix after moving or deleting multiple selected items in the webinterface, the buttons are lost until reloading
Check the user before starting Seafile. The user must be the owner of the seafile-data directory
Don't allow registration with emails containing very special characters that may include XSS strings
[fix] During downloading multiple files/folders, show \"Total size exceeds limits\" instead of \"internal server error\" when the selected items exceed limits.
[fix] When deleting a share, only check whether the user being shared with exists. This avoids the situation where a share to a user can't be deleted after that user has been deleted.
Add a notification to a user if he/she is added to a group
Improve UI for password change page when forcing password change after admin reset a user's password
[fix] Fix duplicated files show in Firefox if the folder name contains single quote '
Note: in this version, the group discussion is not re-implemented yet. It will be available when the stable version is released.
Redesign navigation
Rewrite group management
Improve sorting for large folder
Remember the sorting option for folder
Improve devices page
Update icons for libraries and files
Remove library settings page, re-implement them with dialogs
Remove group avatar
Don't show share menu in top bar when multiple item selected
Auto-focus on username field when loading the login page
Remove self-introduction in user profile
Upgrade to django 1.8
Force the user to change password if added by admin or after a password reset by admin
Disable adding non-existing users to a group
"},{"location":"config/","title":"Server Configuration and Customization","text":""},{"location":"config/#config-files","title":"Config Files","text":"
The config files used in Seafile include:
environment variables: contains environment variables; the items here are shared between different components. Newly introduced components, like sdoc-server and notification server, read configurations from environment variables and have no config files.
seafile.conf: contains settings for seafile daemon and fileserver.
seahub_settings.py: contains settings for Seahub
seafevents.conf: contains settings for background tasks and file search.
You can also modify most of the config items via the web interface. The config items are saved in a database table (seahub-db/constance_config). They take priority over the items in config files.
"},{"location":"config/#the-design-of-configure-options","title":"The design of configure options","text":"
There are now three places where you can configure the Seafile server:
environment variables
config files
via web interface
The web interface has the highest priority. It contains a subset of end-user oriented settings. In practice, you can disable settings via the web interface for simplicity.
Environment variables contain system-level settings that are needed when initializing or running the Seafile server. Environment variables fall into three categories:
Initialization variables that are used to generate config files when the Seafile server runs for the first time.
Variables that are shared and used by multiple components of the Seafile server.
Variables that are used both to generate config files and later needed by components that have no corresponding config files.
The variables in the first category can be deleted after initialization. In the future, we will make more components read their config from environment variables, so that the third category is no longer needed.
"},{"location":"config/admin_roles_permissions/","title":"Roles and Permissions Support","text":"
You can add/edit roles and permissions for administrators. Seafile has four built-in admin roles:
default_admin, has all permissions.
system_admin, can only view system info and config system.
daily_admin, can only view system info, view statistic, manage library/user/group, view user log.
audit_admin, can only view system info and admin log.
All administrators will have default_admin role with all permissions by default. If you set an administrator to some other admin role, the administrator will only have the permissions you configured to True.
Seafile supports eight permissions for now. Their configuration is very similar to that of common user roles; you can customize it by adding the following settings to seahub_settings.py.
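For example, a minimal sketch for seahub_settings.py. This assumes the setting name ENABLED_ADMIN_ROLE_PERMISSIONS and the permission key names shown here; verify both against the documentation for your Seafile version:

```python
# seahub_settings.py -- sketch of custom admin role permissions.
# Assumed setting name and permission keys; any permission omitted
# from a role is treated as False.
ENABLED_ADMIN_ROLE_PERMISSIONS = {
    'system_admin': {
        'can_view_system_info': True,
        'can_config_system': True,
    },
    'daily_admin': {
        'can_view_system_info': True,
        'can_view_statistic': True,
        'can_manage_library': True,
        'can_manage_user': True,
        'can_manage_group': True,
        'can_view_user_log': True,
    },
    'audit_admin': {
        'can_view_system_info': True,
        'can_view_admin_log': True,
    },
}
```

An administrator assigned one of these roles then has only the permissions set to True for that role.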
Seafile Server supports the following external authentication types:
LDAP (Auth and Sync)
OAuth
Shibboleth
SAML
Since version 11.0, switching between the types is possible, but any switch requires modifications of Seafile's databases.
Note
Before manually manipulating your database, make a database backup, so you can restore your system if anything goes wrong!
See more about making a database backup.
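A minimal backup sketch, assuming the default database names and a MySQL user with access to all three databases (adjust host, user and database names to your setup):

```shell
# Dump the three Seafile databases before touching them manually.
mysqldump -h 127.0.0.1 -u seafile -p \
  --databases ccnet_db seafile_db seahub_db > seafile_dbs_backup.sql
```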
"},{"location":"config/auth_switch/#migrating-from-local-user-database-to-external-authentication","title":"Migrating from local user database to external authentication","text":"
As an organisation grows and its IT infrastructure matures, migration from local authentication to an external authentication like LDAP, SAML or OAuth is a common requirement. Fortunately, the switch is comparatively simple.
Configure and test the desired external authentication. Note the name of the provider you use in the config file. The user to be migrated should already be able to log in with this new authentication type, but he will be created as a new user with a new unique identifier, so he will not have access to his existing libraries. Note the uid from the social_auth_usersocialauth table. Delete this new, still empty user again.
Determine the ID of the user to be migrated in ccnet_db.EmailUser. For users created before version 11, the ID should be the user's email, for users created after version 11, the ID should be a string like xxx@auth.local.
Replace the password hash with an exclamation mark.
Create a new entry in social_auth_usersocialauth with the xxx@auth.local, your provider and the uid.
The login with the password stored in the local database is not possible anymore. After logging in via external authentication, the user has access to all his previous libraries.
This example shows how to migrate the user with the username 12ae56789f1e4c8d8e1c31415867317c@auth.local from local database authentication to OAuth. The OAuth authentication is configured in seahub_settings.py with the provider name authentik-oauth. The uid of the user inside the Identity Provider is HR12345.
This is what the database looks like before these commands must be executed:
mysql> select email,left(passwd,25) from EmailUser where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+------------------------------+\n| email | left(passwd,25) |\n+---------------------------------------------+------------------------------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | PBKDF2SHA256$10000$4cdda6... |\n+---------------------------------------------+------------------------------+\n\nmysql> update EmailUser set passwd = '!' where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n\nmysql> insert into `social_auth_usersocialauth` (`username`, `provider`, `uid`, `extra_data`) values ('12ae56789f1e4c8d8e1c31415867317c@auth.local', 'authentik-oauth', 'HR12345', '');\n
Note
The extra_data field stores the user's information returned from the provider. For most providers, the extra_data field is usually an empty string. Since version 11.0.3-Pro, the default value of the extra_data field is NULL.
Afterwards the databases should look like this:
mysql> select email,passwd from EmailUser where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+------- +\n| email | passwd |\n+---------------------------------------------+--------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | ! |\n+---------------------------------------------+--------+\n\nmysql> select username,provider,uid from social_auth_usersocialauth where username = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+-----------------+---------+\n| username | provider | uid |\n+---------------------------------------------+-----------------+---------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | authentik-oauth | HR12345 |\n+---------------------------------------------+-----------------+---------+\n
"},{"location":"config/auth_switch/#migrating-from-one-external-authentication-to-another","title":"Migrating from one external authentication to another","text":"
First configure the two external authentications and test them with a dummy user. Then, to migrate all the existing users you only need to make changes to the social_auth_usersocialauth table. No entries need to be deleted or created. You only need to modify the existing ones. The xxx@auth.local remains the same, you only need to replace the provider and the uid.
"},{"location":"config/auth_switch/#migrating-from-external-authentication-to-local-user-database","title":"Migrating from external authentication to local user database","text":"
First, delete the entry in the social_auth_usersocialauth table that belongs to the particular user.
Then you can reset the user's password, e.g. via the web interface. The user will be assigned a local password, and from then on authentication will be done against Seafile's local database.
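Using the same example user as in the migration example above, removing the external identity could look like this (table and column names as shown earlier in this section):

```sql
-- Remove the external identity mapping; afterwards the user can be
-- given a local password again, e.g. via the web interface.
delete from social_auth_usersocialauth
  where username = '12ae56789f1e4c8d8e1c31415867317c@auth.local';
```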
More details about this option will follow soon.
"},{"location":"config/auto_login_seadrive/","title":"Auto Login to SeaDrive on Windows","text":"
Kerberos is a widely used single sign-on (SSO) protocol. Auto login relies on a Kerberos service. For server configuration, please read the remote user authentication documentation. You have to configure Apache to authenticate with Kerberos. This is out of the scope of this documentation. You can, for example, refer to this webpage.
The client machine has to join the AD domain. In a Windows domain, the Kerberos Key Distribution Center (KDC) is implemented on the domain service. Since the client machine has been authenticated by the KDC when a Windows user logs in, a Kerberos ticket will be generated for the current user without the need for another login in the browser.
When a program using the WinHttp API tries to connect to a server, it can perform a login automatically through Integrated Windows Authentication. Internet Explorer and SeaDrive both use this mechanism.
The details of Integrated Windows Authentication are described below:
Decide whether or not to use IWA according to the address and Internet Options. (more in next section)
Send a request to the server (e.g. http://test.seafile.com/sso)
The server returns an HTTP 401 unauthorized response with the Negotiate header which includes an authentication protocol.
The WinHttp API will try to use Kerberos first, if there is a valid ticket from KDC. The request will be sent again, together with the ticket in an HTTP header.
Then, Apache can check the ticket with KDC, and extract the username from it. The username will be passed to SeaHub for a successful auto login.
If the WinHttp API fails to get a ticket, it will then try the NTLM protocol by sending an HTTP request with a Negotiate NTLMSSP token in the header. Since Apache does not support the NTLM protocol, it will return an HTTP 401 unauthorized response and stop the negotiation. At this point, the browser will pop up a login dialog, which means auto login has failed.
In short:
The client machine has to join the AD domain.
The Internet Options has to be configured properly.
The WinHttp API should be able to get a valid ticket from KDC. Make sure you use the correct server address (e.g. test.seafile.com) when you generate keytab file on the domain controller.
"},{"location":"config/auto_login_seadrive/#auto-login-on-internet-explorer","title":"Auto Login on Internet Explorer","text":"
The Internet Options have to be configured as follows:
Open \"Internet Options\", select \"Security\" tab, select \"Local Intranet\" zone.
\"Sites\" -> \"Advanced\" -> \"Add this website to zone\". This is the place where we fill the address (e.g. http://test.seafile.com)
\"Security level for this zone\" -> \"Custom level...\" -> \"Automatic log-on with current username and password\".
Note
The above configuration requires a reboot to take effect.
Next, we shall test the auto login function in Internet Explorer: visit the website and click the \"Single Sign-On\" link. It should log in directly; otherwise auto login is not working.
Note
The address used in the test must be the same as the address specified in the keytab file. Otherwise, the client machine can't get a valid ticket from Kerberos.
"},{"location":"config/auto_login_seadrive/#auto-login-on-seadrive","title":"Auto Login on SeaDrive","text":"
SeaDrive will use the Kerberos login configuration from the Windows Registry under HKEY_CURRENT_USER/SOFTWARE/SeaDrive.
Key : PreconfigureServerAddr\nType : REG_SZ\nValue : <the url of seafile server>\n\nKey : PreconfigureUseKerberosLogin\nType : REG_SZ\nValue : <0|1> // 0 for normal login, 1 for SSO login\n
The system wide configuration path is located at HKEY_LOCAL_MACHINE/SOFTWARE/Wow6432Node/SeaDrive.
Ccnet is the internal RPC framework used by Seafile server and also manages the user database. A few useful options are in ccnet.conf.
ccnet.conf is removed in version 12.0
"},{"location":"config/ccnet-conf/#options-that-moved-to-env-file","title":"Options that moved to .env file","text":"
Since ccnet.conf was removed in version 12.0, the following information is read from the .env file
SEAFILE_MYSQL_DB_USER: The database user, the default is seafile\nSEAFILE_MYSQL_DB_PASSWORD: The database password\nSEAFILE_MYSQL_DB_HOST: The database host\nSEAFILE_MYSQL_DB_CCNET_DB_NAME: The database name for ccnet db, the default is ccnet_db\n
"},{"location":"config/ccnet-conf/#changing-mysql-connection-pool-size","title":"Changing MySQL Connection Pool Size","text":"
In version 12.0, the following information is read from the same option in seafile.conf
When you configure ccnet to use MySQL, the default connection pool size is 100, which should be enough for most use cases. You can change this value by adding the following options to ccnet.conf:
[Database]\n......\n# Use larger connection pool\nMAX_CONNECTIONS = 200\n
When use_ssl is set to true and skip_verify to false, the MySQL server certificate is verified against the CA configured in ca_path. ca_path is the path to a trusted CA certificate used to sign MySQL server certificates. When skip_verify is true, there is no need to add the ca_path option, and the MySQL server certificate won't be verified.
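A sketch of these options in the [Database] section (option names as described above; host and CA path are placeholders for your environment):

```ini
[Database]
# Verify the MySQL server certificate against the CA below.
use_ssl = true
skip_verify = false
ca_path = /etc/mysql/ca.pem
```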
"},{"location":"config/config_seafile_with_ADFS/","title":"config seafile with ADFS","text":""},{"location":"config/config_seafile_with_ADFS/#requirements","title":"Requirements","text":"
To use ADFS to log in to your Seafile, you need the following components:
A Windows Server with ADFS installed. For configuring and installing ADFS, you can see this article.
A valid SSL certificate for ADFS server, and here we use adfs-server.adfs.com as the domain name example.
A valid SSL certificate for Seafile server, and here we use demo.seafile.com as the domain name example.
These x.509 certs are used to sign and encrypt elements like NameID and Metadata for SAML. \n\n Then copy these two files to **<seafile-install-path>/seahub-data/certs**. (if the certs folder not exists, create it.)\n\n2. x.509 cert from IdP (Identity Provider)\n\n 1. Log into the ADFS server and open the ADFS management.\n\n 1. Double click **Service** and choose **Certificates**.\n\n 1. Export the **Token-Signing** certificate:\n\n 1. Right-click the certificate and select **View Certificate**.\n 1. Select the **Details** tab.\n 1. Click **Copy to File** (select **DER encoded binary X.509**).\n\n 1. Convert this certificate to PEM format, rename it to **idp.crt**\n\n 1. Then copy it to **<seafile-install-path>/seahub-data/certs**.\n\n### Prepare IdP Metadata File\n\n1. Open https://adfs-server.adfs.com/federationmetadata/2007-06/federationmetadata.xml\n\n1. Save this xml file, rename it to **idp_federation_metadata.xml**\n\n1. Copy it to **<seafile-install-path>/seahub-data/certs**.\n\n### Install Requirements on Seafile Server\n\n- For Ubuntu 16.04\n
### Config Seafile\n\nAdd the following lines to **seahub_settings.py**\n
from os import path import saml2 import saml2.saml"},{"location":"config/config_seafile_with_ADFS/#update-following-lines-according-to-your-situation","title":"update following lines according to your situation","text":"
CERTS_DIR = '/seahub-data/certs' SP_SERVICE_URL = 'https://demo.seafile.com' XMLSEC_BINARY = '/usr/local/bin/xmlsec1' ATTRIBUTE_MAP_DIR = '/seafile-server-latest/seahub-extra/seahub_extra/adfs_auth/attribute-maps' SAML_ATTRIBUTE_MAPPING = { 'DisplayName': ('display_name', ), 'ContactEmail': ('contact_email', ), 'Deparment': ('department', ), 'Telephone': ('telephone', ), }"},{"location":"config/config_seafile_with_ADFS/#update-the-idp-section-in-sampl_config-according-to-your-situation-and-leave-others-as-default","title":"update the 'idp' section in SAMPL_CONFIG according to your situation, and leave others as default","text":"
ENABLE_ADFS_LOGIN = True EXTRA_AUTHENTICATION_BACKENDS = ( 'seahub_extra.adfs_auth.backends.Saml2Backend', ) SAML_USE_NAME_ID_AS_USERNAME = True LOGIN_REDIRECT_URL = '/saml2/complete/' SAML_CONFIG = { # full path to the xmlsec1 binary programm 'xmlsec_binary': XMLSEC_BINARY,
'allow_unknown_attributes': True,\n\n# your entity id, usually your subdomain plus the url to the metadata view\n'entityid': SP_SERVICE_URL + '/saml2/metadata/',\n\n# directory with attribute mapping\n'attribute_map_dir': ATTRIBUTE_MAP_DIR,\n\n# this block states what services we provide\n'service': {\n # we are just a lonely SP\n 'sp' : {\n \"allow_unsolicited\": True,\n 'name': 'Federated Seafile Service',\n 'name_id_format': saml2.saml.NAMEID_FORMAT_EMAILADDRESS,\n 'endpoints': {\n # url and binding to the assetion consumer service view\n # do not change the binding or service name\n 'assertion_consumer_service': [\n (SP_SERVICE_URL + '/saml2/acs/',\n saml2.BINDING_HTTP_POST),\n ],\n # url and binding to the single logout service view\n # do not change the binding or service name\n 'single_logout_service': [\n (SP_SERVICE_URL + '/saml2/ls/',\n saml2.BINDING_HTTP_REDIRECT),\n (SP_SERVICE_URL + '/saml2/ls/post',\n saml2.BINDING_HTTP_POST),\n ],\n },\n\n # attributes that this project need to identify a user\n 'required_attributes': [\"uid\"],\n\n # attributes that may be useful to have but not required\n 'optional_attributes': ['eduPersonAffiliation', ],\n\n # in this section the list of IdPs we talk to are defined\n 'idp': {\n # we do not need a WAYF service since there is\n # only an IdP defined here. 
This IdP should be\n # present in our metadata\n\n # the keys of this dictionary are entity ids\n 'https://adfs-server.adfs.com/federationmetadata/2007-06/federationmetadata.xml': {\n 'single_sign_on_service': {\n saml2.BINDING_HTTP_REDIRECT: 'https://adfs-server.adfs.com/adfs/ls/idpinitiatedsignon.aspx',\n },\n 'single_logout_service': {\n saml2.BINDING_HTTP_REDIRECT: 'https://adfs-server.adfs.com/adfs/ls/?wa=wsignout1.0',\n },\n },\n },\n },\n},\n\n# where the remote metadata is stored\n'metadata': {\n 'local': [path.join(CERTS_DIR, 'idp_federation_metadata.xml')],\n},\n\n# set to 1 to output debugging information\n'debug': 1,\n\n# Signing\n'key_file': '', \n'cert_file': path.join(CERTS_DIR, 'idp.crt'), # from IdP\n\n# Encryption\n'encryption_keypairs': [{\n 'key_file': path.join(CERTS_DIR, 'sp.key'), # private part\n 'cert_file': path.join(CERTS_DIR, 'sp.crt'), # public part\n}],\n\n'valid_for': 24, # how long is our metadata valid\n
Relying Party Trust is the connection between Seafile and ADFS.
Log into the ADFS server and open the ADFS management.
Double click Trust Relationships, then right click Relying Party Trusts, select Add Relying Party Trust\u2026.
Select Import data about the relying party published online or on a local network, and input https://demo.seafile.com/saml2/metadata/ in the Federation metadata address field.
Then Next until Finish.
Add Relying Party Claim Rules
Relying Party Claim Rules is used for attribute communication between Seafile and users in Windows Domain.
Important: Users in the Windows domain must have the E-mail value set.
Right-click on the relying party trust and select Edit Claim Rules...
On the Issuance Transform Rules tab select Add Rules...
Select Send LDAP Attribute as Claims as the claim rule template to use.
Give the claim a name such as LDAP Attributes.
Set the Attribute Store to Active Directory, the LDAP Attribute to E-Mail-Addresses, and the Outgoing Claim Type to E-mail Address.
Select Finish.
Click Add Rule... again.
Select Transform an Incoming Claim.
Give it a name such as Email to Name ID.
Incoming claim type should be E-mail Address (it must match the Outgoing Claim Type in rule #1).
The Outgoing claim type is Name ID (this is required in Seafile settings policy 'name_id_format': saml2.saml.NAMEID_FORMAT_EMAILADDRESS).
Note: You can copy password_reset_email.html to seahub-data/custom/templates/registration/password_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
"},{"location":"config/customize_email_notifications/#system-admin-add-new-member","title":"System admin add new member","text":"
Note: You can copy user_add_email.html to seahub-data/custom/templates/sysadmin/user_add_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
"},{"location":"config/customize_email_notifications/#system-admin-reset-user-password","title":"System admin reset user password","text":"
Note: You can copy user_reset_email.html to seahub-data/custom/templates/sysadmin/user_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
"},{"location":"config/details_about_file_search/","title":"Details about File Search","text":""},{"location":"config/details_about_file_search/#search-options","title":"Search Options","text":"
The following options can be set in seafevents.conf to control the behaviors of file search. You need to restart seafile and seahub to make them take effect.
[INDEX FILES]\n## must be \"true\" to enable search\nenabled = true\n\n## The interval the search index is updated. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval=10m\n\n## this is for improving the search speed\nhighlight = fvh \n\n## If true, indexes the contents of office/pdf files while updating search index\n## Note: If you change this option from \"false\" to \"true\", then you need to clear the search index and update the index again.\nindex_office_pdf=false\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch through username and password, you need to configure username and password for the Elasticsearch server\nusername = elastic # username to connect to Elasticsearch\npassword = elastic_password # password to connect to Elasticsearch\n\n## From 9.0.7 pro, Seafile supports connecting to elasticsearch via HTTPS, you need to configure HTTPS for the Elasticsearch server\nscheme = https # The default is http. If the Elasticsearch server is not configured with HTTPS, the scheme and cafile do not need to be configured\ncafile = path/to/cert.pem # The certificate path for user authentication. If the Elasticsearch server does not enable certificate authentication, do not need to be configured\n\n## From version 11.0.5 Pro, you can custom ElasticSearch index names for distinct instances when intergrating multiple Seafile servers to a single ElasticSearch Server.\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\n
"},{"location":"config/details_about_file_search/#enable-full-text-search-for-officepdf-files","title":"Enable full text search for Office/PDF files","text":"
Full text search is not enabled by default to save system resources. If you want to enable it, you need to follow the instructions below.
"},{"location":"config/details_about_file_search/#modify-seafeventsconf","title":"Modify seafevents.conf","text":"Deploy in DockerDeploy from binary packages
cd /opt/seafile-data/seafile/conf\nnano seafevents.conf\n
"},{"location":"config/details_about_file_search/#restart-seafile-server","title":"Restart Seafile server","text":"Deploy in DockerDeploy from binary packages
docker exec -it seafile bash\ncd /scripts\n./seafile.sh restart\n\n# delete the existing search index and recreate it\n./pro/pro.py search --clear\n./pro/pro.py search --update\n
cd /opt/seafile/seafile-server-latest\n./seafile.sh restart\n\n# delete the existing search index and recreate it\n./pro/pro.py search --clear\n./pro/pro.py search --update\n
"},{"location":"config/details_about_file_search/#common-problems","title":"Common problems","text":""},{"location":"config/details_about_file_search/#how-to-rebuild-the-index-if-something-went-wrong","title":"How to rebuild the index if something went wrong","text":"
cd /opt/seafile/seafile-server-latest\n./pro/pro.py search --clear\n./pro/pro.py search --update\n
Tip
If this does not work, you can try the following steps:
Stop Seafile
Remove the old search index rm -rf pro-data/search
Restart Seafile
Wait one minute then run ./pro/pro.py search --update
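Put together, the recovery steps above might look like this. The paths assume a binary-package install under /opt/seafile; adjust accordingly for a Docker deployment:

```shell
cd /opt/seafile/seafile-server-latest
./seafile.sh stop
rm -rf /opt/seafile/pro-data/search   # remove the old search index
./seafile.sh start
sleep 60                              # wait one minute for startup
./pro/pro.py search --update          # rebuild the index
```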
"},{"location":"config/details_about_file_search/#access-the-aws-elasticsearch-service-using-https","title":"Access the AWS elasticsearch service using HTTPS","text":"
Create an elasticsearch service on AWS according to the documentation.
Configure the seafevents.conf:
[INDEX FILES]\nenabled = true\ninterval = 10m\nindex_office_pdf=true\nes_host = your domain endpoint(for example, https://search-my-domain.us-east-1.es.amazonaws.com)\nes_port = 443\nscheme = https\nusername = master user\npassword = password\nhighlight = fvh\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\n
Note
The version of the Python third-party package elasticsearch cannot be greater than 7.14.0, otherwise the elasticsearch service cannot be accessed: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/samplecode.html#client-compatibility, https://github.com/elastic/elasticsearch-py/pull/1623.
"},{"location":"config/details_about_file_search/#i-get-no-result-when-i-search-a-keyword","title":"I get no result when I search a keyword","text":"
The search index is updated every 10 minutes by default. So before the first index update is performed, you get nothing no matter what you search.
COMPOSE_FILE: .yml files for the components of Seafile-docker; each .yml must be separated by the symbol defined in COMPOSE_PATH_SEPARATOR. The core components are defined in seafile-server.yml and caddy.yml, which must be included in this variable.
COMPOSE_PATH_SEPARATOR: The symbol used to separate the .yml files in term COMPOSE_FILE, default is ','.
CACHE_PROVIDER: The type of cache server used for Seafile. The available options are redis and memcached. Since Seafile 13, it is recommended to use redis as the cache service to support new features, and memcached will no longer be integrated into Seafile Docker by default. Default is redis
SEAF_SERVER_STORAGE_TYPE: The storage type for Seafile data. Available options are disk (i.e., local disk), s3 and multiple (see the details of multiple storage backends)
S3_SS_BUCKET: S3 storage bucket for SeaSearch data (valid when service enabled)
S3_MD_BUCKET: S3 storage bucket for metadata-sever data (valid when service available)
S3_KEY_ID: S3 storage backend key ID
S3_SECRET_KEY: S3 storage backend secret key
S3_USE_V4_SIGNATURE: Use the v4 protocol of S3 if enabled, default is true
S3_AWS_REGION: Region of your buckets (AWS only), default is us-east-1.
S3_HOST: Host of your buckets (required when not using AWS).
S3_USE_HTTPS: Use HTTPS connections to S3 if enabled, default is true
S3_PATH_STYLE_REQUEST: This option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is in virtual host style, such as https://bucketname.s3.amazonaws.com/object. But this style relies on advanced DNS server setup. So most self-hosted storage systems only implement the path style format. Default false.
S3_SSE_C_KEY: A string of 32 characters can be generated by openssl rand -base64 24. It can be any 32-character long random string. It's required to use V4 authentication protocol and https if you enable SSE-C.
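A sketch of the corresponding .env fragment for a single S3 backend (option names as listed above; credentials and endpoint are placeholders):

```ini
# Storage configurations for S3 -- placeholder values.
SEAF_SERVER_STORAGE_TYPE=s3
S3_KEY_ID=your-key-id
S3_SECRET_KEY=your-secret-key
S3_HOST=s3.example.com:8080
S3_USE_HTTPS=true
S3_USE_V4_SIGNATURE=true
S3_PATH_STYLE_REQUEST=true
```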
Easier to configure S3 for Seafile and its components
Since Seafile Pro 13.0, to make it easier to deploy Seafile's extension components and other services in the future, a section is provided in .env to store the S3 configurations for Seafile and some extension components (such as SeaSearch, Metadata server). You can locate it under the title Storage configurations for S3.
S3 configurations in .env only support single S3 storage backend mode
The Seafile server only supports configuring S3 in .env for the single S3 storage backend mode (i.e., when SEAF_SERVER_STORAGE_TYPE=s3). If you would like to use another storage backend (e.g., Ceph, Swift) or other settings that can only be set in seafile.conf (like multiple storage backends), please set SEAF_SERVER_STORAGE_TYPE to multiple, and set MD_STORAGE_TYPE and SS_STORAGE_TYPE according to your configuration.
The S3 configurations are only valid when at least one STORAGE_TYPE is set to s3
There are three STORAGE_TYPE variables provided in .env: SEAF_SERVER_STORAGE_TYPE (pro & cluster), MD_STORAGE_TYPE (pro, see the Metadata server section for details) and SS_STORAGE_TYPE (pro, see the SeaSearch section for details).
You have to specify at least one of them as s3 for the above configuration to take effect.
NOTIFICATION_SERVER_URL: The notification server url, leave blank to disable it (default).
In Seafile cluster or standalone-deployment notification server
In addition to NOTIFICATION_SERVER_URL, you also need to specify INNER_NOTIFICATION_SERVER_URL=$NOTIFICATION_SERVER_URL, which will be used for the connection between Seafile server and notification server.
CLUSTER_INIT_MODE: (pro edition only, used at first deployment). Cluster initialization mode, in which the necessary configuration files for the service are generated (but the service is not started). If the configuration files already exist, no operation is performed. The default value is true. Once the configuration files have been generated, be sure to set this item to false.
CLUSTER_INIT_ES_HOST: (pro edition only, used at first deployment). Your cluster Elasticsearch server host.
CLUSTER_INIT_ES_PORT: (pro edition only, used at first deployment). Your cluster Elasticsearch server port. Default is 9200.
CLUSTER_MODE: Seafile service node type, i.e., frontend (default) or backend.
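Putting the cluster options above together, a first deployment of a pro cluster node might look like this in .env (the Elasticsearch host is a placeholder):

```shell
# First deployment: generate config files only, then flip to false
CLUSTER_INIT_MODE=true
CLUSTER_INIT_ES_HOST=192.168.0.10
CLUSTER_INIT_ES_PORT=9200
CLUSTER_MODE=frontend   # or "backend"
```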
"},{"location":"config/ldap_in_ce/","title":"Configure Seafile to use LDAP","text":"
This documentation is for the Community Edition. If you're using Pro Edition, please refer to the Seafile Pro documentation
"},{"location":"config/ldap_in_ce/#how-does-ldap-user-management-work-in-seafile","title":"How does LDAP User Management work in Seafile","text":"
When Seafile is integrated with LDAP, users in the system can be divided into two tiers:
Users within Seafile's internal user database. Some attributes are attached to these users, such as whether the user is a system admin and whether the account is activated.
Users in LDAP server. These are all the intended users of Seafile inside the LDAP server. Seafile doesn't manipulate these users directly. It has to import them into its internal database before setting attributes on them.
When Seafile counts the number of users in the system, it only counts the activated users in its internal database.
The only requirement for Seafile to use LDAP for authentication is that there must be a unique identifier for each user in the LDAP server. This ID should also be user-friendly, as users will use it as their username when logging in. Below are some usual options for this unique identifier:
Email address: this is the most common choice. Most organizations assign a unique email address to each member.
UserPrincipalName: this is a user attribute only available in Active Directory. Its format is user-login-name@domain-name, e.g. john@example.com. It's not a real email address, but it works fine as the unique identifier.
The identifier is stored in the table social_auth_usersocialauth to map it to the internal user ID in Seafile. When this ID is changed in LDAP for a user, you only need to update the social_auth_usersocialauth table.
variable description LDAP_SERVER_URL The URL of LDAP server LDAP_BASE_DN The root node of users who can log in to Seafile in the LDAP server LDAP_ADMIN_DN DN of the administrator used to query the LDAP server for information. For OpenLDAP, it may be cn=admin,dc=example,dc=comLDAP_ADMIN_PASSWORD Password of LDAP_ADMIN_DNLDAP_PROVIDER Identify the source of the user, used in the table social_auth_usersocialauth, defaults to 'ldap' LDAP_LOGIN_ATTR User's attribute used to log in to Seafile. It should be a unique identifier for the user in LDAP server. Learn more about this id from the descriptions at the beginning of this section. LDAP_CONTACT_EMAIL_ATTR LDAP user's contact_email attribute LDAP_USER_ROLE_ATTR LDAP user's role attribute LDAP_USER_FIRST_NAME_ATTR Attribute for user's first name. It's \"givenName\" by default. LDAP_USER_LAST_NAME_ATTR Attribute for user's last name. It's \"sn\" by default. LDAP_USER_NAME_REVERSE In some languages, such as Chinese, the display order of the first and last name is reversed. Set this option if you need it. LDAP_FILTER Additional filter conditions. Users who meet the filter conditions can log in, otherwise they cannot log in.
Tips for choosing LDAP_BASE_DN and LDAP_ADMIN_DN:
To determine the LDAP_BASE_DN, you first have to navigate your organization hierarchy on the domain controller GUI.
If you want to allow all users to use Seafile, you can use cn=users,dc=yourdomain,dc=com as LDAP_BASE_DN (with proper adjustment for your own needs).
If you want to limit users to a certain OU (Organization Unit), you can run the dsquery command on the domain controller to find out the DN for this OU. For example, if the OU is staff, you can run dsquery ou -name staff. More information can be found here.
AD supports the user@domain.name format for the LDAP_ADMIN_DN option. For example, you can use administrator@example.com for LDAP_ADMIN_DN. Sometimes the domain controller doesn't recognize this format. You can still use the dsquery command to find out a user's DN. For example, if the user name is 'seafileuser', run dsquery user -name seafileuser. More information here.
Multiple base DN is useful when your company has more than one OUs to use Seafile. You can specify a list of base DN in the LDAP_BASE_DN option. The DNs are separated by \";\", e.g.
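For example (the OU names are placeholders):

```python
LDAP_BASE_DN = 'ou=developers,dc=example,dc=com;ou=marketing,dc=example,dc=com'
```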
Search filter is very useful when you have a large organization but only a portion of people want to use Seafile. The filter can be given by setting LDAP_FILTER option. The value of this option follows standard LDAP search filter syntax (https://msdn.microsoft.com/en-us/library/aa746475(v=vs.85).aspx).
The final filter used for searching for users is (&($LOGIN_ATTR=*)($LDAP_FILTER)). $LOGIN_ATTR and $LDAP_FILTER will be replaced by your option values.
For example, add the following option to seahub_settings.py:
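For instance, to allow only members of a specific AD group (the group DN is a placeholder):

```python
LDAP_FILTER = 'memberOf=CN=group,CN=developers,DC=example,DC=com'
```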
The final search filter would be (&(mail=*)(memberOf=CN=group,CN=developers,DC=example,DC=com))
Note that the case of attribute names in the above example is significant. The memberOf attribute is only available in Active Directory.
"},{"location":"config/ldap_in_ce/#limiting-seafile-users-to-a-group-in-active-directory","title":"Limiting Seafile Users to a Group in Active Directory","text":"
You can use the LDAP_FILTER option to limit user scope to a certain AD group.
First, you should find out the DN for the group. Again, we'll use the dsquery command on the domain controller. For example, if group name is 'seafilegroup', run dsquery group -name seafilegroup.
Add below option to seahub_settings.py:
LDAP_FILTER = 'memberOf={output of dsquery command}'\n
"},{"location":"config/ldap_in_ce/#using-tls-connection-to-ldap-server","title":"Using TLS connection to LDAP server","text":"
If your LDAP service supports TLS connections, you can configure LDAP_SERVER_URL as the access address of the ldaps protocol to use TLS to connect to the LDAP service, for example:
LDAP_SERVER_URL = 'ldaps://192.168.0.1:636/'\n
"},{"location":"config/ldap_in_pro/","title":"Configure Seafile Pro Edition to use LDAP","text":""},{"location":"config/ldap_in_pro/#how-does-ldap-user-management-work-in-seafile","title":"How does LDAP User Management work in Seafile","text":"
When Seafile is integrated with LDAP, users in the system can be divided into two tiers:
Users within Seafile's internal user database. Some attributes are attached to these users, such as whether the user is a system admin and whether the account is activated.
Users in LDAP server. These are all the intended users of Seafile inside the LDAP server. Seafile doesn't manipulate these users directly. It has to import them into its internal database before setting attributes on them.
When Seafile counts the number of users in the system, it only counts the activated users in its internal database.
The only requirement for Seafile to use LDAP for authentication is that there must be a unique identifier for each user in the LDAP server. This ID should also be user-friendly, as users will use it as their username when logging in. Below are some usual options for this unique identifier:
Email address: this is the most common choice. Most organizations assign a unique email address to each member.
UserPrincipalName: this is a user attribute only available in Active Directory. Its format is user-login-name@domain-name, e.g. john@example.com. It's not a real email address, but it works fine as the unique identifier.
The identifier is stored in the table social_auth_usersocialauth to map it to the internal user ID in Seafile. When this ID is changed in LDAP for a user, you only need to update the social_auth_usersocialauth table.
variable description LDAP_SERVER_URL The URL of LDAP server LDAP_BASE_DN The root node of users who can log in to Seafile in the LDAP server LDAP_ADMIN_DN DN of the administrator used to query the LDAP server for information. For OpenLDAP, it may be cn=admin,dc=example,dc=comLDAP_ADMIN_PASSWORD Password of LDAP_ADMIN_DNLDAP_PROVIDER Identify the source of the user, used in the table social_auth_usersocialauth, defaults to 'ldap' LDAP_LOGIN_ATTR User's attribute used to log in to Seafile. It should be a unique identifier for the user in LDAP server. Learn more about this id from the descriptions at the beginning of this section. LDAP_CONTACT_EMAIL_ATTR LDAP user's contact_email attribute LDAP_USER_ROLE_ATTR LDAP user's role attribute LDAP_USER_FIRST_NAME_ATTR Attribute for user's first name. It's \"givenName\" by default. LDAP_USER_LAST_NAME_ATTR Attribute for user's last name. It's \"sn\" by default. LDAP_USER_NAME_REVERSE In some languages, such as Chinese, the display order of the first and last name is reversed. Set this option if you need it. LDAP_FILTER Additional filter conditions. Users who meet the filter conditions can log in, otherwise they cannot log in.
Tips for choosing LDAP_BASE_DN and LDAP_ADMIN_DN:
To determine the LDAP_BASE_DN, you first have to navigate your organization hierarchy on the domain controller GUI.
If you want to allow all users to use Seafile, you can use cn=users,dc=yourdomain,dc=com as LDAP_BASE_DN (with proper adjustment for your own needs).
If you want to limit users to a certain OU (Organization Unit), you can run the dsquery command on the domain controller to find out the DN for this OU. For example, if the OU is staff, you can run dsquery ou -name staff. More information can be found here.
AD supports the user@domain.name format for the LDAP_ADMIN_DN option. For example, you can use administrator@example.com for LDAP_ADMIN_DN. Sometimes the domain controller doesn't recognize this format. You can still use the dsquery command to find out a user's DN. For example, if the user name is 'seafileuser', run dsquery user -name seafileuser. More information here.
"},{"location":"config/ldap_in_pro/#setting-up-ldap-user-sync-optional","title":"Setting Up LDAP User Sync (optional)","text":"
In Seafile Pro, besides importing users into the internal database when they log in, you can also configure Seafile to periodically sync user information from the LDAP server into the internal database.
A user's full name, department and contact email address can be synced to the internal database, making it easier to search for a specific user. A user's Windows or Unix login id can also be synced, allowing the user to log in with his/her familiar login id. When a user is removed from LDAP, the corresponding user in Seafile will be deactivated; otherwise, he/she could still sync files with the Seafile client or access the web interface. After synchronization is complete, you can see the user's full name, department and contact email on his/her profile page.
Variable Description LDAP_SYNC_INTERVAL The interval to sync. Unit is minutes. Defaults to 60 minutes. ENABLE_LDAP_USER_SYNC set to \"true\" if you want to enable ldap user synchronization LDAP_USER_OBJECT_CLASS This is the name of the class used to search for user objects. In Active Directory, it's usually \"person\". The default value is \"person\". LDAP_DEPT_ATTR Attribute for department info. LDAP_UID_ATTR Attribute for Windows login name. If this is synchronized, users can also log in with their Windows login name. In AD, the attribute sAMAccountName can be used as UID_ATTR. The attribute will be stored as login_id in Seafile (in seahub_db.profile_profile table). LDAP_AUTO_REACTIVATE_USERS Whether to auto activate deactivated user, default by 'true' LDAP_USE_PAGED_RESULT Whether to use pagination extension. It is useful when you have more than 1000 users in LDAP server. IMPORT_NEW_USER Whether to import new users when sync user. ACTIVE_USER_WHEN_IMPORT Whether to activate the user automatically when imported. DEACTIVE_USER_IF_NOTFOUND set to \"true\" if you want to deactivate a user when he/she was deleted in AD server. ENABLE_EXTRA_USER_INFO_SYNC Enable synchronization of additional user information, including user's full name, department, and Windows login name, etc."},{"location":"config/ldap_in_pro/#importing-users-without-activating-them","title":"Importing Users without Activating Them","text":"
The users imported with the above configuration will be activated by default. Some organizations with a large number of users may want to import user information (such as full names) without activating the imported users, because activating all imported users would require licenses for all users in LDAP, which may not be affordable.
Seafile provides a combination of options for such a use case. You can set the option below in seahub_settings.py:
ACTIVATE_USER_WHEN_IMPORT = False\n
This prevents Seafile from activating imported users. Then, add the option below to seahub_settings.py:
ACTIVATE_AFTER_FIRST_LOGIN = True\n
This option will automatically activate users when they login to Seafile for the first time.
When you set the DEACTIVE_USER_IF_NOTFOUND option, a user will be deactivated when he/she is not found in the LDAP server. By default, even after the user reappears in the LDAP server, he/she won't be reactivated automatically. This prevents auto-reactivating a user that was manually deactivated by the system admin.
However, sometimes it's desirable to auto-reactivate such users. You can set the option below in seahub_settings.py:
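Using the LDAP_AUTO_REACTIVATE_USERS option from the user sync options table above:

```python
LDAP_AUTO_REACTIVATE_USERS = True
```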
"},{"location":"config/ldap_in_pro/#setting-up-ldap-group-sync-optional","title":"Setting Up LDAP Group Sync (optional)","text":""},{"location":"config/ldap_in_pro/#how-it-works","title":"How It Works","text":"
The importing or syncing process maps groups from LDAP directory server to groups in Seafile's internal database. This process is one-way.
Any changes to groups in the database won't propagate back to LDAP;
Any changes to groups in the database, except for "setting a member as group admin", will be overwritten by the next LDAP sync operation. If you want to add or delete members, you can only do that on the LDAP server.
The creator of imported groups will be set to the system admin.
There are two modes of operation:
Periodical: the syncing process is executed at a fixed interval
Manual: there is a script you can run to trigger the syncing once
Before enabling LDAP group sync, you should have configured LDAP authentication. See Basic LDAP Integration for details.
The following are LDAP group sync related options:
# ldap group sync options.\nENABLE_LDAP_GROUP_SYNC = True # Whether to enable group sync\nLDAP_GROUP_OBJECT_CLASS = 'group' # This is the name of the class used to search for group objects.\nLDAP_GROUP_MEMBER_ATTR = 'member' # The attribute field to use when loading the group's members. \n # For most directory servers, the attributes is \"member\" \n # which is the default value.For \"posixGroup\", it should be set to \"memberUid\".\nLDAP_USER_ATTR_IN_MEMBERUID = 'uid' # The user attribute set in 'memberUid' option, \n # which is used in \"posixGroup\".The default value is \"uid\".\nLDAP_GROUP_UUID_ATTR = 'objectGUID' # Used to uniquely identify groups in LDAP\nLDAP_GROUP_FILTER = '' # An additional filter to use when searching group objects.\n # If it's set, the final filter used to run search is \"(&(objectClass=GROUP_OBJECT_CLASS)(GROUP_FILTER))\";\n # otherwise the final filter would be \"(objectClass=GROUP_OBJECT_CLASS)\".\nLDAP_USE_GROUP_MEMBER_RANGE_QUERY = False # When a group contains too many members, \n # AD will only return part of them. Set this option to TRUE\n # to make LDAP sync work with large groups.\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\", sync process will delete the group if not found it in LDAP server.\nLDAP_SYNC_GROUP_AS_DEPARTMENT = False # Whether to sync groups as top-level departments in Seafile.\n # Learn more about departments in Seafile [here](https://help.seafile.com/sharing_collaboration/departments/).\nLDAP_DEPT_NAME_ATTR = '' # Used to get the department name.\n
Meaning of some options:
variable description ENABLE_LDAP_GROUP_SYNC Whether to enable group sync. LDAP_GROUP_OBJECT_CLASS This is the name of the class used to search for group objects. LDAP_GROUP_MEMBER_ATTR The attribute field to use when loading the group's members. For most directory servers, the attribute is \"member\" which is the default value. For \"posixGroup\", it should be set to \"memberUid\". LDAP_USER_ATTR_IN_MEMBERUID The user attribute set in 'memberUid' option, which is used in \"posixGroup\". The default value is \"uid\". LDAP_GROUP_UUID_ATTR Used to uniquely identify groups in LDAP. LDAP_GROUP_FILTER An additional filter to use when searching group objects. If it's set, the final filter used to run search is (&(objectClass=GROUP_OBJECT_CLASS)(GROUP_FILTER)); otherwise the final filter would be (objectClass=GROUP_OBJECT_CLASS). LDAP_USER_GROUP_MEMBER_RANGE_QUERY When a group contains too many members, AD will only return part of them. Set this option to TRUE to make LDAP sync work with large groups. DEL_GROUP_IF_NOT_FOUND Set to \"true\", sync process will delete the group if not found in the LDAP server. LDAP_SYNC_GROUP_AS_DEPARTMENT Whether to sync groups as top-level departments in Seafile. Learn more about departments in Seafile here. LDAP_DEPT_NAME_ATTR Used to get the department name.
Tip
The search base for groups is the option LDAP_BASE_DN.
Some LDAP servers, such as Active Directory, allow a group to be a member of another group. This is called "group nesting". If we find a nested group B in group A, we recursively add all the members of group B into group A. Group B is still imported as a separate group; that is, all members of group B are also members of group A.
In some LDAP servers, such as OpenLDAP, it's common practice to use posix groups to store group membership. To import posix groups as Seafile groups, set the LDAP_GROUP_OBJECT_CLASS option to posixGroup. A posixGroup object in LDAP usually contains a multi-value attribute for the list of member UIDs; the name of this attribute can be set with the LDAP_GROUP_MEMBER_ATTR option (memberUid for posixGroup). The value of the memberUid attribute is an ID that can be used to identify a user and corresponds to an attribute in the user object. The name of this ID attribute is usually uid, but it can be set via the LDAP_USER_ATTR_IN_MEMBERUID option. Note that posixGroup doesn't support nested groups.
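The posixGroup setup described above can be sketched in seahub_settings.py like this (all option names are taken from the group sync options listed earlier):

```python
# Importing OpenLDAP posixGroup objects as Seafile groups
ENABLE_LDAP_GROUP_SYNC = True
LDAP_GROUP_OBJECT_CLASS = 'posixGroup'
LDAP_GROUP_MEMBER_ATTR = 'memberUid'   # posixGroup stores member UIDs here
LDAP_USER_ATTR_IN_MEMBERUID = 'uid'    # user attribute the memberUid values refer to
```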
"},{"location":"config/ldap_in_pro/#sync-ou-as-departments","title":"Sync OU as Departments","text":"
A department in Seafile is a special group. In addition to what you can do with a group, there are two key new features for departments:
Department supports hierarchy. A department can have any levels of sub-departments.
Department can have storage quota.
Seafile supports syncing OU (Organizational Units) from AD/LDAP to departments. The sync process keeps the hierarchical structure of the OUs.
Options for syncing departments from OU:
LDAP_SYNC_DEPARTMENT_FROM_OU = True # Whether to enable syncing departments from OU.\nLDAP_DEPT_NAME_ATTR = 'description' # Used to get the department name.\nLDAP_CREATE_DEPARTMENT_LIBRARY = False # If you decide to sync the group as a department,\n # you can set this option to \"true\". In this way, when \n # the group is synchronized for the first time, a library\n # is automatically created for the department, and the \n # library's name is the department's name.\nLDAP_DEPT_REPO_PERM = 'rw' # Set the permissions of the department repo, default permission is 'rw'.\nLDAP_DEFAULT_DEPARTMENT_QUOTA = -2 # You can set a default space quota for each department\n # when you synchronize a group for the first time. The \n # quota is set to unlimited if this option is not set.\n # Unit is MB.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\", sync process will delete the department if it is not found in the LDAP server.\n
"},{"location":"config/ldap_in_pro/#periodical-and-manual-sync","title":"Periodical and Manual Sync","text":"
Periodical sync doesn't happen immediately after you restart the Seafile server; it is scheduled after the first sync interval. For example, if you set the sync interval to 30 minutes, the first auto sync happens 30 minutes after you restart. To sync immediately, you need to trigger it manually.
After the sync is run, you should see log messages like the following in logs/seafevents.log. And you should be able to see the groups in system admin page.
[2023-03-30 18:15:05,109] [DEBUG] create group 1, and add dn pair CN=DnsUpdateProxy,CN=Users,DC=Seafile,DC=local<->1 success.\n[2023-03-30 18:15:05,145] [DEBUG] create group 2, and add dn pair CN=Domain Computers,CN=Users,DC=Seafile,DC=local<->2 success.\n[2023-03-30 18:15:05,154] [DEBUG] create group 3, and add dn pair CN=Domain Users,CN=Users,DC=Seafile,DC=local<->3 success.\n[2023-03-30 18:15:05,164] [DEBUG] create group 4, and add dn pair CN=Domain Admins,CN=Users,DC=Seafile,DC=local<->4 success.\n[2023-03-30 18:15:05,176] [DEBUG] create group 5, and add dn pair CN=RAS and IAS Servers,CN=Users,DC=Seafile,DC=local<->5 success.\n[2023-03-30 18:15:05,186] [DEBUG] create group 6, and add dn pair CN=Enterprise Admins,CN=Users,DC=Seafile,DC=local<->6 success.\n[2023-03-30 18:15:05,197] [DEBUG] create group 7, and add dn pair CN=dev,CN=Users,DC=Seafile,DC=local<->7 success.\n
Multiple base DN is useful when your company has more than one OUs to use Seafile. You can specify a list of base DN in the LDAP_BASE_DN option. The DNs are separated by \";\", e.g.
Search filter is very useful when you have a large organization but only a portion of people want to use Seafile. The filter can be given by setting LDAP_FILTER option. The value of this option follows standard LDAP search filter syntax (https://msdn.microsoft.com/en-us/library/aa746475(v=vs.85).aspx).
The final filter used for searching for users is (&($LOGIN_ATTR=*)($LDAP_FILTER)). $LOGIN_ATTR and $LDAP_FILTER will be replaced by your option values.
For example, add the following option to seahub_settings.py:
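For instance, to allow only members of a specific AD group (the group DN is a placeholder):

```python
LDAP_FILTER = 'memberOf=CN=group,CN=developers,DC=example,DC=com'
```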
The final search filter would be (&(mail=*)(memberOf=CN=group,CN=developers,DC=example,DC=com))
The case of attribute names in the above example is significant. The memberOf attribute is only available in Active Directory
"},{"location":"config/ldap_in_pro/#limiting-seafile-users-to-a-group-in-active-directory","title":"Limiting Seafile Users to a Group in Active Directory","text":"
You can use the LDAP_FILTER option to limit user scope to a certain AD group.
First, you should find out the DN for the group. Again, we'll use the dsquery command on the domain controller. For example, if group name is 'seafilegroup', run dsquery group -name seafilegroup.
Add below option to seahub_settings.py:
LDAP_FILTER = 'memberOf={output of dsquery command}'\n
"},{"location":"config/ldap_in_pro/#using-tls-connection-to-ldap-server","title":"Using TLS connection to LDAP server","text":"
If your LDAP service supports TLS connections, you can configure LDAP_SERVER_URL as the access address of the ldaps protocol to use TLS to connect to the LDAP service, for example:
LDAP protocol version 3 supports the "paged results" (PR) extension. When you have a large number of users, this option can greatly improve the performance of listing users. Most directory servers nowadays support this extension.
In Seafile Pro Edition, add this option to seahub_settings.py to enable PR:
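Using the LDAP_USE_PAGED_RESULT option listed in the user sync options table above:

```python
LDAP_USE_PAGED_RESULT = True
```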
Seafile Pro Edition supports automatically following referrals in LDAP search. This is useful for partitioned LDAP or AD servers, where users may be spread across multiple directory servers. For more information about referrals, you can refer to this article.
To configure, add the following option to seahub_settings.py, e.g.:
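A sketch, assuming the option name LDAP_FOLLOW_REFERRALS used by recent Pro releases (verify the exact name against your version's configuration reference):

```python
LDAP_FOLLOW_REFERRALS = True
```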
Seafile Pro Edition supports multiple LDAP servers. When getting or searching for an LDAP user, Seafile iterates over all configured LDAP servers until a match is found; when listing all LDAP users, it iterates over all LDAP servers to collect the users; and LDAP sync syncs the user/group info from all configured LDAP servers to Seafile.
Currently, only two LDAP servers are supported.
If you want to use multiple LDAP servers, replace LDAP in the option names with MULTI_LDAP_1, and then add them to seahub_settings.py, for example:
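A sketch of a second LDAP server configured this way (server address, DNs and credentials are placeholders; each option is the MULTI_LDAP_1 counterpart of an LDAP_* option described earlier):

```python
# Second LDAP server: same options as the first, prefixed MULTI_LDAP_1
MULTI_LDAP_1_SERVER_URL = 'ldap://192.168.0.2'
MULTI_LDAP_1_BASE_DN = 'ou=staff,dc=example,dc=org'
MULTI_LDAP_1_ADMIN_DN = 'cn=admin,dc=example,dc=org'
MULTI_LDAP_1_ADMIN_PASSWORD = 'secret'
MULTI_LDAP_1_PROVIDER = 'ldap1'
MULTI_LDAP_1_LOGIN_ATTR = 'mail'
```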
!!! note: There are still some shared config options that are used for all LDAP servers, as follows:
```python\n# Common user sync options\nLDAP_SYNC_INTERVAL = 60\nIMPORT_NEW_USER = True # Whether to import new users when syncing users\nACTIVATE_USER_WHEN_IMPORT = True # Whether to activate the user when importing new users\nDEACTIVE_USER_IF_NOTFOUND = False # Set to \"true\" if you want to deactivate a user \n # when he/she is deleted from the AD server.\n\n# Common group sync options\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\", sync process will delete the group if it is not found in the LDAP server.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\", sync process will delete the department if it is not found in the LDAP server.\n```\n
"},{"location":"config/ldap_in_pro/#sso-and-ldap-users-use-the-same-uid","title":"SSO and LDAP users use the same uid","text":"
If you sync users from LDAP to Seafile and want Seafile to find the existing account when a user logs in via SSO (ADFS, OAuth or Shibboleth) instead of creating a new one, you can set
SSO_LDAP_USE_SAME_UID = True\n
Here the UID means the unique user ID. In LDAP it is the attribute you use for LDAP_LOGIN_ATTR (not LDAP_UID_ATTR); in ADFS it is the uid attribute. You need to make sure you use the same attribute for the two settings.
On this basis, if you only want users to log in via SSO and not through LDAP, you can set
USE_LDAP_SYNC_ONLY = True\n
"},{"location":"config/ldap_in_pro/#importing-roles-from-ldap","title":"Importing Roles from LDAP","text":"
Seafile Pro Edition supports syncing roles from LDAP or Active Directory.
To enable this feature, add the following option to seahub_settings.py, e.g.
LDAP_USER_ROLE_ATTR = 'title'\n
LDAP_USER_ROLE_ATTR is the attribute field used to configure roles in LDAP. You can write a custom function to map the role by creating a file seahub_custom_functions.py under conf/ and editing it like:
# -*- coding: utf-8 -*-\n\n# The AD roles attribute returns a list of roles (role_list).\n# The following function uses the first entry in the list.\ndef ldap_role_mapping(role):\n if 'staff' in role:\n return 'Staff'\n if 'guest' in role:\n return 'Guest'\n if 'manager' in role:\n return 'Manager'\n\n# From version 11.0.11-pro, you can define the following function\n# to calculate a role from the role_list.\ndef ldap_role_list_mapping(role_list):\n if not role_list:\n return ''\n for role in role_list:\n if 'staff' in role:\n return 'Staff'\n if 'guest' in role:\n return 'Guest'\n if 'manager' in role:\n return 'Manager'\n
You should only define one of the two functions
You can rewrite the function (in Python) to implement your own mapping rules. If the file or function doesn't exist, the first entry in role_list will be synced.
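To illustrate how the list-based mapping behaves, here is a standalone sketch of ldap_role_list_mapping. The role strings are examples; I add an explicit '' fallback for role lists that match nothing, which the docs' version leaves implicit (it returns None).

```python
def ldap_role_list_mapping(role_list):
    """Map an LDAP role attribute list to a Seafile role name.

    Returns the role for the first recognizable entry; an empty
    string means no role is assigned.
    """
    if not role_list:
        return ''
    for role in role_list:
        if 'staff' in role:
            return 'Staff'
        if 'guest' in role:
            return 'Guest'
        if 'manager' in role:
            return 'Manager'
    return ''

# The first recognizable entry wins:
print(ldap_role_list_mapping(['senior staff', 'manager']))  # -> Staff
print(ldap_role_list_mapping([]))                           # -> empty string
```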
"},{"location":"config/multi_institutions/","title":"Multiple Organization/Institution User Management","text":"
Starting from version 5.1, you can add institutions into Seafile and assign users to institutions. Each institution can have one or more administrators. This feature eases user administration when multiple organizations (universities) share a single Seafile instance. Unlike multi-tenancy, users are not isolated: a user from one institution can share files with users from another institution.
"},{"location":"config/multi_institutions/#turn-on-the-feature","title":"Turn on the feature","text":"
In seahub_settings.py, add MULTI_INSTITUTION = True to enable multi-institution feature, and add
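A sketch of the two settings together; the dotted middleware path below is the institutions middleware shipped with seahub, but verify the exact path against your Seafile version:

```python
MULTI_INSTITUTION = True

EXTRA_MIDDLEWARE += (
    'seahub.institutions.middleware.InstitutionMiddleware',
)
```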
Please replace += with = if EXTRA_MIDDLEWARE is not defined
"},{"location":"config/multi_institutions/#add-institutions-and-institution-admins","title":"Add institutions and institution admins","text":"
After restarting Seafile, a system admin can add institutions by entering institution names in the admin panel. He/she can also click into an institution, which will list all users whose profile.institution matches the name.
"},{"location":"config/multi_institutions/#assign-users-to-institutions","title":"Assign users to institutions","text":"
If you are using Shibboleth, you can map a Shibboleth attribute into institution. For example, the following configuration maps organization attribute to institution.
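A sketch of such a mapping in seahub_settings.py, assuming the incoming Shibboleth attribute is named organization (the other entries are illustrative):

```python
SHIBBOLETH_ATTRIBUTE_MAP = {
    'givenname': (False, 'givenname'),
    'sn': (False, 'surname'),
    'mail': (False, 'contact_email'),
    'organization': (False, 'institution'),  # maps organization -> institution
}
```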
The multi-tenancy feature is designed for hosting providers that want to host several customers in a single Seafile instance. You can create multiple organizations. Organizations are isolated from each other; users can't share libraries between organizations.
CLOUD_MODE = True\nMULTI_TENANCY = True\n\nORG_MEMBER_QUOTA_ENABLED = True\n\nORG_ENABLE_ADMIN_CUSTOM_NAME = True # Default is True, meaning organization name can be customized\nORG_ENABLE_ADMIN_CUSTOM_LOGO = False # Default is False, if set to True, organization logo can be customized\n\nENABLE_MULTI_ADFS = True # Default is False, if set to True, support per organization custom ADFS/SAML2 login\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n ...\n}\n
An organization can be created via system admin in \u201cadmin panel->organization->Add organization\u201d.
Every organization has a URL prefix. This field is reserved for future use. When a user creates an organization, a URL like org1 will be automatically assigned.
After creating an organization, the first user will become the admin of that organization. The organization admin can add other users; note that the system admin can't add users.
"},{"location":"config/multi_tenancy/#adfssaml-single-sign-on-integration-in-multi-tenancy","title":"ADFS/SAML single sign-on integration in multi-tenancy","text":""},{"location":"config/multi_tenancy/#preparation-for-adfssaml","title":"Preparation for ADFS/SAML","text":"
1) Prepare SP(Seafile) certificate directory and SP certificates:
The SP certificate can be generated by the openssl command, or you can apply for one from a certificate vendor; it is up to you. For example, generate the SP certs using the following command:
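For illustration, one typical openssl invocation (file names and validity period are placeholders; adjust the subject and key size to your needs):

```shell
# Generates a self-signed SP key and certificate valid for 3650 days
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 \
    -keyout sp.key -out sp.crt
```

Place the resulting files in the certificate directory prepared above (e.g. /opt/seafile-data/seafile/seahub-data/certs).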
The days option indicates the validity period of the generated certificate, in days. The system admin needs to renew the certificate regularly.
Note
If certificates are not placed in /opt/seafile-data/seafile/seahub-data/certs, you need to add the following configuration in seahub_settings.py:
SAML_CERTS_DIR = '/path/to/certs'\n
2) Add the following configuration to seahub_settings.py and then restart Seafile:
Before using OAuth, you should first register an OAuth2 client application on your authorization server, then add some configurations to seahub_settings.py.
"},{"location":"config/oauth/#register-an-oauth2-client-application","title":"Register an OAuth2 client application","text":"
Here we use Github as an example. First, you should register an OAuth2 client application on Github; the official document from Github is very detailed.
Add the following configurations to seahub_settings.py:
ENABLE_OAUTH = True\n\n# Whether to create a new user when he/she logs in to Seafile for the first time, default `True`.\nOAUTH_CREATE_UNKNOWN_USER = True\n\n# Whether to activate a new user when he/she logs in to Seafile for the first time, default `True`.\nOAUTH_ACTIVATE_USER_AFTER_CREATION = True\n\n# Usually OAuth works through the SSL layer. If your server is not configured to allow HTTPS, some methods will raise an \"oauthlib.oauth2.rfc6749.errors.InsecureTransportError\". Set this to `True` to avoid this error.\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\n# Client id/secret generated by the authorization server when you register your client application.\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\n\n# Callback url when user authentication succeeds. Note, the redirect url you enter when you register your client application MUST be exactly the same as this value.\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\n# The following should NOT be changed if you are using Github as the OAuth provider.\nOAUTH_PROVIDER_DOMAIN = 'github.com' \nOAUTH_PROVIDER = 'github.com'\n\nOAUTH_AUTHORIZATION_URL = 'https://github.com/login/oauth/authorize'\nOAUTH_TOKEN_URL = 'https://github.com/login/oauth/access_token'\nOAUTH_USER_INFO_URL = 'https://api.github.com/user'\nOAUTH_SCOPE = [\"user\",]\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # Please keep the 'email' option unchanged to be compatible with the login of users of version 11.0 and earlier.\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n \"uid\": (True, \"uid\"), # Seafile v11.0 + \n}\n
"},{"location":"config/oauth/#more-explanations-about-the-settings","title":"More explanations about the settings","text":"
OAUTH_PROVIDER / OAUTH_PROVIDER_DOMAIN
OAUTH_PROVIDER_DOMAIN will be deprecated and can be replaced by OAUTH_PROVIDER. This variable is used in the database to identify third-party providers, either as a domain or as an easy-to-remember string shorter than 32 characters.
OAUTH_ATTRIBUTE_MAP
This variable describes which claims from the response of the user info endpoint are filled into which attributes of the new Seafile user. The format is as follows:
OAUTH_ATTRIBUTE_MAP = {\n <:Attribute in the OAuth provider>: (<:Is required or not in Seafile?>, <:Attribute in Seafile >)\n }\n
If the remote resource server, like GitHub, also uses email to identify a unique user, Seafile will use the GitHub id directly. The OAUTH_ATTRIBUTE_MAP setting for GitHub should look like this:
OAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # it is deprecated\n \"uid / id / username\": (True, \"uid\") \n\n # extra infos you want to update to Seafile\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"), \n }\n
The key part id stands for a unique identifier of the user in GitHub; it tells Seafile which attribute the remote resource server uses to identify its user. The value part True indicates whether this field is mandatory in Seafile.
Since version 11.0, Seafile uses uid as the external unique identifier of the user. It stores uid in the table social_auth_usersocialauth and maps it to the internal unique identifier used in Seafile. Different OAuth systems have different attributes, which may be id, uid, username, etc. The id/email config id: (True, email) is deprecated.
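The semantics of OAUTH_ATTRIBUTE_MAP can be illustrated with a short sketch (a simplified model, not Seafile's actual implementation; the sample user-info payload and the map_user_info helper are hypothetical):

```python
# New-style (11.0+) attribute map: provider attribute -> (required?, Seafile attribute)
OAUTH_ATTRIBUTE_MAP = {
    "id": (True, "uid"),               # required external unique identifier
    "name": (False, "name"),           # optional extra info
    "email": (False, "contact_email"),
}

def map_user_info(userinfo, attribute_map):
    """Apply an attribute map to the JSON returned by the user-info endpoint."""
    mapped = {}
    for provider_attr, (required, seafile_attr) in attribute_map.items():
        value = userinfo.get(provider_attr)
        if value is None:
            if required:
                raise ValueError("missing required attribute: %s" % provider_attr)
            continue  # optional attribute absent: simply skip it
        mapped[seafile_attr] = value
    return mapped

# Hypothetical GitHub-style user-info response
info = {"id": 12345, "name": "Alice", "email": "alice@example.com"}
mapped = map_user_info(info, OAUTH_ATTRIBUTE_MAP)
```

A login would fail at the first step if the required attribute (here id) is missing from the provider's response, while optional attributes are filled in only when present.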
If you upgrade from a version below 11.0, you need to have both fields configured, i.e., your configuration should be like:
In this way, when a user logs in, Seafile will first use the \"id -> email\" map to find the old user and then create a \"uid -> uid\" map for this user. After all users have logged in once, you can delete the configuration \"id\": (True, \"email\").
If you use a newly deployed 11.0+ Seafile instance, you don't need the \"id\": (True, \"email\") item. Your configuration should be like:
ENABLE_OAUTH = True\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\n# The following should NOT be changed if you are using Google as OAuth provider.\nOAUTH_PROVIDER_DOMAIN = 'google.com'\nOAUTH_AUTHORIZATION_URL = 'https://accounts.google.com/o/oauth2/v2/auth'\nOAUTH_TOKEN_URL = 'https://www.googleapis.com/oauth2/v4/token'\nOAUTH_USER_INFO_URL = 'https://www.googleapis.com/oauth2/v1/userinfo'\nOAUTH_SCOPE = [\n \"openid\",\n \"https://www.googleapis.com/auth/userinfo.email\",\n \"https://www.googleapis.com/auth/userinfo.profile\",\n]\nOAUTH_ATTRIBUTE_MAP = {\n \"sub\": (True, \"uid\"),\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n
Note
For GitHub, email is not the unique identifier for a user, but id is in most cases, so we use id in the settings example in our manual. As Seafile uses email to identify a unique user account for now, we combine id and OAUTH_PROVIDER_DOMAIN (github.com in this case) into an email-format string and then create this account if it does not exist.
For users of Azure Cloud, as there is no id field returned from Azure Cloud's user info endpoint, we use a special configuration for the OAUTH_ATTRIBUTE_MAP setting (others are the same as GitHub/Google). Please see this tutorial for the complete deployment process of OAuth against Azure Cloud.
Add the following configuration to seahub_settings.py.
Sharing between Seafile servers | Sharing from NextCloud to Seafile
# Enable OCM\nENABLE_OCM = True\nOCM_PROVIDER_ID = '71687320-6219-47af-82f3-32012707a5ae' # the unique id of this server\nOCM_REMOTE_SERVERS = [\n {\n \"server_name\": \"dev\",\n \"server_url\": \"https://seafile-domain-1/\", # should end with '/'\n },\n {\n \"server_name\": \"download\",\n \"server_url\": \"https://seafile-domain-2/\", # should end with '/'\n },\n]\n
# Enable OCM\nENABLE_OCM_VIA_WEBDAV = True\nOCM_PROVIDER_ID = '71687320-6219-47af-82f3-32012707a5ae' # the unique id of this server\nOCM_REMOTE_SERVERS = [\n {\n \"server_name\": \"nextcloud\",\n \"server_url\": \"https://nextcloud-domain-1/\", # should end with '/'\n }\n]\n
OCM_REMOTE_SERVERS is a list of servers that you allow your users to share libraries with.
"},{"location":"config/ocm/#usage","title":"Usage","text":""},{"location":"config/ocm/#share-library-to-other-server","title":"Share library to other server","text":"
In the library sharing dialog, switch to \"Share to other server\"; there you can share this library with users of another server with \"Read-Only\" or \"Read-Write\" permission. You can also view sharing records and cancel sharing.
"},{"location":"config/ocm/#view-be-shared-libraries","title":"View be shared libraries","text":"
You can go to the \"Shared from other servers\" page to view the libraries shared by other servers and cancel the sharing.
You can then enter the library to view, download or upload files.
"},{"location":"config/remote_user/","title":"SSO using Remote User","text":"
Starting from 7.0.0, Seafile can integrate with various single sign-on systems via a proxy server. Examples include Apache as a Shibboleth proxy, LemonLdap as a proxy to LDAP servers, or Apache as a Kerberos proxy. Seafile can retrieve user information from special request headers (HTTP_REMOTE_USER, HTTP_X_AUTH_USER, etc.) set by the proxy server.
After the proxy server (Apache/Nginx) authenticates successfully, it sets the user information in the request headers, and Seafile creates and logs in the user based on this information.
Make sure that the proxy server has a corresponding security mechanism to protect against forged request-header attacks.
Please add the following settings to conf/seahub_settings.py to enable this feature.
ENABLE_REMOTE_USER_AUTHENTICATION = True\n\n# Optional, HTTP header, which is configured in your web server conf file,\n# used for Seafile to get user's unique id, default value is 'HTTP_REMOTE_USER'.\nREMOTE_USER_HEADER = 'HTTP_REMOTE_USER'\n\n# Optional, when the value of HTTP_REMOTE_USER is not a valid email address,\n# Seafile will build an email-like unique id from the value of 'REMOTE_USER_HEADER'\n# and this domain, e.g. user1@example.com.\nREMOTE_USER_DOMAIN = 'example.com'\n\n# Optional, whether to create a new user in the Seafile system, default value is True.\n# If this setting is disabled, users that don't preexist in the Seafile DB cannot log in.\n# The admin has to first import the users from external systems like LDAP.\nREMOTE_USER_CREATE_UNKNOWN_USER = True\n\n# Optional, whether to activate a new user in the Seafile system, default value is True.\n# If this setting is disabled, the user will be unable to log in by default;\n# the administrator needs to manually activate this user.\nREMOTE_USER_ACTIVATE_USER_AFTER_CREATION = True\n\n# Optional, map user attributes in HTTP headers to Seafile's user attributes.\nREMOTE_USER_ATTRIBUTE_MAP = {\n 'HTTP_DISPLAYNAME': 'name',\n 'HTTP_MAIL': 'contact_email',\n\n # for user info\n \"HTTP_GIVENNAME\": 'givenname',\n \"HTTP_SN\": 'surname',\n \"HTTP_ORGANIZATION\": 'institution',\n\n # for user role\n 'HTTP_SHIBBOLETH_AFFILIATION': 'affiliation',\n}\n\n# Map affiliation to user role. Though the config name is SHIBBOLETH_AFFILIATION_ROLE_MAP,\n# it is not restricted to Shibboleth\nSHIBBOLETH_AFFILIATION_ROLE_MAP = {\n 'employee@uni-mainz.de': 'staff',\n 'member@uni-mainz.de': 'staff',\n 'student@uni-mainz.de': 'student',\n 'employee@hu-berlin.de': 'guest',\n 'patterns': (\n ('*@hu-berlin.de', 'guest1'),\n ('*@*.de', 'guest2'),\n ('*', 'guest'),\n ),\n}\n
Then restart Seafile.
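The role-resolution behavior implied by SHIBBOLETH_AFFILIATION_ROLE_MAP above (exact entries first, then the 'patterns' tuple in order) can be sketched like this. This is a simplified illustration, not Seafile's actual code; the resolve_role helper is our own:

```python
from fnmatch import fnmatch

ROLE_MAP = {
    'employee@uni-mainz.de': 'staff',
    'student@uni-mainz.de': 'student',
    'patterns': (
        ('*@hu-berlin.de', 'guest1'),
        ('*@*.de', 'guest2'),
        ('*', 'guest'),
    ),
}

def resolve_role(affiliation, role_map):
    """Resolve a user role: exact match first, then wildcard patterns in order."""
    if affiliation != 'patterns' and affiliation in role_map:
        return role_map[affiliation]
    for pattern, role in role_map.get('patterns', ()):
        if fnmatch(affiliation, pattern):
            return role
    return None
```

Because patterns are checked in order, the most specific pattern should be listed first and a catch-all like `'*'` last.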
"},{"location":"config/roles_permissions/","title":"Roles and Permissions Support","text":"
You can add/edit roles and permissions for users. A role is just a group of users with some pre-defined permissions; you can toggle user roles on the user list page in the admin panel. For most permissions, the meaning can easily be inferred from the variable name. The following is a more detailed introduction to some variables.
role_quota is used to set the quota for a certain role of users. For example, we can set the quota of employee to 100G by adding 'role_quota': '100g', and leave the other roles of users at the default quota.
After setting role_quota, it takes effect once a user with that role logs into Seafile. You can also manually change seafile-db.RoleQuota if you want to see the effect immediately.
can_add_public_repo sets whether a role can create a public library (shared with all logged-in users); the default is False.
Since version 11.0.9 pro, can_share_repo is added to limit users' ability to share a library.
The can_add_public_repo option will not take effect if you configure global CLOUD_MODE = True
can_create_wiki and can_publish_wiki control whether a role can create and publish a Wiki. (A published Wiki has a special URL and can be visited by anonymous users.)
The storage_ids permission is used for assigning storage backends to users with a specific role. More details can be found in multiple storage backends.
upload_rate_limit and download_rate_limit limit upload and download speed for users with different roles.
Note
After configuring the rate limit, run the following command in the seafile-server-latest directory to make the configuration take effect:
If you want to edit the permissions of built-in roles, e.g. default users can invite guests, or guest users can view repos in the organization, you can add the following lines to seahub_settings.py with the corresponding permissions set to True.
After that, the email address \"a@a.com\", any email address ending with \"@a-a-a.com\", and any email address ending with \"@foo.com\" or \"@bar.com\" will not be allowed.
If you want to add a new role and assign some users this role, e.g. a new role employee that can invite guests, can create a public library, and has all other permissions a default user has, you can add the following lines to seahub_settings.py
"},{"location":"config/saml2/","title":"SAML 2.0 in version 10.0+","text":"
In this document, we use the Microsoft Azure SAML single sign-on app and Microsoft on-premise ADFS to show how Seafile integrates SAML 2.0. Other SAML 2.0 providers should be similar.
"},{"location":"config/saml2/#preparations-for-saml-20","title":"Preparations for SAML 2.0","text":"
Second, prepare the SP (Seafile) certificate directory and SP certificates:
Create certs dir
$ mkdir -p /opt/seafile/seahub-data/certs\n
The SP certificate can be generated with the openssl command, or you can apply to a certificate vendor; it is up to you. For example, generate the SP certs using the following command:
The days option indicates the validity period of the generated certificate, in days. The system admin needs to renew the certificate regularly.
"},{"location":"config/saml2/#integration-with-adfssaml-single-sign-on","title":"Integration with ADFS/SAML single sign-on","text":""},{"location":"config/saml2/#microsoft-azure-saml-single-sign-on-app","title":"Microsoft Azure SAML single sign-on app","text":"
If you use Microsoft Azure SAML app to achieve single sign-on, please follow the steps below:
First, add SAML single sign-on app and assign users, refer to: add an Azure AD SAML application, create and assign users.
Second, set up the Identifier, Reply URL, and Sign on URL of the SAML app based on your service URL; refer to: enable single sign-on for SAML app. The formats of the Identifier, Reply URL, and Sign on URL are: https://example.com/saml2/metadata/, https://example.com/saml2/acs/, and https://example.com/, e.g.:
Next, edit the SAML attributes & claims. Keep the default attributes & claims of the SAML app unchanged; the uid attribute must be added, while the mail and name attributes are optional, e.g.:
Next, download the base64 format SAML app's certificate and rename it to idp.crt:
and put it under the certs directory (/opt/seafile/seahub-data/certs).
Next, copy the metadata URL of the SAML app:
and paste it into the SAML_REMOTE_METADATA_URL option in seahub_settings.py, e.g.:
SAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx' # copy from SAML app\n
Next, add the ENABLE_ADFS_LOGIN, LOGIN_REDIRECT_URL and SAML_ATTRIBUTE_MAPPING options to seahub_settings.py, and then restart Seafile, e.g.:
ENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n 'seafile_groups': ('', ), # Optional, set this attribute if you need to synchronize groups/departments.\n ...\n\n}\nSAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx' # copy from SAML app\n
Note
If the xmlsec1 binary is not located in /usr/bin/xmlsec1, you need to add the following configuration in seahub_settings.py:
SAML_XMLSEC_BINARY_PATH = '/path/to/xmlsec1'\n
View where the xmlsec1 binary is located:
$ which xmlsec1\n
If certificates are not placed in /opt/seafile/seahub-data/certs, you need to add the following configuration in seahub_settings.py:
SAML_CERTS_DIR = '/path/to/certs'\n
Finally, open the browser and enter the Seafile login page, click Single Sign-On, and use the user assigned to SAML app to perform a SAML login test.
If you use Microsoft ADFS to achieve single sign-on, please follow the steps below:
First, please make sure the following preparations are done:
A Windows Server with ADFS installed. For configuring and installing ADFS you can see this article.
A valid SSL certificate for ADFS server, and here we use temp.adfs.com as the domain name example.
A valid SSL certificate for Seafile server, and here we use demo.seafile.com as the domain name example.
Second, download the base64 format certificate and upload it:
Navigate to the AD FS management window. In the left sidebar menu, navigate to Services > Certificates.
Locate the Token-signing certificate. Right-click the certificate and select View Certificate.
In the dialog box, select the Details tab.
Click Copy to File.
In the Certificate Export Wizard that opens, click Next.
Select Base-64 encoded X.509 (.CER), then click Next.
Name it idp.crt, then click Next.
Click Finish to complete the download.
Then put it under the certs directory (/opt/seafile/seahub-data/certs).
Next, add the following configurations to seahub_settings.py and then restart Seafile:
ENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n 'seafile_groups': ('', ), # Optional, set this attribute if you need to synchronize groups/departments.\n ...\n}\nSAML_REMOTE_METADATA_URL = 'https://temp.adfs.com/federationmetadata/2007-06/federationmetadata.xml' # The format of the ADFS federation metadata URL is: `https://{your ADFS domain name}/federationmetadata/2007-06/federationmetadata.xml`\n
Next, add relying party trust:
Log into the ADFS server and open the ADFS management.
Under Actions, click Add Relying Party Trust.
On the Welcome page, choose Claims aware and click Start.
Select Import data about the relying party published online or on a local network, type your metadata URL in Federation metadata address (host name or URL), and then click Next. Your metadata URL format is: https://example.com/saml2/metadata/, e.g.:
On the Specify Display Name page, type a name in Display name, e.g. Seafile, under Notes type a description for this relying party trust, and then click Next.
In the Choose an access control policy window, select Permit everyone, then click Next.
Review your settings, then click Next.
Click Close.
Next, create claims rules:
Open the ADFS management, click Relying Party Trusts.
Right-click your trust, and then click Edit Claim Issuance Policy.
On the Issuance Transform Rules tab click Add Rules.
Click the Claim rule template dropdown menu and select Send LDAP Attributes as Claims, and then click Next.
In the Claim rule name field, type the display name for this rule, such as Seafile Claim rule. Click the Attribute store dropdown menu and select Active Directory. In the LDAP Attribute column, click the dropdown menu and select User-Principal-Name. In the Outgoing Claim Type column, click the dropdown menu and select UPN. And then click Finish.
Click Add Rule again.
Click the Claim rule template dropdown menu and select Transform an Incoming Claim, and then click Next.
In the Claim rule name field, type the display name for this rule, such as UPN to Name ID. Click the Incoming claim type dropdown menu and select UPN (it must match the Outgoing Claim Type in rule Seafile Claim rule). Click the Outgoing claim type dropdown menu and select Name ID. Click the Outgoing name ID format dropdown menu and select Email. And then click Finish.
Click OK to add both new rules.
When creating claim rules, you can also select other LDAP Attributes, such as E-Mail-Addresses, depending on your ADFS service.
Finally, open the browser and enter the Seafile login page, click Single Sign-On to perform ADFS login test.
[DATABASE]\ntype = mysql\nhost = 192.168.0.2\nport = 3306\nusername = seafile\npassword = password\nname = seahub_db\n\n[STATISTICS]\n## must be \"true\" to enable statistics\nenabled = false\n\n[SEAHUB EMAIL]\n## must be \"true\" to enable user email notifications when there are new unread notifications\nenabled = true\n\n## interval of sending Seahub email. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval = 30m\n\n[FILE HISTORY]\nenabled = true\nthreshold = 5\nsuffix = md,txt,...\n\n## From seafile 7.0.0\n## Recording file history to the database for fast access is enabled by default for 'Markdown, .txt, ppt, pptx, doc, docx, xls, xlsx'. \n## After enabling the feature, the old history versions for markdown, doc, docx files will not be listed on the history page.\n## (Only new histories stored in the database will be listed.) But users can still access the old versions in the library snapshots.\n## For file types not listed in the suffix, history versions will be scanned from the library history as before.\n## The feature is enabled by default. You can set 'enabled = false' to disable it.\n\n## The 'threshold' is the time threshold for recording the historical version of a file, in minutes; the default is 5 minutes. \n## This means that if the interval between two adjacent file saves is less than 5 minutes, the two file changes will be merged and recorded as one historical version. \n## When set to 0, there is no time limit, which means that each save will generate a separate historical version.\n\n## If you need to modify the list of file types, adjust the 'suffix = md, txt, ...' configuration item.\n\n# Since Seafile 13.0, Redis is also supported in CE, and is the default cache server\n[REDIS]\n## Redis uses database 0 and the \"repo_update\" channel\nserver = 192.168.1.1\nport = 6379\npassword = q!1w@#123\n
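The [FILE HISTORY] threshold behavior (saves closer together than threshold minutes are merged into one recorded version) can be sketched as follows. This is a simplified model, not seafevents' actual code; timestamps are in minutes and the helper name is our own:

```python
def count_history_versions(save_times_min, threshold_min=5):
    """Count recorded history versions for a sequence of save timestamps.

    A save less than `threshold_min` minutes after the last *recorded*
    version is merged into it; threshold 0 records every save.
    Simplified model of the documented behavior.
    """
    versions = 0
    last_recorded = None
    for t in sorted(save_times_min):
        if last_recorded is None or t - last_recorded >= threshold_min:
            versions += 1
            last_recorded = t
    return versions
```

For example, saves at minutes 0, 2, 4 and 10 yield two versions with the default 5-minute threshold, but four versions with threshold 0.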
"},{"location":"config/seafevents-conf/#the-following-configurations-for-pro-edition-only","title":"The following configurations for Pro Edition only","text":"
[AUDIT]\n## Audit log is disabled by default.\n## Leads to additional SQL tables being filled up; make sure your SQL server is able to handle it.\nenabled = true\n\n[INDEX FILES]\n## must be \"true\" to enable search\nenabled = true\n\n## The interval the search index is updated. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval=10m\n\n## Since Seafile 6.3.0 pro, in order to speed up full-text search, you should set\nhighlight = fvh\n\n## If true, indexes the contents of office/pdf files while updating the search index\n## Note: If you change this option from \"false\" to \"true\", then you need to clear the search index and update the index again.\n## Refer to the file search manual for details.\nindex_office_pdf=false\n\n## The default size limit for doc, docx, ppt, pptx, xls, xlsx and pdf files. Files larger than this will not be indexed.\n## Since version 6.2.0\n## Unit: MB\noffice_file_size_limit = 10\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch with a username and password; you need to configure a username and password for the Elasticsearch server\nusername = elastic # username to connect to Elasticsearch\npassword = elastic_password # password to connect to Elasticsearch\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch via HTTPS; you need to configure HTTPS for the Elasticsearch server\nscheme = https # The default is http. If the Elasticsearch server is not configured with HTTPS, the scheme and cafile do not need to be configured\ncafile = path/to/cert.pem # The certificate path for user authentication. If the Elasticsearch server does not enable certificate authentication, this does not need to be configured\n\n## From version 11.0.5 Pro, you can customize ElasticSearch index names for distinct instances when integrating multiple Seafile servers with a single ElasticSearch server.\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\n\n## The default loglevel is `warning`.\n## Since version 11.0.4\nloglevel = info\n\n[EVENTS PUBLISH]\n## must be \"true\" to enable publishing event messages\nenabled = false\n## message format: repo-update\\t{{repo_id}}\\t{{commit_id}}\n## Currently only the redis message queue is supported\nmq_type = redis\n\n[AUTO DELETION]\nenabled = true # Default is false; when enabled, users can use the file auto deletion feature\ninterval = 86400 # The unit is second(s); the default frequency is one day, that is, it runs once a day\n\n[SEASEARCH]\nenabled = true # Default is false; when enabled, Seafile can use SeaSearch as the search engine\nseasearch_url = http://seasearch:4080 # If your SeaSearch server is deployed on another machine, replace this with the actual address\nseasearch_token = <your auth token> # base64 encoding of `username:password`\ninterval = 10m # The interval the search index is updated. Can be s(seconds), m(minutes), h(hours), d(days)\n
You may set a default quota (e.g. 2GB) for all users. To do this, just add the following lines to the seafile.conf file:
[quota]\n# default user quota in GB, integer only\ndefault = 2\n
This setting applies to all users. If you want to set a quota for a specific user, you may log in to the seahub website as administrator, then set it on the \"System Admin\" page.
Since Pro 10.0.9, you can set the maximum number of files allowed in a library; when this limit is exceeded, files cannot be uploaded to the library. There is no limit by default.
[quota]\nlibrary_file_limit = 100000\n
"},{"location":"config/seafile-conf/#default-history-length-limit","title":"Default history length limit","text":"
If you don't want to keep all file revision history, you may set a default history length limit for all libraries.
The seaf-server component in Seafile Pro Edition uses memory caches in various cases to improve performance. (The seaf-server component in the Community Edition does not use a cache.) Some session information is also saved in the memory cache to be shared among cluster nodes. Memcached or Redis can be used as the memory cache.
Tip
Redis support was added in version 11.0 and is the default cache server since Seafile 13.0. Currently only single-node Redis is supported; Redis Sentinel or Cluster is not supported yet.
memcached | Redis
[memcached]\n# Replace `localhost` with the memcached address:port if you're using remote memcached\n# POOL-MIN and POOL-MAX is used to control connection pool size. Usually the default is good enough.\nmemcached_options = --SERVER=localhost --POOL-MIN=10 --POOL-MAX=100\n
[redis]\n# your redis server address\nredis_host = 127.0.0.1\n# your redis server port\nredis_port = 6379\n# size of connection pool to redis, default is 100\nmax_connections = 100\n
The configuration of the Seafile fileserver is in the [fileserver] section of the file seafile.conf:
[fileserver]\n# bind address for fileserver\n# default to 0.0.0.0, if deployed without proxy: no access restriction\n# set to 127.0.0.1, if used with local proxy: only access by local\nhost = 127.0.0.1\n# tcp port for fileserver\nport = 8082\n
Since Community Edition 6.2 and Pro Edition 6.1.9, you can set the number of worker threads to serve HTTP requests. The default value is 10, which is a good value for most use cases.
[fileserver]\nworker_threads = 15\n
Change upload/download settings.
[fileserver]\n# Set maximum upload file size to 200M.\n# If not configured, there is no file size limit for uploading.\nmax_upload_size=200\n\n# Set maximum download directory size to 200M.\n# Default is 100M.\nmax_download_dir_size=200\n
After a file is uploaded via the web interface, or the cloud file browser in the client, it needs to be divided into fixed-size blocks and stored in the storage backend. We call this procedure \"indexing\". By default, the file server uses 1 thread to sequentially index the file and store the blocks one by one. This is suitable for most cases. But if you're using S3/Ceph/Swift backends, you may have more bandwidth in the storage backend for storing multiple blocks in parallel. We provide an option to define the number of concurrent threads in indexing:
[fileserver]\nmax_indexing_threads = 10\n
When users upload files in the web interface (seahub), the file server divides the file into fixed-size blocks. The default block size for web-uploaded files is 8MB. The block size can be set here.
[fileserver]\n#Set block size to 2MB\nfixed_block_size=2\n
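The chunking step can be sketched as below. This is a simplified illustration of splitting an upload into fixed-size blocks, not the server's actual implementation (which also computes an ID per block and writes them to the storage backend):

```python
import io

def split_into_blocks(stream, block_size=8 * 1024 * 1024):
    """Yield fixed-size blocks from a file-like object; the last block
    may be shorter. Simplified sketch of server-side chunking."""
    while True:
        block = stream.read(block_size)
        if not block:
            break
        yield block

# A 20-byte "file" with an 8-byte block size yields blocks of 8, 8 and 4 bytes.
blocks = list(split_into_blocks(io.BytesIO(b"x" * 20), block_size=8))
```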
When users upload files in the web interface, the file server assigns a token to authorize the upload operation. This token is valid for 1 hour by default. When uploading a large file via WAN, the upload time can be longer than 1 hour. You can change the token expire time to a larger value.
[fileserver]\n#Set uploading time limit to 3600s\nweb_token_expire_time=3600\n
You can download a folder as a zip archive from seahub, but some zip software on Windows doesn't support UTF-8, in which case you can use the \"windows_encoding\" setting to solve it.
[zip]\n# The file name encoding of the downloaded zip file.\nwindows_encoding = iso-8859-1\n
The \"httptemp\" directory contains temporary files created during file upload and zip download. In some cases the temporary files are not cleaned up after the file transfer was interrupted. Starting from 7.1.5 version, file server will regularly scan the \"httptemp\" directory to remove files created long time ago.
[fileserver]\n# After how much time a temp file will be removed. The unit is in seconds. Default to 3 days.\nhttp_temp_file_ttl = x\n# File scan interval. The unit is in seconds. Default to 1 hour.\nhttp_temp_scan_interval = x\n
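The periodic scan described above amounts to an age-based cleanup. A minimal sketch (illustration only; the real scanner walks the httptemp directory, while here `files` is a hypothetical mapping of file name to creation time in seconds):

```python
import time

def expired_temp_files(files, ttl_seconds, now=None):
    """Return names of temp files older than `ttl_seconds`.

    `files` maps file name -> creation timestamp (seconds). Simplified
    model of the httptemp scan; the server deletes the returned files.
    """
    now = time.time() if now is None else now
    return [name for name, ctime in files.items() if now - ctime > ttl_seconds]
```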
New in Seafile Pro 7.1.16 and Pro 8.0.3: you can set the maximum number of files in a library that can be synced by the Seafile client. The default is 100000. When you download a repo, the Seafile client will request the fs id list, and you can control the timeout of this request through the fs_id_list_request_timeout configuration, which defaults to 5 minutes. These two options are added to prevent long fs-id-list requests from overloading the server.
Since Pro 8.0.4, you can set both options to -1 to allow unlimited size and timeout.
If you use object storage as the storage backend, when a large file is frequently downloaded, the same blocks need to be fetched from the storage backend to the Seafile server repeatedly. This may waste bandwidth and cause high load on the internal network. Since Seafile Pro 8.0.5, we added block caching to improve this situation.
To enable this feature, set the use_block_cache option in the [fileserver] group. It's not enabled by default.
The block_cache_size_limit option limits the size of the cache. Its default value is 10GB. The blocks are cached in the seafile-data/block-cache directory. When the total size of cached files exceeds the limit, seaf-server cleans up older files until the size drops to 70% of the limit. The cleanup interval is 5 minutes. You need a good estimate of how much space the cache directory requires; otherwise, frequent downloads can quickly fill it up.
The block_cache_file_types option chooses the file types that are cached. Its default value is mp4;mov.
[fileserver]\nuse_block_cache = true\n# Set block cache size limit to 100MB\nblock_cache_size_limit = 100\nblock_cache_file_types = mp4;mov\n
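The eviction policy described above (exceed the limit, then drop the oldest cached files until usage falls to 70% of the limit) can be sketched as follows. This is a simplified model, not seaf-server's actual code; the tuple layout and helper name are our own:

```python
def evict_block_cache(cached, size_limit):
    """Return the names of cache entries kept after cleanup.

    `cached` is a list of (mtime, name, size) tuples. If total size
    exceeds `size_limit`, the oldest entries are dropped until the
    total is at most 70% of the limit. Simplified illustration.
    """
    total = sum(size for _, _, size in cached)
    if total <= size_limit:
        return [name for _, name, _ in cached]
    target = size_limit * 0.7
    kept = sorted(cached)  # oldest (smallest mtime) first
    while total > target and kept:
        _, _, size = kept.pop(0)  # evict the oldest entry
        total -= size
    return [name for _, name, _ in kept]
```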
When a large number of files are uploaded through the web page and API, it can be expensive to calculate block IDs based on the block contents. Since Seafile-pro-9.0.6, you can add the skip_block_hash option to use a random string as the block ID.
Warning
This option prevents fsck from checking block content integrity. You should pass the --shallow option to fsck so it does not check content integrity.
[fileserver]\nskip_block_hash = true\n
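The trade-off behind skip_block_hash can be illustrated with a sketch: a content-derived block ID is deterministic and lets fsck verify block contents, while a random ID is cheap but unverifiable. This is an illustration only; the real server's hashing details may differ:

```python
import hashlib
import uuid

def block_id(content, skip_block_hash=False):
    """Derive a block ID (simplified model of the documented trade-off)."""
    if skip_block_hash:
        # Random ID: cheap to produce, but block content can't be verified later.
        return uuid.uuid4().hex
    # Content-addressed ID: deterministic, so integrity checks can recompute it.
    return hashlib.sha1(content).hexdigest()
```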
If you want to restrict the types of files that can be uploaded, since Seafile Pro 10.0.0 you can set the file_ext_white_list option in the [fileserver] group. This option is a list of file types; only the file types in this list are allowed to be uploaded. It's not enabled by default.
[fileserver]\nfile_ext_white_list = md;mp4;mov\n
Since Seafile 10.0.1, when you use go fileserver, you can set the upload_limit and download_limit options in the [fileserver] group to limit the speed of file upload and download. It's not enabled by default.
[fileserver]\n# The unit is in KB/s.\nupload_limit = 100\ndownload_limit = 100\n
Since Seafile 11.0.7 Pro, you can ask the file server to check every file uploaded with web APIs for viruses. Find more options about virus scanning at virus scan.
[fileserver]\n# default is false\ncheck_virus_on_web_upload = true\n
When use_ssl is set to true and skip_verify to false, the MySQL server certificate is checked against the CA configured in ca_path. The ca_path is a trusted CA certificate path for signing MySQL server certificates. When skip_verify is true, there is no need to add the ca_path option; the MySQL server certificate won't be verified.
The Seafile Pro server auto-expires file locks after some time, to prevent a locked file from staying locked for too long. The expire time can be tuned in the seafile.conf file.
[file_lock]\ndefault_expire_hours = 6\n
The default is 12 hours.
Since Seafile-pro-9.0.6, you can add a cache for getting locked files (reducing server load caused by sync clients). Since Pro Edition 12, this option is enabled by default.
[file_lock]\nuse_locked_file_cache = true\n
At the same time, you also need to configure the following memcached options for the cache to take effect:
[memcached]\nmemcached_options = --SERVER=<the IP of Memcached Server> --POOL-MIN=10 --POOL-MAX=100\n
Since Seafile-pro-6.3.10, you can enable seaf-server's RPC slow log for performance analysis. The slow log is enabled by default.
If you want to configure related options, add the options to seafile.conf:
[slow_log]\n# default to true\nenable_slow_log = true\n# the unit of all slow log thresholds is millisecond.\n# default to 5000 milliseconds, only RPC queries processed for longer than 5000 milliseconds will be logged.\nrpc_slow_threshold = 5000\n
You can find seafile_slow_rpc.log in logs/slow_logs. You can also rotate the log files: just send SIGUSR2 to the seaf-server process, and the slow log file will be closed and reopened.
Since 9.0.2 Pro, the signal to trigger log rotation has been changed to SIGUSR1. This signal triggers rotation for all log files opened by seaf-server. You should change your log rotation settings accordingly.
Even though Nginx logs all requests with certain details, such as URL, response code and upstream process time, it's sometimes desirable to have more context about the requests, such as the user id for each request. Such information can only be logged by the file server itself. Since 9.0.2 Pro, an access log feature has been added to the fileserver.
To enable access log, add below options to seafile.conf:
[fileserver]\n# default to false. If enabled, fileserver-access.log will be written to log directory.\nenable_access_log = true\n
The log format is as follows:
start time - user id - url - response code - process time\n
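As an illustration of working with this format, the sketch below parses one such line with Python's standard library. The sample line and the assumption that fields are separated by " - " are mine, inferred from the format string above, not taken from the fileserver source:

```python
# Parse a fileserver-access.log line of the form:
#   start time - user id - url - response code - process time
def parse_access_line(line):
    # rsplit from the right so a " - " inside the URL or user id
    # cannot break the two trailing numeric fields
    head, code, elapsed = line.rsplit(" - ", 2)
    start, user, url = head.split(" - ", 2)
    return {
        "start": start,
        "user": user,
        "url": url,
        "status": int(code),
        "seconds": float(elapsed),
    }

# Hypothetical sample line for illustration
sample = "2024-01-01 10:00:00 - user@example.com - /seafhttp/files/abc - 200 - 0.042"
print(parse_access_line(sample)["status"])  # 200
```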
Seafile 9.0 introduces a new fileserver implemented in Go programming language. To enable it, you can set the options below in seafile.conf:
[fileserver]\nuse_go_fileserver = true\n
Go fileserver has 3 advantages over the traditional fileserver implemented in C language:
Better performance when syncing libraries with a large number of files. With the C fileserver, syncing large libraries may consume all the worker threads on the server and make the service slow. There is a config option max_sync_file_count to limit the size of a library to be synced; the default is 100K. With the Go fileserver you can set this option to a much higher number, such as 1 million.
Downloading zipped folders on the fly, with no limit on the size of the downloaded folder. The C fileserver has to first create a zip file for the downloaded folder and then send it to the client; the Go fileserver can create the zip file while transferring it to the client. The option max_download_dir_size is therefore no longer needed by the Go fileserver.
Since version 10.0, you can also set upload/download rate limits.
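A sketch of such rate limits in seafile.conf; the option names and the KB/s unit are my reading of the manual for version 10.0 and should be verified against your release:

```ini
[fileserver]
use_go_fileserver = true
# Limit per-connection transfer rates (assumed unit: KB/s)
upload_limit = 2048
download_limit = 4096
```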
Go fileserver caches fs objects in memory. On the one hand, this avoids repeated creation and destruction of frequently accessed objects; on the other hand, it slows down the rate at which objects are released, which prevents Go's garbage collector from consuming too much CPU time. You can set the amount of memory used by the fs cache through the following option:
[fileserver]\n# The unit is in M. Default to 2G.\nfs_cache_limit = 100\n
Since Pro Edition 12.0.10, you can set the maximum number of threads for fs-id-list requests. When you download a repo, the Seafile client requests the fs id list; you can control the maximum concurrency for handling fs-id-list requests in the Go fileserver through the fs_id_list_max_threads option, which defaults to 10.
[fileserver]\nfs_id_list_max_threads = 20\n
"},{"location":"config/seafile-conf/#profiling-go-fileserver-performance","title":"Profiling Go Fileserver Performance","text":"
Since Seafile 9.0.7, you can enable the profile function of go fileserver by adding the following configuration options:
# profile_password is required, change it for your need\n[fileserver]\nenable_profiling = true\nprofile_password = 8kcUz1I2sLaywQhCRtn2x1\n
This interface can be used through the pprof tool provided by the Go language. See https://pkg.go.dev/net/http/pprof for details. Note that you have to first install Go on the client that issues the commands below. The password parameter should match the one you set in the configuration.
go tool pprof http://localhost:8082/debug/pprof/heap?password=8kcUz1I2sLaywQhCRtn2x1\ngo tool pprof http://localhost:8082/debug/pprof/profile?password=8kcUz1I2sLaywQhCRtn2x1\n
"},{"location":"config/seafile-conf/#notification-server-configuration","title":"Notification server configuration","text":"
Since Seafile 10.0.0, you can ask Seafile server to send notifications (file changes, lock changes and folder permission changes) to Notification Server component.
[notification]\nenabled = true\n# IP address of the server running notification server\n# or \"notification-server\" if you are running notification server container on the same host as Seafile server\nhost = 192.168.0.83\n# the port of notification server\nport = 8083\n
Tip
The configuration here only works for version >= 12.0. The configuration for notification server was changed in 12.0 to make it clearer. The new configuration is not compatible with older versions.
"},{"location":"config/seahub_customization/","title":"Seahub customization","text":""},{"location":"config/seahub_customization/#customize-seahub-logo-and-css","title":"Customize Seahub Logo and CSS","text":"
For example, modify the templates/help/base.html file and save it. You will see the new help page.
Note
There are some more help pages available for modifying, you can find the list of the html file here
"},{"location":"config/seahub_customization/#add-an-extra-note-in-sharing-dialog","title":"Add an extra note in sharing dialog","text":"
You can add an extra note in sharing dialog in seahub_settings.py
ADDITIONAL_SHARE_DIALOG_NOTE = {\n 'title': 'Attention! Read before sharing files:',\n 'content': 'Do not share personal or confidential official data with **.'\n}\n
Since Pro 7.0.9, Seafile supports adding some custom navigation entries to the home page for quick access. This requires you to add the following configuration information to the conf/seahub_settings.py configuration file:
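The configuration block itself did not survive extraction here; below is a sketch of what such an entry looks like in conf/seahub_settings.py. The CUSTOM_NAV_ITEMS name and its keys follow the Pro documentation; the icon class, label, and URL are placeholders:

```python
# Each entry adds one quick-access item to the home page.
CUSTOM_NAV_ITEMS = [
    {'icon': 'sf2-icon-star',             # icon class shown next to the entry
     'desc': 'Wiki',                      # label displayed to the user
     'link': 'https://example.com/wiki/'} # target URL (placeholder)
]
```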
You can also modify most of the config items via web interface. The config items are saved in database table (seahub-db/constance_config). They have a higher priority over the items in config files. If you want to disable settings via web interface, you can add ENABLE_SETTINGS_VIA_WEB = False to seahub_settings.py.
"},{"location":"config/seahub_settings_py/#sending-email-notifications-on-seahub","title":"Sending Email Notifications on Seahub","text":"
# For security consideration, please set this to match the host/domain of your site, e.g., ALLOWED_HOSTS = ['.example.com'].\n# Please refer to https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts for details.\nALLOWED_HOSTS = ['.myseafile.com']\n\n\n# Whether to use a secure cookie for the CSRF cookie\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-cookie-secure\nCSRF_COOKIE_SECURE = True\n\n# The value of the SameSite flag on the CSRF cookie\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-cookie-samesite\nCSRF_COOKIE_SAMESITE = 'Strict'\n\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-trusted-origins\nCSRF_TRUSTED_ORIGINS = ['https://www.myseafile.com']\n
The following options affect user registration, password and session.
# Enable or disable registration on web. Default is `False`.\nENABLE_SIGNUP = False\n\n# Activate or deactivate user when registration is complete. Default is `True`.\n# If set to `False`, new users need to be activated by admin in admin panel.\nACTIVATE_AFTER_REGISTRATION = False\n\n# Whether to send email when a system admin adds a new member. Default is `True`.\nSEND_EMAIL_ON_ADDING_SYSTEM_MEMBER = True\n\n# Whether to send email when a system admin resets a user's password. Default is `True`.\nSEND_EMAIL_ON_RESETTING_USER_PASSWD = True\n\n# Send system admin a notify email when user registration is complete. Default is `False`.\nNOTIFY_ADMIN_AFTER_REGISTRATION = True\n\n# Remember days for login. Default is 7\nLOGIN_REMEMBER_DAYS = 7\n\n# Attempt limit before showing a captcha when logging in.\nLOGIN_ATTEMPT_LIMIT = 3\n\n# deactivate user account when login attempts exceed limit\n# Since version 5.1.2 or pro 5.1.3\nFREEZE_USER_ON_LOGIN_FAILED = False\n\n# minimum length for user's password\nUSER_PASSWORD_MIN_LENGTH = 6\n\n# LEVEL based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above.\nUSER_PASSWORD_STRENGTH_LEVEL = 3\n\n# default False, only check USER_PASSWORD_MIN_LENGTH\n# when True, check password strength level, STRONG(or above) is allowed\nUSER_STRONG_PASSWORD_REQUIRED = False\n\n# Force user to change password when admin adds/resets a user.\n# Added in 5.1.1, defaults to True.\nFORCE_PASSWORD_CHANGE = True\n\n# Age of cookie, in seconds (default: 2 weeks).\nSESSION_COOKIE_AGE = 60 * 60 * 24 * 7 * 2\n\n# Whether a user's session cookie expires when the Web browser is closed.\nSESSION_EXPIRE_AT_BROWSER_CLOSE = False\n\n# Whether to save the session data on every request. Default is `False`\nSESSION_SAVE_EVERY_REQUEST = False\n\n# Whether to enable the feature \"published library\". 
Default is `False`\n# Since 6.1.0 CE\nENABLE_WIKI = True\n\n# In old versions, if you use Single Sign On, the password is not saved in Seafile.\n# Users can't use WebDAV because Seafile can't check whether the password is correct.\n# Since version 6.3.8, you can enable this option to let users specify a password for WebDAV login.\n# Users who log in via SSO can use this password to log in to WebDAV.\n# Enable the feature. pycryptodome should be installed first.\n# sudo pip install pycryptodome==3.12.0\nENABLE_WEBDAV_SECRET = True\nWEBDAV_SECRET_MIN_LENGTH = 8\n\n# LEVEL for the password, based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above.\nWEBDAV_SECRET_STRENGTH_LEVEL = 1\n\n\n# Since version 7.0.9, you can force all users to log in with two factor authentication.\n# The prerequisite is that the administrator should 'enable two factor authentication' in the 'System Admin -> Settings' page.\n# Then you can add the following configuration information to the configuration file.\nENABLE_FORCE_2FA_TO_ALL_USERS = True\n
# whether to enable creating encrypted libraries\nENABLE_ENCRYPTED_LIBRARY = True\n\n# version for encrypted library\n# should only be `2` or `4`.\n# version 3 is insecure (using AES128 encryption) so it's not supported any more.\n# refer to https://manual.seafile.com/latest/administration/security_features/#how-does-an-encrypted-library-work\n# for the difference between version 2 and 4.\nENCRYPTED_LIBRARY_VERSION = 2\n\n# Since version 12, you can choose the password hash algorithm for new encrypted libraries.\n# The password is used to encrypt the encryption key. So using a secure password hash algorithm to\n# prevent brute-force password guessing is important.\n# Before version 12, a fixed algorithm (PBKDF2-SHA256 with 1000 iterations) was used.\n#\n# Currently two hash algorithms are supported.\n# - PBKDF2: The only available parameter is the number of iterations. You need to increase\n# the number of iterations over time, as GPUs are more and more used for such calculation.\n# The default number of iterations is 1000. As of 2023, the recommended iterations is 600,000.\n# - Argon2id: Secure hash algorithm that has high cost even for GPUs. There are 3 parameters that\n# can be set: time cost, memory cost, and parallelism degree. The parameters are separated by commas,\n# e.g. \"2,102400,8\", which are the default parameters used in Seafile. 
Learn more about this algorithm\n# on https://github.com/P-H-C/phc-winner-argon2 .\n#\n# Note that only sync client >= 9.0.9 and SeaDrive >= 3.0.12 support syncing libraries created with these algorithms.\nENCRYPTED_LIBRARY_PWD_HASH_ALGO = \"argon2id\"\nENCRYPTED_LIBRARY_PWD_HASH_PARAMS = \"2,102400,8\"\n# ENCRYPTED_LIBRARY_PWD_HASH_ALGO = \"pbkdf2_sha256\"\n# ENCRYPTED_LIBRARY_PWD_HASH_PARAMS = \"600000\"\n\n# minimum length for the password of an encrypted library\nREPO_PASSWORD_MIN_LENGTH = 8\n\n# force use of a password when generating a share/upload link (since version 8.0.9)\nSHARE_LINK_FORCE_USE_PASSWORD = False\n\n# minimum length for the password of a share link (since version 4.4)\nSHARE_LINK_PASSWORD_MIN_LENGTH = 8\n\n# LEVEL for the password of a share/upload link\n# based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above. (since version 8.0.9)\nSHARE_LINK_PASSWORD_STRENGTH_LEVEL = 3\n\n# Default expire days for share link (since version 6.3.8)\n# Once this value is configured, the user can no longer generate a share link with no expiration time.\n# If the expiration value is not set when the share link is generated, the value configured here will be used.\nSHARE_LINK_EXPIRE_DAYS_DEFAULT = 5\n\n# minimum expire days for share link (since version 6.3.6)\n# SHARE_LINK_EXPIRE_DAYS_MIN should be less than SHARE_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nSHARE_LINK_EXPIRE_DAYS_MIN = 3 # default is 0, no limit.\n\n# maximum expire days for share link (since version 6.3.6)\n# SHARE_LINK_EXPIRE_DAYS_MAX should be greater than SHARE_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nSHARE_LINK_EXPIRE_DAYS_MAX = 8 # default is 0, no limit.\n\n# Default expire days for upload link (since version 7.1.6)\n# Once this value is configured, the user can no longer generate an upload link with no expiration time.\n# If the expiration value is not set when the upload link is generated, the value 
configured here will be used.\nUPLOAD_LINK_EXPIRE_DAYS_DEFAULT = 5\n\n# minimum expire days for upload link (since version 7.1.6)\n# UPLOAD_LINK_EXPIRE_DAYS_MIN should be less than UPLOAD_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nUPLOAD_LINK_EXPIRE_DAYS_MIN = 3 # default is 0, no limit.\n\n# maximum expire days for upload link (since version 7.1.6)\n# UPLOAD_LINK_EXPIRE_DAYS_MAX should be greater than UPLOAD_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nUPLOAD_LINK_EXPIRE_DAYS_MAX = 8 # default is 0, no limit.\n\n# force user login when viewing a file/folder share link (since version 6.3.6)\nSHARE_LINK_LOGIN_REQUIRED = True\n\n# enable watermark when viewing (not editing) a file in the web browser (since version 6.3.6)\nENABLE_WATERMARK = True\n\n# Disable sync with any folder. Default is `False`\n# NOTE: since version 4.2.4\nDISABLE_SYNC_WITH_ANY_FOLDER = True\n\n# Enable or disable library history setting\nENABLE_REPO_HISTORY_SETTING = True\n\n# Enable or disable sharing libraries to any group\n# Since version 6.2.0\nENABLE_SHARE_TO_ALL_GROUPS = True\n\n# Enable or disable users cleaning the trash (default is True)\n# Since version 6.3.6\nENABLE_USER_CLEAN_TRASH = True\n\n# Add a report abuse button on download links. (since version 7.1.0)\n# Users can report abuse on the share link page, filling in the report type, contact information, and description.\n# Default is false.\nENABLE_SHARE_LINK_REPORT_ABUSE = True\n
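To see why the iteration count matters for PBKDF2, the sketch below derives a key with Python's standard hashlib and compares the cost of the old default (1000 iterations) with the 2023 recommendation (600,000). This illustrates the general technique only; it is not Seafile's internal key-derivation code, and the password and salt are made up:

```python
import hashlib
import os
import time

password = b"library-password"   # illustrative only
salt = os.urandom(16)            # a real implementation stores the salt

for iterations in (1000, 600_000):  # old default vs. 2023 recommendation
    t0 = time.perf_counter()
    key = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
    elapsed = time.perf_counter() - t0
    print(f"{iterations:>7} iterations: {len(key)}-byte key in {elapsed:.3f}s")
```

The higher iteration count makes each password guess proportionally more expensive for an attacker, at the cost of a slightly slower unlock for the legitimate user.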
Options for online file preview:
# Online preview maximum file size, defaults to 30M.\nFILE_PREVIEW_MAX_SIZE = 30 * 1024 * 1024\n\n# Extensions of previewed text files.\n# NOTE: since version 6.1.1\nTEXT_PREVIEW_EXT = \"\"\"ac, am, bat, c, cc, cmake, cpp, cs, css, diff, el, h, html,\nhtm, java, js, json, less, make, org, php, pl, properties, py, rb,\nscala, script, sh, sql, txt, text, tex, vi, vim, xhtml, xml, log, csv,\ngroovy, rst, patch, go\"\"\"\n\n\n# Seafile only generates thumbnails for images smaller than the following size.\n# Since version 6.3.8 pro, PSD online preview is supported.\nTHUMBNAIL_IMAGE_SIZE_LIMIT = 30 # MB\n\n# Enable or disable thumbnails for video. ffmpeg and moviepy should be installed first.\n# For details, please refer to https://manual.seafile.com/deploy/video_thumbnails.html\n# NOTE: this option is deprecated in version 7.1\nENABLE_VIDEO_THUMBNAIL = False\n\n# Use the frame at 5 seconds as the thumbnail\n# NOTE: this option is deprecated in version 7.1\nTHUMBNAIL_VIDEO_FRAME_TIME = 5\n\n# Absolute filesystem path to the directory that will hold thumbnail files.\nTHUMBNAIL_ROOT = '/haiwen/seahub-data/thumbnail/thumb/'\n\n# Default size for picture preview. Enlarging this size can improve the preview quality.\n# NOTE: since version 6.1.1\nTHUMBNAIL_SIZE_FOR_ORIGINAL = 1024\n
You should enable cloud mode if you use Seafile with an unknown user base. It disables the Organization tab in Seahub's website to ensure that users can't access the user list. Cloud mode provides some nice features, like sharing content with unregistered users and sending invitations to them, so you may also want to enable user registration. Through the global address book (since version 4.2.3) anyone can search for every user account, so you probably want to disable it.
# Enable cloud mode and hide `Organization` tab.\nCLOUD_MODE = True\n\n# Disable global address book\nENABLE_GLOBAL_ADDRESSBOOK = False\n
# Enable authentication with ADFS\n# Default is False\n# Since 6.0.9\nENABLE_ADFS_LOGIN = True\n\n# Force user login through ADFS/OAuth instead of email and password\n# Default is False\n# Since 11.0.7; in version 12.0, it also controls users via OAuth\nDISABLE_ADFS_USER_PWD_LOGIN = True\n\n# Enable authentication with Kerberos\n# Default is False\nENABLE_KRB5_LOGIN = True\n\n# Enable authentication with Shibboleth\n# Default is False\nENABLE_SHIBBOLETH_LOGIN = True\n\n# Enable client to open an external browser for single sign on\n# When it is false, the old built-in browser is opened for single sign on\n# When it is true, the default browser of the operating system is opened\n# The benefit of using the system browser is that it can support hardware 2FA\n# Since 11.0.0, and sync client 9.0.5, drive client 3.0.8\nCLIENT_SSO_VIA_LOCAL_BROWSER = True # default is False\nCLIENT_SSO_UUID_EXPIRATION = 5 * 60 # in seconds\n
# This is the outside URL for Seahub (Seafile Web).\n# The domain part (i.e., www.example.com) will be used in generating share links and downloading/uploading files via web.\n# Note: SERVICE_URL is moved to seahub_settings.py since 9.0.0\n# Note: SERVICE_URL is no longer used since version 12.0\n# SERVICE_URL = 'https://seafile.example.com:'\n\n# Disable settings via Web interface in system admin->settings\n# Default is True\n# Since 5.1.3\nENABLE_SETTINGS_VIA_WEB = False\n\n# Choices can be found here:\n# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name\n# although not all choices may be available on all operating systems.\n# If running in a Windows environment this must be set to the same as your\n# system time zone.\nTIME_ZONE = 'UTC'\n\n# Language code for this installation. All choices can be found here:\n# http://www.i18nguy.com/unicode/language-identifiers.html\n# Default language for sending emails.\nLANGUAGE_CODE = 'en'\n\n# Custom language code choice.\nLANGUAGES = (\n ('en', 'English'),\n ('zh-cn', '\u7b80\u4f53\u4e2d\u6587'),\n ('zh-tw', '\u7e41\u9ad4\u4e2d\u6587'),\n)\n\n# Set this to your website/company's name. This is contained in email notifications and the welcome message when a user logs in for the first time.\nSITE_NAME = 'Seafile'\n\n# Browser tab's title\nSITE_TITLE = 'Private Seafile'\n\n# If you don't want to run the seahub website on your site's root path, set this option to your preferred path.\n# e.g. setting it to '/seahub/' would run seahub on http://example.com/seahub/.\nSITE_ROOT = '/'\n\n# Max number of files when a user uploads a file/folder.\n# Since version 6.0.4\nMAX_NUMBER_OF_FILES_FOR_FILEUPLOAD = 500\n\n# Control the language used for sending share link emails. 
Defaults to the user's current language.\n# Since version 6.1.1\nSHARE_LINK_EMAIL_LANGUAGE = ''\n\n# Interval at which the browser requests unread notifications\n# Since PRO 6.1.4 or CE 6.1.2\nUNREAD_NOTIFICATIONS_REQUEST_INTERVAL = 3 * 60 # seconds\n\n# Whether to allow users to delete their account, change login password or update basic user\n# info on the profile page.\n# Since PRO 6.3.10\nENABLE_DELETE_ACCOUNT = False\nENABLE_UPDATE_USER_INFO = False\nENABLE_CHANGE_PASSWORD = False\n\n# Get web api auth token on profile page.\nENABLE_GET_AUTH_TOKEN_BY_SESSION = True\n\n# Since 8.0.6 CE/PRO version.\n# URL redirected to after a user logs out of Seafile.\n# Usually configured as the Single Logout url.\nLOGOUT_REDIRECT_URL = 'http{s}://www.example-url.com'\n\n\n# Enable system admin to add T&C; all users need to accept the terms before using. Defaults to `False`.\n# Since version 6.0\nENABLE_TERMS_AND_CONDITIONS = True\n\n# Enable two factor authentication for accounts. Defaults to `False`.\n# Since version 6.0\nENABLE_TWO_FACTOR_AUTH = True\n\n# Enable users to select a template when creating a library.\n# When a user selects a template, Seafile will create folders related to the pattern automatically.\n# Since version 6.0\nLIBRARY_TEMPLATES = {\n 'Technology': ['/Develop/Python', '/Test'],\n 'Finance': ['/Current assets', '/Fixed assets/Computer']\n}\n\n# Enable a user to change password in 'settings' page. Default to `True`\n# Since version 6.2.11\nENABLE_CHANGE_PASSWORD = True\n\n# Whether to show contact email when searching users.\nENABLE_SHOW_CONTACT_EMAIL_WHEN_SEARCH_USER = True\n
"},{"location":"config/seahub_settings_py/#pro-edition-only-options","title":"Pro edition only options","text":"
# Whether to show the used traffic in user's profile popup dialog. Default is True\nSHOW_TRAFFIC = True\n\n# Allow administrator to view user's files in UNENCRYPTED libraries\n# through the Libraries page in System Admin. Default is False.\nENABLE_SYS_ADMIN_VIEW_REPO = True\n\n# For non-logged-in users, require an email before downloading or uploading on a shared link page.\n# Since version 5.1.4\nENABLE_SHARE_LINK_AUDIT = True\n\n# Check for viruses after files are uploaded via shared upload links. Defaults to `False`.\n# Since version 6.0\nENABLE_UPLOAD_LINK_VIRUS_CHECK = True\n\n# Send email to these email addresses when a virus is detected.\n# This list can contain any valid email address, not necessarily the emails of Seafile users.\n# Since version 6.0.8\nVIRUS_SCAN_NOTIFY_LIST = ['user_a@seafile.com', 'user_b@seafile.com']\n
# API throttling related settings. Enlarge the rates if you get a 429 response code during API calls.\nREST_FRAMEWORK = {\n 'DEFAULT_THROTTLE_RATES': {\n 'ping': '600/minute',\n 'anon': '5/minute',\n 'user': '300/minute',\n },\n 'UNICODE_JSON': False,\n}\n\n# Throttling whitelist used to disable throttling for certain IPs.\n# e.g. REST_FRAMEWORK_THROTTING_WHITELIST = ['127.0.0.1', '192.168.1.1']\n# Please make sure the `REMOTE_ADDR` header is configured in the Nginx conf according to https://manual.seafile.com/13.0/setup_binary/ce/deploy_with_nginx.html.\nREST_FRAMEWORK_THROTTING_WHITELIST = []\n
Since version 6.2, you can define a custom function to modify the result of user search function.
For example, if you want to restrict users to searching only for users in the same institution, you can define a custom_search_user function in {seafile install path}/conf/seahub_custom_functions/__init__.py
Code example:
import os\nimport sys\n\ncurrent_path = os.path.dirname(os.path.abspath(__file__))\nseahub_dir = os.path.join(current_path, \\\n '../../seafile-server-latest/seahub/seahub')\nsys.path.append(seahub_dir)\n\nfrom seahub.profile.models import Profile\ndef custom_search_user(request, emails):\n\n institution_name = ''\n\n username = request.user.username\n profile = Profile.objects.get_profile_by_user(username)\n if profile:\n institution_name = profile.institution\n\n inst_users = [p.user for p in\n Profile.objects.filter(institution=institution_name)]\n\n filtered_emails = []\n for email in emails:\n if email in inst_users:\n filtered_emails.append(email)\n\n return filtered_emails\n
You should NOT change the name of custom_search_user and seahub_custom_functions/__init__.py
Since version 6.2.5 pro, if you enable the ENABLE_SHARE_TO_ALL_GROUPS feature on the sysadmin settings page, you can also define a custom function to return the groups a user can share a library to.
For example, if you want to let a user share a library to both their own groups and the groups of the user test@test.com, you can define a custom_get_groups function in {seafile install path}/conf/seahub_custom_functions/__init__.py
Code example:
import os\nimport sys\n\ncurrent_path = os.path.dirname(os.path.abspath(__file__))\nseaserv_dir = os.path.join(current_path, \\\n '../../seafile-server-latest/seafile/lib64/python2.7/site-packages')\nsys.path.append(seaserv_dir)\n\ndef custom_get_groups(request):\n\n from seaserv import ccnet_api\n\n groups = []\n username = request.user.username\n\n # for current user\n groups += ccnet_api.get_groups(username)\n\n # for 'test@test.com' user\n groups += ccnet_api.get_groups('test@test.com')\n\n return groups\n
You should NOT change the name of custom_get_groups and seahub_custom_functions/__init__.py
Tip
You need to restart seahub so that your changes take effect.
Deploy in Docker / Deploy from binary packages
docker compose restart\n
cd /opt/seafile/seafile-server-latest\n./seahub.sh restart\n
If your changes don't take effect, you may need to delete seahub_settings.pyc (a cached bytecode file).
"},{"location":"config/sending_email/","title":"Sending Email Notifications on Seahub","text":""},{"location":"config/sending_email/#types-of-email-sending-in-seafile","title":"Types of Email Sending in Seafile","text":"
There are currently five types of emails sent in Seafile:
User resets his/her password
System admin adds new member
System admin resets user password
User sends file/folder share and upload link
Reminder of unread notifications
The first four types of email are sent immediately. The last type is sent by a background task running periodically.
"},{"location":"config/sending_email/#options-of-email-sending","title":"Options of Email Sending","text":"
Please add the following lines to seahub_settings.py to enable email sending.
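The settings block referenced above is missing from this extract; below is a sketch using Django's standard SMTP settings, as Seahub consumes them. The host, account, and password are placeholders:

```python
# SMTP settings consumed by Seahub (standard Django email settings).
EMAIL_USE_TLS = True                       # STARTTLS, typically on port 587
EMAIL_HOST = 'smtp.example.com'            # your SMTP server (placeholder)
EMAIL_HOST_USER = 'username@example.com'   # SMTP login (placeholder)
EMAIL_HOST_PASSWORD = 'password'           # SMTP password (placeholder)
EMAIL_PORT = 587
DEFAULT_FROM_EMAIL = EMAIL_HOST_USER       # From: address for outgoing mail
SERVER_EMAIL = EMAIL_HOST_USER             # sender address for error reports
```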
If your email service still does not work, you can check the log file logs/seahub.log to see what may cause the problem. For a complete email notification list, please refer to the email notification list.
If you want to use an email service without authentication, leave EMAIL_HOST_USER and EMAIL_HOST_PASSWORD blank (''). (But notice that the emails will then be sent without a From: address.)
About using SSL connection (using port 465)
Port 587 is used to establish a connection with STARTTLS, while port 465 is used to establish an implicit SSL connection. Starting from Django 1.8, both are supported.
If you want to use SSL on port 465, set EMAIL_USE_SSL = True instead of EMAIL_USE_TLS.
"},{"location":"config/sending_email/#change-reply-to-of-email","title":"Change reply to of email","text":"
You can change the Reply-To field of the email by adding the following settings to seahub_settings.py. This only affects email sending for file share links.
# Set reply-to header to user's email or not, defaults to ``False``. For details,\n# please refer to http://www.w3.org/Protocols/rfc822/\nADD_REPLY_TO_HEADER = True\n
A background task runs periodically to check whether a user has new unread notifications. If there are any, it sends a reminder email to that user. The background email sending task is controlled by seafevents.conf.
[SEAHUB EMAIL]\n\n## must be \"true\" to enable user email notifications when there are new unread notifications\nenabled = true\n\n## interval of sending seahub email. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval = 30m\n
"},{"location":"config/sending_email/#add-smime-signature-to-email","title":"Add S/MIME signature to email","text":"
If you want the email signed by S/MIME, please add the config in seahub_settings.py
ENABLE_SMIME = True\nSMIME_CERTS_DIR = '/opt/seafile/seahub-data/smime-certs' # including cert.pem and private_key.pem\n
The certificate can be generated with the openssl command, or you can obtain one from a certificate authority; it is up to you. For example, generate the certs using the following command:
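The command itself is missing from this extract; below is a sketch that produces a self-signed key/certificate pair under the file names expected in SMIME_CERTS_DIR. The directory, key size, validity period, and subject fields are placeholders:

```shell
# Create the certs directory and generate a self-signed S/MIME key/cert pair.
mkdir -p /tmp/smime-certs
openssl req -x509 -newkey rsa:2048 -days 365 -nodes \
    -keyout /tmp/smime-certs/private_key.pem \
    -out /tmp/smime-certs/cert.pem \
    -subj "/C=DE/O=Example/CN=admin@example.com"
```

For production use, a certificate issued by a real CA is preferable, since self-signed signatures will be flagged by most mail clients.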
The simplest way to customize the email messages is setting the SITE_NAME variable in seahub_settings.py. If it is not enough for your case, you can customize the email templates.
Tip
The subject line may vary between different releases; this is based on Release 5.0.0. Restart Seahub so that your changes take effect.
"},{"location":"config/sending_email/#the-email-base-template","title":"The email base template","text":"
seahub/seahub/templates/email_base.html
Tip
You can copy email_base.html to seahub-data/custom/templates/email_base.html and modify the new one. In this way, the customization will be maintained after upgrade.
You can copy password_reset_email.html to seahub-data/custom/templates/registration/password_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
"},{"location":"config/sending_email/#system-admin-adds-new-member","title":"System admin adds new member","text":"
Subject
seahub/seahub/views/sysadmin.py line:424
send_html_email(_(u'Password has been reset on %s') % SITE_NAME,\n 'sysadmin/user_reset_email.html', c, None, [email])\n
You can copy user_add_email.html to seahub-data/custom/templates/sysadmin/user_add_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
"},{"location":"config/sending_email/#system-admin-resets-user-password","title":"System admin resets user password","text":"
Subject
seahub/seahub/views/sysadmin.py line:1224
send_html_email(_(u'Password has been reset on %s') % SITE_NAME,\n 'sysadmin/user_reset_email.html', c, None, [email])\n
You can copy user_reset_email.html to seahub-data/custom/templates/sysadmin/user_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
try:\n if file_shared_type == 'f':\n c['file_shared_type'] = _(u\"file\")\n send_html_email(_(u'A file is shared to you on %s') % SITE_NAME,\n 'shared_link_email.html',\n c, from_email, [to_email],\n reply_to=reply_to\n )\n else:\n c['file_shared_type'] = _(u\"directory\")\n send_html_email(_(u'A directory is shared to you on %s') % SITE_NAME,\n 'shared_link_email.html',\n c, from_email, [to_email],\n reply_to=reply_to)\n
You can copy shared_link_email.html to seahub-data/custom/templates/shared_link_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
"},{"location":"config/sending_email/#reminder-of-unread-notifications","title":"Reminder of unread notifications","text":"
Subject
send_html_email(_('New notice on %s') % settings.SITE_NAME,\n 'notifications/notice_email.html', c,\n None, [to_user])\n
Shibboleth is a widely used single sign on (SSO) protocol. Seafile supports authentication via Shibboleth. It allows users from another organization to log in to Seafile without registering an account on the service provider.
In this documentation, we assume the reader is familiar with Shibboleth installation and configuration. For introduction to Shibboleth concepts, please refer to https://shibboleth.atlassian.net/wiki/spaces/CONCEPT/overview .
Shibboleth Service Provider (SP) should be installed on the same server as the Seafile server. The official SP from https://shibboleth.net/ is implemented as an Apache module. The module handles all Shibboleth authentication details. Seafile server receives authentication information (username) from HTTP request. The username then can be used as login name for the user.
Seahub provides a special URL to handle Shibboleth login. The URL is https://your-seafile-domain/sso. Only this URL needs to be configured under Shibboleth protection. All other URLs don't go through the Shibboleth module. The overall workflow for a user to login with Shibboleth is as follows:
In the Seafile login page, there is a separate \"Single Sign-On\" login button. When the user clicks the button, she/he will be redirected to https://your-seafile-domain/sso.
Since that URL is controlled by Shibboleth, the user will be redirected to IdP for login. After the user logs in, she/he will be redirected back to https://your-seafile-domain/sso.
This time the Shibboleth module passes the request to Seahub. Seahub reads the user information from the request (the HTTP_REMOTE_USER header) and brings the user to her/his home page.
All later access to Seahub will not pass through the Shibboleth module. Since Seahub keeps session information internally, the user doesn't need to login again until the session expires.
Since Shibboleth support requires Apache, if you want to use Nginx, you need two servers, one for non-Shibboleth access, another configured with Apache to allow Shibboleth login. In a cluster environment, you can configure your load balancer to direct traffic to different server according to URL. Only the URL https://your-seafile-domain/sso needs to be directed to Apache.
The configuration includes 3 steps:
Install and configure Shibboleth Service Provider;
Configure Apache;
Configure Seahub.
"},{"location":"config/shibboleth_authentication/#install-and-configure-shibboleth-service-provider","title":"Install and Configure Shibboleth Service Provider","text":"
<!-- The ApplicationDefaults element is where most of Shibboleth's SAML bits are defined. -->\n<ApplicationDefaults entityID=\"https://your-seafile-domain/sso\"\n REMOTE_USER=\"mail\"\n cipherSuites=\"DEFAULT:!EXP:!LOW:!aNULL:!eNULL:!DES:!IDEA:!SEED:!RC4:!3DES:!kRSA:!SSLv2:!SSLv3:!TLSv1:!TLSv1.1\">\n
Seahub extracts the username from the REMOTE_USER environment variable. So you should modify your SP's shibboleth2.xml config file so that Shibboleth translates your desired attribute into the REMOTE_USER environment variable.
In Seafile, only one of the following two attributes can be used for username: eppn, and mail. eppn stands for \"Edu Person Principal Name\". It is usually the UserPrincipalName attribute in Active Directory. It's not necessarily a valid email address. mail is the user's email address. You should set REMOTE_USER to either one of these attributes.
<!--\nConfigures SSO for a default IdP. To properly allow for >1 IdP, remove\nentityID property and adjust discoveryURL to point to discovery service.\nYou can also override entityID on /Login query string, or in RequestMap/htaccess.\n-->\n<SSO entityID=\"https://your-IdP-domain\">\n <!--discoveryProtocol=\"SAMLDS\" discoveryURL=\"https://wayf.ukfederation.org.uk/DS\"-->\n SAML2\n</SSO>\n
After restarting Apache, you should be able to get the Service Provider metadata by accessing https://your-seafile-domain/Shibboleth.sso/Metadata. This metadata should be uploaded to the Identity Provider (IdP) server.
Seahub can process additional user attributes from Shibboleth. These attributes are saved into Seahub's database as user properties. None of them is mandatory. The user properties Seahub currently supports are:
givenname
surname
contact_email: used for sending notification email to user if username is not a valid email address (like eppn).
institution: used to identify user's institution
You can specify the mapping between Shibboleth attributes and Seahub's user properties in seahub_settings.py:
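A sketch of such a mapping in seahub_settings.py might look like the following (the Shibboleth attribute names on the left are examples; they must match the attributes your SP actually exposes in attribute-map.xml):

```python
# seahub_settings.py -- example mapping; the attribute names on the left
# (givenname, sn, mail, organization) are illustrative and must match
# the attributes released by your Shibboleth SP.
SHIBBOLETH_ATTRIBUTE_MAP = {
    "givenname": (False, "givenname"),
    "sn": (False, "surname"),
    "mail": (False, "contact_email"),
    "organization": (False, "institution"),
}
```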
In the above config, the hash key is Shibboleth attribute name, the second element in the hash value is Seahub's property name. You can adjust the Shibboleth attribute name for your own needs.
You may have to change attribute-map.xml in your Shibboleth SP so that the desired attributes are passed to Seahub. You also have to make sure the IdP sends these attributes to the SP.
We also added an option SHIB_ACTIVATE_AFTER_CREATION (defaults to True) which controls the user's status after the first Shibboleth login. If this option is set to False, the user will be inactive after the account is created, and system admins will be notified by email to activate that account.
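For example, the relevant seahub_settings.py line could read:

```python
# seahub_settings.py -- require admin activation for accounts created
# via Shibboleth login (the option defaults to True, i.e. accounts are
# active immediately).
SHIB_ACTIVATE_AFTER_CREATION = False
```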
"},{"location":"config/shibboleth_authentication/#affiliation-and-user-role","title":"Affiliation and user role","text":"
Shibboleth has a field called affiliation. It is a list like: employee@uni-mainz.de;member@uni-mainz.de;faculty@uni-mainz.de;staff@uni-mainz.de.
User roles can be set from Shibboleth. For details about user roles, please refer to Roles and Permissions.
To enable this, modify SHIBBOLETH_ATTRIBUTE_MAP above and add a Shibboleth-affiliation field. You may need to change Shibboleth-affiliation according to your Shibboleth SP attributes.
After Shibboleth login, Seafile calculates the user's role from the affiliation and SHIBBOLETH_AFFILIATION_ROLE_MAP.
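Put together, the two settings might look like this sketch in seahub_settings.py (the domains and role names are placeholders; entries under 'patterns' are tried in order with shell-style wildcards when no exact match is found):

```python
# seahub_settings.py -- example only; replace domains and role names
# with values matching your federation and your role configuration.
SHIBBOLETH_ATTRIBUTE_MAP = {
    "givenname": (False, "givenname"),
    "sn": (False, "surname"),
    "mail": (False, "contact_email"),
    # pass the affiliation field through to Seahub
    "Shibboleth-affiliation": (False, "affiliation"),
}

SHIBBOLETH_AFFILIATION_ROLE_MAP = {
    "employee@uni-mainz.de": "staff",
    "member@uni-mainz.de": "staff",
    "student@uni-mainz.de": "student",
    # wildcard patterns, matched in order when there is no exact match
    "patterns": (
        ("*@uni-mainz.de", "guest"),
        ("*", "guest"),
    ),
}
```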
"},{"location":"config/shibboleth_authentication/#custom-set-user-role","title":"Custom set user role","text":"
If you are unable to set user roles by obtaining affiliation information, or if you wish to have a more customized way of setting user roles, you can add the following configuration to achieve this.
For example, set all users whose email addresses end with @seafile.com as default, and set other users as guest.
First, update the SHIBBOLETH_ATTRIBUTE_MAP configuration in seahub_settings.py, and add HTTP_REMOTE_USER.
Then, create /opt/seafile/conf/seahub_custom_functions/__init__.py file and add the following code.
# function name `custom_shibboleth_get_user_role` should NOT be changed\ndef custom_shibboleth_get_user_role(shib_meta):\n\n remote_user = shib_meta.get('remote_user', '')\n if not remote_user:\n return ''\n\n remote_user = remote_user.lower()\n if remote_user.endswith('@seafile.com'):\n return 'default'\n else:\n return 'guest'\n
Open seafile-server-latest/seahub/thirdpart/shibboleth/middleware.py
Insert the following code at line 59:
assert False\n
Insert the following code at line 65:
if not username:\n assert False\n
The complete code after these changes is as follows:
#Locate the remote user header.\n# import pprint; pprint.pprint(request.META)\ntry:\n username = request.META[SHIB_USER_HEADER]\nexcept KeyError:\n assert False\n # If specified header doesn't exist then return (leaving\n # request.user set to AnonymousUser by the\n # AuthenticationMiddleware).\n return\n\nif not username:\n assert False\n\np_id = ccnet_api.get_primary_id(username)\nif p_id is not None:\n username = p_id\n
Then restart Seafile and log in again; you will see the debug info on the web page.
"},{"location":"config/single_sign_on/","title":"Single Sign On support in Seafile","text":"
Seafile supports most of the popular single-sign-on authentication protocols. Some are included in Community Edition, some are only in Pro Edition.
In the Community Edition:
Shibboleth
OAuth
Remote User (Proxy Server)
Auto Login to SeaDrive on Windows
Kerberos authentication can be integrated by using Apache as a proxy server and follow the instructions in Remote User Authentication and Auto Login SeaDrive on Windows.
Seafile internally uses a data model similar to Git's. It consists of Repo, Commit, FS, and Block.
Seafile's high performance comes from its architectural design: file data and metadata are stored in object storage (or on the file system), while only a small amount of metadata about the libraries is stored in a relational database. An overview of the architecture is depicted below. We'll describe the data model in more detail.
Commit objects save the change history of a repo. Each update from the web interface, or sync upload operation will create a new commit object. A commit object contains the following information: commit ID, library name, creator of this commit (a.k.a. the modifier), creation time of this commit (a.k.a. modification time), root fs object ID, parent commit ID.
The root fs object ID points to the root FS object, from which we can traverse a file system snapshot for the repo.
The parent commit ID points to the last commit previous to the current commit. The RepoHead table contains the latest head commit ID for each repo. From this head commit, we can traverse the repo history.
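The traversal described above can be sketched as follows (a toy in-memory store stands in for the real commit loader, which reads objects from seafile-data/storage/commits or the commits bucket):

```python
def commit_history(load_commit, head_commit_id):
    """Walk a repo's history from the head commit back to the first commit
    by following parent pointers."""
    commit_id = head_commit_id
    while commit_id is not None:
        commit = load_commit(commit_id)
        yield commit
        commit_id = commit["parent_id"]

# Toy store: three commits, newest first; RepoHead would point at "c3".
commits = {
    "c3": {"id": "c3", "parent_id": "c2", "root_fs_id": "f3"},
    "c2": {"id": "c2", "parent_id": "c1", "root_fs_id": "f2"},
    "c1": {"id": "c1", "parent_id": None, "root_fs_id": "f1"},
}
history = [c["id"] for c in commit_history(commits.__getitem__, "c3")]
# history walks from the head commit back to the root: c3, c2, c1
```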
If you use file system as storage backend, commit objects are stored in the path seafile-data/storage/commits/<repo_id>. If you use object storage, commit objects are stored in the commits bucket.
There are two types of FS objects, SeafDir Object and Seafile Object. SeafDir Object represents a directory, and Seafile Object represents a file.
The SeafDir object contains metadata for each file/sub-folder, which includes name, last modification time, last modifier, size, and object ID. The object ID points to another SeafDir or Seafile object. The Seafile object contains a block list, which is a list of block IDs for the file.
The FS object IDs are calculated from the contents of the objects. That means if a folder or a file is not changed, the same objects will be reused across multiple commits. This allows us to create snapshots very efficiently.
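The idea can be illustrated with a small sketch (a SHA-1 hash over a canonical serialization; the real on-disk object format differs):

```python
import hashlib
import json

def fs_object_id(obj):
    """Content-addressed ID: the hash of the object's canonical form."""
    payload = json.dumps(obj, sort_keys=True).encode()
    return hashlib.sha1(payload).hexdigest()

file_v1 = {"type": "file", "block_ids": ["b1", "b2"]}
file_v2 = {"type": "file", "block_ids": ["b1", "b2"]}   # unchanged file
file_v3 = {"type": "file", "block_ids": ["b1", "b9"]}   # one block changed

# Unchanged content always maps to the same ID, so the object is reused
# across commits; any change produces a new ID.
same_id = fs_object_id(file_v1) == fs_object_id(file_v2)
new_id = fs_object_id(file_v1) != fs_object_id(file_v3)
```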
If you use the file system as storage backend, fs objects are stored in the path seafile-data/storage/fs/<repo_id>. If you use object storage, fs objects are stored in the fs bucket.
A file is further divided into blocks of variable length. We use a Content Defined Chunking algorithm to divide a file into blocks. A clear overview of this algorithm can be found at http://pdos.csail.mit.edu/papers/lbfs:sosp01/lbfs.pdf. On average, a block's size is around 8MB.
This mechanism makes it possible to deduplicate data between different versions of frequently updated files, improving storage efficiency. It also enables transferring data to/from multiple servers in parallel.
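A toy version of content-defined chunking looks like this (a naive rolling hash, not Seafile's actual implementation; block sizes are scaled down for illustration). Because boundaries depend on content rather than fixed offsets, an insertion early in a file only changes the blocks around it, leaving later block IDs intact for deduplication:

```python
import hashlib

def cdc_split(data, mask_bits=12, min_size=256, max_size=4096):
    """Cut `data` where a toy rolling hash matches a boundary pattern."""
    blocks, start, h = [], 0, 0
    mask = (1 << mask_bits) - 1
    for i, byte in enumerate(data):
        h = ((h << 1) ^ byte) & 0xFFFFFFFF  # toy rolling hash
        size = i - start + 1
        at_boundary = (h & mask) == 0
        if (size >= min_size and at_boundary) or size >= max_size or i == len(data) - 1:
            block = data[start:i + 1]
            blocks.append((hashlib.sha1(block).hexdigest(), block))
            start, h = i + 1, 0
    return blocks

data = bytes(range(256)) * 64          # 16 KiB of deterministic sample data
blocks = cdc_split(data)
# Concatenating the blocks always reproduces the original data, and
# identical regions of two files yield identical block IDs.
```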
If you use the file system as storage backend, block objects are stored in the path seafile-data/storage/blocks/<repo_id>. If you use object storage, block objects are stored in the blocks bucket.
A \"virtual repo\" is a special repo that will be created in the cases below:
A folder in a library is shared.
A folder in a library is synced selectively from the sync client.
A virtual repo can be understood as a view of part of the data in its parent library. For example, when a folder is shared, the virtual repo only provides access to that folder of the library. Virtual repos use the same underlying data as the parent library, so they share the same fs and blocks storage locations with their parent.
A virtual repo has its own change history, so it has a commits storage location separate from its parent's. Changes in a virtual repo and its parent repo are merged bidirectionally, so that changes from each side can be seen from the other.
There is a VirtualRepo table in seafile_db database. It contains the folder path in the parent repo for each virtual repo.
The following setups are required for building and packaging Sync Client on macOS:
XCode 13.2 (or later)
After installing XCode, you can start XCode once so that it automatically installs the rest of the components.
Qt 6.2
MacPorts
Modify /opt/local/etc/macports/macports.conf to add the configuration universal_archs arm64 x86_64. This specifies the architectures for which MacPorts compiles.
Modify /opt/local/etc/macports/variants.conf to add the configuration +universal, so MacPorts installs universal versions of all ports.
Install other dependencies: sudo port install autoconf automake pkgconfig libtool glib2 libevent vala openssl git jansson cmake libwebsockets argon2.
Certificates
Create certificate signing requests for certification, see https://developer.apple.com/help/account/create-certificates/create-a-certificate-signing-request.
Create a Developer ID Application certificate and a Developer ID Installer certificate, see https://developer.apple.com/help/account/create-certificates/create-developer-id-certificates. Install them to the login keychain.
Install the Developer ID Intermediate Certificate (Developer ID - G2), from https://www.apple.com/certificateauthority/
Update the CERT_ID in seafile-workspace/seafile/scripts/build/build-mac-local-py3.py to the ID of Developer ID Application.
Run the packaging script: python3 build-mac-local-py3.py --brand=\"\" --version=1.0.0 --nostrip --universal
"},{"location":"develop/rpi/","title":"How to Build Seafile Server Release Package","text":"
From Seafile 11.0, you can build Seafile release package with seafile-build script. You can check the README.md file in the same folder for detailed instructions.
The seafile-build.sh script is compatible with more platforms, including Raspberry Pi, arm-64 and x86-64.
Old version is below:
Table of contents:
Setup the build environment
Install packages
Compile development libraries
Install Python libraries
Prepare source code
Fetch git tags and prepare source tarballs
Run the packaging script
Test the built package
Test a fresh install
Test upgrading
"},{"location":"develop/rpi/#setup-the-build-environment","title":"Setup the build environment","text":"
Requirements:
A Raspberry Pi with the Raspbian distribution installed.
"},{"location":"develop/rpi/#compile-development-libraries","title":"Compile development libraries","text":""},{"location":"develop/rpi/#libevhtp","title":"libevhtp","text":"
libevhtp is an HTTP server library on top of libevent. It's used in the Seafile file server.
git clone https://www.github.com/haiwen/libevhtp.git\ncd libevhtp\ncmake -DEVHTP_DISABLE_SSL=ON -DEVHTP_BUILD_SHARED=OFF .\nmake\nsudo make install\n
After compiling all the libraries, run ldconfig to update the system libraries cache:
After the script finishes, you will get a seafile-server_6.0.1_pi.tar.gz in the ~/seafile-server-pkgs folder.
"},{"location":"develop/rpi/#test-the-built-package","title":"Test the built package","text":""},{"location":"develop/rpi/#test-a-fresh-install","title":"Test a fresh install","text":"
The test should cover these steps at least:
The setup process is ok
After seafile.sh start and seahub.sh start, you can login from a browser.
Uploading/Downloading files through a web browser works correctly.
Seafile WebDAV server works correctly
"},{"location":"develop/rpi/#test-upgrading-from-a-previous-version","title":"Test upgrading from a previous version","text":"
Download the package of the previous version of Seafile server, and set it up.
Upgrade according to the manual.
After the upgrade, check the functionality is ok:
Uploading/Downloading files through a web browser works correctly.
mysql -uroot -pyour_password -e \"CREATE DATABASE ccnet CHARACTER SET utf8;\"\nmysql -uroot -pyour_password -e \"CREATE DATABASE seafile CHARACTER SET utf8;\"\nmysql -uroot -pyour_password -e \"CREATE DATABASE seahub CHARACTER SET utf8;\"\n
Then, you can visit http://127.0.0.1:8000/ to use Seafile.
"},{"location":"develop/server/#the-final-directory-structure","title":"The Final Directory Structure","text":""},{"location":"develop/server/#more","title":"More","text":""},{"location":"develop/server/#deploy-frontend-development-environment","title":"Deploy Frontend Development Environment","text":"
For deploying the frontend development environment, you need to:
1, checkout seahub to master branch
cd /root/dev/source-code/seahub\n\ngit fetch origin master:master\ngit checkout master\n
2, add the following configuration to /root/dev/conf/seahub_settings.py
cd /root/dev/source-code/seahub/frontend\n\nnpm install\n
4, npm run dev
cd /root/dev/source-code/seahub/frontend\n\nnpm run dev\n
5, start seaf-server and seahub
"},{"location":"develop/translation/","title":"Translation","text":""},{"location":"develop/translation/#seahub-seafile-server-71-and-above","title":"Seahub (Seafile Server 7.1 and above)","text":""},{"location":"develop/translation/#translate-and-try-locally","title":"Translate and try locally","text":"
1. Locate the translation files in the seafile-server-latest/seahub directory:
For Seahub (except Markdown editor): /locale/<lang-code>/LC_MESSAGES/django.po\u00a0 and \u00a0/locale/<lang-code>/LC_MESSAGES/djangojs.po
For Markdown editor: /media/locales/<lang-code>/seafile-editor.json
For example, if you want to improve the Russian translation, find the corresponding strings to be edited in either of the following three files:
If there is no translation for your language, create a new folder matching your language code and copy-paste the contents of another language folder into your newly created one. (Don't copy from the 'en' folder because the files therein do not contain the strings to be translated.)
2. Edit the files using a UTF-8 editor.
3. Save your changes.
4. (Only necessary when you created a new language code folder) Add a new entry for your language to the language block in the /seafile-server-latest/seahub/seahub/settings.py file and save it.
LANGUAGES = (\n ...\n ('ru', '\u0420\u0443\u0441\u0441\u043a\u0438\u0439'),\n ...\n)\n
5. (Only necessary when you edited either django.po or djangojs.po) Apply the changes made in django.po and djangojs.po by running the following two commands in /seafile-server-latest/seahub/locale/<lang-code>/LC_MESSAGES:
msgfmt -o django.mo django.po
msgfmt -o djangojs.mo djangojs.po
Note: msgfmt is included in the gettext package.
Additionally, run the following two commands in the seafile-server-latest directory:
6. Restart Seahub to load changes made in django.po and djangojs.po; reload the Markdown editor to check your modifications in the seafile-editor.json file.
"},{"location":"develop/translation/#submit-your-translation","title":"Submit your translation","text":"
Please submit translations via Transifex: https://www.transifex.com/projects/p/seahub/
Steps:
Create a free account on Transifex (https://www.transifex.com/).
Send a request to join the language translation.
After being accepted by the project maintainer, you can upload your file or translate online.
FileNotFoundError occurred when executing the command manage.py collectstatic.
FileNotFoundError: [Errno 2] No such file or directory: '/opt/seafile/seafile-server-latest/seahub/frontend/build'\n
Steps:
Modify STATICFILES_DIRS in /opt/seafile/seafile-server-latest/seahub/seahub/settings.py manually
STATICFILES_DIRS = (\n # Put strings here, like \"/home/html/static\" or \"C:/www/django/static\".\n # Always use forward slashes, even on Windows.\n # Don't forget to use absolute paths, not relative paths.\n '%s/static' % PROJECT_ROOT,\n# '%s/frontend/build' % PROJECT_ROOT,\n)\n
Restart Seahub
./seahub.sh restart\n
This issue has been fixed since version 11.0
"},{"location":"develop/web_api_v2.1/","title":"Web API","text":""},{"location":"develop/web_api_v2.1/#seafile-web-api","title":"Seafile Web API","text":"
The API document can be accessed in the following location:
$ git clone --depth=1 git@github.com:google/breakpad.git\n$ cd breakpad\n$ git clone https://github.com/google/googletest.git testing\n$ cd ..\n# create vs solution, this may throw an error \"module collections.abc has no attribute OrderedDict\", you should open the msvs.py and replace 'collections.abc' with 'collections'.\n$ gyp --no-circular-check breakpad\\src\\client\\windows\\breakpad_client.gyp\n
Open breakpad_client.sln, set C++ Language Standard to C++17, and set C/C++ ---> Code Generation ---> Runtime Library to Multi-threaded DLL (/MD).
The source code of these projects can be downloaded at github.com/haiwen/libsearpc, github.com/haiwen/seafile, github.com/haiwen/seafile-client, and github.com/haiwen/seafile-shell-ext.
If you use a cluster to deploy Seafile, you can use distributed indexing to achieve real-time indexing and improve indexing efficiency. The indexing process is as follows:
"},{"location":"extension/distributed_indexing/#install-redis-and-modify-configuration-files","title":"Install redis and modify configuration files","text":""},{"location":"extension/distributed_indexing/#1-install-redis-on-all-frontend-nodes","title":"1. Install redis on all frontend nodes","text":"
Tip
If you use a Redis cloud service, skip this step and modify the configuration files directly.
Ubuntu / CentOS
$ apt install redis-server\n
$ yum install redis\n
"},{"location":"extension/distributed_indexing/#2-install-python-redis-third-party-package-on-all-frontend-nodes","title":"2. Install python redis third-party package on all frontend nodes","text":"
$ pip install redis\n
"},{"location":"extension/distributed_indexing/#3-modify-the-seafeventsconf-on-all-frontend-nodes","title":"3. Modify the seafevents.conf on all frontend nodes","text":"
Add the following config items
[EVENTS PUBLISH]\nmq_type=redis # must be redis\nenabled=true\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password, if not password, do not set this item\n
"},{"location":"extension/distributed_indexing/#4-modify-the-seafeventsconf-on-the-backend-node","title":"4. Modify the seafevents.conf on the backend node","text":"
Disable the scheduled indexing task, because the scheduled indexing task and the distributed indexing task conflict.
First, prepare an index-server master node and several index-server slave nodes; the number of slave nodes depends on your needs. Copy seafile.conf and seafevents.conf in the conf directory from the Seafile frontend nodes to /opt/seafile-data/seafile/conf on the index-server nodes. The master node and slave nodes need to read the configuration files to obtain the necessary information.
CLUSTER_MODE needs to be configured as master on the master node, and needs to be configured as worker on the slave nodes.
Next, create a configuration file index-master.conf in the conf directory of the master node, e.g.
[DEFAULT]\nmq_type=redis # must be redis\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password, if not password, do not set this item\n
Start master node.
docker compose up -d\n
Next, create a configuration file index-worker.conf in the conf directory of all slave nodes, e.g.
[DEFAULT]\nmq_type=redis # must be redis\nindex_workers=2 # number of threads to create/update indexes, you can increase this value according to your needs\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password, if not password, do not set this item\n
Start all slave nodes.
docker compose up -d\n
"},{"location":"extension/distributed_indexing/#some-commands-in-distributed-indexing","title":"Some commands in distributed indexing","text":"
Rebuild search index, first execute the command in the Seafile node:
cd /opt/seafile/seafile-server-latest/\n./pro/pro.py search --clear\n
Then execute the command in the index-server master node:
Files in the Seafile system are split into blocks, which means that what is stored on your Seafile server are not complete files, but blocks. This design facilitates effective data deduplication.
However, administrators sometimes want to access the files directly on the server. You can use seaf-fuse to do this.
Seaf-fuse is an implementation of the FUSE virtual filesystem. In short, it mounts all the Seafile files to a folder (called the \"mount point\"), so that you can access all the files managed by the Seafile server just as you would access a normal folder on your server.
Note
Encrypted folders can't be accessed by seaf-fuse.
Currently the implementation is read-only, which means you can't modify the files through the mounted folder.
On Debian/CentOS systems, you need to be in the \"fuse\" group to have the permission to mount a FUSE folder.
"},{"location":"extension/fuse/#use-seaf-fuse-in-docker-based-deployment","title":"Use seaf-fuse in Docker based deployment","text":"
Assume we want to mount to /opt/seafile-fuse in host.
Seaf-fuse enables the block cache function by default to cache block objects, thereby reducing access to the backend storage, but this function occupies local disk space. Since Seafile Pro 10.0.0, you can disable the block cache by adding the following options:
"},{"location":"extension/fuse/#contents-of-the-mounted-folder","title":"Contents of the mounted folder","text":""},{"location":"extension/fuse/#the-top-level-folder","title":"The top level folder","text":"
Now you can list the content of /data/seafile-fuse.
$ ls -lhp /data/seafile-fuse\n\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 abc@abc.com/\ndrwxr-xr-x 2 root root 4.0K Jan 4 2015 foo@foo.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 plus@plus.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 sharp@sharp.com/\ndrwxr-xr-x 2 root root 4.0K Jan 3 2015 test@test.com/\n
The top level folder contains many subfolders, each of which corresponds to a user.
"},{"location":"extension/fuse/#the-folder-for-each-user","title":"The folder for each user","text":"
$ ls -lhp /data/seafile-fuse/abc@abc.com\n\ndrwxr-xr-x 2 root root 924 Jan 1 1970 5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\ndrwxr-xr-x 2 root root 1.6K Jan 1 1970 a09ab9fc-7bd0-49f1-929d-6abeb8491397_My Notes/\n
As the above list shows, under the folder of a user there are subfolders, each of which represents a library of that user and has a name of the format {library_id}_{library_name}.
"},{"location":"extension/fuse/#the-folder-for-a-library","title":"The folder for a library","text":"
$ ls -lhp /data/seafile-fuse/abc@abc.com/5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\n\n-rw-r--r-- 1 root root 501K Jan 1 2015 image.png\n-rw-r--r-- 1 root root 501K Jan 1 2015 sample.jpg\n
"},{"location":"extension/fuse/#if-you-get-a-permission-denied-error","title":"If you get a \"Permission denied\" error","text":"
If you get an error message saying \"Permission denied\" when running ./seaf-fuse.sh start, most likely you are not in the \"fuse group\". You should:
Add yourself to the fuse group
sudo usermod -a -G fuse <your-user-name>\n
Logout your shell and login again
Now try ./seaf-fuse.sh start <path> again.
"},{"location":"extension/libreoffice_online/","title":"Integrate Seafile with Collabora Online (LibreOffice Online)","text":""},{"location":"extension/libreoffice_online/#setup-collaboraonline","title":"Setup CollaboraOnline","text":"
Deployment Tips
The steps in this guide only cover installing Collabora as another container on the same Docker host as your Seafile container. Please make sure your host has sufficient cores and RAM.
If you want to install it on another host, please refer to the Collabora documentation for instructions. Then follow the steps here to configure seahub_settings.py to enable online office.
Note
To integrate LibreOffice with Seafile, you have to enable HTTPS in your Seafile server:
Add following config option to seahub_settings.py:
OFFICE_SERVER_TYPE = 'CollaboraOffice'\nENABLE_OFFICE_WEB_APP = True\nOFFICE_WEB_APP_BASE_URL = 'http://collabora:9980/hosting/discovery'\n\n# Expiration of WOPI access token\n# WOPI access token is a string used by Seafile to determine the file's\n# identity and permissions when use LibreOffice Online view it online\n# And for security reason, this token should expire after a set time period\nWOPI_ACCESS_TOKEN_EXPIRATION = 30 * 60 # seconds\n\n# List of file formats that you want to view through LibreOffice Online\n# You can change this value according to your preferences\n# And of course you should make sure your LibreOffice Online supports to preview\n# the files with the specified extensions\nOFFICE_WEB_APP_FILE_EXTENSION = ('odp', 'ods', 'odt', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt', 'pptm', 'pptx', 'doc', 'docm', 'docx')\n\n# Enable edit files through LibreOffice Online\nENABLE_OFFICE_WEB_APP_EDIT = True\n\n# types of files should be editable through LibreOffice Online\nOFFICE_WEB_APP_EDIT_FILE_EXTENSION = ('odp', 'ods', 'odt', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt', 'pptm', 'pptx', 'doc', 'docm', 'docx')\n
Then restart Seafile.
Click an office file in Seafile web interface, you will see the online preview rendered by CollaboraOnline. Here is an example:
The CollaboraOnline container writes its logs to stdout; you can use the following command to access them:
docker logs seafile-collabora\n
If you would like to save the logs to a file (i.e., a .log file), you can modify .env with the following statement and uncomment the relevant lines in collabora.yml:
# .env\nCOLLABORA_ENABLE_FILE_LOGGING=True\nCOLLABORA_PATH=/opt/collabora # path of the collabora logs\n
# collabora.yml\n# remove the following notes\n...\nservices:\n collabora:\n ...\n volumes:\n - \"${COLLABORA_PATH:-/opt/collabora}/logs:/opt/cool/logs/\" # chmod 777 needed\n ...\n...\n
Create the logs directory, and restart Seafile server
mkdir -p /opt/collabora\nchmod 777 /opt/collabora\ndocker compose down\ndocker compose up -d\n
"},{"location":"extension/libreoffice_online/#collaboraonline-server-on-a-separate-host","title":"CollaboraOnline server on a separate host","text":"
If your CollaboraOnline server is on a separate host, you just need to modify seahub_settings.py as when deploying on the same host. The only difference is that you have to change the field OFFICE_WEB_APP_BASE_URL to your CollaboraOnline host (e.g., https://collabora-online.seafile.com/hosting/discovery).
The startup of Metadata server requires using Redis as the cache server (it should be the default cache server in Seafile 13.0). So you must deploy Redis for Seafile, then modify seafile.conf, seahub_settings.py and seafevents.conf to enable it before deploying metadata server.
Warning
Please make sure your Seafile service has been deployed before deploying Metadata server. This is because Metadata server needs to read Seafile's configuration file seafile.conf. If you deploy Metadata server before or at the same time with Seafile, it may not be able to detect seafile.conf and fail to start.
Metadata server reads all configurations from the environment and does not need a dedicated configuration file. You don't need to add additional variables to your .env (except for standalone deployment) to get the metadata server started, because it reads exactly the same configuration as the Seafile server (including JWT_PRIVATE_KEY) and keeps the repository metadata locally (default /opt/seafile-data/seafile/md-data). But you still need to modify the COMPOSE_FILE list in .env and add md-server.yml to enable the metadata server:
COMPOSE_FILE='...,md-server.yml'\n
To facilitate your deployment, we still provide two different configuration solutions for your reference:
"},{"location":"extension/metadata-server/#example-env-for-seafile-data-is-stored-locally","title":"Example .env for Seafile data is stored locally","text":"
In this case you don't need to add any additional configuration to your .env. You can also specify image version, maximum local cache size, etc.
"},{"location":"extension/metadata-server/#example-env-for-seafile-data-is-stored-in-the-storage-backend-eg-s3","title":"Example .env for Seafile data is stored in the storage backend (e.g., S3)","text":"
First you need to create a bucket for metadata on your S3 storage backend provider. Then add or modify the following information to .env:
Data for Seafile server should be accessible for Metadata server
In order to correctly obtain metadata information, you must ensure that the data of your Seafile server can be accessed correctly. When the Metadata server is deployed together with the Seafile server, the Metadata server automatically obtains the Seafile server's configuration, so you don't need to worry about this. But if your Metadata server is deployed standalone (usually in a cluster environment), you need to ensure that the storage-related settings in the .env of the Metadata server are consistent with the .env of the Seafile server (e.g., SEAF_SERVER_STORAGE_TYPE), and that the Metadata server can access the Seafile server's configuration files (e.g., seafile.conf), so that it can correctly obtain data from the Seafile server.
"},{"location":"extension/metadata-server/#list-of-environment-variables-for-metadata-server","title":"List of environment variables for Metadata server","text":"
The following table is all the related environment variables with Metadata server:
| Variable | Description |
| --- | --- |
| JWT_PRIVATE_KEY | The JWT key used to connect with Seafile server. Required |
| MD_MAX_CACHE_SIZE | The maximum cache size. Optional, default 1GB |
| REDIS_HOST | Your Redis service host. Optional, default redis |
| REDIS_PORT | Your Redis service port. Optional, default 6379 |
| REDIS_PASSWORD | Your Redis access password. Optional |
| MD_STORAGE_TYPE | Where the metadata is stored. Available options are disk (local storage) and s3. Optional, default disk |
| S3_MD_BUCKET | Your S3 bucket name for the bucket storing metadata. Required when using S3 (MD_STORAGE_TYPE=s3) |
In addition, there are some environment variables related to S3 authorization, please refer to the part with S3_ prefix in this table (the buckets name for Seafile are also needed).
Metadata server supports Redis only
To enable metadata feature, you have to use Redis for cache, as the CACHE_PROVIDER must be set to redis in your .env
You can use following command to start metadata server (and the Seafile service also have to restart):
docker compose down\ndocker compose up -d\n
"},{"location":"extension/metadata-server/#verify-metadata-server-and-enable-it-in-the-seafile","title":"Verify Metadata server and enable it in the Seafile","text":"
Check container log for seafile-md-server, you can see the following message if it runs fine:
When you deploy Seafile server and Metadata server to the same machine, Metadata server will use the same persistence directory (e.g. /opt/seafile-data) as Seafile server. Metadata server will use the following directories or files:
/opt/seafile-data/seafile/md-data: Metadata server data and cache
/opt/seafile-data/seafile/logs/seaf-md-server: The logs directory of the Metadata server, consisting of a running log and an access log.
"},{"location":"extension/notification-server/","title":"Notification Server Overview","text":"
Currently, the status updates of files and libraries on the client and web interface are based on polling the server. The latest status cannot be reflected in real time on the client due to polling delays. The client needs to periodically refresh the library modification, file locking, subdirectory permissions and other information, which causes additional performance overhead to the server.
When a directory is opened on the web interface, the lock status of the file cannot be updated in real time, and the page needs to be refreshed.
The notification server uses the WebSocket protocol and maintains a two-way communication connection with the client or the web interface. When the above changes occur, seaf-server notifies the notification server, which in turn notifies the client or the web interface in real time. This both improves real-time performance and reduces the performance overhead of the server.
NOTIFICATION_SERVER_URL=<your notification server URL>\nINNER_NOTIFICATION_SERVER_URL=$NOTIFICATION_SERVER_URL\n
Difference between NOTIFICATION_SERVER_URL and INNER_NOTIFICATION_SERVER_URL
NOTIFICATION_SERVER_URL: used for the connection between the client (i.e., the user's browser) and the notification server
INNER_NOTIFICATION_SERVER_URL: used for the connection between the Seafile server and the notification server
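As a sketch (the domain and internal address below are placeholders), the two variables often differ like this when a reverse proxy sits in front of the notification server:

```
# Public URL reached by browsers and clients, via the proxy
NOTIFICATION_SERVER_URL=https://seafile.example.com/notification
# Internal URL reached by the Seafile server directly
INNER_NOTIFICATION_SERVER_URL=http://192.168.0.83:8083
```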
Finally, you can run the notification server with the following commands:
docker compose down\ndocker compose up -d\n
"},{"location":"extension/notification-server/#checking-notification-server-status","title":"Checking notification server status","text":"
When the notification server is working, you can access http://127.0.0.1:8083/ping from your browser, which will answer {\"ret\": \"pong\"}. If you have a proxy configured, you can access https://seafile.example.com/notification/ping from your browser instead.
If the client is connected to the notification server, there will be a log message like the following in seafile.log or seadrive.log:
Notification server is enabled on the remote server xxxx\n
"},{"location":"extension/notification-server/#notification-server-in-seafile-cluster","title":"Notification Server in Seafile cluster","text":"
There are no additional features for the notification server in the Pro Edition. It works the same as in the Community Edition.
In a cluster deployment, you need to deploy the notification server on one of the servers or on a separate server. The load balancer should forward WebSocket requests to this node.
Download .env and notification-server.yml to notification server directory:
Then modify the .env file according to your environment. The following fields need to be modified:
variable description SEAFILE_MYSQL_DB_HOST Seafile MySQL host. SEAFILE_MYSQL_DB_USER Seafile MySQL user, default is seafile. SEAFILE_MYSQL_DB_PASSWORD Seafile MySQL password. TIME_ZONE Time zone. JWT_PRIVATE_KEY JWT key, the same as the config in the Seafile .env file. SEAFILE_SERVER_HOSTNAME Seafile host name. SEAFILE_SERVER_PROTOCOL http or https.
You can run notification server with the following command:
docker compose up -d\n
And you need to add the following configurations under seafile.conf and restart Seafile server:
[notification]\nenabled = true\n# the ip of notification server.\nhost = 192.168.0.83\n# the port of notification server\nport = 8083\n
You need to configure load balancer according to the following forwarding rules:
Forward /notification/ping requests to notification server via http protocol.
Forward websockets requests with URL prefix /notification to notification server.
Here is a configuration that uses haproxy to support notification server. Haproxy version needs to be >= 2.0. You should use similar configurations for other load balancers.
#/etc/haproxy/haproxy.cfg\n\n# Other existing haproxy configurations\n......\n\nfrontend seafile\n bind 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n acl notif_ping_request url_sub -i /notification/ping\n acl ws_requests url -i /notification\n acl hdr_connection_upgrade hdr(Connection) -i upgrade\n acl hdr_upgrade_websocket hdr(Upgrade) -i websocket\n use_backend ws_backend if hdr_connection_upgrade hdr_upgrade_websocket\n use_backend notif_ping_backend if notif_ping_request\n use_backend ws_backend if ws_requests\n default_backend backup_nodes\n\nbackend backup_nodes\n cookie SERVERID insert indirect nocache\n server seafileserver01 192.168.0.137:80\n\nbackend notif_ping_backend\n option forwardfor\n server ws 192.168.0.137:8083\n\nbackend ws_backend\n option forwardfor # This sets X-Forwarded-For\n server ws 192.168.0.137:8083\n
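If you use Nginx instead of HAProxy, an equivalent sketch (untested; adjust the backend address 192.168.0.137:8083 to your notification server) could look like:

```
location /notification/ping {
    proxy_pass http://192.168.0.137:8083/ping;
}

location /notification {
    proxy_pass http://192.168.0.137:8083/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```

The Upgrade and Connection headers are what allow the WebSocket handshake to pass through the proxy.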
In Seafile Professional Server 4.4.0 (or above), you can use Microsoft Office Online Server (formerly named Office Web Apps) to preview documents online. Office Online Server provides the best preview for all Office format files. It also supports collaborative editing of Office files directly in the web browser. For organizations with a Microsoft Office Volume License, it's free to use Office Online Server. For more information about Office Online Server and how to deploy it, please refer to https://technet.microsoft.com/en-us/library/jj219455(v=office.16).aspx.
Seafile only supports Office Online Server 2016 and above
To use Office Online Server for preview, please add the following config options to seahub_settings.py.
# Enable Office Online Server\nENABLE_OFFICE_WEB_APP = True\n\n# Url of Office Online Server's discovery page\n# The discovery page tells Seafile how to interact with Office Online Server when view file online\n# You should change `http://example.office-web-app.com` to your actual Office Online Server server address\nOFFICE_WEB_APP_BASE_URL = 'http://example.office-web-app.com/hosting/discovery'\n\n# Expiration of WOPI access token\n# WOPI access token is a string used by Seafile to determine the file's\n# identity and permissions when use Office Online Server view it online\n# And for security reason, this token should expire after a set time period\nWOPI_ACCESS_TOKEN_EXPIRATION = 60 * 60 * 24 # seconds\n\n# List of file formats that you want to view through Office Online Server\n# You can change this value according to your preferences\n# And of course you should make sure your Office Online Server supports to preview\n# the files with the specified extensions\nOFFICE_WEB_APP_FILE_EXTENSION = ('ods', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt',\n 'pptm', 'pptx', 'doc', 'docm', 'docx')\n\n# Enable edit files through Office Online Server\nENABLE_OFFICE_WEB_APP_EDIT = True\n\n# types of files should be editable through Office Online Server\n# Note, Office Online Server 2016 is needed for editing docx\nOFFICE_WEB_APP_EDIT_FILE_EXTENSION = ('xlsx', 'pptx', 'docx')\n\n\n# HTTPS authentication related (optional)\n\n# Server certificates\n# Path to a CA_BUNDLE file or directory with certificates of trusted CAs\n# NOTE: If set this setting to a directory, the directory must have been processed using the c_rehash utility supplied with OpenSSL.\nOFFICE_WEB_APP_SERVER_CA = '/path/to/certfile'\n\n\n# Client certificates\n# You can specify a single file (containing the private key and the certificate) to use as client side certificate\nOFFICE_WEB_APP_CLIENT_PEM = 'path/to/client.pem'\n\n# or you can specify these two file path to use as client side 
certificate\nOFFICE_WEB_APP_CLIENT_CERT = 'path/to/client.cert'\nOFFICE_WEB_APP_CLIENT_KEY = 'path/to/client.key'\n
Then restart
./seafile.sh restart\n./seahub.sh restart\n
After you click the document you specified in seahub_settings.py, you will see the new preview page.
Understanding how the web app integration works will help you debug problems. When a user visits a file page:
(seahub->browser) Seahub will generate a page containing an iframe and send it to the browser
(browser->office online server) With the iframe, the browser will try to load the file preview page from the office online server
(office online server->seahub) office online server receives the request and sends a request to Seahub to get the file content
(office online server->browser) office online server sends the file preview page to the browser.
Please check the Nginx log for Seahub (for step 3) and Office Online Server to see which step is wrong.
Warning
You should make sure you have configured at least a few GB of paging files in your Windows system. Otherwise the IIS worker processes may die randomly when handling Office Online requests.
Seafile supports OnlyOffice to view/edit office files online. In order to use OnlyOffice, you must first deploy an OnlyOffice server.
Deployment Tips
You can deploy OnlyOffice on the same machine as Seafile (only supported when deploying with Docker, with sufficient cores and RAM) using the onlyoffice.yml provided by Seafile according to this document, or you can deploy it on a different machine according to the official OnlyOffice documentation.
"},{"location":"extension/only_office/#deployment-of-onlyoffice","title":"Deployment of OnlyOffice","text":"
Insert onlyoffice.yml into the COMPOSE_FILE list (i.e., COMPOSE_FILE='...,onlyoffice.yml'), and add the following OnlyOffice configurations to the .env file.
# OnlyOffice image\nONLYOFFICE_IMAGE=onlyoffice/documentserver:8.1.0.1\n\n# Persistent storage directory of OnlyOffice\nONLYOFFICE_VOLUME=/opt/onlyoffice\n\n# OnlyOffice document server port\nONLYOFFICE_PORT=6233\n\n# jwt secret, generated by `pwgen -s 40 1` \nONLYOFFICE_JWT_SECRET=<your jwt secret>\n
Note
From Seafile 12.0 on, OnlyOffice's JWT verification is always enabled. Secure communication between Seafile and OnlyOffice is ensured by a shared secret. You can generate the JWT secret with the following command:
pwgen -s 40 1\n
Also modify seahub_settings.py
ENABLE_ONLYOFFICE = True\nONLYOFFICE_APIJS_URL = 'https://seafile.example.com:6233/web-apps/apps/api/documents/api.js'\nONLYOFFICE_JWT_SECRET = '<your jwt secret>'\n\n# NOTE\n# The following two configurations, do NOT need to configure them explicitly.\n# The default values are as follows.\n# If you have custom needs, you can also configure them, which will override the default values.\nONLYOFFICE_FILE_EXTENSION = ('doc', 'docx', 'ppt', 'pptx', 'xls', 'xlsx', 'odt', 'fodt', 'odp', 'fodp', 'ods', 'fods', 'ppsx', 'pps', 'csv')\nONLYOFFICE_EDIT_FILE_EXTENSION = ('docx', 'pptx', 'xlsx', 'csv')\n
Tip
By default, OnlyOffice uses port 6233 for communication between Seafile and the Document Server. You can change the bound port by setting ONLYOFFICE_PORT; the port in ONLYOFFICE_APIJS_URL in seahub_settings.py should be changed accordingly.
"},{"location":"extension/only_office/#advanced-custom-settings-of-onlyoffice","title":"Advanced: Custom settings of OnlyOffice","text":"
The following configuration options are only for OnlyOffice experts. You can create and mount a custom configuration file called local-production-linux.json to force some settings.
nano local-production-linux.json\n
For example, you can configure OnlyOffice to automatically save by copying the following code block in this file:
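For instance, a local-production-linux.json that enables periodic force-save might look like the following (the option names are taken from OnlyOffice's configuration reference; verify them against your Document Server version):

```
{
  "services": {
    "CoAuthoring": {
      "autoAssembly": {
        "enable": true,
        "interval": "5m"
      }
    }
  }
}
```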
For more information you can check the official documentation: https://api.onlyoffice.com/editors/signature/ and https://github.com/ONLYOFFICE/Docker-DocumentServer#available-configuration-parameters
"},{"location":"extension/only_office/#restart-seafile-docker-instance-and-test-that-onlyoffice-is-running","title":"Restart Seafile-docker instance and test that OnlyOffice is running","text":"
docker compose down\ndocker compose up -d\n
Success
After the installation process is finished, visit http{s}://{your Seafile server's domain or IP}:6233/welcome to make sure you have deployed OnlyOffice successfully. You should see the message Document Server is running on this page.
Firstly, run docker logs -f seafile-onlyoffice, then open an office file. After the \"Download failed.\" error appears on the page, observe the logs for the following error:
==> /var/log/onlyoffice/documentserver/converter/out.log <==\n...\nError: DNS lookup {local IP} (family:undefined, host:undefined) is not allowed. Because, It is a private IP address.\n...\n
If this error message appears and you haven't enabled JWT while using a local network, it is most likely an error raised deliberately by the OnlyOffice server for enhanced security. (https://github.com/ONLYOFFICE/DocumentServer/issues/2268#issuecomment-1600787905)
So, as mentioned in that post, we highly recommend enabling JWT in your integration to fix this problem.
"},{"location":"extension/only_office/#the-document-security-token-is-not-correctly-formed","title":"The document security token is not correctly formed","text":"
Starting from OnlyOffice Docker-DocumentServer version 7.2, JWT is enabled by default on OnlyOffice server.
So, for security reason, please Configure OnlyOffice to use JWT Secret.
"},{"location":"extension/only_office/#onlyoffice-on-a-separate-host-and-url","title":"OnlyOffice on a separate host and URL","text":"
In general, you only need to specify the values of the following fields in seahub_settings.py and then restart the service.
For deployments using the onlyoffice.yml file in this document, SSL is primarily handled by Caddy. If the OnlyOffice Document Server and the Seafile server are not on the same machine, please refer to the official documentation to configure SSL for OnlyOffice.
"},{"location":"extension/seafile_ai/","title":"Seafile AI extension","text":"
From Seafile 13 Pro, users can enable Seafile AI to support the following features:
File tags, file and image summaries, text translation, sdoc writing assistance
Given an image, generate its corresponding tags (including objects, weather, color, etc.)
Detect faces in images and encode them
Detect text in images (OCR)
"},{"location":"extension/seafile_ai/#deploy-seafile-ai-basic-service","title":"Deploy Seafile AI basic service","text":"
The Seafile AI basic service uses API calls to an external large language model service (e.g., GPT-4o-mini) to implement file labeling, file and image summaries, text translation, and sdoc writing assistance.
Here is the workflow when a user opens an sdoc file in the browser:
When a user opens an sdoc file in the browser, a file loading request is sent to Caddy, and Caddy proxies the request to the SeaDoc server (see Seafile instance architecture for details).
The SeaDoc server sends the file's content back if it is already cached; otherwise, it sends a request to the Seafile server.
The Seafile server loads the content, then sends it to the SeaDoc server, which writes it to the cache at the same time.
After SeaDoc receives the content, it is sent to the browser.
This extension is already installed by default when deploying Seafile (single-node mode) with Docker.
If you would like to remove it, you can undo the steps in this section (i.e., remove seadoc.yml from the COMPOSE_FILE field and set ENABLE_SEADOC to false)
The easiest way to deploy SeaDoc is to run it together with the Seafile server on the same host, using the same Docker network. If in some situations you need to deploy SeaDoc standalone, you can follow the next section.
Then modify the .env file according to your environment. The following fields need to be modified:
variable description SEADOC_VOLUME The volume directory of SeaDoc data. SEAFILE_MYSQL_DB_HOST Seafile MySQL host. SEAFILE_MYSQL_DB_USER Seafile MySQL user, default is seafile. SEAFILE_MYSQL_DB_PASSWORD Seafile MySQL password. TIME_ZONE Time zone. JWT_PRIVATE_KEY JWT key, the same as the config in the Seafile .env file. SEAFILE_SERVER_HOSTNAME Seafile host name. SEAFILE_SERVER_PROTOCOL http or https.
(Optional) By default, SeaDoc server will bind to port 80 on the host machine. If the port is already taken by another service, you have to change the listening port of SeaDoc:
Modify seadoc.yml
services:\n seadoc:\n ...\n ports:\n - \"<your SeaDoc server port>:80\"\n...\n
Add a reverse proxy for the SeaDoc server. In a cluster environment, this means adding reverse proxy rules at the load balancer. The example below uses Apache (please replace 127.0.0.1:80 with the host:port of your SeaDoc server):
<Location /sdoc-server/>\n ProxyPass \"http://127.0.0.1:80/\"\n ProxyPassReverse \"http://127.0.0.1:80/\"\n </Location>\n\n <Location /socket.io/>\n # Since Apache HTTP Server 2.4.47\n ProxyPass \"http://127.0.0.1:80/socket.io/\" upgrade=websocket\n </Location>\n
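If you use Nginx as the reverse proxy instead, a rough equivalent sketch (replace 127.0.0.1:80 with the host:port of your SeaDoc server) is:

```
location /sdoc-server/ {
    proxy_pass http://127.0.0.1:80/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
}

location /socket.io {
    proxy_pass http://127.0.0.1:80;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```

The /socket.io block needs the Upgrade and Connection headers so that WebSocket connections can pass through.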
Start the SeaDoc server with the following command
docker compose up -d\n
Modify Seafile server's configuration and start SeaDoc server
Warning
After using a reverse proxy, your SeaDoc service will be located at the /sdoc-server path of your reverse proxy (i.e. xxx.example.com/sdoc-server). For example:
Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container, in our case we keep various log files outside. This allows you to rebuild containers easily without losing important information.
/opt/seadoc-data/logs: This is the directory for SeaDoc logs.
"},{"location":"extension/setup_seadoc/#database-used-by-seadoc","title":"Database used by SeaDoc","text":"
SeaDoc uses one database table, seahub_db.sdoc_operation_log, to store operation logs. The table is cleaned automatically.
"},{"location":"extension/setup_seadoc/#common-issues-when-settings-up-seadoc","title":"Common issues when settings up SeaDoc","text":""},{"location":"extension/setup_seadoc/#server-is-disconnected-reconnecting-error-when-open-a-sdoc","title":"\"Server is disconnected. Reconnecting\u2026\" error when open a sdoc","text":"
This is because WebSocket for sdoc-server has not been properly configured. If you use the default Caddy proxy, it should be set up correctly.
But if you use your own proxy, you need to make sure it properly proxies your-sdoc-server-domain/socket.io to sdoc-server-docker-image-address/socket.io
"},{"location":"extension/setup_seadoc/#load-doc-content-error-when-open-a-sdoc","title":"\"Load doc content error\" when open a sdoc","text":"
This is because the browser cannot correctly load content from sdoc-server. Make sure
SEADOC_SERVER_URL is correctly set in .env
Make sure sdoc-server can be accessed via the browser.
You can open developer console of the browser to further debug the issue.
"},{"location":"extension/thumbnail-server/","title":"Thumbnail Server Overview","text":"
Since Seafile 13.0, a new component, the thumbnail server, has been added. The thumbnail server can create thumbnails for images, videos, PDFs and other file types. It uses a task-queue based architecture, so it handles heavy workloads better than generating thumbnails inside the Seahub component.
Use this feature by forwarding thumbnail requests directly to the thumbnail server via Caddy or a reverse proxy.
"},{"location":"extension/thumbnail-server/#how-to-configure-and-run","title":"How to configure and run","text":"
First download thumbnail-server.yml to Seafile directory:
Add the following configuration to seahub_settings.py to enable thumbnails for videos:
# video thumbnails (disabled by default)\nENABLE_VIDEO_THUMBNAIL = True\n
Finally, you can run the thumbnail server with the following commands:
docker compose down\ndocker compose up -d\n
"},{"location":"extension/thumbnail-server/#thumbnail-server-in-seafile-cluster","title":"Thumbnail Server in Seafile cluster","text":"
There are no additional features for the thumbnail server in the Pro Edition. It works the same as in the Community Edition.
In a cluster deployment, you need to deploy the thumbnail server on one of the servers or on a separate server. The load balancer should forward thumbnail requests to this node.
Download .env and thumbnail-server.yml to thumbnail server directory:
Then modify the .env file according to your environment. The following fields need to be modified:
variable description SEAFILE_VOLUME The volume directory of thumbnail server data. SEAFILE_MYSQL_DB_HOST Seafile MySQL host. SEAFILE_MYSQL_DB_USER Seafile MySQL user, default is seafile. SEAFILE_MYSQL_DB_PASSWORD Seafile MySQL password. TIME_ZONE Time zone. JWT_PRIVATE_KEY JWT key, the same as the config in the Seafile .env file. INNER_SEAHUB_SERVICE_URL Inner Seafile URL. SEAF_SERVER_STORAGE_TYPE The storage type of the Seafile data. Available options are disk (i.e., local disk), s3 and multiple (see the details of multiple storage backends). S3_COMMIT_BUCKET S3 storage backend commit objects bucket. S3_FS_BUCKET S3 storage backend fs objects bucket. S3_BLOCK_BUCKET S3 storage backend block objects bucket. S3_KEY_ID S3 storage backend key ID. S3_SECRET_KEY S3 storage backend secret key. S3_AWS_REGION Region of your buckets. S3_HOST Host of your buckets. S3_USE_HTTPS Use HTTPS connections to S3 if enabled. S3_USE_V4_SIGNATURE Use the v4 protocol of S3 if enabled. S3_PATH_STYLE_REQUEST This option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is virtual host style, such as https://bucketname.s3.amazonaws.com/object, but this style relies on advanced DNS server setup, so most self-hosted storage systems only implement the path style format. S3_SSE_C_KEY A 32-character random string, which can be generated by openssl rand -base64 24. The v4 authentication protocol and HTTPS are required if you enable SSE-C.
Then you can run thumbnail server with the following command:
docker compose up -d\n
You need to configure load balancer according to the following forwarding rules:
Forward /thumbnail requests to thumbnail server via http protocol.
You can configure HAProxy to support the thumbnail server similarly to the notification server example above. The HAProxy version needs to be >= 2.0. You should use similar configurations for other load balancers.
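A minimal HAProxy sketch for this forwarding rule, modeled on the notification server example earlier in this document (the backend address and the thumbnail server port are placeholders to fill in for your setup):

```
frontend seafile
    bind 0.0.0.0:80
    mode http
    acl thumbnail_request url_beg /thumbnail
    use_backend thumbnail_backend if thumbnail_request
    default_backend backup_nodes

backend thumbnail_backend
    option forwardfor
    server thumb <thumbnail-server-ip>:<thumbnail-server-port>
```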
The thumbnail server needs to access Seafile storage.
If you use local storage, you need to mount the /opt/seafile-data directory of the Seafile node to the thumbnail node, and set SEAFILE_VOLUME to the mounted directory correctly.
If you use a single S3 storage backend, please set the relevant environment variables correctly in .env.
If you are using multiple storage backends, you have to copy the seafile.conf of the Seafile node to the /opt/seafile-data/seafile/conf directory of the thumbnail node, and set SEAF_SERVER_STORAGE_TYPE=multiple in .env.
"},{"location":"extension/thumbnail-server/#thumbnail-server-directory-structure","title":"Thumbnail server directory structure","text":"
/opt/seafile-data
Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container, in our case we keep various log files outside. This allows you to rebuild containers easily without losing important information.
/opt/seafile-data/conf: This is the directory for config files.
/opt/seafile-data/logs: This is the directory for logs.
/opt/seafile-data/seafile-data: This is the directory for seafile storage (if you use local storage).
/opt/seafile-data/seahub-data/thumbnail: This is the directory for thumbnail files.
Seafile can scan uploaded files for malicious content in the background. When configured to run periodically, the scan process scans all existing libraries on the server. In each scan, the process only scans newly uploaded/updated files since the last scan. For each file, the process executes a user-specified virus scan command to check whether the file is a virus or not. Most anti-virus programs provide command line utility for Linux.
To enable this feature, add the following options to seafile.conf:
[virus_scan]\nscan_command = (command for checking virus)\nvirus_code = (command exit codes when file is virus)\nnonvirus_code = (command exit codes when file is not virus)\nscan_interval = (scanning interval, in unit of minutes, default to 60 minutes)\n
More details about the options:
On Linux/Unix, most virus scan commands return specific exit codes for virus and non-virus results. You should consult the manual of your anti-virus program for more information.
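The mechanism can be seen with a plain shell command standing in for the scanner (this is a hypothetical stand-in; a real scan_command would be your anti-virus CLI):

```shell
# Simulate a scanner that found a virus: it exits with code 1,
# which Seafile would match against virus_code.
sh -c 'exit 1'
code=$?
echo "scanner exit code: $code"
```

Seafile compares this exit code against virus_code and nonvirus_code to classify the file.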
An example for ClamAV (http://www.clamav.net/) is provided below:
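A minimal sketch using clamscan, which exits with 0 when no virus is found and 1 when a virus is found:

```
[virus_scan]
scan_command = clamscan
virus_code = 1
nonvirus_code = 0
scan_interval = 60
```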
To test whether your configuration works, you can trigger a scan manually:
cd seafile-server-latest\n./pro/pro.py virus_scan\n
If a virus was detected, you can see scan records and delete infected files on the Virus Scan page in the admin area.
Note
If you use the clamav command line tool directly to scan files, scanning takes a lot of time. To speed it up, we recommend running ClamAV as a daemon. Please refer to Run ClamAV as a Daemon
When running ClamAV as a daemon, the scan_command in seafile.conf should be clamdscan. An example for clamav-daemon is provided below:
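A corresponding sketch for the daemon setup (clamdscan reports the same exit codes as clamscan: 0 for clean, 1 for virus found):

```
[virus_scan]
scan_command = clamdscan
virus_code = 1
nonvirus_code = 0
scan_interval = 60
```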
Since Pro edition 6.0.0, a few more options are added to provide finer grained control for virus scan.
[virus_scan]\n......\nscan_size_limit = (size limit for files to be scanned) # The unit is MB.\nscan_skip_ext = (a comma (',') separated list of file extensions to be ignored)\nthreads = (number of concurrent threads for scan, one thread for one file, default to 4)\n
The file extensions should start with '.'. The extensions are case-insensitive. By default, files with the following extensions will be ignored:
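An illustrative combination of these options (the values below are examples chosen for illustration, not the defaults):

```
[virus_scan]
......
scan_size_limit = 20
scan_skip_ext = .mp3,.mp4,.avi
threads = 2
```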
"},{"location":"extension/virus_scan/#scanning-files-on-upload","title":"Scanning Files on Upload","text":"
You may also configure Seafile to scan files for viruses when they are uploaded. This only works for files uploaded via the web interface or web APIs. Files uploaded with the syncing or SeaDrive clients cannot be scanned on upload due to performance considerations.
You may scan files uploaded from shared upload links by adding the option below to seahub_settings.py:
ENABLE_UPLOAD_LINK_VIRUS_CHECK = True\n
Since Pro Edition 11.0.7, you may scan all uploaded files via web APIs by adding the option below to seafile.conf:
[fileserver]\ncheck_virus_on_web_upload = true\n
"},{"location":"extension/virus_scan_with_clamav/","title":"Deploy ClamAV with Seafile","text":""},{"location":"extension/virus_scan_with_clamav/#deploy-with-docker","title":"Deploy with Docker","text":"
If your Seafile server is deployed using Docker, we also recommend deploying ClamAV with Docker by following the steps below; otherwise you can deploy it from the binary package of ClamAV.
"},{"location":"extension/virus_scan_with_clamav/#download-clamavyml-and-insert-to-docker-compose-lists-in-env","title":"Download clamav.yml and insert to Docker-compose lists in .env","text":"
Wait a few minutes until ClamAV finishes initializing.
Now ClamAV can be used.
"},{"location":"extension/virus_scan_with_clamav/#use-clamav-in-binary-based-deployment","title":"Use ClamAV in binary based deployment","text":""},{"location":"extension/virus_scan_with_clamav/#install-clamav-daemon-clamav-freshclam","title":"Install clamav-daemon & clamav-freshclam","text":"
apt-get install clamav-daemon clamav-freshclam\n
You should run clamd with root permission to be able to scan any file. Edit the config file /etc/clamav/clamd.conf and change the following lines:
LocalSocketGroup root\nUser root\n
"},{"location":"extension/virus_scan_with_clamav/#start-the-clamav-daemon","title":"Start the clamav-daemon","text":"
"},{"location":"extension/virus_scan_with_kav4fs/","title":"Virus Scan with kav4fs","text":""},{"location":"extension/virus_scan_with_kav4fs/#prerequisite","title":"Prerequisite","text":"
Assume you have installed Kaspersky Anti-Virus for Linux File Server on the Seafile Server machine.
If the user that runs the Seafile server is not root, it should have sudo privileges so that it can run kav4fs-control without entering a password. Add the following content to /etc/sudoers:
<user of running seafile server> ALL=(ALL:ALL) ALL\n<user of running seafile server> ALL=NOPASSWD: /opt/kaspersky/kav4fs/bin/kav4fs-control\n
As the return code of kav4fs cannot reflect the file scan result, we use a shell wrapper script that parses the scan output and returns different exit codes based on the result.
Save following contents to a file such as kav4fs_scan.sh:
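A hypothetical wrapper along these lines (the kav4fs-control path and the "Threats found" summary line are assumptions; check both against your installation's actual output before using it):

```
#!/bin/bash
# Scan one file and map the textual result to the exit codes
# configured in seafile.conf (1 = virus, 0 = clean).
TMP_LOG=$(mktemp)
sudo /opt/kaspersky/kav4fs/bin/kav4fs-control --scan-file "$1" > "$TMP_LOG" 2>&1
# Assumed summary format: "Threats found: N"
if grep -qE 'Threats found:[[:space:]]*[1-9]' "$TMP_LOG"; then
    rm -f "$TMP_LOG"
    exit 1    # matches virus_code = 1
fi
rm -f "$TMP_LOG"
exit 0        # matches nonvirus_code = 0
```

Remember to make the script executable (chmod +x) and reference its absolute path in scan_command.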
[virus_scan]\nscan_command = <absolute path of kav4fs_scan.sh>\nvirus_code = 1\nnonvirus_code = 0\nscan_interval = <scanning interval, in unit of minutes, default to 60 minutes>\n
The configuration file is /opt/seafile-data/seafile/conf/seafdav.conf (for deploying from binary packages, it should be /opt/seafile/conf/seafdav.conf). If it is not created already, you can just create the file.
[WEBDAV]\n\n# Default is false. Change it to true to enable SeafDAV server.\nenabled = true\n\nport = 8080\ndebug = true\n\n# If you deploy seafdav behind nginx/apache, you need to modify \"share_name\".\nshare_name = /seafdav\n\n# SeafDAV uses Gunicorn as web server.\n# This option maps to Gunicorn's 'workers' setting. https://docs.gunicorn.org/en/stable/settings.html?#workers\n# By default it's set to 5 processes.\nworkers = 5\n\n# This option maps to Gunicorn's 'timeout' setting. https://docs.gunicorn.org/en/stable/settings.html?#timeout\n# By default it's set to 1200 seconds, to support large file uploads.\ntimeout = 1200\n
Every time the configuration is modified, you need to restart seafile server to make it take effect.
Deploy in DockerDeploy from binary packages
docker compose restart\n
cd /opt/seafile/seafile-server-latest/\n./seafile.sh restart\n
Your WebDAV client would visit the Seafile WebDAV server at http{s}://example.com/seafdav/ (for deploying from binary packages, it should be http{s}://example.com:8080/seafdav/)
Since Pro Edition 7.1.8 and Community Edition 7.1.5, an option is available to append the library ID to the library name returned by SeafDAV.
show_repo_id=true\n
"},{"location":"extension/webdav/#proxy-only-for-deploying-from-binary-packages","title":"Proxy (only for deploying from binary packages)","text":"
Tip
For deployment in Docker, the WebDAV server is already proxied at /seafdav/*, so you can skip this step
NginxApache
For Seafdav, the configuration of Nginx is as follows:
"},{"location":"extension/webdav/#notes-on-clients","title":"Notes on Clients","text":"
Please note first that there are some known performance limitations when you map a Seafile WebDAV server as a local file system (or network drive).
Uploading a large number of files at once is usually much slower than with the syncing client. That's because each file needs to be committed separately.
Access to the WebDAV server may sometimes be slow. That's because the local file system driver sends a lot of unnecessary requests to get the files' attributes.
So WebDAV is more suitable for infrequent file access. If you want better performance, please use the sync client instead.
WindowsLinuxMac OS X
Windows Explorer supports HTTPS connections, but it requires a valid certificate on the server. It's generally recommended to use Windows Explorer to map a WebDAV server as a network drive. If you use a self-signed certificate, you have to add the certificate's CA to Windows' system CA store.
On Linux you have more choices. You can use a file manager such as Nautilus to connect to the WebDAV server, or you can use davfs2 from the command line.
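For example, with davfs2 (the URL and mount point below are placeholders for your own server address and directory):

```
sudo mount -t davfs -o uid=<username> https://example.com/seafdav /mnt/seafdav
```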
The -o option sets the owner of the mounted directory so that it's writable for non-root users.
It's recommended to disable LOCK operation for davfs2. You have to edit /etc/davfs2/davfs2.conf
use_locks 0\n
Finder's support for WebDAV is also not very stable and is slow. So it is recommended to use a WebDAV client such as Cyberduck.
"},{"location":"extension/webdav/#frequently-asked-questions","title":"Frequently Asked Questions","text":""},{"location":"extension/webdav/#clients-cant-connect-to-seafdav-server","title":"Clients can't connect to seafdav server","text":"
By default, seafdav is disabled. Check whether you have enabled = true in seafdav.conf. If not, modify it and restart seafile server.
"},{"location":"extension/webdav/#the-client-gets-error-404-not-found","title":"The client gets \"Error: 404 Not Found\"","text":"
If you deploy SeafDAV behind Nginx/Apache, make sure to change the value of share_name as shown in the sample configuration above. Restart your Seafile server and try again.
First, check seafdav.log to see if there is a log entry like the following.
\"MOVE ... -> 502 Bad Gateway\n
If you have enabled debug, there will also be the following log.
09:47:06.533 - DEBUG : Raising DAVError 502 Bad Gateway: Source and destination must have the same scheme.\nIf you are running behind a reverse proxy, you may have to rewrite the 'Destination' header.\n(See https://github.com/mar10/wsgidav/issues/183)\n\n09:47:06.533 - DEBUG : Caught (502, \"Source and destination must have the same scheme.\\nIf you are running behind a reverse proxy, you may have to rewrite the 'Destination' header.\\n(See https://github.com/mar10/wsgidav/issues/183)\")\n
This issue usually occurs when you have configured HTTPS, but the request was forwarded, resulting in the HTTP_X_FORWARDED_PROTO value in the request received by Seafile not being HTTPS.
You can solve this by manually changing the value of HTTP_X_FORWARDED_PROTO. For example, in nginx, change
proxy_set_header X-Forwarded-Proto $scheme;\n
to
proxy_set_header X-Forwarded-Proto https;\n
"},{"location":"extension/webdav/#windows-explorer-reports-file-size-exceeds-the-limit-allowed-and-cannot-be-saved","title":"Windows Explorer reports \"file size exceeds the limit allowed and cannot be saved\"","text":"
This happens when you map WebDAV as a network drive and try to copy a file larger than about 50MB from the network drive to a local folder.
This is because Windows Explorer has a limit on the size of files downloaded from a WebDAV server. To raise this limit, change a registry entry on the client machine. There is a registry key named FileSizeLimitInBytes under HKEY_LOCAL_MACHINE -> SYSTEM -> CurrentControlSet -> Services -> WebClient -> Parameters.
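As an illustration, from an elevated command prompt the limit could be raised to its maximum of 4294967295 bytes (about 4 GB); the exact value is your own choice:

```shell
rem Illustrative: raise the WebDAV download size limit to its maximum value
reg add HKLM\SYSTEM\CurrentControlSet\Services\WebClient\Parameters /v FileSizeLimitInBytes /t REG_DWORD /d 4294967295 /f
```

You may need to restart the WebClient service (or reboot) for the change to take effect.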
Seafile Server consists of the following two components:
Seahub (django): the web frontend. The Seafile server package contains a lightweight Python HTTP server, gunicorn, that serves the website. By default, Seahub runs as an application within gunicorn.
Seafile server (seaf-server): the data service daemon; it handles raw file upload, download and synchronization. Seafile server listens on port 8082 by default. You can configure Nginx/Apache to proxy traffic to the local 8082 port.
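As an illustration, an Nginx rule proxying file traffic to port 8082 might look like the sketch below (the /seafhttp location name follows the common Seafile convention; verify it against your own configuration):

```nginx
location /seafhttp {
    rewrite ^/seafhttp(.*)$ $1 break;
    proxy_pass http://127.0.0.1:8082;
    client_max_body_size 0;          # allow large uploads
    proxy_request_buffering off;     # stream uploads instead of buffering
}
```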
The picture below shows how Seafile clients access files when you configure Seafile behind Nginx/Apache.
Tip
All access to the Seafile service (including Seahub and Seafile server) can be configured behind Nginx or Apache web server. This way all network traffic to the service can be encrypted with HTTPS.
Seafile manages files using libraries. Every library has an owner, who can share the library to other users or share it with groups. The sharing can be read-only or read-write.
Read-only libraries can be synced to the local desktop, but modifications at the client will not be synced back. If a user has modified some file contents, they can use \"resync\" to revert the modifications.
Sharing controls whether a user or group can see a library, while sub-folder permissions are used to modify permissions on specific folders.
Suppose you share a library as read-only to a group and then want specific sub-folders to be read-write for a few users: you can set read-write permissions on those sub-folders for those users and groups.
Note
Setting sub-folder permission for a user without sharing the folder or parent folder to that user will have no effect.
Sharing a library read-only to a user and then sharing a sub-folder read-write to that user will lead to two shared items for that user. This is going to cause confusion. Use sub-folder permissions instead.
"},{"location":"setup/caddy/","title":"HTTPS and Caddy","text":"
Note
Since Seafile Docker 12.0, HTTPS is handled by Caddy. The default Caddy image used by Seafile Docker is lucaslorentz/caddy-docker-proxy:2.9-alpine.
Caddy is a modern open source web server that mainly routes external traffic to the internal services of Seafile Docker. In addition to the advantages of traditional proxy components (e.g., Nginx), Caddy makes it easier to acquire and renew HTTPS certificates by providing simpler configuration.
"},{"location":"setup/caddy/#engage-https-by-caddy","title":"Engage HTTPS by caddy","text":"
We provide two options for enabling HTTPS via Caddy, both of which rely on the caddy-docker-proxy container from lucaslorentz, which supports dynamic configuration via labels:
With an automatically generated certificate
Using a custom (existing) certificate
"},{"location":"setup/caddy/#with-a-automatically-generated-certificate","title":"With an automatically generated certificate","text":"
To enable HTTPS, users only need to correctly configure the following fields in .env:
After Seafile Docker starts up, you can use the following command to access the logs of Caddy
docker logs seafile-caddy -f\n
"},{"location":"setup/caddy/#using-a-custom-existing-certificate","title":"Using a custom (existing) certificate","text":"
With caddy.yml, a default volume mount is created: /opt/seafile-caddy (you can change it by modifying SEAFILE_CADDY_VOLUME in .env). By convention, you should provide your certificate and key files in the container host filesystem under /opt/seafile-caddy/certs/ to make them available to Caddy:
/opt/seafile-caddy/certs/\n\u251c\u2500\u2500 cert.pem # xxx.crt in some case\n\u251c\u2500\u2500 key.pem # xxx.key in some case\n
Command to generate custom certificates
With this command, you can generate your own custom certificates:
Please be aware that custom certificates cannot be used for IP addresses
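A sketch of generating a self-signed certificate with OpenSSL (the hostname is an assumption; adjust it to your SEAFILE_SERVER_HOSTNAME):

```shell
# Generate a self-signed certificate and key (illustrative hostname)
openssl req -x509 -newkey rsa:4096 -nodes \
  -keyout key.pem -out cert.pem \
  -days 365 -subj /CN=seafile.example.com
# Then place both files where Caddy expects them:
# cp cert.pem key.pem /opt/seafile-caddy/certs/
```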
Then modify seafile-server.yml to enable your custom certificate. We strongly recommend that you make a backup of seafile-server.yml before doing this:
services:\n ...\n seafile:\n ...\n volumes:\n ...\n # If you use a self-generated certificate, please add it to the Seafile server trusted directory (i.e. remove the comment symbol below)\n # - \"/opt/seafile-caddy/certs/cert.pem:/usr/local/share/ca-certificates/cert.crt\"\n labels:\n caddy: ${SEAFILE_SERVER_HOSTNAME:?Variable is not set or empty} # leave this variables only\n caddy.tls: \"/data/caddy/certs/cert.pem /data/caddy/certs/key.pem\"\n ...\n
DNS resolution must work inside the container
If you're using a non-public URL like my-custom-setup.local, you have to make sure that the Docker container can resolve this DNS query. If you don't run your own DNS servers, you have to add extra_hosts to your .yml file.
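For example (the IP address is a placeholder for the host that actually serves my-custom-setup.local):

```yaml
services:
  seafile:
    extra_hosts:
      - my-custom-setup.local:192.168.0.2   # placeholder IP; use your host's address
```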
The Seafile cluster solution employs a 3-tier architecture:
Load balancer tier: distributes incoming traffic to the Seafile servers. HA can be achieved by deploying multiple load balancer instances.
Seafile server cluster: a cluster of Seafile server instances. If one instance fails, the load balancer stops directing traffic to it, so HA is achieved.
Backend storage: Distributed storage cluster, e.g. S3, Openstack Swift or Ceph.
This architecture scales horizontally. That means, you can handle more traffic by adding more machines. The architecture is visualized in the following picture.
There are two main components on the Seafile server node: web server (Nginx/Apache) and Seafile app server. The web server passes requests from the clients to Seafile app server. The Seafile app servers work independently. They don't know about each other's state. That means each app server can fail independently without affecting other app server instances. The load balancer is responsible for detecting failure and re-routing requests.
Even though Seafile app servers work independently, they still have to share some session information. All shared session information is stored in a memory cache. Thus, all Seafile app servers have to connect to the same memory cache server (cluster). Since Pro Edition 11.0, both memcached and Redis can be used as the memory cache. Before 11.0, only memcached is supported. More details about memory cache configuration are available later.
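As a sketch, pointing all app servers at the same memcached instance in seahub_settings.py might look like this (the host name is an assumption; django_pylibmc is the backend Seafile commonly uses for memcached):

```python
# seahub_settings.py -- every frontend node must point to the same cache server
CACHES = {
    'default': {
        'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',
        'LOCATION': 'memcached.example.com:11211',  # assumed host; use your cache server
    },
}
```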
The background server is the workhorse for various background tasks, including full-text indexing, office file preview, virus scanning and LDAP syncing. It should usually run on a dedicated server for better performance. Currently only one background task server can be running in the entire cluster. If more than one background server is running, they may conflict with each other when performing some tasks. If you need HA for the background task server, you can consider using Keepalived to build a hot backup for it.
In the seafile cluster, only one server should run the background tasks, including:
indexing files for search
email notification
LDAP sync
virus scan
Let's assume you have three nodes in your cluster: A, B, and C.
Node A is the backend node that runs background tasks.
Nodes B and C are frontend nodes that serve requests from clients.
Please refer here for the details about the cluster requirements for all nodes in Seafile cluster. In general, we recommend that each node should have at least 2G RAM and a 2-core CPU (> 2GHz).
"},{"location":"setup/cluster_deploy_with_docker/#deploy-seafile-service","title":"Deploy Seafile service","text":""},{"location":"setup/cluster_deploy_with_docker/#deploy-the-first-seafile-frontend-node","title":"Deploy the first Seafile frontend node","text":"
Create the mount directory
mkdir -p /opt/seafile/shared\n
Pulling Seafile image
Tip
Since v12.0, Seafile PE versions are hosted on DockerHub and do not require a username and password to download.
Modify the variables in .env (especially the terms like <...>).
Tip
If you have already deployed S3 storage backend and plan to apply it to Seafile cluster, you can modify the variables in .env to set them synchronously during initialization.
The current Seafile cluster only supports Memcached as the cache, and it can be configured through .env. You do not need to pay attention to the selection of CACHE_PROVIDER; you only need to correctly set MEMCACHED_HOST and MEMCACHED_PORT in .env.
Place license file
If you have a seafile-license.txt license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile/shared. If you have modified the path, save the license file under your custom path.
If the license file has a different name or cannot be read, Seafile server will start in trial mode, which allows at most THREE users
Start the Seafile docker
docker compose up -d\n
Cluster init mode
Because CLUSTER_INIT_MODE is true in the .env file, Seafile Docker will start in init mode and generate configuration files. As a result, you will see the following lines if you trace the Seafile container (i.e., docker logs seafile):
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n\n\n[2024-11-21 02:22:37] Updating version stamp\nStart init\n\nInit success\n
In initialization mode, the service will not be started. During this time you can check the generated configuration files (e.g., for MySQL, Memcached, Elasticsearch):
seafevents.conf
seafile.conf
seahub_settings.py
After initializing the cluster, the following fields can be removed from .env:
CLUSTER_INIT_MODE, must be removed from .env file
CLUSTER_INIT_ES_HOST
CLUSTER_INIT_ES_PORT
Tip
We recommend that you check that the relevant configuration files are correct and copy the SEAFILE_VOLUME directory before the service is officially started, because only the configuration files are generated after initialization. You can then directly migrate the entire copied SEAFILE_VOLUME to other nodes:
Restart the container to start the service in frontend node
docker compose down\ndocker compose up -d\n
Frontend node starts successfully
After executing the above command, you can trace the logs of container seafile (i.e., docker logs seafile). You can see the following message if the frontend node starts successfully:
*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 20\n*** Running /scripts/enterpoint.sh...\n2024-11-21 03:02:35 Nginx ready \n\n2024-11-21 03:02:35 This is an idle script (infinite loop) to keep container running. \n---------------------------------\n\nSeafile cluster frontend mode\n\n---------------------------------\n\n\nStarting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\nSeahub is started\n\nDone.\n
"},{"location":"setup/cluster_deploy_with_docker/#deploy-the-others-seafile-frontend-nodes","title":"Deploy the other Seafile frontend nodes","text":"
Create the mount directory
$ mkdir -p /opt/seafile/shared\n
Pull Seafile image
Copy seafile-server.yml, .env and configuration files from the first frontend node
Copy seafile-server.yml, .env and configuration files from a frontend node
Note
The configuration files from frontend node have to be put in the same path as the frontend node, i.e., /opt/seafile/shared/seafile/conf/*
Modify .env, set CLUSTER_MODE to backend
Start the service in the backend node
docker compose up -d\n
Backend node starts successfully
After executing the above command, you can trace the logs of container seafile (i.e., docker logs seafile). You can see the following message if the backend node starts successfully:
*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 21\n*** Running /scripts/enterpoint.sh...\n2024-11-21 03:11:59 Nginx ready \n2024-11-21 03:11:59 This is an idle script (infinite loop) to keep container running. \n\n---------------------------------\n\nSeafile cluster backend mode\n\n---------------------------------\n\n\nStarting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seafile background tasks ...\nDone.\n
Since Seafile Pro server 6.0.0, cluster deployment requires \"sticky session\" settings in the load balancer. Otherwise folder download on the web UI sometimes can't work properly. Read the \"Load Balancer Setting\" section below for details.
Generally speaking, in order to better access the Seafile service, we recommend that you use a load balancing service to access the Seafile cluster and bind your domain name (such as seafile.cluster.com) to the load balancing service. Usually, you can use:
Cloud service provider's load balancing service (e.g., AWS Elastic Load Balancer)
Deploy your own load balancing service; our document covers two common load balancing services:
"},{"location":"setup/cluster_deploy_with_docker/#haproxy-and-keepalived-services","title":"HAproxy and Keepalived services","text":"
Execute the following commands on the two Seafile frontend servers:
$ apt install haproxy keepalived -y\n\n$ mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak\n\n$ cat > /etc/haproxy/haproxy.cfg << 'EOF'\nglobal\n log 127.0.0.1 local1 notice\n maxconn 4096\n user haproxy\n group haproxy\n\ndefaults\n log global\n mode http\n retries 3\n timeout connect 10000\n timeout client 300000\n timeout server 36000000\n\nlisten seafile 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n cookie SERVERID insert indirect nocache\n server seafile01 Front-End01-IP:8001 check port 11001 cookie seafile01\n server seafile02 Front-End02-IP:8001 check port 11001 cookie seafile02\nEOF\n
Warning
Please correctly modify the IP addresses (Front-End01-IP and Front-End02-IP) of the frontend servers in the above configuration file. Otherwise it cannot work properly.
Choose one of the above two servers as the master node, and the other as the backup node.
Perform the following operations on the master node:
$ cat > /etc/keepalived/keepalived.conf << 'EOF'\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node1\n vrrp_mcast_group4 224.0.100.18\n}\n\nvrrp_instance VI_1 {\n state MASTER\n interface eno1 # Set to the device name of a valid network interface on the current server, and the virtual IP will be bound to the network interface\n virtual_router_id 50\n priority 100\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass seafile123\n }\n virtual_ipaddress {\n 172.26.154.45/24 dev eno1 # Configure to the correct virtual IP and network interface device name\n }\n}\nEOF\n
Warning
Please correctly configure the virtual IP address and network interface device name in the above file. Otherwise it cannot work properly.
Perform the following operations on the backup node:
$ cat > /etc/keepalived/keepalived.conf << 'EOF'\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node2\n vrrp_mcast_group4 224.0.100.18\n}\n\nvrrp_instance VI_1 {\n state BACKUP\n interface eno1 # Set to the device name of a valid network interface on the current server, and the virtual IP will be bound to the network interface\n virtual_router_id 50\n priority 98\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass seafile123\n }\n virtual_ipaddress {\n 172.26.154.45/24 dev eno1 # Configure to the correct virtual IP and network interface device name\n }\n}\nEOF\n
Finally, run the following commands on the two Seafile frontend servers to start the corresponding services:
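On systemd-based distributions, one way to enable and start both services is (a sketch, not necessarily the exact command from this guide):

```shell
# Enable haproxy and keepalived at boot and start them immediately
systemctl enable --now haproxy keepalived
```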
You can enable HTTPS in your load balancing service, e.g., by using a certificate manager (such as Certbot) to acquire certificates and enable HTTPS for your Seafile cluster. After enabling HTTPS, you have to change the relevant URLs in seahub_settings.py and .env from the http:// prefix to https://.
Please refer here for the details about the cluster requirements for all nodes in Seafile cluster. In general, we recommend that each node should have at least 2G RAM and a 2-core CPU (> 2GHz).
Two tools are suggested and can be installed with official installation guide on all nodes:
kubectl
k8s control plane tool (e.g., kubeadm)
After installation, you need to start the k8s control plane service on each node and refer to the k8s official manual for creating a cluster.
Tip
Although we recommend installing the k8s control plane tool on each node, it does not mean that we will use every node as a control plane node; it is simply a necessary tool to create or join a K8S cluster. For details, please refer to the above link about creating or joining a cluster.
"},{"location":"setup/cluster_deploy_with_k8s/#download-k8s-yaml-files-for-seafile-cluster-without-frontend-node","title":"Download K8S YAML files for Seafile cluster (without frontend node)","text":"
Here we suppose you download the YAML files to /opt/seafile-k8s-yaml. They mainly include:
seafile-xx-deployment.yaml for frontend and backend services pod management and creation,
seafile-service.yaml for exposing Seafile services to the external network,
seafile-persistentVolume.yaml for defining the location of a volume used for persistent storage on the host
seafile-persistentvolumeclaim.yaml for declaring the use of persistent storage in the container.
For further configuration details, you can refer to the official documents.
"},{"location":"setup/cluster_deploy_with_k8s/#modify-seafile-envyaml-and-seafile-secretyaml","title":"Modify seafile-env.yaml and seafile-secret.yaml","text":"
Similar to the Docker-based deployment, the K8S deployment of the Seafile cluster also supports using files to configure the startup process. You can modify common environment variables with
nano /opt/seafile-k8s-yaml/seafile-env.yaml\n
and sensitive information (e.g., password) by
nano /opt/seafile-k8s-yaml/seafile-secret.yaml\n
For seafile-secret.yaml
To modify sensitive information (e.g., password), you need to convert the password into base64 encoding before writing it into the seafile-secret.yaml file:
echo -n '<your-value>' | base64\n
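For instance, encoding a database password and placing it in the secret might look like the fragment below (the secret name and key are illustrative; use the names expected by your seafile-secret.yaml):

```yaml
# Illustrative only -- match the metadata.name and keys in your seafile-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: seafile-secret
type: Opaque
data:
  DB_PASSWD: cGFzc3dvcmQ=   # output of: echo -n 'password' | base64
```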
Warning
Fields marked with <...> are required. Please make sure these items are filled in, otherwise Seafile server may not run properly.
You can now use the following command to initialize the Seafile cluster (Seafile's K8S resources will be placed in the namespace seafile for easier management):
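A sketch of such a command, assuming the YAML files live under /opt/seafile-k8s-yaml as above:

```shell
# Create the namespace and apply all resource files into it (path as assumed above)
kubectl create namespace seafile
kubectl apply -f /opt/seafile-k8s-yaml/ -n seafile
```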
When Seafile cluster is initializing, it will run with the following conditions:
Only the backend service is present (i.e., only the Seafile backend K8S resource file exists)
CLUSTER_INIT_MODE=true
Success
You can check whether the initialization process is done by looking for the following information in kubectl logs seafile-xxxx -n seafile:
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n\n\n[2024-11-21 02:22:37] Updating version stamp\nStart init\n\nInit success\n
When the initialization is complete, the server will stop automatically (because no further operations will be performed after initialization is completed).
We recommend that you check whether the contents of the configuration files in /opt/seafile/shared/seafile/conf are correct when going to next step, which are automatically generated during the initialization process.
"},{"location":"setup/cluster_deploy_with_k8s/#put-the-license-into-optseafileshared","title":"Put the license into /opt/seafile/shared","text":"
If you have a seafile-license.txt license file, first locate the /opt/seafile/shared directory generated during initialization, then simply put the license file in this path.
Finally you can use the tar -zcvf and tar -zxvf commands to package the entire /opt/seafile/shared directory of the current node, copy it to other nodes, and unpack it to the same directory to take effect on all nodes.
If the license file has a different name or cannot be read, Seafile server will start in trial mode, which allows at most THREE users
"},{"location":"setup/cluster_deploy_with_k8s/#download-frontend-services-yaml-and-restart-pods-to-start-seafile-server","title":"Download frontend service's YAML and restart pods to start Seafile server","text":"
Modify seafile-env.yaml, and set CLUSTER_INIT_MODE to false (i.e., disable initialization mode)
Run the following command to restart the pods and thereby restart the Seafile cluster:
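One hedged way to do this, assuming the frontend/backend workloads are managed as Deployments in the seafile namespace (adjust names to your YAML files):

```shell
# Rolling-restart all Deployments in the seafile namespace;
# the controller recreates the pods with the updated settings
kubectl rollout restart deployment -n seafile
```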
Tip
If you modify configurations in /opt/seafile/shared/seafile/conf or YAML files in /opt/seafile-k8s-yaml/, you still need to restart the services for the modifications to take effect.
You can view the pod's log to check whether the startup progresses normally. You will see the following message if the server is running normally:
*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 20\n*** Running /scripts/enterpoint.sh...\n2024-11-21 03:02:35 Nginx ready \n\n2024-11-21 03:02:35 This is an idle script (infinite loop) to keep container running. \n---------------------------------\n\nSeafile cluster frontend mode\n\n---------------------------------\n\n\nStarting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\nSeahub is started\n\nDone.\n
Please refer here for further advanced operations.
"},{"location":"setup/helm_chart_cluster/","title":"Deploy Seafile cluster with Kubernetes (K8S) by Seafile Helm Chart","text":"
This manual explains how to deploy and run a Seafile cluster on a Linux server using Seafile Helm Chart (chart hereafter). You can also refer to here to use K8S resource files to deploy a Seafile cluster in your K8S cluster.
Please refer here for the details about the cluster requirements for all nodes in Seafile cluster. In general, we recommend that each node should have at least 2G RAM and a 2-core CPU (> 2GHz).
Two tools are suggested and can be installed with official installation guide on all nodes:
kubectl
k8s control plane tool (e.g., kubeadm)
After installation, you need to start the k8s control plane service on each node and refer to the k8s official manual for creating a cluster.
Tip
Although we recommend installing the k8s control plane tool on each node, it does not mean that we will use every node as a control plane node; it is simply a necessary tool to create or join a K8S cluster. For details, please refer to the above link about creating or joining a cluster.
It is not necessary to use the my-values.yaml we provide (i.e., you can create an empty my-values.yaml and add only the required fields, as the others have default values defined in our chart), since copying it wholesale reduces the flexibility of deploying with Helm. However, it documents the formats in which Seafile Helm Chart reads these configurations, as well as all the environment variables and secret variables that can be set directly.
In addition, you can also use a custom storageClassName for the persistence directory used by Seafile. You only need to specify storageClassName in the seafile.configs.seafileDataVolume object in my-values.yaml:
seafile:\n configs:\n seafileDataVolume:\n storageClassName: <your seafile storage class name>\n ...\n
You can check any frontend node in the Seafile cluster. If the following information is output, the Seafile cluster is running normally:
Defaulted container \"seafile-frontend\" out of: seafile-frontend, set-ownership (init)\n*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 21\n*** Running /scripts/enterpoint.sh...\n2025-02-13 09:23:49 Nginx ready \n2025-02-13 09:23:49 This is an idle script (infinite loop) to keep container running. \n\n---------------------------------\n\nSeafile cluster frontend mode\n\n---------------------------------\n\n\nStarting seafile server, please wait ...\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(86): fileserver: web_token_expire_time = 3600\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(98): fileserver: max_index_processing_threads= 3\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(111): fileserver: fixed_block_size = 8388608\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(123): fileserver: max_indexing_threads = 1\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(138): fileserver: put_head_commit_request_timeout = 10\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(150): fileserver: skip_block_hash = 0\n[seaf-server] [2025-02-13 09:23:50] [INFO] ../common/seaf-utils.c(581): Use database Mysql\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(243): fileserver: worker_threads = 10\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(256): fileserver: backlog = 32\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(267): fileserver: verify_client_blocks = 1\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(289): fileserver: cluster_shared_temp_file_mode = 600\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(336): fileserver: check_virus_on_web_upload = 0\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(362): fileserver: enable_async_indexing = 0\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(374): fileserver: async_indexing_threshold = 700\n[seaf-server] [2025-02-13 
09:23:50] [INFO] http-server.c(386): fileserver: fs_id_list_request_timeout = 300\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(399): fileserver: max_sync_file_count = 100000\n[seaf-server] [2025-02-13 09:23:50] [WARNING] ../common/license.c(716): License file /opt/seafile/seafile-license.txt does not exist, allow at most 3 trial users\nLicense file /opt/seafile/seafile-license.txt does not exist, allow at most 3 trial users\n[seaf-server] [2025-02-13 09:23:50] [INFO] filelock-mgr.c(1397): Cleaning expired file locks.\n[2025-02-13 09:23:52] Start Monitor \n[2025-02-13 09:23:52] Start seafevents.main \n/opt/seafile/seafile-pro-server-12.0.9/seahub/seahub/settings.py:1101: SyntaxWarning: invalid escape sequence '\\w'\nmatch = re.search('^EXTRA_(\\w+)', attr)\n/opt/seafile/seafile-pro-server-12.0.9/seahub/thirdpart/seafobj/mc.py:13: SyntaxWarning: invalid escape sequence '\\S'\nmatch = re.match('--SERVER\\\\s*=\\\\s*(\\S+)', mc_options)\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\n\n\n----------------------------------------\nSuccessfully created seafile admin\n----------------------------------------\n\n[seafevents] [2025-02-13 09:23:55] [INFO] root:82 LDAP is not set, disable ldap sync.\n[seafevents] [2025-02-13 09:23:55] [INFO] virus_scan:51 [virus_scan] scan_command option is not found in seafile.conf, disable virus scan.\n[seafevents] [2025-02-13 09:23:55] [INFO] seafevents.app.mq_handler:127 Subscribe to channels: {'seaf_server.stats', 'seahub.stats', 'seaf_server.event', 'seahub.audit'}\n[seafevents] [2025-02-13 09:23:55] [INFO] root:534 Start counting user activity info..\n[seafevents] [2025-02-13 09:23:55] [INFO] root:547 [UserActivityCounter] update 0 items.\n[seafevents] [2025-02-13 09:23:55] [INFO] root:240 Start counting traffic info..\n[seafevents] [2025-02-13 09:23:55] [INFO] root:268 Traffic counter finished, total time: 0.0003578662872314453 seconds.\n[seafevents] [2025-02-13 09:23:55] [INFO] root:23 Start file 
updates sender, interval = 300 sec\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:57 Can not start work weixin notice sender: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [INFO] root:131 search indexer is started, interval = 600 sec\n[seafevents] [2025-02-13 09:23:55] [INFO] root:56 seahub email sender is started, interval = 1800 sec\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:17 Can not start ldap syncer: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:18 Can not start virus scanner: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [INFO] root:35 Start data statistics..\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:40 Can not start content scanner: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:46 Can not scan repo old files auto del days: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [INFO] root:182 Start counting total storage..\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:78 Can not start filename index updater: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [INFO] root:113 search wiki indexer is started, interval = 600 sec\n[seafevents] [2025-02-13 09:23:55] [INFO] root:87 Start counting file operations..\n[seafevents] [2025-02-13 09:23:55] [INFO] root:403 Start counting monthly traffic info..\n[seafevents] [2025-02-13 09:23:55] [INFO] root:491 Monthly traffic counter finished, update 0 user items, 0 org items, total time: 0.0905158519744873 seconds.\n[seafevents] [2025-02-13 09:23:55] [INFO] root:203 [TotalStorageCounter] No results from seafile-db.\n[seafevents] [2025-02-13 09:23:55] [INFO] root:169 [FileOpsCounter] Finish counting file operations in 0.09510159492492676 seconds, 0 added, 0 deleted, 0 visited, 0 modified\n\nSeahub is started\n\nDone.\n
If you have a seafile-license.txt license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile/shared. If you have modified the path, save the license file under your custom path.
If the license file has a different name or cannot be read, Seafile server will start in trial mode, which allows at most THREE users
Please refer here for further advanced operations.
"},{"location":"setup/helm_chart_single_node/","title":"Setup Seafile with a single K8S pod with Seafile Helm Chart","text":"
This manual explains how to deploy and run Seafile server on a Linux server using the Seafile Helm Chart (chart hereafter) in a single pod (i.e., single-node mode). Compared to Setup by K8S resource files, deployment with the Helm chart simplifies the deployment process and provides more flexible deployment control, which is the way we recommend for deployment with K8S.
For specific environment and configuration requirements, please refer to the description of the Docker-based Seafile single-node deployment. Please also refer to the description of the K8S tool section here.
The persistent data directory used in the Docker-based deployment, /opt/seafile-data, is still adopted in this manual. In addition, all K8S YAML files will be placed in /opt/seafile-k8s-yaml (replace it when following these instructions if you would like to use another path).
Note that we do not cover deploying the basic services (e.g., Redis, MySQL and Elasticsearch) or Seafile-compatible components (e.g., SeaDoc) on K8S in our documentation. If you need to run these services on K8S, you can adapt them following the approach shown in this document.
Please refer here for the details of the system requirements of the Seafile service. Note that these requirements apply to all nodes where Seafile pods may be scheduled in your K8S cluster. In general, we recommend that each node have at least 2 GB RAM and a 2-core CPU (> 2 GHz).
It is not necessary to use the my-values.yaml we provide, as doing so reduces the flexibility of deploying with Helm; you can instead create an empty my-values.yaml and add only the required fields, since the others have default values in our chart. However, our my-values.yaml documents the formats in which the Seafile Helm Chart reads these configurations, as well as all the environment variables and secret variables that can be set directly.
In addition, you can also specify a custom storageClassName for the persistence directory used by Seafile. You only need to set storageClassName in the seafile.configs.seafileDataVolume object in my-values.yaml:
seafile:\n configs:\n seafileDataVolume:\n storageClassName: <your seafile storage class name>\n ...\n
After installing the chart, the Seafile pod should start up automatically.
About Seafile service
The default service type of Seafile is LoadBalancer. You should configure a K8S load balancer for Seafile, or specify at least one external IP that can be accessed from external networks.
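For illustration, a LoadBalancer Service with a manually bound external IP looks roughly like the sketch below. This is only an example: the actual Service is created by the chart, and the name, selector and IP are placeholders you should adapt to your cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: seafile            # placeholder; the chart manages the real Service
  namespace: seafile
spec:
  type: LoadBalancer
  externalIPs:
  - 203.0.113.10           # an IP reachable from external networks
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: seafile           # placeholder selector
```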
Important for deployment
By default, Seafile will access the Redis (the default cache from Seafile 13) and Elasticsearch (Pro only) with the specific service name:
Redis: redis with port 6379
Elasticsearch: elasticsearch with port 9200
If the above services are:
Not in your K8S pods (including using an external service)
With different service name
With different server port
Please modify the files in /opt/seafile-data/seafile/conf (especially seafevents.conf, seafile.conf and seahub_settings.py) to correct the configurations for the above services, otherwise the Seafile server cannot start normally. Then restart the Seafile server:
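For example, if your Elasticsearch runs outside the cluster under a different host name, the [INDEX FILES] section of seafevents.conf would point at it like this (a sketch only; the host and port below are placeholders, and you should keep your other options unchanged):

```ini
[INDEX FILES]
enabled = true
external_es_server = true
es_host = es.example.com
es_port = 9200
```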
"},{"location":"setup/helm_chart_single_node/#activating-the-seafile-license-pro","title":"Activating the Seafile License (Pro)","text":"
If you have a seafile-license.txt license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile-data. If you have modified the path, save the license file under your custom path.
If the license file has a different name or cannot be read, Seafile server will start in trial mode, which allows at most THREE users
This document mainly describes how to manage and maintain Seafile deployed through our K8S deployment document. At the same time, if you are already proficient in using kubectl commands to manage K8S resources, you can also customize the deployment solutions we provide.
Namespaces for Seafile K8S deployment
Our documentation provides two deployment solutions for both single-node and cluster deployment (via Seafile Helm Chart and K8S resource files), both of which can be highly customized.
Regardless of which deployment method you use, in our newer manuals (usually for versions after Seafile 12.0.9), Seafile-related K8S resources (including the related pods, services and persistent volumes) are defined in the seafile namespace. In previous versions, you may have deployed Seafile in the default namespace; in that case, when referring to this document for Seafile K8S resource management, be sure to remove -n seafile from the commands.
Similar to a Docker installation, you can manage containers through kubectl commands. For example, you can use the following commands to check whether the relevant resources have started successfully and whether the relevant services can be accessed normally. First, execute the following command and note the pod name with the seafile- prefix (such as seafile-748b695648-d6l4g)
"},{"location":"setup/k8s_advanced_management/#k8s-gateway-and-https","title":"K8S Gateway and HTTPS","text":"
Since the Ingress feature is being superseded in newer versions of K8S (even the commonly used Nginx-Ingress will not be deployed after 1.24), this article introduces how to use the newer K8S Gateway feature to implement Seafile service exposure and load balancing.
Still use Nginx-Ingress
If your K8S cluster is still running an old version and still uses Nginx-Ingress, you can follow here to set up the ingress controller and HTTPS. We sincerely thank Datamate for providing an example of this configuration.
For the details and features of the K8S Gateway, please refer to the K8S official document; you can simply install it by
The Gateway API requires configuration of three API categories in its resource model: - GatewayClass: Defines a group of gateways with the same configuration, managed by the controller that implements the class. - Gateway: Defines an instance of traffic-handling infrastructure, which can be thought of as a load balancer. - HTTPRoute: Defines HTTP-specific rules for mapping traffic from gateway listeners to representations of backend network endpoints. These endpoints are typically represented as Services.
The GatewayClass resource serves the same purpose as the IngressClass in the older Ingress API, similar to the StorageClass in the Storage API. It defines the categories of Gateways that can be created. Typically, this resource is provided by your infrastructure platform, such as EKS or GKE. It can also be provided by a third-party controller, such as Nginx-gateway or Istio-gateway.
Here we take Nginx-gateway as the example; you can install it following the official document. After installation, you can view the installation status with the following command:
# `gc` means the `gatewayclass`, and it is the same as `kubectl get gatewayclass`\nkubectl get gc \n\n#NAME CONTROLLER ACCEPTED AGE\n#nginx gateway.nginx.org/nginx-gateway-controller True 22s\n
Typically, after you install the GatewayClass, your cloud provider will provide you with a load-balancing IP, which is visible in the GatewayClass. If this IP is not assigned, you can manually bind it to an IP that can be accessed from external networks.
Gateway is used to describe an instance of traffic-processing infrastructure. Usually, a Gateway defines a network endpoint that can be used to process traffic, that is, to filter, balance and split traffic to Services and other backends. For example, it can represent a cloud load balancer, or a cluster proxy server configured to accept HTTP traffic. As above, please refer to the official documentation for a detailed description of Gateway. Here is only a simple reference configuration for Seafile:
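A minimal Gateway for Seafile might look like the following sketch, which follows the Gateway API v1 schema; the resource name is a placeholder and the gatewayClassName assumes the Nginx-gateway installed above:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: seafile-gateway     # placeholder name
  namespace: seafile
spec:
  gatewayClassName: nginx   # the GatewayClass created by Nginx-gateway
  listeners:
  - name: http
    protocol: HTTP
    port: 80
```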
The HTTPRoute category specifies the routing behavior of HTTP requests from a Gateway listener to backend network endpoints. For Service backends, the implementation can represent the backend network endpoint as the Service's IP or its backing Endpoints. An HTTPRoute represents configuration that is applied to the underlying Gateway implementation; for example, defining a new HTTPRoute may result in additional traffic routes being configured in a cloud load balancer or in-cluster proxy server. As above, please refer to the official documentation for a detailed description of the HTTPRoute resource. Here is only a reference configuration that applies to this document.
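An HTTPRoute mapping seafile.example.com to the Seafile Service could be sketched as follows (the route name, parent Gateway name and backend Service name are placeholders following the Gateway API v1 schema):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: seafile-route        # placeholder name
  namespace: seafile
spec:
  parentRefs:
  - name: seafile-gateway    # the Gateway defined above
  hostnames:
  - seafile.example.com
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: seafile          # placeholder backend Service name
      port: 80
```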
After installing or defining the GatewayClass, Gateway and HTTPRoute, you can enable this feature with the following command and visit your Seafile server at the URL http://seafile.example.com/:
When using the K8S Gateway, a common way to enable HTTPS is to add the relevant TLS listener information to the Gateway resource. You can refer here for further details. We provide a simple way here so that you can quickly enable HTTPS for your Seafile K8S deployment.
1. Create a secret resource (seafile-tls-cert) for your TLS certificates:
kubectl create secret tls seafile-tls-cert \\\n--cert=<your path to fullchain.pem> \\\n--key=<your path to privkey.pem>\n
2. Use the TLS in your Gateway resource and enable HTTPS:
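The HTTPS listener added to the Gateway might look like the sketch below, which follows the Gateway API v1 schema and references the seafile-tls-cert secret created above (the Gateway name and hostname are placeholders):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: seafile-gateway          # placeholder name
  namespace: seafile
spec:
  gatewayClassName: nginx
  listeners:
  - name: https
    protocol: HTTPS
    port: 443
    hostname: seafile.example.com
    tls:
      mode: Terminate            # terminate TLS at the gateway
      certificateRefs:
      - kind: Secret
        name: seafile-tls-cert   # the secret created in step 1
```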
Now you can access your Seafile service at https://<your domain>/
"},{"location":"setup/k8s_advanced_management/#log-routing-and-aggregating-system","title":"Log routing and aggregating system","text":"
Similar to single-node deployment, you can browse the log files of a running Seafile directly in the persistent volume directory (i.e., <path>/seafile/logs). The difference is that when using K8S to deploy a Seafile cluster (especially in a cloud environment), the persistent volume created is usually shared and synchronized across all nodes. However, the logs generated by the Seafile service do not record which node they came from, so when browsing the files in that folder it may be difficult to identify which node generated them. Therefore, the solution proposed here is:
Record the generated logs to standard output. In this way, the logs can be distinguished per node via kubectl logs (but all types of logs will be output together). You can enable this feature (it is enabled by default in a K8S Seafile cluster but not in a K8S single-pod Seafile) by setting SEAFILE_LOG_TO_STDOUT to true in seafile-env.yaml:
Since the logs from step 1 can be distinguished between nodes but are aggregated and output together, they are not convenient for log retrieval. You therefore have to route the standard-output logs (i.e., distinguish logs by the corresponding component name) and re-record them in a new file or upload them to a log aggregation system (e.g., Loki).
Currently in the K8S environment, the commonly used log routing plugins are:
Fluent Bit
Fluentd
Logstash
Promtail (also a part of Loki)
Fluent Bit and Promtail are more lightweight (i.e., they consume fewer system resources), while Promtail only supports transferring logs to Loki. Therefore, this document mainly introduces log routing through Fluent Bit, a fast, lightweight logs and metrics agent. It is a CNCF graduated sub-project under the umbrella of Fluentd, licensed under the terms of the Apache License v2.0. You should first deploy Fluent Bit in your K8S cluster by following the official document, then modify the Fluent-Bit pod settings to mount a new directory for loading the configuration files:
For example, here we use /opt/fluent-bit/confs (it has to be non-shared). The parsers will be defined in /opt/fluent-bit/confs/parsers.conf, and each log type (e.g., seahub's log, seafevents' log) will be defined in /opt/fluent-bit/confs/*-log.conf. Each .conf file defines several Fluent-Bit data pipeline components:
Pipeline Description Required/Optional INPUT Specifies where and how Fluent-Bit gets the original log information, and assigns a tag to each log record after reading. Required PARSER Parses the read log records. For K8S Docker-runtime logs, they are usually in JSON format. Required FILTER Filters and selects log records with a specified tag, and assigns a new tag to new records. Optional OUTPUT Tells Fluent-Bit what format the log records for the specified tag will be in and where to output them (such as a file, Elasticsearch, Loki, etc.). Required
Warning
PARSER components can only be stored in /opt/fluent-bit/confs/parsers.conf, otherwise Fluent-Bit cannot start normally.
As described above, each container generates a log file (usually in /var/log/containers/<container-name>-xxxxxx.log), so you need to prepare an importer by adding the following (for more details, please refer to the official document about the TAIL input plugin) in /opt/fluent-bit/confs/seafile-log.conf:
[INPUT]\n Name tail\n Path /var/log/containers/seafile-frontend-*.log\n Buffer_Chunk_Size 2MB\n Buffer_Max_Size 10MB\n Docker_Mode On\n Docker_Mode_Flush 5\n Tag seafile.*\n Parser Docker # for definition, please see the next section as well\n\n[INPUT]\n Name tail\n Path /var/log/containers/seafile-backend-*.log\n Buffer_Chunk_Size 2MB\n Buffer_Max_Size 10MB\n Docker_Mode On\n Docker_Mode_Flush 5\n Tag seafile.*\n Parser Docker\n
The above defines two importers, which monitor the seafile-frontend and seafile-backend services respectively. They are written together here because, for a given node, you may not know when it will run the frontend service and when it will run the backend service, but both share the same tag prefix seafile..
Each input has to use a parser to parse the logs and pass them to the filter. Here, a parser named Docker is created to parse the logs generated by the K8S-Docker-runtime container. The parser is placed in /opt/fluent-bit/confs/parsers.conf (for more details, please refer to the official document about the JSON parser):
[PARSER]\n Name Docker\n Format json\n Time_Key time\n Time_Format %Y-%m-%dT%H:%M:%S.%LZ\n
Log records after parsing
The logs of the Docker container are saved in /var/log/containers in JSON format (see the sample below), which is why we use the JSON format in the above parser.
When these logs are obtained by the importer and parsed by the parser, they will become independent log records with the following fields:
log: The original log content (i.e., the same as you see in kubectl logs seafile-xxx -n seafile) with an extra line break at the end (i.e., \\n). This is the field we need to save or upload to the log aggregation system in the end.
stream: The stream the original log came from. stdout means standard output.
time: The time when the log is recorded in the corresponding stream (ISO 8601 format).
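Put together, a parsed record from a Docker-runtime container log is a JSON object like the following synthetic example (the field values are illustrative only):

```json
{
  "log": "[seaf-server] [2025-02-13 09:23:50] [INFO] filelock-mgr.c(1397): Cleaning expired file locks.\n",
  "stream": "stdout",
  "time": "2025-02-13T09:23:50.123456789Z"
}
```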
Add two filters in /opt/fluent-bit/confs/seafile-log.conf for record filtering and routing. Here, the record_modifier filter selects the useful keys in the log records (see the tip above; only the log field is what we need) and the rewrite_tag filter routes logs according to specific rules:
[FILTER] \n Name record_modifier\n Match seafile.*\n Allowlist_key log\n\n\n[FILTER]\n Name rewrite_tag\n Match seafile.*\n Rule $log ^.*\\[seaf-server\\].*$ seaf-server false # for seafile's logs\n Rule $log ^.*\\[seahub\\].*$ seahub false # for seahub's logs\n Rule $log ^.*\\[seafevents\\].*$ seafevents false # for seafevents' logs\n Rule $log ^.*\\[seafile-slow-rpc\\].*$ seafile-slow-rpc false # for slow-rpc's logs\n
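If you only want to re-record the routed logs into per-component files rather than a log aggregation system, a file outputer could be sketched as follows (assuming the target directory /var/log/seafile exists and is writable in the Fluent-Bit container):

```ini
[OUTPUT]
    Name   file
    Match  seaf-server
    Path   /var/log/seafile
    File   seaf-server.log
```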
"},{"location":"setup/k8s_advanced_management/#output-logs-to-loki","title":"Output logs to Loki","text":"
Loki is a multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost-effective and easy to operate. The Fluent-Bit built-in loki output plugin allows you to send your logs or events to a Loki service. It supports data enrichment with Kubernetes labels, custom label keys and a Tenant ID, among others.
Alternative Fluent-Bit Loki plugin by Grafana
For sending logs to Loki, there are two plugins for Fluent-Bit:
The built-in loki plugin, maintained officially by Fluent-Bit; we will use it in this part because it provides the most complete features.
Grafana-loki plugin maintained by Grafana Labs.
Since outputers are distinguished only by the tag they match (because Fluent-Bit treats each plugin as part of a tag's workflow), add a separate outputer for each log type:
Seaf-server log: Add an outputer to /opt/fluent-bit/confs/seaf-server-log.conf:
[OUTPUT]\n Name loki\n Match seaf-server\n Host <your Loki's host>\n port <your Loki's port>\n labels job=fluentbit, node_name=<your-node-name>, node_id=<your-node-id> # node_name and node_id is optional, but recommended for identifying the source node\n
seahub log: Add an outputer to /opt/fluent-bit/confs/seahub-log.conf:
[OUTPUT]\n Name loki\n Match seahub\n Host <your Loki's host>\n port <your Loki's port>\n labels job=fluentbit, node_name=<your-node-name>, node_id=<your-node-id> # node_name and node_id is optional, but recommended for identifying the source node\n
seafevents log: Add an outputer to /opt/fluent-bit/confs/seafevents-log.conf:
[OUTPUT]\n Name loki\n Match seafevents\n Host <your Loki's host>\n port <your Loki's port>\n labels job=fluentbit, node_name=<your-node-name>, node_id=<your-node-id> # node_name and node_id is optional, but recommended for identifying the source node\n
seafile-slow-rpc log: Add an outputer to /opt/fluent-bit/confs/seafile-slow-rpc-log.conf:
[OUTPUT]\n Name loki\n Match seafile-slow-rpc\n Host <your Loki's host>\n port <your Loki's port>\n labels job=fluentbit, node_name=<your-node-name>, node_id=<your-node-id> # node_name and node_id is optional, but recommended for identifying the source node\n
Cloud Loki instance
If you are using a cloud Loki instance, you can follow the Fluent-Bit Loki plugin document to fill in all necessary fields. Usually, the following fields are additionally needed for a cloud Loki service:
tls
tls.verify
http_user
http_passwd
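A cloud Loki outputer combining these fields might look like the sketch below (the host, user and key are placeholders; check your provider's endpoint and the Fluent-Bit loki plugin document for exact values):

```ini
[OUTPUT]
    Name        loki
    Match       seaf-server
    Host        <your cloud Loki host>
    Port        443
    tls         on
    tls.verify  on
    http_user   <your user id>
    http_passwd <your API key>
    labels      job=fluentbit
```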
"},{"location":"setup/k8s_single_node/","title":"Setup Seafile with a single K8S pod with K8S resources files","text":"
This manual explains how to deploy and run Seafile server on a Linux server using Kubernetes (K8S hereafter) in a single pod (i.e., single-node mode). This document is essentially an extended description of the Docker-based Seafile single-node deployment (supporting both CE and Pro).
For specific environment and configuration requirements, please refer to the description of the Docker-based Seafile single-node deployment. Please also refer to the description of the K8S tool section here.
Please refer here for the details of the system requirements of the Seafile service. Note that these requirements apply to all nodes where Seafile pods may be scheduled in your K8S cluster. In general, we recommend that each node have at least 2 GB RAM and a 2-core CPU (> 2 GHz).
The persistent data directory used in the Docker-based deployment, /opt/seafile-data, is still adopted in this manual. In addition, all K8S YAML files will be placed in /opt/seafile-k8s-yaml (replace it when following these instructions if you would like to use another path).
Note that we do not cover deploying the basic services (e.g., Memcached, MySQL and Elasticsearch) or Seafile-compatible components (e.g., SeaDoc) on K8S in our documentation. If you need to run these services on K8S, you can adapt them following the approach shown in this document.
"},{"location":"setup/k8s_single_node/#down-load-the-yaml-files-for-seafile-server","title":"Download the YAML files for Seafile Server","text":"Pro editionCommunity edition
Here we assume you download the YAML files into /opt/seafile-k8s-yaml; they mainly include:
seafile-deployment.yaml for Seafile server pod management and creation,
seafile-service.yaml for exposing Seafile services to the external network,
seafile-persistentVolume.yaml for defining the location of a volume used for persistent storage on the host
seafile-persistentvolumeclaim.yaml for declaring the use of persistent storage in the container.
For further configuration details, you can refer to the official documents.
"},{"location":"setup/k8s_single_node/#modify-seafile-envyaml-and-seafile-secretyaml","title":"Modify seafile-env.yaml and seafile-secret.yaml","text":"
Similar to the Docker-based deployment, a Seafile K8S deployment also supports using files to configure the startup process. You can modify common environment variables with
nano /opt/seafile-k8s-yaml/seafile-env.yaml\n
and sensitive information (e.g., password) by
nano /opt/seafile-k8s-yaml/seafile-secret.yaml\n
For seafile-secret.yaml
To modify sensitive information (e.g., password), you need to convert the password into base64 encoding before writing it into the seafile-secret.yaml file:
echo -n '<your-value>' | base64\n
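Equivalently, if you prefer scripting the encoding (for example when generating seafile-secret.yaml from a template), the same conversion can be done in Python. This is a small illustrative helper, not part of Seafile:

```python
import base64

def to_b64(value: str) -> str:
    # K8S Secret values must be base64-encoded strings
    return base64.b64encode(value.encode("utf-8")).decode("ascii")

# Same output as: echo -n 'my-password' | base64
print(to_b64("my-password"))  # bXktcGFzc3dvcmQ=
```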
Warning
Fields marked with <...> are required; please make sure these items are filled in, otherwise the Seafile server may not run properly.
By default, Seafile (Pro) will access the Memcached and Elasticsearch with the specific service name:
Memcached: memcached with port 11211
Elasticsearch: elasticsearch with port 9200
If the above services are:
Not in your K8S pods (including using an external service)
With different service name
With different server port
Please modify the files in /opt/seafile-data/seafile/conf (especially seafevents.conf, seafile.conf and seahub_settings.py) to correct the configurations for the above services, otherwise the Seafile server cannot start normally. Then restart the Seafile server:
"},{"location":"setup/k8s_single_node/#activating-the-seafile-license-pro","title":"Activating the Seafile License (Pro)","text":"
If you have a seafile-license.txt license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile-data. If you have modified the path, save the license file under your custom path.
If the license file has a different name or cannot be read, Seafile server will start in trial mode, which allows at most THREE users
Please refer here for further advanced operations.
"},{"location":"setup/migrate_backends_data/","title":"Migrate data between different backends","text":"
Seafile supports data migration between the local filesystem, S3, Ceph, Swift and Alibaba OSS via a built-in script. Before migration, you have to ensure that both the source and destination storage hosts can be accessed normally.
Migration to or from S3
Since version 11, when you migrate from S3 to other storage servers or from other storage servers to S3, you have to use V4 authentication protocol. This is because version 11 upgrades to Boto3 library, which fails to list objects from S3 when it's configured to use V2 authentication protocol.
"},{"location":"setup/migrate_backends_data/#copy-seafileconf-and-use-new-s3-configurations","title":"Copy seafile.conf and use new S3 configurations","text":"
During the migration process, Seafile needs to know where the data will be migrated to. The easiest way is to copy the original seafile.conf to a new path, and then use the new S3 configurations in this file.
Deploy with DockerDeploy from binary package
Warning
For deployment with Docker, the new seafile.conf has to be put in the persistent directory (e.g., /opt/seafile-data/seafile.conf) used by the Seafile service. Otherwise the script cannot locate the new configuration file.
Then you can follow here to use the new S3 configurations in the new seafile.conf. By the way, if you want to migrate to a local file system instead, an example of the new seafile.conf is as follows:
Since the data migration process does not affect the operation of the Seafile service, if the original S3 data is modified during this process, that data may not be synchronized with the migrated data. Therefore, we recommend that you stop the Seafile service before executing the migration procedure.
cd /opt/seafile/seafile-server-latest\n./seahub.sh stop\n./seafile.sh stop\n
"},{"location":"setup/migrate_backends_data/#run-migratesh-to-initially-migrate-objects","title":"Run migrate.sh to initially migrate objects","text":"
This step will migrate most of the objects from the source storage to the destination storage. You don't need to stop the Seafile service at this stage, as it may take quite a long time to finish. Since the service is not stopped, some new objects may be added to the source storage during migration. Those objects will be handled in the next step:
Speed-up migrating large number of objects
If you have millions of objects in the storage (especially fs objects), it may take quite a long time to migrate all objects, and more than half of that time is spent checking whether an object already exists in the destination storage. In this situation, you can modify the nworker and maxsize variables in migrate.py:
However, if the two values (i.e., nworker and maxsize) are too large, the improvement in data migration speed may not be obvious because the disk I/O bottleneck has been reached.
Encrypted storage backend data (deprecated)
If you have an encrypted storage backend, you can use this script to migrate and decrypt the data from that backend to a new one. You can add the --decrypt option in calling the script, which will decrypt the data while reading it, and then write the unencrypted data to the new backend:
./migrate.sh /opt --decrypt\n
Deploy with DockerDeploy from binary package
# make sure you are in the container and in directory `/opt/seafile/seafile-server-latest`\n./migrate.sh /shared\n\n# exit container and stop it\nexit\ndocker compose down\n
# make sure you are in the directory `/opt/seafile/seafile-server-latest`\n./migrate.sh /opt\n
Success
You can see the following message if the migration process is done:
2025-01-15 05:49:39,408 Start to fetch [commits] object from destination\n2025-01-15 05:49:39,422 Start to fetch [fs] object from destination\n2025-01-15 05:49:39,442 Start to fetch [blocks] object from destination\n2025-01-15 05:49:39,677 [commits] [0] objects exist in destination\n2025-01-15 05:49:39,677 Start to migrate [commits] object\n2025-01-15 05:49:39,749 [blocks] [0] objects exist in destination\n2025-01-15 05:49:39,755 Start to migrate [blocks] object\n2025-01-15 05:49:39,752 [fs] [0] objects exist in destination\n2025-01-15 05:49:39,762 Start to migrate [fs] object\n2025-01-15 05:49:40,602 Complete migrate [commits] object\n2025-01-15 05:49:40,626 Complete migrate [blocks] object\n2025-01-15 05:49:40,790 Complete migrate [fs] object\nDone.\n
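For a filesystem destination, one quick way to double-check that all objects arrived is to compare file counts per object type. The helper below is a hypothetical sketch (not part of migrate.sh); the storage paths in the commented usage are placeholders you should adapt:

```python
import os

def count_objects(storage_root: str) -> int:
    """Count every object file under a storage directory tree."""
    return sum(len(files) for _, _, files in os.walk(storage_root))

# Hypothetical usage with placeholder paths:
# for kind in ("commits", "fs", "blocks"):
#     old = count_objects(f"/old-storage/{kind}")
#     new = count_objects(f"/new-storage/{kind}")
#     print(kind, old, new, "OK" if old == new else "MISMATCH")
```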
"},{"location":"setup/migrate_backends_data/#replace-the-original-seafileconf-and-start-seafile","title":"Replace the original seafile.conf and start Seafile","text":"
After running the script, we recommend that you check whether your data already exists on the new S3 storage backend (i.e., the migration was successful; the number and size of files should be the same). Then you can remove the files from the old S3 storage backend and replace the original seafile.conf with the new one:
# make sure you are in the directory `/opt/seafile/seafile-server-latest`\n./seahub.sh start\n./seafile.sh start\n
"},{"location":"setup/migrate_ce_to_pro_with_docker/","title":"Migrate CE to Pro with Docker","text":""},{"location":"setup/migrate_ce_to_pro_with_docker/#preparation","title":"Preparation","text":"
Make sure you are running a Seafile Community Edition version that matches the latest version of the Pro Edition. For example, if the latest Pro Edition is version 13.0, you should first upgrade the Community Edition to version 13.0.
Purchase Seafile Professional license file.
Download the .env and seafile-server.yml of Seafile Pro.
"},{"location":"setup/migrate_ce_to_pro_with_docker/#migrate","title":"Migrate","text":""},{"location":"setup/migrate_ce_to_pro_with_docker/#stop-the-seafile-ce","title":"Stop the Seafile CE","text":"
docker compose down\n
Tip
To ensure data security, it is recommended that you back up your MySQL data
"},{"location":"setup/migrate_ce_to_pro_with_docker/#put-your-licence-file","title":"Put your licence file","text":"
Copy seafile-license.txt to the volume directory of the Seafile CE data. If the directory is /opt/seafile-data, you should put it in /opt/seafile-data/seafile/.
"},{"location":"setup/migrate_ce_to_pro_with_docker/#modify-the-new-seafile-serveryml-and-env","title":"Modify the new seafile-server.yml and .env","text":"
Modify .env based on the configurations from the old .env file. The following fields need special attention; the others should be the same as the old configurations:
Variable Description Default Value SEAFILE_IMAGE The Seafile Pro docker image, whose tag must be equal to or newer than the old Seafile CE docker tag seafileltd/seafile-pro-mc:13.0-latest SEAFILE_ELASTICSEARCH_VOLUME The volume directory of Elasticsearch data /opt/seafile-elasticsearch/data
Other fields (e.g., SEAFILE_VOLUME, SEAFILE_MYSQL_VOLUME, SEAFILE_MYSQL_DB_USER, SEAFILE_MYSQL_DB_PASSWORD) must be consistent with the old configurations.
Tip
The configurations used only for initialization (e.g., INIT_SEAFILE_ADMIN_EMAIL, INIT_SEAFILE_MYSQL_ROOT_PASSWORD) can be removed from .env as well
"},{"location":"setup/migrate_ce_to_pro_with_docker/#replace-seafile-serveryml-and-env","title":"Replace seafile-server.yml and .env","text":"
Replace the old seafile-server.yml and .env with the new, modified files, i.e. (if your old seafile-server.yml and .env are in /opt)
Add [INDEX FILES] section in /opt/seafile-data/seafile/conf/seafevents.conf manually:
Additional system resource requirements
Seafile PE docker requires a minimum of 4 cores and 4 GB RAM because Elasticsearch is deployed simultaneously. If you do not have enough system resources, you can use an alternative search engine, SeaSearch, a more lightweight engine built on the open-source search engine ZincSearch, as the indexer.
Run the following command to start the Seafile Pro container:
docker compose up -d\n
Now you have a Seafile Professional service.
"},{"location":"setup/migrate_non_docker_to_docker/","title":"Migrate from non-docker Seafile deployment to docker","text":"
Note
This document is written for a single node; you have to perform the following operations (except migrating the database) on all nodes if you are using a Seafile cluster
Normally, we recommend that you perform the migration between two different machines according to the solution in this document. If you decide to perform the operation on the same machine, please pay attention to the corresponding tips in the document.
The recommended steps to migrate from non-docker deployment to docker deployment on two different machines are:
Upgrade your Seafile server to the latest version.
Shut down Seafile, Nginx and Memcached according to your situation.
Back up the MySQL database and Seafile library data.
Deploy the Seafile Docker in the new machine.
Recover the Seafile libraries and MySQL database in the new machine.
Start Seafile Docker and shut down the old MySQL (or MariaDB) according to your situation.
"},{"location":"setup/migrate_non_docker_to_docker/#upgrade-your-seafile-server","title":"Upgrade your Seafile server","text":"
You have to upgrade the binary-package installation to the latest version before the migration, and ensure that the system is running normally.
Tip
If you are running a very old version of Seafile, you can follow the FAQ item to migrate to the latest version
"},{"location":"setup/migrate_non_docker_to_docker/#backup-mysql-database-and-seafile-server","title":"Backup MySQL database and Seafile server","text":"
Please follow here to backup:
Backing up MySQL databases
Backing up Seafile library data
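As a hedged sketch of these two backup steps (hosts, paths, database names and credentials below are examples; adjust them to your deployment):

```shell
# Hedged sketch: back up the three Seafile databases and the library data.
# Assumes MySQL/MariaDB runs locally and Seafile has already been stopped.
BACKUP_DIR=/backup
mkdir -p "$BACKUP_DIR/databases" "$BACKUP_DIR/data"

# Dump the databases (each run prompts for the MySQL root password)
for db in ccnet_db seafile_db seahub_db; do
    mysqldump -h 127.0.0.1 -u root -p --opt "$db" > "$BACKUP_DIR/databases/$db.sql"
done

# Copy the library data
rsync -az /opt/seafile/seafile-data "$BACKUP_DIR/data/"
```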
"},{"location":"setup/migrate_non_docker_to_docker/#deploy-the-seafile-docker","title":"Deploy the Seafile Docker","text":"
You can follow here to deploy Seafile with Docker. Please use your old configurations when modifying .env, and make sure the Seafile server runs normally after deployment.
Use an external MySQL service or the old MySQL service
This document describes migrating from the non-Docker version to the Docker version of Seafile between two different machines. We suggest using the MariaDB service from Docker Compose (version 10.11 by default) as the database service after migration. If you would like to use an existing MySQL service, which is usually the case when you migrate on the same host or when the old MySQL service is a dependency of other services, you have to follow here to deploy Seafile.
"},{"location":"setup/migrate_non_docker_to_docker/#recovery-libraries-data-for-seafile-docker","title":"Recovery libraries data for Seafile Docker","text":"
First, stop the Seafile server before recovering the Seafile library data:
docker compose down\n
Then recover the data from the backup:
cp /backup/data/* /opt/seafile-data/seafile\n
"},{"location":"setup/migrate_non_docker_to_docker/#recover-the-database-only-for-the-new-mysql-service-used-in-seafile-docker","title":"Recover the Database (only for the new MySQL service used in Seafile docker)","text":"
Start the database service only:
docker compose up -d --no-deps db\n
Follow here to recover the database data.
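A hedged sketch of this restore step (the container name seafile-mysql and the database names follow the defaults used in this guide; the MYSQL_ROOT_PASSWORD variable is assumed to be set in the db container, as with the stock MariaDB image):

```shell
# Copy the dumps into the db container and load them.
docker cp /backup/databases/ccnet_db.sql seafile-mysql:/tmp/
docker cp /backup/databases/seafile_db.sql seafile-mysql:/tmp/
docker cp /backup/databases/seahub_db.sql seafile-mysql:/tmp/

for db in ccnet_db seafile_db seahub_db; do
    docker exec seafile-mysql sh -c \
        "mysql -uroot -p\"\$MYSQL_ROOT_PASSWORD\" $db < /tmp/$db.sql"
done
```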
Exit the container and stop the MariaDB service:
docker compose down\n
"},{"location":"setup/migrate_non_docker_to_docker/#restart-the-services","title":"Restart the services","text":"
Finally, the migration is complete. You can start the Docker-based Seafile server by restarting the service:
docker compose up -d\n
You can also shut down the old MySQL service now, if it is not a dependency of other services.
Add restart: unless-stopped so that the Seafile container starts automatically when Docker starts. If the Seafile container has been removed (e.g., after docker compose down), it will not start automatically.
"},{"location":"setup/setup_ce_by_docker/","title":"Installation of Seafile Server Community Edition with Docker","text":""},{"location":"setup/setup_ce_by_docker/#system-requirements","title":"System requirements","text":"
Please refer here for system requirements about Seafile CE. In general, we recommend that you have at least 2G RAM and a 2-core CPU (> 2GHz).
The following assumptions and conventions are used in the rest of this document:
/opt/seafile is the directory for storing the Seafile Docker compose files. If you decide to put Seafile in a different directory (which you can), adjust all paths accordingly.
Seafile uses two Docker volumes for persisting data generated in its database and Seafile Docker container. The volumes' host paths are /opt/seafile-mysql and /opt/seafile-data, respectively. It is not recommended to change these paths. If you do, account for it when following these instructions.
All configuration and log files for Seafile and the webserver Nginx are stored in the volume of the Seafile container.
Variable Description Default Value SEAFILE_VOLUME The volume directory of Seafile data /opt/seafile-dataSEAFILE_MYSQL_VOLUME The volume directory of MySQL data /opt/seafile-mysql/dbSEAFILE_CADDY_VOLUME The volume directory of Caddy data used to store certificates obtained from Let's Encrypt's /opt/seafile-caddyINIT_SEAFILE_MYSQL_ROOT_PASSWORD The root password of MySQL (Only required on first deployment) SEAFILE_MYSQL_DB_HOST The host of MySQL dbSEAFILE_MYSQL_DB_PORT The port of MySQL 3306SEAFILE_MYSQL_DB_USER The user of MySQL (database - user can be found in conf/seafile.conf) seafileSEAFILE_MYSQL_DB_PASSWORD The user seafile password of MySQL (required) SEAFILE_MYSQL_DB_CCNET_DB_NAME The database name of ccnet ccnet_dbSEAFILE_MYSQL_DB_SEAFILE_DB_NAME The database name of seafile seafile_dbSEAFILE_MYSQL_DB_SEAHUB_DB_NAME The database name of seahub seahub_dbJWT_PRIVATE_KEY JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters is required for Seafile, which can be generated by using pwgen -s 40 1 (required) SEAFILE_SERVER_HOSTNAME Seafile server hostname or domain (required) SEAFILE_SERVER_PROTOCOL Seafile server protocol (http or https) httpCACHE_PROVIDER The type of cache server used for Seafile. The available options are redis and memcached. Since Seafile 13, it is recommended to use redis as the cache service to support new features, and memcached will no longer be integrated into Seafile Docker by default. 
redisREDIS_HOST Redis server host redisREDIS_PORT Redis server port 6379REDIS_PASSWORD Redis server password (none) MEMCACHED_HOST Memcached server host memcachedMEMCACHED_PORT Memcached server port 11211TIME_ZONE Time zone UTCNOTIFICATION_SERVER_URL The notification server url, leave blank to disable it (none) INIT_SEAFILE_ADMIN_EMAIL Admin username me@example.com (Recommend modifications) INIT_SEAFILE_ADMIN_PASSWORD Admin password asecret (Recommend modifications) NON_ROOT Run Seafile container without a root user false"},{"location":"setup/setup_ce_by_docker/#start-seafile-server","title":"Start Seafile server","text":"
Start Seafile server with the following command
docker compose up -d\n
ERROR: Named volume \"xxx\" is used in service \"xxx\" but no declaration was found in the volumes section
You may encounter this problem when your Docker (or docker-compose) version is out of date. You can upgrade or reinstall the Docker service to solve this problem according to the Docker official documentation.
Note
You must run the above command in the directory containing the .env file. If the .env file is elsewhere, please run
docker compose -f /path/to/.env up -d\n
Success
After starting the services, you can see the initialization progress by tracing the logs of container seafile (i.e., docker logs seafile -f)
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n----------------------------------------\nNow creating seafevents database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating ccnet database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seafile database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seahub database tables ...\n\n----------------------------------------\n\ncreating seafile-server-latest symbolic link ... done\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n
Then you can see the following messages, which indicate that the Seafile server has started successfully:
Starting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\n----------------------------------------\nSuccessfully created seafile admin\n----------------------------------------\n\nSeahub is started\n\nDone.\n
Finally, you can go to http://seafile.example.com to use Seafile.
You may elect to store certain persistent information outside of a container; in our case, we keep various log files and the upload directory outside. This allows you to rebuild containers easily without losing important information.
/opt/seafile-data/seafile: This is the directory for seafile server configuration and data.
/opt/seafile-data/seafile/logs: This is the directory that would contain the log files of seafile server processes. For example, you can find seaf-server logs in /opt/seafile-data/seafile/logs/seafile.log.
/opt/seafile-data/logs: This is the directory for operating system and Nginx logs.
/opt/seafile-data/logs/var-log: This is the directory that would be mounted as /var/log inside the container. /opt/seafile-data/logs/var-log/nginx contains the logs of Nginx in the Seafile container.
To monitor container logs (from outside of the container), please use the following commands:
# if the `.env` file is in current directory:\ndocker compose logs --follow\n# if the `.env` file is elsewhere:\ndocker compose -f /path/to/.env logs --follow\n\n# you can also specify container name:\ndocker compose logs seafile --follow\n# or, if the `.env` file is elsewhere:\ndocker compose -f /path/to/.env logs seafile --follow\n
The Seafile logs are under /shared/logs/seafile inside the container, or /opt/seafile-data/logs/seafile on the host that runs the container.
The system logs are under /shared/logs/var-log inside the container, or /opt/seafile-data/logs/var-log on the host that runs the container.
To monitor all Seafile logs simultaneously (from outside of the container), run
sudo tail -f $(find /opt/seafile-data/ -type f -name '*.log' 2>/dev/null)\n
When files are deleted, the blocks comprising those files are not immediately removed, as there may be other files that reference those blocks (due to the magic of deduplication). To remove them, Seafile requires a 'garbage collection' process to be run, which detects which blocks are no longer used and purges them.
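Garbage collection can be triggered from the host; the following is a hedged example that assumes the conventional script location inside the official image (verify the path in your deployment, and check the garbage collection documentation for whether the server must be stopped first):

```shell
# Run Seafile GC inside the container.
docker exec seafile /opt/seafile/seafile-server-latest/seaf-gc.sh

# Dry run that only reports garbage without deleting; flag availability
# depends on your Seafile version.
docker exec seafile /opt/seafile/seafile-server-latest/seaf-gc.sh --dry-run
```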
"},{"location":"setup/setup_ce_by_docker/#faq","title":"FAQ","text":""},{"location":"setup/setup_ce_by_docker/#seafile-service-and-container-maintenance","title":"Seafile service and container maintenance","text":"
Q: If I want to enter the Docker container, which command can I use?
A: You can enter the Docker container using the command:
docker exec -it seafile /bin/bash\n
Q: I forgot the Seafile admin email address/password, how do I create a new admin account?
A: Seafile uses a cache to improve performance in many situations. The cached content includes, but is not limited to, user session information, avatars, profiles, records from the database, etc. Since Seafile Docker 13, Redis is the default cache server, supporting the new features (please refer to the upgrade notes); it is integrated in Seafile Docker 13 and can be configured directly via environment variables in .env (no additional settings are required by default).
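For the forgotten admin password question above, one hedged approach is to create a new admin account from inside the container, assuming the conventional reset-admin.sh script is present in your image (verify the path before relying on it):

```shell
# Create a new admin account interactively from inside the Seafile container.
docker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh
```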
Q: Is the Redis integrated in Seafile Docker safe? Does it have an access password?
A: Although the Redis instance integrated in Seafile Docker does not have a password set by default, it can only be accessed through the Docker private network and its service port is not exposed externally. Of course, you can set a password for it if necessary: set REDIS_PASSWORD in .env and remove the following comment markers in seafile-server.yml to set the password of the integrated Redis:
services:\n ...\n redis:\n image: ${SEAFILE_REDIS_IMAGE:-redis}\n container_name: seafile-redis\n # remove the following comment markers\n command:\n - /bin/sh\n - -c\n - redis-server --requirepass \"$${REDIS_PASSWORD:?Variable is not set or empty}\"\n networks:\n - seafile-net\n ...\n
Q: For some reason, I still have to use Memcached as my cache server. How can I do this?
A: If you still want to use Memcached (no longer provided since Seafile Docker 13), just follow the steps below:
Set CACHE_PROVIDER to memcached and modify MEMCACHED_xxx in .env
Remove the redis service and the redis dependency in the seafile service section in seafile-server.yml.
Note that you can change the cache server after the service has started (by setting environment variables in .env), but the corresponding configuration files (e.g., seahub_settings.py, seafile.conf and seafevents.conf) will not be updated automatically. To avoid ambiguity, we recommend that you also update these configuration files.
"},{"location":"setup/setup_pro_by_docker/","title":"Installation of Seafile Server Professional Edition with Docker","text":"
This manual explains how to deploy and run Seafile Server Professional Edition (Seafile PE) on a Linux server using Docker and Docker Compose. The deployment has been tested for Debian/Ubuntu and CentOS, but Seafile PE should also work on other Linux distributions.
Please refer here for system requirements about Seafile PE. In general, we recommend that you have at least 4G RAM and a 4-core CPU (> 2GHz).
About license
Seafile PE can be used without a paid license with up to three users. Licenses for more users can be purchased in the Seafile Customer Center, or contact Seafile Sales at sales@seafile.com. For further details, please refer to the license page of Seafile PE.
The following assumptions and conventions are used in the rest of this document:
/opt/seafile is the directory of Seafile for storing Seafile docker files. If you decide to put Seafile in a different directory, adjust all paths accordingly.
Seafile uses two Docker volumes for persisting data generated in its database and Seafile Docker container. The volumes' host paths are /opt/seafile-mysql and /opt/seafile-data, respectively. It is not recommended to change these paths. If you do, account for it when following these instructions.
All configuration and log files for Seafile and the webserver Nginx are stored in the volume of the Seafile container.
Use the official installation guide for your OS to install Docker.
"},{"location":"setup/setup_pro_by_docker/#downloading-the-seafile-image","title":"Downloading the Seafile Image","text":"
Success
Since v12.0, Seafile PE images are hosted on Docker Hub and do not require a username and password to download. Older Seafile PE versions (back to Seafile 7.0) are available in a private Docker repository; you can get the username and password on the download page in the Customer Center.
Variable Description Default Value SEAFILE_VOLUME The volume directory of Seafile data /opt/seafile-dataSEAFILE_MYSQL_VOLUME The volume directory of MySQL data /opt/seafile-mysql/dbSEAFILE_CADDY_VOLUME The volume directory of Caddy data used to store certificates obtained from Let's Encrypt's /opt/seafile-caddySEAFILE_ELASTICSEARCH_VOLUME The volume directory of Elasticsearch data /opt/seafile-elasticsearch/dataINIT_SEAFILE_MYSQL_ROOT_PASSWORD The root password of MySQL (Only required on first deployment) SEAFILE_MYSQL_DB_HOST The host of MySQL dbSEAFILE_MYSQL_DB_PORT The port of MySQL 3306SEAFILE_MYSQL_DB_USER The user of MySQL (database - user can be found in conf/seafile.conf) seafileSEAFILE_MYSQL_DB_PASSWORD The user seafile password of MySQL (required) SEAFILE_MYSQL_DB_CCNET_DB_NAME The database name of ccnet ccnet_dbSEAFILE_MYSQL_DB_SEAFILE_DB_NAME The database name of seafile seafile_dbSEAFILE_MYSQL_DB_SEAHUB_DB_NAME The database name of seahub seahub_dbJWT_PRIVATE_KEY JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters is required for Seafile, which can be generated by using pwgen -s 40 1 (required) SEAFILE_SERVER_HOSTNAME Seafile server hostname or domain (required) SEAFILE_SERVER_PROTOCOL Seafile server protocol (http or https) httpCACHE_PROVIDER The type of cache server used for Seafile. The available options are redis and memcached. Since Seafile 13, it is recommended to use redis as the cache service to support new features, and memcached will no longer be integrated into Seafile Docker by default. 
redisREDIS_HOST Redis server host redisREDIS_PORT Redis server port 6379REDIS_PASSWORD Redis server password (none) MEMCACHED_HOST Memcached server host memcachedMEMCACHED_PORT Memcached server port 11211TIME_ZONE Time zone UTCINIT_SEAFILE_ADMIN_EMAIL Synchronously set admin username during initialization me@example.com INIT_SEAFILE_ADMIN_PASSWORD Synchronously set admin password during initialization asecret SEAF_SERVER_STORAGE_TYPE What kind of the Seafile data for storage. Available options are disk (i.e., local disk), s3 and multiple (see the details of multiple storage backends) diskS3_COMMIT_BUCKET S3 storage backend commit objects bucket (required when SEAF_SERVER_STORAGE_TYPE=s3) S3_FS_BUCKET S3 storage backend fs objects bucket (required when SEAF_SERVER_STORAGE_TYPE=s3) S3_BLOCK_BUCKET S3 storage backend block objects bucket (required when SEAF_SERVER_STORAGE_TYPE=s3) S3_KEY_ID S3 storage backend key ID (required when SEAF_SERVER_STORAGE_TYPE=s3) S3_SECRET_KEY S3 storage backend secret key (required when SEAF_SERVER_STORAGE_TYPE=s3) S3_AWS_REGION Region of your buckets us-east-1S3_HOST Host of your buckets (required when not use AWS) S3_USE_HTTPS Use HTTPS connections to S3 if enabled trueS3_USE_V4_SIGNATURE Use the v4 protocol of S3 if enabled trueS3_PATH_STYLE_REQUEST This option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is in virtual host style, such as https://bucketname.s3.amazonaws.com/object. But this style relies on advanced DNS server setup. So most self-hosted storage systems only implement the path style format. falseS3_SSE_C_KEY A string of 32 characters can be generated by openssl rand -base64 24. It can be any 32-character long random string. It's required to use V4 authentication protocol and https if you enable SSE-C. 
(none) NOTIFICATION_SERVER_URL The notification server url, leave blank to disable it (none) NON_ROOT Run Seafile container without a root user false
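For the required random secrets above, here is a sketch of generating values of the documented lengths when pwgen is unavailable (openssl is assumed to be installed):

```shell
# JWT_PRIVATE_KEY must be at least 32 characters; 20 random bytes in hex
# give a 40-character string, matching the pwgen -s 40 1 suggestion.
JWT_PRIVATE_KEY=$(openssl rand -hex 20)

# S3_SSE_C_KEY must be a 32-character string; base64 of 24 random bytes
# is exactly 32 characters.
S3_SSE_C_KEY=$(openssl rand -base64 24)

echo "JWT_PRIVATE_KEY=$JWT_PRIVATE_KEY"
echo "S3_SSE_C_KEY=$S3_SSE_C_KEY"
```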
Easier to configure S3 for Seafile and its components
Since Seafile Pro 13.0, to make it easier to deploy Seafile's extension components and other services in the future, a section is provided in .env to store the S3 configurations for Seafile and some extension components (such as SeaSearch and the metadata server). You can locate it under the heading Storage configurations for S3.
S3 configurations in .env only support single S3 storage backend mode
The Seafile server only supports configuring S3 in .env for the single S3 storage backend mode (i.e., when SEAF_SERVER_STORAGE_TYPE=s3). If you would like to use another storage backend (e.g., Ceph, Swift) or settings that can only be set in seafile.conf (like multiple storage backends), please set SEAF_SERVER_STORAGE_TYPE to multiple, and set MD_STORAGE_TYPE and SS_STORAGE_TYPE according to your configurations.
Finally, set the directory permissions of the Elasticsearch volume:
"},{"location":"setup/setup_pro_by_docker/#starting-the-docker-containers","title":"Starting the Docker Containers","text":"
Run docker compose in detached mode:
docker compose up -d\n
ERROR: Named volume \"xxx\" is used in service \"xxx\" but no declaration was found in the volumes section
You may encounter this problem when your Docker (or docker-compose) version is out of date. You can upgrade or reinstall the Docker service to solve this problem according to the Docker official documentation.
Note
You must run the above command in the directory containing the .env file. If the .env file is elsewhere, please run
docker compose -f /path/to/.env up -d\n
Success
After starting the services, you can see the initialization progress by tracing the logs of container seafile (i.e., docker logs seafile -f)
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n----------------------------------------\nNow creating seafevents database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating ccnet database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seafile database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seahub database tables ...\n\n----------------------------------------\n\ncreating seafile-server-latest symbolic link ... done\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n
Then you can see the following messages, which indicate that the Seafile server has started successfully:
Starting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\n----------------------------------------\nSuccessfully created seafile admin\n----------------------------------------\n\nSeahub is started\n\nDone.\n
Finally, you can go to http://seafile.example.com to use Seafile.
A 502 Bad Gateway error means that the system has not yet completed the initialization
To view Seafile docker logs, please use the following command
docker compose logs -f\n
The Seafile logs are under /shared/logs/seafile inside the container, or /opt/seafile-data/logs/seafile on the host that runs the container.
The system logs are under /shared/logs/var-log inside the container, or /opt/seafile-data/logs/var-log on the host that runs the container.
"},{"location":"setup/setup_pro_by_docker/#activating-the-seafile-license","title":"Activating the Seafile License","text":"
If you have a seafile-license.txt license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile-data. If you have modified the path, save the license file under your custom path.
If the license file has a different name or cannot be read, the Seafile server will start in trial mode, limited to at most THREE users.
You may elect to store certain persistent information outside of a container; in our case, we keep various log files and the upload directory outside. This allows you to rebuild containers easily without losing important information.
/opt/seafile-data/seafile: This is the directory for seafile server configuration, logs and data.
/opt/seafile-data/seafile/logs: This is the directory that would contain the log files of seafile server processes. For example, you can find seaf-server logs in /opt/seafile-data/seafile/logs/seafile.log.
/opt/seafile-data/logs: This is the directory for operating system and Nginx logs.
/opt/seafile-data/logs/var-log: This is the directory that would be mounted as /var/log inside the container. For example, you can find the nginx logs in /opt/seafile-data/logs/var-log/nginx/.
"},{"location":"setup/setup_pro_by_docker/#reviewing-the-deployment","title":"Reviewing the Deployment","text":"
The command docker container list should list the containers specified in the .env.
The directory layout of the Seafile container's volume should look as follows:
When files are deleted, the blocks comprising those files are not immediately removed, as there may be other files that reference those blocks (due to the magic of deduplication). To remove them, Seafile requires a 'garbage collection' process to be run, which detects which blocks are no longer used and purges them.
"},{"location":"setup/setup_pro_by_docker/#faq","title":"FAQ","text":""},{"location":"setup/setup_pro_by_docker/#seafile-service-and-container-maintenance","title":"Seafile service and container maintenance","text":"
Q: If I want to enter the Docker container, which command can I use?
A: You can enter the Docker container using the command:
docker exec -it seafile /bin/bash\n
Q: I forgot the Seafile admin email address/password, how do I create a new admin account?
A: Seafile uses a cache to improve performance in many situations. The cached content includes, but is not limited to, user session information, avatars, profiles, records from the database, etc. Since Seafile Docker 13, Redis is the default cache server, supporting the new features (please refer to the upgrade notes); it is integrated in Seafile Docker 13 and can be configured directly via environment variables in .env (no additional settings are required by default).
Q: Is the Redis integrated in Seafile Docker safe? Does it have an access password?
A: Although the Redis instance integrated in Seafile Docker does not have a password set by default, it can only be accessed through the Docker private network and its service port is not exposed externally. Of course, you can set a password for it if necessary: set REDIS_PASSWORD in .env and remove the following comment markers in seafile-server.yml to set the password of the integrated Redis:
services:\n ...\n redis:\n image: ${SEAFILE_REDIS_IMAGE:-redis}\n container_name: seafile-redis\n # remove the following comment markers\n command:\n - /bin/sh\n - -c\n - redis-server --requirepass \"$${REDIS_PASSWORD:?Variable is not set or empty}\"\n networks:\n - seafile-net\n ...\n
Q: For some reason, I still have to use Memcached as my cache server. How can I do this?
A: If you still want to use Memcached (no longer provided since Seafile Docker 13), just follow the steps below:
Set CACHE_PROVIDER to memcached and modify MEMCACHED_xxx in .env
Remove the redis service and the redis dependency in the seafile service section in seafile-server.yml.
Note that you can change the cache server after the service has started (by setting environment variables in .env), but the corresponding configuration files (e.g., seahub_settings.py, seafile.conf and seafevents.conf) will not be updated automatically. To avoid ambiguity, we recommend that you also update these configuration files.
"},{"location":"setup/setup_with_an_existing_mysql_server/","title":"Deploy with an existing MySQL server","text":"
The entire db service needs to be removed (or commented out) in seafile-server.yml if you would like to use an existing MySQL server; otherwise a redundant database service will be running.
services:\n\n # comment out or remove the entire `db` service\n #db:\n #image: ${SEAFILE_DB_IMAGE:-mariadb:10.11}\n #container_name: seafile-mysql\n # ... other parts in service `db`\n\n # do not change other services\n...\n
What's more, you have to modify the .env file to set the MySQL-related fields correctly:
SEAFILE_MYSQL_DB_HOST=192.168.0.2\nSEAFILE_MYSQL_DB_PORT=3306\nINIT_SEAFILE_MYSQL_ROOT_PASSWORD=ROOT_PASSWORD\nSEAFILE_MYSQL_DB_USER=seafile # the user name of the user you like to use for Seafile server\nSEAFILE_MYSQL_DB_PASSWORD=PASSWORD # the password of the user you like to use for Seafile server\n
Tip
INIT_SEAFILE_MYSQL_ROOT_PASSWORD is only needed during installation (i.e., the first deployment). After Seafile is installed, the user seafile will be used to connect to the MySQL server (with SEAFILE_MYSQL_DB_PASSWORD), and you can then remove INIT_SEAFILE_MYSQL_ROOT_PASSWORD.
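Before starting the containers, it may help to verify that the existing MySQL server is reachable with the values from .env (host, port and user below mirror the example above; adjust to your setup):

```shell
# Connectivity check against the external MySQL server; prompts for the
# password of the seafile user (SEAFILE_MYSQL_DB_PASSWORD).
mysql -h 192.168.0.2 -P 3306 -u seafile -p -e 'SELECT VERSION();'
```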
"},{"location":"setup/setup_with_ceph/","title":"Setup With Ceph","text":"
Ceph is a scalable distributed storage system. It's recommended to use Ceph's S3 Gateway (RGW) to integrate with Seafile. Seafile can also use Ceph's RADOS object storage layer as a storage backend, but using RADOS requires linking with the librados library, which may introduce library incompatibility issues during deployment. Furthermore, the S3 Gateway provides an easier-to-manage HTTP-based interface. If you want to integrate with the S3 gateway, please refer to the \"Use S3-compatible Object Storage\" section in this documentation. The documentation below is for integrating with RADOS.
"},{"location":"setup/setup_with_ceph/#copy-ceph-conf-file-and-client-keyring","title":"Copy ceph conf file and client keyring","text":"
Seafile acts as a client to Ceph/RADOS, so it needs access to the Ceph cluster's conf file and keyring. You have to copy these files from a Ceph admin node's /etc/ceph directory to the Seafile machine.
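As a hedged example of copying the files (the hostname seafile-host is a placeholder, and root SSH access to the Seafile machine is assumed):

```shell
# Run on a Ceph admin node: copy the cluster conf and the client keyring
# to the Seafile machine.
scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring seafile-host:/etc/ceph/
```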
Since version 8.0, Seafile bundles librados from Ceph 16. On some systems Seafile may fail to connect to your Ceph cluster. In such cases, you can usually solve it by removing the bundled librados libraries and using the ones installed in the OS.
To do this, you have to remove a few bundled libraries:
cd seafile-server-latest/seafile/lib\nrm librados.so.2 libstdc++.so.6 libnspr4.so\n
The above configuration will use the default (client.admin) user to connect to Ceph. You may want to use some other Ceph user to connect. This is supported in Seafile. To specify the Ceph user, you have to add a ceph_client_id option to seafile.conf, as follows:
[block_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Specify the Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-blocks\n\n[commit_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Specify the Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-commits\n\n[fs_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Specify the Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-fs\n\n# Memcached or Redis configs\n......\n
You can create a ceph user for seafile on your ceph cluster like this:
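A hedged sketch of creating such a user with capabilities on the three Seafile pools (pool names follow the seafile.conf example above; adjust if yours differ):

```shell
# Create a dedicated Ceph client for Seafile and save its keyring where
# librados will look for it on the Seafile machine.
ceph auth get-or-create client.seafile \
    mon 'allow r' \
    osd 'allow rwx pool=seafile-blocks, allow rwx pool=seafile-commits, allow rwx pool=seafile-fs' \
    -o /etc/ceph/ceph.client.seafile.keyring
```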
As Seafile server versions before 6.3 don't support multiple storage classes, you have to explicitly enable this new feature and define storage classes with a different syntax than the one previously used to define storage backends.
By default, Seafile does not enable multiple storage classes, so you have to create a configuration file for the storage classes, then specify it and enable the feature in seafile.conf:
Create the storage classes file:
nano /opt/seafile-data/seafile/conf\n
For an example of this file, please refer to the next section.
enable_storage_classes: If this is set to true, the storage class feature is enabled. You must define the storage classes in a JSON file provided in the next configuration option.
storage_classes_file: Specifies the path of the JSON file that contains the storage class definitions.
Tip
Make sure you have added memory cache configurations to seafile.conf
Due to the Docker persistence strategy, the path of storage_classes_file inside the Seafile container usually differs from the path on the host, so we suggest you put this file into Seafile's configuration directory and use /shared/conf instead of /opt/seafile-data/seafile/conf. Otherwise you have to add another persistent volume mapping in seafile-server.yml. If your Seafile server is not deployed with Docker, we still suggest you put this file into the Seafile configuration directory.
"},{"location":"setup/setup_with_multiple_storage_backends/#exmaple-of-storage-classes-file","title":"Example of storage classes file","text":"
The storage classes JSON file is an array of objects, each of which defines a storage class. The fields in the definition correspond to the information needed to specify a storage class:
Variables Descriptions storage_id A unique internal string ID used to identify the storage class. It is not visible to users. For example, \"primary storage\". name A user-visible name for the storage class. is_default Indicates whether this storage class is the default one. commits The storage used for storing commit objects for this class. fs The storage used for storing fs objects for this class. blocks The storage used for storing block objects for this class.
Note
is_default is effective in two cases:
When the mapping policy lets the user choose a storage class and the user does not choose one, this storage class is used for the new library;
For other mapping policies, this option only takes effect for libraries that existed before the multiple storage backend feature was enabled; these are automatically mapped to the default storage backend.
commits, fs, and blocks can be stored in different storages. This provides the most flexible way to define storage classes (e.g., a file system, Ceph, or S3).
Here is an example, which uses local file system, S3 (default), Swift and Ceph at the same time.
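The original example file is not reproduced here; as an abridged, hedged sketch of its shape (bucket names, keys and paths are placeholders, and the Swift and Ceph entries are omitted for brevity):

```json
[
  {
    "storage_id": "hot_storage",
    "name": "Hot Storage",
    "is_default": true,
    "commits": {"backend": "s3", "bucket": "seafile-commits", "key_id": "your-key-id", "key": "your-secret-key"},
    "fs": {"backend": "s3", "bucket": "seafile-fs", "key_id": "your-key-id", "key": "your-secret-key"},
    "blocks": {"backend": "s3", "bucket": "seafile-blocks", "key_id": "your-key-id", "key": "your-secret-key"}
  },
  {
    "storage_id": "local_storage",
    "name": "Local Storage",
    "is_default": false,
    "commits": {"backend": "fs", "dir": "/opt/seafile/seafile-data"},
    "fs": {"backend": "fs", "dir": "/opt/seafile/seafile-data"},
    "blocks": {"backend": "fs", "dir": "/opt/seafile/seafile-data"}
  }
]
```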
As you may have seen, the syntax of the commits, fs and blocks information is similar to what is used in the [commit_object_backend], [fs_object_backend] and [block_backend] sections of seafile.conf for a single storage backend. You can refer to the detailed syntax in the documentation for the storage you use (e.g., S3 Storage for S3).
If you use file system as storage for fs, commits or blocks, you must explicitly provide the path for the seafile-data directory. The objects will be stored in storage/commits, storage/fs, storage/blocks under this path.
Library mapping policies decide the storage class a library uses. Currently we provide 3 policies for 3 different use cases:
User Chosen
Role-based Mapping
Library ID Based Mapping
The storage class of a library is decided on creation and stored in a database table. The storage class of a library won't change if the mapping policy is changed later.
Before choosing your mapping policy, you need to enable the storage classes feature in seahub_settings.py:
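A sketch of the option that turns the feature on (the mapping policy options discussed in the following sections are added alongside it):

```python
ENABLE_STORAGE_CLASSES = True
```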
This policy lets the users choose which storage class to use when creating a new library. The users can select any storage class that's been defined in the JSON file.
To use this policy, add following options in seahub_settings.py:
STORAGE_CLASS_MAPPING_POLICY = 'USER_SELECT'\n
If you enable storage class support but don't explicitly set STORAGE_CLASS_MAPPING_POLICY in seahub_settings.py, this policy is used by default.
Due to storage cost or management considerations, a system admin sometimes wants different types of users to use different storage backends (or classes). You can configure a user's storage classes based on their roles.
A new option storage_ids is added to the role configuration in seahub_settings.py to assign storage classes to each role. If only one storage class is assigned to a role, users with this role cannot choose a storage class for libraries; if more than one class is assigned, the users can choose among them. If no storage class is assigned to a role, the default class specified in the JSON file will be used.
Here are the sample options in seahub_settings.py to use this policy:
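A sketch of those options, assuming roles are configured via ENABLED_ROLE_PERMISSIONS; the storage IDs below are placeholders that must match your JSON file:

```python
ENABLE_STORAGE_CLASSES = True
STORAGE_CLASS_MAPPING_POLICY = 'ROLE_BASED'

ENABLED_ROLE_PERMISSIONS = {
    'default': {
        # ... other permissions for this role ...
        'storage_ids': ['primary_storage', 'backup_storage'],  # two classes: users can choose
    },
    'guest': {
        # ... other permissions for this role ...
        'storage_ids': ['primary_storage'],  # one class: no choice for these users
    },
}
```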
"},{"location":"setup/setup_with_multiple_storage_backends/#library-id-based-mapping","title":"Library ID Based Mapping","text":"
This policy maps a library to a storage class based on its library ID. The ID of a library is a UUID, so the data in the system can be evenly distributed among the storage classes.
Note
This policy is not designed to be a complete distributed storage solution. It doesn't handle automatic migration of library data between storage classes. If you add more storage classes to the configuration, existing libraries will stay in their original storage classes, while new libraries can be distributed among the new storage classes (backends). You still have to plan the total storage capacity of your system at the beginning.
To use this policy, you first add following options in seahub_settings.py:
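A sketch of those options, assuming REPO_ID_MAPPING is the policy name used by Seafile for this policy:

```python
ENABLE_STORAGE_CLASSES = True
STORAGE_CLASS_MAPPING_POLICY = 'REPO_ID_MAPPING'
```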
"},{"location":"setup/setup_with_multiple_storage_backends/#multiple-storage-backend-data-migration","title":"Multiple Storage Backend Data Migration","text":"
Migration from S3
Since version 11, when you migrate from S3 to other storage servers, you have to use the V4 authentication protocol. This is because version 11 upgraded to the Boto3 library, which fails to list objects from S3 when it is configured to use the V2 authentication protocol.
Run the migrate-repo.sh script to migrate library data between different storage backends.
destination_storage_id: the ID of the destination storage class
repo_id is optional; if not specified, all libraries will be migrated.
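For example, an invocation might look like this, assuming the argument order [repo_id] origin_storage_id destination_storage_id (the library ID and storage IDs are placeholders; run the script from the seafile-server-latest directory):

```shell
# migrate a single library between storage classes
./migrate-repo.sh 4c731e5c-f589-4eaa-889f-14c00d4893cb hot_storage cold_storage

# omit repo_id to migrate all libraries
./migrate-repo.sh hot_storage cold_storage
```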
Specify a path prefix
You can set the OBJECT_LIST_FILE_PATH environment variable to specify a path prefix for storing the migrated object lists before running the migration script.
For example:
export OBJECT_LIST_FILE_PATH=/opt/test\n
This will create three files under the directory /opt, each named with the prefix test:
test_4c731e5c-f589-4eaa-889f-14c00d4893cb.fs
test_4c731e5c-f589-4eaa-889f-14c00d4893cb.commits
test_4c731e5c-f589-4eaa-889f-14c00d4893cb.blocks
Setting the OBJECT_LIST_FILE_PATH environment variable has two purposes:
If the migrated library is very large, you need to run the migration script multiple times. Setting this environment variable can skip the previously migrated objects.
After the migration is complete, if you need to delete the objects in the origin storage, you must set this environment variable.
"},{"location":"setup/setup_with_multiple_storage_backends/#delete-all-objects-in-a-library-in-the-specified-storage-backend","title":"Delete All Objects In a Library In The Specified Storage Backend","text":"
Run the remove-objs.sh script (before migration, you need to set the OBJECT_LIST_FILE_PATH environment variable) to delete all objects in a library in the specified storage backend.
./remove-objs.sh repo_id storage_id\n
"},{"location":"setup/setup_with_s3/","title":"Setup With S3 Storage","text":"
From Seafile 13, there are two ways to configure S3 storage (single S3 storage backend) for Seafile server:
Environment variables (recommended since Seafile 13)
Config file (seafile.conf)
Setup note for binary packages deployment (Pro)
If your Seafile server is deployed from binary packages, you have to do the following steps before deploying:
Install boto3 on your machine:
sudo pip install boto3\n
Install and configure memcached or Redis.
For best performance, Seafile requires a memory cache for objects. We recommend allocating at least 128MB of memory for memcached or Redis.
The configuration options differ for different S3 storage providers. We'll describe the configurations in separate sections. You also need to add memory cache configurations.
From Seafile 13, configuring S3 via environment variables is supported and provides a more convenient way. You can refer to the detailed description of this part in the introduction of the .env file. Generally,
Prepare at least 3 buckets for Seafile (S3_COMMIT_BUCKET, S3_FS_BUCKET and S3_BLOCK_BUCKET).
Set SEAF_SERVER_STORAGE_TYPE to s3
Fill in the corresponding variable values in .env according to the following table:
Variable Description Default Value S3_COMMIT_BUCKET S3 storage backend commit objects bucket (required) S3_FS_BUCKET S3 storage backend fs objects bucket (required) S3_BLOCK_BUCKET S3 storage backend block objects bucket (required) S3_KEY_ID S3 storage backend key ID (required) S3_SECRET_KEY S3 storage backend secret key (required) S3_AWS_REGION Region of your buckets us-east-1S3_HOST Host of your buckets (required when not use AWS) S3_USE_HTTPS Use HTTPS connections to S3 if enabled trueS3_USE_V4_SIGNATURE Use the v4 protocol of S3 if enabled trueS3_PATH_STYLE_REQUEST This option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is in virtual host style, such as https://bucketname.s3.amazonaws.com/object. But this style relies on advanced DNS server setup. So most self-hosted storage systems only implement the path style format. falseS3_SSE_C_KEY A string of 32 characters can be generated by openssl rand -base64 24. It can be any 32-character long random string. It's required to use V4 authentication protocol and https if you enable SSE-C. (none)
Bucket naming conventions
Whether you use AWS or another S3-compatible object storage, we recommend that you follow the S3 naming rules. When you create buckets on S3, please read the S3 naming rules first. In particular, do not use capital letters in bucket names (do not use camel-case naming such as MyCommitObjects).
Good naming of a bucketBad naming of a bucket
seafile-commit-object
seafile-fs-object
seafile-block-object
SeafileCommitObject
seafileFSObject
seafile block object
About S3_SSE_C_KEY
S3_SSE_C_KEY is a string of 32 characters.
You can generate sse_c_key with the following command. Note that the key doesn't have to be Base64-encoded; it can be any 32-character long random string. The example just shows one possible way to generate such a key.
openssl rand -base64 24\n
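If openssl is not at hand, an equivalent 32-character random key can be generated with Python's standard library. This is a sketch, not the only valid method; any 32-character random string works:

```python
import secrets
import string

def make_sse_c_key(length: int = 32) -> str:
    # Draw each character from letters and digits using a CSPRNG;
    # the key only needs to be 32 characters long.
    alphabet = string.ascii_letters + string.digits
    return ''.join(secrets.choice(alphabet) for _ in range(length))

key = make_sse_c_key()
print(key)  # a fresh 32-character key
```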
However, if you have existing data in your S3 storage bucket, turning on the above configuration will make your data inaccessible. That's because Seafile server doesn't support mixing encrypted and non-encrypted objects in the same bucket. You have to create a new bucket and migrate your data to it by following the storage backend migration documentation.
For other S3 support extensions
In addition to Seafile server, the following extensions (if already installed) will share the same S3 authorization information in .env with Seafile server:
SeaSearch: Enable the feature by specifying SS_STORAGE_TYPE=s3 and S3_SS_BUCKET
Metadata server: Enable the feature by specifying MD_STORAGE_TYPE=s3 and S3_MD_BUCKET
"},{"location":"setup/setup_with_s3/#example-configurations","title":"Example configurations","text":"AWSExoscaleHetznerOther Public Hosted S3 StorageSelf-hosted S3 Storage
SEAF_SERVER_STORAGE_TYPE=s3\nS3_COMMIT_BUCKET=my-commit-objects\nS3_FS_BUCKET=my-fs-objects\nS3_BLOCK_BUCKET=my-block-objects\nS3_KEY_ID=your-key-id\nS3_SECRET_KEY=your-secret-key\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=true\nS3_AWS_REGION=eu-central-1 # will be ignored when S3_HOST is specified\nS3_HOST=sos-de-fra-1.exo.io\nS3_USE_HTTPS=true\n
SEAF_SERVER_STORAGE_TYPE=s3\nS3_COMMIT_BUCKET=my-commit-objects\nS3_FS_BUCKET=my-fs-objects\nS3_BLOCK_BUCKET=my-block-objects\nS3_KEY_ID=your-key-id\nS3_SECRET_KEY=your-secret-key\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=true\nS3_AWS_REGION=eu-central-1 # will be ignored when S3_HOST is specified\nS3_HOST=fsn1.your-objectstorage.com\nS3_USE_HTTPS=true\n
There are other S3-compatible cloud storage providers on the market, such as Backblaze and Wasabi. Configuration for those providers is just a bit different from AWS. We can't guarantee the following configuration works for all providers; if you have problems, please contact our support.
SEAF_SERVER_STORAGE_TYPE=s3\nS3_COMMIT_BUCKET=my-commit-objects\nS3_FS_BUCKET=my-fs-objects\nS3_BLOCK_BUCKET=my-block-objects\nS3_KEY_ID=your-key-id\nS3_SECRET_KEY=your-secret-key\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=true\nS3_AWS_REGION=eu-central-1 # will be ignored when S3_HOST is specified\nS3_HOST=<access endpoint for storage provider>\nS3_USE_HTTPS=true\n
Many self-hosted object storage systems are now compatible with the S3 API, such as OpenStack Swift, Ceph's RADOS Gateway and Minio. You can use these S3-compatible storage systems as backend for Seafile. Here is an example config:
SEAF_SERVER_STORAGE_TYPE=s3\nS3_COMMIT_BUCKET=my-commit-objects\nS3_FS_BUCKET=my-fs-objects\nS3_BLOCK_BUCKET=my-block-objects\nS3_KEY_ID=your-key-id\nS3_SECRET_KEY=your-secret-key\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=true\nS3_AWS_REGION=eu-central-1 # will be ignored when S3_HOST is specified\nS3_HOST=<your s3 api endpoint host>:<your s3 api endpoint port>\nS3_USE_HTTPS=true # according to your S3 configuration\n
"},{"location":"setup/setup_with_s3/#setup-with-config-file","title":"Setup with config file","text":"
Seafile configures S3 storage by adding or modifying the following section in seafile.conf:
Similar to configure in .env, you have to create at least 3 buckets for Seafile too, corresponding to the sections: commit_object_backend, fs_object_backend and block_backend. For the configurations for each backend section, please refer to the following table:
Variable Description bucket Bucket name for commit, fs, and block objects. Make sure it follows S3 naming rules (you can refer the notes below the table). key_id The key_id is required to authenticate you to S3. You can find the key_id in the \"security credentials\" section on your AWS account page or from your storage provider. key The key is required to authenticate you to S3. You can find the key in the \"security credentials\" section on your AWS account page or from your storage provider. use_v4_signature There are two versions of authentication protocols that can be used with S3 storage: Version 2 (older, may still be supported by some regions) and Version 4 (current, used by most regions). If you don't set this option, Seafile will use the v2 protocol. It's suggested to use the v4 protocol. use_https Use https to connect to S3. It's recommended to use https. aws_region (Optional) If you use the v4 protocol and AWS S3, set this option to the region you chose when you create the buckets. If it's not set and you're using the v4 protocol, Seafile will use us-east-1 as the default. This option will be ignored if you use the v2 protocol. host (Optional) The endpoint by which you access the storage service. Usually it starts with the region name. It's required to provide the host address if you use storage provider other than AWS, otherwise Seafile will use AWS's address (i.e., s3.us-east-1.amazonaws.com). sse_c_key (Optional) A string of 32 characters can be generated by openssl rand -base64 24. It can be any 32-character long random string. It's required to use V4 authentication protocol and https if you enable SSE-C. path_style_request (Optional) This option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is in virtual host style, such as https://bucketname.s3.amazonaws.com/object. But this style relies on advanced DNS server setup. 
Most self-hosted storage systems only implement the path style format, so we recommend setting this option to true for self-hosted storage."},{"location":"setup/setup_with_s3/#example-configurations_1","title":"Example configurations","text":"AWSExoscaleHetznerOther Public Hosted S3 StorageSelf-hosted S3 Storage
There are other S3-compatible cloud storage providers on the market, such as Backblaze and Wasabi. Configuration for those providers is just a bit different from AWS. We can't guarantee the following configuration works for all providers; if you have problems, please contact our support.
[commit_object_backend]\nname = s3\nbucket = my-commit-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\n# v2 authentication protocol will be used if not set\nuse_v4_signature = true\n# required for v4 protocol. ignored for v2 protocol.\naws_region = <region name for storage provider>\nuse_https = true\n\n[fs_object_backend]\nname = s3\nbucket = my-fs-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = <region name for storage provider>\nuse_https = true\n\n[block_backend]\nname = s3\nbucket = my-block-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = <region name for storage provider>\nuse_https = true\n
Many self-hosted object storage systems are now compatible with the S3 API, such as OpenStack Swift, Ceph's RADOS Gateway and Minio. You can use these S3-compatible storage systems as backend for Seafile. Here is an example config:
Install and configure memcached or Redis. For best performance, Seafile requires a memory cache for objects. We recommend allocating at least 128MB of memory for memcached.
The above config is just an example. You should replace the options according to your own environment.
Seafile supports Swift with Keystone as the authentication mechanism. The auth_host option is the address and port of the Keystone service. The region option is used to select the publicURL; if you don't configure it, the first publicURL in the returned authentication information is used.
Seafile also supports Tempauth and Swauth since Professional Edition 6.2.1. The auth_ver option should be set to v1.0; tenant and region are no longer needed.
It's required to create separate containers for commit, fs, and block objects.
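A sketch of one backend section in seafile.conf for Swift with Keystone (all values are placeholders; repeat the same pattern for [commit_object_backend] and [fs_object_backend] with their own containers):

```ini
[block_backend]
name = swift
container = seafile-blocks
user_name = yourUser
password = yourPassword
tenant = yourTenant
auth_host = keystone.example.com:5000
auth_ver = v2.0
# optional: selects a publicURL; the first one returned is used if unset
region = yourRegion
```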
"},{"location":"setup/setup_with_swift/#use-https-connections-to-swift","title":"Use HTTPS connections to Swift","text":"
Since Pro 5.0.4, you can use HTTPS connections to Swift. Add the following options to seafile.conf:
Because the server package is built on CentOS 6, if you're using Debian/Ubuntu, you have to copy the system CA bundle to CentOS's CA bundle path. Otherwise Seafile can't find the CA bundle and the SSL connection will fail.
This page shows the minimal requirements of Seafile.
About the system requirements
The system requirements in this document are the minimum hardware requirements suggested for smooth operation of Seafile (network connectivity is not discussed here). Unless otherwise specified, they apply to all deployment scenarios. For binary installations, however, the libraries we provide in the documents only support the following operating systems:
Ubuntu 24.04
Ubuntu 22.04
Debian 12
Debian 11
Important: Information about Docker-based deployment integration services
In each case, we list the services integrated by a standard Docker installation under Docker-based deployment integration services. If these services are already installed and you do not need them in your deployment, refer to the corresponding documentation and disable them in the Docker compose file. However, we do not recommend reducing the corresponding system resource requirements below our suggestions, unless otherwise specified.
However, if you use other installation methods (e.g., binary deployment, K8S deployment), you have to make sure these services are installed yourself, because those methods do not include their installation.
If you need to install other extensions not included here (e.g., OnlyOffice), you should increase the system requirements appropriately above our recommendations.
Deployment Scenarios CPU Requirements Memory Requirements Indexer / Search Engine Docker deployment 4 Cores 4G Default All 4 Cores 4G With existing ElasticSearch service, but on the same machine / node All 2 Cores 2G With existing ElasticSearch service, and on another machine / node All 2 Cores 2G Use SeaSearch as the search engine, instead of ElasticSearch
Hard disk requirements: More than 50G are recommended
Docker-based deployment integration services:
Seafile
Redis
Mariadb
ElasticSearch
Seadoc
Caddy
More details of the file indexer used in Seafile PE
By default, Seafile Pro will use Elasticsearch as the file indexer
Please make sure the mmapfs count does not cause exceptions like out of memory; it can be increased with the following command (see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html for further details):
sysctl -w vm.max_map_count=262144 #run as root\n
or modify /etc/sysctl.conf and reboot to set this value permanently:
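The permanent setting in /etc/sysctl.conf is the same key-value pair:

```ini
vm.max_map_count=262144
```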
Node requirements: a minimum of 2 nodes (one frontend and one backend); more than 3 nodes are recommended (two frontend and three backend)
More details about the number of nodes
If your number of nodes does not meet our recommended number (i.e. 3 nodes), please adjust according to the following strategies:
2 nodes: A frontend service and a backend service on the same node
1 node: Please deploy Seafile on a single node instead of in a cluster.
If you have more available nodes for Seafile server, please allocate them to the Seafile frontend service and make sure there is only one backend service running. Here is a simple relationship between the number of Seafile frontend services (\(N_f\)) and total nodes (\(N_t\)): $$ N_f = N_t - 1, $$ where the number 1 means one node for the Seafile backend service.
Other system requirements: similar to Seafile Pro, but make sure that all nodes meet these conditions
Docker-based deployment integration services: Seafile only
More suggestions in Seafile cluster
We assume you have already deployed memcached (Redis is not supported in cluster mode), MariaDB, and a file indexer (e.g., Elasticsearch) on separate machines, and that you use S3-like object storage.
Generally, when deploying Seafile in a cluster, we recommend that you use a storage backend (such as AWS S3) to store Seafile data. However, according to the Seafile image startup rules and K8S persistent storage strategy, you still need to prepare a persistent directory for configuring the startup of the Seafile container.
"},{"location":"setup/use_other_reverse_proxy/","title":"Use other reverse proxy","text":"
Since Seafile 12.0, all reverse proxy, HTTPS, etc. processing for single-node deployment based on Docker is handled by caddy. If you need to use other reverse proxy services, you can refer to this document to modify the relevant configuration files.
"},{"location":"setup/use_other_reverse_proxy/#services-that-require-reverse-proxy","title":"Services that require reverse proxy","text":"
Before making changes to the configuration files, you have to know the services used by Seafile and related components (Table 1 hereafter).
Tip
The services shown in the table below are all based on the single-node integrated deployment in accordance with the Seafile official documentation.
If these services are deployed in standalone mode (such as SeaDoc and notification-server), or deployed following the official documentation of third-party plugins (such as OnlyOffice and Collabora), you can skip modifying the configuration files of these services, because Caddy is not used as a reverse proxy in such deployments.
If you have not integrated the services in Table 1, please deploy them in standalone mode or refer to the official documentation of the third-party plugins when you need these services.
YML Service Suggest exposed port Service listen port Require WebSocket seafile-server.yml seafile 80 80 No seadoc.yml seadoc 8888 80 Yes notification-server.yml notification-server 8083 8083 Yes collabora.yml collabora 6232 9980 No onlyoffice.yml onlyoffice 6233 80 No"},{"location":"setup/use_other_reverse_proxy/#modify-yml-files","title":"Modify YML files","text":"
Refer to Table 1 for each service's suggested exposed port. Add a ports section for the corresponding services:
services:\n <the service need to be modified>:\n ...\n ports:\n - \"<Suggest exposed port>:<Service listen port>\"\n
Delete all fields related to Caddy reverse proxy (in label section)
Tip
Some .yml files (e.g., collabora.yml) also have port-exposing information for Caddy at the top of the file, which also needs to be removed.
We take seafile-server.yml for example (Pro edition):
services:\n # ... other services\n\n seafile:\n image: ${SEAFILE_IMAGE:-seafileltd/seafile-pro-mc:13.0-latest}\n container_name: seafile\n ports:\n - \"80:80\"\n volumes:\n - ${SEAFILE_VOLUME:-/opt/seafile-data}:/shared\n environment:\n ... # environment variables map, do not change\n\n # please remove the `label` section\n #label: ... <- remove this section\n\n depends_on:\n ... # dependencies, do not change\n ...\n\n# ... other options\n
"},{"location":"setup/use_other_reverse_proxy/#add-reverse-proxy-for-related-services","title":"Add reverse proxy for related services","text":"
Modify nginx.conf and add reverse proxy for services seafile and seadoc:
Note
If your proxy server's host is not the same as the host Seafile is deployed on, please replace 127.0.0.1 with your Seafile server's host.
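A sketch of the corresponding nginx.conf server blocks, assuming nginx runs on the same host and the exposed ports from Table 1; the server name and location paths are placeholders for your setup:

```nginx
server {
    listen 80;
    server_name seafile.example.com;

    # Seafile web UI and API
    location / {
        proxy_pass http://127.0.0.1:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        client_max_body_size 0;   # no upload size limit
    }

    # SeaDoc requires WebSocket support (see Table 1)
    location /sdoc-server/ {
        proxy_pass http://127.0.0.1:8888/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```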
"},{"location":"setup/use_other_reverse_proxy/#restart-services-and-nginx","title":"Restart services and nginx","text":"
docker compose down\ndocker compose up -d\nnginx -s reload\n
"},{"location":"setup/use_seasearch/","title":"Use SeaSearch as search engine (Pro)","text":"
SeaSearch, a file indexer that is more lightweight and efficient than Elasticsearch, is supported since Seafile 12.
For Seafile deployed from a binary package
We currently only support Docker-based deployment for SeaSearch server, so this document describes the configuration assuming the Seafile server is also deployed with Docker.
If your Seafile server is deployed from a binary package, please refer here for how to start or stop the Seafile server.
For Seafile cluster
Theoretically, if your Seafile server is deployed in cluster mode, only the backend node has to be restarted, but we still suggest you configure and restart all nodes to ensure consistency and synchronization in the cluster.
The SeaSearch service is currently mainly deployed via Docker. We have integrated it into the relevant docker-compose file; you only need to download it to the same directory as seafile-server.yml:
We have configured the relevant variables in .env. Pay special attention to the following variables, which affect the SeaSearch initialization process. For the variables in .env of the SeaSearch service, please refer here for details. We use /opt/seasearch-data as the persistent directory of SeaSearch (since Seafile 13, the administrator credentials are the same as Seafile's admin by default):
For Apple's Chips
Since Apple's chips (such as M2) do not support MKL, you need to set the relevant image to xxx-nomkl:latest, e.g.:
COMPOSE_FILE='...,seasearch.yml' # ... means other docker-compose files\n\n#SEASEARCH_IMAGE=seafileltd/seasearch-nomkl:1.0-latest # for Apple's Chip\nSEASEARCH_IMAGE=seafileltd/seasearch:1.0-latest\n\nSS_DATA_PATH=/opt/seasearch-data\nINIT_SS_ADMIN_USER=<admin-username> \nINIT_SS_ADMIN_PASSWORD=<admin-password>\n\n\n# if you would like to use S3 for saving seasearch data\nSS_STORAGE_TYPE=s3\nS3_SS_BUCKET=...\nS3_KEY_ID=<your-key-id>\nS3_SECRET_KEY=<your-secret-key>\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=false\nS3_AWS_REGION=us-east-1\nS3_HOST=\nS3_USE_HTTPS=true\nS3_SSE_C_KEY=\n
"},{"location":"setup/use_seasearch/#modify-seafile-serveryml-to-disable-elasticsearch-service","title":"Modify seafile-server.yml to disable elasticSearch service","text":"
If you would like to use SeaSearch as the search engine, the Elasticsearch service, which is no longer used, can be removed: remove elasticsearch.yml from the list variable COMPOSE_FILE in the .env file.
First, get your authorization token: a Base64-encoded string consisting of INIT_SS_ADMIN_USER and INIT_SS_ADMIN_PASSWORD defined in .env, which is used for authorization when calling the SeaSearch API:
echo -n 'username:password' | base64\n\n# example output\nYWRtaW46YWRtaW5fcGFzc3dvcmQ=\n
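The same token can be computed with Python's standard library (a sketch; replace the credentials with your own):

```python
import base64

def seasearch_token(user: str, password: str) -> str:
    # Base64-encode "user:password", as expected by HTTP Basic authorization.
    raw = f"{user}:{password}".encode("utf-8")
    return base64.b64encode(raw).decode("ascii")

print(seasearch_token("admin", "admin_password"))  # YWRtaW46YWRtaW5fcGFzc3dvcmQ=
```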
Add the following section to the seafevents configuration to enable the Seafile backend service to access the SeaSearch APIs:
SeaSearch server deployed on a different machine from Seafile
If your SeaSearch server is deployed on a different machine from Seafile, please replace http://seasearch:4080 with the URL <scheme>://<address>:<port> of your SeaSearch server
After starting the SeaSearch service, you can check the following logs to verify whether SeaSearch runs normally and whether Seafile calls it successfully:
container logs by command docker logs -f seafile-seasearch
/opt/seasearch-data/log/seafevents.log
After starting SeaSearch server for the first time
You can remove the initial admin account information from .env (e.g., INIT_SS_ADMIN_USER, INIT_SS_ADMIN_PASSWORD); it is only used during the SeaSearch initialization process (i.e., the first time the services start). But make sure you have recorded it somewhere else in case you forget the password.
By default, SeaSearch uses a word-based tokenizer designed for English/German/French. You can add the following configuration to use a tokenizer designed for Chinese.
Please refer here for details about the cluster requirements for all nodes in a Seafile cluster. In general, we recommend that each node have at least 2G RAM and a 2-core CPU (> 2GHz).
The cache server (the first step) is not necessary if you do not wish to deploy it on this node.
"},{"location":"setup_binary/cluster_deployment/#create-user-seafile","title":"Create user seafile","text":"
Create a new user and follow the instructions on the screen:
adduser seafile\n
Change ownership of the created directory to the new user:
chown -R seafile: /opt/seafile\n
All the following steps are done as user seafile.
Change to user seafile:
su seafile\n
"},{"location":"setup_binary/cluster_deployment/#placing-the-seafile-pe-license-in-optseafile","title":"Placing the Seafile PE license in /opt/seafile","text":"
Save the license file in Seafile's program directory /opt/seafile. Make sure that the name is seafile-license.txt.
If the license file has a different name or cannot be read, Seafile server will start in trial mode, with at most THREE users
"},{"location":"setup_binary/cluster_deployment/#setup-and-configure-nginx-only-for-frontend-nodes","title":"Setup and configure Nginx (only for frontend nodes)","text":"
For security reasons, the Seafile frontend service will only listen to requests from the local port 8000. You need to use Nginx to reverse proxy this port to port 80 for external access:
There are 2 firewall rule changes for Seafile cluster:
On each node, you should open the health check port (default 11001);
On the cache and Elasticsearch servers, please only allow the Seafile servers to access these ports, for security reasons.
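As a sketch with ufw, the rules might look like this; the addresses are placeholders for your load balancer and cluster subnet, and the memcached (11211) and Elasticsearch (9200) default ports are assumptions:

```shell
# on each Seafile node: allow health checks from the load balancer only
ufw allow from 192.168.1.10 to any port 11001 proto tcp

# on the cache / Elasticsearch server: allow access from the Seafile subnet only
ufw allow from 192.168.1.0/24 to any port 11211 proto tcp
ufw allow from 192.168.1.0/24 to any port 9200 proto tcp
```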
"},{"location":"setup_binary/cluster_deployment/#setup-the-first-frontend-node","title":"Setup the first frontend Node","text":""},{"location":"setup_binary/cluster_deployment/#setup-seafile-server-pro","title":"Setup Seafile server Pro","text":"
Please follow Installation of Seafile Server Professional Edition to setup:
Download the install package
Uncompress the package
Set up Seafile Pro databases
"},{"location":"setup_binary/cluster_deployment/#create-and-modify-configuration-files-in-optseafileconf","title":"Create and Modify configuration files in /opt/seafile/conf","text":""},{"location":"setup_binary/cluster_deployment/#env","title":".env","text":"
Tip
JWT_PRIVATE_KEY: a random string of no less than 32 characters, which can be generated by:
pwgen -s 40 1\n
JWT_PRIVATE_KEY=<Your jwt private key>\nSEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=seafile.example.com\nSEAFILE_MYSQL_DB_HOST=<your database host>\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=<your MySQL password>\nSEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db\n
The Seafile server also opens a port for the load balancers to run health checks. Seafile by default uses port 11001. You can change this by adding the following config:
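A sketch of that config, assuming it goes in the [cluster] section of seafile.conf (the port number is an example):

```ini
[cluster]
health_check_port = 12345
```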
Please refer to Django's documentation on using a Redis cache to add the Redis configuration to seahub_settings.py.
Add the following options to seahub_settings.py, which tell Seahub to store avatars in the database, cache avatars in memcached, and store the CSS CACHE in local memory.
In a cluster environment, we have to store avatars in the database instead of on a local disk.
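A sketch of the avatar option for seahub_settings.py, assuming the database-backed storage class shipped with Seahub:

```python
# Store avatars in the database instead of on local disk
AVATAR_FILE_STORAGE = 'seahub.base.database_storage.DatabaseStorage'
```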
mysql -h<your MySQL host> -P<your MySQL port> -useafile -p<user seafile's password>\n\n# enter MySQL environment\nUSE seahub_db;\n\nCREATE TABLE `avatar_uploaded` (`filename` TEXT NOT NULL, `filename_md5` CHAR(32) NOT NULL PRIMARY KEY, `data` MEDIUMTEXT NOT NULL, `size` INTEGER NOT NULL, `mtime` datetime NOT NULL);\n
"},{"location":"setup_binary/cluster_deployment/#run-and-test-the-single-node","title":"Run and Test the Single Node","text":"
Once you have finished configuring this single node, start it to test if it runs properly:
Note
For installations using python virtual environment, activate it if it isn't already active
source python-venv/bin/activate\n
cd /opt/seafile/seafile-server-latest\nsu seafile\n./seafile.sh start\n./seahub.sh start\n
Success
The first time you start Seahub, the script will prompt you to create an admin account for your Seafile server. Then you can see the following message in your console:
Starting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\nSeahub is started\n\nDone.\n
Finally, you can visit http://ip-address-of-this-node:80 and login with the admin account to test if this node is working fine or not.
"},{"location":"setup_binary/cluster_deployment/#configure-other-frontend-nodes","title":"Configure other frontend nodes","text":"
If the first frontend node works fine, you can compress the whole directory /opt/seafile into a tarball and copy it to all other Seafile server nodes. You can simply uncompress it and start the server by:
Note
For installations using python virtual environment, activate it if it isn't already active
source python-venv/bin/activate\n
cd /opt/seafile/seafile-server-latest\nsu seafile\n./seafile.sh start\n./seahub.sh start\n
On the backend node, you need to execute the following command to start the Seafile server. CLUSTER_MODE=backend means this node is a Seafile backend server.
Note
For installations using python virtual environment, activate it if it isn't already active
Since Seafile Pro Server 6.0.0, cluster deployment requires "sticky session" settings in the load balancer. Otherwise, folder download on the web UI sometimes won't work properly. Read the "Load Balancer Setting" section below for details.
Generally speaking, for better access to the Seafile service, we recommend that you put a load balancing service in front of the Seafile cluster and bind your domain name (such as seafile.cluster.com) to it. Usually, you can use:
Cloud service provider's load balancing service (e.g., AWS Elastic Load Balancer)
Deploy your own load balancing service; this document covers two common load balancing services:
global\n log 127.0.0.1 local1 notice\n maxconn 4096\n user haproxy\n group haproxy\n\ndefaults\n log global\n mode http\n retries 3\n maxconn 2000\n timeout connect 10000\n timeout client 300000\n timeout server 36000000\n\nlisten seafile 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n cookie SERVERID insert indirect nocache\n server seafileserver01 192.168.1.165:80 check port 11001 cookie seafileserver01\n server seafileserver02 192.168.1.200:80 check port 11001 cookie seafileserver02\n
"},{"location":"setup_binary/cluster_deployment/#see-how-it-runs","title":"See how it runs","text":"
Now you should be able to test your cluster. Open https://seafile.example.com in your browser and enjoy. You can also synchronize files with Seafile clients.
"},{"location":"setup_binary/cluster_deployment/#the-final-configuration-of-the-front-end-nodes","title":"The final configuration of the front-end nodes","text":"
Here is a summary of the configurations on the front-end nodes that relate to the cluster setup (for version 7.1+).
For seafile.conf:
[cluster]\nenabled = true\nmemcached_options = --SERVER=<IP of memcached node> --POOL-MIN=10 --POOL-MAX=100\n
The enabled option will prevent the start of background tasks by ./seafile.sh start in the front-end node. The tasks should be explicitly started by ./seafile-background-tasks.sh start at the back-end node.
You can enable HTTPS at your load balancing service, e.g., by using a certificate manager (such as Certbot) to acquire a certificate and enable HTTPS for your Seafile cluster. After enabling HTTPS, you have to change the relevant URLs in seahub_settings.py and .env from the http:// prefix to https://.
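A sketch of the URL changes (variable and option names as used elsewhere in this manual; depending on your version the URLs may live in .env, seahub_settings.py, or both; the domain is the example name from above):

```ini
# .env
SEAFILE_SERVER_PROTOCOL=https
SEAFILE_SERVER_HOSTNAME=seafile.cluster.com

# seahub_settings.py
# SERVICE_URL = 'https://seafile.cluster.com'
# FILE_SERVER_ROOT = 'https://seafile.cluster.com/seafhttp'
```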
You can follow here to deploy the SeaDoc server, and then modify SEADOC_SERVER_URL in your .env file.
"},{"location":"setup_binary/https_with_nginx/","title":"Enabling HTTPS with Nginx","text":"
After completing the installation of Seafile Server Professional Edition, communication between the Seafile server and clients runs over (unencrypted) HTTP. While HTTP is ok for testing purposes, switching to HTTPS is imperative for production use.
HTTPS requires a SSL certificate from a Certificate Authority (CA). Unless you already have a SSL certificate, we recommend that you get your SSL certificate from Let\u2019s Encrypt using Certbot. If you have a SSL certificate from another CA, skip the section \"Getting a Let's Encrypt certificate\".
A second requirement is a reverse proxy supporting SSL. Nginx, a popular and resource-friendly web server and reverse proxy, is a good option. Nginx's documentation is available at http://nginx.org/en/docs/.
Copy the following sample Nginx config file into the newly created seafile.conf (i.e., nano /etc/nginx/sites-available/seafile.conf) and modify the content to fit your needs:
The following options must be modified in the CONF file:
Server name (server_name)
Optional customizable options in the seafile.conf are:
Server listening port (listen) - if Seafile server should be available on a non-standard port
Proxy pass for location / - if Seahub is configured to start on a different port than 8000
Proxy pass for location /seafhttp - if seaf-server is configured to start on a different port than 8082
Maximum allowed size of the client request body (client_max_body_size)
The default value for client_max_body_size is 1M. Uploading larger files will result in HTTP error 413 ("Request Entity Too Large"). It is recommended to synchronize the value of client_max_body_size with the parameter max_upload_size in the [fileserver] section of seafile.conf. Optionally, the value can also be set to 0 to disable the limit. Sync client uploads are only partly affected by this limit: because the sync client transfers files in smaller blocks, it can safely upload files of any size even with a limit of 100 MiB.
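For example, to allow web uploads of up to 100 MB, the two values would be kept in sync like this (a sketch; adjust the limit to your needs):

```ini
# seafile.conf -- cap fileserver uploads at 100 MB
[fileserver]
max_upload_size = 100

# and in the Nginx config, the matching directive would be:
#   client_max_body_size 100m;
```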
Finally, make sure your seafile.conf does not contain syntax errors and restart Nginx for the configuration changes to take effect:
nginx -t\nnginx -s reload\n
"},{"location":"setup_binary/https_with_nginx/#getting-a-lets-encrypt-certificate","title":"Getting a Let's Encrypt certificate","text":"
Getting a Let's Encrypt certificate is straightforward thanks to Certbot. Certbot is a free, open source software tool for requesting, receiving, and renewing Let's Encrypt certificates.
First, go to the Certbot website and choose your webserver and OS.
Second, follow the detailed instructions then shown.
We recommend that you get just a certificate and that you modify the Nginx configuration yourself:
sudo certbot certonly --nginx\n
Follow the instructions on the screen.
Upon successful verification, Certbot saves the certificate files in a directory named after the host name in /etc/letsencrypt/live. For the host name seafile.example.com, the files are stored in /etc/letsencrypt/live/seafile.example.com.
Normally, your nginx configuration can be automatically managed by a certificate manager (e.g., Certbot) after you install the certificate. If you find that nginx is already listening on port 443 via the certificate manager after installing the certificate, you can skip this step.
Add a server block for port 443 and an HTTP-to-HTTPS redirect to the seafile.conf configuration file in /etc/nginx.
This is a (shortened) sample configuration for the host name seafile.example.com:
log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $upstream_response_time';\n\nserver {\n listen 80;\n server_name seafile.example.com;\n rewrite ^ https://$http_host$request_uri? permanent; # Forced redirect from HTTP to HTTPS\n\n server_tokens off; # Prevents the Nginx version from being displayed in the HTTP response header\n}\n\nserver {\n listen 443 ssl;\n ssl_certificate /etc/letsencrypt/live/seafile.example.com/fullchain.pem; # Path to your fullchain.pem\n ssl_certificate_key /etc/letsencrypt/live/seafile.example.com/privkey.pem; # Path to your privkey.pem\n server_name seafile.example.com;\n server_tokens off;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n proxy_set_header X-Forwarded-Proto https;\n\n... # No changes beyond this point compared to the Nginx configuration without HTTPS\n
Finally, make sure your seafile.conf does not contain syntax errors and restart Nginx for the configuration changes to take effect:
Tip for uploading very large files (> 4 GB): by default, Nginx buffers a large request body in a temporary file and only sends the body to the upstream server (seaf-server in our case) after it has been completely received. But when the file is very large, this buffering mechanism doesn't work well and may stop proxying the body midway. So if you want to support uploads larger than 4 GB, we suggest you install Nginx version >= 1.8.0 and add the following options to the Nginx config file:
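The option in question is assumed to be the one that disables request buffering for the file server location (the directive exists in Nginx since 1.7.11):

```nginx
location /seafhttp {
    ...
    # stream the request body to seaf-server instead of buffering it to a temp file
    proxy_request_buffering off;
}
```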
To improve security, the file server should only be accessible via Nginx.
Add the following line in the [fileserver] block on seafile.conf in /opt/seafile/conf:
host = 127.0.0.1 ## default is 0.0.0.0\n
After this change, the file server only accepts requests from Nginx.
"},{"location":"setup_binary/https_with_nginx/#starting-seafile-and-seahub","title":"Starting Seafile and Seahub","text":"
Restart the seaf-server and Seahub for the config changes to take effect:
su seafile\ncd /opt/seafile/seafile-server-latest\n./seafile.sh restart\n./seahub.sh restart # or \"./seahub.sh start-fastcgi\" if you're using fastcgi\n
"},{"location":"setup_binary/https_with_nginx/#additional-modern-settings-for-nginx-optional","title":"Additional modern settings for Nginx (optional)","text":""},{"location":"setup_binary/https_with_nginx/#activating-ipv6","title":"Activating IPv6","text":"
IPv6 must be available on the server, otherwise Nginx will not start! An AAAA DNS record is also required for IPv6 usage.
Activate HTTP/2 for better performance. Only available with SSL and Nginx version >= 1.9.5. Simply add http2:
listen 443 ssl http2;\nlisten [::]:443 ssl http2;\n
"},{"location":"setup_binary/https_with_nginx/#advanced-tls-configuration-for-nginx-optional","title":"Advanced TLS configuration for Nginx (optional)","text":"
The TLS configuration in the sample Nginx configuration file above receives a B overall rating on SSL Labs. By modifying the TLS configuration in seafile.conf, this rating can be significantly improved.
The following sample Nginx configuration file for the host name seafile.example.com contains additional security-related directives. (Note that this sample file uses a generic path for the SSL certificate files.) Some of the directives require further steps as explained below.
server {\n listen 80;\n server_name seafile.example.com;\n rewrite ^ https://$http_host$request_uri? permanent; # Forced redirect from HTTP to HTTPS\n server_tokens off;\n }\n server {\n listen 443 ssl;\n ssl_certificate /etc/ssl/cacert.pem; # Path to your cacert.pem\n ssl_certificate_key /etc/ssl/privkey.pem; # Path to your privkey.pem\n server_name seafile.example.com;\n server_tokens off;\n\n # HSTS for protection against man-in-the-middle-attacks\n add_header Strict-Transport-Security \"max-age=31536000; includeSubDomains\";\n\n # DH parameters for Diffie-Hellman key exchange\n ssl_dhparam /etc/nginx/dhparam.pem;\n\n # Supported protocols and ciphers for general purpose server with good security and compatability with most clients\n ssl_protocols TLSv1.2 TLSv1.3;\n ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;\n ssl_prefer_server_ciphers off;\n\n # Supported protocols and ciphers for server when clients > 5years (i.e., Windows Explorer) must be supported\n #ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;\n #ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA;\n #ssl_prefer_server_ciphers on;\n\n ssl_session_timeout 5m;\n ssl_session_cache shared:SSL:5m;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n 
proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto https;\n\n access_log /var/log/nginx/seahub.access.log;\n error_log /var/log/nginx/seahub.error.log;\n\n proxy_read_timeout 1200s;\n\n client_max_body_size 0;\n }\n\n location /seafhttp {\n rewrite ^/seafhttp(.*)$1 break;\n proxy_pass http://127.0.0.1:8082;\n client_max_body_size 0;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n\n proxy_read_timeout 36000s;\n proxy_send_timeout 36000s;\n send_timeout 36000s;\n }\n\n location /media {\n root /home/user/haiwen/seafile-server-latest/seahub;\n }\n }\n
"},{"location":"setup_binary/https_with_nginx/#enabling-http-strict-transport-security","title":"Enabling HTTP Strict Transport Security","text":"
Enable HTTP Strict Transport Security (HSTS) to prevent man-in-the-middle-attacks by adding this directive:
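The directive, as it also appears in the advanced sample configuration above, goes into the HTTPS server block:

```nginx
# inside the server block for port 443
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
```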
HSTS instructs web browsers to automatically use HTTPS. That means that, after the first visit to the HTTPS version of Seahub, the browser will only use HTTPS to access the site.
The generation of the DH parameters may take some time depending on the server's processing power.
Add the following directive in the HTTPS server block:
ssl_dhparam /etc/nginx/dhparam.pem;\n
"},{"location":"setup_binary/https_with_nginx/#restricting-tls-protocols-and-ciphers","title":"Restricting TLS protocols and ciphers","text":"
Disallow the use of old TLS protocols and ciphers. Mozilla provides a configuration generator for balancing the conflicting objectives of security and compatibility. Visit https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx for more information.
"},{"location":"setup_binary/installation_pro/","title":"Installation of Seafile Server Professional Edition","text":"
This manual explains how to deploy and run Seafile Server Professional Edition (Seafile PE) on a Linux server from a pre-built package using MySQL/MariaDB as database. The deployment has been tested for Debian/Ubuntu.
Please refer here for the system requirements of Seafile PE. In general, we recommend at least 4 GB of RAM and a 4-core CPU (> 2 GHz).
Seafile PE can be used without a paid license with up to three users. Licenses for more users can be purchased in the Seafile Customer Center, by contacting Seafile Sales at sales@seafile.com, or through one of our partners.
"},{"location":"setup_binary/installation_pro/#setup","title":"Setup","text":""},{"location":"setup_binary/installation_pro/#installing-and-preparing-the-sql-database","title":"Installing and preparing the SQL database","text":"
Seafile supports MySQL and MariaDB. We recommend that you use the preferred SQL database management engine included in the package repositories of your distribution.
You can find step-by-step how-tos for installing MySQL and MariaDB in the tutorials on the Digital Ocean website.
Seafile uses the mysql_native_password plugin for authentication. The versions of MySQL and MariaDB shipped with CentOS 8, Debian 10, and Ubuntu 20.04 use a different authentication plugin by default. It is therefore required to change the authentication plugin to mysql_native_password for the root user prior to the installation of Seafile. The above-mentioned tutorials explain how to do it.
The standard directory /opt/seafile is assumed for the rest of this manual. If you decide to put Seafile in another directory, some commands need to be modified accordingly
Debian 12 and Ubuntu 24.04 discourage system-wide installation of Python modules with pip. It is now preferred to install modules into a virtual environment, which keeps them separate from the files installed by the system package manager and allows different versions to be installed for different applications. For such a Python virtual environment (venv for short) to work, you have to activate it so that the packages installed in it are available to the programs you run. That is done here with source python-venv/bin/activate.
sudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmysqlclient-dev ldap-utils libldap2-dev python3.12-venv default-libmysqlclient-dev build-essential pkg-config libmemcached-dev\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the virtual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 django==4.2.* future==1.0.* mysqlclient==2.2.* \\\n pymysql pillow==10.4.* pylibmc captcha==0.6.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.* \\\n psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.9.* pysaml2==7.3.* pycryptodome==3.20.* cffi==1.17.0 lxml python-ldap==3.4.* gevent==24.2.*\n
Note
Debian 12 and Ubuntu 24.04 discourage system-wide installation of Python modules with pip. It is now preferred to install modules into a virtual environment, which keeps them separate from the files installed by the system package manager and allows different versions to be installed for different applications. For such a Python virtual environment (venv for short) to work, you have to activate it so that the packages installed in it are available to the programs you run. That is done here with source python-venv/bin/activate.
sudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmariadb-dev-compat ldap-utils libldap2-dev libsasl2-dev python3.11-venv \n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the virtual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* pymysql pillow==10.0.* pylibmc captcha==0.4 markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 psd-tools django-pylibmc django_simple_captcha==0.5.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml python-ldap==3.4.3\n
"},{"location":"setup_binary/installation_pro/#creating-user-seafile","title":"Creating user seafile","text":"
Elasticsearch, the indexing server, cannot be run as root. More generally, it is good practice not to run applications as root.
Create a new user and follow the instructions on the screen:
Ubuntu 24.04/22.04Debian 12/11
adduser seafile\n
/usr/sbin/adduser seafile\n
Change ownership of the created directory to the new user:
chown -R seafile: /opt/seafile\n
All the following steps are done as user seafile.
Change to user seafile:
su seafile\n
"},{"location":"setup_binary/installation_pro/#placing-the-seafile-pe-license","title":"Placing the Seafile PE license","text":"
Save the license file in Seafile's program directory /opt/seafile. Make sure that the name is seafile-license.txt.
If the license file has a different name or cannot be read, the Seafile server will start in trial mode, which is limited to at most three users.
"},{"location":"setup_binary/installation_pro/#downloading-the-install-package","title":"Downloading the install package","text":"
The install packages for Seafile PE are available for download in the Seafile Customer Center. To access the Customer Center, a user account is necessary. The registration is free.
Beginning with Seafile PE 7.0.17, the Seafile Customer Center provides two install packages for every version (using Seafile PE 12.0.6 as an example):
seafile-pro-server_12.0.6_x86-64_Ubuntu.tar.gz, compiled in Ubuntu environment
The former is suitable for installation on Ubuntu/Debian servers.
Download the install package using wget (replace the x.x.x with the version you wish to download):
The names of the install packages differ for Seafile CE and Seafile PE. Using Seafile CE and Seafile PE 12.0.6 as an example, the names are as follows:
Seafile CE: seafile-server_12.0.6_x86-64.tar.gz; uncompressing into folder seafile-server-12.0.6
Seafile PE: seafile-pro-server_12.0.6_x86-64.tar.gz; uncompressing into folder seafile-pro-server-12.0.6
"},{"location":"setup_binary/installation_pro/#setting-up-seafile-pro-databases","title":"Setting up Seafile Pro databases","text":"
The install package comes with a script that sets Seafile up for you. Specifically, the script creates the required directories and extracts all files in the right place. It can also create a MySQL user and the three databases that Seafile's components require:
ccnet server
seafile server
seahub
While ccnet server was merged into the seafile-server in Seafile 8.0, the corresponding database is still required for the time being
Run the script as user seafile:
Note
For installations using python virtual environment, activate it if it isn't already active
source python-venv/bin/activate\n
cd seafile-pro-server-12.0.6\n./setup-seafile-mysql.sh\n
Configure your Seafile Server by specifying the following three parameters:
Option Description Note server name Name of the Seafile Server 3-15 characters, only English letters, digits and underscore ('_') are allowed server's ip or domain IP address or domain name used by the Seafile Server Seafile client program will access the server using this address fileserver port TCP port used by the Seafile fileserver Default port is 8082, it is recommended to use this port and to only change it if it is used by another service
In the next step, choose whether to create new databases for Seafile or to use existing databases. The creation of new databases requires the root password for the SQL server.
Note
If you don't have the root password, you need someone who has the privileges, e.g., the database admin, to create the three databases required by Seafile, as well as a MySQL user who can access the databases. For example, to create three databases ccnet_db / seafile_db / seahub_db for ccnet/seafile/seahub respectively, and a MySQL user \"seafile\" to access these databases run the following SQL queries:
create database `ccnet_db` character set = 'utf8';\ncreate database `seafile_db` character set = 'utf8';\ncreate database `seahub_db` character set = 'utf8';\n\ncreate user 'seafile'@'localhost' identified by 'seafile';\n\nGRANT ALL PRIVILEGES ON `ccnet_db`.* to `seafile`@localhost;\nGRANT ALL PRIVILEGES ON `seafile_db`.* to `seafile`@localhost;\nGRANT ALL PRIVILEGES ON `seahub_db`.* to `seafile`@localhost;\n
[1] Create new ccnet/seafile/seahub databases[2] Use existing ccnet/seafile/seahub databases
The script creates these databases and a MySQL user that Seafile Server will use to access them. To this effect, you need to answer these questions:
Question Description Note mysql server host Host address of the MySQL server Default is localhost mysql server port TCP port used by the MySQL server Default port is 3306; almost every MySQL server uses this port mysql root password Password of the MySQL root account The root password is required to create new databases and a MySQL user mysql user for Seafile MySQL user created by the script, used by Seafile's components to access the databases Default is seafile; the user is created unless it exists mysql password for Seafile user Password for the user above, written in Seafile's config files Percent sign ('%') is not allowed database name Name of the database used by ccnet Default is \"ccnet_db\", the database is created if it does not exist seafile database name Name of the database used by Seafile Default is \"seafile_db\", the database is created if it does not exist seahub database name Name of the database used by seahub Default is \"seahub_db\", the database is created if it does not exist
The prompts you need to answer:
Question Description Note mysql server host Host address of the MySQL server Default is localhost mysql server port TCP port used by MySQL server Default port is 3306; almost every MySQL server uses this port mysql user for Seafile User used by Seafile's components to access the databases The user must exist mysql password for Seafile user Password for the user above ccnet database name Name of the database used by ccnet, default is \"ccnet_db\" The database must exist seafile database name Name of the database used by Seafile, default is \"seafile_db\" The database must exist seahub database name Name of the database used by Seahub, default is \"seahub_db\" The database must exist
If the setup is successful, you see the following output:
The folder seafile-server-latest is a symbolic link to the current Seafile Server folder. When later you upgrade to a new version, the upgrade scripts update this link to point to the latest Seafile Server folder.
"},{"location":"setup_binary/installation_pro/#enabling-httphttps-optional-but-recommended","title":"Enabling HTTP/HTTPS (Optional but Recommended)","text":"
You need to set up at least HTTP to make Seafile's web interface work. This manual provides instructions for enabling HTTP/HTTPS with popular web servers and reverse proxies (e.g., Nginx).
"},{"location":"setup_binary/installation_pro/#create-the-env-file-in-conf-directory","title":"Create the .env file in conf/ directory","text":"
nano /opt/seafile/conf/.env\n
Tip
JWT_PRIVATE_KEY: a random string with a length of no less than 32 characters, which can be generated with:
pwgen -s 40 1\n
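If pwgen is not available, a key of sufficient length can also be produced with openssl (assuming the openssl CLI is installed):

```shell
# 20 random bytes, hex-encoded: a 40-character string
openssl rand -hex 20
```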
JWT_PRIVATE_KEY=<Your jwt private key>\nSEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=seafile.example.com\nSEAFILE_MYSQL_DB_HOST=<your database host>\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=<your MySQL password>\nSEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db\n
Run the following commands in /opt/seafile/seafile-server-latest:
Note
For installations using python virtual environment, activate it if it isn't already active
source python-venv/bin/activate\n
su seafile\n./seafile.sh start # Start Seafile service\n./seahub.sh start # Start seahub website, port defaults to 127.0.0.1:8000\n
Success
The first time you start Seahub, the script prompts you to create an admin account for your Seafile Server. Enter the email address of the admin user followed by the password, i.e.:
What is the email for the admin account?\n[ admin email ] <please input your admin's email>\n\nWhat is the password for the admin account?\n[ admin password ] <please input your admin's password>\n\nEnter the password again:\n[ admin password again ] <please input your admin's password again>\n
Now you can access Seafile via the web interface at the host address (e.g., https://seafile.example.com).
"},{"location":"setup_binary/installation_pro/#enabling-full-text-search","title":"Enabling full text search","text":"
Seafile uses the indexing server ElasticSearch to enable full text search.
Our recommendation for deploying ElasticSearch is using Docker. Detailed information about installing Docker on various Linux distributions is available at Docker Docs.
Seafile PE 9.0 only supports ElasticSearch 7.x. Seafile PE 10.0, 11.0, and 12.0 only support ElasticSearch 8.x.
We use ElasticSearch version 8.15.0 as an example in this section. Version 8.15.0 and newer versions have been successfully tested with Seafile.
Pull the Docker image:
sudo docker pull elasticsearch:8.15.0\n
Create a folder for persistent data created by ElasticSearch and change its permission:
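For example (the directory path is an assumption; run with root privileges, e.g., via sudo, and adjust the path to your layout):

```shell
# create a host directory for Elasticsearch data and open its permissions
# so the container's elasticsearch user can write to it
mkdir -p /opt/seafile-elasticsearch/data
chmod -R 777 /opt/seafile-elasticsearch/data
```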
We sincerely thank Mohammed Adel of Safe Decision Co., for the suggestion of this notice.
By default, Elasticsearch only listens on 127.0.0.1, but this restriction may no longer apply once Docker exposes the service port. Exposure to the external network leaves your Elasticsearch service vulnerable to attackers accessing and extracting sensitive data. We recommend that you manually configure the Docker firewall, for example:
sudo iptables -A INPUT -p tcp -s <your seafile server ip> --dport 9200 -j ACCEPT\nsudo iptables -A INPUT -p tcp --dport 9200 -j DROP\n
The above command will only allow the host where your Seafile service is located to connect to Elasticsearch, and other addresses will be blocked. If you deploy Elasticsearch based on binary packages, you need to refer to the official document to set the address that Elasticsearch binds to.
Add the following configuration to seafevents.conf:
[INDEX FILES]\nes_host = <your elasticsearch server's IP, e.g., 127.0.0.1> # IP address of ElasticSearch host\nes_port = 9200 # port of ElasticSearch host\n
Finally, restart Seafile:
su seafile\n./seafile.sh restart && ./seahub.sh restart \n
"},{"location":"setup_binary/migrate_from_sqlite_to_mysql/","title":"Migrate From SQLite to MySQL","text":"
Note
The tutorial is only related to Seafile CE edition.
First make sure the python module for MySQL is installed. On Ubuntu/Debian, use sudo apt-get install python-mysqldb or sudo apt-get install python3-mysqldb to install it.
Steps to migrate Seafile from SQLite to MySQL:
Stop Seafile and Seahub.
Download sqlite2mysql.sh and sqlite2mysql.py to the top directory of your Seafile installation path. For example, /opt/seafile.
Run sqlite2mysql.sh:
chmod +x sqlite2mysql.sh\n./sqlite2mysql.sh\n
This script will produce three files: ccnet-db.sql, seafile-db.sql, seahub-db.sql.
Then create the three databases ccnet_db, seafile_db, and seahub_db, as well as the seafile user.
mysql> create database ccnet_db character set = 'utf8';\nmysql> create database seafile_db character set = 'utf8';\nmysql> create database seahub_db character set = 'utf8';\n
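Mirroring the grants shown in the installation chapter, the seafile user and its privileges can be created like this (the password is a placeholder):

```sql
mysql> create user 'seafile'@'localhost' identified by '<password>';
mysql> GRANT ALL PRIVILEGES ON `ccnet_db`.* to 'seafile'@'localhost';
mysql> GRANT ALL PRIVILEGES ON `seafile_db`.* to 'seafile'@'localhost';
mysql> GRANT ALL PRIVILEGES ON `seahub_db`.* to 'seafile'@'localhost';
```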
Import ccnet data to MySQL.
mysql> use ccnet_db;\nmysql> source ccnet-db.sql;\n
Import seafile data to MySQL.
mysql> use seafile_db;\nmysql> source seafile-db.sql;\n
Import seahub data to MySQL.
mysql> use seahub_db;\nmysql> source seahub-db.sql;\n
ccnet.conf has been removed since Seafile 12.0
Modify configuration files: append the following lines to seahub_settings.py:
DATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.mysql',\n 'USER' : 'root',\n 'PASSWORD' : 'root',\n 'NAME' : 'seahub_db',\n 'HOST' : '127.0.0.1',\n 'PORT': '3306',\n # This is only needed for MySQL older than 5.5.5.\n # For MySQL newer than 5.5.5 INNODB is the default already.\n 'OPTIONS': {\n \"init_command\": \"SET storage_engine=INNODB\",\n }\n }\n}\n
Restart seafile and seahub
Note
User notifications will be cleared during the migration due to a slight difference between MySQL and SQLite. If you only see the busy icon when clicking the notifications button beside your avatar, please clear the notifications table manually by:
use seahub_db;\ndelete from notifications_usernotification;\n
"},{"location":"setup_binary/migrate_from_sqlite_to_mysql/#faq","title":"FAQ","text":""},{"location":"setup_binary/migrate_from_sqlite_to_mysql/#encountered-errno-150-foreign-key-constraint-is-incorrectly-formed","title":"Encountered errno: 150 \"Foreign key constraint is incorrectly formed\"","text":"
This error typically occurs because the table currently being created contains a foreign key that references a table whose primary key has not yet been created. Therefore, check the table creation order in the SQL file: a referenced (parent) table must be created before any table whose foreign key points to it.
\"You and Your\" means the party licensing the Software hereunder.
\"Software\" means the computer programs provided under the terms of this license by Seafile Ltd. together with any documentation provided therewith.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#2-grant-of-rights","title":"2. GRANT OF RIGHTS","text":""},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#21-general","title":"2.1 General","text":"
The License granted for Software under this Agreement authorizes You on a non-exclusive basis to use the Software. The Software is licensed, not sold to You and Seafile Ltd. reserves all rights not expressly granted to You in this Agreement. The License is personal to You and may not be assigned by You to any third party.
Subject to the receipt by Seafile Ltd. of the applicable license fees, You have the right to use the Software as follows:
You may use and install the Software on an unlimited number of computers that are owned, leased, or controlled by you.
Nothing in this Agreement shall permit you, or any third party to disclose or otherwise make available to any third party the licensed Software, source code or any portion thereof.
You agree to indemnify, hold harmless and defend Seafile Ltd. from and against any claims or lawsuits, including attorney's fees, that arise as a result from the use of the Software;
You do not permit further redistribution of the Software by Your end-user customers
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#3-no-derivative-works","title":"3. NO DERIVATIVE WORKS","text":"
The inclusion of source code with the License is explicitly not for your use to customize a solution or re-use in your own projects or products. The benefit of including the source code is for purposes of security auditing. You may modify the code only for emergency bug fixes that impact security or performance and only for use within your enterprise. You may not create or distribute derivative works based on the Software or any part thereof. If you need enhancements to the software features, you should suggest them to Seafile Ltd. for version improvements.
You acknowledge that all copies of the Software in any form are the sole property of Seafile Ltd.. You have no right, title or interest to any such Software or copies thereof except as provided in this Agreement.
You hereby acknowledge and agree that the Software constitutes and contains valuable proprietary products and trade secrets of Seafile Ltd., embodying substantial creative efforts and confidential information, ideas, and expressions. You agree to treat, and take precautions to ensure that your employees and other third parties treat, the Software as confidential in accordance with the confidentiality requirements herein.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#6-disclaimer-of-warranties","title":"6. DISCLAIMER OF WARRANTIES","text":"
EXCEPT AS OTHERWISE SET FORTH IN THIS AGREEMENT THE SOFTWARE IS PROVIDED TO YOU \"AS IS\", AND Seafile Ltd. MAKES NO EXPRESS OR IMPLIED WARRANTIES WITH RESPECT TO ITS FUNCTIONALITY, CONDITION, PERFORMANCE, OPERABILITY OR USE. WITHOUT LIMITING THE FOREGOING, Seafile Ltd. DISCLAIMS ALL IMPLIED WARRANTIES INCLUDING, WITHOUT LIMITATION, ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR FREEDOM FROM INFRINGEMENT. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO THE ABOVE EXCLUSIONS MAY NOT APPLY TO YOU. THE LIMITED WARRANTY HEREIN GIVES YOU SPECIFIC LEGAL RIGHTS, AND YOU MAY ALSO HAVE OTHER RIGHTS THAT VARY FROM ONE JURISDICTION TO ANOTHER.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#7-limitation-of-liability","title":"7. LIMITATION OF LIABILITY","text":"
YOU ACKNOWLEDGE AND AGREE THAT THE CONSIDERATION WHICH Seafile Ltd. IS CHARGING HEREUNDER DOES NOT INCLUDE ANY CONSIDERATION FOR ASSUMPTION BY Seafile Ltd. OF THE RISK OF YOUR CONSEQUENTIAL OR INCIDENTAL DAMAGES WHICH MAY ARISE IN CONNECTION WITH YOUR USE OF THE SOFTWARE. ACCORDINGLY, YOU AGREE THAT Seafile Ltd. SHALL NOT BE RESPONSIBLE TO YOU OR ANY THIRD PARTY FOR ANY LOSS-OF-PROFIT, LOST SAVINGS, INDIRECT, INCIDENTAL, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF A LICENSING OR USE OF THE SOFTWARE.
You agree to defend, indemnify and hold Seafile Ltd. and its employees, agents, representatives and assigns harmless from and against any claims, proceedings, damages, injuries, liabilities, costs, attorney's fees relating to or arising out of Your use of the Software or any breach of this Agreement.
Your license is effective until terminated. You may terminate it at any time by destroying the Software or returning all copies of the Software to Seafile Ltd.. Your license will terminate immediately without notice if You breach any of the terms and conditions of this Agreement, including non or incomplete payment of the license fee. Upon termination of this Agreement for any reason: You will uninstall all copies of the Software; You will immediately cease and desist all use of the Software; and will destroy all copies of the software in your possession.
"},{"location":"setup_binary/seafile_professional_sdition_software_license_agreement/#10-updates-and-support","title":"10. UPDATES AND SUPPORT","text":"
Seafile Ltd. has the right, but no obligation, to periodically update the Software, at its complete discretion, without the consent or obligation to You or any licensee or user.
YOU HEREBY ACKNOWLEDGE THAT YOU HAVE READ THIS AGREEMENT, UNDERSTAND IT AND AGREE TO BE BOUND BY ITS TERMS AND CONDITIONS.
"},{"location":"setup_binary/start_seafile_at_system_bootup/","title":"Start Seafile at System Bootup","text":""},{"location":"setup_binary/start_seafile_at_system_bootup/#for-systems-running-systemd-and-python-virtual-environments","title":"For systems running systemd and python virtual environments","text":"
For example Debian 12
Create the systemd service files, replace ${seafile_dir} with your Seafile installation location and seafile with the user who runs Seafile (if appropriate). Then reload systemd's units: systemctl daemon-reload.
Firstly, you should create a script to activate the python virtual environment, which goes in the ${seafile_dir} directory. In other words, it does not go in \"seafile-server-latest\", but in the directory above it. Throughout this manual the examples use /opt/seafile for this directory, but you might have chosen a different directory.
sudo vim /opt/seafile/run_with_venv.sh\n
The content of the file is:
#!/bin/bash\n# Activate the python virtual environment (venv) before starting one of the seafile scripts\n\ndir_name=\"$(dirname $0)\"\nsource \"${dir_name}/python-venv/bin/activate\"\nscript=\"$1\"\nshift 1\n\necho \"${dir_name}/seafile-server-latest/${script}\" \"$@\"\n\"${dir_name}/seafile-server-latest/${script}\" \"$@\"\n
[Unit]\nDescription=Seafile\n# add mysql.service or postgresql.service depending on your database to the line below\nAfter=network.target\n\n[Service]\nType=forking\nExecStart=bash ${seafile_dir}/run_with_venv.sh seafile.sh start\nExecStop=bash ${seafile_dir}/seafile-server-latest/seafile.sh stop\nLimitNOFILE=infinity\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"setup_binary/start_seafile_at_system_bootup/#for-systems-running-systemd-without-python-virtual-environment","title":"For systems running systemd without python virtual environment","text":"
For example Debian 8 through Debian 11, Linux Ubuntu 15.04 and newer
Create the systemd service files, replace ${seafile_dir} with your Seafile installation location and seafile with the user who runs Seafile (if appropriate). Then reload systemd's units: systemctl daemon-reload.
[Unit]\nDescription=Seafile\n# add mysql.service or postgresql.service depending on your database to the line below\nAfter=network.target\n\n[Service]\nType=forking\nExecStart=${seafile_dir}/seafile-server-latest/seafile.sh start\nExecStop=${seafile_dir}/seafile-server-latest/seafile.sh stop\nLimitNOFILE=infinity\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n
Create systemd service file /etc/systemd/system/seafile-client.service
You need to create this service file only if you have seafile console client and you want to run it on system boot.
sudo vim /etc/systemd/system/seafile-client.service\n
The content of the file is:
[Unit]\nDescription=Seafile client\n# Uncomment the next line if you are running the seafile client on the same computer as the server\n# After=seafile.service\n# Otherwise, uncomment the next one\n# After=network.target\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/seaf-cli start\nExecStop=/usr/bin/seaf-cli stop\nRemainAfterExit=yes\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"setup_binary/start_seafile_at_system_bootup/#enable-service-start-on-system-boot","title":"Enable service start on system boot","text":"
"},{"location":"setup_binary/using_logrotate/","title":"Set up logrotate for server","text":""},{"location":"setup_binary/using_logrotate/#how-it-works","title":"How it works","text":"
seaf-server supports reopening logfiles on receiving a SIGUSR1 signal.
This feature is very useful when you need to rotate logfiles without shutting down the server: you can rotate the logfile on the fly.
Assuming your seaf-server's logfile is set to /opt/seafile/logs/seafile.log and your seaf-server's pidfile is set to /opt/seafile/pids/seaf-server.pid:
The configuration for logrotate could be like this:
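A sketch of such a logrotate configuration (e.g. placed in /etc/logrotate.d/seafile), assuming the logfile and pidfile paths above and that seaf-server reopens its log on SIGUSR1; adjust paths and retention to your setup:

```
/opt/seafile/logs/seafile.log
{
        daily
        missingok
        rotate 7
        compress
        delaycompress
        notifempty
        sharedscripts
        postrotate
                [ ! -f /opt/seafile/pids/seaf-server.pid ] || kill -USR1 `cat /opt/seafile/pids/seaf-server.pid`
        endscript
}
```
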
There are three types of upgrade, i.e., major version upgrade, minor version upgrade and maintenance version upgrade. This page contains general instructions for the three types of upgrade.
After upgrading, you may need to clean the seahub cache if it doesn't behave as expected.
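A sketch of cleaning the cache; the actual location depends on your CACHES setting in seahub_settings.py (/tmp/seahub_cache is the default filesystem cache, and the memcached host/port below are assumptions):

```shell
# default filesystem cache
rm -rf /tmp/seahub_cache
# if you use memcached instead, flush it (host/port are assumptions)
echo 'flush_all' | nc -w 1 127.0.0.1 11211 || true
```
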
If you are using a Docker based deployment, please read upgrade a Seafile docker instance
If you are running a cluster, please read upgrade a Seafile cluster.
If you are using a binary package based deployment, please read instructions below.
Please check the upgrade notes for any special configuration or changes before/while upgrading.
Upgrade notes for 7.1.x
Upgrade notes for 8.0.x
Upgrade notes for 9.0.x
Upgrade notes for 10.0.x
Upgrade notes for 11.0.x
Upgrade notes for 12.0.x
"},{"location":"upgrade/upgrade/#upgrade-a-binary-package-based-deployment","title":"Upgrade a binary package based deployment","text":""},{"location":"upgrade/upgrade/#major-version-upgrade-eg-from-5xx-to-6yy","title":"Major version upgrade (e.g. from 5.x.x to 6.y.y)","text":"
Suppose you are using version 5.1.0 and would like to upgrade to version 6.1.0. First download and extract the new version. You should have a directory layout similar to this:
cd seafile/seafile-server-latest/\n./seafile.sh start\n./seahub.sh start # or \"./seahub.sh start-fastcgi\" if you're using fastcgi\n# or via service\n/etc/init.d/seafile-server start\n
If the new version works fine, the old version can be removed
rm -rf seafile-server-5.1.0/\n
"},{"location":"upgrade/upgrade/#minor-version-upgrade-eg-from-61x-to-62y","title":"Minor version upgrade (e.g. from 6.1.x to 6.2.y)","text":"
Suppose you are using version 6.1.0 and would like to upgrade to version 6.2.0. First download and extract the new version. You should have a directory layout similar to this:
Start from your current version, run the script(s one by one)
upgrade/upgrade_6.1_6.2.sh\n
Start Seafile server
./seafile.sh start\n./seahub.sh start\n# or via service\n/etc/init.d/seafile-server start\n
If the new version works, the old version can be removed
rm -rf seafile-server-6.1.0/\n
"},{"location":"upgrade/upgrade/#maintenance-version-upgrade-eg-from-622-to-623","title":"Maintenance version upgrade (e.g. from 6.2.2 to 6.2.3)","text":"
A maintenance upgrade is for example an upgrade from 6.2.2 to 6.2.3.
Shutdown Seafile server if it's running
For this type of upgrade, you only need to update the symbolic links (for avatar and a few other folders). A script to perform this upgrade is provided with Seafile server (for historical reasons, the script is called minor-upgrade.sh):
cd seafile-server-6.2.3/upgrade/ && ./minor-upgrade.sh\n
Start Seafile
If the new version works, the old version can be removed
rm -rf seafile-server-6.2.2/\n
"},{"location":"upgrade/upgrade_a_cluster/","title":"Upgrade a Seafile cluster","text":""},{"location":"upgrade/upgrade_a_cluster/#major-and-minor-version-upgrade","title":"Major and minor version upgrade","text":"
Seafile adds new features in major and minor versions. It is likely that some database tables need to be modified or the search index need to be updated. In general, upgrading a cluster contains the following steps:
Update Seafile image
Upgrade the database
Update configuration files at each node
Update search index in the backend node
In general, to upgrade a cluster, you need:
Download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version. Start with docker compose up.
Run the upgrade script in container (for example, /opt/seafile/seafile-server-latest/upgrade/upgrade_x_x_x_x.sh) in one frontend node
Update configuration files at each node according to the documentation for each version
Delete old search index in the backend node if needed
"},{"location":"upgrade/upgrade_a_cluster/#upgrade-a-cluster-from-seafile-11-to-12","title":"Upgrade a cluster from Seafile 11 to 12","text":"
Fill in the following fields according to the configuration you used in Seafile 11:
SEAFILE_SERVER_HOSTNAME=<your loadbalance's host>\nSEAFILE_SERVER_PROTOCOL=https # or http\nSEAFILE_MYSQL_DB_HOST=<your mysql host>\nSEAFILE_MYSQL_DB_USER=seafile # if you don't use `seafile` as your Seafile server's account, please correct it\nSEAFILE_MYSQL_DB_PASSWORD=<your mysql password for user `seafile`>\nJWT_PRIVATE_KEY=<your JWT key generated in Sec. 3.1>\n
Remove the variables used in cluster initialization
Since Seafile has been initialized in Seafile 11, the variables related to Seafile cluster initialization can be removed from .env:
INIT_SEAFILE_MYSQL_ROOT_PASSWORD
CLUSTER_INIT_MODE
CLUSTER_INIT_MEMCACHED_HOST
CLUSTER_INIT_ES_HOST
CLUSTER_INIT_ES_PORT
INIT_S3_STORAGE_BACKEND_CONFIG
INIT_S3_COMMIT_BUCKET
INIT_S3_FS_BUCKET
INIT_S3_BLOCK_BUCKET
INIT_S3_KEY_ID
INIT_S3_USE_V4_SIGNATURE
INIT_S3_SECRET_KEY
INIT_S3_AWS_REGION
INIT_S3_HOST
INIT_S3_USE_HTTPS
Start Seafile on one node
Note
According to this upgrade document, a frontend service will be started here. If you plan to use this node as a backend node, you need to modify this item in .env and set it to backend:
CLUSTER_MODE=backend\n
docker compose up -d\n
Upgrade Seafile
docker exec -it seafile bash\n# enter the container `seafile`\n\n# stop servers\ncd /opt/seafile/seafile-server-latest\n./seafile.sh stop\n./seahub.sh stop\n\n# upgrade seafile\ncd upgrade\n./upgrade_11.0_12.0.sh\n
Success
After upgrading the Seafile, you can see the following messages in your console:
Updating seafile/seahub database ...\n\n[INFO] You are using MySQL\n[INFO] updating seafile database...\n[INFO] updating seahub database...\n[INFO] updating seafevents database...\nDone\n\nmigrating avatars ...\n\nDone\n\nupdating /opt/seafile/seafile-server-latest symbolic link to /opt/seafile/seafile-pro-server-12.0.6 ...\n\n\n\n-----------------------------------------------------------------\nUpgraded your seafile server successfully.\n-----------------------------------------------------------------\n
Then you can exit the container by exit
Restart current node
docker compose down\n docker compose up -d\n
Tip
You can use docker logs -f seafile to check whether the current node service is running normally
Operations for other nodes
Download and modify .env similar to the first node (for backend node, you should set CLUSTER_MODE=backend)
Start the Seafile server:
docker compose up -d\n
"},{"location":"upgrade/upgrade_a_cluster_binary/","title":"Upgrade a Seafile cluster (binary)","text":""},{"location":"upgrade/upgrade_a_cluster_binary/#major-and-minor-version-upgrade","title":"Major and minor version upgrade","text":"
Seafile adds new features in major and minor versions. It is likely that some database tables need to be modified or the search index need to be updated. In general, upgrading a cluster contains the following steps:
Upgrade the database
Update symbolic link at frontend and backend nodes to point to the newest version
Update configuration files at each node
Update search index in the backend node
In general, to upgrade a cluster, you need:
Run the upgrade script (for example, ./upgrade/upgrade_4_0_4_1.sh) in one frontend node
Run the minor upgrade script (./upgrade/minor_upgrade.sh) in all other nodes to update symbolic link
Update configuration files at each node according to the documentation for each version
Delete old search index in the backend node if needed
For maintenance upgrade, like from version 10.0.1 to version 10.0.4, just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
For major version upgrade, like from 10.0 to 11.0, see instructions below.
Please check the upgrade notes for any special configuration or changes before/while upgrading.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-120-to-130","title":"Upgrade from 12.0 to 13.0","text":"
From Seafile Docker 13.0, elasticsearch.yml has been separated from seafile-server.yml, and Seafile supports getting cache configuration from environment variables.
From Seafile Docker 13.0 (Pro), the ElasticSearch service is controlled by a separate resource file (i.e., elasticsearch.yml). If you are using Seafile Pro and still plan to use ElasticSearch, please download the elasticsearch.yml
Modify .env, update image version and add cache configurations:
Variables change logs for .env in Seafile 13
The configurations of the database and cache can be read from environment variables directly (you can define them in the .env file). What's more, Redis is now recommended as the primary cache server to support some new features (please refer to the upgrade notes; you can also find more details about Redis in Seafile Docker here) and is the default cache type in Seafile 13.
The configuration of S3 (including Seafile server, SeaSearch, and the newly supported Metadata server) uses unified variables (i.e., S3_xxx) for S3 authorization information in new deployments. Please refer to the end of the table in Seafile Pro deployment for details. If you plan to deploy or redeploy these components in the future, please pay attention to the changes in variable names.
The configuration of the notification server is no longer read from seafile.conf, but from the variable NOTIFICATION_SERVER_URL in the .env file; leave it blank to disable this feature.
Update image version to Seafile 13
Seafile ProSeafile CE
COMPOSE_FILE='...,elasticsearch.yml' # add `elasticsearch.yml` if you are still using ElasticSearch\nSEAFILE_IMAGE=seafileltd/seafile-pro-mc:13.0-latest\n
SEAFILE_IMAGE=seafileltd/seafile-mc:13.0-latest\n
Add configurations for cache:
## Cache\nCACHE_PROVIDER=redis # or memcached\n\n### Redis\nREDIS_HOST=redis\nREDIS_PORT=6379\nREDIS_PASSWORD=\n\n### Memcached\nMEMCACHED_HOST=memcached\nMEMCACHED_PORT=11211\n
Add configuration for notification server (if is enabled in Seafile 12):
NOTIFICATION_SERVER_URL=<your notification server url>\n
Optional but recommended modifications for further configuration files
Although the configurations in the environment (i.e., .env) have higher priority than the configurations in config files, we recommend that you remove or modify the cache configuration in the following files to avoid ambiguity:
seafile.conf: remove the [memcached] section
seahub_settings.py: remove the key default in variable CACHES
Start with docker compose up -d.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-110-to-120","title":"Upgrade from 11.0 to 12.0","text":"
Note: If you have a large number of rows in the Activity table in MySQL, clear this table first (see Clean Database). Otherwise, the database upgrade will take a long time.
From Seafile Docker 12.0, we recommend that you use .env and seafile-server.yml files for configuration.
"},{"location":"upgrade/upgrade_docker/#backup-the-original-docker-composeyml-file","title":"Backup the original docker-compose.yml file:","text":"
The following fields merit particular attention:\n\nSEAFILE_VOLUME: the volume directory of Seafile data (default: /opt/seafile-data)\nSEAFILE_MYSQL_VOLUME: the volume directory of MySQL data (default: /opt/seafile-mysql/db)\nSEAFILE_CADDY_VOLUME: the volume directory of Caddy data, used to store certificates obtained from Let's Encrypt (default: /opt/seafile-caddy)\nSEAFILE_MYSQL_DB_USER: the MySQL user (the database user can be found in conf/seafile.conf; default: seafile)\nSEAFILE_MYSQL_DB_PASSWORD: the MySQL password of user seafile (required)\nSEAFILE_MYSQL_DB_CCNET_DB_NAME: the database name of ccnet (default: ccnet_db)\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME: the database name of seafile (default: seafile_db)\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME: the database name of seahub (default: seahub_db)\nJWT_PRIVATE_KEY: a random string of no less than 32 characters, which can be generated with pwgen -s 40 1 (required)\nSEAFILE_SERVER_HOSTNAME: Seafile server hostname or domain (required)\nSEAFILE_SERVER_PROTOCOL: Seafile server protocol, http or https (default: http)\nTIME_ZONE: time zone (default: UTC)\n
The following fields merit particular attention:\n\nSEAFILE_VOLUME: the volume directory of Seafile data (default: /opt/seafile-data)\nSEAFILE_MYSQL_VOLUME: the volume directory of MySQL data (default: /opt/seafile-mysql/db)\nSEAFILE_CADDY_VOLUME: the volume directory of Caddy data, used to store certificates obtained from Let's Encrypt (default: /opt/seafile-caddy)\nSEAFILE_ELASTICSEARCH_VOLUME: (only valid for Seafile PE) the volume directory of Elasticsearch data (default: /opt/seafile-elasticsearch/data)\nSEAFILE_MYSQL_DB_USER: the MySQL user (the database user can be found in conf/seafile.conf; default: seafile)\nSEAFILE_MYSQL_DB_PASSWORD: the MySQL password of user seafile (required)\nJWT_PRIVATE_KEY: a random string of no less than 32 characters, which can be generated with pwgen -s 40 1 (required)\nSEAFILE_SERVER_HOSTNAME: Seafile server hostname or domain (required)\nSEAFILE_SERVER_PROTOCOL: Seafile server protocol, http or https (default: http)\nTIME_ZONE: time zone (default: UTC)\n
Note
The value of the variables in the above table should be identical to your existing installation. You should check them from the existing configuration files (e.g., seafile.conf).
For variables used to initialize configurations (e.g., INIT_SEAFILE_MYSQL_ROOT_PASSWORD, INIT_SEAFILE_ADMIN_EMAIL, INIT_SEAFILE_ADMIN_PASSWORD), you can remove them from the .env file.
SSL is now handled by the Caddy server. If you used SSL before, you will also need to modify seafile.nginx.conf: change server listen 443 to 80.
Backup the original seafile.nginx.conf file:
cp seafile.nginx.conf seafile.nginx.conf.bak\n
Remove the server listen 80 section:
#server {\n# listen 80;\n# server_name _ default_server;\n\n # allow certbot to connect to challenge location via HTTP Port 80\n # otherwise renewal request will fail\n# location /.well-known/acme-challenge/ {\n# alias /var/www/challenges/;\n# try_files $uri =404;\n# }\n\n# location / {\n# rewrite ^ https://seafile.example.com$request_uri? permanent;\n# }\n#}\n
If you have deployed the notification server: the notification server has now moved to its own Docker image, and you need to redeploy it according to the Notification Server document.
"},{"location":"upgrade/upgrade_docker/#upgrade-seadoc-from-08-to-10-for-seafile-v120","title":"Upgrade SeaDoc from 0.8 to 1.0 for Seafile v12.0","text":"
If you have deployed SeaDoc v0.8 with Seafile v11.0, you can upgrade it to 1.0 with the following steps:
Delete sdoc_db.
Remove SeaDoc configs in seafile.nginx.conf file.
Re-deploy SeaDoc server. In other words, delete the old SeaDoc deployment and deploy a new SeaDoc server.
From version 1.0, SeaDoc is using seahub_db database to store its operation logs and no longer need an extra database sdoc_db. The database tables in seahub_db are created automatically when you upgrade Seafile server from v11.0 to v12.0. You can simply delete sdoc_db.
"},{"location":"upgrade/upgrade_docker/#remove-seadoc-configs-in-seafilenginxconf-file","title":"Remove SeaDoc configs in seafile.nginx.conf file","text":"
If you have deployed an older version of SeaDoc, you should remove the /sdoc-server/ and /socket.io configs in the seafile.nginx.conf file.
"},{"location":"upgrade/upgrade_docker/#supplement-or-remove-allowed_hosts-in-seahub_settingspy","title":"Supplement or remove ALLOWED_HOSTS in seahub_settings.py","text":"
Since version 12.0, the seaf-server component needs to send internal requests to the seahub component to check permissions; a 400 error is reported when downloading files if ALLOWED_HOSTS is set incorrectly. In this case, you can either remove ALLOWED_HOSTS from seahub_settings.py or add 127.0.0.1 to the ALLOWED_HOSTS list:
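A sketch of such an ALLOWED_HOSTS setting (the hostname is a placeholder for your own domain):

```python
# seahub_settings.py
# allow both the public hostname and internal requests from seaf-server
ALLOWED_HOSTS = ['seafile.example.com', '127.0.0.1']
```
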
"},{"location":"upgrade/upgrade_docker/#upgrade-from-100-to-110","title":"Upgrade from 10.0 to 11.0","text":"
Download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version. Taking the community edition as an example, you have to modify
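For the community edition, the image line in docker-compose.yml would change roughly like this (the tag names are illustrative; use the actual 11.0 tag):

```yaml
services:
  seafile:
    # before: image: seafileltd/seafile-mc:10.0-latest
    image: seafileltd/seafile-mc:11.0-latest
```
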
It is also recommended that you upgrade mariadb and memcached to newer versions as in the v11.0 docker-compose.yml file. Specifically, in version 11.0, we use the following versions:
MariaDB: 10.11
Memcached: 1.6.18
What's more, you have to migrate the configuration for LDAP and OAuth as described here
Start with docker compose up.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-90-to-100","title":"Upgrade from 9.0 to 10.0","text":"
Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
If you are using pro edition with ElasticSearch, SAML SSO and storage backend features, follow the upgrading manual on how to update the configuration for these features.
If you want to use the new notification server and rate control (pro edition only), please refer to the upgrading manual.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-80-to-90","title":"Upgrade from 8.0 to 9.0","text":"
Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
Since version 9.0.6, we use Acme V3 (not acme-tiny) to get certificate.
If there is a certificate generated by an old version, you need to back up and move the old certificate directory and the seafile.nginx.conf before starting.
Starting the new container will automatically apply a certificate.
docker compose down\ndocker compose up -d\n
Please wait a moment for the certificate to be applied, then you can modify the new seafile.nginx.conf as you want. Execute the following command to make the nginx configuration take effect.
docker exec seafile nginx -s reload\n
A cron job inside the container will automatically renew the certificate.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-71-to-80","title":"Upgrade from 7.1 to 8.0","text":"
Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-70-to-71","title":"Upgrade from 7.0 to 7.1","text":"
Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
"},{"location":"upgrade/upgrade_notes_for_10.0.x/","title":"Upgrade notes for 10.0","text":"
These notes give additional information about changes. Please always follow the main upgrade guide.
For docker based version, please check upgrade Seafile Docker image
The notification server enables desktop syncing and drive clients to get notification of library changes immediately using websocket. There are two benefits:
Reduce the time for syncing new changes to local
Reduce the load of the server, as periodic polling is removed. There is a significant reduction of load when you have 1000+ clients.
The notification server works with Seafile syncing client 9.0+ and drive client 3.0+.
Please follow the document to enable notification server
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#memcached-section-in-the-seafileconf-pro-edition-only","title":"Memcached section in the seafile.conf (pro edition only)","text":"
If you use storage backend or cluster, make sure the memcached section is in the seafile.conf.
Since version 10.0, all memcached options are consolidated to the one below.
Modify the seafile.conf:
[memcached]\nmemcached_options = --SERVER=<the IP of Memcached Server> --POOL-MIN=10 --POOL-MAX=100\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#rate-control-in-role-settings-pro-edition-only","title":"Rate control in role settings (pro edition only)","text":"
Starting from version 10.0, Seafile allows administrators to configure upload and download speed limits for users with different roles through the following two steps:
Configuring rate limiting for different roles in seahub_settings.py.
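A sketch of what this can look like in seahub_settings.py; the key names follow Seafile's role-permission mechanism and the values are illustrative, so verify both against your version's documentation:

```python
# seahub_settings.py: per-role rate limits (kb/s; 0 means unlimited) -- sketch
ENABLED_ROLE_PERMISSIONS = {
    'default': {
        'upload_rate_limit': 2000,
        'download_rate_limit': 4000,
    },
    'guest': {
        'upload_rate_limit': 100,
        'download_rate_limit': 200,
    },
}
```
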
Elasticsearch is upgraded to version 8.x, which fixes and improves some issues of the file search function.
Since Elasticsearch 7.x, the default number of shards has changed from 5 to 1, because too many index shards over-occupy system resources; but when a single shard's data is too large, search performance also suffers. Starting from version 10.0, Seafile supports customizing the number of shards in the configuration file.
You can use the following command to query the current size of each shard to determine the best number of shards for you:
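A sketch of such a query using the Elasticsearch cat shards API (replace the placeholder with your Elasticsearch server address):

```shell
curl 'http://{es server IP}:9200/_cat/shards?v'
```
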
The official recommendation is that the size of each shard should be between 10 GB and 50 GB: https://www.elastic.co/guide/en/elasticsearch/reference/8.6/size-your-shards.html#shard-size-recommendation.
Modify the seafevents.conf:
[INDEX FILES]\n...\nshards = 10 # default is 5\n...\n
5. Use the following command to check if the reindex task is complete:
# Get the task_id of the reindex task:\n$ curl 'http{s}://{es server IP}:9200/_tasks?actions=*reindex&pretty'\n# Check to see if the reindex task is complete:\n$ curl 'http{s}://{es server IP}:9200/_tasks/:<task_id>?pretty'\n
6. Reset the refresh_interval and number_of_replicas to the values used in the old index:
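A sketch of resetting these settings via the index settings API; the index name and values are placeholders, so use the values from your old index:

```shell
curl -X PUT 'http://{es server IP}:9200/{index name}/_settings' \
  -H 'Content-Type: application/json' \
  -d '{"index": {"refresh_interval": "1s", "number_of_replicas": 1}}'
```
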
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#method-two-rebuild-the-index-and-discard-the-old-index-data","title":"Method two, rebuild the index and discard the old index data","text":"
1. Pull Elasticsearch image:
docker pull elasticsearch:8.5.3\n
Create a new folder to store ES data and give the folder permissions:
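For example, assuming the data directory /opt/seafile-elasticsearch/data used elsewhere in this manual:

```shell
# create the data directory and open up permissions for the container user
mkdir -p /opt/seafile-elasticsearch/data
chmod -R 777 /opt/seafile-elasticsearch/data/
```
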
[INDEX FILES]\n...\nexternal_es_server = true\nes_host = http{s}://{es server IP}\nes_port = 9200\nshards = 10 # default is 5.\n...\n
Restart Seafile server:
su seafile\ncd seafile-server-latest/\n./seafile.sh stop && ./seahub.sh stop\n./seafile.sh start && ./seahub.sh start\n
3. Delete old index data
rm -rf /opt/seafile-elasticsearch/data/*\n
4. Create new index data:
$ cd /opt/seafile/seafile-server-latest\n$ ./pro/pro.py search --update\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#method-three-if-you-are-in-a-cluster-environment","title":"Method three, if you are in a cluster environment","text":"
1. Deploy Elasticsearch 8.x according to method two. Use Seafile 10.0 to deploy a new backend node and modify the seafevents.conf file. The backend node does not start the Seafile background service; just manually run the command ./pro/pro.py search --update.
2. Upgrade the other nodes to Seafile 10.0 version and use the new Elasticsearch 8.x server.
3. Then deactivate the old backend node and the old version of Elasticsearch.
"},{"location":"upgrade/upgrade_notes_for_11.0.x/","title":"Upgrade notes for 11.0","text":"
These notes give additional information about changes. Please always follow the main upgrade guide.
For docker based version, please check upgrade Seafile Docker image
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#important-release-changes","title":"Important release changes","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-of-user-identity","title":"Change of user identity","text":"
Previous Seafile versions directly used a user's email address or SSO identity as their internal user ID.
Seafile 11.0 introduces virtual user IDs - random, internal identifiers like \"adc023e7232240fcbb83b273e1d73d36@auth.local\". For new users, a virtual ID will be generated instead of directly using their email. A mapping between the email and virtual ID will be stored in the \"profile_profile\" database table. For SSO users, the mapping between SSO ID and virtual ID is stored in the \"social_auth_usersocialauth\" table.
Overall this brings more flexibility for handling user accounts and identity changes. Existing users keep their old IDs.
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#reimplementation-of-ldap-integration","title":"Reimplementation of LDAP Integration","text":"
Previous Seafile versions handled LDAP authentication in the ccnet-server component. In Seafile 11.0, LDAP is reimplemented within the Seahub Python codebase.
LDAP configuration has been moved from ccnet.conf to seahub_settings.py. The ccnet_db.LDAPImported table is no longer used - LDAP users are now stored in ccnet_db.EmailUsers along with other users.
Benefits of this new implementation:
Improved compatibility across different systems. Python code is more portable than the previous C implementation.
Consistent handling of users whether they login via LDAP or other methods like email/password.
You need to run the migrate_ldapusers.py script to merge the ccnet_db.LDAPImported table into the ccnet_db.EmailUsers table. The configuration files need to be changed manually. (See more details below)
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#oauth-authentication-and-other-sso-methods","title":"OAuth authentication and other SSO methods","text":"
If you use OAuth authentication, the configuration needs to be changed slightly.
If you use SAML, you don't need to change configuration files. For SAML2 in version 10, the name_id field returned from the SAML server was used as the username (the email field in ccnet_db.EmailUsers). In version 11, for old users, Seafile will find the old user and create a name_id to name_id mapping in social_auth_usersocialauth. For new users, Seafile will create a new user with a random ID and add a name_id to random ID mapping in social_auth_usersocialauth. In addition, you can now disable login with a username and password for SAML users by setting DISABLE_ADFS_USER_PWD_LOGIN = True in seahub_settings.py.
Seafile 11.0 drops support for SQLite as the database. It is better to migrate from SQLite to MySQL before upgrading to version 11.0.
There are several reasons driving this change:
Focus on collaborative features - SQLite's limitations make the advanced concurrency and locking required by collaborative editing difficult, and different Seafile components need simultaneous database access, especially after the seafevents component was added to the community edition in version 11.0.
Docker deployments - Our official Docker images do not support SQLite. MySQL is the preferred option.
Migration difficulties - Migrating SQLite databases to MySQL via SQL translation is unreliable.
To migrate from SQLite database to MySQL database, you can follow the document Migrate from SQLite to MySQL. If you have issues in the migration, just post a thread in our forum. We are glad to help you.
Django 4.x introduced a new check on the Origin HTTP header in CSRF verification. It compares the Origin header with the Host header, and an error is triggered if they differ.
If you deploy Seafile behind a proxy, use a non-standard port, or run Seafile in a cluster, the Origin and Host headers received by Django are likely to differ, because the proxy is likely to modify the Host header. This mismatch results in a CSRF error.
You can add CSRF_TRUSTED_ORIGINS to seahub_settings.py to solve the problem:
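For example, a minimal sketch (the hostname is a placeholder; use your actual service URL, including the scheme):

```python
# seahub_settings.py (fragment) -- hypothetical hostname, replace with yours
CSRF_TRUSTED_ORIGINS = ["https://seafile.example.com"]
```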
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#upgrade-to-110x","title":"Upgrade to 11.0.x","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#1-stop-seafile-100x-server","title":"1) Stop Seafile-10.0.x server.","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#2-start-from-seafile-110x-run-the-script","title":"2) Start from Seafile 11.0.x, run the script:","text":"
upgrade/upgrade_10.0_11.0.sh\n
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#3modify-configurations-and-migrate-ldap-records","title":"3\uff09Modify configurations and migrate LDAP records","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-configurations-for-ldap","title":"Change configurations for LDAP","text":"
The configuration items of LDAP login and LDAP sync tasks are migrated from ccnet.conf to seahub_settings.py. The name of the configuration item is based on the 10.0 version, and the characters 'LDAP_' or 'MULTI_LDAP_1' are added. Examples are as follows:
# Basic configuration items for LDAP login\nENABLE_LDAP = True\nLDAP_SERVER_URL = 'ldap://192.168.0.125' # The URL of LDAP server\nLDAP_BASE_DN = 'ou=test,dc=seafile,dc=ren' # The root node of users who can \n # log in to Seafile in the LDAP server\nLDAP_ADMIN_DN = 'administrator@seafile.ren' # DN of the administrator used \n # to query the LDAP server for information\nLDAP_ADMIN_PASSWORD = 'Hello@123' # Password of LDAP_ADMIN_DN\nLDAP_PROVIDER = 'ldap' # Identify the source of the user, used in \n # the table social_auth_usersocialauth, defaults to 'ldap'\nLDAP_LOGIN_ATTR = 'userPrincipalName' # User's attribute used to log in to Seafile, \n # can be mail or userPrincipalName, cannot be changed\nLDAP_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' # Additional filter conditions,\n # users who meet the filter conditions can log in, otherwise they cannot log in\n# For update user info when login\nLDAP_CONTACT_EMAIL_ATTR = '' # For update user's contact_email\nLDAP_USER_ROLE_ATTR = '' # For update user's role\nLDAP_USER_FIRST_NAME_ATTR = 'givenName' # For update user's first name\nLDAP_USER_LAST_NAME_ATTR = 'sn' # For update user's last name\nLDAP_USER_NAME_REVERSE = False # Whether to reverse the user's first and last name\n
The following configuration items are only for Pro Edition:
# Configuration items for LDAP sync tasks.\nLDAP_SYNC_INTERVAL = 60 # LDAP sync task period, in minutes\n\n# LDAP user sync configuration items.\nENABLE_LDAP_USER_SYNC = True # Whether to enable user sync\nLDAP_USER_OBJECT_CLASS = 'person' # This is the name of the class used to search for user objects. \n # In Active Directory, it's usually \"person\". The default value is \"person\".\nLDAP_DEPT_ATTR = '' # LDAP user's department info\nLDAP_UID_ATTR = '' # LDAP user's login_id attribute\nLDAP_AUTO_REACTIVATE_USERS = True # Whether to auto activate deactivated user\nLDAP_USE_PAGED_RESULT = False # Whether to use pagination extension\nIMPORT_NEW_USER = True # Whether to import new users when sync user\nACTIVATE_USER_WHEN_IMPORT = True # Whether to activate the user when importing new user\nENABLE_EXTRA_USER_INFO_SYNC = True # Whether to enable sync of additional user information,\n # including user's full name, contact_email, department, and Windows login name, etc.\nDEACTIVE_USER_IF_NOTFOUND = False # Set to \"true\" if you want to deactivate a user \n # when he/she was deleted in AD server.\n\n# LDAP group sync configuration items.\nENABLE_LDAP_GROUP_SYNC = True # Whether to enable group sync\nLDAP_GROUP_FILTER = '' # Group sync filter\nLDAP_SYNC_DEPARTMENT_FROM_OU = True # Whether to enable sync departments from OU.\nLDAP_GROUP_OBJECT_CLASS = 'group' # This is the name of the class used to search for group objects.\nLDAP_GROUP_MEMBER_ATTR = 'member' # The attribute field to use when loading the group's members. 
\n # For most directory servers, the attribute is \"member\", \n which is the default value. For \"posixGroup\", it should be set to \"memberUid\".\nLDAP_USER_ATTR_IN_MEMBERUID = 'uid' # The user attribute set in 'memberUid' option, \n # which is used in \"posixGroup\". The default value is \"uid\".\nLDAP_GROUP_UUID_ATTR = 'objectGUID' # Used to uniquely identify groups in LDAP\nLDAP_USE_GROUP_MEMBER_RANGE_QUERY = False # When a group contains too many members, \n # AD will only return part of them. Set this option to TRUE\n # to make LDAP sync work with large groups.\nLDAP_SYNC_GROUP_AS_DEPARTMENT = False # Whether to sync groups as top-level departments in Seafile\nLDAP_DEPT_NAME_ATTR = '' # Used to get the department name.\nLDAP_CREATE_DEPARTMENT_LIBRARY = False # If you decide to sync the group as a department,\n # you can set this option to \"true\". In this way, when \n # the group is synchronized for the first time, a library\n # is automatically created for the department, and the \n # library's name is the department's name.\nLDAP_DEPT_REPO_PERM = 'rw' # Set the permissions of the department repo, default permission is 'rw'.\nLDAP_DEFAULT_DEPARTMENT_QUOTA = -2 # You can set a default space quota for each department\n # when you synchronize a group for the first time. The \n # quota is set to unlimited if this option is not set.\n # Unit is MB.\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\", sync process will delete the group if it is not found in the LDAP server.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\", sync process will delete the department if it is not found in the LDAP server.\n
If you sync users from LDAP to Seafile and want Seafile to find the existing account when a user logs in via SSO (ADFS, OAuth, or Shibboleth) instead of creating a new one, you can set SSO_LDAP_USE_SAME_UID = True:
SSO_LDAP_USE_SAME_UID = True\n
Note: here UID means the unique user ID. In LDAP it is the attribute you use for LDAP_LOGIN_ATTR (not LDAP_UID_ATTR); in ADFS it is the uid attribute. You need to make sure you use the same attribute for the two settings.
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-configuration-for-oauth","title":"Change configuration for OAuth:","text":"
In the new version, the OAuth login configuration should keep the email attribute unchanged to be compatible with both new and old user logins. In version 11.0, a new uid attribute is added as a user's external unique ID. The uid is stored in social_auth_usersocialauth to map to the internal virtual ID. For old users, the original email is used as the internal virtual ID. An example is as follows:
# Version 10.0 or earlier\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"),\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n\n# Since 11.0 version, added 'uid' attribute.\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # In the new version, the email attribute configuration should be kept unchanged to be compatible with old and new user logins\n \"uid\": (True, \"uid\"), # Seafile use 'uid' as the external unique identifier of the user. Different OAuth systems have different attributes, which may be: 'uid' or 'username', etc.\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n
When a user logs in, Seafile will first use the \"id -> email\" map to find the old user and then create a \"uid -> uid\" map for this old user. After all users have logged in once, you can delete the configuration \"id\": (True, \"email\"). You can also manually add records in social_auth_usersocialauth to map external uids to old users.
We have documented common issues encountered by users when upgrading to version 11.0 in our FAQ https://cloud.seatable.io/dtable/external-links/7b976c85f504491cbe8e/?tid=0000&vid=0000.
If you encounter any issue, please check the FAQ first.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/","title":"Upgrade notes for 12.0","text":"
These notes give additional information about changes. Please always follow the main upgrade guide.
For docker based version, please check upgrade Seafile Docker image
SeaDoc is now stable, providing an online notes and documents feature
A new wiki module
A new trash mechanism: deleted files are recorded in the database for fast listing. In the old version, deleted files were scanned from library history, which was slow.
Community edition now also supports online GC (because SQLite support is dropped)
Configuration changes:
Notification server is now packaged into its own docker image.
For binary package based installation, a new .env file is needed to contain some configuration items that need to be shared by different components in Seafile. We name it .env to be consistent with the docker based installation.
The password strength level is now calculated by an algorithm. The old USER_PASSWORD_MIN_LENGTH and USER_PASSWORD_STRENGTH_LEVEL options are removed. Only USER_STRONG_PASSWORD_REQUIRED is still used.
ADDITIONAL_APP_BOTTOM_LINKS is removed, because there is no bottom bar in the navigation sidebar anymore.
SERVICE_URL and FILE_SERVER_ROOT are removed. SERVICE_URL will be calculated from SEAFILE_SERVER_PROTOCOL and SEAFILE_SERVER_HOSTNAME in .env file.
ccnet.conf is removed. Some of its configuration items are moved to the .env file; others are read from items with the same name in seafile.conf.
Two role permissions are added: can_create_wiki and can_publish_wiki control whether a role can create and publish a Wiki. The old role permission can_publish_repo is removed.
The REMOTE_USER header is not passed to Seafile by default; you need to change gunicorn.conf.py if you need the REMOTE_USER header for SSO.
Other changes:
A new lightweight and fast search engine, SeaSearch. SeaSearch is optional, you can still use ElasticSearch.
Breaking changes
For security reasons, WebDAV no longer supports login with an LDAP account; users with LDAP accounts must generate a WebDAV token on the profile page
[File tags] The current file tags feature is deprecated. We will re-implement a new one in version 13.0 with a new general metadata management module.
For ElasticSearch based search, full text search of doc/xls/ppt file types is no longer supported. This enables us to remove the Java dependency on the Seafile side.
Deploying Seafile with the binary package is now deprecated and will probably no longer be supported in version 13.0. We recommend migrating your existing Seafile deployment to a Docker based one.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#upgrade-to-120-for-binary-installation","title":"Upgrade to 12.0 (for binary installation)","text":"
The following instructions are for binary package based installations. If you use a Docker based installation, please see Upgrade Docker image
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#1-clean-database-tables-before-upgrade","title":"1) Clean database tables before upgrade","text":"
If you have a large number of rows in the Activity table in MySQL, clear this table first (see Clean Database). Otherwise, the database upgrade will take a long time.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#2-install-new-system-libraries-and-python-libraries","title":"2) Install new system libraries and Python libraries","text":"
Install new system libraries and Python libraries for your operating system as documented above.
In the folder of Seafile 11.0.x, run the commands:
./seahub.sh stop\n./seafile.sh stop\n
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#4-run-seafile-120x-upgrade-script","title":"4) Run Seafile 12.0.x upgrade script","text":"
In the folder of Seafile 12.0.x, run the upgrade script
upgrade/upgrade_11.0_12.0.sh\n
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#5-create-the-env-file-in-conf-directory","title":"5) Create the .env file in conf/ directory","text":"
conf/.env
TIME_ZONE=UTC\nJWT_PRIVATE_KEY=xxx\nSEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=seafile.example.com\nSEAFILE_MYSQL_DB_HOST=db # your MySQL host\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=<your MySQL password>\nSEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db\n
Tip
JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters, can be generated by
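For example, a sketch using openssl (assumed to be installed; any generator producing a random string of at least 32 characters works):

```shell
# generate a random base64 string (about 56 characters) for JWT_PRIVATE_KEY
JWT_PRIVATE_KEY=$(openssl rand -base64 40)
echo "$JWT_PRIVATE_KEY"
```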
Since Seafile 12.0, Docker is used to deploy the notification server. Please follow the notification server document to re-deploy the notification server.
Note
The notification server is designed to work with Docker based deployments. To make it work with the Seafile binary package on the same server, you will need to add Nginx rules for the notification server properly.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#8-optional-upgrade-seadoc-from-08-to-10","title":"8) (Optional) Upgrade SeaDoc from 0.8 to 1.0","text":"
If you have deployed SeaDoc v0.8 with Seafile v11.0, you can upgrade it to 1.0 with the following two steps:
Delete sdoc_db.
Re-deploy SeaDoc server. In other words, delete the old SeaDoc deployment and re-deploy a new SeaDoc server.
SeaDoc and Seafile binary package
Deploying SeaDoc and Seafile binary package on the same server is no longer officially supported. You will need to add Nginx rules for SeaDoc server properly.
From version 1.0, SeaDoc uses the seahub_db database to store its operation logs and no longer needs the extra sdoc_db database. The database tables in seahub_db are created automatically when you upgrade Seafile server from v11.0 to v12.0. You can simply delete sdoc_db.
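Dropping the database can be done from the MySQL client, for example (a sketch; the user and host are placeholders for your setup):

```shell
# remove the obsolete SeaDoc database; adjust user/host to your deployment
mysql -h 127.0.0.1 -u root -p -e "DROP DATABASE IF EXISTS sdoc_db;"
```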
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#82-deploy-a-new-seadoc-server","title":"8.2) Deploy a new SeaDoc server","text":"
Please see the document Setup SeaDoc to install SeaDoc on a separate machine and integrate with your binary packaged based Seafile server v12.0.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#9-optional-update-gunicornconfpy-file-in-conf-directory","title":"9) (Optional) Update gunicorn.conf.py file in conf/ directory","text":"
If you deployed single sign-on (SSO) via the Shibboleth protocol, the following line should be added to the gunicorn config file.
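A sketch of what this typically looks like, assuming gunicorn >= 22.0, where the header_map setting controls forwarding of headers containing underscores such as REMOTE_USER (verify the exact line against your Seafile version's documentation):

```python
# gunicorn.conf.py (fragment) -- allow underscore headers such as REMOTE_USER
# "dangerous" is gunicorn's literal setting value, not a placeholder
header_map = "dangerous"
```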
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#10-optional-other-configuration-changes","title":"10) (Optional) Other configuration changes","text":""},{"location":"upgrade/upgrade_notes_for_12.0.x/#enable-passing-of-remote_user","title":"Enable passing of REMOTE_USER","text":"
The REMOTE_USER header is not passed to Seafile by default; you need to change gunicorn.conf.py if you need the REMOTE_USER header for SSO.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#supplement-or-remove-allowed_hosts-in-seahub_settingspy","title":"Supplement or remove ALLOWED_HOSTS in seahub_settings.py","text":"
Since version 12.0, the seaf-server component needs to send internal requests to the seahub component to check permissions; a 400 error is reported when downloading files if ALLOWED_HOSTS is set incorrectly. In this case, you can either remove ALLOWED_HOSTS from seahub_settings.py or add 127.0.0.1 to the ALLOWED_HOSTS list:
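For example (the hostname is a placeholder for your own):

```python
# seahub_settings.py (fragment) -- keep your own hostname, plus loopback
ALLOWED_HOSTS = ["seafile.example.com", "127.0.0.1"]
```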
We have documented common issues encountered by users when upgrading to version 12.0 in our FAQ https://cloud.seatable.io/dtable/external-links/7b976c85f504491cbe8e/?tid=0000&vid=0000.
If you encounter any issue, please check the FAQ first.
"},{"location":"upgrade/upgrade_notes_for_13.0.x/","title":"Upgrade notes for 12.0","text":"
These notes give additional information about changes. Please always follow the main upgrade guide.
For docker based version, please check upgrade Seafile Docker image
SeaDoc: SeaDoc is now version 2.0; besides sdoc files, it also supports whiteboards
Thumbnail server: A new thumbnail server component is added to improve thumbnail generation performance and to support thumbnails for videos
Metadata server: A new metadata server component is available to manage extended file properties
Notification server: The web interface now supports real-time updates when other people add or remove files, if the notification server is enabled
SeaSearch: SeaSearch is now version 1.0 and supports full-text search
Configuration changes:
Database and memcache configurations are added to .env; it is recommended to use environment variables to configure the database and cache
Redis is recommended as the cache server
(Optional) S3 configuration can be done via environment variables and is much simplified
Elasticsearch now has its own .yml file
Breaking changes
For security reasons, WebDAV no longer supports login with an LDAP account; users with LDAP accounts must generate a WebDAV token on the profile page
[File tags] The old file tags feature can no longer be used; the interface provides an upgrade notice for migrating the data to the new file tags feature
Deploying Seafile with the binary package is no longer supported for the community edition. We recommend migrating your existing Seafile deployment to a Docker based one.
Elasticsearch version is not changed in Seafile version 13.0
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#new-system-libraries-to-be-updated","title":"New system libraries (TO be updated)","text":"Ubuntu 24.04/22.04Debian 11
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#upgrade-to-130-for-binary-installation","title":"Upgrade to 13.0 (for binary installation)","text":"
The following instructions are for binary package based installations. If you use a Docker based installation, please see Upgrade Docker image
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#1-clean-database-tables-before-upgrade","title":"1) Clean database tables before upgrade","text":"
If you have a large number of rows in the Activity table in MySQL, clear this table first (see Clean Database). Otherwise, the database upgrade will take a long time.
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#2-install-new-system-libraries-and-python-libraries","title":"2) Install new system libraries and Python libraries","text":"
Install new system libraries and Python libraries for your operating system as documented above.
In the folder of Seafile 12.0.x, run the commands:
./seahub.sh stop\n./seafile.sh stop\n
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#4-run-seafile-120x-upgrade-script","title":"4) Run Seafile 12.0.x upgrade script","text":"
In the folder of Seafile 13.0.x, run the upgrade script
upgrade/upgrade_12.0_13.0.sh\n
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#5-create-the-env-file-in-conf-directory","title":"5) Create the .env file in conf/ directory","text":"
conf/.env
TIME_ZONE=UTC\nJWT_PRIVATE_KEY=xxx\nSEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=seafile.example.com\nSEAFILE_MYSQL_DB_HOST=db # your MySQL host\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=<your MySQL password>\nSEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db\n
Tip
JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters, can be generated by
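For example, a sketch using openssl (assumed to be installed; any 32+ character random string works):

```shell
# 40 hexadecimal characters for JWT_PRIVATE_KEY
JWT_PRIVATE_KEY=$(openssl rand -hex 20)
echo "$JWT_PRIVATE_KEY"
```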
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#7-optional-upgrade-notification-server","title":"7) (Optional) Upgrade notification server","text":""},{"location":"upgrade/upgrade_notes_for_13.0.x/#8-optional-upgrade-seadoc-from-10-to-20","title":"8) (Optional) Upgrade SeaDoc from 1.0 to 2.0","text":""},{"location":"upgrade/upgrade_notes_for_13.0.x/#faq","title":"FAQ","text":"
We have documented common issues encountered by users when upgrading to version 13.0 in our FAQ https://cloud.seatable.io/dtable/external-links/7b976c85f504491cbe8e/?tid=0000&vid=0000.
If you encounter any issue, please check the FAQ first.
"},{"location":"upgrade/upgrade_notes_for_8.0.x/","title":"Upgrade notes for 8.0","text":"
These notes give additional information about changes. Please always follow the main upgrade guide.
SERVICE_URL is moved from ccnet.conf to seahub_settings.py. The upgrade script will read it from ccnet.conf and write to seahub_settings.py
(pro edition only) ElasticSearch is upgraded to version 6.8. ElasticSearch needs to be installed and managed individually. (As ElasticSearch changed its license since 6.2, it can no longer be included in the Seafile package.) There are some benefits to managing ElasticSearch individually:
Reduce the size of Seafile package
You can change ElasticSearch settings more easily
(pro edition only) The built-in Office file preview is now implemented by a separate docker image. This makes it easier to maintain. We also suggest OnlyOffice as an alternative.
The Seafile community edition package for CentOS is no longer maintained (pro edition packages will still be maintained). We suggest users migrate to Docker images.
We rewrote the HTTP service in seaf-server in golang and moved it to a separate component (turned off by default)
The new file-server written in golang serves HTTP requests to upload/download/sync files. It provides several advantages:
The performance is better in a high-concurrency environment and it can handle long requests.
Now you can sync libraries with a large number of files.
Now file zipping and downloading can be done simultaneously. When zip downloading a folder, you don't need to wait until the zip is done.
Support rate control for file uploading and downloading.
You can turn the golang file-server on by adding the following configuration to seafile.conf
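The option lives in the [fileserver] section; a sketch (double-check the option name against the Seafile manual for your version):

```ini
[fileserver]
use_go_fileserver = true
```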
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#upgrade-to-90x","title":"Upgrade to 9.0.x","text":"
Stop Seafile-8.0.x server.
Start from Seafile 9.0.x, run the script:
upgrade/upgrade_8.0_9.0.sh\n
Start Seafile-9.0.x server.
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#update-elasticsearch-pro-edition-only","title":"Update ElasticSearch (pro edition only)","text":""},{"location":"upgrade/upgrade_notes_for_9.0.x/#method-one-rebuild-the-index-and-discard-the-old-index-data","title":"Method one, rebuild the index and discard the old index data","text":"
If your Elasticsearch data is not large, it is recommended to deploy the latest 7.x version of ElasticSearch and then rebuild the new index. The specific steps are as follows
Download ElasticSearch image
docker pull elasticsearch:7.16.2\n
Create a new folder to store ES data and give the folder permissions
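The folder creation step above can be sketched as follows (the path matches the one used elsewhere in this guide):

```shell
# create the Elasticsearch data directory and open up its permissions
mkdir -p /opt/seafile-elasticsearch/data
chmod -R 777 /opt/seafile-elasticsearch/data
```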
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#method-two-reindex-the-existing-data","title":"Method two, reindex the existing data","text":"
If your data volume is relatively large, it will take a long time to rebuild indexes for all Seafile data, so you can reindex the existing data instead. This requires the following steps
Download and start Elasticsearch 7.x
Use the existing data to execute ElasticSearch Reindex in order to build an index that can be used in 7.x
The detailed process is as follows
Download ElasticSearch image:
docker pull elasticsearch:7.16.2\n
PS: For Seafile version 9.0, you need to manually create the elasticsearch mapping path on the host machine and give it 777 permission; otherwise Elasticsearch will report path permission problems when starting. The command is as follows
mkdir -p /opt/seafile-elasticsearch/data \n
Move original data to the new folder and give the folder permissions
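A sketch of the move step, assuming your existing Elasticsearch data lives under the hypothetical path below (adjust OLD_ES_DATA to your actual location):

```shell
# hypothetical old path -- point this at your existing ES data directory
OLD_ES_DATA=/opt/seafile-elasticsearch/data-old
mv "$OLD_ES_DATA"/* /opt/seafile-elasticsearch/data/
chmod -R 777 /opt/seafile-elasticsearch/data
```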
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#method-three-if-you-are-in-a-cluster-environment","title":"Method three, if you are in a cluster environment","text":"
Deploy a new ElasticSearch 7.x service, use Seafile 9.0 to deploy a new backend node, and connect it to ElasticSearch 7.x. The backend node does not start the Seafile background service; just manually run the command ./pro/pro.py search --update. Then upgrade the other nodes to Seafile 9.0 and switch them to the new ElasticSearch 7.x after the index is created. Finally, deactivate the old backend node and the old version of ElasticSearch.
"}]}
\ No newline at end of file
diff --git a/13.0/setup/caddy/index.html b/13.0/setup/caddy/index.html
index bdda7b33..c8ddeafd 100644
--- a/13.0/setup/caddy/index.html
+++ b/13.0/setup/caddy/index.html
@@ -768,6 +768,19 @@
+
+
@@ -780,6 +793,58 @@
+
+
+
+
+
+
+
+
+
+
+
Caddy is a modern open source web server that mainly binds external traffic to internal services in Seafile Docker. In addition to the advantages of traditional proxy components (e.g., Nginx), Caddy makes it easier for users to obtain and renew HTTPS certificates by providing a simpler configuration.
We provide two options for enabling HTTPS via Caddy, both relying on the caddy-docker-proxy container from Lucaslorentz, which supports dynamic configuration with labels:
With caddy.yml, a default volume mount is created at /opt/seafile-caddy (you can change it by modifying SEAFILE_CADDY_VOLUME in .env). By convention, you should place your certificate and key files on the container host filesystem under /opt/seafile-caddy/certs/ to make them available to Caddy:
+
/opt/seafile-caddy/certs/
+├── cert.pem    # xxx.crt in some cases
+└── key.pem     # xxx.key in some cases
+
+
+
Command to generate custom certificates
+
With this command, you can generate your own custom certificates:
Please be aware that custom certificates cannot be used for IP addresses
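The generation command can be sketched with openssl (a sketch only; the CN is a placeholder, replace it with your domain):

```shell
# generate a self-signed certificate/key pair valid for one year
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout key.pem -out cert.pem \
  -subj "/CN=seafile.example.com"
```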
+
+
Then modify seafile-server.yml to enable your custom certificate. We strongly recommend making a backup of seafile-server.yml before doing this:
services:
+ ...
+ seafile:
+ ...
+ volumes:
+ ...
+ # If you use a self-generated certificate, please add it to the Seafile server trusted directory (i.e. remove the comment symbol below)
+ # - "/opt/seafile-caddy/certs/cert.pem:/usr/local/share/ca-certificates/cert.crt"
+ labels:
+ caddy: ${SEAFILE_SERVER_HOSTNAME:?Variable is not set or empty} # leave only this variable
+ caddy.tls: "/data/caddy/certs/cert.pem /data/caddy/certs/key.pem"
+ ...
+
+
+
DNS resolution must work inside the container
+
If you're using a non-public URL like my-custom-setup.local, you have to make sure that the docker container can resolve this DNS query. If you don't run your own DNS servers, you have to add extra_hosts to your .yml file.