After that, there will be a \"Two-Factor Authentication\" section in the user profile page.
Users can use the Google Authenticator app on their smartphone to scan the QR code.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/","title":"Seafile Professional Server Changelog (old)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#71","title":"7.1","text":"Upgrade
Please check our document for how to upgrade to 7.1.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#7122-20210729","title":"7.1.22 (2021/07/29)","text":"Potential breaking change in Seafile Pro 7.1.16: You can set the maximum number of files contained in a library that can be synced by the Seafile client. The default is 100000. When you download a repo, the Seafile client will request the fs id list, and you can control the timeout period of this request through the fs_id_list_request_timeout configuration, which defaults to 5 minutes. These two options were added to prevent long fs-id-list requests from overloading the server. If you have large libraries on the server, this can cause an \"internal server error\" to be returned to the client. You have to set a large enough limit for both options.
[fileserver]\nmax_sync_file_count = 100000\nfs_id_list_request_timeout = 300\n"},{"location":"changelog/changelog-for-seafile-professional-server-old/#7115-20210318","title":"7.1.15 (2021/03/18)","text":"Since seafile-pro 7.0.0, we have upgraded Elasticsearch to 5.6. As Elasticsearch 5.6 relies on the Java 8 environment and can't run with root, you need to run Seafile with a non-root user and upgrade the Java version.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#7019-20200907","title":"7.0.19 (2020/09/07)","text":"-Xms1g -Xmx1g In version 6.3, Django is upgraded to version 1.11. Django 1.8, which is used in version 6.2, was deprecated in April 2018.
With this upgrade, the fast-cgi mode is no longer supported. You need to configure Seafile behind Nginx/Apache in WSGI mode.
The way to run Seahub on another port has also changed. You need to modify the configuration file conf/gunicorn.conf instead of running ./seahub.sh start <another-port>.
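As a sketch of the new approach (a hypothetical conf/gunicorn.conf fragment; `bind` is the standard Gunicorn setting, but verify against the default file shipped with your release):

```python
# conf/gunicorn.conf -- run Seahub on another port.
# "bind" is the standard Gunicorn option; the address shown here is an example.
bind = "127.0.0.1:8001"
```

Keep the other options from the shipped default file unchanged.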
Version 6.3 also changed the database table for file comments. If you have used this feature, you need to migrate old file comments using the following command after upgrading to 6.3:
./seahub.sh python-env seahub/manage.py migrate_file_comment\n Note: this command should be run while the Seafile server is running.
Version 6.3 changed '/shib-login' to '/sso'. If you use Shibboleth, you need to update your Apache/Nginx config. Please check the updated document: shibboleth config v6.3
Version 6.3 adds a new option for file search (seafevents.conf):
[INDEX FILES]\n...\nhighlight = fvh\n...\n This option improves search speed significantly (10x) when the search result contains large pdf/doc files. But you need to rebuild the search index if you want to add this option.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#6314-20190521","title":"6.3.14 (2019/05/21)","text":"New features
From 6.2, it is recommended to use WSGI mode for communication between Seahub and Nginx/Apache. Two steps are needed if you'd like to switch to WSGI mode:
./seahub.sh start instead of ./seahub.sh start-fastcgi. The configuration of Nginx is as follows:
location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n # used for view/edit office file via Office Online Server\n client_max_body_size 0;\n\n access_log /var/log/nginx/seahub.access.log;\n error_log /var/log/nginx/seahub.error.log;\n }\n The configuration of Apache is as follows:
# seahub\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n"},{"location":"changelog/changelog-for-seafile-professional-server-old/#6213-2018518","title":"6.2.13 (2018.5.18)","text":"file already exists error for the first time.per_page parameter to 10 when search file via api.repo_owner field to library search web api.ENABLE_REPO_SNAPSHOT_LABEL = True to turn the feature on)You can follow the document on minor upgrade.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#619-20170928","title":"6.1.9 (2017.09.28)","text":"Web UI Improvement:
Improvement for admins:
System changes:
ENABLE_WIKI = True in seahub_settings.py)You can follow the document on minor upgrade.
Special note for upgrading a cluster:
In version 6.0, the folder download mechanism has been updated. This requires that, in a cluster deployment, the seafile-data/httptemp folder must be on an NFS share. You can make this folder a symlink to the NFS share.
cd /data/haiwen/\nln -s /nfs-share/seafile-httptemp seafile-data/httptemp\n The httptemp folder only contains temporary files for downloading/uploading files via the web UI, so there is no reliability requirement for the NFS share. You can export it from any node in the cluster.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#6013-20170508","title":"6.0.13 (2017.05.08)","text":"Improvement for admin
Other
# -*- coding: utf-8 -*- to seahub_settings.py, so that admin can use non-ascii characters in the file.[Audit] and [AUDIT] in seafevent.confPro only features
Note: Two new options are added in version 4.4, both are in seahub_settings.py
This version contains no database table change.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#449-20160229","title":"4.4.9 (2016.02.29)","text":"LDAP improvements and fixes
New features:
Pro only:
Fixes:
Note: this version contains no database table change from v4.2. But the old search index will be deleted and regenerated.
Note when upgrading from v4.2 and using cluster, a new option COMPRESS_CACHE_BACKEND = 'locmem://' should be added to seahub_settings.py
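For clarity, the option from the note above as it would appear in seahub_settings.py:

```python
# Required when upgrading a cluster from v4.2 (value taken from the note above).
COMPRESS_CACHE_BACKEND = 'locmem://'
```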
About \"Open via Client\": The web interface will call the Seafile desktop client via the \"seafile://\" protocol to open a file with a local program. If the file is already synced, the local file will be opened. Otherwise, it is downloaded and uploaded again after modification. Requires client version 4.3.0+.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#430-20150725","title":"4.3.0 (2015.07.25)","text":"Usability improvements
Pro only features:
Others
THUMBNAIL_DEFAULT_SIZE = 24, instead of THUMBNAIL_DEFAULT_SIZE = '24'Note: because Seafile changed the way office preview works in version 4.2.2, you need to clean the old generated files using the command:
rm -rf /tmp/seafile-office-output/html/\n"},{"location":"changelog/changelog-for-seafile-professional-server-old/#424-20150708","title":"4.2.4 (2015.07.08)","text":"Previously, the whole file was converted to HTML5 before being returned to the client. By converting an office file to HTML5 page by page, the first page is displayed faster. By displaying each page in a separate frame, the quality of some files is improved too.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#421-20150630","title":"4.2.1 (2015.06.30)","text":"Improved account management
Important
New features
Others
Pro only updates
Usability
Security Improvement
Platform
Pro only updates
Updates in community edition too
Important
Small
Pro edition only:
Syncing
Platform
Web
Web
Platform
Web
Platform
Misc
WebDAV
pro.py search --clear commandPlatform
Web
Web for Admin
Platform
Web
Web for Admin
API
Web
API
Platform
You can check Seafile release table to find the lifetime of each release and current supported OS: https://cloud.seatable.io/dtable/external-links/a85d4221e41344c19566/?tid=0000&vid=0000
"},{"location":"changelog/changelog-for-seafile-professional-server/#130","title":"13.0","text":"Upgrade
Please check our document for how to upgrade to 13.0
"},{"location":"changelog/changelog-for-seafile-professional-server/#13012-beta-2025-10-24","title":"13.0.12 beta (2025-10-24)","text":".env, it is recommended to use environment variables to config database and memcacheUpgrade
Please check our document for how to upgrade to 12.0
"},{"location":"changelog/changelog-for-seafile-professional-server/#12018-2025-11-17","title":"12.0.18 (2025-11-17)","text":".env file.ccnet.conf is removed. Some of its configuration items are moved from .env file, others are read from items in seafile.conf with same name.Upgrade
Please check our document for how to upgrade to 11.0
"},{"location":"changelog/changelog-for-seafile-professional-server/#11019-2025-03-21","title":"11.0.19 (2025-03-21)","text":"Seafile
SDoc editor 0.8
Seafile
SDoc editor 0.7
SDoc editor 0.6
Major changes
UI Improvements
Pro edition only changes
Other changes
Upgrade
Please check our document for how to upgrade to 10.0.
Note
If you upgrade to version 10.0.18+ from 10.0.16 or below, you need to upgrade sqlalchemy to version 1.4.44+ if you use a binary-based installation. Otherwise the \"activities\" page will not work.
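For a binary-based installation, the upgrade can be sketched with pip (assuming pip3 manages the Python environment that Seafile uses):

```
# Upgrade SQLAlchemy so the "activities" page works on 10.0.18+
pip3 install --upgrade "sqlalchemy>=1.4.44"
```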
"},{"location":"changelog/changelog-for-seafile-professional-server/#10018-2024-11-01","title":"10.0.18 (2024-11-01)","text":"This release is for Docker image only
Note: after upgrading to this version, you need to upgrade the Python libraries on your server: \"pillow==10.2.* captcha==0.5.* django_simple_captcha==0.5.20\"
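The note above can be carried out with pip, for example (assuming pip3 manages the server's Python environment):

```
pip3 install "pillow==10.2.*" "captcha==0.5.*" "django_simple_captcha==0.5.20"
```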
"},{"location":"changelog/changelog-for-seafile-professional-server/#10012-2024-01-16","title":"10.0.12 (2024-01-16)","text":"Upgrade
Please check our document for how to upgrade to 9.0.
"},{"location":"changelog/changelog-for-seafile-professional-server/#9016-2023-03-22","title":"9.0.16 (2023-03-22)","text":"Note: the included lxml library is removed for compatibility reasons. The library is used in the published libraries feature and the WebDAV feature. You need to install lxml manually after upgrading to 9.0.7. Use the command pip3 install lxml to install it.
The new file-server written in golang serves HTTP requests to upload/download/sync files. It provides three advantages:
You can turn the golang file-server on by adding the following configuration to seafile.conf:
[fileserver]\nuse_go_fileserver = true\n"},{"location":"changelog/changelog-for-seafile-professional-server/#901","title":"9.0.1","text":"Deprecated
"},{"location":"changelog/changelog-for-seafile-professional-server/#900","title":"9.0.0","text":"Deprecated
"},{"location":"changelog/changelog-for-seafile-professional-server/#80","title":"8.0","text":"Upgrade
Please check our document for how to upgrade to 8.0.
"},{"location":"changelog/changelog-for-seafile-professional-server/#8017-20220110","title":"8.0.17 (2022/01/10)","text":"Potential breaking change in Seafile Pro 8.0.3: You can set the maximum number of files contained in a library that can be synced by the Seafile client. The default is 100000. When you download a repo, the Seafile client will request the fs id list, and you can control the timeout period of this request through the fs_id_list_request_timeout configuration, which defaults to 5 minutes. These two options were added to prevent long fs-id-list requests from overloading the server. If you have large libraries on the server, this can cause an \"internal server error\" to be returned to the client. You have to set a large enough limit for both options.
[fileserver]\nmax_sync_file_count = 100000\nfs_id_list_request_timeout = 300\n"},{"location":"changelog/changelog-for-seafile-professional-server/#802-20210421","title":"8.0.2 (2021/04/21)","text":"._ cloud file browser
others
This version has a few bugs. We will fix them soon.
"},{"location":"changelog/client-changelog/#601-20161207","title":"6.0.1 (2016/12/07)","text":"Note: the Seafile client now supports HiDPI under Windows; you should remove the QT_DEVICE_PIXEL_RATIO setting if you had set one previously.
In the old version, you would sometimes see strange directories such as \"Documents~1\" synced to the server; this is because the old version did not handle long paths correctly.
"},{"location":"changelog/client-changelog/#406-20150109","title":"4.0.6 (2015/01/09)","text":"In the previous version, when you open an office file in Windows, it is locked by the operating system. If another person modifies this file on another computer, syncing will stop until you close the locked file. In this new version, the syncing process will continue. The locked file will not be synced to the local computer, but other files will not be affected.
"},{"location":"changelog/client-changelog/#403-20141203","title":"4.0.3 (2014/12/03)","text":"You have to update the clients on all PCs. If one PC does not use v3.1.11, then when the \"delete folder\" information is synced to this PC, it will fail to delete the folder completely, and the folder will be synced back to the other PCs. So the other PCs will see the folder reappear.
"},{"location":"changelog/client-changelog/#3110-20141113","title":"3.1.10 (2014/11/13)","text":"Note: This version contains a bug that prevents you from logging in to your private servers.
1.8.1
1.8.0
1.7.3
1.7.2
1.7.1
1.7.0
1.6.2
1.6.1
1.6.0
1.5.3
1.5.2
1.5.1
1.5.0
S: because a few programs will automatically try to create files in S:Feature changes
PostgreSQL support is dropped, as we have rewritten the database access code to remove a copyright issue.
Upgrade
Please check our document for how to upgrade to 7.1.
"},{"location":"changelog/server-changelog-old/#715-20200922","title":"7.1.5 (2020/09/22)","text":"Feature changes
In version 6.3, users can create public or private Wikis. In version 7.0, private Wikis are replaced by the column mode view. Every library has a column mode view, so users don't need to explicitly create private Wikis.
Public Wikis are now renamed to published libraries.
Upgrade
Just follow our document on major version upgrade. No special steps are needed.
"},{"location":"changelog/server-changelog-old/#705-20190923","title":"7.0.5 (2019/09/23)","text":"In version 6.3, Django is upgraded to version 1.11. Django 1.8, which is used in version 6.2, was deprecated in April 2018.
With this upgrade, the fast-cgi mode is no longer supported. You need to configure Seafile behind Nginx/Apache in WSGI mode.
The way to run Seahub on another port has also changed. You need to modify the configuration file conf/gunicorn.conf instead of running ./seahub.sh start <another-port>.
Version 6.3 also changed the database table for file comments. If you have used this feature, you need to migrate old file comments using the following command after upgrading to 6.3:
./seahub.sh python-env seahub/manage.py migrate_file_comment\n Note: this command should be run while the Seafile server is running.
"},{"location":"changelog/server-changelog-old/#634-20180915","title":"6.3.4 (2018/09/15)","text":"From 6.2, it is recommended to use WSGI mode for communication between Seahub and Nginx/Apache. Two steps are needed if you'd like to switch to WSGI mode:
./seahub.sh start instead of ./seahub.sh start-fastcgi. The configuration of Nginx is as follows:
location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n # used for view/edit office file via Office Online Server\n client_max_body_size 0;\n\n access_log /var/log/nginx/seahub.access.log;\n error_log /var/log/nginx/seahub.error.log;\n }\n The configuration of Apache is as follows:
# seahub\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n"},{"location":"changelog/server-changelog-old/#625-20180123","title":"6.2.5 (2018/01/23)","text":"ENABLE_REPO_SNAPSHOT_LABEL = True to turn the feature on)If you upgrade from 6.0 and you'd like to use the video thumbnail feature, you need to install the ffmpeg package:
# for ubuntu 16.04\napt-get install ffmpeg\npip install pillow moviepy\n\n# for Centos 7\nyum -y install epel-release\nrpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro\nyum -y install ffmpeg ffmpeg-devel\npip install pillow moviepy\n"},{"location":"changelog/server-changelog-old/#612-20170815","title":"6.1.2 (2017.08.15)","text":"Web UI Improvement:
Improvement for admins:
System changes:
Note: If you ever used 6.0.0, 6.0.1, or 6.0.2 with SQLite as the database and encountered a problem with desktop/mobile client login, follow https://github.com/haiwen/seafile/pull/1738 to fix the problem.
"},{"location":"changelog/server-changelog-old/#609-20170330","title":"6.0.9 (2017.03.30)","text":"Improvement for admin
# -*- coding: utf-8 -*- to seahub_settings.py, so that admin can use non-ascii characters in the file.Other
Warning:
Note: when upgrading from 5.1.3 or a lower version to 5.1.4+, you need to install python-urllib3 (or python2-urllib3 for Arch Linux) manually:
# for Ubuntu\nsudo apt-get install python-urllib3\n# for CentOS\nsudo yum install python-urllib3\n"},{"location":"changelog/server-changelog-old/#514-20160723","title":"5.1.4 (2016.07.23)","text":"Note: downloading multiple files at once will be added in the next release.
Note: in this version, the group discussion is not re-implemented yet. It will be available when the stable version is released.
Note when upgrading to 5.0 from 4.4
You can follow the document on major upgrade (http://manual.seafile.com/deploy/upgrade.html) (URL might be deprecated)
In Seafile 5.0, we have moved all config files to folder conf, including:
If you want to downgrade from v5.0 to v4.4, you should manually copy these files back to the original place, then run minor_upgrade.sh to upgrade symbolic links back to version 4.4.
The 5.0 server is compatible with v4.4 and v4.3 desktop clients.
Common issues (solved) when upgrading to v5.0:
Improve seaf-fsck
Sharing link
[[ Pagename]].UI changes:
Config changes:
confTrash:
Admin:
Security:
New features:
Fixes:
Usability Improvement
Others
THUMBNAIL_DEFAULT_SIZE = 24, instead of THUMBNAIL_DEFAULT_SIZE = '24'Note when upgrading to 4.2 from 4.1:
If you deploy Seafile in a non-root domain, you need to add the following extra settings in seahub_settings.py:
COMPRESS_URL = MEDIA_URL\nSTATIC_URL = MEDIA_URL + '/assets/'\n"},{"location":"changelog/server-changelog-old/#423-20150618","title":"4.2.3 (2015.06.18)","text":"Usability
Security Improvement
Platform
Important
Small
Important
Small improvements
Syncing
Platform
Web
Web
Platform
Web
Platform
Platform
Web
WebDAV
<a>, <table>, <img> and a few other html elements in markdown to avoid XSS attack. Platform
Web
Web for Admin
Platform
Web
Web for Admin
API
Web
API
Platform
Web
Daemon
Web
Daemon
Web
For Admin
API
Seafile Web
Seafile Daemon
API
You can check Seafile release table to find the lifetime of each release and current supported OS: https://cloud.seatable.io/dtable/external-links/a85d4221e41344c19566/?tid=0000&vid=0000
"},{"location":"changelog/server-changelog/#130","title":"13.0","text":"Upgrade
Please check our document for how to upgrade to 13.0
"},{"location":"changelog/server-changelog/#13012-2025-10-24","title":"13.0.12 (2025-10-24)","text":"Deploying Seafile with a binary package is no longer supported for the community edition. We recommend migrating your existing Seafile deployment to a Docker-based one.
.env, it is recommended to use environment variables to config database and memcacheUpgrade
Please check our document for how to upgrade to 12.0
"},{"location":"changelog/server-changelog/#12014-2025-05-29","title":"12.0.14 (2025-05-29)","text":".env file.ccnet.conf is removed. Some of its configuration items are moved from .env file, others are read from items in seafile.conf with same name.Upgrade
Please check our document for how to upgrade to 11.0
"},{"location":"changelog/server-changelog/#11012-2024-08-14","title":"11.0.12 (2024-08-14)","text":"Seafile
Seafile
SDoc editor 0.8
Seafile
SDoc editor 0.7
Seafile
SDoc editor 0.6
Seafile
Seafile
SDoc editor 0.5
Seafile
SDoc editor 0.4
Seafile
SDoc editor 0.3
Seafile
SDoc editor 0.2
Upgrade
Please check our document for how to upgrade to 10.0.
"},{"location":"changelog/server-changelog/#1001-2023-04-11","title":"10.0.1 (2023-04-11)","text":"/accounts/login redirect by ?next= parameterNote: the included lxml library is removed for compatibility reasons. The library is used in the published libraries feature and the WebDAV feature. You need to install lxml manually after upgrading to 9.0.7. Use the command pip3 install lxml to install it.
The new file-server written in golang serves HTTP requests to upload/download/sync files. It provides three advantages:
You can turn the golang file-server on by adding the following configuration to seafile.conf:
[fileserver]\nuse_go_fileserver = true\n"},{"location":"changelog/server-changelog/#80","title":"8.0","text":"Please check our document for how to upgrade to 8.0.
"},{"location":"changelog/server-changelog/#808-20211206","title":"8.0.8 (2021/12/06)","text":"The config files used in Seafile include:
You can also modify most of the config items via the web interface. The config items are saved in a database table (seahub-db/constance_config). They have higher priority than the items in config files.
"},{"location":"config/#the-design-of-configure-options","title":"The design of configure options","text":"There are now three places where you can configure the Seafile server:
The web interface has the highest priority. It contains a subset of end-user oriented settings. In practice, you can disable settings via the web interface for simplicity.
Environment variables contain system-level settings that are needed when initializing or running the Seafile server. Environment variables also have three categories:
The variables in the first category can be deleted after initialization. In the future, we will make more components read their config from environment variables, so that the third category is no longer needed.
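As a hypothetical illustration of environment-variable configuration (the variable names below are illustrative only; check the .env template shipped with your release for the real names):

```
# .env -- illustrative names, not authoritative
SEAFILE_MYSQL_DB_HOST=db
SEAFILE_MYSQL_DB_USER=seafile
SEAFILE_MYSQL_DB_PASSWORD=change-me
CACHE_PROVIDER=memcached
```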
"},{"location":"config/admin_roles_permissions/","title":"Roles and Permissions Support","text":"You can add/edit roles and permissions for administrators. Seafile has four built-in admin roles:
default_admin, has all permissions.
system_admin, can only view system info and config system.
daily_admin, can only view system info, view statistic, manage library/user/group, view user log.
audit_admin, can only view system info and admin log.
All administrators will have the default_admin role with all permissions by default. If you set an administrator to some other admin role, the administrator will only have the permissions you set to True.
Seafile supports eight permissions for now. Their configuration is very similar to common user roles; you can customize them by adding the following settings to seahub_settings.py.
ENABLED_ADMIN_ROLE_PERMISSIONS = {\n 'system_admin': {\n 'can_view_system_info': True,\n 'can_config_system': True,\n },\n 'daily_admin': {\n 'can_view_system_info': True,\n 'can_view_statistic': True,\n 'can_manage_library': True,\n 'can_manage_user': True,\n 'can_manage_group': True,\n 'can_view_user_log': True,\n },\n 'audit_admin': {\n 'can_view_system_info': True,\n 'can_view_admin_log': True,\n },\n 'custom_admin': {\n 'can_view_system_info': True,\n 'can_config_system': True,\n 'can_view_statistic': True,\n 'can_manage_library': True,\n 'can_manage_user': True,\n 'can_manage_group': True,\n 'can_view_user_log': True,\n 'can_view_admin_log': True,\n },\n}\n"},{"location":"config/auth_switch/","title":"Switch authentication type","text":"Seafile Server supports the following external authentication types:
Since version 11.0, switching between the types is possible, but any switch requires modifications to Seafile's databases.
Note
Before manually manipulating your database, make a database backup, so you can restore your system if anything goes wrong!
See more about making a database backup.
"},{"location":"config/auth_switch/#migrating-from-local-user-database-to-external-authentication","title":"Migrating from local user database to external authentication","text":"As an organisation grows and its IT infrastructure matures, the migration from local authentication to external authentication like LDAP, SAML, or OAuth is a common requirement. Fortunately, the switch is comparatively simple.
"},{"location":"config/auth_switch/#general-procedure","title":"General procedure","text":"Configure and test the desired external authentication. Note the name of the provider you use in the config file. The user to be migrated should already be able to log in with this new authentication type, but they will be created as a new user with a new unique identifier, so they will not have access to their existing libraries. Note the uid from the social_auth_usersocialauth table. Delete this new, still empty user again.
Determine the ID of the user to be migrated in ccnet_db.EmailUser. For users created before version 11, the ID should be the user's email; for users created after version 11, the ID should be a string like xxx@auth.local.
Replace the password hash with an exclamation mark.
Create a new entry in social_auth_usersocialauth with the xxx@auth.local, your provider and the uid.
Logging in with the password stored in the local database is no longer possible. After logging in via external authentication, the user has access to all their previous libraries.
"},{"location":"config/auth_switch/#example","title":"Example","text":"This example shows how to migrate the user with the username 12ae56789f1e4c8d8e1c31415867317c@auth.local from local database authentication to OAuth. The OAuth authentication is configured in seahub_settings.py with the provider name authentik-oauth. The uid of the user inside the Identity Provider is HR12345.
This is what the database looks like before the migration; the following commands must be executed:
mysql> select email,left(passwd,25) from EmailUser where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+------------------------------+\n| email | left(passwd,25) |\n+---------------------------------------------+------------------------------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | PBKDF2SHA256$10000$4cdda6... |\n+---------------------------------------------+------------------------------+\n\nmysql> update EmailUser set passwd = '!' where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n\nmysql> insert into `social_auth_usersocialauth` (`username`, `provider`, `uid`, `extra_data`) values ('12ae56789f1e4c8d8e1c31415867317c@auth.local', 'authentik-oauth', 'HR12345', '');\n Note
The extra_data field stores the user's information returned from the provider. For most providers, the extra_data field is usually an empty string. Since version 11.0.3-Pro, the default value of the extra_data field is NULL.
Afterwards the databases should look like this:
mysql> select email,passwd from EmailUser where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+------- +\n| email | passwd |\n+---------------------------------------------+--------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | ! |\n+---------------------------------------------+--------+\n\nmysql> select username,provider,uid from social_auth_usersocialauth where username = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+-----------------+---------+\n| username | provider | uid |\n+---------------------------------------------+-----------------+---------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | authentik-oauth | HR12345 |\n+---------------------------------------------+-----------------+---------+\n"},{"location":"config/auth_switch/#migrating-from-one-external-authentication-to-another","title":"Migrating from one external authentication to another","text":"First configure the two external authentications and test them with a dummy user. Then, to migrate all the existing users you only need to make changes to the social_auth_usersocialauth table. No entries need to be deleted or created. You only need to modify the existing ones. The xxx@auth.local remains the same, you only need to replace the provider and the uid.
First, delete the entry in the social_auth_usersocialauth table that belongs to the particular user.
Then you can reset the user's password, e.g. via the web interface. The user will be assigned a local password, and from then on authentication will be done against Seafile's local database.
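The removal step above can be sketched in SQL; a minimal illustration assuming the table layout shown in the earlier example (replace the placeholder with the user's real xxx@auth.local value):

```
-- Remove the external-authentication binding so the account
-- falls back to local database authentication.
DELETE FROM social_auth_usersocialauth
 WHERE username = 'xxx@auth.local';
```

Afterwards, reset the user's password as described above.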
More details about this option will follow soon.
"},{"location":"config/auto_login_seadrive/","title":"Auto Login to SeaDrive on Windows","text":"Kerberos is a widely used single sign-on (SSO) protocol. Auto login support uses a Kerberos service. For server configuration, please read the remote user authentication documentation. You have to configure Apache to authenticate with Kerberos. This is out of the scope of this documentation. You can, for example, refer to this webpage.
"},{"location":"config/auto_login_seadrive/#technical-details","title":"Technical Details","text":"The client machine has to join the AD domain. In a Windows domain, the Kerberos Key Distribution Center (KDC) is implemented on the domain controller. Since the client machine has been authenticated by the KDC when the Windows user logs in, a Kerberos ticket will be generated for the current user without the need for another login in the browser.
When a program using the WinHttp API tries to connect to a server, it can perform a login automatically through Integrated Windows Authentication. Internet Explorer and SeaDrive both use this mechanism.
The details of Integrated Windows Authentication are described below:
In short:
Internet Options has to be configured as follows:
Open \"Internet Options\", select \"Security\" tab, select \"Local Intranet\" zone.
Note
The above configuration requires a reboot to take effect.
Next, we shall test the auto login function in Internet Explorer: visit the website and click the \"Single Sign-On\" link. It should log you in directly; otherwise, auto login is not working.
Note
The address in the test must be the same as the address specified in the keytab file. Otherwise, the client machine can't get a valid ticket from Kerberos.
"},{"location":"config/auto_login_seadrive/#auto-login-on-seadrive","title":"Auto Login on SeaDrive","text":"SeaDrive will use the Kerberos login configuration from the Windows Registry under HKEY_CURRENT_USER/SOFTWARE/SeaDrive.
Key : PreconfigureServerAddr\nType : REG_SZ\nValue : <the url of seafile server>\n\nKey : PreconfigureUseKerberosLogin\nType : REG_SZ\nValue : <0|1> // 0 for normal login, 1 for SSO login\n The system wide configuration path is located at HKEY_LOCAL_MACHINE/SOFTWARE/Wow6432Node/SeaDrive.
SeaDrive can be installed silently with the following command (requires admin privileges):
msiexec /i seadrive.msi /quiet /qn /log install.log\n"},{"location":"config/auto_login_seadrive/#auto-login-via-group-policy","title":"Auto Login via Group Policy","text":"The configuration of Internet Options : https://docs.microsoft.com/en-us/troubleshoot/browsers/how-to-configure-group-policy-preference-settings
The configuration of Windows Registry : https://thesolving.com/server-room/how-to-deploy-a-registry-key-via-group-policy/
"},{"location":"config/config_seafile_with_ADFS/","title":"config seafile with ADFS","text":""},{"location":"config/config_seafile_with_ADFS/#requirements","title":"Requirements","text":"To use ADFS to log in to your Seafile, you need the following components:
A Windows Server with ADFS installed. For configuring and installing ADFS you can see this article.
A valid SSL certificate for ADFS server, and here we use adfs-server.adfs.com as the domain name example.
A valid SSL certificate for Seafile server, and here we use demo.seafile.com as the domain name example.
You can generate them by:
``` openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout sp.key -out sp.crt
These x.509 certs are used to sign and encrypt elements like NameID and Metadata for SAML. \n\n Then copy these two files to **<seafile-install-path>/seahub-data/certs**. (If the certs folder does not exist, create it.)\n\n2. x.509 cert from IdP (Identity Provider)\n\n 1. Log into the ADFS server and open the ADFS management.\n\n 1. Double click **Service** and choose **Certificates**.\n\n 1. Export the **Token-Signing** certificate:\n\n 1. Right-click the certificate and select **View Certificate**.\n 1. Select the **Details** tab.\n 1. Click **Copy to File** (select **DER encoded binary X.509**).\n\n 1. Convert this certificate to PEM format, rename it to **idp.crt**\n\n 1. Then copy it to **<seafile-install-path>/seahub-data/certs**.\n\n### Prepare IdP Metadata File\n\n1. Open https://adfs-server.adfs.com/federationmetadata/2007-06/federationmetadata.xml\n\n1. Save this xml file, rename it to **idp_federation_metadata.xml**\n\n1. Copy it to **<seafile-install-path>/seahub-data/certs**.\n\n### Install Requirements on Seafile Server\n\n- For Ubuntu 16.04\n sudo apt install xmlsec1 sudo pip install cryptography djangosaml2==0.15.0 ### Config Seafile\n\nAdd the following lines to **seahub_settings.py**\n from os import path import saml2 import saml2.saml"},{"location":"config/config_seafile_with_ADFS/#update-following-lines-according-to-your-situation","title":"update following lines according to your situation","text":"CERTS_DIR = '/seahub-data/certs' SP_SERVICE_URL = 'https://demo.seafile.com' XMLSEC_BINARY = '/usr/local/bin/xmlsec1' ATTRIBUTE_MAP_DIR = '/seafile-server-latest/seahub-extra/seahub_extra/adfs_auth/attribute-maps' SAML_ATTRIBUTE_MAPPING = { 'DisplayName': ('display_name', ), 'ContactEmail': ('contact_email', ), 'Deparment': ('department', ), 'Telephone': ('telephone', ), }"},{"location":"config/config_seafile_with_ADFS/#update-the-idp-section-in-sampl_config-according-to-your-situation-and-leave-others-as-default","title":"update the 'idp' section in 
SAML_CONFIG according to your situation, and leave others as default","text":"
ENABLE_ADFS_LOGIN = True EXTRA_AUTHENTICATION_BACKENDS = ( 'seahub_extra.adfs_auth.backends.Saml2Backend', ) SAML_USE_NAME_ID_AS_USERNAME = True LOGIN_REDIRECT_URL = '/saml2/complete/' SAML_CONFIG = { # full path to the xmlsec1 binary program 'xmlsec_binary': XMLSEC_BINARY,
'allow_unknown_attributes': True,\n\n# your entity id, usually your subdomain plus the url to the metadata view\n'entityid': SP_SERVICE_URL + '/saml2/metadata/',\n\n# directory with attribute mapping\n'attribute_map_dir': ATTRIBUTE_MAP_DIR,\n\n# this block states what services we provide\n'service': {\n # we are just a lonely SP\n 'sp' : {\n \"allow_unsolicited\": True,\n 'name': 'Federated Seafile Service',\n 'name_id_format': saml2.saml.NAMEID_FORMAT_EMAILADDRESS,\n 'endpoints': {\n # url and binding to the assetion consumer service view\n # do not change the binding or service name\n 'assertion_consumer_service': [\n (SP_SERVICE_URL + '/saml2/acs/',\n saml2.BINDING_HTTP_POST),\n ],\n # url and binding to the single logout service view\n # do not change the binding or service name\n 'single_logout_service': [\n (SP_SERVICE_URL + '/saml2/ls/',\n saml2.BINDING_HTTP_REDIRECT),\n (SP_SERVICE_URL + '/saml2/ls/post',\n saml2.BINDING_HTTP_POST),\n ],\n },\n\n # attributes that this project need to identify a user\n 'required_attributes': [\"uid\"],\n\n # attributes that may be useful to have but not required\n 'optional_attributes': ['eduPersonAffiliation', ],\n\n # in this section the list of IdPs we talk to are defined\n 'idp': {\n # we do not need a WAYF service since there is\n # only an IdP defined here. 
This IdP should be\n # present in our metadata\n\n # the keys of this dictionary are entity ids\n 'https://adfs-server.adfs.com/federationmetadata/2007-06/federationmetadata.xml': {\n 'single_sign_on_service': {\n saml2.BINDING_HTTP_REDIRECT: 'https://adfs-server.adfs.com/adfs/ls/idpinitiatedsignon.aspx',\n },\n 'single_logout_service': {\n saml2.BINDING_HTTP_REDIRECT: 'https://adfs-server.adfs.com/adfs/ls/?wa=wsignout1.0',\n },\n },\n },\n },\n},\n\n# where the remote metadata is stored\n'metadata': {\n 'local': [path.join(CERTS_DIR, 'idp_federation_metadata.xml')],\n},\n\n# set to 1 to output debugging information\n'debug': 1,\n\n# Signing\n'key_file': '', \n'cert_file': path.join(CERTS_DIR, 'idp.crt'), # from IdP\n\n# Encryption\n'encryption_keypairs': [{\n 'key_file': path.join(CERTS_DIR, 'sp.key'), # private part\n 'cert_file': path.join(CERTS_DIR, 'sp.crt'), # public part\n}],\n\n'valid_for': 24, # how long is our metadata valid\n }
```
"},{"location":"config/config_seafile_with_ADFS/#config-adfs-server","title":"Config ADFS Server","text":"Relying Party Trust is the connection between Seafile and ADFS.
Log into the ADFS server and open the ADFS management.
Double click Trust Relationships, then right click Relying Party Trusts, select Add Relying Party Trust\u2026.
Select Import data about the relying party published online or on a local network, and input https://demo.seafile.com/saml2/metadata/ in the Federation metadata address.
Then click Next until Finish.
Add Relying Party Claim Rules
Relying Party Claim Rules are used for attribute communication between Seafile and users in the Windows Domain.
Important: Users in the Windows domain must have the E-mail value set.
Right-click on the relying party trust and select Edit Claim Rules...
On the Issuance Transform Rules tab, select Add Rule...
Select Send LDAP Attribute as Claims as the claim rule template to use.
Give the claim a name such as LDAP Attributes.
Set the Attribute Store to Active Directory, the LDAP Attribute to E-Mail-Addresses, and the Outgoing Claim Type to E-mail Address.
Select Finish.
Click Add Rule... again.
Select Transform an Incoming Claim.
Give it a name such as Email to Name ID.
Incoming claim type should be E-mail Address (it must match the Outgoing Claim Type in rule #1).
The Outgoing claim type is Name ID (this is required by the Seafile setting 'name_id_format': saml2.saml.NAMEID_FORMAT_EMAILADDRESS).
The Outgoing name ID format is Email.
Pass through all claim values and click Finish.
https://support.zendesk.com/hc/en-us/articles/203663886-Setting-up-single-sign-on-using-Active-Directory-with-ADFS-and-SAML-Plus-and-Enterprise-
http://wiki.servicenow.com/?title=Configuring_ADFS_2.0_to_Communicate_with_SAML_2.0#gsc.tab=0
https://github.com/rohe/pysaml2/blob/master/src/saml2/saml.py
Note: The subject line may vary between releases; this is based on Release 2.0.1. Restart Seahub so that your changes take effect.
"},{"location":"config/customize_email_notifications/#user-reset-hisher-password","title":"User reset his/her password","text":"Subject
seahub/seahub/auth/forms.py line:103
Body
seahub/seahub/templates/registration/password_reset_email.html
Note: You can copy password_reset_email.html to seahub-data/custom/templates/registration/password_reset_email.html and modify the copy. This way, the customization will be preserved after upgrades.
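The copy step can be scripted; a minimal sketch, assuming the standard binary-package layout with seahub/ and seahub-data/ under the Seafile top-level directory (adjust the paths to your installation):

```shell
# Paths below assume the standard layout; adjust to your installation.
SRC=seahub/seahub/templates/registration/password_reset_email.html
DST=seahub-data/custom/templates/registration/password_reset_email.html

# Create the custom template directory if it does not exist yet.
mkdir -p "$(dirname "$DST")"

# Copy the stock template once, then edit the copy; the custom copy
# takes precedence and survives upgrades.
if [ -f "$SRC" ]; then
    cp "$SRC" "$DST"
fi
```

The same pattern applies to the other templates below (user_add_email.html, user_reset_email.html), with the corresponding directories under seahub-data/custom/templates/.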
Subject
seahub/seahub/views/sysadmin.py line:424
Body
seahub/seahub/templates/sysadmin/user_add_email.html
Note: You can copy user_add_email.html to seahub-data/custom/templates/sysadmin/user_add_email.html and modify the copy. This way, the customization will be preserved after upgrades.
Subject
seahub/seahub/views/sysadmin.py line:368
Body
seahub/seahub/templates/sysadmin/user_reset_email.html
Note: You can copy user_reset_email.html to seahub-data/custom/templates/sysadmin/user_reset_email.html and modify the copy. This way, the customization will be preserved after upgrades.
Subject
seahub/seahub/share/views.py line:668
Body
seahub/seahub/templates/shared_link_email.html
"},{"location":"config/details_about_file_search/","title":"Details about File Search","text":""},{"location":"config/details_about_file_search/#search-options","title":"Search Options","text":"The following options can be set in seafevents.conf to control the behaviors of file search. You need to restart seafile and seahub to make them take effect.
[INDEX FILES]\n## must be \"true\" to enable search\nenabled = true\n\n## The interval at which the search index is updated. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval=10m\n\n## this is for improving the search speed\nhighlight = fvh \n\n## If true, indexes the contents of office/pdf files while updating search index\n## Note: If you change this option from \"false\" to \"true\", then you need to clear the search index and update the index again.\nindex_office_pdf=false\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch with a username and password; you need to configure the username and password for the Elasticsearch server\nusername = elastic # username to connect to Elasticsearch\npassword = elastic_password # password to connect to Elasticsearch\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch via HTTPS; you need to configure HTTPS for the Elasticsearch server\nscheme = https # The default is http. If the Elasticsearch server is not configured with HTTPS, the scheme and cafile do not need to be configured\ncafile = path/to/cert.pem # The certificate path for user authentication. If the Elasticsearch server does not enable certificate authentication, it does not need to be configured\n\n## From version 11.0.5 Pro, you can customize Elasticsearch index names for distinct instances when integrating multiple Seafile servers with a single Elasticsearch server.\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\n"},{"location":"config/details_about_file_search/#enable-full-text-search-for-officepdf-files","title":"Enable full text search for Office/PDF files","text":"Full text search is not enabled by default to save system resources. If you want to enable it, follow the instructions below.
"},{"location":"config/details_about_file_search/#modify-seafeventsconf","title":"Modify seafevents.conf","text":"Deploy in DockerDeploy from binary packages cd /opt/seafile-data/seafile/conf\nnano seafevents.conf\n cd /opt/seafile/conf\nnano seafevents.conf\n Set index_office_pdf to true:
...\n[INDEX FILES]\n...\nindex_office_pdf=true\n...\n"},{"location":"config/details_about_file_search/#restart-seafile-server","title":"Restart Seafile server","text":"Deploy in DockerDeploy from binary packages docker exec -it seafile bash\ncd /opt/seafile/seafile-server-latest\n./seafile.sh restart\n\n# delete the existing search index and recreate it\n./pro/pro.py search --clear\n./pro/pro.py search --update\n cd /opt/seafile/seafile-server-latest\n./seafile.sh restart\n\n# delete the existing search index and recreate it\n./pro/pro.py search --clear\n./pro/pro.py search --update\n"},{"location":"config/details_about_file_search/#common-problems","title":"Common problems","text":""},{"location":"config/details_about_file_search/#how-to-rebuild-the-index-if-something-went-wrong","title":"How to rebuild the index if something went wrong","text":"You can rebuild search index by running:
Deploy in DockerDeploy from binary packagesdocker exec -it seafile bash\ncd /opt/seafile/seafile-server-latest\n./pro/pro.py search --clear\n./pro/pro.py search --update\n cd /opt/seafile/seafile-server-latest\n./pro/pro.py search --clear\n./pro/pro.py search --update\n Tip
If this does not work, you can try the following steps:
rm -rf pro-data/search
./pro/pro.py search --update
Create an Elasticsearch service on AWS according to the documentation.
Configure the seafevents.conf:
[INDEX FILES]\nenabled = true\ninterval = 10m\nindex_office_pdf=true\nes_host = your domain endpoint(for example, https://search-my-domain.us-east-1.es.amazonaws.com)\nes_port = 443\nscheme = https\nusername = master user\npassword = password\nhighlight = fvh\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\n"},{"location":"config/details_about_file_search/#i-get-no-result-when-i-search-a-keyword","title":"I get no result when I search a keyword","text":"The search index is updated every 10 minutes by default. So before the first index update is performed, you get nothing no matter what you search.
To be able to search immediately, update the search index manually:
docker exec -it seafile bash\ncd /opt/seafile/seafile-server-latest\n./pro/pro.py search --update\n cd /opt/seafile/seafile-server-latest\n./pro/pro.py search --update\n"},{"location":"config/details_about_file_search/#encrypted-files-cannot-be-searched","title":"Encrypted files cannot be searched","text":"This is because the server cannot index encrypted files, since they are encrypted.
"},{"location":"config/env/","title":".env","text":"The .env file will be used to specify the components used by the Seafile-docker instance and the environment variables required by each component.
COMPOSE_FILE: .yml files for the components of Seafile-docker; the .yml files must be separated by the symbol defined in COMPOSE_PATH_SEPARATOR. The core components are defined in seafile-server.yml and caddy.yml, which must be included in this variable.COMPOSE_PATH_SEPARATOR: The symbol used to separate the .yml files in COMPOSE_FILE, default is ','.SEAFILE_IMAGE: The image of Seafile-server, default is seafileltd/seafile-pro-mc:12.0-latest.SEAFILE_DB_IMAGE: Database server image, default is mariadb:10.11.SEAFILE_MEMCACHED_IMAGE: Memcached server image, default is memcached:1.6.29SEAFILE_ELASTICSEARCH_IMAGE: Only valid in pro edition. The Elasticsearch image, default is elasticsearch:8.15.0.SEAFILE_CADDY_IMAGE: Caddy server image, default is lucaslorentz/caddy-docker-proxy:2.9-alpine.SEADOC_IMAGE: Only valid after integrating SeaDoc. SeaDoc server image, default is seafileltd/sdoc-server:2.0-latest.NON_ROOT: Run the Seafile container without the root user, default is falseSEAFILE_VOLUME: The volume directory of Seafile data, default is /opt/seafile-data.SEAFILE_MYSQL_VOLUME: The volume directory of MySQL data, default is /opt/seafile-mysql/db.SEAFILE_CADDY_VOLUME: The volume directory of Caddy data, used to store certificates obtained from Let's Encrypt, default is /opt/seafile-caddy.SEAFILE_ELASTICSEARCH_VOLUME: Only valid in pro edition. The volume directory of Elasticsearch data, default is /opt/seafile-elasticsearch/data.SEADOC_VOLUME: Only valid after integrating SeaDoc. The volume directory of SeaDoc server data, default is /opt/seadoc-data.SEAFILE_MYSQL_DB_HOST: The host address of MySQL, default is the pre-defined service name db in the Seafile-docker instance.SEAFILE_MYSQL_DB_PORT: The port of MySQL, default is 3306.INIT_SEAFILE_MYSQL_ROOT_PASSWORD: (Only required on first deployment) The root password of MySQL. 
SEAFILE_MYSQL_DB_USER: The MySQL user (the database user can be found in conf/seafile.conf).SEAFILE_MYSQL_DB_PASSWORD: The password of the MySQL user seafile.SEAFILE_MYSQL_DB_SEAFILE_DB_NAME: The name of the Seafile database, default is seafile_db.SEAFILE_MYSQL_DB_CCNET_DB_NAME: The name of the ccnet database, default is ccnet_db.SEAFILE_MYSQL_DB_SEAHUB_DB_NAME: The name of the seahub database, default is seahub_db.CACHE_PROVIDER: The type of cache server used by Seafile. The available options are redis and memcached. Since Seafile 13, it is recommended to use redis as the cache service to support new features, and memcached is no longer integrated into Seafile Docker by default. Default is redis. The following configurations are only valid when CACHE_PROVIDER=redis:
REDIS_HOST: Redis server host, default is redisREDIS_PORT: Redis server port, default is 6379REDIS_PASSWORD: Redis server password. The following configurations are only valid when CACHE_PROVIDER=memcached:
MEMCACHED_HOST: Memcached server host, default is memcachedMEMCACHED_PORT: Memcached server port, default is 11211JWT_PRIVATE_KEY: A random string with a length of no less than 32 characters; generate one with, for example, pwgen -s 40 1SEAFILE_SERVER_HOSTNAME: Seafile server hostname or domainSEAFILE_SERVER_PROTOCOL: Seafile server protocol (http or https)TIME_ZONE: Time zone (default UTC)INIT_SEAFILE_ADMIN_EMAIL: Admin usernameINIT_SEAFILE_ADMIN_PASSWORD: Admin passwordENABLE_SEADOC: Enable the SeaDoc server or not, default is false.SEADOC_SERVER_URL: Only valid when ENABLE_SEADOC=true. External URL of the SeaDoc server (e.g., https://seafile.example.com/sdoc-server).SEAF_SERVER_STORAGE_TYPE: The storage type for Seafile data. Available options are disk (i.e., local disk), s3 and multiple (see the details of multiple storage backends)S3_COMMIT_BUCKET: S3 storage backend commit objects bucketS3_FS_BUCKET: S3 storage backend fs objects bucketS3_BLOCK_BUCKET: S3 storage backend block objects bucketS3_SS_BUCKET: S3 storage bucket for SeaSearch data (valid when the service is enabled)S3_MD_BUCKET: S3 storage bucket for metadata-server data (valid when the service is available)S3_KEY_ID: S3 storage backend key IDS3_SECRET_KEY: S3 storage backend secret keyS3_USE_V4_SIGNATURE: Use the v4 protocol of S3 if enabled, default is trueS3_AWS_REGION: Region of your buckets (AWS only), default is us-east-1.S3_HOST: Host of your buckets (required when not using AWS).S3_USE_HTTPS: Use HTTPS connections to S3 if enabled, default is trueS3_PATH_STYLE_REQUEST: This option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is in virtual host style, such as https://bucketname.s3.amazonaws.com/object. But this style relies on advanced DNS server setup. So most self-hosted storage systems only implement the path style format. 
Default false.S3_SSE_C_KEY: A 32-character string, which can be generated by openssl rand -base64 24; it can be any 32-character random string. If you enable SSE-C, you must use the V4 authentication protocol and HTTPS.Easier to configure S3 for Seafile and its components
Since Seafile Pro 13.0, to make it easier to deploy Seafile's extension components and other services in the future, a section is provided in .env to store the S3 configurations for Seafile and some extension components (such as SeaSearch and the metadata server). You can locate it by the title bar Storage configurations for S3.
S3 configurations in .env only support single S3 storage backend mode
The Seafile server only supports configuring S3 in .env for single S3 storage backend mode (i.e., when SEAF_SERVER_STORAGE_TYPE=s3). If you would like to use another storage backend (e.g., Ceph, Swift) or other settings that can only be set in seafile.conf (like multiple storage backends), please set SEAF_SERVER_STORAGE_TYPE to multiple, and set MD_STORAGE_TYPE and SS_STORAGE_TYPE according to your configurations.
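For the single S3 storage backend mode described above, the relevant .env section might look like the following sketch. Bucket names and credentials are placeholders; the variable names are the ones documented earlier on this page:

```
SEAF_SERVER_STORAGE_TYPE=s3

# Placeholder bucket names - replace with your own
S3_COMMIT_BUCKET=seafile-commit-objects
S3_FS_BUCKET=seafile-fs-objects
S3_BLOCK_BUCKET=seafile-block-objects

# Placeholder credentials
S3_KEY_ID=your-key-id
S3_SECRET_KEY=your-secret-key

S3_USE_V4_SIGNATURE=true
S3_AWS_REGION=us-east-1
S3_USE_HTTPS=true
```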
The S3 configurations are only valid when at least one STORAGE_TYPE is set to s3
There are three STORAGE_TYPE variables provided in .env: - SEAF_SERVER_STORAGE_TYPE (pro & cluster) - MD_STORAGE_TYPE (pro, see the Metadata server section for details) - SS_STORAGE_TYPE (pro, see the SeaSearch section for details)
You have to specify at least one of them as s3 for the above configuration to take effect.
"},{"location":"config/env/#seasearch","title":"SeaSearch","text":"For SeaSearch configurations in .env, please see here for details.
For metadata server configurations in .env, please see here for details.
ENABLE_NOTIFICATION_SERVER: Enable (true) or disable (false) the notification feature for Seafile. Default is false.NOTIFICATION_SERVER_URL: Used for the connection between the client (i.e., the user's browser) and the notification server. Default is https://seafile.example.com/notification. INNER_NOTIFICATION_SERVER_URL: Used for the connection between the Seafile server and the notification server. Default is http://notification-server:8083.MD_FILE_COUNT_LIMIT: The maximum number of files in a repository that the metadata feature allows. If the number of files in a repository exceeds this value, the metadata management function will not be enabled for the repository. For a repository with metadata management enabled, if the number of records in it reaches this value but there are still some files that are not recorded in the metadata server, metadata management of the unrecorded files will be skipped. Default is 100000.CLUSTER_INIT_MODE: (only valid in pro edition at first deployment). Cluster initialization mode, in which the necessary configuration files for the service to run will be generated (but the service will not be started). If the configuration files already exist, no operation will be performed. The default value is true. Once the configuration files have been generated, be sure to set this item to false.CLUSTER_INIT_ES_HOST: (only valid in pro edition at first deployment). Your cluster Elasticsearch server host.CLUSTER_INIT_ES_PORT: (only valid in pro edition at first deployment). Your cluster Elasticsearch server port. Default is 9200.CLUSTER_MODE: Seafile service node type, i.e., frontend (default) or backend.This documentation is for the Community Edition. If you're using Pro Edition, please refer to the Seafile Pro documentation
"},{"location":"config/ldap_in_ce/#how-does-ldap-user-management-work-in-seafile","title":"How does LDAP User Management work in Seafile","text":"When Seafile is integrated with LDAP, users in the system can be divided into two tiers:
Users within Seafile's internal user database. Some attributes are attached to these users, such as whether the user is a system admin and whether the account is activated.
Users in LDAP server. These are all the intended users of Seafile inside the LDAP server. Seafile doesn't manipulate these users directly. It has to import them into its internal database before setting attributes on them.
When Seafile counts the number of users in the system, it only counts the activated users in its internal database.
"},{"location":"config/ldap_in_ce/#basic-ldap-integration","title":"Basic LDAP Integration","text":"The only requirement for Seafile to use LDAP for authentication is that there must be a unique identifier for each user in the LDAP server. This id should also be user-friendly, as users will use it as their username when logging in. Below are some common options for this unique identifier:
user-login-name@domain-name, e.g. john@example.com. It's not a real email address, but it works fine as the unique identifier.The identifier is stored in the table social_auth_usersocialauth to map it to the internal user ID in Seafile. When this ID is changed in LDAP for a user, you only need to update the social_auth_usersocialauth table.
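For example, if a user's identifier changes in LDAP, the mapping row can be updated with a single SQL statement. The sketch below only builds and prints the statement; the identifiers are hypothetical, and the uid/provider column names follow the django-social-auth schema used by Seahub:

```shell
# Hypothetical example: jdoe@example.com was renamed to john.doe@example.com in LDAP.
OLD_ID='jdoe@example.com'
NEW_ID='john.doe@example.com'

# Build the UPDATE statement against the mapping table.
SQL="UPDATE social_auth_usersocialauth SET uid='${NEW_ID}' WHERE uid='${OLD_ID}' AND provider='ldap';"
echo "$SQL"

# Run it against the seahub database, e.g.:
#   mysql seahub_db -e "$SQL"
```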
Add the following options to seahub_settings.py. Examples are as follows:
ENABLE_LDAP = True\nLDAP_SERVER_URL = 'ldap://192.168.0.1' \nLDAP_BASE_DN = 'ou=test,dc=seafile,dc=ren' \nLDAP_ADMIN_DN = 'administrator@example.com' \nLDAP_ADMIN_PASSWORD = 'yourpassword' \nLDAP_PROVIDER = 'ldap' \nLDAP_LOGIN_ATTR = 'email' \nLDAP_CONTACT_EMAIL_ATTR = '' \nLDAP_USER_ROLE_ATTR = '' \nLDAP_USER_FIRST_NAME_ATTR = 'givenName' \nLDAP_USER_LAST_NAME_ATTR = 'sn' \nLDAP_USER_NAME_REVERSE = False \nLDAP_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' \n Meaning of some options:
variable descriptionLDAP_SERVER_URL The URL of LDAP server LDAP_BASE_DN The root node of users who can log in to Seafile in the LDAP server LDAP_ADMIN_DN DN of the administrator used to query the LDAP server for information. For OpenLDAP, it may be cn=admin,dc=example,dc=com LDAP_ADMIN_PASSWORD Password of LDAP_ADMIN_DN LDAP_PROVIDER Identify the source of the user, used in the table social_auth_usersocialauth, defaults to 'ldap' LDAP_LOGIN_ATTR User's attribute used to log in to Seafile. It should be a unique identifier for the user in LDAP server. Learn more about this id from the descriptions at the beginning of this section. LDAP_CONTACT_EMAIL_ATTR LDAP user's contact_email attribute LDAP_USER_ROLE_ATTR LDAP user's role attribute LDAP_USER_FIRST_NAME_ATTR Attribute for user's first name. It's \"givenName\" by default. LDAP_USER_LAST_NAME_ATTR Attribute for user's last name. It's \"sn\" by default. LDAP_USER_NAME_REVERSE In some languages, such as Chinese, the display order of the first and last name is reversed. Set this option if you need it. LDAP_FILTER Additional filter conditions. Users who meet the filter conditions can log in, otherwise they cannot log in. Tips for choosing LDAP_BASE_DN and LDAP_ADMIN_DN:
To determine the LDAP_BASE_DN, you first have to navigate your organization hierarchy on the domain controller GUI.
If you want to allow all users to use Seafile, you can use cn=users,dc=yourdomain,dc=com as LDAP_BASE_DN (with proper adjustment for your own needs).
If you want to limit users to a certain OU (Organization Unit), run the dsquery command on the domain controller to find out the DN for this OU. For example, if the OU is staff, you can run dsquery ou -name staff. More information can be found here.
AD supports the user@domain.name format for the LDAP_ADMIN_DN option. For example, you can use administrator@example.com for LDAP_ADMIN_DN. Sometimes the domain controller doesn't recognize this format. You can still use the dsquery command to find out a user's DN. For example, if the user name is 'seafileuser', run dsquery user -name seafileuser. More information here.
Multiple base DNs are useful when your company has more than one OU using Seafile. You can specify a list of base DNs in the LDAP_BASE_DN option. The DNs are separated by \";\", e.g.
LDAP_BASE_DN = 'ou=developers,dc=example,dc=com;ou=marketing,dc=example,dc=com'\n"},{"location":"config/ldap_in_ce/#additional-search-filter","title":"Additional Search Filter","text":"Search filter is very useful when you have a large organization but only a portion of people want to use Seafile. The filter can be given by setting LDAP_FILTER option. The value of this option follows standard LDAP search filter syntax (https://msdn.microsoft.com/en-us/library/aa746475(v=vs.85).aspx).
The final filter used for searching for users is (&($LOGIN_ATTR=*)($LDAP_FILTER)). $LOGIN_ATTR and $LDAP_FILTER will be replaced by your option values.
For example, add below option to seahub_settings.py:
LDAP_FILTER = 'memberOf=CN=group,CN=developers,DC=example,DC=com'\n The final search filter would be (&(mail=*)(memberOf=CN=group,CN=developers,DC=example,DC=com))
Note that the case of attribute names in the above example is significant. The memberOf attribute is only available in Active Directory.
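The substitution described above can be sketched in a couple of lines of shell; LOGIN_ATTR stands for the value of LDAP_LOGIN_ATTR (here 'mail'), and LDAP_FILTER for the option from seahub_settings.py:

```shell
# Values taken from the example above.
LOGIN_ATTR='mail'
LDAP_FILTER='memberOf=CN=group,CN=developers,DC=example,DC=com'

# The server combines them as (&($LOGIN_ATTR=*)($LDAP_FILTER)).
FINAL_FILTER="(&(${LOGIN_ATTR}=*)(${LDAP_FILTER}))"
echo "$FINAL_FILTER"
```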
You can use the LDAP_FILTER option to limit user scope to a certain AD group.
First, you should find out the DN for the group. Again, we'll use the dsquery command on the domain controller. For example, if group name is 'seafilegroup', run dsquery group -name seafilegroup.
Add below option to seahub_settings.py:
LDAP_FILTER = 'memberOf={output of dsquery command}'\n"},{"location":"config/ldap_in_ce/#using-tls-connection-to-ldap-server","title":"Using TLS connection to LDAP server","text":"If your LDAP service supports TLS connections, you can configure LDAP_SERVER_URL as the access address of the ldaps protocol to use TLS to connect to the LDAP service, for example:
LDAP_SERVER_URL = 'ldaps://192.168.0.1:636'\n"},{"location":"config/ldap_in_pro/","title":"Configure Seafile Pro Edition to use LDAP","text":""},{"location":"config/ldap_in_pro/#how-does-ldap-user-management-work-in-seafile","title":"How does LDAP User Management work in Seafile","text":"When Seafile is integrated with LDAP, users in the system can be divided into two tiers:
Users within Seafile's internal user database. Some attributes are attached to these users, such as whether the user is a system admin and whether the account is activated.
Users in LDAP server. These are all the intended users of Seafile inside the LDAP server. Seafile doesn't manipulate these users directly. It has to import them into its internal database before setting attributes on them.
When Seafile counts the number of users in the system, it only counts the activated users in its internal database.
"},{"location":"config/ldap_in_pro/#basic-ldap-integration","title":"Basic LDAP Integration","text":"The only requirement for Seafile to use LDAP for authentication is that there must be a unique identifier for each user in the LDAP server. This id should also be user-friendly, as users will use it as their username when logging in. Below are some common options for this unique identifier:
user-login-name@domain-name, e.g. john@example.com. It's not a real email address, but it works fine as the unique identifier.The identifier is stored in the table social_auth_usersocialauth to map it to the internal user ID in Seafile. When this ID is changed in LDAP for a user, you only need to update the social_auth_usersocialauth table.
Add the following options to seahub_settings.py. Examples are as follows:
ENABLE_LDAP = True\nLDAP_SERVER_URL = 'ldap://192.168.0.1' \nLDAP_BASE_DN = 'ou=test,dc=seafile,dc=ren' \nLDAP_ADMIN_DN = 'administrator@example.com' \nLDAP_ADMIN_PASSWORD = 'yourpassword' \nLDAP_PROVIDER = 'ldap' \nLDAP_LOGIN_ATTR = 'email' \nLDAP_CONTACT_EMAIL_ATTR = '' \nLDAP_USER_ROLE_ATTR = '' \nLDAP_USER_FIRST_NAME_ATTR = 'givenName' \nLDAP_USER_LAST_NAME_ATTR = 'sn' \nLDAP_USER_NAME_REVERSE = False \nLDAP_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' \n Meaning of some options:
variable descriptionLDAP_SERVER_URL The URL of LDAP server LDAP_BASE_DN The root node of users who can log in to Seafile in the LDAP server LDAP_ADMIN_DN DN of the administrator used to query the LDAP server for information. For OpenLDAP, it may be cn=admin,dc=example,dc=com LDAP_ADMIN_PASSWORD Password of LDAP_ADMIN_DN LDAP_PROVIDER Identify the source of the user, used in the table social_auth_usersocialauth, defaults to 'ldap' LDAP_LOGIN_ATTR User's attribute used to log in to Seafile. It should be a unique identifier for the user in LDAP server. Learn more about this id from the descriptions at the beginning of this section. LDAP_CONTACT_EMAIL_ATTR LDAP user's contact_email attribute LDAP_USER_ROLE_ATTR LDAP user's role attribute LDAP_USER_FIRST_NAME_ATTR Attribute for user's first name. It's \"givenName\" by default. LDAP_USER_LAST_NAME_ATTR Attribute for user's last name. It's \"sn\" by default. LDAP_USER_NAME_REVERSE In some languages, such as Chinese, the display order of the first and last name is reversed. Set this option if you need it. LDAP_FILTER Additional filter conditions. Users who meet the filter conditions can log in, otherwise they cannot log in. Tips for choosing LDAP_BASE_DN and LDAP_ADMIN_DN:
To determine the LDAP_BASE_DN, you first have to navigate your organization hierarchy on the domain controller GUI.
If you want to allow all users to use Seafile, you can use cn=users,dc=yourdomain,dc=com as LDAP_BASE_DN (with proper adjustment for your own needs).
If you want to limit users to a certain OU (Organization Unit), run the dsquery command on the domain controller to find out the DN for this OU. For example, if the OU is staff, you can run dsquery ou -name staff. More information can be found here.
AD supports the user@domain.name format for the LDAP_ADMIN_DN option. For example, you can use administrator@example.com for LDAP_ADMIN_DN. Sometimes the domain controller doesn't recognize this format. You can still use the dsquery command to find out a user's DN. For example, if the user name is 'seafileuser', run dsquery user -name seafileuser. More information here.
In Seafile Pro, in addition to importing users into the internal database when they log in, you can also configure Seafile to periodically sync user information from the LDAP server into the internal database.
A user's full name, department and contact email address can be synced to the internal database. Users can use this information to more easily search for a specific user. A user's Windows or Unix login id can be synced to the internal database, which allows the user to log in with their familiar login id. When a user is removed from LDAP, the corresponding user in Seafile will be deactivated; otherwise, they could still sync files with the Seafile client or access the web interface. After synchronization is complete, you can see the user's full name, department and contact email on their profile page.
"},{"location":"config/ldap_in_pro/#sync-configuration-items","title":"Sync configuration items","text":"Add the following options to seahub_settings.py. Examples are as follows:
# Basic configuration items\nENABLE_LDAP = True\n......\n\n# ldap user sync options.\nLDAP_SYNC_INTERVAL = 60 \nENABLE_LDAP_USER_SYNC = True \nLDAP_USER_OBJECT_CLASS = 'person'\nLDAP_DEPT_ATTR = '' \nLDAP_UID_ATTR = '' \nLDAP_AUTO_REACTIVATE_USERS = True \nLDAP_USE_PAGED_RESULT = False \nIMPORT_NEW_USER = True \nACTIVATE_USER_WHEN_IMPORT = True \nDEACTIVE_USER_IF_NOTFOUND = False \nENABLE_EXTRA_USER_INFO_SYNC = True \n Meaning of some options:
Variable Description LDAP_SYNC_INTERVAL The interval to sync. Unit is minutes. Defaults to 60 minutes. ENABLE_LDAP_USER_SYNC Set to \"true\" if you want to enable LDAP user synchronization. LDAP_USER_OBJECT_CLASS This is the name of the class used to search for user objects. In Active Directory, it's usually \"person\". The default value is \"person\". LDAP_DEPT_ATTR Attribute for department info. LDAP_UID_ATTR Attribute for Windows login name. If this is synchronized, users can also log in with their Windows login name. In AD, the attribute sAMAccountName can be used as UID_ATTR. The attribute will be stored as login_id in Seafile (in the seahub_db.profile_profile table). LDAP_AUTO_REACTIVATE_USERS Whether to automatically reactivate deactivated users. Defaults to \"true\". LDAP_USE_PAGED_RESULT Whether to use the pagination extension. It is useful when you have more than 1000 users in the LDAP server. IMPORT_NEW_USER Whether to import new users when syncing users. ACTIVATE_USER_WHEN_IMPORT Whether to activate the user automatically when imported. DEACTIVE_USER_IF_NOTFOUND Set to \"true\" if you want to deactivate a user when he/she was deleted in the AD server. ENABLE_EXTRA_USER_INFO_SYNC Enable synchronization of additional user information, including the user's full name, department, and Windows login name, etc.
Seafile provides a combination of options for such a use case. You can modify the below option in seahub_settings.py:
ACTIVATE_USER_WHEN_IMPORT = False\n This prevents Seafile from activating imported users. Then, add below option to seahub_settings.py:
ACTIVATE_AFTER_FIRST_LOGIN = True\n This option will automatically activate users when they log in to Seafile for the first time.
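The combined effect of ACTIVATE_USER_WHEN_IMPORT and ACTIVATE_AFTER_FIRST_LOGIN can be sketched as follows; the function, its inputs and the config dict are illustrative, not Seafile's actual code:

```python
# Illustrative sketch (not Seafile's actual code) of how the two
# activation options combine for an imported LDAP user.
DEFAULT_CONFIG = {
    'ACTIVATE_USER_WHEN_IMPORT': False,
    'ACTIVATE_AFTER_FIRST_LOGIN': True,
}

def should_activate(event, config=None):
    """`event` is 'import' (account created by LDAP sync) or
    'first_login' (the user logged in for the first time)."""
    cfg = config or DEFAULT_CONFIG
    if event == 'import':
        return cfg['ACTIVATE_USER_WHEN_IMPORT']
    if event == 'first_login':
        return cfg['ACTIVATE_AFTER_FIRST_LOGIN']
    return False
```

With this combination, sync imports the account in a deactivated state and the first successful login activates it, so licenses are only consumed by users who actually log in.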
"},{"location":"config/ldap_in_pro/#reactivating-users","title":"Reactivating Users","text":"When you set the DEACTIVE_USER_IF_NOTFOUND option, a user will be deactivated when he/she is not found in LDAP server. By default, even after this user reappears in the LDAP server, it won't be reactivated automatically. This is to prevent auto reactivating a user that was manually deactivated by the system admin.
However, sometimes it's desirable to auto reactivate such users. You can modify below option in seahub_settings.py:
LDAP_AUTO_REACTIVATE_USERS = True\n"},{"location":"config/ldap_in_pro/#manually-trigger-synchronization","title":"Manually Trigger Synchronization","text":"To test your LDAP sync configuration, you can run the sync command manually.
To trigger LDAP sync manually:
cd seafile-server-latest\n./pro/pro.py ldapsync\n For Seafile Docker
docker exec -it seafile /opt/seafile/seafile-server-latest/pro/pro.py ldapsync\n"},{"location":"config/ldap_in_pro/#setting-up-ldap-group-sync-optional","title":"Setting Up LDAP Group Sync (optional)","text":""},{"location":"config/ldap_in_pro/#how-it-works","title":"How It Works","text":"The importing or syncing process maps groups from LDAP directory server to groups in Seafile's internal database. This process is one-way.
Any changes to groups in the database won't propagate back to LDAP;
Any changes to groups in the database, except for \"setting a member as group admin\", will be overwritten in the next LDAP sync operation. If you want to add or delete members, you can only do that on LDAP server.
The creator of imported groups will be set to the system admin.
There are two modes of operation:
Periodical: the syncing process will be executed in a fixed interval
Manual: there is a script you can run to trigger the syncing once
Before enabling LDAP group sync, you should have configured LDAP authentication. See Basic LDAP Integration for details.
The following are LDAP group sync related options:
# ldap group sync options.\nENABLE_LDAP_GROUP_SYNC = True # Whether to enable group sync\nLDAP_GROUP_OBJECT_CLASS = 'group' # This is the name of the class used to search for group objects.\nLDAP_GROUP_MEMBER_ATTR = 'member' # The attribute field to use when loading the group's members. \n # For most directory servers, the attribute is \"member\", \n # which is the default value. For \"posixGroup\", it should be set to \"memberUid\".\nLDAP_USER_ATTR_IN_MEMBERUID = 'uid' # The user attribute set in the 'memberUid' option, \n # which is used in \"posixGroup\". The default value is \"uid\".\nLDAP_GROUP_UUID_ATTR = 'objectGUID' # Used to uniquely identify groups in LDAP\nLDAP_GROUP_FILTER = '' # An additional filter to use when searching group objects.\n # If it's set, the final filter used to run search is \"(&(objectClass=GROUP_OBJECT_CLASS)(GROUP_FILTER))\";\n # otherwise the final filter would be \"(objectClass=GROUP_OBJECT_CLASS)\".\nLDAP_USE_GROUP_MEMBER_RANGE_QUERY = False # When a group contains too many members, \n # AD will only return part of them. Set this option to TRUE\n # to make LDAP sync work with large groups.\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\" and the sync process will delete a group if it is not found in the LDAP server.\nLDAP_SYNC_GROUP_AS_DEPARTMENT = False # Whether to sync groups as top-level departments in Seafile.\n # Learn more about departments in Seafile [here](https://help.seafile.com/sharing_collaboration/departments/).\nLDAP_DEPT_NAME_ATTR = '' # Used to get the department name.\n Meaning of some options:
variable description ENABLE_LDAP_GROUP_SYNC Whether to enable group sync. LDAP_GROUP_OBJECT_CLASS This is the name of the class used to search for group objects. LDAP_GROUP_MEMBER_ATTR The attribute field to use when loading the group's members. For most directory servers, the attribute is \"member\", which is the default value. For \"posixGroup\", it should be set to \"memberUid\". LDAP_USER_ATTR_IN_MEMBERUID The user attribute set in the 'memberUid' option, which is used in \"posixGroup\". The default value is \"uid\". LDAP_GROUP_UUID_ATTR Used to uniquely identify groups in LDAP. LDAP_GROUP_FILTER An additional filter to use when searching group objects. If it's set, the final filter used to run the search is (&(objectClass=GROUP_OBJECT_CLASS)(GROUP_FILTER)); otherwise the final filter would be (objectClass=GROUP_OBJECT_CLASS). LDAP_USE_GROUP_MEMBER_RANGE_QUERY When a group contains too many members, AD will only return part of them. Set this option to TRUE to make LDAP sync work with large groups. DEL_GROUP_IF_NOT_FOUND Set to \"true\" and the sync process will delete a group if it is not found in the LDAP server. LDAP_SYNC_GROUP_AS_DEPARTMENT Whether to sync groups as top-level departments in Seafile. Learn more about departments in Seafile here. LDAP_DEPT_NAME_ATTR Used to get the department name. Tip
The search base for groups is the option LDAP_BASE_DN.
Some LDAP servers, such as Active Directory, allow a group to be a member of another group. This is called \"group nesting\". If we find a nested group B in group A, we recursively add all the members of group B into group A. Group B is still imported as a separate group. That is, all members of group B are also members of group A.
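The nested-group flattening described above can be sketched like this; the in-memory group dict is illustrative, real code would query the directory server:

```python
# Illustrative sketch of nested-group ("group nesting") flattening.
# `groups` maps group name -> list of member names; a member that is
# itself a key of `groups` is a nested group, anything else is a user.
def flatten_members(group, groups, seen=None):
    seen = set() if seen is None else seen
    members = set()
    for m in groups.get(group, []):
        if m in groups:              # nested group: recurse into it
            if m not in seen:        # guard against membership cycles
                seen.add(m)
                members |= flatten_members(m, groups, seen)
        else:                        # ordinary user entry
            members.add(m)
    return members
```

For example, with group A containing group B and user alice, and B containing bob, A flattens to {alice, bob} while B is still imported on its own with {bob}.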
In some LDAP servers, such as OpenLDAP, it's common practice to use Posix groups to store group membership. To import Posix groups as Seafile groups, set the LDAP_GROUP_OBJECT_CLASS option to posixGroup. A posixGroup object in LDAP usually contains a multi-value attribute for the list of member UIDs. The name of this attribute can be set with the LDAP_GROUP_MEMBER_ATTR option. It's memberUid by default. The value of the memberUid attribute is an ID that can be used to identify a user, which corresponds to an attribute in the user object. The name of this ID attribute is usually uid, but it can be set via the LDAP_USER_ATTR_IN_MEMBERUID option. Note that posixGroup doesn't support nested groups.
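The memberUid resolution described above can be sketched as follows; the dict-shaped user entries are an assumption for illustration, not Seafile's internal representation:

```python
# Illustrative sketch: resolving posixGroup memberUid values against
# user entries. uid_attr corresponds to LDAP_USER_ATTR_IN_MEMBERUID
# ('uid' by default).
def resolve_member_uids(member_uids, user_entries, uid_attr='uid'):
    """Map each memberUid value to the DN of the matching user entry;
    memberUid values with no matching user are skipped."""
    by_uid = {entry[uid_attr]: dn for dn, entry in user_entries.items()}
    return [by_uid[u] for u in member_uids if u in by_uid]
```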
A department in Seafile is a special group. In addition to what you can do with a group, there are two key new features for departments:
Department supports hierarchy. A department can have any number of levels of sub-departments.
Department can have storage quota.
Seafile supports syncing OU (Organizational Units) from AD/LDAP to departments. The sync process keeps the hierarchical structure of the OUs.
Options for syncing departments from OU:
LDAP_SYNC_DEPARTMENT_FROM_OU = True # Whether to enable syncing departments from OU.\nLDAP_DEPT_NAME_ATTR = 'description' # Used to get the department name.\nLDAP_CREATE_DEPARTMENT_LIBRARY = False # If you decide to sync the group as a department,\n # you can set this option to \"true\". In this way, when \n # the group is synchronized for the first time, a library\n # is automatically created for the department, and the \n # library's name is the department's name.\nLDAP_DEPT_REPO_PERM = 'rw' # Set the permissions of the department repo, default permission is 'rw'.\nLDAP_DEFAULT_DEPARTMENT_QUOTA = -2 # You can set a default space quota for each department\n # when you synchronize a group for the first time. The \n # quota is set to unlimited if this option is not set.\n # Unit is MB.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\" and the sync process will delete a department if it is not found in the LDAP server.\n"},{"location":"config/ldap_in_pro/#periodical-and-manual-sync","title":"Periodical and Manual Sync","text":"Periodical sync won't happen immediately after you restart the Seafile server. It gets scheduled after the first sync interval. For example, if you set the sync interval to 30 minutes, the first automatic sync will happen 30 minutes after you restart. To sync immediately, you need to trigger it manually.
After the sync is run, you should see log messages like the following in logs/seafevents.log. And you should be able to see the groups in system admin page.
[2023-03-30 18:15:05,109] [DEBUG] create group 1, and add dn pair CN=DnsUpdateProxy,CN=Users,DC=Seafile,DC=local<->1 success.\n[2023-03-30 18:15:05,145] [DEBUG] create group 2, and add dn pair CN=Domain Computers,CN=Users,DC=Seafile,DC=local<->2 success.\n[2023-03-30 18:15:05,154] [DEBUG] create group 3, and add dn pair CN=Domain Users,CN=Users,DC=Seafile,DC=local<->3 success.\n[2023-03-30 18:15:05,164] [DEBUG] create group 4, and add dn pair CN=Domain Admins,CN=Users,DC=Seafile,DC=local<->4 success.\n[2023-03-30 18:15:05,176] [DEBUG] create group 5, and add dn pair CN=RAS and IAS Servers,CN=Users,DC=Seafile,DC=local<->5 success.\n[2023-03-30 18:15:05,186] [DEBUG] create group 6, and add dn pair CN=Enterprise Admins,CN=Users,DC=Seafile,DC=local<->6 success.\n[2023-03-30 18:15:05,197] [DEBUG] create group 7, and add dn pair CN=dev,CN=Users,DC=Seafile,DC=local<->7 success.\n To trigger LDAP sync manually,
cd seafile-server-latest\n./pro/pro.py ldapsync\n For Seafile Docker
docker exec -it seafile /opt/seafile/seafile-server-latest/pro/pro.py ldapsync\n"},{"location":"config/ldap_in_pro/#advanced-ldap-integration-options","title":"Advanced LDAP Integration Options","text":""},{"location":"config/ldap_in_pro/#multiple-base","title":"Multiple BASE","text":"Multiple base DNs are useful when your company has more than one OU using Seafile. You can specify a list of base DNs in the LDAP_BASE_DN option. The DNs are separated by \";\", e.g.
LDAP_BASE_DN = 'ou=developers,dc=example,dc=com;ou=marketing,dc=example,dc=com'\n"},{"location":"config/ldap_in_pro/#additional-search-filter","title":"Additional Search Filter","text":"Search filter is very useful when you have a large organization but only a portion of people want to use Seafile. The filter can be given by setting LDAP_FILTER option. The value of this option follows standard LDAP search filter syntax (https://msdn.microsoft.com/en-us/library/aa746475(v=vs.85).aspx).
The final filter used for searching for users is (&($LOGIN_ATTR=*)($LDAP_FILTER)). $LOGIN_ATTR and $LDAP_FILTER will be replaced by your option values.
For example, add below option to seahub_settings.py:
LDAP_FILTER = 'memberOf=CN=group,CN=developers,DC=example,DC=com'\n The final search filter would be (&(mail=*)(memberOf=CN=group,CN=developers,DC=example,DC=com))
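The composition rule can be sketched as a small helper; the fall-back for an empty LDAP_FILTER mirrors the documented group-filter behaviour and is an assumption here, not confirmed Seafile code:

```python
# Sketch of the (&($LOGIN_ATTR=*)($LDAP_FILTER)) composition described
# above. With an empty filter, only the login-attribute presence test
# remains (an assumption mirroring the group-filter fall-back).
def final_user_filter(login_attr, ldap_filter=''):
    if ldap_filter:
        return '(&({0}=*)({1}))'.format(login_attr, ldap_filter)
    return '({0}=*)'.format(login_attr)
```

For the example above, final_user_filter('mail', 'memberOf=CN=group,CN=developers,DC=example,DC=com') reproduces the documented search filter.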
The case of attribute names in the above example is significant. The memberOf attribute is only available in Active Directory.
You can use the LDAP_FILTER option to limit user scope to a certain AD group.
First, you should find out the DN for the group. Again, we'll use the dsquery command on the domain controller. For example, if group name is 'seafilegroup', run dsquery group -name seafilegroup.
Add below option to seahub_settings.py:
LDAP_FILTER = 'memberOf={output of dsquery command}'\n"},{"location":"config/ldap_in_pro/#using-tls-connection-to-ldap-server","title":"Using TLS connection to LDAP server","text":"If your LDAP service supports TLS connections, you can configure LDAP_SERVER_URL as the access address of the ldaps protocol to use TLS to connect to the LDAP service, for example:
LDAP_SERVER_URL = 'ldaps://192.168.0.1:636'\n"},{"location":"config/ldap_in_pro/#use-paged-results-extension","title":"Use paged results extension","text":"LDAP protocol version 3 supports the \"paged results\" (PR) extension. When you have a large number of users, this option can greatly improve the performance of listing users. Most directory servers nowadays support this extension.
In Seafile Pro Edition, add this option to seahub_settings.py to enable PR:
LDAP_USE_PAGED_RESULT = True\n"},{"location":"config/ldap_in_pro/#follow-referrals","title":"Follow referrals","text":"Seafile Pro Edition supports automatically following referrals in LDAP searches. This is useful for partitioned LDAP or AD servers, where users may be spread across multiple directory servers. For more information about referrals, you can refer to this article.
Note: If you get an error like Invalid credentials, you can try setting LDAP_FOLLOW_REFERRALS = False to solve the problem:
LDAP_FOLLOW_REFERRALS = False\n"},{"location":"config/ldap_in_pro/#configure-multi-ldap-servers","title":"Configure Multi-ldap Servers","text":"Seafile Pro Edition supports multiple LDAP servers; you can configure two LDAP servers to work with Seafile. When getting or searching for an LDAP user, Seafile will iterate over all configured LDAP servers until a match is found; when listing all LDAP users, it will iterate over all LDAP servers to get all users; LDAP sync will sync all user/group info from all configured LDAP servers to Seafile.
Currently, only two LDAP servers are supported.
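The lookup order described above can be sketched as follows; the callables standing in for per-server searches are illustrative, not Seafile's actual interface:

```python
# Illustrative sketch of the lookup order with multiple LDAP servers:
# each configured server is tried in turn until one returns a match.
def find_user(login, servers):
    """`servers` is an ordered list of callables, each looking up
    `login` on one LDAP server and returning a user dict or None."""
    for search in servers:
        user = search(login)
        if user is not None:
            return user
    return None
```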
If you want to use multi-ldap servers, please replace LDAP in the options with MULTI_LDAP_1, and then add them to seahub_settings.py, for example:
# Basic config options\nENABLE_LDAP = True\n......\n\n# Multi ldap config options\nENABLE_MULTI_LDAP = True\nMULTI_LDAP_1_SERVER_URL = 'ldap://192.168.0.2'\nMULTI_LDAP_1_BASE_DN = 'ou=test,dc=seafile,dc=top'\nMULTI_LDAP_1_ADMIN_DN = 'administrator@example.top'\nMULTI_LDAP_1_ADMIN_PASSWORD = 'Hello@123'\nMULTI_LDAP_1_PROVIDER = 'ldap1'\nMULTI_LDAP_1_LOGIN_ATTR = 'userPrincipalName'\n\n# Optional configs\nMULTI_LDAP_1_USER_FIRST_NAME_ATTR = 'givenName'\nMULTI_LDAP_1_USER_LAST_NAME_ATTR = 'sn'\nMULTI_LDAP_1_USER_NAME_REVERSE = False\nENABLE_MULTI_LDAP_1_EXTRA_USER_INFO_SYNC = True\n\nMULTI_LDAP_1_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' \nMULTI_LDAP_1_USE_PAGED_RESULT = False\nMULTI_LDAP_1_FOLLOW_REFERRALS = True\nENABLE_MULTI_LDAP_1_USER_SYNC = True\nENABLE_MULTI_LDAP_1_GROUP_SYNC = True\nMULTI_LDAP_1_SYNC_DEPARTMENT_FROM_OU = True\n\nMULTI_LDAP_1_USER_OBJECT_CLASS = 'person'\nMULTI_LDAP_1_DEPT_ATTR = ''\nMULTI_LDAP_1_UID_ATTR = ''\nMULTI_LDAP_1_CONTACT_EMAIL_ATTR = ''\nMULTI_LDAP_1_USER_ROLE_ATTR = ''\nMULTI_LDAP_1_AUTO_REACTIVATE_USERS = True\n\nMULTI_LDAP_1_GROUP_OBJECT_CLASS = 'group'\nMULTI_LDAP_1_GROUP_FILTER = ''\nMULTI_LDAP_1_GROUP_MEMBER_ATTR = 'member'\nMULTI_LDAP_1_GROUP_UUID_ATTR = 'objectGUID'\nMULTI_LDAP_1_CREATE_DEPARTMENT_LIBRARY = False\nMULTI_LDAP_1_DEPT_REPO_PERM = 'rw'\nMULTI_LDAP_1_DEFAULT_DEPARTMENT_QUOTA = -2\nMULTI_LDAP_1_SYNC_GROUP_AS_DEPARTMENT = False\nMULTI_LDAP_1_USE_GROUP_MEMBER_RANGE_QUERY = False\nMULTI_LDAP_1_USER_ATTR_IN_MEMBERUID = 'uid'\nMULTI_LDAP_1_DEPT_NAME_ATTR = ''\n......\n !!! note: There are still some shared config options that are used for all LDAP servers, as follows:
```python\n# Common user sync options\nLDAP_SYNC_INTERVAL = 60\nIMPORT_NEW_USER = True # Whether to import new users when syncing users\nACTIVATE_USER_WHEN_IMPORT = True # Whether to activate the user when importing a new user\nDEACTIVE_USER_IF_NOTFOUND = False # Set to \"true\" if you want to deactivate a user \n # when he/she was deleted in the AD server.\n\n# Common group sync options\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\" and the sync process will delete a group if it is not found in the LDAP server.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\" and the sync process will delete a department if it is not found in the LDAP server.\n```\n"},{"location":"config/ldap_in_pro/#sso-and-ldap-users-use-the-same-uid","title":"SSO and LDAP users use the same uid","text":"If you sync users from LDAP to Seafile and want Seafile, when a user logs in via SSO (ADFS, OAuth or Shibboleth), to find the existing account for this user instead of creating a new one, you can set
SSO_LDAP_USE_SAME_UID = True\n Here the UID means the unique user ID; in LDAP it is the attribute you use for LDAP_LOGIN_ATTR (not LDAP_UID_ATTR), and in ADFS it is the uid attribute. You need to make sure you use the same attribute for the two settings.
On this basis, if you only want users to log in using SSO and not through LDAP, you can set
USE_LDAP_SYNC_ONLY = True\n"},{"location":"config/ldap_in_pro/#importing-roles-from-ldap","title":"Importing Roles from LDAP","text":"Seafile Pro Edition supports syncing roles from LDAP or Active Directory.
To enable this feature, add below option to seahub_settings.py, e.g.
LDAP_USER_ROLE_ATTR = 'title'\n LDAP_USER_ROLE_ATTR is the attribute field to configure roles in LDAP. You can write a custom function to map the role by creating a file seahub_custom_functions.py under conf/ and edit it like:
# -*- coding: utf-8 -*-\n\n# The AD roles attribute returns a list of roles (role_list).\n# The following function uses the first entry in the list.\ndef ldap_role_mapping(role):\n if 'staff' in role:\n return 'Staff'\n if 'guest' in role:\n return 'Guest'\n if 'manager' in role:\n return 'Manager'\n\n# From version 11.0.11-pro, you can define the following function\n# to calculate a role from the role_list.\ndef ldap_role_list_mapping(role_list):\n if not role_list:\n return ''\n for role in role_list:\n if 'staff' in role:\n return 'Staff'\n if 'guest' in role:\n return 'Guest'\n if 'manager' in role:\n return 'Manager'\n You should only define one of the two functions.
You can rewrite the function (in Python) to make your own mapping rules. If the file or function doesn't exist, the first entry in role_list will be synced.
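The fall-back behaviour can be sketched like this; map_role and its signature are illustrative, not Seafile's actual code:

```python
# Sketch of the fall-back described above: without a custom mapping
# function, the first entry of role_list is synced as the role.
def map_role(role_list, mapping_func=None):
    if mapping_func is not None:
        return mapping_func(role_list)
    return role_list[0] if role_list else ''
```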
"},{"location":"config/multi_institutions/","title":"Multiple Organization/Institution User Management","text":"Starting from version 5.1, you can add institutions into Seafile and assign users into institutions. Each institution can have one or more administrators. This feature is to ease user administration when multiple organizations (universities) share a single Seafile instance. Unlike multi-tenancy, the users are not-isolated. A user from one institution can share files with another institution.
"},{"location":"config/multi_institutions/#turn-on-the-feature","title":"Turn on the feature","text":"In seahub_settings.py, add MULTI_INSTITUTION = True to enable multi-institution feature, and add
EXTRA_MIDDLEWARE += (\n 'seahub.institutions.middleware.InstitutionMiddleware',\n )\n Please replace += with = if EXTRA_MIDDLEWARE is not defined.
After restarting Seafile, a system admin can add institutions by adding institution names in the admin panel. He/she can also click into an institution, which will list all users whose profile.institution matches the name.
If you are using Shibboleth, you can map a Shibboleth attribute to an institution. For example, the following configuration maps the organization attribute to institution.
SHIBBOLETH_ATTRIBUTE_MAP = {\n \"givenname\": (False, \"givenname\"),\n \"sn\": (False, \"surname\"),\n \"mail\": (False, \"contact_email\"),\n \"organization\": (False, \"institution\"),\n}\n"},{"location":"config/multi_tenancy/","title":"Multi-Tenancy Support","text":"The multi-tenancy feature is designed for hosting providers that want to host several customers in a single Seafile instance. You can create multiple organizations. Organizations are isolated from each other, and users can't share libraries between organizations.
"},{"location":"config/multi_tenancy/#seafile-config","title":"Seafile Config","text":""},{"location":"config/multi_tenancy/#seafileconf","title":"seafile.conf","text":"[general]\nmulti_tenancy = true\n"},{"location":"config/multi_tenancy/#seahub_settingspy","title":"seahub_settings.py","text":"CLOUD_MODE = True\nMULTI_TENANCY = True\n\nORG_MEMBER_QUOTA_ENABLED = True\n\nORG_ENABLE_ADMIN_CUSTOM_NAME = True # Default is True, meaning organization name can be customized\nORG_ENABLE_ADMIN_CUSTOM_LOGO = False # Default is False, if set to True, organization logo can be customized\n\nENABLE_MULTI_ADFS = True # Default is False, if set to True, support per organization custom ADFS/SAML2 login\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n ...\n}\n"},{"location":"config/multi_tenancy/#usage","title":"Usage","text":"An organization can be created via system admin in \u201cadmin panel->organization->Add organization\u201d.
Every organization has a URL prefix. This field is reserved for future use. When a user creates an organization, a URL like org1 will be automatically assigned.
After creating an organization, the first user will become the admin of that organization. The organization admin can add other users. Note that the system admin can't add users.
"},{"location":"config/multi_tenancy/#adfssaml-single-sign-on-integration-in-multi-tenancy","title":"ADFS/SAML single sign-on integration in multi-tenancy","text":""},{"location":"config/multi_tenancy/#preparation-for-adfssaml","title":"Preparation for ADFS/SAML","text":"1) Prepare SP(Seafile) certificate directory and SP certificates:
Create sp certs dir
$ mkdir -p /opt/seafile-data/seafile/seahub-data/certs\n The SP certificate can be generated by the openssl command, or you can apply to the certificate manufacturer, it is up to you. For example, generate the SP certs using the following command:
$ cd /opt/seafile-data/seafile/seahub-data/certs\n$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout sp.key -out sp.crt\n The days option indicates the validity period of the generated certificate, in days. The system admin needs to renew the certificate regularly.
Note
If certificates are not placed in /opt/seafile-data/seafile/seahub-data/certs, you need to add the following configuration in seahub_settings.py:
SAML_CERTS_DIR = '/path/to/certs'\n 2) Add the following configuration to seahub_settings.py and then restart Seafile:
ENABLE_MULTI_ADFS = True\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n ...\n}\n"},{"location":"config/multi_tenancy/#integration-with-adfssaml-single-sign-on","title":"Integration with ADFS/SAML single sign-on","text":"Please refer to this document.
"},{"location":"config/oauth/","title":"OAuth Authentication","text":""},{"location":"config/oauth/#oauth","title":"OAuth","text":"Before using OAuth, you should first register an OAuth2 client application on your authorization server, then add some configurations to seahub_settings.py.
"},{"location":"config/oauth/#register-an-oauth2-client-application","title":"Register an OAuth2 client application","text":"Here we use Github as an example. First you should register an OAuth2 client application on Github, official document from Github is very detailed.
"},{"location":"config/oauth/#configuration","title":"Configuration","text":"Add the folllowing configurations to seahub_settings.py:
ENABLE_OAUTH = True\n\n# Whether to create a new user when he/she logs in to Seafile for the first time, default `True`.\nOAUTH_CREATE_UNKNOWN_USER = True\n\n# Whether to activate a new user when he/she logs in to Seafile for the first time, default `True`.\nOAUTH_ACTIVATE_USER_AFTER_CREATION = True\n\n# Usually OAuth works through an SSL layer. If your server is not configured to allow HTTPS, some methods will raise an \"oauthlib.oauth2.rfc6749.errors.InsecureTransportError\". Set this to `True` to avoid this error.\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\n# Client id/secret generated by the authorization server when you register your client application.\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\n\n# Callback url when user authentication succeeds. Note: the redirect url you enter when you register your client application MUST be exactly the same as this value.\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\n# The following should NOT be changed if you are using Github as OAuth provider.\nOAUTH_PROVIDER_DOMAIN = 'github.com' \nOAUTH_PROVIDER = 'github.com'\n\nOAUTH_AUTHORIZATION_URL = 'https://github.com/login/oauth/authorize'\nOAUTH_TOKEN_URL = 'https://github.com/login/oauth/access_token'\nOAUTH_USER_INFO_URL = 'https://api.github.com/user'\nOAUTH_SCOPE = [\"user\",]\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # Please keep the 'email' option unchanged to be compatible with the login of users of version 11.0 and earlier.\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n \"uid\": (True, \"uid\"), # Seafile v11.0 + \n}\n"},{"location":"config/oauth/#more-explanations-about-the-settings","title":"More explanations about the settings","text":"OAUTH_PROVIDER / OAUTH_PROVIDER_DOMAIN
OAUTH_PROVIDER_DOMAIN will be deprecated, and it can be replaced by OAUTH_PROVIDER. This variable is used in the database to identify third-party providers, either as a domain or as an easy-to-remember string less than 32 characters.
OAUTH_ATTRIBUTE_MAP
This variable describes which claims from the response of the user info endpoint are to be filled into which attributes of the new Seafile user. The format is shown below:
OAUTH_ATTRIBUTE_MAP = {\n <:Attribute in the OAuth provider>: (<:Is required or not in Seafile?>, <:Attribute in Seafile >)\n }\n If the remote resource server, like Github, also uses email to identify a unique user, Seafile will use the Github id directly, and the OAUTH_ATTRIBUTE_MAP setting for Github should look like this:
OAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # it is deprecated\n \"uid / id / username\": (True, \"uid\") \n\n # extra infos you want to update to Seafile\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"), \n }\n The key part id stands for a unique identifier of the user in Github; this tells Seafile which attribute the remote resource server uses to identify its user. The value part True indicates whether this field is mandatory for Seafile.
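How such a map could be applied to a provider's user-info response can be sketched as follows; apply_attribute_map and the sample map are illustrative, not Seafile's actual implementation:

```python
# Illustrative sketch (not Seafile's actual code) of applying an
# OAUTH_ATTRIBUTE_MAP to the user-info response from the provider.
OAUTH_ATTRIBUTE_MAP = {
    'id': (True, 'uid'),                # required: provider's unique id
    'name': (False, 'name'),            # optional extra info
    'email': (False, 'contact_email'),  # optional extra info
}

def apply_attribute_map(user_info, attr_map):
    """Copy claims from `user_info` into Seafile attribute names;
    raise if a claim marked required is missing."""
    result = {}
    for provider_attr, (required, seafile_attr) in attr_map.items():
        if provider_attr in user_info:
            result[seafile_attr] = user_info[provider_attr]
        elif required:
            raise ValueError('missing required attribute: %s' % provider_attr)
    return result
```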
Since version 11.0, Seafile uses uid as the external unique identifier of the user. It stores uid in the table social_auth_usersocialauth and maps it to the internal unique identifier used in Seafile. Different OAuth systems have different attributes, which may be id, uid, username, etc. The id/email config id: (True, email) is deprecated.
If you upgrade from a version below 11.0, you need to have both fields configured, i.e., your configuration should be like:
OAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"),\n \"uid\": (True, \"uid\") ,\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"), \n }\n In this way, when a user logs in, Seafile will first use the \"id -> email\" map to find the old user and then create a \"uid -> uid\" map for this old user. After all users have logged in once, you can delete the configuration \"id\": (True, \"email\").
If you use a newly deployed 11.0+ Seafile instance, you don't need the \"id\": (True, \"email\") item. Your configuration should be like:
OAUTH_ATTRIBUTE_MAP = {\n \"uid\": (True, \"uid\") ,\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"), \n }\n"},{"location":"config/oauth/#sample-settings","title":"Sample settings","text":"GoogleGithubGitLabAzure Cloud ENABLE_OAUTH = True\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\n# The following should NOT be changed if you are using Google as OAuth provider.\nOAUTH_PROVIDER_DOMAIN = 'google.com'\nOAUTH_AUTHORIZATION_URL = 'https://accounts.google.com/o/oauth2/v2/auth'\nOAUTH_TOKEN_URL = 'https://www.googleapis.com/oauth2/v4/token'\nOAUTH_USER_INFO_URL = 'https://www.googleapis.com/oauth2/v1/userinfo'\nOAUTH_SCOPE = [\n \"openid\",\n \"https://www.googleapis.com/auth/userinfo.email\",\n \"https://www.googleapis.com/auth/userinfo.profile\",\n]\nOAUTH_ATTRIBUTE_MAP = {\n \"sub\": (True, \"uid\"),\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n Note
For Github, email is not the unique identifier for a user, but id is in most cases, so we use id in the settings example in our manual. As Seafile uses email to identify a unique user account for now, we combine id and OAUTH_PROVIDER_DOMAIN, which is github.com in your case, into an email-format string and then create this account if it does not exist.
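The id-plus-domain combination described above can be sketched as follows; synthesize_email is an illustrative name, not an actual Seafile function:

```python
# Sketch of the combination described above: the provider's numeric id
# and OAUTH_PROVIDER_DOMAIN are joined into an email-format account name.
def synthesize_email(provider_id, provider_domain):
    return '{0}@{1}'.format(provider_id, provider_domain)
```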
ENABLE_OAUTH = True\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\nOAUTH_PROVIDER_DOMAIN = 'github.com'\nOAUTH_AUTHORIZATION_URL = 'https://github.com/login/oauth/authorize'\nOAUTH_TOKEN_URL = 'https://github.com/login/oauth/access_token'\nOAUTH_USER_INFO_URL = 'https://api.github.com/user'\nOAUTH_SCOPE = [\"user\",]\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, 'uid'),\n \"email\": (False, \"contact_email\"),\n \"name\": (False, \"name\"),\n}\n Note
To enable OAuth via GitLab. Create an application in GitLab (under Admin area->Applications).
Fill in required fields:
Name: a name you specify
Redirect URI: The callback url see below OAUTH_REDIRECT_URL
Trusted: Skip the confirmation dialog page. Select this so the user is not asked whether to authorize Seafile to access his/her account data.
Scopes: Select openid and read_user in the scopes list.
Press submit and copy the client id and secret you receive on the confirmation page and use them in this template for your seahub_settings.py
ENABLE_OAUTH = True\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\nOAUTH_REDIRECT_URL = \"https://your-seafile/oauth/callback/\"\n\nOAUTH_PROVIDER_DOMAIN = 'your-domain'\nOAUTH_AUTHORIZATION_URL = 'https://gitlab.your-domain/oauth/authorize'\nOAUTH_TOKEN_URL = 'https://gitlab.your-domain/oauth/token'\nOAUTH_USER_INFO_URL = 'https://gitlab.your-domain/api/v4/user'\nOAUTH_SCOPE = [\"openid\", \"read_user\"]\nOAUTH_ATTRIBUTE_MAP = {\n \"email\": (True, \"uid\"),\n \"name\": (False, \"name\")\n}\n Note
For users of Azure Cloud: as there is no id field returned from Azure Cloud's user info endpoint, we use a special configuration for the OAUTH_ATTRIBUTE_MAP setting (the other settings are the same as for GitHub/Google). Please see this tutorial for the complete deployment process of OAuth against Azure Cloud.
OAUTH_ATTRIBUTE_MAP = {\n \"email\": (True, \"uid\"),\n \"name\": (False, \"name\")\n}\n"},{"location":"config/ocm/","title":"Open Cloud Mesh","text":"Since 8.0.0, Seafile supports the OCM protocol. With OCM, users can share libraries with other servers that also have OCM enabled.
Seafile currently supports sharing between Seafile servers with versions greater than 8.0, and sharing from Nextcloud to Seafile since 9.0.
These two functions cannot be enabled at the same time.
"},{"location":"config/ocm/#configuration","title":"Configuration","text":"Add the following configuration to seahub_settings.py.
# Enable OCM\nENABLE_OCM = True\nOCM_PROVIDER_ID = '71687320-6219-47af-82f3-32012707a5ae' # the unique id of this server\nOCM_REMOTE_SERVERS = [\n {\n \"server_name\": \"dev\",\n \"server_url\": \"https://seafile-domain-1/\", # should end with '/'\n },\n {\n \"server_name\": \"download\",\n \"server_url\": \"https://seafile-domain-2/\", # should end with '/'\n },\n]\n # Enable OCM\nENABLE_OCM_VIA_WEBDAV = True\nOCM_PROVIDER_ID = '71687320-6219-47af-82f3-32012707a5ae' # the unique id of this server\nOCM_REMOTE_SERVERS = [\n {\n \"server_name\": \"nextcloud\",\n \"server_url\": \"https://nextcloud-domain-1/\", # should end with '/'\n }\n]\n OCM_REMOTE_SERVERS is a list of servers that you allow your users to share libraries with
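Since the comments above note that each server_url should end with '/', a quick sanity check over the list (purely illustrative, not part of Seafile) could look like:

```python
# Illustrative check that every OCM remote server URL has a trailing slash,
# as required by the comments in the configuration above.
OCM_REMOTE_SERVERS = [
    {"server_name": "dev", "server_url": "https://seafile-domain-1/"},
    {"server_name": "download", "server_url": "https://seafile-domain-2/"},
]

for server in OCM_REMOTE_SERVERS:
    assert server["server_url"].endswith("/"), server["server_name"]
print("all server URLs end with '/'")
```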
"},{"location":"config/ocm/#usage","title":"Usage","text":""},{"location":"config/ocm/#share-library-to-other-server","title":"Share library to other server","text":"In the library sharing dialog jump to \"Share to other server\", you can share this library to users of another server with \"Read-Only\" or \"Read-Write\" permission. You can also view shared records and cancel sharing.
"},{"location":"config/ocm/#view-be-shared-libraries","title":"View be shared libraries","text":"You can jump to \"Shared from other servers\" page to view the libraries shared by other servers and cancel the sharing.
You can also enter a library to view, download, or upload files.
"},{"location":"config/remote_user/","title":"SSO using Remote User","text":"Starting from 7.0.0, Seafile can integrate with various Single Sign On systems via a proxy server. Examples include Apache as Shibboleth proxy, or LemonLdap as a proxy to LDAP servers, or Apache as Kerberos proxy. Seafile can retrieve user information from special request headers (HTTP_REMOTE_USER, HTTP_X_AUTH_USER, etc.) set by the proxy servers.
After the proxy server (Apache/Nginx) successfully authenticates the user, it sets the user information in the request headers, and Seafile creates and logs in the user based on this information.
Make sure that the proxy server has a corresponding security mechanism to protect against forged request header attacks.
Please add the following settings to conf/seahub_settings.py to enable this feature.
ENABLE_REMOTE_USER_AUTHENTICATION = True\n\n# Optional, HTTP header, which is configured in your web server conf file,\n# used for Seafile to get user's unique id, default value is 'HTTP_REMOTE_USER'.\nREMOTE_USER_HEADER = 'HTTP_REMOTE_USER'\n\n# Optional, when the value of HTTP_REMOTE_USER is not a valid email address,\n# Seafile will build an email-like unique id from the value of 'REMOTE_USER_HEADER'\n# and this domain, e.g. user1@example.com.\nREMOTE_USER_DOMAIN = 'example.com'\n\n# Optional, whether to create new users in the Seafile system, default value is True.\n# If this setting is disabled, users that don't preexist in the Seafile DB cannot log in.\n# The admin has to first import the users from external systems like LDAP.\nREMOTE_USER_CREATE_UNKNOWN_USER = True\n\n# Optional, whether to activate new users in the Seafile system, default value is True.\n# If this setting is disabled, users will be unable to log in by default;\n# the administrator needs to manually activate them.\nREMOTE_USER_ACTIVATE_USER_AFTER_CREATION = True\n\n# Optional, map user attributes in the HTTP header to Seafile's user attributes.\nREMOTE_USER_ATTRIBUTE_MAP = {\n 'HTTP_DISPLAYNAME': 'name',\n 'HTTP_MAIL': 'contact_email',\n\n # for user info\n \"HTTP_GIVENNAME\": 'givenname',\n \"HTTP_SN\": 'surname',\n \"HTTP_ORGANIZATION\": 'institution',\n\n # for user role\n 'HTTP_SHIBBOLETH_AFFILIATION': 'affiliation',\n}\n\n# Map affiliation to user role. Though the config name is SHIBBOLETH_AFFILIATION_ROLE_MAP,\n# it is not restricted to Shibboleth.\nSHIBBOLETH_AFFILIATION_ROLE_MAP = {\n 'employee@uni-mainz.de': 'staff',\n 'member@uni-mainz.de': 'staff',\n 'student@uni-mainz.de': 'student',\n 'employee@hu-berlin.de': 'guest',\n 'patterns': (\n ('*@hu-berlin.de', 'guest1'),\n ('*@*.de', 'guest2'),\n ('*', 'guest'),\n ),\n}\n Then restart Seafile.
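As an illustration of how such an affiliation map might be applied, here is a sketch (not Seafile's actual code) that resolves a role by exact match first and then by wildcard patterns, assuming shell-style (fnmatch) wildcard semantics:

```python
# Sketch of resolving a user's role from an affiliation value: exact match
# first, then wildcard patterns. The fnmatch-style wildcard semantics are an
# assumption for illustration, not Seafile's implementation.
from fnmatch import fnmatch

ROLE_MAP = {
    'employee@uni-mainz.de': 'staff',
    'student@uni-mainz.de': 'student',
    'patterns': (
        ('*@hu-berlin.de', 'guest1'),
        ('*@*.de', 'guest2'),
        ('*', 'guest'),
    ),
}

def resolve_role(affiliation, role_map, default='default'):
    if affiliation in role_map:
        return role_map[affiliation]
    for pattern, role in role_map.get('patterns', ()):
        if fnmatch(affiliation, pattern):
            return role
    return default

print(resolve_role('employee@uni-mainz.de', ROLE_MAP))  # staff
print(resolve_role('someone@hu-berlin.de', ROLE_MAP))   # guest1
print(resolve_role('someone@tu-berlin.de', ROLE_MAP))   # guest2
```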
"},{"location":"config/roles_permissions/","title":"Roles and Permissions Support","text":"You can add/edit roles and permission for users. A role is just a group of users with some pre-defined permissions, you can toggle user roles in user list page at admin panel. For most permissions, the meaning can be easily obtained from the variable name. The following is a further detailed introduction to some variables.
role_quota is used to set the quota for users with a certain role. For example, we can set the quota of employee to 100G by adding 'role_quota': '100g', and leave users with other roles at the default quota.
After setting role_quota, it will take effect once a user with that role logs into Seafile. You can also manually change seafile-db.RoleQuota if you want to see the effect immediately.
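As an illustration, a quota string such as '100g' could be converted to bytes roughly as follows; `parse_quota` is a hypothetical helper, and the binary-unit (g = GiB) interpretation is an assumption, not Seafile's documented behavior:

```python
# Sketch: convert a role_quota string like '100g' or '500m' into bytes.
# The binary-unit interpretation (g = GiB) is an assumption for illustration;
# parse_quota is not a Seafile function.
UNITS = {'k': 2**10, 'm': 2**20, 'g': 2**30, 't': 2**40}

def parse_quota(quota):
    quota = quota.strip().lower()
    if not quota:
        return None  # empty string: the default quota applies
    if quota[-1] in UNITS:
        return int(quota[:-1]) * UNITS[quota[-1]]
    return int(quota)  # bare number: already bytes

print(parse_quota('100g'))  # 107374182400
```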
can_add_public_repo sets whether a role can create a public library (shared with all logged-in users); the default is False.
Since version 11.0.9 Pro, can_share_repo has been added to limit users' ability to share a library.
The can_add_public_repo option will not take effect if you configure global CLOUD_MODE = True
can_create_wiki and can_publish_wiki are used to control whether a role can create a Wiki and publish a Wiki. (A published Wiki has a special URL and can be visited by anonymous users.)
The storage_ids permission is used for assigning storage backends to users with a specific role. More details can be found in multiple storage backends.
upload_rate_limit and download_rate_limit are added to limit upload and download speed for users with different roles.
Note
After configuring the rate limit, run the following command in the seafile-server-latest directory for the configuration to take effect:
./seahub.sh python-env python3 seahub/manage.py set_user_role_upload_download_rate_limit\n can_drag_drop_folder_to_sync: allow or deny users to sync folders by dragging and dropping
can_export_files_via_mobile_client: allow or deny users to export files when using the mobile client
Seafile comes with two built-in roles, default and guest. A default user is a normal user with the following permissions:
'default': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'can_export_files_via_mobile_client': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_create_wiki': True,\n 'can_publish_wiki': True,\n 'upload_rate_limit': 0, # unit: kb/s\n 'download_rate_limit': 0,\n 'monthly_rate_limit': '',\n 'monthly_rate_limit_per_user': '',\n 'can_choose_office_suite': True,\n 'monthly_ai_credit_per_user': -1,\n 'can_use_sso_in_multi_tenancy': True,\n },\n While a guest user can only read files/folders in the system, here are the permissions for a guest user:
'guest': {\n 'can_add_repo': False,\n 'can_share_repo': False,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_send_share_link_mail': False,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'can_export_files_via_mobile_client': False,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_create_wiki': False,\n 'can_publish_wiki': False,\n 'upload_rate_limit': 0,\n 'download_rate_limit': 0,\n 'monthly_rate_limit': '',\n 'monthly_rate_limit_per_user': '',\n 'can_choose_office_suite': False,\n 'can_use_sso_in_multi_tenancy': False,\n },\n"},{"location":"config/roles_permissions/#edit-build-in-roles","title":"Edit built-in roles","text":"If you want to edit the permissions of the built-in roles, e.g. so that default users can invite guests or guest users can view repos in the organization, you can add the following lines to seahub_settings.py with the corresponding permissions set to True.
ENABLED_ROLE_PERMISSIONS = {\n 'default': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'can_export_files_via_mobile_client': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_create_wiki': True,\n 'can_publish_wiki': True,\n 'upload_rate_limit': 2000, # unit: kb/s\n 'download_rate_limit': 4000,\n 'monthly_rate_limit': '',\n 'monthly_rate_limit_per_user': '',\n 'can_choose_office_suite': True,\n 'monthly_ai_credit_per_user': -1,\n 'can_use_sso_in_multi_tenancy': True,\n },\n 'guest': {\n 'can_add_repo': False,\n 'can_share_repo': False,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_send_share_link_mail': False,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'can_export_files_via_mobile_client': False,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_create_wiki': False,\n 'can_publish_wiki': False,\n 'upload_rate_limit': 100,\n 'download_rate_limit': 200,\n 'monthly_rate_limit': '',\n 'monthly_rate_limit_per_user': '',\n 'can_choose_office_suite': False,\n 'can_use_sso_in_multi_tenancy': False,\n }\n}\n"},{"location":"config/roles_permissions/#more-about-guest-invitation-feature","title":"More about guest invitation feature","text":"A user who has the can_invite_guest permission can invite people outside of the organization as guests.
In order to use this feature, in addition to granting the can_invite_guest permission to the user, add the following lines to seahub_settings.py:
ENABLE_GUEST_INVITATION = True\n\n# invitation expire time\nINVITATIONS_TOKEN_AGE = 72 # hours\n After restarting, users who have the can_invite_guest permission will see an \"Invite People\" section in the sidebar of the home page.
Users can invite a guest user by providing an email address; the system will email the invitation link to that address.
Tip
If you want to block certain email addresses for the invitation, you can define a blacklist, e.g.
INVITATION_ACCEPTER_BLACKLIST = [\"a@a.com\", \"*@a-a-a.com\", r\".*@(foo|bar).com\", ]\n After that, the email address \"a@a.com\", any email address ending with \"@a-a-a.com\", and any email address ending with \"@foo.com\" or \"@bar.com\" will not be allowed.
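The blacklist above mixes a literal address, a wildcard entry, and a regular expression. As a sketch (the exact matching semantics here are an assumption, not Seafile's implementation), a matcher could try each interpretation in turn:

```python
# Sketch of blacklist matching: try a literal match, then shell-style
# wildcards, then a regular expression. How Seafile actually interprets
# each entry is not shown here; this ordering is an illustrative assumption.
import re
from fnmatch import fnmatch

BLACKLIST = ["a@a.com", "*@a-a-a.com", r".*@(foo|bar).com"]

def is_blocked(email, blacklist=BLACKLIST):
    for entry in blacklist:
        if email == entry or fnmatch(email, entry):
            return True
        try:
            if re.fullmatch(entry, email):
                return True
        except re.error:
            pass  # entry is a glob, not a valid regex
    return False

print(is_blocked("a@a.com"), is_blocked("x@a-a-a.com"), is_blocked("y@bar.com"))
```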
"},{"location":"config/roles_permissions/#add-custom-roles","title":"Add custom roles","text":"If you want to add a new role and assign some users with this role, e.g. new role employee can invite guest and can create public library and have all other permissions a default user has, you can add following lines to seahub_settings.py
ENABLED_ROLE_PERMISSIONS = {\n 'default': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'can_export_files_via_mobile_client': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_create_wiki': True,\n 'can_publish_wiki': True,\n 'upload_rate_limit': 2000, # unit: kb/s\n 'download_rate_limit': 4000,\n 'monthly_rate_limit': '',\n 'monthly_rate_limit_per_user': '',\n 'can_choose_office_suite': True,\n 'monthly_ai_credit_per_user': -1,\n 'can_use_sso_in_multi_tenancy': True,\n },\n 'guest': {\n 'can_add_repo': False,\n 'can_share_repo': False,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_send_share_link_mail': False,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'can_export_files_via_mobile_client': False,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_create_wiki': False,\n 'can_publish_wiki': False,\n 'upload_rate_limit': 100,\n 'download_rate_limit': 200,\n 'monthly_rate_limit': '',\n 'monthly_rate_limit_per_user': '',\n 'can_choose_office_suite': False,\n 'can_use_sso_in_multi_tenancy': False,\n },\n 'employee': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': True,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 
'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': True,\n 'can_drag_drop_folder_to_sync': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'can_export_files_via_mobile_client': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_create_wiki': True,\n 'can_publish_wiki': True,\n 'upload_rate_limit': 500,\n 'download_rate_limit': 800,\n 'monthly_rate_limit': '',\n 'monthly_rate_limit_per_user': '',\n 'can_choose_office_suite': True,\n 'monthly_ai_credit_per_user': -1, \n 'can_use_sso_in_multi_tenancy': True,\n },\n}\n"},{"location":"config/saml2/","title":"SAML 2.0 in version 10.0+","text":"In this document, we demonstrate how to integrate Seafile with SAML single sign-on. We will use the Microsoft Azure SAML single sign-on app, Microsoft on-premise ADFS, and Keycloak as three examples. Other SAML 2.0 providers should follow a similar approach.
"},{"location":"config/saml2/#preparations-for-saml-20","title":"Preparations for SAML 2.0","text":""},{"location":"config/saml2/#install-xmlsec1-package-binary-deployment-only","title":"Install xmlsec1 package (binary deployment only)","text":"This step is not needed for Docker based deployment
$ apt update\n$ apt install xmlsec1\n$ apt install dnsutils # For multi-tenancy feature\n"},{"location":"config/saml2/#prepare-spseafile-certificate-directory-and-sp-certificates","title":"Prepare SP(Seafile) certificate directory and SP certificates","text":"Create the certs directory:
Docker deployment: The default deployment path for Seafile is /opt/seafile, and the corresponding default path for seafile-data is /opt/seafile-data. If you did not deploy Seafile to this directory, check the SEAFILE_VOLUME variable in your .env file to confirm the path of your seafile-data.
cd /opt/seafile-data/seafile/seahub-data\nmkdir certs\n Binary deployment: If you deploy Seafile using the binary package, the default installation and data path is /opt/seafile. If you did not deploy Seafile to this directory, please check your actual deployment path.
cd /opt/seafile/seahub-data\nmkdir certs\n The SP certificate can be generated with the openssl command, or obtained from a certificate authority. For example, generate the SP certs using the following command:
cd certs\nopenssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout sp.key -out sp.crt\n The days option indicates the validity period of the generated certificate, in days. The system admin needs to renew the certificate regularly.
In the following examples, we assume Seafile is deployed at https://demo.seafile.top. You should change the domain in the examples to the domain of your Seafile server.
If you use Microsoft Azure SAML app to achieve single sign-on, please follow the steps below:
First, add SAML single sign-on app and assign users, refer to: add an Azure AD SAML application, create and assign users.
Second, set up the Identifier, Reply URL, and Sign on URL of the SAML app based on your service URL; refer to: enable single sign-on for saml app. The formats of the Identifier, Reply URL, and Sign on URL are: https://demo.seafile.top/saml2/metadata/, https://demo.seafile.top/saml2/acs/, and https://demo.seafile.top/, e.g.:
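As a sketch, all three values can be derived from the service URL (shown here with the demo.seafile.top example from above):

```python
# Sketch: derive the SAML app's Identifier, Reply URL, and Sign on URL
# from the Seafile service URL, matching the formats listed above.
SERVICE_URL = "https://demo.seafile.top"  # replace with your Seafile server URL

identifier = SERVICE_URL + "/saml2/metadata/"
reply_url = SERVICE_URL + "/saml2/acs/"
sign_on_url = SERVICE_URL + "/"

print(identifier)  # https://demo.seafile.top/saml2/metadata/
print(reply_url)   # https://demo.seafile.top/saml2/acs/
```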
Next, edit the SAML attributes & claims. Keep the SAML app's default attributes & claims unchanged; the uid attribute must be added, while the mail and name attributes are optional, e.g.:
Next, download the SAML app's certificate in Base64 format and rename it to idp.crt:
and put it under the certs directory (/opt/seafile-data/seafile/seahub-data/certs).
Next, copy the metadata URL of the SAML app:
and paste it into the SAML_REMOTE_METADATA_URL option in seahub_settings.py, e.g.:
SAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx' # copy from SAML app\n Next, add ENABLE_ADFS_LOGIN, LOGIN_REDIRECT_URL and SAML_ATTRIBUTE_MAPPING options to seahub_settings.py, and then restart Seafile, e.g:
ENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n 'seafile_groups': ('', ), # Optional, set this attribute if you need to synchronize groups/departments.\n ...\n\n}\nSAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx' # copy from SAML app\n Note
If the xmlsec1 binary is not located at /usr/bin/xmlsec1, you need to add the following configuration in seahub_settings.py:SAML_XMLSEC_BINARY_PATH = '/path/to/xmlsec1'\n To view where the xmlsec1 binary is located:
$ which xmlsec1\n Finally, open the browser and enter the Seafile login page, click Single Sign-On, and use the user assigned to SAML app to perform a SAML login test.
If you use Microsoft ADFS to achieve single sign-on, please follow the steps below:
First, please make sure the following preparations are done:
A Windows Server with ADFS installed. For configuring and installing ADFS you can see this article.
A valid SSL certificate for the ADFS server; here we use temp.adfs.com as the example domain name.
A valid SSL certificate for the Seafile server; here we use demo.seafile.top as the example domain name.
Second, download the base64 format certificate and upload it:
Navigate to the AD FS management window. In the left sidebar menu, navigate to Services > Certificates.
Locate the Token-signing certificate. Right-click the certificate and select View Certificate.
In the dialog box, select the Details tab.
Click Copy to File.
In the Certificate Export Wizard that opens, click Next.
Select Base-64 encoded X.509 (.CER), then click Next.
Name it idp.crt, then click Next.
Click Finish to complete the download.
And then put it under the certs directory (/opt/seafile/seahub-data/certs).
Next, add the following configurations to seahub_settings.py and then restart Seafile:
ENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n 'seafile_groups': ('', ), # Optional, set this attribute if you need to synchronize groups/departments.\n ...\n}\nSAML_REMOTE_METADATA_URL = 'https://temp.adfs.com/federationmetadata/2007-06/federationmetadata.xml' # The format of the ADFS federation metadata URL is: `https://{your ADFS domain name}/federationmetadata/2007-06/federationmetadata.xml`\n Next, add relying party trust:
Log into the ADFS server and open the ADFS management.
Under Actions, click Add Relying Party Trust.
On the Welcome page, choose Claims aware and click Start.
Select Import data about the relying party published online or on a local network, type your metadata URL in Federation metadata address (host name or URL), and then click Next. Your metadata URL format is: https://demo.seafile.top/saml2/metadata/, e.g.:
On the Specify Display Name page, type a name in Display name, e.g. Seafile, and under Notes type a description for this relying party trust; then click Next.
In the Choose an access control policy window, select Permit everyone, then click Next.
Review your settings, then click Next.
Click Close.
Next, create claims rules:
Open the ADFS management, click Relying Party Trusts.
Right-click your trust, and then click Edit Claim Issuance Policy.
On the Issuance Transform Rules tab click Add Rules.
Click the Claim rule template dropdown menu and select Send LDAP Attributes as Claims, and then click Next.
In the Claim rule name field, type the display name for this rule, such as Seafile Claim rule. Click the Attribute store dropdown menu and select Active Directory. In the LDAP Attribute column, click the dropdown menu and select User-Principal-Name. In the Outgoing Claim Type column, click the dropdown menu and select UPN. And then click Finish.
Click Add Rule again.
Click the Claim rule template dropdown menu and select Transform an Incoming Claim, and then click Next.
In the Claim rule name field, type the display name for this rule, such as UPN to Name ID. Click the Incoming claim type dropdown menu and select UPN (it must match the Outgoing Claim Type in the Seafile Claim rule). Click the Outgoing claim type dropdown menu and select Name ID. Click the Outgoing name ID format dropdown menu and select Email. And then click Finish.
Click OK to add both new rules.
When creating claim rules, you can also select other LDAP attributes, such as E-Mail-Addresses, depending on your ADFS setup.
Finally, open the browser and enter the Seafile login page, click Single Sign-On to perform ADFS login test.
In this part, we use the Keycloak SAML single sign-on app to show how Seafile integrates with SAML 2.0.
"},{"location":"config/saml2/#in-keycloak","title":"In Keycloak","text":"First, Create a new Client:
Client type: choose SAML
Client ID: fill in the SAML metadata address of Seafile (e.g., https://demo.seafile.top/saml2/metadata/)
Root URL and Home URL: fill in the Seafile web service address (e.g., https://demo.seafile.top/)
Valid redirect URIs: fill in all URLs of the Seafile web service (e.g., https://demo.seafile.top/*)
Next, open the client you just created and make the following modifications; leave all other settings as default.
Settings - SAML capabilities: set the Name ID Format to email, keep only Include AuthnStatement enabled, and disable all other settings.
Settings - Signature and Encryption: The default encryption algorithm is RSA_SHA256, so no changes are required.
Keys: confirm that the Signing keys config is disabled.
Client scopes: Configure the protocol mapping to map user information.
Next, choose the custom configuration By configuration:
Next, ensure that the above two attributes are added. After adding them, the result is as follows:
Advanced - Fine Grain SAML Endpoint Configuration
Assertion Consumer Service POST Binding URL: sends the SAML assertion to the SP using the POST method; set it to the SAML ACS address of Seafile (e.g., https://demo.seafile.top/saml2/acs/).
Assertion Consumer Service Redirect Binding URL: sends the SAML assertion to the SP via the redirect method; set it to Seafile's SAML ACS address (same as the Assertion Consumer Service POST Binding URL).
Logout Service POST Binding URL: the address for sending a logout request to the SP via the POST method. Fill in the SAML logout POST address of Seafile (e.g., https://demo.seafile.top/saml2/ls/post/).
Logout Service Redirect Binding URL: the address for sending a logout request to the SP via the redirect method. Fill in Seafile's SAML logout address (e.g., https://demo.seafile.top/saml2/ls/).
Advanced - Authentication flow overrides: Bind the authenticator (the default account-password login uses the Browser flow).
cd /opt/seafile-data/seafile/conf/\nvim seahub_settings.py \n\n\nENABLE_ADFS_LOGIN = True\n#SAML_CERTS_DIR is a path inside the container and does not need to be changed.\nSAML_CERTS_DIR = '/opt/seafile/seahub-data/certs'\n#The configuration format of SAML_REMOTE_METADATA_URL is '{idp_server_url}/realms/{realm}/protocol/saml/descriptor' \n#idp_server_url: The URL of the Keycloak service\n#realm: Realm name\nSAML_REMOTE_METADATA_URL = 'https://keycloak.seafile.com/realms/haiwen/protocol/saml/descriptor'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n}\n Finally, open the browser and enter the Seafile login page, click Single Sign-On, and use the user assigned to SAML app to perform a SAML login test.
"},{"location":"config/seafevents-conf/","title":"Configurable Options","text":"In the file seafevents.conf:
[STATISTICS]\n## must be \"true\" to enable statistics\nenabled = true\n\n[SEAHUB EMAIL]\n## must be \"true\" to enable user email notifications when there are new unread notifications\nenabled = true\n\n## interval of sending Seahub email. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval = 30m\n\n[FILE HISTORY]\nenabled = true\nthreshold = 5\nsuffix = md,txt,...\n\n## From seafile 7.0.0\n## Recording file history to database for fast access is enabled by default for 'Markdown, .txt, ppt, pptx, doc, docx, xls, xlsx'.\n## After enabling the feature, old history versions for markdown, doc, docx files will not be listed in the history page.\n## (Only new histories stored in the database will be listed.) But users can still access the old versions in the library snapshots.\n## For file types not listed in the suffix, history versions will be scanned from the library history as before.\n## The feature is enabled by default. You can set 'enabled = false' to disable it.\n\n## The 'threshold' is the time threshold for recording the historical version of a file, in minutes; the default is 5 minutes.\n## This means that if the interval between two adjacent file saves is less than 5 minutes, the two file changes will be merged and recorded as one historical version.\n## When set to 0, there is no time limit, which means that each save will generate a separate historical version.\n\n## If you need to modify the file list format, you can add 'suffix = md, txt, ...' configuration items.\n"},{"location":"config/seafevents-conf/#the-following-configurations-for-pro-edition-only","title":"The following configurations for Pro Edition only","text":"[AUDIT]\n## Audit log is disabled by default.\n## Leads to additional SQL tables being filled up; make sure your SQL server is able to handle it.\nenabled = true\n\n[INDEX FILES]\n## must be \"true\" to enable search\nenabled = true\n\n## The interval the search index is updated. 
Can be s(seconds), m(minutes), h(hours), d(days)\ninterval=10m\n\n## From Seafile 6.3.0 pro, in order to speed up full-text search, you should set\nhighlight = fvh\n\n## If true, indexes the contents of office/pdf files while updating the search index\n## Note: If you change this option from \"false\" to \"true\", you need to clear the search index and update the index again.\n## Refer to the file search manual for details.\nindex_office_pdf=false\n\n## The default size limit for doc, docx, ppt, pptx, xls, xlsx and pdf files. Files larger than this will not be indexed.\n## Since version 6.2.0\n## Unit: MB\noffice_file_size_limit = 10\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch with a username and password; you need to configure the username and password on the Elasticsearch server\nusername = elastic # username to connect to Elasticsearch\npassword = elastic_password # password to connect to Elasticsearch\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch via HTTPS; you need to configure HTTPS on the Elasticsearch server\nscheme = https # The default is http. If the Elasticsearch server is not configured with HTTPS, the scheme and cafile do not need to be configured\ncafile = path/to/cert.pem # The certificate path for user authentication. 
If the Elasticsearch server does not enable certificate authentication, this does not need to be configured\n\n## From version 11.0.5 Pro, you can customize ElasticSearch index names for distinct instances when integrating multiple Seafile servers with a single ElasticSearch server.\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\n\n## The default loglevel is `warning`.\n## Since version 11.0.4\nloglevel = info\n\n[EVENTS PUBLISH]\n## must be \"true\" to enable publishing event messages\nenabled = false\n## message format: repo-update\\t{{repo_id}}\\t{{commit_id}}\n## Currently only the redis message queue is supported\nmq_type = redis\n\n[AUTO DELETION]\nenabled = true # Default is false; when enabled, users can use the file auto deletion feature\ninterval = 86400 # The unit is second(s); the default frequency is one day, that is, it runs once a day\n\n[SEASEARCH]\nenabled = true # Default is false; when enabled, Seafile can use SeaSearch as the search engine\nseasearch_url = http://seasearch:4080 # If your SeaSearch server is deployed on another machine, replace this with the actual address\nseasearch_token = <your auth token> # base64 code consisting of `username:password`\ninterval = 10m # The interval the search index is updated. Can be s(seconds), m(minutes), h(hours), d(days)\n
Every entry in this configuration file is case-sensitive.
You need to restart Seafile docker image so that your changes take effect.
"},{"location":"config/seafile-conf/#storage-quota-setting","title":"Storage Quota Setting","text":"You may set a default quota (e.g. 2GB) for all users. To do this, just add the following lines to seafile.conf file
[quota]\n# default user quota in GB, integer only\ndefault = 2\n This setting applies to all users. If you want to set quota for a specific user, you may log in to seahub website as administrator, then set it in \"System Admin\" page.
Since Pro edition 10.0.9, you can set the maximum number of files allowed in a library; when this limit is exceeded, files cannot be uploaded to the library. There is no limit by default.
[quota]\nlibrary_file_limit = 100000\n"},{"location":"config/seafile-conf/#default-history-length-limit","title":"Default history length limit","text":"If you don't want to keep all file revision history, you may set a default history length limit for all libraries.
[history]\nkeep_days = days of history to keep\n"},{"location":"config/seafile-conf/#default-trash-expiration-time","title":"Default trash expiration time","text":"The default time for automatic cleanup of library trash is 30 days. You can modify this by adding the following configuration:
[library_trash]\nexpire_days = 60\n"},{"location":"config/seafile-conf/#seafile-fileserver-configuration","title":"Seafile fileserver configuration","text":"The configuration of the Seafile fileserver is in the [fileserver] section of the file seafile.conf
You can set the number of worker threads used to serve HTTP requests. The default value is 10, which is a good value for most use cases.
[fileserver]\nworker_threads = 15\n Change upload/download settings.
[fileserver]\n# Set maximum upload file size to 200M.\n# If not configured, there is no file size limit for uploading.\nmax_upload_size=200\n\n# Set maximum download directory size to 200M.\n# Default is 100M.\nmax_download_dir_size=200\n After a file is uploaded via the web interface, or the cloud file browser in the client, it needs to be divided into fixed-size blocks and stored into the storage backend. We call this procedure \"indexing\". By default, the file server uses 1 thread to sequentially index the file and store the blocks one by one. This is suitable for most cases. But if you're using S3/Ceph/Swift backends, you may have more bandwidth in the storage backend for storing multiple blocks in parallel. We provide an option to define the number of concurrent threads in indexing:
[fileserver]\nmax_indexing_threads = 10\n When users upload files in the web interface (seahub), the file server divides the file into fixed-size blocks. The default block size for web-uploaded files is 8MB. The block size can be set here.
[fileserver]\n#Set block size to 2MB\nfixed_block_size=2\n When users upload files in the web interface, the file server assigns a token to authorize the upload operation. This token is valid for 1 hour by default. When uploading a large file via WAN, the upload time can be longer than 1 hour. You can change the token expire time to a larger value.
[fileserver]\n#Set the upload token expiration to 3600s\nweb_token_expire_time=3600\n You can download a folder as a zip archive from Seahub, but some zip software on Windows doesn't support UTF-8, in which case you can use the \"windows_encoding\" setting to solve it.
[zip]\n# The file name encoding of the downloaded zip file.\nwindows_encoding = iso-8859-1\n The \"httptemp\" directory contains temporary files created during file upload and zip download. In some cases the temporary files are not cleaned up after a file transfer is interrupted. Starting from version 7.1.5, the file server will regularly scan the \"httptemp\" directory to remove files created a long time ago.
[fileserver]\n# After how much time a temp file will be removed. The unit is in seconds. Default to 3 days.\nhttp_temp_file_ttl = x\n# File scan interval. The unit is in seconds. Default to 1 hour.\nhttp_temp_scan_interval = x\n You can set the maximum number of files contained in a library that can be synced by the Seafile client. The default is 100000. When you download a repo, the Seafile client will request the fs id list, and you can control the timeout period of this request through the fs_id_list_request_timeout option, which defaults to 5 minutes. These two options are added to prevent long fs-id-list requests from overloading the server. You can set both options to -1 to allow unlimited size and timeout.
[fileserver]\nmax_sync_file_count = 100000\nfs_id_list_request_timeout = 300\n If you use object storage as the storage backend, when a large file is frequently downloaded, the same blocks need to be fetched from the storage backend to the Seafile server repeatedly. This may waste bandwidth and cause high load on the internal network. Since Seafile Pro 8.0.5, block caching was added to improve the situation.
Enable it with the use_block_cache option in the [fileserver] group; it's not enabled by default. The block_cache_size_limit option limits the size of the cache; its default value is 10GB. The blocks are cached in the seafile-data/block-cache directory. When the total size of cached files exceeds the limit, seaf-server will clean up older files until the size drops to 70% of the limit. The cleanup interval is 5 minutes. You need a good estimate of how much space the cache directory requires; otherwise, frequent downloads can quickly fill it up. The block_cache_file_types option chooses the file types that are cached; its default value is mp4;mov.[fileserver]\nuse_block_cache = true\n# Set block cache size limit to 100MB\nblock_cache_size_limit = 100\nblock_cache_file_types = mp4;mov\n When a large number of files are uploaded through the web page and API, it will be expensive to calculate block IDs based on the block contents. Since Seafile-pro-9.0.6, you can add the skip_block_hash option to use a random string as the block ID. Warning
This option will prevent fsck from checking block content integrity. You should pass the --shallow option to fsck so that it does not check content integrity.
[fileserver]\nskip_block_hash = true\n Since Seafile Pro 10.0.0, if you want to limit the types of files that can be uploaded, you can set the file_ext_white_list option in the [fileserver] group. This option is a list of file extensions; only the file types in this list are allowed to be uploaded. It's not enabled by default.
[fileserver]\nfile_ext_white_list = md;mp4;mov\n Since Seafile 10.0.1, when you use the Go fileserver, you can set the upload_limit and download_limit options in the [fileserver] group to limit the speed of file upload and download. It's not enabled by default.
[fileserver]\n# The unit is in KB/s.\nupload_limit = 100\ndownload_limit = 100\n Since Seafile 11.0.7 Pro, you can ask the file server to scan every file uploaded via web APIs for viruses. Find more options about virus scanning at virus scan.
[fileserver]\n# default is false\ncheck_virus_on_web_upload = true\n Since Seafile 12.0.4, after an upload is completed by the client, the Seafile server will check whether the uploaded blocks are complete. It's enabled by default.
[fileserver]\n# default is true\nverify_client_blocks_after_sync = true\n"},{"location":"config/seafile-conf/#database-configuration","title":"Database configuration","text":"The database configurations are stored in the [database] section.
From Seafile 11.0, SQLite is no longer supported
[database]\ntype=mysql\nhost=127.0.0.1\nuser=root\npassword=root\ndb_name=seafile_db\nconnection_charset=utf8\nmax_connections=100\n When you configure seafile server to use MySQL, the default connection pool size is 100, which should be enough for most use cases.
Since Seafile 10.0.2, you can enable encrypted connections to the MySQL server by adding the following configuration options:
[database]\nuse_ssl = true\nskip_verify = false\nca_path = /etc/mysql/ca.pem\n When set use_ssl to true and skip_verify to false, it will check whether the MySQL server certificate is legal through the CA configured in ca_path. The ca_path is a trusted CA certificate path for signing MySQL server certificates. When skip_verify is true, there is no need to add the ca_path option. The MySQL server certificate won't be verified at this time.
The Seafile Pro server auto-expires file locks after some time, to prevent a file from staying locked for too long. The expire time can be tuned in the seafile.conf file.
[file_lock]\ndefault_expire_hours = 6\n The default is 12 hours.
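As an illustration of what the expiry means (a sketch, not Seafile's actual implementation): with default_expire_hours = 6, a lock taken at 09:00 would be auto-released six hours later:

```python
from datetime import datetime, timedelta

default_expire_hours = 6  # value from the [file_lock] section of seafile.conf

# hypothetical lock acquired at 09:00
locked_at = datetime(2024, 1, 1, 9, 0)
expires_at = locked_at + timedelta(hours=default_expire_hours)
print(expires_at.strftime("%H:%M"))  # 15:00
```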
Since Seafile-pro-9.0.6, you can add a cache for getting locked files (reducing server load caused by sync clients). Since Pro Edition 12, this option is enabled by default.
[file_lock]\nuse_locked_file_cache = true\n At the same time, you also need to configure the following memcache options for the cache to take effect:
[memcached]\nmemcached_options = --SERVER=<the IP of Memcached Server> --POOL-MIN=10 --POOL-MAX=100\n"},{"location":"config/seafile-conf/#storage-backends","title":"Storage Backends","text":"You may configure Seafile to use various kinds of object storage backends.
You may also configure Seafile to use multiple storage backends at the same time.
"},{"location":"config/seafile-conf/#cluster","title":"Cluster","text":"When you deploy Seafile in a cluster, you should add the following configuration:
[cluster]\nenabled = true\n Tip
Since version 12, if you use Docker to deploy cluster, this option is no longer needed.
"},{"location":"config/seafile-conf/#enable-slow-log","title":"Enable Slow Log","text":"Since Seafile-pro-6.3.10, you can enable seaf-server's RPC slow log to do performance analysis.The slow log is enabled by default.
If you want to configure related options, add the options to seafile.conf:
[slow_log]\n# default to true\nenable_slow_log = true\n# the unit of all slow log thresholds is millisecond.\n# default to 5000 milliseconds, only RPC queries processed for longer than 5000 milliseconds will be logged.\nrpc_slow_threshold = 5000\n You can find seafile_slow_rpc.log in logs/slow_logs. You can also use log-rotate to rotate the log files. You just need to send SIGUSR2 to seaf-server process. The slow log file will be closed and reopened.
Since 9.0.2 Pro, the signal to trigger log rotation has been changed to SIGUSR1. This signal will trigger rotation for all log files opened by seaf-server. You should change your log rotate settings accordingly.
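The rotation described above can be wired into logrotate via a postrotate hook. A minimal sketch for 9.0.2 Pro and later (the log path and the use of killall are assumptions for a default binary installation; adjust to your setup, and use SIGUSR2 on older versions):

```
/opt/seafile/logs/seafile_slow_rpc.log {
    weekly
    rotate 4
    compress
    missingok
    postrotate
        # tell seaf-server to reopen all of its log files
        killall -USR1 seaf-server
    endscript
}
```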
Even though Nginx logs all requests with certain details, such as URL, response code, and upstream process time, it's sometimes desirable to have more context about the requests, such as the user id for each request. Such information can only be logged from the file server itself. Since 9.0.2 Pro, an access log feature has been added to the fileserver.
To enable access log, add below options to seafile.conf:
[fileserver]\n# default to false. If enabled, fileserver-access.log will be written to log directory.\nenable_access_log = true\n The log format is as follows:
start time - user id - url - response code - process time\n You can use SIGUSR1 to trigger log rotation.
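A minimal sketch of parsing such a line in Python (the sample line is hypothetical; only the field order follows the documented format):

```python
# documented format: start time - user id - url - response code - process time
sample = "2024-01-05 10:02:33 - user@example.com - /seafhttp/files/abc123 - 200 - 0.125"

# split on the " - " delimiter and strip stray whitespace from each field
start, user, url, code, elapsed = [f.strip() for f in sample.split(" - ")]
print(user)  # user@example.com
print(code)  # 200
```

This makes it easy to aggregate response codes or process times per user when analyzing fileserver-access.log.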
Seafile 9.0 introduces a new fileserver implemented in the Go programming language. To enable it, you can set the options below in seafile.conf:
[fileserver]\nuse_go_fileserver = true\n Go fileserver has 3 advantages over the traditional fileserver implemented in the C language:
You can use max_sync_file_count to limit the size of a library to be synced; the default is 100K. With Go fileserver you can set this option to a much higher number, such as 1 million. max_download_dir_size is thus no longer needed by Go fileserver. Go fileserver caches fs objects in memory. On the one hand, this avoids repeated creation and destruction of frequently accessed objects; on the other hand, it also slows down the speed at which objects are released, which prevents Go's GC mechanism from consuming too much CPU time. You can set the amount of memory used by the fs cache through the following options.
[fileserver]\n# The unit is MB. Default is 2GB.\nfs_cache_limit = 100\n Since Pro 12.0.10, you can set the maximum number of threads for fs-id-list requests. When you download a repo, the Seafile client will request the fs id list, and you can control the maximum concurrency for handling fs-id-list requests in the Go fileserver through the fs_id_list_max_threads option, which defaults to 10.
[fileserver]\nfs_id_list_max_threads = 20\n"},{"location":"config/seafile-conf/#profiling-go-fileserver-performance","title":"Profiling Go Fileserver Performance","text":"Since Seafile 9.0.7, you can enable the profile function of go fileserver by adding the following configuration options:
# profile_password is required, change it as needed\n[fileserver]\nenable_profiling = true\nprofile_password = 8kcUz1I2sLaywQhCRtn2x1\n This interface can be used through the pprof tool provided by the Go language. See https://pkg.go.dev/net/http/pprof for details. Note that you have to first install Go on the client that issues the below commands. The password parameter should match the one you set in the configuration.
go tool pprof http://localhost:8082/debug/pprof/heap?password=8kcUz1I2sLaywQhCRtn2x1\ngo tool pprof http://localhost:8082/debug/pprof/profile?password=8kcUz1I2sLaywQhCRtn2x1\n"},{"location":"config/seahub_customization/","title":"Seahub customization","text":""},{"location":"config/seahub_customization/#customize-seahub-logo-and-css","title":"Customize Seahub Logo and CSS","text":"Create customize folder
Deploy in Docker: mkdir -p /opt/seafile-data/seafile/seahub-data/custom\n Deploy from binary packages: mkdir /opt/seafile/seafile-server-latest/seahub/media/custom\n During upgrading, the Seafile upgrade script will create a symbolic link automatically to preserve your customization.
"},{"location":"config/seahub_customization/#customize-logo","title":"Customize Logo","text":"Add your logo file to custom/
Overwrite LOGO_PATH in seahub_settings.py
LOGO_PATH = 'custom/mylogo.png'\n The default width and height for the logo are 149px and 32px; you may need to change them to match your logo.
LOGO_WIDTH = 149\nLOGO_HEIGHT = 32\n"},{"location":"config/seahub_customization/#customize-favicon","title":"Customize Favicon","text":"Add your favicon file to custom/
Overwrite FAVICON_PATH in seahub_settings.py
FAVICON_PATH = 'custom/favicon.png'\n"},{"location":"config/seahub_customization/#customize-seahub-css","title":"Customize Seahub CSS","text":"Add your css file to custom/, for example, custom.css
Overwrite BRANDING_CSS in seahub_settings.py
BRANDING_CSS = 'custom/custom.css'\n"},{"location":"config/seahub_customization/#customize-help-page","title":"Customize help page","text":"Deploy in Docker: mkdir -p /opt/seafile-data/seafile/seahub-data/custom/templates/help/\ncd /opt/seafile-data/seafile/seahub-data/custom\ncp ../../help/templates/help/install.html templates/help/\n Deploy from binary packages: mkdir /opt/seafile/seafile-server-latest/seahub/media/custom/templates/help/\ncd /opt/seafile/seafile-server-latest/seahub/media/custom\ncp ../../help/templates/help/base.html templates/help/\n For example, modify the templates/help/install.html file and save it. You will see the new help page.
Note
There are some more help pages available for modification; you can find the list of HTML files here
"},{"location":"config/seahub_customization/#add-an-extra-note-in-sharing-dialog","title":"Add an extra note in sharing dialog","text":"You can add an extra note in sharing dialog in seahub_settings.py
ADDITIONAL_SHARE_DIALOG_NOTE = {\n 'title': 'Attention! Read before sharing files:',\n 'content': 'Do not share personal or confidential official data with **.'\n}\n Result:
"},{"location":"config/seahub_customization/#add-custom-navigation-items","title":"Add custom navigation items","text":"Since Pro 7.0.9, Seafile supports adding some custom navigation entries to the home page for quick access. This requires you to add the following configuration information to the conf/seahub_settings.py configuration file:
CUSTOM_NAV_ITEMS = [\n {'icon': 'sf2-icon-star',\n 'desc': 'Custom navigation 1',\n 'link': 'https://www.seafile.com'\n },\n {'icon': 'sf2-icon-wiki-view',\n 'desc': 'Custom navigation 2',\n 'link': 'https://www.seafile.com/help'\n },\n {'icon': 'sf2-icon-wrench',\n 'desc': 'Custom navigation 3',\n 'link': 'http://www.example.com'\n },\n]\n Note
The icon field currently only supports icons in Seafile that begin with sf2-icon. You can find the list of icons here
Then restart the Seahub service for the changes to take effect.
Once you log in to the Seafile system homepage again, you will see the new navigation entry under the Tools navigation bar on the left.
You can add extra links to the \"About\" dialog via ADDITIONAL_ABOUT_DIALOG_LINKS in seahub_settings.py:\nADDITIONAL_ABOUT_DIALOG_LINKS = {\n 'seafile': 'https://example.seahub.com/seahub',\n 'dtable-web': 'https://example.seahub.com/dtable-web'\n}\n Result:
"},{"location":"config/seahub_settings_py/","title":"Seahub Settings","text":"Tip
You can also modify most of the config items via the web interface. The config items are saved in a database table (seahub-db/constance_config). They take priority over the items in config files. If you want to disable settings via the web interface, you can add ENABLE_SETTINGS_VIA_WEB = False to seahub_settings.py.
Refer to email sending documentation.
"},{"location":"config/seahub_settings_py/#security-settings","title":"Security settings","text":"# For security consideration, please set to match the host/domain of your site, e.g., ALLOWED_HOSTS = ['.example.com'].\n# Please refer https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts for details.\nALLOWED_HOSTS = ['.myseafile.com']\n\n\n# Whether to use a secure cookie for the CSRF cookie\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-cookie-secure\nCSRF_COOKIE_SECURE = True\n\n# The value of the SameSite flag on the CSRF cookie\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-cookie-samesite\nCSRF_COOKIE_SAMESITE = 'Strict'\n\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-trusted-origins\nCSRF_TRUSTED_ORIGINS = ['https://www.myseafile.com']\n"},{"location":"config/seahub_settings_py/#user-management-options","title":"User management options","text":"The following options affect user registration, password and session.
# Enable or disable registration on web. Default is `False`.\nENABLE_SIGNUP = False\n\n# Activate or deactivate user when registration complete. Default is `True`.\n# If set to `False`, new users need to be activated by admin in admin panel.\nACTIVATE_AFTER_REGISTRATION = False\n\n# Whether to send email when a system admin adds a new member. Default is `True`.\nSEND_EMAIL_ON_ADDING_SYSTEM_MEMBER = True\n\n# Whether to send email when a system admin resets a user's password. Default is `True`.\nSEND_EMAIL_ON_RESETTING_USER_PASSWD = True\n\n# Send system admin notify email when user registration is complete. Default is `False`.\nNOTIFY_ADMIN_AFTER_REGISTRATION = True\n\n# Remember days for login. Default is 7\nLOGIN_REMEMBER_DAYS = 7\n\n# Attempt limit before showing a captcha when logging in.\nLOGIN_ATTEMPT_LIMIT = 3\n\n# deactivate user account when login attempts exceed limit\n# Since version 5.1.2 or pro 5.1.3\nFREEZE_USER_ON_LOGIN_FAILED = False\n\n# default False, only check USER_PASSWORD_MIN_LENGTH\n# when True, check password strength level, STRONG(or above) is allowed\nUSER_STRONG_PASSWORD_REQUIRED = False\n\n# Force user to change password when an admin adds/resets a user.\n# Added in 5.1.1, defaults to True.\nFORCE_PASSWORD_CHANGE = True\n\n# Age of cookie, in seconds (default: 2 weeks).\nSESSION_COOKIE_AGE = 60 * 60 * 24 * 7 * 2\n\n# Whether a user's session cookie expires when the Web browser is closed.\nSESSION_EXPIRE_AT_BROWSER_CLOSE = False\n\n# Whether to save the session data on every request. Default is `False`\nSESSION_SAVE_EVERY_REQUEST = False\n\n# In older versions, if you use Single Sign On, the password is not saved in Seafile.\n# Users can't use WebDAV because Seafile can't check whether the password is correct.\n# Since version 6.3.8, you can enable this option to let users set a specific password for WebDAV login.\n# Users who log in via SSO can use this password to log in via WebDAV.\n# Enable the feature. 
pycryptodome should be installed first.\n# sudo pip install pycryptodome==3.12.0\nENABLE_WEBDAV_SECRET = True\nWEBDAV_SECRET_MIN_LENGTH = 8\n\n# LEVEL for the password, based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above.\nWEBDAV_SECRET_STRENGTH_LEVEL = 1\n\n\n# Since version 7.0.9, you can force all users to log in with two-factor authentication.\n# The prerequisite is that the administrator should 'enable two factor authentication' in the 'System Admin -> Settings' page.\n# Then you can add the following configuration information to the configuration file.\nENABLE_FORCE_2FA_TO_ALL_USERS = True\n\n# Enable two factor authentication for accounts. Defaults to `False`.\n# Since version 6.0\nENABLE_TWO_FACTOR_AUTH = True\n\n# Enable a user to change password in 'settings' page. Default to `True`\n# Since version 6.2.11\nENABLE_CHANGE_PASSWORD = True\n\n# Whether to show contact email when searching for users.\nENABLE_SHOW_CONTACT_EMAIL_WHEN_SEARCH_USER = True\n"},{"location":"config/seahub_settings_py/#single-sign-on","title":"Single Sign On","text":"# Enable authentication with ADFS\n# Default is False\n# Since 6.0.9\nENABLE_ADFS_LOGIN = True\n\n# Force user login through ADFS/OAuth instead of email and password\n# Default is False\n# Since 11.0.7, in version 12.0, it also controls users via OAuth\nDISABLE_ADFS_USER_PWD_LOGIN = True\n\n# Enable authentication with Kerberos\n# Default is False\nENABLE_KRB5_LOGIN = True\n\n# Enable authentication with Shibboleth\n# Default is False\nENABLE_SHIBBOLETH_LOGIN = True\n\n# Enable a user associated with SSO account to change/reset local password in 'settings' page. 
Default to `True`.\n# Change it to False to prevent SSO accounts from changing the local password\nENABLE_SSO_USER_CHANGE_PASSWORD = True\n\n# Enable client to open an external browser for single sign on\n# When it is false, the old built-in browser is opened for single sign on\n# When it is true, the default browser of the operating system is opened\n# The benefit of using the system browser is that it can support hardware 2FA\n# Since 11.0.0, and sync client 9.0.5, drive client 3.0.8\nCLIENT_SSO_VIA_LOCAL_BROWSER = True # default is False\nCLIENT_SSO_UUID_EXPIRATION = 5 * 60 # in seconds\n"},{"location":"config/seahub_settings_py/#library-snapshot-label-feature","title":"Library snapshot label feature","text":"# Turn on this option to let users add a label to a library snapshot. Default is `False`\nENABLE_REPO_SNAPSHOT_LABEL = False\n"},{"location":"config/seahub_settings_py/#library-options","title":"Library options","text":"Options for libraries:
# whether to allow creating encrypted libraries\nENABLE_ENCRYPTED_LIBRARY = True\n\n# version for encrypted library\n# should only be `2` or `4`.\n# version 3 is insecure (using AES128 encryption) so it's not supported any more.\n# refer to https://manual.seafile.com/latest/administration/security_features/#how-does-an-encrypted-library-work\n# for the difference between version 2 and 4.\nENCRYPTED_LIBRARY_VERSION = 2\n\n# Since version 12, you can choose the password hash algorithm for new encrypted libraries.\n# The password is used to encrypt the encryption key. So using a secure password hash algorithm to\n# prevent brute-force password guessing is important.\n# Before version 12, a fixed algorithm (PBKDF2-SHA256 with 1000 iterations) was used.\n#\n# Currently two hash algorithms are supported.\n# - PBKDF2: The only available parameter is the number of iterations. You need to increase\n# the number of iterations over time, as GPUs are more and more used for such calculations.\n# The default number of iterations is 1000. As of 2023, the recommended iteration count is 600,000.\n# - Argon2id: Secure hash algorithm that has high cost even for GPUs. There are 3 parameters that\n# can be set: time cost, memory cost, and parallelism degree. The parameters are separated by commas,\n# e.g. \"2,102400,8\", which are the default parameters used in Seafile. 
Learn more about this algorithm\n# on https://github.com/P-H-C/phc-winner-argon2 .\n#\n# Note that only sync client >= 9.0.9 and SeaDrive >= 3.0.12 support syncing libraries created with these algorithms.\nENCRYPTED_LIBRARY_PWD_HASH_ALGO = \"argon2id\"\nENCRYPTED_LIBRARY_PWD_HASH_PARAMS = \"2,102400,8\"\n# ENCRYPTED_LIBRARY_PWD_HASH_ALGO = \"pbkdf2_sha256\"\n# ENCRYPTED_LIBRARY_PWD_HASH_PARAMS = \"600000\"\n\n# minimum length for the password of an encrypted library\nREPO_PASSWORD_MIN_LENGTH = 8\n\n# force a password when generating a share/upload link (since version 8.0.9)\nSHARE_LINK_FORCE_USE_PASSWORD = False\n\n# minimum length for the password of a share link (since version 4.4)\nSHARE_LINK_PASSWORD_MIN_LENGTH = 8\n\n# LEVEL for the password of a share/upload link\n# based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above. (since version 8.0.9)\nSHARE_LINK_PASSWORD_STRENGTH_LEVEL = 3\n\n# Default expire days for share link (since version 6.3.8)\n# Once this value is configured, the user can no longer generate a share link with no expiration time.\n# If the expiration value is not set when the share link is generated, the value configured here will be used.\nSHARE_LINK_EXPIRE_DAYS_DEFAULT = 5\n\n# minimum expire days for share link (since version 6.3.6)\n# SHARE_LINK_EXPIRE_DAYS_MIN should be less than SHARE_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nSHARE_LINK_EXPIRE_DAYS_MIN = 3 # default is 0, no limit.\n\n# maximum expire days for share link (since version 6.3.6)\n# SHARE_LINK_EXPIRE_DAYS_MAX should be greater than SHARE_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nSHARE_LINK_EXPIRE_DAYS_MAX = 8 # default is 0, no limit.\n\n# Default expire days for upload link (since version 7.1.6)\n# Once this value is configured, the user can no longer generate an upload link with no expiration time.\n# If the expiration value is not set when the upload link is generated, the value 
configured here will be used.\nUPLOAD_LINK_EXPIRE_DAYS_DEFAULT = 5\n\n# minimum expire days for upload link (since version 7.1.6)\n# UPLOAD_LINK_EXPIRE_DAYS_MIN should be less than UPLOAD_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nUPLOAD_LINK_EXPIRE_DAYS_MIN = 3 # default is 0, no limit.\n\n# maximum expire days for upload link (since version 7.1.6)\n# UPLOAD_LINK_EXPIRE_DAYS_MAX should be greater than UPLOAD_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nUPLOAD_LINK_EXPIRE_DAYS_MAX = 8 # default is 0, no limit.\n\n# force user login when viewing a file/folder share link (since version 6.3.6)\nSHARE_LINK_LOGIN_REQUIRED = True\n\n# enable watermark when viewing (not editing) a file in the web browser (since version 6.3.6)\nENABLE_WATERMARK = True\n\n# Disable sync with any folder. Default is `False`\n# NOTE: since version 4.2.4\nDISABLE_SYNC_WITH_ANY_FOLDER = True\n\n# Enable or disable library history setting\nENABLE_REPO_HISTORY_SETTING = True\n\n# Enable or disable users sharing a library to any group\n# Since version 6.2.0\nENABLE_SHARE_TO_ALL_GROUPS = True\n\n# Enable or disable users cleaning the trash (default is True)\n# Since version 6.3.6\nENABLE_USER_CLEAN_TRASH = True\n\n# Add a report abuse button on download links. (since version 7.1.0)\n# Users can report abuse on the share link page, fill in the report type, contact information, and description.\n# Default is false.\nENABLE_SHARE_LINK_REPORT_ABUSE = True\n Options for online file preview:
# Online preview maximum file size, defaults to 30M.\nFILE_PREVIEW_MAX_SIZE = 30 * 1024 * 1024\n\n# Extensions of previewed text files.\n# NOTE: since version 6.1.1\nTEXT_PREVIEW_EXT = \"\"\"ac, am, bat, c, cc, cmake, cpp, cs, css, diff, el, h, html,\nhtm, java, js, json, less, make, org, php, pl, properties, py, rb,\nscala, script, sh, sql, txt, text, tex, vi, vim, xhtml, xml, log, csv,\ngroovy, rst, patch, go\"\"\"\n\n\n# Seafile only generates thumbnails for images smaller than the following size.\n# Since version 6.3.8 pro, support PSD online preview.\nTHUMBNAIL_IMAGE_SIZE_LIMIT = 30 # MB\n"},{"location":"config/seahub_settings_py/#map-service","title":"Map service","text":"Options for map service:
# The map service currently relies on the Google Maps API and requires two API keys.\nGOOGLE_MAP_KEY = '<replace with your Google Maps API Key>'\nSERVER_GOOGLE_MAP_KEY = '<replace with your Google Maps API Key>'\n Required scope of the API keys
To safeguard your Google API Keys from abuse, restrict their usage. However, even with restrictions in place, abuse remains a risk, especially since GOOGLE_MAP_KEY must be included in your source code and is therefore publicly accessible. Additionally, heavy use of the maps plugin may increase your Google billing, so monitor your spending closely.
GOOGLE_MAP_KEY: restrict to your server URL, like https://cloud.seafile.io; required API: Maps JavaScript API. SERVER_GOOGLE_MAP_KEY: no website restriction; required API: Geocoding API."},{"location":"config/seahub_settings_py/#cloud-mode","title":"Cloud Mode","text":"You should enable cloud mode if you use Seafile with an unknown user base. It disables the organization tab in Seahub's website to ensure that users can't access the user list. Cloud mode provides some nice features like sharing content with unregistered users and sending invitations to them. Therefore you may also want to enable user registration. Through the global address book (since version 4.2.3) anyone can search for every user account, so you probably want to disable it.
# Enable cloud mode and hide `Organization` tab.\nCLOUD_MODE = True\n\n# Disable global address book\nENABLE_GLOBAL_ADDRESSBOOK = False\n"},{"location":"config/seahub_settings_py/#other-options","title":"Other options","text":"# Disable settings via Web interface in system admin->settings\n# Default is True\n# Since 5.1.3\nENABLE_SETTINGS_VIA_WEB = False\n\n# Choices can be found here:\n# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name\n# although not all choices may be available on all operating systems.\n# If running in a Windows environment this must be set to the same as your\n# system time zone.\nTIME_ZONE = 'UTC'\n\n# Language code for this installation. All choices can be found here:\n# http://www.i18nguy.com/unicode/language-identifiers.html\n# Default language for sending emails.\nLANGUAGE_CODE = 'en'\n\n# Custom language code choice.\nLANGUAGES = (\n ('en', 'English'),\n ('zh-cn', '\u7b80\u4f53\u4e2d\u6587'),\n ('zh-tw', '\u7e41\u9ad4\u4e2d\u6587'),\n)\n\n# Set this to your website/company's name. This is contained in email notifications and the welcome message when a user logs in for the first time.\nSITE_NAME = 'Seafile'\n\n# Browser tab's title\nSITE_TITLE = 'Private Seafile'\n\n# Whether to enable the Wiki feature (requires sdoc integration). Default is `True`\nENABLE_WIKI = True\n\n# Max number of files when a user uploads a file/folder.\n# Since version 6.0.4\nMAX_NUMBER_OF_FILES_FOR_FILEUPLOAD = 500\n\n# Control the language used to send email. 
Default to user's current language.\n# Since version 6.1.1\nSHARE_LINK_EMAIL_LANGUAGE = ''\n\n# Interval at which the browser requests unread notifications\n# Since PRO 6.1.4 or CE 6.1.2\nUNREAD_NOTIFICATIONS_REQUEST_INTERVAL = 3 * 60 # seconds\n\n# Whether to allow user to delete account, change login password or update basic user\n# info on profile page.\n# Since PRO 6.3.10\nENABLE_DELETE_ACCOUNT = False\nENABLE_UPDATE_USER_INFO = False\nENABLE_CHANGE_PASSWORD = False\n\n# Get web api auth token on profile page.\nENABLE_GET_AUTH_TOKEN_BY_SESSION = True\n\n# Since 8.0.6 CE/PRO version.\n# URL redirected to after a user logs out of Seafile.\n# Usually configured as Single Logout url.\nLOGOUT_REDIRECT_URL = 'https://www.example-url.com'\n\n\n# Enable system admin to add T&C, all users need to accept terms before using. Defaults to `False`.\n# Since version 6.0\nENABLE_TERMS_AND_CONDITIONS = True\n"},{"location":"config/seahub_settings_py/#pro-edition-only-options","title":"Pro edition only options","text":"# Allow administrator to view user's files in UNENCRYPTED libraries\n# through Libraries page in System Admin. Default is False.\nENABLE_SYS_ADMIN_VIEW_REPO = True\n\n# For non-logged-in users, require providing an email before downloading or uploading on the shared link page.\n# Since version 5.1.4\nENABLE_SHARE_LINK_AUDIT = True\n\n# Check for viruses after files are uploaded to shared upload links. Defaults to `False`.\n# Since version 6.0\nENABLE_UPLOAD_LINK_VIRUS_CHECK = True\n\n# Send email to these email addresses when a virus is detected.\n# This list can be any valid email addresses, not necessarily the emails of Seafile users.\n# Since version 6.0.8\nVIRUS_SCAN_NOTIFY_LIST = ['user_a@seafile.com', 'user_b@seafile.com']\n"},{"location":"config/seahub_settings_py/#restful-api","title":"RESTful API","text":"# API throttling related settings. 
Enlarge the rates if you get 429 response codes during API calls.\nREST_FRAMEWORK = {\n 'DEFAULT_THROTTLE_RATES': {\n 'ping': '600/minute',\n 'anon': '5/minute',\n 'user': '300/minute',\n },\n 'UNICODE_JSON': False,\n}\n\n# Throttling whitelist used to disable throttling for certain IPs.\n# e.g. REST_FRAMEWORK_THROTTING_WHITELIST = ['127.0.0.1', '192.168.1.1']\n# Please make sure `REMOTE_ADDR` header is configured in Nginx conf according to https://manual.seafile.com/13.0/setup_binary/ce/deploy_with_nginx.html.\nREST_FRAMEWORK_THROTTING_WHITELIST = []\n"},{"location":"config/seahub_settings_py/#seahub-custom-functions","title":"Seahub Custom Functions","text":"Since version 6.2, you can define a custom function to modify the result of the user search function.
For example, if you want to limit users to searching only for users in the same institution, you can define a custom_search_user function in {seafile install path}/conf/seahub_custom_functions/__init__.py
Code example:
import os\nimport sys\n\ncurrent_path = os.path.dirname(os.path.abspath(__file__))\nseahub_dir = os.path.join(current_path, \\\n '../../seafile-server-latest/seahub/seahub')\nsys.path.append(seahub_dir)\n\nfrom seahub.profile.models import Profile\ndef custom_search_user(request, emails):\n\n institution_name = ''\n\n username = request.user.username\n profile = Profile.objects.get_profile_by_user(username)\n if profile:\n institution_name = profile.institution\n\n inst_users = [p.user for p in\n Profile.objects.filter(institution=institution_name)]\n\n filtered_emails = []\n for email in emails:\n if email in inst_users:\n filtered_emails.append(email)\n\n return filtered_emails\n You should NOT change the name of custom_search_user and seahub_custom_functions/__init__.py
Since version 6.2.5 pro, if you enable the ENABLE_SHARE_TO_ALL_GROUPS feature on the sysadmin settings page, you can also define a custom function to return the groups a user can share a library with.
For example, if you want to let a user share a library with both their own groups and the groups of the user test@test.com, you can define a custom_get_groups function in {seafile install path}/conf/seahub_custom_functions/__init__.py
Code example:
import os\nimport sys\n\ncurrent_path = os.path.dirname(os.path.abspath(__file__))\nseaserv_dir = os.path.join(current_path, \\\n '../../seafile-server-latest/seafile/lib64/python2.7/site-packages')\nsys.path.append(seaserv_dir)\n\ndef custom_get_groups(request):\n\n from seaserv import ccnet_api\n\n groups = []\n username = request.user.username\n\n # for current user\n groups += ccnet_api.get_groups(username)\n\n # for 'test@test.com' user\n groups += ccnet_api.get_groups('test@test.com')\n\n return groups\n You should NOT change the name of custom_get_groups and seahub_custom_functions/__init__.py
Tip
docker compose restart\n cd /opt/seafile/seafile-server-latest\n./seahub.sh restart\n There are currently five types of emails sent in Seafile:
The first four types of email are sent immediately. The last type is sent by a background task running periodically.
"},{"location":"config/sending_email/#options-of-email-sending","title":"Options of Email Sending","text":"Please add the following lines to seahub_settings.py to enable email sending.
EMAIL_USE_TLS = True\nEMAIL_HOST = 'smtp.example.com' # smtp server\nEMAIL_HOST_USER = 'username@example.com' # username and domain\nEMAIL_HOST_PASSWORD = 'password' # password\nEMAIL_PORT = 587\nDEFAULT_FROM_EMAIL = EMAIL_HOST_USER\nSERVER_EMAIL = EMAIL_HOST_USER\n Note
If your email service still does not work, you can check the log file logs/seahub.log to see what may have caused the problem. For a complete email notification list, please refer to email notification list.
If you want to use an email service without authentication, leave EMAIL_HOST_USER and EMAIL_HOST_PASSWORD blank (''). (But note that the emails will then be sent without a From: address.)
About using SSL connection (using port 465)
Use EMAIL_USE_SSL = True instead of EMAIL_USE_TLS.reply to of email","text":"You can change the reply-to field of emails by adding the following settings to seahub_settings.py. This only affects email sending for file share links.
# Set reply-to header to user's email or not, defaults to ``False``. For details,\n# please refer to http://www.w3.org/Protocols/rfc822/\nADD_REPLY_TO_HEADER = True\n"},{"location":"config/sending_email/#config-background-email-sending-task","title":"Config background email sending task","text":"The background task will run periodically to check whether a user has new unread notifications. If there are any, it will send a reminder email to that user. The background email sending task is controlled by seafevents.conf.
[SEAHUB EMAIL]\n\n## must be \"true\" to enable user email notifications when there are new unread notifications\nenabled = true\n\n## interval of sending seahub email. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval = 30m\n"},{"location":"config/sending_email/#add-smime-signature-to-email","title":"Add S/MIME signature to email","text":"If you want emails signed with S/MIME, add the following config to seahub_settings.py
ENABLE_SMIME = True\nSMIME_CERTS_DIR = '/opt/seafile/seahub-data/smime-certs' # including cert.pem and private_key.pem\n The certificate can be generated with the openssl command, or obtained from a certificate authority. For example, generate the certs using the following command: mkdir -p /opt/seafile/seahub-data/smime-certs\ncd /opt/seafile/seahub-data/smime-certs\nopenssl req -x509 -newkey rsa:4096 -keyout private_key.pem -outform PEM -out cert.pem -days 3650 -nodes\n Tip
Some email clients may not verify emails signed with certificates generated on the command line, so it is better to obtain the certificates from a certificate authority.
"},{"location":"config/sending_email/#customize-email-messages","title":"Customize email messages","text":"The simplest way to customize the email messages is setting the SITE_NAME variable in seahub_settings.py. If it is not enough for your case, you can customize the email templates.
Tip
Subject lines may vary between releases; the following is based on Release 5.0.0. Restart Seahub so that your changes take effect.
"},{"location":"config/sending_email/#the-email-base-template","title":"The email base template","text":"seahub/seahub/templates/email_base.html
Tip
You can copy email_base.html to seahub-data/custom/templates/email_base.html and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
seahub/seahub/auth/forms.py line:127
send_html_email(_(\"Reset Password on %s\") % site_name,\n email_template_name, c, None, [user.username])\n Body
seahub/seahub/templates/registration/password_reset_email.html
Tip
You can copy password_reset_email.html to seahub-data/custom/templates/registration/password_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
seahub/seahub/views/sysadmin.py line:424
send_html_email(_(u'Password has been reset on %s') % SITE_NAME,\n 'sysadmin/user_reset_email.html', c, None, [email])\n Body
seahub/seahub/templates/sysadmin/user_add_email.html
Tip
You can copy user_add_email.html to seahub-data/custom/templates/sysadmin/user_add_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
seahub/seahub/views/sysadmin.py line:1224
send_html_email(_(u'Password has been reset on %s') % SITE_NAME,\n 'sysadmin/user_reset_email.html', c, None, [email])\n Body
seahub/seahub/templates/sysadmin/user_reset_email.html
Tip
You can copy user_reset_email.html to seahub-data/custom/templates/sysadmin/user_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
seahub/seahub/share/views.py line:913
try:\n if file_shared_type == 'f':\n c['file_shared_type'] = _(u\"file\")\n send_html_email(_(u'A file is shared to you on %s') % SITE_NAME,\n 'shared_link_email.html',\n c, from_email, [to_email],\n reply_to=reply_to\n )\n else:\n c['file_shared_type'] = _(u\"directory\")\n send_html_email(_(u'A directory is shared to you on %s') % SITE_NAME,\n 'shared_link_email.html',\n c, from_email, [to_email],\n reply_to=reply_to)\n Body
seahub/seahub/templates/shared_link_email.html
seahub/seahub/templates/shared_upload_link_email.html
Tip
You can copy shared_link_email.html to seahub-data/custom/templates/shared_link_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
send_html_email(_('New notice on %s') % settings.SITE_NAME,\n 'notifications/notice_email.html', c,\n None, [to_user])\n Body
seahub/seahub/notifications/templates/notifications/notice_email.html
"},{"location":"config/shibboleth_authentication/","title":"Shibboleth Authentication","text":"Shibboleth is a widely used single sign on (SSO) protocol. Seafile supports authentication via Shibboleth. It allows users from another organization to log in to Seafile without registering an account on the service provider.
In this documentation, we assume the reader is familiar with Shibboleth installation and configuration. For introduction to Shibboleth concepts, please refer to https://shibboleth.atlassian.net/wiki/spaces/CONCEPT/overview .
Shibboleth Service Provider (SP) should be installed on the same server as the Seafile server. The official SP from https://shibboleth.net/ is implemented as an Apache module. The module handles all Shibboleth authentication details. Seafile server receives the authentication information (username) from the HTTP request. The username can then be used as the login name for the user.
Seahub provides a special URL to handle Shibboleth login. The URL is https://your-seafile-domain/sso. Only this URL needs to be configured under Shibboleth protection. All other URLs don't go through the Shibboleth module. The overall workflow for a user to login with Shibboleth is as follows:
The user visits https://your-seafile-domain/sso. The Shibboleth module redirects the user to the IdP's login page. After the user logs in at the IdP, the browser is redirected back to https://your-seafile-domain/sso. Seahub reads the username from the request (the HTTP_REMOTE_USER header) and brings the user to her/his home page. Since Shibboleth support requires Apache, if you want to use Nginx, you need two servers: one for non-Shibboleth access, another configured with Apache to allow Shibboleth login. In a cluster environment, you can configure your load balancer to direct traffic to different servers according to the URL. Only the URL https://your-seafile-domain/sso needs to be directed to Apache.
The configuration includes 3 steps:
We use CentOS 7 as example.
"},{"location":"config/shibboleth_authentication/#configure-apache","title":"Configure Apache","text":"You should create a new virtual host configuration for Shibboleth. And then restart Apache.
<IfModule mod_ssl.c>\n <VirtualHost _default_:443>\n ServerName your-seafile-domain\n DocumentRoot /var/www\n Alias /media /opt/seafile/seafile-server-latest/seahub/media\n\n ErrorLog ${APACHE_LOG_DIR}/seahub.error.log\n CustomLog ${APACHE_LOG_DIR}/seahub.access.log combined\n\n SSLEngine on\n SSLCertificateFile /path/to/ssl-cert.pem\n SSLCertificateKeyFile /path/to/ssl-key.pem\n\n <Location /Shibboleth.sso>\n SetHandler shib\n AuthType shibboleth\n ShibRequestSetting requireSession 1\n Require valid-user\n </Location>\n\n <Location /sso>\n SetHandler shib\n AuthType shibboleth\n ShibUseHeaders On\n ShibRequestSetting requireSession 1\n Require valid-user\n </Location>\n\n RewriteEngine On\n <Location /media>\n Require all granted\n </Location>\n\n # seafile fileserver\n ProxyPass /seafhttp http://127.0.0.1:8082\n ProxyPassReverse /seafhttp http://127.0.0.1:8082\n RewriteRule ^/seafhttp - [QSA,L]\n\n # seahub\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n\n # for http\n # RequestHeader set REMOTE_USER %{REMOTE_USER}e\n # for https\n RequestHeader set REMOTE_USER %{REMOTE_USER}s\n </VirtualHost>\n</IfModule>\n"},{"location":"config/shibboleth_authentication/#install-and-configure-shibboleth","title":"Install and Configure Shibboleth","text":"Installation and configuration of Shibboleth is out of the scope of this documentation. You can refer to the official Shibboleth document.
"},{"location":"config/shibboleth_authentication/#configure-shibbolethsp","title":"Configure Shibboleth(SP)","text":""},{"location":"config/shibboleth_authentication/#shibboleth2xml","title":"shibboleth2.xml","text":"Open /etc/shibboleth/shibboleth2.xml and change some property. After you have done all the followings, don't forget to restart Shibboleth(SP)
ApplicationDefaults element","text":"Change the entityID and REMOTE_USER properties:
<!-- The ApplicationDefaults element is where most of Shibboleth's SAML bits are defined. -->\n<ApplicationDefaults entityID=\"https://your-seafile-domain/sso\"\n REMOTE_USER=\"mail\"\n cipherSuites=\"DEFAULT:!EXP:!LOW:!aNULL:!eNULL:!DES:!IDEA:!SEED:!RC4:!3DES:!kRSA:!SSLv2:!SSLv3:!TLSv1:!TLSv1.1\">\n Seahub extracts the username from the REMOTE_USER environment variable. So you should modify your SP's shibboleth2.xml config file so that Shibboleth translates your desired attribute into the REMOTE_USER environment variable.
In Seafile, only one of the following two attributes can be used for the username: eppn and mail. eppn stands for \"Edu Person Principal Name\". It is usually the UserPrincipalName attribute in Active Directory. It's not necessarily a valid email address. mail is the user's email address. You should set REMOTE_USER to either one of these attributes.
SSO element","text":"Change the entityID property:
<!--\nConfigures SSO for a default IdP. To properly allow for >1 IdP, remove\nentityID property and adjust discoveryURL to point to discovery service.\nYou can also override entityID on /Login query string, or in RequestMap/htaccess.\n-->\n<SSO entityID=\"https://your-IdP-domain\">\n <!--discoveryProtocol=\"SAMLDS\" discoveryURL=\"https://wayf.ukfederation.org.uk/DS\"-->\n SAML2\n</SSO>\n"},{"location":"config/shibboleth_authentication/#metadataprovider-element","title":"MetadataProvider element","text":"Change the url and backingFilePath properties:
<!-- Example of remotely supplied batch of signed metadata. -->\n<MetadataProvider type=\"XML\" validate=\"true\"\n url=\"http://your-IdP-metadata-url\"\n backingFilePath=\"your-IdP-metadata.xml\" maxRefreshDelay=\"7200\">\n <MetadataFilter type=\"RequireValidUntil\" maxValidityInterval=\"2419200\"/>\n <MetadataFilter type=\"Signature\" certificate=\"fedsigner.pem\" verifyBackup=\"false\"/>\n"},{"location":"config/shibboleth_authentication/#attribute-mapxml","title":"attribute-map.xml","text":"Open /etc/shibboleth/attribute-map.xml and change some properties. After you have made all the following changes, don't forget to restart the Shibboleth SP.
Attribute element","text":"Uncomment the attribute elements to get more user info:
<!-- Older LDAP-defined attributes (SAML 2.0 names followed by SAML 1 names)... -->\n<Attribute name=\"urn:oid:2.16.840.1.113730.3.1.241\" id=\"displayName\"/>\n<Attribute name=\"urn:oid:0.9.2342.19200300.100.1.3\" id=\"mail\"/>\n\n<Attribute name=\"urn:mace:dir:attribute-def:displayName\" id=\"displayName\"/>\n<Attribute name=\"urn:mace:dir:attribute-def:mail\" id=\"mail\"/>\n"},{"location":"config/shibboleth_authentication/#upload-shibbolethsps-metadata","title":"Upload Shibboleth(SP)'s metadata","text":"After restarting Apache, you should be able to get the Service Provider metadata by accessing https://your-seafile-domain/Shibboleth.sso/Metadata. This metadata should be uploaded to the Identity Provider (IdP) server.
"},{"location":"config/shibboleth_authentication/#configure-seahub","title":"Configure Seahub","text":"Add the following configuration to seahub_settings.py.
ENABLE_SHIB_LOGIN = True\nSHIBBOLETH_USER_HEADER = 'HTTP_REMOTE_USER'\n# basic user attributes\nSHIBBOLETH_ATTRIBUTE_MAP = {\n \"HTTP_DISPLAYNAME\": (False, \"display_name\"),\n \"HTTP_MAIL\": (False, \"contact_email\"),\n}\nEXTRA_MIDDLEWARE = (\n 'shibboleth.middleware.ShibbolethRemoteUserMiddleware',\n)\nEXTRA_AUTHENTICATION_BACKENDS = (\n 'shibboleth.backends.ShibbolethRemoteUserBackend',\n)\n Seahub can process additional user attributes from Shibboleth. These attributes are saved into Seahub's database as user properties. None of them is mandatory. The internal user properties Seahub currently supports are:
You can specify the mapping between Shibboleth attributes and Seahub's user properties in seahub_settings.py:
SHIBBOLETH_ATTRIBUTE_MAP = {\n \"HTTP_GIVENNAME\": (False, \"givenname\"),\n \"HTTP_SN\": (False, \"surname\"),\n \"HTTP_MAIL\": (False, \"contact_email\"),\n \"HTTP_ORGANIZATION\": (False, \"institution\"),\n}\n In the above config, each key is a Shibboleth attribute name, and the second element of each value is the corresponding Seahub property name. You can adjust the Shibboleth attribute names for your own needs.
You may have to change attribute-map.xml in your Shibboleth SP, so that the desired attributes are passed to Seahub. And you have to make sure the IdP sends these attributes to the SP.
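To make the mapping concrete, here is an illustrative sketch of how such a map could be applied to the incoming request headers (request.META in Django). The header values below are hypothetical, and this is not Seahub's actual middleware code:

```python
# Hypothetical attribute map, in the same shape as SHIBBOLETH_ATTRIBUTE_MAP:
# header name -> (required, Seahub property name).
ATTRIBUTE_MAP = {
    "HTTP_GIVENNAME": (False, "givenname"),
    "HTTP_MAIL": (False, "contact_email"),
}

def extract_user_properties(meta):
    """Pick the mapped Shibboleth attributes out of the request headers."""
    props = {}
    for header, (required, prop_name) in ATTRIBUTE_MAP.items():
        value = meta.get(header)
        if value:
            props[prop_name] = value
        elif required:
            raise ValueError("missing required attribute: %s" % header)
    return props

meta = {"HTTP_GIVENNAME": "Jane", "HTTP_MAIL": "jane@example.com"}
print(extract_user_properties(meta))
# {'givenname': 'Jane', 'contact_email': 'jane@example.com'}
```

Attributes missing from the request are simply skipped unless marked required, which matches the idea that none of these properties is mandatory.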
We also added an option SHIB_ACTIVATE_AFTER_CREATION (defaults to True) which controls the user status after a Shibboleth login. If this option is set to False, the user will be inactive after login, and system admins will be notified by email to activate that account.
Shibboleth has a field called affiliation. It is a list like: employee@uni-mainz.de;member@uni-mainz.de;faculty@uni-mainz.de;staff@uni-mainz.de.
We are able to set the user role from Shibboleth. For details about user roles, please refer to Roles and Permissions.
To enable this, modify SHIBBOLETH_ATTRIBUTE_MAP above and add a Shibboleth-affiliation field; you may need to change Shibboleth-affiliation according to your Shibboleth SP attributes.
SHIBBOLETH_ATTRIBUTE_MAP = {\n \"HTTP_GIVENNAME\": (False, \"givenname\"),\n \"HTTP_SN\": (False, \"surname\"),\n \"HTTP_MAIL\": (False, \"contact_email\"),\n \"HTTP_ORGANIZATION\": (False, \"institution\"),\n \"HTTP_SHIBBOLETH_AFFILIATION\": (False, \"affiliation\"),\n}\n Then add a new config to define the affiliation role map:
SHIBBOLETH_AFFILIATION_ROLE_MAP = {\n 'employee@uni-mainz.de': 'staff',\n 'member@uni-mainz.de': 'staff',\n 'student@uni-mainz.de': 'student',\n 'employee@hu-berlin.de': 'guest',\n 'patterns': (\n ('*@hu-berlin.de', 'guest1'),\n ('*@*.de', 'guest2'),\n ('*', 'guest'),\n ),\n}\n After Shibboleth login, Seafile calculates the user's role from the affiliation and SHIBBOLETH_AFFILIATION_ROLE_MAP.
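As an illustration of how such a map can be resolved, here is a hedged sketch. The exact matching order is Seafile's internal logic; in this sketch, exact entries are tried before the wildcard 'patterns', using fnmatch-style globs:

```python
from fnmatch import fnmatch

# Abbreviated map in the same shape as SHIBBOLETH_AFFILIATION_ROLE_MAP.
ROLE_MAP = {
    'employee@uni-mainz.de': 'staff',
    'student@uni-mainz.de': 'student',
    'patterns': (
        ('*@hu-berlin.de', 'guest1'),
        ('*@*.de', 'guest2'),
        ('*', 'guest'),
    ),
}

def role_for(affiliation):
    """affiliation is a ';'-separated list, e.g. 'employee@uni-mainz.de;...'."""
    for entry in affiliation.split(';'):
        if entry in ROLE_MAP:            # exact entries first
            return ROLE_MAP[entry]
        for pattern, role in ROLE_MAP.get('patterns', ()):
            if fnmatch(entry, pattern):  # then wildcard patterns, in order
                return role
    return ''

print(role_for('employee@uni-mainz.de;member@uni-mainz.de'))  # staff
print(role_for('someone@hu-berlin.de'))                       # guest1
```

Because the catch-all '*' pattern matches everything, it should always come last, otherwise more specific patterns are never reached.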
"},{"location":"config/shibboleth_authentication/#custom-set-user-role","title":"Custom set user role","text":"If you are unable to set user roles by obtaining affiliation information, or if you wish to have a more customized way of setting user roles, you can add the following configuration to achieve this.
For example, to set all users whose email addresses end with @seafile.com to the default role, and all other users to guest:
First, update the SHIBBOLETH_ATTRIBUTE_MAP configuration in seahub_settings.py, and add HTTP_REMOTE_USER.
SHIBBOLETH_ATTRIBUTE_MAP = {\n ....\n \"HTTP_REMOTE_USER\": (False, \"remote_user\"),\n ....\n}\n Then, create /opt/seafile/conf/seahub_custom_functions/__init__.py file and add the following code.
# function name `custom_shibboleth_get_user_role` should NOT be changed\ndef custom_shibboleth_get_user_role(shib_meta):\n\n remote_user = shib_meta.get('remote_user', '')\n if not remote_user:\n return ''\n\n remote_user = remote_user.lower()\n if remote_user.endswith('@seafile.com'):\n return 'default'\n else:\n return 'guest'\n"},{"location":"config/shibboleth_authentication/#verify","title":"Verify","text":"After restarting Apache and Seahub service (./seahub.sh restart), you can then test the shibboleth login workflow.
If you encounter problems when logging in, follow these steps to get debug info (for Seafile Pro 6.3.13).
"},{"location":"config/shibboleth_authentication/#add-this-setting-to-seahub_settingspy","title":"Add this setting toseahub_settings.py","text":"DEBUG = True\n"},{"location":"config/shibboleth_authentication/#change-seafiles-code","title":"Change Seafile's code","text":"Open seafile-server-latest/seahub/thirdpart/shibboleth/middleware.py
Insert the following code in line 59
assert False\n Insert the following code in line 65
if not username:\n assert False\n The complete code after these changes is as follows:
#Locate the remote user header.\n# import pprint; pprint.pprint(request.META)\ntry:\n username = request.META[SHIB_USER_HEADER]\nexcept KeyError:\n assert False\n # If specified header doesn't exist then return (leaving\n # request.user set to AnonymousUser by the\n # AuthenticationMiddleware).\n return\n\nif not username:\n assert False\n\np_id = ccnet_api.get_primary_id(username)\nif p_id is not None:\n username = p_id\n Then restart Seafile and relogin, you will see debug info in web page.
"},{"location":"config/single_sign_on/","title":"Single Sign On support in Seafile","text":"Seafile supports most of the popular single-sign-on authentication protocols. Some are included in Community Edition, some are only in Pro Edition.
In the Community Edition:
Kerberos authentication can be integrated by using Apache as a proxy server and follow the instructions in Remote User Authentication and Auto Login SeaDrive on Windows.
In Pro Edition:
Build Seafile
Seafile Open API
Seafile Implement Details
You can build Seafile from our source code package or from the Github repo directly.
Client
Server
Seafile internally uses a data model similar to GIT's. It consists of Repo, Commit, FS, and Block.
Seafile's high performance comes from the architectural design: stores file metadata in object storage (or file system), while only stores small amount of metadata about the libraries in relational database. An overview of the architecture can be depicted as below. We'll describe the data model in more details.
"},{"location":"develop/data_model/#repo","title":"Repo","text":"A repo is also called a library. Every repo has an unique id (UUID), and attributes like description, creator, password.
The metadata for a repo is stored in seafile_db database and the commit objects (see description in later section).
There are a few tables in the seafile_db database containing important information about each repo.
Repo: contains the ID for each repo.RepoOwner: contains the owner id for each repo.RepoInfo: it is a \"cache\" table for fast access to repo metadata stored in the commit object. It includes repo name, update time, last modifier.RepoSize: the total size of all files in the repo.RepoFileCount: the file count in the repo.RepoHead: contains the \"head commit ID\". This ID points to the head commit in the storage, which will be described in the next section.Commit objects save the change history of a repo. Each update from the web interface, or sync upload operation will create a new commit object. A commit object contains the following information: commit ID, library name, creator of this commit (a.k.a. the modifier), creation time of this commit (a.k.a. modification time), root fs object ID, parent commit ID.
The root fs object ID points to the root FS object, from which we can traverse a file system snapshot for the repo.
The parent commit ID points to the last commit previous to the current commit. The RepoHead table contains the latest head commit ID for each repo. From this head commit, we can traverse the repo history.
If you use file system as storage backend, commit objects are stored in the path seafile-data/storage/commits/<repo_id>. If you use object storage, commit objects are stored in the commits bucket.
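The head-plus-parent-pointer structure described above can be illustrated with a minimal sketch (plain Python dictionaries standing in for stored commit objects; the names here are illustrative, not Seafile's actual API):

```python
# Toy model: commit objects keyed by commit ID, each pointing at its parent.
# In Seafile these objects live under seafile-data/storage/commits/<repo_id>
# or in the commits bucket; here a dict stands in for that storage.
commits = {
    "c3": {"root_fs_id": "f3", "parent_id": "c2", "desc": "edit report"},
    "c2": {"root_fs_id": "f2", "parent_id": "c1", "desc": "add folder"},
    "c1": {"root_fs_id": "f1", "parent_id": None, "desc": "initial commit"},
}

def walk_history(head_commit_id):
    """Yield commit IDs from the head back to the initial commit."""
    commit_id = head_commit_id
    while commit_id is not None:
        yield commit_id
        commit_id = commits[commit_id]["parent_id"]

# The RepoHead table would supply "c3" as the repo's head commit ID.
print(list(walk_history("c3")))  # ['c3', 'c2', 'c1']
```

Each snapshot of the repo is then reachable through the root_fs_id of the corresponding commit.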
There are two types of FS objects, SeafDir Object and Seafile Object. SeafDir Object represents a directory, and Seafile Object represents a file.
The SeafDir object contains metadata for each file/sub-folder, which includes name, last modification time, last modifier, size, and object ID. The object ID points to another SeafDir or Seafile object. The Seafile object contains a block list, which is a list of block IDs for the file.
The FS object IDs are calculated based on the contents of the object. That means if a folder or a file is not changed, the same objects will be reused across multiple commits. This allows us to create snapshots very efficiently.
If you use file system as storage backend, fs objects are stored in the path seafile-data/storage/fs/<repo_id>. If you use object storage, fs objects are stored in the fs bucket.
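The content-addressing idea can be sketched as follows (illustrative only: Seafile's actual object serialization format differs, but the principle — identical content yields identical IDs — is the same):

```python
import hashlib
import json

def object_id(entries):
    """Compute a content-based ID for a directory object.

    entries: list of (name, child_object_id) pairs. Sorting makes the ID
    independent of entry order, so an unchanged directory always maps to
    the same ID and the stored object is reused across commits.
    """
    payload = json.dumps(sorted(entries)).encode("utf-8")
    return hashlib.sha1(payload).hexdigest()

unchanged = object_id([("a.txt", "id1"), ("b.txt", "id2")])
reordered = object_id([("b.txt", "id2"), ("a.txt", "id1")])
changed   = object_id([("a.txt", "id1"), ("b.txt", "id3")])

assert unchanged == reordered  # same content -> same ID, object is reused
assert unchanged != changed    # any change -> new ID, a new object is written
```

Because a changed file also changes the IDs of all its parent directories up to the root, each commit's root fs object ID uniquely identifies a whole snapshot.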
A file is further divided into blocks with variable lengths. We use a Content Defined Chunking algorithm to divide files into blocks. A clear overview of this algorithm can be found at http://pdos.csail.mit.edu/papers/lbfs:sosp01/lbfs.pdf. On average, a block's size is around 8MB.
This mechanism makes it possible to deduplicate data between different versions of frequently updated files, improving storage efficiency. It also enables transferring data to/from multiple servers in parallel.
If you use file system as storage backend, block objects are stored in the path seafile-data/storage/blocks/<repo_id>. If you use object storage, block objects are stored in the blocks bucket.
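A toy sketch of the idea (not Seafile's actual chunker — real CDC uses Rabin fingerprints and ~8MB average chunks as in the LBFS paper; tiny parameters here just show why block boundaries realign after an insertion):

```python
import hashlib

W, MASK = 4, 0x0F  # 4-byte window, ~1/16 probability of a cut point

def chunks(data):
    """Cut data wherever a value computed over the last W bytes hits zero."""
    out, start = [], 0
    for i in range(W, len(data)):
        if sum(data[i - W:i]) & MASK == 0:
            out.append(data[start:i])
            start = i
    out.append(data[start:])
    return [c for c in out if c]

def block_ids(data):
    return [hashlib.sha1(c).hexdigest() for c in chunks(data)]

a = bytes(range(50)) * 20
b = b"prefix!" + a  # insert a few bytes at the front of the file

# Boundaries depend only on local content, so chunks downstream of the
# insertion realign and their blocks are shared between both versions:
shared = set(block_ids(a)) & set(block_ids(b))
print(len(shared), "block IDs shared between the two versions")
```

With fixed-size blocks, an insertion near the start would shift every later block and defeat deduplication; content-defined boundaries avoid that.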
A \"virtual repo\" is a special repo that will be created in the cases below:
A virtual repo can be understood as a view of part of the data in its parent library. For example, when sharing a folder, the virtual repo only provides access to the shared folder in that library. Virtual repos use the same underlying data as the parent library, so they use the same fs and blocks storage locations as their parent.
A virtual repo has its own change history, so it has a separate commits storage location from its parent. The changes in a virtual repo and its parent repo are merged bidirectionally, so that changes from each side can be seen on the other.
There is a VirtualRepo table in the seafile_db database. It contains the folder path in the parent repo for each virtual repo.
The following list is what you need to install on your development machine. You should install all of them before you build Seafile.
Package names are according to Ubuntu 24.04. For other Linux distros, please find their corresponding names yourself.
sudo apt-get install build-essential autotools-dev libtool libevent-dev libcurl4-openssl-dev libgtk2.0-dev uuid-dev intltool libsqlite3-dev valac git libjansson-dev cmake libwebsockets-dev qtchooser qtbase5-dev libqt5webkit5-dev qttools5-dev qttools5-dev-tools libssl-dev libargon2-dev libglib2.0-dev qtwebengine5-dev qtwayland5\n"},{"location":"develop/linux/#building","title":"Building","text":"First you should get the latest source of libsearpc/seafile/seafile-client:
Download the source code of the latest tag from
For example, if the latest released seafile client is 9.0.15, then just use the v9.0.15 tags of the three projects.
git clone --branch v3.2-latest https://github.com/haiwen/libsearpc.git\ngit clone --branch v9.0.15 https://github.com/haiwen/seafile.git\ngit clone --branch v9.0.15 https://github.com/haiwen/seafile-client.git\n To build Seafile client, you need first build libsearpc and seafile.
"},{"location":"develop/linux/#set-paths","title":"set paths","text":"export PREFIX=/usr\nexport PKG_CONFIG_PATH=\"$PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH\"\nexport PATH=\"$PREFIX/bin:$PATH\"\n"},{"location":"develop/linux/#libsearpc","title":"libsearpc","text":"cd libsearpc\n./autogen.sh\n./configure --prefix=$PREFIX\nmake\nsudo make install\ncd ..\n"},{"location":"develop/linux/#seafile","title":"seafile","text":"cd seafile\n./autogen.sh\n./configure --prefix=$PREFIX --enable-ws=yes\nmake\nsudo make install\ncd ..\n If you don't need notification server, you can set --enable-ws=no to disable notification server.
cd seafile-client\ncmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=$PREFIX .\nmake\nsudo make install\ncd ..\n"},{"location":"develop/linux/#custom-prefix","title":"custom prefix","text":"when installing to a custom $PREFIX, i.e. /opt, you may need a script to set the path variables correctly
cat >$PREFIX/bin/seafile-applet.sh <<END\n#!/bin/bash\nexport LD_LIBRARY_PATH=\"$PREFIX/lib:$LD_LIBRARY_PATH\"\nexport PATH=\"$PREFIX/bin:$PATH\"\nexec seafile-applet $@\nEND\ncat >$PREFIX/bin/seaf-cli.sh <<'END'\nexport LD_LIBRARY_PATH=\"$PREFIX/lib:$LD_LIBRARY_PATH\"\nexport PATH=\"$PREFIX/bin:$PATH\"\nexport PYTHONPATH=$PREFIX/lib/python3.12/site-packages\nexec seaf-cli \"$@\"\nEND\nchmod +x $PREFIX/bin/seafile-applet.sh $PREFIX/bin/seaf-cli.sh\n you can now start the client with $PREFIX/bin/seafile-applet.sh.
The following setups are required for building and packaging Sync Client on macOS:
universal_archs arm64 x86_64. Specifies the architectures for which MacPorts builds ports.+universal. MacPorts installs universal versions of all ports.sudo port install autoconf automake pkgconfig libtool glib2 libevent vala openssl git jansson cmake libwebsockets argon2.export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/opt/local/lib/pkgconfig:/usr/local/lib/pkgconfig\nexport PATH=/opt/local/bin:/usr/local/bin:/opt/local/Library/Frameworks/Python.framework/Versions/3.10/bin:$PATH\nexport LDFLAGS=\"-L/opt/local/lib -L/usr/local/lib\"\nexport CFLAGS=\"-I/opt/local/include -I/usr/local/include\"\nexport CPPFLAGS=\"-I/opt/local/include -I/usr/local/include\"\nexport LD_LIBRARY_PATH=/opt/lib:/usr/local/lib:/opt/local/lib/:/usr/local/lib/:$LD_LIBRARY_PATH\n\nQT_BASE=$HOME/Qt/6.2.4/macos\nexport PATH=$QT_BASE/bin:$PATH\nexport PKG_CONFIG_PATH=$QT_BASE/lib/pkgconfig:$PKG_CONFIG_PATH\nexport NOTARIZE_APPLE_ID=\"Your notarize account\"\nexport NOTARIZE_PASSWORD=\"Your notarize password\"\nexport NOTARIZE_TEAM_ID=\"Your notarize team id\"\n The following directory structure is expected when building the Sync Client:
seafile-workspace/\nseafile-workspace/libsearpc/\nseafile-workspace/seafile/\nseafile-workspace/seafile-client/\n The source code of these projects can be downloaded at github.com/haiwen/libsearpc, github.com/haiwen/seafile, and github.com/haiwen/seafile-client.
"},{"location":"develop/osx/#building","title":"Building","text":"Note: the building commands have been included in the packaging script, you can skip building commands while packaging.
To build libsearpc:
$ cd seafile-workspace/libsearpc/\n$ ./autogen.sh\n$ ./configure --disable-compile-demo --enable-compile-universal=yes\n$ make\n$ make install\n To build seafile:
$ cd seafile-workspace/seafile/\n$ ./autogen.sh\n$ ./configure --disable-fuse --enable-compile-universal=yes\n$ make\n$ make install\n To build seafile-client:
$ cd seafile-workspace/seafile-client/\n$ cmake -GXcode -B. -S.\n$ xcodebuild -target seafile-applet -configuration Release\n"},{"location":"develop/osx/#packaging","title":"Packaging","text":"python3 build-mac-local-py3.py --brand=\"\" --version=1.0.0 --nostrip --universalFrom Seafile 11.0, you can build Seafile release package with seafile-build script. You can check the README.md file in the same folder for detailed instructions.
The seafile-build.sh compatible with more platforms, including Raspberry Pi, arm-64, x86-64.
Old version is below:
Table of contents:
Requirements:
sudo apt-get install build-essential\nsudo apt-get install libevent-dev libcurl4-openssl-dev libglib2.0-dev uuid-dev intltool libsqlite3-dev libmysqlclient-dev libarchive-dev libtool libjansson-dev valac libfuse-dev re2c flex python-setuptools cmake\n"},{"location":"develop/rpi/#compile-development-libraries","title":"Compile development libraries","text":""},{"location":"develop/rpi/#libevhtp","title":"libevhtp","text":"libevhtp is a http server libary on top of libevent. It's used in seafile file server.
git clone https://www.github.com/haiwen/libevhtp.git\ncd libevhtp\ncmake -DEVHTP_DISABLE_SSL=ON -DEVHTP_BUILD_SHARED=OFF .\nmake\nsudo make install\n After compiling all the libraries, run ldconfig to update the system libraries cache:
sudo ldconfig\n"},{"location":"develop/rpi/#install-python-libraries","title":"Install python libraries","text":"Create a new directory /home/pi/dev/seahub_thirdpart:
mkdir -p ~/dev/seahub_thirdpart\n Download these tarballs to /tmp/:
Install all these libaries to /home/pi/dev/seahub_thirdpart:
cd ~/dev/seahub_thirdpart\nexport PYTHONPATH=.\npip install -t ~/dev/seahub_thirdpart/ /tmp/pytz-2016.1.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/Django-1.8.10.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django-statici18n-1.1.3.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/djangorestframework-3.3.2.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django_compressor-1.4.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/jsonfield-1.0.3.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django-post_office-2.0.6.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/gunicorn-19.4.5.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/flup-1.0.2.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/chardet-2.3.0.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/python-dateutil-1.5.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/six-1.9.0.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django-picklefield-0.3.2.tar.gz\nwget -O /tmp/django_constance.zip https://github.com/haiwen/django-constance/archive/bde7f7c.zip\npip install -t ~/dev/seahub_thirdpart/ /tmp/django_constance.zip\npip install -t ~/dev/seahub_thirdpart/ /tmp/jdcal-1.2.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/et_xmlfile-1.0.1.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/openpyxl-2.3.0.tar.gz\n"},{"location":"develop/rpi/#prepare-seafile-source-code","title":"Prepare seafile source code","text":"To build seafile server, there are four sub projects involved:
The build process has two steps:
build-server.py script to build the server package from the source tarballs. Seafile manages its releases with tags on GitHub.
Assume we are packaging for seafile server 6.0.1, then the tags are:
v6.0.1-server tag.v3.0-latest tag (libsearpc has been quite stable and basically has no further development, so the tag is always v3.0-latest)First, set up the PKG_CONFIG_PATH environment variable (so we don't need to make and make install libsearpc/ccnet/seafile into the system):
export PKG_CONFIG_PATH=/home/pi/dev/seafile/lib:$PKG_CONFIG_PATH\nexport PKG_CONFIG_PATH=/home/pi/dev/libsearpc:$PKG_CONFIG_PATH\nexport PKG_CONFIG_PATH=/home/pi/dev/ccnet:$PKG_CONFIG_PATH\n"},{"location":"develop/rpi/#libsearpc","title":"libsearpc","text":"cd ~/dev\ngit clone https://github.com/haiwen/libsearpc.git\ncd libsearpc\ngit reset --hard v3.0-latest\n./autogen.sh\n./configure\nmake dist\n"},{"location":"develop/rpi/#ccnet","title":"ccnet","text":"cd ~/dev\ngit clone https://github.com/haiwen/ccnet-server.git\ncd ccnet\ngit reset --hard v6.0.1-server\n./autogen.sh\n./configure\nmake dist\n"},{"location":"develop/rpi/#seafile","title":"seafile","text":"cd ~/dev\ngit clone https://github.com/haiwen/seafile-server.git\ncd seafile\ngit reset --hard v6.0.1-server\n./autogen.sh\n./configure\nmake dist\n"},{"location":"develop/rpi/#seahub","title":"seahub","text":"cd ~/dev\ngit clone https://github.com/haiwen/seahub.git\ncd seahub\ngit reset --hard v6.0.1-server\n./tools/gen-tarball.py --version=6.0.1 --branch=HEAD\n"},{"location":"develop/rpi/#seafobj","title":"seafobj","text":"cd ~/dev\ngit clone https://github.com/haiwen/seafobj.git\ncd seafobj\ngit reset --hard v6.0.1-server\nmake dist\n"},{"location":"develop/rpi/#seafdav","title":"seafdav","text":"cd ~/dev\ngit clone https://github.com/haiwen/seafdav.git\ncd seafdav\ngit reset --hard v6.0.1-server\nmake\n"},{"location":"develop/rpi/#copy-the-source-tar-balls-to-the-same-folder","title":"Copy the source tar balls to the same folder","text":"mkdir ~/seafile-sources\ncp ~/dev/libsearpc/libsearpc-<version>-tar.gz ~/seafile-sources\ncp ~/dev/ccnet/ccnet-<version>-tar.gz ~/seafile-sources\ncp ~/dev/seafile/seafile-<version>-tar.gz ~/seafile-sources\ncp ~/dev/seahub/seahub-<version>-tar.gz ~/seafile-sources\n\ncp ~/dev/seafobj/seafobj.tar.gz ~/seafile-sources\ncp ~/dev/seafdav/seafdav.tar.gz ~/seafile-sources\n"},{"location":"develop/rpi/#run-the-packaging-script","title":"Run the packaging script","text":"Now we 
have all the tarballs prepared, we can run the build-server.py script to build the server package.
mkdir ~/seafile-server-pkgs\n~/dev/seafile/scripts/build-server.py --libsearpc_version=<libsearpc_version> --ccnet_version=<ccnet_version> --seafile_version=<seafile_version> --seahub_version=<seahub_version> --thirdpartdir=/home/pi/dev/seahub_thirdpart --srcdir=/home/pi/seafile-sources --outputdir=/home/pi/seafile-server-pkgs\n After the script finishes, you will get a seafile-server_6.0.1_pi.tar.gz in the ~/seafile-server-pkgs folder.
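The output package name follows the pattern seafile-server_<version>_<arch>.tar.gz mentioned above; a tiny sketch of how the name is composed (VERSION and ARCH are the values used in this guide):

```shell
# Compose the expected package name from the version and architecture
# used in this guide (6.0.1 on a Raspberry Pi).
VERSION=6.0.1
ARCH=pi
PKG="seafile-server_${VERSION}_${ARCH}.tar.gz"
echo "$PKG"
```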
The test should cover at least these steps:
seafile.sh start and seahub.sh start, you can log in from a browser. This is the document for deploying the Seafile open source development environment in an Ubuntu 24.04 docker container.
"},{"location":"develop/server/#create-persistent-directories","title":"Create persistent directories","text":"Log in to a Linux server as the root user, then:
mkdir -p /root/seafile-ce-docker/source-code\nmkdir -p /root/seafile-ce-docker/conf\nmkdir -p /root/seafile-ce-docker/logs\nmkdir -p /root/seafile-ce-docker/mysql-data\nmkdir -p /root/seafile-ce-docker/seafile-data/library-template\n"},{"location":"develop/server/#run-a-container","title":"Run a container","text":"After installing Docker, start a container to deploy the Seafile open source development environment.
docker run --mount type=bind,source=/root/seafile-ce-docker/source-code,target=/root/dev/source-code \\\n --mount type=bind,source=/root/seafile-ce-docker/conf,target=/root/dev/conf \\\n --mount type=bind,source=/root/seafile-ce-docker/logs,target=/root/dev/logs \\\n --mount type=bind,source=/root/seafile-ce-docker/seafile-data,target=/root/dev/seafile-data \\\n --mount type=bind,source=/root/seafile-ce-docker/mysql-data,target=/var/lib/mysql \\\n -it -p 8000:8000 -p 8082:8082 -p 3000:3000 --name seafile-ce-env ubuntu:24.04 bash\n Note, the following commands are all executed in the seafile-ce-env docker container.
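The persistent directory layout created earlier can also be scripted in one loop; this is a sketch where BASE is an illustrative variable (the guide uses /root/seafile-ce-docker):

```shell
# Create the persistent directory layout used by the dev container.
# BASE is illustrative; the guide uses /root/seafile-ce-docker.
BASE=${BASE:-./seafile-ce-docker}
for d in source-code conf logs mysql-data seafile-data/library-template; do
    mkdir -p "$BASE/$d"
done
ls "$BASE"
```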
"},{"location":"develop/server/#update-source-and-install-dependencies","title":"Update Source and Install Dependencies.","text":"Update base system and install base dependencies:
apt-get update && apt-get upgrade -y\n\napt-get install -y ssh libevent-dev libcurl4-openssl-dev libglib2.0-dev uuid-dev intltool libsqlite3-dev libmysqlclient-dev libarchive-dev libtool libjansson-dev valac libfuse-dev python3-dateutil cmake re2c flex sqlite3 python3-pip python3-simplejson git libssl-dev libldap2-dev libonig-dev vim vim-scripts wget cmake gcc autoconf automake mysql-client librados-dev libxml2-dev curl sudo telnet netcat unzip netbase ca-certificates apt-transport-https build-essential libxslt1-dev libffi-dev libpcre3-dev libz-dev xz-utils nginx pkg-config poppler-utils libmemcached-dev sudo ldap-utils libldap2-dev libjwt-dev libunwind-dev libhiredis-dev google-perftools libgoogle-perftools-dev\n Install Node 20 from nodesource:
curl -sL https://deb.nodesource.com/setup_20.x | sudo -E bash -\napt-get install -y nodejs\n Install other Python 3 dependencies:
apt-get install -y python3 python3-dev python3-pip python3-setuptools python3-ldap\n\npython3 -m pip install --upgrade pip\n\npip3 install pytz jinja2 Django==5.2.* django-statici18n==2.3.* django_webpack_loader==1.7.* django_picklefield==3.1 django_formtools==2.4 django_simple_captcha==0.6.* djangosaml2==1.11.* djangorestframework==3.14.* python-dateutil==2.8.* pyjwt==2.10.* pycryptodome==3.23.* python-cas==1.6.* pysaml2==7.5.* requests==2.28.* requests_oauthlib==1.3.* future==1.0.* gunicorn==20.1.* mysqlclient==2.2.* qrcode==7.3.* pillow==11.3.* pillow-heif==1.0.* chardet==5.1.* cffi==1.17.1 captcha==0.7.* openpyxl==3.0.* Markdown==3.4.* bleach==5.0.* python-ldap==3.4.* sqlalchemy==2.0.* redis mock pytest pymysql==1.1.* configparser pylibmc django-pylibmc nose exam splinter pytest-django psd-tools lxml\n"},{"location":"develop/server/#install-mariadb-and-create-databases","title":"Install MariaDB and Create Databases","text":"apt-get install -y mariadb-server\nservice mariadb start\nmysqladmin -u root password your_password\n sql for create databases
mysql -uroot -pyour_password -e \"CREATE DATABASE ccnet CHARACTER SET utf8;\"\nmysql -uroot -pyour_password -e \"CREATE DATABASE seafile CHARACTER SET utf8;\"\nmysql -uroot -pyour_password -e \"CREATE DATABASE seahub CHARACTER SET utf8;\"\n"},{"location":"develop/server/#download-source-code","title":"Download Source Code","text":"cd ~/\ncd ~/dev/source-code\n\ngit clone https://github.com/haiwen/libevhtp.git\ngit clone https://github.com/haiwen/libsearpc.git\ngit clone https://github.com/haiwen/seafile-server.git\ngit clone https://github.com/haiwen/seafevents.git\ngit clone https://github.com/haiwen/seafobj.git\ngit clone https://github.com/haiwen/seahub.git\n\ncd libevhtp/\ngit checkout tags/1.1.7 -b tag-1.1.7\n\ncd ../libsearpc/\ngit checkout tags/v3.3-latest -b tag-v3.3-latest\n\ncd ../seafile-server\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n\ncd ../seafevents\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n\ncd ../seafobj\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n\ncd ../seahub\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n"},{"location":"develop/server/#compile-and-install-seaf-server","title":"Compile and Install seaf-server","text":"cd ../libevhtp\ncmake -DEVHTP_DISABLE_SSL=ON -DEVHTP_BUILD_SHARED=OFF .\nmake\nmake install\nldconfig\n\ncd ../libsearpc\n./autogen.sh\n./configure\nmake\nmake install\nldconfig\n\ncd ../seafile-server\n./autogen.sh\n./configure --disable-fuse\nmake\nmake install\nldconfig\n"},{"location":"develop/server/#create-conf-files","title":"Create Conf Files","text":"cd ~/dev/conf\n\ncat > ccnet.conf <<EOF\n[Database]\nENGINE = mysql\nHOST = localhost\nPORT = 3306\nUSER = root\nPASSWD = 123456\nDB = ccnet\nCONNECTION_CHARSET = utf8\nCREATE_TABLES = true\nEOF\n\ncat > seafile.conf <<EOF\n[database]\ntype = mysql\nhost = localhost\nport = 3306\nuser = root\npassword = 123456\ndb_name = seafile\nconnection_charset = utf8\ncreate_tables = true\nEOF\n\ncat > seafevents.conf 
<<EOF\n[DATABASE]\ntype = mysql\nusername = root\npassword = 123456\nname = seahub\nhost = localhost\nEOF\n\ncat > seahub_settings.py <<EOF\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.mysql',\n 'NAME': 'seahub',\n 'USER': 'root',\n 'PASSWORD': '123456',\n 'HOST': 'localhost',\n 'PORT': '3306',\n }\n}\nFILE_SERVER_ROOT = 'http://127.0.0.1:8082'\nSERVICE_URL = 'http://127.0.0.1:8000'\nEOF\n"},{"location":"develop/server/#start-seaf-server","title":"Start seaf-server","text":"seaf-server -F /root/dev/conf -d /root/dev/seafile-data -l /root/dev/logs/seafile.log >> /root/dev/logs/seafile.log 2>&1 &\n"},{"location":"develop/server/#start-seafevents-and-seahub","title":"Start seafevents and seahub","text":""},{"location":"develop/server/#prepare-environment-variables","title":"Prepare environment variables","text":"export CCNET_CONF_DIR=/root/dev/conf\nexport SEAFILE_CONF_DIR=/root/dev/seafile-data\nexport SEAFILE_CENTRAL_CONF_DIR=/root/dev/conf\nexport SEAHUB_DIR=/root/dev/source-code/seahub\nexport SEAHUB_LOG_DIR=/root/dev/logs\nexport PYTHONPATH=/usr/local/lib/python3.10/dist-packages/:/usr/local/lib/python3.10/site-packages/:/root/dev/source-code/:/root/dev/source-code/seafobj/:/root/dev/source-code/seahub/thirdpart:$PYTHONPATH\n"},{"location":"develop/server/#start-seafevents","title":"Start seafevents","text":"cd /root/dev/source-code/seafevents/\npython3 main.py --loglevel=debug --logfile=/root/dev/logs/seafevents.log --config-file /root/dev/conf/seafevents.conf >> /root/dev/logs/seafevents.log 2>&1 &\n"},{"location":"develop/server/#start-seahub","title":"Start seahub","text":""},{"location":"develop/server/#create-seahub-database-tables","title":"Create seahub database tables","text":"cd /root/dev/source-code/seahub/\npython3 manage.py migrate\n"},{"location":"develop/server/#create-user","title":"Create user","text":"python3 manage.py createsuperuser\n"},{"location":"develop/server/#start-seahub_1","title":"Start seahub","text":"python3 
manage.py runserver 0.0.0.0:8000\n Then, you can visit http://127.0.0.1:8000/ to use Seafile.
"},{"location":"develop/server/#the-final-directory-structure","title":"The Final Directory Structure","text":""},{"location":"develop/server/#more","title":"More","text":""},{"location":"develop/server/#deploy-frontend-development-environment","title":"Deploy Frontend Development Environment","text":"For deploying the frontend development environment, you need to:
1. Check out seahub to the master branch
cd /root/dev/source-code/seahub\n\ngit fetch origin master:master\ngit checkout master\n 2. Add the following configuration to /root/dev/conf/seahub_settings.py
import os\nPROJECT_ROOT = '/root/dev/source-code/seahub'\nWEBPACK_LOADER = {\n 'DEFAULT': {\n 'BUNDLE_DIR_NAME': 'frontend/',\n 'STATS_FILE': os.path.join(PROJECT_ROOT,\n 'frontend/webpack-stats.dev.json'),\n }\n}\nDEBUG = True\n 3. Install the JS modules
cd /root/dev/source-code/seahub/frontend\n\nnpm install\n 4. Run npm run dev
cd /root/dev/source-code/seahub/frontend\n\nnpm run dev\n 5. Start seaf-server and seahub
"},{"location":"develop/translation/","title":"Translation","text":""},{"location":"develop/translation/#seahub-seafile-server-71-and-above","title":"Seahub (Seafile Server 7.1 and above)","text":""},{"location":"develop/translation/#translate-and-try-locally","title":"Translate and try locally","text":"1. Locate the translation files in the seafile-server-latest/seahub directory:
/locale/<lang-code>/LC_MESSAGES/django.po and /locale/<lang-code>/LC_MESSAGES/djangojs.po, plus /media/locales/<lang-code>/seafile-editor.json. For example, if you want to improve the Russian translation, find the corresponding strings to be edited in any of the following three files:
/seafile-server-latest/seahub/locale/ru/LC_MESSAGES/django.po, /seafile-server-latest/seahub/locale/ru/LC_MESSAGES/djangojs.po, and /seafile-server-latest/seahub/media/locales/ru/seafile-editor.json. If there is no translation for your language, create a new folder matching your language code and copy the contents of another language folder into your newly created one. (Don't copy from the 'en' folder because the files therein do not contain the strings to be translated.)
2. Edit the files using a UTF-8 capable editor.
3. Save your changes.
4. (Only necessary when you created a new language code folder) Add a new entry for your language to the language block in the /seafile-server-latest/seahub/seahub/settings.py file and save it.
LANGUAGES = (\n ...\n ('ru', '\u0420\u0443\u0441\u0441\u043a\u0438\u0439'),\n ...\n)\n 5. (Only necessary when you edited either django.po or djangojs.po) Apply the changes made in django.po and djangojs.po by running the following two commands in /seafile-server-latest/seahub/locale/<lang-code>/LC_MESSAGES:
msgfmt -o django.mo django.po\nmsgfmt -o djangojs.mo djangojs.po\nNote: msgfmt is included in the gettext package.
Additionally, run the following two commands in the seafile-server-latest directory:
./seahub.sh python-env python3 seahub/manage.py compilejsi18n -l <lang-code>\n./seahub.sh python-env python3 seahub/manage.py collectstatic --noinput -i admin -i termsandconditions --no-post-process\n6. Restart Seahub to load changes made in django.po and djangojs.po; reload the Markdown editor to check your modifications in the seafile-editor.json file.
"},{"location":"develop/translation/#submit-your-translation","title":"Submit your translation","text":"Please submit translations via Transifex: https://www.transifex.com/projects/p/seahub/
Steps:
FileNotFoundError occurred when executing the command manage.py collectstatic.
FileNotFoundError: [Errno 2] No such file or directory: '/opt/seafile/seafile-server-latest/seahub/frontend/build'\n Steps:
Modify STATICFILES_DIRS in /opt/seafile/seafile-server-latest/seahub/seahub/settings.py manually
STATICFILES_DIRS = (\n # Put strings here, like \"/home/html/static\" or \"C:/www/django/static\".\n # Always use forward slashes, even on Windows.\n # Don't forget to use absolute paths, not relative paths.\n '%s/static' % PROJECT_ROOT,\n# '%s/frontend/build' % PROJECT_ROOT,\n)\n Execute the command
./seahub.sh python-env python3 seahub/manage.py collectstatic --noinput -i admin -i termsandconditions --no-post-process\n Restore STATICFILES_DIRS manually
STATICFILES_DIRS = (\n # Put strings here, like \"/home/html/static\" or \"C:/www/django/static\".\n # Always use forward slashes, even on Windows.\n # Don't forget to use absolute paths, not relative paths.\n '%s/static' % PROJECT_ROOT,\n '%s/frontend/build' % PROJECT_ROOT,\n)\n
Restart Seahub
./seahub.sh restart\n This issue has been fixed since version 11.0.
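The comment-out/restore workaround above can be scripted so neither of the two manual edits is forgotten. This is a sketch that assumes settings.py contains the STATICFILES_DIRS block exactly as shown (SETTINGS is an illustrative path; it runs here on a throwaway demo file):

```shell
# Demonstrate the workaround on a throwaway copy of the settings file.
SETTINGS=${SETTINGS:-./settings_demo.py}
cat > "$SETTINGS" <<'EOF'
STATICFILES_DIRS = (
    '%s/static' % PROJECT_ROOT,
    '%s/frontend/build' % PROJECT_ROOT,
)
EOF
# 1. comment the frontend/build entry out before running collectstatic
sed -i "s|^\( *\)\('%s/frontend/build' % PROJECT_ROOT,\)|\1# \2|" "$SETTINGS"
# 2. ... run collectstatic here ...
# 3. restore the entry afterwards
sed -i "s|^\( *\)# \('%s/frontend/build' % PROJECT_ROOT,\)|\1\2|" "$SETTINGS"
grep "'%s/frontend/build'" "$SETTINGS"
```

Note the sed -i flag as used here is the GNU sed form common on Linux servers.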
"},{"location":"develop/web_api_v2.1/","title":"Web API","text":""},{"location":"develop/web_api_v2.1/#seafile-web-api","title":"Seafile Web API","text":"The API document can be accessed in the following location:
The Admin API document can be accessed in the following location:
The following setups are required for building and packaging Sync Client on Windows:
vcpkg.exe integrate install to integrate vcpkg with projects. The following directory structure is expected when building Sync Client:
seafile-workspace/\nseafile-workspace/libsearpc/\nseafile-workspace/seafile/\nseafile-workspace/seafile-client/\nseafile-workspace/seafile-shell-ext/\n The source code of these projects can be downloaded at github.com/haiwen/libsearpc, github.com/haiwen/seafile, github.com/haiwen/seafile-client, and github.com/haiwen/seafile-shell-ext.
"},{"location":"develop/windows/#building","title":"Building","text":"Note: these commands are run in \"x64 Native Tools Command Prompt for VS 2019\". The \"Debug|x64\" configuration is simplified to build, which does not include breakpad and other dependencies.
To build libsearpc:
$ cd seafile-workspace/libsearpc/\n$ devenv libsearpc.sln /build \"Debug|x64\"\n To build seafile
$ cd seafile-workspace/seafile/\n$ devenv seafile.sln /build \"Debug|x64\"\n$ devenv msi/custom/seafile_custom.sln /build \"Debug|x64\"\n To build seafile-client
$ cd seafile-workspace/seafile-client/\n$ devenv third_party/quazip/quazip.sln /build \"Debug|x64\"\n$ devenv seafile-client.sln /build \"Debug|x64\"\n To build seafile-shell-ext
$ cd seafile-workspace/seafile-shell-ext/\n$ devenv extensions/seafile_ext.sln /build \"Debug|x64\"\n$ devenv seadrive-thumbnail-ext/seadrive_thumbnail_ext.sln /build \"Debug|x64\"\n"},{"location":"develop/windows/#packaging","title":"Packaging","text":"Additional setups are required for packaging:
Certificates
Update the CERTFILE setting in seafile-workspace/seafile/scripts/build/build-msi-vs.py.
$ cd seafile-workspace/seafile-client/third_party/quazip\n$ devenv quazip.sln /build \"Release|x64\"\n$ cd seafile-workspace/seafile/scripts/build\n$ python build-msi-vs.py 1.0.0\n If you use a cluster to deploy Seafile, you can use distributed indexing to achieve real-time indexing and improve indexing efficiency. The indexing process is as follows:
"},{"location":"extension/distributed_indexing/#install-redis-and-modify-configuration-files","title":"Install redis and modify configuration files","text":""},{"location":"extension/distributed_indexing/#1-install-redis-on-all-frontend-nodes","title":"1. Install redis on all frontend nodes","text":"Tip
If you use a Redis cloud service, skip this step and modify the configuration files directly.
UbuntuCentOS$ apt install redis-server\n $ yum install redis\n"},{"location":"extension/distributed_indexing/#2-install-python-redis-third-party-package-on-all-frontend-nodes","title":"2. Install python redis third-party package on all frontend nodes","text":"$ pip install redis\n"},{"location":"extension/distributed_indexing/#3-modify-the-seafeventsconf-on-all-frontend-nodes","title":"3. Modify the seafevents.conf on all frontend nodes","text":"Add the following config items
[EVENTS PUBLISH]\nmq_type=redis # must be redis\nenabled=true\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password, if not password, do not set this item\n"},{"location":"extension/distributed_indexing/#4-modify-the-seafeventsconf-on-the-backend-node","title":"4. Modify the seafevents.conf on the backend node","text":"Disable the scheduled indexing task, because the scheduled indexing task and the distributed indexing task conflict.
[INDEX FILES]\nenabled=true\n |\n V\nenabled=false \n"},{"location":"extension/distributed_indexing/#5-restart-seafile","title":"5. Restart Seafile","text":"Deploy in DockerDeploy from binary packages docker exec -it seafile bash\ncd /scripts\n./seafile.sh restart && ./seahub.sh restart\n cd /opt/seafile/seafile-server-latest\n./seafile.sh restart && ./seahub.sh restart\n"},{"location":"extension/distributed_indexing/#deploy-distributed-indexing","title":"Deploy distributed indexing","text":"First, prepare an index-server master node and several index-server slave nodes; the number of slave nodes depends on your needs. Copy seafile.conf and seafevents.conf in the conf directory from the Seafile frontend nodes to /opt/seafile-data/seafile/conf on the index-server nodes. The master node and slave nodes need to read the configuration files to obtain the necessary information.
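The seafevents.conf additions for the frontend nodes (step 3 above) can be appended with a small script; CONF and the Redis address/port are illustrative placeholders for your real values:

```shell
# Append the [EVENTS PUBLISH] and [REDIS] sections to seafevents.conf.
# CONF, the server address and the port are illustrative; use your real values.
CONF=${CONF:-./seafevents.conf}
cat >> "$CONF" <<'EOF'

[EVENTS PUBLISH]
mq_type=redis
enabled=true

[REDIS]
server=127.0.0.1
port=6379
EOF
grep '^mq_type=' "$CONF"
```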
mkdir -p /opt/seafile-data/seafile/conf\nmkdir -p /opt/seafile\n Then download .env and index-server.yml to /opt/seafile in all index-server nodes.
cd /opt/seafile\nwget https://manual.seafile.com/12.0/repo/docker/index-server/index-server.yml\nwget -O .env https://manual.seafile.com/12.0/repo/docker/index-server/env\n Modify mysql configurations in .env.
SEAFILE_MYSQL_DB_HOST=127.0.0.1\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=PASSWORD\n\nCLUSTER_MODE=master\n Note
CLUSTER_MODE needs to be configured as master on the master node, and as worker on the slave nodes.
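Since the only difference between the two node types at this point is that one variable, setting it can be scripted per node; ROLE and ENVFILE are illustrative helpers, not part of the official tooling:

```shell
# Set CLUSTER_MODE in .env according to the node's role.
# ROLE is "master" on the master node and "worker" on slave nodes (illustrative helper).
ROLE=${ROLE:-master}
ENVFILE=${ENVFILE:-./.env}
touch "$ENVFILE"
if grep -q '^CLUSTER_MODE=' "$ENVFILE"; then
    sed -i "s/^CLUSTER_MODE=.*/CLUSTER_MODE=$ROLE/" "$ENVFILE"
else
    echo "CLUSTER_MODE=$ROLE" >> "$ENVFILE"
fi
grep '^CLUSTER_MODE=' "$ENVFILE"
```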
Next, create a configuration file index-master.conf in the conf directory of the master node, e.g.
[DEFAULT]\nmq_type=redis # must be redis\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password, if not password, do not set this item\n Start master node.
docker compose up -d\n Next, create a configuration file index-worker.conf in the conf directory of all slave nodes, e.g.
[DEFAULT]\nmq_type=redis # must be redis\nindex_workers=2 # number of threads to create/update indexes, you can increase this value according to your needs\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password, if not password, do not set this item\n Start all slave nodes.
docker compose up -d\n"},{"location":"extension/distributed_indexing/#some-commands-in-distributed-indexing","title":"Some commands in distributed indexing","text":"Rebuild search index, first execute the command in the Seafile node:
cd /opt/seafile/seafile-server-last/\n./pro/pro.py search --clear\n Then execute the command in the index-server master node:
docker exec -it index-server bash\n/opt/seafile/index-server/index-server.sh restore-all-repo\n List the number of indexing tasks currently remaining, execute the command in the index-server master node:
/opt/seafile/index-server/index-server.sh show-all-task\n"},{"location":"extension/fuse/","title":"FUSE extension","text":"Files in the seafile system are split to blocks, which means what are stored on your seafile server are not complete files, but blocks. This design faciliates effective data deduplication.
However, administrators sometimes want to access the files directly on the server. You can use seaf-fuse to do this.
Seaf-fuse is an implementation of the FUSE virtual filesystem. In a word, it mounts all the seafile files to a folder (which is called the '''mount point'''), so that you can access all the files managed by seafile server, just as you access a normal folder on your server.
Note
Assume we want to mount to /opt/seafile-fuse in host.
Add the following content
seafile:\n ...\n volumes:\n ...\n - type: bind\n source: /opt/seafile-fuse\n target: /seafile-fuse\n bind:\n propagation: rshared\n privileged: true\n cap_add:\n - SYS_ADMIN\n"},{"location":"extension/fuse/#start-seaf-fuse-with-the-script-in-docker","title":"Start seaf-fuse with the script in docker","text":"Start Seafile server and enter the container
docker compose up -d\n\ndocker exec -it seafile bash\n Start seaf-fuse in the container
cd /opt/seafile/seafile-server-latest/\n\n./seaf-fuse.sh start /seafile-fuse\n"},{"location":"extension/fuse/#use-seaf-fuse-in-binary-based-deployment","title":"Use seaf-fuse in binary based deployment","text":"Assume we want to mount to /data/seafile-fuse.
mkdir -p /data/seafile-fuse\n"},{"location":"extension/fuse/#start-seaf-fuse-with-the-script","title":"Start seaf-fuse with the script","text":"Before start seaf-fuse, you should have started seafile server with ./seafile.sh start
./seaf-fuse.sh start /data/seafile-fuse\n"},{"location":"extension/fuse/#stop-seaf-fuse","title":"Stop seaf-fuse","text":"./seaf-fuse.sh stop\n"},{"location":"extension/fuse/#start-options","title":"Start options","text":"seaf-fuse supports standard mount options for FUSE. For example, you can specify ownership for the mounted folder:
./seaf-fuse.sh start -o uid=<uid> /data/seafile-fuse\n In Pro edition, seaf-fuse enables the block cache function by default to cache block objects when object storage backend is used, thereby reducing access to backend storage, but this function will occupy local disk space. Since Seafile-pro-10.0.0, you can disable block cache by adding following options:
./seaf-fuse.sh start --disable-block-cache /data/seafile-fuse\n You can find the complete list of supported options in man fuse.
Now you can list the content of /data/seafile-fuse.
$ ls -lhp /data/seafile-fuse\n\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 abc@abc.com/\ndrwxr-xr-x 2 root root 4.0K Jan 4 2015 foo@foo.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 plus@plus.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 sharp@sharp.com/\ndrwxr-xr-x 2 root root 4.0K Jan 3 2015 test@test.com/\n $ ls -lhp /data/seafile-fuse/abc@abc.com\n\ndrwxr-xr-x 2 root root 924 Jan 1 1970 5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\ndrwxr-xr-x 2 root root 1.6K Jan 1 1970 a09ab9fc-7bd0-49f1-929d-6abeb8491397_My Notes/\n From the above list you can see, under the folder of a user there are subfolders, each of which represents a library of that user, and has a name of this format: '''{library_id}-{library-name}'''.
"},{"location":"extension/fuse/#the-folder-for-a-library","title":"The folder for a library","text":"$ ls -lhp /data/seafile-fuse/abc@abc.com/5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\n\n-rw-r--r-- 1 root root 501K Jan 1 2015 image.png\n-rw-r--r-- 1 root root 501K Jan 1 2015 sample.jpg\n"},{"location":"extension/fuse/#if-you-get-a-permission-denied-error","title":"If you get a \"Permission denied\" error","text":"If you get an error message saying \"Permission denied\" when running ./seaf-fuse.sh start, most likely you are not in the \"fuse group\". You should:
Add yourself to the fuse group
sudo usermod -a -G fuse <your-user-name>\n Log out of your shell and log in again
Run ./seaf-fuse.sh start <path> again. Deployment Tips
The steps in this guide only cover installing Collabora as another container on the same Docker host that your Seafile container runs on. Please make sure your host has sufficient cores and RAM.
If you want to install it on another host, please refer to the Collabora documentation for instructions. Then follow the steps here to configure seahub_settings.py to enable online office.
Note
To integrate LibreOffice with Seafile, you have to enable HTTPS in your Seafile server:
Deploy in DockerDeploy from binary packagesModify .env file:
SEAFILE_SERVER_PROTOCOL=https\n Please follow the links to enable https by Nginx
Download the collabora.yml
wget https://manual.seafile.com/13.0/repo/docker/collabora.yml\n Add collabora.yml to the COMPOSE_FILE list (i.e., COMPOSE_FILE='...,collabora.yml') and add the relevant options to .env
COLLABORA_IMAGE=collabora/code:24.04.5.1.1 # image of LibreOffice\nCOLLABORA_PORT=6232 # expose port\nCOLLABORA_USERNAME=<your LibreOffice admin username>\nCOLLABORA_PASSWORD=<your LibreOffice admin password>\nCOLLABORA_ENABLE_ADMIN_CONSOLE=true # enable admin console or not\nCOLLABORA_REMOTE_FONT= # remote font url\nCOLLABORA_ENABLE_FILE_LOGGING=false # use file logs or not, see FAQ\n"},{"location":"extension/libreoffice_online/#config-seafile","title":"Config Seafile","text":"Add the following config options to seahub_settings.py:
OFFICE_SERVER_TYPE = 'CollaboraOffice'\nENABLE_OFFICE_WEB_APP = True\nOFFICE_WEB_APP_BASE_URL = 'http://collabora:9980/hosting/discovery'\n\n# Expiration of WOPI access token\n# WOPI access token is a string used by Seafile to determine the file's\n# identity and permissions when use LibreOffice Online view it online\n# And for security reason, this token should expire after a set time period\nWOPI_ACCESS_TOKEN_EXPIRATION = 30 * 60 # seconds\n\n# List of file formats that you want to view through LibreOffice Online\n# You can change this value according to your preferences\n# And of course you should make sure your LibreOffice Online supports to preview\n# the files with the specified extensions\nOFFICE_WEB_APP_FILE_EXTENSION = ('odp', 'ods', 'odt', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt', 'pptm', 'pptx', 'doc', 'docm', 'docx')\n\n# Enable edit files through LibreOffice Online\nENABLE_OFFICE_WEB_APP_EDIT = True\n\n# types of files should be editable through LibreOffice Online\nOFFICE_WEB_APP_EDIT_FILE_EXTENSION = ('odp', 'ods', 'odt', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt', 'pptm', 'pptx', 'doc', 'docm', 'docx')\n Then restart Seafile.
Click an office file in the Seafile web interface and you will see the online preview rendered by CollaboraOnline. Here is an example:
"},{"location":"extension/libreoffice_online/#trouble-shooting","title":"Trouble shooting","text":"Understanding how the integration works will help you debug the problem. When a user visits a file page:
The CollaboraOnline container will output its logs to stdout; you can use the following command to access them:
docker logs seafile-collabora\n If you would like to save logs to a file (i.e., a .log file), you can modify .env with the following statement, and uncomment the relevant lines in collabora.yml
# .env\nCOLLABORA_ENABLE_FILE_LOGGING=True\nCOLLABORA_PATH=/opt/collabora # path of the collabora logs\n # collabora.yml\n# remove the following notes\n...\nservices:\n collabora:\n ...\n volumes:\n - \"${COLLABORA_PATH:-/opt/collabora}/logs:/opt/cool/logs/\" # chmod 777 needed\n ...\n...\n Create the logs directory, and restart Seafile server
mkdir -p /opt/collabora\nchmod 777 /opt/collabora\ndocker compose down\ndocker compose up -d\n"},{"location":"extension/libreoffice_online/#collaboraonline-server-on-a-separate-host","title":"CollaboraOnline server on a separate host","text":"For independent deployment of CollaboraOnline on a single server, please refer to the official documentation. After a successful deployment, you only need to specify the values of the following fields in seahub_settings.py and then restart the service.
OFFICE_SERVER_TYPE = 'CollaboraOffice'\nENABLE_OFFICE_WEB_APP = True\nOFFICE_WEB_APP_BASE_URL = 'https://<Your CollaboraOnline host url>/hosting/discovery'\nWOPI_ACCESS_TOKEN_EXPIRATION = 30 * 60 \nENABLE_OFFICE_WEB_APP_EDIT = True\n"},{"location":"extension/metadata-server/","title":"Metadata server","text":"Metadata server aims to provide metadata management for your libraries, so as to better understand the relevant information of your libraries.
"},{"location":"extension/metadata-server/#deployment","title":"Deployment","text":"Prerequisites
The startup of Metadata server requires using Redis as the cache server (it should be the default cache server in Seafile 13.0). So you must deploy Redis for Seafile, then modify seafile.conf, seahub_settings.py and seafevents.conf to enable it before deploying metadata server.
Warning
Please make sure your Seafile service has been deployed before deploying Metadata server. This is because Metadata server needs to read Seafile's configuration file seafile.conf. If you deploy Metadata server before or at the same time as Seafile, it may not be able to detect seafile.conf and will fail to start.
Please download the file with the following command:
Deploy in the same machine with SeafileStandaloneNote
You have to download this file to the same directory as seafile-server.yml
wget https://manual.seafile.com/13.0/repo/docker/md-server.yml\n Note
For standalone deployment (usually used in cluster deployments), the metadata server only supports Seafile using a storage backend such as S3.
wget https://manual.seafile.com/13.0/repo/docker/metadata-server/md-server.yml\nwget -O .env https://manual.seafile.com/13.0/repo/docker/metadata-server/env\n"},{"location":"extension/metadata-server/#modify-env","title":"Modify .env","text":"Metadata server reads all configurations from the environment and does not need a dedicated configuration file, and you don't need to add additional variables to your .env (except for standalone deployment) to get the metadata server started, because it will read the exact same configuration as the Seafile server (including JWT_PRIVATE_KEY) and keep the repository metadata locally (default /opt/seafile-data/seafile/md-data). But you still need to modify the COMPOSE_FILE list in .env, and add md-server.yml to enable the metadata server:
COMPOSE_FILE='...,md-server.yml'\n To facilitate your deployment, we provide two example configurations for reference:
"},{"location":"extension/metadata-server/#example-env-for-seafile-data-is-stored-locally","title":"Example.env for Seafile data is stored locally","text":"In this case you don't need to add any additional configuration to your .env. You can also specify image version, maximum local cache size, etc.
MD_IMAGE=seafileltd/seafile-md-server:13.0-latest\nMD_MAX_CACHE_SIZE=1GB\n"},{"location":"extension/metadata-server/#example-env-for-seafile-data-is-stored-in-the-storage-backend-eg-s3","title":"Example .env for Seafile data is stored in the storage backend (e.g., S3)","text":"First you need to create a bucket for metadata on your S3 storage backend provider. Then add or modify the following information to .env:
MD_IMAGE=seafileltd/seafile-md-server:13.0-latest\nMD_STORAGE_TYPE=s3\nS3_MD_BUCKET=...\nS3_KEY_ID=<your-key-id>\nS3_SECRET_KEY=<your-secret-key>\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=false\nS3_AWS_REGION=us-east-1\nS3_HOST=\nS3_USE_HTTPS=true\nS3_SSE_C_KEY=\n Seafile server data must be accessible to the metadata server
To obtain metadata correctly, the metadata server must be able to access your Seafile server's data. When the metadata server is deployed together with the Seafile server, it automatically picks up the Seafile server's configuration, so you don't need to worry about this. If your metadata server is deployed standalone (usually in a cluster environment), make sure that the storage-related settings in the metadata server's .env (e.g., SEAF_SERVER_STORAGE_TYPE) are consistent with the Seafile server's .env, and that the metadata server can access the Seafile server's configuration files (e.g., seafile.conf).
The following table lists all environment variables related to the metadata server:
| Variable | Description | Required / Default |
| -------- | ----------- | ------------------ |
| JWT_PRIVATE_KEY | The JWT key used to connect with the Seafile server | Required |
| MD_MAX_CACHE_SIZE | The maximum cache size | Optional, default 1GB |
| REDIS_HOST | Your Redis service host | Optional, default redis |
| REDIS_PORT | Your Redis service port | Optional, default 6379 |
| REDIS_PASSWORD | Your Redis access password | Optional |
| MD_STORAGE_TYPE | Where the metadata is stored. Available options are disk (local storage) and s3 | Default disk |
| S3_MD_BUCKET | Name of the S3 bucket storing metadata | Required when MD_STORAGE_TYPE=s3 |
| MD_CHECK_UPDATE_INTERVAL | The interval for updating metadata of a repository | Default 30m |
| MD_FILE_COUNT_LIMIT | The maximum number of files in a repository that the metadata feature allows. If the number of files in a repository exceeds this value, metadata management will not be enabled for it. For a repository with metadata management enabled, once the number of records reaches this value, metadata management of any files not yet recorded will be skipped. | Default 100000 |

In addition, there are some environment variables for S3 authorization; please refer to the entries with the S3_ prefix in this table (the bucket names for Seafile are also needed).
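MD_CHECK_UPDATE_INTERVAL is shown with a unit suffix (30m). As a rough sketch of how such duration strings map to seconds — the suffix convention is an assumption based on that example, and interval_to_seconds is an illustrative helper, not part of Seafile:

```python
import re

# Assumed suffix convention (s/m/h/d) based on the "30m" default above;
# the metadata server's actual parser may differ.
UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400}

def interval_to_seconds(value: str) -> int:
    """Convert an interval like '30m' or '2h' (or a plain number of seconds) to seconds."""
    m = re.fullmatch(r"(\d+)([smhd]?)", value.strip())
    if not m:
        raise ValueError(f"bad interval: {value!r}")
    number, unit = m.groups()
    return int(number) * UNITS.get(unit, 1)
```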
Metadata server supports Redis only
To enable the metadata feature, you have to use Redis for the cache: CACHE_PROVIDER must be set to redis in your .env
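Before starting the service, it can be useful to sanity-check these two prerequisites. Below is a minimal sketch assuming a standard KEY=VALUE .env layout; check_env is a hypothetical helper, not a Seafile tool:

```python
from pathlib import Path

def check_env(path="/opt/seafile/.env"):
    """Parse a .env file and report missing metadata-server prerequisites."""
    env = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip().strip("'\"")
    problems = []
    if env.get("CACHE_PROVIDER") != "redis":
        problems.append("CACHE_PROVIDER must be set to redis")
    if "md-server.yml" not in env.get("COMPOSE_FILE", ""):
        problems.append("md-server.yml is missing from COMPOSE_FILE")
    return problems
```

An empty result means both prerequisites from this section are satisfied.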
seahub_settings.py","text":"To enable the metadata server in Seafile, please add the following fields in your seahub_settings.py:
ENABLE_METADATA_MANAGEMENT = True\nMETADATA_SERVER_URL = 'http://seafile-md-server:8084'\n ENABLE_METADATA_MANAGEMENT = True\nMETADATA_SERVER_URL = 'http://<your metadata-server host>:8084'\n"},{"location":"extension/metadata-server/#start-service","title":"Start service","text":"You can use the following commands to start the metadata server (the Seafile service also has to be restarted):
docker compose down\ndocker compose up -d\n"},{"location":"extension/metadata-server/#verify-metadata-server-and-enable-it-in-the-seafile","title":"Verify Metadata server and enable it in the Seafile","text":"Check the container log of seafile-md-server; you will see the following messages if it is running fine:
$ docker logs -f seafile-md-server\n\n[md-server] [2025-03-27 02:30:55] [INFO] Created data links\n[md-server] [2025-03-27 02:30:55] [INFO] Database initialization completed\n[md-server] [2025-03-27 02:30:55] [INFO] Starting Metadata server\n 2. Check seafevents.log and seahub.log: you should see the following information in seafevents.log and no errors reported in seahub.log: [2025-02-23 06:08:05] [INFO] seafevents.repo_metadata.index_worker:134 refresh_lock refresh_thread Starting refresh locks\n[2025-02-23 06:08:05] [INFO] seafevents.repo_metadata.slow_task_handler:61 worker_handler slow_task_handler_thread_0 starting update metadata work\n[2025-02-23 06:08:05] [INFO] seafevents.repo_metadata.slow_task_handler:61 worker_handler slow_task_handler_thread_1 starting update metadata work\n[2025-02-23 06:08:05] [INFO] seafevents.repo_metadata.slow_task_handler:61 worker_handler slow_task_handler_thread_2 starting update metadata work\n Switch on Enable extended properties in the library's Settings
Finally, you can see the metadata of your library in the Views tab
When you deploy Seafile server and Metadata server to the same machine, Metadata server will use the same persistence directory (e.g. /opt/seafile-data) as Seafile server. Metadata server will use the following directories or files:
/opt/seafile-data/seafile/md-data: Metadata server data and cache/opt/seafile-data/seafile/logs/seaf-md-server: The logs directory of the metadata server, consisting of a running log and an access log. Currently, the status updates of files and libraries on the client and web interface are based on polling the server. The latest status cannot be reflected in real time on the client due to polling delays. The client needs to periodically refresh library modifications, file locking, subdirectory permissions and other information, which causes additional performance overhead on the server.
When a directory is opened on the web interface, the lock status of the file cannot be updated in real time, and the page needs to be refreshed.
The notification server uses the WebSocket protocol and maintains a two-way communication connection with the client or the web interface. When the above changes occur, seaf-server notifies the notification server of the changes, and the notification server then notifies the client or the web interface in real time. This not only improves real-time behavior, but also reduces the performance overhead on the server.
"},{"location":"extension/notification-server/#supported-update-reminder-types","title":"Supported update reminder types","text":"Since Seafile 12.0, we use a separate Docker image to deploy the notification server. First, download notification-server.yml to the Seafile directory:
wget https://manual.seafile.com/13.0/repo/docker/notification-server.yml\n Modify .env, and insert notification-server.yml into COMPOSE_FILE:
COMPOSE_FILE='seafile-server.yml,caddy.yml,notification-server.yml'\n then add or modify ENABLE_NOTIFICATION_SERVER:
ENABLE_NOTIFICATION_SERVER=true\n Finally, you can run the notification server with the following command:
docker compose down\ndocker compose up -d\n"},{"location":"extension/notification-server/#checking-notification-server-status","title":"Checking notification server status","text":"When the notification server is working, you can access http://127.0.0.1:8083/ping from your browser, which will answer {\"ret\": \"pong\"}. If you have a proxy configured, you can access https://seafile.example.com/notification/ping from your browser instead.
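The ping check above can also be scripted, e.g. for monitoring. A minimal sketch using only the Python standard library; the URL is the default from this guide, and is_pong / check_notification_server are illustrative names:

```python
import json
from urllib.request import urlopen

PING_URL = "http://127.0.0.1:8083/ping"  # default port from this guide; adjust for a proxy setup

def is_pong(body: str) -> bool:
    """Return True if the response body matches the documented answer {"ret": "pong"}."""
    try:
        data = json.loads(body)
    except ValueError:
        return False
    return isinstance(data, dict) and data.get("ret") == "pong"

def check_notification_server(url: str = PING_URL) -> bool:
    """Fetch the ping endpoint and validate its answer."""
    with urlopen(url, timeout=5) as resp:
        return is_pong(resp.read().decode("utf-8"))
```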
If the client works with notification server, there should be a log message in seafile.log or seadrive.log.
Notification server is enabled on the remote server xxxx\n"},{"location":"extension/notification-server/#notification-server-in-seafile-cluster","title":"Notification Server in Seafile cluster","text":"There are no additional features for the notification server in the Pro Edition; it works the same as in the Community Edition.
If you enable clustering, you need to deploy the notification server on one of the servers or on a separate server. The load balancer should forward WebSocket requests to this node.
Download .env and notification-server.yml to the notification server directory:
wget https://manual.seafile.com/13.0/repo/docker/notification-server/notification-server.yml\nwget -O .env https://manual.seafile.com/13.0/repo/docker/notification-server/env\n Then modify the .env file according to your environment. The following fields are needed to be modified:
| Variable | Description |
| -------- | ----------- |
| SEAFILE_MYSQL_DB_HOST | Seafile MySQL host |
| SEAFILE_MYSQL_DB_USER | Seafile MySQL user, default is seafile |
| SEAFILE_MYSQL_DB_PASSWORD | Seafile MySQL password |
| TIME_ZONE | Time zone |
| JWT_PRIVATE_KEY | JWT key, the same as the config in the Seafile .env file |
| SEAFILE_SERVER_HOSTNAME | Seafile host name |
| SEAFILE_SERVER_PROTOCOL | http or https |

Now, you can run the notification server with the following command:
docker compose up -d\n Then modify the .env on the host where Seafile is deployed:
ENABLE_NOTIFICATION_SERVER=true\nNOTIFICATION_SERVER_URL=https://seafile.example.com/notification\nINNER_NOTIFICATION_SERVER_URL=http://<your notification server host>:8083\n Difference between NOTIFICATION_SERVER_URL and INNER_NOTIFICATION_SERVER_URL
NOTIFICATION_SERVER_URL: used for the connection between the client (i.e., the user's browser) and the notification serverINNER_NOTIFICATION_SERVER_URL: used for the connection between the Seafile server and the notification serverFinally, you need to configure the load balancer according to the following forwarding rules:
Forward /notification/ping requests to the notification server via the HTTP protocol. Forward /notification requests to the notification server. Here is a configuration that uses HAProxy to support the notification server. The HAProxy version needs to be >= 2.0. You should use similar configurations for other load balancers.
#/etc/haproxy/haproxy.cfg\n\n# Other existing haproxy configurations\n......\n\nfrontend seafile\n bind 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n acl notif_ping_request url_sub -i /notification/ping\n acl ws_requests url -i /notification\n acl hdr_connection_upgrade hdr(Connection) -i upgrade\n acl hdr_upgrade_websocket hdr(Upgrade) -i websocket\n use_backend ws_backend if hdr_connection_upgrade hdr_upgrade_websocket\n use_backend notif_ping_backend if notif_ping_request\n use_backend ws_backend if ws_requests\n default_backend backup_nodes\n\nbackend backup_nodes\n cookie SERVERID insert indirect nocache\n server seafileserver01 192.168.0.137:80\n\nbackend notif_ping_backend\n option forwardfor\n server ws 192.168.0.137:8083\n\nbackend ws_backend\n option forwardfor # This sets X-Forwarded-For\n server ws 192.168.0.137:8083\n"},{"location":"extension/office_web_app/","title":"Office Online Server","text":"In Seafile Professional Server Version 4.4.0 (or above), you can use Microsoft Office Online Server (formerly named Office Web Apps) to preview documents online. Office Online Server provides the best preview for all Office format files. It also supports collaborative editing of Office files directly in the web browser. For organizations with a Microsoft Office Volume License, it's free to use Office Online Server. For more information about Office Online Server and how to deploy it, please refer to https://technet.microsoft.com/en-us/library/jj219455(v=office.16).aspx.
Seafile only supports Office Online Server 2016 and above
To use Office Online Server for preview, please add the following config options to seahub_settings.py.
# Enable Office Online Server\nENABLE_OFFICE_WEB_APP = True\n\n# Url of Office Online Server's discovery page\n# The discovery page tells Seafile how to interact with Office Online Server when viewing files online\n# You should change `http://example.office-web-app.com` to your actual Office Online Server address\nOFFICE_WEB_APP_BASE_URL = 'http://example.office-web-app.com/hosting/discovery'\n\n# Expiration of WOPI access token\n# The WOPI access token is a string used by Seafile to determine the file's\n# identity and permissions when Office Online Server views it online\n# For security reasons, this token should expire after a set time period\nWOPI_ACCESS_TOKEN_EXPIRATION = 60 * 60 * 24 # seconds\n\n# List of file formats that you want to view through Office Online Server\n# You can change this value according to your preferences\n# And of course you should make sure your Office Online Server supports previewing\n# the files with the specified extensions\nOFFICE_WEB_APP_FILE_EXTENSION = ('ods', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt',\n 'pptm', 'pptx', 'doc', 'docm', 'docx')\n\n# Enable editing files through Office Online Server\nENABLE_OFFICE_WEB_APP_EDIT = True\n\n# Types of files that should be editable through Office Online Server\n# Note, Office Online Server 2016 is needed for editing docx\nOFFICE_WEB_APP_EDIT_FILE_EXTENSION = ('xlsx', 'pptx', 'docx')\n\n\n# HTTPS authentication related (optional)\n\n# Server certificates\n# Path to a CA_BUNDLE file or directory with certificates of trusted CAs\n# NOTE: If this setting is a directory, the directory must have been processed using the c_rehash utility supplied with OpenSSL.\nOFFICE_WEB_APP_SERVER_CA = '/path/to/certfile'\n\n\n# Client certificates\n# You can specify a single file (containing the private key and the certificate) to use as the client-side certificate\nOFFICE_WEB_APP_CLIENT_PEM = 'path/to/client.pem'\n\n# or you can specify these two file paths to use as the client-side certificate\nOFFICE_WEB_APP_CLIENT_CERT = 'path/to/client.cert'\nOFFICE_WEB_APP_CLIENT_KEY = 'path/to/client.key'\n Then restart
./seafile.sh restart\n./seahub.sh restart\n After you click a document of a type specified in seahub_settings.py, you will see the new preview page.
"},{"location":"extension/office_web_app/#trouble-shooting","title":"Trouble shooting","text":"Understanding how the web app integration works will help you debug the problem. When a user visits a file page:
Please check the Nginx log for Seahub (for step 3) and the Office Online Server logs to see which step went wrong.
Warning
You should make sure you have configured at least a few GB of paging file space in your Windows system. Otherwise the IIS worker processes may die randomly when handling Office Online requests.
"},{"location":"extension/only_office/","title":"OnlyOffice","text":"Seafile supports OnlyOffice for viewing and editing office files online. In order to use OnlyOffice, you must first deploy an OnlyOffice server.
Deployment Tips
You can deploy OnlyOffice on the same machine as Seafile (only supported when deploying with Docker, with sufficient cores and RAM) using the onlyoffice.yml provided by Seafile according to this document, or you can deploy it on a different machine according to the official OnlyOffice document.
Download the onlyoffice.yml
wget https://manual.seafile.com/13.0/repo/docker/onlyoffice.yml\n Insert onlyoffice.yml into the COMPOSE_FILE list (i.e., COMPOSE_FILE='...,onlyoffice.yml'), and add the following OnlyOffice configuration to the .env file.
# OnlyOffice image\nONLYOFFICE_IMAGE=onlyoffice/documentserver:8.1.0.1\n\n# Persistent storage directory of OnlyOffice\nONLYOFFICE_VOLUME=/opt/onlyoffice\n\n# OnlyOffice document server port\nONLYOFFICE_PORT=6233\n\n# jwt secret, generated by `pwgen -s 40 1` \nONLYOFFICE_JWT_SECRET=<your jwt secret>\n Note
From Seafile 12.0, OnlyOffice's JWT verification is always enabled. Secure communication between Seafile and OnlyOffice is protected by a shared secret. You can generate the JWT secret with the following command
pwgen -s 40 1\n Also modify seahub_settings.py
ENABLE_ONLYOFFICE = True\nONLYOFFICE_APIJS_URL = 'https://seafile.example.com:6233/web-apps/apps/api/documents/api.js'\nONLYOFFICE_JWT_SECRET = '<your jwt secret>'\n\n# NOTE\n# You do NOT need to configure the following two settings explicitly.\n# The default values are as follows.\n# If you have custom needs, you can also configure them, which will override the default values.\nONLYOFFICE_FILE_EXTENSION = ('doc', 'docx', 'ppt', 'pptx', 'xls', 'xlsx', 'odt', 'fodt', 'odp', 'fodp', 'ods', 'fods', 'ppsx', 'pps', 'csv')\nONLYOFFICE_EDIT_FILE_EXTENSION = ('docx', 'pptx', 'xlsx', 'csv')\nOFFICE_PREVIEW_MAX_SIZE = 30 * 1024 * 1024 # preview size, 30 MB\n Tip
By default, OnlyOffice uses port 6233 for communication between Seafile and the Document Server. You can change the bound port by setting ONLYOFFICE_PORT; the port in ONLYOFFICE_APIJS_URL in seahub_settings.py should be changed accordingly.
The following configuration options are only for OnlyOffice experts. You can create and mount a custom configuration file called local-production-linux.json to force some settings.
nano local-production-linux.json\n For example, you can configure OnlyOffice to save automatically by copying the following code block into this file:
{\n \"services\": {\n \"CoAuthoring\": {\n \"autoAssembly\": {\n \"enable\": true,\n \"interval\": \"5m\"\n }\n }\n },\n \"FileConverter\": {\n \"converter\": {\n \"downloadAttemptMaxCount\": 3\n }\n }\n}\n Mount this config file into your onlyoffice block in onlyoffice.yml:
service:\n ...\n onlyoffice:\n ...\n volumes:\n ...\n - <Your path to local-production-linux.json>:/etc/onlyoffice/documentserver/local-production-linux.json\n...\n For more information you can check the official documentation: https://api.onlyoffice.com/editors/signature/ and https://github.com/ONLYOFFICE/Docker-DocumentServer#available-configuration-parameters
"},{"location":"extension/only_office/#restart-seafile-docker-instance-and-test-that-onlyoffice-is-running","title":"Restart Seafile-docker instance and test that OnlyOffice is running","text":"docker compose down\ndocker compose up -d\n Success
After the installation process is finished, visit this page to make sure you have deployed OnlyOffice successfully: http{s}://{your Seafile server's domain or IP}:6233/welcome. You will see the Document Server is running message on this page.
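This welcome-page check can also be scripted. A small sketch using the Python standard library, assuming the default port from this guide; looks_healthy and check_onlyoffice are illustrative names:

```python
from urllib.request import urlopen

WELCOME_URL = "http://127.0.0.1:6233/welcome"  # adjust host/scheme to your deployment

def looks_healthy(page_html: str) -> bool:
    """The welcome page should contain the documented status string."""
    return "Document Server is running" in page_html

def check_onlyoffice(url: str = WELCOME_URL) -> bool:
    """Fetch the welcome page and look for the status string."""
    with urlopen(url, timeout=5) as resp:
        return looks_healthy(resp.read().decode("utf-8", errors="replace"))
```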
First, run docker logs -f seafile-onlyoffice, then open an office file. If the \"Download failed.\" error appears on the page, check the logs for the following error:
==> /var/log/onlyoffice/documentserver/converter/out.log <==\n...\nError: DNS lookup {local IP} (family:undefined, host:undefined) is not allowed. Because, It is a private IP address.\n...\n If this error message appears and you haven't enabled JWT while using a local network, it is likely a safeguard triggered by the OnlyOffice server for enhanced security. (https://github.com/ONLYOFFICE/DocumentServer/issues/2268#issuecomment-1600787905)
So, as mentioned in the post, we highly recommend enabling JWT in your integration to fix this problem.
"},{"location":"extension/only_office/#the-document-security-token-is-not-correctly-formed","title":"The document security token is not correctly formed","text":"Starting from OnlyOffice Docker-DocumentServer version 7.2, JWT is enabled by default on OnlyOffice server.
So, for security reasons, please configure OnlyOffice to use a JWT secret.
"},{"location":"extension/only_office/#onlyoffice-on-a-separate-host-and-url","title":"OnlyOffice on a separate host and URL","text":"For independent deployment of OnlyOffice on a separate server, please refer to the official documentation. After a successful deployment, you only need to specify the values of the following fields in seahub_settings.py and then restart the service.
ENABLE_ONLYOFFICE = True\nONLYOFFICE_APIJS_URL = 'http{s}://<Your OnlyOffice host url>/web-apps/apps/api/documents/api.js'\nONLYOFFICE_JWT_SECRET = '<your jwt secret>'\nOFFICE_PREVIEW_MAX_SIZE = 30 * 1024 * 1024\n"},{"location":"extension/only_office/#about-ssl","title":"About SSL","text":"For deployments using the onlyoffice.yml file in this document, SSL is primarily handled by Caddy. If the OnlyOffice document server and the Seafile server are not on the same machine, please refer to the official document to configure SSL for OnlyOffice.
From Seafile 13, users can enable Seafile AI to support the following features:
Prerequisites of Seafile AI deployment
To deploy Seafile AI, you have to deploy the metadata server extension first. Then you can follow this manual to deploy Seafile AI.
AIGC statement in Seafile
With the help of large language models and face recognition models, Seafile AI supports image recognition and text generation. The generated content is diverse and nondeterministic, and users need to evaluate it themselves. Seafile will not be responsible for AI-generated content (AIGC).
At the same time, Seafile AI supports the use of custom LLMs and face recognition models. Different large language models will have different impacts on AIGC (both features and performance), so Seafile will not be responsible for the corresponding rate (i.e., tokens/s), token consumption, or generated content, including but not limited to:
When users use their own OpenAI-compatible LLM service (e.g., LM Studio, Ollama) with self-ablated or abliterated models, Seafile will not be responsible for possible bugs (such as infinite loops outputting the same meaningless content). Seafile also does not recommend using documents such as SeaDoc to evaluate the performance of ablated models.
"},{"location":"extension/seafile-ai/#deploy-seafile-ai-basic-service","title":"Deploy Seafile AI basic service","text":""},{"location":"extension/seafile-ai/#deploy-seafile-ai-on-the-host-with-seafile","title":"Deploy Seafile AI on the host with Seafile","text":"The Seafile AI basic service uses API calls to an external large language model service to implement file labeling, file and image summaries, text translation, and sdoc writing assistance.
Seafile AI requires Redis cache
In order to deploy Seafile AI correctly, you have to use Redis as the cache. Please set CACHE_PROVIDER=redis in .env and configure the Redis-related settings correctly.
Download seafile-ai.yml
wget https://manual.seafile.com/13.0/repo/docker/seafile-ai.yml\n Modify .env, insert or modify the following fields:
COMPOSE_FILE='...,seafile-ai.yml' # add seafile-ai.yml\n\nENABLE_SEAFILE_AI=true\nSEAFILE_AI_LLM_TYPE=openai\nSEAFILE_AI_LLM_KEY=<your openai LLM access key>\nSEAFILE_AI_LLM_MODEL=gpt-4o-mini # recommend\n COMPOSE_FILE='...,seafile-ai.yml' # add seafile-ai.yml\n\nENABLE_SEAFILE_AI=true\nSEAFILE_AI_LLM_TYPE=deepseek\nSEAFILE_AI_LLM_KEY=<your LLM access key>\nSEAFILE_AI_LLM_MODEL=deepseek-chat # recommend\n COMPOSE_FILE='...,seafile-ai.yml' # add seafile-ai.yml\n\nENABLE_SEAFILE_AI=true\nSEAFILE_AI_LLM_TYPE=azure\nSEAFILE_AI_LLM_URL= # your deployment url, leave blank to use default endpoint\nSEAFILE_AI_LLM_KEY=<your API key>\nSEAFILE_AI_LLM_MODEL=<your deployment name>\n COMPOSE_FILE='...,seafile-ai.yml' # add seafile-ai.yml\n\nENABLE_SEAFILE_AI=true\nSEAFILE_AI_LLM_TYPE=ollama\nSEAFILE_AI_LLM_URL=<your LLM endpoint>\nSEAFILE_AI_LLM_KEY=<your LLM access key>\nSEAFILE_AI_LLM_MODEL=<your model-id>\n COMPOSE_FILE='...,seafile-ai.yml' # add seafile-ai.yml\n\nENABLE_SEAFILE_AI=true\nSEAFILE_AI_LLM_TYPE=huggingface\nSEAFILE_AI_LLM_URL=<your huggingface API endpoint>\nSEAFILE_AI_LLM_KEY=<your huggingface API key>\nSEAFILE_AI_LLM_MODEL=<model provider>/<model-id>\n COMPOSE_FILE='...,seafile-ai.yml' # add seafile-ai.yml\n\nENABLE_SEAFILE_AI=true\nSEAFILE_AI_LLM_TYPE=proxy\nSEAFILE_AI_LLM_URL=<your proxy url>\nSEAFILE_AI_LLM_KEY=<your proxy virtual key> # optional\nSEAFILE_AI_LLM_MODEL=<model-id>\n Seafile AI utilizes LiteLLM to interact with LLM services. For a complete list of supported LLM providers, please refer to this documentation. Then fill the following fields in your .env:
COMPOSE_FILE='...,seafile-ai.yml' # add seafile-ai.yml\nENABLE_SEAFILE_AI=true\n\n# according to your situation\nSEAFILE_AI_LLM_TYPE=...\nSEAFILE_AI_LLM_URL=...\nSEAFILE_AI_LLM_KEY=...\nSEAFILE_AI_LLM_MODEL=...\n For example, if you are using an LLM service with OpenAI-compatible endpoints, you should set SEAFILE_AI_LLM_TYPE to other or openai, and set the other LLM configuration items accordingly.
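The provider examples above differ in which variables they require. Below is a rough validation sketch; the REQUIRED mapping is inferred from the examples in this section (not Seafile AI's actual rules), and missing_llm_vars is an illustrative helper:

```python
# Required variables per provider type, inferred from the .env examples above.
REQUIRED = {
    "openai":      ["SEAFILE_AI_LLM_KEY", "SEAFILE_AI_LLM_MODEL"],
    "deepseek":    ["SEAFILE_AI_LLM_KEY", "SEAFILE_AI_LLM_MODEL"],
    "azure":       ["SEAFILE_AI_LLM_KEY", "SEAFILE_AI_LLM_MODEL"],
    "ollama":      ["SEAFILE_AI_LLM_URL", "SEAFILE_AI_LLM_MODEL"],
    "huggingface": ["SEAFILE_AI_LLM_URL", "SEAFILE_AI_LLM_KEY", "SEAFILE_AI_LLM_MODEL"],
    "proxy":       ["SEAFILE_AI_LLM_URL", "SEAFILE_AI_LLM_MODEL"],
}

def missing_llm_vars(env: dict) -> list:
    """Return the names of SEAFILE_AI_LLM_* variables missing for the chosen type."""
    llm_type = env.get("SEAFILE_AI_LLM_TYPE", "openai")
    required = REQUIRED.get(
        llm_type,
        ["SEAFILE_AI_LLM_URL", "SEAFILE_AI_LLM_KEY", "SEAFILE_AI_LLM_MODEL"],
    )
    return [name for name in required if not env.get(name)]
```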
About model selection
Seafile AI supports using large model providers from LiteLLM or large model services with OpenAI-compatible endpoints. Therefore, Seafile AI is compatible with most custom large model services besides the default model (gpt-4o-mini), but in order to ensure that all Seafile AI features work normally, you need to select a multimodal large model (e.g., one that supports image input and recognition).
Restart Seafile server:
docker compose down\ndocker compose up -d\n Download seafile-ai.yml and .env:
wget https://manual.seafile.com/13.0/repo/docker/seafile-ai/seafile-ai.yml\nwget -O .env https://manual.seafile.com/13.0/repo/docker/seafile-ai/env\n Modify .env on the host that will deploy Seafile AI according to the following table
| Variable | Description |
| -------- | ----------- |
| SEAFILE_VOLUME | The volume directory of Seafile data |
| JWT_PRIVATE_KEY | JWT key, the same as the config in the Seafile .env file |
| INNER_SEAHUB_SERVICE_URL | Intranet URL for accessing the Seahub component, like http://<your Seafile server intranet IP> |
| REDIS_HOST | Redis server host |
| REDIS_PORT | Redis server port |
| REDIS_PASSWORD | Redis server password |
| SEAFILE_AI_LLM_TYPE | Large Language Model (LLM) type. Default is openai |
| SEAFILE_AI_LLM_URL | LLM API endpoint |
| SEAFILE_AI_LLM_KEY | LLM API key |
| SEAFILE_AI_LLM_MODEL | LLM model id (or name). Default is gpt-4o-mini |
| FACE_EMBEDDING_SERVICE_URL | Face embedding service URL |

then start your Seafile AI server:
docker compose up -d\n Modify .env on the host where Seafile is deployed
SEAFILE_AI_SERVER_URL=http://<your seafile ai host>:8888\n then restart your Seafile server
docker compose down && docker compose up -d\n The face embedding service is used to detect and encode faces in images and is an extension component of Seafile AI. Generally, we recommend that you deploy the service on a machine with a GPU and a graphics card driver that supports OnnxRuntime (so it can also be deployed on a different machine from the Seafile AI base service). Currently, the Seafile AI face embedding service only supports the following modes:
If you plan to deploy the face embedding service in an environment using a GPU, you need to make sure your graphics card is within the range supported by the acceleration environment (e.g., CUDA 12.4 is supported) and correctly mapped in the /dev/dri directory. In some cases, cloud servers and WSL under certain driver versions may not be supported.
Download Docker compose files
For CUDA: wget -O face-embedding.yml https://manual.seafile.com/13.0/repo/docker/face-embedding/cuda.yml\n For CPU: wget -O face-embedding.yml https://manual.seafile.com/13.0/repo/docker/face-embedding/cpu.yml\n
COMPOSE_FILE='...,face-embedding.yml' # add face-embedding.yml\n\nFACE_EMBEDDING_VOLUME=/opt/face_embedding\n Restart Seafile server
docker compose down\ndocker compose up -d\n Enable face recognition in the repo's settings:
Since the face embedding service may need to be deployed on hosts with GPU(s), it may not be deployed together with the Seafile AI basic service. In this case, you should make some changes to the Docker compose file so that the service can be accessed normally.
Modify the .yml file and delete the commented-out lines to expose the service port:
services:\n face-embedding:\n ...\n ports:\n - 8886:8886\n Modify the .env of where deployed Seafile AI:
FACE_EMBEDDING_SERVICE_URL=http://<your face embedding service host>:8886\n Make sure JWT_PRIVATE_KEY is set in the face embedding .env and is the same as on the Seafile server
Restart Seafile server
docker compose down\ndocker compose up -d\n By default, the persistent volume is /opt/face_embedding. It will consist of two subdirectories:
/opt/face_embedding/logs: Contains the startup log and access log of the face embedding service/opt/face_embedding/models: Contains the model files of the face embedding service. It automatically obtains the latest applicable models at each startup; these models are hosted in our Hugging Face repository. You can also manually download models into this directory (e.g., if automatic pulling fails).By default, the access key used by the face embedding service is the same as that used by the Seafile server, namely JWT_PRIVATE_KEY. You may want to change this for security reasons. If you need to customize the access key for the face embedding service, follow these steps:
Modify .env file for both face embedding and Seafile AI:
FACE_EMBEDDING_SERVICE_KEY=<your customizing access keys>\n Restart Seafile server
docker compose down\ndocker compose up -d\n Seafile supports counting users' AI usage (how many tokens are used) and setting monthly AI quotas for users.
Open $SEAFILE_VOLUME/seafile/conf/seahub_settings.py and add AI price information (i.e., the cost per 1k tokens):
AI_PRICES = {\n\"gpt-4o-mini\": { # replace gpt-4o-mini with your model name\n \"input_tokens_1k\": 0.0011, # price per 1k input tokens\n \"output_tokens_1k\": 0.0044 # price per 1k output tokens\n }\n}\n
monthly_ai_credit_per_user for organization user
For organizational team users, monthly_ai_credit_per_user applies to the entire team. For example, when monthly_ai_credit_per_user is set to 2 (in dollars, for example) and there are 10 members in the team, all members of the team share a quota of \(2\times10=20\) dollars.
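The billing arithmetic above can be illustrated with a short sketch using the AI_PRICES units from this section; request_cost and team_monthly_credit are illustrative helpers, not Seafile APIs:

```python
# Price table in the same shape as the AI_PRICES setting above (cost per 1k tokens).
AI_PRICES = {
    "gpt-4o-mini": {
        "input_tokens_1k": 0.0011,   # price per 1k input tokens
        "output_tokens_1k": 0.0044,  # price per 1k output tokens
    }
}

def request_cost(model, input_tokens, output_tokens, prices=AI_PRICES):
    """Cost of one request: tokens are billed per thousand."""
    p = prices[model]
    return (input_tokens / 1000) * p["input_tokens_1k"] \
         + (output_tokens / 1000) * p["output_tokens_1k"]

def team_monthly_credit(per_user_credit, member_count):
    """-1 means unlimited; otherwise the whole team shares per_user_credit * member_count."""
    if per_user_credit == -1:
        return float("inf")
    return per_user_credit * member_count
```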
SeaDoc is an extension of Seafile that provides an online collaborative document editor.
SeaDoc is designed around the following key ideas:
SeaDoc excels at:
The SeaDoc architecture is demonstrated below:
Here is the workflow when a user opens an sdoc file in a browser:
Default extension in Docker deployment
This extension is already installed by default when deploying Seafile (single-node mode) by Docker.
If you would like to remove it, you can undo the steps in this section (i.e., remove seadoc.yml from the COMPOSE_FILE field and set ENABLE_SEADOC to false)
The easiest way to deploy SeaDoc is together with the Seafile server on the same host, using the same Docker network. If you need to deploy SeaDoc standalone, you can follow the next section.
Download the seadoc.yml to /opt/seafile
wget https://manual.seafile.com/13.0/repo/docker/seadoc.yml\n Modify .env, and insert seadoc.yml into COMPOSE_FILE, and enable SeaDoc server
COMPOSE_FILE='seafile-server.yml,caddy.yml,seadoc.yml'\n\nENABLE_SEADOC=true\n Start the SeaDoc server with the following command
docker compose up -d\n Now you can use SeaDoc!
"},{"location":"extension/setup_seadoc/#deploy-seadoc-standalone","title":"Deploy SeaDoc standalone","text":"If you deploy Seafile in a cluster or if you deploy Seafile with binary package, you need to setup SeaDoc as a standalone service. Here are the steps:
Download and modify the .env and seadoc.yml files in the directory /opt/seadoc
wget https://manual.seafile.com/13.0/repo/docker/seadoc/seadoc.yml\nwget -O .env https://manual.seafile.com/13.0/repo/docker/seadoc/env\n Then modify the .env file according to your environment. The following fields are needed to be modified:
| Variable | Description |
| -------- | ----------- |
| SEADOC_VOLUME | The volume directory of SeaDoc data |
| SEAFILE_MYSQL_DB_HOST | Seafile MySQL host |
| SEAFILE_MYSQL_DB_USER | Seafile MySQL user, default is seafile |
| SEAFILE_MYSQL_DB_PASSWORD | Seafile MySQL password |
| TIME_ZONE | Time zone |
| JWT_PRIVATE_KEY | JWT key, the same as the config in the Seafile .env file |
| SEAFILE_SERVER_HOSTNAME | Seafile host name |
| SEAFILE_SERVER_PROTOCOL | http or https |

(Optional) By default, the SeaDoc server will bind to port 80 on the host machine. If the port is already taken by another service, you have to change the listening port of SeaDoc:
Modify seadoc.yml
services:\n seadoc:\n ...\n ports:\n - \"<your SeaDoc server port>:80\"\n...\n Add a reverse proxy for the SeaDoc server. In a cluster environment, this means adding reverse proxy rules at the load balancer. Here we use Nginx as an example (please replace 127.0.0.1:80 with the host:port of your SeaDoc server)
...\nserver {\n ...\n\n location /sdoc-server/ {\n proxy_pass http://127.0.0.1:80/;\n proxy_redirect off;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto $scheme;\n\n client_max_body_size 100m;\n }\n\n location /socket.io {\n proxy_pass http://127.0.0.1:80;\n proxy_http_version 1.1;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection 'upgrade';\n proxy_redirect off;\n\n proxy_buffers 8 32k;\n proxy_buffer_size 64k;\n\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header Host $http_host;\n proxy_set_header X-NginX-Proxy true;\n }\n}\n <Location /sdoc-server/>\n ProxyPass \"http://127.0.0.1:80/\"\n ProxyPassReverse \"http://127.0.0.1:80/\"\n </Location>\n\n <Location /socket.io/>\n # Since Apache HTTP Server 2.4.47\n ProxyPass \"http://127.0.0.1:80/socket.io/\" upgrade=websocket\n </Location>\n Start the SeaDoc server with the following command
docker compose up -d\n Modify Seafile server's configuration and start SeaDoc server
Warning
After using a reverse proxy, your SeaDoc service will be located at the /sdoc-server path of your reverse proxy (e.g., xxx.example.com/sdoc-server). For example:
Then SEADOC_SERVER_URL will be
http{s}://xxx.example.com/sdoc-server\n Modify .env in your Seafile-server host:
ENABLE_SEADOC=true\nSEADOC_SERVER_URL=https://seafile.example.com/sdoc-server\n Restart Seafile server
Deploy in Docker (including cluster mode): docker compose down\ndocker compose up -d\n Deploy from binary packages: cd /opt/seafile/seafile-server-latest\n./seahub.sh restart\n /opt/seadoc-data
This is the directory for shared volumes. You may elect to store certain persistent information outside of a container; in our case, we keep various log files outside. This allows you to rebuild containers easily without losing important information.
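As a quick sanity check of the SEADOC_SERVER_URL convention used above, here is a shell sketch with hypothetical values (this mirrors the convention in this manual, not code shipped with Seafile):

```shell
# Hypothetical values; replace with your own deployment's settings.
SEAFILE_SERVER_PROTOCOL=https
SEAFILE_SERVER_HOSTNAME=seafile.example.com

# Behind the reverse proxy, SeaDoc lives under the /sdoc-server path,
# so SEADOC_SERVER_URL is composed as protocol://hostname/sdoc-server.
SEADOC_SERVER_URL="${SEAFILE_SERVER_PROTOCOL}://${SEAFILE_SERVER_HOSTNAME}/sdoc-server"
echo "$SEADOC_SERVER_URL"   # prints https://seafile.example.com/sdoc-server
```

If clients cannot open SeaDoc documents, comparing this composed value with the SEADOC_SERVER_URL actually set in .env is a good first debugging step.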
SeaDoc uses one database table seahub_db.sdoc_operation_log to store operation logs. The database table is cleaned automatically.
This is because the websocket for sdoc-server has not been properly configured. If you use the default Caddy proxy, it should be set up correctly.
But if you use your own proxy, you need to make sure it properly proxies your-sdoc-server-domain/socket.io to sdoc-server-docker-image-address/socket.io
This is because the browser cannot correctly load content from sdoc-server. Make sure the SeaDoc settings in .env are correct. You can open the browser's developer console to debug the issue further.
"},{"location":"extension/thumbnail-server/","title":"Thumbnail Server Overview","text":"Since Seafile 13.0, a new component, the thumbnail server, has been added. The thumbnail server can create thumbnails for images, videos, PDFs and other file types. It uses a task-queue-based architecture, so it can handle heavy workloads better than generating thumbnails inside the Seahub component.
Use this feature by forwarding thumbnail requests directly to the thumbnail server via Caddy or a reverse proxy.
"},{"location":"extension/thumbnail-server/#how-to-configure-and-run","title":"How to configure and run","text":"First download thumbnail-server.yml to Seafile directory:
wget https://manual.seafile.com/13.0/repo/docker/thumbnail-server.yml\n Modify .env, and insert thumbnail-server.yml into COMPOSE_FILE:
COMPOSE_FILE='seafile-server.yml,caddy.yml,thumbnail-server.yml'\n Add the following configuration to seahub_settings.py to enable thumbnails for videos:
# video thumbnails (disabled by default)\nENABLE_VIDEO_THUMBNAIL = True\n Finally, you can run the thumbnail server with the following command:
docker compose down\ndocker compose up -d\n"},{"location":"extension/thumbnail-server/#thumbnail-server-in-seafile-cluster","title":"Thumbnail Server in Seafile cluster","text":"There are no additional features for the thumbnail server in the Pro Edition. It works the same as in the Community Edition.
If you enable clustering, you need to deploy the thumbnail server on one of the servers or on a separate server. The load balancer should forward thumbnail requests to this node.
Download .env and thumbnail-server.yml to thumbnail server directory:
wget https://manual.seafile.com/13.0/repo/docker/thumbnail-server/thumbnail-server.yml\nwget -O .env https://manual.seafile.com/13.0/repo/docker/thumbnail-server/env\n Then modify the .env file according to your environment. The following fields need to be modified:
SEAFILE_VOLUME: the volume directory of thumbnail server data
SEAFILE_MYSQL_DB_HOST: Seafile MySQL host
SEAFILE_MYSQL_DB_USER: Seafile MySQL user, default is seafile
SEAFILE_MYSQL_DB_PASSWORD: Seafile MySQL password
TIME_ZONE: time zone
JWT_PRIVATE_KEY: JWT key, the same as in the Seafile .env file
INNER_SEAHUB_SERVICE_URL: intranet URL for accessing the Seahub component, like http://<your Seafile server intranet IP>
SEAF_SERVER_STORAGE_TYPE: the kind of storage holding the Seafile data. Available options are disk (i.e., local disk), s3 and multiple (see the details of multiple storage backends)
S3_COMMIT_BUCKET: S3 storage backend commit objects bucket
S3_FS_BUCKET: S3 storage backend fs objects bucket
S3_BLOCK_BUCKET: S3 storage backend block objects bucket
S3_KEY_ID: S3 storage backend key ID
S3_SECRET_KEY: S3 storage backend secret key
S3_AWS_REGION: region of your buckets
S3_HOST: host of your buckets
S3_USE_HTTPS: use HTTPS connections to S3 if enabled
S3_USE_V4_SIGNATURE: use the v4 protocol of S3 if enabled
S3_PATH_STYLE_REQUEST: this option asks Seafile to use path-style URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is virtual host style, such as https://bucketname.s3.amazonaws.com/object, but this style relies on advanced DNS server setup, so most self-hosted storage systems only implement the path-style format
S3_SSE_C_KEY: a 32-character random string; it can be generated by openssl rand -base64 24. Enabling SSE-C requires the V4 authentication protocol and https
Then you can run the thumbnail server with the following command:
docker compose up -d\n You need to configure the load balancer according to the following forwarding rules:
Forward /thumbnail requests to the thumbnail server via the http protocol. Here is a configuration that uses HAProxy to support the thumbnail server. The HAProxy version needs to be >= 2.0. You should use similar configurations for other load balancers.
#/etc/haproxy/haproxy.cfg\n\n# Other existing haproxy configurations\n......\n\nfrontend seafile\n bind 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n acl thumbnail_request url_sub -i /thumbnail/\n use_backend thumbnail_backend if thumbnail_request\n default_backend backup_nodes\n\nbackend backup_nodes\n cookie SERVERID insert indirect nocache\n server seafileserver01 192.168.0.2:80\n\nbackend thumbnail_backend\n option forwardfor\n server thumbnail 192.168.0.9:80\n The thumbnail server has to access Seafile's storage
The thumbnail server needs to access Seafile storage.
If you use local storage, you need to mount the /opt/seafile-data directory of the Seafile node to the thumbnail node, and set SEAFILE_VOLUME to the mounted directory correctly.
If you use single-backend S3 storage, please correctly set the relevant environment variables in .env.
If you are using multiple storage backends, you have to copy the seafile.conf of the Seafile node to the /opt/seafile-data/seafile/conf directory of the thumbnail node, and set SEAF_SERVER_STORAGE_TYPE=multiple in .env.
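If you enable SSE-C, the S3_SSE_C_KEY variable listed above must be a 32-character string. As the field description notes, openssl can generate one; base64 of 24 random bytes is exactly 32 characters:

```shell
# Generate a 32-character key suitable for S3_SSE_C_KEY.
key=$(openssl rand -base64 24)
echo "S3_SSE_C_KEY=$key"
printf '%s' "$key" | wc -c   # 32 characters
```

Keep this key safe: objects encrypted with one SSE-C key cannot be read back without it.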
/opt/seafile-data
This is the directory for shared volumes. You may elect to store certain persistent information outside of a container; in our case, we keep various log files outside. This allows you to rebuild containers easily without losing important information.
This is because generating thumbnails for high-resolution images can impact system performance. You can raise the threshold by setting the THUMBNAIL_IMAGE_ORIGINAL_SIZE_LIMIT environment variable in the env file; the default is 256 (MB).
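The threshold above can be illustrated with a small shell sketch (a hypothetical helper, not Seafile's actual code), assuming the limit is interpreted in MB:

```shell
# Hypothetical sketch of the size check behind
# THUMBNAIL_IMAGE_ORIGINAL_SIZE_LIMIT (in MB, default 256).
THUMBNAIL_IMAGE_ORIGINAL_SIZE_LIMIT=256

should_thumbnail() {  # $1 = image size in bytes
  limit_bytes=$((THUMBNAIL_IMAGE_ORIGINAL_SIZE_LIMIT * 1024 * 1024))
  if [ "$1" -le "$limit_bytes" ]; then echo yes; else echo no; fi
}

should_thumbnail $((100 * 1024 * 1024))   # prints yes
should_thumbnail $((300 * 1024 * 1024))   # prints no
```

Raising the variable in the env file simply raises limit_bytes, so larger originals become eligible for thumbnailing at the cost of more CPU and memory per request.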
Seafile can scan uploaded files for malicious content in the background. When configured to run periodically, the scan process scans all existing libraries on the server. In each scan, the process only scans files newly uploaded or updated since the last scan. For each file, the process executes a user-specified virus scan command to check whether the file contains a virus. Most anti-virus programs provide a command line utility for Linux.
To enable this feature, add the following options to seafile.conf:
[virus_scan]\nscan_command = (command for checking virus)\nvirus_code = (command exit codes when file is virus)\nnonvirus_code = (command exit codes when file is not virus)\nscan_interval = (scanning interval, in unit of minutes, default to 60 minutes)\n More details about the options:
An example for ClamAV (http://www.clamav.net/) is provided below:
[virus_scan]\nscan_command = clamscan\nvirus_code = 1\nnonvirus_code = 0\n To test whether your configuration works, you can trigger a scan manually:
cd seafile-server-latest\n./pro/pro.py virus_scan\n If a virus was detected, you can see scan records and delete infected files on the Virus Scan page in the admin area.
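The virus_code/nonvirus_code contract can be sketched as a small exit-status interpreter (hypothetical, not Seafile's actual code), using the ClamAV convention of virus_code = 1 and nonvirus_code = 0:

```shell
# Map a scan command's exit status to a verdict, assuming
# virus_code=1 and nonvirus_code=0 as in the ClamAV example above.
interpret_exit_code() {
  case "$1" in
    1) echo "virus found" ;;   # an exit code listed in virus_code
    0) echo "clean" ;;         # an exit code listed in nonvirus_code
    *) echo "scan failed" ;;   # anything else: treat as a scan error
  esac
}

interpret_exit_code 1   # prints virus found
interpret_exit_code 0   # prints clean
interpret_exit_code 2   # prints scan failed
```

Exit codes outside both lists are treated as scan failures, which is why wrapper scripts (like the kav4fs one later in this manual) reserve a dedicated code for errors.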
Note
If you directly use the ClamAV command line tool to scan files, scanning will take a lot of time. To speed it up, we recommend running ClamAV as a daemon. Please refer to Run ClamAV as a Daemon
When running ClamAV as a daemon, the scan_command in seafile.conf should be clamdscan. An example for clamav-daemon is provided below:
[virus_scan]\nscan_command = clamdscan\nvirus_code = 1\nnonvirus_code = 0\n Since Pro edition 6.0.0, a few more options are added to provide finer grained control for virus scan.
[virus_scan]\n......\nscan_size_limit = (size limit for files to be scanned) # The unit is MB.\nscan_skip_ext = (a comma (',') separated list of file extensions to be ignored)\nthreads = (number of concurrent threads for scan, one thread for one file, default to 4)\n The file extensions should start with '.'. The extensions are case insensitive. By default, files with following extensions will be ignored:
.bmp, .gif, .ico, .png, .jpg, .mp3, .mp4, .wav, .avi, .rmvb, .mkv\n The list you provide will override the default list.
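The case-insensitive extension matching described above can be sketched in shell (a hypothetical helper, not Seafile's actual implementation):

```shell
# Decide whether a file would be skipped by scan_skip_ext
# (matching is case-insensitive; extensions start with '.').
skip_list=".bmp,.gif,.ico,.png,.jpg,.mp3,.mp4,.wav,.avi,.rmvb,.mkv"

is_skipped() {  # $1 = file name
  ext=".${1##*.}"                              # extract ".ext"
  ext=$(printf '%s' "$ext" | tr 'A-Z' 'a-z')   # lowercase it
  case ",$skip_list," in
    *",$ext,"*) echo skip ;;
    *)          echo scan ;;
  esac
}

is_skipped "photo.PNG"    # prints skip
is_skipped "report.docx"  # prints scan
```

Media formats are skipped by default because they are large and rarely carry executable payloads; scanning them would dominate scan time for little benefit.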
"},{"location":"extension/virus_scan/#scanning-files-on-upload","title":"Scanning Files on Upload","text":"You may also configure Seafile to scan files for viruses when they are uploaded. This only works for files uploaded via the web interface or web APIs. Files uploaded with the syncing or SeaDrive clients are not scanned on upload for performance reasons.
You may scan files uploaded from shared upload links by adding the option below to seahub_settings.py:
ENABLE_UPLOAD_LINK_VIRUS_CHECK = True\n Since Pro Edition 11.0.7, you may scan all uploaded files via web APIs by adding the option below to seafile.conf:
[fileserver]\ncheck_virus_on_web_upload = true\n"},{"location":"extension/virus_scan_with_clamav/","title":"Deploy ClamAV with Seafile","text":""},{"location":"extension/virus_scan_with_clamav/#deploy-with-docker","title":"Deploy with Docker","text":"If your Seafile server is deployed using Docker, we recommend that you also use Docker to deploy ClamAV by following the steps below; otherwise you can deploy it from the ClamAV binary package.
"},{"location":"extension/virus_scan_with_clamav/#download-clamavyml-and-insert-to-docker-compose-lists-in-env","title":"Download clamav.yml and insert to Docker-compose lists in .env","text":"Download clamav.yml
wget https://manual.seafile.com/13.0/repo/docker/pro/clamav.yml\n Modify .env, insert clamav.yml to field COMPOSE_FILE
COMPOSE_FILE='seafile-server.yml,caddy.yml,clamav.yml'\n"},{"location":"extension/virus_scan_with_clamav/#modify-seafileconf","title":"Modify seafile.conf","text":"Add the following statements to seafile.conf
[virus_scan]\nscan_command = clamdscan\nvirus_code = 1\nnonvirus_code = 0\nscan_interval = 5\nscan_size_limit = 20\nthreads = 2\n"},{"location":"extension/virus_scan_with_clamav/#restart-docker-container","title":"Restart docker container","text":"docker compose down\ndocker compose up -d \n Wait a few minutes until ClamAV has finished initializing.
Now ClamAV can be used.
"},{"location":"extension/virus_scan_with_clamav/#use-clamav-in-binary-based-deployment","title":"Use ClamAV in binary based deployment","text":""},{"location":"extension/virus_scan_with_clamav/#install-clamav-daemon-clamav-freshclam","title":"Install clamav-daemon & clamav-freshclam","text":"apt-get install clamav-daemon clamav-freshclam\n You should run clamd with root permission so that it can scan all files. Edit the config /etc/clamav/clamd.conf and change the following lines:
LocalSocketGroup root\nUser root\n"},{"location":"extension/virus_scan_with_clamav/#start-the-clamav-daemon","title":"Start the clamav-daemon","text":"systemctl start clamav-daemon\n Test the software
$ curl https://secure.eicar.org/eicar.com.txt | clamdscan -\n The output must include:
stream: Eicar-Test-Signature FOUND\n"},{"location":"extension/virus_scan_with_kav4fs/","title":"Virus Scan with kav4fs","text":""},{"location":"extension/virus_scan_with_kav4fs/#prerequisite","title":"Prerequisite","text":"Assume you have installed Kaspersky Anti-Virus for Linux File Server on the Seafile Server machine.
If the user running Seafile Server is not root, it should have sudo privileges so that it can run kav4fs-control without entering a password. Add the following content to /etc/sudoers:
<user of running seafile server> ALL=(ALL:ALL) ALL\n<user of running seafile server> ALL=NOPASSWD: /opt/kaspersky/kav4fs/bin/kav4fs-control\n"},{"location":"extension/virus_scan_with_kav4fs/#script","title":"Script","text":"As the return code of kav4fs does not reflect the file scan result, we use a shell wrapper script that parses the scan output and returns different exit codes based on the result.
Save the following contents to a file such as kav4fs_scan.sh:
#!/bin/bash\n\nTEMP_LOG_FILE=`mktemp /tmp/XXXXXXXXXX`\nVIRUS_FOUND=1\nCLEAN=0\nUNDEFINED=2\nKAV4FS='/opt/kaspersky/kav4fs/bin/kav4fs-control'\nif [ ! -x $KAV4FS ]\nthen\n echo \"Binary not executable\"\n exit $UNDEFINED\nfi\n\nsudo $KAV4FS --scan-file \"$1\" > $TEMP_LOG_FILE\nif [ \"$?\" -ne 0 ]\nthen\n echo \"Error due to check file '$1'\"\n exit 3\nfi\nTHREATS_C=`grep 'Threats found:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nRISKWARE_C=`grep 'Riskware found:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nINFECTED=`grep 'Infected:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nSUSPICIOUS=`grep 'Suspicious:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nSCAN_ERRORS_C=`grep 'Scan errors:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nPASSWORD_PROTECTED=`grep 'Password protected:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nCORRUPTED=`grep 'Corrupted:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\n\nrm -f $TEMP_LOG_FILE\n\nif [ $THREATS_C -gt 0 -o $RISKWARE_C -gt 0 -o $INFECTED -gt 0 -o $SUSPICIOUS -gt 0 ]\nthen\n exit $VIRUS_FOUND\nelif [ $SCAN_ERRORS_C -gt 0 -o $PASSWORD_PROTECTED -gt 0 -o $CORRUPTED -gt 0 ]\nthen\n exit $UNDEFINED\nelse\n exit $CLEAN\nfi\n Grant execute permissions for the script (make sure it is owned by the user Seafile is running as):
chmod u+x kav4fs_scan.sh\n The meaning of the script return code:
1: found virus\n0: no virus\nother: scan failed\n"},{"location":"extension/virus_scan_with_kav4fs/#configuration","title":"Configuration","text":"Add the following content to seafile.conf:
[virus_scan]\nscan_command = <absolute path of kav4fs_scan.sh>\nvirus_code = 1\nnonvirus_code = 0\nscan_interval = <scanning interval, in unit of minutes, default to 60 minutes>\n"},{"location":"extension/webdav/","title":"WebDAV extension","text":"In the document below, we assume your seafile installation folder is /opt/seafile.
The configuration file is /opt/seafile-data/seafile/conf/seafdav.conf (for deploying from binary packages, it should be /opt/seafile/conf/seafdav.conf). If it is not created already, you can just create the file.
[WEBDAV]\n\n# Default is false. Change it to true to enable SeafDAV server.\nenabled = true\n\nport = 8080\ndebug = true\n\n# If you deploy seafdav behind nginx/apache, you need to modify \"share_name\".\nshare_name = /seafdav\n\n# SeafDAV uses Gunicorn as web server.\n# This option maps to Gunicorn's 'workers' setting. https://docs.gunicorn.org/en/stable/settings.html?#workers\n# By default it's set to 5 processes.\nworkers = 5\n\n# This option maps to Gunicorn's 'timeout' setting. https://docs.gunicorn.org/en/stable/settings.html?#timeout\n# By default it's set to 1200 seconds, to support large file uploads.\ntimeout = 1200\n Every time the configuration is modified, you need to restart seafile server to make it take effect.
Deploy in Docker: docker compose restart\n Deploy from binary packages: cd /opt/seafile/seafile-server-latest/\n./seafile.sh restart\n Your WebDAV client can access the Seafile WebDAV server at http{s}://example.com/seafdav/ (for deployments from binary packages, it is http{s}://example.com:8080/seafdav/)
In Pro Edition 7.1.8 and Community Edition 7.1.5, an option was added to append the library ID to the library name returned by SeafDAV.
show_repo_id=true\n"},{"location":"extension/webdav/#proxy-only-for-deploying-from-binary-packages","title":"Proxy (only for deploying from binary packages)","text":"Tip
For deployments in Docker, the WebDAV server is already proxied at /seafdav/*, so you can skip this step
For Seafdav, the configuration of Nginx is as follows:
.....\n\n location /seafdav {\n rewrite ^/seafdav$ /seafdav/ permanent;\n }\n\n location /seafdav/ {\n proxy_pass http://127.0.0.1:8080/seafdav/;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_read_timeout 1200s;\n client_max_body_size 0;\n\n access_log /var/log/nginx/seafdav.access.log seafileformat;\n error_log /var/log/nginx/seafdav.error.log;\n }\n\n location /:dir_browser {\n proxy_pass http://127.0.0.1:8080/:dir_browser;\n }\n For Seafdav, the configuration of Apache is as follows:
......\n <Location /seafdav>\n ProxyPass \"http://127.0.0.1:8080/seafdav\"\n </Location>\n"},{"location":"extension/webdav/#notes-on-clients","title":"Notes on Clients","text":"Please first note that there are some known performance limitations when you map a Seafile WebDAV server as a local file system (or network drive).
So WebDAV is more suitable for infrequent file access. If you want better performance, please use the sync client instead.
Windows Explorer supports HTTPS connections, but it requires a valid certificate on the server. It's generally recommended to use Windows Explorer to map a webdav server as a network drive. If you use a self-signed certificate, you have to add the certificate's CA into Windows' system CA store.
On Linux you have more choices. You can use a file manager such as Nautilus to connect to the webdav server, or you can use davfs2 from the command line.
To use davfs2
sudo apt-get install davfs2\nsudo mount -t davfs -o uid=<username> https://example.com/seafdav /media/seafdav/\n The -o option sets the owner of the mounted directory to <username> so that it's writable for non-root users.
It's recommended to disable the LOCK operation for davfs2. You have to edit /etc/davfs2/davfs2.conf
use_locks 0\n Finder's support for WebDAV is slow and not very stable, so it is recommended to use a WebDAV client such as Cyberduck.
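Back on Linux, a davfs2 mount can also be made persistent via /etc/fstab. This is a sketch under common davfs2 conventions (the URL and mount point are examples; credentials belong in /etc/davfs2/secrets rather than on the command line):

```
# /etc/fstab (example entry; "user" lets a non-root user mount it)
https://example.com/seafdav /media/seafdav davfs user,rw,noauto 0 0

# /etc/davfs2/secrets (must be readable by root only)
https://example.com/seafdav  your_username  your_password
```

With this in place, a regular user can run mount /media/seafdav without typing credentials.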
"},{"location":"extension/webdav/#frequently-asked-questions","title":"Frequently Asked Questions","text":""},{"location":"extension/webdav/#clients-cant-connect-to-seafdav-server","title":"Clients can't connect to seafdav server","text":"By default, seafdav is disabled. Check whether you have enabled = true in seafdav.conf. If not, modify it and restart seafile server.
If you deploy SeafDAV behind Nginx/Apache, make sure to change the value of share_name as in the sample configuration above. Restart your seafile server and try again.
First, check the seafdav.log to see if there is log like the following.
\"MOVE ... -> 502 Bad Gateway\n If you have enabled debug, there will also be the following log.
09:47:06.533 - DEBUG : Raising DAVError 502 Bad Gateway: Source and destination must have the same scheme.\nIf you are running behind a reverse proxy, you may have to rewrite the 'Destination' header.\n(See https://github.com/mar10/wsgidav/issues/183)\n\n09:47:06.533 - DEBUG : Caught (502, \"Source and destination must have the same scheme.\\nIf you are running behind a reverse proxy, you may have to rewrite the 'Destination' header.\\n(See https://github.com/mar10/wsgidav/issues/183)\")\n This issue usually occurs when you have configured HTTPS, but the request was forwarded, resulting in the HTTP_X_FORWARDED_PROTO value in the request received by Seafile not being HTTPS.
You can solve this by manually changing the value of HTTP_X_FORWARDED_PROTO. For example, in nginx, change
proxy_set_header X-Forwarded-Proto $scheme;\n to
proxy_set_header X-Forwarded-Proto https;\n"},{"location":"extension/webdav/#windows-explorer-reports-file-size-exceeds-the-limit-allowed-and-cannot-be-saved","title":"Windows Explorer reports \"file size exceeds the limit allowed and cannot be saved\"","text":"This happens when you map webdav as a network drive, and tries to copy a file larger than about 50MB from the network drive to a local folder.
This is because Windows Explorer limits the size of files downloaded from a WebDAV server. To raise this limit, change a registry entry on the client machine: there is a registry key named FileSizeLimitInBytes under HKEY_LOCAL_MACHINE -> SYSTEM -> CurrentControlSet -> Services -> WebClient -> Parameters.
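The registry change can be expressed as a .reg file. This is a sketch: dword:ffffffff (4294967295, the maximum value for this DWORD) lifts the limit to about 4 GB, and you may need to restart the WebClient service for the change to take effect.

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters]
"FileSizeLimitInBytes"=dword:ffffffff
```

Double-clicking the .reg file (or importing it with regedit) applies the value on the client machine.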
The different components of Seafile project are released under different licenses:
Forum: https://forum.seafile.com
Follow us @seafile https://twitter.com/seafile
"},{"location":"introduction/contribution/#report-a-bug","title":"Report a Bug","text":"Seafile manages files using libraries. Every library has an owner, who can share the library to other users or share it with groups. The sharing can be read-only or read-write.
"},{"location":"introduction/file_permission_management/#read-only-syncing","title":"Read-only syncing","text":"Read-only libraries can be synced to local desktop. The modifications at the client will not be synced back. If a user has modified some file contents, he can use \"resync\" to revert the modifications.
"},{"location":"introduction/file_permission_management/#cascading-permissionsub-folder-permissions-pro-edition","title":"Cascading permission/Sub-folder permissions (Pro edition)","text":"Sharing controls whether a user or group can see a library, while sub-folder permissions are used to modify permissions on specific folders.
Supposing you share a library as read-only to a group and then want specific sub-folders to be read-write for a few users, you can set read-write permissions on sub-folders for some users and groups.
Note
Please check https://www.seafile.com/en/roadmap/
"},{"location":"introduction/seafile_professional_sdition_software_license_agreement/","title":"Seafile Professional Edition Software License Agreement","text":"Seafile Professional Edition SOFTWARE LICENSE AGREEMENT
Important
READ THE FOLLOWING TERMS AND CONDITIONS CAREFULLY BEFORE YOU DOWNLOAD, INSTALL OR USE Seafile Ltd.'S PROPRIETARY SOFTWARE.
BY INSTALLING OR USING THE SOFTWARE, YOU AGREE TO BE BOUND BY THE FOLLOWING TERMS AND CONDITIONS.
IF YOU DO NOT AGREE TO THE FOLLOWING TERMS AND CONDITIONS, DO NOT INSTALL OR USE THE SOFTWARE.
\"Seafile Ltd.\" means Seafile Ltd.
\"You and Your\" means the party licensing the Software hereunder.
\"Software\" means the computer programs provided under the terms of this license by Seafile Ltd. together with any documentation provided therewith.
"},{"location":"introduction/seafile_professional_sdition_software_license_agreement/#2-grant-of-rights","title":"2. GRANT OF RIGHTS","text":""},{"location":"introduction/seafile_professional_sdition_software_license_agreement/#21-general","title":"2.1 General","text":"The License granted for Software under this Agreement authorizes You on a non-exclusive basis to use the Software. The Software is licensed, not sold to You and Seafile Ltd. reserves all rights not expressly granted to You in this Agreement. The License is personal to You and may not be assigned by You to any third party.
"},{"location":"introduction/seafile_professional_sdition_software_license_agreement/#22-license-provisions","title":"2.2 License Provisions","text":"Subject to the receipt by Seafile Ltd. of the applicable license fees, You have the right use the Software as follows:
The inclusion of source code with the License is explicitly not for your use to customize a solution or re-use in your own projects or products. The benefit of including the source code is for purposes of security auditing. You may modify the code only for emergency bug fixes that impact security or performance and only for use within your enterprise. You may not create or distribute derivative works based on the Software or any part thereof. If you need enhancements to the software features, you should suggest them to Seafile Ltd. for version improvements.
"},{"location":"introduction/seafile_professional_sdition_software_license_agreement/#4-ownership","title":"4. OWNERSHIP","text":"You acknowledge that all copies of the Software in any form are the sole property of Seafile Ltd.. You have no right, title or interest to any such Software or copies thereof except as provided in this Agreement.
"},{"location":"introduction/seafile_professional_sdition_software_license_agreement/#5-confidentiality","title":"5. CONFIDENTIALITY","text":"You hereby acknowledge and agreed that the Software constitute and contain valuable proprietary products and trade secrets of Seafile Ltd., embodying substantial creative efforts and confidential information, ideas, and expressions. You agree to treat, and take precautions to ensure that your employees and other third parties treat, the Software as confidential in accordance with the confidentiality requirements herein.
"},{"location":"introduction/seafile_professional_sdition_software_license_agreement/#6-disclaimer-of-warranties","title":"6. DISCLAIMER OF WARRANTIES","text":"EXCEPT AS OTHERWISE SET FORTH IN THIS AGREEMENT THE SOFTWARE IS PROVIDED TO YOU \"AS IS\", AND Seafile Ltd. MAKES NO EXPRESS OR IMPLIED WARRANTIES WITH RESPECT TO ITS FUNCTIONALITY, CONDITION, PERFORMANCE, OPERABILITY OR USE. WITHOUT LIMITING THE FOREGOING, Seafile Ltd. DISCLAIMS ALL IMPLIED WARRANTIES INCLUDING, WITHOUT LIMITATION, ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR FREEDOM FROM INFRINGEMENT. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO THE ABOVE EXCLUSIONS MAY NOT APPLY TO YOU. THE LIMITED WARRANTY HEREIN GIVES YOU SPECIFIC LEGAL RIGHTS, AND YOU MAY ALSO HAVE OTHER RIGHTS THAT VARY FROM ONE JURISDICTION TO ANOTHER.
"},{"location":"introduction/seafile_professional_sdition_software_license_agreement/#7-limitation-of-liability","title":"7. LIMITATION OF LIABILITY","text":"YOU ACKNOWLEDGE AND AGREE THAT THE CONSIDERATION WHICH Seafile Ltd. IS CHARGING HEREUNDER DOES NOT INCLUDE ANY CONSIDERATION FOR ASSUMPTION BY Seafile Ltd. OF THE RISK OF YOUR CONSEQUENTIAL OR INCIDENTAL DAMAGES WHICH MAY ARISE IN CONNECTION WITH YOUR USE OF THE SOFTWARE. ACCORDINGLY, YOU AGREE THAT Seafile Ltd. SHALL NOT BE RESPONSIBLE TO YOU OR ANY THIRD PARTY FOR ANY LOSS-OF-PROFIT, LOST SAVINGS, INDIRECT, INCIDENTAL, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF A LICENSING OR USE OF THE SOFTWARE.
"},{"location":"introduction/seafile_professional_sdition_software_license_agreement/#8-indemnification","title":"8. INDEMNIFICATION","text":"You agree to defend, indemnify and hold Seafile Ltd. and its employees, agents, representatives and assigns harmless from and against any claims, proceedings, damages, injuries, liabilities, costs, attorney's fees relating to or arising out of Your use of the Software or any breach of this Agreement.
"},{"location":"introduction/seafile_professional_sdition_software_license_agreement/#9-termination","title":"9. TERMINATION","text":"Your license is effective until terminated. You may terminate it at any time by destroying the Software or returning all copies of the Software to Seafile Ltd.. Your license will terminate immediately without notice if You breach any of the terms and conditions of this Agreement, including non or incomplete payment of the license fee. Upon termination of this Agreement for any reason: You will uninstall all copies of the Software; You will immediately cease and desist all use of the Software; and will destroy all copies of the software in your possession.
"},{"location":"introduction/seafile_professional_sdition_software_license_agreement/#10-updates-and-support","title":"10. UPDATES AND SUPPORT","text":"Seafile Ltd. has the right, but no obligation, to periodically update the Software, at its complete discretion, without the consent or obligation to You or any licensee or user.
YOU HEREBY ACKNOWLEDGE THAT YOU HAVE READ THIS AGREEMENT, UNDERSTAND IT AND AGREE TO BE BOUND BY ITS TERMS AND CONDITIONS.
"},{"location":"setup/architecture/","title":"Architecture","text":"Seafile Docker and its components support both the x86 and ARM64 architectures. You can find details below.
"},{"location":"setup/architecture/#support-status","title":"Support status","text":"Component x86 ARM
seafile-mc √ √
seafile-pro-mc √ √
sdoc-server √ √
notification-server √ √
seafile-md-server √ √
seafile-ai √ √
thumbnail-server √ √
seasearch √ √
face-embedding √ X
index-server (distributed indexing) √ X
Note: for SeaSearch, you should use the seasearch-nomkl version to work on the ARM architecture.
"},{"location":"setup/architecture/#pull-the-arm-image","title":"Pull the ARM image","text":"You can use the X.0-latest tag to pull the ARM image without specifying the arm tag.
docker pull seafileltd/seafile-mc:13.0-latest\n"},{"location":"setup/caddy/","title":"HTTPS and Caddy","text":"Note
From Seafile Docker 12.0, HTTPS is handled by Caddy. The default Caddy image used by Seafile Docker is lucaslorentz/caddy-docker-proxy:2.9-alpine.
Caddy is a modern open source web server that binds external traffic to the internal services of Seafile Docker. In addition to the advantages of traditional proxy components (e.g., Nginx), Caddy makes it easier for users to obtain and renew HTTPS certificates by providing simpler configuration.
"},{"location":"setup/caddy/#engage-https-by-caddy","title":"Engage HTTPS by caddy","text":"We provide two options for enabling HTTPS via Caddy, both of which rely on the caddy-docker-proxy container from lucaslorentz, which supports dynamic configuration with labels:
To engage HTTPS, users only need to correctly configure the following fields in .env:
SEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=example.com\n After Seafile Docker startup, you can use the following command to access the logs of Caddy
docker logs seafile-caddy -f\n"},{"location":"setup/caddy/#using-a-custom-existing-certificate","title":"Using a custom (existing) certificate","text":"With caddy.yml, a default volume mount is created: /opt/seafile-caddy (you can change it by modifying SEAFILE_CADDY_VOLUME in .env). By convention, you should provide your certificate and key files in the container host filesystem under /opt/seafile-caddy/certs/ to make them available to Caddy:
/opt/seafile-caddy/certs/\n\u251c\u2500\u2500 cert.pem # xxx.crt in some case\n\u251c\u2500\u2500 key.pem # xxx.key in some case\n Command to generate custom certificates
With this command, you can generate your own custom certificates:
cd /opt/seafile-caddy/certs\nopenssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./key.pem -out ./cert.pem\n Please be aware that custom certificates cannot be used for IP addresses
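Note that modern browsers and clients validate the certificate's subjectAltName (SAN) extension rather than the Common Name, so it is worth adding one when generating a self-signed certificate. A sketch (the hostname seafile.example.com and the /tmp output directory are placeholders; -addext requires OpenSSL 1.1.1 or later):

```shell
mkdir -p /tmp/seafile-certs && cd /tmp/seafile-certs

# Self-signed certificate carrying a subjectAltName for the Seafile hostname
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj "/CN=seafile.example.com" \
    -addext "subjectAltName=DNS:seafile.example.com" \
    -keyout ./key.pem -out ./cert.pem

# Inspect the result: the SAN extension should list the hostname
openssl x509 -in ./cert.pem -noout -ext subjectAltName
```

Adjust the paths to your /opt/seafile-caddy/certs/ directory for a real deployment.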
Then modify seafile-server.yml to enable your custom certificate. We strongly recommend making a backup of seafile-server.yml before doing this:
cp seafile-server.yml seafile-server.yml.bak\nnano seafile-server.yml\n and
services:\n ...\n seafile:\n ...\n volumes:\n ...\n # If you use a self-generated certificate, please add it to the Seafile server trusted directory (i.e. remove the comment symbol below)\n # - \"/opt/seafile-caddy/certs/cert.pem:/usr/local/share/ca-certificates/cert.crt\"\n labels:\n caddy: ${SEAFILE_SERVER_HOSTNAME:?Variable is not set or empty} # keep this variable as-is\n caddy.tls: \"/data/caddy/certs/cert.pem /data/caddy/certs/key.pem\"\n ...\n DNS resolution must work inside the container
If you're using a non-public URL like my-custom-setup.local, you have to make sure that the Docker container can resolve this DNS query. If you don't run your own DNS servers, you have to add extra_hosts to your .yml file.
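For example, a hypothetical mapping for my-custom-setup.local could be added to the seafile service in seafile-server.yml (the IP address below is a placeholder for your host's address):

```yaml
services:
  seafile:
    # ... existing configuration ...
    extra_hosts:
      # hostname:IP pairs are appended to /etc/hosts inside the container
      - "my-custom-setup.local:192.168.1.10"
```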
The Seafile cluster solution employs a 3-tier architecture:
This architecture scales horizontally. That means, you can handle more traffic by adding more machines. The architecture is visualized in the following picture.
There are two main components on the Seafile server node: web server (Nginx/Apache) and Seafile app server. The web server passes requests from the clients to Seafile app server. The Seafile app servers work independently. They don't know about each other's state. That means each app server can fail independently without affecting other app server instances. The load balancer is responsible for detecting failure and re-routing requests.
Even though Seafile app servers work independently, they still have to share some session information. All shared session information is stored in a memory cache. Thus, all Seafile app servers have to connect to the same memory cache server (cluster). Since Pro Edition 11.0, both memcached and Redis can be used as the memory cache. Before 11.0, only memcached is supported. More details about memory cache configuration are available later. Since Seafile 13.0, we recommend that you use Redis as the cache to support new features (such as Seafile AI, metadata management, etc.).
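As a sketch, the shared cache is pointed to in seafile.conf on every app server node. The section and option names below follow the memory cache configuration document, and the address 192.168.0.50 is a placeholder for your cache server; verify the exact option names against your Seafile version:

```ini
# seafile.conf on each Seafile app server: all nodes must use the SAME cache.

# Redis (recommended since Seafile 13.0)
[redis]
redis_host = 192.168.0.50
redis_port = 6379

# Or memcached (the only supported cache before Pro Edition 11.0)
# [memcached]
# memcached_options = --SERVER=192.168.0.50 --POOL-MIN=10 --POOL-MAX=100
```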
The background server is the workhorse for various background tasks, including full-text indexing, office file preview, virus scanning, and LDAP syncing. It should usually run on a dedicated server for better performance. Currently only one background task server can be running in the entire cluster. If more than one background server is running, they may conflict with each other when performing some tasks. If you need HA for the background task server, you can consider using Keepalived to build a hot backup for it.
In the seafile cluster, only one server should run the background tasks, including:
Let's assume you have three nodes in your cluster: A, B, and C.
Please refer here for the details about the cluster requirements for all nodes in Seafile cluster. In general, we recommend that each node should have at least 2G RAM and a 2-core CPU (> 2GHz).
"},{"location":"setup/cluster_deploy_with_docker/#deploy-seafile-service","title":"Deploy Seafile service","text":""},{"location":"setup/cluster_deploy_with_docker/#deploy-the-first-seafile-frontend-node","title":"Deploy the first Seafile frontend node","text":"Create the mount directory
mkdir -p /opt/seafile/shared\n Pulling Seafile image
docker pull seafileltd/seafile-pro-mc:13.0-latest\n Download the seafile-server.yml and .env
wget -O .env https://manual.seafile.com/13.0/repo/docker/cluster/env\nwget https://manual.seafile.com/13.0/repo/docker/cluster/seafile-server.yml\n Modify the variables in .env (especially the terms like <...>).
Place license file
If you have a seafile-license.txt license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile/shared. If you have modified the path, save the license file under your custom path.
If the license file has a different name or cannot be read, Seafile server will start in trial mode with at most three users
Start the Seafile docker
docker compose up -d\n Cluster init mode
Because CLUSTER_INIT_MODE is true in the .env file, Seafile Docker will start in init mode and generate the configuration files. As a result, you can see the following lines if you trace the Seafile container (i.e., docker logs seafile):
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n\n\n[2024-11-21 02:22:37] Updating version stamp\nStart init\n\nInit success\n After initializing the cluster, the following fields can be removed from .env:
CLUSTER_INIT_MODE (must be removed from the .env file), CLUSTER_INIT_ES_HOST, CLUSTER_INIT_ES_PORT. Tip
We recommend that you verify the relevant configuration files and make a copy of the SEAFILE_VOLUME directory before the service is officially started, because after initialization only the configuration files have been generated. You can later migrate the entire copied SEAFILE_VOLUME directly to other nodes:
cp -r /opt/seafile/shared /opt/seafile/shared-bak\n Restart the container to start the service in frontend node
docker compose down\ndocker compose up -d\n Frontend node starts successfully
After executing the above command, you can trace the logs of container seafile (i.e., docker logs seafile). You can see the following message if the frontend node starts successfully:
*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 20\n*** Running /scripts/enterpoint.sh...\n2024-11-21 03:02:35 Nginx ready \n\n2024-11-21 03:02:35 This is an idle script (infinite loop) to keep container running. \n---------------------------------\n\nSeafile cluster frontend mode\n\n---------------------------------\n\n\nStarting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\nSeahub is started\n\nDone.\n Create the mount directory
$ mkdir -p /opt/seafile/shared\n Pull Seafile image
Copy seafile-server.yml, .env and configuration files from the first frontend node
Start the service
docker compose up -d\n Create the mount directory
$ mkdir -p /opt/seafile/shared\n Pull Seafile image
Copy seafile-server.yml, .env and configuration files from frontend node
Note
The configuration files from frontend node have to be put in the same path as the frontend node, i.e., /opt/seafile/shared/seafile/conf/*
Modify .env, set CLUSTER_MODE to backend
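The change is a single line in .env (the variable name comes from the deployment files above):

```ini
CLUSTER_MODE=backend
```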
Start the service in the backend node
docker compose up -d\n Backend node starts successfully
After executing the above command, you can trace the logs of container seafile (i.e., docker logs seafile). You can see the following message if the backend node starts successfully:
*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 21\n*** Running /scripts/enterpoint.sh...\n2024-11-21 03:11:59 Nginx ready \n2024-11-21 03:11:59 This is an idle script (infinite loop) to keep container running. \n\n---------------------------------\n\nSeafile cluster backend mode\n\n---------------------------------\n\n\nStarting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seafile background tasks ...\nDone.\n Note
Since Seafile Pro server 6.0.0, cluster deployment requires \"sticky session\" settings in the load balancer. Otherwise folder download on the web UI sometimes can't work properly. Read the \"Load Balancer Setting\" section below for details.
Generally speaking, in order to better access the Seafile service, we recommend that you use a load balancing service to access the Seafile cluster and bind your domain name (such as seafile.cluster.com) to the load balancing service. Usually, you can use:
Deploy your own load balancing service; our document describes two common load balancing services:
In the AWS ELB management console, after you've added the Seafile server instances to the instance list, you should do two more configurations.
First you should set up HTTP(S) listeners. Ports 443 and 80 of the ELB should be forwarded to port 80 or 443 of the Seafile servers.
Then you set up the health check.
Refer to AWS documentation about how to setup sticky sessions.
"},{"location":"setup/cluster_deploy_with_docker/#nginx","title":"Nginx","text":"Install Nginx in the host if you would like to deploy load balance service
sudo apt update\nsudo apt install nginx\n Create the configurations file for Seafile cluster
sudo nano /etc/nginx/sites-available/seafile-cluster\n and, add the following contents into this file:
upstream seafile_cluster {\n server <IP: your frontend node 1>:80;\n server <IP: your frontend node 2>:80;\n ...\n}\n\nserver {\n listen 80;\n server_name <your domain>;\n\n location / {\n proxy_pass http://seafile_cluster;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_next_upstream error timeout http_502 http_503 http_504;\n }\n}\n Link the configuration file to the sites-enabled directory:
sudo ln -s /etc/nginx/sites-available/seafile-cluster /etc/nginx/sites-enabled/\n Test and enable configuration
sudo nginx -t\nsudo nginx -s reload\n Execute the following commands on the two Seafile frontend servers:
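Open-source Nginx performs only passive health checks; how quickly a failed frontend node is taken out of rotation can be tuned per upstream server with max_fails and fail_timeout (the values below are illustrative, not Seafile defaults):

```nginx
upstream seafile_cluster {
    # After 3 failed attempts, skip this node for 30 seconds
    server <IP: your frontend node 1>:80 max_fails=3 fail_timeout=30s;
    server <IP: your frontend node 2>:80 max_fails=3 fail_timeout=30s;
}
```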
$ apt install haproxy keepalived -y\n\n$ mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak\n\n$ cat > /etc/haproxy/haproxy.cfg << 'EOF'\nglobal\n log 127.0.0.1 local1 notice\n maxconn 4096\n user haproxy\n group haproxy\n\ndefaults\n log global\n mode http\n retries 3\n timeout connect 10000\n timeout client 300000\n timeout server 36000000\n\nlisten seafile 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n cookie SERVERID insert indirect nocache\n server seafile01 Front-End01-IP:8001 check port 11001 cookie seafile01\n server seafile02 Front-End02-IP:8001 check port 11001 cookie seafile02\nEOF\n Warning
Please correctly modify the IP addresses (Front-End01-IP and Front-End02-IP) of the frontend servers in the above configuration file. Otherwise it cannot work properly.
Choose one of the above two servers as the master node, and the other as the standby node.
Perform the following operations on the master node:
$ cat > /etc/keepalived/keepalived.conf << 'EOF'\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node1\n vrrp_mcast_group4 224.0.100.18\n}\n\nvrrp_instance VI_1 {\n state MASTER\n interface eno1 # Set to the device name of a valid network interface on the current server, and the virtual IP will be bound to the network interface\n virtual_router_id 50\n priority 100\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass seafile123\n }\n virtual_ipaddress {\n 172.26.154.45/24 dev eno1 # Configure to the correct virtual IP and network interface device name\n }\n}\nEOF\n Warning
Please correctly configure the virtual IP address and network interface device name in the above file. Otherwise it cannot work properly.
Perform the following operations on the standby node:
$ cat > /etc/keepalived/keepalived.conf << 'EOF'\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node2\n vrrp_mcast_group4 224.0.100.18\n}\n\nvrrp_instance VI_1 {\n state BACKUP\n interface eno1 # Set to the device name of a valid network interface on the current server, and the virtual IP will be bound to the network interface\n virtual_router_id 50\n priority 98\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass seafile123\n }\n virtual_ipaddress {\n 172.26.154.45/24 dev eno1 # Configure to the correct virtual IP and network interface device name\n }\n}\nEOF\n Finally, run the following commands on the two Seafile frontend servers to start the corresponding services:
$ systemctl enable --now haproxy\n$ systemctl enable --now keepalived\n So far, Seafile cluster has been deployed.
"},{"location":"setup/cluster_deploy_with_docker/#https","title":"HTTPS","text":"You can engaged HTTPS in your load balance service, as you can use certificates manager (e.g., Certbot) to acquire and enable HTTPS to your Seafile cluster. You have to modify the relative URLs from the prefix http:// to https:// in seahub_settings.py and .env, after enabling HTTPS.
You can follow here to deploy the SeaDoc server, and then modify SEADOC_SERVER_URL in your .env file.
This manual explains how to deploy and run Seafile cluster on a Linux server using Kubernetes (k8s thereafter).
"},{"location":"setup/cluster_deploy_with_k8s/#prerequisites","title":"Prerequisites","text":""},{"location":"setup/cluster_deploy_with_k8s/#cluster-requirements","title":"Cluster requirements","text":"Please refer here for the details about the cluster requirements for all nodes in Seafile cluster. In general, we recommend that each node should have at least 2G RAM and a 2-core CPU (> 2GHz).
"},{"location":"setup/cluster_deploy_with_k8s/#k8s-tools","title":"K8S tools","text":"Two tools are suggested and can be installed with official installation guide on all nodes:
After installation, you need to start the k8s control plane service on each node and refer to the k8s official manual for creating a cluster.
Tip
Although we recommend installing the k8s control plane tool on each node, it does not mean that every node will be used as a control plane node; rather, the tool is necessary to create or join a K8S cluster. For details, please refer to the above link about creating or joining a cluster.
"},{"location":"setup/cluster_deploy_with_k8s/#create-namespace-and-secretmap","title":"Create namespace and secretMap","text":"kubectl create ns seafile\n\nkubectl create secret generic seafile-secret --namespace seafile \\\n--from-literal=JWT_PRIVATE_KEY='<required>' \\\n--from-literal=SEAFILE_MYSQL_DB_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_ADMIN_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_MYSQL_ROOT_PASSWORD='<required>' \\\n--from-literal=REDIS_PASSWORD='' \\\n--from-literal=S3_SECRET_KEY='' \\\n--from-literal=S3_SSE_C_KEY=''\n"},{"location":"setup/cluster_deploy_with_k8s/#download-k8s-yaml-files-for-seafile-cluster-without-frontend-node","title":"Download K8S YAML files for Seafile cluster (without frontend node)","text":"mkdir -p /opt/seafile-k8s-yaml\n\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/cluster/seafile-backend-deployment.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/cluster/seafile-persistentvolume.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/cluster/seafile-persistentvolumeclaim.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/cluster/seafile-service.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/cluster/seafile-env.yaml\n In here we suppose you download the YAML files in /opt/seafile-k8s-yaml, which mainly include about:
seafile-xx-deployment.yaml for pod management and creation of the frontend and backend services; seafile-service.yaml for exposing Seafile services to the external network; seafile-persistentvolume.yaml for defining the location of a volume used for persistent storage on the host; seafile-persistentvolumeclaim.yaml for declaring the use of persistent storage in the container. For further configuration details, you can refer to the official documents.
Use PV bound from a storage class
If you would like to use automatically allocated persistent volume (PV) by a storage class, please modify seafile-persistentvolumeclaim.yaml and specify storageClassName. On the other hand, the PV defined by seafile-persistentvolume.yaml can be disabled:
rm /opt/seafile-k8s-yaml/seafile-persistentvolume.yaml\n"},{"location":"setup/cluster_deploy_with_k8s/#modify-seafile-envyaml","title":"Modify seafile-env.yaml","text":"Similar to Docker-based deployment, Seafile cluster deployed on K8S also supports using files to configure the startup process; you can modify common environment variables with
nano /opt/seafile-k8s-yaml/seafile-env.yaml\n"},{"location":"setup/cluster_deploy_with_k8s/#initialize-seafile-cluster","title":"Initialize Seafile cluster","text":"You can now use the following command to initialize the Seafile cluster (Seafile's K8S resources will be placed in the namespace seafile for easier management):
kubectl apply -f /opt/seafile-k8s-yaml/ -n seafile\n About Seafile cluster initialization
When Seafile cluster is initializing, it will run with the following conditions:
CLUSTER_INIT_MODE=true Success
You can get the following information through kubectl logs seafile-xxxx -n seafile to check whether the initialization process is done:
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n\n\n[2024-11-21 02:22:37] Updating version stamp\nStart init\n\nInit success\n When the initialization is complete, the server will stop automatically (because no operations will be performed after the initialization is completed).
We recommend that you check whether the contents of the configuration files in /opt/seafile/shared/seafile/conf are correct before going to the next step; these files are automatically generated during the initialization process.
/opt/seafile/shared","text":"You have to locate the /opt/seafile/shared directory generated during initialization firsly, then simply put it in this path, if you have a seafile-license.txt license file.
Finally, you can use the tar -zcvf and tar -zxvf commands to package the entire /opt/seafile/shared directory of the current node, copy it to the other nodes, and unpack it to the same directory so that it takes effect on all nodes.
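The package-copy-unpack flow can be sketched as below; /tmp/node1 and /tmp/node2 stand in for the filesystems of two nodes, and in a real deployment the archive would be transferred with scp between the two steps:

```shell
# Node 1: package the entire shared directory (a stand-in config file is created for the demo)
mkdir -p /tmp/node1/opt/seafile/shared/seafile/conf
echo "[example]" > /tmp/node1/opt/seafile/shared/seafile/conf/seafile.conf
tar -zcf /tmp/seafile-shared.tar.gz -C /tmp/node1/opt/seafile shared

# Node 2 (simulated locally): unpack to the SAME path so all nodes see identical config
mkdir -p /tmp/node2/opt/seafile
tar -zxf /tmp/seafile-shared.tar.gz -C /tmp/node2/opt/seafile
ls /tmp/node2/opt/seafile/shared/seafile/conf/
```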
If the license file has a different name or cannot be read, Seafile server will start in trial mode with at most three users
"},{"location":"setup/cluster_deploy_with_k8s/#download-frontend-services-yaml-and-restart-pods-to-start-seafile-server","title":"Download frontend service's YAML and restart pods to start Seafile server","text":"Download frontend service's YAML by:
wget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/cluster/seafile-frontend-deployment.yaml\n Modify seafile-env.yaml, set CLUSTER_INIT_MODE to false (i.e., disable initialization mode), then re-apply seafile-env.yaml:
kubectl apply -f /opt/seafile-k8s-yaml\n Run the following command to restart pods to restart Seafile cluster:
Tip
If you modify some configurations in /opt/seafile/shared/seafile/conf or YAML files in /opt/seafile-k8s-yaml/, you still need to restart the services for the modifications to take effect.
kubectl delete pods -n seafile $(kubectl get pods -n seafile -o jsonpath='{.items[*].metadata.name}' | grep seafile)\n Success
You can view the pod's log to check whether the startup progress is normal. You can see the following message if the server is running normally:
*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 20\n*** Running /scripts/enterpoint.sh...\n2024-11-21 03:02:35 Nginx ready \n\n2024-11-21 03:02:35 This is an idle script (infinite loop) to keep container running. \n---------------------------------\n\nSeafile cluster frontend mode\n\n---------------------------------\n\n\nStarting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\nSeahub is started\n\nDone.\n"},{"location":"setup/cluster_deploy_with_k8s/#uninstall-seafile-k8s","title":"Uninstall Seafile K8S","text":"You can uninstall the Seafile K8S by the following command:
kubectl delete -f /opt/seafile-k8s-yaml/ -n seafile\n"},{"location":"setup/cluster_deploy_with_k8s/#advanced-operations","title":"Advanced operations","text":"Please refer here for further advanced operations.
"},{"location":"setup/helm_chart_cluster/","title":"Deploy Seafile cluster with Kubernetes (K8S) by Seafile Helm Chart","text":"This manual explains how to deploy and run Seafile cluster on a Linux server using Seafile Helm Chart (chart thereafter). You can also refer to here to use K8S resource files to deploy Seafile cluster in your K8S cluster.
"},{"location":"setup/helm_chart_cluster/#prerequisites","title":"Prerequisites","text":""},{"location":"setup/helm_chart_cluster/#cluster-requirements","title":"Cluster requirements","text":"Please refer here for the details about the cluster requirements for all nodes in Seafile cluster. In general, we recommend that each node should have at least 2G RAM and a 2-core CPU (> 2GHz).
"},{"location":"setup/helm_chart_cluster/#k8s-tools","title":"K8S tools","text":"Two tools are suggested and can be installed with official installation guide on all nodes:
After installation, you need to start the k8s control plane service on each node and refer to the k8s official manual for creating a cluster.
Tip
Although we recommend installing the k8s control plane tool on each node, it does not mean that every node will be used as a control plane node; rather, the tool is necessary to create or join a K8S cluster. For details, please refer to the above link about creating or joining a cluster.
"},{"location":"setup/helm_chart_cluster/#install-seafile-helm-chart","title":"Install Seafile helm chart","text":"Create namespace
kubectl create namespace seafile\n Create a secret for sensitive data
kubectl create secret generic seafile-secret --namespace seafile \\\n--from-literal=JWT_PRIVATE_KEY='<required>' \\\n--from-literal=SEAFILE_MYSQL_DB_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_ADMIN_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_MYSQL_ROOT_PASSWORD='<required>' \\\n--from-literal=REDIS_PASSWORD='' \\\n--from-literal=S3_SECRET_KEY='' \\\n--from-literal=S3_SSE_C_KEY=''\n where the JWT_PRIVATE_KEY can be generate by pwgen -s 40 1
Download and modify the my-values.yaml according to your configurations. By the way, you can follow here for the details:
wget -O my-values.yaml https://haiwen.github.io/seafile-helm-chart/values/13.0/cluster.yaml\n\nnano my-values.yaml\n Tip
You do not have to use the complete my-values.yaml we provide, because that reduces the flexibility of deploying with Helm (i.e., you can create an empty my-values.yaml and only add the required fields, as the others have default values defined in our chart). However, the file shows the formats in which Seafile Helm Chart reads these configurations, as well as all the environment variables and secret variables that can be set directly. In addition, you can also create a custom storageClassName for the persistence directory used by Seafile. You only need to specify storageClassName under the seafile.configs.seafileDataVolume object in my-values.yaml:
seafile:\n configs:\n seafileDataVolume:\n storageClassName: <your seafile storage class name>\n ...\n Then install the chart use the following command:
helm repo add seafile https://haiwen.github.io/seafile-helm-chart/repo\nhelm upgrade --install seafile seafile/cluster --version 13.0 --namespace seafile --create-namespace --values my-values.yaml\n Seafile helm chart 13.0 support variable validity checking
Starting from Seafile Helm Chart 13.0, the validity of variables in my-values.yaml is checked at deployment time. If a validity check fails, you may encounter an error message like:
You have enabled <Some feature> but <Variable> is not specified and is not allowed to be empty\n If you encounter the following message, please check the relevant configuration in my-values.yaml.
Success
After installing the chart, the cluster is going to initial progress, you can see the following message by kubectl logs seafile-<string> -n seafile:
Defaulted container \"seafile-backend\" out of: seafile-backend, set-ownership (init)\n*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 15\n*** Running /scripts/enterpoint.sh...\n2025-02-13 08:58:35 Nginx ready \n2025-02-13 08:58:35 This is an idle script (infinite loop) to keep container running. \n\n---------------------------------\n\nSeafile cluster backend mode\n\n---------------------------------\n\n[2025-02-13 08:58:35] Now running setup-seafile-mysql.py in auto mode.\nChecking python on this machine ...\n\n\nverifying password of user root ... done\n\n---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: 10.0.0.138\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n----------------------------------------\nNow creating seafevents database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating ccnet database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seafile database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seahub database tables ...\n\n----------------------------------------\n\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n\n\n[2025-02-13 08:58:36] Updating version stamp\nStart init\n\nInit success\n After the first-time startup, you have to turn off (i.e., set initMode to false) in your my-values.yaml, then upgrade the 
chart:
helm upgrade --install seafile seafile/cluster --version 13.0 --namespace seafile --create-namespace --values my-values.yaml\n Success
You can check any frontend node in the Seafile cluster. If the following information is output, the Seafile cluster is running normally:
Defaulted container \"seafile-frontend\" out of: seafile-frontend, set-ownership (init)\n*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 21\n*** Running /scripts/enterpoint.sh...\n2025-02-13 09:23:49 Nginx ready \n2025-02-13 09:23:49 This is an idle script (infinite loop) to keep container running. \n\n---------------------------------\n\nSeafile cluster frontend mode\n\n---------------------------------\n\n\nStarting seafile server, please wait ...\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(86): fileserver: web_token_expire_time = 3600\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(98): fileserver: max_index_processing_threads= 3\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(111): fileserver: fixed_block_size = 8388608\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(123): fileserver: max_indexing_threads = 1\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(138): fileserver: put_head_commit_request_timeout = 10\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(150): fileserver: skip_block_hash = 0\n[seaf-server] [2025-02-13 09:23:50] [INFO] ../common/seaf-utils.c(581): Use database Mysql\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(243): fileserver: worker_threads = 10\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(256): fileserver: backlog = 32\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(267): fileserver: verify_client_blocks = 1\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(289): fileserver: cluster_shared_temp_file_mode = 600\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(336): fileserver: check_virus_on_web_upload = 0\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(362): fileserver: enable_async_indexing = 0\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(374): fileserver: async_indexing_threshold = 700\n[seaf-server] [2025-02-13 
09:23:50] [INFO] http-server.c(386): fileserver: fs_id_list_request_timeout = 300\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(399): fileserver: max_sync_file_count = 100000\n[seaf-server] [2025-02-13 09:23:50] [WARNING] ../common/license.c(716): License file /opt/seafile/seafile-license.txt does not exist, allow at most 3 trial users\nLicense file /opt/seafile/seafile-license.txt does not exist, allow at most 3 trial users\n[seaf-server] [2025-02-13 09:23:50] [INFO] filelock-mgr.c(1397): Cleaning expired file locks.\n[2025-02-13 09:23:52] Start Monitor \n[2025-02-13 09:23:52] Start seafevents.main \n/opt/seafile/seafile-pro-server-12.0.9/seahub/seahub/settings.py:1101: SyntaxWarning: invalid escape sequence '\\w'\nmatch = re.search('^EXTRA_(\\w+)', attr)\n/opt/seafile/seafile-pro-server-12.0.9/seahub/thirdpart/seafobj/mc.py:13: SyntaxWarning: invalid escape sequence '\\S'\nmatch = re.match('--SERVER\\\\s*=\\\\s*(\\S+)', mc_options)\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\n\n\n----------------------------------------\nSuccessfully created seafile admin\n----------------------------------------\n\n[seafevents] [2025-02-13 09:23:55] [INFO] root:82 LDAP is not set, disable ldap sync.\n[seafevents] [2025-02-13 09:23:55] [INFO] virus_scan:51 [virus_scan] scan_command option is not found in seafile.conf, disable virus scan.\n[seafevents] [2025-02-13 09:23:55] [INFO] seafevents.app.mq_handler:127 Subscribe to channels: {'seaf_server.stats', 'seahub.stats', 'seaf_server.event', 'seahub.audit'}\n[seafevents] [2025-02-13 09:23:55] [INFO] root:534 Start counting user activity info..\n[seafevents] [2025-02-13 09:23:55] [INFO] root:547 [UserActivityCounter] update 0 items.\n[seafevents] [2025-02-13 09:23:55] [INFO] root:240 Start counting traffic info..\n[seafevents] [2025-02-13 09:23:55] [INFO] root:268 Traffic counter finished, total time: 0.0003578662872314453 seconds.\n[seafevents] [2025-02-13 09:23:55] [INFO] root:23 Start file 
updates sender, interval = 300 sec\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:57 Can not start work weixin notice sender: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [INFO] root:131 search indexer is started, interval = 600 sec\n[seafevents] [2025-02-13 09:23:55] [INFO] root:56 seahub email sender is started, interval = 1800 sec\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:17 Can not start ldap syncer: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:18 Can not start virus scanner: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [INFO] root:35 Start data statistics..\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:40 Can not start content scanner: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:46 Can not scan repo old files auto del days: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [INFO] root:182 Start counting total storage..\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:78 Can not start filename index updater: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [INFO] root:113 search wiki indexer is started, interval = 600 sec\n[seafevents] [2025-02-13 09:23:55] [INFO] root:87 Start counting file operations..\n[seafevents] [2025-02-13 09:23:55] [INFO] root:403 Start counting monthly traffic info..\n[seafevents] [2025-02-13 09:23:55] [INFO] root:491 Monthly traffic counter finished, update 0 user items, 0 org items, total time: 0.0905158519744873 seconds.\n[seafevents] [2025-02-13 09:23:55] [INFO] root:203 [TotalStorageCounter] No results from seafile-db.\n[seafevents] [2025-02-13 09:23:55] [INFO] root:169 [FileOpsCounter] Finish counting file operations in 0.09510159492492676 seconds, 0 added, 0 deleted, 0 visited, 0 modified\n\nSeahub is started\n\nDone.\n If you have a seafile-license.txt license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile/shared. 
If you have modified the path, save the license file under your custom path.
If the license file has a different name or cannot be read, the Seafile server will start in trial mode, which allows at most THREE users
Then restart Seafile:
kubectl delete pods -n seafile $(kubectl get pods -n seafile -o jsonpath='{.items[*].metadata.name}' | grep seafile)\n A safer way to use your Seafile license file
You can also store your license file in a Secret resource in your K8S cluster, which is a safer way:
kubectl create secret generic seafile-license --from-file=seafile-license.txt=$PATH_TO_YOUR_LICENSE_FILE --namespace seafile\n Then modify my-values.yaml to add the extra volume information:
seafile:\n...\nextraVolumes:\n backend:\n - name: seafileLicense\n volumeInfo:\n secret:\n secretName: seafile-license\n items:\n - key: seafile-license.txt\n path: seafile-license.txt\n subPath: seafile-license.txt\n mountPath: /shared/seafile/seafile-license.txt\n readOnly: true\n frontend:\n - name: seafileLicense\n volumeInfo:\n secret:\n secretName: seafile-license\n items:\n - key: seafile-license.txt\n path: seafile-license.txt\n subPath: seafile-license.txt\n mountPath: /shared/seafile/seafile-license.txt\n readOnly: true\n Finally you can upgrade your chart by:
helm upgrade --install seafile seafile/cluster --version 13.0 --namespace seafile --create-namespace --values my-values.yaml\n Seafile Helm Chart is designed to provide fast deployment and version control. You can update and roll back versions using the following steps:
Update Helm repo
helm repo update\n Tip
The repo update command does not always take effect immediately, as the previous repo index may still be stored in the cache.
Download (optional) and modify the new my-values.yaml
wget -O my-values.yaml https://haiwen.github.io/seafile-helm-chart/values/<seafile-version>/cluster.yaml\n\nnano my-values.yaml\n About version of Seafile Helm Chart and Seafile
The version of the Seafile Helm Chart is the same as the major version of Seafile, i.e.:
By default, it will follow the latest Chart and the latest Seafile
Upgrade release to a new version
helm upgrade --install seafile seafile/cluster --namespace seafile --create-namespace --values my-values.yaml --version <release-version>\n (Rollback) If you would like to roll back to a previously running release, you can use the following command to roll back your current instances:
helm rollback seafile -n seafile <revision>\n You can uninstall the chart by the following command:
helm delete seafile --namespace seafile\n"},{"location":"setup/helm_chart_cluster/#advanced-operations","title":"Advanced operations","text":"Please refer here for further advanced operations.
"},{"location":"setup/helm_chart_single_node/","title":"Setup Seafile with a single K8S pod with Seafile Helm Chart","text":"This manual explains how to deploy and run Seafile server on a Linux server using Seafile Helm Chart (chart hereafter) in a single pod (i.e., single-node mode). Compared to Setup by K8S resource files, deployment with a helm chart simplifies the deployment process and provides more flexible deployment control, which is why it is the way we recommend for deploying with K8S.
For specific environment and configuration requirements, please refer to the description of the Docker-based Seafile single-node deployment. Please also refer to the description of the K8S tool section here.
"},{"location":"setup/helm_chart_single_node/#preparation","title":"Preparation","text":"The persistent data directory used in the Docker-based deployment, /opt/seafile-data, is still adopted in this manual. In addition, all K8S YAML files will be placed in /opt/seafile-k8s-yaml (replace this path in the following instructions if you would like to use another one).
Note that we don't provide deployment methods for the basic services (e.g., Redis, MySQL and Elasticsearch) and Seafile-compatible components (e.g., SeaDoc) on K8S in our document. If you need to install these services on K8S, you can adapt them following the approach in this document.
"},{"location":"setup/helm_chart_single_node/#system-requirements","title":"System requirements","text":"Please refer here for the details of the system requirements of the Seafile service. Note that these requirements apply to all nodes where Seafile pods may be scheduled in your K8S cluster. In general, we recommend that each node should have at least 2G RAM and a 2-core CPU (> 2GHz).
"},{"location":"setup/helm_chart_single_node/#install-seafile-helm-chart","title":"Install Seafile helm chart","text":"Create namespace
kubectl create namespace seafile\n Create a secret for sensitive data
Seafile ProSeafile CEkubectl create secret generic seafile-secret --namespace seafile \\\n--from-literal=JWT_PRIVATE_KEY='<required>' \\\n--from-literal=SEAFILE_MYSQL_DB_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_ADMIN_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_MYSQL_ROOT_PASSWORD='<required>' \\\n--from-literal=REDIS_PASSWORD='' \\\n--from-literal=S3_SECRET_KEY='' \\\n--from-literal=S3_SSE_C_KEY=''\n kubectl create secret generic seafile-secret --namespace seafile \\\n--from-literal=JWT_PRIVATE_KEY='<required>' \\\n--from-literal=SEAFILE_MYSQL_DB_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_ADMIN_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_MYSQL_ROOT_PASSWORD='<required>' \\\n--from-literal=REDIS_PASSWORD=''\n where the JWT_PRIVATE_KEY can be generated by pwgen -s 40 1
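If pwgen is not available on your machine, the key can be generated with openssl instead (the fallback below is our assumption; any sufficiently random 40-character string works as JWT_PRIVATE_KEY):

```shell
# Generate a 40-character random key for JWT_PRIVATE_KEY.
# pwgen is used when present (as in the text); otherwise fall back to openssl.
if command -v pwgen >/dev/null 2>&1; then
  JWT_PRIVATE_KEY=$(pwgen -s 40 1)
else
  JWT_PRIVATE_KEY=$(openssl rand -hex 20)  # 20 random bytes -> 40 hex characters
fi
echo "$JWT_PRIVATE_KEY"
```

The printed value can then be pasted into the --from-literal=JWT_PRIVATE_KEY='...' flag above.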
Download and modify my-values.yaml according to your configuration. You can follow here for the details:
wget -O my-values.yaml https://haiwen.github.io/seafile-helm-chart/values/13.0/pro.yaml\n\nnano my-values.yaml\n wget -O my-values.yaml https://haiwen.github.io/seafile-helm-chart/values/13.0/ce.yaml\n\nnano my-values.yaml\n Tip
You are not required to use the full my-values.yaml we provide (i.e., you can create an empty my-values.yaml and only add the required fields, as the others have default values defined in our chart), since requiring the full file would destroy the flexibility of deploying with Helm. However, the full file documents the formats in which Seafile Helm Chart reads these configurations, as well as all the environment variables and secret variables that can be read directly. In addition, you can also create a custom storageClassName for the persistence directory used by Seafile. You only need to specify storageClassName in the seafile.configs.seafileDataVolume object in my-values.yaml:
seafile:\n configs:\n seafileDataVolume:\n storageClassName: <your seafile storage class name>\n ...\n Then install the chart using the following command:
Seafile ProSeafile CEhelm repo add seafile https://haiwen.github.io/seafile-helm-chart/repo\nhelm upgrade --install seafile seafile/pro --version 13.0 --namespace seafile --create-namespace --values my-values.yaml\n helm repo add seafile https://haiwen.github.io/seafile-helm-chart/repo\nhelm upgrade --install seafile seafile/ce --version 13.0 --namespace seafile --create-namespace --values my-values.yaml\n Seafile helm chart 13.0 supports variable validity checking
Starting from Seafile helm chart 13.0, the validity of variables in my-values.yaml is checked at deployment time. When a validity check fails, you may encounter the following error message:
You have enabled <Some feature> but <Variable> is not specified and is not allowed to be empty\n If you encounter the above message, please check the relevant configuration in my-values.yaml.
After installing the chart, the Seafile pod should start up automatically.
About Seafile service
The default service type of Seafile is LoadBalancer. You should specify a K8S load balancer for Seafile, or specify at least one external IP that can be accessed from external networks.
Important for deployment
By default, Seafile will access Elasticsearch (Pro only) via the specific service name: - Elasticsearch: elasticsearch with port 9200
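As a sketch of what the search-index settings in seafevents.conf might look like under the default service name (the option names below follow the Seafile Pro search configuration; treat the exact values as assumptions to be checked against your deployment):

```ini
[INDEX FILES]
enabled = true
# external Elasticsearch reached via the K8S service name and port above
external_es_server = true
es_host = elasticsearch
es_port = 9200
```

If your Elasticsearch service uses another name or port, point es_host and es_port at it.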
If the above services are:
Please modify the files in /opt/seafile-data/seafile/conf to correct the configurations for the above services; otherwise the Seafile server cannot start normally. Then restart Seafile server:
kubectl delete pods -n seafile $(kubectl get pods -n seafile -o jsonpath='{.items[*].metadata.name}' | grep seafile)\n"},{"location":"setup/helm_chart_single_node/#activating-the-seafile-license-pro","title":"Activating the Seafile License (Pro)","text":"If you have a seafile-license.txt license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile-data. If you have modified the path, save the license file under your custom path.
If the license file has a different name or cannot be read, the Seafile server will start in trial mode, which allows at most THREE users
Then restart Seafile:
kubectl delete pods -n seafile $(kubectl get pods -n seafile -o jsonpath='{.items[*].metadata.name}' | grep seafile)\n A safer way to use your Seafile license file
You can also store your license file in a Secret resource in your K8S cluster, which is a safer way:
kubectl create secret generic seafile-license --from-file=seafile-license.txt=$PATH_TO_YOUR_LICENSE_FILE --namespace seafile\n Then modify my-values.yaml to add the extra volume information:
seafile:\n...\nextraVolumes:\n - name: seafileLicense\n volumeInfo:\n secret:\n secretName: seafile-license\n items:\n - key: seafile-license.txt\n path: seafile-license.txt\n subPath: seafile-license.txt\n mountPath: /shared/seafile/seafile-license.txt\n readOnly: true\n Finally you can upgrade your chart by:
Seafile ProSeafile CEhelm upgrade --install seafile seafile/pro --version 13.0 --namespace seafile --create-namespace --values my-values.yaml\n helm upgrade --install seafile seafile/ce --version 13.0 --namespace seafile --create-namespace --values my-values.yaml\n"},{"location":"setup/helm_chart_single_node/#version-control","title":"Version control","text":"Seafile Helm Chart is designed to provide fast deployment and version control. You can update and roll back versions using the following steps:
Update Helm repo
helm repo update\n Tip
The repo update command does not always take effect immediately, as the previous repo index may still be stored in the cache.
Download (optional) and modify the new my-values.yaml
wget -O my-values.yaml https://haiwen.github.io/seafile-helm-chart/values/<seafile-version>/pro.yaml\n\nnano my-values.yaml\n wget -O my-values.yaml https://haiwen.github.io/seafile-helm-chart/values/<seafile-version>/ce.yaml\n\nnano my-values.yaml\n About version of Seafile Helm Chart and Seafile
The version of the Seafile Helm Chart is the same as the major version of Seafile, i.e.:
By default, it will follow the latest Chart and the latest Seafile
Upgrade release to a new version
Seafile ProSeafile CEhelm upgrade --install seafile seafile/pro --namespace seafile --create-namespace --values my-values.yaml --version <release-version>\n helm upgrade --install seafile seafile/ce --namespace seafile --create-namespace --values my-values.yaml --version <release-version>\n (Rollback) If you would like to roll back to a previously running release, you can use the following command to roll back your current instances:
helm rollback seafile -n seafile <revision>\n You can uninstall the chart by the following command:
helm delete seafile --namespace seafile\n"},{"location":"setup/helm_chart_single_node/#advanced-operations","title":"Advanced operations","text":"Please refer here for further advanced operations.
"},{"location":"setup/k8s_advanced_management/","title":"Seafile K8S advanced management","text":"This document mainly describes how to manage and maintain Seafile deployed through our K8S deployment document. At the same time, if you are already proficient in using kubectl commands to manage K8S resources, you can also customize the deployment solutions we provide.
Namespaces for Seafile K8S deployment
Our documentation provides two deployment solutions for both single-node and cluster deployment (via Seafile Helm Chart and K8S resource files), both of which can be highly customized.
Regardless of which deployment method you use, in our newer manuals (usually in versions after Seafile 12.0.9), Seafile-related K8S resources (including related Pods, services, and persistent volumes, etc.) are defined in the seafile namespace. In previous versions, you may have deployed Seafile in the default namespace; in this case, when referring to this document for Seafile K8S resource management, be sure to remove -n seafile from the commands.
Similar to a Docker installation, you can also manage containers through kubectl commands. For example, you can use the following commands to check whether the relevant resources have started successfully and whether the relevant services can be accessed normally. First, execute the following command and note the pod name prefixed with seafile- (such as seafile-748b695648-d6l4g):
kubectl get pods -n seafile\n You can check the logs of a pod by
kubectl logs seafile-748b695648-d6l4g -n seafile\n and enter a container by
kubectl exec -it seafile-748b695648-d6l4g -n seafile -- bash\n Also, you can restart the services by the following commands:
kubectl delete pods -n seafile $(kubectl get pods -n seafile -o jsonpath='{.items[*].metadata.name}' | grep seafile)\n"},{"location":"setup/k8s_advanced_management/#k8s-gateway-and-https","title":"K8S Gateway and HTTPS","text":"Since the Ingress feature is frozen in newer versions of K8S, this article introduces how to use the newer K8S Gateway feature to implement Seafile service exposure and load balancing.
Still use Nginx-Ingress
If your K8S is still using Nginx-Ingress, you can follow here to set up the ingress controller and HTTPS. We sincerely thank Datamate for providing an example of this configuration.
For the details and features of the K8S Gateway, please refer to the K8S official document. You can simply install it by
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.1.0/standard-install.yaml\n The Gateway API requires configuration of three API categories in its resource model: - GatewayClass: Defines a group of gateways with the same configuration, managed by the controller that implements the class. - Gateway: Defines an instance of traffic handling infrastructure, which can be thought of as a load balancer. - HTTPRoute: Defines HTTP-specific rules for mapping traffic from gateway listeners to representations of backend network endpoints. These endpoints are typically represented as Services.
The GatewayClass resource serves the same purpose as the IngressClass in the old Ingress API, similar to the StorageClass in the Storage API. It defines the categories of Gateways that can be created. Typically, this resource is provided by your infrastructure platform, such as EKS or GKE. It can also be provided by a third-party Ingress Controller, such as Nginx-gateway or Istio-gateway.
Here, we take Nginx-gateway as the example; you can install it following the official document. After installation, you can view the installation status with the following command:
# `gc` means the `gatewayclass`, and its same as `kubectl get gatewayclass`\nkubectl get gc \n\n#NAME CONTROLLER ACCEPTED AGE\n#nginx gateway.nginx.org/nginx-gateway-controller True 22s\n Typically, after you install GatewayClass, your cloud provider will provide you with a load balancing IP, which is visible in GatewayClass. If this IP is not assigned, you can manually bind it to a IP that can be accessed from exteranl network.
kubectl edit svc nginx-gateway -n nginx-gateway\n and modify the following section:
...\nspec:\n ...\n externalIPs:\n - <your external IP>\n externalTrafficPolicy: Cluster\n ...\n...\n"},{"location":"setup/k8s_advanced_management/#gateway","title":"Gateway","text":"Gateway is used to describe an instance of traffic-handling infrastructure. Usually, a Gateway defines a network endpoint that can be used to process traffic, that is, to filter, balance, and split traffic across Services and other backends. For example, it can represent a cloud load balancer, or a cluster proxy server configured to accept HTTP traffic. As above, please refer to the official documentation for a detailed description of Gateway. Here is only a simple reference configuration for Seafile:
# nano seafile-gateway/gateway.yaml\n\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n name: seafile-gateway\nspec:\n gatewayClassName: nginx\n listeners:\n - name: seafile-http\n protocol: HTTP\n port: 80\n"},{"location":"setup/k8s_advanced_management/#httproute","title":"HTTPRoute","text":"The HTTPRoute category specifies the routing behavior of HTTP requests from the Gateway listener to the backend network endpoints. For service backends, the implementation can represent the backend network endpoint as a service IP or a supporting endpoint of the service. An HTTPRoute represents the configuration that will be applied to the underlying Gateway implementation. For example, defining a new HTTPRoute may result in configuring additional traffic routes in a cloud load balancer or in-cluster proxy server. As above, please refer to the official documentation for a detailed description of the HTTPRoute resource. Here is a reference configuration applicable to this document.
# nano seafile-gateway/httproute.yaml\n\napiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n name: seafile-httproute\nspec:\n parentRefs:\n - group: gateway.networking.k8s.io\n kind: Gateway\n name: seafile-gateway\n hostnames:\n - \"<your domain>\"\n rules:\n - matches:\n - path:\n type: PathPrefix\n value: /\n backendRefs:\n - name: seafile\n port: 80\n After installing or defining GatewayClass, Gateway and HTTPRoute, you can now enable this feature with the following command, and then view your Seafile server at http://seafile.example.com/:
kubectl apply -f seafile-gateway -n seafile\n"},{"location":"setup/k8s_advanced_management/#enable-https-optional","title":"Enable HTTPS (Optional)","text":"When using the K8S Gateway, a common way to enable HTTPS is to add the relevant information about the TLS listener in the Gateway resource. You can refer here for further details. We will provide a simple way here so that you can quickly enable HTTPS for your Seafile K8S.
Create a secret resource (seafile-tls-cert) for your TLS certificates:
kubectl create secret tls seafile-tls-cert \\\n--cert=<your path to fullchain.pem> \\\n--key=<your path to privkey.pem>\n 2. Use the TLS in your Gateway resource and enable HTTPS (note that a TLS-terminating listener must use the HTTPS protocol, typically on port 443): # nano seafile-gateway/gateway.yaml\n\n...\nspec:\n ...\n listeners:\n - name: seafile-https\n protocol: HTTPS\n port: 443\n tls:\n mode: Terminate\n certificateRefs:\n - kind: Secret\n group: \"\"\n name: seafile-tls-cert\n...\n Modify seahub_settings.py:
SERVICE_URL = \"https://<your domain>/\"\n Restart Seafile K8S Gateway:
kubectl delete -f seafile-gateway -n seafile\nkubectl apply -f seafile-gateway -n seafile\n Now you can access your Seafile service at https://<your domain>/
Similar to single-node deployment, you can browse the log files of Seafile running directly in the persistent volume directory (i.e., <path>/seafile/logs). The difference is that when using K8S to deploy a Seafile cluster (especially in a cloud environment), the persistent volume created is usually shared and synchronized for all nodes. However, the logs generated by the Seafile service do not record the specific node information where these logs are located, so browsing the files in the above folder may make it difficult to identify which node these logs are generated from. Therefore, one solution proposed here is:
Record the generated logs to the standard output. In this way, the logs can be distinguished per node by kubectl logs (but all types of logs will be output together now). You can enable this feature (it should be enabled by default in a K8S Seafile cluster but not in K8S single-pod Seafile) by setting SEAFILE_LOG_TO_STDOUT to true in seafile-env.yaml:
...\ndata:\n ...\n SEAFILE_LOG_TO_STDOUT: \"true\"\n ...\n Then restart the Seafile server:
kubectl delete pods -n seafile $(kubectl get pods -n seafile -o jsonpath='{.items[*].metadata.name}' | grep seafile)\n The logs from step 1 can be distinguished between nodes, but they are aggregated and output together, which is inconvenient for log retrieval. So you have to route the standard-output logs (i.e., distinguish logs by the corresponding component name) and re-record them in a new file or upload them to a log aggregation system (e.g., Loki).
Currently in the K8S environment, the commonly used log routing plugins are:
Fluent Bit and Promtail are more lightweight (i.e., consume fewer system resources), while Promtail only supports transferring logs to Loki. Therefore, this document will mainly introduce log routing through Fluent Bit, a fast, lightweight logs-and-metrics agent. It is also a CNCF graduated sub-project under the umbrella of Fluentd. Fluent Bit is licensed under the terms of the Apache License v2.0. You should first deploy Fluent Bit in your K8S cluster by following the official document. Then modify the Fluent-Bit pod settings to mount a new directory to load the configuration files:
#kubectl edit ds fluent-bit\n\n...\nspec:\n ...\n spec:\n ...\n containers:\n - name: fluent-bit\n volumeMounts:\n ...\n - mountPath: /fluent-bit/etc/seafile\n name: fluent-bit-seafile\n - mountPath: /\n ...\n ...\n volumes:\n ...\n - hostPath:\n path: /opt/fluent-bit\n name: fluent-bit-seafile\n and
#kubectl edit cm fluent-bit\n\ndata:\n ...\n fluent-bit.conf: |\n [SERVICE]\n ...\n Parsers_File /fluent-bit/etc/seafile/confs/parsers.conf\n ...\n @INCLUDE /fluent-bit/etc/seafile/confs/*-log.conf\n For example, here we use /opt/fluent-bit/confs (it has to be non-shared). The parsers are defined in /opt/fluent-bit/confs/parsers.conf, and the configuration for each log type (e.g., seahub's log, seafevents' log) is defined in /opt/fluent-bit/confs/*-log.conf. Each .conf file defines several Fluent-Bit data pipeline components:
Warning
For PARSER, it can only be stored in /opt/fluent-bit/confs/parsers.conf, otherwise Fluent-Bit cannot start up normally.
According to the above, a container will generate a log file (usually in /var/log/containers/<container-name>-xxxxxx.log), so you need to prepare an importer and add the following information (for more details, please refer to the official document about the TAIL input plugin) in /opt/fluent-bit/confs/seafile-log.conf:
[INPUT]\n Name tail\n Path /var/log/containers/seafile-frontend-*.log\n Buffer_Chunk_Size 2MB\n Buffer_Max_Size 10MB\n Docker_Mode On\n Docker_Mode_Flush 5\n Tag seafile.*\n Parser Docker # for definition, please see the next section as well\n\n[INPUT]\n Name tail\n Path /var/log/containers/seafile-backend-*.log\n Buffer_Chunk_Size 2MB\n Buffer_Max_Size 10MB\n Docker_Mode On\n Docker_Mode_Flush 5\n Tag seafile.*\n Parser Docker\n The above defines two importers, which are used to monitor seafile-frontend and seafile-backend services respectively. The reason why they are written together here is that for a node, you may not know when it will run the frontend service and when it will run the backend service, but they have the same tag prefix seafile..
Each input has to use a parser to parse the logs and pass them to the filter. Here, a parser named Docker is created to parse the logs generated by the K8S-docker-runtime container. The parser is placed in /opt/fluent-bit/confs/parser.conf (for more details, please refer to offical document about JSON parser):
[PARSER]\n Name Docker\n Format json\n Time_Key time\n Time_Format %Y-%m-%dT%H:%M:%S.%LZ\n Log records after parsing
The logs of the Docker container are saved in /var/log/containers in JSON format (see the sample below), which is why we use the JSON format in the above parser.
{\"log\":\"[seaf-server] [2025-01-17 07:43:48] [INFO] seafile-session.c(86): fileserver: web_token_expire_time = 3600\\n\",\"stream\":\"stdout\",\"time\":\"2025-01-17T07:43:48.294638442Z\"}\n{\"log\":\"[seaf-server] [2025-01-17 07:43:48] [INFO] seafile-session.c(98): fileserver: max_index_processing_threads= 3\\n\",\"stream\":\"stdout\",\"time\":\"2025-01-17T07:43:48.294810145Z\"}\n{\"log\":\"[seaf-server] [2025-01-17 07:43:48] [INFO] seafile-session.c(111): fileserver: fixed_block_size = 8388608\\n\",\"stream\":\"stdout\",\"time\":\"2025-01-17T07:43:48.294879777Z\"}\n{\"log\":\"[seaf-server] [2025-01-17 07:43:48] [INFO] seafile-session.c(123): fileserver: max_indexing_threads = 1\\n\",\"stream\":\"stdout\",\"time\":\"2025-01-17T07:43:48.295002479Z\"}\n{\"log\":\"[seaf-server] [2025-01-17 07:43:48] [INFO] seafile-session.c(138): fileserver: put_head_commit_request_timeout = 10\\n\",\"stream\":\"stdout\",\"time\":\"2025-01-17T07:43:48.295082733Z\"}\n{\"log\":\"[seaf-server] [2025-01-17 07:43:48] [INFO] seafile-session.c(150): fileserver: skip_block_hash = 0\\n\",\"stream\":\"stdout\",\"time\":\"2025-01-17T07:43:48.295195843Z\"}\n{\"log\":\"[seaf-server] [2025-01-17 07:43:48] [INFO] ../common/seaf-utils.c(553): Use database Mysql\\n\",\"stream\":\"stdout\",\"time\":\"2025-01-17T07:43:48.29704895Z\"}\n When these logs are obtained by the importer and parsed by the parser, they will become independent log records with the following fields:
log: The original log content (i.e., the same as you see in kubectl logs seafile-xxx -n seafile) with an extra line break at the end (i.e., \n). This is also the field we need to save or upload to the log aggregation system in the end. stream: The stream the original log came from; stdout means the standard output. time: The time when the log was recorded in the corresponding stream (ISO 8601 format). Add two filters in /opt/fluent-bit/confs/seafile-log.conf for record filtering and routing. Here, the record_modifier filter selects the useful keys in the log records (see the contents in the above tip label; only the log field is what we need), and the rewrite_tag filter is used to route logs according to specific rules:
[FILTER] \n Name record_modifier\n Match seafile.*\n Allowlist_key log\n\n\n[FILTER]\n Name rewrite_tag\n Match seafile.*\n Rule $log ^.*\\[seaf-server\\].*$ seaf-server false # for seafile's logs\n Rule $log ^.*\\[seahub\\].*$ seahub false # for seahub's logs\n Rule $log ^.*\\[seafevents\\].*$ seafevents false # for seafevents' logs\n Rule $log ^.*\\[seafile-slow-rpc\\].*$ seafile-slow-rpc false # for slow-rpc's logs\n"},{"location":"setup/k8s_advanced_management/#output-logs-to-loki","title":"Output logs to Loki","text":"Loki is a multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost-effective and easy to operate. The Fluent-Bit loki built-in output plugin allows you to send your logs or events to a Loki service. It supports data enrichment with Kubernetes labels, custom label keys and Tenant ID, among others.
Alternative Fluent-Bit Loki plugin by Grafana
For sending logs to Loki, there are two plugins for Fluent-Bit:
Since each outputer does not carry a distinguishing mark in the configuration files (because Fluent-Bit treats each plugin as part of a tag workflow), add one outputer per log type:
Seaf-server log: Add an outputer to /opt/fluent-bit/confs/seaf-server-log.conf:
[OUTPUT]\n Name loki\n Match seaf-server\n Host <your Loki's host>\n port <your Loki's port>\n labels job=fluentbit, node_name=<your-node-name>, node_id=<your-node-id> # node_name and node_id is optional, but recommended for identifying the source node\n seahub log: Add an outputer to /opt/fluent-bit/confs/seahub-log.conf:
[OUTPUT]\n Name loki\n Match seahub\n Host <your Loki's host>\n port <your Loki's port>\n labels job=fluentbit, node_name=<your-node-name>, node_id=<your-node-id> # node_name and node_id is optional, but recommended for identifying the source node\n seafevents log: Add an outputer to /opt/fluent-bit/confs/seafevents-log.conf:
[OUTPUT]\n Name loki\n Match seafevents\n Host <your Loki's host>\n port <your Loki's port>\n labels job=fluentbit, node_name=<your-node-name>, node_id=<your-node-id> # node_name and node_id is optional, but recommended for identifying the source node\n seafile-slow-rpc log: Add an outputer to /opt/fluent-bit/confs/seafile-slow-rpc-log.conf:
[OUTPUT]\n Name loki\n Match seafile-slow-rpc\n Host <your Loki's host>\n port <your Loki's port>\n labels job=fluentbit, node_name=<your-node-name>, node_id=<your-node-id> # node_name and node_id is optional, but recommended for identifying the source node\n Cloud Loki instance
If you are using a cloud Loki instance, you can follow the Fluent-Bit Loki plugin document to fill in all necessary fields. Usually, the following additional fields are needed for a cloud Loki service:
tls, tls.verify, http_user, http_passwd. This manual explains how to deploy and run Seafile server on a Linux server using Kubernetes (K8S hereafter) in a single pod (i.e., single-node mode). This document is essentially an extended description of the Docker-based Seafile single-node deployment (supporting both CE and Pro).
For specific environment and configuration requirements, please refer to the description of the Docker-based Seafile single-node deployment. Please also refer to the description of the K8S tool section here.
"},{"location":"setup/k8s_single_node/#system-requirements","title":"System requirements","text":"Please refer here for the details of the system requirements of the Seafile service. Note that these requirements apply to all nodes where Seafile pods may be scheduled in your K8S cluster. In general, we recommend that each node should have at least 2G RAM and a 2-core CPU (> 2GHz).
"},{"location":"setup/k8s_single_node/#gettings-started","title":"Getting started","text":"The persistent data directory used in the Docker-based deployment, /opt/seafile-data, is still adopted in this manual. In addition, all K8S YAML files will be placed in /opt/seafile-k8s-yaml (replace this path in the following instructions if you would like to use another one).
Note that we don't provide deployment methods for the basic services (e.g., Redis, MySQL and Elasticsearch) and Seafile-compatible components (e.g., SeaDoc) on K8S in our document. If you need to install these services on K8S, you can adapt them following the approach of this document.
"},{"location":"setup/k8s_single_node/#create-namespace-and-secretmap","title":"Create namespace and secretMap","text":"Seafile ProSeafile CEkubectl create ns seafile\n\nkubectl create secret generic seafile-secret --namespace seafile \\\n--from-literal=JWT_PRIVATE_KEY='<required>' \\\n--from-literal=SEAFILE_MYSQL_DB_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_ADMIN_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_MYSQL_ROOT_PASSWORD='<required>' \\\n--from-literal=REDIS_PASSWORD='' \\\n--from-literal=S3_SECRET_KEY='' \\\n--from-literal=S3_SSE_C_KEY='' \n kubectl create ns seafile\n\nkubectl create secret generic seafile-secret --namespace seafile \\\n--from-literal=JWT_PRIVATE_KEY='<required>' \\\n--from-literal=SEAFILE_MYSQL_DB_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_ADMIN_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_MYSQL_ROOT_PASSWORD='<required>' \\\n--from-literal=REDIS_PASSWORD=''\n"},{"location":"setup/k8s_single_node/#down-load-the-yaml-files-for-seafile-server","title":"Down load the YAML files for Seafile Server","text":"Pro editionCommunity edition mkdir -p /opt/seafile-k8s-yaml\n\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/pro/seafile-deployment.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/pro/seafile-persistentvolume.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/pro/seafile-persistentvolumeclaim.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/pro/seafile-service.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/pro/seafile-env.yaml\n mkdir -p /opt/seafile-k8s-yaml\n\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/ce/seafile-deployment.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/ce/seafile-persistentvolume.yaml\nwget -P /opt/seafile-k8s-yaml 
https://manual.seafile.com/13.0/repo/k8s/ce/seafile-persistentvolumeclaim.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/ce/seafile-service.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/ce/seafile-env.yaml\n Here we assume you have downloaded the YAML files to /opt/seafile-k8s-yaml. They mainly include:
seafile-deployment.yaml for Seafile server pod management and creation, seafile-service.yaml for exposing Seafile services to the external network, seafile-persistentvolume.yaml for defining the location of a volume used for persistent storage on the host, and seafile-persistentvolumeclaim.yaml for declaring the use of persistent storage in the container.Use PV bound from a storage class
If you would like to use a persistent volume (PV) automatically provisioned by a storage class, please modify seafile-persistentvolumeclaim.yaml and specify storageClassName. The static PV defined by seafile-persistentvolume.yaml can then be removed:
rm /opt/seafile-k8s-yaml/seafile-persistentvolume.yaml\n For further configuration details, you can refer to the official documentation.
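As an illustration, a claim bound through a storage class might look like the following sketch (the storage class name `standard`, the claim name, and the requested size are placeholders; the claim name must match the one referenced by seafile-deployment.yaml):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: seafile-data-pvc        # must match the claim used in seafile-deployment.yaml
  namespace: seafile
spec:
  storageClassName: standard    # your cluster's storage class
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi            # size as needed for your Seafile data
```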
"},{"location":"setup/k8s_single_node/#modify-seafile-envyaml","title":"Modifyseafile-env.yaml","text":"Similar to Docker-base deployment, Seafile cluster in K8S deployment also supports use files to configure startup progress, you can modify common environment variables by
nano /opt/seafile-k8s-yaml/seafile-env.yaml\n Warning
The fields marked with <...> are required. Please make sure these items are filled in; otherwise the Seafile server may not run properly.
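If pwgen is not available for generating required values such as JWT_PRIVATE_KEY, a random secret of the required length (at least 32 characters) can also be produced with a short Python snippet, for example:

```python
import secrets
import string

def generate_secret(length: int = 40) -> str:
    """Generate a random alphanumeric secret, similar to `pwgen -s 40 1`.
    Seafile requires JWT_PRIVATE_KEY to be at least 32 characters long."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

jwt_private_key = generate_secret()
print(jwt_private_key)
```

Paste the printed value into the corresponding `--from-literal` argument when creating the secret.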
You can start the Seafile server and place its resources into the namespace seafile for easier management with
kubectl apply -f /opt/seafile-k8s-yaml/ -n seafile\n Important for Pro edition
By default, Seafile (Pro) accesses Elasticsearch with the following service name:
elasticsearch with port 9200. If the above services differ in your deployment:
Please modify /opt/seafile-data/seafile/conf/seafevents.conf to correct the configurations for the above services; otherwise the Seafile server cannot start normally. Then restart the Seafile server:
kubectl delete pods -n seafile $(kubectl get pods -n seafile -o jsonpath='{.items[*].metadata.name}' | grep seafile)\n"},{"location":"setup/k8s_single_node/#activating-the-seafile-license-pro","title":"Activating the Seafile License (Pro)","text":"If you have a seafile-license.txt license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile-data. If you have modified the path, save the license file under your custom path.
If the license file has a different name or cannot be read, the Seafile server will start in trial mode, which is limited to at most THREE users.
Then restart Seafile:
kubectl delete pods -n seafile $(kubectl get pods -n seafile -o jsonpath='{.items[*].metadata.name}' | grep seafile)\n"},{"location":"setup/k8s_single_node/#uninstall-seafile-k8s","title":"Uninstall Seafile K8S","text":"You can uninstall the Seafile K8S by the following command:
kubectl delete -f /opt/seafile-k8s-yaml/ -n seafile\n"},{"location":"setup/k8s_single_node/#advanced-operations","title":"Advanced operations","text":"Please refer here for further advanced operations.
"},{"location":"setup/migrate_backends_data/","title":"Migrate data between different backends","text":"Seafile supports data migration between filesystem, s3, ceph, swift and Alibaba oss by a built-in script. Before migration, you have to ensure that both S3 hosts can be accessed normally.
Migration to or from S3
Since version 11, when you migrate from S3 to other storage backends or from other storage backends to S3, you have to use the V4 authentication protocol. This is because version 11 upgrades to the Boto3 library, which fails to list objects from S3 when it is configured to use the V2 authentication protocol.
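For reference, an S3 backend section in seafile.conf with the V4 protocol enabled looks roughly like the following sketch (bucket name, keys and region are placeholders; the same options apply to the fs and block backend sections):

```ini
[commit_object_backend]
name = s3
bucket = my-commit-objects
key_id = <your-key-id>
key = <your-secret-key>
use_v4_signature = true
aws_region = us-east-1
use_https = true
```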
"},{"location":"setup/migrate_backends_data/#copy-seafileconf-and-use-new-s3-configurations","title":"Copyseafile.conf and use new S3 configurations","text":"During the migration process, Seafile needs to know where the data will be migrated to. The easiest way is to copy the original seafile.conf to a new path, and then use the new S3 configurations in this file.
Warning
For deployments with Docker, the new seafile.conf has to be put in the persistent directory (e.g., /opt/seafile-data/seafile.conf) used by the Seafile service. Otherwise the script cannot locate the new configuration file.
cp /opt/seafile-data/seafile/conf/seafile.conf /opt/seafile-data/seafile.conf\n\nnano /opt/seafile-data/seafile.conf\n cp /opt/seafile/conf/seafile.conf /opt/seafile.conf\n\nnano /opt/seafile.conf\n Then you can follow here to put the new S3 configurations into the new seafile.conf. If you want to migrate to a local file system instead, the new seafile.conf looks as follows:
# ... other configurations\n\n[commit_object_backend]\nname = fs\ndir = /var/data_backup\n\n[fs_object_backend]\nname = fs\ndir = /var/data_backup\n\n[block_backend]\nname = fs\ndir = /var/data_backup\n"},{"location":"setup/migrate_backends_data/#stop-seafile-server","title":"Stop Seafile Server","text":"The data migration process does not interrupt the Seafile service. However, if the original storage data is modified during migration, those changes may not be synchronized to the migrated data. Therefore, we recommend that you stop the Seafile service before executing the migration procedure.
Deploy with DockerDeploy from binary packagedocker exec -it seafile bash\ncd /opt/seafile/seafile-server-latest\n./seahub.sh stop\n./seafile.sh stop\n cd /opt/seafile/seafile-server-latest\n./seahub.sh stop\n./seafile.sh stop\n"},{"location":"setup/migrate_backends_data/#run-migratesh-to-initially-migrate-objects","title":"Run migrate.sh to initially migrate objects","text":"This step will migrate most of the objects from the source storage to the destination storage. You don't need to stop the Seafile service at this stage, as it may take quite a long time to finish. Since the service is not stopped, some new objects may be added to the source storage during migration. Those objects will be handled in the next step:
Speed-up migrating large number of objects
If you have millions of objects in the storage (especially fs objects), it may take quite a long time to migrate all objects, and more than half of that time is spent checking whether an object already exists in the destination storage. In this situation, you can increase the nworker and maxsize variables in migrate.py:
class ThreadPool(object):\n def __init__(self, do_work, nworker=20):\n self.do_work = do_work\n self.nworker = nworker\n self.task_queue = Queue.Queue(maxsize = 2000)\n However, if the two values (i.e., nworker and maxsize) are too large, the improvement in migration speed may be limited, because the disk I/O bottleneck will have been reached.
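The worker-pool pattern used by migrate.py can be sketched in Python 3 as follows (a simplified illustration, not the actual migrate.py code; class and function names are made up):

```python
import queue
import threading

class SimpleThreadPool:
    """Python 3 sketch of the producer/consumer pattern migrate.py uses.
    nworker and maxsize are the two tuning knobs mentioned above."""

    def __init__(self, do_work, nworker=20, maxsize=2000):
        self.do_work = do_work
        self.task_queue = queue.Queue(maxsize=maxsize)
        self.threads = [threading.Thread(target=self._worker)
                        for _ in range(nworker)]

    def start(self):
        for t in self.threads:
            t.start()

    def _worker(self):
        while True:
            task = self.task_queue.get()
            if task is None:      # sentinel: stop this worker
                break
            self.do_work(task)

    def put(self, task):
        self.task_queue.put(task)

    def join(self):
        # one sentinel per worker, queued after all real tasks
        for _ in self.threads:
            self.task_queue.put(None)
        for t in self.threads:
            t.join()

# demo: double 50 numbers on 4 workers
results = []
lock = threading.Lock()

def double(x):
    with lock:
        results.append(x * 2)

pool = SimpleThreadPool(double, nworker=4, maxsize=100)
pool.start()
for i in range(50):
    pool.put(i)
pool.join()
```

More workers help when each task waits on network or disk I/O, but past the disk's throughput limit, adding workers yields no further speed-up.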
Encrypted storage backend data (deprecated)
If you have an encrypted storage backend, you can use this script to migrate and decrypt the data from that backend to a new one. Add the --decrypt option when calling the script; it will decrypt the data while reading it, and then write the unencrypted data to the new backend:
./migrate.sh /opt --decrypt\n Deploy with DockerDeploy from binary package # make sure you are in the container and in directory `/opt/seafile/seafile-server-latest`\n./migrate.sh /shared\n\n# exit container and stop it\nexit\ndocker compose down\n # make sure you are in the directory `/opt/seafile/seafile-server-latest`\n./migrate.sh /opt\n Success
You can see the following message if the migration process is done:
2025-01-15 05:49:39,408 Start to fetch [commits] object from destination\n2025-01-15 05:49:39,422 Start to fetch [fs] object from destination\n2025-01-15 05:49:39,442 Start to fetch [blocks] object from destination\n2025-01-15 05:49:39,677 [commits] [0] objects exist in destination\n2025-01-15 05:49:39,677 Start to migrate [commits] object\n2025-01-15 05:49:39,749 [blocks] [0] objects exist in destination\n2025-01-15 05:49:39,755 Start to migrate [blocks] object\n2025-01-15 05:49:39,752 [fs] [0] objects exist in destination\n2025-01-15 05:49:39,762 Start to migrate [fs] object\n2025-01-15 05:49:40,602 Complete migrate [commits] object\n2025-01-15 05:49:40,626 Complete migrate [blocks] object\n2025-01-15 05:49:40,790 Complete migrate [fs] object\nDone.\n"},{"location":"setup/migrate_backends_data/#replace-the-original-seafileconf-and-start-seafile","title":"Replace the original seafile.conf and start Seafile","text":"After running the script, we recommend that you check whether your data now exists on the new storage backend (i.e., the migration succeeded, and the number and size of files should be the same). Then you can remove the data from the old storage backend and replace the original seafile.conf with the new one:
mv /opt/seafile-data/seafile.conf /opt/seafile-data/seafile/conf/seafile.conf\n mv /opt/seafile.conf /opt/seafile/conf/seafile.conf\n Finally, you can start Seafile server:
Deploy with DockerDeploy from binary packagedocker compose up -d\n # make sure you are in the directory `/opt/seafile/seafile-server-latest`\n./seahub.sh start\n./seafile.sh start\n"},{"location":"setup/migrate_ce_to_pro_with_docker/","title":"Migrate CE to Pro with Docker","text":""},{"location":"setup/migrate_ce_to_pro_with_docker/#preparation","title":"Preparation","text":".env and seafile-server.yml of Seafile Pro.wget -O .env https://manual.seafile.com/13.0/repo/docker/pro/env\nwget https://manual.seafile.com/13.0/repo/docker/pro/seafile-server.yml\nwget https://manual.seafile.com/13.0/repo/docker/pro/elasticsearch.yml\n"},{"location":"setup/migrate_ce_to_pro_with_docker/#migrate","title":"Migrate","text":""},{"location":"setup/migrate_ce_to_pro_with_docker/#stop-the-seafile-ce","title":"Stop the Seafile CE","text":"docker compose down\n Tip
To ensure data security, it is recommended that you back up your MySQL data
"},{"location":"setup/migrate_ce_to_pro_with_docker/#put-your-licence-file","title":"Put your licence file","text":"Copy the seafile-license.txt to the volume directory of the Seafile CE's data. If the directory is /opt/seafile-data, so you should put it in the /opt/seafile-data/seafile/.
Modify the new .env based on your old .env file. The following fields deserve special attention; the others should match the old configurations:
SEAFILE_IMAGE The Seafile Pro docker image, whose tag must be equal to or newer than the old Seafile CE docker tag seafileltd/seafile-pro-mc:13.0-latest SEAFILE_ELASTICSEARCH_VOLUME The volume directory of Elasticsearch data /opt/seafile-elasticsearch/data Other fields (e.g., SEAFILE_VOLUME, SEAFILE_MYSQL_VOLUME, SEAFILE_MYSQL_DB_USER, SEAFILE_MYSQL_DB_PASSWORD) must be consistent with the old configurations.
Tip
For the configurations used for initialization (e.g., INIT_SEAFILE_ADMIN_EMAIL, INIT_SEAFILE_MYSQL_ROOT_PASSWORD), you can remove them from .env as well.
seafile-server.yml and .env","text":"Replace the old seafile-server.yml and .env with the new, modified files, e.g. (if your old seafile-server.yml and .env are in /opt):
mv -b seafile-server.yml /opt/seafile-server.yml\nmv -b .env /opt/.env\n"},{"location":"setup/migrate_ce_to_pro_with_docker/#modify-seafeventsconf","title":"Modify seafevents.conf","text":"Add [INDEX FILES] section in /opt/seafile-data/seafile/conf/seafevents.conf manually:
Additional system resource requirements
Seafile PE docker requires a minimum of 4 cores and 4GB RAM, because Elasticsearch is deployed alongside it. If you do not have enough system resources, you can use an alternative search engine, SeaSearch, a more lightweight engine built on the open-source search engine ZincSearch, as the indexer.
[INDEX FILES]\nes_host = elasticsearch\nes_port = 9200\nenabled = true\ninterval = 10m\n"},{"location":"setup/migrate_ce_to_pro_with_docker/#start-seafile-pro","title":"Start Seafile Pro","text":"Run the following command to start the Seafile Pro container:
docker compose up -d\n Now you have a Seafile Professional service.
"},{"location":"setup/migrate_non_docker_to_docker/","title":"Migrate from non-docker Seafile deployment to docker","text":"Note
The recommended steps to migrate from non-docker deployment to docker deployment on two different machines are:
Run the following commands in /opt/seafile/seafile-server-latest:
Note
For installations using python virtual environment, activate it if it isn't already active:
source python-venv/bin/activate\n Tip
If you have integrated additional components (e.g., SeaDoc) into your Seafile server, please shut them down to avoid losing unsaved data.
su seafile\n./seafile.sh stop\n./seahub.sh stop\n"},{"location":"setup/migrate_non_docker_to_docker/#stop-nginx-cache-server-eg-redis-elasticsearch","title":"Stop Nginx, cache server (e.g., Redis), ElasticSearch","text":"You have to stop the above services to avoid losing data before migrating.
systemctl stop nginx && systemctl disable nginx\nsystemctl stop redis && systemctl disable redis\ndocker stop es && docker remove es\n"},{"location":"setup/migrate_non_docker_to_docker/#backup-mysql-database-and-seafile-server","title":"Backup MySQL database and Seafile server","text":"Please follow here to backup:
You can follow here to deploy Seafile with Docker, please use your old configurations when modifying .env, and make sure the Seafile server is running normally after deployment.
Use external MySQL service or the old MySQL service
This document describes migrating Seafile from a non-Docker deployment to a Docker deployment on two different machines. We suggest using the Docker Compose MariaDB service (version 10.11 by default) as the database service after migration. If you would like to use an existing MySQL service (usually because you migrate on the same host, or because the old MySQL service is a dependency of other services), you have to follow here to deploy Seafile.
"},{"location":"setup/migrate_non_docker_to_docker/#recovery-libraries-data-for-seafile-docker","title":"Recovery libraries data for Seafile Docker","text":"Firstly, you should stop the Seafile server before recovering Seafile libraries data:
docker compose down\n Then recover the data from the backup:
cp /backup/data/* /opt/seafile-data/seafile\n"},{"location":"setup/migrate_non_docker_to_docker/#recover-the-database-only-for-the-new-mysql-service-used-in-seafile-docker","title":"Recover the Database (only for the new MySQL service used in Seafile docker)","text":"Start the database service Only:
docker compose up -d --no-deps db\n Follow here to recover the database data.
Exit the container and stop the MariaDB service
docker compose down\n Finally, the migration is complete. You can restart the Docker-based Seafile server by restarting the service:
docker compose up -d\n You can also shut down the old MySQL service if it is not a dependency of other services.
"},{"location":"setup/overview/","title":"Seafile Docker overview","text":"Seafile docker based installation consist of the following components (docker images):
SSL configuration. You can run Seafile as a non-root user in Docker.
Note: In non-root mode, the seafile user is automatically created in the container, with uid 8000 and gid 8000.
First deploy Seafile with Docker, and then destroy the containers:
docker compose down\n Then add the NON_ROOT=true to the .env.
NON_ROOT=true\n Then modify /opt/seafile-data/seafile/ permissions.
chmod -R a+rwx /opt/seafile-data/seafile/\n Start Seafile:
docker compose up -d\n Now you can run Seafile as seafile user.
Tip
When doing maintenance, other scripts in docker are also required to be run as seafile user, e.g. su seafile -c ./seaf-gc.sh
You can use one of the following methods to start Seafile container on system bootup.
"},{"location":"setup/seafile_docker_autostart/#modify-docker-composeservice","title":"Modify docker-compose.service","text":"Add docker-compose.service
vim /etc/systemd/system/docker-compose.service
[Unit]\nDescription=Docker Compose Application Service\nRequires=docker.service\nAfter=docker.service\n\n[Service]\nType=forking\nRemainAfterExit=yes\nWorkingDirectory=/opt/ \nExecStart=/usr/bin/docker compose up -d\nExecStop=/usr/bin/docker compose down\nTimeoutStartSec=0\n\n[Install]\nWantedBy=multi-user.target\n Note
WorkingDirectory is the absolute path to the seafile-server.yml file directory.
Set the docker-compose.service file to 644 permissions
chmod 644 /etc/systemd/system/docker-compose.service\n Load autostart configuration
systemctl daemon-reload\nsystemctl enable docker-compose.service\n Add the configuration restart: unless-stopped for each container in the Seafile docker components. Take seafile-server.yml as an example:
services:\n db:\n image: mariadb:10.11\n container_name: seafile-mysql-1\n restart: unless-stopped\n\n redis:\n image: redis\n container_name: seafile-redis\n restart: unless-stopped\n\n elasticsearch:\n image: elasticsearch:8.6.2\n container_name: seafile-elasticsearch\n restart: unless-stopped\n\n seafile:\n image: seafileltd/seafile-pro-mc:12.0-latest\n container_name: seafile\n restart: unless-stopped\n Tip
Add restart: unless-stopped, and the Seafile container will automatically start when Docker starts. If the Seafile container does not exist (execute docker compose down), the container will not start automatically.
Please refer here for system requirements about Seafile CE. In general, we recommend that you have at least 2G RAM and a 2-core CPU (> 2GHz).
"},{"location":"setup/setup_ce_by_docker/#getting-started","title":"Getting started","text":"The following assumptions and conventions are used in the rest of this document:
/opt/seafile is the directory for storing Seafile docker compose files. If you decide to put Seafile in a different directory, which you can, adjust all paths accordingly./opt/seafile-mysql and /opt/seafile-data, respectively. It is not recommended to change these paths. If you do, account for it when following these instructions.
"},{"location":"setup/setup_ce_by_docker/#download-and-modify-env","title":"Download and modify.env","text":"To deploy Seafile with Docker, you have to .env, seafile-server.yml and caddy.yml in a directory (e.g., /opt/seafile):
mkdir /opt/seafile\ncd /opt/seafile\n\nwget -O .env https://manual.seafile.com/13.0/repo/docker/ce/env\nwget https://manual.seafile.com/13.0/repo/docker/ce/seafile-server.yml\nwget https://manual.seafile.com/13.0/repo/docker/seadoc.yml\nwget https://manual.seafile.com/13.0/repo/docker/caddy.yml\n\nnano .env\n The following fields merit particular attention:
Variable Description Default ValueSEAFILE_VOLUME The volume directory of Seafile data /opt/seafile-data SEAFILE_MYSQL_VOLUME The volume directory of MySQL data /opt/seafile-mysql/db SEAFILE_CADDY_VOLUME The volume directory of Caddy data used to store certificates obtained from Let's Encrypt's /opt/seafile-caddy INIT_SEAFILE_MYSQL_ROOT_PASSWORD The root password of MySQL (Only required on first deployment) SEAFILE_MYSQL_DB_HOST The host of MySQL db SEAFILE_MYSQL_DB_PORT The port of MySQL 3306 SEAFILE_MYSQL_DB_USER The user of MySQL (database - user can be found in conf/seafile.conf) seafile SEAFILE_MYSQL_DB_PASSWORD The user seafile password of MySQL (required) SEAFILE_MYSQL_DB_CCNET_DB_NAME The database name of ccnet ccnet_db SEAFILE_MYSQL_DB_SEAFILE_DB_NAME The database name of seafile seafile_db SEAFILE_MYSQL_DB_SEAHUB_DB_NAME The database name of seahub seahub_db JWT_PRIVATE_KEY JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters is required for Seafile, which can be generated by using pwgen -s 40 1 (required) SEAFILE_SERVER_HOSTNAME Seafile server hostname or domain (required) SEAFILE_SERVER_PROTOCOL Seafile server protocol (http or https) http CACHE_PROVIDER The type of cache server used for Seafile. The available options are redis and memcached. Since Seafile 13, it is recommended to use redis as the cache service to support new features, and memcached will no longer be integrated into Seafile Docker by default. redis REDIS_HOST Redis server host redis REDIS_PORT Redis server port 6379 REDIS_PASSWORD Redis server password (none) MEMCACHED_HOST Memcached server host memcached MEMCACHED_PORT Memcached server port 11211 TIME_ZONE Time zone UTC ENABLE_NOTIFICATION_SERVER Enable (true) or disable (false) notification feature for Seafile false NOTIFICATION_SERVER_URL The notification server url (none) MD_FILE_COUNT_LIMIT (only valid when deployed metadata server). 
The maximum number of files in a repository that the metadata feature allows. If the number of files in a repository exceeds this value, the metadata management function will not be enabled for the repository. For a repository with metadata management enabled, if the number of records in it reaches this value but there are still some files that are not recorded in metadata server, the metadata management of the unrecorded files will be skipped. 100000 INIT_SEAFILE_ADMIN_EMAIL Admin username me@example.com (Recommend modifications) INIT_SEAFILE_ADMIN_PASSWORD Admin password asecret (Recommend modifications) NON_ROOT Run Seafile container without a root user false"},{"location":"setup/setup_ce_by_docker/#start-seafile-server","title":"Start Seafile server","text":"Start Seafile server with the following command
docker compose up -d\n ERROR: Named volume \"xxx\" is used in service \"xxx\" but no declaration was found in the volumes section
You may encounter this problem when your Docker (or docker-compose) version is out of date. You can upgrade or reinstall the Docker service to solve this problem according to the Docker official documentation.
Note
You must run the above command in the directory with the .env. If .env file is elsewhere, please run
docker compose --env-file /path/to/.env up -d\n Success
After starting the services, you can see the initialization progress by tracing the logs of container seafile (i.e., docker logs seafile -f)
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n----------------------------------------\nNow creating seafevents database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating ccnet database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seafile database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seahub database tables ...\n\n----------------------------------------\n\ncreating seafile-server-latest symbolic link ... done\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n Then you will see the following messages indicating that the Seafile server started successfully:
Starting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\n----------------------------------------\nSuccessfully created seafile admin\n----------------------------------------\n\nSeahub is started\n\nDone.\n Finally, you can go to http://seafile.example.com to use Seafile.
/opt/seafile-data","text":"Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container, in our case we keep various log files and upload directory outside. This allows you to rebuild containers easily without losing important information.
/opt/seafile-data/seafile/logs/seafile.log./var/log inside the container. /opt/seafile-data/logs/var-log/nginx contains the logs of Nginx in the Seafile container.To monitor container logs (from outside of the container), please use the following commands:
# if the `.env` file is in current directory:\ndocker compose logs --follow\n# if the `.env` file is elsewhere:\ndocker compose --env-file /path/to/.env logs --follow\n\n# you can also specify container name:\ndocker compose logs seafile --follow\n# or, if the `.env` file is elsewhere:\ndocker compose --env-file /path/to/.env logs seafile --follow\n The Seafile logs are under /shared/logs/seafile in the container, or /opt/seafile-data/logs/seafile on the host that runs the docker.
The system logs are under /shared/logs/var-log in the container, or /opt/seafile-data/logs/var-log on the host that runs the docker.
To monitor all Seafile logs simultaneously (from outside of the container), run
sudo tail -f $(find /opt/seafile-data/ -type f -name *.log 2>/dev/null)\n"},{"location":"setup/setup_ce_by_docker/#more-configuration-options","title":"More configuration options","text":"The config files are under /opt/seafile-data/seafile/conf. You can modify the configurations according to configuration section
Ensure the container is running, then enter this command:
docker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh\n Enter the username and password according to the prompts. You now have a new admin account.
"},{"location":"setup/setup_ce_by_docker/#backup-and-recovery","title":"Backup and recovery","text":"Follow the instructions in Backup and restore for Seafile Docker
"},{"location":"setup/setup_ce_by_docker/#garbage-collection","title":"Garbage collection","text":"When files are deleted, the blocks comprising those files are not immediately removed as there may be other files that reference those blocks (due to the magic of deduplication). To remove them, Seafile requires a 'garbage collection' process to be run, which detects which blocks no longer used and purges them.
"},{"location":"setup/setup_ce_by_docker/#faq","title":"FAQ","text":""},{"location":"setup/setup_ce_by_docker/#seafile-service-and-container-maintenance","title":"Seafile service and container maintenance","text":"Q: If I want enter into the Docker container, which command I can use?
A: You can enter the Docker container using the command:
docker exec -it seafile /bin/bash\n Q: I forgot the Seafile admin email address/password, how do I create a new admin account?
A: You can create a new admin account by running
docker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh\n The Seafile service must be up when running the superuser command.
Q: If, for whatever reason, the installation fails, how do I start from a clean slate again?
A: Remove the directories /opt/seafile, /opt/seafile-data and /opt/seafile-mysql and start again.
Q: Something goes wrong during the start of the containers. How can I find out more?
A: You can view the docker logs using this command: docker compose logs -f.
Q: How does Seafile use cache?
A: Seafile uses cache to improve performance in many situations. The cached content includes, but is not limited to, user session information, avatars, profiles, records from the database, etc. Since Seafile Docker 13, Redis is the default cache server, supporting the new features (please refer to the upgrade notes). It is integrated in Seafile Docker 13 and can be configured directly via environment variables in .env (no additional settings are required by default).
Q: Is the Redis integrated in Seafile Docker safe? Does it have an access password?
A: Although the Redis integrated by Seafile Docker does not have a password set by default, it can only be accessed through the Docker private network and will not expose the service port externally. Of course, you can also set a password for it if necessary. You can set REDIS_PASSWORD in .env and remove the following comment markers in seafile-server.yml to set the integrated Redis' password:
services:\n ...\n redis:\n image: ${SEAFILE_REDIS_IMAGE:-redis}\n container_name: seafile-redis\n # remove the following comment markers\n command:\n - /bin/sh\n - -c\n - redis-server --requirepass \"$${REDIS_PASSWORD:?Variable is not set or empty}\"\n networks:\n - seafile-net\n ...\n Q: For some reason, I still have to use Memcached as my cache server. How can I do this?
A: If you still want to use Memcached (no longer provided since Seafile Docker 13), just follow the steps below:
CACHE_PROVIDER to memcached and modify MEMCACHED_xxx in .env, and remove the redis part and the redis dependency in the seafile service section in seafile-server.yml. You can change the cache server after the service has started (by setting environment variables in .env), but the corresponding configuration files will not be updated automatically (e.g., seahub_settings.py, seafile.conf and seafevents.conf). To avoid ambiguity, we recommend that you also update these configuration files.
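If you switch back to Memcached, the cache section in seahub_settings.py typically looks like the following sketch (based on settings documented for earlier Seafile releases; host and port should match your MEMCACHED_HOST and MEMCACHED_PORT values):

```python
CACHES = {
    'default': {
        'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',
        'LOCATION': 'memcached:11211',   # MEMCACHED_HOST:MEMCACHED_PORT
    },
}
```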
This manual explains how to deploy and run Seafile Server Professional Edition (Seafile PE) on a Linux server using Docker and Docker Compose. The deployment has been tested for Debian/Ubuntu and CentOS, but Seafile PE should also work on other Linux distributions.
"},{"location":"setup/setup_pro_by_docker/#system-requirements","title":"System requirements","text":"Please refer here for system requirements about Seafile PE. In general, we recommend that you have at least 4G RAM and a 4-core CPU (> 2GHz).
About license
Seafile PE can be used without a paid license with up to three users. Licenses for more users can be purchased in the Seafile Customer Center, or by contacting Seafile Sales at sales@seafile.com. For further details, please refer to the license page of Seafile PE.
"},{"location":"setup/setup_pro_by_docker/#setup","title":"Setup","text":"The following assumptions and conventions are used in the rest of this document:
/opt/seafile is the directory of Seafile for storing Seafile docker files. If you decide to put Seafile in a different directory, adjust all paths accordingly.Use the official installation guide for your OS to install Docker.
"},{"location":"setup/setup_pro_by_docker/#downloading-the-seafile-image","title":"Downloading the Seafile Image","text":"Success
Since v12.0, Seafile PE images are hosted on DockerHub and do not require a username and password to download. Older Seafile PE versions (back to Seafile 7.0) are available in a private Docker repository. You can get the username and password on the download page in the Customer Center.
docker pull seafileltd/seafile-pro-mc:13.0-latest\n"},{"location":"setup/setup_pro_by_docker/#downloading-and-modifying-env","title":"Downloading and Modifying .env","text":"Seafile uses .env, seafile-server.yml and caddy.yml files for configuration.
mkdir /opt/seafile\ncd /opt/seafile\n\nwget -O .env https://manual.seafile.com/13.0/repo/docker/pro/env\nwget https://manual.seafile.com/13.0/repo/docker/pro/seafile-server.yml\nwget https://manual.seafile.com/13.0/repo/docker/pro/elasticsearch.yml\nwget https://manual.seafile.com/13.0/repo/docker/seadoc.yml\nwget https://manual.seafile.com/13.0/repo/docker/caddy.yml\n\nnano .env\n The following fields merit particular attention:
Variable Description Default ValueSEAFILE_VOLUME The volume directory of Seafile data /opt/seafile-data SEAFILE_MYSQL_VOLUME The volume directory of MySQL data /opt/seafile-mysql/db SEAFILE_CADDY_VOLUME The volume directory of Caddy data used to store certificates obtained from Let's Encrypt's /opt/seafile-caddy SEAFILE_ELASTICSEARCH_VOLUME The volume directory of Elasticsearch data /opt/seafile-elasticsearch/data INIT_SEAFILE_MYSQL_ROOT_PASSWORD The root password of MySQL (Only required on first deployment) SEAFILE_MYSQL_DB_HOST The host of MySQL db SEAFILE_MYSQL_DB_PORT The port of MySQL 3306 SEAFILE_MYSQL_DB_USER The user of MySQL (database - user can be found in conf/seafile.conf) seafile SEAFILE_MYSQL_DB_PASSWORD The user seafile password of MySQL (required) SEAFILE_MYSQL_DB_CCNET_DB_NAME The database name of ccnet ccnet_db SEAFILE_MYSQL_DB_SEAFILE_DB_NAME The database name of seafile seafile_db SEAFILE_MYSQL_DB_SEAHUB_DB_NAME The database name of seahub seahub_db JWT_PRIVATE_KEY JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters is required for Seafile, which can be generated by using pwgen -s 40 1 (required) SEAFILE_SERVER_HOSTNAME Seafile server hostname or domain (required) SEAFILE_SERVER_PROTOCOL Seafile server protocol (http or https) http CACHE_PROVIDER The type of cache server used for Seafile. The available options are redis and memcached. Since Seafile 13, it is recommended to use redis as the cache service to support new features, and memcached will no longer be integrated into Seafile Docker by default. 
redis REDIS_HOST Redis server host redis REDIS_PORT Redis server port 6379 REDIS_PASSWORD Redis server password (none) MEMCACHED_HOST Memcached server host memcached MEMCACHED_PORT Memcached server port 11211 TIME_ZONE Time zone UTC INIT_SEAFILE_ADMIN_EMAIL Synchronously set admin username during initialization me@example.com INIT_SEAFILE_ADMIN_PASSWORD Synchronously set admin password during initialization asecret SEAF_SERVER_STORAGE_TYPE What kind of the Seafile data for storage. Available options are disk (i.e., local disk), s3 and multiple (see the details of multiple storage backends) disk S3_COMMIT_BUCKET S3 storage backend commit objects bucket (required when SEAF_SERVER_STORAGE_TYPE=s3) S3_FS_BUCKET S3 storage backend fs objects bucket (required when SEAF_SERVER_STORAGE_TYPE=s3) S3_BLOCK_BUCKET S3 storage backend block objects bucket (required when SEAF_SERVER_STORAGE_TYPE=s3) S3_KEY_ID S3 storage backend key ID (required when SEAF_SERVER_STORAGE_TYPE=s3) S3_SECRET_KEY S3 storage backend secret key (required when SEAF_SERVER_STORAGE_TYPE=s3) S3_AWS_REGION Region of your buckets us-east-1 S3_HOST Host of your buckets (required when not use AWS) S3_USE_HTTPS Use HTTPS connections to S3 if enabled true S3_USE_V4_SIGNATURE Use the v4 protocol of S3 if enabled true S3_PATH_STYLE_REQUEST This option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is in virtual host style, such as https://bucketname.s3.amazonaws.com/object. But this style relies on advanced DNS server setup. So most self-hosted storage systems only implement the path style format. false S3_SSE_C_KEY A string of 32 characters can be generated by openssl rand -base64 24. It can be any 32-character long random string. It's required to use V4 authentication protocol and https if you enable SSE-C. 
(none) ENABLE_NOTIFICATION_SERVER Enable (true) or disable (false) notification feature for Seafile false NOTIFICATION_SERVER_URL The notification server url (none) MD_FILE_COUNT_LIMIT (only valid when deployed metadata server). The maximum number of files in a repository that the metadata feature allows. If the number of files in a repository exceeds this value, the metadata management function will not be enabled for the repository. For a repository with metadata management enabled, if the number of records in it reaches this value but there are still some files that are not recorded in metadata server, the metadata management of the unrecorded files will be skipped. 100000 NON_ROOT Run Seafile container without a root user false Easier to configure S3 for Seafile and its components
Since Seafile Pro 13.0, to make it easier to deploy Seafile's related extension components and other services, a section is provided in .env to store the S3 configurations for Seafile and some extension components (such as SeaSearch and the metadata server). You can locate it under the heading Storage configurations for S3.
S3 configurations in .env only support single S3 storage backend mode
The Seafile server only supports configuring S3 in .env for the single S3 storage backend mode (i.e., when SEAF_SERVER_STORAGE_TYPE=s3). If you would like to use another storage backend (e.g., Ceph, Swift) or settings that can only be made in seafile.conf (like multiple storage backends), please set SEAF_SERVER_STORAGE_TYPE to multiple, and set MD_STORAGE_TYPE and SS_STORAGE_TYPE according to your configuration.
Finally, set the directory permissions of the Elasticsearch volume:
mkdir -p /opt/seafile-elasticsearch/data\nchmod 777 -R /opt/seafile-elasticsearch/data\n"},{"location":"setup/setup_pro_by_docker/#starting-the-docker-containers","title":"Starting the Docker Containers","text":"Run docker compose in detached mode:
docker compose up -d\n ERROR: Named volume \"xxx\" is used in service \"xxx\" but no declaration was found in the volumes section
You may encounter this problem when your Docker (or docker-compose) version is out of date. You can upgrade or reinstall the Docker service to solve this problem according to the Docker official documentation.
Note
You must run the above command in the directory containing the .env file. If the .env file is elsewhere, please run
docker compose --env-file /path/to/.env up -d\n Success
After starting the services, you can follow the initialization progress by tracing the logs of the seafile container (i.e., docker logs seafile -f)
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n----------------------------------------\nNow creating seafevents database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating ccnet database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seafile database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seahub database tables ...\n\n----------------------------------------\n\ncreating seafile-server-latest symbolic link ... done\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n And then you can see the following messages which the Seafile server starts successfully:
Starting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\n----------------------------------------\nSuccessfully created seafile admin\n----------------------------------------\n\nSeahub is started\n\nDone.\n Finally, you can go to http://seafile.example.com to use Seafile.
A 502 Bad Gateway error means that the system has not yet completed the initialization
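A simple way to wait for initialization to finish is to poll the server until it stops returning errors (a sketch, using the example domain from above):

```shell
# Poll until Seahub responds successfully; a 502 from the reverse proxy
# means initialization is still in progress.
until curl -fso /dev/null http://seafile.example.com; do
    sleep 5
done
echo "Seafile is up"
```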
"},{"location":"setup/setup_pro_by_docker/#find-logs","title":"Find logs","text":"To view Seafile docker logs, please use the following command
docker compose logs -f\n The Seafile logs are under /shared/logs/seafile inside the container, or /opt/seafile-data/logs/seafile on the host running the container.
The system logs are under /shared/logs/var-log inside the container, or /opt/seafile-data/logs/var-log on the host running the container.
If you have a seafile-license.txt license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile-data. If you have modified the path, save the license file under your custom path.
If the license file has a different name or cannot be read, Seafile server will start in trial mode, which supports at most THREE users.
Then restart Seafile:
docker compose down\n\ndocker compose up -d\n"},{"location":"setup/setup_pro_by_docker/#seafile-directory-structure","title":"Seafile directory structure","text":""},{"location":"setup/setup_pro_by_docker/#path-optseafile-data","title":"Path /opt/seafile-data","text":"Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container; in our case we keep various log files and the upload directory outside. This allows you to rebuild containers easily without losing important information.
The Seafile log file is at /opt/seafile-data/seafile/logs/seafile.log. /var/log inside the container is mapped to /opt/seafile-data/logs/var-log; for example, you can find the nginx logs in /opt/seafile-data/logs/var-log/nginx/. The command docker container list should list the containers specified in the .env.
The directory layout of the Seafile container's volume should look as follows:
$ tree /opt/seafile-data -L 2\n/opt/seafile-data\n\u251c\u2500\u2500 logs\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 var-log\n\u251c\u2500\u2500 nginx\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 conf\n\u2514\u2500\u2500 seafile\n \u00a0\u00a0 \u251c\u2500\u2500 ccnet\n \u00a0\u00a0 \u251c\u2500\u2500 conf\n \u00a0\u00a0 \u251c\u2500\u2500 logs\n \u00a0\u00a0 \u251c\u2500\u2500 pro-data\n \u00a0\u00a0 \u251c\u2500\u2500 seafile-data\n \u00a0\u00a0 \u2514\u2500\u2500 seahub-data\n All Seafile config files are stored in /opt/seafile-data/seafile/conf. The nginx config file is in /opt/seafile-data/nginx/conf.
Any modification of a configuration file requires a restart of Seafile to take effect:
docker compose restart\n All Seafile log files are stored in /opt/seafile-data/seafile/logs whereas all other log files are in /opt/seafile-data/logs/var-log.
Follow the instructions in Backup and restore for Seafile Docker
"},{"location":"setup/setup_pro_by_docker/#garbage-collection","title":"Garbage Collection","text":"When files are deleted, the blocks comprising those files are not immediately removed as there may be other files that reference those blocks (due to the magic of deduplication). To remove them, Seafile requires a 'garbage collection' process to be run, which detects which blocks are no longer used and purges them.
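In a Docker deployment, garbage collection is typically run inside the Seafile container. A hedged sketch, assuming the standard layout under /opt/seafile and that the server is stopped for offline GC (script paths may vary by version):

```shell
# Stop the Seafile server, run the garbage collector, then start it again.
docker exec seafile /opt/seafile/seafile-server-latest/seafile.sh stop
docker exec seafile /opt/seafile/seafile-server-latest/seaf-gc.sh
docker exec seafile /opt/seafile/seafile-server-latest/seafile.sh start
```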
"},{"location":"setup/setup_pro_by_docker/#faq","title":"FAQ","text":""},{"location":"setup/setup_pro_by_docker/#seafile-service-and-container-maintenance","title":"Seafile service and container maintenance","text":"Q: If I want to enter the Docker container, which command can I use?
A: You can enter the Docker container using the command:
docker exec -it seafile /bin/bash\n Q: I forgot the Seafile admin email address/password, how do I create a new admin account?
A: You can create a new admin account by running
docker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh\n The Seafile service must be up when running the superuser command.
Q: If, for whatever reason, the installation fails, how do I start from a clean slate again?
A: Remove the directories /opt/seafile, /opt/seafile-data and /opt/seafile-mysql and start again.
Q: Something goes wrong during the start of the containers. How can I find out more?
A: You can view the docker logs using this command: docker compose logs -f.
Q: How does Seafile use cache?
A: Seafile uses a cache to improve performance in many situations. The cached content includes, but is not limited to, user session information, avatars, profiles, records from the database, etc. Since Seafile Docker 13, Redis is the default cache server, supporting the new features (please refer to the upgrade notes). It is integrated into Seafile Docker 13 and can be configured directly via environment variables in .env (no additional settings are required by default).
Q: Is the Redis integrated in Seafile Docker safe? Does it have an access password?
A: Although the Redis integrated by Seafile Docker does not have a password set by default, it can only be accessed through the Docker private network and will not expose the service port externally. Of course, you can also set a password for it if necessary: set REDIS_PASSWORD in .env and remove the following comment markers in seafile-server.yml to set the integrated Redis' password:
services:\n ...\n redis:\n image: ${SEAFILE_REDIS_IMAGE:-redis}\n container_name: seafile-redis\n # remove the following comment markers\n command:\n - /bin/sh\n - -c\n - redis-server --requirepass \"$${REDIS_PASSWORD:?Variable is not set or empty}\"\n networks:\n - seafile-net\n ...\n Q: For some reason, I still have to use Memcached as my cache server. How can I do this?
A: If you still want to use Memcached (no longer provided since Seafile Docker 13), just follow the steps below:
Set CACHE_PROVIDER to memcached and modify the MEMCACHED_xxx variables in .env. Remove the redis part and the redis dependency in the seafile service section in seafile-server.yml. Note that you can change the cache server after the service is started (by setting environment variables in .env), but the corresponding configuration files will not be updated automatically (e.g., seahub_settings.py, seafile.conf and seafevents.conf). To avoid ambiguity, we recommend that you also update these configuration files.
The entire db service needs to be removed (or commented out) in seafile-server.yml if you would like to use an existing MySQL server; otherwise a redundant database service will be running
service:\n\n # comment out or remove the entire `db` service\n #db:\n #image: ${SEAFILE_DB_IMAGE:-mariadb:10.11}\n #container_name: seafile-mysql\n # ... other parts in service `db`\n\n # do not change other services\n...\n In addition, you have to modify .env to set the MySQL-related fields correctly:
SEAFILE_MYSQL_DB_HOST=192.168.0.2\nSEAFILE_MYSQL_DB_PORT=3306\nINIT_SEAFILE_MYSQL_ROOT_PASSWORD=ROOT_PASSWORD\nSEAFILE_MYSQL_DB_USER=seafile # the user name of the user you like to use for Seafile server\nSEAFILE_MYSQL_DB_PASSWORD=PASSWORD # the password of the user you like to use for Seafile server\n Tip
INIT_SEAFILE_MYSQL_ROOT_PASSWORD is needed during installation (i.e., the first deployment). After Seafile is installed, the user seafile will be used to connect to the MySQL server (SEAFILE_MYSQL_DB_PASSWORD), and you can then remove INIT_SEAFILE_MYSQL_ROOT_PASSWORD.
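Before starting the containers, it may help to verify that the external MySQL server is reachable with the credentials from .env. A sketch using the hypothetical host and user from the example above:

```shell
# Should print the server version if host, port, user and password are correct.
# Replace 192.168.0.2 and PASSWORD with your actual values from .env.
mysql -h 192.168.0.2 -P 3306 -u seafile -p'PASSWORD' -e "SELECT VERSION();"
```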
Ceph is a scalable distributed storage system. It's recommended to use Ceph's S3 Gateway (RGW) to integrate with Seafile. Seafile can also use Ceph's RADOS object storage layer as its storage backend, but using RADOS requires linking with the librados library, which may introduce library incompatibility issues during deployment. Furthermore, the S3 Gateway provides an easier-to-manage HTTP-based interface. If you want to integrate with the S3 gateway, please refer to the \"Use S3-compatible Object Storage\" section in this documentation. The documentation below is for integrating with RADOS.
"},{"location":"setup/setup_with_ceph/#copy-ceph-conf-file-and-client-keyring","title":"Copy ceph conf file and client keyring","text":"Seafile acts as a client to Ceph/RADOS, so it needs access to the Ceph cluster's conf file and keyring. You have to copy these files from a Ceph admin node's /etc/ceph directory to the Seafile machine.
seafile-machine# sudo scp -r user@ceph-admin-node:/etc/ceph /etc\n"},{"location":"setup/setup_with_ceph/#install-and-enable-memcached","title":"Install and enable memcached","text":"For best performance, Seafile requires installing memcached or Redis and enabling object caching.
We recommend allocating at least 128MB of memory for the object cache.
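On Debian/Ubuntu, memcached can be installed and given the recommended memory limit like this (a sketch; package names and config paths may differ on other distributions):

```shell
sudo apt-get install -y memcached
# Raise the memory limit to 128MB (the -m option in /etc/memcached.conf).
sudo sed -i 's/^-m .*/-m 128/' /etc/memcached.conf
sudo systemctl restart memcached
```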
"},{"location":"setup/setup_with_ceph/#install-python-ceph-library","title":"Install Python Ceph Library","text":"The file search and WebDAV functions rely on the Python Ceph library being installed on the system.
sudo apt-get install python3-rados\n"},{"location":"setup/setup_with_ceph/#edit-seafile-configuration","title":"Edit seafile configuration","text":"Edit seafile.conf, add the following lines:
[block_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\npool = seafile-blocks\n\n[commit_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\npool = seafile-commits\n\n[fs_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\npool = seafile-fs\n You also need to add memory cache configurations
It's required to create separate pools for commit, fs, and block objects.
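The manual shows rados mkpool below; on recent Ceph releases that command has been removed, and the same pools can instead be created with ceph osd pool create (a sketch, assuming default placement-group settings; pool names must match seafile.conf):

```shell
# Create the three pools Seafile expects for blocks, commits and fs objects.
ceph osd pool create seafile-blocks
ceph osd pool create seafile-commits
ceph osd pool create seafile-fs
```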
ceph-admin-node# rados mkpool seafile-blocks\nceph-admin-node# rados mkpool seafile-commits\nceph-admin-node# rados mkpool seafile-fs\n Troubleshooting librados incompatibility issues
Since version 8.0, Seafile bundles librados from Ceph 16. On some systems you may find that Seafile fails to connect to your Ceph cluster. In such cases, you can usually solve this by removing the bundled librados libraries and using the ones installed in the OS.
To do this, you have to remove a few bundled libraries:
cd seafile-server-latest/seafile/lib\nrm librados.so.2 libstdc++.so.6 libnspr4.so\n"},{"location":"setup/setup_with_ceph/#use-arbitary-ceph-user","title":"Use arbitrary Ceph user","text":"The above configuration will use the default (client.admin) user to connect to Ceph. You may want to use some other Ceph user to connect. This is supported in Seafile. To specify the Ceph user, you have to add a ceph_client_id option to seafile.conf, as follows:
[block_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Specify Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-blocks\n\n[commit_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Specify Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-commits\n\n[fs_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Specify Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-fs\n\n# Memcached or Redis configs\n......\n You can create a Ceph user for Seafile on your Ceph cluster like this:
ceph auth add client.seafile \\\n mds 'allow' \\\n mon 'allow r' \\\n osd 'allow rwx pool=seafile-blocks, allow rwx pool=seafile-commits, allow rwx pool=seafile-fs'\n You also have to add this user's keyring path to /etc/ceph/ceph.conf:
[client.seafile]\nkeyring = <path to user's keyring file>\n"},{"location":"setup/setup_with_multiple_storage_backends/","title":"Multiple Storage Backend","text":"There are use cases where supporting multiple storage backends in Seafile server is needed, such as:
Store different types of files into different storage backends:
Combine multiple storage backends to extend storage scalability:
About data of library
To use this feature, you need to:
Set SEAF_SERVER_STORAGE_TYPE=multiple in .env and define your storage classes in seafile.conf. As Seafile server before version 6.3 doesn't support multiple storage classes, you have to explicitly enable this new feature and define storage classes with a different syntax than the one previously used for defining a storage backend.
By default, Seafile does not enable multiple storage classes, so you have to create a configuration file for the storage classes, then specify it and enable the feature in seafile.conf:
Create the storage classes file:
nano /opt/seafile-data/seafile/conf/seafile_storage_classes.json\n For an example of this file, please refer to the next section.
Modify seafile.conf
[storage]\nenable_storage_classes = true\nstorage_classes_file = /shared/conf/seafile_storage_classes.json\n enable_storage_classes \uff1aIf this is set to true, the storage class feature is enabled. You must define the storage classes in a JSON file provided in the next configuration option.storage_classes_file\uff1aSpecifies the path for the JSON file that contains the storage class definition.Tip
The path of storage_classes_file inside the Seafile container usually differs from the path on the host, so we suggest you put this file into Seafile's configuration directory and reference it as /shared/conf instead of /opt/seafile-data/seafile/conf. Otherwise you have to add another persistent volume mapping in seafile-server.yml. If your Seafile server is not deployed with Docker, we still suggest you put this file into the Seafile configuration directory. The storage classes JSON file is an array of objects, each of which defines a storage class. The fields in each definition correspond to the information needed to specify a storage class:
Variables Descriptionsstorage_id A unique internal string ID used to identify the storage class. It is not visible to users. For example, \"primary storage\". name A user-visible name for the storage class. is_default Indicates whether this storage class is the default one. commits The storage used for storing commit objects for this class. fs The storage used for storing fs objects for this class. blocks The storage used for storing block objects for this class. Note
is_default is effective in two cases. commit, fs, and blocks can be stored in different storages; this provides the most flexible way to define storage classes (e.g., a file system, Ceph, or S3). Here is an example, which uses the local file system, S3 (default), Swift and Ceph at the same time.
[\n {\n \"storage_id\": \"hot_storage\",\n \"name\": \"Hot Storage\",\n \"is_default\": true,\n \"commits\": {\n \"backend\": \"s3\",\n \"bucket\": \"seafile-commits\",\n \"key\": \"<your key>\",\n \"key_id\": \"<your key id>\"\n },\n \"fs\": {\n \"backend\": \"s3\",\n \"bucket\": \"seafile-fs\",\n \"key\": \"<your key>\",\n \"key_id\": \"<your key id>\"\n },\n \"blocks\": {\n \"backend\": \"s3\",\n \"bucket\": \"seafile-blocks\",\n \"key\": \"<your key>\",\n \"key_id\": \"<your key id>\"\n }\n },\n {\n \"storage_id\": \"cold_storage\",\n \"name\": \"Cold Storage\",\n \"is_default\": false,\n \"fs\": {\n \"backend\": \"fs\",\n \"dir\": \"/share/seafile/seafile-data\" // /opt/seafile/seafile-data for binary-install Seafile\n },\n \"commits\": {\n \"backend\": \"fs\",\n \"dir\": \"/share/seafile/seafile-data\"\n },\n \"blocks\": {\n \"backend\": \"fs\",\n \"dir\": \"/share/seafile/seafile-data\"\n }\n },\n {\n \"storage_id\": \"swift_storage\",\n \"name\": \"Swift Storage\",\n \"fs\": {\n \"backend\": \"swift\",\n \"tenant\": \"<your tenant>\",\n \"user_name\": \"<your username>\",\n \"password\": \"<your password>\",\n \"container\": \"seafile-commits\",\n \"auth_host\": \"<Swift auth host>:<port, default 5000>\",\n \"auth_ver\": \"v2.0\"\n },\n \"commits\": {\n \"backend\": \"swift\",\n \"tenant\": \"<your tenant>\",\n \"user_name\": \"<your username>\",\n \"password\": \"<your password>\",\n \"container\": \"seafile-commits\",\n \"auth_host\": \"<Swift auth host>:<port, default 5000>\",\n \"auth_ver\": \"v2.0\"\n },\n \"blocks\": {\n \"backend\": \"swift\",\n \"tenant\": \"<your tenant>\",\n \"user_name\": \"<your username>\",\n \"password\": \"<your password>\",\n \"container\": \"seafile-commits\",\n \"auth_host\": \"<Swift auth host>:<port, default 5000>\",\n \"auth_ver\": \"v2.0\",\n \"region\": \"RegionTwo\"\n }\n },\n {\n \"storage_id\": \"ceph_storage\",\n \"name\": \"ceph Storage\",\n \"fs\": {\n \"backend\": \"ceph\",\n \"ceph_config\": 
\"/etc/ceph/ceph.conf\",\n \"pool\": \"seafile-fs\"\n },\n \"commits\": {\n \"backend\": \"ceph\",\n \"ceph_config\": \"/etc/ceph/ceph.conf\",\n \"pool\": \"seafile-commits\"\n },\n \"blocks\": {\n \"backend\": \"ceph\",\n \"ceph_config\": \"/etc/ceph/ceph.conf\",\n \"pool\": \"seafile-blocks\"\n }\n }\n]\n Tip
As you may have seen, the commits, fs and blocks information syntax is similar to what is used in [commit_object_backend], [fs_object_backend] and [block_backend] section of seafile.conf for a single backend storage. You can refer to the detailed syntax in the documentation for the storage you use (e.g., S3 Storage for S3).
If you use file system as storage for fs, commits or blocks, you must explicitly provide the path for the seafile-data directory. The objects will be stored in storage/commits, storage/fs, storage/blocks under this path.
Library mapping policies decide the storage class a library uses. Currently we provide 3 policies for 3 different use cases:
The storage class of a library is decided on creation and stored in a database table. The storage class of a library won't change if the mapping policy is changed later.
Before choosing your mapping policy, you need to enable the storage classes feature in seahub_settings.py:
ENABLE_STORAGE_CLASSES = True\n"},{"location":"setup/setup_with_multiple_storage_backends/#user-chosen","title":"User Chosen","text":"This policy lets the users choose which storage class to use when creating a new library. The users can select any storage class that's been defined in the JSON file.
To use this policy, add the following option in seahub_settings.py:
STORAGE_CLASS_MAPPING_POLICY = 'USER_SELECT'\n If you enable storage class support but don't explicitly set STORAGE_CLASS_MAPPING_POLICY in seahub_settings.py, this policy is used by default.
Due to storage cost or management considerations, a system admin sometimes wants different types of users to use different storage backends (or classes). You can configure a user's storage classes based on their role.
A new option storage_ids is added to the role configuration in seahub_settings.py to assign storage classes to each role. If only one storage class is assigned to a role, the users with this role cannot choose a storage class for their libraries; if more than one class is assigned, the users can choose among them. If no storage class is assigned to a role, the default class specified in the JSON file will be used.
Here are the sample options in seahub_settings.py to use this policy:
ENABLE_STORAGE_CLASSES = True\nSTORAGE_CLASS_MAPPING_POLICY = 'ROLE_BASED'\n\nENABLED_ROLE_PERMISSIONS = {\n 'default': {\n 'can_add_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_invite_guest': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'storage_ids': ['old_version_id', 'hot_storage', 'cold_storage', 'a_storage'],\n },\n 'guest': {\n 'can_add_repo': True,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_invite_guest': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'storage_ids': ['hot_storage', 'cold_storage'],\n },\n}\n"},{"location":"setup/setup_with_multiple_storage_backends/#library-id-based-mapping","title":"Library ID Based Mapping","text":"This policy maps libraries to storage classes based on its library ID. The ID of a library is an UUID. In this way, the data in the system can be evenly distributed among the storage classes.
Note
This policy is not designed to be a complete distributed storage solution. It doesn't handle automatic migration of library data between storage classes. If you add more storage classes to the configuration, existing libraries will stay in their original storage classes, while new libraries can be distributed among the new storage classes (backends). You still have to plan the total storage capacity of your system at the beginning.
To use this policy, first add the following option in seahub_settings.py:
STORAGE_CLASS_MAPPING_POLICY = 'REPO_ID_MAPPING'\n Then add the option for_new_library to the backends that are expected to store new libraries in the JSON file:
[\n {\n \"storage_id\": \"new_backend\",\n \"name\": \"New store\",\n \"for_new_library\": true,\n \"is_default\": false,\n \"fs\": {\n \"backend\": \"fs\", \n \"dir\": \"/storage/seafile/new-data\"\n },\n \"commits\": {\n \"backend\": \"fs\", \n \"dir\": \"/storage/seafile/new-data\"\n },\n \"blocks\": {\n \"backend\": \"fs\", \n \"dir\": \"/storage/seafile/new-data\"\n }\n }\n]\n"},{"location":"setup/setup_with_multiple_storage_backends/#multiple-storage-backend-data-migration","title":"Multiple Storage Backend Data Migration","text":"Migration from S3
Since version 11, when you migrate from S3 to other storage servers, you have to use the V4 authentication protocol. This is because version 11 upgrades to the Boto3 library, which fails to list objects from S3 when it's configured to use the V2 authentication protocol.
Run the migrate-repo.sh script to migrate library data between different storage backends.
./migrate-repo.sh [repo_id] origin_storage_id destination_storage_id\n repo_id is optional; if not specified, all libraries will be migrated.
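For example, to migrate a single library from a class named hot_storage to one named cold_storage as defined in the JSON file (the library ID below is hypothetical):

```shell
# Migrate one library between storage classes; omit the ID to migrate all.
./migrate-repo.sh 4c731e5c-f589-4eaa-889f-14c00d4893cb hot_storage cold_storage
```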
Specify a path prefix
You can set the OBJECT_LIST_FILE_PATH environment variable to specify a path prefix for storing the migrated object list before running the migration script.
For example:
export OBJECT_LIST_FILE_PATH=/opt/test\n This will create three files under /opt, each named with the prefix test:
test_4c731e5c-f589-4eaa-889f-14c00d4893cb.fs, test_4c731e5c-f589-4eaa-889f-14c00d4893cb.commits, and test_4c731e5c-f589-4eaa-889f-14c00d4893cb.blocks. Setting the OBJECT_LIST_FILE_PATH environment variable has two purposes:
Run the remove-objs.sh script (before migration, you need to set the OBJECT_LIST_FILE_PATH environment variable) to delete all objects in a library in the specified storage backend.
./remove-objs.sh repo_id storage_id\n"},{"location":"setup/setup_with_s3/","title":"Setup With S3 Storage","text":"From Seafile 13, there are two ways to configure S3 storage (single S3 storage backend) for Seafile server:
seafile.conf)Setup note for binary packages deployment (Pro)
If your Seafile server is deployed from binary packages, you have to do the following steps before deploying:
install boto3 to your machine
sudo pip install boto3\n Install and configure memcached or Redis.
For best performance, Seafile requires enable memory cache for objects. We recommend to at least allocate 128MB memory for memcached or Redis.
The configuration options differ for different S3 storage. We'll describe the configurations in separate sections. You also need to add memory cache configurations
From Seafile 13, configuring S3 from environment variables will be supported and will provide a more convenient way. You can refer to the detailed description of this part in the introduction of .env file. Generally,
S3_COMMIT_BUCKET, S3_FS_BUCKET and S3_BLOCK_BUCKET). SEAF_SERVER_STORAGE_TYPE to true.env \u200b\u200baccording to the following table:S3_COMMIT_BUCKET S3 storage backend commit objects bucket (required) S3_FS_BUCKET S3 storage backend fs objects bucket (required) S3_BLOCK_BUCKET S3 storage backend block objects bucket (required) S3_KEY_ID S3 storage backend key ID (required) S3_SECRET_KEY S3 storage backend secret key (required) S3_AWS_REGION Region of your buckets us-east-1 S3_HOST Host of your buckets (required when not use AWS) S3_USE_HTTPS Use HTTPS connections to S3 if enabled true S3_USE_V4_SIGNATURE Use the v4 protocol of S3 if enabled true S3_PATH_STYLE_REQUEST This option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is in virtual host style, such as https://bucketname.s3.amazonaws.com/object. But this style relies on advanced DNS server setup. So most self-hosted storage systems only implement the path style format. false S3_SSE_C_KEY A string of 32 characters can be generated by openssl rand -base64 24. It can be any 32-character long random string. It's required to use V4 authentication protocol and https if you enable SSE-C. (none) Bucket naming conventions
No matter if you using AWS or any other S3 compatible object storage, we recommend that you follow S3 naming rules. When you create buckets on S3, please read the S3 rules for naming first. Note, especially do not use capital letters in the name of the bucket (do not use camel-style naming, such as MyCommitObjects).
Good naming of a bucketBad naming of a bucketAbout S3_SSE_C_KEY
S3_SSE_C_KEY is a string of 32 characters.
You can generate sse_c_key with the following command. Note that the key doesn't have to be base64 encoded. It can be any 32-character long random string. The example just show one possible way to generate such a key.
openssl rand -base64 24\n Howevery, if you have existing data in your S3 storage bucket, turning on the above configuration will make your data inaccessible. That's because Seafile server doesn't support encrypted and non-encrypted objects mixed in the same bucket. You have to create a new bucket, and migrate your data to it by following storage backend migration documentation.
For other S3 support extensions
In addition to Seafile server, the following extensions (if already installed) will share the same S3 authorization information in .env with Seafile server:
SS_STORAGE_TYPE=s3 and S3_SS_BUCKETMD_STORAGE_TYPE=s3 and S3_MD_BUCKETSEAF_SERVER_STORAGE_TYPE=s3\nS3_COMMIT_BUCKET=my-commit-objects\nS3_FS_BUCKET=my-fs-objects\nS3_BLOCK_BUCKET=my-block-objects\nS3_KEY_ID=your-key-id\nS3_SECRET_KEY=your-secret-key\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=false\nS3_AWS_REGION=eu-central-1\nS3_HOST=\nS3_USE_HTTPS=true\n SEAF_SERVER_STORAGE_TYPE=s3\nS3_COMMIT_BUCKET=my-commit-objects\nS3_FS_BUCKET=my-fs-objects\nS3_BLOCK_BUCKET=my-block-objects\nS3_KEY_ID=your-key-id\nS3_SECRET_KEY=your-secret-key\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=true\nS3_AWS_REGION=eu-central-1 # will be ignored when S3_HOST is specified\nS3_HOST=sos-de-fra-1.exo.io\nS3_USE_HTTPS=true\n SEAF_SERVER_STORAGE_TYPE=s3\nS3_COMMIT_BUCKET=my-commit-objects\nS3_FS_BUCKET=my-fs-objects\nS3_BLOCK_BUCKET=my-block-objects\nS3_KEY_ID=your-key-id\nS3_SECRET_KEY=your-secret-key\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=true\nS3_AWS_REGION=eu-central-1 # will be ignored when S3_HOST is specified\nS3_HOST=fsn1.your-objectstorage.com\nS3_USE_HTTPS=true\n There are other S3-compatible cloud storage providers in the market, such as Blackblaze and Wasabi. Configuration for those providers are just a bit different from AWS. We don't assure the following configuration works for all providers. If you have problems please contact our support
SEAF_SERVER_STORAGE_TYPE=s3\nS3_COMMIT_BUCKET=my-commit-objects\nS3_FS_BUCKET=my-fs-objects\nS3_BLOCK_BUCKET=my-block-objects\nS3_KEY_ID=your-key-id\nS3_SECRET_KEY=your-secret-key\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=true\nS3_AWS_REGION=eu-central-1 # will be ignored when S3_HOST is specified\nS3_HOST=<access endpoint for storage provider>\nS3_USE_HTTPS=true\n Many self-hosted object storage systems are now compatible with the S3 API, such as OpenStack Swift, Ceph's RADOS Gateway and Minio. You can use these S3-compatible storage systems as backend for Seafile. Here is an example config:
SEAF_SERVER_STORAGE_TYPE=s3\nS3_COMMIT_BUCKET=my-commit-objects\nS3_FS_BUCKET=my-fs-objects\nS3_BLOCK_BUCKET=my-block-objects\nS3_KEY_ID=your-key-id\nS3_SECRET_KEY=your-secret-key\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=true\nS3_AWS_REGION=eu-central-1 # will be ignored when S3_HOST is specified\nS3_HOST=<your s3 api endpoint host>:<your s3 api endpoint port>\nS3_USE_HTTPS=true # according to your S3 configuration\n"},{"location":"setup/setup_with_s3/#setup-with-config-file","title":"Setup with config file","text":"Seafile configures S3 storage by adding or modifying the following section in seafile.conf:
[xxx_object_backend]\nname = s3\nbucket = my-xxx-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\nuse_https = true\n... ; other optional configurations\n As with the .env configuration, you have to create at least 3 buckets for Seafile, corresponding to the sections commit_object_backend, fs_object_backend and block_backend. For the configuration of each backend section, please refer to the following table:
bucket — Bucket name for commit, fs, and block objects. Make sure it follows S3 naming rules (you can refer to the notes below the table). key_id — Required to authenticate you to S3. You can find the key_id in the \"security credentials\" section on your AWS account page or from your storage provider. key — Required to authenticate you to S3. You can find the key in the \"security credentials\" section on your AWS account page or from your storage provider. use_v4_signature — There are two versions of authentication protocols that can be used with S3 storage: Version 2 (older, may still be supported by some regions) and Version 4 (current, used by most regions). If you don't set this option, Seafile will use the v2 protocol. It's suggested to use the v4 protocol. use_https — Use https to connect to S3. It's recommended to use https. aws_region — (Optional) If you use the v4 protocol and AWS S3, set this option to the region you chose when you created the buckets. If it's not set and you're using the v4 protocol, Seafile will use us-east-1 as the default. This option will be ignored if you use the v2 protocol. host — (Optional) The endpoint by which you access the storage service. It usually starts with the region name. You must provide the host address if you use a storage provider other than AWS; otherwise Seafile will use AWS's address (i.e., s3.us-east-1.amazonaws.com). sse_c_key — (Optional) A 32-character string, which can be generated by openssl rand -base64 24 or can be any 32-character random string. Using the V4 authentication protocol and https is required if you enable SSE-C. path_style_request — (Optional) This option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is in virtual host style, such as https://bucketname.s3.amazonaws.com/object. But this style relies on advanced DNS server setup. 
Most self-hosted storage systems therefore implement only the path-style format, so we recommend setting this option to true for self-hosted storage."},{"location":"setup/setup_with_s3/#example-configurations_1","title":"Example configurations","text":"AWSExoscaleHetznerOther Public Hosted S3 StorageSelf-hosted S3 Storage [commit_object_backend]\nname = s3\nbucket = my-commit-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = eu-central-1\nuse_https = true\n\n[fs_object_backend]\nname = s3\nbucket = my-fs-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = eu-central-1\nuse_https = true\n\n[block_backend]\nname = s3\nbucket = my-block-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = eu-central-1\nuse_https = true\n [commit_object_backend]\nname = s3\nbucket = your-bucket-name\nhost = sos-de-fra-1.exo.io\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n\n[fs_object_backend]\nname = s3\nbucket = your-bucket-name\nhost = sos-de-fra-1.exo.io\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n\n[block_backend]\nname = s3\nbucket = your-bucket-name\nhost = sos-de-fra-1.exo.io\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n [commit_object_backend]\nname = s3\nbucket = your-bucket-name\nhost = fsn1.your-objectstorage.com\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n\n[fs_object_backend]\nname = s3\nbucket = your-bucket-name\nhost = fsn1.your-objectstorage.com\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n\n[block_backend]\nname = s3\nbucket = your-bucket-name\nhost = fsn1.your-objectstorage.com\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n There are other S3-compatible cloud storage providers in the market, such as Backblaze and Wasabi. Configuration for those providers is just a bit different from AWS. 
We can't guarantee that the following configuration works for all providers. If you run into problems, please contact our support.
[commit_object_backend]\nname = s3\nbucket = my-commit-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\n# v2 authentication protocol will be used if not set\nuse_v4_signature = true\n# required for v4 protocol. ignored for v2 protocol.\naws_region = <region name for storage provider>\nuse_https = true\n\n[fs_object_backend]\nname = s3\nbucket = my-fs-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = <region name for storage provider>\nuse_https = true\n\n[block_backend]\nname = s3\nbucket = my-block-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = <region name for storage provider>\nuse_https = true\n Many self-hosted object storage systems are now compatible with the S3 API, such as OpenStack Swift, Ceph's RADOS Gateway and Minio. You can use these S3-compatible storage systems as backend for Seafile. Here is an example config:
[commit_object_backend]\nname = s3\nbucket = my-commit-objects\nkey_id = your-key-id\nkey = your-secret-key\nhost = <your s3 api endpoint host>:<your s3 api endpoint port>\npath_style_request = true\nuse_v4_signature = true\nuse_https = true\n\n[fs_object_backend]\nname = s3\nbucket = my-fs-objects\nkey_id = your-key-id\nkey = your-secret-key\nhost = <your s3 api endpoint host>:<your s3 api endpoint port>\npath_style_request = true\nuse_v4_signature = true\nuse_https = true\n\n[block_backend]\nname = s3\nbucket = my-block-objects\nkey_id = your-key-id\nkey = your-secret-key\nhost = <your s3 api endpoint host>:<your s3 api endpoint port>\npath_style_request = true\nuse_v4_signature = true\nuse_https = true\n Use server-side encryption with customer-provided keys (SSE-C) in Seafile
Since Pro 11.0, you can use SSE-C with S3. Add the following sse_c_key to seafile.conf (as shown in the variables table above):
[commit_object_backend]\nname = s3\n......\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\n\n[fs_object_backend]\nname = s3\n......\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\n\n[block_backend]\nname = s3\n......\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\n"},{"location":"setup/setup_with_s3/#run-and-test","title":"Run and Test","text":"Now you can start Seafile and test
"},{"location":"setup/setup_with_swift/","title":"Setup With OpenStack Swift","text":"This backend uses the native Swift API. Previously users can only use the S3-compatibility layer of Swift. That way is obsolete now.
Since version 6.3, OpenStack Swift v3.0 API is supported.
"},{"location":"setup/setup_with_swift/#prepare","title":"Prepare","text":"To setup Seafile Professional Server with Swift:
Edit seafile.conf, add the following lines:
[block_backend]\nname = swift\ntenant = yourTenant\nuser_name = user\npassword = secret\ncontainer = seafile-blocks\nauth_host = 192.168.56.31:5000\nauth_ver = v3.0\nregion = yourRegion\n\n[commit_object_backend]\nname = swift\ntenant = yourTenant\nuser_name = user\npassword = secret\ncontainer = seafile-commits\nauth_host = 192.168.56.31:5000\nauth_ver = v3.0\nregion = yourRegion\n\n[fs_object_backend]\nname = swift\ntenant = yourTenant\nuser_name = user\npassword = secret\ncontainer = seafile-fs\nauth_host = 192.168.56.31:5000\nauth_ver = v3.0\nregion = yourRegion\n You also need to add memory cache configurations
The above config is just an example. You should replace the options according to your own environment.
Seafile supports Swift with Keystone as the authentication mechanism. The auth_host option is the address and port of the Keystone service. The region option is used to select the publicURL; if you don't configure it, the first publicURL in the returned authentication information is used.
Since Professional Edition 6.2.1, Seafile also supports Tempauth and Swauth. The auth_ver option should be set to v1.0; tenant and region are no longer needed.
It's required to create separate containers for commit, fs, and block objects.
"},{"location":"setup/setup_with_swift/#use-https-connections-to-swift","title":"Use HTTPS connections to Swift","text":"Since Pro 5.0.4, you can use HTTPS connections to Swift. Add the following options to seafile.conf:
[commit_object_backend]\nname = swift\n......\nuse_https = true\n\n[fs_object_backend]\nname = swift\n......\nuse_https = true\n\n[block_backend]\nname = swift\n......\nuse_https = true\n Because the server package is built on CentOS 6, if you're using Debian/Ubuntu, you have to copy the system CA bundle to CentOS's CA bundle path. Otherwise Seafile can't find the CA bundle so that the SSL connection will fail.
sudo mkdir -p /etc/pki/tls/certs\nsudo cp /etc/ssl/certs/ca-certificates.crt /etc/pki/tls/certs/ca-bundle.crt\nsudo ln -s /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/cert.pem\n"},{"location":"setup/setup_with_swift/#run-and-test","title":"Run and Test","text":"Now you can start Seafile by ./seafile.sh start and ./seahub.sh start and visit the website.
This page shows the minimal requirements of Seafile.
About the system requirements
The system requirements in this document are the minimum hardware suggestions for smooth operation of Seafile (network connectivity is not discussed here). Unless otherwise specified, they apply to all deployment scenarios; however, for binary installations, the libraries we provide in the documents only support the following operating systems:
Important: Information about the services integrated in Docker-based deployments
In each case, we list the names of the services integrated by a standard Docker-based installation. If these services are already installed and you do not need them in your deployment, refer to the corresponding documentation and disable them in the Docker resource file. However, we do not recommend reducing the system resources below our suggestions, unless otherwise specified.
However, if you use other installation methods (e.g., binary deployment, K8S deployment), you have to make sure these services are installed, because those methods do not include their installation.
If you need to install other extensions not included here (e.g., OnlyOffice), you should increase the system requirements appropriately above our recommendations.
CPU and Memory requirements:
Deployment Scenarios | CPU Requirements | Memory Requirements | Indexer / Search Engine
Docker deployment | 4 Cores | 4G | Default
All | 4 Cores | 4G | With existing ElasticSearch service, but on the same machine / node
All | 2 Cores | 2G | With existing ElasticSearch service, and on another machine / node
All | 2 Cores | 2G | Use SeaSearch as the search engine, instead of ElasticSearch
Hard disk requirements: more than 50G is recommended
More details of files indexer used in Seafile PE
By default, Seafile Pro will use Elasticsearch as the file indexer
Please make sure the mmapfs count does not cause exceptions like out of memory; it can be increased with the following command (see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html for further details):
sysctl -w vm.max_map_count=262144 #run as root\n or modify /etc/sysctl.conf and reboot to set this value permanently:
nano /etc/sysctl.conf\n\n# modify vm.max_map_count\nvm.max_map_count=262144\n If your machine does not meet these requirements, 2 cores and 2 GB RAM are the minimum, achievable by choosing one of the following two approaches after first-time deployment:
Use SeaSearch, a lightweight search engine built on open source search engine ZincSearch, as the indexer
Deploy Elasticsearch in another machine, and modify es_host and es_port in seafevents.conf
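The vm.max_map_count setting above can be verified after either method by reading the live kernel value (a quick check; works on any Linux host, and 262144 is the value Elasticsearch expects after tuning):

```shell
# read the current kernel limit; 262144 or higher is what Elasticsearch needs
cat /proc/sys/vm/max_map_count
```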
More details about the number of nodes
More suggestions in Seafile cluster
We assume you have already deployed Redis (or alternatively Memcached, though we still recommend Redis), MariaDB, and a file indexer (e.g., ElasticSearch) on separate machines, and that you use S3-like object storage.
Generally, when deploying Seafile in a cluster, we recommend that you use a storage backend (such as AWS S3) to store Seafile data. However, according to the Seafile image startup rules and K8S persistent storage strategy, you still need to prepare a persistent directory for configuring the startup of the Seafile container.
Since Seafile 12.0, all reverse proxy, HTTPS, etc. processing for single-node deployment based on Docker is handled by caddy. If you need to use other reverse proxy services, you can refer to this document to modify the relevant configuration files.
"},{"location":"setup/use_other_reverse_proxy/#services-that-require-reverse-proxy","title":"Services that require reverse proxy","text":"Before making changes to the configuration files, you have to know the services used by Seafile and related components (Table 1 therafter).
Tip
The services shown in the table below are all based on the single-node integrated deployment in accordance with the Seafile official documentation.
If these services are deployed in standalone mode (such as seadoc and notification-server), or are deployed by following the official documentation of third-party plugins (such as onlyoffice and collabora), you can skip modifying the configuration files of these services (because Caddy is not used as a reverse proxy in such deployments).
If you have not integrated the services in Table 1, please install them in standalone mode or by referring to the official documentation of the third-party plugins when you need these services.
YML | Service | Suggested exposed port | Service listen port | Requires WebSocket
seafile-server.yml | seafile | 80 | 80 | No
seadoc.yml | seadoc | 8888 | 80 | Yes
notification-server.yml | notification-server | 8083 | 8083 | Yes
collabora.yml | collabora | 6232 | 9980 | No
onlyoffice.yml | onlyoffice | 6233 | 80 | No
thumbnail-server.yml | thumbnail | 8084 | 80 | No
"},{"location":"setup/use_other_reverse_proxy/#modify-yml-files","title":"Modify YML files","text":"Refer to Table 1 for the related service exposed ports. Add a ports section for the corresponding services:
services:\n <the service need to be modified>:\n ...\n ports:\n - \"<Suggest exposed port>:<Service listen port>\"\n Delete all fields related to Caddy reverse proxy (in label section)
Tip
Some .yml files (e.g., collabora.yml) also have port-exposing information with Caddy at the top of the file, which also needs to be removed.
We take seafile-server.yml for example (Pro edition):
services:\n # ... other services\n\n seafile:\n image: ${SEAFILE_IMAGE:-seafileltd/seafile-pro-mc:13.0-latest}\n container_name: seafile\n ports:\n - \"80:80\"\n volumes:\n - ${SEAFILE_VOLUME:-/opt/seafile-data}:/shared\n environment:\n ... # environment variables map, do not change\n\n # please remove the `label` section\n #label: ... <- remove this section\n\n depends_on:\n ... # dependencies, do not change\n ...\n\n# ... other options\n"},{"location":"setup/use_other_reverse_proxy/#add-reverse-proxy-for-related-services","title":"Add reverse proxy for related services","text":"Modify nginx.conf and add a reverse proxy for the services seafile and seadoc:
Note
If your proxy server's host is not the same as the host Seafile is deployed on, please replace 127.0.0.1 with your Seafile server's host
location / {\n proxy_pass http://127.0.0.1:80;\n proxy_read_timeout 310s;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header Connection \"\";\n proxy_http_version 1.1;\n\n client_max_body_size 0;\n}\n location /sdoc-server/ {\n proxy_pass http://127.0.0.1:8888/;\n proxy_redirect off;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n\n client_max_body_size 100m;\n}\n\nlocation /socket.io {\n proxy_pass http://127.0.0.1:8888;\n proxy_http_version 1.1;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection 'upgrade';\n proxy_redirect off;\n\n proxy_buffers 8 32k;\n proxy_buffer_size 64k;\n\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header Host $http_host;\n proxy_set_header X-NginX-Proxy true;\n}\n location /notification {\n proxy_pass http://127.0.0.1:8083;\n proxy_http_version 1.1;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection \"upgrade\";\n access_log /var/log/nginx/notification.access.log seafileformat;\n error_log /var/log/nginx/notification.error.log;\n}\n map $http_x_forwarded_proto $the_scheme {\n default $http_x_forwarded_proto;\n \"\" $scheme;\n}\nmap $http_x_forwarded_host $the_host {\n default $http_x_forwarded_host;\n \"\" $host;\n}\nmap $http_upgrade $proxy_connection {\n default upgrade;\n \"\" close;\n}\nlocation /onlyofficeds/ {\n proxy_pass http://127.0.0.1:6233/;\n proxy_http_version 1.1;\n client_max_body_size 100M;\n proxy_read_timeout 3600s;\n proxy_connect_timeout 3600s;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection $proxy_connection;\n proxy_set_header X-Forwarded-Host $the_host/onlyofficeds;\n proxy_set_header X-Forwarded-Proto $the_scheme;\n proxy_set_header 
X-Forwarded-For $proxy_add_x_forwarded_for;\n}\n location /thumbnail {\n proxy_pass http://127.0.0.1:8084;\n proxy_http_version 1.1;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Proto $scheme;\n\n access_log /var/log/nginx/thumbnail.access.log;\n error_log /var/log/nginx/thumbnail.error.log;\n}\n"},{"location":"setup/use_other_reverse_proxy/#modify-env","title":"Modify .env","text":"Remove caddy.yml from field COMPOSE_FILE in .env, e.g.
COMPOSE_FILE='seafile-server.yml' # remove caddy.yml\n"},{"location":"setup/use_other_reverse_proxy/#restart-services-and-nginx","title":"Restart services and nginx","text":"docker compose down\ndocker compose up -d\nsudo nginx -s reload\n"},{"location":"setup/use_seasearch/","title":"Use SeaSearch as search engine (Pro)","text":"SeaSearch, a file indexer that is more lightweight and efficient than Elasticsearch, is supported since Seafile 12.
For Seafile deployed from a binary package
We currently only support Docker-based deployment for SeaSearch Server, so this document describes the configuration for a Seafile server deployed with Docker.
If your Seafile server is deployed from a binary package, please refer here for how to start or stop Seafile server.
For Seafile cluster
Theoretically, if your Seafile server is deployed in cluster mode, at least the backend node has to be restarted; but we still suggest you configure and restart all nodes to ensure consistency and synchronization in the cluster
"},{"location":"setup/use_seasearch/#deploy-seasearch-service","title":"Deploy SeaSearch service","text":"SeaSearch service is currently mainly deployed via docker. We have integrated it into the relevant docker-compose file. You only need to download it to the same directory as seafile-server.yml:
wget https://manual.seafile.com/13.0/repo/docker/pro/seasearch.yml\n"},{"location":"setup/use_seasearch/#modify-env","title":"Modify .env","text":"We have configured the relevant variables in .env. Here you must pay special attention to the following variables, which affect the SeaSearch initialization process. For the variables in .env for the SeaSearch service, please refer here for details. We use /opt/seasearch-data as the persistent directory of SeaSearch (since Seafile 13, the administrator credentials default to the same as Seafile's admin):
For Apple's Chips
Since Apple's chips (such as M2) do not support MKL, you need to set the relevant image to xxx-nomkl:latest, e.g.:
SEASEARCH_IMAGE=seafileltd/seasearch-nomkl:latest\n COMPOSE_FILE='...,seasearch.yml' # ... means other docker-compose files\n\n#SEASEARCH_IMAGE=seafileltd/seasearch-nomkl:1.0-latest # for Apple's Chip\nSEASEARCH_IMAGE=seafileltd/seasearch:1.0-latest\n\nSS_DATA_PATH=/opt/seasearch-data\nINIT_SS_ADMIN_USER=<admin-username> \nINIT_SS_ADMIN_PASSWORD=<admin-password>\n\n\n# if you would like to use S3 for saving seasearch data\nSS_STORAGE_TYPE=s3\nS3_SS_BUCKET=...\nS3_KEY_ID=<your-key-id>\nS3_SECRET_KEY=<your-secret-key>\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=false\nS3_AWS_REGION=us-east-1\nS3_HOST=\nS3_USE_HTTPS=true\nS3_SSE_C_KEY=\n"},{"location":"setup/use_seasearch/#modify-seafile-serveryml-to-disable-elasticsearch-service","title":"Modify seafile-server.yml to disable elasticSearch service","text":"If you would like to use SeaSearch as the search engine, the no-longer-used ElasticSearch service can be removed: delete elasticsearch.yml from the COMPOSE_FILE list variable in the .env file.
seafevents.conf","text":"First, get your authorization token by base64-encoding the INIT_SS_ADMIN_USER and INIT_SS_ADMIN_PASSWORD values defined in .env; this token is used for authorization when calling the SeaSearch API:
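The token computation can be scripted; the sketch below uses the example credentials admin / admin_password (substitute the values from your own .env):

```shell
# example values only; use your real INIT_SS_ADMIN_USER / INIT_SS_ADMIN_PASSWORD
INIT_SS_ADMIN_USER=admin
INIT_SS_ADMIN_PASSWORD=admin_password

# base64 of "<user>:<password>" is the value for seasearch_token in seafevents.conf;
# printf '%s' avoids the trailing-newline pitfall, like echo -n
SS_TOKEN=$(printf '%s' "${INIT_SS_ADMIN_USER}:${INIT_SS_ADMIN_PASSWORD}" | base64)
echo "$SS_TOKEN"   # prints YWRtaW46YWRtaW5fcGFzc3dvcmQ=
```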
echo -n 'username:password' | base64\n\n# example output\nYWRtaW46YWRtaW5fcGFzc3dvcmQ=\n Add the following section to seafevents.conf to enable the Seafile backend service to access the SeaSearch APIs
SeaSearch server deployed on a different machine from Seafile
If your SeaSearch server is deployed on a different machine from Seafile, please replace http://seasearch:4080 with the URL <scheme>://<address>:<port> of your SeaSearch server
[SEASEARCH]\nenabled = true\nseasearch_url = http://seasearch:4080\nseasearch_token = <your auth token>\ninterval = 10m\n\n# if you would like to enable full-text indexing (i.e., search for document content), also set the option below to true (support from 13.0 Pro)\nindex_office_pdf = true\n Disable ElasticSearch by setting enabled = false in the INDEX FILES section:
[INDEX FILES]\nenabled = false\n...\n docker compose down\ndocker compose up -d\n After starting the SeaSearch service, you can check the following logs to verify that SeaSearch runs normally and that Seafile calls it successfully:
docker logs -f seafile-seasearch\n /opt/seasearch-data/log/seafevents.log\n After the first-time start of the SeaSearch Server
You can remove the initial admin account information in .env (e.g., INIT_SS_ADMIN_USER, INIT_SS_ADMIN_PASSWORD), which is only used during the SeaSearch initialization process (i.e., the first time the services start). But make sure you have recorded it somewhere else in case you forget the password.
By default, SeaSearch uses a word-based tokenizer designed for English/German/French. You can add the following configuration to use a tokenizer designed for Chinese.
[SEASEARCH]\nenabled = true\n...\nlang = chinese\n"},{"location":"setup_binary/cluster_deployment/","title":"Cluster Deployment","text":"Tip
Since version 8.0, the recommended way to install a Seafile cluster is using Docker
"},{"location":"setup_binary/cluster_deployment/#cluster-requirements","title":"Cluster requirements","text":"Please refer here for the details about the cluster requirements for all nodes in Seafile cluster. In general, we recommend that each node should have at least 2G RAM and a 2-core CPU (> 2GHz).
"},{"location":"setup_binary/cluster_deployment/#preparation-all-nodes","title":"Preparation (all nodes)","text":""},{"location":"setup_binary/cluster_deployment/#install-prerequisites","title":"Install prerequisites","text":"Please follow here to install prerequisites
Note
The cache server (the first step) is not necessary if you do not wish to deploy it on this node.
"},{"location":"setup_binary/cluster_deployment/#create-user-seafile","title":"Create userseafile","text":"Create a new user and follow the instructions on the screen:
adduser seafile\n Change ownership of the created directory to the new user:
chown -R seafile: /opt/seafile\n All the following steps are done as user seafile.
Change to user seafile:
su seafile\n"},{"location":"setup_binary/cluster_deployment/#placing-the-seafile-pe-license-in-optseafile","title":"Placing the Seafile PE license in /opt/seafile","text":"Save the license file in Seafile's programm directory /opt/seafile. Make sure that the name is seafile-license.txt.
If the license file has a different name or cannot be read, Seafile server will start in trial mode with at most three users
"},{"location":"setup_binary/cluster_deployment/#setup-and-configure-nginx-only-for-frontend-nodes","title":"Setup and configure Nginx (only for frontend nodes)","text":"For security reasons, the Seafile frontend service will only listen to requests from the local port 8000. You need to use Nginx to reverse proxy this port to port 80 for external access:
Install Nginx
sudo apt update\nsudo apt install nginx\n Create the configurations file for current node
sudo nano /etc/nginx/sites-available/seafile.conf\n and, add the following contents into this file:
log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $upstream_response_time';\n\nserver {\n listen 80;\n server_name <current node's IP>;\n\n proxy_set_header X-Forwarded-For $remote_addr;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n # used for view/edit office file via Office Online Server\n client_max_body_size 0;\n\n access_log /var/log/nginx/seahub.access.log seafileformat;\n error_log /var/log/nginx/seahub.error.log;\n }\n\n location /seafhttp {\n rewrite ^/seafhttp(.*)$ $1 break;\n proxy_pass http://127.0.0.1:8082;\n client_max_body_size 0;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n\n proxy_read_timeout 36000s;\n proxy_send_timeout 36000s;\n\n send_timeout 36000s;\n\n access_log /var/log/nginx/seafhttp.access.log seafileformat;\n error_log /var/log/nginx/seafhttp.error.log;\n }\n location /media {\n root /opt/seafile/seafile-server-latest/seahub;\n }\n}\n Link the configurations file to sites-enabled directory:
sudo ln -s /etc/nginx/sites-available/seafile.conf /etc/nginx/sites-enabled/\n Test and enable configuration
sudo nginx -t\nsudo nginx -s reload\n It would be convenient to set up the Seafile service to start on system boot. Follow this documentation to set it up.
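For reference, the boot-time setup linked above typically boils down to a systemd unit along these lines (a sketch, not the official unit file; paths assume the /opt/seafile layout and the seafile user from this guide, and seahub needs a similar companion unit):

```ini
[Unit]
Description=Seafile server
After=network.target mysql.service

[Service]
# seafile.sh daemonizes, hence Type=forking
Type=forking
User=seafile
ExecStart=/opt/seafile/seafile-server-latest/seafile.sh start
ExecStop=/opt/seafile/seafile-server-latest/seafile.sh stop

[Install]
WantedBy=multi-user.target
```

Saved as /etc/systemd/system/seafile.service, it would be enabled with sudo systemctl enable seafile.service.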
"},{"location":"setup_binary/cluster_deployment/#firewall-settings","title":"Firewall Settings","text":"There are 2 firewall rule changes for Seafile cluster:
Please follow Installation of Seafile Server Professional Edition to setup:
/opt/seafile/conf","text":""},{"location":"setup_binary/cluster_deployment/#env","title":".env","text":"Tip
JWT_PRIVATE_KEY: a random string with a length of no less than 32 characters, which can be generated by:
pwgen -s 40 1\n JWT_PRIVATE_KEY=<Your jwt private key>\nSEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=seafile.example.com\nSEAFILE_MYSQL_DB_HOST=<your database host>\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=<your MySQL password>\nSEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db\n"},{"location":"setup_binary/cluster_deployment/#seafileconf","title":"seafile.conf","text":"Add or modify the following configuration to seafile.conf:
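If pwgen is not available, openssl yields a string of the same length as the pwgen command above (a sketch; 30 random bytes base64-encode to exactly 40 characters, comfortably over the 32-character minimum):

```shell
# 30 random bytes -> 40 base64 characters, no padding
JWT_PRIVATE_KEY=$(openssl rand -base64 30)
echo "$JWT_PRIVATE_KEY"
echo "${#JWT_PRIVATE_KEY}"   # prints 40
```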
[memcached]\nmemcached_options = --SERVER=<your memcached ip>[:<your memcached port>] --POOL-MIN=10 --POOL-MAX=100\n [redis]\nredis_host = <your redis ip>\nredis_port = <your redis port, default 6379>\nmax_connections = 100\n Enable cluster mode
[cluster]\nenabled = true\n More options in cluster section
The Seafile server also opens a port for the load balancers to run health checks. Seafile by default uses port 11001. You can change this by adding the following config:
[cluster]\nhealth_check_port = 12345\n Enable backend storage:
You must set up and use a memory cache when deploying a Seafile cluster. Please add or modify the following configuration in seahub_settings.py:
CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '<your Memcached host>:<your Memcached port, default 11211>',\n },\n}\n Please refer to Django's documentation on using the Redis cache to add Redis configurations to seahub_settings.py.
Add the following options to seahub_settings.py, which will tell Seahub to store avatars in the database, cache avatars in memcached, and store the css CACHE in local memory.
AVATAR_FILE_STORAGE = 'seahub.base.database_storage.DatabaseStorage'\n Modify the [INDEX FILES] section to enable full-text search; we take ElasticSearch as an example:
[INDEX FILES]\nenabled = true\ninterval = 10m\nhighlight = fvh\nindex_office_pdf = true\nes_host = <your ElasticSearch host>\nes_port = <your ElasticSearch port, default 9200>\n"},{"location":"setup_binary/cluster_deployment/#update-seahub-database","title":"Update Seahub Database","text":"In a cluster environment, we have to store avatars in the database instead of on a local disk.
mysql -h<your MySQL host> -P<your MySQL port> -useafile -p<user seafile's password>\n\n# enter MySQL environment\nUSE seahub_db;\n\nCREATE TABLE `avatar_uploaded` (`filename` TEXT NOT NULL, `filename_md5` CHAR(32) NOT NULL PRIMARY KEY, `data` MEDIUMTEXT NOT NULL, `size` INTEGER NOT NULL, `mtime` datetime NOT NULL);\n"},{"location":"setup_binary/cluster_deployment/#run-and-test-the-single-node","title":"Run and Test the Single Node","text":"Once you have finished configuring this single node, start it to test if it runs properly:
Note
For installations using python virtual environment, activate it if it isn't already active
source python-venv/bin/activate\n cd /opt/seafile/seafile-server-latest\nsu seafile\n./seafile.sh start\n./seahub.sh start\n Success
The first time you start seahub, the script would prompt you to create an admin account for your Seafile server. Then you can see the following message in your console:
Starting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\nSeahub is started\n\nDone.\n Finally, you can visit http://ip-address-of-this-node:80 and login with the admin account to test if this node is working fine or not.
If the first frontend node works fine, you can compress the whole directory /opt/seafile into a tarball and copy it to all other Seafile server nodes. On each node, simply uncompress it and start the server:
Note
For installations using python virtual environment, activate it if it isn't already active
source python-venv/bin/activate\n cd /opt/seafile/seafile-server-latest\nsu seafile\n./seafile.sh start\n./seahub.sh start\n"},{"location":"setup_binary/cluster_deployment/#backend-node","title":"Backend node","text":"On the backend node, execute the following commands to start the Seafile server. Setting CLUSTER_MODE=backend marks this node as a Seafile backend server.
Note
For installations using python virtual environment, activate it if it isn't already active
source python-venv/bin/activate\n export CLUSTER_MODE=backend\ncd /opt/seafile/seafile-server-latest\nsu seafile\n./seafile.sh start\n./seafile-background-tasks.sh start\n"},{"location":"setup_binary/cluster_deployment/#load-balancer-setting","title":"Load Balancer Setting","text":"Note
Since Seafile Pro server 6.0.0, cluster deployment requires \"sticky session\" settings in the load balancer. Otherwise, folder downloads from the web UI sometimes fail. Read the \"Load Balancer Setting\" section below for details.
Generally speaking, we recommend accessing the Seafile cluster through a load balancing service and binding your domain name (e.g., seafile.cluster.com) to it. Usually, you can use:
Deploy your own load balancing service; this document covers two common options:
In the AWS ELB management console, after you've added the Seafile server instances to the instance list, two more configuration steps are needed.
First, set up HTTP(S) listeners: ports 443 and 80 of the ELB should be forwarded to port 443 or 80 of the Seafile servers.
Then, set up the health check.
Refer to the AWS documentation on how to set up sticky sessions.
"},{"location":"setup_binary/cluster_deployment/#nginx","title":"Nginx","text":"Install Nginx on the host on which you would like to deploy the load balancing service:
sudo apt update\nsudo apt install nginx\n Create the configuration file for the Seafile cluster:
sudo nano /etc/nginx/sites-available/seafile-cluster\n and add the following contents to this file:
upstream seafile_cluster {\n server <IP: your frontend node 1>:80;\n server <IP: your frontend node 2>:80;\n ...\n}\n\nserver {\n listen 80;\n server_name <your domain>;\n\n location / {\n proxy_pass http://seafile_cluster;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_next_upstream error timeout http_502 http_503 http_504;\n }\n}\n Link the configuration file into the sites-enabled directory:
sudo ln -s /etc/nginx/sites-available/seafile-cluster /etc/nginx/sites-enabled/\n Test and reload the configuration:
sudo nginx -t\nsudo nginx -s reload\n This is a sample /etc/haproxy/haproxy.cfg:
(Assume your health check port is 11001)
global\n log 127.0.0.1 local1 notice\n maxconn 4096\n user haproxy\n group haproxy\n\ndefaults\n log global\n mode http\n retries 3\n maxconn 2000\n timeout connect 10000\n timeout client 300000\n timeout server 36000000\n\nlisten seafile 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n cookie SERVERID insert indirect nocache\n server seafileserver01 192.168.1.165:80 check port 11001 cookie seafileserver01\n server seafileserver02 192.168.1.200:80 check port 11001 cookie seafileserver02\n"},{"location":"setup_binary/cluster_deployment/#see-how-it-runs","title":"See how it runs","text":"Now you should be able to test your cluster. Open https://seafile.example.com in your browser and enjoy. You can also synchronize files with Seafile clients.
"},{"location":"setup_binary/cluster_deployment/#the-final-configuration-of-the-front-end-nodes","title":"The final configuration of the front-end nodes","text":"Here is a summary of the cluster-related configuration on the front-end nodes (for version 7.1+).
For seafile.conf:
[cluster]\nenabled = true\n The enabled option prevents ./seafile.sh start from starting background tasks on the front-end nodes. These tasks must be started explicitly with ./seafile-background-tasks.sh start on the back-end node.
For seahub_settings.py:
AVATAR_FILE_STORAGE = 'seahub.base.database_storage.DatabaseStorage'\n For seafevents.conf:
[INDEX FILES]\nenabled = true\ninterval = 10m\nhighlight = fvh # This configuration improves search speed\nes_host = <IP of background node>\nes_port = 9200\n The [INDEX FILES] section is needed to let the front-end nodes know that the file search feature is enabled.
You can enable HTTPS on your load balancing service, e.g., by using a certificate manager (such as Certbot) to acquire and install certificates for your Seafile cluster. After enabling HTTPS, you have to change the relevant URLs in seahub_settings.py and .env from the http:// prefix to https://.
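The required change is a plain scheme swap on each affected URL; an illustrative helper (not part of Seafile) makes the idea concrete:

```python
def to_https(url: str) -> str:
    """Swap an http:// prefix for https://; leave other URLs untouched."""
    if url.startswith("http://"):
        return "https://" + url[len("http://"):]
    return url

print(to_https("http://seafile.cluster.com"))  # https://seafile.cluster.com
```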
You can follow the instructions here to deploy the SeaDoc server, and then modify SEADOC_SERVER_URL in your .env file.
After completing the installation of Seafile Server Professional Edition, communication between the Seafile server and clients runs over (unencrypted) HTTP. While HTTP is ok for testing purposes, switching to HTTPS is imperative for production use.
HTTPS requires an SSL certificate from a Certificate Authority (CA). Unless you already have an SSL certificate, we recommend that you get yours from Let\u2019s Encrypt using Certbot. If you have an SSL certificate from another CA, skip the section \"Getting a Let's Encrypt certificate\".
A second requirement is a reverse proxy supporting SSL. Nginx, a popular and resource-friendly web server and reverse proxy, is a good option. Nginx's documentation is available at http://nginx.org/en/docs/.
"},{"location":"setup_binary/https_with_nginx/#setup","title":"Setup","text":"The setup of Seafile using Nginx as a reverse proxy with HTTPS is demonstrated using the sample host name seafile.example.com.
This manual assumes the following requirements:
If your setup differs from these requirements, adjust the following instructions accordingly.
The setup proceeds in two steps: First, Nginx is installed. Second, an SSL certificate is integrated into the Nginx configuration.
"},{"location":"setup_binary/https_with_nginx/#installing-nginx","title":"Installing Nginx","text":"Install Nginx using the package repositories:
sudo apt install nginx -y\n After the installation, start the server and enable it so that Nginx starts at system boot:
sudo systemctl start nginx\nsudo systemctl enable nginx\n"},{"location":"setup_binary/https_with_nginx/#preparing-nginx","title":"Preparing Nginx","text":"Create a configuration file for seafile in /etc/nginx/sites-available/:
touch /etc/nginx/sites-available/seafile.conf\n Delete the default files in /etc/nginx/sites-enabled/ and /etc/nginx/sites-available:
rm /etc/nginx/sites-enabled/default\nrm /etc/nginx/sites-available/default\n Create a symbolic link:
ln -s /etc/nginx/sites-available/seafile.conf /etc/nginx/sites-enabled/seafile.conf\n"},{"location":"setup_binary/https_with_nginx/#configuring-nginx","title":"Configuring Nginx","text":"Copy the following sample Nginx config file into the just created seafile.conf (i.e., nano /etc/nginx/sites-available/seafile.conf) and modify the content to fit your needs:
log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $upstream_response_time';\n\nserver {\n listen 80;\n server_name seafile.example.com;\n\n proxy_set_header X-Forwarded-For $remote_addr;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n # used for view/edit office file via Office Online Server\n client_max_body_size 0;\n\n access_log /var/log/nginx/seahub.access.log seafileformat;\n error_log /var/log/nginx/seahub.error.log;\n }\n\n location /seafhttp {\n rewrite ^/seafhttp(.*)$ $1 break;\n proxy_pass http://127.0.0.1:8082;\n client_max_body_size 0;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n\n proxy_read_timeout 36000s;\n proxy_send_timeout 36000s;\n\n send_timeout 36000s;\n\n access_log /var/log/nginx/seafhttp.access.log seafileformat;\n error_log /var/log/nginx/seafhttp.error.log;\n }\n location /media {\n root /opt/seafile/seafile-server-latest/seahub;\n }\n}\n The following options must be modified in the CONF file:
Optional customizable options in the seafile.conf are:
Port (listen) - if the Seafile server should be available on a non-standard port; location / - if Seahub is configured to start on a port other than 8000; location /seafhttp - if seaf-server is configured to start on a port other than 8082; maximum upload size (client_max_body_size). The default value for client_max_body_size is 1M. Uploading larger files will result in the error HTTP 413 (\"Request Entity Too Large\"). It is recommended to synchronize the value of client_max_body_size with the max_upload_size parameter in the [fileserver] section of seafile.conf. Optionally, the value can also be set to 0 to disable this limit. Client uploads are only partly affected by this limit: with a limit of 100 MiB they can still safely upload files of any size.
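To see why the default of 1M rejects larger uploads, it helps to know how Nginx size suffixes translate to bytes (k = KiB, m = MiB, g = GiB). A small helper, purely illustrative and not part of Nginx or Seafile:

```python
def nginx_size_to_bytes(value: str) -> int:
    """Convert an Nginx size value like '1m' or '100K' to bytes.

    Suffixes follow Nginx conventions: k/K = KiB, m/M = MiB, g/G = GiB;
    no suffix means bytes, and 0 disables the limit.
    """
    units = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    v = value.strip().lower()
    if v and v[-1] in units:
        return int(v[:-1]) * units[v[-1]]
    return int(v)

print(nginx_size_to_bytes("1m"))  # 1048576 -- request bodies beyond this get HTTP 413
```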
Finally, make sure your seafile.conf does not contain syntax errors and restart Nginx for the configuration changes to take effect:
nginx -t\nnginx -s reload\n"},{"location":"setup_binary/https_with_nginx/#getting-a-lets-encrypt-certificate","title":"Getting a Let's Encrypt certificate","text":"Getting a Let's Encrypt certificate is straightforward thanks to Certbot. Certbot is a free, open source software tool for requesting, receiving, and renewing Let's Encrypt certificates.
First, go to the Certbot website and choose your webserver and OS.
Second, follow the detailed instructions then shown.
We recommend that you get just a certificate and that you modify the Nginx configuration yourself:
sudo certbot certonly --nginx\n Follow the instructions on the screen.
Upon successful verification, Certbot saves the certificate files in a directory named after the host name in /etc/letsencrypt/live. For the host name seafile.example.com, the files are stored in /etc/letsencrypt/live/seafile.example.com.
Tip
Normally, your nginx configuration can be automatically managed by a certificate manager (e.g., CertBot) after you install the certificate. If you find that your nginx is already listening on port 443 through the certificate manager after installing the certificate, you can skip this step.
Add a server block for port 443 and an HTTP-to-HTTPS redirect to the seafile.conf configuration file in /etc/nginx.
This is a (shortened) sample configuration for the host name seafile.example.com:
log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $upstream_response_time';\n\nserver {\n listen 80;\n server_name seafile.example.com;\n rewrite ^ https://$http_host$request_uri? permanent; # Forced redirect from HTTP to HTTPS\n\n server_tokens off; # Prevents the Nginx version from being displayed in the HTTP response header\n}\n\nserver {\n listen 443 ssl;\n ssl_certificate /etc/letsencrypt/live/seafile.example.com/fullchain.pem; # Path to your fullchain.pem\n ssl_certificate_key /etc/letsencrypt/live/seafile.example.com/privkey.pem; # Path to your privkey.pem\n server_name seafile.example.com;\n server_tokens off;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n proxy_set_header X-Forwarded-Proto https;\n\n... # No changes beyond this point compared to the Nginx configuration without HTTPS\n Finally, make sure your seafile.conf does not contain syntax errors and restart Nginx for the configuration changes to take effect:
nginx -t\nnginx -s reload\n"},{"location":"setup_binary/https_with_nginx/#large-file-uploads","title":"Large file uploads","text":"Tip for uploading very large files (> 4GB): By default, Nginx buffers a large request body in a temp file. After the body is completely received, Nginx sends it to the upstream server (seaf-server in our case). However, when the file is very large, the buffering mechanism doesn't work well and may stop proxying the body midway. So if you want to support file uploads larger than 4GB, we suggest you install Nginx version >= 1.8.0 and add the following options to the Nginx config file:
location /seafhttp {\n ... ...\n proxy_request_buffering off;\n }\n If you have WebDAV enabled, it is recommended to add the same:
location /seafdav {\n ... ...\n proxy_request_buffering off;\n }\n"},{"location":"setup_binary/https_with_nginx/#modify-env","title":"Modify .env","text":"Modify the following field to https
SEAFILE_SERVER_PROTOCOL=https\n"},{"location":"setup_binary/https_with_nginx/#modifying-seafileconf-optional","title":"Modifying seafile.conf (optional)","text":"To improve security, the file server should only be accessible via Nginx.
Add the following line in the [fileserver] block of seafile.conf in /opt/seafile/conf:
host = 127.0.0.1 ## the default is 0.0.0.0\n After this change, the file server only accepts requests from Nginx.
"},{"location":"setup_binary/https_with_nginx/#starting-seafile-and-seahub","title":"Starting Seafile and Seahub","text":"Restart the seaf-server and Seahub for the config changes to take effect:
su seafile\ncd /opt/seafile/seafile-server-latest\n./seafile.sh restart\n./seahub.sh restart # or \"./seahub.sh start-fastcgi\" if you're using fastcgi\n"},{"location":"setup_binary/https_with_nginx/#additional-modern-settings-for-nginx-optional","title":"Additional modern settings for Nginx (optional)","text":""},{"location":"setup_binary/https_with_nginx/#activating-ipv6","title":"Activating IPv6","text":"IPv6 must be enabled on the server, otherwise Nginx will not start. An AAAA DNS record is also required for IPv6 usage.
listen 443;\nlisten [::]:443;\n"},{"location":"setup_binary/https_with_nginx/#activating-http2","title":"Activating HTTP2","text":"Activate HTTP2 for better performance. It is only available with SSL and Nginx version >= 1.9.5. Simply add http2:
listen 443 http2;\nlisten [::]:443 http2;\n"},{"location":"setup_binary/https_with_nginx/#advanced-tls-configuration-for-nginx-optional","title":"Advanced TLS configuration for Nginx (optional)","text":"The TLS configuration in the sample Nginx configuration file above receives a B overall rating on SSL Labs. By modifying the TLS configuration in seafile.conf, this rating can be significantly improved.
The following sample Nginx configuration file for the host name seafile.example.com contains additional security-related directives. (Note that this sample file uses a generic path for the SSL certificate files.) Some of the directives require further steps as explained below.
server {\n listen 80;\n server_name seafile.example.com;\n rewrite ^ https://$http_host$request_uri? permanent; # Forced redirect from HTTP to HTTPS\n server_tokens off;\n }\n server {\n listen 443 ssl;\n ssl_certificate /etc/ssl/cacert.pem; # Path to your cacert.pem\n ssl_certificate_key /etc/ssl/privkey.pem; # Path to your privkey.pem\n server_name seafile.example.com;\n server_tokens off;\n\n # HSTS for protection against man-in-the-middle-attacks\n add_header Strict-Transport-Security \"max-age=31536000; includeSubDomains\";\n\n # DH parameters for Diffie-Hellman key exchange\n ssl_dhparam /etc/nginx/dhparam.pem;\n\n # Supported protocols and ciphers for general purpose server with good security and compatability with most clients\n ssl_protocols TLSv1.2 TLSv1.3;\n ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;\n ssl_prefer_server_ciphers off;\n\n # Supported protocols and ciphers for server when clients > 5years (i.e., Windows Explorer) must be supported\n #ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;\n #ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA;\n #ssl_prefer_server_ciphers on;\n\n ssl_session_timeout 5m;\n ssl_session_cache shared:SSL:5m;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n 
proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto https;\n\n access_log /var/log/nginx/seahub.access.log;\n error_log /var/log/nginx/seahub.error.log;\n\n proxy_read_timeout 1200s;\n\n client_max_body_size 0;\n }\n\n location /seafhttp {\n rewrite ^/seafhttp(.*)$ $1 break;\n proxy_pass http://127.0.0.1:8082;\n client_max_body_size 0;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n\n proxy_read_timeout 36000s;\n proxy_send_timeout 36000s;\n send_timeout 36000s;\n }\n\n location /media {\n root /home/user/haiwen/seafile-server-latest/seahub;\n }\n }\n"},{"location":"setup_binary/https_with_nginx/#enabling-http-strict-transport-security","title":"Enabling HTTP Strict Transport Security","text":"Enable HTTP Strict Transport Security (HSTS) to prevent man-in-the-middle attacks by adding this directive:
add_header Strict-Transport-Security \"max-age=31536000; includeSubDomains\" always;\n HSTS instructs web browsers to automatically use HTTPS. That means that after the first visit to the HTTPS version of Seahub, the browser will only use HTTPS to access the site.
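The max-age value is given in seconds; 31536000 corresponds to one (non-leap) year. As a quick sanity check of the header value used above (a sketch, not how browsers actually parse the directive):

```python
def hsts_max_age(header: str) -> int:
    """Extract the max-age (in seconds) from a Strict-Transport-Security value."""
    for directive in header.split(";"):
        name, _, value = directive.strip().partition("=")
        if name.lower() == "max-age":
            return int(value)
    raise ValueError("no max-age directive found")

age = hsts_max_age("max-age=31536000; includeSubDomains")
print(age == 365 * 24 * 3600)  # True: one year in seconds
```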
"},{"location":"setup_binary/https_with_nginx/#using-perfect-forward-secrecy","title":"Using Perfect Forward Secrecy","text":"Enable Diffie-Hellman (DH) key-exchange. Generate DH parameters and write them in a .pem file using the following command:
openssl dhparam 2048 > /etc/nginx/dhparam.pem # Generates DH parameter of length 2048 bits\n The generation of the DH parameters may take some time depending on the server's processing power.
Add the following directive in the HTTPS server block:
ssl_dhparam /etc/nginx/dhparam.pem;\n"},{"location":"setup_binary/https_with_nginx/#restricting-tls-protocols-and-ciphers","title":"Restricting TLS protocols and ciphers","text":"Disallow the use of old TLS protocols and ciphers. Mozilla provides a configuration generator for optimizing the conflicting objectives of security and compatibility. Visit https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx for more information.
"},{"location":"setup_binary/installation/","title":"Installation of Seafile Server Professional Edition","text":"This manual explains how to deploy and run Seafile Server Professional Edition (Seafile PE) on a Linux server from a pre-built package using MySQL/MariaDB as database. The deployment has been tested for Debian/Ubuntu.
"},{"location":"setup_binary/installation/#requirements","title":"Requirements","text":"Please refer here for system requirements about Seafile PE. In general, we recommend that you should have at least 4G RAM and a 4-core CPU (> 2GHz).
Seafile PE can be used without a paid license with up to three users. Licenses for more users can be purchased in the Seafile Customer Center, or by contacting Seafile sales at sales@seafile.com or one of our partners.
"},{"location":"setup_binary/installation/#setup","title":"Setup","text":""},{"location":"setup_binary/installation/#installing-and-preparing-the-sql-database","title":"Installing and preparing the SQL database","text":"Seafile supports MySQL and MariaDB. We recommend that you use the preferred SQL database management engine included in the package repositories of your distribution.
You can find step-by-step how-tos for installing MySQL and MariaDB in the tutorials on the Digital Ocean website.
Seafile uses the mysql_native_password plugin for authentication. The versions of MySQL and MariaDB installed on Ubuntu/Debian use a different authentication plugin by default. It is therefore required to change the authentication plugin to mysql_native_password for the root user prior to the installation of Seafile. The above-mentioned tutorials explain how to do this.
Tip
The standard directory /opt/seafile is assumed for the rest of this manual. If you decide to put Seafile in another directory, some commands need to be modified accordingly
Install cache server (e.g., Redis)
sudo apt-get update\nsudo apt-get install -y redis-server libhiredis-dev\n Install Python and related libraries
Ubuntu 24.04Debian 13Debian 12Note
Debian 12 and Ubuntu 24.04 are now discouraging system-wide installation of python modules with pip. It is preferred now to install modules into a virtual environment, which keeps them separate from the files installed by the system package manager and enables different versions to be installed for different applications. For these python virtual environments (venv for short) to work, you have to activate the venv to make the packages installed in it available to the programs you run. That is done here with source python-venv/bin/activate.
sudo apt-get install -y python3 python3-dev python3-setuptools python3-pip python3-ldap python3-rados libmysqlclient-dev libmemcached-dev ldap-utils libldap2-dev python3.12-venv default-libmysqlclient-dev build-essential pkg-config\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the virtual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 boto3 oss2 twilio configparser pytz \\\n sqlalchemy==2.0.* pymysql==1.1.* jinja2 django-pylibmc pylibmc psd-tools lxml \\\n django==5.2.* cffi==1.17.1 future==1.0.* mysqlclient==2.2.* captcha==0.7.* django_simple_captcha==0.6.* \\\n pyjwt==2.10.* djangosaml2==1.11.* pysaml2==7.5.* pycryptodome==3.23.* python-ldap==3.4.* pillow==11.3.* pillow-heif==1.0.*\n Note
Debian 12 and Ubuntu 24.04 are now discouraging system-wide installation of python modules with pip. It is preferred now to install modules into a virtual environment, which keeps them separate from the files installed by the system package manager and enables different versions to be installed for different applications. For these python virtual environments (venv for short) to work, you have to activate the venv to make the packages installed in it available to the programs you run. That is done here with source python-venv/bin/activate.
sudo apt-get install -y python3 python3-dev python3-setuptools python3-pip python3-ldap python3-rados libmariadb-dev-compat libmemcached-dev ldap-utils libldap2-dev libsasl2-dev pkg-config python3.13-venv\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the virtual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 boto3 oss2 twilio configparser pytz \\\n sqlalchemy==2.0.* pymysql==1.1.* jinja2 django-pylibmc pylibmc psd-tools lxml \\\n django==5.2.* cffi==1.17.1 future==1.0.* mysqlclient==2.2.* captcha==0.7.* django_simple_captcha==0.6.* \\\n pyjwt==2.10.* djangosaml2==1.11.* pysaml2==7.5.* pycryptodome==3.23.* python-ldap==3.4.* pillow==11.3.* pillow-heif==1.0.*\n Note
Debian 12 and Ubuntu 24.04 are now discouraging system-wide installation of python modules with pip. It is preferred now to install modules into a virtual environment, which keeps them separate from the files installed by the system package manager and enables different versions to be installed for different applications. For these python virtual environments (venv for short) to work, you have to activate the venv to make the packages installed in it available to the programs you run. That is done here with source python-venv/bin/activate.
sudo apt-get install -y python3 python3-dev python3-setuptools python3-pip python3-ldap python3-rados libmariadb-dev-compat libmemcached-dev ldap-utils libldap2-dev libsasl2-dev pkg-config python3.11-venv \n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the virtual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 boto3 oss2 twilio configparser pytz \\\n sqlalchemy==2.0.* pymysql==1.1.* jinja2 django-pylibmc pylibmc psd-tools lxml \\\n django==5.2.* cffi==1.17.1 future==1.0.* mysqlclient==2.2.* captcha==0.7.* django_simple_captcha==0.6.* \\\n pyjwt==2.10.* djangosaml2==1.11.* pysaml2==7.5.* pycryptodome==3.23.* python-ldap==3.4.* pillow==11.3.* pillow-heif==1.0.*\n Elasticsearch, the indexing server, cannot be run as root. More generally, it is good practice not to run applications as root.
Create a new user and follow the instructions on the screen:
Ubuntu 24.04Debian 13/12adduser seafile\n /usr/sbin/adduser seafile\n Change ownership of the created directory to the new user:
chown -R seafile: /opt/seafile\n All the following steps are done as user seafile.
Change to user seafile:
su seafile\n"},{"location":"setup_binary/installation/#placing-the-seafile-pe-license","title":"Placing the Seafile PE license","text":"Save the license file in Seafile's program directory /opt/seafile. Make sure that the name is seafile-license.txt.
If the license file has a different name or cannot be read, the Seafile server will start in trial mode, which is limited to at most three users.
"},{"location":"setup_binary/installation/#downloading-the-install-package","title":"Downloading the install package","text":"The install packages for Seafile PE are available for download in the Seafile Customer Center. To access the Customer Center, a user account is necessary. The registration is free.
Beginning with Seafile PE 7.0.17, the Seafile Customer Center provides two install packages for every version (using Seafile PE 13.0.10 as an example):
The former is suitable for installation on Ubuntu/Debian servers.
Download the install package using wget (replace the x.x.x with the version you wish to download):
# Debian/Ubuntu\nwget -O 'seafile-pro-server_x.x.x_x86-64_Ubuntu.tar.gz' 'VERSION_SPECIFIC_LINK_FROM_SEAFILE_CUSTOMER_CENTER'\n We use Seafile version 13.0.10 as an example in the remainder of these instructions.
"},{"location":"setup_binary/installation/#uncompressing-the-package","title":"Uncompressing the package","text":"The install package is downloaded as a compressed tarball which needs to be uncompressed.
Uncompress the package using tar:
# Debian/Ubuntu\ntar xf seafile-pro-server_13.0.10_x86-64_Ubuntu.tar.gz\n Now you have:
$ tree -L 2 /opt/seafile\n.\n\u251c\u2500\u2500 seafile-license.txt\n\u251c\u2500\u2500 python-venv # you will not see this directory if you use ubuntu 22/debian 10\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 bin\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 include\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lib\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lib64 -> lib\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 pyvenv.cfg\n\u251c\u2500\u2500 seafile-pro-server-13.0.10\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 check_init_admin.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 index_op.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate-repo.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate-repo.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate_ldapusers.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 parse_seahub_db.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 pro\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 remove-objs.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 remove-objs.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 reset-admin.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 run_index_master.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 run_index_worker.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 runtime\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-backup-cmd.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-backup-cmd.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-encrypt.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fsck.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fuse.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-gc.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-import.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile-background-tasks.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile-monitor.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 
setup-seafile-mysql.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sql\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 upgrade\n\u2514\u2500\u2500 seafile-pro-server_13.0.10_x86-64_Ubuntu.tar.gz\n Tip
The names of the install packages differ for Seafile CE and Seafile PE. Using Seafile CE and Seafile PE 13.0.10 as an example, the names are as follows:
seafile-server_13.0.10_x86-64.tar.gz, uncompressing into the folder seafile-server-13.0.10; seafile-pro-server_13.0.10_x86-64.tar.gz, uncompressing into the folder seafile-pro-server-13.0.10. The install package comes with a script that sets Seafile up for you. Specifically, the script creates the required directories and extracts all files in the right place. It can also create a MySQL user and the three databases that Seafile's components require:
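The mapping from tarball name to extraction folder follows a simple pattern (package name plus version, joined with a hyphen); a small illustrative helper, not part of Seafile:

```python
def extract_folder(tarball: str) -> str:
    """Derive the extraction folder from a Seafile install package name.

    e.g. 'seafile-pro-server_13.0.10_x86-64.tar.gz' -> 'seafile-pro-server-13.0.10'
    """
    name, version = tarball.split("_")[:2]
    return f"{name}-{version}"

print(extract_folder("seafile-pro-server_13.0.10_x86-64.tar.gz"))  # seafile-pro-server-13.0.10
```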
While ccnet server was merged into the seafile-server in Seafile 8.0, the corresponding database is still required for the time being
Run the script as user seafile:
Note
For installations using python virtual environment, activate it if it isn't already active
source python-venv/bin/activate\n cd seafile-pro-server-13.0.10\n./setup-seafile-mysql.sh\n Configure your Seafile Server by specifying the following three parameters:
Option Description Note server name Name of the Seafile Server 3-15 characters, only English letters, digits and underscore ('_') are allowed server's ip or domain IP address or domain name used by the Seafile Server Seafile client program will access the server using this address fileserver port TCP port used by the Seafile fileserver Default port is 8082; it is recommended to use this port and to only change it if it is used by another service. In the next step, choose whether to create new databases for Seafile or to use existing databases. The creation of new databases requires the root password for the SQL server.
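The server-name rule above (3-15 characters; English letters, digits, and underscore only) maps to a simple regular expression; a hypothetical validator, for illustration only:

```python
import re

SERVER_NAME_RE = re.compile(r"^[A-Za-z0-9_]{3,15}$")

def is_valid_server_name(name: str) -> bool:
    """Check a candidate name against the installer's stated constraints."""
    return bool(SERVER_NAME_RE.match(name))

print(is_valid_server_name("my_seafile"))  # True
print(is_valid_server_name("ab"))          # False: shorter than 3 characters
```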
Note
If you don't have the root password, you need someone who has the privileges, e.g., the database admin, to create the three databases required by Seafile, as well as a MySQL user who can access the databases. For example, to create three databases ccnet_db / seafile_db / seahub_db for ccnet/seafile/seahub respectively, and a MySQL user \"seafile\" to access these databases, run the following SQL queries:
create database `ccnet_db` character set = 'utf8';\ncreate database `seafile_db` character set = 'utf8';\ncreate database `seahub_db` character set = 'utf8';\n\ncreate user 'seafile'@'localhost' identified by 'seafile';\n\nGRANT ALL PRIVILEGES ON `ccnet_db`.* to `seafile`@localhost;\nGRANT ALL PRIVILEGES ON `seafile_db`.* to `seafile`@localhost;\nGRANT ALL PRIVILEGES ON `seahub_db`.* to `seafile`@localhost;\n [1] Create new ccnet/seafile/seahub databases[2] Use existing ccnet/seafile/seahub databases The script creates these databases and a MySQL user that Seafile Server will use to access them. To this effect, you need to answer these questions:
Question Description Note mysql server host Host address of the MySQL server Default is localhost mysql server port TCP port used by the MySQL server Default port is 3306; almost every MySQL server uses this port mysql root password Password of the MySQL root account The root password is required to create new databases and a MySQL user mysql user for Seafile MySQL user created by the script, used by Seafile's components to access the databases Default is seafile; the user is created unless it exists mysql password for Seafile user Password for the user above, written in Seafile's config files Percent sign ('%') is not allowed ccnet database name Name of the database used by ccnet Default is \"ccnet_db\", the database is created if it does not exist seafile database name Name of the database used by Seafile Default is \"seafile_db\", the database is created if it does not exist seahub database name Name of the database used by seahub Default is \"seahub_db\", the database is created if it does not exist. The prompts you need to answer:
Question Description Note mysql server host Host address of the MySQL server Default is localhost mysql server port TCP port used by MySQL server Default port is 3306; almost every MySQL server uses this port mysql user for Seafile User used by Seafile's components to access the databases The user must exist mysql password for Seafile user Password for the user above ccnet database name Name of the database used by ccnet, default is \"ccnet_db\" The database must exist seafile database name Name of the database used by Seafile, default is \"seafile_db\" The database must exist seahub database name Name of the database used by Seahub, default is \"seahub_db\" The database must exist. If the setup is successful, you see the following output:
The directory layout then looks as follows:
/opt/seafile\n\u251c\u2500\u2500 seafile-license.txt\n\u251c\u2500\u2500 ccnet\n\u251c\u2500\u2500 conf\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 gunicorn.conf.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafdav.conf\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafevents.conf\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile.conf\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 seahub_settings.py\n\u251c\u2500\u2500 pro-data\n\u251c\u2500\u2500 python-venv # you will not see this directory if you use ubuntu 22/debian 10\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 bin\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 include\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lib\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lib64 -> lib\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 pyvenv.cfg\n\u251c\u2500\u2500 seafile-data\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 library-template\n\u251c\u2500\u2500 seafile-pro-server-13.0.10\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 check_init_admin.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 index_op.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate-repo.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate-repo.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate_ldapusers.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 parse_seahub_db.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 pro\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 remove-objs.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 remove-objs.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 reset-admin.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 run_index_master.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 run_index_worker.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 runtime\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-backup-cmd.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-backup-cmd.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-encrypt.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fsck.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fuse.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 
seaf-gc.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-import.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile-background-tasks.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile-monitor.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sql\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 upgrade\n\u251c\u2500\u2500 seafile-pro-server_13.0.10_x86-64_Ubuntu.tar.gz\n\u251c\u2500\u2500 seafile-server-latest -> seafile-pro-server-13.0.10\n\u2514\u2500\u2500 seahub-data\n \u2514\u2500\u2500 avatars\n The folder seafile-server-latest is a symbolic link to the current Seafile Server folder. When later you upgrade to a new version, the upgrade scripts update this link to point to the latest Seafile Server folder.
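The symlink mechanics can be illustrated in a throwaway directory (the folder names below mirror the layout above and are examples only, so nothing in a real installation is touched):

```shell
# Simulate the layout in a temporary directory
tmp=$(mktemp -d)
mkdir -p "$tmp/seafile-pro-server-13.0.10"

# Create the symlink the way the setup script does
ln -s seafile-pro-server-13.0.10 "$tmp/seafile-server-latest"

# readlink shows which server folder is currently active
readlink "$tmp/seafile-server-latest"

rm -rf "$tmp"
```

An upgrade script later repoints this link to the new version folder, so scripts and service files that reference seafile-server-latest keep working across upgrades.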
.env file in conf/ directory","text":"nano /opt/seafile/conf/.env\n Tip
JWT_PRIVATE_KEY: a random string of at least 32 characters, which can be generated with:
pwgen -s 40 1\n JWT_PRIVATE_KEY=<Your jwt private key>\nSEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=seafile.example.com\nSEAFILE_MYSQL_DB_HOST=<your database host>\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=<your MySQL password>\nSEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db\n\n## Cache\nCACHE_PROVIDER=redis # options: redis (recommended), memcached\n\n### Redis\nREDIS_HOST=redis\nREDIS_PORT=6379\nREDIS_PASSWORD=\n\n### Memcached\nMEMCACHED_HOST=memcached\nMEMCACHED_PORT=11211\n"},{"location":"setup_binary/installation/#setup-memory-cache","title":"Setup Memory Cache","text":"Memory cache is mandatory for the Pro Edition. You may use Memcached or Redis as the cache server.
MemcachedRedisUse the following commands to install memcached and the corresponding libraries on your system:
# on Debian/Ubuntu 18.04+\napt-get install memcached libmemcached-dev -y\npip3 install --timeout=3600 pylibmc django-pylibmc\n\nsystemctl enable --now memcached\n Add or modify the following configuration to /opt/seafile/conf/.env:
## Cache\nCACHE_PROVIDER=memcached\n\n### Memcached\nMEMCACHED_HOST=memcached\nMEMCACHED_PORT=11211\n Redis is supported since version 11.0
Use the following commands to install redis and the corresponding libraries on your system:
# on Debian/Ubuntu 18.04+\napt-get install -y redis-server libhiredis-dev\npip3 install redis django-redis\n\nsystemctl enable --now redis-server\n Add or modify the following configuration to /opt/seafile/conf/.env:
## Cache\nCACHE_PROVIDER=redis\n\n### Redis\nREDIS_HOST=redis\nREDIS_PORT=6379\nREDIS_PASSWORD=\n"},{"location":"setup_binary/installation/#enabling-httphttps-optional-but-recommended","title":"Enabling HTTP/HTTPS (Optional but Recommended)","text":"You need to set up at least HTTP for Seafile's web interface to work. This manual provides instructions for enabling HTTP/HTTPS with the most popular web servers and reverse proxies (e.g., Nginx).
"},{"location":"setup_binary/installation/#starting-seafile-server","title":"Starting Seafile Server","text":"Run the following commands in /opt/seafile/seafile-server-latest:
Note
For installations using python virtual environment, activate it if it isn't already active
source python-venv/bin/activate\n su seafile\n./seafile.sh start # Start Seafile service\n./seahub.sh start # Start seahub website, port defaults to 127.0.0.1:8000\n Success
The first time you start Seahub, the script prompts you to create an admin account for your Seafile Server. Enter the email address of the admin user followed by the password, i.e.:
What is the email for the admin account?\n[ admin email ] <please input your admin's email>\n\nWhat is the password for the admin account?\n[ admin password ] <please input your admin's password>\n\nEnter the password again:\n[ admin password again ] <please input your admin's password again>\n Now you can access Seafile via the web interface at the host address (e.g., https://seafile.example.com).
"},{"location":"setup_binary/installation/#enabling-full-text-search","title":"Enabling full text search","text":"Seafile uses the indexing server ElasticSearch to enable full text search.
"},{"location":"setup_binary/installation/#deploying-elasticsearch","title":"Deploying ElasticSearch","text":"Our recommendation for deploying ElasticSearch is using Docker. Detailed information about installing Docker on various Linux distributions is available at Docker Docs.
Seafile PE 9.0 only supports ElasticSearch 7.x. Seafile PE 10.0, 11.0, 12.0 and 13.0 only support ElasticSearch 8.x.
We use ElasticSearch version 8.15.0 as an example in this section. Version 8.15.0 and newer versions have been successfully tested with Seafile.
Pull the Docker image:
sudo docker pull elasticsearch:8.15.0\n Create a folder for persistent data created by ElasticSearch and change its permission:
sudo mkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\n Now start the ElasticSearch container using the docker run command:
sudo docker run -d \\\n--name es \\\n-p 9200:9200 \\\n-e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" \\\n-e \"ES_JAVA_OPTS=-Xms2g -Xmx2g\" -e \"xpack.security.enabled=false\" \\\n--restart=always \\\n-v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data \\\n-d elasticsearch:8.15.0\n Security notice
We sincerely thank Mohammed Adel of Safe Decision Co., for the suggestion of this notice.
By default, Elasticsearch only listens on 127.0.0.1. However, this restriction may no longer apply once Docker publishes the service port, leaving your Elasticsearch service exposed to the external network, where attackers could access it and extract sensitive data. We recommend that you manually configure firewall rules for Docker, for example:
sudo iptables -A INPUT -p tcp -s <your seafile server ip> --dport 9200 -j ACCEPT\nsudo iptables -A INPUT -p tcp --dport 9200 -j DROP\n The above commands allow only the host running your Seafile service to connect to Elasticsearch; all other addresses are blocked. If you deploy Elasticsearch from binary packages, refer to the official documentation to set the address that Elasticsearch binds to.
"},{"location":"setup_binary/installation/#modifying-seafevents","title":"Modifying seafevents","text":"Add the following configuration to seafevents.conf:
[INDEX FILES]\nes_host = <your elasticsearch server's IP, e.g., 127.0.0.1> # IP address of ElasticSearch host\nes_port = 9200 # port of ElasticSearch host\n Finally, restart Seafile:
su seafile\n./seafile.sh restart && ./seahub.sh restart \n"},{"location":"setup_binary/outline/","title":"Deploy Seafile Pro Edition","text":"Binary-based community edition Seafile is not supported since version 13.0
Since version 13.0, binary-based deployment for community edition is no longer supported.
There are two ways to deploy Seafile Pro Edition. Since version 8.0, the recommended way to install Seafile Pro Edition is using Docker.
For example Debian 12
Create the systemd service files, change ${seafile_dir} to your Seafile installation location, and change seafile to the user that runs Seafile (if appropriate). Then reload the systemd daemons: systemctl daemon-reload.
First, create a script that activates the Python virtual environment. It goes in the ${seafile_dir} directory, i.e., not in \"seafile-server-latest\" but in the directory above it. Throughout this manual the examples use /opt/seafile for this directory, but you might have chosen to use a different directory.
sudo vim /opt/seafile/run_with_venv.sh\n The content of the file is:
#!/bin/bash\n# Activate the python virtual environment (venv) before starting one of the seafile scripts\n\ndir_name=\"$(dirname $0)\"\nsource \"${dir_name}/python-venv/bin/activate\"\nscript=\"$1\"\nshift 1\n\necho \"${dir_name}/seafile-server-latest/${script}\" \"$@\"\n\"${dir_name}/seafile-server-latest/${script}\" \"$@\"\n Make this script executable: sudo chmod 755 /opt/seafile/run_with_venv.sh\n"},{"location":"setup_binary/start_seafile_at_system_bootup/#seafile-component","title":"Seafile component","text":"sudo vim /etc/systemd/system/seafile.service\n The content of the file is:
[Unit]\nDescription=Seafile\n# add mysql.service or postgresql.service depending on your database to the line below\nAfter=network.target\n\n[Service]\nType=forking\nExecStart=bash ${seafile_dir}/run_with_venv.sh seafile.sh start\nExecStop=bash ${seafile_dir}/seafile-server-latest/seafile.sh stop\nLimitNOFILE=infinity\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n"},{"location":"setup_binary/start_seafile_at_system_bootup/#seahub-component","title":"Seahub component","text":"sudo vim /etc/systemd/system/seahub.service\n The content of the file is:
[Unit]\nDescription=Seafile hub\nAfter=network.target seafile.service\n\n[Service]\nType=forking\nExecStart=bash ${seafile_dir}/run_with_venv.sh seahub.sh start\nExecStop=bash ${seafile_dir}/seafile-server-latest/seahub.sh stop\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n"},{"location":"setup_binary/start_seafile_at_system_bootup/#for-systems-running-systemd-without-python-virtual-environment","title":"For systems running systemd without python virtual environment","text":"For example Debian 8 through Debian 11, Linux Ubuntu 15.04 and newer
Create systemd service files, change ${seafile_dir} to your seafile installation location and seafile to user, who runs seafile (if appropriate). Then you need to reload systemd's daemons: systemctl daemon-reload.
"},{"location":"setup_binary/start_seafile_at_system_bootup/#seafile-component_1","title":"Seafile component","text":"sudo vim /etc/systemd/system/seafile.service\n The content of the file is:
[Unit]\nDescription=Seafile\n# add mysql.service or postgresql.service depending on your database to the line below\nAfter=network.target\n\n[Service]\nType=forking\nExecStart=${seafile_dir}/seafile-server-latest/seafile.sh start\nExecStop=${seafile_dir}/seafile-server-latest/seafile.sh stop\nLimitNOFILE=infinity\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n"},{"location":"setup_binary/start_seafile_at_system_bootup/#seahub-component_1","title":"Seahub component","text":"Create systemd service file /etc/systemd/system/seahub.service
sudo vim /etc/systemd/system/seahub.service\n The content of the file is:
[Unit]\nDescription=Seafile hub\nAfter=network.target seafile.service\n\n[Service]\nType=forking\nExecStart=${seafile_dir}/seafile-server-latest/seahub.sh start\nExecStop=${seafile_dir}/seafile-server-latest/seahub.sh stop\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n"},{"location":"setup_binary/start_seafile_at_system_bootup/#seafile-cli-client-optional","title":"Seafile cli client (optional)","text":"Create systemd service file /etc/systemd/system/seafile-client.service
You need to create this service file only if you have the Seafile console client installed and want to run it at system boot.
sudo vim /etc/systemd/system/seafile-client.service\n The content of the file is:
[Unit]\nDescription=Seafile client\n# Uncomment the next line if you are running the seafile client on the same computer as the server\n# After=seafile.service\n# Otherwise, uncomment the next one\n# After=network.target\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/seaf-cli start\nExecStop=/usr/bin/seaf-cli stop\nRemainAfterExit=yes\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n"},{"location":"setup_binary/start_seafile_at_system_bootup/#enable-service-start-on-system-boot","title":"Enable service start on system boot","text":"sudo systemctl enable seafile.service\nsudo systemctl enable seahub.service\nsudo systemctl enable seafile-client.service # optional\n"},{"location":"setup_binary/using_logrotate/","title":"Set up logrotate for server","text":""},{"location":"setup_binary/using_logrotate/#how-it-works","title":"How it works","text":"seaf-server supports reopening its logfiles upon receiving a SIGUSR1 signal.
This feature is very useful when you need to rotate logfiles without shutting down the server: you simply rotate the logfile on the fly.
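The reason a plain rename is not enough — and why the signal is needed — comes down to file-descriptor semantics, which you can observe with any shell (a self-contained sketch in a temporary directory, not Seafile-specific):

```shell
tmp=$(mktemp -d)
exec 3>>"$tmp/seafile.log"                   # simulate the server holding an open log fd
echo "before rotation" >&3
mv "$tmp/seafile.log" "$tmp/seafile.log.1"   # logrotate renames the file...
echo "written after rename" >&3              # ...but the old fd still follows the inode
exec 3>>"$tmp/seafile.log"                   # what SIGUSR1 triggers: reopen the path
echo "after reopen" >&3
cat "$tmp/seafile.log"                       # contains only "after reopen"
exec 3>&-
rm -rf "$tmp"
```

Without the reopen step, the server would keep writing into the renamed file forever; the signal tells it to open the original path again, which now points at a fresh file.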
"},{"location":"setup_binary/using_logrotate/#default-logrotate-configuration-directory","title":"Default logrotate configuration directory","text":"For Debian, the default directory for logrotate should be /etc/logrotate.d/
Assuming your seaf-server's logfile is set to /opt/seafile/logs/seafile.log and your seaf-server's pidfile is set to /opt/seafile/pids/seaf-server.pid:
The configuration for logrotate could be like this:
/opt/seafile/logs/seafile.log\n/opt/seafile/logs/seahub.log\n/opt/seafile/logs/seafdav.log\n/opt/seafile/logs/fileserver-access.log\n/opt/seafile/logs/fileserver-error.log\n/opt/seafile/logs/fileserver.log\n/opt/seafile/logs/file_updates_sender.log\n/opt/seafile/logs/repo_old_file_auto_del_scan.log\n/opt/seafile/logs/seahub_email_sender.log\n/opt/seafile/logs/index.log\n{\n daily\n missingok\n rotate 7\n # compress\n # delaycompress\n dateext\n dateformat .%Y-%m-%d\n notifempty\n # create 644 root root\n sharedscripts\n postrotate\n if [ -f /opt/seafile/pids/seaf-server.pid ]; then\n kill -USR1 `cat /opt/seafile/pids/seaf-server.pid`\n fi\n\n if [ -f /opt/seafile/pids/fileserver.pid ]; then\n kill -USR1 `cat /opt/seafile/pids/fileserver.pid`\n fi\n\n if [ -f /opt/seafile/pids/seahub.pid ]; then\n kill -HUP `cat /opt/seafile/pids/seahub.pid`\n fi\n\n if [ -f /opt/seafile/pids/seafdav.pid ]; then\n kill -HUP `cat /opt/seafile/pids/seafdav.pid`\n fi\n\n find /opt/seafile/logs/ -mtime +7 -name \"*.log*\" -exec rm -f {} \\;\n endscript\n}\n You can save this file, in Debian for example, at /etc/logrotate.d/seafile.
The Seafile configuration files are located in the /opt/seafile-data/seafile/conf/ directory.
You should remove the [DATABASE] configuration block.
You should remove the [database] and [memcached] configuration block.
You should remove the SERVICE_URL, DATABASES = {...}, CACHES = {...}, COMPRESS_CACHE_BACKEND and FILE_SERVER_ROOT configuration block.
The following configurations are removed or renamed to new ones.
SEAFILE_MEMCACHED_IMAGE=docker.seafile.top/seafileltd/memcached:1.6.29\n\nINIT_S3_STORAGE_BACKEND_CONFIG=false\nINIT_S3_COMMIT_BUCKET=<your-commit-objects>\nINIT_S3_FS_BUCKET=<your-fs-objects>\nINIT_S3_BLOCK_BUCKET=<your-block-objects>\nINIT_S3_KEY_ID=<your-key-id>\nINIT_S3_SECRET_KEY=<your-secret-key>\nINIT_S3_USE_V4_SIGNATURE=true\nINIT_S3_AWS_REGION=us-east-1\nINIT_S3_HOST=\nINIT_S3_USE_HTTPS=true\n\nNOTIFICATION_SERVER_VOLUME=/opt/notification-data\n\nSS_S3_USE_V4_SIGNATURE=false\nSS_S3_ACCESS_ID=<your access id>\nSS_S3_ACCESS_SECRET=<your access secret>\nSS_S3_ENDPOINT=\nSS_S3_BUCKET=<your bucket name>\nSS_S3_USE_HTTPS=true\nSS_S3_PATH_STYLE_REQUEST=true\nSS_S3_AWS_REGION=us-east-1\nSS_S3_SSE_C_KEY=<your SSE-C key>\n"},{"location":"upgrade/seafile_obsolete_configurations/#seafile-11-to-12-obsolete-configurations","title":"Seafile 11 to 12 Obsolete Configurations","text":""},{"location":"upgrade/seafile_obsolete_configurations/#ccnetconf","title":"ccnet.conf","text":"You should remove the entire ccnet.conf configuration file.
You should remove the [notification] configuration block.
There are three types of upgrade, i.e., major version upgrade, minor version upgrade and maintenance version upgrade. This page contains general instructions for the three types of upgrade.
Please check the upgrade notes for any special configuration or changes before/while upgrading.
Suppose you are using version 5.1.0 and would like to upgrade to version 6.1.0. First download and extract the new version. You should have a directory layout similar to this:
seafile\n -- seafile-server-5.1.0\n -- seafile-server-6.1.0\n -- ccnet\n -- seafile-data\n Now upgrade to version 6.1.0.
Shut down the Seafile server if it's running
cd seafile/seafile-server-latest\n./seahub.sh stop\n./seafile.sh stop\n# or via service\n/etc/init.d/seafile-server stop\n Check the upgrade scripts in seafile-server-6.1.0 directory.
cd seafile/seafile-server-6.1.0\nls upgrade/upgrade_*\n You will get a list of upgrade files:
...\nupgrade_5.0_5.1.sh\nupgrade_5.1_6.0.sh\nupgrade_6.0_6.1.sh\n Starting from your current version, run the scripts one by one:
upgrade/upgrade_5.1_6.0.sh\nupgrade/upgrade_6.0_6.1.sh\n Start Seafile server
cd seafile/seafile-server-latest/\n./seafile.sh start\n./seahub.sh start # or \"./seahub.sh start-fastcgi\" if you're using fastcgi\n# or via service\n/etc/init.d/seafile-server start\n If the new version works fine, the old version can be removed
rm -rf seafile-server-5.1.0/\n"},{"location":"upgrade/upgrade/#minor-version-upgrade-eg-from-61x-to-62y","title":"Minor version upgrade (e.g. from 6.1.x to 6.2.y)","text":"Suppose you are using version 6.1.0 and like to upgrade to version 6.2.0. First download and extract the new version. You should have a directory layout similar to this:
seafile\n -- seafile-server-6.1.0\n -- seafile-server-6.2.0\n -- ccnet\n -- seafile-data\n Now upgrade to version 6.2.0.
cd seafile/seafile-server-latest\n./seahub.sh stop\n./seafile.sh stop\n# or via service\n/etc/init.d/seafile-server stop\n Check the upgrade scripts in seafile-server-6.2.0 directory.
cd seafile/seafile-server-6.2.0\nls upgrade/upgrade_*\n You will get a list of upgrade files:
...\nupgrade/upgrade_5.1_6.0.sh\nupgrade/upgrade_6.0_6.1.sh\nupgrade/upgrade_6.1_6.2.sh\n Starting from your current version, run the scripts one by one:
upgrade/upgrade_6.1_6.2.sh\n Start Seafile server
./seafile.sh start\n./seahub.sh start\n# or via service\n/etc/init.d/seafile-server start\n If the new version works, the old version can be removed
rm -rf seafile-server-6.1.0/\n"},{"location":"upgrade/upgrade/#maintenance-version-upgrade-eg-from-622-to-623","title":"Maintenance version upgrade (e.g. from 6.2.2 to 6.2.3)","text":"A maintenance upgrade is for example an upgrade from 6.2.2 to 6.2.3.
For this type of upgrade, you only need to update the symbolic links (for avatars and a few other folders). A script to perform this upgrade is provided with Seafile server (for historical reasons, the script is called minor-upgrade.sh):
cd seafile-server-6.2.3/upgrade/ && ./minor-upgrade.sh\n Start Seafile
If the new version works, the old version can be removed
rm -rf seafile-server-6.2.2/\n Seafile adds new features in major and minor versions. It is likely that some database tables need to be modified or that the search index needs to be updated. In general, upgrading a cluster involves the following steps:
In general, to upgrade a cluster, you need:
/opt/seafile/seafile-server-latest/upgrade/upgrade_x_x_x_x.sh) in one frontend node. Before upgrading, please shut down your Seafile server
docker compose down\n"},{"location":"upgrade/upgrade_a_cluster/#step-2-download-the-newest-seafile-serveryml-file","title":"Step 2) Download the newest seafile-server.yml file","text":"Before downloading the newest seafile-server.yml, please backup your original one:
mv seafile-server.yml seafile-server.yml.bak\n Then download the new seafile-server.yml according to the following commands:
wget https://manual.seafile.com/13.0/repo/docker/cluster/seafile-server.yml\n"},{"location":"upgrade/upgrade_a_cluster/#step-3-modify-env-update-image-version-and-some-configurations","title":"Step 3) Modify .env, update image version and some configurations","text":""},{"location":"upgrade/upgrade_a_cluster/#step-31-update-image-version-to-seafile-13","title":"Step 3.1) Update image version to Seafile 13","text":"SEAFILE_IMAGE=seafileltd/seafile-pro-mc:13.0-latest\n"},{"location":"upgrade/upgrade_a_cluster/#step-32-add-configurations-for-cache","title":"Step 3.2) Add configurations for cache","text":"From Seafile 13, the cache configuration can be set directly via environment variables (you can define them in the .env). In addition, Redis is recommended as the primary cache server, as it supports some new features (please refer to the upgrade notes; you can also find more details about Redis in Seafile Docker here).
## Cache\nCACHE_PROVIDER=redis\n\n### Redis\nREDIS_HOST=redis\nREDIS_PORT=6379\nREDIS_PASSWORD=\n ## Cache\nCACHE_PROVIDER=memcached\n\n### Memcached\nMEMCACHED_HOST=memcached\nMEMCACHED_PORT=11211\n"},{"location":"upgrade/upgrade_a_cluster/#step-33-add-configurations-for-database","title":"Step 3.3) Add configurations for database","text":"SEAFILE_MYSQL_DB_HOST=db\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=PASSWORD\nSEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db\n"},{"location":"upgrade/upgrade_a_cluster/#step-34-add-configurations-for-storage-backend","title":"Step 3.4) Add configurations for storage backend","text":"Seafile 13.0 adds a new environment variable SEAF_SERVER_STORAGE_TYPE that determines the storage backend of the seaf-server component. You can delete the variable or set it to empty (SEAF_SERVER_STORAGE_TYPE=) to use the old way, i.e., determining the storage backend from seafile.conf.
seafile.conf Set SEAF_SERVER_STORAGE_TYPE to disk (default value):
SEAF_SERVER_STORAGE_TYPE=disk\n Set SEAF_SERVER_STORAGE_TYPE to s3, and add your s3 configurations:
SEAF_SERVER_STORAGE_TYPE=s3\n\nS3_COMMIT_BUCKET=<your commit bucket name>\nS3_FS_BUCKET=<your fs bucket name>\nS3_BLOCK_BUCKET=<your block bucket name>\nS3_KEY_ID=<your-key-id>\nS3_SECRET_KEY=<your-secret-key>\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=false\nS3_AWS_REGION=us-east-1\nS3_HOST=\nS3_USE_HTTPS=true\nS3_SSE_C_KEY=\n Set SEAF_SERVER_STORAGE_TYPE to multiple. In this case, you don't need to change the storage configuration in seafile.conf.
SEAF_SERVER_STORAGE_TYPE=multiple\n If you would like to use the storage configuration in seafile.conf, please remove default value of SEAF_SERVER_STORAGE_TYPE in .env:
SEAF_SERVER_STORAGE_TYPE=\n"},{"location":"upgrade/upgrade_a_cluster/#step-4-remove-obsolete-configurations","title":"Step 4) Remove obsolete configurations","text":"Although the configurations in environment (i.e., .env) have higher priority than the configurations in config files, we recommend that you remove or modify the cache configuration in the following files to avoid ambiguity:
Backup the old configuration files:
cp /opt/seafile/shared/seafile/conf/seafile.conf /opt/seafile/shared/seafile/conf/seafile.conf.bak\ncp /opt/seafile/shared/seafile/conf/seahub_settings.py /opt/seafile/shared/seafile/conf/seahub_settings.py.bak\n Clean up redundant configuration items in the configuration files:
/opt/seafile/shared/seafile/conf/seafile.conf and remove the entire [memcached], [database], [commit_object_backend], [fs_object_backend], [notification] and [block_backend] sections if they are correctly specified in .env. /opt/seafile/shared/seafile/conf/seahub_settings.py and remove the entire blocks for DATABASES = {...} and CACHES = {...}. In most cases, seafile.conf then only includes the listen port 8082 of the Seafile file server.
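After this cleanup, a minimal seafile.conf can be as small as the following sketch (your file may of course keep any non-cache, non-storage sections you still rely on):

```ini
[fileserver]
port = 8082
```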
Note
According to this upgrade document, a frontend service will be started here. If you plan to use this node as a backend node, you need to modify this item in .env and set it to backend:
CLUSTER_MODE=backend\n docker compose up -d\n"},{"location":"upgrade/upgrade_a_cluster/#step-6-upgrade-seafile","title":"Step 6) Upgrade Seafile","text":"docker exec -it seafile bash\n# enter the container `seafile`\n\n# stop servers\ncd /opt/seafile/seafile-server-latest\n./seafile.sh stop\n./seahub.sh stop\n\n# upgrade seafile\ncd upgrade\n./upgrade_12.0_13.0.sh\n Success
After upgrading the Seafile, you can see the following messages in your console:
Updating seafile/seahub database ...\n\n[INFO] You are using MySQL\n[INFO] updating seafile database...\n[INFO] updating seahub database...\n[INFO] updating seafevents database...\nDone\n\nmigrating avatars ...\n\nDone\n\nupdating /opt/seafile/seafile-server-latest symbolic link to /opt/seafile/seafile-pro-server-13.0.11 ...\n\n\n\n-----------------------------------------------------------------\nUpgraded your seafile server successfully.\n-----------------------------------------------------------------\n Then you can exit the container by exit
docker compose down\ndocker compose up -d\n Tip
docker logs -f seafile to check whether the current node service is running normally. Download the newest seafile-server.yml file and modify .env similarly to the first node (for a backend node, you should set CLUSTER_MODE=backend)
Start the Seafile server:
docker compose up -d\n Stop the seafile service in all nodes
docker compose down\n Download the docker-compose files for Seafile 12
wget -O .env https://manual.seafile.com/13.0/repo/docker/cluster/env\nwget https://manual.seafile.com/13.0/repo/docker/cluster/seafile-server.yml\n Modify .env:
Generate a JWT key
pwgen -s 40 1\n\n# e.g., EkosWcXonPCrpPE9CFsnyQLLPqoPhSJZaqA3JMFw\n Fill in the following fields according to the configuration used in Seafile 11:
SEAFILE_SERVER_HOSTNAME=<your load balancer's host>\nSEAFILE_SERVER_PROTOCOL=https # or http\nSEAFILE_MYSQL_DB_HOST=<your mysql host>\nSEAFILE_MYSQL_DB_USER=seafile # if you don't use `seafile` as your Seafile server's account, please correct it\nSEAFILE_MYSQL_DB_PASSWORD=<your mysql password for user `seafile`>\nJWT_PRIVATE_KEY=<your JWT key generated in Sec. 3.1>\n Remove the variables used in cluster initialization
Since Seafile has been initialized in Seafile 11, the variables related to Seafile cluster initialization can be removed from .env:
Start the Seafile in a node
Note
According to this upgrade document, a frontend service will be started here. If you plan to use this node as a backend node, you need to modify this item in .env and set it to backend:
CLUSTER_MODE=backend\n docker compose up -d\n Upgrade Seafile
docker exec -it seafile bash\n# enter the container `seafile`\n\n# stop servers\ncd /opt/seafile/seafile-server-latest\n./seafile.sh stop\n./seahub.sh stop\n\n# upgrade seafile\ncd upgrade\n./upgrade_11.0_12.0.sh\n Success
After upgrading the Seafile, you can see the following messages in your console:
Updating seafile/seahub database ...\n\n[INFO] You are using MySQL\n[INFO] updating seafile database...\n[INFO] updating seahub database...\n[INFO] updating seafevents database...\nDone\n\nmigrating avatars ...\n\nDone\n\nupdating /opt/seafile/seafile-server-latest symbolic link to /opt/seafile/seafile-pro-server-12.0.6 ...\n\n\n\n-----------------------------------------------------------------\nUpgraded your seafile server successfully.\n-----------------------------------------------------------------\n Then you can exit the container by exit
Restart current node
docker compose down\n docker compose up -d\n Tip
docker logs -f seafile to check whether the current node's service is running normally. Operations for other nodes
Download and modify .env similar to the first node (for backend node, you should set CLUSTER_MODE=backend)
Start the Seafile server:
docker compose up -d\n Seafile adds new features in major and minor versions. It is likely that some database tables need to be modified or the search index needs to be updated. In general, upgrading a cluster involves the following steps:
A maintenance upgrade is simple: you only need to run the script ./upgrade/minor_upgrade.sh on each node to update the symbolic link.
Clean Database
If the Activity table in MySQL contains a large number of rows, clean this table first (see Clean Database). Otherwise, the database upgrade will take a long time.
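For example, the table can be emptied with a single SQL statement (a sketch; the database name seahub_db and the credentials depend on your installation, so take a backup first if unsure):

```sql
-- connect to the seahub database first, e.g. `mysql -u seafile -p seahub_db`
TRUNCATE TABLE Activity;
```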
Stop Seafile server
Note
For installations using a Python virtual environment, activate it if it isn't already active
source python-venv/bin/activate\n Frontend nodeBackend node cd /opt/seafile/seafile-server-latest\nsu seafile\n./seafile.sh stop\n./seahub.sh stop\n cd /opt/seafile/seafile-server-latest\nsu seafile\n./seafile.sh stop\n./seafile-background-tasks.sh stop\n Install new Python libraries
Download and uncompress the package
Run the upgrade script in a single node
seafile-pro-server-12.x.x/upgrade/upgrade_11.0_12.0.sh\n Follow here to create the .env file in the conf/ directory
Start Seafile server
Frontend nodeBackend nodecd /opt/seafile/seafile-server-latest\nsu seafile\n./seafile.sh start\n./seahub.sh start\n cd /opt/seafile/seafile-server-latest\nsu seafile\n./seafile.sh start\n./seafile-background-tasks.sh start\n (Optional) Refer here to upgrade notification server
(Optional) Refer here to upgrade SeaDoc server
For a maintenance upgrade, e.g. from version 10.0.1 to 10.0.4, just download the new image, stop the old Docker container, change the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
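The version bump can be scripted; a sketch on a scratch copy (the image tags shown are placeholders for your actual old and new versions):

```shell
# Demo on a scratch copy -- in a real upgrade you would edit your own docker-compose.yml
cd "$(mktemp -d)"
printf 'services:\n  seafile:\n    image: seafileltd/seafile-mc:10.0.1\n' > docker-compose.yml
# replace the old image tag with the new one
sed -i 's|seafileltd/seafile-mc:10.0.1|seafileltd/seafile-mc:10.0.4|' docker-compose.yml
grep 'image:' docker-compose.yml
# then: docker compose down && docker compose up -d
```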
For major version upgrade, like from 10.0 to 11.0, see instructions below.
Please check the upgrade notes for any special configuration or changes before/while upgrading.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-120-to-130","title":"Upgrade from 12.0 to 13.0","text":"From Seafile Docker 13.0, the elasticsearch.yml has separated from seafile-server.yml, and Seafile will support getting cache configuration from environment variables
Before upgrading, please shut down your Seafile server
docker compose down\n"},{"location":"upgrade/upgrade_docker/#step-2-download-the-newest-yml-files","title":"Step 2) Download the newest .yml files","text":""},{"location":"upgrade/upgrade_docker/#step-21-download-seafile-serveryml","title":"Step 2.1) Download seafile-server.yml","text":"Before downloading the newest seafile-server.yml, please backup your original one:
mv seafile-server.yml seafile-server.yml.bak\n Then download the new seafile-server.yml according to the following commands:
wget https://manual.seafile.com/13.0/repo/docker/ce/seafile-server.yml\n wget https://manual.seafile.com/13.0/repo/docker/pro/seafile-server.yml\n"},{"location":"upgrade/upgrade_docker/#step-22-download-yml-file-for-notification-server","title":"Step 2.2) Download .yml file for notification server","text":"Deployment with SeafileStandalone deployment wget https://manual.seafile.com/13.0/repo/docker/notification-server.yml\n wget https://manual.seafile.com/13.0/repo/docker/notification-server/notification-server.yml\n"},{"location":"upgrade/upgrade_docker/#step-23-download-yml-file-for-search-engine-pro-edition","title":"Step 2.3) Download .yml file for search engine (Pro edition)","text":"ElasticSearchSeaSearch From Seafile Docker 13.0 (Pro), the ElasticSearch service will be controlled by a separate resource file (i.e., elasticsearch.yml). If you are using Seafile Pro and still plan to use ElasticSearch, please download the elasticsearch.yml:
wget https://manual.seafile.com/13.0/repo/docker/pro/elasticsearch.yml\n If you are using SeaSearch as the search engine, please download the newest seasearch.yml file:
mv seasearch.yml seasearch.yml.bak\nwget https://manual.seafile.com/13.0/repo/docker/pro/seasearch.yml\n"},{"location":"upgrade/upgrade_docker/#step-24-download-yml-file-for-seadoc-optional","title":"Step 2.4) Download .yml file for SeaDoc (optional)","text":"If you use the SeaDoc extension, the seadoc.yml file needs to be updated too:
wget https://manual.seafile.com/13.0/repo/docker/seadoc.yml\n"},{"location":"upgrade/upgrade_docker/#step-3-modify-env-update-image-version-and-add-cache-configurations","title":"Step 3) Modify .env, update image version and add cache configurations","text":""},{"location":"upgrade/upgrade_docker/#step-31-update-image-version-to-seafile-13","title":"Step 3.1) Update image version to Seafile 13","text":"Seafile CESeafile Pro SEAFILE_IMAGE=seafileltd/seafile-mc:13.0-latest\nSEADOC_IMAGE=seafileltd/sdoc-server:2.0-latest\nNOTIFICATION_SERVER_IMAGE=seafileltd/notification-server:13.0-latest\n # -- add `elasticsearch.yml` if you are still using ElasticSearch\n# COMPOSE_FILE='...,elasticsearch.yml'\n\n# -- if you are using SeaSearch, please also update the SeaSearch image\n# SEASEARCH_IMAGE=seafileltd/seasearch:1.0-latest # or seafileltd/seasearch-nomkl:1.0-latest for Apple chips\n\nSEAFILE_IMAGE=seafileltd/seafile-pro-mc:13.0-latest\nSEADOC_IMAGE=seafileltd/sdoc-server:2.0-latest\nNOTIFICATION_SERVER_IMAGE=seafileltd/notification-server:13.0-latest\n"},{"location":"upgrade/upgrade_docker/#step-32-add-configurations-for-cache","title":"Step 3.2) Add configurations for cache","text":"From Seafile 13, the database and cache configuration can be set directly via environment variables (you can define them in .env). In addition, Redis is now recommended as the primary cache server to support some new features (please refer to the upgrade notes; you can also find more details about Redis in Seafile Docker here).
## Cache\nCACHE_PROVIDER=redis\n\n### Redis\nREDIS_HOST=redis\nREDIS_PORT=6379\nREDIS_PASSWORD=\n ## Cache\nCACHE_PROVIDER=memcached\n\n### Memcached\nMEMCACHED_HOST=memcached\nMEMCACHED_PORT=11211\n"},{"location":"upgrade/upgrade_docker/#step-33-add-configuration-for-notification-server","title":"Step 3.3) Add configuration for notification server","text":"If you are using notification server in Seafile 12, please specify the notification server url in .env:
ENABLE_NOTIFICATION_SERVER=true\n ENABLE_NOTIFICATION_SERVER=true\nNOTIFICATION_SERVER_URL=http://<your notification server host>:8083\nINNER_NOTIFICATION_SERVER_URL=$NOTIFICATION_SERVER_URL\n"},{"location":"upgrade/upgrade_docker/#step-34-add-configurations-for-storage-backend-pro","title":"Step 3.4) Add configurations for storage backend (Pro)","text":"Seafile 13.0 adds a new environment variable SEAF_SERVER_STORAGE_TYPE to determine the storage backend of the seaf-server component. You can delete the variable or set it to empty (SEAF_SERVER_STORAGE_TYPE=) to use the old way, i.e., determining the storage backend from seafile.conf.
seafile.conf Set SEAF_SERVER_STORAGE_TYPE to disk (default value):
SEAF_SERVER_STORAGE_TYPE=disk\n Set SEAF_SERVER_STORAGE_TYPE to s3, and add your s3 configurations:
SEAF_SERVER_STORAGE_TYPE=s3\n\nS3_COMMIT_BUCKET=<your commit bucket name>\nS3_FS_BUCKET=<your fs bucket name>\nS3_BLOCK_BUCKET=<your block bucket name>\nS3_SS_BUCKET=<your seasearch bucket name> # for seasearch\nS3_MD_BUCKET=<your metadata bucket name> # for metadata-server\nS3_KEY_ID=<your-key-id>\nS3_SECRET_KEY=<your-secret-key>\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=false\nS3_AWS_REGION=us-east-1\nS3_HOST=\nS3_USE_HTTPS=true\nS3_SSE_C_KEY=\n Set SEAF_SERVER_STORAGE_TYPE to multiple. In this case, you don't need to change the storage configuration in seafile.conf.
SEAF_SERVER_STORAGE_TYPE=multiple\n If you would like to use the storage configuration in seafile.conf, please remove default value of SEAF_SERVER_STORAGE_TYPE in .env:
SEAF_SERVER_STORAGE_TYPE=\n"},{"location":"upgrade/upgrade_docker/#step-4-remove-obsolete-configurations","title":"Step 4) Remove obsolete configurations","text":"Although the configuration in the environment (i.e., .env) has higher priority than the configuration in the config files, we recommend that you remove or modify the cache configuration in the following files to avoid ambiguity:
Backup the old configuration files:
# please replace /opt/seafile-data to your $SEAFILE_VOLUME\n\ncp /opt/seafile-data/seafile/conf/seafile.conf /opt/seafile-data/seafile/conf/seafile.conf.bak\ncp /opt/seafile-data/seafile/conf/seahub_settings.py /opt/seafile-data/seafile/conf/seahub_settings.py.bak\n Clean up redundant configuration items in the configuration files:
Edit /opt/seafile-data/seafile/conf/seafile.conf and remove the entire [memcached], [database], [commit_object_backend], [fs_object_backend], [notification] and [block_backend] sections if they have been correctly specified in .env. Edit /opt/seafile-data/seafile/conf/seahub_settings.py and remove the entire DATABASES = {...} and CACHES = {...} blocks. In most cases, seafile.conf then only includes the listen port 8082 of the Seafile file server.
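After the cleanup, a minimal seafile.conf might look like this (a sketch; your file may keep other sections that are unrelated to cache, database, or storage):

```ini
[fileserver]
port = 8082
```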
docker compose up -d\n"},{"location":"upgrade/upgrade_docker/#upgrade-from-110-to-120","title":"Upgrade from 11.0 to 12.0","text":"Note: If the Activity table in MySQL contains a large number of rows, clean this table first (see Clean Database). Otherwise, the database upgrade will take a long time.
From Seafile Docker 12.0, we recommend that you use .env and seafile-server.yml files for configuration.
mv docker-compose.yml docker-compose.yml.bak\n"},{"location":"upgrade/upgrade_docker/#download-seafile-120-docker-files","title":"Download Seafile 12.0 Docker files","text":"Download .env, seafile-server.yml and caddy.yml, and modify .env file according to the old configuration in docker-compose.yml.bak
wget -O .env https://manual.seafile.com/12.0/repo/docker/ce/env\nwget https://manual.seafile.com/12.0/repo/docker/ce/seafile-server.yml\nwget https://manual.seafile.com/12.0/repo/docker/caddy.yml\n The following fields merit particular attention: Variable Description Default Value SEAFILE_VOLUME The volume directory of Seafile data /opt/seafile-data SEAFILE_MYSQL_VOLUME The volume directory of MySQL data /opt/seafile-mysql/db SEAFILE_CADDY_VOLUME The volume directory of Caddy data used to store certificates obtained from Let's Encrypt's /opt/seafile-caddy SEAFILE_MYSQL_DB_USER The user of MySQL (database - user can be found in conf/seafile.conf) seafile SEAFILE_MYSQL_DB_PASSWORD The user seafile password of MySQL (required) SEAFILE_MYSQL_DB_CCNET_DB_NAME The database name of ccnet ccnet_db SEAFILE_MYSQL_DB_SEAFILE_DB_NAME The database name of seafile seafile_db SEAFILE_MYSQL_DB_SEAHUB_DB_NAME The database name of seahub seahub_db JWT_PRIVATE_KEY JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters is required for Seafile, which can be generated by using pwgen -s 40 1 (required) SEAFILE_SERVER_HOSTNAME Seafile server hostname or domain (required) SEAFILE_SERVER_PROTOCOL Seafile server protocol (http or https) http TIME_ZONE Time zone UTC wget -O .env https://manual.seafile.com/12.0/repo/docker/pro/env\nwget https://manual.seafile.com/12.0/repo/docker/pro/seafile-server.yml\nwget https://manual.seafile.com/12.0/repo/docker/caddy.yml\n The following fields merit particular attention: Variable Description Default Value SEAFILE_VOLUME The volume directory of Seafile data /opt/seafile-data SEAFILE_MYSQL_VOLUME The volume directory of MySQL data /opt/seafile-mysql/db SEAFILE_CADDY_VOLUME The volume directory of Caddy data used to store certificates obtained from Let's Encrypt's /opt/seafile-caddy SEAFILE_ELASTICSEARCH_VOLUME (Only valid for Seafile PE) The volume directory of Elasticsearch data /opt/seafile-elasticsearch/data SEAFILE_MYSQL_DB_USER 
The user of MySQL (database - user can be found in conf/seafile.conf) seafile SEAFILE_MYSQL_DB_PASSWORD The user seafile password of MySQL (required) JWT_PRIVATE_KEY JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters is required for Seafile, which can be generated by using pwgen -s 40 1 (required) SEAFILE_SERVER_HOSTNAME Seafile server hostname or domain (required) SEAFILE_SERVER_PROTOCOL Seafile server protocol (http or https) http TIME_ZONE Time zone UTC Note
seafile.conf). Since the databases are already initialized, the initialization variables (INIT_SEAFILE_MYSQL_ROOT_PASSWORD, INIT_SEAFILE_ADMIN_EMAIL, INIT_SEAFILE_ADMIN_PASSWORD) can be removed from the .env file. SSL is now handled by the Caddy server. If you have used SSL before, you will also need to modify the seafile.nginx.conf: change server listen 443 to 80.
Backup the original seafile.nginx.conf file:
cp seafile.nginx.conf seafile.nginx.conf.bak\n Remove the server listen 80 section:
#server {\n# listen 80;\n# server_name _ default_server;\n\n # allow certbot to connect to challenge location via HTTP Port 80\n # otherwise renewal request will fail\n# location /.well-known/acme-challenge/ {\n# alias /var/www/challenges/;\n# try_files $uri =404;\n# }\n\n# location / {\n# rewrite ^ https://seafile.example.com$request_uri? permanent;\n# }\n#}\n Change server listen 443 to 80:
server {\n#listen 443 ssl;\nlisten 80;\n\n# ssl_certificate /shared/ssl/pkg.seafile.top.crt;\n# ssl_certificate_key /shared/ssl/pkg.seafile.top.key;\n\n# ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;\n\n ...\n Start with docker compose up.
"},{"location":"upgrade/upgrade_docker/#upgrade-notification-server","title":"Upgrade notification server","text":"If you has deployed the notification server. The Notification Server is now moved to its own Docker image. You need to redeploy it according to Notification Server document
"},{"location":"upgrade/upgrade_docker/#upgrade-seadoc-from-08-to-10-for-seafile-v120","title":"Upgrade SeaDoc from 0.8 to 1.0 for Seafile v12.0","text":"If you have deployed SeaDoc v0.8 with Seafile v11.0, you can upgrade it to 1.0 use the following steps:
From version 1.0, SeaDoc uses the seahub_db database to store its operation logs and no longer needs the extra database sdoc_db. The database tables in seahub_db are created automatically when you upgrade Seafile server from v11.0 to v12.0. You can simply delete sdoc_db.
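For example (a sketch; take a database backup first if unsure, and adjust the credentials to your installation):

```sql
-- run as a MySQL user with DROP privileges, e.g. `mysql -u root -p`
DROP DATABASE sdoc_db;
```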
"},{"location":"upgrade/upgrade_docker/#remove-seadoc-configs-in-seafilenginxconf-file","title":"Remove SeaDoc configs in seafile.nginx.conf file","text":"If you have deployed SeaDoc older version, you should remove /sdoc-server/, /socket.io configs in seafile.nginx.conf file.
# location /sdoc-server/ {\n# add_header Access-Control-Allow-Origin *;\n# add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;\n# add_header Access-Control-Allow-Headers \"deviceType,token, authorization, content-type\";\n# if ($request_method = 'OPTIONS') {\n# add_header Access-Control-Allow-Origin *;\n# add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;\n# add_header Access-Control-Allow-Headers \"deviceType,token, authorization, content-type\";\n# return 204;\n# }\n# proxy_pass http://sdoc-server:7070/;\n# proxy_redirect off;\n# proxy_set_header Host $host;\n# proxy_set_header X-Real-IP $remote_addr;\n# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n# proxy_set_header X-Forwarded-Host $server_name;\n# proxy_set_header X-Forwarded-Proto $scheme;\n# client_max_body_size 100m;\n# }\n# location /socket.io {\n# proxy_pass http://sdoc-server:7070;\n# proxy_http_version 1.1;\n# proxy_set_header Upgrade $http_upgrade;\n# proxy_set_header Connection 'upgrade';\n# proxy_redirect off;\n# proxy_buffers 8 32k;\n# proxy_buffer_size 64k;\n# proxy_set_header X-Real-IP $remote_addr;\n# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n# proxy_set_header Host $http_host;\n# proxy_set_header X-NginX-Proxy true;\n# }\n"},{"location":"upgrade/upgrade_docker/#deploy-a-new-seadoc-server","title":"Deploy a new SeaDoc server","text":"Please see the document Setup SeaDoc to install SeaDoc with Seafile.
"},{"location":"upgrade/upgrade_docker/#other-configuration-changes","title":"Other configuration changes","text":""},{"location":"upgrade/upgrade_docker/#enable-passing-of-remote_user","title":"Enable passing of REMOTE_USER","text":"REMOTE_USER header is not passed to Seafile by default, you need to change gunicorn.conf.py if you need REMOTE_USER header for SSO.
forwarder_headers = 'SCRIPT_NAME,PATH_INFO,REMOTE_USER'\n"},{"location":"upgrade/upgrade_docker/#supplement-or-remove-allowed_hosts-in-seahub_settingspy","title":"Supplement or remove ALLOWED_HOSTS in seahub_settings.py","text":"Since version 12.0, the seaf-server component needs to send internal requests to the seahub component to check permissions; a 400 error is reported when downloading files if ALLOWED_HOSTS is set incorrectly. In this case, you can either remove ALLOWED_HOSTS from seahub_settings.py or add 127.0.0.1 to the ALLOWED_HOSTS list:
# seahub_settings.py\n\nALLOWED_HOSTS = ['...(your domain)', '127.0.0.1']\n"},{"location":"upgrade/upgrade_docker/#upgrade-from-100-to-110","title":"Upgrade from 10.0 to 11.0","text":"Download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version. Taking the community edition as an example, you have to modify
...\nservice:\n ...\n seafile:\n image: seafileltd/seafile-mc:10.0-latest\n ...\n ...\n to
service:\n ...\n seafile:\n image: seafileltd/seafile-mc:11.0-latest\n ...\n ...\n It is also recommended that you upgrade mariadb and memcached to newer versions as in the v11.0 docker-compose.yml file. Specifically, in version 11.0, we use the following versions:
In addition, you have to migrate the configuration for LDAP and OAuth as described here
Start with docker compose up.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-90-to-100","title":"Upgrade from 9.0 to 10.0","text":"Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
If you are using pro edition with ElasticSearch, SAML SSO and storage backend features, follow the upgrading manual on how to update the configuration for these features.
If you want to use the new notification server and rate control (pro edition only), please refer to the upgrading manual.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-80-to-90","title":"Upgrade from 8.0 to 9.0","text":"Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
"},{"location":"upgrade/upgrade_docker/#lets-encrypt-ssl-certificate","title":"Let's encrypt SSL certificate","text":"Since version 9.0.6, we use Acme V3 (not acme-tiny) to get certificate.
If there is a certificate generated by an old version, you need to back up and move the old certificate directory and the seafile.nginx.conf before starting.
mv /opt/seafile/shared/ssl /opt/seafile/shared/ssl-bak\n\nmv /opt/seafile/shared/nginx/conf/seafile.nginx.conf /opt/seafile/shared/nginx/conf/seafile.nginx.conf.bak\n Starting the new container will automatically apply a certificate.
docker compose down\ndocker compose up -d\n Please wait a moment for the certificate to be applied, then you can modify the new seafile.nginx.conf as you want. Execute the following command to make the nginx configuration take effect.
docker exec seafile nginx -s reload\n A cron job inside the container will automatically renew the certificate.
"},{"location":"upgrade/upgrade_notes_for_10.0.x/","title":"Upgrade notes for 10.0","text":"These notes give additional information about changes. Please always follow the main upgrade guide.
For the Docker-based version, please check upgrade Seafile Docker image
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#important-release-changes","title":"Important release changes","text":""},{"location":"upgrade/upgrade_notes_for_10.0.x/#enable-notification-server","title":"Enable notification server","text":"The notification server enables desktop syncing and drive clients to get notification of library changes immediately using websocket. There are two benefits:
The notification server works with Seafile syncing client 9.0+ and drive client 3.0+.
Please follow the document to enable notification server
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#memcached-section-in-the-seafileconf-pro-edition-only","title":"Memcached section in the seafile.conf (pro edition only)","text":"If you use storage backend or cluster, make sure the memcached section is in the seafile.conf.
Since version 10.0, all memcached options are consolidated into the single option below.
Modify the seafile.conf:
[memcached]\nmemcached_options = --SERVER=<the IP of Memcached Server> --POOL-MIN=10 --POOL-MAX=100\n"},{"location":"upgrade/upgrade_notes_for_10.0.x/#saml-sso-change-pro-edition-only","title":"SAML SSO change (pro edition only)","text":"The configuration for SAML SSO in Seafile is greatly simplified. Now only three options are needed:
ENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n ...\n}\n Please check the new document on SAML SSO
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#rate-control-in-role-settings-pro-edition-only","title":"Rate control in role settings (pro edition only)","text":"Starting from version 10.0, Seafile allows administrators to configure upload and download speed limits for users with different roles through the following two steps:
seahub_settings.py.ENABLED_ROLE_PERMISSIONS = {\n 'default': {\n ...\n 'upload_rate_limit': 2000, # unit: kb/s\n 'download_rate_limit': 4000,\n ...\n },\n 'guest': {\n ...\n 'upload_rate_limit': 100,\n 'download_rate_limit': 200,\n ...\n },\n}\n seafile-server-latest directory to make the configuration take effect../seahub.sh python-env python3 seahub/manage.py set_user_role_upload_download_rate_limit\n"},{"location":"upgrade/upgrade_notes_for_10.0.x/#elasticsearch-change-pro-edition-only","title":"ElasticSearch change (pro edition only)","text":"Elasticsearch is upgraded to version 8.x, which fixes and improves some issues of the file search function.
Since Elasticsearch 7.x, the default number of shards has changed from 5 to 1, because too many index shards will over-occupy system resources; but when a single shard's data is too large, it will also reduce search performance. Starting from version 10.0, Seafile supports customizing the number of shards in the configuration file.
You can use the following command to query the current size of each shard to determine the best number of shards for you:
curl 'http{s}://<es IP>:9200/_cat/shards/repofiles?v'\n The official recommendation is that the size of each shard should be between 10G-50G: https://www.elastic.co/guide/en/elasticsearch/reference/8.6/size-your-shards.html#shard-size-recommendation.
Modify the seafevents.conf:
[INDEX FILES]\n...\nshards = 10 # default is 5\n...\n"},{"location":"upgrade/upgrade_notes_for_10.0.x/#new-python-libraries","title":"New Python libraries","text":"Note: you should install the Python libraries system-wide, as the root user or via sudo.
For Ubuntu 20.04/22.04
sudo pip3 install future==0.18.* mysqlclient==2.1.* pillow==10.2.* captcha==0.5.* django_simple_captcha==0.5.20 djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1\n For Debian 11
sudo pip3 install future==0.18.* mysqlclient==2.1.* pillow==9.3.* captcha==0.4 django_simple_captcha==0.5.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1\n"},{"location":"upgrade/upgrade_notes_for_10.0.x/#upgrade-to-100x","title":"Upgrade to 10.0.x","text":"Stop Seafile-9.0.x server.
Start from Seafile 10.0.x, run the script:
upgrade/upgrade_9.0_10.0.sh\n If you are using the Pro edition, modify the memcached option in seafile.conf and the SAML SSO configuration if needed.
You can choose one of the methods to upgrade your index data.
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#method-one-reindex-the-old-index-data","title":"Method one, reindex the old index data","text":"1. Download Elasticsearch image:
docker pull elasticsearch:7.17.9\n Create a new folder to store ES data and give the folder permissions:
mkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\n Start ES docker image:
sudo docker run -d --name es-7.17 -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data -d elasticsearch:7.17.9\n PS: ES_JAVA_OPTS can be adjusted according to your need.
2. Create an index with 8.x compatible mappings:
# create repo_head index\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repo_head_8?pretty=true' -d '\n{\n \"mappings\" : {\n \"properties\" : {\n \"commit\" : {\n \"type\" : \"keyword\",\n \"index\" : false\n },\n \"repo\" : {\n \"type\" : \"keyword\",\n \"index\" : false\n },\n \"updatingto\" : {\n \"type\" : \"keyword\",\n \"index\" : false\n }\n }\n }\n}'\n\n# create repofiles index, number_of_shards is the number of shards, here is set to 5, you can also modify it to the most suitable number of shards\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repofiles_8/?pretty=true' -d '\n{\n \"settings\" : {\n \"index\" : {\n \"number_of_shards\" : \"5\",\n \"analysis\" : {\n \"analyzer\" : {\n \"seafile_file_name_ngram_analyzer\" : {\n \"filter\" : [\n \"lowercase\"\n ],\n \"type\" : \"custom\",\n \"tokenizer\" : \"seafile_file_name_ngram_tokenizer\"\n }\n },\n \"tokenizer\" : {\n \"seafile_file_name_ngram_tokenizer\" : {\n \"type\" : \"ngram\",\n \"min_gram\" : \"3\",\n \"max_gram\" : \"4\"\n }\n }\n }\n }\n },\n \"mappings\" : {\n \"properties\" : {\n \"content\" : {\n \"type\" : \"text\",\n \"term_vector\" : \"with_positions_offsets\"\n },\n \"filename\" : {\n \"type\" : \"text\",\n \"fields\" : {\n \"ngram\" : {\n \"type\" : \"text\",\n \"analyzer\" : \"seafile_file_name_ngram_analyzer\"\n }\n }\n },\n \"is_dir\" : {\n \"type\" : \"boolean\"\n },\n \"mtime\" : {\n \"type\" : \"date\"\n },\n \"path\" : {\n \"type\" : \"keyword\"\n },\n \"repo\" : {\n \"type\" : \"keyword\"\n },\n \"size\" : {\n \"type\" : \"long\"\n },\n \"suffix\" : {\n \"type\" : \"keyword\"\n }\n }\n }\n}'\n 3. Set the refresh_interval to -1 and the number_of_replicas to 0 for efficient reindex:
curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repo_head_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repofiles_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\n 4. Use the reindex API to copy documents from the 7.x index into the new index:
curl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?wait_for_completion=false&pretty=true' -d '\n{\n \"source\": {\n \"index\": \"repo_head\"\n },\n \"dest\": {\n \"index\": \"repo_head_8\"\n }\n}'\n\ncurl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?wait_for_completion=false&pretty=true' -d '\n{\n \"source\": {\n \"index\": \"repofiles\"\n },\n \"dest\": {\n \"index\": \"repofiles_8\"\n }\n}'\n 5. Use the following command to check if the reindex task is complete:
# Get the task_id of the reindex task:\n$ curl 'http{s}://{es server IP}:9200/_tasks?actions=*reindex&pretty'\n# Check to see if the reindex task is complete:\n$ curl 'http{s}://{es server IP}:9200/_tasks/:<task_id>?pretty'\n 6. Reset the refresh_interval and number_of_replicas to the values used in the old index:
curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repo_head_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repofiles_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\n 7. Wait for the elasticsearch status to change to green (or yellow if it is a single node).
curl 'http{s}://{es server IP}:9200/_cluster/health?pretty'\n 8. Use the aliases API to delete the old index and add an alias with the old index name to the new index:
curl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_aliases?pretty' -d '\n{\n \"actions\": [\n {\"remove_index\": {\"index\": \"repo_head\"}},\n {\"remove_index\": {\"index\": \"repofiles\"}},\n {\"add\": {\"index\": \"repo_head_8\", \"alias\": \"repo_head\"}},\n {\"add\": {\"index\": \"repofiles_8\", \"alias\": \"repofiles\"}}\n ]\n}'\n 9. Deactivate the 7.17 container, pull the 8.x image and run:
$ docker stop es-7.17\n\n$ docker rm es-7.17\n\n$ docker pull elasticsearch:8.6.2\n\n$ sudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data -d elasticsearch:8.6.2\n"},{"location":"upgrade/upgrade_notes_for_10.0.x/#method-two-rebuild-the-index-and-discard-the-old-index-data","title":"Method two, rebuild the index and discard the old index data","text":"1. Pull Elasticsearch image:
docker pull elasticsearch:8.5.3\n Create a new folder to store ES data and give the folder permissions:
mkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\n Start ES docker image:
sudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data -d elasticsearch:8.5.3\n 2. Modify the seafevents.conf:
[INDEX FILES]\n...\nexternal_es_server = true\nes_host = http{s}://{es server IP}\nes_port = 9200\nshards = 10 # default is 5.\n...\n Restart Seafile server:
su seafile\ncd seafile-server-latest/\n./seafile.sh stop && ./seahub.sh stop\n./seafile.sh start && ./seahub.sh start\n 3. Delete old index data
rm -rf /opt/seafile-elasticsearch/data/*\n 4. Create new index data:
$ cd /opt/seafile/seafile-server-latest\n$ ./pro/pro.py search --update\n"},{"location":"upgrade/upgrade_notes_for_10.0.x/#method-three-if-you-are-in-a-cluster-environment","title":"Method three, if you are in a cluster environment","text":"1. Deploy elasticsearch 8.x according to method two. Use Seafile 10.0 to deploy a new backend node and modify the seafevents.conf file. The backend node does not start the Seafile background service; just manually run the command ./pro/pro.py search --update.
2. Upgrade the other nodes to Seafile 10.0 version and use the new Elasticsearch 8.x server.
3. Then deactivate the old backend node and the old version of Elasticsearch.
"},{"location":"upgrade/upgrade_notes_for_11.0.x/","title":"Upgrade notes for 11.0","text":"These notes give additional information about changes. Please always follow the main upgrade guide.
For the Docker-based version, please check upgrade Seafile Docker image
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#important-release-changes","title":"Important release changes","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-of-user-identity","title":"Change of user identity","text":"Previous Seafile versions directly used a user's email address or SSO identity as their internal user ID.
Seafile 11.0 introduces virtual user IDs - random, internal identifiers like \"adc023e7232240fcbb83b273e1d73d36@auth.local\". For new users, a virtual ID will be generated instead of directly using their email. A mapping between the email and virtual ID will be stored in the \"profile_profile\" database table. For SSO users, the mapping between the SSO ID and the virtual ID is stored in the \"social_auth_usersocialauth\" table.
Overall, this brings more flexibility for handling user accounts and identity changes. Existing users keep their old IDs.
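For illustration only - this is not Seafile's actual code - an identifier of the same shape as the virtual IDs described above can be produced like this:

```python
import uuid

def make_virtual_id() -> str:
    # 32 hex characters followed by the "@auth.local" suffix, matching the
    # example "adc023e7232240fcbb83b273e1d73d36@auth.local" shown above.
    return uuid.uuid4().hex + "@auth.local"

vid = make_virtual_id()
assert vid.endswith("@auth.local")
assert len(vid.split("@")[0]) == 32
```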
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#reimplementation-of-ldap-integration","title":"Reimplementation of LDAP Integration","text":"Previous Seafile versions handled LDAP authentication in the ccnet-server component. In Seafile 11.0, LDAP is reimplemented within the Seahub Python codebase.
LDAP configuration has been moved from ccnet.conf to seahub_settings.py. The ccnet_db.LDAPImported table is no longer used - LDAP users are now stored in ccnet_db.EmailUsers along with other users.
Benefits of this new implementation:
You need to run the migrate_ldapusers.py script to merge the ccnet_db.LDAPImported table into the ccnet_db.EmailUsers table. The settings files need to be changed manually. (See more details below.)
If you use OAuth authentication, the configuration needs to be changed a bit.
If you use SAML, you don't need to change configuration files. For SAML2, in version 10, the name_id field is returned from the SAML server and is used as the username (the email field in ccnet_db.EmailUsers). In version 11, for old users, Seafile will find the old user and create a name_id to name_id mapping in social_auth_usersocialauth. For new users, Seafile will create a new user with a random ID and add a name_id to random ID mapping in social_auth_usersocialauth. In addition, we have added a feature where you can disable login with a username and password for SAML users by setting DISABLE_ADFS_USER_PWD_LOGIN = True in seahub_settings.py.
Seafile 11.0 dropped support for SQLite as the database. It is better to migrate from SQLite to MySQL before upgrading to version 11.0.
There are several reasons driving this change:
To migrate from SQLite database to MySQL database, you can follow the document Migrate from SQLite to MySQL. If you have issues in the migration, just post a thread in our forum. We are glad to help you.
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#elasticsearch-change-pro-edition-only","title":"ElasticSearch change (pro edition only)","text":"Elasticsearch version is not changed in Seafile version 11.0
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#new-saml-prerequisites-multi_tenancy-only","title":"New SAML prerequisites (MULTI_TENANCY only)","text":"For Ubuntu 20.04/22.04
sudo apt-get update\nsudo apt-get install -y dnsutils\n"},{"location":"upgrade/upgrade_notes_for_11.0.x/#django-csrf-protection-issue","title":"Django CSRF protection issue","text":"Django 4.* has introduced a new check for the origin http header in CSRF verification. It now compares the values of the origin field in HTTP header and the host field in HTTP header. If they are different, an error is triggered.
If you deploy Seafile behind a proxy, use a non-standard port, or deploy Seafile in a cluster, the origin field and the host field in the HTTP headers received by Django are likely to differ, because the host field is often rewritten by the proxy. This mismatch results in a CSRF error.
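A simplified sketch of the comparison involved (the real Django check also consults CSRF_TRUSTED_ORIGINS and handles more cases; this is for illustration only):

```python
from urllib.parse import urlparse

def origin_matches_host(origin: str, host: str, scheme: str = "https") -> bool:
    """Rough model of the Origin-vs-Host agreement Django 4.x requires."""
    parsed = urlparse(origin)
    return parsed.scheme == scheme and parsed.netloc == host

# Direct access: both headers carry the public name, the check passes.
assert origin_matches_host("https://seafile.example.com", "seafile.example.com")

# Behind a proxy that rewrites Host to the upstream address, the values
# disagree and the POST is rejected with a CSRF error.
assert not origin_matches_host("https://seafile.example.com", "127.0.0.1:8000")
```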
You can add CSRF_TRUSTED_ORIGINS to seahub_settings.py to solve the problem:
CSRF_TRUSTED_ORIGINS = [\"https://<your-domain>\"]\n"},{"location":"upgrade/upgrade_notes_for_11.0.x/#new-python-libraries","title":"New Python libraries","text":"Note, you should install Python libraries system wide using root user or sudo mode.
For Ubuntu 20.04/22.04
sudo apt-get update\nsudo apt-get install -y python3-dev ldap-utils libldap2-dev\n\nsudo pip3 install future==0.18.* mysqlclient==2.1.* pillow==10.2.* sqlalchemy==2.0.18 captcha==0.5.* django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 python-ldap==3.4.3\n"},{"location":"upgrade/upgrade_notes_for_11.0.x/#upgrade-to-110x","title":"Upgrade to 11.0.x","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#1-stop-seafile-100x-server","title":"1) Stop Seafile-10.0.x server.","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#2-start-from-seafile-110x-run-the-script","title":"2) Start from Seafile 11.0.x, run the script:","text":"upgrade/upgrade_10.0_11.0.sh\n"},{"location":"upgrade/upgrade_notes_for_11.0.x/#3modify-configurations-and-migrate-ldap-records","title":"3\uff09Modify configurations and migrate LDAP records","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-configurations-for-ldap","title":"Change configurations for LDAP","text":"The configuration items of LDAP login and LDAP sync tasks are migrated from ccnet.conf to seahub_settings.py. The name of the configuration item is based on the 10.0 version, and the characters 'LDAP_' or 'MULTI_LDAP_1' are added. Examples are as follows:
# Basic configuration items for LDAP login\nENABLE_LDAP = True\nLDAP_SERVER_URL = 'ldap://192.168.0.125' # The URL of LDAP server\nLDAP_BASE_DN = 'ou=test,dc=seafile,dc=ren' # The root node of users who can \n # log in to Seafile in the LDAP server\nLDAP_ADMIN_DN = 'administrator@seafile.ren' # DN of the administrator used \n # to query the LDAP server for information\nLDAP_ADMIN_PASSWORD = 'Hello@123' # Password of LDAP_ADMIN_DN\nLDAP_PROVIDER = 'ldap' # Identify the source of the user, used in \n # the table social_auth_usersocialauth, defaults to 'ldap'\nLDAP_LOGIN_ATTR = 'userPrincipalName' # User's attribute used to log in to Seafile, \n # can be mail or userPrincipalName, cannot be changed\nLDAP_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' # Additional filter conditions,\n # users who meet the filter conditions can log in, otherwise they cannot log in\n# For update user info when login\nLDAP_CONTACT_EMAIL_ATTR = '' # For update user's contact_email\nLDAP_USER_ROLE_ATTR = '' # For update user's role\nLDAP_USER_FIRST_NAME_ATTR = 'givenName' # For update user's first name\nLDAP_USER_LAST_NAME_ATTR = 'sn' # For update user's last name\nLDAP_USER_NAME_REVERSE = False # Whether to reverse the user's first and last name\n The following configuration items are only for Pro Edition:
# Configuration items for LDAP sync tasks.\nLDAP_SYNC_INTERVAL = 60 # LDAP sync task period, in minutes\n\n# LDAP user sync configuration items.\nENABLE_LDAP_USER_SYNC = True # Whether to enable user sync\nLDAP_USER_OBJECT_CLASS = 'person' # This is the name of the class used to search for user objects. \n # In Active Directory, it's usually \"person\". The default value is \"person\".\nLDAP_DEPT_ATTR = '' # LDAP user's department info\nLDAP_UID_ATTR = '' # LDAP user's login_id attribute\nLDAP_AUTO_REACTIVATE_USERS = True # Whether to auto activate deactivated user\nLDAP_USE_PAGED_RESULT = False # Whether to use pagination extension\nIMPORT_NEW_USER = True # Whether to import new users when sync user\nACTIVATE_USER_WHEN_IMPORT = True # Whether to activate the user when importing new user\nENABLE_EXTRA_USER_INFO_SYNC = True # Whether to enable sync of additional user information,\n # including user's full name, contact_email, department, and Windows login name, etc.\nDEACTIVE_USER_IF_NOTFOUND = False # Set to \"true\" if you want to deactivate a user \n # when he/she was deleted in AD server.\n\n# LDAP group sync configuration items.\nENABLE_LDAP_GROUP_SYNC = True # Whether to enable group sync\nLDAP_GROUP_FILTER = '' # Group sync filter\nLDAP_SYNC_DEPARTMENT_FROM_OU = True # Whether to enable sync departments from OU.\nLDAP_GROUP_OBJECT_CLASS = 'group' # This is the name of the class used to search for group objects.\nLDAP_GROUP_MEMBER_ATTR = 'member' # The attribute field to use when loading the group's members. 
\n # For most directory servers, the attribute is \"member\",\n # which is the default value. For \"posixGroup\", it should be set to \"memberUid\".\nLDAP_USER_ATTR_IN_MEMBERUID = 'uid' # The user attribute set in the 'memberUid' option,\n # which is used in \"posixGroup\". The default value is \"uid\".\nLDAP_GROUP_UUID_ATTR = 'objectGUID' # Used to uniquely identify groups in LDAP\nLDAP_USE_GROUP_MEMBER_RANGE_QUERY = False # When a group contains too many members, \n # AD will only return part of them. Set this option to TRUE\n # to make LDAP sync work with large groups.\nLDAP_SYNC_GROUP_AS_DEPARTMENT = False # Whether to sync groups as top-level departments in Seafile\nLDAP_DEPT_NAME_ATTR = '' # Used to get the department name.\nLDAP_CREATE_DEPARTMENT_LIBRARY = False # If you decide to sync the group as a department,\n # you can set this option to \"true\". In this way, when \n # the group is synchronized for the first time, a library\n # is automatically created for the department, and the \n # library's name is the department's name.\nLDAP_DEPT_REPO_PERM = 'rw' # Set the permissions of the department repo, default permission is 'rw'.\nLDAP_DEFAULT_DEPARTMENT_QUOTA = -2 # You can set a default space quota for each department\n # when you synchronize a group for the first time. The \n # quota is set to unlimited if this option is not set.\n # Unit is MB.\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\", and the sync process will delete a group if it is not found in the LDAP server.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\", and the sync process will delete a department if it is not found in the LDAP server.\n If you sync users from LDAP to Seafile and want a user who logs in via SSO (ADFS, OAuth or Shibboleth) to be matched to their existing account instead of a new one being created, you can set SSO_LDAP_USE_SAME_UID = True:
SSO_LDAP_USE_SAME_UID = True\n Note, here the UID means the unique user ID: in LDAP it is the attribute you use for LDAP_LOGIN_ATTR (not LDAP_UID_ATTR), and in ADFS it is the uid attribute. You need to make sure you use the same attribute for the two settings.
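A toy sketch of the matching behaviour this setting enables - the names and data structures here are assumptions for illustration, not Seafile's implementation:

```python
# Assumed mapping: LDAP login attribute value -> existing internal account.
ldap_users = {"alice@corp.example.com": "internal-alice@auth.local"}

def resolve_login(sso_uid: str, use_same_uid: bool):
    """When the SSO uid equals the LDAP login attribute, reuse the account."""
    if use_same_uid and sso_uid in ldap_users:
        return ldap_users[sso_uid]   # reuse the existing LDAP-imported account
    return None                      # otherwise a brand-new account would be created

assert resolve_login("alice@corp.example.com", True) == "internal-alice@auth.local"
assert resolve_login("alice@corp.example.com", False) is None
```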
Run the following script to migrate users in LDAPImported to EmailUsers
cd <install-path>/seafile-server-latest\npython3 migrate_ldapusers.py\n For Seafile docker
docker exec -it seafile /usr/bin/python3 /opt/seafile/seafile-server-latest/migrate_ldapusers.py\n"},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-configuration-for-oauth","title":"Change configuration for OAuth:","text":"In the new version, the OAuth login configuration should keep the email attribute unchanged to be compatible with new and old user logins. In version 11.0, a new uid attribute is added to be used as a user's external unique ID. The uid will be stored in social_auth_usersocialauth to map to the internal virtual ID. For old users, the original email is used as the internal virtual ID. The example is as follows:
# Version 10.0 or earlier\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"),\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n\n# Since 11.0 version, added 'uid' attribute.\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # In the new version, the email attribute configuration should be kept unchanged to be compatible with old and new user logins\n \"uid\": (True, \"uid\"), # Seafile uses 'uid' as the external unique identifier of the user. Different OAuth systems have different attributes, which may be: 'uid' or 'username', etc.\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n When a user logs in, Seafile will first use the \"id -> email\" map to find the old user and then create a \"uid -> uid\" map for this old user. After all users have logged in once, you can delete the configuration \"id\": (True, \"email\"). You can also manually add records in social_auth_usersocialauth to map external uids to old users.
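A hedged sketch of how an attribute map like the one above can be applied to an OAuth user-info payload. The payload field names are assumptions; Seafile's real code differs, this only illustrates the (required, target) semantics of the map entries.

```python
OAUTH_ATTRIBUTE_MAP = {
    "id": (True, "email"),        # required; maps to the internal email field
    "uid": (True, "uid"),         # required; the external unique identifier
    "name": (False, "name"),      # optional
    "email": (False, "contact_email"),
}

def map_attributes(payload: dict, attr_map: dict) -> dict:
    """Translate provider attributes into Seafile-side fields."""
    mapped = {}
    for source_key, (required, target_key) in attr_map.items():
        if source_key in payload:
            mapped[target_key] = payload[source_key]
        elif required:
            raise ValueError(f"missing required attribute: {source_key}")
    return mapped

payload = {"id": "1001", "uid": "alice", "name": "Alice", "email": "alice@example.com"}
result = map_attributes(payload, OAUTH_ATTRIBUTE_MAP)
assert result == {"email": "1001", "uid": "alice", "name": "Alice",
                  "contact_email": "alice@example.com"}
```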
We have documented common issues encountered by users when upgrading to version 11.0 in our FAQ https://cloud.seatable.io/dtable/external-links/7b976c85f504491cbe8e/?tid=0000&vid=0000.
If you encounter any issues, please check the FAQ first.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/","title":"Upgrade notes for 12.0","text":"These notes give additional information about changes. Please always follow the main upgrade guide.
For docker based version, please check upgrade Seafile Docker image
Seafile version 12.0 has the following major changes:
Configuration changes:
.env file is needed to contain some configuration items. These configuration items need to be shared by different components in Seafile. We name it .env to be consistent with the docker based installation's .env file. ccnet.conf is removed. Some of its configuration items are moved to the .env file, others are read from items in seafile.conf with the same name. can_create_wiki and can_publish_wiki are used to control whether a role can create a Wiki and publish a Wiki. The old role permission can_publish_repo is removed. gunicorn.conf.py needs to be changed if you need the REMOTE_USER header for SSO. Other changes:
Breaking changes
Deploying Seafile with the binary package is now deprecated and will probably no longer be supported in version 13.0. We recommend migrating your existing Seafile deployment to a docker based one.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#elasticsearch-change-pro-edition-only","title":"ElasticSearch change (pro edition only)","text":"Elasticsearch version is not changed in Seafile version 12.0
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#new-system-libraries","title":"New system libraries","text":"Ubuntu 24.04/22.04Debian 11apt-get install -y default-libmysqlclient-dev build-essential pkg-config libmemcached-dev\n apt-get install -y libsasl2-dev\n"},{"location":"upgrade/upgrade_notes_for_12.0.x/#new-python-libraries","title":"New Python libraries","text":"Note, you should install Python libraries system wide using root user or sudo mode.
Ubuntu 24.04 / Debian 12Ubuntu 22.04 / Debian 11sudo pip3 install future==1.0.* mysqlclient==2.2.* pillow==10.4.* sqlalchemy==2.0.* pillow_heif==0.18.0 \\\ngevent==24.2.* captcha==0.6.* django_simple_captcha==0.6.* djangosaml2==1.9.* \\\npysaml2==7.3.* pycryptodome==3.20.* cffi==1.17.0 python-ldap==3.4.*\n sudo pip3 install future==1.0.* mysqlclient==2.1.* pillow==10.4.* sqlalchemy==2.0.* pillow_heif==0.18.0 \\\ngevent==24.2.* captcha==0.6.* django_simple_captcha==0.6.* djangosaml2==1.9.* \\\npysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 python-ldap==3.2.0\n"},{"location":"upgrade/upgrade_notes_for_12.0.x/#upgrade-to-120-for-binary-installation","title":"Upgrade to 12.0 (for binary installation)","text":"The following instruction is for binary package based installation. If you use Docker based installation, please see Upgrade Docker
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#1-clean-database-tables-before-upgrade","title":"1) Clean database tables before upgrade","text":"If you have a large number of rows in the Activity table in MySQL, clear this table first (see Clean Database). Otherwise, the database upgrade will take a long time.
Install new system libraries and Python libraries for your operating system as documented above.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#3-stop-seafile-110x-server","title":"3) Stop Seafile-11.0.x server","text":"In the folder of Seafile 11.0.x, run the commands:
./seahub.sh stop\n./seafile.sh stop\n"},{"location":"upgrade/upgrade_notes_for_12.0.x/#4-run-seafile-120x-upgrade-script","title":"4) Run Seafile 12.0.x upgrade script","text":"In the folder of Seafile 12.0.x, run the upgrade script
upgrade/upgrade_11.0_12.0.sh\n"},{"location":"upgrade/upgrade_notes_for_12.0.x/#5-create-the-env-file-in-conf-directory","title":"5) Create the .env file in conf/ directory","text":"conf/.env
TIME_ZONE=UTC\nJWT_PRIVATE_KEY=xxx\nSEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=seafile.example.com\nSEAFILE_MYSQL_DB_HOST=db # your MySQL host\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=<your MySQL password>\nSEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db\n Tip
JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters, can be generated by
pwgen -s 40 1\n"},{"location":"upgrade/upgrade_notes_for_12.0.x/#6-start-seafile-120x-server","title":"6) Start Seafile-12.0.x server","text":"In the folder of Seafile 12.0.x, run the command:
./seafile.sh start # starts seaf-server\n./seahub.sh start # starts seahub\n"},{"location":"upgrade/upgrade_notes_for_12.0.x/#7-optional-upgrade-notification-server","title":"7) (Optional) Upgrade notification server","text":"Since Seafile 12.0, we use docker to deploy the notification server. Please follow the document of the notification server to re-deploy it.
Note
The notification server is designed to work with Docker based deployments. To make it work with the Seafile binary package on the same server, you will need to add Nginx rules for the notification server properly.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#8-optional-upgrade-seadoc-from-08-to-10","title":"8) (Optional) Upgrade SeaDoc from 0.8 to 1.0","text":"If you have deployed SeaDoc v0.8 with Seafile v11.0, you can upgrade it to 1.0 using the following two steps:
SeaDoc and Seafile binary package
Deploying SeaDoc and Seafile binary package on the same server is no longer officially supported. You will need to add Nginx rules for SeaDoc server properly.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#81-delete-sdoc_db","title":"8.1) Delete sdoc_db","text":"From version 1.0, SeaDoc uses the seahub_db database to store its operation logs and no longer needs an extra database sdoc_db. The database tables in seahub_db are created automatically when you upgrade Seafile server from v11.0 to v12.0. You can simply delete sdoc_db.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#82-deploy-a-new-seadoc-server","title":"8.2) Deploy a new SeaDoc server","text":"Please see the document Setup SeaDoc to install SeaDoc on a separate machine and integrate it with your binary package based Seafile server v12.0.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#9-optional-update-gunicornconfpy-file-in-conf-directory","title":"9) (Optional) Updategunicorn.conf.py file in conf/ directory","text":"If you deployed single sign on (SSO) by Shibboleth protocol, the following line should be added to the gunicorn config file.
forwarder_headers = 'SCRIPT_NAME,PATH_INFO,REMOTE_USER'\n"},{"location":"upgrade/upgrade_notes_for_12.0.x/#10-optional-other-configuration-changes","title":"10) (Optional) Other configuration changes","text":""},{"location":"upgrade/upgrade_notes_for_12.0.x/#enable-passing-of-remote_user","title":"Enable passing of REMOTE_USER","text":"The REMOTE_USER header is not passed to Seafile by default; you need to change gunicorn.conf.py if you need the REMOTE_USER header for SSO.
forwarder_headers = 'SCRIPT_NAME,PATH_INFO,REMOTE_USER'\n"},{"location":"upgrade/upgrade_notes_for_12.0.x/#supplement-or-remove-allowed_hosts-in-seahub_settingspy","title":"Supplement or remove ALLOWED_HOSTS in seahub_settings.py","text":"Since version 12.0, the seaf-server component needs to send internal requests to the seahub component to check permissions, and a 400 error is reported when downloading files if ALLOWED_HOSTS is set incorrectly. In this case, you can either remove ALLOWED_HOSTS from seahub_settings.py or add 127.0.0.1 to the ALLOWED_HOSTS list:
# seahub_settings.py\n\nALLOWED_HOSTS = ['...(your domain)', '127.0.0.1']\n"},{"location":"upgrade/upgrade_notes_for_12.0.x/#faq","title":"FAQ","text":"We have documented common issues encountered by users when upgrading to version 12.0 in our FAQ https://cloud.seatable.io/dtable/external-links/7b976c85f504491cbe8e/?tid=0000&vid=0000.
If you encounter any issues, please check the FAQ first.
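The ALLOWED_HOSTS behaviour described above can be sketched roughly as follows. This is a simplified illustration, not Django's real implementation (which handles more cases); it shows why 127.0.0.1 must be listed for seaf-server's internal requests to pass.

```python
def host_allowed(host: str, allowed_hosts: list) -> bool:
    """Rough model of Django's host check against ALLOWED_HOSTS."""
    hostname = host.rsplit(":", 1)[0] if ":" in host else host  # strip port
    for pattern in allowed_hosts:
        if pattern.startswith("."):  # wildcard: the domain and any subdomain
            if hostname.endswith(pattern) or hostname == pattern[1:]:
                return True
        elif hostname == pattern:
            return True
    return False

allowed = ["seafile.example.com", "127.0.0.1"]
assert host_allowed("seafile.example.com", allowed)
assert host_allowed("127.0.0.1:8000", allowed)       # internal request from seaf-server
assert not host_allowed("evil.example.org", allowed)  # would get a 400 response
```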
"},{"location":"upgrade/upgrade_notes_for_13.0.x/","title":"Upgrade notes for 13.0","text":"These notes give additional information about changes. Please always follow the main upgrade guide.
For docker based version, please check upgrade Seafile Docker image
Seafile version 13.0 has the following major changes:
Configuration changes:
.env, it is recommended to use environment variables to configure the database and memcached. Breaking changes
Deploying Seafile with the binary package is no longer supported for the community edition. We recommend migrating your existing Seafile deployment to a docker based one.
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#elasticsearch-change-pro-edition-only","title":"ElasticSearch change (pro edition only)","text":"Elasticsearch version is not changed in Seafile version 13.0
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#new-system-libraries-to-be-updated","title":"New system libraries (TO be updated)","text":"Ubuntu 24.04/22.04Debian 11apt-get install -y default-libmysqlclient-dev build-essential pkg-config libmemcached-dev\n apt-get install -y libsasl2-dev\n"},{"location":"upgrade/upgrade_notes_for_13.0.x/#new-python-libraries-to-be-updated","title":"New Python libraries (TO be updated)","text":"Note, you should install Python libraries system wide using root user or sudo mode.
Ubuntu 24.04 / Debian 12Ubuntu 22.04 / Debian 11sudo pip3 install future==1.0.* mysqlclient==2.2.* pillow==10.4.* sqlalchemy==2.0.* pillow_heif==0.18.0 \\\ngevent==24.2.* captcha==0.6.* django_simple_captcha==0.6.* djangosaml2==1.9.* \\\npysaml2==7.3.* pycryptodome==3.20.* cffi==1.17.0 python-ldap==3.4.*\n sudo pip3 install future==1.0.* mysqlclient==2.1.* pillow==10.4.* sqlalchemy==2.0.* pillow_heif==0.18.0 \\\ngevent==24.2.* captcha==0.6.* django_simple_captcha==0.6.* djangosaml2==1.9.* \\\npysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 python-ldap==3.2.0\n"},{"location":"upgrade/upgrade_notes_for_13.0.x/#upgrade-to-130-for-binary-installation","title":"Upgrade to 13.0 (for binary installation)","text":"The following instruction is for binary package based installation. If you use Docker based installation, please see Upgrade Docker
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#1-clean-database-tables-before-upgrade","title":"1) Clean database tables before upgrade","text":"If you have a large number of Activity in MySQL, clear this table first Clean Database. Otherwise, the database upgrade will take a long time.
Install new system libraries and Python libraries for your operating system as documented above.
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#3-stop-seafile-110x-server","title":"3) Stop Seafile-11.0.x server","text":"In the folder of Seafile 11.0.x, run the commands:
./seahub.sh stop\n./seafile.sh stop\n"},{"location":"upgrade/upgrade_notes_for_13.0.x/#4-run-seafile-120x-upgrade-script","title":"4) Run Seafile 12.0.x upgrade script","text":"In the folder of Seafile 12.0.x, run the upgrade script
upgrade/upgrade_11.0_12.0.sh\n"},{"location":"upgrade/upgrade_notes_for_13.0.x/#5-create-the-env-file-in-conf-directory","title":"5) Create the .env file in conf/ directory","text":"conf/.env
TIME_ZONE=UTC\nJWT_PRIVATE_KEY=xxx\nSEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=seafile.example.com\nSEAFILE_MYSQL_DB_HOST=db # your MySQL host\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=<your MySQL password>\nSEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db\n Tip
JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters, can be generated by
pwgen -s 40 1\n"},{"location":"upgrade/upgrade_notes_for_13.0.x/#6-start-seafile-120x-server","title":"6) Start Seafile-12.0.x server","text":"In the folder of Seafile 12.0.x, run the command:
./seafile.sh start # starts seaf-server\n./seahub.sh start # starts seahub\n"},{"location":"upgrade/upgrade_notes_for_13.0.x/#7-optional-upgrade-notification-server","title":"7) (Optional) Upgrade notification server","text":""},{"location":"upgrade/upgrade_notes_for_13.0.x/#8-optional-upgrade-seadoc-from-10-to-20","title":"8) (Optional) Upgrade SeaDoc from 1.0 to 2.0","text":""},{"location":"upgrade/upgrade_notes_for_13.0.x/#faq","title":"FAQ","text":"We have documented common issues encountered by users when upgrading to version 12.0 in our FAQ https://cloud.seatable.io/dtable/external-links/7b976c85f504491cbe8e/?tid=0000&vid=0000.
If you encounter any issues, please check the FAQ first.
"},{"location":"upgrade/upgrade_notes_for_9.0.x/","title":"Upgrade notes for 9.0","text":"These notes give additional information about changes. Please always follow the main upgrade guide.
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#important-release-changes","title":"Important release changes","text":"Version 9.0 includes the following major changes:
The new file-server written in golang serves HTTP requests to upload/download/sync files. It provides three advantages:
You can turn the golang file-server on by adding the following configuration to seafile.conf
[fileserver]\nuse_go_fileserver = true\n"},{"location":"upgrade/upgrade_notes_for_9.0.x/#new-python-libraries","title":"New Python libraries","text":"Note, you should install Python libraries system wide using root user or sudo mode.
sudo pip3 install pycryptodome==3.12.0 cffi==1.14.0\n"},{"location":"upgrade/upgrade_notes_for_9.0.x/#upgrade-to-90x","title":"Upgrade to 9.0.x","text":"Start from Seafile 9.0.x, run the script:
upgrade/upgrade_8.0_9.0.sh\n Start Seafile-9.0.x server.
If your elasticsearch data is not large, it is recommended to deploy the latest 7.x version of ElasticSearch and then rebuild the new index. The specific steps are as follows
Download ElasticSearch image
docker pull elasticsearch:7.16.2\n Create a new folder to store ES data and give the folder permissions
mkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\n Note: You must properly grant permission to access the es data directory, and run the Elasticsearch container as the root user, refer to here.
Start ES docker image
sudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms2g -Xmx2g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data elasticsearch:7.16.2\n Delete old index data
rm -rf /opt/seafile/pro-data/search/data/*\n Modify seafevents.conf
[INDEX FILES]\nexternal_es_server = true\nes_host = your server's IP (use 127.0.0.1 if deployed locally)\nes_port = 9200\n Restart seafile
su seafile\ncd seafile-server-latest/\n./seafile.sh stop && ./seahub.sh stop\n./seafile.sh start && ./seahub.sh start\n"},{"location":"upgrade/upgrade_notes_for_9.0.x/#method-two-reindex-the-existing-data","title":"Method two, reindex the existing data","text":"If your data volume is relatively large, it will take a long time to rebuild indexes for all Seafile databases, so you can reindex the existing data. This requires the following steps
The detailed process is as follows
Download ElasticSearch image:
docker pull elasticsearch:7.16.2\n PS: For Seafile version 9.0, you need to manually create the elasticsearch mapping path on the host machine and give it 777 permission, otherwise elasticsearch will report path permission problems when starting. The command is as follows
mkdir -p /opt/seafile-elasticsearch/data \n Move original data to the new folder and give the folder permissions
mv /opt/seafile/pro-data/search/data/* /opt/seafile-elasticsearch/data/\nchmod -R 777 /opt/seafile-elasticsearch/data/\n Note: You must properly grant permission to access the es data directory, and run the Elasticsearch container as the root user, refer to here.
Start ES docker image
sudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data elasticsearch:7.16.2\n Note: ES_JAVA_OPTS can be adjusted according to your needs.
Create an index with 7.x compatible mappings.
curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repo_head?include_type_name=false&pretty=true' -d '\n{\n \"mappings\" : {\n \"properties\" : {\n \"commit\" : {\n \"type\" : \"text\",\n \"index\" : false\n },\n \"repo\" : {\n \"type\" : \"text\",\n \"index\" : false\n },\n \"updatingto\" : {\n \"type\" : \"text\",\n \"index\" : false\n }\n }\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repofiles/?include_type_name=false&pretty=true' -d '\n{\n \"settings\" : {\n \"index\" : {\n \"number_of_shards\" : 5,\n \"number_of_replicas\" : 1,\n \"analysis\" : {\n \"analyzer\" : {\n \"seafile_file_name_ngram_analyzer\" : {\n \"filter\" : [\n \"lowercase\"\n ],\n \"type\" : \"custom\",\n \"tokenizer\" : \"seafile_file_name_ngram_tokenizer\"\n }\n },\n \"tokenizer\" : {\n \"seafile_file_name_ngram_tokenizer\" : {\n \"type\" : \"ngram\",\n \"min_gram\" : \"3\",\n \"max_gram\" : \"4\"\n }\n }\n }\n }\n },\n \"mappings\" : {\n \"properties\" : {\n \"content\" : {\n \"type\" : \"text\",\n \"term_vector\" : \"with_positions_offsets\"\n },\n \"filename\" : {\n \"type\" : \"text\",\n \"fields\" : {\n \"ngram\" : {\n \"type\" : \"text\",\n \"analyzer\" : \"seafile_file_name_ngram_analyzer\"\n }\n }\n },\n \"is_dir\" : {\n \"type\" : \"boolean\"\n },\n \"mtime\" : {\n \"type\" : \"date\"\n },\n \"path\" : {\n \"type\" : \"keyword\"\n },\n \"repo\" : {\n \"type\" : \"keyword\"\n },\n \"size\" : {\n \"type\" : \"long\"\n },\n \"suffix\" : {\n \"type\" : \"keyword\"\n }\n }\n }\n}'\n Set the refresh_interval to -1 and the number_of_replicas to 0 for efficient reindexing:
curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repo_head/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repofiles/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\n Use the reindex API to copy documents from the 5.x index into the new index.
curl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?pretty' -d '\n{\n \"source\": {\n \"index\": \"repo_head\",\n \"type\": \"repo_commit\"\n },\n \"dest\": {\n \"index\": \"new_repo_head\",\n \"type\": \"_doc\"\n }\n}'\n\ncurl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?pretty' -d '\n{\n \"source\": {\n \"index\": \"repofiles\",\n \"type\": \"file\"\n },\n \"dest\": {\n \"index\": \"new_repofiles\",\n \"type\": \"_doc\"\n }\n}'\n Reset the refresh_interval and number_of_replicas to the values used in the old index.
curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repo_head/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repofiles/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\n Wait for the index status to change to green.
curl http{s}://{es server IP}:9200/_cluster/health?pretty\n Use the aliases API to delete the old index and add an alias with the old index name to the new index.
curl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_aliases?pretty' -d '\n{\n \"actions\": [\n {\"remove_index\": {\"index\": \"repo_head\"}},\n {\"remove_index\": {\"index\": \"repofiles\"}},\n {\"add\": {\"index\": \"new_repo_head\", \"alias\": \"repo_head\"}},\n {\"add\": {\"index\": \"new_repofiles\", \"alias\": \"repofiles\"}}\n ]\n}'\n After reindex, modify the configuration in Seafile.
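The seafile_file_name_ngram_tokenizer declared in the index mapping earlier emits lowercase 3- and 4-character grams of filenames, which is what makes partial filename search work. A rough illustration of that behaviour (an assumption-level sketch, not Elasticsearch's exact tokenizer output):

```python
def ngrams(text: str, min_gram: int = 3, max_gram: int = 4) -> list:
    """Emit every lowercase substring of length min_gram..max_gram."""
    text = text.lower()
    out = []
    for n in range(min_gram, max_gram + 1):
        for i in range(len(text) - n + 1):
            out.append(text[i:i + n])
    return out

grams = ngrams("Budget")
assert "bud" in grams and "udge" in grams    # partial matches become possible
assert all(3 <= len(g) <= 4 for g in grams)  # matches min_gram=3, max_gram=4
```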
Modify seafevents.conf
[INDEX FILES]\nexternal_es_server = true\nes_host = your server's IP\nes_port = 9200\n Restart seafile
su seafile\ncd seafile-server-latest/\n./seafile.sh stop && ./seahub.sh stop \n./seafile.sh start && ./seahub.sh start \n"},{"location":"upgrade/upgrade_notes_for_9.0.x/#method-three-if-you-are-in-a-cluster-environment","title":"Method three, if you are in a cluster environment","text":"Deploy a new Elasticsearch 7.x service, use Seafile 9.0 to deploy a new backend node, and connect it to Elasticsearch 7.x. Do not start the Seafile background service on this node; just manually run the command ./pro/pro.py search --update. Once the index is created, upgrade the other nodes to Seafile 9.0 and point them at the new Elasticsearch 7.x. Then decommission the old backend node and the old version of Elasticsearch.
Seafile is an open source cloud storage system for file sync, share and document collaboration. SeaDoc is an extension of Seafile that provides a lightweight online collaborative document feature.
"},{"location":"#license","title":"LICENSE","text":"The different components of Seafile project are released under different licenses:
The requested file was not found. If you are still using https://manual.seafile.com/xxx/, please move to https://manual.seafile.com/latest/xxx/ as this path has been deprecated. We apologize for the inconvenience caused.
"},{"location":"changelog/","title":"Changelog","text":""},{"location":"changelog/#changelogs","title":"Changelogs","text":"As the system admin, you can enter the admin panel by click System Admin in the popup of avatar.
Backup and recovery:
Recover corrupt files after server hard shutdown or system crash:
You can run Seafile GC to remove unused files:
When you set up the Seahub website, you should have created an admin account. After you log in as the admin, you may add/delete users and file libraries.
"},{"location":"administration/account/#how-to-change-a-users-id","title":"How to change a user's ID","text":"Since version 11.0, if you need to change a user's external ID, you can manually modify database table social_auth_usersocialauth to map the new external ID to internal ID.
The administrator can reset a user's password on the "System Admin" page.
On a private server, the default settings don't allow users to reset their password by email. If you want to enable this, you must first set up notification email.
"},{"location":"administration/account/#forgot-admin-account-or-password","title":"Forgot Admin Account or Password?","text":"You may run reset-admin.sh script under seafile-server-latest directory. This script would help you reset the admin account and password. Your data will not be deleted from the admin account, this only unlocks and changes the password for the admin account.
Tip
Enter the Docker container, then go to /opt/seafile/seafile-server-latest
Under the seafile-server-latest directory, run ./seahub.sh python-env python seahub/manage.py check_user_quota. When a user's quota usage exceeds 90%, an email will be sent to them. If you want to enable this, you must first set up notification email.
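To run this check automatically, you can schedule it with cron. The entry below is a hypothetical sketch: the install path, user, and schedule are assumptions to adapt to your deployment.

```shell
# Hypothetical crontab entry for the seafile user: run the quota check
# daily at 02:00 (binary-package install under /opt/seafile assumed).
0 2 * * * cd /opt/seafile/seafile-server-latest && ./seahub.sh python-env python seahub/manage.py check_user_quota
```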
In the Pro Edition, Seafile offers four audit logs in the system admin panel:
The audit log data is saved in seahub_db.
There are generally two parts of data to back up
There are 3 databases:
The backup is a two-step procedure:
The second sequence is better in the sense that it avoids library corruption. Like other backup solutions, some new data can be lost in recovery. There is always a backup window. However, if your storage backup mechanism can finish quickly enough, using the first sequence can retain more data.
We assume your seafile data directory is in /opt/seafile for binary package based deployment (or /opt/seafile-data for docker based deployment). And you want to backup to /backup directory. The /backup can be an NFS or Windows share mount exported by another machine, or just an external disk. You can create a layout similar to the following in /backup directory:
/backup\n---- databases/ contains database backup files\n---- data/ contains backups of the data directory\n"},{"location":"administration/backup_recovery/#backup-and-restore-for-binary-package-based-deployment","title":"Backup and restore for binary package based deployment","text":""},{"location":"administration/backup_recovery/#backing-up-databases","title":"Backing up Databases","text":"It's recommended to back up the database to a separate file each time. Don't overwrite older database backups for at least a week.
Assume your database names are ccnet_db, seafile_db and seahub_db. mysqldump automatically locks the tables so you don't need to stop Seafile server when backing up MySQL databases. Since the database tables are usually very small, it won't take long to dump.
mysqldump -h [mysqlhost] -u[username] -p[password] --opt ccnet_db > /backup/databases/ccnet_db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nmysqldump -h [mysqlhost] -u[username] -p[password] --opt seafile_db > /backup/databases/seafile_db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nmysqldump -h [mysqlhost] -u[username] -p[password] --opt seahub_db > /backup/databases/seahub_db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n mysqldump: command not found
You may encounter this problem on machines with a minimal (from 10.5) or newer (from 11.0) MariaDB server installed, where the mysql* series of commands has been gradually deprecated. If you encounter this error, use the mariadb-dump command, such as:
mariadb-dump -h [mysqlhost] -u[username] -p[password] --opt ccnet_db > /backup/databases/ccnet_db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nmariadb-dump -h [mysqlhost] -u[username] -p[password] --opt seafile_db > /backup/databases/seafile_db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nmariadb-dump -h [mysqlhost] -u[username] -p[password] --opt seahub_db > /backup/databases/seahub_db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n"},{"location":"administration/backup_recovery/#backing-up-seafile-library-data","title":"Backing up Seafile library data","text":"The data files are all stored in the /opt/seafile directory, so just back up the whole directory. You can directly copy the whole directory to the backup destination, or you can use rsync to do incremental backup.
To directly copy the whole data directory,
cp -R /opt/seafile /backup/data/seafile-`date +\"%Y-%m-%d-%H-%M-%S\"`\n This produces a separate copy of the data directory each time. You can delete older backup copies after a new one is completed.
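To enforce the retention advice above, old timestamped copies can be pruned with find. The sketch below demonstrates the pattern in a temporary directory for safety; in production you would point BACKUP_ROOT at /backup/data, and the 7-day threshold is an assumption.

```shell
# Demonstrate pruning backup copies older than 7 days.
# BACKUP_ROOT is a temp dir here; use /backup/data in production.
BACKUP_ROOT=$(mktemp -d)
mkdir "$BACKUP_ROOT/seafile-2024-01-01-00-00-00" "$BACKUP_ROOT/seafile-recent"
touch -d '10 days ago' "$BACKUP_ROOT/seafile-2024-01-01-00-00-00"

# Delete timestamped copies whose mtime is more than 7 days old.
find "$BACKUP_ROOT" -mindepth 1 -maxdepth 1 -type d -name 'seafile-*' -mtime +7 -exec rm -rf {} +

ls "$BACKUP_ROOT"
```

Running this from cron after each nightly copy keeps the backup directory bounded while always retaining the last week of copies.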
If you have a lot of data, copying the whole data directory would take long. You can use rsync to do incremental backup.
rsync -az /opt/seafile /backup/data\n This command backs up the data directory to /backup/data/seafile.
Now suppose your primary Seafile server is broken and you're switching to a new machine. Use the backup data to restore your Seafile instance:
Copy /backup/data/seafile to the new machine. Let's assume the Seafile deployment location on the new machine is also /opt/seafile. Now with the latest valid database backup files at hand, you can restore them.
mysql -u[username] -p[password] ccnet_db < ccnet_db.sql.2013-10-19-16-00-05\nmysql -u[username] -p[password] seafile_db < seafile_db.sql.2013-10-19-16-00-20\nmysql -u[username] -p[password] seahub_db < seahub_db.sql.2013-10-19-16-01-05\n mysql: command not found
You may encounter this problem on machines with a minimal (from 10.5) or newer (from 11.0) MariaDB server installed, where the mysql* series of commands has been gradually deprecated. If you encounter this error, use the mariadb command, such as:
mariadb -u[username] -p[password] ccnet_db < ccnet_db.sql.2013-10-19-16-00-05\nmariadb -u[username] -p[password] seafile_db < seafile_db.sql.2013-10-19-16-00-20\nmariadb -u[username] -p[password] seahub_db < seahub_db.sql.2013-10-19-16-01-05\n"},{"location":"administration/backup_recovery/#backup-and-restore-for-docker-based-deployment","title":"Backup and restore for Docker based deployment","text":""},{"location":"administration/backup_recovery/#structure","title":"Structure","text":"We assume your Seafile volumes path is /opt/seafile-data, and you want to back up to the /backup directory.
The data files to be backed up:
/opt/seafile-data/seafile/conf # configuration files\n/opt/seafile-data/seafile/seafile-data # data of seafile\n/opt/seafile-data/seafile/seahub-data # data of seahub\n"},{"location":"administration/backup_recovery/#backing-up-database","title":"Backing up Database","text":"# It's recommended to backup the database to a separate file each time. Don't overwrite older database backups for at least a week.\ncd /backup/databases\ndocker exec -it seafile-mysql mariadb-dump -u[username] -p[password] --opt ccnet_db > ccnet_db.sql\ndocker exec -it seafile-mysql mariadb-dump -u[username] -p[password] --opt seafile_db > seafile_db.sql\ndocker exec -it seafile-mysql mariadb-dump -u[username] -p[password] --opt seahub_db > seahub_db.sql\n Tip
Since Seafile 12, the default database image is MariaDB 10.11. You may not be able to find the mysql* commands in the container (e.g. mysqldump: command not found), since they have been gradually deprecated. We therefore recommend using the mariadb* series of commands.
However, if you still use the MySQL docker image, you should continue to use mysqldump here:
docker exec -it seafile-mysql mysqldump -u[username] -p[password] --opt ccnet_db > ccnet_db.sql\ndocker exec -it seafile-mysql mysqldump -u[username] -p[password] --opt seafile_db > seafile_db.sql\ndocker exec -it seafile-mysql mysqldump -u[username] -p[password] --opt seahub_db > seahub_db.sql\n"},{"location":"administration/backup_recovery/#backing-up-seafile-library-data_1","title":"Backing up Seafile library data","text":""},{"location":"administration/backup_recovery/#to-directly-copy-the-whole-data-directory","title":"To directly copy the whole data directory","text":"cp -R /opt/seafile-data/seafile /backup/data/\n"},{"location":"administration/backup_recovery/#use-rsync-to-do-incremental-backup","title":"Use rsync to do incremental backup","text":"rsync -az /opt/seafile-data/seafile /backup/data/\n"},{"location":"administration/backup_recovery/#recovery","title":"Recovery","text":""},{"location":"administration/backup_recovery/#restore-the-databases_1","title":"Restore the databases","text":"docker cp /backup/databases/ccnet_db.sql seafile-mysql:/tmp/ccnet_db.sql\ndocker cp /backup/databases/seafile_db.sql seafile-mysql:/tmp/seafile_db.sql\ndocker cp /backup/databases/seahub_db.sql seafile-mysql:/tmp/seahub_db.sql\n\ndocker exec -it seafile-mysql /bin/sh -c \"mariadb -u[username] -p[password] ccnet_db < /tmp/ccnet_db.sql\"\ndocker exec -it seafile-mysql /bin/sh -c \"mariadb -u[username] -p[password] seafile_db < /tmp/seafile_db.sql\"\ndocker exec -it seafile-mysql /bin/sh -c \"mariadb -u[username] -p[password] seahub_db < /tmp/seahub_db.sql\"\n Tip
Since Seafile 12, the default database image is MariaDB 10.11. You may not be able to find the mysql* commands in the container (e.g. mysql: command not found), since they have been gradually deprecated. We therefore recommend using the mariadb* series of commands.
However, if you still use the MySQL docker image, you should continue to use mysql here:
docker exec -it seafile-mysql /bin/sh -c \"mysql -u[username] -p[password] ccnet_db < /tmp/ccnet_db.sql\"\ndocker exec -it seafile-mysql /bin/sh -c \"mysql -u[username] -p[password] seafile_db < /tmp/seafile_db.sql\"\ndocker exec -it seafile-mysql /bin/sh -c \"mysql -u[username] -p[password] seahub_db < /tmp/seahub_db.sql\"\n"},{"location":"administration/backup_recovery/#restore-the-seafile-data","title":"Restore the seafile data","text":"# Recommended: use rsync to restore, preserving ownership/permissions/ACL/xattrs.\n# Run a dry-run first to review the changes.\n# Dry-run (no changes made)\nsudo rsync -aHAX --dry-run --itemize-changes /backup/data/seafile/ /opt/seafile-data/seafile/\n\n# Restore (apply changes)\nsudo rsync -aHAX /backup/data/seafile/ /opt/seafile-data/seafile/\n\n# Optional: make the target an exact mirror of the backup\n# (will delete files present in the target but not in the backup;\n# add only after reviewing the dry-run output)\n# sudo rsync -aHAX --delete /backup/data/seafile/ /opt/seafile-data/seafile/\n Note
Trailing "/" on the source means "copy the directory contents".
Run with sudo to preserve owners, groups, ACLs (-A) and xattrs (-X).
"},{"location":"administration/clean_database/","title":"Clean Database","text":""},{"location":"administration/clean_database/#session","title":"Session","text":"Use the following command to clear expired session records in Seahub database:
cd seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py clearsessions\n Tip
Enter the Docker container, then go to /opt/seafile/seafile-server-latest
Use the following command to simultaneously clean up records older than 90 days in the Activity, sysadmin_extra_userloginlog, FileAudit, FileUpdate, FileHistory, PermAudit and FileTrash tables:
./seahub.sh python-env python3 seahub/manage.py clean_db_records\n You can also clean these tables manually as follows.
"},{"location":"administration/clean_database/#activity","title":"Activity","text":"Use the following command to clear the activity records:
use seahub_db;\nDELETE FROM Activity WHERE to_days(now()) - to_days(timestamp) > 90;\nDELETE FROM UserActivity WHERE to_days(now()) - to_days(timestamp) > 90;\n"},{"location":"administration/clean_database/#login","title":"Login","text":"Use the following command to clean the login records:
use seahub_db;\nDELETE FROM sysadmin_extra_userloginlog WHERE to_days(now()) - to_days(login_date) > 90;\n"},{"location":"administration/clean_database/#file-access","title":"File Access","text":"Use the following command to clean the file access records:
use seahub_db;\nDELETE FROM FileAudit WHERE to_days(now()) - to_days(timestamp) > 90;\n"},{"location":"administration/clean_database/#file-update","title":"File Update","text":"Use the following command to clean the file update records:
use seahub_db;\nDELETE FROM FileUpdate WHERE to_days(now()) - to_days(timestamp) > 90;\n"},{"location":"administration/clean_database/#permisson","title":"Permission","text":"Use the following command to clean the permission change audit records:
use seahub_db;\nDELETE FROM PermAudit WHERE to_days(now()) - to_days(timestamp) > 90;\n"},{"location":"administration/clean_database/#file-history","title":"File History","text":"Use the following command to clean the file history records:
use seahub_db;\nDELETE FROM FileHistory WHERE to_days(now()) - to_days(timestamp) > 90;\n"},{"location":"administration/clean_database/#clean-outdated-library-data","title":"Clean outdated library data","text":"Since version 6.2, we offer a command to clear outdated library records in the Seafile database, e.g. records that are not deleted after a library is deleted. This is because users can restore a deleted library, so we can't delete these records at library deletion time.
./seahub.sh python-env python3 seahub/manage.py clear_invalid_repo_data\n This command has been improved in version 10.0, including:
It clears the invalid data in small batches, avoiding consuming too many database resources in a short time.
Dry-run mode: if you just want to see how much invalid data can be deleted without actually deleting any data, you can use the dry-run option, e.g.
./seahub.sh python-env python3 seahub/manage.py clear_invalid_repo_data --dry-run=true\n"},{"location":"administration/clean_database/#clean-library-sync-tokens","title":"Clean library sync tokens","text":"There are two tables in Seafile db that are related to library sync tokens.
When you have many sync clients connected to the server, these two tables can contain a large number of rows, many of which are no longer actively used. You may clean the tokens that have not been used recently with the following SQL query:
delete t,i from RepoUserToken t, RepoTokenPeerInfo i where t.token=i.token and sync_time < xxxx;\n xxxx is the UNIX timestamp for the time before which tokens will be deleted.
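For example, the cutoff timestamp can be computed with GNU date. The 90-day window below is an assumption; pick whatever retention period suits you.

```shell
# UNIX timestamp for "90 days ago", to substitute for xxxx in the query above.
CUTOFF=$(date -d '90 days ago' +%s)
echo "$CUTOFF"
```

You can then run the DELETE statement with this value via the mysql/mariadb client.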
To be safe, you can first check how many tokens will be removed:
select * from RepoUserToken t, RepoTokenPeerInfo i where t.token=i.token and sync_time < xxxx;\n"},{"location":"administration/export_report/","title":"Export Report","text":"Since version 7.0.8 pro, Seafile provides commands to export reports via the command line.
Tip
Enter the Docker container, then go to /opt/seafile/seafile-server-latest
cd seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py export_user_traffic_report --date 201906\n"},{"location":"administration/export_report/#export-user-storage-report","title":"Export User Storage Report","text":"cd seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py export_user_storage_report\n"},{"location":"administration/export_report/#export-file-access-log","title":"Export File Access Log","text":"cd seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py export_file_access_log --start-date 2019-06-01 --end-date 2019-07-01\n"},{"location":"administration/logs/","title":"Seafile server logs","text":""},{"location":"administration/logs/#log-files-of-seafile-server","title":"Log files of seafile server","text":"The logs for seadoc server are located in the /opt/seadoc-data/logs directory.
The logs for seasearch server are located in the /opt/seasearch-data/log directory.
The logs for Nginx are located in the /opt/seafile-data/seafile/logs directory.
On the server side, Seafile stores the files in the libraries in an internal format. Seafile has its own representation of directories and files (similar to Git).
With a default installation, these internal objects are stored directly in the server's file system (such as ext4 or NTFS). But most file systems don't assure the integrity of file contents after a hard shutdown or system crash. So if new Seafile internal objects are being written when the system crashes, they can be corrupted after the system reboots. This will make part of the corresponding library inaccessible.
Warning
If you store the seafile-data directory in a battery-backed NAS (like EMC or NetApp), or use the S3 backend available in the Pro edition, the internal objects won't be corrupted.
Note
If your Seafile server is deployed with Docker, make sure you have entered the container before executing the following commands in this manual:
docker exec -it seafile bash\n This is also required for the other scripts in this document.
We provide a seaf-fsck.sh script to check the integrity of libraries. The seaf-fsck tool accepts the following arguments:
cd /opt/seafile/seafile-server-latest\n./seaf-fsck.sh [--repair|-r] [--export|-E export_path] [repo_id_1 [repo_id_2 ...]]\n There are three modes of operation for seaf-fsck:
Running seaf-fsck.sh without any arguments will run a read-only integrity check for all libraries.
./seaf-fsck.sh\n If you want to check integrity for specific libraries, just append the library id's as arguments:
./seaf-fsck.sh [library-id1] [library-id2] ...\n The output looks like:
[02/13/15 16:21:07] fsck.c(470): Running fsck for repo ca1a860d-e1c1-4a52-8123-0bf9def8697f.\n[02/13/15 16:21:07] fsck.c(413): Checking file system integrity of repo fsck(ca1a860d)...\n[02/13/15 16:21:07] fsck.c(35): Dir 9c09d937397b51e1283d68ee7590cd9ce01fe4c9 is missing.\n[02/13/15 16:21:07] fsck.c(200): Dir /bf/pk/(9c09d937) is corrupted.\n[02/13/15 16:21:07] fsck.c(105): Block 36e3dd8757edeb97758b3b4d8530a4a8a045d3cb is corrupted.\n[02/13/15 16:21:07] fsck.c(178): File /bf/02.1.md(ef37e350) is corrupted.\n[02/13/15 16:21:07] fsck.c(85): Block 650fb22495b0b199cff0f1e1ebf036e548fcb95a is missing.\n[02/13/15 16:21:07] fsck.c(178): File /01.2.md(4a73621f) is corrupted.\n[02/13/15 16:21:07] fsck.c(514): Fsck finished for repo ca1a860d.\n The corrupted files and directories are reported in the above message. By the way, you may also see output like the following:
[02/13/15 16:36:11] Commit 6259251e2b0dd9a8e99925ae6199cbf4c134ec10 is missing\n[02/13/15 16:36:11] fsck.c(476): Repo ca1a860d HEAD commit is corrupted, need to restore to an old version.\n[02/13/15 16:36:11] fsck.c(314): Scanning available commits...\n[02/13/15 16:36:11] fsck.c(376): Find available commit 1b26b13c(created at 2015-02-13 16:10:21) for repo ca1a860d.\n This means the head commit (current state of the library) recorded in database is not consistent with the library data. In such case, fsck will try to find the last consistent state and check the integrity in that state.
Tip
If you have many libraries, it's helpful to save the fsck output into a log file for later analysis.
"},{"location":"administration/seafile_fsck/#repairing-corruption","title":"Repairing Corruption","text":"Corruption repair in seaf-fsck basically works in two steps:
Running the following command repairs all the libraries:
./seaf-fsck.sh --repair\n Most of the time, you run the read-only integrity check first to find out which libraries are corrupted, and then repair specific libraries with the following command:
./seaf-fsck.sh --repair [library-id1] [library-id2] ...\n After repairing, seaf-fsck includes the list of corrupted files and folders in the library history, so it's much easier to locate corrupted paths.
"},{"location":"administration/seafile_fsck/#best-practice-for-repairing-a-library","title":"Best Practice for Repairing a Library","text":"To check all libraries and find out which library is corrupted, the system admin can run seaf-fsck.sh without any argument and save the output to a log file. Search for keyword \"Fail\" in the log file to locate corrupted libraries. You can run seaf-fsck to check all libraries when your Seafile server is running. It won't damage or change any files.
When the system admin finds that a library is corrupted, he/she should run seaf-fsck.sh with "--repair" for the library. After the command fixes the library, the admin should inform users to recover files from other places. There are two ways:
Starting from Pro edition 7.1.5, an option is added to speed up FSCK. Most of the running time of seaf-fsck is spent on calculating hashes for file contents. This hash is compared with the block object ID; if they're not consistent, the block is detected as corrupted.
In many cases, the file contents are not corrupted; some objects are just missing from the system. So it's enough to only check for object existence, which greatly speeds up the fsck process.
To skip checking file contents, add the --shallow or -s option to seaf-fsck.
You can use seaf-fsck to export all the files in libraries to an external file system (such as ext4). This procedure doesn't rely on the Seafile database. As long as you have your seafile-data directory, you can always export your files from Seafile to an external file system. The command for this operation is
./seaf-fsck.sh --export top_export_path [library-id1] [library-id2] ...\n The argument top_export_path is a directory to place the exported files. Each library will be exported as a sub-directory of the export path. If you don't specify library ids, all libraries will be exported.
Note
Currently only un-encrypted libraries can be exported. Encrypted libraries will be skipped.
"},{"location":"administration/seafile_fsck/#checking-file-size","title":"Checking file size","text":"Starting from version 13.0.9, fsck has added an option to check whether the file size matches the actual file content. Some problematic clients may upload incorrect blocks, causing the actual file size to not match the file content. With this option, you can detect files with size mismatches, along with the method and time of their upload.
To check whether the file size matches, add the --check-file-size or -S option to seaf-fsck.
Seafile uses storage de-duplication technology to reduce storage usage. The underlying data blocks will not be removed immediately after you delete a file or a library. As a result, the number of unused data blocks will increase on Seafile server.
To release the storage space occupied by unused blocks, you have to run a \"garbage collection\" program to clean up unused blocks on your server.
The GC program cleans up two types of unused blocks:
Note
If your Seafile server is deployed with Docker, make sure you have entered the container before executing the script:
docker exec -it seafile bash\n All scripts in this document are located in /opt/seafile/seafile-server-latest:
cd /opt/seafile/seafile-server-latest # valid for both Docker-based and binary-package-based Seafile\n This is also required for the other scripts in this document.
"},{"location":"administration/seafile_gc/#dry-run-mode","title":"Dry-run Mode","text":"To see how much garbage can be collected without actually removing any garbage, use the dry-run option:
./seaf-gc.sh --dry-run [repo-id1] [repo-id2] ...\n The output should look like:
[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo My Library(ffa57d93)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 265.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo ffa57d93.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 5 commits, 265 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 265 blocks total, about 265 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo aa(f3d0a8d0)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 5.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo f3d0a8d0.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 8 commits, 5 blocks.\n[03/19/15 19:41:49] gc-core.c(264): Populating index for sub-repo 9217622a.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 4 commits, 4 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 5 blocks total, about 9 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo test2(e7d26d93)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 507.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo e7d26d93.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 577 commits, 507 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 
507 blocks total, about 507 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:50] seafserv-gc.c(124): === Repos deleted by users ===\n[03/19/15 19:41:50] seafserv-gc.c(145): === GC is finished ===\n\n[03/19/15 19:41:50] Following repos have blocks to be removed:\nrepo-id1\nrepo-id2\nrepo-id3\n If you give specific library ids, only those libraries will be checked; otherwise all libraries will be checked.
repos have blocks to be removed
Notice that at the end of the output there is a "repos have blocks to be removed" section. It contains the list of libraries that have garbage blocks. Later, when you run GC without the --dry-run option, you can use these library ids as input arguments to the GC program.
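A sketch of feeding that list back into GC: the log excerpt below is a made-up sample mimicking the tail of the dry-run output shown above; in practice you would capture the output of ./seaf-gc.sh --dry-run instead.

```shell
# Sample tail of a dry-run log (format as shown above).
cat > gc-dry-run.log <<'EOF'
[03/19/15 19:41:50] Following repos have blocks to be removed:
repo-id1
repo-id2
EOF

# Extract the library IDs listed after the marker line.
ids=$(sed -n '/Following repos have blocks to be removed:/,$p' gc-dry-run.log | tail -n +2)
echo $ids

# Then pass them to a real GC run:
# ./seaf-gc.sh $ids
```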
"},{"location":"administration/seafile_gc/#removing-garbage","title":"Removing Garbage","text":"To actually remove garbage blocks, run without the --dry-run option:
./seaf-gc.sh [repo-id1] [repo-id2] ...\n If library ids are specified, only those libraries will be checked for garbage.
As described before, there are two types of garbage blocks to be removed. Sometimes just removing the first type of blocks (those that belong to deleted libraries) is good enough. In this case, the GC program won't bother to check the libraries for outdated historic blocks. The \"-r\" option implements this feature:
./seaf-gc.sh -r\n Success
Libraries deleted by the users are not immediately removed from the system. Instead, they're moved into a \"trash\" in the system admin page. Before they're cleared from the trash, their blocks won't be garbage collected.
"},{"location":"administration/seafile_gc/#removing-fs-objects","title":"Removing FS objects","text":"Since Pro server 8.0.6 and community edition 9.0, you can remove garbage fs objects. It should be run without the --dry-run option:
./seaf-gc.sh --rm-fs\n Bug reports
This command has a bug before Pro Edition 10.0.15 and Community Edition 11.0.7. It could cause virtual libraries (e.g. shared folders) to fail to merge into their parent libraries. Please avoid using this option in the affected versions, and contact our support team if you are affected by this bug.
"},{"location":"administration/seafile_gc/#using-multiple-threads-in-gc","title":"Using Multiple Threads in GC","text":"You can specify the thread number in GC. By default,
You can specify the thread number with the "-t" option, which can be used together with all other options. Each thread does GC on one library. For example, the following command uses 20 threads to GC all libraries:
./seaf-gc.sh -t 20\n Since the threads run concurrently, the output of different threads may be interleaved. The library ID is printed in each line of output.
"},{"location":"administration/seafile_gc/#run-gc-based-on-library-id-prefix","title":"Run GC based on library ID prefix","text":"Since GC usually runs quite slowly as it needs to traverse the entire library history. You can use multiple threads to run GC in parallel. For even larger deployments, it's also desirable to run GC on multiple server in parallel.
A simple pattern to divide the workload among multiple GC servers is to assign libraries to servers based on library ID. Since Pro edition 7.1.5, this is supported: you can add the "--id-prefix" option to seaf-gc.sh to specify a library ID prefix. For example, the command below will only process libraries whose ID starts with "a123".
./seaf-gc.sh --id-prefix a123\n"},{"location":"administration/security_features/","title":"Security Questions","text":""},{"location":"administration/security_features/#how-is-the-connection-between-client-and-server-encrypted","title":"How is the connection between client and server encrypted?","text":"Seafile uses HTTP(S) to sync files between client and server (since version 4.1.0).
"},{"location":"administration/security_features/#encrypted-library","title":"Encrypted Library","text":"Seafile provides a feature called encrypted library to protect your privacy. The file encryption/decryption is performed on client-side when using the desktop client for file synchronization. The password of an encrypted library is not stored on the server. Even the system admin of the server can't view the file contents.
There are a few limitations of this feature:
Client-side encryption has worked on the iOS client since version 2.1.6, and the Android client has supported it since version 2.1.0. But since version 3.0.0, the iOS and Android clients have dropped support for client-side encryption; you need to send the password to the server to encrypt/decrypt files.
"},{"location":"administration/security_features/#how-does-an-encrypted-library-work","title":"How does an encrypted library work?","text":"When you create an encrypted library, you'll need to provide a password for it. All the data in that library will be encrypted with the password before uploading it to the server (see limitations above).
"},{"location":"administration/security_features/#encryptiondecryption-procedure","title":"Encryption/Decryption procedure","text":"There are currently two supported encryption protocol versions for encrypted libraries, version 2 and versioin 4. The two versions shares the same basic procedure so we first describe the procedure.
The only difference between version 2 and version 4 is the usage of the salt for the secure hash algorithm. In version 2, all libraries share the same fixed salt. In version 4, each library uses a separate, randomly generated salt.
"},{"location":"administration/security_features/#secure-hash-algorithms-for-password-verification","title":"Secure hash algorithms for password verification","text":"A secure hash algorithm is used to derive the key/IV pair for encrypting the file key. So it's critical to choose a relatively costly algorithm to prevent brute-force password guessing.
Before version 12, a fixed secure hash algorithm (PBKDF2-SHA256 with 1000 iterations) was used, which is far from secure by today's standards.
Since Seafile server version 12, the admin can choose a proper secure hash algorithm. Currently two hash algorithms are supported.
The above encryption procedure can be executed on the desktop and mobile clients. The Seahub browser client uses a different encryption procedure that happens on the server. Because of this, your password will be transferred to the server.
When you sync an encrypted library to the desktop, the client needs to verify your password. When you create the library, a \"magic token\" is derived from the password and the library ID. This token is stored with the library on the server side. The client uses this token to check whether your password is correct before you sync the library. The magic token is generated by the secure hash algorithm chosen when the library was created.
For maximum security, the plain-text password isn't saved on the client side either. The client only saves the key/iv pair derived from the \"file key\", which is used to decrypt the data. So if you forget the password, you won't be able to recover it or access your data on the server.
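The key-derivation steps above can be sketched roughly as follows. This is an illustrative sketch only: the 16-byte key / 16-byte IV split and the helper names are assumptions, not Seafile's actual implementation; only the PBKDF2-SHA256 algorithm, the 1000-iteration pre-version-12 default, and the fixed-vs-random salt difference come from the text above.

```python
import hashlib
import os

def derive_key_iv(password: str, salt: bytes, iterations: int = 1000):
    # PBKDF2-SHA256, as used before version 12 (1000 iterations).
    # Splitting the 32 bytes of output into a 16-byte key and a
    # 16-byte IV is an illustrative assumption, not Seafile's scheme.
    material = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                   salt, iterations, dklen=32)
    return material[:16], material[16:]

# Protocol version 2: one fixed salt shared by all libraries
# (the value here is made up for illustration).
fixed_salt = b"\x00" * 8
key_v2, iv_v2 = derive_key_iv("library-password", fixed_salt)

# Protocol version 4: a separate, randomly generated salt per library.
random_salt = os.urandom(32)
key_v4, iv_v4 = derive_key_iv("library-password", random_salt)
```

The per-library random salt in version 4 means that two libraries with the same password still produce different derived keys, which is what defeats precomputed-table attacks.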
"},{"location":"administration/security_features/#why-fileserver-delivers-every-content-to-everybody-knowing-the-content-url-of-an-unshared-private-file","title":"Why does the fileserver deliver content to anybody who knows the content URL of an unshared private file?","text":"When a file download link is clicked, a random URL is generated for the user to access the file from the fileserver. This URL can only be accessed once; after that, all access to it is denied. So even if someone else happens to learn the URL, they can't access it anymore.
This was changed in Seafile server version 12. Instead of a random URL, a URL like 'https://yourserver.com/seafhttp/repos/{library id}/file_path' is used for downloading the file. Authorization will be done by checking cookies or API tokens on the server side. This makes the URL more cache-friendly while still being secure.
"},{"location":"administration/security_features/#how-does-seafile-store-user-login-password","title":"How does Seafile store user login passwords?","text":"User login passwords are stored in hashed form only. Note that the user login password is different from the passwords used for encrypted libraries. In the database, its format is
PBKDF2SHA256$iterations$salt$hash\n The record is divided into 4 parts by the $ sign.
To calculate the hash:
PBKDF2(password, salt, iterations). The number of iterations is currently 10000. Starting from version 6.0, we added Two-Factor Authentication to enhance account security.
There are two ways to enable this feature:
System admin can tick the checkbox in the \"Password\" section of the system settings page, or
just add the following settings to seahub_settings.py and restart the service.
ENABLE_TWO_FACTOR_AUTH = True\nTWO_FACTOR_DEVICE_REMEMBER_DAYS = 30 # optional, default 90 days.\n After that, there will be a \"Two-Factor Authentication\" section in the user profile page.
Users can use the Google Authenticator app on their smartphone to scan the QR code.
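Going back to the login-password record format described earlier (PBKDF2SHA256$iterations$salt$hash), a minimal verification sketch could look like the following. The hex encoding of the salt and hash and the function name are assumptions for illustration; only the four-part $-separated layout and the 10000-iteration count come from the documentation above.

```python
import hashlib

def verify_login_password(password: str, record: str) -> bool:
    # Record layout from the docs: PBKDF2SHA256$iterations$salt$hash.
    # Treating salt and hash as hex strings is an assumption of this sketch.
    algorithm, iterations, salt, stored_hash = record.split("$")
    if algorithm != "PBKDF2SHA256":
        return False
    derived = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                  bytes.fromhex(salt), int(iterations))
    return derived.hex() == stored_hash

# Build a sample record with the documented 10000 iterations.
salt_hex = "a1b2c3d4"
digest = hashlib.pbkdf2_hmac("sha256", b"secret",
                             bytes.fromhex(salt_hex), 10000).hex()
record = f"PBKDF2SHA256$10000${salt_hex}${digest}"
```

Because only the salted hash is stored, the server can verify a login attempt without ever persisting the plain-text password.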
"},{"location":"changelog/changelog-for-seafile-professional-server-old/","title":"Seafile Professional Server Changelog (old)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#71","title":"7.1","text":"Upgrade
Please check our document for how to upgrade to 7.1.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#7122-20210729","title":"7.1.22 (2021/07/29)","text":"Potential breaking change in Seafile Pro 7.1.16: You can set the maximum number of files contained in a library that can be synced by the Seafile client; the default is 100000. When you download a repo, the Seafile client requests the fs id list, and you can control the timeout of this request through the fs_id_list_request_timeout option, which defaults to 5 minutes. These two options were added to prevent long fs-id-list requests from overloading the server. If you have large libraries on the server, the limits can cause an \"internal server error\" to be returned to the client, so set both options high enough for your largest libraries.
[fileserver]\nmax_sync_file_count = 100000\nfs_id_list_request_timeout = 300\n"},{"location":"changelog/changelog-for-seafile-professional-server-old/#7115-20210318","title":"7.1.15 (2021/03/18)","text":"Since seafile-pro 7.0.0, we have upgraded Elasticsearch to 5.6. As Elasticsearch 5.6 relies on the Java 8 environment and can't run as root, you need to run Seafile with a non-root user and upgrade the Java version.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#7019-20200907","title":"7.0.19 (2020/09/07)","text":"-Xms1g -Xmx1g In version 6.3, Django is upgraded to version 1.11. Django 1.8, which was used in version 6.2, was deprecated in April 2018.
With this upgrade, the fast-cgi mode is no longer supported. You need to configure Seafile behind Nginx/Apache in WSGI mode.
The way to run Seahub on another port has also changed. You need to modify the configuration file conf/gunicorn.conf instead of running ./seahub.sh start <another-port>.
Version 6.3 also changed the database table for file comments. If you have used this feature, you need to migrate old file comments using the following command after upgrading to 6.3:
./seahub.sh python-env seahub/manage.py migrate_file_comment\n Note, this command should be run while Seafile server is running.
Version 6.3 changed '/shib-login' to '/sso'. If you use Shibboleth, you need to update your Apache/Nginx config. Please check the updated document: shibboleth config v6.3
Version 6.3 adds a new option for file search (seafevents.conf):
[INDEX FILES]\n...\nhighlight = fvh\n...\n This option improves search speed significantly (about 10x) when the search result contains large pdf/doc files. But you need to rebuild the search index if you want to add this option.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#6314-20190521","title":"6.3.14 (2019/05/21)","text":"New features
From 6.2, it is recommended to use WSGI mode for communication between Seahub and Nginx/Apache. Two steps are needed if you'd like to switch to WSGI mode:
./seahub.sh start instead of ./seahub.sh start-fastcgi. The configuration of Nginx is as follows:
location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n # used for view/edit office file via Office Online Server\n client_max_body_size 0;\n\n access_log /var/log/nginx/seahub.access.log;\n error_log /var/log/nginx/seahub.error.log;\n }\n The configuration of Apache is as follows:
# seahub\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n"},{"location":"changelog/changelog-for-seafile-professional-server-old/#6213-2018518","title":"6.2.13 (2018.5.18)","text":"file already exists error for the first time.per_page parameter to 10 when search file via api.repo_owner field to library search web api.ENABLE_REPO_SNAPSHOT_LABEL = True to turn the feature on)You can follow the document on minor upgrade.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#619-20170928","title":"6.1.9 \uff082017.09.28\uff09","text":"Web UI Improvement:
Improvement for admins:
System changes:
ENABLE_WIKI = True in seahub_settings.py)You can follow the document on minor upgrade.
Special note for upgrading a cluster:
In version 6.0, the folder download mechanism has been updated. This requires that, in a cluster deployment, seafile-data/httptemp folder must be in an NFS share. You can make this folder a symlink to the NFS share.
cd /data/haiwen/\nln -s /nfs-share/seafile-httptemp seafile-data/httptemp\n The httptemp folder only contains temp files for downloading/uploading file on web UI. So there is no reliability requirement for the NFS share. You can export it from any node in the cluster.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#6013-20170508","title":"6.0.13 (2017.05.08)","text":"Improvement for admin
Other
# -*- coding: utf-8 -*- to seahub_settings.py, so that admin can use non-ascii characters in the file.[Audit] and [AUDIT] in seafevent.confPro only features
Note: Two new options are added in version 4.4, both are in seahub_settings.py
This version contains no database table change.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#449-20160229","title":"4.4.9 (2016.02.29)","text":"LDAP improvements and fixes
New features:
Pro only:
Fixes:
Note: this version contains no database table change from v4.2. But the old search index will be deleted and regenerated.
Note: when upgrading from v4.2 and using a cluster, a new option COMPRESS_CACHE_BACKEND = 'locmem://' should be added to seahub_settings.py
About \"Open via Client\": The web interface will call Seafile desktop client via \"seafile://\" protocol to use local program to open a file. If the file is already synced, the local file will be opened. Otherwise it is downloaded and uploaded after modification. Need client version 4.3.0+
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#430-20150725","title":"4.3.0 (2015.07.25)","text":"Usability improvements
Pro only features:
Others
THUMBNAIL_DEFAULT_SIZE = 24, instead of THUMBNAIL_DEFAULT_SIZE = '24'Note: because Seafile changed how office preview works in version 4.2.2, you need to clean the old generated files using the command:
rm -rf /tmp/seafile-office-output/html/\n"},{"location":"changelog/changelog-for-seafile-professional-server-old/#424-20150708","title":"4.2.4 (2015.07.08)","text":"In the old way, the whole file is converted to HTML5 before being returned to the client. By converting an office file to HTML5 page by page, the first page is displayed faster. By displaying each page in a separate frame, the quality of some files is improved too.
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#421-20150630","title":"4.2.1 (2015.06.30)","text":"Improved account management
Important
New features
Others
Pro only updates
Usability
Security Improvement
Platform
Pro only updates
Updates in community edition too
Important
Small
Pro edition only:
Syncing
Platform
Web
Web
Platform
Web
Platform
Misc
WebDAV
pro.py search --clear commandPlatform
Web
Web for Admin
Platform
Web
Web for Admin
API
Web
API
Platform
You can check Seafile release table to find the lifetime of each release and current supported OS: https://cloud.seatable.io/dtable/external-links/a85d4221e41344c19566/?tid=0000&vid=0000
"},{"location":"changelog/changelog-for-seafile-professional-server/#130","title":"13.0","text":"Upgrade
Please check our document for how to upgrade to 13.0
"},{"location":"changelog/changelog-for-seafile-professional-server/#13014-2025-11-28","title":"13.0.14 (2025-11-28)","text":".env, it is recommended to use environment variables to config database and memcacheUpgrade
Please check our document for how to upgrade to 12.0
"},{"location":"changelog/changelog-for-seafile-professional-server/#12018-2025-11-17","title":"12.0.18 (2025-11-17)","text":".env file.ccnet.conf is removed. Some of its configuration items are moved from .env file, others are read from items in seafile.conf with same name.Upgrade
Please check our document for how to upgrade to 11.0
"},{"location":"changelog/changelog-for-seafile-professional-server/#11019-2025-03-21","title":"11.0.19 (2025-03-21)","text":"Seafile
SDoc editor 0.8
Seafile
SDoc editor 0.7
SDoc editor 0.6
Major changes
UI Improvements
Pro edition only changes
Other changes
Upgrade
Please check our document for how to upgrade to 10.0.
Note
If you upgrade to version 10.0.18+ from 10.0.16 or below, you need to upgrade SQLAlchemy to version 1.4.44+ if you use a binary-based installation. Otherwise the \"activities\" page will not work.
"},{"location":"changelog/changelog-for-seafile-professional-server/#10018-2024-11-01","title":"10.0.18 (2024-11-01)","text":"This release is for Docker image only
Note, after upgrading to this version, you need to upgrade the Python libraries in your server \"pillow==10.2.* captcha==0.5.* django_simple_captcha==0.5.20\"
"},{"location":"changelog/changelog-for-seafile-professional-server/#10012-2024-01-16","title":"10.0.12 (2024-01-16)","text":"Upgrade
Please check our document for how to upgrade to 9.0.
"},{"location":"changelog/changelog-for-seafile-professional-server/#9016-2023-03-22","title":"9.0.16 (2023-03-22)","text":"Note: the included lxml library has been removed for compatibility reasons. The library is used by the published-libraries feature and the WebDAV feature. You need to install lxml manually after upgrading to 9.0.7. Use the command pip3 install lxml to install it.
The new file-server written in golang serves HTTP requests to upload/download/sync files. It provides three advantages:
You can turn the golang file server on by adding the following configuration to seafile.conf
[fileserver]\nuse_go_fileserver = true\n"},{"location":"changelog/changelog-for-seafile-professional-server/#901","title":"9.0.1","text":"Deprecated
"},{"location":"changelog/changelog-for-seafile-professional-server/#900","title":"9.0.0","text":"Deprecated
"},{"location":"changelog/changelog-for-seafile-professional-server/#80","title":"8.0","text":"Upgrade
Please check our document for how to upgrade to 8.0.
"},{"location":"changelog/changelog-for-seafile-professional-server/#8017-20220110","title":"8.0.17 (2022/01/10)","text":"Potential breaking change in Seafile Pro 8.0.3: You can set the maximum number of files contained in a library that can be synced by the Seafile client; the default is 100000. When you download a repo, the Seafile client requests the fs id list, and you can control the timeout of this request through the fs_id_list_request_timeout option, which defaults to 5 minutes. These two options were added to prevent long fs-id-list requests from overloading the server. If you have large libraries on the server, the limits can cause an \"internal server error\" to be returned to the client, so set both options high enough for your largest libraries.
[fileserver]\nmax_sync_file_count = 100000\nfs_id_list_request_timeout = 300\n"},{"location":"changelog/changelog-for-seafile-professional-server/#802-20210421","title":"8.0.2 (2021/04/21)","text":"._ cloud file browser
others
This version has a few bugs. We will fix them soon.
"},{"location":"changelog/client-changelog/#601-20161207","title":"6.0.1 (2016/12/07)","text":"Note: the Seafile client now supports HiDPI under Windows; you should remove the QT_DEVICE_PIXEL_RATIO setting if you had set one previously.
In the old version, you would sometimes see strange directories such as \"Documents~1\" synced to the server. This is because the old version did not handle long paths correctly.
"},{"location":"changelog/client-changelog/#406-20150109","title":"4.0.6 (2015/01/09)","text":"In the previous version, when you open an office file on Windows, it is locked by the operating system. If another person modifies this file on another computer, syncing will stop until you close the locked file. In this new version, the syncing process will continue. The locked file will not be synced to the local computer, but other files will not be affected.
"},{"location":"changelog/client-changelog/#403-20141203","title":"4.0.3 (2014/12/03)","text":"You have to update the clients on all PCs. If one PC does not use v3.1.11, when the \"deleting folder\" information is synced to this PC, it will fail to delete the folder completely, and the folder will be synced back to the other PCs. So the other PCs will see the folder reappear.
"},{"location":"changelog/client-changelog/#3110-20141113","title":"3.1.10 (2014/11/13)","text":"Note: This version contains a bug that prevents you from logging in to your private servers.
1.8.1
1.8.0
1.7.3
1.7.2
1.7.1
1.7.0
1.6.2
1.6.1
1.6.0
1.5.3
1.5.2
1.5.1
1.5.0
S: because a few programs will automatically try to create files in S:Feature changes
PostgreSQL support is dropped, as we have rewritten the database access code to remove a copyright issue.
Upgrade
Please check our document for how to upgrade to 7.1.
"},{"location":"changelog/server-changelog-old/#715-20200922","title":"7.1.5 (2020/09/22)","text":"Feature changes
In version 6.3, users can create public or private Wikis. In version 7.0, private Wikis are replaced by the column mode view. Every library has a column mode view, so users don't need to explicitly create private Wikis.
Public Wikis are now renamed to published libraries.
Upgrade
Just follow our document on major version upgrade. No special steps are needed.
"},{"location":"changelog/server-changelog-old/#705-20190923","title":"7.0.5 (2019/09/23)","text":"In version 6.3, Django is upgraded to version 1.11. Django 1.8, which was used in version 6.2, was deprecated in April 2018.
With this upgrade, the fast-cgi mode is no longer supported. You need to configure Seafile behind Nginx/Apache in WSGI mode.
The way to run Seahub on another port has also changed. You need to modify the configuration file conf/gunicorn.conf instead of running ./seahub.sh start <another-port>.
Version 6.3 also changed the database table for file comments. If you have used this feature, you need to migrate old file comments using the following command after upgrading to 6.3:
./seahub.sh python-env seahub/manage.py migrate_file_comment\n Note, this command should be run while Seafile server is running.
"},{"location":"changelog/server-changelog-old/#634-20180915","title":"6.3.4 (2018/09/15)","text":"From 6.2, it is recommended to use WSGI mode for communication between Seahub and Nginx/Apache. Two steps are needed if you'd like to switch to WSGI mode:
./seahub.sh start instead of ./seahub.sh start-fastcgi. The configuration of Nginx is as follows:
location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n # used for view/edit office file via Office Online Server\n client_max_body_size 0;\n\n access_log /var/log/nginx/seahub.access.log;\n error_log /var/log/nginx/seahub.error.log;\n }\n The configuration of Apache is as follows:
# seahub\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n"},{"location":"changelog/server-changelog-old/#625-20180123","title":"6.2.5 (2018/01/23)","text":"ENABLE_REPO_SNAPSHOT_LABEL = True to turn the feature on)If you upgrade from 6.0 and you'd like to use the feature video thumbnail, you need to install ffmpeg package:
# for ubuntu 16.04\napt-get install ffmpeg\npip install pillow moviepy\n\n# for Centos 7\nyum -y install epel-release\nrpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro\nyum -y install ffmpeg ffmpeg-devel\npip install pillow moviepy\n"},{"location":"changelog/server-changelog-old/#612-20170815","title":"6.1.2 (2017.08.15)","text":"Web UI Improvement:
Improvement for admins:
System changes:
Note: If you ever used 6.0.0, 6.0.1 or 6.0.2 with SQLite as the database and encountered a problem with desktop/mobile client login, follow https://github.com/haiwen/seafile/pull/1738 to fix the problem.
"},{"location":"changelog/server-changelog-old/#609-20170330","title":"6.0.9 (2017.03.30)","text":"Improvement for admin
# -*- coding: utf-8 -*- to seahub_settings.py, so that admin can use non-ascii characters in the file.Other
Warning:
Note: when upgrading from 5.1.3 or a lower version to 5.1.4+, you need to install python-urllib3 (or python2-urllib3 for Arch Linux) manually:
# for Ubuntu\nsudo apt-get install python-urllib3\n# for CentOS\nsudo yum install python-urllib3\n"},{"location":"changelog/server-changelog-old/#514-20160723","title":"5.1.4 (2016.07.23)","text":"Note: downloading multiple files at once will be added in the next release.
Note: in this version, the group discussion is not re-implemented yet. It will be available when the stable version is released.
Note when upgrade to 5.0 from 4.4
You can follow the document on major upgrade (http://manual.seafile.com/deploy/upgrade.html) (URL might be deprecated)
In Seafile 5.0, we have moved all config files to folder conf, including:
If you want to downgrade from v5.0 to v4.4, you should manually copy these files back to the original place, then run minor_upgrade.sh to upgrade symbolic links back to version 4.4.
The 5.0 server is compatible with v4.4 and v4.3 desktop clients.
Common issues (solved) when upgrading to v5.0:
Improve seaf-fsck
Sharing link
[[ Pagename]].UI changes:
Config changes:
confTrash:
Admin:
Security:
New features:
Fixes:
Usability Improvement
Others
THUMBNAIL_DEFAULT_SIZE = 24, instead of THUMBNAIL_DEFAULT_SIZE = '24'Note when upgrade to 4.2 from 4.1:
If you deploy Seafile in a non-root domain, you need to add the following extra settings in seahub_settings.py:
COMPRESS_URL = MEDIA_URL\nSTATIC_URL = MEDIA_URL + '/assets/'\n"},{"location":"changelog/server-changelog-old/#423-20150618","title":"4.2.3 (2015.06.18)","text":"Usability
Security Improvement
Platform
Important
Small
Important
Small improvements
Syncing
Platform
Web
Web
Platform
Web
Platform
Platform
Web
WebDAV
<a>, <table>, <img> and a few other html elements in markdown to avoid XSS attack. Platform
Web
Web for Admin
Platform
Web
Web for Admin
API
Web
API
Platform
Web
Daemon
Web
Daemon
Web
For Admin
API
Seafile Web
Seafile Daemon
API
You can check Seafile release table to find the lifetime of each release and current supported OS: https://cloud.seatable.io/dtable/external-links/a85d4221e41344c19566/?tid=0000&vid=0000
"},{"location":"changelog/server-changelog/#130","title":"13.0","text":"Upgrade
Please check our document for how to upgrade to 13.0
"},{"location":"changelog/server-changelog/#13012-2025-10-24","title":"13.0.12 (2025-10-24)","text":"Deploying Seafile with binary package is no longer supported for community edition. We recommend you to migrate your existing Seafile deployment to docker based.
.env, it is recommended to use environment variables to config database and memcacheUpgrade
Please check our document for how to upgrade to 12.0
"},{"location":"changelog/server-changelog/#12014-2025-05-29","title":"12.0.14 (2025-05-29)","text":".env file.ccnet.conf is removed. Some of its configuration items are moved from .env file, others are read from items in seafile.conf with same name.Upgrade
Please check our document for how to upgrade to 11.0
"},{"location":"changelog/server-changelog/#11012-2024-08-14","title":"11.0.12 (2024-08-14)","text":"Seafile
Seafile
SDoc editor 0.8
Seafile
SDoc editor 0.7
Seafile
SDoc editor 0.6
Seafile
Seafile
SDoc editor 0.5
Seafile
SDoc editor 0.4
Seafile
SDoc editor 0.3
Seafile
SDoc editor 0.2
Upgrade
Please check our document for how to upgrade to 10.0.
"},{"location":"changelog/server-changelog/#1001-2023-04-11","title":"10.0.1 (2023-04-11)","text":"/accounts/login redirect by ?next= parameterNote: included lxml library is removed for some compatiblity reason. The library is used in published libraries feature and WebDAV feature. You need to install lxml manually after upgrade to 9.0.7. Use command pip3 install lxml to install it.
The new file-server written in golang serves HTTP requests to upload/download/sync files. It provides three advantages:
You can turn golang file-server on by adding following configuration in seafile.conf
[fileserver]\nuse_go_fileserver = true\n"},{"location":"changelog/server-changelog/#80","title":"8.0","text":"Please check our document for how to upgrade to 8.0.
"},{"location":"changelog/server-changelog/#808-20211206","title":"8.0.8 (2021/12/06)","text":"The config files used in Seafile include:
You can also modify most of the config items via web interface.The config items are saved in database table (seahub-db/constance_config). They have a higher priority over the items in config files.
"},{"location":"config/#the-design-of-configure-options","title":"The design of configure options","text":"There are now three places you can config Seafile server:
The web interface has the highest priority. It contains a subset of end-user oriented settings. In practise, you can disable settings via web interface for simplicity.
Environment variables contains system level settings that needed when initialize Seafile server or run Seafile server. Environment variables also have three categories:
The variables in the first category can be deleted after initialization. In the future, we will make more components to read config from environment variables, so that the third category is no longer needed.
"},{"location":"config/admin_roles_permissions/","title":"Roles and Permissions Support","text":"You can add/edit roles and permission for administrators. Seafile has four build-in admin roles:
default_admin, has all permissions.
system_admin, can only view system info and config system.
daily_admin, can only view system info, view statistic, manage library/user/group, view user log.
audit_admin, can only view system info and admin log.
All administrators will have default_admin role with all permissions by default. If you set an administrator to some other admin role, the administrator will only have the permissions you configured to True.
Seafile supports eight permissions for now, its configuration is very like common user role, you can custom it by adding the following settings to seahub_settings.py.
ENABLED_ADMIN_ROLE_PERMISSIONS = {\n 'system_admin': {\n 'can_view_system_info': True,\n 'can_config_system': True,\n },\n 'daily_admin': {\n 'can_view_system_info': True,\n 'can_view_statistic': True,\n 'can_manage_library': True,\n 'can_manage_user': True,\n 'can_manage_group': True,\n 'can_view_user_log': True,\n },\n 'audit_admin': {\n 'can_view_system_info': True,\n 'can_view_admin_log': True,\n },\n 'custom_admin': {\n 'can_view_system_info': True,\n 'can_config_system': True,\n 'can_view_statistic': True,\n 'can_manage_library': True,\n 'can_manage_user': True,\n 'can_manage_group': True,\n 'can_view_user_log': True,\n 'can_view_admin_log': True,\n },\n}\n"},{"location":"config/auth_switch/","title":"Switch authentication type","text":"Seafile Server supports the following external authentication types:
Since 11.0 version, switching between the types is possible, but any switch requires modifications of Seafile's databases.
Note
Before manually manipulating your database, make a database backup, so you can restore your system if anything goes wrong!
See more about make a database backup.
"},{"location":"config/auth_switch/#migrating-from-local-user-database-to-external-authentication","title":"Migrating from local user database to external authentication","text":"As an organisation grows and its IT infrastructure matures, the migration from local authentication to external authentication like LDAP, SAML, OAUTH is common requirement. Fortunately, the switch is comparatively simple.
"},{"location":"config/auth_switch/#general-procedure","title":"General procedure","text":"Configure and test the desired external authentication. Note the name of the provider you use in the config file. The user to be migrated should already be able to log in with this new authentication type, but he will be created as a new user with a new unique identifier, so he will not have access to his existing libraries. Note the uid from the social_auth_usersocialauth table. Delete this new, still empty user again.
Determine the ID of the user to be migrated in ccnet_db.EmailUser. For users created before version 11, the ID should be the user's email, for users created after version 11, the ID should be a string like xxx@auth.local.
Replace the password hash with an exclamation mark.
Create a new entry in social_auth_usersocialauth with the xxx@auth.local, your provider and the uid.
The login with the password stored in the local database is not possible anymore. After logging in via external authentication, the user has access to all his previous libraries.
"},{"location":"config/auth_switch/#example","title":"Example","text":"This example shows how to migrate the user with the username 12ae56789f1e4c8d8e1c31415867317c@auth.local from local database authentication to OAuth. The OAuth authentication is configured in seahub_settings.py with the provider name authentik-oauth. The uid of the user inside the Identity Provider is HR12345.
This is what the database looks like before these commands must be executed:
mysql> select email,left(passwd,25) from EmailUser where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+------------------------------+\n| email | left(passwd,25) |\n+---------------------------------------------+------------------------------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | PBKDF2SHA256$10000$4cdda6... |\n+---------------------------------------------+------------------------------+\n\nmysql> update EmailUser set passwd = '!' where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n\nmysql> insert into `social_auth_usersocialauth` (`username`, `provider`, `uid`, `extra_data`) values ('12ae56789f1e4c8d8e1c31415867317c@auth.local', 'authentik-oauth', 'HR12345', '');\n Note
The extra_data field store user's information returned from the provider. For most providers, the extra_data field is usually an empty character. Since version 11.0.3-Pro, the default value of the extra_data field is NULL.
Afterwards the databases should look like this:
mysql> select email,passwd from EmailUser where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+------- +\n| email | passwd |\n+---------------------------------------------+--------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | ! |\n+---------------------------------------------+--------+\n\nmysql> select username,provider,uid from social_auth_usersocialauth where username = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+-----------------+---------+\n| username | provider | uid |\n+---------------------------------------------+-----------------+---------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | authentik-oauth | HR12345 |\n+---------------------------------------------+-----------------+---------+\n"},{"location":"config/auth_switch/#migrating-from-one-external-authentication-to-another","title":"Migrating from one external authentication to another","text":"First configure the two external authentications and test them with a dummy user. Then, to migrate all the existing users you only need to make changes to the social_auth_usersocialauth table. No entries need to be deleted or created. You only need to modify the existing ones. The xxx@auth.local remains the same, you only need to replace the provider and the uid.
First, delete the entry in the social_auth_usersocialauth table that belongs to the particular user.
Then you can reset the user's password, e.g. via the web interface. The user will be assigned a local password and from now on the authentication against the local database of Seafile will be done.
More details about this option will follow soon.
"},{"location":"config/auto_login_seadrive/","title":"Auto Login to SeaDrive on Windows","text":"Kerberos is a widely used single sign-on (SSO) protocol. Auto login support relies on a Kerberos service. For server configuration, please read the remote user authentication documentation. You have to configure Apache to authenticate with Kerberos, which is out of the scope of this documentation. You can, for example, refer to this webpage.
"},{"location":"config/auto_login_seadrive/#technical-details","title":"Technical Details","text":"The client machine has to join the AD domain. In a Windows domain, the Kerberos Key Distribution Center (KDC) is implemented by the domain controller. Since the client machine has already been authenticated by the KDC when the Windows user logs in, a Kerberos ticket can be generated for the current user without requiring another login in the browser.
When a program using the WinHttp API tries to connect to a server, it can log in automatically through Integrated Windows Authentication. Internet Explorer and SeaDrive both use this mechanism.
The details of Integrated Windows Authentication are described below:
In short:
The Internet Options have to be configured as follows:
Open \"Internet Options\", select \"Security\" tab, select \"Local Intranet\" zone.
Note
The above configuration requires a reboot to take effect.
Next, test the auto login function in Internet Explorer: visit the website and click the \"Single Sign-On\" link. You should be logged in directly; otherwise, auto login is not working.
Note
The address used in the test must be the same as the address specified in the keytab file. Otherwise, the client machine can't get a valid ticket from Kerberos.
"},{"location":"config/auto_login_seadrive/#auto-login-on-seadrive","title":"Auto Login on SeaDrive","text":"SeaDrive will use the Kerberos login configuration from the Windows Registry under HKEY_CURRENT_USER/SOFTWARE/SeaDrive.
Key : PreconfigureServerAddr\nType : REG_SZ\nValue : <the url of seafile server>\n\nKey : PreconfigureUseKerberosLogin\nType : REG_SZ\nValue : <0|1> // 0 for normal login, 1 for SSO login\n The system wide configuration path is located at HKEY_LOCAL_MACHINE/SOFTWARE/Wow6432Node/SeaDrive.
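For example, the registry values above could be distributed as a .reg file like the following sketch (the server URL is a placeholder; adjust it to your own Seafile server):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\SOFTWARE\SeaDrive]
"PreconfigureServerAddr"="https://seafile.example.com"
"PreconfigureUseKerberosLogin"="1"
```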
SeaDrive can be installed silently with the following command (requires admin privileges):
msiexec /i seadrive.msi /quiet /qn /log install.log\n"},{"location":"config/auto_login_seadrive/#auto-login-via-group-policy","title":"Auto Login via Group Policy","text":"The configuration of Internet Options : https://docs.microsoft.com/en-us/troubleshoot/browsers/how-to-configure-group-policy-preference-settings
The configuration of Windows Registry : https://thesolving.com/server-room/how-to-deploy-a-registry-key-via-group-policy/
"},{"location":"config/config_seafile_with_ADFS/","title":"config seafile with ADFS","text":""},{"location":"config/config_seafile_with_ADFS/#requirements","title":"Requirements","text":"To use ADFS to log in to your Seafile, you need the following components:
A Windows Server with ADFS installed. For configuring and installing ADFS, you can see this article.
A valid SSL certificate for the ADFS server; here we use adfs-server.adfs.com as the example domain name.
A valid SSL certificate for the Seafile server; here we use demo.seafile.com as the example domain name.
You can generate them by:
``` openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout sp.key -out sp.crt
These x.509 certs are used to sign and encrypt elements like NameID and Metadata for SAML. \n\n Then copy these two files to **<seafile-install-path>/seahub-data/certs**. (if the certs folder does not exist, create it.)\n\n2. x.509 cert from IdP (Identity Provider)\n\n 1. Log into the ADFS server and open the ADFS management.\n\n 1. Double click **Service** and choose **Certificates**.\n\n 1. Export the **Token-Signing** certificate:\n\n 1. Right-click the certificate and select **View Certificate**.\n 1. Select the **Details** tab.\n 1. Click **Copy to File** (select **DER encoded binary X.509**).\n\n 1. Convert this certificate to PEM format, rename it to **idp.crt**\n\n 1. Then copy it to **<seafile-install-path>/seahub-data/certs**.\n\n### Prepare IdP Metadata File\n\n1. Open https://adfs-server.adfs.com/federationmetadata/2007-06/federationmetadata.xml\n\n1. Save this xml file, rename it to **idp_federation_metadata.xml**\n\n1. Copy it to **<seafile-install-path>/seahub-data/certs**.\n\n### Install Requirements on Seafile Server\n\n- For Ubuntu 16.04\n sudo apt install xmlsec1\n sudo pip install cryptography djangosaml2==0.15.0\n\n### Config Seafile\n\nAdd the following lines to **seahub_settings.py**\n from os import path import saml2 import saml2.saml"},{"location":"config/config_seafile_with_ADFS/#update-following-lines-according-to-your-situation","title":"update following lines according to your situation","text":"CERTS_DIR = '/seahub-data/certs' SP_SERVICE_URL = 'https://demo.seafile.com' XMLSEC_BINARY = '/usr/local/bin/xmlsec1' ATTRIBUTE_MAP_DIR = '/seafile-server-latest/seahub-extra/seahub_extra/adfs_auth/attribute-maps' SAML_ATTRIBUTE_MAPPING = { 'DisplayName': ('display_name', ), 'ContactEmail': ('contact_email', ), 'Department': ('department', ), 'Telephone': ('telephone', ), }"},{"location":"config/config_seafile_with_ADFS/#update-the-idp-section-in-sampl_config-according-to-your-situation-and-leave-others-as-default","title":"update the 'idp' section in 
SAML_CONFIG according to your situation, and leave others as default","text":"
ENABLE_ADFS_LOGIN = True EXTRA_AUTHENTICATION_BACKENDS = ( 'seahub_extra.adfs_auth.backends.Saml2Backend', ) SAML_USE_NAME_ID_AS_USERNAME = True LOGIN_REDIRECT_URL = '/saml2/complete/' SAML_CONFIG = { # full path to the xmlsec1 binary program 'xmlsec_binary': XMLSEC_BINARY,
'allow_unknown_attributes': True,\n\n# your entity id, usually your subdomain plus the url to the metadata view\n'entityid': SP_SERVICE_URL + '/saml2/metadata/',\n\n# directory with attribute mapping\n'attribute_map_dir': ATTRIBUTE_MAP_DIR,\n\n# this block states what services we provide\n'service': {\n # we are just a lonely SP\n 'sp' : {\n \"allow_unsolicited\": True,\n 'name': 'Federated Seafile Service',\n 'name_id_format': saml2.saml.NAMEID_FORMAT_EMAILADDRESS,\n 'endpoints': {\n # url and binding to the assertion consumer service view\n # do not change the binding or service name\n 'assertion_consumer_service': [\n (SP_SERVICE_URL + '/saml2/acs/',\n saml2.BINDING_HTTP_POST),\n ],\n # url and binding to the single logout service view\n # do not change the binding or service name\n 'single_logout_service': [\n (SP_SERVICE_URL + '/saml2/ls/',\n saml2.BINDING_HTTP_REDIRECT),\n (SP_SERVICE_URL + '/saml2/ls/post',\n saml2.BINDING_HTTP_POST),\n ],\n },\n\n # attributes that this project needs to identify a user\n 'required_attributes': [\"uid\"],\n\n # attributes that may be useful to have but not required\n 'optional_attributes': ['eduPersonAffiliation', ],\n\n # in this section the list of IdPs we talk to is defined\n 'idp': {\n # we do not need a WAYF service since there is\n # only an IdP defined here. 
This IdP should be\n # present in our metadata\n\n # the keys of this dictionary are entity ids\n 'https://adfs-server.adfs.com/federationmetadata/2007-06/federationmetadata.xml': {\n 'single_sign_on_service': {\n saml2.BINDING_HTTP_REDIRECT: 'https://adfs-server.adfs.com/adfs/ls/idpinitiatedsignon.aspx',\n },\n 'single_logout_service': {\n saml2.BINDING_HTTP_REDIRECT: 'https://adfs-server.adfs.com/adfs/ls/?wa=wsignout1.0',\n },\n },\n },\n },\n},\n\n# where the remote metadata is stored\n'metadata': {\n 'local': [path.join(CERTS_DIR, 'idp_federation_metadata.xml')],\n},\n\n# set to 1 to output debugging information\n'debug': 1,\n\n# Signing\n'key_file': '', \n'cert_file': path.join(CERTS_DIR, 'idp.crt'), # from IdP\n\n# Encryption\n'encryption_keypairs': [{\n 'key_file': path.join(CERTS_DIR, 'sp.key'), # private part\n 'cert_file': path.join(CERTS_DIR, 'sp.crt'), # public part\n}],\n\n'valid_for': 24, # how long is our metadata valid\n }
```
"},{"location":"config/config_seafile_with_ADFS/#config-adfs-server","title":"Config ADFS Server","text":"Relying Party Trust is the connection between Seafile and ADFS.
Log into the ADFS server and open the ADFS management.
Double click Trust Relationships, then right click Relying Party Trusts, select Add Relying Party Trust\u2026.
Select Import data about the relying party published online or on a local network, and input https://demo.seafile.com/saml2/metadata/ in the Federation metadata address.
Then click Next until Finish.
Add Relying Party Claim Rules
Relying Party Claim Rules are used for attribute communication between Seafile and users in the Windows domain.
Important: Users in the Windows domain must have the E-mail value set.
Right-click on the relying party trust and select Edit Claim Rules...
On the Issuance Transform Rules tab select Add Rule...
Select Send LDAP Attribute as Claims as the claim rule template to use.
Give the claim a name such as LDAP Attributes.
Set the Attribute Store to Active Directory, the LDAP Attribute to E-Mail-Addresses, and the Outgoing Claim Type to E-mail Address.
Select Finish.
Click Add Rule... again.
Select Transform an Incoming Claim.
Give it a name such as Email to Name ID.
Incoming claim type should be E-mail Address (it must match the Outgoing Claim Type in rule #1).
The Outgoing claim type is Name ID (this is required by the Seafile setting 'name_id_format': saml2.saml.NAMEID_FORMAT_EMAILADDRESS).
The Outgoing name ID format is Email.
Pass through all claim values and click Finish.
https://support.zendesk.com/hc/en-us/articles/203663886-Setting-up-single-sign-on-using-Active-Directory-with-ADFS-and-SAML-Plus-and-Enterprise-
http://wiki.servicenow.com/?title=Configuring_ADFS_2.0_to_Communicate_with_SAML_2.0#gsc.tab=0
https://github.com/rohe/pysaml2/blob/master/src/saml2/saml.py
Note: The subject line may vary between releases; this is based on Release 2.0.1. Restart Seahub so that your changes take effect.
"},{"location":"config/customize_email_notifications/#user-reset-hisher-password","title":"User reset his/her password","text":"Subject
seahub/seahub/auth/forms.py line:103
Body
seahub/seahub/templates/registration/password_reset_email.html
Note: You can copy password_reset_email.html to seahub-data/custom/templates/registration/password_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
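A sketch of the copy step described in the note above; the paths are relative to your Seafile installation directory and may differ on your system:

```
# Illustrative sketch; adjust paths to your installation.
mkdir -p seahub-data/custom/templates/registration
cp seahub/seahub/templates/registration/password_reset_email.html \
   seahub-data/custom/templates/registration/password_reset_email.html
```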
Subject
seahub/seahub/views/sysadmin.py line:424
Body
seahub/seahub/templates/sysadmin/user_add_email.html
Note: You can copy user_add_email.html to seahub-data/custom/templates/sysadmin/user_add_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
seahub/seahub/views/sysadmin.py line:368
Body
seahub/seahub/templates/sysadmin/user_reset_email.html
Note: You can copy user_reset_email.html to seahub-data/custom/templates/sysadmin/user_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
seahub/seahub/share/views.py line:668
Body
seahub/seahub/templates/shared_link_email.html
"},{"location":"config/details_about_file_search/","title":"Details about File Search","text":""},{"location":"config/details_about_file_search/#search-options","title":"Search Options","text":"The following options can be set in seafevents.conf to control the behaviors of file search. You need to restart seafile and seahub to make them take effect.
[INDEX FILES]\n## must be \"true\" to enable search\nenabled = true\n\n## The interval at which the search index is updated. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval=10m\n\n## this is for improving the search speed\nhighlight = fvh \n\n## If true, indexes the contents of office/pdf files while updating search index\n## Note: If you change this option from \"false\" to \"true\", then you need to clear the search index and update the index again.\nindex_office_pdf=false\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch through username and password; you need to configure a username and password for the Elasticsearch server\nusername = elastic # username to connect to Elasticsearch\npassword = elastic_password # password to connect to Elasticsearch\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch via HTTPS; you need to configure HTTPS for the Elasticsearch server\nscheme = https # The default is http. If the Elasticsearch server is not configured with HTTPS, the scheme and cafile do not need to be configured\ncafile = path/to/cert.pem # The certificate path for user authentication. If the Elasticsearch server does not enable certificate authentication, this does not need to be configured\n\n## From version 11.0.5 Pro, you can customize Elasticsearch index names for distinct instances when integrating multiple Seafile servers with a single Elasticsearch server.\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\n"},{"location":"config/details_about_file_search/#enable-full-text-search-for-officepdf-files","title":"Enable full text search for Office/PDF files","text":"Full text search is not enabled by default to save system resources. If you want to enable it, you need to follow the instructions below.
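The interval format above can be illustrated with a small parser. parse_interval is a hypothetical helper for illustration only, not part of Seafile:

```python
import re

# Unit suffixes as described for the `interval` option above.
_UNITS = {'s': 1, 'm': 60, 'h': 3600, 'd': 86400}

def parse_interval(value):
    """Convert an interval string like '10m' to a number of seconds."""
    match = re.fullmatch(r'(\d+)([smhd])', value.strip())
    if not match:
        raise ValueError('invalid interval: %r' % value)
    return int(match.group(1)) * _UNITS[match.group(2)]
```

For example, the default interval=10m corresponds to 600 seconds.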
"},{"location":"config/details_about_file_search/#modify-seafeventsconf","title":"Modify seafevents.conf","text":"Deploy in DockerDeploy from binary packages cd /opt/seafile-data/seafile/conf\nnano seafevents.conf\n cd /opt/seafile/conf\nnano seafevents.conf\n Set index_office_pdf to true:
...\n[INDEX FILES]\n...\nindex_office_pdf=true\n...\n"},{"location":"config/details_about_file_search/#restart-seafile-server","title":"Restart Seafile server","text":"Deploy in DockerDeploy from binary packages docker exec -it seafile bash\ncd /opt/seafile/seafile-server-latest\n./seafile.sh restart\n\n# delete the existing search index and recreate it\n./pro/pro.py search --clear\n./pro/pro.py search --update\n cd /opt/seafile/seafile-server-latest\n./seafile.sh restart\n\n# delete the existing search index and recreate it\n./pro/pro.py search --clear\n./pro/pro.py search --update\n"},{"location":"config/details_about_file_search/#common-problems","title":"Common problems","text":""},{"location":"config/details_about_file_search/#how-to-rebuild-the-index-if-something-went-wrong","title":"How to rebuild the index if something went wrong","text":"You can rebuild search index by running:
Deploy in DockerDeploy from binary packagesdocker exec -it seafile bash\ncd /opt/seafile/seafile-server-latest\n./pro/pro.py search --clear\n./pro/pro.py search --update\n cd /opt/seafile/seafile-server-latest\n./pro/pro.py search --clear\n./pro/pro.py search --update\n Tip
If this does not work, you can try the following steps:
rm -rf pro-data/search\n./pro/pro.py search --update\nCreate an Elasticsearch service on AWS according to the documentation.
Configure the seafevents.conf:
[INDEX FILES]\nenabled = true\ninterval = 10m\nindex_office_pdf=true\nes_host = your domain endpoint(for example, https://search-my-domain.us-east-1.es.amazonaws.com)\nes_port = 443\nscheme = https\nusername = master user\npassword = password\nhighlight = fvh\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\n"},{"location":"config/details_about_file_search/#i-get-no-result-when-i-search-a-keyword","title":"I get no result when I search a keyword","text":"The search index is updated every 10 minutes by default. So before the first index update is performed, you get nothing no matter what you search.
To be able to search immediately, update the index manually:
docker exec -it seafile bash\ncd /opt/seafile/seafile-server-latest\n./pro/pro.py search --update\n cd /opt/seafile/seafile-server-latest\n./pro/pro.py search --update\n"},{"location":"config/details_about_file_search/#encrypted-files-cannot-be-searched","title":"Encrypted files cannot be searched","text":"This is because the server cannot index encrypted files, since they are encrypted.
"},{"location":"config/env/","title":".env","text":"The .env file will be used to specify the components used by the Seafile-docker instance and the environment variables required by each component.
COMPOSE_FILE: .yml files for components of Seafile-docker, each .yml must be separated by the symbol defined in COMPOSE_PATH_SEPARATOR. The core components are defined in seafile-server.yml and caddy.yml, which must be included in this variable.COMPOSE_PATH_SEPARATOR: The symbol used to separate the .yml files in COMPOSE_FILE, default is ','.SEAFILE_IMAGE: The image of Seafile-server, default is seafileltd/seafile-pro-mc:12.0-latest.SEAFILE_DB_IMAGE: Database server image, default is mariadb:10.11.SEAFILE_MEMCACHED_IMAGE: Memcached server image, default is memcached:1.6.29SEAFILE_ELASTICSEARCH_IMAGE: Only valid in pro edition. The elasticsearch image, default is elasticsearch:8.15.0.SEAFILE_CADDY_IMAGE: Caddy server image, default is lucaslorentz/caddy-docker-proxy:2.9-alpine.SEADOC_IMAGE: Only valid after integrating SeaDoc. SeaDoc server image, default is seafileltd/sdoc-server:2.0-latest.NON_ROOT: Run Seafile container without a root user, default is falseSEAFILE_VOLUME: The volume directory of Seafile data, default is /opt/seafile-data.SEAFILE_MYSQL_VOLUME: The volume directory of MySQL data, default is /opt/seafile-mysql/db.SEAFILE_CADDY_VOLUME: The volume directory of Caddy data used to store certificates obtained from Let's Encrypt, default is /opt/seafile-caddy.SEAFILE_ELASTICSEARCH_VOLUME: Only valid in pro edition. The volume directory of Elasticsearch data, default is /opt/seafile-elasticsearch/data.SEADOC_VOLUME: Only valid after integrating SeaDoc. The volume directory of SeaDoc server data, default is /opt/seadoc-data.SEAFILE_MYSQL_DB_HOST: The host address of MySQL, default is the pre-defined service name db in Seafile-docker instance.SEAFILE_MYSQL_DB_PORT: The port of MySQL, default is 3306.INIT_SEAFILE_MYSQL_ROOT_PASSWORD: (Only required on first deployment) The root password of MySQL. 
SEAFILE_MYSQL_DB_USER: The user of MySQL (database - user can be found in conf/seafile.conf).SEAFILE_MYSQL_DB_PASSWORD: The password of the MySQL user seafile.SEAFILE_MYSQL_DB_SEAFILE_DB_NAME: The name of the Seafile database, default is seafile_dbSEAFILE_MYSQL_DB_CCNET_DB_NAME: The name of the ccnet database, default is ccnet_dbSEAFILE_MYSQL_DB_SEAHUB_DB_NAME: The name of the seahub database, default is seahub_dbCACHE_PROVIDER: The type of cache server used for Seafile. The available options are redis and memcached. Since Seafile 13, it is recommended to use redis as the cache service to support new features, and memcached will no longer be integrated into Seafile Docker by default. Default is redis.The following configurations are only valid when CACHE_PROVIDER=redis:
REDIS_HOST: Redis server host, default is redisREDIS_PORT: Redis server port, default is 6379REDIS_PASSWORD: Redis server password. The following configurations are only valid when CACHE_PROVIDER=memcached:
MEMCACHED_HOST: Memcached server host, default is memcachedMEMCACHED_PORT: Memcached server port, default is 11211JWT_PRIVATE_KEY: A random string with a length of no less than 32 characters, generate example: pwgen -s 40 1SEAFILE_SERVER_HOSTNAME: Seafile server hostname or domainSEAFILE_SERVER_PROTOCOL: Seafile server protocol (http or https)TIME_ZONE: Time zone (default UTC)INIT_SEAFILE_ADMIN_EMAIL: Admin usernameINIT_SEAFILE_ADMIN_PASSWORD: Admin passwordENABLE_SEADOC: Enable the SeaDoc server or not, default is false.SEADOC_SERVER_URL: Only valid in ENABLE_SEADOC=true. External URL of Seadoc server (e.g., https://seafile.example.com/sdoc-server).SEAF_SERVER_STORAGE_TYPE: The kind of storage used for Seafile data. Available options are disk (i.e., local disk), s3 and multiple (see the details of multiple storage backends)S3_COMMIT_BUCKET: S3 storage backend commit objects bucketS3_FS_BUCKET: S3 storage backend fs objects bucketS3_BLOCK_BUCKET: S3 storage backend block objects bucketS3_SS_BUCKET: S3 storage bucket for SeaSearch data (valid when service enabled)S3_MD_BUCKET: S3 storage bucket for metadata-server data (valid when service available)S3_KEY_ID: S3 storage backend key IDS3_SECRET_KEY: S3 storage backend secret keyS3_USE_V4_SIGNATURE: Use the v4 protocol of S3 if enabled, default is trueS3_AWS_REGION: Region of your buckets (AWS only), default is us-east-1.S3_HOST: Host of your buckets (required when not using AWS).S3_USE_HTTPS: Use HTTPS connections to S3 if enabled, default is trueS3_PATH_STYLE_REQUEST: This option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is in virtual host style, such as https://bucketname.s3.amazonaws.com/object. But this style relies on advanced DNS server setup. So most self-hosted storage systems only implement the path style format. 
Default false.S3_SSE_C_KEY: A string of 32 characters can be generated by openssl rand -base64 24. It can be any 32-character long random string. It's required to use V4 authentication protocol and https if you enable SSE-C.Easier to configure S3 for Seafile and its components
Since Seafile Pro 13.0, to make it easier to deploy Seafile's related extension components and other services in the future, a section is provided in .env to store the S3 configurations for Seafile and some extension components (such as SeaSearch and the metadata server). You can locate it by the title bar Storage configurations for S3.
S3 configurations in .env only support single S3 storage backend mode
The Seafile server only supports configuring S3 in .env for the single S3 storage backend mode (i.e., when SEAF_SERVER_STORAGE_TYPE=s3). If you would like to use another storage backend (e.g., Ceph, Swift) or other settings that can only be set in seafile.conf (like multiple storage backends), please set SEAF_SERVER_STORAGE_TYPE to multiple, and set MD_STORAGE_TYPE and SS_STORAGE_TYPE according to your configurations.
The S3 configurations are only valid when at least one STORAGE_TYPE is set to s3
There are currently three STORAGE_TYPE variables provided in .env: - SEAF_SERVER_STORAGE_TYPE (pro & cluster) - MD_STORAGE_TYPE (pro, see the Metadata server section for the details) - SS_STORAGE_TYPE (pro, see the SeaSearch section for the details)
You have to specify at least one of them as s3 for the above configuration to take effect.
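Putting the variables above together, a minimal .env fragment for the single S3 storage backend mode could look like this; bucket names, key ID, secret key and region are placeholders:

```
SEAF_SERVER_STORAGE_TYPE=s3
S3_COMMIT_BUCKET=seafile-commit-objects
S3_FS_BUCKET=seafile-fs-objects
S3_BLOCK_BUCKET=seafile-block-objects
S3_KEY_ID=your-key-id
S3_SECRET_KEY=your-secret-key
S3_USE_V4_SIGNATURE=true
S3_AWS_REGION=us-east-1
S3_USE_HTTPS=true
```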
"},{"location":"config/env/#seasearch","title":"SeaSearch","text":"For configurations about SeaSearch in .env, please refer here for the details.
For configurations about Metadata server in .env, please refer here for the details.
ENABLE_NOTIFICATION_SERVER: Enable (true) or disable (false) notification feature for Seafile. Default is false.NOTIFICATION_SERVER_URL: Used to do the connection between client (i.e., user's browser) and notification server. Default is https://seafile.example.com/notification. INNER_NOTIFICATION_SERVER_URL: Used to do the connection between Seafile server and notification server. Default is http://notification-server:8083.MD_FILE_COUNT_LIMIT: The maximum number of files in a repository that the metadata feature allows. If the number of files in a repository exceeds this value, the metadata management function will not be enabled for the repository. For a repository with metadata management enabled, if the number of records in it reaches this value but there are still some files that are not recorded in metadata server, the metadata management of the unrecorded files will be skipped. Default is 100000.CLUSTER_INIT_MODE: (only valid in pro edition at deploying first time). Cluster initialization mode, in which the necessary configuration files for the service to run will be generated (but the service will not be started). If the configuration file already exists, no operation will be performed. The default value is true. When the configuration file is generated, be sure to set this item to false.CLUSTER_INIT_ES_HOST: (only valid in pro edition at deploying first time). Your cluster Elasticsearch server host.CLUSTER_INIT_ES_PORT: (only valid in pro edition at deploying first time). Your cluster Elasticsearch server port. Default is 9200.CLUSTER_MODE: Seafile service node type, i.e., frontend (default) or backend.This documentation is for the Community Edition. If you're using Pro Edition, please refer to the Seafile Pro documentation
"},{"location":"config/ldap_in_ce/#how-does-ldap-user-management-work-in-seafile","title":"How does LDAP User Management work in Seafile","text":"When Seafile is integrated with LDAP, users in the system can be divided into two tiers:
Users within Seafile's internal user database. Some attributes are attached to these users, such as whether the user is a system admin and whether the account is activated.
Users in LDAP server. These are all the intended users of Seafile inside the LDAP server. Seafile doesn't manipulate these users directly. It has to import them into its internal database before setting attributes on them.
When Seafile counts the number of users in the system, it only counts the activated users in its internal database.
"},{"location":"config/ldap_in_ce/#basic-ldap-integration","title":"Basic LDAP Integration","text":"The only requirement for Seafile to use LDAP for authentication is that there must be a unique identifier for each user in the LDAP server. This id should also be user-friendly, as users will use it as their username when logging in. Below are some usual options for this unique identifier:
user-login-name@domain-name, e.g. john@example.com. It's not a real email address, but it works fine as the unique identifier.The identifier is stored in the social_auth_usersocialauth table to map it to the internal user ID in Seafile. When this ID changes in LDAP for a user, you only need to update the social_auth_usersocialauth table.
Add the following options to seahub_settings.py. Examples are as follows:
ENABLE_LDAP = True\nLDAP_SERVER_URL = 'ldap://192.168.0.1' \nLDAP_BASE_DN = 'ou=test,dc=seafile,dc=ren' \nLDAP_ADMIN_DN = 'administrator@example.com' \nLDAP_ADMIN_PASSWORD = 'yourpassword' \nLDAP_PROVIDER = 'ldap' \nLDAP_LOGIN_ATTR = 'email' \nLDAP_CONTACT_EMAIL_ATTR = '' \nLDAP_USER_ROLE_ATTR = '' \nLDAP_USER_FIRST_NAME_ATTR = 'givenName' \nLDAP_USER_LAST_NAME_ATTR = 'sn' \nLDAP_USER_NAME_REVERSE = False \nLDAP_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' \n Meaning of some options:
variable descriptionLDAP_SERVER_URL The URL of LDAP server LDAP_BASE_DN The root node of users who can log in to Seafile in the LDAP server LDAP_ADMIN_DN DN of the administrator used to query the LDAP server for information. For OpenLDAP, it may be cn=admin,dc=example,dc=com LDAP_ADMIN_PASSWORD Password of LDAP_ADMIN_DN LDAP_PROVIDER Identify the source of the user, used in the table social_auth_usersocialauth, defaults to 'ldap' LDAP_LOGIN_ATTR User's attribute used to log in to Seafile. It should be a unique identifier for the user in LDAP server. Learn more about this id from the descriptions at the beginning of this section. LDAP_CONTACT_EMAIL_ATTR LDAP user's contact_email attribute LDAP_USER_ROLE_ATTR LDAP user's role attribute LDAP_USER_FIRST_NAME_ATTR Attribute for user's first name. It's \"givenName\" by default. LDAP_USER_LAST_NAME_ATTR Attribute for user's last name. It's \"sn\" by default. LDAP_USER_NAME_REVERSE In some languages, such as Chinese, the display order of the first and last name is reversed. Set this option if you need it. LDAP_FILTER Additional filter conditions. Users who meet the filter conditions can log in, otherwise they cannot log in. Tips for choosing LDAP_BASE_DN and LDAP_ADMIN_DN:
To determine the LDAP_BASE_DN, you first have to navigate your organization hierarchy on the domain controller GUI.
If you want to allow all users to use Seafile, you can use cn=users,dc=yourdomain,dc=com as LDAP_BASE_DN (with proper adjustment for your own needs).
If you want to limit users to a certain OU (Organization Unit), you can run the dsquery command on the domain controller to find out the DN for this OU. For example, if the OU is staff, you can run dsquery ou -name staff. More information can be found here.
AD supports the user@domain.name format for the LDAP_ADMIN_DN option. For example you can use administrator@example.com for LDAP_ADMIN_DN. Sometimes the domain controller doesn't recognize this format. You can still use the dsquery command to find out the user's DN. For example, if the user name is 'seafileuser', run dsquery user -name seafileuser. More information here.
Multiple base DN is useful when your company has more than one OUs to use Seafile. You can specify a list of base DN in the LDAP_BASE_DN option. The DNs are separated by \";\", e.g.
LDAP_BASE_DN = 'ou=developers,dc=example,dc=com;ou=marketing,dc=example,dc=com'\n"},{"location":"config/ldap_in_ce/#additional-search-filter","title":"Additional Search Filter","text":"Search filter is very useful when you have a large organization but only a portion of people want to use Seafile. The filter can be given by setting LDAP_FILTER option. The value of this option follows standard LDAP search filter syntax (https://msdn.microsoft.com/en-us/library/aa746475(v=vs.85).aspx).
The final filter used for searching for users is (&($LOGIN_ATTR=*)($LDAP_FILTER)). $LOGIN_ATTR and $LDAP_FILTER will be replaced by your option values.
For example, add below option to seahub_settings.py:
LDAP_FILTER = 'memberOf=CN=group,CN=developers,DC=example,DC=com'\n The final search filter would be (&(mail=*)(memberOf=CN=group,CN=developers,DC=example,DC=com))
Note that the case of attribute names in the above example is significant. The memberOf attribute is only available in Active Directory.
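The substitution described above can be sketched in a few lines of Python; build_search_filter is a hypothetical helper for illustration only, not part of Seafile:

```python
def build_search_filter(login_attr, ldap_filter):
    """Compose the final search filter (&($LOGIN_ATTR=*)($LDAP_FILTER))."""
    if ldap_filter:
        return '(&({}=*)({}))'.format(login_attr, ldap_filter)
    # Without LDAP_FILTER, only the login attribute's presence is required.
    return '({}=*)'.format(login_attr)
```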
You can use the LDAP_FILTER option to limit user scope to a certain AD group.
First, you should find out the DN for the group. Again, we'll use the dsquery command on the domain controller. For example, if group name is 'seafilegroup', run dsquery group -name seafilegroup.
Add below option to seahub_settings.py:
LDAP_FILTER = 'memberOf={output of dsquery command}'\n"},{"location":"config/ldap_in_ce/#using-tls-connection-to-ldap-server","title":"Using TLS connection to LDAP server","text":"If your LDAP service supports TLS connections, you can configure LDAP_SERVER_URL as the access address of the ldaps protocol to use TLS to connect to the LDAP service, for example:
LDAP_SERVER_URL = 'ldaps://192.168.0.1:636'\n"},{"location":"config/ldap_in_pro/","title":"Configure Seafile Pro Edition to use LDAP","text":""},{"location":"config/ldap_in_pro/#how-does-ldap-user-management-work-in-seafile","title":"How does LDAP User Management work in Seafile","text":"When Seafile is integrated with LDAP, users in the system can be divided into two tiers:
Users within Seafile's internal user database. Some attributes are attached to these users, such as whether the user is a system admin and whether the account is activated.
Users in LDAP server. These are all the intended users of Seafile inside the LDAP server. Seafile doesn't manipulate these users directly. It has to import them into its internal database before setting attributes on them.
When Seafile counts the number of users in the system, it only counts the activated users in its internal database.
"},{"location":"config/ldap_in_pro/#basic-ldap-integration","title":"Basic LDAP Integration","text":"The only requirement for Seafile to use LDAP for authentication is that there must be a unique identifier for each user in the LDAP server. This id should also be user-friendly, as users will use it as their username when logging in. Below are some usual options for this unique identifier:
user-login-name@domain-name, e.g. john@example.com. It's not a real email address, but it works fine as the unique identifier.The identifier is stored in the social_auth_usersocialauth table to map it to the internal user ID in Seafile. When this ID changes in LDAP for a user, you only need to update the social_auth_usersocialauth table.
Add the following options to seahub_settings.py. Examples are as follows:
ENABLE_LDAP = True\nLDAP_SERVER_URL = 'ldap://192.168.0.1' \nLDAP_BASE_DN = 'ou=test,dc=seafile,dc=ren' \nLDAP_ADMIN_DN = 'administrator@example.com' \nLDAP_ADMIN_PASSWORD = 'yourpassword' \nLDAP_PROVIDER = 'ldap' \nLDAP_LOGIN_ATTR = 'email' \nLDAP_CONTACT_EMAIL_ATTR = '' \nLDAP_USER_ROLE_ATTR = '' \nLDAP_USER_FIRST_NAME_ATTR = 'givenName' \nLDAP_USER_LAST_NAME_ATTR = 'sn' \nLDAP_USER_NAME_REVERSE = False \nLDAP_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' \n Meaning of some options:
variable descriptionLDAP_SERVER_URL The URL of LDAP server LDAP_BASE_DN The root node of users who can log in to Seafile in the LDAP server LDAP_ADMIN_DN DN of the administrator used to query the LDAP server for information. For OpenLDAP, it may be cn=admin,dc=example,dc=com LDAP_ADMIN_PASSWORD Password of LDAP_ADMIN_DN LDAP_PROVIDER Identify the source of the user, used in the table social_auth_usersocialauth, defaults to 'ldap' LDAP_LOGIN_ATTR User's attribute used to log in to Seafile. It should be a unique identifier for the user in LDAP server. Learn more about this id from the descriptions at the beginning of this section. LDAP_CONTACT_EMAIL_ATTR LDAP user's contact_email attribute LDAP_USER_ROLE_ATTR LDAP user's role attribute LDAP_USER_FIRST_NAME_ATTR Attribute for user's first name. It's \"givenName\" by default. LDAP_USER_LAST_NAME_ATTR Attribute for user's last name. It's \"sn\" by default. LDAP_USER_NAME_REVERSE In some languages, such as Chinese, the display order of the first and last name is reversed. Set this option if you need it. LDAP_FILTER Additional filter conditions. Users who meet the filter conditions can log in, otherwise they cannot log in. Tips for choosing LDAP_BASE_DN and LDAP_ADMIN_DN:
To determine the LDAP_BASE_DN, you first have to navigate your organization hierarchy on the domain controller GUI.
If you want to allow all users to use Seafile, you can use cn=users,dc=yourdomain,dc=com as LDAP_BASE_DN (with proper adjustment for your own needs).
If you want to limit users to a certain OU (Organization Unit), run the dsquery command on the domain controller to find out the DN for this OU. For example, if the OU is staff, you can run dsquery ou -name staff. More information can be found here.
AD supports the user@domain.name format for the LDAP_ADMIN_DN option. For example, you can use administrator@example.com for LDAP_ADMIN_DN. Sometimes the domain controller doesn't recognize this format. You can still use the dsquery command to find out the user's DN. For example, if the user name is 'seafileuser', run dsquery user -name seafileuser. More information here.
In Seafile Pro, in addition to importing users into the internal database when they log in, you can also configure Seafile to periodically sync user information from the LDAP server into the internal database.
User's full name, department and contact email address can be synced to the internal database, which lets users more easily search for a specific person. A user's Windows or Unix login ID can also be synced, allowing the user to log in with a familiar login ID. When a user is removed from LDAP, the corresponding user in Seafile will be deactivated; otherwise, they could still sync files with the Seafile client or access the web interface. After synchronization is complete, you can see the user's full name, department and contact email on their profile page.
"},{"location":"config/ldap_in_pro/#sync-configuration-items","title":"Sync configuration items","text":"Add the following options to seahub_settings.py. Examples are as follows:
# Basic configuration items\nENABLE_LDAP = True\n......\n\n# ldap user sync options.\nLDAP_SYNC_INTERVAL = 60 \nENABLE_LDAP_USER_SYNC = True \nLDAP_USER_OBJECT_CLASS = 'person'\nLDAP_DEPT_ATTR = '' \nLDAP_UID_ATTR = '' \nLDAP_AUTO_REACTIVATE_USERS = True \nLDAP_USE_PAGED_RESULT = False \nIMPORT_NEW_USER = True \nACTIVATE_USER_WHEN_IMPORT = True \nDEACTIVE_USER_IF_NOTFOUND = False \nENABLE_EXTRA_USER_INFO_SYNC = True \n Meaning of some options:
Variable Description LDAP_SYNC_INTERVAL The interval to sync. Unit is minutes. Defaults to 60 minutes. ENABLE_LDAP_USER_SYNC Set to \"true\" if you want to enable LDAP user synchronization. LDAP_USER_OBJECT_CLASS The name of the class used to search for user objects. In Active Directory, it's usually \"person\". The default value is \"person\". LDAP_DEPT_ATTR Attribute for department info. LDAP_UID_ATTR Attribute for the Windows login name. If this is synchronized, users can also log in with their Windows login name. In AD, the attribute sAMAccountName can be used as LDAP_UID_ATTR. The attribute will be stored as login_id in Seafile (in the seahub_db.profile_profile table). LDAP_AUTO_REACTIVATE_USERS Whether to automatically reactivate deactivated users. Defaults to 'true'. LDAP_USE_PAGED_RESULT Whether to use the pagination extension. It is useful when you have more than 1000 users in the LDAP server. IMPORT_NEW_USER Whether to import new users when syncing users. ACTIVATE_USER_WHEN_IMPORT Whether to activate the user automatically when imported. DEACTIVE_USER_IF_NOTFOUND Set to \"true\" if you want to deactivate a user when he/she has been deleted from the AD server. ENABLE_EXTRA_USER_INFO_SYNC Enable synchronization of additional user information, including the user's full name, department, and Windows login name, etc."},{"location":"config/ldap_in_pro/#importing-users-without-activating-them","title":"Importing Users without Activating Them","text":"The users imported with the above configuration will be activated by default. Some organizations with a large number of users may want to import user information (such as user full names) without activating the imported users, because activating all imported users would require licenses for all users in LDAP, which may not be affordable.
Seafile provides a combination of options for such a use case. You can modify the below option in seahub_settings.py:
ACTIVATE_USER_WHEN_IMPORT = False\n This prevents Seafile from activating imported users. Then, add below option to seahub_settings.py:
ACTIVATE_AFTER_FIRST_LOGIN = True\n This option will automatically activate users when they log in to Seafile for the first time.
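The combined effect of ACTIVATE_USER_WHEN_IMPORT and ACTIVATE_AFTER_FIRST_LOGIN can be sketched as a small decision function (a hypothetical illustration, not Seafile's implementation):

```python
def should_activate(imported_now, first_login,
                    activate_user_when_import, activate_after_first_login):
    """Sketch of when an LDAP-synced account becomes active.

    Hypothetical helper for illustration only; Seafile's real logic
    lives in its sync and login code paths.
    """
    if imported_now and activate_user_when_import:
        return True
    if first_login and activate_after_first_login:
        return True
    return False

# With ACTIVATE_USER_WHEN_IMPORT = False and ACTIVATE_AFTER_FIRST_LOGIN = True,
# a user is imported inactive and only activated on first login.
imported = should_activate(True, False, False, True)
first_login = should_activate(False, True, False, True)
```

With this combination, imported accounts consume no license until each user actually logs in for the first time.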
"},{"location":"config/ldap_in_pro/#reactivating-users","title":"Reactivating Users","text":"When you set the DEACTIVE_USER_IF_NOTFOUND option, a user will be deactivated when he/she is not found in LDAP server. By default, even after this user reappears in the LDAP server, it won't be reactivated automatically. This is to prevent auto reactivating a user that was manually deactivated by the system admin.
However, sometimes it's desirable to auto reactivate such users. You can modify below option in seahub_settings.py:
LDAP_AUTO_REACTIVATE_USERS = True\n"},{"location":"config/ldap_in_pro/#manually-trigger-synchronization","title":"Manually Trigger Synchronization","text":"To test your LDAP sync configuration, you can run the sync command manually.
To trigger LDAP sync manually:
cd seafile-server-latest\n./pro/pro.py ldapsync\n For Seafile Docker
docker exec -it seafile /opt/seafile/seafile-server-latest/pro/pro.py ldapsync\n"},{"location":"config/ldap_in_pro/#setting-up-ldap-group-sync-optional","title":"Setting Up LDAP Group Sync (optional)","text":""},{"location":"config/ldap_in_pro/#how-it-works","title":"How It Works","text":"The importing or syncing process maps groups from LDAP directory server to groups in Seafile's internal database. This process is one-way.
Any changes to groups in the database won't propagate back to LDAP;
Any changes to groups in the database, except for \"setting a member as group admin\", will be overwritten in the next LDAP sync operation. If you want to add or delete members, you can only do that on LDAP server.
The creator of imported groups will be set to the system admin.
There are two modes of operation:
Periodical: the syncing process will be executed in a fixed interval
Manual: there is a script you can run to trigger the syncing once
Before enabling LDAP group sync, you should have configured LDAP authentication. See Basic LDAP Integration for details.
The following are LDAP group sync related options:
# ldap group sync options.\nENABLE_LDAP_GROUP_SYNC = True # Whether to enable group sync\nLDAP_GROUP_OBJECT_CLASS = 'group' # This is the name of the class used to search for group objects.\nLDAP_GROUP_MEMBER_ATTR = 'member' # The attribute field to use when loading the group's members. \n # For most directory servers, the attribute is \"member\", \n # which is the default value. For \"posixGroup\", it should be set to \"memberUid\".\nLDAP_USER_ATTR_IN_MEMBERUID = 'uid' # The user attribute set in the 'memberUid' option, \n # which is used in \"posixGroup\". The default value is \"uid\".\nLDAP_GROUP_UUID_ATTR = 'objectGUID' # Used to uniquely identify groups in LDAP\nLDAP_GROUP_FILTER = '' # An additional filter to use when searching group objects.\n # If it's set, the final filter used to run the search is \"(&(objectClass=GROUP_OBJECT_CLASS)(GROUP_FILTER))\";\n # otherwise the final filter would be \"(objectClass=GROUP_OBJECT_CLASS)\".\nLDAP_USE_GROUP_MEMBER_RANGE_QUERY = False # When a group contains too many members, \n # AD will only return part of them. Set this option to TRUE\n # to make LDAP sync work with large groups.\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\" to have the sync process delete a group if it is not found in the LDAP server.\nLDAP_SYNC_GROUP_AS_DEPARTMENT = False # Whether to sync groups as top-level departments in Seafile.\n # Learn more about departments in Seafile [here](https://help.seafile.com/sharing_collaboration/departments/).\nLDAP_DEPT_NAME_ATTR = '' # Used to get the department name.\n Meaning of some options:
variable description ENABLE_LDAP_GROUP_SYNC Whether to enable group sync. LDAP_GROUP_OBJECT_CLASS This is the name of the class used to search for group objects. LDAP_GROUP_MEMBER_ATTR The attribute field to use when loading the group's members. For most directory servers, the attribute is \"member\", which is the default value. For \"posixGroup\", it should be set to \"memberUid\". LDAP_USER_ATTR_IN_MEMBERUID The user attribute set in the 'memberUid' option, which is used in \"posixGroup\". The default value is \"uid\". LDAP_GROUP_UUID_ATTR Used to uniquely identify groups in LDAP. LDAP_GROUP_FILTER An additional filter to use when searching group objects. If it's set, the final filter used to run the search is (&(objectClass=GROUP_OBJECT_CLASS)(GROUP_FILTER)); otherwise the final filter would be (objectClass=GROUP_OBJECT_CLASS). LDAP_USE_GROUP_MEMBER_RANGE_QUERY When a group contains too many members, AD will only return part of them. Set this option to TRUE to make LDAP sync work with large groups. DEL_GROUP_IF_NOT_FOUND If set to \"true\", the sync process will delete a group if it is not found in the LDAP server. LDAP_SYNC_GROUP_AS_DEPARTMENT Whether to sync groups as top-level departments in Seafile. Learn more about departments in Seafile here. LDAP_DEPT_NAME_ATTR Used to get the department name. Tip
The search base for groups is the option LDAP_BASE_DN.
Some LDAP servers, such as Active Directory, allow a group to be a member of another group. This is called \"group nesting\". If we find a nested group B in group A, we recursively add all the members from group B into group A. Group B is still imported as a separate group. That is, all members of group B are also members of group A.
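The nested-group rule above can be sketched as a recursive flattening over plain dicts standing in for LDAP entries (an illustration only, not Seafile's code):

```python
def flatten_members(group, groups_by_dn, seen=None):
    """Recursively collect the user members of a group, following
    nested groups as described above (illustrative sketch).

    Each group dict has 'users' (a set of user names) and 'groups'
    (a set of DNs of nested groups); these shapes are assumptions
    made for the example.
    """
    if seen is None:
        seen = set()
    members = set(group['users'])
    for dn in group['groups']:
        if dn in seen:
            continue  # guard against membership cycles
        seen.add(dn)
        members |= flatten_members(groups_by_dn[dn], groups_by_dn, seen)
    return members

groups_by_dn = {'cn=B': {'users': {'carol'}, 'groups': set()}}
group_a = {'users': {'alice'}, 'groups': {'cn=B'}}
members_a = flatten_members(group_a, groups_by_dn)
```

Here carol, a member of nested group B, ends up as a member of group A as well.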
In some LDAP servers, such as OpenLDAP, it's common practice to use Posix groups to store group membership. To import Posix groups as Seafile groups, set the LDAP_GROUP_OBJECT_CLASS option to posixGroup. A posixGroup object in LDAP usually contains a multi-value attribute for the list of member UIDs. The name of this attribute can be set with the LDAP_GROUP_MEMBER_ATTR option; for posixGroup it is usually memberUid. The value of the memberUid attribute is an ID that can be used to identify a user, and it corresponds to an attribute in the user object. The name of this ID attribute is usually uid, but it can be set via the LDAP_USER_ATTR_IN_MEMBERUID option. Note that posixGroup doesn't support nested groups.
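How memberUid values get resolved against the user attribute can be sketched as follows, with plain dicts standing in for parsed LDAP entries (an illustrative sketch of what LDAP_GROUP_MEMBER_ATTR and LDAP_USER_ATTR_IN_MEMBERUID control, not Seafile's code):

```python
def resolve_posix_group(group_entry, user_entries,
                        member_attr='memberUid', user_attr='uid'):
    """Resolve posixGroup member ids to user entries.

    Illustrative only: `group_entry` and `user_entries` are plain
    dicts standing in for parsed LDAP objects.
    """
    by_uid = {u[user_attr]: u for u in user_entries}
    # Unknown member ids are simply skipped.
    return [by_uid[uid] for uid in group_entry[member_attr] if uid in by_uid]

users = [{'uid': 'alice', 'mail': 'alice@example.com'},
         {'uid': 'bob', 'mail': 'bob@example.com'}]
group = {'cn': 'devs', 'memberUid': ['alice', 'bob', 'ghost']}
members = resolve_posix_group(group, users)
```

Changing LDAP_USER_ATTR_IN_MEMBERUID corresponds to passing a different `user_attr` here.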
A department in Seafile is a special group. In addition to what you can do with a group, there are two key new features for departments:
Department supports hierarchy. A department can have any number of levels of sub-departments.
Department can have storage quota.
Seafile supports syncing OU (Organizational Units) from AD/LDAP to departments. The sync process keeps the hierarchical structure of the OUs.
Options for syncing departments from OU:
LDAP_SYNC_DEPARTMENT_FROM_OU = True # Whether to enable syncing departments from OU.\nLDAP_DEPT_NAME_ATTR = 'description' # Used to get the department name.\nLDAP_CREATE_DEPARTMENT_LIBRARY = False # If you decide to sync the group as a department,\n # you can set this option to \"true\". In this way, when \n # the group is synchronized for the first time, a library\n # is automatically created for the department, and the \n # library's name is the department's name.\nLDAP_DEPT_REPO_PERM = 'rw' # Set the permissions of the department repo, default permission is 'rw'.\nLDAP_DEFAULT_DEPARTMENT_QUOTA = -2 # You can set a default space quota for each department\n # when you synchronize a group for the first time. The \n # quota is set to unlimited if this option is not set.\n # Unit is MB.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\" to have the sync process delete a department if it is not found in the LDAP server.\n"},{"location":"config/ldap_in_pro/#periodical-and-manual-sync","title":"Periodical and Manual Sync","text":"Periodical sync won't happen immediately after you restart the Seafile server. It gets scheduled after the first sync interval. For example, if you set the sync interval to 30 minutes, the first auto sync will happen 30 minutes after you restart. To sync immediately, you need to trigger it manually.
After the sync is run, you should see log messages like the following in logs/seafevents.log, and you should be able to see the groups in the system admin page.
[2023-03-30 18:15:05,109] [DEBUG] create group 1, and add dn pair CN=DnsUpdateProxy,CN=Users,DC=Seafile,DC=local<->1 success.\n[2023-03-30 18:15:05,145] [DEBUG] create group 2, and add dn pair CN=Domain Computers,CN=Users,DC=Seafile,DC=local<->2 success.\n[2023-03-30 18:15:05,154] [DEBUG] create group 3, and add dn pair CN=Domain Users,CN=Users,DC=Seafile,DC=local<->3 success.\n[2023-03-30 18:15:05,164] [DEBUG] create group 4, and add dn pair CN=Domain Admins,CN=Users,DC=Seafile,DC=local<->4 success.\n[2023-03-30 18:15:05,176] [DEBUG] create group 5, and add dn pair CN=RAS and IAS Servers,CN=Users,DC=Seafile,DC=local<->5 success.\n[2023-03-30 18:15:05,186] [DEBUG] create group 6, and add dn pair CN=Enterprise Admins,CN=Users,DC=Seafile,DC=local<->6 success.\n[2023-03-30 18:15:05,197] [DEBUG] create group 7, and add dn pair CN=dev,CN=Users,DC=Seafile,DC=local<->7 success.\n To trigger LDAP sync manually,
cd seafile-server-latest\n./pro/pro.py ldapsync\n For Seafile Docker
docker exec -it seafile /opt/seafile/seafile-server-latest/pro/pro.py ldapsync\n"},{"location":"config/ldap_in_pro/#multiple-base","title":"Multiple BASE","text":"Multiple base DN is useful when your company has more than one OU using Seafile. You can specify a list of base DNs in the LDAP_BASE_DN option. The DNs are separated by \";\", e.g.
LDAP_BASE_DN = 'ou=developers,dc=example,dc=com;ou=marketing,dc=example,dc=com'\n"},{"location":"config/ldap_in_pro/#additional-search-filter","title":"Additional Search Filter","text":"A search filter is very useful when you have a large organization but only a portion of the people want to use Seafile. The filter can be given by setting the LDAP_FILTER option. The value of this option follows standard LDAP search filter syntax (https://msdn.microsoft.com/en-us/library/aa746475(v=vs.85).aspx).
The final filter used for searching for users is (&($LOGIN_ATTR=*)($LDAP_FILTER)). $LOGIN_ATTR and $LDAP_FILTER will be replaced by your option values.
For example, add below option to seahub_settings.py:
LDAP_FILTER = 'memberOf=CN=group,CN=developers,DC=example,DC=com'\n The final search filter would be (&(mail=*)(memberOf=CN=group,CN=developers,DC=example,DC=com))
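The composition rule above can be sketched as a small helper (illustrative only; Seafile builds this filter internally):

```python
def final_user_filter(login_attr, ldap_filter):
    """Compose the effective LDAP search filter from LDAP_LOGIN_ATTR
    and LDAP_FILTER, per the rule described above (illustrative
    sketch, not Seafile's actual code)."""
    if ldap_filter:
        return '(&({0}=*)({1}))'.format(login_attr, ldap_filter)
    return '({0}=*)'.format(login_attr)

f = final_user_filter(
    'mail', 'memberOf=CN=group,CN=developers,DC=example,DC=com')
```

With LDAP_LOGIN_ATTR set to mail, this reproduces the final filter shown above.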
The case of attribute names in the above example is significant. The memberOf attribute is only available in Active Directory.
You can use the LDAP_FILTER option to limit user scope to a certain AD group.
First, you should find out the DN for the group. Again, we'll use the dsquery command on the domain controller. For example, if group name is 'seafilegroup', run dsquery group -name seafilegroup.
Add below option to seahub_settings.py:
LDAP_FILTER = 'memberOf={output of dsquery command}'\n"},{"location":"config/ldap_in_pro/#using-tls-connection-to-ldap-server","title":"Using TLS connection to LDAP server","text":"If your LDAP service supports TLS connections, you can configure LDAP_SERVER_URL as the access address of the ldaps protocol to use TLS to connect to the LDAP service, for example:
LDAP_SERVER_URL = 'ldaps://192.168.0.1:636'\n"},{"location":"config/ldap_in_pro/#use-paged-results-extension","title":"Use paged results extension","text":"LDAP protocol version 3 supports the \"paged results\" (PR) extension. When you have a large number of users, this option can greatly improve the performance of listing users. Most directory servers nowadays support this extension.
In Seafile Pro Edition, add this option to seahub_settings.py to enable PR:
LDAP_USE_PAGED_RESULT = True\n"},{"location":"config/ldap_in_pro/#follow-referrals","title":"Follow referrals","text":"Seafile Pro Edition supports automatically following referrals in LDAP search. This is useful for partitioned LDAP or AD servers, where users may be spread across multiple directory servers. For more information about referrals, you can refer to this article.
Note: If you get an error like Invalid credentials, you can try setting LDAP_FOLLOW_REFERRALS = False to solve the problem:
LDAP_FOLLOW_REFERRALS = False\n"},{"location":"config/ldap_in_pro/#configure-multi-ldap-servers","title":"Configure Multi-ldap Servers","text":"Seafile Pro Edition supports multiple LDAP servers; you can configure two LDAP servers to work with Seafile. When getting or searching for an LDAP user, Seafile iterates over all configured LDAP servers until a match is found; when listing all LDAP users, it iterates over all servers to collect them; LDAP sync syncs user/group info from all configured LDAP servers to Seafile.
Currently, only two LDAP servers are supported.
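The lookup order can be sketched like this, with each server reduced to a plain dict (an illustration of the iteration described above, not Seafile's code):

```python
def find_user(ldap_servers, login):
    """Iterate configured LDAP servers in order until a match is
    found (illustrative sketch of multi-LDAP lookup; each 'server'
    here is just a dict mapping login -> user record)."""
    for server in ldap_servers:
        user = server.get(login)
        if user is not None:
            return user
    return None  # no configured server knows this login

primary = {'alice@example.com': {'name': 'Alice'}}
secondary = {'bob@example.top': {'name': 'Bob'}}
found = find_user([primary, secondary], 'bob@example.top')
```

A user present only on the second server is still found, because the search falls through in configuration order.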
If you want to use multi-ldap servers, please replace LDAP in the options with MULTI_LDAP_1, and then add them to seahub_settings.py, for example:
# Basic config options\nENABLE_LDAP = True\n......\n\n# Multi ldap config options\nENABLE_MULTI_LDAP = True\nMULTI_LDAP_1_SERVER_URL = 'ldap://192.168.0.2'\nMULTI_LDAP_1_BASE_DN = 'ou=test,dc=seafile,dc=top'\nMULTI_LDAP_1_ADMIN_DN = 'administrator@example.top'\nMULTI_LDAP_1_ADMIN_PASSWORD = 'Hello@123'\nMULTI_LDAP_1_PROVIDER = 'ldap1'\nMULTI_LDAP_1_LOGIN_ATTR = 'userPrincipalName'\n\n# Optional configs\nMULTI_LDAP_1_USER_FIRST_NAME_ATTR = 'givenName'\nMULTI_LDAP_1_USER_LAST_NAME_ATTR = 'sn'\nMULTI_LDAP_1_USER_NAME_REVERSE = False\nENABLE_MULTI_LDAP_1_EXTRA_USER_INFO_SYNC = True\n\nMULTI_LDAP_1_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' \nMULTI_LDAP_1_USE_PAGED_RESULT = False\nMULTI_LDAP_1_FOLLOW_REFERRALS = True\nENABLE_MULTI_LDAP_1_USER_SYNC = True\nENABLE_MULTI_LDAP_1_GROUP_SYNC = True\nMULTI_LDAP_1_SYNC_DEPARTMENT_FROM_OU = True\n\nMULTI_LDAP_1_USER_OBJECT_CLASS = 'person'\nMULTI_LDAP_1_DEPT_ATTR = ''\nMULTI_LDAP_1_UID_ATTR = ''\nMULTI_LDAP_1_CONTACT_EMAIL_ATTR = ''\nMULTI_LDAP_1_USER_ROLE_ATTR = ''\nMULTI_LDAP_1_AUTO_REACTIVATE_USERS = True\n\nMULTI_LDAP_1_GROUP_OBJECT_CLASS = 'group'\nMULTI_LDAP_1_GROUP_FILTER = ''\nMULTI_LDAP_1_GROUP_MEMBER_ATTR = 'member'\nMULTI_LDAP_1_GROUP_UUID_ATTR = 'objectGUID'\nMULTI_LDAP_1_CREATE_DEPARTMENT_LIBRARY = False\nMULTI_LDAP_1_DEPT_REPO_PERM = 'rw'\nMULTI_LDAP_1_DEFAULT_DEPARTMENT_QUOTA = -2\nMULTI_LDAP_1_SYNC_GROUP_AS_DEPARTMENT = False\nMULTI_LDAP_1_USE_GROUP_MEMBER_RANGE_QUERY = False\nMULTI_LDAP_1_USER_ATTR_IN_MEMBERUID = 'uid'\nMULTI_LDAP_1_DEPT_NAME_ATTR = ''\n......\n !!! note: There are still some shared config options are used for all LDAP servers, as follows:
```python\n# Common user sync options\nLDAP_SYNC_INTERVAL = 60\nIMPORT_NEW_USER = True # Whether to import new users when syncing users\nACTIVATE_USER_WHEN_IMPORT = True # Whether to activate the user when importing a new user\nDEACTIVE_USER_IF_NOTFOUND = False # Set to \"true\" if you want to deactivate a user \n # when he/she has been deleted from the AD server.\n\n# Common group sync options\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\" to have the sync process delete a group if it is not found in the LDAP server.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\" to have the sync process delete a department if it is not found in the LDAP server.\n```\n"},{"location":"config/ldap_in_pro/#sso-and-ldap-users-use-the-same-uid","title":"SSO and LDAP users use the same uid","text":"If you sync users from LDAP to Seafile and want Seafile to find the existing account when a user logs in via SSO (ADFS, OAuth or Shibboleth) instead of creating a new one, you can set
SSO_LDAP_USE_SAME_UID = True\n Here the UID means the unique user ID: in LDAP it is the attribute you use for LDAP_LOGIN_ATTR (not LDAP_UID_ATTR); in ADFS it is the uid attribute. You need to make sure you use the same attribute for the two settings.
On this basis, if you only want users to log in via SSO and not through LDAP, you can set
USE_LDAP_SYNC_ONLY = True\n"},{"location":"config/ldap_in_pro/#importing-roles-from-ldap","title":"Importing Roles from LDAP","text":"Seafile Pro Edition supports syncing roles from LDAP or Active Directory.
To enable this feature, add below option to seahub_settings.py, e.g.
LDAP_USER_ROLE_ATTR = 'title'\n LDAP_USER_ROLE_ATTR is the attribute field used to configure roles in LDAP. You can write a custom function to map the role by creating a file seahub_custom_functions.py under conf/ and editing it like:
# -*- coding: utf-8 -*-\n\n# The AD roles attribute returns a list of roles (role_list).\n# The following function uses the first entry in the list.\ndef ldap_role_mapping(role):\n if 'staff' in role:\n return 'Staff'\n if 'guest' in role:\n return 'Guest'\n if 'manager' in role:\n return 'Manager'\n\n# From version 11.0.11-pro, you can define the following function\n# to calculate a role from the role_list.\ndef ldap_role_list_mapping(role_list):\n if not role_list:\n return ''\n for role in role_list:\n if 'staff' in role:\n return 'Staff'\n if 'guest' in role:\n return 'Guest'\n if 'manager' in role:\n return 'Manager'\n You should only define one of the two functions.
You can rewrite the function (in Python) to make your own mapping rules. If the file or function doesn't exist, the first entry in role_list will be synced.
"},{"location":"config/multi_institutions/","title":"Multiple Organization/Institution User Management","text":"Starting from version 5.1, you can add institutions into Seafile and assign users to institutions. Each institution can have one or more administrators. This feature is meant to ease user administration when multiple organizations (universities) share a single Seafile instance. Unlike multi-tenancy, the users are not isolated. A user from one institution can share files with another institution.
"},{"location":"config/multi_institutions/#turn-on-the-feature","title":"Turn on the feature","text":"In seahub_settings.py, add MULTI_INSTITUTION = True to enable multi-institution feature, and add
EXTRA_MIDDLEWARE += (\n 'seahub.institutions.middleware.InstitutionMiddleware',\n )\n Please replace += with = if EXTRA_MIDDLEWARE is not defined.
After restarting Seafile, a system admin can add institutions by adding the institution name in the admin panel. The admin can also click into an institution, which will list all users whose profile.institution matches the name.
If you are using Shibboleth, you can map a Shibboleth attribute to the institution. For example, the following configuration maps the organization attribute to institution.
SHIBBOLETH_ATTRIBUTE_MAP = {\n \"givenname\": (False, \"givenname\"),\n \"sn\": (False, \"surname\"),\n \"mail\": (False, \"contact_email\"),\n \"organization\": (False, \"institution\"),\n}\n"},{"location":"config/multi_tenancy/","title":"Multi-Tenancy Support","text":"The multi-tenancy feature is designed for hosting providers that want to host several customers in a single Seafile instance. You can create multiple organizations. Organizations are isolated from each other. Users can't share libraries between organizations.
"},{"location":"config/multi_tenancy/#seafile-config","title":"Seafile Config","text":""},{"location":"config/multi_tenancy/#seafileconf","title":"seafile.conf","text":"[general]\nmulti_tenancy = true\n"},{"location":"config/multi_tenancy/#seahub_settingspy","title":"seahub_settings.py","text":"CLOUD_MODE = True\nMULTI_TENANCY = True\n\nORG_MEMBER_QUOTA_ENABLED = True\n\nORG_ENABLE_ADMIN_CUSTOM_NAME = True # Default is True, meaning organization name can be customized\nORG_ENABLE_ADMIN_CUSTOM_LOGO = False # Default is False, if set to True, organization logo can be customized\n\nENABLE_MULTI_ADFS = True # Default is False, if set to True, support per organization custom ADFS/SAML2 login\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n ...\n}\n"},{"location":"config/multi_tenancy/#usage","title":"Usage","text":"An organization can be created via system admin in \u201cadmin panel->organization->Add organization\u201d.
Every organization has a URL prefix. This field is for future use. When a user creates an organization, a URL prefix like org1 will be automatically assigned.
After creating an organization, the first user will become the admin of that organization. The organization admin can add other users. Note, the system admin can't add users.
"},{"location":"config/multi_tenancy/#adfssaml-single-sign-on-integration-in-multi-tenancy","title":"ADFS/SAML single sign-on integration in multi-tenancy","text":""},{"location":"config/multi_tenancy/#preparation-for-adfssaml","title":"Preparation for ADFS/SAML","text":"1) Prepare SP(Seafile) certificate directory and SP certificates:
Create sp certs dir
$ mkdir -p /opt/seafile-data/seafile/seahub-data/certs\n The SP certificate can be generated by the openssl command, or you can apply to the certificate manufacturer, it is up to you. For example, generate the SP certs using the following command:
$ cd /opt/seafile-data/seafile/seahub-data/certs\n$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout sp.key -out sp.crt\n The days option indicates the validity period of the generated certificate. The unit is days. The system admin needs to update the certificate regularly.
Note
If certificates are not placed in /opt/seafile-data/seafile/seahub-data/certs, you need to add the following configuration in seahub_settings.py:
SAML_CERTS_DIR = '/path/to/certs'\n 2) Add the following configuration to seahub_settings.py and then restart Seafile:
ENABLE_MULTI_ADFS = True\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n ...\n}\n"},{"location":"config/multi_tenancy/#integration-with-adfssaml-single-sign-on","title":"Integration with ADFS/SAML single sign-on","text":"Please refer to this document.
"},{"location":"config/oauth/","title":"OAuth Authentication","text":""},{"location":"config/oauth/#oauth","title":"OAuth","text":"Before using OAuth, you should first register an OAuth2 client application on your authorization server, then add some configurations to seahub_settings.py.
"},{"location":"config/oauth/#register-an-oauth2-client-application","title":"Register an OAuth2 client application","text":"Here we use Github as an example. First you should register an OAuth2 client application on Github; the official documentation from Github is very detailed.
"},{"location":"config/oauth/#configuration","title":"Configuration","text":"Add the following configurations to seahub_settings.py:
ENABLE_OAUTH = True\n\n# Whether to create a new user when he/she logs in to Seafile for the first time, default `True`.\nOAUTH_CREATE_UNKNOWN_USER = True\n\n# Whether to activate a new user when he/she logs in to Seafile for the first time, default `True`.\nOAUTH_ACTIVATE_USER_AFTER_CREATION = True\n\n# Usually OAuth works through an SSL layer. If your server is not configured to allow HTTPS, some methods will raise an \"oauthlib.oauth2.rfc6749.errors.InsecureTransportError\". Set this to `True` to avoid this error.\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\n# Client id/secret generated by the authorization server when you register your client application.\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\n\n# Callback url when user authentication succeeds. Note: the redirect url you enter when you register your client application MUST be exactly the same as this value.\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\n# The following should NOT be changed if you are using Github as OAuth provider.\nOAUTH_PROVIDER_DOMAIN = 'github.com' \nOAUTH_PROVIDER = 'github.com'\n\nOAUTH_AUTHORIZATION_URL = 'https://github.com/login/oauth/authorize'\nOAUTH_TOKEN_URL = 'https://github.com/login/oauth/access_token'\nOAUTH_USER_INFO_URL = 'https://api.github.com/user'\nOAUTH_SCOPE = [\"user\",]\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # Please keep the 'email' option unchanged to be compatible with the login of users of version 11.0 and earlier.\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n \"uid\": (True, \"uid\"), # Seafile v11.0 + \n}\n"},{"location":"config/oauth/#more-explanations-about-the-settings","title":"More explanations about the settings","text":"OAUTH_PROVIDER / OAUTH_PROVIDER_DOMAIN
OAUTH_PROVIDER_DOMAIN will be deprecated, and it can be replaced by OAUTH_PROVIDER. This variable is used in the database to identify third-party providers, either as a domain or as an easy-to-remember string less than 32 characters.
OAUTH_ATTRIBUTE_MAP
This variable describes which claims from the response of the user info endpoint are to be filled into which attributes of the new Seafile user. The format is shown below:
OAUTH_ATTRIBUTE_MAP = {\n <:Attribute in the OAuth provider>: (<:Is required or not in Seafile?>, <:Attribute in Seafile >)\n }\n If the remote resource server, like Github, also uses email to identify a unique user, Seafile will use the Github id directly; the OAUTH_ATTRIBUTE_MAP setting for Github should be like this:
OAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # it is deprecated\n \"uid / id / username\": (True, \"uid\") \n\n # extra infos you want to update to Seafile\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"), \n }\n The key part id stands for a unique identifier of the user in Github; this tells Seafile which attribute the remote resource server uses to identify its user. The value part True indicates whether this field is mandatory in Seafile.
Since version 11.0, Seafile uses uid as the external unique identifier of the user. It stores uid in the table social_auth_usersocialauth and maps it to the internal unique identifier used in Seafile. Different OAuth systems have different attributes, which may be id, uid or username, etc. The id/email config id: (True, email) is deprecated.
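The way an attribute map is applied to a user-info payload can be sketched as follows (an illustrative helper; Seafile's own handling may differ in details):

```python
def apply_attribute_map(user_info, attribute_map):
    """Apply an OAUTH_ATTRIBUTE_MAP-style mapping to a user-info
    payload (illustrative sketch only). Each map entry is
    provider_attr -> (required, seafile_attr); a missing required
    attribute is an error."""
    profile = {}
    for src, (required, dest) in attribute_map.items():
        if src in user_info:
            profile[dest] = user_info[src]
        elif required:
            raise ValueError('missing required attribute: {0}'.format(src))
    return profile

info = {'uid': 'jane', 'name': 'Jane', 'email': 'jane@example.com'}
amap = {'uid': (True, 'uid'),
        'name': (False, 'name'),
        'email': (False, 'contact_email')}
profile = apply_attribute_map(info, amap)
```

Optional attributes that the provider omits are simply skipped, while a missing required attribute aborts the login mapping.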
If you upgrade from a version below 11.0, you need to have both fields configured, i.e., your configuration should be like:
OAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"),\n \"uid\": (True, \"uid\") ,\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"), \n }\n In this way, when a user logs in, Seafile will first use the \"id -> email\" map to find the old user and then create a \"uid -> uid\" map for this old user. After all users have logged in once, you can delete the configuration \"id\": (True, \"email\").
If you use a newly deployed 11.0+ Seafile instance, you don't need the \"id\": (True, \"email\") item. Your configuration should look like:
OAUTH_ATTRIBUTE_MAP = {\n \"uid\": (True, \"uid\") ,\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"), \n }\n"},{"location":"config/oauth/#sample-settings","title":"Sample settings","text":"GoogleGithubGitLabAzure Cloud ENABLE_OAUTH = True\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\n# The following should NOT be changed if you are using Google as OAuth provider.\nOAUTH_PROVIDER_DOMAIN = 'google.com'\nOAUTH_AUTHORIZATION_URL = 'https://accounts.google.com/o/oauth2/v2/auth'\nOAUTH_TOKEN_URL = 'https://www.googleapis.com/oauth2/v4/token'\nOAUTH_USER_INFO_URL = 'https://www.googleapis.com/oauth2/v1/userinfo'\nOAUTH_SCOPE = [\n \"openid\",\n \"https://www.googleapis.com/auth/userinfo.email\",\n \"https://www.googleapis.com/auth/userinfo.profile\",\n]\nOAUTH_ATTRIBUTE_MAP = {\n \"sub\": (True, \"uid\"),\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n Note
For Github, email is not the unique identifier for a user, but id is in most cases, so we use id as the settings example in our manual. As Seafile uses email to identify a unique user account for now, we combine id and OAUTH_PROVIDER_DOMAIN, which is github.com in your case, into an email-format string and then create this account if it does not exist.
ENABLE_OAUTH = True\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\nOAUTH_PROVIDER_DOMAIN = 'github.com'\nOAUTH_AUTHORIZATION_URL = 'https://github.com/login/oauth/authorize'\nOAUTH_TOKEN_URL = 'https://github.com/login/oauth/access_token'\nOAUTH_USER_INFO_URL = 'https://api.github.com/user'\nOAUTH_SCOPE = [\"user\",]\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, 'uid'),\n \"email\": (False, \"contact_email\"),\n \"name\": (False, \"name\"),\n}\n Note
To enable OAuth via GitLab, create an application in GitLab (under Admin area->Applications).
Fill in required fields:
Name: a name you specify
Redirect URI: The callback URL (see OAUTH_REDIRECT_URL below)
Trusted: Skip confirmation dialog page. Select this so users are not asked whether they want to authorize Seafile to access their account data.
Scopes: Select openid and read_user in the scopes list.
Press submit, copy the client id and secret you receive on the confirmation page, and use them in this template for your seahub_settings.py
ENABLE_OAUTH = True\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\nOAUTH_REDIRECT_URL = \"https://your-seafile/oauth/callback/\"\n\nOAUTH_PROVIDER_DOMAIN = 'your-domain'\nOAUTH_AUTHORIZATION_URL = 'https://gitlab.your-domain/oauth/authorize'\nOAUTH_TOKEN_URL = 'https://gitlab.your-domain/oauth/token'\nOAUTH_USER_INFO_URL = 'https://gitlab.your-domain/api/v4/user'\nOAUTH_SCOPE = [\"openid\", \"read_user\"]\nOAUTH_ATTRIBUTE_MAP = {\n \"email\": (True, \"uid\"),\n \"name\": (False, \"name\")\n}\n Note
For users of Azure Cloud, as there is no id field returned from Azure Cloud's user info endpoint, we use a special configuration for the OAUTH_ATTRIBUTE_MAP setting (others are the same as Github/Google). Please see this tutorial for the complete deployment process of OAuth against Azure Cloud.
OAUTH_ATTRIBUTE_MAP = {\n \"email\": (True, \"uid\"),\n \"name\": (False, \"name\")\n}\n"},{"location":"config/ocm/","title":"Open Cloud Mesh","text":"From 8.0.0, Seafile supports the OCM protocol. With OCM, users can share libraries with other servers that have OCM enabled as well.
Seafile currently supports sharing between Seafile servers with versions greater than 8.0, and sharing from Nextcloud to Seafile since 9.0.
These two functions cannot be enabled at the same time
"},{"location":"config/ocm/#configuration","title":"Configuration","text":"Add the following configuration to seahub_settings.py.
# Enable OCM\nENABLE_OCM = True\nOCM_PROVIDER_ID = '71687320-6219-47af-82f3-32012707a5ae' # the unique id of this server\nOCM_REMOTE_SERVERS = [\n {\n \"server_name\": \"dev\",\n \"server_url\": \"https://seafile-domain-1/\", # should end with '/'\n },\n {\n \"server_name\": \"download\",\n \"server_url\": \"https://seafile-domain-2/\", # should end with '/'\n },\n]\n # Enable OCM\nENABLE_OCM_VIA_WEBDAV = True\nOCM_PROVIDER_ID = '71687320-6219-47af-82f3-32012707a5ae' # the unique id of this server\nOCM_REMOTE_SERVERS = [\n {\n \"server_name\": \"nextcloud\",\n \"server_url\": \"https://nextcloud-domain-1/\", # should end with '/'\n }\n]\n OCM_REMOTE_SERVERS is a list of servers that you allow your users to share libraries with
"},{"location":"config/ocm/#usage","title":"Usage","text":""},{"location":"config/ocm/#share-library-to-other-server","title":"Share library to other server","text":"In the library sharing dialog, switch to \"Share to other server\". There you can share this library with users of another server with \"Read-Only\" or \"Read-Write\" permission. You can also view sharing records and cancel sharing.
"},{"location":"config/ocm/#view-be-shared-libraries","title":"View shared libraries","text":"You can go to the \"Shared from other servers\" page to view the libraries shared by other servers and cancel the sharing.
You can also enter a library to view, download, or upload files.
"},{"location":"config/remote_user/","title":"SSO using Remote User","text":"Starting from 7.0.0, Seafile can integrate with various Single Sign On systems via a proxy server. Examples include Apache as a Shibboleth proxy, LemonLDAP as a proxy to LDAP servers, or Apache as a Kerberos proxy. Seafile can retrieve user information from special request headers (HTTP_REMOTE_USER, HTTP_X_AUTH_USER, etc.) set by the proxy servers.
After the proxy server (Apache/Nginx) has successfully authenticated the user, the user information is set in the request header, and Seafile creates and logs in the user based on this information.
Make sure that the proxy server has a corresponding security mechanism to protect against forged request header attacks
Please add the following settings to conf/seahub_settings.py to enable this feature.
ENABLE_REMOTE_USER_AUTHENTICATION = True\n\n# Optional, HTTP header, which is configured in your web server conf file,\n# used for Seafile to get user's unique id, default value is 'HTTP_REMOTE_USER'.\nREMOTE_USER_HEADER = 'HTTP_REMOTE_USER'\n\n# Optional, when the value of HTTP_REMOTE_USER is not a valid email address,\n# Seafile will build an email-like unique id from the value of 'REMOTE_USER_HEADER'\n# and this domain, e.g. user1@example.com.\nREMOTE_USER_DOMAIN = 'example.com'\n\n# Optional, whether to create new user in Seafile system, default value is True.\n# If this setting is disabled, users that don't already exist in the Seafile DB cannot log in.\n# The admin has to first import the users from external systems like LDAP.\nREMOTE_USER_CREATE_UNKNOWN_USER = True\n\n# Optional, whether to activate new user in Seafile system, default value is True.\n# If this setting is disabled, the user will be unable to log in by default;\n# the administrator needs to manually activate this user.\nREMOTE_USER_ACTIVATE_USER_AFTER_CREATION = True\n\n# Optional, map user attribute in HTTP header and Seafile's user attribute.\nREMOTE_USER_ATTRIBUTE_MAP = {\n 'HTTP_DISPLAYNAME': 'name',\n 'HTTP_MAIL': 'contact_email',\n\n # for user info\n \"HTTP_GIVENNAME\": 'givenname',\n \"HTTP_SN\": 'surname',\n \"HTTP_ORGANIZATION\": 'institution',\n\n # for user role\n 'HTTP_SHIBBOLETH_AFFILIATION': 'affiliation',\n}\n\n# Map affiliation to user role. Though the config name is SHIBBOLETH_AFFILIATION_ROLE_MAP,\n# it is not restricted to Shibboleth\nSHIBBOLETH_AFFILIATION_ROLE_MAP = {\n 'employee@uni-mainz.de': 'staff',\n 'member@uni-mainz.de': 'staff',\n 'student@uni-mainz.de': 'student',\n 'employee@hu-berlin.de': 'guest',\n 'patterns': (\n ('*@hu-berlin.de', 'guest1'),\n ('*@*.de', 'guest2'),\n ('*', 'guest'),\n ),\n}\n Then restart Seafile.
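To illustrate the two derived behaviors above, here is a minimal sketch (not Seafile's actual code) of how a remote-user header value could be turned into an email-like uid via REMOTE_USER_DOMAIN, and how an affiliation could be resolved to a role, trying exact keys first and then the ordered `patterns` list. The helper names `build_uid` and `role_for` are hypothetical.

```python
# Illustrative only: applying REMOTE_USER_DOMAIN and
# SHIBBOLETH_AFFILIATION_ROLE_MAP the way the settings above describe.
from fnmatch import fnmatch

REMOTE_USER_DOMAIN = "example.com"

SHIBBOLETH_AFFILIATION_ROLE_MAP = {
    "employee@uni-mainz.de": "staff",
    "student@uni-mainz.de": "student",
    "patterns": (
        ("*@hu-berlin.de", "guest1"),
        ("*@*.de", "guest2"),
        ("*", "guest"),
    ),
}

def build_uid(remote_user):
    # If the header value is not email-like, append the configured domain.
    return remote_user if "@" in remote_user else f"{remote_user}@{REMOTE_USER_DOMAIN}"

def role_for(affiliation, role_map):
    # Exact affiliation keys win; the 'patterns' tuples are tried in order.
    if affiliation in role_map:
        return role_map[affiliation]
    for pattern, role in role_map.get("patterns", ()):
        if fnmatch(affiliation, pattern):
            return role
    return None

print(build_uid("user1"))                                              # user1@example.com
print(role_for("student@uni-mainz.de", SHIBBOLETH_AFFILIATION_ROLE_MAP))  # student
print(role_for("x@hu-berlin.de", SHIBBOLETH_AFFILIATION_ROLE_MAP))        # guest1
```

Because the `patterns` tuples are checked in order, the more specific `*@hu-berlin.de` entry must come before the broader `*@*.de` and `*` entries.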
"},{"location":"config/roles_permissions/","title":"Roles and Permissions Support","text":"You can add/edit roles and permissions for users. A role is just a group of users with some pre-defined permissions. You can toggle user roles on the user list page in the admin panel. For most permissions, the meaning can be easily inferred from the variable name. The following is a more detailed introduction to some variables.
role_quota is used to set the quota for a certain role of users. For example, we can set the quota of employee to 100G by adding 'role_quota': '100g', and leave users of other roles at the default quota.
After setting role_quota, it will take effect once a user with such a role logs into Seafile. You can also manually change seafile-db.RoleQuota if you want to see the effect immediately.
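As a rough illustration of what a quota string like '100g' means in bytes, here is a hypothetical parser sketch; Seafile's own parser may differ in detail (e.g. accepted suffixes or case handling).

```python
# Illustrative only: converting a role_quota-style string such as '100g'
# to a byte count. An empty string means "use the default quota".
UNITS = {"k": 1024, "m": 1024**2, "g": 1024**3, "t": 1024**4}

def parse_quota(value):
    value = value.strip().lower()
    if not value:
        return None                      # empty: fall back to default quota
    if value[-1] in UNITS:
        return int(value[:-1]) * UNITS[value[-1]]
    return int(value)                    # plain number, assumed bytes

print(parse_quota("100g"))  # 107374182400
```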
can_add_public_repo sets whether a role can create a public library (shared with all logged-in users); the default is False.
Since version 11.0.9 pro, can_share_repo is added to limit users' ability to share a library
The can_add_public_repo option will not take effect if you configure global CLOUD_MODE = True
can_create_wiki and can_publish_wiki are used to control whether a role can create a Wiki and publish a Wiki. (A published Wiki has a special URL and can be visited by anonymous users)
storage_ids permission is used for assigning storage backends to users with a specific role. More details can be found in multiple storage backends.
upload_rate_limit and download_rate_limit are added to limit upload and download speed for users with different roles.
Note
After configuring the rate limit, run the following command in the seafile-server-latest directory to make the configuration take effect:
./seahub.sh python-env python3 seahub/manage.py set_user_role_upload_download_rate_limit\n can_drag_drop_folder_to_sync: allow or deny users to sync folders by dragging and dropping
can_export_files_via_mobile_client: allow or deny users to export files when using the mobile client
Seafile comes with two built-in roles, default and guest. A default user is a normal user with the following permissions:
'default': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'can_export_files_via_mobile_client': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_create_wiki': True,\n 'can_publish_wiki': True,\n 'upload_rate_limit': 0, # unit: kb/s\n 'download_rate_limit': 0,\n 'monthly_rate_limit': '',\n 'monthly_rate_limit_per_user': '',\n 'can_choose_office_suite': True,\n 'monthly_ai_credit_per_user': -1,\n 'can_use_sso_in_multi_tenancy': True,\n },\n While a guest user can only read files/folders in the system, here are the permissions for a guest user:
'guest': {\n 'can_add_repo': False,\n 'can_share_repo': False,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_send_share_link_mail': False,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'can_export_files_via_mobile_client': False,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_create_wiki': False,\n 'can_publish_wiki': False,\n 'upload_rate_limit': 0,\n 'download_rate_limit': 0,\n 'monthly_rate_limit': '',\n 'monthly_rate_limit_per_user': '',\n 'can_choose_office_suite': False,\n 'can_use_sso_in_multi_tenancy': False,\n },\n"},{"location":"config/roles_permissions/#edit-build-in-roles","title":"Edit built-in roles","text":"If you want to edit the permissions of built-in roles, e.g. allow default users to invite guests or guest users to view repos in the organization, you can add the following lines to seahub_settings.py with the corresponding permissions set to True.
ENABLED_ROLE_PERMISSIONS = {\n 'default': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'can_export_files_via_mobile_client': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_create_wiki': True,\n 'can_publish_wiki': True,\n 'upload_rate_limit': 2000, # unit: kb/s\n 'download_rate_limit': 4000,\n 'monthly_rate_limit': '',\n 'monthly_rate_limit_per_user': '',\n 'can_choose_office_suite': True,\n 'monthly_ai_credit_per_user': -1,\n 'can_use_sso_in_multi_tenancy': True,\n },\n 'guest': {\n 'can_add_repo': False,\n 'can_share_repo': False,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_send_share_link_mail': False,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'can_export_files_via_mobile_client': False,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_create_wiki': False,\n 'can_publish_wiki': False,\n 'upload_rate_limit': 100,\n 'download_rate_limit': 200,\n 'monthly_rate_limit': '',\n 'monthly_rate_limit_per_user': '',\n 'can_choose_office_suite': False,\n 'can_use_sso_in_multi_tenancy': False,\n }\n}\n"},{"location":"config/roles_permissions/#more-about-guest-invitation-feature","title":"More about guest invitation feature","text":"An user who has can_invite_guest permission can invite people outside of the organization as guest.
In order to use this feature, in addition to granting the can_invite_guest permission to the user, add the following lines to seahub_settings.py:
ENABLE_GUEST_INVITATION = True\n\n# invitation expire time\nINVITATIONS_TOKEN_AGE = 72 # hours\n After restarting, users who have the can_invite_guest permission will see the "Invite People" section in the sidebar of the home page.
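Since INVITATIONS_TOKEN_AGE is expressed in hours, an invitation token created now is valid for that many hours. A small sketch of the expiry check (illustrative; the `is_expired` helper is hypothetical, not Seafile's code):

```python
# Illustrative only: with INVITATIONS_TOKEN_AGE = 72, an invitation token
# is considered expired 72 hours after it was created.
from datetime import datetime, timedelta

INVITATIONS_TOKEN_AGE = 72  # hours

def is_expired(created_at, now):
    return now - created_at > timedelta(hours=INVITATIONS_TOKEN_AGE)

print(is_expired(datetime(2024, 1, 1), datetime(2024, 1, 3)))  # False (48h)
print(is_expired(datetime(2024, 1, 1), datetime(2024, 1, 5)))  # True (96h)
```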
Users can invite a guest user by providing his/her email address; the system will email the invitation link to the user.
Tip
If you want to block certain email addresses for the invitation, you can define a blacklist, e.g.
INVITATION_ACCEPTER_BLACKLIST = [\"a@a.com\", \"*@a-a-a.com\", r\".*@(foo|bar).com\", ]\n After that, the email address \"a@a.com\", any email address ending with \"@a-a-a.com\", and any email address ending with \"@foo.com\" or \"@bar.com\" will not be allowed.
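Note that the example blacklist mixes three entry styles: a literal address, a glob-style pattern, and a regular expression. Here is a hedged sketch of how such a mixed list could be evaluated (illustrative only; Seafile's actual matching logic may differ). The `is_blocked` helper is hypothetical.

```python
# Illustrative only: matching an invitee address against a blacklist whose
# entries may be literal addresses, glob patterns, or regular expressions.
import fnmatch
import re

INVITATION_ACCEPTER_BLACKLIST = ["a@a.com", "*@a-a-a.com", r".*@(foo|bar).com"]

def is_blocked(email, blacklist):
    for entry in blacklist:
        # Glob patterns and literal addresses: translate to a regex.
        if re.match(fnmatch.translate(entry), email):
            return True
        # Raw regex entries: try them as-is; glob entries like '*@a-a-a.com'
        # are not valid regexes, so ignore the resulting re.error.
        try:
            if re.fullmatch(entry, email):
                return True
        except re.error:
            pass
    return False

print(is_blocked("x@bar.com", INVITATION_ACCEPTER_BLACKLIST))      # True
print(is_blocked("ok@example.org", INVITATION_ACCEPTER_BLACKLIST)) # False
```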
"},{"location":"config/roles_permissions/#add-custom-roles","title":"Add custom roles","text":"If you want to add a new role and assign some users to this role, e.g. a new role employee that can invite guests, can create public libraries, and has all other permissions a default user has, you can add the following lines to seahub_settings.py
ENABLED_ROLE_PERMISSIONS = {\n 'default': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'can_export_files_via_mobile_client': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_create_wiki': True,\n 'can_publish_wiki': True,\n 'upload_rate_limit': 2000, # unit: kb/s\n 'download_rate_limit': 4000,\n 'monthly_rate_limit': '',\n 'monthly_rate_limit_per_user': '',\n 'can_choose_office_suite': True,\n 'monthly_ai_credit_per_user': -1,\n 'can_use_sso_in_multi_tenancy': True,\n },\n 'guest': {\n 'can_add_repo': False,\n 'can_share_repo': False,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_send_share_link_mail': False,\n 'can_invite_guest': False,\n 'can_drag_drop_folder_to_sync': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'can_export_files_via_mobile_client': False,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_create_wiki': False,\n 'can_publish_wiki': False,\n 'upload_rate_limit': 100,\n 'download_rate_limit': 200,\n 'monthly_rate_limit': '',\n 'monthly_rate_limit_per_user': '',\n 'can_choose_office_suite': False,\n 'can_use_sso_in_multi_tenancy': False,\n },\n 'employee': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': True,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 
'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': True,\n 'can_drag_drop_folder_to_sync': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'can_export_files_via_mobile_client': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_create_wiki': True,\n 'can_publish_wiki': True,\n 'upload_rate_limit': 500,\n 'download_rate_limit': 800,\n 'monthly_rate_limit': '',\n 'monthly_rate_limit_per_user': '',\n 'can_choose_office_suite': True,\n 'monthly_ai_credit_per_user': -1, \n 'can_use_sso_in_multi_tenancy': True,\n },\n}\n"},{"location":"config/saml2/","title":"SAML 2.0 in version 10.0+","text":"In this document, we demonstrate how to integrate Seafile with SAML single sign-on. We will use the Microsoft Azure SAML single sign-on app, Microsoft on-premise ADFS, and Keycloak as three examples. Other SAML 2.0 providers should follow a similar approach.
"},{"location":"config/saml2/#preparations-for-saml-20","title":"Preparations for SAML 2.0","text":""},{"location":"config/saml2/#install-xmlsec1-package-binary-deployment-only","title":"Install xmlsec1 package (binary deployment only)","text":"This step is not needed for Docker-based deployments
$ apt update\n$ apt install xmlsec1\n$ apt install dnsutils # For multi-tenancy feature\n"},{"location":"config/saml2/#prepare-spseafile-certificate-directory-and-sp-certificates","title":"Prepare SP(Seafile) certificate directory and SP certificates","text":"Create certs dir:
Docker DeploymentBinary DeploymentThe default deployment path for Seafile is /opt/seafile, and the corresponding default path for seafile-data is /opt/seafile-data. If you do not deploy Seafile to this directory, you can check the SEAFILE_VOLUME variable in the env to confirm the path of your seafile-data.
cd /opt/seafile-data/seafile/seahub-data\nmkdir certs\n If you deploy Seafile using the binary package, the default installation and data path is /opt/seafile. If you do not deploy Seafile to this directory, please check your actual deployment path.
cd /opt/seafile/seahub-data\nmkdir certs\n The SP certificate can be generated by the openssl command, or you can apply for one from a certificate vendor; it is up to you. For example, generate the SP certs using the following command:
cd certs\nopenssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout sp.key -out sp.crt\n The days option indicates the validity period of the generated certificate. The unit is days. The system admin needs to renew the certificate regularly
In the following examples, we assume Seafile is deployed at https://demo.seafile.top. You should change the domain in the examples to the domain of your Seafile server.
If you use Microsoft Azure SAML app to achieve single sign-on, please follow the steps below:
First, add SAML single sign-on app and assign users, refer to: add an Azure AD SAML application, create and assign users.
Second, setup the Identifier, Reply URL, and Sign on URL of the SAML app based on your service URL, refer to: enable single sign-on for saml app. The format of the Identifier, Reply URL, and Sign on URL are: https://demo.seafile.top/saml2/metadata/, https://demo.seafile.top/saml2/acs/, https://demo.seafile.top/, e.g.:
Next, edit the SAML attributes & claims. Keep the default attributes & claims of the SAML app unchanged; the uid attribute must be added, while the mail and name attributes are optional, e.g.:
Next, download the base64 format SAML app's certificate and rename to idp.crt:
and put it under the certs directory (/opt/seafile-data/seafile/seahub-data/certs).
Next, copy the metadata URL of the SAML app:
and paste it into the SAML_REMOTE_METADATA_URL option in seahub_settings.py, e.g.:
SAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx' # copy from SAML app\n Next, add ENABLE_ADFS_LOGIN, LOGIN_REDIRECT_URL and SAML_ATTRIBUTE_MAPPING options to seahub_settings.py, and then restart Seafile, e.g:
ENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n 'seafile_groups': ('', ), # Optional, set this attribute if you need to synchronize groups/departments.\n ...\n\n}\nSAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx' # copy from SAML app\n Note
If the xmlsec1 binary is not located at /usr/bin/xmlsec1, you need to add the following configuration in seahub_settings.py:SAML_XMLSEC_BINARY_PATH = '/path/to/xmlsec1'\n View where the xmlsec1 binary is located:
$ which xmlsec1\n Finally, open the browser and enter the Seafile login page, click Single Sign-On, and use the user assigned to SAML app to perform a SAML login test.
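To make the SAML_ATTRIBUTE_MAPPING behavior above concrete, here is an illustrative sketch (not Seafile's actual code) of how assertion attributes, which arrive as lists of values, could be mapped onto Seafile user fields. The `map_saml_attributes` helper and the sample assertion are hypothetical.

```python
# Illustrative only: applying a SAML_ATTRIBUTE_MAPPING-style dict to the
# attributes of a SAML assertion. Each incoming attribute maps to a tuple
# of Seafile user fields; values arrive as lists, and the first is used.
SAML_ATTRIBUTE_MAPPING = {
    "name": ("display_name",),
    "mail": ("contact_email",),
}

def map_saml_attributes(assertion_attrs, mapping):
    user_info = {}
    for saml_attr, seafile_fields in mapping.items():
        values = assertion_attrs.get(saml_attr, [])
        if values:
            for field in seafile_fields:
                if field:                 # skip placeholder entries like ('', )
                    user_info[field] = values[0]
    return user_info

attrs = {"name": ["Jane Doe"], "mail": ["jane@example.com"]}
print(map_saml_attributes(attrs, SAML_ATTRIBUTE_MAPPING))
# {'display_name': 'Jane Doe', 'contact_email': 'jane@example.com'}
```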
If you use Microsoft ADFS to achieve single sign-on, please follow the steps below:
First, please make sure the following preparations are done:
A Windows Server with ADFS installed. For configuring and installing ADFS you can see this article.
A valid SSL certificate for ADFS server, and here we use temp.adfs.com as the domain name example.
A valid SSL certificate for Seafile server, and here we use demo.seafile.top as the domain name example.
Second, download the base64 format certificate and upload it:
Navigate to the AD FS management window. In the left sidebar menu, navigate to Services > Certificates.
Locate the Token-signing certificate. Right-click the certificate and select View Certificate.
In the dialog box, select the Details tab.
Click Copy to File.
In the Certificate Export Wizard that opens, click Next.
Select Base-64 encoded X.509 (.CER), then click Next.
Named it idp.crt, then click Next.
Click Finish to complete the download.
Then put it under the certs directory (/opt/seafile/seahub-data/certs).
Next, add the following configurations to seahub_settings.py and then restart Seafile:
ENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n 'seafile_groups': ('', ), # Optional, set this attribute if you need to synchronize groups/departments.\n ...\n}\nSAML_REMOTE_METADATA_URL = 'https://temp.adfs.com/federationmetadata/2007-06/federationmetadata.xml' # The format of the ADFS federation metadata URL is: `https://{your ADFS domain name}/federationmetadata/2007-06/federationmetadata.xml`\n Next, add relying party trust:
Log into the ADFS server and open the ADFS management.
Under Actions, click Add Relying Party Trust.
On the Welcome page, choose Claims aware and click Start.
Select Import data about the relying party published online or on a local network, type your metadata URL in Federation metadata address (host name or URL), and then click Next. Your metadata URL format is: https://demo.seafile.top/saml2/metadata/, e.g.:
On the Specify Display Name page, type a name in Display name, e.g. Seafile; under Notes, type a description for this relying party trust, and then click Next.
In the Choose an access control policy window, select Permit everyone, then click Next.
Review your settings, then click Next.
Click Close.
Next, create claim rules:
Open the ADFS management, click Relying Party Trusts.
Right-click your trust, and then click Edit Claim Issuance Policy.
On the Issuance Transform Rules tab click Add Rules.
Click the Claim rule template dropdown menu and select Send LDAP Attributes as Claims, and then click Next.
In the Claim rule name field, type the display name for this rule, such as Seafile Claim rule. Click the Attribute store dropdown menu and select Active Directory. In the LDAP Attribute column, click the dropdown menu and select User-Principal-Name. In the Outgoing Claim Type column, click the dropdown menu and select UPN. And then click Finish.
Click Add Rule again.
Click the Claim rule template dropdown menu and select Transform an Incoming Claim, and then click Next.
In the Claim rule name field, type the display name for this rule, such as UPN to Name ID. Click the Incoming claim type dropdown menu and select UPN(It must match the Outgoing Claim Type in rule Seafile Claim rule). Click the Outgoing claim type dropdown menu and select Name ID. Click the Outgoing name ID format dropdown menu and select Email. And then click Finish.
Click OK to add both new rules.
When creating claim rules, you can also select other LDAP Attributes, such as E-Mail-Addresses, depending on your ADFS service
Finally, open the browser and enter the Seafile login page, then click Single Sign-On to perform an ADFS login test.
In this part, we use the Keycloak SAML single sign-on app to show how Seafile integrates with SAML 2.0.
"},{"location":"config/saml2/#in-keycloak","title":"In Keycloak","text":"First, Create a new Client:
Client type: Choose SAML
Client ID: Fill in the SAML metadata address of Seafile (e.g., https://demo.seafile.top/saml2/metadata/)
Root URL and Home URL: Root Directory/Homepage, fill in the Seafile web service address (e.g., https://demo.seafile.top/)
Valid redirect URIs: Valid Redirect URIs, fill in all URLs of the Seafile web service (e.g., https://demo.seafile.top/*)
Next, open the client you just created and make the following modifications; leave all other settings as default.
Settings - SAML capabilities: Set the Name ID Format to email, and only keep Include AuthnStatement enabled, disable all other settings.
Settings - Signature and Encryption: The default encryption algorithm is RSA_SHA256, so no changes are required.
Keys : Confirm that the Signing keys config is in the disabled state.
Client scopes: Configure the protocol mapping to map user information.
Next, choose the custom configuration By configuration:
Next, ensure that the above two attributes are added. After adding them, the result is as follows:
Advanced - Fine Grain SAML Endpoint Configuration
Assertion Consumer Service POST Binding URL: Send the SAML assertion request to the SP using the POST method, and set it to the SAML ACS address of Seafile (e.g., https://demo.seafile.top/saml2/acs/).
Assertion Consumer Service Redirect Binding URL: Send the SAML assertion request to the SP via the redirect method, and set it to Seafile's SAML ACS address (same as the Assertion Consumer Service POST Binding URL).
Logout Service POST Binding URL: The address for sending a logout request to the SP via the POST method. Fill in the SAML logout POST address of Seafile (e.g., https://demo.seafile.top/saml2/ls/post/).
Logout Service Redirect Binding URL: The address for sending a logout request to the SP via the redirect method. Fill in Seafile's SAML logout address (e.g., https://demo.seafile.top/saml2/ls/).
Advanced - Authentication flow overrides: Bind the authenticator (the default account-password login uses the Browser flow).
cd /opt/seafile-data/seafile/conf/\nvim seahub_settings.py \n\n\nENABLE_ADFS_LOGIN = True\n#SAML_CERTS_DIR is a path inside the container and does not need to be changed.\nSAML_CERTS_DIR = '/opt/seafile/seahub-data/certs'\n#The configuration format of SAML_REMOTE_METADATA_URL is '{idp_server_url}/realms/{realm}/protocol/saml/descriptor' \n#idp_server_url: The URL of the Keycloak service\n#realm: Realm name\nSAML_REMOTE_METADATA_URL = 'https://keycloak.seafile.com/realms/haiwen/protocol/saml/descriptor'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n}\n Finally, open the browser and enter the Seafile login page, click Single Sign-On, and use the user assigned to SAML app to perform a SAML login test.
"},{"location":"config/seafevents-conf/","title":"Configurable Options","text":"In the file seafevents.conf:
[STATISTICS]\n## must be \"true\" to enable statistics\nenabled = true\n\n[SEAHUB EMAIL]\n## must be \"true\" to enable user email notifications when there are new unread notifications\nenabled = true\n\n## interval of sending Seahub email. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval = 30m\n\n[FILE HISTORY]\nenabled = true\nthreshold = 5\nsuffix = md,txt,...\n\n## From seafile 7.0.0\n## Recording file history to database for fast access is enabled by default for 'Markdown, .txt, ppt, pptx, doc, docx, xls, xlsx'. \n## After enable the feature, the old histories version for markdown, doc, docx files will not be list in the history page.\n## (Only new histories that stored in database will be listed) But the users can still access the old versions in the library snapshots.\n## For file types not listed in the suffix , histories version will be scanned from the library history as before.\n## The feature default is enable. You can set the 'enabled = false' to disable the feature.\n\n## The 'threshold' is the time threshold for recording the historical version of a file, in minutes, the default is 5 minutes. \n## This means that if the interval between two adjacent file saves is less than 5 minutes, the two file changes will be merged and recorded as a historical version. \n## When set to 0, there is no time limit, which means that each save will generate a separate historical version.\n\n## If you need to modify the file list format, you can add 'suffix = md, txt, ...' configuration items to achieve.\n"},{"location":"config/seafevents-conf/#the-following-configurations-for-pro-edition-only","title":"The following configurations for Pro Edition only","text":"[AUDIT]\n## Audit log is disabled default.\n## Leads to additional SQL tables being filled up, make sure your SQL server is able to handle it.\nenabled = true\n\n[INDEX FILES]\n## must be \"true\" to enable search\nenabled = true\n\n## The interval the search index is updated. 
Can be s(seconds), m(minutes), h(hours), d(days)\ninterval=10m\n\n## From Seafile 6.3.0 pro, in order to speed up the full-text search speed, you should setup\nhighlight = fvh\n\n## If true, indexes the contents of office/pdf files while updating search index\n## Note: If you change this option from \"false\" to \"true\", then you need to clear the search index and update the index again.\n## Refer to file search manual for details.\nindex_office_pdf=false\n\n## The default size limit for doc, docx, ppt, pptx, xls, xlsx and pdf files. Files larger than this will not be indexed.\n## Since version 6.2.0\n## Unit: MB\noffice_file_size_limit = 10\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch through username and password, you need to configure username and password for the Elasticsearch server\nusername = elastic # username to connect to Elasticsearch\npassword = elastic_password # password to connect to Elasticsearch\n\n## From 9.0.7 pro, Seafile supports connecting to elasticsearch via HTTPS, you need to configure HTTPS for the Elasticsearch server\nscheme = https # The default is http. If the Elasticsearch server is not configured with HTTPS, the scheme and cafile do not need to be configured\ncafile = path/to/cert.pem # The certificate path for user authentication. 
If the Elasticsearch server does not enable certificate authentication, this does not need to be configured\n\n## From version 11.0.5 Pro, you can customize Elasticsearch index names for distinct instances when integrating multiple Seafile servers with a single Elasticsearch server.\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\n\n## The default loglevel is `warning`.\n## Since version 11.0.4\nloglevel = info\n\n[EVENTS PUBLISH]\n## must be \"true\" to enable publishing event messages\nenabled = false\n## message format: repo-update\t{{repo_id}}\t{{commit_id}}\n## Currently only the Redis message queue is supported\nmq_type = redis\n\n[AUTO DELETION]\nenabled = true # Default is false; when enabled, users can use the file auto deletion feature\ninterval = 86400 # The unit is second(s); the default frequency is once a day\n\n[SEASEARCH]\nenabled = true # Default is false; when enabled, Seafile can use SeaSearch as the search engine\nseasearch_url = http://seasearch:4080 # If your SeaSearch server is deployed on another machine, replace this with its actual address\nseasearch_token = <your auth token> # base64-encoded `username:password`\ninterval = 10m # The interval the search index is updated. Can be s(seconds), m(minutes), h(hours), d(days)\n"},{"location":"config/seafile-conf/","title":"Seafile.conf settings","text":"Important
Every entry in this configuration file is case-sensitive.
You need to restart the Seafile Docker container so that your changes take effect.
"},{"location":"config/seafile-conf/#storage-quota-setting","title":"Storage Quota Setting","text":"You may set a default quota (e.g. 2GB) for all users. To do this, just add the following lines to seafile.conf file
[quota]\n# default user quota in GB, integer only\ndefault = 2\n This setting applies to all users. If you want to set quota for a specific user, you may log in to seahub website as administrator, then set it in \"System Admin\" page.
Since Pro edition 10.0.9, you can set the maximum number of files allowed in a library; when this limit is exceeded, files can no longer be uploaded to the library. There is no limit by default.
[quota]\nlibrary_file_limit = 100000\n"},{"location":"config/seafile-conf/#default-history-length-limit","title":"Default history length limit","text":"If you don't want to keep all file revision history, you may set a default history length limit for all libraries.
[history]\nkeep_days = days of history to keep\n"},{"location":"config/seafile-conf/#default-trash-expiration-time","title":"Default trash expiration time","text":"The default time for automatic cleanup of the library trash is 30 days. You can modify this time by adding the following configuration:
[library_trash]\nexpire_days = 60\n"},{"location":"config/seafile-conf/#seafile-fileserver-configuration","title":"Seafile fileserver configuration","text":"The configuration of seafile fileserver is in the [fileserver] section of the file seafile.conf
You can set the number of worker threads used to serve HTTP requests. The default value is 10, which is a good value for most use cases.
[fileserver]\nworker_threads = 15\n Change upload/download settings.
[fileserver]\n# Set maximum upload file size to 200M.\n# If not configured, there is no file size limit for uploading.\nmax_upload_size=200\n\n# Set maximum download directory size to 200M.\n# Default is 100M.\nmax_download_dir_size=200\n After a file is uploaded via the web interface, or the cloud file browser in the client, it needs to be divided into fixed size blocks and stored into storage backend. We call this procedure \"indexing\". By default, the file server uses 1 thread to sequentially index the file and store the blocks one by one. This is suitable for most cases. But if you're using S3/Ceph/Swift backends, you may have more bandwidth in the storage backend for storing multiple blocks in parallel. We provide an option to define the number of concurrent threads in indexing:
[fileserver]\nmax_indexing_threads = 10\n When users upload files in the web interface (seahub), the file server divides the file into fixed-size blocks. The default block size for web-uploaded files is 8MB. The block size can be set here.
[fileserver]\n#Set block size to 2MB\nfixed_block_size=2\n When users upload files in the web interface, the file server assigns a token to authorize the upload operation. This token is valid for 1 hour by default. When uploading a large file via WAN, the upload time can be longer than 1 hour. You can change the token expiration time to a larger value.
[fileserver]\n#Set uploading time limit to 3600s\nweb_token_expire_time=3600\n You can download a folder as a zip archive from seahub, but some zip software on Windows doesn't support UTF-8; in that case you can use the \"windows_encoding\" setting to work around it.
[zip]\n# The file name encoding of the downloaded zip file.\nwindows_encoding = iso-8859-1\n The \"httptemp\" directory contains temporary files created during file upload and zip download. In some cases the temporary files are not cleaned up after an interrupted file transfer. Starting from version 7.1.5, the file server regularly scans the \"httptemp\" directory to remove files created a long time ago.
[fileserver]\n# After how much time a temp file will be removed. The unit is in seconds. Defaults to 3 days.\nhttp_temp_file_ttl = x\n# File scan interval. The unit is in seconds. Defaults to 1 hour.\nhttp_temp_scan_interval = x\n You can set the maximum number of files contained in a library that can be synced by the Seafile client. The default is 100000. When you download a repo, the Seafile client requests the fs id list, and you can control the timeout period of this request through the fs_id_list_request_timeout configuration, which defaults to 5 minutes. These two options are added to prevent long fs-id-list requests from overloading the server. You can set both options to -1 to allow unlimited size and timeout.
[fileserver]\nmax_sync_file_count = 100000\nfs_id_list_request_timeout = 300\n If you use object storage as the storage backend, when a large file is frequently downloaded, the same blocks need to be fetched from the storage backend to the Seafile server repeatedly. This may waste bandwidth and cause high load on the internal network. Since Seafile Pro 8.0.5, block caching was added to improve the situation.
Enable it with the use_block_cache option in the [fileserver] group. It's not enabled by default. The block_cache_size_limit option limits the size of the cache; its default value is 10GB. The blocks are cached in the seafile-data/block-cache directory. When the total size of cached files exceeds the limit, seaf-server cleans up older files until the size drops to 70% of the limit. The cleanup interval is 5 minutes. You should have a good estimate of how much space you need for the cache directory; otherwise, frequent downloads can quickly fill it up. The block_cache_file_types option chooses which file types are cached; its default value is mp4;mov.[fileserver]\nuse_block_cache = true\n# Set block cache size limit to 100MB\nblock_cache_size_limit = 100\nblock_cache_file_types = mp4;mov\n When a large number of files are uploaded through the web page and API, it is expensive to calculate block IDs based on the block contents. Since Seafile-pro-9.0.6, you can add the skip_block_hash option to use a random string as the block ID. Warning
This option will prevent fsck from checking block content integrity. You should specify --shallow option to fsck to not check content integrity.
[fileserver]\nskip_block_hash = true\n If you want to restrict the types of files that can be uploaded, since Seafile Pro 10.0.0 you can set the file_ext_white_list option in the [fileserver] group. This option is a list of file types; only the file types in this list are allowed to be uploaded. It's not enabled by default.
[fileserver]\nfile_ext_white_list = md;mp4;mov\n Since Seafile 10.0.1, when you use the go fileserver, you can set the upload_limit and download_limit options in the [fileserver] group to limit the speed of file uploads and downloads. They are not enabled by default.
[fileserver]\n# The unit is in KB/s.\nupload_limit = 100\ndownload_limit = 100\n Since Seafile 11.0.7 Pro, you can ask the file server to scan every file uploaded via web APIs for viruses. Find more options about virus scanning at virus scan.
[fileserver]\n# default is false\ncheck_virus_on_web_upload = true\n Since Seafile 12.0.4, after an upload is completed by the client, the Seafile server checks whether the uploaded blocks are complete. It's enabled by default.
[fileserver]\n# default is true\nverify_client_blocks_after_sync = true\n"},{"location":"config/seafile-conf/#database-configuration","title":"Database configuration","text":"The configurations of database are stored in the [database] section.
From Seafile 11.0, SQLite is no longer supported.
[database]\ntype=mysql\nhost=127.0.0.1\nuser=root\npassword=root\ndb_name=seafile_db\nconnection_charset=utf8\nmax_connections=100\n When you configure seafile server to use MySQL, the default connection pool size is 100, which should be enough for most use cases.
Since Seafile 10.0.2, you can enable the encrypted connections to the MySQL server by adding the following configuration options:
[database]\nuse_ssl = true\nskip_verify = false\nca_path = /etc/mysql/ca.pem\n When use_ssl is set to true and skip_verify to false, the MySQL server certificate is verified against the CA configured in ca_path. ca_path is the path of a trusted CA certificate used to sign MySQL server certificates. When skip_verify is true, there is no need to add the ca_path option, and the MySQL server certificate won't be verified.
The Seafile Pro server automatically expires file locks after some time, to prevent a file from staying locked for too long. The expiration time can be tuned in the seafile.conf file.
[file_lock]\ndefault_expire_hours = 6\n The default is 12 hours.
Since Seafile-pro-9.0.6, you can add a cache for querying locked files (to reduce server load caused by sync clients). Since Pro Edition 12, this option is enabled by default.
[file_lock]\nuse_locked_file_cache = true\n At the same time, you also need to configure the following memcache options for the cache to take effect:
[memcached]\nmemcached_options = --SERVER=<the IP of Memcached Server> --POOL-MIN=10 --POOL-MAX=100\n"},{"location":"config/seafile-conf/#storage-backends","title":"Storage Backends","text":"You may configure Seafile to use various kinds of object storage backends.
You may also configure Seafile to use multiple storage backends at the same time.
"},{"location":"config/seafile-conf/#cluster","title":"Cluster","text":"When you deploy Seafile in a cluster, you should add the following configuration:
[cluster]\nenabled = true\n Tip
Since version 12, if you use Docker to deploy cluster, this option is no longer needed.
"},{"location":"config/seafile-conf/#enable-slow-log","title":"Enable Slow Log","text":"Since Seafile-pro-6.3.10, you can enable seaf-server's RPC slow log to do performance analysis. The slow log is enabled by default.
If you want to configure related options, add the options to seafile.conf:
[slow_log]\n# default to true\nenable_slow_log = true\n# the unit of all slow log thresholds is millisecond.\n# default to 5000 milliseconds, only RPC queries processed for longer than 5000 milliseconds will be logged.\nrpc_slow_threshold = 5000\n You can find seafile_slow_rpc.log in logs/slow_logs. You can also use log-rotate to rotate the log files. You just need to send SIGUSR2 to seaf-server process. The slow log file will be closed and reopened.
Since 9.0.2 Pro, the signal to trigger log rotation has been changed to SIGUSR1. This signal will trigger rotation for all log files opened by seaf-server. You should change your log rotate settings accordingly.
Even though Nginx logs all requests with certain details, such as url, response code, upstream process time, it's sometimes desirable to have more context about the requests, such as the user id for each request. Such information can only be logged from file server itself. Since 9.0.2 Pro, access log feature is added to fileserver.
To enable access log, add below options to seafile.conf:
[fileserver]\n# default to false. If enabled, fileserver-access.log will be written to log directory.\nenable_access_log = true\n The log format is as following:
start time - user id - url - response code - process time\n You can use SIGUSR1 to trigger log rotation.
Seafile 9.0 introduces a new fileserver implemented in Go programming language. To enable it, you can set the options below in seafile.conf:
[fileserver]\nuse_go_fileserver = true\n Go fileserver has 3 advantages over the traditional fileserver implemented in C language:
max_sync_file_count limits the size of a library that can be synced; the default is 100K. With Go fileserver you can set this option to a much higher number, such as 1 million. The max_download_dir_size option is thus no longer needed by Go fileserver. Go fileserver caches fs objects in memory. On the one hand, this avoids repeated creation and destruction of frequently accessed objects; on the other hand, it slows down the rate at which objects are released, which prevents Go's GC mechanism from consuming too much CPU time. You can set the amount of memory used by the fs cache through the following options.
[fileserver]\n# The unit is in MB. Defaults to 2G.\nfs_cache_limit = 100\n Since Pro edition 12.0.10, you can set the maximum number of threads for fs-id-list requests. When you download a repo, the Seafile client requests the fs id list; you can control the maximum concurrency for handling fs-id-list requests in the go fileserver through the fs_id_list_max_threads configuration, which defaults to 10.
[fileserver]\nfs_id_list_max_threads = 20\n"},{"location":"config/seafile-conf/#profiling-go-fileserver-performance","title":"Profiling Go Fileserver Performance","text":"Since Seafile 9.0.7, you can enable the profile function of go fileserver by adding the following configuration options:
# profile_password is required, change it for your need\n[fileserver]\nenable_profiling = true\nprofile_password = 8kcUz1I2sLaywQhCRtn2x1\n This interface can be used through the pprof tool provided by Go language. See https://pkg.go.dev/net/http/pprof for details. Note that you have to first install Go on the client that issues the below commands. The password parameter should match the one you set in the configuration.
go tool pprof http://localhost:8082/debug/pprof/heap?password=8kcUz1I2sLaywQhCRtn2x1\ngo tool pprof http://localhost:8082/debug/pprof/profile?password=8kcUz1I2sLaywQhCRtn2x1\n"},{"location":"config/seahub_customization/","title":"Seahub customization","text":""},{"location":"config/seahub_customization/#customize-seahub-logo-and-css","title":"Customize Seahub Logo and CSS","text":"Create customize folder
Deploy in Docker: mkdir -p /opt/seafile-data/seafile/seahub-data/custom\n Deploy from binary packages: mkdir /opt/seafile/seafile-server-latest/seahub/media/custom\n During upgrading, the Seafile upgrade script will automatically create a symbolic link to preserve your customization.
"},{"location":"config/seahub_customization/#customize-logo","title":"Customize Logo","text":"Add your logo file to custom/
Overwrite LOGO_PATH in seahub_settings.py
LOGO_PATH = 'custom/mylogo.png'\n Default width and height for logo is 149px and 32px, you may need to change that according to yours.
LOGO_WIDTH = 149\nLOGO_HEIGHT = 32\n"},{"location":"config/seahub_customization/#customize-favicon","title":"Customize Favicon","text":"Add your favicon file to custom/
Overwrite FAVICON_PATH in seahub_settings.py
FAVICON_PATH = 'custom/favicon.png'\n"},{"location":"config/seahub_customization/#customize-seahub-css","title":"Customize Seahub CSS","text":"Add your css file to custom/, for example, custom.css
Overwrite BRANDING_CSS in seahub_settings.py
BRANDING_CSS = 'custom/custom.css'\n"},{"location":"config/seahub_customization/#customize-help-page","title":"Customize help page","text":"Deploy in Docker: mkdir -p /opt/seafile-data/seafile/seahub-data/custom/templates/help/\ncd /opt/seafile-data/seafile/seahub-data/custom\ncp ../../help/templates/help/install.html templates/help/\n Deploy from binary packages: mkdir /opt/seafile/seafile-server-latest/seahub/media/custom/templates/help/\ncd /opt/seafile/seafile-server-latest/seahub/media/custom\ncp ../../help/templates/help/base.html templates/help/
Note
There are some more help pages available for modifying, you can find the list of the html file here
"},{"location":"config/seahub_customization/#add-an-extra-note-in-sharing-dialog","title":"Add an extra note in sharing dialog","text":"You can add an extra note in sharing dialog in seahub_settings.py
ADDITIONAL_SHARE_DIALOG_NOTE = {\n 'title': 'Attention! Read before sharing files:',\n 'content': 'Do not share personal or confidential official data with **.'\n}\n Result:
"},{"location":"config/seahub_customization/#add-custom-navigation-items","title":"Add custom navigation items","text":"Since Pro 7.0.9, Seafile supports adding some custom navigation entries to the home page for quick access. This requires you to add the following configuration information to the conf/seahub_settings.py configuration file:
CUSTOM_NAV_ITEMS = [\n {'icon': 'sf2-icon-star',\n 'desc': 'Custom navigation 1',\n 'link': 'https://www.seafile.com'\n },\n {'icon': 'sf2-icon-wiki-view',\n 'desc': 'Custom navigation 2',\n 'link': 'https://www.seafile.com/help'\n },\n {'icon': 'sf2-icon-wrench',\n 'desc': 'Custom navigation 3',\n 'link': 'http://www.example.com'\n },\n]\n Note
The icon field currently only supports icons in Seafile that begin with sf2-icon. You can find the list of icons here
Then restart the Seahub service to take effect.
Once you log in to the Seafile system homepage again, you will see the new navigation entry under the Tools navigation bar on the left.
ADDITIONAL_ABOUT_DIALOG_LINKS = {\n 'seafile': 'https://example.seahub.com/seahub',\n 'dtable-web': 'https://example.seahub.com/dtable-web'\n}\n Result:
"},{"location":"config/seahub_settings_py/","title":"Seahub Settings","text":"Tip
You can also modify most of the config items via the web interface. These config items are saved in a database table (seahub-db/constance_config) and take priority over the items in config files. If you want to disable settings via the web interface, add ENABLE_SETTINGS_VIA_WEB = False to seahub_settings.py.
Refer to email sending documentation.
"},{"location":"config/seahub_settings_py/#security-settings","title":"Security settings","text":"# For security consideration, please set to match the host/domain of your site, e.g., ALLOWED_HOSTS = ['.example.com'].\n# Please refer https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts for details.\nALLOWED_HOSTS = ['.myseafile.com']\n\n\n# Whether to use a secure cookie for the CSRF cookie\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-cookie-secure\nCSRF_COOKIE_SECURE = True\n\n# The value of the SameSite flag on the CSRF cookie\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-cookie-samesite\nCSRF_COOKIE_SAMESITE = 'Strict'\n\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-trusted-origins\nCSRF_TRUSTED_ORIGINS = ['https://www.myseafile.com']\n"},{"location":"config/seahub_settings_py/#user-management-options","title":"User management options","text":"The following options affect user registration, password and session.
# Enable or disable registration on web. Default is `False`.\nENABLE_SIGNUP = False\n\n# Activate or deactivate a user when registration completes. Default is `True`.\n# If set to `False`, new users need to be activated by an admin in the admin panel.\nACTIVATE_AFTER_REGISTRATION = False\n\n# Whether to send email when a system admin adds a new member. Default is `True`.\nSEND_EMAIL_ON_ADDING_SYSTEM_MEMBER = True\n\n# Whether to send email when a system admin resets a user's password. Default is `True`.\nSEND_EMAIL_ON_RESETTING_USER_PASSWD = True\n\n# Send the system admin a notification email when user registration completes. Default is `False`.\nNOTIFY_ADMIN_AFTER_REGISTRATION = True\n\n# Remember days for login. Default is 7\nLOGIN_REMEMBER_DAYS = 7\n\n# Attempt limit before showing a captcha at login.\nLOGIN_ATTEMPT_LIMIT = 3\n\n# deactivate user account when login attempts exceed the limit\n# Since version 5.1.2 or pro 5.1.3\nFREEZE_USER_ON_LOGIN_FAILED = False\n\n# default False, only check USER_PASSWORD_MIN_LENGTH\n# when True, check password strength level; STRONG (or above) is required\nUSER_STRONG_PASSWORD_REQUIRED = False\n\n# Force user to change password when an admin adds/resets a user.\n# Added in 5.1.1, defaults to True.\nFORCE_PASSWORD_CHANGE = True\n\n# Age of cookie, in seconds (default: 2 weeks).\nSESSION_COOKIE_AGE = 60 * 60 * 24 * 7 * 2\n\n# Whether a user's session cookie expires when the Web browser is closed.\nSESSION_EXPIRE_AT_BROWSER_CLOSE = False\n\n# Whether to save the session data on every request. Default is `False`\nSESSION_SAVE_EVERY_REQUEST = False\n\n# In old versions, if you use Single Sign On, the password is not saved in Seafile.\n# Users can't use WebDAV because Seafile can't check whether the password is correct.\n# Since version 6.3.8, you can enable this option to let users set a specific password for WebDAV login.\n# Users who log in via SSO can use this password to log in to WebDAV.\n# Enable the feature. 
pycryptodome should be installed first.\n# sudo pip install pycryptodome==3.12.0\nENABLE_WEBDAV_SECRET = True\nWEBDAV_SECRET_MIN_LENGTH = 8\n\n# LEVEL for the password, based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above.\nWEBDAV_SECRET_STRENGTH_LEVEL = 1\n\n\n# Since version 7.0.9, you can force all users to log in with two-factor authentication.\n# The prerequisite is that the administrator should 'enable two factor authentication' in the 'System Admin -> Settings' page.\n# Then you can add the following configuration information to the configuration file.\nENABLE_FORCE_2FA_TO_ALL_USERS = True\n\n# Enable two factor authentication for accounts. Defaults to `False`.\n# Since version 6.0\nENABLE_TWO_FACTOR_AUTH = True\n\n# Enable a user to change password in 'settings' page. Default to `True`\n# Since version 6.2.11\nENABLE_CHANGE_PASSWORD = True\n\n# Whether to show contact email when searching for users.\nENABLE_SHOW_CONTACT_EMAIL_WHEN_SEARCH_USER = True\n"},{"location":"config/seahub_settings_py/#single-sign-on","title":"Single Sign On","text":"# Enable authentication with ADFS\n# Default is False\n# Since 6.0.9\nENABLE_ADFS_LOGIN = True\n\n# Force user login through ADFS/OAuth instead of email and password\n# Default is False\n# Since 11.0.7, in version 12.0, it also controls users via OAuth\nDISABLE_ADFS_USER_PWD_LOGIN = True\n\n# Enable authentication with Kerberos\n# Default is False\nENABLE_KRB5_LOGIN = True\n\n# Enable authentication with Shibboleth\n# Default is False\nENABLE_SHIBBOLETH_LOGIN = True\n\n# Enable a user associated with an SSO account to change/reset the local password on the 'settings' page. 
Default to `True`.\n# Change it to False to prevent SSO accounts from changing the local password\nENABLE_SSO_USER_CHANGE_PASSWORD = True\n\n# Enable client to open an external browser for single sign on\n# When it is false, the old built-in browser is opened for single sign on\n# When it is true, the default browser of the operating system is opened\n# The benefit of using the system browser is that it can support hardware 2FA\n# Since 11.0.0, and sync client 9.0.5, drive client 3.0.8\nCLIENT_SSO_VIA_LOCAL_BROWSER = True # default is False\nCLIENT_SSO_UUID_EXPIRATION = 5 * 60 # in seconds\n"},{"location":"config/seahub_settings_py/#library-snapshot-label-feature","title":"Library snapshot label feature","text":"# Turn on this option to let users add a label to a library snapshot. Default is `False`\nENABLE_REPO_SNAPSHOT_LABEL = False\n"},{"location":"config/seahub_settings_py/#library-options","title":"Library options","text":"Options for libraries:
# whether to allow creating encrypted libraries\nENABLE_ENCRYPTED_LIBRARY = True\n\n# version for encrypted library\n# should only be `2` or `4`.\n# version 3 is insecure (using AES128 encryption) so it's not supported any more.\n# refer to https://manual.seafile.com/latest/administration/security_features/#how-does-an-encrypted-library-work\n# for the difference between version 2 and 4.\nENCRYPTED_LIBRARY_VERSION = 2\n\n# Since version 12, you can choose the password hash algorithm for new encrypted libraries.\n# The password is used to encrypt the encryption key. So using a secure password hash algorithm to\n# prevent brute-force password guessing is important.\n# Before version 12, a fixed algorithm (PBKDF2-SHA256 with 1000 iterations) was used.\n#\n# Currently two hash algorithms are supported.\n# - PBKDF2: The only available parameter is the number of iterations. You need to increase the\n# number of iterations over time, as GPUs are more and more used for such calculations.\n# The default number of iterations is 1000. As of 2023, the recommended number of iterations is 600,000.\n# - Argon2id: Secure hash algorithm that has high cost even for GPUs. There are 3 parameters that\n# can be set: time cost, memory cost, and parallelism degree. The parameters are separated by commas,\n# e.g. \"2,102400,8\", which are the default parameters used in Seafile. 
Learn more about this algorithm\n# on https://github.com/P-H-C/phc-winner-argon2 .\n#\n# Note that only sync client >= 9.0.9 and SeaDrive >= 3.0.12 support syncing libraries created with these algorithms.\nENCRYPTED_LIBRARY_PWD_HASH_ALGO = \"argon2id\"\nENCRYPTED_LIBRARY_PWD_HASH_PARAMS = \"2,102400,8\"\n# ENCRYPTED_LIBRARY_PWD_HASH_ALGO = \"pbkdf2_sha256\"\n# ENCRYPTED_LIBRARY_PWD_HASH_PARAMS = \"600000\"\n\n# minimum length for the password of an encrypted library\nREPO_PASSWORD_MIN_LENGTH = 8\n\n# force a password when generating a share/upload link (since version 8.0.9)\nSHARE_LINK_FORCE_USE_PASSWORD = False\n\n# minimum length for the password of a share link (since version 4.4)\nSHARE_LINK_PASSWORD_MIN_LENGTH = 8\n\n# LEVEL for the password of a share/upload link\n# based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above. (since version 8.0.9)\nSHARE_LINK_PASSWORD_STRENGTH_LEVEL = 3\n\n# Default expire days for share link (since version 6.3.8)\n# Once this value is configured, the user can no longer generate a share link with no expiration time.\n# If the expiration value is not set when the share link is generated, the value configured here will be used.\nSHARE_LINK_EXPIRE_DAYS_DEFAULT = 5\n\n# minimum expire days for share link (since version 6.3.6)\n# SHARE_LINK_EXPIRE_DAYS_MIN should be less than SHARE_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nSHARE_LINK_EXPIRE_DAYS_MIN = 3 # default is 0, no limit.\n\n# maximum expire days for share link (since version 6.3.6)\n# SHARE_LINK_EXPIRE_DAYS_MAX should be greater than SHARE_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nSHARE_LINK_EXPIRE_DAYS_MAX = 8 # default is 0, no limit.\n\n# Default expire days for upload link (since version 7.1.6)\n# Once this value is configured, the user can no longer generate an upload link with no expiration time.\n# If the expiration value is not set when the upload link is generated, the value 
configured here will be used.\nUPLOAD_LINK_EXPIRE_DAYS_DEFAULT = 5\n\n# minimum expire days for upload link (since version 7.1.6)\n# UPLOAD_LINK_EXPIRE_DAYS_MIN should be less than UPLOAD_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nUPLOAD_LINK_EXPIRE_DAYS_MIN = 3 # default is 0, no limit.\n\n# maximum expire days for upload link (since version 7.1.6)\n# UPLOAD_LINK_EXPIRE_DAYS_MAX should be greater than UPLOAD_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nUPLOAD_LINK_EXPIRE_DAYS_MAX = 8 # default is 0, no limit.\n\n# force user login when viewing a file/folder share link (since version 6.3.6)\nSHARE_LINK_LOGIN_REQUIRED = True\n\n# enable watermark when viewing (not editing) a file in the web browser (since version 6.3.6)\nENABLE_WATERMARK = True\n\n# Disable sync with any folder. Default is `False`\n# NOTE: since version 4.2.4\nDISABLE_SYNC_WITH_ANY_FOLDER = True\n\n# Enable or disable the library history setting\nENABLE_REPO_HISTORY_SETTING = True\n\n# Enable or disable users sharing a library to any group\n# Since version 6.2.0\nENABLE_SHARE_TO_ALL_GROUPS = True\n\n# Enable or disable users cleaning the trash (default is True)\n# Since version 6.3.6\nENABLE_USER_CLEAN_TRASH = True\n\n# Add a report abuse button on download links. (since version 7.1.0)\n# Users can report abuse on the share link page, filling in the report type, contact information, and description.\n# Default is false.\nENABLE_SHARE_LINK_REPORT_ABUSE = True\n Options for online file preview:
# Online preview maximum file size, defaults to 30M.\nFILE_PREVIEW_MAX_SIZE = 30 * 1024 * 1024\n\n# Extensions of previewed text files.\n# NOTE: since version 6.1.1\nTEXT_PREVIEW_EXT = \"\"\"ac, am, bat, c, cc, cmake, cpp, cs, css, diff, el, h, html,\nhtm, java, js, json, less, make, org, php, pl, properties, py, rb,\nscala, script, sh, sql, txt, text, tex, vi, vim, xhtml, xml, log, csv,\ngroovy, rst, patch, go\"\"\"\n\n\n# Seafile only generates thumbnails for images smaller than the following size.\n# Since version 6.3.8 pro, supports PSD online preview.\nTHUMBNAIL_IMAGE_SIZE_LIMIT = 30 # MB
# The map service currently relies on the Google Maps API and requires two API keys.\nGOOGLE_MAP_KEY = '<replace with your Google Maps API Key>'\nSERVER_GOOGLE_MAP_KEY = '<replace with your Google Maps API Key>'\n Required scope of the API keys
To safeguard your Google API Keys from abuse, restrict their usage. However, even with restrictions in place, abuse remains a risk, especially since GOOGLE_MAP_KEY must be included in your source code and is therefore publicly accessible. Additionally, heavy use of the maps plugin may increase your Google billing, so monitor your spending closely.
GOOGLE_MAP_KEY: restrict to your server URL, like https://cloud.seafile.io; required API: Maps JavaScript API. SERVER_GOOGLE_MAP_KEY: no website restriction; required API: Geocoding API."},{"location":"config/seahub_settings_py/#cloud-mode","title":"Cloud Mode","text":"You should enable cloud mode if you use Seafile with an unknown user base. It disables the organization tab on Seahub's website to ensure that users can't access the user list. Cloud mode provides some nice features like sharing content with unregistered users and sending them invitations. Therefore you also want to enable user registration. Through the global address book (since version 4.2.3) users can search for every user account, so you probably want to disable it.
# Enable cloud mode and hide `Organization` tab.\nCLOUD_MODE = True\n\n# Disable global address book\nENABLE_GLOBAL_ADDRESSBOOK = False\n"},{"location":"config/seahub_settings_py/#other-options","title":"Other options","text":"# Disable settings via Web interface in system admin->settings\n# Default is True\n# Since 5.1.3\nENABLE_SETTINGS_VIA_WEB = False\n\n# Choices can be found here:\n# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name\n# although not all choices may be available on all operating systems.\n# If running in a Windows environment this must be set to the same as your\n# system time zone.\nTIME_ZONE = 'UTC'\n\n# Language code for this installation. All choices can be found here:\n# http://www.i18nguy.com/unicode/language-identifiers.html\n# Default language for sending emails.\nLANGUAGE_CODE = 'en'\n\n# Custom language code choice.\nLANGUAGES = (\n ('en', 'English'),\n ('zh-cn', '\u7b80\u4f53\u4e2d\u6587'),\n ('zh-tw', '\u7e41\u9ad4\u4e2d\u6587'),\n)\n\n# Set this to your website/company's name. This is contained in email notifications and the welcome message when a user logs in for the first time.\nSITE_NAME = 'Seafile'\n\n# Browser tab's title\nSITE_TITLE = 'Private Seafile'\n\n# Whether to enable the Wiki feature (requires sdoc integration). Default is `True`\nENABLE_WIKI = True\n\n# Maximum number of files when a user uploads a file/folder.\n# Since version 6.0.4\nMAX_NUMBER_OF_FILES_FOR_FILEUPLOAD = 500\n\n# Control the language used for sending emails. 
Defaults to the user's current language.\n# Since version 6.1.1\nSHARE_LINK_EMAIL_LANGUAGE = ''\n\n# Interval at which the browser requests unread notifications\n# Since PRO 6.1.4 or CE 6.1.2\nUNREAD_NOTIFICATIONS_REQUEST_INTERVAL = 3 * 60 # seconds\n\n# Whether to allow users to delete their account, change login password or update basic user\n# info on the profile page.\n# Since PRO 6.3.10\nENABLE_DELETE_ACCOUNT = False\nENABLE_UPDATE_USER_INFO = False\nENABLE_CHANGE_PASSWORD = False\n\n# Get web api auth token on profile page.\nENABLE_GET_AUTH_TOKEN_BY_SESSION = True\n\n# Since 8.0.6 CE/PRO version.\n# URL to redirect to after the user logs out of Seafile.\n# Usually configured as the Single Logout url.\nLOGOUT_REDIRECT_URL = 'https://www.example-url.com'\n\n\n# Let the system admin add Terms & Conditions; all users must accept the terms before using. Defaults to `False`.\n# Since version 6.0\nENABLE_TERMS_AND_CONDITIONS = True\n"},{"location":"config/seahub_settings_py/#pro-edition-only-options","title":"Pro edition only options","text":"# Allow administrators to view users' files in UNENCRYPTED libraries\n# through the Libraries page in System Admin. Default is False.\nENABLE_SYS_ADMIN_VIEW_REPO = True\n\n# Require non-logged-in users to provide an email before downloading or uploading on a shared link page.\n# Since version 5.1.4\nENABLE_SHARE_LINK_AUDIT = True\n\n# Check for viruses after files are uploaded to shared upload links. Defaults to `False`.\n# Since version 6.0\nENABLE_UPLOAD_LINK_VIRUS_CHECK = True\n\n# Send email to these email addresses when a virus is detected.\n# This list can contain any valid email addresses, not necessarily the emails of Seafile users.\n# Since version 6.0.8\nVIRUS_SCAN_NOTIFY_LIST = ['user_a@seafile.com', 'user_b@seafile.com']\n"},{"location":"config/seahub_settings_py/#restful-api","title":"RESTful API","text":"# API throttling related settings. 
Enlarge the rates if you get 429 response codes during API calls.\nREST_FRAMEWORK = {\n 'DEFAULT_THROTTLE_RATES': {\n 'ping': '600/minute',\n 'anon': '5/minute',\n 'user': '300/minute',\n },\n 'UNICODE_JSON': False,\n}\n\n# Throttling whitelist used to disable throttling for certain IPs.\n# e.g. REST_FRAMEWORK_THROTTING_WHITELIST = ['127.0.0.1', '192.168.1.1']\n# Please make sure the `REMOTE_ADDR` header is configured in the Nginx conf according to https://manual.seafile.com/13.0/setup_binary/ce/deploy_with_nginx.html.\nREST_FRAMEWORK_THROTTING_WHITELIST = []\n"},{"location":"config/seahub_settings_py/#seahub-custom-functions","title":"Seahub Custom Functions","text":"Since version 6.2, you can define a custom function to modify the result of the user search function.
For example, if you want to limit users to searching only for users in the same institution, you can define a custom_search_user function in {seafile install path}/conf/seahub_custom_functions/__init__.py
Code example:
import os\nimport sys\n\ncurrent_path = os.path.dirname(os.path.abspath(__file__))\nseahub_dir = os.path.join(current_path, \\\n '../../seafile-server-latest/seahub/seahub')\nsys.path.append(seahub_dir)\n\nfrom seahub.profile.models import Profile\ndef custom_search_user(request, emails):\n\n institution_name = ''\n\n username = request.user.username\n profile = Profile.objects.get_profile_by_user(username)\n if profile:\n institution_name = profile.institution\n\n inst_users = [p.user for p in\n Profile.objects.filter(institution=institution_name)]\n\n filtered_emails = []\n for email in emails:\n if email in inst_users:\n filtered_emails.append(email)\n\n return filtered_emails\n You should NOT change the name of custom_search_user and seahub_custom_functions/__init__.py
Since version 6.2.5 pro, if you enable the ENABLE_SHARE_TO_ALL_GROUPS feature on the sysadmin settings page, you can also define a custom function to return the groups a user can share a library to.
For example, if you want to let a user share a library to both its own groups and the groups of the user test@test.com, you can define a custom_get_groups function in {seafile install path}/conf/seahub_custom_functions/__init__.py
Code example:
import os\nimport sys\n\ncurrent_path = os.path.dirname(os.path.abspath(__file__))\nseaserv_dir = os.path.join(current_path, \\\n '../../seafile-server-latest/seafile/lib64/python2.7/site-packages')\nsys.path.append(seaserv_dir)\n\ndef custom_get_groups(request):\n\n from seaserv import ccnet_api\n\n groups = []\n username = request.user.username\n\n # for current user\n groups += ccnet_api.get_groups(username)\n\n # for 'test@test.com' user\n groups += ccnet_api.get_groups('test@test.com')\n\n return groups\n You should NOT change the name of custom_get_groups and seahub_custom_functions/__init__.py
Tip
docker compose restart\n cd /opt/seafile/seafile-server-latest\n./seahub.sh restart\n There are currently five types of emails sent in Seafile:
The first four types of email are sent immediately. The last type is sent by a background task running periodically.
"},{"location":"config/sending_email/#options-of-email-sending","title":"Options of Email Sending","text":"Please add the following lines to seahub_settings.py to enable email sending.
EMAIL_USE_TLS = True\nEMAIL_HOST = 'smtp.example.com' # SMTP server\nEMAIL_HOST_USER = 'username@example.com' # username and domain\nEMAIL_HOST_PASSWORD = 'password' # password\nEMAIL_PORT = 587\nDEFAULT_FROM_EMAIL = EMAIL_HOST_USER\nSERVER_EMAIL = EMAIL_HOST_USER\n Note
If your email service still does not work, you can check the log file logs/seahub.log to see what may cause the problem. For a complete email notification list, please refer to email notification list.
If you want to use the email service without authentication, leave EMAIL_HOST_USER and EMAIL_HOST_PASSWORD blank (''). (But note that the emails will then be sent without a From: address.)
About using SSL connection (using port 465)
EMAIL_USE_SSL = True instead of EMAIL_USE_TLS."},{"location":"config/sending_email/#change-reply-to-of-email","title":"Change reply to of email","text":"You can change the reply-to field of email by adding the following settings to seahub_settings.py. This only affects email sending for file share links.
# Set reply-to header to user's email or not, defaults to ``False``. For details,\n# please refer to http://www.w3.org/Protocols/rfc822/\nADD_REPLY_TO_HEADER = True\n"},{"location":"config/sending_email/#config-background-email-sending-task","title":"Config background email sending task","text":"The background task runs periodically to check whether a user has new unread notifications. If there are any, it sends a reminder email to that user. The background email sending task is controlled by seafevents.conf.
[SEAHUB EMAIL]\n\n## must be \"true\" to enable user email notifications when there are new unread notifications\nenabled = true\n\n## interval of sending seahub email. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval = 30m\n"},{"location":"config/sending_email/#add-smime-signature-to-email","title":"Add S/MIME signature to email","text":"If you want the email signed by S/MIME, please add the config in seahub_settings.py
ENABLE_SMIME = True\nSMIME_CERTS_DIR = '/opt/seafile/seahub-data/smime-certs' # including cert.pem and private_key.pem\n The certificate can be generated with the openssl command, or you can obtain one from a certificate authority; it is up to you. For example, generate the certs using the following commands: mkdir -p /opt/seafile/seahub-data/smime-certs\ncd /opt/seafile/seahub-data/smime-certs\nopenssl req -x509 -newkey rsa:4096 -keyout private_key.pem -outform PEM -out cert.pem -days 3650 -nodes\n Tip
Some email clients may not verify emails signed with certificates generated on the command line, so it's better to obtain certificates from a certificate authority.
"},{"location":"config/sending_email/#customize-email-messages","title":"Customize email messages","text":"The simplest way to customize the email messages is setting the SITE_NAME variable in seahub_settings.py. If it is not enough for your case, you can customize the email templates.
Tip
Subject lines may vary between releases; the following are based on Release 5.0.0. Restart Seahub so that your changes take effect.
"},{"location":"config/sending_email/#the-email-base-template","title":"The email base template","text":"seahub/seahub/templates/email_base.html
Tip
You can copy email_base.html to seahub-data/custom/templates/email_base.html and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
seahub/seahub/auth/forms.py line:127
send_html_email(_(\"Reset Password on %s\") % site_name,\n email_template_name, c, None, [user.username])\n Body
seahub/seahub/templates/registration/password_reset_email.html
Tip
You can copy password_reset_email.html to seahub-data/custom/templates/registration/password_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
seahub/seahub/views/sysadmin.py line:424
send_html_email(_(u'Password has been reset on %s') % SITE_NAME,\n 'sysadmin/user_reset_email.html', c, None, [email])\n Body
seahub/seahub/templates/sysadmin/user_add_email.html
Tip
You can copy user_add_email.html to seahub-data/custom/templates/sysadmin/user_add_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
seahub/seahub/views/sysadmin.py line:1224
send_html_email(_(u'Password has been reset on %s') % SITE_NAME,\n 'sysadmin/user_reset_email.html', c, None, [email])\n Body
seahub/seahub/templates/sysadmin/user_reset_email.html
Tip
You can copy user_reset_email.html to seahub-data/custom/templates/sysadmin/user_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
seahub/seahub/share/views.py line:913
try:\n if file_shared_type == 'f':\n c['file_shared_type'] = _(u\"file\")\n send_html_email(_(u'A file is shared to you on %s') % SITE_NAME,\n 'shared_link_email.html',\n c, from_email, [to_email],\n reply_to=reply_to\n )\n else:\n c['file_shared_type'] = _(u\"directory\")\n send_html_email(_(u'A directory is shared to you on %s') % SITE_NAME,\n 'shared_link_email.html',\n c, from_email, [to_email],\n reply_to=reply_to)\n Body
seahub/seahub/templates/shared_link_email.html
seahub/seahub/templates/shared_upload_link_email.html
Tip
You can copy shared_link_email.html to seahub-data/custom/templates/shared_link_email.html and modify the new one. In this way, the customization will be maintained after upgrade.
Subject
send_html_email(_('New notice on %s') % settings.SITE_NAME,\n 'notifications/notice_email.html', c,\n None, [to_user])\n Body
seahub/seahub/notifications/templates/notifications/notice_email.html
"},{"location":"config/shibboleth_authentication/","title":"Shibboleth Authentication","text":"Shibboleth is a widely used single sign on (SSO) protocol. Seafile supports authentication via Shibboleth. It allows users from another organization to log in to Seafile without registering an account on the service provider.
In this documentation, we assume the reader is familiar with Shibboleth installation and configuration. For introduction to Shibboleth concepts, please refer to https://shibboleth.atlassian.net/wiki/spaces/CONCEPT/overview .
Shibboleth Service Provider (SP) should be installed on the same server as the Seafile server. The official SP from https://shibboleth.net/ is implemented as an Apache module. The module handles all Shibboleth authentication details. Seafile server receives authentication information (username) from HTTP request. The username then can be used as login name for the user.
Seahub provides a special URL to handle Shibboleth login. The URL is https://your-seafile-domain/sso. Only this URL needs to be configured under Shibboleth protection. All other URLs don't go through the Shibboleth module. The overall workflow for a user to login with Shibboleth is as follows:
1. The user visits https://your-seafile-domain/sso. 2. The Shibboleth Apache module authenticates the user against the IdP and then redirects the user back to https://your-seafile-domain/sso. 3. Seahub reads the username (from the HTTP_REMOTE_USER header) and brings the user to her/his home page. Since Shibboleth support requires Apache, if you want to use Nginx, you need two servers, one for non-Shibboleth access, another configured with Apache to allow Shibboleth login. In a cluster environment, you can configure your load balancer to direct traffic to different servers according to the URL. Only the URL https://your-seafile-domain/sso needs to be directed to Apache.
The configuration includes 3 steps:
We use CentOS 7 as an example.
"},{"location":"config/shibboleth_authentication/#configure-apache","title":"Configure Apache","text":"You should create a new virtual host configuration for Shibboleth. And then restart Apache.
<IfModule mod_ssl.c>\n <VirtualHost _default_:443>\n ServerName your-seafile-domain\n DocumentRoot /var/www\n Alias /media /opt/seafile/seafile-server-latest/seahub/media\n\n ErrorLog ${APACHE_LOG_DIR}/seahub.error.log\n CustomLog ${APACHE_LOG_DIR}/seahub.access.log combined\n\n SSLEngine on\n SSLCertificateFile /path/to/ssl-cert.pem\n SSLCertificateKeyFile /path/to/ssl-key.pem\n\n <Location /Shibboleth.sso>\n SetHandler shib\n AuthType shibboleth\n ShibRequestSetting requireSession 1\n Require valid-user\n </Location>\n\n <Location /sso>\n SetHandler shib\n AuthType shibboleth\n ShibUseHeaders On\n ShibRequestSetting requireSession 1\n Require valid-user\n </Location>\n\n RewriteEngine On\n <Location /media>\n Require all granted\n </Location>\n\n # seafile fileserver\n ProxyPass /seafhttp http://127.0.0.1:8082\n ProxyPassReverse /seafhttp http://127.0.0.1:8082\n RewriteRule ^/seafhttp - [QSA,L]\n\n # seahub\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n\n # for http\n # RequestHeader set REMOTE_USER %{REMOTE_USER}e\n # for https\n RequestHeader set REMOTE_USER %{REMOTE_USER}s\n </VirtualHost>\n</IfModule>\n"},{"location":"config/shibboleth_authentication/#install-and-configure-shibboleth","title":"Install and Configure Shibboleth","text":"Installation and configuration of Shibboleth is out of the scope of this documentation. You can refer to the official Shibboleth document.
"},{"location":"config/shibboleth_authentication/#configure-shibbolethsp","title":"Configure Shibboleth(SP)","text":""},{"location":"config/shibboleth_authentication/#shibboleth2xml","title":"shibboleth2.xml","text":"Open /etc/shibboleth/shibboleth2.xml and change some properties. After you have made all the following changes, don't forget to restart the Shibboleth SP.
ApplicationDefaults element","text":"Change entityID and REMOTE_USER property:
<!-- The ApplicationDefaults element is where most of Shibboleth's SAML bits are defined. -->\n<ApplicationDefaults entityID=\"https://your-seafile-domain/sso\"\n REMOTE_USER=\"mail\"\n cipherSuites=\"DEFAULT:!EXP:!LOW:!aNULL:!eNULL:!DES:!IDEA:!SEED:!RC4:!3DES:!kRSA:!SSLv2:!SSLv3:!TLSv1:!TLSv1.1\">\n Seahub extracts the username from the REMOTE_USER environment variable. So you should modify your SP's shibboleth2.xml config file, so that Shibboleth translates your desired attribute into REMOTE_USER environment variable.
In Seafile, only one of the following two attributes can be used for username: eppn, and mail. eppn stands for \"Edu Person Principal Name\". It is usually the UserPrincipalName attribute in Active Directory. It's not necessarily a valid email address. mail is the user's email address. You should set REMOTE_USER to either one of these attributes.
SSO element","text":"Change entityID property:
<!--\nConfigures SSO for a default IdP. To properly allow for >1 IdP, remove\nentityID property and adjust discoveryURL to point to discovery service.\nYou can also override entityID on /Login query string, or in RequestMap/htaccess.\n-->\n<SSO entityID=\"https://your-IdP-domain\">\n <!--discoveryProtocol=\"SAMLDS\" discoveryURL=\"https://wayf.ukfederation.org.uk/DS\"-->\n SAML2\n</SSO>\n"},{"location":"config/shibboleth_authentication/#metadataprovider-element","title":"MetadataProvider element","text":"Change url and backingFilePath property:
<!-- Example of remotely supplied batch of signed metadata. -->\n<MetadataProvider type=\"XML\" validate=\"true\"\n url=\"http://your-IdP-metadata-url\"\n backingFilePath=\"your-IdP-metadata.xml\" maxRefreshDelay=\"7200\">\n <MetadataFilter type=\"RequireValidUntil\" maxValidityInterval=\"2419200\"/>\n <MetadataFilter type=\"Signature\" certificate=\"fedsigner.pem\" verifyBackup=\"false\"/>\n"},{"location":"config/shibboleth_authentication/#attribute-mapxml","title":"attribute-map.xml","text":"Open /etc/shibboleth/attribute-map.xml and change some properties. After you have made all the following changes, don't forget to restart the Shibboleth SP.
Attribute element","text":"Uncomment attribute elements for getting more user info:
<!-- Older LDAP-defined attributes (SAML 2.0 names followed by SAML 1 names)... -->\n<Attribute name=\"urn:oid:2.16.840.1.113730.3.1.241\" id=\"displayName\"/>\n<Attribute name=\"urn:oid:0.9.2342.19200300.100.1.3\" id=\"mail\"/>\n\n<Attribute name=\"urn:mace:dir:attribute-def:displayName\" id=\"displayName\"/>\n<Attribute name=\"urn:mace:dir:attribute-def:mail\" id=\"mail\"/>\n"},{"location":"config/shibboleth_authentication/#upload-shibbolethsps-metadata","title":"Upload Shibboleth(SP)'s metadata","text":"After restarting Apache, you should be able to get the Service Provider metadata by accessing https://your-seafile-domain/Shibboleth.sso/Metadata. This metadata should be uploaded to the Identity Provider (IdP) server.
"},{"location":"config/shibboleth_authentication/#configure-seahub","title":"Configure Seahub","text":"Add the following configuration to seahub_settings.py.
ENABLE_SHIB_LOGIN = True\nSHIBBOLETH_USER_HEADER = 'HTTP_REMOTE_USER'\n# basic user attributes\nSHIBBOLETH_ATTRIBUTE_MAP = {\n \"HTTP_DISPLAYNAME\": (False, \"display_name\"),\n \"HTTP_MAIL\": (False, \"contact_email\"),\n}\nEXTRA_MIDDLEWARE = (\n 'shibboleth.middleware.ShibbolethRemoteUserMiddleware',\n)\nEXTRA_AUTHENTICATION_BACKENDS = (\n 'shibboleth.backends.ShibbolethRemoteUserBackend',\n)\n Seahub can process additional user attributes from Shibboleth. These attributes are saved into Seahub's database, as user's properties. They're all not mandatory. The internal user properties Seahub now supports are:
You can specify the mapping between Shibboleth attributes and Seahub's user properties in seahub_settings.py:
SHIBBOLETH_ATTRIBUTE_MAP = {\n \"HTTP_GIVENNAME\": (False, \"givenname\"),\n \"HTTP_SN\": (False, \"surname\"),\n \"HTTP_MAIL\": (False, \"contact_email\"),\n \"HTTP_ORGANIZATION\": (False, \"institution\"),\n}\n In the above config, the key is the Shibboleth attribute name, and the second element of the value is Seahub's property name. You can adjust the Shibboleth attribute names for your own needs.
You may have to change attribute-map.xml in your Shibboleth SP so that the desired attributes are passed to Seahub, and you have to make sure the IdP sends these attributes to the SP.
We also added an option SHIB_ACTIVATE_AFTER_CREATION (defaults to True) which controls the user status after the Shibboleth connection. If this option is set to False, the user will be inactive after connection, and system admins will be notified by email to activate that account.
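For instance, to require admin activation for accounts created via Shibboleth, the option mentioned above can be set in seahub_settings.py like this (a minimal illustration):

```python
# seahub_settings.py
# Make accounts created via Shibboleth inactive until an admin activates them;
# the default (True) activates them immediately.
SHIB_ACTIVATE_AFTER_CREATION = False
```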
Shibboleth has a field called affiliation. It is a list like: employee@uni-mainz.de;member@uni-mainz.de;faculty@uni-mainz.de;staff@uni-mainz.de.
We are able to set the user role from Shibboleth. For details about user roles, please refer to Roles and Permissions
To enable this, modify SHIBBOLETH_ATTRIBUTE_MAP above and add a Shibboleth-affiliation field; you may need to change Shibboleth-affiliation according to your Shibboleth SP attributes.
SHIBBOLETH_ATTRIBUTE_MAP = {\n \"HTTP_GIVENNAME\": (False, \"givenname\"),\n \"HTTP_SN\": (False, \"surname\"),\n \"HTTP_MAIL\": (False, \"contact_email\"),\n \"HTTP_ORGANIZATION\": (False, \"institution\"),\n \"HTTP_SHIBBOLETH_AFFILIATION\": (False, \"affiliation\"),\n}\n Then add new config to define affiliation role map,
SHIBBOLETH_AFFILIATION_ROLE_MAP = {\n 'employee@uni-mainz.de': 'staff',\n 'member@uni-mainz.de': 'staff',\n 'student@uni-mainz.de': 'student',\n 'employee@hu-berlin.de': 'guest',\n 'patterns': (\n ('*@hu-berlin.de', 'guest1'),\n ('*@*.de', 'guest2'),\n ('*', 'guest'),\n ),\n}\n After Shibboleth login, Seafile calculates the user's role from the affiliation and SHIBBOLETH_AFFILIATION_ROLE_MAP.
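To illustrate how such a map can resolve a role, here is a sketch in Python. The resolution order shown (exact entries first, then the wildcard 'patterns', first match wins) is an assumption for illustration; Seafile's actual matching logic may differ.

```python
from fnmatch import fnmatch

# The map from the documentation above.
SHIBBOLETH_AFFILIATION_ROLE_MAP = {
    'employee@uni-mainz.de': 'staff',
    'member@uni-mainz.de': 'staff',
    'student@uni-mainz.de': 'student',
    'employee@hu-berlin.de': 'guest',
    'patterns': (
        ('*@hu-berlin.de', 'guest1'),
        ('*@*.de', 'guest2'),
        ('*', 'guest'),
    ),
}

def role_for_affiliation(affiliation, role_map):
    """Return the role of the first matching entry for a ';'-separated
    affiliation list; exact keys take precedence over wildcard patterns."""
    for item in affiliation.split(';'):
        if item in role_map:
            return role_map[item]            # exact match wins
        for pattern, role in role_map.get('patterns', ()):
            if fnmatch(item, pattern):
                return role                  # first matching wildcard
    return ''
```

For example, `role_for_affiliation('member@uni-mainz.de;faculty@uni-mainz.de', SHIBBOLETH_AFFILIATION_ROLE_MAP)` resolves to 'staff' via the exact entry.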
"},{"location":"config/shibboleth_authentication/#custom-set-user-role","title":"Custom set user role","text":"If you are unable to set user roles by obtaining affiliation information, or if you wish to have a more customized way of setting user roles, you can add the following configuration to achieve this.
For example, set all users whose email addresses end with @seafile.com as default, and set other users as guest.
First, update the SHIBBOLETH_ATTRIBUTE_MAP configuration in seahub_settings.py, and add HTTP_REMOTE_USER.
SHIBBOLETH_ATTRIBUTE_MAP = {\n ....\n \"HTTP_REMOTE_USER\": (False, \"remote_user\"),\n ....\n}\n Then, create /opt/seafile/conf/seahub_custom_functions/__init__.py file and add the following code.
# function name `custom_shibboleth_get_user_role` should NOT be changed\ndef custom_shibboleth_get_user_role(shib_meta):\n\n remote_user = shib_meta.get('remote_user', '')\n if not remote_user:\n return ''\n\n remote_user = remote_user.lower()\n if remote_user.endswith('@seafile.com'):\n return 'default'\n else:\n return 'guest'\n"},{"location":"config/shibboleth_authentication/#verify","title":"Verify","text":"After restarting Apache and Seahub service (./seahub.sh restart), you can then test the shibboleth login workflow.
If you encounter problems when logging in, follow these steps to get debug info (for Seafile Pro 6.3.13).
"},{"location":"config/shibboleth_authentication/#add-this-setting-to-seahub_settingspy","title":"Add this setting to seahub_settings.py","text":"DEBUG = True\n"},{"location":"config/shibboleth_authentication/#change-seafiles-code","title":"Change Seafile's code","text":"Open seafile-server-latest/seahub/thirdpart/shibboleth/middleware.py
Insert the following code in line 59
assert False\n Insert the following code in line 65
if not username:\n assert False\n The complete code after these changes is as follows:
#Locate the remote user header.\n# import pprint; pprint.pprint(request.META)\ntry:\n username = request.META[SHIB_USER_HEADER]\nexcept KeyError:\n assert False\n # If specified header doesn't exist then return (leaving\n # request.user set to AnonymousUser by the\n # AuthenticationMiddleware).\n return\n\nif not username:\n assert False\n\np_id = ccnet_api.get_primary_id(username)\nif p_id is not None:\n username = p_id\n Then restart Seafile and relogin, you will see debug info in web page.
"},{"location":"config/single_sign_on/","title":"Single Sign On support in Seafile","text":"Seafile supports most of the popular single-sign-on authentication protocols. Some are included in Community Edition, some are only in Pro Edition.
In the Community Edition:
Kerberos authentication can be integrated by using Apache as a proxy server and follow the instructions in Remote User Authentication and Auto Login SeaDrive on Windows.
In Pro Edition:
Build Seafile
Seafile Open API
Seafile Implement Details
You can build Seafile from our source code package or from the Github repo directly.
Client
Server
Seafile internally uses a data model similar to Git's. It consists of Repo, Commit, FS, and Block.
Seafile's high performance comes from its architectural design: it stores file metadata in object storage (or the file system), while storing only a small amount of metadata about the libraries in a relational database. An overview of the architecture is depicted below. We'll describe the data model in more detail.
"},{"location":"develop/data_model/#repo","title":"Repo","text":"A repo is also called a library. Every repo has a unique ID (UUID), and attributes like description, creator, and password.
The metadata for a repo is stored in the seafile_db database and in the commit objects (see the description in a later section).
There are a few tables in the seafile_db database containing important information about each repo.
Repo: contains the ID of each repo. RepoOwner: contains the owner ID of each repo. RepoInfo: a \"cache\" table for fast access to repo metadata stored in the commit object; it includes repo name, update time, and last modifier. RepoSize: the total size of all files in the repo. RepoFileCount: the file count in the repo. RepoHead: contains the \"head commit ID\". This ID points to the head commit in the storage, which will be described in the next section. Commit objects save the change history of a repo. Each update from the web interface or sync upload operation creates a new commit object. A commit object contains the following information: commit ID, library name, creator of this commit (a.k.a. the modifier), creation time of this commit (a.k.a. modification time), root fs object ID, and parent commit ID.
The root fs object ID points to the root FS object, from which we can traverse a file system snapshot for the repo.
The parent commit ID points to the last commit previous to the current commit. The RepoHead table contains the latest head commit ID for each repo. From this head commit, we can traverse the repo history.
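The history traversal described above can be sketched in Python. This is an illustration only, with a plain dict standing in for the commit storage (seafile-data/storage/commits/<repo_id> or the commits bucket); the field names used here are hypothetical, not Seafile's actual serialization.

```python
# Hypothetical in-memory commit store: commit ID -> commit object.
commit_store = {
    'c3': {'desc': 'edit report.md', 'root_fs_id': 'f9', 'parent_id': 'c2'},
    'c2': {'desc': 'add report.md',  'root_fs_id': 'f5', 'parent_id': 'c1'},
    'c1': {'desc': 'repo created',   'root_fs_id': 'f1', 'parent_id': None},
}

def walk_history(head_commit_id, store):
    """Yield (commit_id, commit) pairs from the head commit back to the
    initial commit by following parent_id links."""
    commit_id = head_commit_id
    while commit_id is not None:
        commit = store[commit_id]
        yield commit_id, commit
        commit_id = commit['parent_id']

# 'c3' plays the role of the head commit ID stored in the RepoHead table.
history = [cid for cid, _ in walk_history('c3', commit_store)]
print(history)  # → ['c3', 'c2', 'c1']
```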
If you use file system as storage backend, commit objects are stored in the path seafile-data/storage/commits/<repo_id>. If you use object storage, commit objects are stored in the commits bucket.
There are two types of FS objects, SeafDir Object and Seafile Object. SeafDir Object represents a directory, and Seafile Object represents a file.
The SeafDir object contains metadata for each file/sub-folder, which includes name, last modification time, last modifier, size, and object ID. The object ID points to another SeafDir or Seafile object. The Seafile object contains a block list, which is a list of block IDs for the file.
The FS object IDs are calculated based on the contents of the object. That means if a folder or a file is not changed, the same objects will be reused across multiple commits. This allows us to create snapshots very efficiently.
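Content-based IDs can be illustrated with a short sketch. Seafile serializes fs objects in its own format; SHA-1 over a canonical JSON encoding is an assumption used here purely to show the idea that equal content yields an equal ID.

```python
import hashlib
import json

def object_id(obj):
    """Derive an ID from the object's content: same content, same ID,
    so unchanged objects can be reused across commits."""
    canonical = json.dumps(obj, sort_keys=True).encode('utf-8')
    return hashlib.sha1(canonical).hexdigest()

# Hypothetical fs objects; 'block_ids' mimics a Seafile object's block list.
file_v1 = {'type': 'seafile', 'block_ids': ['b1', 'b2']}
file_v2 = {'type': 'seafile', 'block_ids': ['b1', 'b3']}

assert object_id(file_v1) == object_id(dict(file_v1))  # unchanged: reused
assert object_id(file_v1) != object_id(file_v2)        # changed: new object
```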
If you use file system as storage backend, fs objects are stored in the path seafile-data/storage/fs/<repo_id>. If you use object storage, fs objects are stored in the fs bucket.
A file is further divided into blocks of variable lengths. We use a Content Defined Chunking algorithm to divide files into blocks. A clear overview of this algorithm can be found at http://pdos.csail.mit.edu/papers/lbfs:sosp01/lbfs.pdf. On average, a block's size is around 8MB.
This mechanism makes it possible to deduplicate data between different versions of frequently updated files, improving storage efficiency. It also enables transferring data to/from multiple servers in parallel.
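The key property of content-defined chunking is that cut points depend on the data itself, so a local edit does not shift every later block boundary. The following is a toy illustration only; Seafile's real algorithm and its ~8MB average block size differ.

```python
def cdc_chunks(data, mask=0x1F):
    """Toy content-defined chunker: cut a block whenever a simple
    byte-driven fingerprint matches the mask."""
    blocks, start, fp = [], 0, 0
    for i, byte in enumerate(data):
        fp = ((fp << 1) ^ byte) & 0xFFFF
        if fp & mask == mask:              # content-defined cut point
            blocks.append(data[start:i + 1])
            start, fp = i + 1, 0
    if start < len(data):
        blocks.append(data[start:])        # trailing partial block
    return blocks

data = bytes(range(256)) * 4
assert b''.join(cdc_chunks(data)) == data  # chunking is lossless
assert cdc_chunks(data) == cdc_chunks(data)  # and deterministic
```

Because identical content produces identical cut points and blocks are stored by content, unchanged regions of a large file deduplicate across versions.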
If you use file system as storage backend, block objects are stored in the path seafile-data/storage/blocks/<repo_id>. If you use object storage, block objects are stored in the blocks bucket.
A \"virtual repo\" is a special repo that will be created in the cases below:
A virtual repo can be understood as a view of part of the data in its parent library. For example, when sharing a folder, the virtual repo only provides access to the shared folder in that library. Virtual repos use the same underlying data as the parent library, so they use the same fs and blocks storage locations as their parent.
A virtual repo has its own change history, so it has a separate commits storage location from its parent. Changes in a virtual repo and its parent repo are merged bidirectionally, so that changes on each side can be seen by the other.
There is a VirtualRepo table in the seafile_db database. It contains, for each virtual repo, the folder path in the parent repo.
The following list is what you need to install on your development machine. You should install all of them before you build Seafile.
Package names are according to Ubuntu 24.04. For other Linux distros, please find their corresponding names yourself.
sudo apt-get install build-essential autotools-dev libtool libevent-dev libcurl4-openssl-dev libgtk2.0-dev uuid-dev intltool libsqlite3-dev valac git libjansson-dev cmake libwebsockets-dev qtchooser qtbase5-dev libqt5webkit5-dev qttools5-dev qttools5-dev-tools libssl-dev libargon2-dev libglib2.0-dev qtwebengine5-dev qtwayland5\n"},{"location":"develop/linux/#building","title":"Building","text":"First you should get the latest source of libsearpc/seafile/seafile-client:
Download the source code of the latest tag from
For example, if the latest released seafile client is 9.0.15, then just use the v9.0.15 tags of the three projects.
git clone --branch v3.2-latest https://github.com/haiwen/libsearpc.git\ngit clone --branch v9.0.15 https://github.com/haiwen/seafile.git\ngit clone --branch v9.0.15 https://github.com/haiwen/seafile-client.git\n To build the Seafile client, you first need to build libsearpc and seafile.
"},{"location":"develop/linux/#set-paths","title":"set paths","text":"export PREFIX=/usr\nexport PKG_CONFIG_PATH=\"$PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH\"\nexport PATH=\"$PREFIX/bin:$PATH\"\n"},{"location":"develop/linux/#libsearpc","title":"libsearpc","text":"cd libsearpc\n./autogen.sh\n./configure --prefix=$PREFIX\nmake\nsudo make install\ncd ..\n"},{"location":"develop/linux/#seafile","title":"seafile","text":"cd seafile\n./autogen.sh\n./configure --prefix=$PREFIX --enable-ws=yes\nmake\nsudo make install\ncd ..\n If you don't need notification server, you can set --enable-ws=no to disable notification server.
cd seafile-client\ncmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=$PREFIX .\nmake\nsudo make install\ncd ..\n"},{"location":"develop/linux/#custom-prefix","title":"custom prefix","text":"when installing to a custom $PREFIX, i.e. /opt, you may need a script to set the path variables correctly
cat >$PREFIX/bin/seafile-applet.sh <<END\n#!/bin/bash\nexport LD_LIBRARY_PATH=\"$PREFIX/lib:\$LD_LIBRARY_PATH\"\nexport PATH=\"$PREFIX/bin:\$PATH\"\nexec seafile-applet \"\$@\"\nEND\ncat >$PREFIX/bin/seaf-cli.sh <<END\n#!/bin/bash\nexport LD_LIBRARY_PATH=\"$PREFIX/lib:\$LD_LIBRARY_PATH\"\nexport PATH=\"$PREFIX/bin:\$PATH\"\nexport PYTHONPATH=$PREFIX/lib/python3.12/site-packages\nexec seaf-cli \"\$@\"\nEND\nchmod +x $PREFIX/bin/seafile-applet.sh $PREFIX/bin/seaf-cli.sh
The following setups are required for building and packaging Sync Client on macOS:
universal_archs arm64 x86_64. Specifies the architecture on which MapPorts is compiled.+universal. MacPorts installs universal versions of all ports.sudo port install autoconf automake pkgconfig libtool glib2 libevent vala openssl git jansson cmake libwebsockets argon2.export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/opt/local/lib/pkgconfig:/usr/local/lib/pkgconfig\nexport PATH=/opt/local/bin:/usr/local/bin:/opt/local/Library/Frameworks/Python.framework/Versions/3.10/bin:$PATH\nexport LDFLAGS=\"-L/opt/local/lib -L/usr/local/lib\"\nexport CFLAGS=\"-I/opt/local/include -I/usr/local/include\"\nexport CPPFLAGS=\"-I/opt/local/include -I/usr/local/include\"\nexport LD_LIBRARY_PATH=/opt/lib:/usr/local/lib:/opt/local/lib/:/usr/local/lib/:$LD_LIBRARY_PATH\n\nQT_BASE=$HOME/Qt/6.2.4/macos\nexport PATH=$QT_BASE/bin:$PATH\nexport PKG_CONFIG_PATH=$QT_BASE/lib/pkgconfig:$PKG_CONFIG_PATH\nexport NOTARIZE_APPLE_ID=\"Your notarize account\"\nexport NOTARIZE_PASSWORD=\"Your notarize password\"\nexport NOTARIZE_TEAM_ID=\"Your notarize team id\"\n Following directory structures are expected when building Sync Client:
seafile-workspace/\nseafile-workspace/libsearpc/\nseafile-workspace/seafile/\nseafile-workspace/seafile-client/\n The source code of these projects can be downloaded at github.com/haiwen/libsearpc, github.com/haiwen/seafile, and github.com/haiwen/seafile-client.
"},{"location":"develop/osx/#building","title":"Building","text":"Note: the building commands have been included in the packaging script, you can skip building commands while packaging.
To build libsearpc:
$ cd seafile-workspace/libsearpc/\n$ ./autogen.sh\n$ ./configure --disable-compile-demo --enable-compile-universal=yes\n$ make\n$ make install\n To build seafile:
$ cd seafile-workspace/seafile/\n$ ./autogen.sh\n$ ./configure --disable-fuse --enable-compile-universal=yes\n$ make\n$ make install\n To build seafile-client:
$ cd seafile-workspace/seafile-client/\n$ cmake -GXcode -B. -S.\n$ xcodebuild -target seafile-applet -configuration Release\n"},{"location":"develop/osx/#packaging","title":"Packaging","text":"python3 build-mac-local-py3.py --brand=\"\" --version=1.0.0 --nostrip --universalFrom Seafile 11.0, you can build Seafile release package with seafile-build script. You can check the README.md file in the same folder for detailed instructions.
The seafile-build.sh script is compatible with more platforms, including Raspberry Pi, arm64, and x86-64.
Old version is below:
Table of contents:
Requirements:
sudo apt-get install build-essential\nsudo apt-get install libevent-dev libcurl4-openssl-dev libglib2.0-dev uuid-dev intltool libsqlite3-dev libmysqlclient-dev libarchive-dev libtool libjansson-dev valac libfuse-dev re2c flex python-setuptools cmake\n"},{"location":"develop/rpi/#compile-development-libraries","title":"Compile development libraries","text":""},{"location":"develop/rpi/#libevhtp","title":"libevhtp","text":"libevhtp is an HTTP server library built on top of libevent. It is used by the Seafile file server.
git clone https://www.github.com/haiwen/libevhtp.git\ncd libevhtp\ncmake -DEVHTP_DISABLE_SSL=ON -DEVHTP_BUILD_SHARED=OFF .\nmake\nsudo make install\n After compiling all the libraries, run ldconfig to update the system libraries cache:
sudo ldconfig\n"},{"location":"develop/rpi/#install-python-libraries","title":"Install python libraries","text":"Create a new directory /home/pi/dev/seahub_thirdpart:
mkdir -p ~/dev/seahub_thirdpart\n Download these tarballs to /tmp/:
Install all these libraries into /home/pi/dev/seahub_thirdpart:
cd ~/dev/seahub_thirdpart\nexport PYTHONPATH=.\npip install -t ~/dev/seahub_thirdpart/ /tmp/pytz-2016.1.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/Django-1.8.10.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django-statici18n-1.1.3.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/djangorestframework-3.3.2.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django_compressor-1.4.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/jsonfield-1.0.3.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django-post_office-2.0.6.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/gunicorn-19.4.5.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/flup-1.0.2.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/chardet-2.3.0.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/python-dateutil-1.5.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/six-1.9.0.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django-picklefield-0.3.2.tar.gz\nwget -O /tmp/django_constance.zip https://github.com/haiwen/django-constance/archive/bde7f7c.zip\npip install -t ~/dev/seahub_thirdpart/ /tmp/django_constance.zip\npip install -t ~/dev/seahub_thirdpart/ /tmp/jdcal-1.2.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/et_xmlfile-1.0.1.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/openpyxl-2.3.0.tar.gz\n"},{"location":"develop/rpi/#prepare-seafile-source-code","title":"Prepare seafile source code","text":"To build seafile server, there are four sub projects involved:
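The pip install -t commands above drop every package into one flat directory. At runtime Seahub can pick them up simply by putting that directory at the front of sys.path — a minimal sketch (the path is the example path used above):

```python
import sys

# The flat third-party directory populated by the `pip install -t` commands.
THIRDPART = "/home/pi/dev/seahub_thirdpart"

# Prepend it so packages installed there shadow any system-wide copies.
if THIRDPART not in sys.path:
    sys.path.insert(0, THIRDPART)
```

Setting `export PYTHONPATH=.` from inside that directory, as shown above, has the same effect.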
The build process has two steps:
build-server.py script to build the server package from the source tarballs.Seafile manages the releases in tags on GitHub.
Assume we are packaging for seafile server 6.0.1, then the tags are:
v6.0.1-server tag.v3.0-latest tag (libsearpc has been quite stable and basically has no further development, so the tag is always v3.0-latest)First set up the PKG_CONFIG_PATH environment variable (so we don't need to make and make install libsearpc/ccnet/seafile into the system):
export PKG_CONFIG_PATH=/home/pi/dev/seafile/lib:$PKG_CONFIG_PATH\nexport PKG_CONFIG_PATH=/home/pi/dev/libsearpc:$PKG_CONFIG_PATH\nexport PKG_CONFIG_PATH=/home/pi/dev/ccnet:$PKG_CONFIG_PATH\n"},{"location":"develop/rpi/#libsearpc","title":"libsearpc","text":"cd ~/dev\ngit clone https://github.com/haiwen/libsearpc.git\ncd libsearpc\ngit reset --hard v3.0-latest\n./autogen.sh\n./configure\nmake dist\n"},{"location":"develop/rpi/#ccnet","title":"ccnet","text":"cd ~/dev\ngit clone https://github.com/haiwen/ccnet-server.git\ncd ccnet\ngit reset --hard v6.0.1-server\n./autogen.sh\n./configure\nmake dist\n"},{"location":"develop/rpi/#seafile","title":"seafile","text":"cd ~/dev\ngit clone https://github.com/haiwen/seafile-server.git\ncd seafile\ngit reset --hard v6.0.1-server\n./autogen.sh\n./configure\nmake dist\n"},{"location":"develop/rpi/#seahub","title":"seahub","text":"cd ~/dev\ngit clone https://github.com/haiwen/seahub.git\ncd seahub\ngit reset --hard v6.0.1-server\n./tools/gen-tarball.py --version=6.0.1 --branch=HEAD\n"},{"location":"develop/rpi/#seafobj","title":"seafobj","text":"cd ~/dev\ngit clone https://github.com/haiwen/seafobj.git\ncd seafobj\ngit reset --hard v6.0.1-server\nmake dist\n"},{"location":"develop/rpi/#seafdav","title":"seafdav","text":"cd ~/dev\ngit clone https://github.com/haiwen/seafdav.git\ncd seafdav\ngit reset --hard v6.0.1-server\nmake\n"},{"location":"develop/rpi/#copy-the-source-tar-balls-to-the-same-folder","title":"Copy the source tar balls to the same folder","text":"mkdir ~/seafile-sources\ncp ~/dev/libsearpc/libsearpc-<version>-tar.gz ~/seafile-sources\ncp ~/dev/ccnet/ccnet-<version>-tar.gz ~/seafile-sources\ncp ~/dev/seafile/seafile-<version>-tar.gz ~/seafile-sources\ncp ~/dev/seahub/seahub-<version>-tar.gz ~/seafile-sources\n\ncp ~/dev/seafobj/seafobj.tar.gz ~/seafile-sources\ncp ~/dev/seafdav/seafdav.tar.gz ~/seafile-sources\n"},{"location":"develop/rpi/#run-the-packaging-script","title":"Run the packaging script","text":"Now we 
have all the tarballs prepared, we can run the build-server.py script to build the server package.
mkdir ~/seafile-server-pkgs\n~/dev/seafile/scripts/build-server.py --libsearpc_version=<libsearpc_version> --ccnet_version=<ccnet_version> --seafile_version=<seafile_version> --seahub_version=<seahub_version> --thirdpartdir=/home/pi/dev/seahub_thirdpart --srcdir=/home/pi/seafile-sources --outputdir=/home/pi/seafile-server-pkgs\n After the script finishes, we will get a seafile-server_6.0.1_pi.tar.gz in the ~/seafile-server-pkgs folder.
The test should cover these steps at least:
seafile.sh start and seahub.sh start, you can log in from a browser.This is the document for deploying the Seafile open-source development environment in an Ubuntu 24.04 Docker container.
"},{"location":"develop/server/#create-persistent-directories","title":"Create persistent directories","text":"Log in to a Linux server as the root user, then:
mkdir -p /root/seafile-ce-docker/source-code\nmkdir -p /root/seafile-ce-docker/conf\nmkdir -p /root/seafile-ce-docker/logs\nmkdir -p /root/seafile-ce-docker/mysql-data\nmkdir -p /root/seafile-ce-docker/seafile-data/library-template\n"},{"location":"develop/server/#run-a-container","title":"Run a container","text":"After installing Docker, start a container to deploy the Seafile open-source development environment.
docker run --mount type=bind,source=/root/seafile-ce-docker/source-code,target=/root/dev/source-code \\\n --mount type=bind,source=/root/seafile-ce-docker/conf,target=/root/dev/conf \\\n --mount type=bind,source=/root/seafile-ce-docker/logs,target=/root/dev/logs \\\n --mount type=bind,source=/root/seafile-ce-docker/seafile-data,target=/root/dev/seafile-data \\\n --mount type=bind,source=/root/seafile-ce-docker/mysql-data,target=/var/lib/mysql \\\n -it -p 8000:8000 -p 8082:8082 -p 3000:3000 --name seafile-ce-env ubuntu:24.04 bash\n Note, the following commands are all executed in the seafile-ce-env docker container.
"},{"location":"develop/server/#update-source-and-install-dependencies","title":"Update Source and Install Dependencies.","text":"Update base system and install base dependencies:
apt-get update && apt-get upgrade -y\n\napt-get install -y ssh libevent-dev libcurl4-openssl-dev libglib2.0-dev uuid-dev intltool libsqlite3-dev libmysqlclient-dev libarchive-dev libtool libjansson-dev valac libfuse-dev python3-dateutil cmake re2c flex sqlite3 python3-pip python3-simplejson git libssl-dev libldap2-dev libonig-dev vim vim-scripts wget cmake gcc autoconf automake mysql-client librados-dev libxml2-dev curl sudo telnet netcat unzip netbase ca-certificates apt-transport-https build-essential libxslt1-dev libffi-dev libpcre3-dev libz-dev xz-utils nginx pkg-config poppler-utils libmemcached-dev sudo ldap-utils libldap2-dev libjwt-dev libunwind-dev libhiredis-dev google-perftools libgoogle-perftools-dev\n Install Node 20 from nodesource:
curl -sL https://deb.nodesource.com/setup_20.x | sudo -E bash -\napt-get install -y nodejs\n Install other Python 3 dependencies:
apt-get install -y python3 python3-dev python3-pip python3-setuptools python3-ldap\n\npython3 -m pip install --upgrade pip\n\npip3 install pytz jinja2 Django==5.2.* django-statici18n==2.3.* django_webpack_loader==1.7.* django_picklefield==3.1 django_formtools==2.4 django_simple_captcha==0.6.* djangosaml2==1.11.* djangorestframework==3.14.* python-dateutil==2.8.* pyjwt==2.10.* pycryptodome==3.23.* python-cas==1.6.* pysaml2==7.5.* requests==2.28.* requests_oauthlib==1.3.* future==1.0.* gunicorn==20.1.* mysqlclient==2.2.* qrcode==7.3.* pillow==11.3.* pillow-heif==1.0.* chardet==5.1.* cffi==1.17.1 captcha==0.7.* openpyxl==3.0.* Markdown==3.4.* bleach==5.0.* python-ldap==3.4.* sqlalchemy==2.0.* redis mock pytest pymysql==1.1.* configparser pylibmc django-pylibmc nose exam splinter pytest-django psd-tools lxml\n"},{"location":"develop/server/#install-mariadb-and-create-databases","title":"Install MariaDB and Create Databases","text":"apt-get install -y mariadb-server\nservice mariadb start\nmysqladmin -u root password your_password\n sql for create databases
mysql -uroot -pyour_password -e \"CREATE DATABASE ccnet CHARACTER SET utf8;\"\nmysql -uroot -pyour_password -e \"CREATE DATABASE seafile CHARACTER SET utf8;\"\nmysql -uroot -pyour_password -e \"CREATE DATABASE seahub CHARACTER SET utf8;\"\n"},{"location":"develop/server/#download-source-code","title":"Download Source Code","text":"cd ~/\ncd ~/dev/source-code\n\ngit clone https://github.com/haiwen/libevhtp.git\ngit clone https://github.com/haiwen/libsearpc.git\ngit clone https://github.com/haiwen/seafile-server.git\ngit clone https://github.com/haiwen/seafevents.git\ngit clone https://github.com/haiwen/seafobj.git\ngit clone https://github.com/haiwen/seahub.git\n\ncd libevhtp/\ngit checkout tags/1.1.7 -b tag-1.1.7\n\ncd ../libsearpc/\ngit checkout tags/v3.3-latest -b tag-v3.3-latest\n\ncd ../seafile-server\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n\ncd ../seafevents\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n\ncd ../seafobj\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n\ncd ../seahub\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n"},{"location":"develop/server/#compile-and-install-seaf-server","title":"Compile and Install seaf-server","text":"cd ../libevhtp\ncmake -DEVHTP_DISABLE_SSL=ON -DEVHTP_BUILD_SHARED=OFF .\nmake\nmake install\nldconfig\n\ncd ../libsearpc\n./autogen.sh\n./configure\nmake\nmake install\nldconfig\n\ncd ../seafile-server\n./autogen.sh\n./configure --disable-fuse\nmake\nmake install\nldconfig\n"},{"location":"develop/server/#create-conf-files","title":"Create Conf Files","text":"cd ~/dev/conf\n\ncat > ccnet.conf <<EOF\n[Database]\nENGINE = mysql\nHOST = localhost\nPORT = 3306\nUSER = root\nPASSWD = 123456\nDB = ccnet\nCONNECTION_CHARSET = utf8\nCREATE_TABLES = true\nEOF\n\ncat > seafile.conf <<EOF\n[database]\ntype = mysql\nhost = localhost\nport = 3306\nuser = root\npassword = 123456\ndb_name = seafile\nconnection_charset = utf8\ncreate_tables = true\nEOF\n\ncat > seafevents.conf 
<<EOF\n[DATABASE]\ntype = mysql\nusername = root\npassword = 123456\nname = seahub\nhost = localhost\nEOF\n\ncat > seahub_settings.py <<EOF\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.mysql',\n 'NAME': 'seahub',\n 'USER': 'root',\n 'PASSWORD': '123456',\n 'HOST': 'localhost',\n 'PORT': '3306',\n }\n}\nFILE_SERVER_ROOT = 'http://127.0.0.1:8082'\nSERVICE_URL = 'http://127.0.0.1:8000'\nEOF\n"},{"location":"develop/server/#start-seaf-server","title":"Start seaf-server","text":"seaf-server -F /root/dev/conf -d /root/dev/seafile-data -l /root/dev/logs/seafile.log >> /root/dev/logs/seafile.log 2>&1 &\n"},{"location":"develop/server/#start-seafevents-and-seahub","title":"Start seafevents and seahub","text":""},{"location":"develop/server/#prepare-environment-variables","title":"Prepare environment variables","text":"export CCNET_CONF_DIR=/root/dev/conf\nexport SEAFILE_CONF_DIR=/root/dev/seafile-data\nexport SEAFILE_CENTRAL_CONF_DIR=/root/dev/conf\nexport SEAHUB_DIR=/root/dev/source-code/seahub\nexport SEAHUB_LOG_DIR=/root/dev/logs\nexport PYTHONPATH=/usr/local/lib/python3.10/dist-packages/:/usr/local/lib/python3.10/site-packages/:/root/dev/source-code/:/root/dev/source-code/seafobj/:/root/dev/source-code/seahub/thirdpart:$PYTHONPATH\n"},{"location":"develop/server/#start-seafevents","title":"Start seafevents","text":"cd /root/dev/source-code/seafevents/\npython3 main.py --loglevel=debug --logfile=/root/dev/logs/seafevents.log --config-file /root/dev/conf/seafevents.conf >> /root/dev/logs/seafevents.log 2>&1 &\n"},{"location":"develop/server/#start-seahub","title":"Start seahub","text":""},{"location":"develop/server/#create-seahub-database-tables","title":"Create seahub database tables","text":"cd /root/dev/source-code/seahub/\npython3 manage.py migrate\n"},{"location":"develop/server/#create-user","title":"Create user","text":"python3 manage.py createsuperuser\n"},{"location":"develop/server/#start-seahub_1","title":"Start seahub","text":"python3 
manage.py runserver 0.0.0.0:8000\n Then, you can visit http://127.0.0.1:8000/ to use Seafile.
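A quick way to confirm that the conf files written earlier parse cleanly is Python's configparser. This sketch checks the [database] section of the seafile.conf generated above; in a real check you would read the file from /root/dev/conf instead of an inline string:

```python
import configparser

# The [database] section exactly as written by the heredoc above.
SEAFILE_CONF = """\
[database]
type = mysql
host = localhost
port = 3306
user = root
password = 123456
db_name = seafile
connection_charset = utf8
create_tables = true
"""

cfg = configparser.ConfigParser()
cfg.read_string(SEAFILE_CONF)

# For the on-disk file, use: cfg.read('/root/dev/conf/seafile.conf')
assert cfg.get("database", "type") == "mysql"
assert cfg.getint("database", "port") == 3306
assert cfg.getboolean("database", "create_tables") is True
```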
"},{"location":"develop/server/#the-final-directory-structure","title":"The Final Directory Structure","text":""},{"location":"develop/server/#more","title":"More","text":""},{"location":"develop/server/#deploy-frontend-development-environment","title":"Deploy Frontend Development Environment","text":"To deploy the frontend development environment, you need to:
1, check out seahub to the master branch
cd /root/dev/source-code/seahub\n\ngit fetch origin master:master\ngit checkout master\n 2, add the following configuration to /root/dev/conf/seahub_settings.py
import os\nPROJECT_ROOT = '/root/dev/source-code/seahub'\nWEBPACK_LOADER = {\n 'DEFAULT': {\n 'BUNDLE_DIR_NAME': 'frontend/',\n 'STATS_FILE': os.path.join(PROJECT_ROOT,\n 'frontend/webpack-stats.dev.json'),\n }\n}\nDEBUG = True\n 3, install js modules
cd /root/dev/source-code/seahub/frontend\n\nnpm install\n 4, npm run dev
cd /root/dev/source-code/seahub/frontend\n\nnpm run dev\n 5, start seaf-server and seahub
"},{"location":"develop/translation/","title":"Translation","text":""},{"location":"develop/translation/#seahub-seafile-server-71-and-above","title":"Seahub (Seafile Server 7.1 and above)","text":""},{"location":"develop/translation/#translate-and-try-locally","title":"Translate and try locally","text":"1. Locate the translation files in the seafile-server-latest/seahub directory:
/locale/<lang-code>/LC_MESSAGES/django.po and /locale/<lang-code>/LC_MESSAGES/djangojs.po/media/locales/<lang-code>/seafile-editor.jsonFor example, if you want to improve the Russian translation, find the corresponding strings to be edited in one of the following three files:
/seafile-server-latest/seahub/locale/ru/LC_MESSAGES/django.po/seafile-server-latest/seahub/locale/ru/LC_MESSAGES/djangojs.po/seafile-server-latest/seahub/media/locales/ru/seafile-editor.jsonIf there is no translation for your language, create a new folder matching your language code and copy-paste the contents of another language folder into your newly created one. (Don't copy from the 'en' folder because the files therein do not contain the strings to be translated.)
2. Edit the files using a UTF-8 editor.
3. Save your changes.
4. (Only necessary when you created a new language code folder) Add a new entry for your language to the language block in the /seafile-server-latest/seahub/seahub/settings.py file and save it.
LANGUAGES = (\n ...\n ('ru', '\u0420\u0443\u0441\u0441\u043a\u0438\u0439'),\n ...\n)\n 5. (Only necessary when you edited either django.po or djangojs.po) Apply the changes made in django.po and djangojs.po by running the following two commands in /seafile-server-latest/seahub/locale/<lang-code>/LC_MESSAGES:
msgfmt -o django.mo django.po and msgfmt -o djangojs.mo djangojs.po. Note: msgfmt is included in the gettext package.
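Instead of compiling one language folder at a time, all catalogs can be compiled in a single pass. A small helper, assuming only that msgfmt from gettext is on the PATH:

```python
import subprocess
from pathlib import Path

def compile_mo(locale_dir):
    """Compile every .po catalog under locale_dir into the .mo files Django loads."""
    for po in Path(locale_dir).rglob("*.po"):
        # e.g. locale/ru/LC_MESSAGES/django.po -> locale/ru/LC_MESSAGES/django.mo
        subprocess.run(["msgfmt", "-o", str(po.with_suffix(".mo")), str(po)],
                       check=True)
```

For example, compile_mo("/opt/seafile/seafile-server-latest/seahub/locale") would rebuild every catalog under the Seahub locale tree.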
Additionally, run the following two commands in the seafile-server-latest directory:
./seahub.sh python-env python3 seahub/manage.py compilejsi18n -l <lang-code>./seahub.sh python-env python3 seahub/manage.py collectstatic --noinput -i admin -i termsandconditions --no-post-process6. Restart Seahub to load changes made in django.po and djangojs.po; reload the Markdown editor to check your modifications in the seafile-editor.json file.
"},{"location":"develop/translation/#submit-your-translation","title":"Submit your translation","text":"Please submit translations via Transifex: https://www.transifex.com/projects/p/seahub/
Steps:
A FileNotFoundError occurred when executing the command manage.py collectstatic.
FileNotFoundError: [Errno 2] No such file or directory: '/opt/seafile/seafile-server-latest/seahub/frontend/build'\n Steps:
Modify STATICFILES_DIRS in /opt/seafile/seafile-server-latest/seahub/seahub/settings.py manually
STATICFILES_DIRS = (\n # Put strings here, like \"/home/html/static\" or \"C:/www/django/static\".\n # Always use forward slashes, even on Windows.\n # Don't forget to use absolute paths, not relative paths.\n '%s/static' % PROJECT_ROOT,\n# '%s/frontend/build' % PROJECT_ROOT,\n)\n Execute the command
./seahub.sh python-env python3 seahub/manage.py collectstatic --noinput -i admin -i termsandconditions --no-post-process\n Restore STATICFILES_DIRS manually
STATICFILES_DIRS = (\n # Put strings here, like \"/home/html/static\" or \"C:/www/django/static\".\n # Always use forward slashes, even on Windows.\n # Don't forget to use absolute paths, not relative paths.\n '%s/static' % PROJECT_ROOT,\n '%s/frontend/build' % PROJECT_ROOT,\n)\n
Restart Seahub
./seahub.sh restart\n This issue has been fixed since version 11.0
"},{"location":"develop/web_api_v2.1/","title":"Web API","text":""},{"location":"develop/web_api_v2.1/#seafile-web-api","title":"Seafile Web API","text":"The API document can be accessed in the following location:
The Admin API document can be accessed in the following location:
The following setups are required for building and packaging Sync Client on Windows:
Run vcpkg.exe integrate install to integrate vcpkg with projects.The following directory structure is expected when building Sync Client:
seafile-workspace/\nseafile-workspace/libsearpc/\nseafile-workspace/seafile/\nseafile-workspace/seafile-client/\nseafile-workspace/seafile-shell-ext/\n The source code of these projects can be downloaded at github.com/haiwen/libsearpc, github.com/haiwen/seafile, github.com/haiwen/seafile-client, and github.com/haiwen/seafile-shell-ext.
"},{"location":"develop/windows/#building","title":"Building","text":"Note: these commands are run in \"x64 Native Tools Command Prompt for VS 2019\". The \"Debug|x64\" configuration is a simplified build that does not include breakpad and other dependencies.
To build libsearpc:
$ cd seafile-workspace/libsearpc/\n$ devenv libsearpc.sln /build \"Debug|x64\"\n To build seafile:
$ cd seafile-workspace/seafile/\n$ devenv seafile.sln /build \"Debug|x64\"\n$ devenv msi/custom/seafile_custom.sln /build \"Debug|x64\"\n To build seafile-client:
$ cd seafile-workspace/seafile-client/\n$ devenv third_party/quazip/quazip.sln /build \"Debug|x64\"\n$ devenv seafile-client.sln /build \"Debug|x64\"\n To build seafile-shell-ext:
$ cd seafile-workspace/seafile-shell-ext/\n$ devenv extensions/seafile_ext.sln /build \"Debug|x64\"\n$ devenv seadrive-thumbnail-ext/seadrive_thumbnail_ext.sln /build \"Debug|x64\"\n"},{"location":"develop/windows/#packaging","title":"Packaging","text":"Additional setups are required for packaging:
Certificates
Update the CERTFILE setting in seafile-workspace/seafile/scripts/build/build-msi-vs.py.
$ cd seafile-workspace/seafile-client/third_party/quazip\n$ devenv quazip.sln /build \"Release|x64\"\n$ cd seafile-workspace/seafile/scripts/build\n$ python build-msi-vs.py 1.0.0\n If you use a cluster to deploy Seafile, you can use distributed indexing to achieve real-time indexing and improve indexing efficiency. The indexing process is as follows:
"},{"location":"extension/distributed_indexing/#install-redis-and-modify-configuration-files","title":"Install redis and modify configuration files","text":""},{"location":"extension/distributed_indexing/#1-install-redis-on-all-frontend-nodes","title":"1. Install redis on all frontend nodes","text":"Tip
If you use a Redis cloud service, skip this step and modify the configuration files directly.
Ubuntu: $ apt install redis-server\nCentOS: $ yum install redis\n"},{"location":"extension/distributed_indexing/#2-install-python-redis-third-party-package-on-all-frontend-nodes","title":"2. Install python redis third-party package on all frontend nodes","text":"$ pip install redis\n"},{"location":"extension/distributed_indexing/#3-modify-the-seafeventsconf-on-all-frontend-nodes","title":"3. Modify the seafevents.conf on all frontend nodes","text":"Add the following config items:
[EVENTS PUBLISH]\nmq_type=redis # must be redis\nenabled=true\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password; if there is no password, do not set this item\n"},{"location":"extension/distributed_indexing/#4-modify-the-seafeventsconf-on-the-backend-node","title":"4. Modify the seafevents.conf on the backend node","text":"Disable the scheduled indexing task, because it conflicts with the distributed indexing task.
[INDEX FILES]\nenabled=true\n |\n V\nenabled=false \n"},{"location":"extension/distributed_indexing/#5-restart-seafile","title":"5. Restart Seafile","text":"Deploy in Docker: docker exec -it seafile bash\ncd /scripts\n./seafile.sh restart && ./seahub.sh restart\n Deploy from binary packages: cd /opt/seafile/seafile-server-latest\n./seafile.sh restart && ./seahub.sh restart\n"},{"location":"extension/distributed_indexing/#deploy-distributed-indexing","title":"Deploy distributed indexing","text":"First, prepare an index-server master node and several index-server slave nodes; the number of slave nodes depends on your needs. Copy the seafile.conf and the seafevents.conf in the conf directory from the Seafile frontend nodes to /opt/seafile-data/seafile/conf on the index-server nodes. The master node and slave nodes need to read the configuration files to obtain the necessary information.
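The division of labor described here — frontend nodes publish library-change events to Redis, and index-server workers consume them to update the search index — can be sketched as follows. The channel name and payload format are illustrative placeholders, not Seafile's internal protocol:

```python
import json

# Hypothetical channel name for illustration; seafevents uses its own
# internal naming and message format.
CHANNEL = "index-updates"

def publish_update(client, repo_id, commit_id):
    """Frontend side: announce that a library changed.

    `client` is anything with a Redis-style publish(), e.g. redis.Redis().
    """
    payload = json.dumps({"repo_id": repo_id, "commit_id": commit_id})
    client.publish(CHANNEL, payload)
    return payload

def handle_message(raw, index):
    """Worker side: re-index the library named in the event."""
    event = json.loads(raw)
    index.update_repo(event["repo_id"], event["commit_id"])
```

Because workers only see events, adding more slave nodes (or raising index_workers) increases indexing throughput without touching the frontends.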
mkdir -p /opt/seafile-data/seafile/conf\nmkdir -p /opt/seafile\n Then download .env and index-server.yml to /opt/seafile in all index-server nodes.
cd /opt/seafile\nwget https://manual.seafile.com/12.0/repo/docker/index-server/index-server.yml\nwget -O .env https://manual.seafile.com/12.0/repo/docker/index-server/env\n Modify mysql configurations in .env.
SEAFILE_MYSQL_DB_HOST=127.0.0.1\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=PASSWORD\n\nCLUSTER_MODE=master\n Note
CLUSTER_MODE needs to be configured as master on the master node, and needs to be configured as worker on the slave nodes.
Next, create a configuration file index-master.conf in the conf directory of the master node, e.g.
[DEFAULT]\nmq_type=redis # must be redis\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password; if there is no password, do not set this item\n Start the master node:
docker compose up -d\n Next, create a configuration file index-worker.conf in the conf directory of all slave nodes, e.g.
[DEFAULT]\nmq_type=redis # must be redis\nindex_workers=2 # number of threads to create/update indexes, you can increase this value according to your needs\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password; if there is no password, do not set this item\n Start all slave nodes:
docker compose up -d\n"},{"location":"extension/distributed_indexing/#some-commands-in-distributed-indexing","title":"Some commands in distributed indexing","text":"Rebuild search index, first execute the command in the Seafile node:
cd /opt/seafile/seafile-server-latest/\n./pro/pro.py search --clear\n Then execute the command in the index-server master node:
docker exec -it index-server bash\n/opt/seafile/index-server/index-server.sh restore-all-repo\n To list the number of indexing tasks currently remaining, execute this command on the index-server master node:
/opt/seafile/index-server/index-server.sh show-all-task\n"},{"location":"extension/fuse/","title":"FUSE extension","text":"Files in the Seafile system are split into blocks, which means what is stored on your Seafile server is not complete files but blocks. This design facilitates effective data deduplication.
However, administrators sometimes want to access the files directly on the server. You can use seaf-fuse to do this.
Seaf-fuse is an implementation of the FUSE virtual filesystem. In short, it mounts all the Seafile files to a folder (called the '''mount point'''), so that you can access all the files managed by the Seafile server just as you would access a normal folder on your server.
Note
Assume we want to mount to /opt/seafile-fuse in host.
Add the following content
seafile:\n ...\n volumes:\n ...\n - type: bind\n source: /opt/seafile-fuse\n target: /seafile-fuse\n bind:\n propagation: rshared\n privileged: true\n cap_add:\n - SYS_ADMIN\n"},{"location":"extension/fuse/#start-seaf-fuse-with-the-script-in-docker","title":"Start seaf-fuse with the script in docker","text":"Start Seafile server and enter the container
docker compose up -d\n\ndocker exec -it seafile bash\n Start seaf-fuse in the container
cd /opt/seafile/seafile-server-latest/\n\n./seaf-fuse.sh start /seafile-fuse\n"},{"location":"extension/fuse/#use-seaf-fuse-in-binary-based-deployment","title":"Use seaf-fuse in binary based deployment","text":"Assume we want to mount to /data/seafile-fuse.
mkdir -p /data/seafile-fuse\n"},{"location":"extension/fuse/#start-seaf-fuse-with-the-script","title":"Start seaf-fuse with the script","text":"Before starting seaf-fuse, you should have started the Seafile server with ./seafile.sh start
./seaf-fuse.sh start /data/seafile-fuse\n"},{"location":"extension/fuse/#stop-seaf-fuse","title":"Stop seaf-fuse","text":"./seaf-fuse.sh stop\n"},{"location":"extension/fuse/#start-options","title":"Start options","text":"seaf-fuse supports standard mount options for FUSE. For example, you can specify ownership for the mounted folder:
./seaf-fuse.sh start -o uid=<uid> /data/seafile-fuse\n In Pro edition, seaf-fuse enables the block cache function by default to cache block objects when an object storage backend is used, thereby reducing access to backend storage; however, this function occupies local disk space. Since Seafile Pro 10.0.0, you can disable the block cache by adding the following option:
./seaf-fuse.sh start --disable-block-cache /data/seafile-fuse\n You can find the complete list of supported options in man fuse.
Now you can list the content of /data/seafile-fuse.
$ ls -lhp /data/seafile-fuse\n\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 abc@abc.com/\ndrwxr-xr-x 2 root root 4.0K Jan 4 2015 foo@foo.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 plus@plus.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 sharp@sharp.com/\ndrwxr-xr-x 2 root root 4.0K Jan 3 2015 test@test.com/\n $ ls -lhp /data/seafile-fuse/abc@abc.com\n\ndrwxr-xr-x 2 root root 924 Jan 1 1970 5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\ndrwxr-xr-x 2 root root 1.6K Jan 1 1970 a09ab9fc-7bd0-49f1-929d-6abeb8491397_My Notes/\n As the above listing shows, under each user's folder there are subfolders, each of which represents a library of that user and has a name of this format: '''{library_id}_{library_name}'''.
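Since the library id is a UUID containing hyphens but no underscore, a script walking the FUSE mount can split a folder name back into its id and library name; a small sketch using the names from the listing above:

```python
def split_library_folder(folder_name):
    # "{library_id}_{library_name}" -> (library_id, library_name).
    # Split on the first underscore: the UUID-style id contains none,
    # while the library name itself may contain spaces or underscores.
    library_id, _, library_name = folder_name.partition("_")
    return library_id, library_name
```

For example, split_library_folder("5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos") yields the library id and "Photos".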
"},{"location":"extension/fuse/#the-folder-for-a-library","title":"The folder for a library","text":"$ ls -lhp /data/seafile-fuse/abc@abc.com/5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\n\n-rw-r--r-- 1 root root 501K Jan 1 2015 image.png\n-rw-r--r-- 1 root root 501K Jan 1 2015 sample.jpg\n"},{"location":"extension/fuse/#if-you-get-a-permission-denied-error","title":"If you get a \"Permission denied\" error","text":"If you get an error message saying \"Permission denied\" when running ./seaf-fuse.sh start, most likely you are not in the \"fuse group\". You should:
Add yourself to the fuse group
sudo usermod -a -G fuse <your-user-name>\n Log out of your shell and log in again
Run ./seaf-fuse.sh start <path> again.Deployment Tips
This guide only covers installing Collabora as another container on the same Docker host that your Seafile container runs on. Please make sure your host has sufficient cores and RAM.
If you want to install it on another host, please refer to the Collabora documentation for instructions, then configure seahub_settings.py as described in this guide to enable online office.
Note
To integrate LibreOffice with Seafile, you have to enable HTTPS in your Seafile server:
Deploy in DockerDeploy from binary packagesModify .env file:
SEAFILE_SERVER_PROTOCOL=https\n Please follow the links to enable HTTPS via Nginx.
Download the collabora.yml
wget https://manual.seafile.com/13.0/repo/docker/collabora.yml\n Add collabora.yml to the COMPOSE_FILE list (i.e., COMPOSE_FILE='...,collabora.yml') and add the relevant options in .env:
COLLABORA_IMAGE=collabora/code:24.04.5.1.1 # image of LibreOffice\nCOLLABORA_PORT=6232 # expose port\nCOLLABORA_USERNAME=<your LibreOffice admin username>\nCOLLABORA_PASSWORD=<your LibreOffice admin password>\nCOLLABORA_ENABLE_ADMIN_CONSOLE=true # enable admin console or not\nCOLLABORA_REMOTE_FONT= # remote font url\nCOLLABORA_ENABLE_FILE_LOGGING=false # use file logs or not, see FAQ\n"},{"location":"extension/libreoffice_online/#config-seafile","title":"Config Seafile","text":"Add the following config options to seahub_settings.py:
OFFICE_SERVER_TYPE = 'CollaboraOffice'\nENABLE_OFFICE_WEB_APP = True\nOFFICE_WEB_APP_BASE_URL = 'http://collabora:9980/hosting/discovery'\n\n# Expiration of WOPI access token\n# WOPI access token is a string used by Seafile to determine the file's\n# identity and permissions when use LibreOffice Online view it online\n# And for security reason, this token should expire after a set time period\nWOPI_ACCESS_TOKEN_EXPIRATION = 30 * 60 # seconds\n\n# List of file formats that you want to view through LibreOffice Online\n# You can change this value according to your preferences\n# And of course you should make sure your LibreOffice Online supports to preview\n# the files with the specified extensions\nOFFICE_WEB_APP_FILE_EXTENSION = ('odp', 'ods', 'odt', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt', 'pptm', 'pptx', 'doc', 'docm', 'docx')\n\n# Enable edit files through LibreOffice Online\nENABLE_OFFICE_WEB_APP_EDIT = True\n\n# types of files should be editable through LibreOffice Online\nOFFICE_WEB_APP_EDIT_FILE_EXTENSION = ('odp', 'ods', 'odt', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt', 'pptm', 'pptx', 'doc', 'docm', 'docx')\n Then restart Seafile.
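The WOPI_ACCESS_TOKEN_EXPIRATION setting above bounds how long LibreOffice Online can keep fetching a file with one token. The mechanics work roughly like this simplified sketch — it is not Seahub's actual implementation, and the secret is a placeholder:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"example-secret"   # placeholder; Seahub derives signing keys from its own settings
TOKEN_TTL = 30 * 60          # mirrors WOPI_ACCESS_TOKEN_EXPIRATION above

def make_token(user, repo_id, path):
    # Sign the file identity together with an absolute expiry time.
    body = json.dumps({"user": user, "repo": repo_id, "path": path,
                       "exp": int(time.time()) + TOKEN_TTL}).encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(body).decode() + "." + sig

def check_token(token):
    body_b64, _, sig = token.rpartition(".")
    body = base64.urlsafe_b64decode(body_b64)
    if not hmac.compare_digest(sig, hmac.new(SECRET, body, hashlib.sha256).hexdigest()):
        return None   # tampered token
    claims = json.loads(body)
    if claims["exp"] < time.time():
        return None   # expired: the editor must obtain a fresh token
    return claims
```

This is why a short expiration is a security measure: even a leaked token stops granting access to the file after the configured period.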
Click an office file in the Seafile web interface and you will see the online preview rendered by CollaboraOnline.
"},{"location":"extension/libreoffice_online/#trouble-shooting","title":"Trouble shooting","text":"Understanding how the integration works will help you debug the problem. When a user visits a file page:
The CollaboraOnline container writes its logs to stdout; you can use the following command to access them:
docker logs seafile-collabora\n If you would like to save logs to a file (i.e., a .log file), you can modify .env with the following statement and uncomment the relevant lines in collabora.yml:
# .env\nCOLLABORA_ENABLE_FILE_LOGGING=True\nCOLLABORA_PATH=/opt/collabora # path of the collabora logs\n # collabora.yml\n# uncomment the following lines\n...\nservices:\n collabora:\n ...\n volumes:\n - \"${COLLABORA_PATH:-/opt/collabora}/logs:/opt/cool/logs/\" # chmod 777 needed\n ...\n...\n Create the logs directory and restart the Seafile server
mkdir -p /opt/collabora\nchmod 777 /opt/collabora\ndocker compose down\ndocker compose up -d\n"},{"location":"extension/libreoffice_online/#collaboraonline-server-on-a-separate-host","title":"CollaboraOnline server on a separate host","text":"To deploy CollaboraOnline independently on a separate server, please refer to the official documentation. After a successful deployment, you only need to specify the values of the following fields in seahub_settings.py and then restart the service.
OFFICE_SERVER_TYPE = 'CollaboraOffice'\nENABLE_OFFICE_WEB_APP = True\nOFFICE_WEB_APP_BASE_URL = 'https://<Your CollaboraOnline host url>/hosting/discovery'\nWOPI_ACCESS_TOKEN_EXPIRATION = 30 * 60 \nENABLE_OFFICE_WEB_APP_EDIT = True\n"},{"location":"extension/metadata-server/","title":"Metadata server","text":"The metadata server provides metadata management for your libraries, giving you better insight into their contents.
"},{"location":"extension/metadata-server/#deployment","title":"Deployment","text":"Prerequisites
The metadata server requires Redis as the cache server (it is the default cache server in Seafile 13.0). You must deploy Redis for Seafile, then modify seafile.conf, seahub_settings.py and seafevents.conf to enable it before deploying the metadata server.
Warning
Please make sure your Seafile service has been deployed before deploying the Metadata server. This is because the Metadata server needs to read Seafile's configuration file seafile.conf. If you deploy the Metadata server before or at the same time as Seafile, it may fail to detect seafile.conf and fail to start.
Please download the file with the following command:
Deploy on the same machine as SeafileStandaloneNote
You have to download this file to the same directory as seafile-server.yml
wget https://manual.seafile.com/13.0/repo/docker/md-server.yml\n Note
For standalone deployment (usually used in cluster deployments), the metadata server only supports Seafile using a storage backend such as S3.
wget https://manual.seafile.com/13.0/repo/docker/metadata-server/md-server.yml\nwget -O .env https://manual.seafile.com/13.0/repo/docker/metadata-server/env\n"},{"location":"extension/metadata-server/#modify-env","title":"Modify .env","text":"The metadata server reads all of its configuration from environment variables and does not need a dedicated configuration file. You don't need to add additional variables to your .env (except for standalone deployment) to get the metadata server started, because it reads the exact same configuration as the Seafile server (including JWT_PRIVATE_KEY) and keeps the repository metadata locally (default /opt/seafile-data/seafile/md-data). You still need to modify the COMPOSE_FILE list in .env and add md-server.yml to enable the metadata server:
COMPOSE_FILE='...,md-server.yml'\n To facilitate your deployment, we still provide two different configuration solutions for your reference:
"},{"location":"extension/metadata-server/#example-env-for-seafile-data-is-stored-locally","title":"Example.env for Seafile data is stored locally","text":"In this case you don't need to add any additional configuration to your .env. You can also specify image version, maximum local cache size, etc.
MD_IMAGE=seafileltd/seafile-md-server:13.0-latest\nMD_MAX_CACHE_SIZE=1GB\n"},{"location":"extension/metadata-server/#example-env-for-seafile-data-is-stored-in-the-storage-backend-eg-s3","title":"Example .env for Seafile data is stored in the storage backend (e.g., S3)","text":"First you need to create a bucket for metadata on your S3 storage backend provider. Then add or modify the following information to .env:
MD_IMAGE=seafileltd/seafile-md-server:13.0-latest\nMD_STORAGE_TYPE=s3\nS3_MD_BUCKET=...\nS3_KEY_ID=<your-key-id>\nS3_SECRET_KEY=<your-secret-key>\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=false\nS3_AWS_REGION=us-east-1\nS3_HOST=\nS3_USE_HTTPS=true\nS3_SSE_C_KEY=\n Data for Seafile server should be accessible for Metadata server
In order to correctly obtain metadata information, you must ensure that the data of your Seafile server can be accessed correctly. When the Metadata server and Seafile server are deployed together, the Metadata server automatically obtains the Seafile server's configuration, so you don't need to worry about this. But if your Metadata server is deployed standalone (usually in a cluster environment), you need to ensure that the storage-related settings in the .env used by the Metadata server are consistent with the .env used by the Seafile server (e.g., SEAF_SERVER_STORAGE_TYPE), and that the Metadata server can access the Seafile server's configuration files (e.g., seafile.conf), so that it can correctly obtain data from the Seafile server.
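As a quick sanity check for the standalone case, you can compare the storage-related variables of the two .env files. The helper below is a hypothetical sketch (not part of Seafile), assuming plain KEY=VALUE lines:

```python
def parse_env(text: str) -> dict:
    """Parse simple KEY=VALUE lines from a .env file, ignoring comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, value = line.split("=", 1)
            env[key.strip()] = value.strip()
    return env

def storage_mismatches(seafile_env: str, md_env: str,
                       keys=("SEAF_SERVER_STORAGE_TYPE",)) -> list:
    """Return the keys whose values differ between the two .env files."""
    a, b = parse_env(seafile_env), parse_env(md_env)
    return [k for k in keys if a.get(k) != b.get(k)]

print(storage_mismatches("SEAF_SERVER_STORAGE_TYPE=s3",
                         "SEAF_SERVER_STORAGE_TYPE=disk"))
# ['SEAF_SERVER_STORAGE_TYPE']
```

An empty result means the checked storage settings agree; extend the `keys` tuple with whatever storage variables your deployment uses.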
The following table lists all environment variables related to the Metadata server:
Variables Description Required JWT_PRIVATE_KEY The JWT key used to connect with Seafile server Required MD_MAX_CACHE_SIZE The maximum cache size. Optional, default 1GB REDIS_HOST Your Redis service host. Optional, default redis REDIS_PORT Your Redis service port. Optional, default 6379 REDIS_PASSWORD Your Redis access password. Optional MD_STORAGE_TYPE Where the metadata is stored. Available options are disk (local storage) and s3 disk S3_MD_BUCKET Your S3 bucket name for the bucket storing metadata Required when using S3 (MD_STORAGE_TYPE=s3) MD_CHECK_UPDATE_INTERVAL The interval for updating metadata of the repository 30m MD_FILE_COUNT_LIMIT The maximum number of files in a repository that the metadata feature allows. If the number of files in a repository exceeds this value, the metadata management function will not be enabled for the repository. For a repository with metadata management enabled, if the number of records in it reaches this value but there are still some files that are not recorded in metadata server, the metadata management of the unrecorded files will be skipped. 100000 In addition, there are some environment variables related to S3 authorization, please refer to the part with S3_ prefix in this table (the bucket names for Seafile are also needed).
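MD_MAX_CACHE_SIZE accepts a human-readable size string such as 1GB. The snippet below is a hypothetical illustration of how such strings map to byte counts (it is not Seafile's actual parser):

```python
def parse_size(value: str) -> int:
    """Convert a size string like '1GB' or '512MB' into bytes."""
    units = {"KB": 1024, "MB": 1024 ** 2, "GB": 1024 ** 3, "TB": 1024 ** 4}
    value = value.strip().upper()
    for suffix, factor in units.items():
        if value.endswith(suffix):
            return int(value[: -len(suffix)]) * factor
    return int(value)  # plain byte count

print(parse_size("1GB"))  # 1073741824
```

So the default of 1GB corresponds to roughly one gibibyte of local metadata cache per node.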
Metadata server supports Redis only
To enable the metadata feature, you have to use Redis for caching, i.e., CACHE_PROVIDER must be set to redis in your .env
seahub_settings.py","text":"To enable metadata server in Seafile, please add the following field in your seahub_settings.py:
ENABLE_METADATA_MANAGEMENT = True\nMETADATA_SERVER_URL = 'http://seafile-md-server:8084'\n ENABLE_METADATA_MANAGEMENT = True\nMETADATA_SERVER_URL = 'http://<your metadata-server host>:8084'\n"},{"location":"extension/metadata-server/#start-service","title":"Start service","text":"You can use the following commands to start the metadata server (the Seafile service also has to be restarted):
docker compose down\ndocker compose up -d\n"},{"location":"extension/metadata-server/#verify-metadata-server-and-enable-it-in-the-seafile","title":"Verify Metadata server and enable it in the Seafile","text":"Check the container log for seafile-md-server; you will see the following messages if it runs fine:
$ docker logs -f seafile-md-server\n\n[md-server] [2025-03-27 02:30:55] [INFO] Created data links\n[md-server] [2025-03-27 02:30:55] [INFO] Database initialization completed\n[md-server] [2025-03-27 02:30:55] [INFO] Starting Metadata server\n 2. Check seafevents.log and seahub.log; you should see the following information in seafevents.log and no errors reported in seahub.log: [2025-02-23 06:08:05] [INFO] seafevents.repo_metadata.index_worker:134 refresh_lock refresh_thread Starting refresh locks\n[2025-02-23 06:08:05] [INFO] seafevents.repo_metadata.slow_task_handler:61 worker_handler slow_task_handler_thread_0 starting update metadata work\n[2025-02-23 06:08:05] [INFO] seafevents.repo_metadata.slow_task_handler:61 worker_handler slow_task_handler_thread_1 starting update metadata work\n[2025-02-23 06:08:05] [INFO] seafevents.repo_metadata.slow_task_handler:61 worker_handler slow_task_handler_thread_2 starting update metadata work\n Toggle Enable extended properties in the library's Settings
Finally, you can see the metadata of your library in the Views tab
When you deploy Seafile server and Metadata server to the same machine, Metadata server will use the same persistence directory (e.g. /opt/seafile-data) as Seafile server. Metadata server will use the following directories or files:
/opt/seafile-data/seafile/md-data: Metadata server data and cache. /opt/seafile-data/seafile/logs/seaf-md-server: The logs directory of the Metadata server, consisting of a running log and an access log. Currently, the status updates of files and libraries on the client and web interface are based on polling the server. The latest status cannot be reflected in real time on the client due to polling delays. The client needs to periodically refresh the library modification, file locking, subdirectory permissions and other information, which causes additional performance overhead to the server.
When a directory is opened on the web interface, the lock status of the file cannot be updated in real time, and the page needs to be refreshed.
The notification server uses the WebSocket protocol and maintains a two-way communication connection with the client or the web interface. When the above changes occur, seaf-server notifies the notification server of the changes. The notification server can then notify the client or the web interface in real time. This not only improves real-time performance, but also reduces the performance overhead of the server.
"},{"location":"extension/notification-server/#supported-update-reminder-types","title":"Supported update reminder types","text":"Since Seafile 12.0, we use a separate Docker image to deploy the notification server. First download notification-server.yml to Seafile directory:
wget https://manual.seafile.com/13.0/repo/docker/notification-server.yml\n Modify .env, and insert notification-server.yml into COMPOSE_FILE:
COMPOSE_FILE='seafile-server.yml,caddy.yml,notification-server.yml'\n then add or modify ENABLE_NOTIFICATION_SERVER:
ENABLE_NOTIFICATION_SERVER=true\n Finally, you can run the notification server with the following command:
docker compose down\ndocker compose up -d\n"},{"location":"extension/notification-server/#checking-notification-server-status","title":"Checking notification server status","text":"When the notification server is working, you can access http://127.0.0.1:8083/ping from your browser, which will answer {\"ret\": \"pong\"}. If you have a proxy configured, you can access https://seafile.example.com/notification/ping from your browser instead.
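The expected ping response is a tiny JSON object; the snippet below is a hypothetical helper (not part of Seafile) for validating a response body fetched from the /ping endpoint, e.g. via curl:

```python
import json

def is_notification_alive(body: str) -> bool:
    """Return True if the /ping response body looks healthy."""
    try:
        return json.loads(body).get("ret") == "pong"
    except ValueError:
        return False

print(is_notification_alive('{"ret": "pong"}'))  # True
```

Any non-JSON body (e.g. an HTML error page from a misconfigured proxy) is treated as unhealthy.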
If the client works with the notification server, there will be a log message in seafile.log or seadrive.log:
Notification server is enabled on the remote server xxxx\n"},{"location":"extension/notification-server/#notification-server-in-seafile-cluster","title":"Notification Server in Seafile cluster","text":"There are no additional features for the notification server in the Pro Edition. It works the same as in the Community Edition.
If you enable clustering, you need to deploy the notification server on one of the servers or on a separate server. The load balancer should forward WebSocket requests to this node.
Download .env and notification-server.yml to the notification server directory:
wget https://manual.seafile.com/13.0/repo/docker/notification-server/notification-server.yml\nwget -O .env https://manual.seafile.com/13.0/repo/docker/notification-server/env\n Then modify the .env file according to your environment. The following fields need to be modified:
SEAFILE_MYSQL_DB_HOST Seafile MySQL host SEAFILE_MYSQL_DB_USER Seafile MySQL user, default is seafile SEAFILE_MYSQL_DB_PASSWORD Seafile MySQL password TIME_ZONE Time zone JWT_PRIVATE_KEY JWT key, the same as the config in Seafile .env file SEAFILE_SERVER_HOSTNAME Seafile host name SEAFILE_SERVER_PROTOCOL http or https Now, you can run the notification server with the following command:
docker compose up -d\n then you need to modify the .env on the host where Seafile is deployed:
ENABLE_NOTIFICATION_SERVER=true\nNOTIFICATION_SERVER_URL=https://seafile.example.com/notification\nINNER_NOTIFICATION_SERVER_URL=http://<your notification server host>:8083\n Difference between NOTIFICATION_SERVER_URL and INNER_NOTIFICATION_SERVER_URL
NOTIFICATION_SERVER_URL: used for the connection between the client (i.e., the user's browser) and the notification server. INNER_NOTIFICATION_SERVER_URL: used for the connection between the Seafile server and the notification server. Finally, you need to configure the load balancer according to the following forwarding rules:
Forward /notification/ping requests to the notification server via the HTTP protocol. Forward /notification requests to the notification server. Here is a configuration that uses HAProxy to support the notification server. The HAProxy version needs to be >= 2.0. You should use similar configurations for other load balancers.
#/etc/haproxy/haproxy.cfg\n\n# Other existing haproxy configurations\n......\n\nfrontend seafile\n bind 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n acl notif_ping_request url_sub -i /notification/ping\n acl ws_requests url -i /notification\n acl hdr_connection_upgrade hdr(Connection) -i upgrade\n acl hdr_upgrade_websocket hdr(Upgrade) -i websocket\n use_backend ws_backend if hdr_connection_upgrade hdr_upgrade_websocket\n use_backend notif_ping_backend if notif_ping_request\n use_backend ws_backend if ws_requests\n default_backend backup_nodes\n\nbackend backup_nodes\n cookie SERVERID insert indirect nocache\n server seafileserver01 192.168.0.137:80\n\nbackend notif_ping_backend\n option forwardfor\n server ws 192.168.0.137:8083\n\nbackend ws_backend\n option forwardfor # This sets X-Forwarded-For\n server ws 192.168.0.137:8083\n"},{"location":"extension/office_web_app/","title":"Office Online Server","text":"In Seafile Professional Server Version 4.4.0 (or above), you can use Microsoft Office Online Server (formerly named Office Web Apps) to preview documents online. Office Online Server provides the best preview for all Office format files. It also support collaborative editing of Office files directly in the web browser. For organizations with Microsoft Office Volume License, it's free to use Office Online Server. For more information about Office Online Server and how to deploy it, please refer to https://technet.microsoft.com/en-us/library/jj219455(v=office.16).aspx.
Seafile only supports Office Online Server 2016 and above
To use Office Online Server for preview, please add the following config options to seahub_settings.py.
# Enable Office Online Server\nENABLE_OFFICE_WEB_APP = True\n\n# Url of Office Online Server's discovery page\n# The discovery page tells Seafile how to interact with Office Online Server when view file online\n# You should change `http://example.office-web-app.com` to your actual Office Online Server server address\nOFFICE_WEB_APP_BASE_URL = 'http://example.office-web-app.com/hosting/discovery'\n\n# Expiration of WOPI access token\n# WOPI access token is a string used by Seafile to determine the file's\n# identity and permissions when use Office Online Server view it online\n# And for security reason, this token should expire after a set time period\nWOPI_ACCESS_TOKEN_EXPIRATION = 60 * 60 * 24 # seconds\n\n# List of file formats that you want to view through Office Online Server\n# You can change this value according to your preferences\n# And of course you should make sure your Office Online Server supports to preview\n# the files with the specified extensions\nOFFICE_WEB_APP_FILE_EXTENSION = ('ods', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt',\n 'pptm', 'pptx', 'doc', 'docm', 'docx')\n\n# Enable edit files through Office Online Server\nENABLE_OFFICE_WEB_APP_EDIT = True\n\n# types of files should be editable through Office Online Server\n# Note, Office Online Server 2016 is needed for editing docx\nOFFICE_WEB_APP_EDIT_FILE_EXTENSION = ('xlsx', 'pptx', 'docx')\n\n\n# HTTPS authentication related (optional)\n\n# Server certificates\n# Path to a CA_BUNDLE file or directory with certificates of trusted CAs\n# NOTE: If set this setting to a directory, the directory must have been processed using the c_rehash utility supplied with OpenSSL.\nOFFICE_WEB_APP_SERVER_CA = '/path/to/certfile'\n\n\n# Client certificates\n# You can specify a single file (containing the private key and the certificate) to use as client side certificate\nOFFICE_WEB_APP_CLIENT_PEM = 'path/to/client.pem'\n\n# or you can specify these two file path to use as client side 
certificate\nOFFICE_WEB_APP_CLIENT_CERT = 'path/to/client.cert'\nOFFICE_WEB_APP_CLIENT_KEY = 'path/to/client.key'\n Then restart
./seafile.sh restart\n./seahub.sh restart\n After you click the document you specified in seahub_settings.py, you will see the new preview page.
"},{"location":"extension/office_web_app/#trouble-shooting","title":"Trouble shooting","text":"Understanding how the web app integration works is going to help you debugging the problem. When a user visits a file page:
Please check the Nginx log for Seahub (for step 3) and Office Online Server to see which step is wrong.
Warning
You should make sure you have configured at least a few GB of paging files in your Windows system. Otherwise the IIS worker processes may die randomly when handling Office Online requests.
"},{"location":"extension/only_office/","title":"OnlyOffice","text":"Seafile supports OnlyOffice to view/edit office files online. In order to use OnlyOffice, you must first deploy an OnlyOffice server.
Deployment Tips
You can deploy OnlyOffice to the same machine as Seafile (only supported when deploying with Docker, with sufficient cores and RAM) using the onlyoffice.yml provided by Seafile according to this document, or you can deploy it to a different machine according to the OnlyOffice official document.
Download the onlyoffice.yml
wget https://manual.seafile.com/13.0/repo/docker/onlyoffice.yml\n insert onlyoffice.yml into the COMPOSE_FILE list (i.e., COMPOSE_FILE='...,onlyoffice.yml'), and add the following OnlyOffice configuration to the .env file.
# OnlyOffice image\nONLYOFFICE_IMAGE=onlyoffice/documentserver:8.1.0.1\n\n# Persistent storage directory of OnlyOffice\nONLYOFFICE_VOLUME=/opt/onlyoffice\n\n# OnlyOffice document server port\nONLYOFFICE_PORT=6233\n\n# jwt secret, generated by `pwgen -s 40 1` \nONLYOFFICE_JWT_SECRET=<your jwt secret>\n Note
From Seafile 12.0, OnlyOffice's JWT verification is forcibly enabled. Secure communication between Seafile and OnlyOffice is ensured by a shared secret. You can generate the JWT secret with the following command:
pwgen -s 40 1\n Also modify seahub_settings.py
ENABLE_ONLYOFFICE = True\nONLYOFFICE_APIJS_URL = 'https://seafile.example.com:6233/web-apps/apps/api/documents/api.js'\nONLYOFFICE_JWT_SECRET = '<your jwt secret>'\n\n# NOTE\n# The following two configurations, do NOT need to configure them explicitly.\n# The default values are as follows.\n# If you have custom needs, you can also configure them, which will override the default values.\nONLYOFFICE_FILE_EXTENSION = ('doc', 'docx', 'ppt', 'pptx', 'xls', 'xlsx', 'odt', 'fodt', 'odp', 'fodp', 'ods', 'fods', 'ppsx', 'pps', 'csv')\nONLYOFFICE_EDIT_FILE_EXTENSION = ('docx', 'pptx', 'xlsx', 'csv')\nOFFICE_PREVIEW_MAX_SIZE = 30 * 1024 * 1024 # preview size, 30 MB\n Tip
By default, OnlyOffice uses port 6233 for communication between Seafile and the Document Server. You can modify the bound port by specifying ONLYOFFICE_PORT; the port in ONLYOFFICE_APIJS_URL in seahub_settings.py should be modified together.
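Because the two values must stay in sync, ONLYOFFICE_APIJS_URL can be thought of as derived from your host and the bound port. The function below is a hypothetical illustration of that URL layout, not part of Seafile:

```python
def onlyoffice_apijs_url(host: str, port: int = 6233, scheme: str = "https") -> str:
    """Build the ONLYOFFICE_APIJS_URL value from the bound Document Server port."""
    return f"{scheme}://{host}:{port}/web-apps/apps/api/documents/api.js"

print(onlyoffice_apijs_url("seafile.example.com"))
# https://seafile.example.com:6233/web-apps/apps/api/documents/api.js
```

If you change ONLYOFFICE_PORT in .env, regenerate the URL with the new port and update seahub_settings.py accordingly.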
The following configuration options are only for OnlyOffice experts. You can create and mount a custom configuration file called local-production-linux.json to force some settings.
nano local-production-linux.json\n For example, you can configure OnlyOffice to automatically save by copying the following code block in this file:
{\n \"services\": {\n \"CoAuthoring\": {\n \"autoAssembly\": {\n \"enable\": true,\n \"interval\": \"5m\"\n }\n }\n },\n \"FileConverter\": {\n \"converter\": {\n \"downloadAttemptMaxCount\": 3\n }\n }\n}\n Mount this config file into your onlyoffice block in onlyoffice.yml:
service:\n ...\n onlyoffice:\n ...\n volumes:\n ...\n - <Your path to local-production-linux.json>:/etc/onlyoffice/documentserver/local-production-linux.json\n...\n For more information you can check the official documentation: https://api.onlyoffice.com/editors/signature/ and https://github.com/ONLYOFFICE/Docker-DocumentServer#available-configuration-parameters
"},{"location":"extension/only_office/#restart-seafile-docker-instance-and-test-that-onlyoffice-is-running","title":"Restart Seafile-docker instance and test that OnlyOffice is running","text":"docker-compose down\ndocker-compose up -d\n Success
After the installation process is finished, visit this page to make sure you have deployed OnlyOffice successfully: http{s}://{your Seafile server's domain or IP}:6233/welcome. You will see the Document Server is running message on this page.
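This check can also be scripted. The snippet below is a hypothetical helper that treats a response body as healthy when it contains the expected phrase; fetch the body separately (e.g., with curl against the /welcome URL):

```python
def onlyoffice_is_running(body: str) -> bool:
    """Return True if the /welcome page body reports a running Document Server."""
    return "Document Server is running" in body

print(onlyoffice_is_running("<h1>Document Server is running</h1>"))  # True
```

An empty or error body indicates the Document Server container is not reachable on the configured port.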
Firstly, run docker logs -f seafile-onlyoffice, then open an office file. After the \"Download failed.\" error appears on the page, observe the logs for the following error:
==> /var/log/onlyoffice/documentserver/converter/out.log <==\n...\nError: DNS lookup {local IP} (family:undefined, host:undefined) is not allowed. Because, It is a private IP address.\n...\n If it shows this error message and you haven't enabled JWT while using a local network, then it's likely due to an error triggered proactively by OnlyOffice server for enhanced security. (https://github.com/ONLYOFFICE/DocumentServer/issues/2268#issuecomment-1600787905)
So, as mentioned in the post, we highly recommend enabling JWT in your integration to fix this problem.
"},{"location":"extension/only_office/#the-document-security-token-is-not-correctly-formed","title":"The document security token is not correctly formed","text":"Starting from OnlyOffice Docker-DocumentServer version 7.2, JWT is enabled by default on OnlyOffice server.
So, for security reasons, please Configure OnlyOffice to use JWT Secret.
"},{"location":"extension/only_office/#onlyoffice-on-a-separate-host-and-url","title":"OnlyOffice on a separate host and URL","text":"For independent deployment of OnlyOffice on a single server, please refer to the official documentation. After a successful deployment, you only need to specify the values of the following fields in seahub_settings.py and then restart the service.
ENABLE_ONLYOFFICE = True\nONLYOFFICE_APIJS_URL = 'http{s}://<Your OnlyOffice host url>/web-apps/apps/api/documents/api.js'\nONLYOFFICE_JWT_SECRET = '<your jwt secret>'\nOFFICE_PREVIEW_MAX_SIZE = 30 * 1024 * 1024\n"},{"location":"extension/only_office/#about-ssl","title":"About SSL","text":"For deployments using the onlyoffice.yml file in this document, SSL is primarily handled by the Caddy. If the OnlyOffice document server and Seafile server are not on the same machine, please refer to the official document to configure SSL for OnlyOffice.
From Seafile 13, users can enable Seafile AI to support the following features:
Prerequisites of Seafile AI deployment
To deploy Seafile AI, you have to deploy the metadata server extension first. Then you can follow this manual to deploy Seafile AI.
AIGC statement in Seafile
With the help of large language models, face recognition models, and algorithm development, Seafile AI supports image recognition and text generation. The generated content is diverse and random, and users need to verify the generated content themselves. Seafile will not be responsible for AI-generated content (AIGC).
At the same time, Seafile AI supports the use of custom LLM and face recognition models. Different large language models will have different impacts on AIGC (including functions and performance), so Seafile will not be responsible for the corresponding rate (i.e., tokens/s), token consumption, or generated content, including but not limited to the following:
When users use their own OpenAI-compatible-API LLM service (e.g., LM Studio, Ollama) and use self-ablated or abliterated models, Seafile will not be responsible for possible bugs (such as infinite loops outputting the same meaningless content). At the same time, Seafile does not recommend using documents such as SeaDoc to evaluate the performance of ablated models.
"},{"location":"extension/seafile-ai/#deploy-seafile-ai-basic-service","title":"Deploy Seafile AI basic service","text":""},{"location":"extension/seafile-ai/#deploy-seafile-ai-on-the-host-with-seafile","title":"Deploy Seafile AI on the host with Seafile","text":"The Seafile AI basic service will use API calls to external large language model service to implement file labeling, file and image summaries, text translation, and sdoc writing assistance.
Seafile AI requires Redis cache
In order to deploy Seafile AI correctly, you have to use Redis as the cache. Please set CACHE_PROVIDER=redis in .env and set Redis related configuration information correctly.
Download seafile-ai.yml
wget https://manual.seafile.com/13.0/repo/docker/seafile-ai.yml\n Modify .env, insert or modify the following fields:
COMPOSE_FILE='...,seafile-ai.yml' # add seafile-ai.yml\n\nENABLE_SEAFILE_AI=true\nSEAFILE_AI_LLM_TYPE=openai\nSEAFILE_AI_LLM_KEY=<your openai LLM access key>\nSEAFILE_AI_LLM_MODEL=gpt-4o-mini # recommend\n COMPOSE_FILE='...,seafile-ai.yml' # add seafile-ai.yml\n\nENABLE_SEAFILE_AI=true\nSEAFILE_AI_LLM_TYPE=deepseek\nSEAFILE_AI_LLM_KEY=<your LLM access key>\nSEAFILE_AI_LLM_MODEL=deepseek-chat # recommend\n COMPOSE_FILE='...,seafile-ai.yml' # add seafile-ai.yml\n\nENABLE_SEAFILE_AI=true\nSEAFILE_AI_LLM_TYPE=azure\nSEAFILE_AI_LLM_URL= # your deployment url, leave blank to use default endpoint\nSEAFILE_AI_LLM_KEY=<your API key>\nSEAFILE_AI_LLM_MODEL=<your deployment name>\n COMPOSE_FILE='...,seafile-ai.yml' # add seafile-ai.yml\n\nENABLE_SEAFILE_AI=true\nSEAFILE_AI_LLM_TYPE=ollama\nSEAFILE_AI_LLM_URL=<your LLM endpoint>\nSEAFILE_AI_LLM_KEY=<your LLM access key>\nSEAFILE_AI_LLM_MODEL=<your model-id>\n COMPOSE_FILE='...,seafile-ai.yml' # add seafile-ai.yml\n\nENABLE_SEAFILE_AI=true\nSEAFILE_AI_LLM_TYPE=huggingface\nSEAFILE_AI_LLM_URL=<your huggingface API endpoint>\nSEAFILE_AI_LLM_KEY=<your huggingface API key>\nSEAFILE_AI_LLM_MODEL=<model provider>/<model-id>\n COMPOSE_FILE='...,seafile-ai.yml' # add seafile-ai.yml\n\nENABLE_SEAFILE_AI=true\nSEAFILE_AI_LLM_TYPE=proxy\nSEAFILE_AI_LLM_URL=<your proxy url>\nSEAFILE_AI_LLM_KEY=<your proxy virtual key> # optional\nSEAFILE_AI_LLM_MODEL=<model-id>\n Seafile AI utilizes LiteLLM to interact with LLM services. For a complete list of supported LLM providers, please refer to this documentation. Then fill the following fields in your .env:
COMPOSE_FILE='...,seafile-ai.yml' # add seafile-ai.yml\nENABLE_SEAFILE_AI=true\n\n# according to your situation\nSEAFILE_AI_LLM_TYPE=...\nSEAFILE_AI_LLM_URL=...\nSEAFILE_AI_LLM_KEY=...\nSEAFILE_AI_LLM_MODEL=...\n For example, if you are using a LLM service with OpenAI-compatible endpoints, you should set SEAFILE_AI_LLM_TYPE to other or openai, and set other LLM configuration items accurately.
About model selection
Seafile AI supports using large model providers from LiteLLM or large model services with OpenAI-compatible endpoints. Therefore, Seafile AI is compatible with most custom large model services in addition to the default model (gpt-4o-mini), but to ensure the normal use of Seafile AI features, you need to select a multimodal large model (e.g., one that supports image input and recognition).
Restart Seafile server:
docker compose down\ndocker compose up -d\n Download seafile-ai.yml and .env:
wget https://manual.seafile.com/13.0/repo/docker/seafile-ai/seafile-ai.yml\nwget -O .env https://manual.seafile.com/13.0/repo/docker/seafile-ai/env\n Modify the .env on the host that will deploy Seafile AI according to the following table:
SEAFILE_VOLUME The volume directory of Seafile AI server data JWT_PRIVATE_KEY JWT key, the same as the config in Seafile .env file INNER_SEAHUB_SERVICE_URL Intranet URL for accessing Seahub component, like http://<your Seafile server intranet IP>. REDIS_HOST Redis server host REDIS_PORT Redis server port REDIS_PASSWORD Redis server password SEAFILE_AI_LLM_TYPE Large Language Model (LLM) Type. Default is openai. SEAFILE_AI_LLM_URL LLM API endpoint. SEAFILE_AI_LLM_KEY LLM API key. SEAFILE_AI_LLM_MODEL LLM model id (or name). Default is gpt-4o-mini FACE_EMBEDDING_SERVICE_URL Face embedding service URL Then start your Seafile AI server:
docker compose up -d\n Modify .env in the host deployed Seafile
SEAFILE_AI_SERVER_URL=http://<your seafile ai host>:8888\n then restart your Seafile server
docker compose down && docker compose up -d\n The face embedding service is used to detect and encode faces in images and is an extension component of Seafile AI. Generally, we recommend that you deploy the service on a machine with a GPU and a graphics card driver that supports OnnxRuntime (so it can also be deployed on a different machine from the Seafile AI base service). Currently, the Seafile AI face embedding service only supports the following modes:
If you plan to deploy these face embeddings in an environment using a GPU, you need to make sure your graphics card is in the range supported by the acceleration environment (e.g., CUDA 12.4 is supported) and correctly mapped in the /dev/dri directory. So in some cases, cloud servers and WSL with certain driver versions may not be supported.
Download Docker compose files
CUDACPUwget -O face-embedding.yml https://manual.seafile.com/13.0/repo/docker/face-embedding/cuda.yml\n wget -O face-embedding.yml https://manual.seafile.com/13.0/repo/docker/face-embedding/cpu.yml\n Modify .env, insert or modify the following fields:
COMPOSE_FILE='...,face-embedding.yml' # add face-embedding.yml\n\nFACE_EMBEDDING_VOLUME=/opt/face_embedding\n Restart Seafile server
docker compose down\ndocker compose up -d\n Enable face recognition in the repo's settings:
Since the face embedding service may need to be deployed on some hosts with GPU(s), it may not be deployed together with the Seafile AI basic service. At this time, you should make some changes to the Docker compose file so that the service can be accessed normally.
Modify .yml file, delete the commented out lines to expose the service port:
services:\n face-embedding:\n ...\n ports:\n - 8886:8886\n Modify the .env of where deployed Seafile AI:
FACE_EMBEDDING_SERVICE_URL=http://<your face embedding service host>:8886\n Make sure JWT_PRIVATE_KEY has set in the .env for face embedding and is same as the Seafile server
Restart Seafile server
docker compose down\ndocker compose up -d\n By default, the persistent volume is /opt/face_embedding. It will consist of two subdirectories:
/opt/face_embedding/logs: Contains the startup log and access log of the face embedding service. /opt/face_embedding/models: Contains the model files of the face embedding service. It will automatically obtain the latest applicable models at each startup. These models are hosted by our Hugging Face repository. Of course, you can also manually download your own models into this directory (if you fail to automatically pull the models, you can also download them manually). By default, the access key used by the face embedding service is the same as that used by the Seafile server, which is JWT_PRIVATE_KEY. At some point, this may have to be modified for security reasons. If you need to customize the access key for the face embedding service, follow these steps:
Modify .env file for both face embedding and Seafile AI:
FACE_EMBEDDING_SERVICE_KEY=<your customizing access keys>\n Restart Seafile server
docker compose down\ndocker compose up -d\n Seafile supports counting users' AI usage (how many tokens are used) and setting monthly AI quotas for users.
Open $SEAFILE_VOLUME/seafile/conf/seahub_settings.py and add AI price information (i.e., the cost per 1,000 tokens):
AI_PRICES = {\n\"gpt-4o-mini\": { # replace gpt-4o-mini with your model name\n \"input_tokens_1k\": 0.0011, # input price per 1,000 tokens\n \"output_tokens_1k\": 0.0044 # output price per 1,000 tokens\n }\n}\n Refer to management of roles and permissions to specify monthly_ai_credit_per_user (-1 means unlimited); the unit should be the same as in AI_PRICES.
monthly_ai_credit_per_user for organization users
For organizational team users, monthly_ai_credit_per_user applies to the entire team. For example, when monthly_ai_credit_per_user is set to 2 (in dollars, for example) and there are 10 members in the team, all members in the team share a quota of \(2 \times 10 = \$20\).
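The accounting described above can be sketched in a few lines of Python. This is a hypothetical helper, not Seafile's actual accounting code; the model name, prices and token counts are illustrative assumptions taken from the example configuration.

```python
# Hedged sketch of per-request cost and team credit pool, assuming prices
# in AI_PRICES are per 1,000 tokens, as in the sample configuration above.
AI_PRICES = {
    "gpt-4o-mini": {"input_tokens_1k": 0.0011, "output_tokens_1k": 0.0044},
}

def usage_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost of one request: tokens / 1000 times the per-1k-token price."""
    p = AI_PRICES[model]
    return (input_tokens / 1000) * p["input_tokens_1k"] + \
           (output_tokens / 1000) * p["output_tokens_1k"]

def team_credit(per_user_credit: float, member_count: int) -> float:
    """Shared monthly pool for an organization team."""
    return per_user_credit * member_count
```

With these helpers, a request with 2000 input and 1000 output tokens against gpt-4o-mini would cost about 0.0066, and a 10-member team with a per-user credit of 2 shares a pool of 20.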
SeaDoc is an extension of Seafile that provides an online collaborative document editor.
SeaDoc is designed around the following key ideas:
SeaDoc excels at:
The SeaDoc architecture is illustrated below:
Here is the workflow when a user opens an sdoc file in a browser:
Default extension in Docker deployment
This extension is already installed by default when deploying Seafile (single-node mode) by Docker.
If you would like to remove it, you can undo the steps in this section (i.e., remove seadoc.yml from the COMPOSE_FILE field and set ENABLE_SEADOC to false)
The easiest way to deploy SeaDoc is to run it alongside the Seafile server on the same host, using the same Docker network. If you need to deploy SeaDoc standalone, follow the next section.
Download the seadoc.yml to /opt/seafile
wget https://manual.seafile.com/13.0/repo/docker/seadoc.yml\n Modify .env: insert seadoc.yml into COMPOSE_FILE and enable the SeaDoc server
COMPOSE_FILE='seafile-server.yml,caddy.yml,seadoc.yml'\n\nENABLE_SEADOC=true\n Start the SeaDoc server with the following command
docker compose up -d\n Now you can use SeaDoc!
"},{"location":"extension/setup_seadoc/#deploy-seadoc-standalone","title":"Deploy SeaDoc standalone","text":"If you deploy Seafile in a cluster or if you deploy Seafile with binary package, you need to setup SeaDoc as a standalone service. Here are the steps:
Download the .env and seadoc.yml files to the directory /opt/seadoc and modify them
wget https://manual.seafile.com/13.0/repo/docker/seadoc/seadoc.yml\nwget -O .env https://manual.seafile.com/13.0/repo/docker/seadoc/env\n Then modify the .env file according to your environment. The following fields need to be modified:
SEADOC_VOLUME: the volume directory of SeaDoc data. SEAFILE_MYSQL_DB_HOST: Seafile MySQL host. SEAFILE_MYSQL_DB_USER: Seafile MySQL user, default is seafile. SEAFILE_MYSQL_DB_PASSWORD: Seafile MySQL password. TIME_ZONE: time zone. JWT_PRIVATE_KEY: JWT key, the same as in the Seafile .env file. SEAFILE_SERVER_HOSTNAME: Seafile host name. SEAFILE_SERVER_PROTOCOL: http or https. (Optional) By default, the SeaDoc server binds to port 80 on the host machine. If that port is already taken by another service, you have to change the listening port of SeaDoc:
Modify seadoc.yml
services:\n seadoc:\n ...\n ports:\n - \"<your SeaDoc server port>:80\"\n...\n Add a reverse proxy for the SeaDoc server. In a cluster environment, this means you need to add reverse proxy rules at the load balancer. Here we use Nginx as an example (please replace 127.0.0.1:80 with the host:port of your SeaDoc server)
...\nserver {\n ...\n\n location /sdoc-server/ {\n proxy_pass http://127.0.0.1:80/;\n proxy_redirect off;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto $scheme;\n\n client_max_body_size 100m;\n }\n\n location /socket.io {\n proxy_pass http://127.0.0.1:80;\n proxy_http_version 1.1;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection 'upgrade';\n proxy_redirect off;\n\n proxy_buffers 8 32k;\n proxy_buffer_size 64k;\n\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header Host $http_host;\n proxy_set_header X-NginX-Proxy true;\n }\n}\n The equivalent Apache configuration is as follows: <Location /sdoc-server/>\n ProxyPass \"http://127.0.0.1:80/\"\n ProxyPassReverse \"http://127.0.0.1:80/\"\n </Location>\n\n <Location /socket.io/>\n # Since Apache HTTP Server 2.4.47\n ProxyPass \"http://127.0.0.1:80/socket.io/\" upgrade=websocket\n </Location>\n Start the SeaDoc server with the following command
docker compose up -d\n Modify Seafile server's configuration and start SeaDoc server
Warning
After using a reverse proxy, your SeaDoc service will be located at the /sdoc-server path of your reverse proxy (i.e. xxx.example.com/sdoc-server). For example:
Then SEADOC_SERVER_URL will be
http{s}://xxx.example.com/sdoc-server\n Modify .env in your Seafile-server host:
ENABLE_SEADOC=true\nSEADOC_SERVER_URL=https://seafile.example.com/sdoc-server\n Restart Seafile server
For deployments in Docker (including cluster mode): docker compose down\ndocker compose up -d\n For deployments from binary packages: cd /opt/seafile/seafile-server-latest\n./seahub.sh restart\n /opt/seadoc-data
Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container, in our case we keep various log files outside. This allows you to rebuild containers easily without losing important information.
SeaDoc uses one database table seahub_db.sdoc_operation_log to store operation logs. The database table is cleaned automatically.
This is because websocket for sdoc-server has not been properly configured. If you use the default Caddy proxy, it should be setup correctly.
But if you use your own proxy, you need to make sure it properly proxies your-sdoc-server-domain/socket.io to sdoc-server-docker-image-address/socket.io
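Such a websocket rule for a plain Nginx proxy could look like the following minimal sketch. The upstream address is a placeholder for your sdoc-server address; the full standalone-deployment example earlier on this page shows a complete configuration.

```nginx
location /socket.io {
    proxy_pass http://<your sdoc-server host>:80;
    proxy_http_version 1.1;
    # These two headers are what actually upgrade the connection to a websocket
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection 'upgrade';
}
```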
This is because the browser cannot correctly load content from sdoc-server. Make sure
.env. You can open the developer console of the browser to further debug the issue.
"},{"location":"extension/thumbnail-server/","title":"Thumbnail Server Overview","text":"Since Seafile 13.0, a new component, the thumbnail server, has been added. The thumbnail server can create thumbnails for images, videos, PDFs and other file types. It uses a task-queue-based architecture, so it can handle heavy workloads better than generating thumbnails inside the Seahub component.
Use this feature by forwarding thumbnail requests directly to thumbnail server via caddy or a reverse proxy.
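As a hedged sketch only: if you proxy with plain Nginx instead of the bundled Caddy, such a forwarding rule could look like the following (the upstream address 192.168.0.9:80 is a placeholder for your thumbnail server's host:port, matching the address used in the HAProxy cluster example further below):

```nginx
location /thumbnail {
    proxy_pass http://192.168.0.9:80;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```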
"},{"location":"extension/thumbnail-server/#how-to-configure-and-run","title":"How to configure and run","text":"First download thumbnail-server.yml to Seafile directory:
wget https://manual.seafile.com/13.0/repo/docker/thumbnail-server.yml\n Modify .env, and insert thumbnail-server.yml into COMPOSE_FILE:
COMPOSE_FILE='seafile-server.yml,caddy.yml,thumbnail-server.yml'\n Add following configuration in seahub_settings.py to enable thumbnail for videos:
# video thumbnails (disabled by default)\nENABLE_VIDEO_THUMBNAIL = True\n Finally, you can run the thumbnail server with the following command:
docker compose down\ndocker compose up -d\n"},{"location":"extension/thumbnail-server/#thumbnail-server-in-seafile-cluster","title":"Thumbnail Server in Seafile cluster","text":"There are no additional features for the thumbnail server in the Pro Edition. It works the same as in the Community Edition.
If you enable clustering, you need to deploy the thumbnail server on one of the servers or on a separate server. The load balancer should forward thumbnail requests to this node.
Download .env and thumbnail-server.yml to thumbnail server directory:
wget https://manual.seafile.com/13.0/repo/docker/thumbnail-server/thumbnail-server.yml\nwget -O .env https://manual.seafile.com/13.0/repo/docker/thumbnail-server/env\n Then modify the .env file according to your environment. The following fields need to be modified:
SEAFILE_VOLUME: the volume directory of thumbnail server data. SEAFILE_MYSQL_DB_HOST: Seafile MySQL host. SEAFILE_MYSQL_DB_USER: Seafile MySQL user, default is seafile. SEAFILE_MYSQL_DB_PASSWORD: Seafile MySQL password. TIME_ZONE: time zone. JWT_PRIVATE_KEY: JWT key, the same as in the Seafile .env file. INNER_SEAHUB_SERVICE_URL: intranet URL for accessing the Seahub component, like http://<your Seafile server intranet IP>. SEAF_SERVER_STORAGE_TYPE: the storage type of the Seafile data. Available options are disk (i.e., local disk), s3 and multiple (see the details of multiple storage backends). S3_COMMIT_BUCKET: S3 storage backend commit objects bucket. S3_FS_BUCKET: S3 storage backend fs objects bucket. S3_BLOCK_BUCKET: S3 storage backend block objects bucket. S3_KEY_ID: S3 storage backend key ID. S3_SECRET_KEY: S3 storage backend secret key. S3_AWS_REGION: region of your buckets. S3_HOST: host of your buckets. S3_USE_HTTPS: use HTTPS connections to S3 if enabled. S3_USE_V4_SIGNATURE: use the v4 protocol of S3 if enabled. S3_PATH_STYLE_REQUEST: this option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is in virtual host style, such as https://bucketname.s3.amazonaws.com/object, but this style relies on advanced DNS server setup, so most self-hosted storage systems only implement the path style format. S3_SSE_C_KEY: a 32-character random string, which can be generated by openssl rand -base64 24. The V4 authentication protocol and HTTPS are required if you enable SSE-C. Then you can run the thumbnail server with the following command:
docker compose up -d\n You need to configure load balancer according to the following forwarding rules:
Forward /thumbnail requests to the thumbnail server via the HTTP protocol. Here is a configuration that uses HAProxy to support the thumbnail server. The HAProxy version needs to be >= 2.0. You should use similar configurations for other load balancers.
#/etc/haproxy/haproxy.cfg\n\n# Other existing haproxy configurations\n......\n\nfrontend seafile\n bind 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n acl thumbnail_request url_sub -i /thumbnail/\n use_backend thumbnail_backend if thumbnail_request\n default_backend backup_nodes\n\nbackend backup_nodes\n cookie SERVERID insert indirect nocache\n server seafileserver01 192.168.0.2:80\n\nbackend thumbnail_backend\n option forwardfor\n server thumbnail 192.168.0.9:80\n The thumbnail server has to access Seafile's storage
The thumbnail server needs to access Seafile storage.
If you use local storage, you need to mount the /opt/seafile-data directory of the Seafile node to the thumbnail node, and set SEAFILE_VOLUME to the mounted directory correctly.
If you use a single S3 storage backend, please correctly set the relevant environment variables in .env.
If you are using multiple storage backends, you have to copy the seafile.conf of the Seafile node to the /opt/seafile-data/seafile/conf directory of the thumbnail node, and set SEAF_SERVER_STORAGE_TYPE=multiple in .env.
/opt/seafile-data
Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container, in our case we keep various log files outside. This allows you to rebuild containers easily without losing important information.
This is because generating thumbnails for high-resolution images can impact system performance. You can raise the threshold by setting the THUMBNAIL_IMAGE_ORIGINAL_SIZE_LIMIT environment variable in the env file; the default is 256 (MB).
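For example, a hedged .env fragment raising the threshold to 512 MB (the value is purely illustrative; only the variable name comes from the text above):

```ini
THUMBNAIL_IMAGE_ORIGINAL_SIZE_LIMIT=512
```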
Seafile can scan uploaded files for malicious content in the background. When configured to run periodically, the scan process scans all existing libraries on the server. In each scan, the process only scans files newly uploaded or updated since the last scan. For each file, the process executes a user-specified virus scan command to check whether the file contains a virus. Most anti-virus programs provide a command line utility for Linux.
To enable this feature, add the following options to seafile.conf:
[virus_scan]\nscan_command = (command for checking virus)\nvirus_code = (command exit codes when file is virus)\nnonvirus_code = (command exit codes when file is not virus)\nscan_interval = (scanning interval, in unit of minutes, default to 60 minutes)\n More details about the options:
An example for ClamAV (http://www.clamav.net/) is provided below:
[virus_scan]\nscan_command = clamscan\nvirus_code = 1\nnonvirus_code = 0\n To test whether your configuration works, you can trigger a scan manually:
cd seafile-server-latest\n./pro/pro.py virus_scan\n If a virus was detected, you can see scan records and delete infected files on the Virus Scan page in the admin area.
Note
If you directly use the ClamAV command line tool to scan files, scanning will take a lot of time. To speed it up, we recommend running ClamAV as a daemon. Please refer to Run ClamAV as a Daemon
When running ClamAV as a daemon, the scan_command in seafile.conf should be clamdscan. An example for clamav-daemon is provided below:
[virus_scan]\nscan_command = clamdscan\nvirus_code = 1\nnonvirus_code = 0\n Since Pro edition 6.0.0, a few more options are added to provide finer grained control for virus scan.
[virus_scan]\n......\nscan_size_limit = (size limit for files to be scanned) # The unit is MB.\nscan_skip_ext = (a comma (',') separated list of file extensions to be ignored)\nthreads = (number of concurrent threads for scan, one thread for one file, default to 4)\n The file extensions should start with '.'. The extensions are case insensitive. By default, files with the following extensions will be ignored:
.bmp, .gif, .ico, .png, .jpg, .mp3, .mp4, .wav, .avi, .rmvb, .mkv\n The list you provide will override the default list.
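A hedged example: because a custom list replaces the default one entirely, restate every extension you still want skipped. The particular selection below is illustrative, not a recommendation; the option names come from the section above.

```ini
[virus_scan]
scan_command = clamdscan
virus_code = 1
nonvirus_code = 0
scan_size_limit = 20
scan_skip_ext = .bmp, .gif, .png, .jpg, .mp3, .mp4, .wav, .avi
```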
"},{"location":"extension/virus_scan/#scanning-files-on-upload","title":"Scanning Files on Upload","text":"You may also configure Seafile to scan files for viruses when they are uploaded. This only works for files uploaded via the web interface or web APIs. Files uploaded with the syncing or SeaDrive clients cannot be scanned on upload due to performance considerations.
You may scan files uploaded from shared upload links by adding the option below to seahub_settings.py:
ENABLE_UPLOAD_LINK_VIRUS_CHECK = True\n Since Pro Edition 11.0.7, you may scan all uploaded files via web APIs by adding the option below to seafile.conf:
[fileserver]\ncheck_virus_on_web_upload = true\n"},{"location":"extension/virus_scan_with_clamav/","title":"Deploy ClamAV with Seafile","text":""},{"location":"extension/virus_scan_with_clamav/#deploy-with-docker","title":"Deploy with Docker","text":"If your Seafile server is deployed using Docker, we also recommend that you use Docker to deploy ClamAV by following the steps below, otherwise you can deploy it from binary package of ClamAV.
"},{"location":"extension/virus_scan_with_clamav/#download-clamavyml-and-insert-to-docker-compose-lists-in-env","title":"Download clamav.yml and insert to Docker-compose lists in .env","text":"Download clamav.yml
wget https://manual.seafile.com/13.0/repo/docker/pro/clamav.yml\n Modify .env and insert clamav.yml into the COMPOSE_FILE field
COMPOSE_FILE='seafile-server.yml,caddy.yml,clamav.yml'\n"},{"location":"extension/virus_scan_with_clamav/#modify-seafileconf","title":"Modify seafile.conf","text":"Add the following statements to seafile.conf
[virus_scan]\nscan_command = clamdscan\nvirus_code = 1\nnonvirus_code = 0\nscan_interval = 5\nscan_size_limit = 20\nthreads = 2\n"},{"location":"extension/virus_scan_with_clamav/#restart-docker-container","title":"Restart docker container","text":"docker compose down\ndocker compose up -d \n Wait some minutes until Clamav finished initializing.
Now ClamAV can be used.
"},{"location":"extension/virus_scan_with_clamav/#use-clamav-in-binary-based-deployment","title":"Use ClamAV in binary based deployment","text":""},{"location":"extension/virus_scan_with_clamav/#install-clamav-daemon-clamav-freshclam","title":"Install clamav-daemon & clamav-freshclam","text":"apt-get install clamav-daemon clamav-freshclam\n You should run clamd with root permission so that it can scan any file. Edit the config file /etc/clamav/clamd.conf and change the following lines:
LocalSocketGroup root\nUser root\n"},{"location":"extension/virus_scan_with_clamav/#start-the-clamav-daemon","title":"Start the clamav-daemon","text":"systemctl start clamav-daemon\n Test the software
$ curl https://secure.eicar.org/eicar.com.txt | clamdscan -\n The output must include:
stream: Eicar-Test-Signature FOUND\n"},{"location":"extension/virus_scan_with_kav4fs/","title":"Virus Scan with kav4fs","text":""},{"location":"extension/virus_scan_with_kav4fs/#prerequisite","title":"Prerequisite","text":"Assume you have installed Kaspersky Anti-Virus for Linux File Server on the Seafile Server machine.
If the user that runs Seafile Server is not root, it should have sudo privileges so that no password is required when running kav4fs-control. Add the following content to /etc/sudoers:
<user of running seafile server> ALL=(ALL:ALL) ALL\n<user of running seafile server> ALL=NOPASSWD: /opt/kaspersky/kav4fs/bin/kav4fs-control\n"},{"location":"extension/virus_scan_with_kav4fs/#script","title":"Script","text":"Since the return code of kav4fs cannot reflect the file scan result, we use a shell wrapper script that parses the scan output and returns different exit codes based on the parsed result.
Save the following contents to a file such as kav4fs_scan.sh:
#!/bin/bash\n\nTEMP_LOG_FILE=`mktemp /tmp/XXXXXXXXXX`\nVIRUS_FOUND=1\nCLEAN=0\nUNDEFINED=2\nKAV4FS='/opt/kaspersky/kav4fs/bin/kav4fs-control'\nif [ ! -x $KAV4FS ]\nthen\n echo \"Binary not executable\"\n exit $UNDEFINED\nfi\n\nsudo $KAV4FS --scan-file \"$1\" > $TEMP_LOG_FILE\nif [ \"$?\" -ne 0 ]\nthen\n echo \"Error due to check file '$1'\"\n exit 3\nfi\nTHREATS_C=`grep 'Threats found:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nRISKWARE_C=`grep 'Riskware found:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nINFECTED=`grep 'Infected:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nSUSPICIOUS=`grep 'Suspicious:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nSCAN_ERRORS_C=`grep 'Scan errors:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nPASSWORD_PROTECTED=`grep 'Password protected:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nCORRUPTED=`grep 'Corrupted:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\n\nrm -f $TEMP_LOG_FILE\n\nif [ $THREATS_C -gt 0 -o $RISKWARE_C -gt 0 -o $INFECTED -gt 0 -o $SUSPICIOUS -gt 0 ]\nthen\n exit $VIRUS_FOUND\nelif [ $SCAN_ERRORS_C -gt 0 -o $PASSWORD_PROTECTED -gt 0 -o $CORRUPTED -gt 0 ]\nthen\n exit $UNDEFINED\nelse\n exit $CLEAN\nfi\n Grant execute permissions for the script (make sure it is owned by the user Seafile is running as):
chmod u+x kav4fs_scan.sh\n The meaning of the script return code:
1: found virus\n0: no virus\nother: scan failed\n"},{"location":"extension/virus_scan_with_kav4fs/#configuration","title":"Configuration","text":"Add following content to seafile.conf:
[virus_scan]\nscan_command = <absolute path of kav4fs_scan.sh>\nvirus_code = 1\nnonvirus_code = 0\nscan_interval = <scanning interval, in unit of minutes, default to 60 minutes>\n"},{"location":"extension/webdav/","title":"WebDAV extension","text":"In the document below, we assume your seafile installation folder is /opt/seafile.
The configuration file is /opt/seafile-data/seafile/conf/seafdav.conf (for deploying from binary packages, it should be /opt/seafile/conf/seafdav.conf). If it is not created already, you can just create the file.
[WEBDAV]\n\n# Default is false. Change it to true to enable SeafDAV server.\nenabled = true\n\nport = 8080\ndebug = true\n\n# If you deploy seafdav behind nginx/apache, you need to modify \"share_name\".\nshare_name = /seafdav\n\n# SeafDAV uses Gunicorn as web server.\n# This option maps to Gunicorn's 'workers' setting. https://docs.gunicorn.org/en/stable/settings.html?#workers\n# By default it's set to 5 processes.\nworkers = 5\n\n# This option maps to Gunicorn's 'timeout' setting. https://docs.gunicorn.org/en/stable/settings.html?#timeout\n# By default it's set to 1200 seconds, to support large file uploads.\ntimeout = 1200\n Every time the configuration is modified, you need to restart seafile server to make it take effect.
For deployments in Docker: docker compose restart\n For deployments from binary packages: cd /opt/seafile/seafile-server-latest/\n./seafile.sh restart\n Your WebDAV client can access the Seafile WebDAV server at http{s}://example.com/seafdav/ (for deployments from binary packages, it is http{s}://example.com:8080/seafdav/)
In Pro edition 7.1.8 and community edition 7.1.5, an option was added to append the library ID to the library name returned by SeafDAV.
show_repo_id=true\n"},{"location":"extension/webdav/#proxy-only-for-deploying-from-binary-packages","title":"Proxy (only for deploying from binary packages)","text":"Tip
For deployments in Docker, the WebDAV server is already proxied at /seafdav/*, so you can skip this step
For SeafDAV, the configuration of Nginx is as follows:
.....\n\n location /seafdav {\n rewrite ^/seafdav$ /seafdav/ permanent;\n }\n\n location /seafdav/ {\n proxy_pass http://127.0.0.1:8080/seafdav/;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_read_timeout 1200s;\n client_max_body_size 0;\n\n access_log /var/log/nginx/seafdav.access.log seafileformat;\n error_log /var/log/nginx/seafdav.error.log;\n }\n\n location /:dir_browser {\n proxy_pass http://127.0.0.1:8080/:dir_browser;\n }\n For SeafDAV, the configuration of Apache is as follows:
......\n <Location /seafdav>\n ProxyPass \"http://127.0.0.1:8080/seafdav\"\n </Location>\n"},{"location":"extension/webdav/#notes-on-clients","title":"Notes on Clients","text":"Please first note that there are some known performance limitations when you map a Seafile WebDAV server as a local file system (or network drive).
So WebDAV is more suitable for infrequent file access. If you want better performance, please use the sync client instead.
Windows: Windows Explorer supports HTTPS connections, but it requires a valid certificate on the server. It is generally recommended to use Windows Explorer to map a WebDAV server as a network drive. If you use a self-signed certificate, you have to add the certificate's CA into Windows' system CA store.
On Linux you have more choices. You can use file manager such as Nautilus to connect to webdav server. Or you can use davfs2 from the command line.
To use davfs2
sudo apt-get install davfs2\nsudo mount -t davfs -o uid=<username> https://example.com/seafdav /media/seafdav/\n The -o option sets the owner of the mounted directory to so that it's writable for non-root users.
It's recommended to disable LOCK operation for davfs2. You have to edit /etc/davfs2/davfs2.conf
use_locks 0\n Finder's support for WebDAV is also not very stable and slow. So it is recommended to use a webdav client software such as Cyberduck.
"},{"location":"extension/webdav/#frequently-asked-questions","title":"Frequently Asked Questions","text":""},{"location":"extension/webdav/#clients-cant-connect-to-seafdav-server","title":"Clients can't connect to seafdav server","text":"By default, seafdav is disabled. Check whether you have enabled = true in seafdav.conf. If not, modify it and restart seafile server.
If you deploy SeafDAV behind Nginx/Apache, make sure to change the value of share_name as the sample configuration above. Restart your seafile server and try again.
First, check the seafdav.log to see if there is log like the following.
\"MOVE ... -> 502 Bad Gateway\n If you have enabled debug, there will also be the following log.
09:47:06.533 - DEBUG : Raising DAVError 502 Bad Gateway: Source and destination must have the same scheme.\nIf you are running behind a reverse proxy, you may have to rewrite the 'Destination' header.\n(See https://github.com/mar10/wsgidav/issues/183)\n\n09:47:06.533 - DEBUG : Caught (502, \"Source and destination must have the same scheme.\\nIf you are running behind a reverse proxy, you may have to rewrite the 'Destination' header.\\n(See https://github.com/mar10/wsgidav/issues/183)\")\n This issue usually occurs when you have configured HTTPS, but the request was forwarded, resulting in the HTTP_X_FORWARDED_PROTO value in the request received by Seafile not being HTTPS.
You can solve this by manually changing the value of HTTP_X_FORWARDED_PROTO. For example, in nginx, change
proxy_set_header X-Forwarded-Proto $scheme;\n to
proxy_set_header X-Forwarded-Proto https;\n"},{"location":"extension/webdav/#windows-explorer-reports-file-size-exceeds-the-limit-allowed-and-cannot-be-saved","title":"Windows Explorer reports \"file size exceeds the limit allowed and cannot be saved\"","text":"This happens when you map webdav as a network drive, and tries to copy a file larger than about 50MB from the network drive to a local folder.
This is because Windows Explorer has a limit on the size of files downloaded from a WebDAV server. To raise this limit, change a registry entry on the client machine. There is a registry key named FileSizeLimitInBytes under HKEY_LOCAL_MACHINE -> SYSTEM -> CurrentControlSet -> Services -> WebClient -> Parameters.
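As an illustration, a .reg file raising the limit to its maximum DWORD value (0xffffffff, about 4 GB) might look like the following; the key path is the one named above, while the chosen value is an assumption you can adjust. Back up the registry before applying, and restart the WebClient service afterwards.

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters]
"FileSizeLimitInBytes"=dword:ffffffff
```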
The different components of Seafile project are released under different licenses:
Forum: https://forum.seafile.com
Follow us @seafile https://twitter.com/seafile
"},{"location":"introduction/contribution/#report-a-bug","title":"Report a Bug","text":"Seafile manages files using libraries. Every library has an owner, who can share the library to other users or share it with groups. The sharing can be read-only or read-write.
"},{"location":"introduction/file_permission_management/#read-only-syncing","title":"Read-only syncing","text":"Read-only libraries can be synced to local desktop. The modifications at the client will not be synced back. If a user has modified some file contents, he can use \"resync\" to revert the modifications.
"},{"location":"introduction/file_permission_management/#cascading-permissionsub-folder-permissions-pro-edition","title":"Cascading permission/Sub-folder permissions (Pro edition)","text":"Sharing controls whether a user or group can see a library, while sub-folder permissions are used to modify permissions on specific folders.
Supposing you share a library as read-only to a group and then want specific sub-folders to be read-write for a few users, you can set read-write permissions on sub-folders for some users and groups.
Note
Please check https://www.seafile.com/en/roadmap/
"},{"location":"introduction/seafile_professional_sdition_software_license_agreement/","title":"Seafile Professional Edition Software License Agreement","text":"Seafile Professional Edition SOFTWARE LICENSE AGREEMENT
Important
READ THE FOLLOWING TERMS AND CONDITIONS CAREFULLY BEFORE YOU DOWNLOAD, INSTALL OR USE Seafile Ltd.'S PROPRIETARY SOFTWARE.
BY INSTALLING OR USING THE SOFTWARE, YOU AGREE TO BE BOUND BY THE FOLLOWING TERMS AND CONDITIONS.
IF YOU DO NOT AGREE TO THE FOLLOWING TERMS AND CONDITIONS, DO NOT INSTALL OR USE THE SOFTWARE.
\"Seafile Ltd.\" means Seafile Ltd.
\"You and Your\" means the party licensing the Software hereunder.
\"Software\" means the computer programs provided under the terms of this license by Seafile Ltd. together with any documentation provided therewith.
"},{"location":"introduction/seafile_professional_sdition_software_license_agreement/#2-grant-of-rights","title":"2. GRANT OF RIGHTS","text":""},{"location":"introduction/seafile_professional_sdition_software_license_agreement/#21-general","title":"2.1 General","text":"The License granted for Software under this Agreement authorizes You on a non-exclusive basis to use the Software. The Software is licensed, not sold to You and Seafile Ltd. reserves all rights not expressly granted to You in this Agreement. The License is personal to You and may not be assigned by You to any third party.
"},{"location":"introduction/seafile_professional_sdition_software_license_agreement/#22-license-provisions","title":"2.2 License Provisions","text":"Subject to the receipt by Seafile Ltd. of the applicable license fees, You have the right use the Software as follows:
The inclusion of source code with the License is explicitly not for your use to customize a solution or re-use in your own projects or products. The benefit of including the source code is for purposes of security auditing. You may modify the code only for emergency bug fixes that impact security or performance and only for use within your enterprise. You may not create or distribute derivative works based on the Software or any part thereof. If you need enhancements to the software features, you should suggest them to Seafile Ltd. for version improvements.
"},{"location":"introduction/seafile_professional_sdition_software_license_agreement/#4-ownership","title":"4. OWNERSHIP","text":"You acknowledge that all copies of the Software in any form are the sole property of Seafile Ltd.. You have no right, title or interest to any such Software or copies thereof except as provided in this Agreement.
"},{"location":"introduction/seafile_professional_sdition_software_license_agreement/#5-confidentiality","title":"5. CONFIDENTIALITY","text":"You hereby acknowledge and agreed that the Software constitute and contain valuable proprietary products and trade secrets of Seafile Ltd., embodying substantial creative efforts and confidential information, ideas, and expressions. You agree to treat, and take precautions to ensure that your employees and other third parties treat, the Software as confidential in accordance with the confidentiality requirements herein.
"},{"location":"introduction/seafile_professional_sdition_software_license_agreement/#6-disclaimer-of-warranties","title":"6. DISCLAIMER OF WARRANTIES","text":"EXCEPT AS OTHERWISE SET FORTH IN THIS AGREEMENT THE SOFTWARE IS PROVIDED TO YOU \"AS IS\", AND Seafile Ltd. MAKES NO EXPRESS OR IMPLIED WARRANTIES WITH RESPECT TO ITS FUNCTIONALITY, CONDITION, PERFORMANCE, OPERABILITY OR USE. WITHOUT LIMITING THE FOREGOING, Seafile Ltd. DISCLAIMS ALL IMPLIED WARRANTIES INCLUDING, WITHOUT LIMITATION, ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR FREEDOM FROM INFRINGEMENT. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO THE ABOVE EXCLUSIONS MAY NOT APPLY TO YOU. THE LIMITED WARRANTY HEREIN GIVES YOU SPECIFIC LEGAL RIGHTS, AND YOU MAY ALSO HAVE OTHER RIGHTS THAT VARY FROM ONE JURISDICTION TO ANOTHER.
"},{"location":"introduction/seafile_professional_sdition_software_license_agreement/#7-limitation-of-liability","title":"7. LIMITATION OF LIABILITY","text":"YOU ACKNOWLEDGE AND AGREE THAT THE CONSIDERATION WHICH Seafile Ltd. IS CHARGING HEREUNDER DOES NOT INCLUDE ANY CONSIDERATION FOR ASSUMPTION BY Seafile Ltd. OF THE RISK OF YOUR CONSEQUENTIAL OR INCIDENTAL DAMAGES WHICH MAY ARISE IN CONNECTION WITH YOUR USE OF THE SOFTWARE. ACCORDINGLY, YOU AGREE THAT Seafile Ltd. SHALL NOT BE RESPONSIBLE TO YOU OR ANY THIRD PARTY FOR ANY LOSS-OF-PROFIT, LOST SAVINGS, INDIRECT, INCIDENTAL, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF A LICENSING OR USE OF THE SOFTWARE.
"},{"location":"introduction/seafile_professional_sdition_software_license_agreement/#8-indemnification","title":"8. INDEMNIFICATION","text":"You agree to defend, indemnify and hold Seafile Ltd. and its employees, agents, representatives and assigns harmless from and against any claims, proceedings, damages, injuries, liabilities, costs, attorney's fees relating to or arising out of Your use of the Software or any breach of this Agreement.
"},{"location":"introduction/seafile_professional_sdition_software_license_agreement/#9-termination","title":"9. TERMINATION","text":"Your license is effective until terminated. You may terminate it at any time by destroying the Software or returning all copies of the Software to Seafile Ltd.. Your license will terminate immediately without notice if You breach any of the terms and conditions of this Agreement, including non or incomplete payment of the license fee. Upon termination of this Agreement for any reason: You will uninstall all copies of the Software; You will immediately cease and desist all use of the Software; and will destroy all copies of the software in your possession.
"},{"location":"introduction/seafile_professional_sdition_software_license_agreement/#10-updates-and-support","title":"10. UPDATES AND SUPPORT","text":"Seafile Ltd. has the right, but no obligation, to periodically update the Software, at its complete discretion, without the consent or obligation to You or any licensee or user.
YOU HEREBY ACKNOWLEDGE THAT YOU HAVE READ THIS AGREEMENT, UNDERSTAND IT AND AGREE TO BE BOUND BY ITS TERMS AND CONDITIONS.
"},{"location":"setup/architecture/","title":"Architecture","text":"Seafile Docker and its components are support both x86 and ARM64 architecture. You can find detailes below.
"},{"location":"setup/architecture/#support-status","title":"Support status","text":"Component x86 ARM seafile-mc \u221a \u221a seafile-pro-mc \u221a \u221a sdoc-server \u221a \u221a notification-server \u221a \u221a seafile-md-server \u221a \u221a seafile-ai \u221a \u221a thumbnail-server \u221a \u221a seasearch \u221a \u221a face-embedding \u221a X index-server (distributed indexing) \u221a XNote, for SeaSearch, you should use seaseach-nomkl version to work on ARM architecture.
"},{"location":"setup/architecture/#pull-the-arm-image","title":"Pull the ARM image","text":"You can use the X.0-latest tag to pull the ARM image without specifying the arm tag.
docker pull seafileltd/seafile-mc:13.0-latest\n"},{"location":"setup/caddy/","title":"HTTPS and Caddy","text":"Note
From Seafile Docker 12.0, HTTPS is handled by Caddy. The default Caddy image used by Seafile Docker is lucaslorentz/caddy-docker-proxy:2.9-alpine.
Caddy is a modern open-source web server that mainly routes external traffic to internal services in Seafile Docker. In addition to the advantages of traditional proxy components (e.g., Nginx), Caddy also makes it easier for users to acquire and renew HTTPS certificates by providing a simpler configuration.
"},{"location":"setup/caddy/#engage-https-by-caddy","title":"Engage HTTPS by caddy","text":"We provide two options for enabling HTTPS via Caddy, which mainly rely on The caddy docker proxy container from Lucaslorentz supports dynamic configuration with labels:
To engage HTTPS, users only need to correctly configure the following fields in .env:
SEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=example.com\n After Seafile Docker starts up, you can use the following command to access the logs of Caddy:
docker logs seafile-caddy -f\n"},{"location":"setup/caddy/#using-a-custom-existing-certificate","title":"Using a custom (existing) certificate","text":"With caddy.yml, a default volume mount is created: /opt/seafile-caddy (you can change it by modifying SEAFILE_CADDY_VOLUME in .env). By convention, you should provide your certificate and key files in the container host filesystem under /opt/seafile-caddy/certs/ to make them available to Caddy:
/opt/seafile-caddy/certs/\n\u251c\u2500\u2500 cert.pem # xxx.crt in some case\n\u251c\u2500\u2500 key.pem # xxx.key in some case\n Command to generate custom certificates
With this command, you can generate your own custom certificates:
cd /opt/seafile-caddy/certs\nopenssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout ./key.pem -out ./cert.pem\n Please be aware that custom certificates cannot be used for IP addresses.
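Before pointing Caddy at the files, it may help to verify that the certificate and key actually match. A minimal sketch with openssl (the -subj value and file names are illustrative and follow the cert.pem/key.pem convention above):

```shell
# Generate a throwaway self-signed pair non-interactively (-subj skips the prompts)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -subj "/CN=example.com" -keyout ./key.pem -out ./cert.pem

# Show the subject and validity window of the certificate
openssl x509 -in ./cert.pem -noout -subject -dates

# The public-key digests of certificate and key must be identical if they match
openssl x509 -in ./cert.pem -noout -pubkey | openssl sha256
openssl pkey -in ./key.pem -pubout | openssl sha256
```

If the two digests differ, Caddy would be serving a certificate that does not correspond to the private key, and TLS handshakes would fail.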
Then modify seafile-server.yml to enable your custom certificate. We strongly recommend that you make a backup of seafile-server.yml before doing this:
cp seafile-server.yml seafile-server.yml.bak\nnano seafile-server.yml\n and
services:\n ...\n seafile:\n ...\n volumes:\n ...\n # If you use a self-generated certificate, please add it to the Seafile server trusted directory (i.e. remove the comment symbol below)\n # - \"/opt/seafile-caddy/certs/cert.pem:/usr/local/share/ca-certificates/cert.crt\"\n labels:\n caddy: ${SEAFILE_SERVER_HOSTNAME:?Variable is not set or empty} # leave this variables only\n caddy.tls: \"/data/caddy/certs/cert.pem /data/caddy/certs/key.pem\"\n ...\n DNS resolution must work inside the container
If you're using a non-public URL like my-custom-setup.local, you have to make sure that the Docker container can resolve this DNS query. If you don't run your own DNS servers, you have to add extra_hosts to your .yml file.
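As a sketch, such an extra_hosts entry could look like the fragment below (the hostname my-custom-setup.local and the address 192.168.1.10 are placeholders for your own environment); here it is written to a scratch file with a heredoc so you can compare it with your seafile-server.yml:

```shell
# Hypothetical compose fragment: pin a non-public hostname to a fixed IP
# inside the seafile container so DNS resolution works there.
cat > /tmp/extra-hosts-example.yml << 'EOF'
services:
  seafile:
    extra_hosts:
      - "my-custom-setup.local:192.168.1.10"
EOF

grep "my-custom-setup.local" /tmp/extra-hosts-example.yml
```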
The Seafile cluster solution employs a 3-tier architecture:
This architecture scales horizontally. That means you can handle more traffic by adding more machines. The architecture is visualized in the following picture.
There are two main components on the Seafile server node: web server (Nginx/Apache) and Seafile app server. The web server passes requests from the clients to Seafile app server. The Seafile app servers work independently. They don't know about each other's state. That means each app server can fail independently without affecting other app server instances. The load balancer is responsible for detecting failure and re-routing requests.
Even though Seafile app servers work independently, they still have to share some session information. All shared session information is stored in memory cache. Thus, all Seafile app servers have to connect to the same memory cache server (cluster). Since Pro Edition 11.0, both memcached and Redis can be used as memory cache. Before 11.0, only memcached was supported. More details about memory cache configuration are available later. Since Seafile 13.0, we recommend using Redis as the cache to support new features (such as Seafile AI, metadata management, etc.).
The background server is the workhorse for various background tasks, including full-text indexing, office file preview, virus scanning, and LDAP syncing. It should usually run on a dedicated server for better performance. Currently, only one background task server can run in the entire cluster. If more than one background server is running, they may conflict with each other when performing some tasks. If you need HA for the background task server, you can consider using Keepalived to build a hot backup for it.
In the seafile cluster, only one server should run the background tasks, including:
Let's assume you have three nodes in your cluster: A, B, and C.
Please refer here for the details about the cluster requirements for all nodes in Seafile cluster. In general, we recommend that each node should have at least 2G RAM and a 2-core CPU (> 2GHz).
"},{"location":"setup/cluster_deploy_with_docker/#deploy-seafile-service","title":"Deploy Seafile service","text":""},{"location":"setup/cluster_deploy_with_docker/#deploy-the-first-seafile-frontend-node","title":"Deploy the first Seafile frontend node","text":"Create the mount directory
mkdir -p /opt/seafile/shared\n Pull the Seafile image
docker pull seafileltd/seafile-pro-mc:13.0-latest\n Download the seafile-server.yml and .env
wget -O .env https://manual.seafile.com/13.0/repo/docker/cluster/env\nwget https://manual.seafile.com/13.0/repo/docker/cluster/seafile-server.yml\n Modify the variables in .env (especially the terms like <...>).
Place license file
If you have a seafile-license.txt license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile/shared. If you have modified the path, save the license file under your custom path.
If the license file has a different name or cannot be read, Seafile server will start in trial mode, which allows at most three users.
Start the Seafile docker
docker compose up -d\n Cluster init mode
Because CLUSTER_INIT_MODE is true in the .env file, Seafile Docker will be started in init mode and generate configuration files. As a result, you can see the following lines if you trace the Seafile container (i.e., docker logs seafile):
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n\n\n[2024-11-21 02:22:37] Updating version stamp\nStart init\n\nInit success\n After initailizing the cluster, the following fields can be removed in .env
CLUSTER_INIT_MODE (must be removed from the .env file), CLUSTER_INIT_ES_HOST, CLUSTER_INIT_ES_PORT. Tip
We recommend that you verify the generated configuration files are correct and make a copy of the SEAFILE_VOLUME directory before the service officially starts, because only the configuration files are generated during initialization. You can later migrate the entire copied SEAFILE_VOLUME directly to other nodes:
cp -r /opt/seafile/shared /opt/seafile/shared-bak\n Restart the container to start the service in frontend node
docker compose down\ndocker compose up -d\n Frontend node starts successfully
After executing the above command, you can trace the logs of container seafile (i.e., docker logs seafile). You can see the following message if the frontend node starts successfully:
*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 20\n*** Running /scripts/enterpoint.sh...\n2024-11-21 03:02:35 Nginx ready \n\n2024-11-21 03:02:35 This is an idle script (infinite loop) to keep container running. \n---------------------------------\n\nSeafile cluster frontend mode\n\n---------------------------------\n\n\nStarting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\nSeahub is started\n\nDone.\n Create the mount directory
$ mkdir -p /opt/seafile/shared\n Pull Seafile image
Copy seafile-server.yml, .envand configuration files from the first frontend node
Start the service
docker compose up -d\n Create the mount directory
$ mkdir -p /opt/seafile/shared\n Pull Seafile image
Copy seafile-server.yml, .env and configuration files from frontend node
Note
The configuration files from frontend node have to be put in the same path as the frontend node, i.e., /opt/seafile/shared/seafile/conf/*
Modify .env, set CLUSTER_MODE to backend
Start the service in the backend node
docker compose up -d\n Backend node starts successfully
After executing the above command, you can trace the logs of container seafile (i.e., docker logs seafile). You can see the following message if the backend node starts successfully:
*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 21\n*** Running /scripts/enterpoint.sh...\n2024-11-21 03:11:59 Nginx ready \n2024-11-21 03:11:59 This is an idle script (infinite loop) to keep container running. \n\n---------------------------------\n\nSeafile cluster backend mode\n\n---------------------------------\n\n\nStarting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seafile background tasks ...\nDone.\n Note
Since Seafile Pro server 6.0.0, cluster deployment requires \"sticky session\" settings in the load balancer. Otherwise, folder downloads on the web UI sometimes can't work properly. Read the \"Load Balancer Setting\" section below for details.
Generally speaking, in order to better access the Seafile service, we recommend that you use a load-balancing service in front of the Seafile cluster and bind your domain name (such as seafile.cluster.com) to the load-balancing service. Usually, you can use:
Deploy your own load-balancing service. Our document gives two common load-balancing services:
In the AWS ELB management console, after you've added the Seafile server instances to the instance list, you should do two more configurations.
First, you should set up HTTP(S) listeners. Ports 443 and 80 of the ELB should be forwarded to ports 80 or 443 of the Seafile servers.
Then set up the health check.
Refer to AWS documentation about how to setup sticky sessions.
"},{"location":"setup/cluster_deploy_with_docker/#nginx","title":"Nginx","text":"Install Nginx in the host if you would like to deploy load balance service
sudo apt update\nsudo apt install nginx\n Create the configuration file for the Seafile cluster:
sudo nano /etc/nginx/sites-available/seafile-cluster\n and, add the following contents into this file:
upstream seafile_cluster {\n server <IP: your frontend node 1>:80;\n server <IP: your frontend node 2>:80;\n ...\n}\n\nserver {\n listen 80;\n server_name <your domain>;\n\n location / {\n proxy_pass http://seafile_cluster;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_next_upstream error timeout http_502 http_503 http_504;\n }\n}\n Link the configuration file to the sites-enabled directory:
sudo ln -s /etc/nginx/sites-available/seafile-cluster /etc/nginx/sites-enabled/\n Test the configuration and reload Nginx
sudo nginx -t\nsudo nginx -s reload\n Execute the following commands on the two Seafile frontend servers:
$ apt install haproxy keepalived -y\n\n$ mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak\n\n$ cat > /etc/haproxy/haproxy.cfg << 'EOF'\nglobal\n log 127.0.0.1 local1 notice\n maxconn 4096\n user haproxy\n group haproxy\n\ndefaults\n log global\n mode http\n retries 3\n timeout connect 10000\n timeout client 300000\n timeout server 36000000\n\nlisten seafile 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n cookie SERVERID insert indirect nocache\n server seafile01 Front-End01-IP:8001 check port 11001 cookie seafile01\n server seafile02 Front-End02-IP:8001 check port 11001 cookie seafile02\nEOF\n Warning
Please correctly modify the IP addresses (Front-End01-IP and Front-End02-IP) of the frontend servers in the above configuration file. Otherwise, it cannot work properly.
Choose one of the above two servers as the master node, and the other as the slave node.
Perform the following operations on the master node:
$ cat > /etc/keepalived/keepalived.conf << 'EOF'\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node1\n vrrp_mcast_group4 224.0.100.18\n}\n\nvrrp_instance VI_1 {\n state MASTER\n interface eno1 # Set to the device name of a valid network interface on the current server, and the virtual IP will be bound to the network interface\n virtual_router_id 50\n priority 100\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass seafile123\n }\n virtual_ipaddress {\n 172.26.154.45/24 dev eno1 # Configure to the correct virtual IP and network interface device name\n }\n}\nEOF\n Warning
Please correctly configure the virtual IP address and network interface device name in the above file. Otherwise, it cannot work properly.
Perform the following operations on the standby node:
$ cat > /etc/keepalived/keepalived.conf << 'EOF'\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node2\n vrrp_mcast_group4 224.0.100.18\n}\n\nvrrp_instance VI_1 {\n state BACKUP\n interface eno1 # Set to the device name of a valid network interface on the current server, and the virtual IP will be bound to the network interface\n virtual_router_id 50\n priority 98\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass seafile123\n }\n virtual_ipaddress {\n 172.26.154.45/24 dev eno1 # Configure to the correct virtual IP and network interface device name\n }\n}\nEOF\n Finally, run the following commands on the two Seafile frontend servers to start the corresponding services:
$ systemctl enable --now haproxy\n$ systemctl enable --now keepalived\n At this point, the Seafile cluster has been deployed.
"},{"location":"setup/cluster_deploy_with_docker/#https","title":"HTTPS","text":"You can engaged HTTPS in your load balance service, as you can use certificates manager (e.g., Certbot) to acquire and enable HTTPS to your Seafile cluster. You have to modify the relative URLs from the prefix http:// to https:// in seahub_settings.py and .env, after enabling HTTPS.
You can follow here to deploy the SeaDoc server, and then modify SEADOC_SERVER_URL in your .env file.
This manual explains how to deploy and run a Seafile cluster on Linux servers using Kubernetes (k8s hereafter).
"},{"location":"setup/cluster_deploy_with_k8s/#prerequisites","title":"Prerequisites","text":""},{"location":"setup/cluster_deploy_with_k8s/#cluster-requirements","title":"Cluster requirements","text":"Please refer here for the details about the cluster requirements for all nodes in Seafile cluster. In general, we recommend that each node should have at least 2G RAM and a 2-core CPU (> 2GHz).
"},{"location":"setup/cluster_deploy_with_k8s/#k8s-tools","title":"K8S tools","text":"Two tools are suggested and can be installed with official installation guide on all nodes:
After installation, you need to start the k8s control plane service on each node and refer to the k8s official manual for creating a cluster.
Tip
Although we recommend installing the k8s control plane tool on each node, it does not mean that we will use each node as a control plane node, but it is a necessary tool to create or join a K8S cluster. For details, please refer to the above link about creating or joining into a cluster.
"},{"location":"setup/cluster_deploy_with_k8s/#create-namespace-and-secretmap","title":"Create namespace and secretMap","text":"kubectl create ns seafile\n\nkubectl create secret generic seafile-secret --namespace seafile \\\n--from-literal=JWT_PRIVATE_KEY='<required>' \\\n--from-literal=SEAFILE_MYSQL_DB_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_ADMIN_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_MYSQL_ROOT_PASSWORD='<required>' \\\n--from-literal=REDIS_PASSWORD='' \\\n--from-literal=S3_SECRET_KEY='' \\\n--from-literal=S3_SSE_C_KEY=''\n"},{"location":"setup/cluster_deploy_with_k8s/#download-k8s-yaml-files-for-seafile-cluster-without-frontend-node","title":"Download K8S YAML files for Seafile cluster (without frontend node)","text":"mkdir -p /opt/seafile-k8s-yaml\n\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/cluster/seafile-backend-deployment.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/cluster/seafile-persistentvolume.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/cluster/seafile-persistentvolumeclaim.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/cluster/seafile-service.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/cluster/seafile-env.yaml\n In here we suppose you download the YAML files in /opt/seafile-k8s-yaml, which mainly include about:
seafile-xx-deployment.yaml for pod management and creation of the frontend and backend services, seafile-service.yaml for exposing Seafile services to the external network, seafile-persistentvolume.yaml for defining the location of a volume used for persistent storage on the host, and seafile-persistentvolumeclaim.yaml for declaring the use of persistent storage in the container.
Use PV bound from a storage class
If you would like to use automatically allocated persistent volume (PV) by a storage class, please modify seafile-persistentvolumeclaim.yaml and specify storageClassName. On the other hand, the PV defined by seafile-persistentvolume.yaml can be disabled:
rm /opt/seafile-k8s-yaml/seafile-persistentvolume.yaml\n"},{"location":"setup/cluster_deploy_with_k8s/#modify-seafile-envyaml","title":"Modify seafile-env.yaml","text":"Similar to the Docker-based deployment, the Seafile cluster in a K8S deployment also supports using files to configure the startup process. You can modify common environment variables by
nano /opt/seafile-k8s-yaml/seafile-env.yaml\n"},{"location":"setup/cluster_deploy_with_k8s/#initialize-seafile-cluster","title":"Initialize Seafile cluster","text":"You can now use the following command to initialize the Seafile cluster (Seafile's K8S resources will be specified in the namespace seafile for easier management):
kubectl apply -f /opt/seafile-k8s-yaml/ -n seafile\n About Seafile cluster initialization
When Seafile cluster is initializing, it will run with the following conditions:
CLUSTER_INIT_MODE=true. Success
You can get the following information through kubectl logs seafile-xxxx -n seafile to check whether the initialization process is done:
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n\n\n[2024-11-21 02:22:37] Updating version stamp\nStart init\n\nInit success\n When the initialization is complete, the server will stop automaticlly (because no operations will be performed after the initialization is completed).
We recommend that you check whether the contents of the configuration files in /opt/seafile/shared/seafile/conf are correct when going to next step, which are automatically generated during the initialization process.
/opt/seafile/shared","text":"You have to locate the /opt/seafile/shared directory generated during initialization firsly, then simply put it in this path, if you have a seafile-license.txt license file.
Finally you can use the tar -zcvf and tar -zxvf commands to package the entire /opt/seafile/shared directory of the current node, copy it to other nodes, and unpack it to the same directory to take effect on all nodes.
If the license file has a different name or cannot be read, Seafile server will start with in trailer mode with most THREE users
"},{"location":"setup/cluster_deploy_with_k8s/#download-frontend-services-yaml-and-restart-pods-to-start-seafile-server","title":"Download frontend service's YAML and restart pods to start Seafile server","text":"Download frontend service's YAML by:
wget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/cluster/seafile-frontend-deployment.yaml\n Modify seafile-env.yaml, and set CLUSTER_INIT_MODE to false (i.e., disable initialization mode), then re-apply seafile-env.yaml again:
kubectl apply -f /opt/seafile-k8s-yaml\n Run the following command to restart pods to restart Seafile cluster:
Tip
If you modify configurations in /opt/seafile/shared/seafile/conf or YAML files in /opt/seafile-k8s-yaml/, you still need to restart the services for the modifications to take effect.
kubectl delete pods -n seafile $(kubectl get pods -n seafile -o jsonpath='{.items[*].metadata.name}' | grep seafile)\n Sucess
You can view the pod's log to check the startup progress is normal or not. You can see the following message if server is running normally:
*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 20\n*** Running /scripts/enterpoint.sh...\n2024-11-21 03:02:35 Nginx ready \n\n2024-11-21 03:02:35 This is an idle script (infinite loop) to keep container running. \n---------------------------------\n\nSeafile cluster frontend mode\n\n---------------------------------\n\n\nStarting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\nSeahub is started\n\nDone.\n"},{"location":"setup/cluster_deploy_with_k8s/#uninstall-seafile-k8s","title":"Uninstall Seafile K8S","text":"You can uninstall the Seafile K8S by the following command:
kubectl delete -f /opt/seafile-k8s-yaml/ -n seafile\n"},{"location":"setup/cluster_deploy_with_k8s/#advanced-operations","title":"Advanced operations","text":"Please refer here for further advanced operations.
"},{"location":"setup/helm_chart_cluster/","title":"Deploy Seafile cluster with Kubernetes (K8S) by Seafile Helm Chart","text":"This manual explains how to deploy and run Seafile cluster on a Linux server using Seafile Helm Chart (chart thereafter). You can also refer to here to use K8S resource files to deploy Seafile cluster in your K8S cluster.
"},{"location":"setup/helm_chart_cluster/#prerequisites","title":"Prerequisites","text":""},{"location":"setup/helm_chart_cluster/#cluster-requirements","title":"Cluster requirements","text":"Please refer here for the details about the cluster requirements for all nodes in Seafile cluster. In general, we recommend that each node should have at least 2G RAM and a 2-core CPU (> 2GHz).
"},{"location":"setup/helm_chart_cluster/#k8s-tools","title":"K8S tools","text":"Two tools are suggested and can be installed with official installation guide on all nodes:
After installation, you need to start the k8s control plane service on each node and refer to the k8s official manual for creating a cluster.
Tip
Although we recommend installing the k8s control plane tool on each node, it does not mean that we will use each node as a control plane node, but it is a necessary tool to create or join a K8S cluster. For details, please refer to the above link about creating or joining into a cluster.
"},{"location":"setup/helm_chart_cluster/#install-seafile-helm-chart","title":"Install Seafile helm chart","text":"Create namespace
kubectl create namespace seafile\n Create a secret for sensitive data
kubectl create secret generic seafile-secret --namespace seafile \\\n--from-literal=JWT_PRIVATE_KEY='<required>' \\\n--from-literal=SEAFILE_MYSQL_DB_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_ADMIN_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_MYSQL_ROOT_PASSWORD='<required>' \\\n--from-literal=REDIS_PASSWORD='' \\\n--from-literal=S3_SECRET_KEY='' \\\n--from-literal=S3_SSE_C_KEY=''\n where the JWT_PRIVATE_KEY can be generated by pwgen -s 40 1
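If pwgen is not available, openssl can produce an equivalent random secret. A small sketch (openssl rand -hex 20 yields 40 hexadecimal characters, i.e., 160 bits of randomness):

```shell
JWT_PRIVATE_KEY=$(openssl rand -hex 20)   # 20 random bytes -> 40 hex characters
echo "${#JWT_PRIVATE_KEY}"                # prints: 40
```

You can then pass this value via --from-literal=JWT_PRIVATE_KEY="$JWT_PRIVATE_KEY" instead of typing it by hand.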
Download and modify my-values.yaml according to your configuration. You can follow here for the details:
wget -O my-values.yaml https://haiwen.github.io/seafile-helm-chart/values/13.0/cluster.yaml\n\nnano my-values.yaml\n Tip
You do not have to use the full my-values.yaml we provided (i.e., you can create an empty my-values.yaml and add only the required fields, as the others have default values defined in our chart), because a full file reduces the flexibility of deploying with Helm. However, it shows the formats in which the Seafile Helm Chart reads these configurations, as well as all the environment variables and secret variables that can be read directly. In addition, you can also create a custom storageClassName for the persistence directory used by Seafile. You only need to specify storageClassName in the seafile.configs.seafileDataVolume object in my-values.yaml:
seafile:\n configs:\n seafileDataVolume:\n storageClassName: <your seafile storage class name>\n ...\n Then install the chart using the following command:
helm repo add seafile https://haiwen.github.io/seafile-helm-chart/repo\nhelm upgrade --install seafile seafile/cluster --version 13.0 --namespace seafile --create-namespace --values my-values.yaml\n Seafile helm chart 13.0 supports variable validity checking
Starting from Seafile helm chart 13.0, the validity of variables in my-values.yaml is checked at deployment time. When a variable validity check fails, you may encounter the following error message:
You have enabled <Some feature> but <Variable> is not specified and is not allowed to be empty\n If you encounter such a message, please check the relevant configuration in my-values.yaml.
Success
After installing the chart, the cluster enters the initialization process; you can see the following message via kubectl logs seafile-<string> -n seafile:
Defaulted container \"seafile-backend\" out of: seafile-backend, set-ownership (init)\n*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 15\n*** Running /scripts/enterpoint.sh...\n2025-02-13 08:58:35 Nginx ready \n2025-02-13 08:58:35 This is an idle script (infinite loop) to keep container running. \n\n---------------------------------\n\nSeafile cluster backend mode\n\n---------------------------------\n\n[2025-02-13 08:58:35] Now running setup-seafile-mysql.py in auto mode.\nChecking python on this machine ...\n\n\nverifying password of user root ... done\n\n---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: 10.0.0.138\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n----------------------------------------\nNow creating seafevents database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating ccnet database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seafile database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seahub database tables ...\n\n----------------------------------------\n\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n\n\n[2025-02-13 08:58:36] Updating version stamp\nStart init\n\nInit success\n After the first-time startup, you have to turn off (i.e., set initMode to false) in your my-values.yaml, then upgrade the 
chart:
helm upgrade --install seafile seafile/cluster --version 13.0 --namespace seafile --create-namespace --values my-values.yaml\n Success
You can check any front-end node in the Seafile cluster. If the following information is output, the Seafile cluster is running normally:
Defaulted container \"seafile-frontend\" out of: seafile-frontend, set-ownership (init)\n*** Running /etc/my_init.d/01_create_data_links.sh...\n*** Booting runit daemon...\n*** Runit started as PID 21\n*** Running /scripts/enterpoint.sh...\n2025-02-13 09:23:49 Nginx ready \n2025-02-13 09:23:49 This is an idle script (infinite loop) to keep container running. \n\n---------------------------------\n\nSeafile cluster frontend mode\n\n---------------------------------\n\n\nStarting seafile server, please wait ...\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(86): fileserver: web_token_expire_time = 3600\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(98): fileserver: max_index_processing_threads= 3\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(111): fileserver: fixed_block_size = 8388608\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(123): fileserver: max_indexing_threads = 1\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(138): fileserver: put_head_commit_request_timeout = 10\n[seaf-server] [2025-02-13 09:23:50] [INFO] seafile-session.c(150): fileserver: skip_block_hash = 0\n[seaf-server] [2025-02-13 09:23:50] [INFO] ../common/seaf-utils.c(581): Use database Mysql\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(243): fileserver: worker_threads = 10\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(256): fileserver: backlog = 32\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(267): fileserver: verify_client_blocks = 1\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(289): fileserver: cluster_shared_temp_file_mode = 600\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(336): fileserver: check_virus_on_web_upload = 0\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(362): fileserver: enable_async_indexing = 0\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(374): fileserver: async_indexing_threshold = 700\n[seaf-server] [2025-02-13 
09:23:50] [INFO] http-server.c(386): fileserver: fs_id_list_request_timeout = 300\n[seaf-server] [2025-02-13 09:23:50] [INFO] http-server.c(399): fileserver: max_sync_file_count = 100000\n[seaf-server] [2025-02-13 09:23:50] [WARNING] ../common/license.c(716): License file /opt/seafile/seafile-license.txt does not exist, allow at most 3 trial users\nLicense file /opt/seafile/seafile-license.txt does not exist, allow at most 3 trial users\n[seaf-server] [2025-02-13 09:23:50] [INFO] filelock-mgr.c(1397): Cleaning expired file locks.\n[2025-02-13 09:23:52] Start Monitor \n[2025-02-13 09:23:52] Start seafevents.main \n/opt/seafile/seafile-pro-server-12.0.9/seahub/seahub/settings.py:1101: SyntaxWarning: invalid escape sequence '\\w'\nmatch = re.search('^EXTRA_(\\w+)', attr)\n/opt/seafile/seafile-pro-server-12.0.9/seahub/thirdpart/seafobj/mc.py:13: SyntaxWarning: invalid escape sequence '\\S'\nmatch = re.match('--SERVER\\\\s*=\\\\s*(\\S+)', mc_options)\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\n\n\n----------------------------------------\nSuccessfully created seafile admin\n----------------------------------------\n\n[seafevents] [2025-02-13 09:23:55] [INFO] root:82 LDAP is not set, disable ldap sync.\n[seafevents] [2025-02-13 09:23:55] [INFO] virus_scan:51 [virus_scan] scan_command option is not found in seafile.conf, disable virus scan.\n[seafevents] [2025-02-13 09:23:55] [INFO] seafevents.app.mq_handler:127 Subscribe to channels: {'seaf_server.stats', 'seahub.stats', 'seaf_server.event', 'seahub.audit'}\n[seafevents] [2025-02-13 09:23:55] [INFO] root:534 Start counting user activity info..\n[seafevents] [2025-02-13 09:23:55] [INFO] root:547 [UserActivityCounter] update 0 items.\n[seafevents] [2025-02-13 09:23:55] [INFO] root:240 Start counting traffic info..\n[seafevents] [2025-02-13 09:23:55] [INFO] root:268 Traffic counter finished, total time: 0.0003578662872314453 seconds.\n[seafevents] [2025-02-13 09:23:55] [INFO] root:23 Start file 
updates sender, interval = 300 sec\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:57 Can not start work weixin notice sender: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [INFO] root:131 search indexer is started, interval = 600 sec\n[seafevents] [2025-02-13 09:23:55] [INFO] root:56 seahub email sender is started, interval = 1800 sec\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:17 Can not start ldap syncer: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:18 Can not start virus scanner: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [INFO] root:35 Start data statistics..\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:40 Can not start content scanner: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:46 Can not scan repo old files auto del days: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [INFO] root:182 Start counting total storage..\n[seafevents] [2025-02-13 09:23:55] [WARNING] root:78 Can not start filename index updater: it is not enabled!\n[seafevents] [2025-02-13 09:23:55] [INFO] root:113 search wiki indexer is started, interval = 600 sec\n[seafevents] [2025-02-13 09:23:55] [INFO] root:87 Start counting file operations..\n[seafevents] [2025-02-13 09:23:55] [INFO] root:403 Start counting monthly traffic info..\n[seafevents] [2025-02-13 09:23:55] [INFO] root:491 Monthly traffic counter finished, update 0 user items, 0 org items, total time: 0.0905158519744873 seconds.\n[seafevents] [2025-02-13 09:23:55] [INFO] root:203 [TotalStorageCounter] No results from seafile-db.\n[seafevents] [2025-02-13 09:23:55] [INFO] root:169 [FileOpsCounter] Finish counting file operations in 0.09510159492492676 seconds, 0 added, 0 deleted, 0 visited, 0 modified\n\nSeahub is started\n\nDone.\n If you have a seafile-license.txt license file, simply put it in the volume of the Seafile container. The volumne's default path in the Compose file is /opt/seafile/shared. 
If you have modified the path, save the license file under your custom path.
If the license file has a different name or cannot be read, Seafile server will start in trial mode with at most three users.
Then restart Seafile:
kubectl delete pods -n seafile $(kubectl get pods -n seafile -o jsonpath='{.items[*].metadata.name}' | grep seafile)\n A safer way to use your Seafile license file
You can also store your license file in a secret resource in your K8S cluster, which is a safer way:
kubectl create secret generic seafile-license --from-file=seafile-license.txt=$PATH_TO_YOUR_LICENSE_FILE --namespace seafile\n Then modify my-values.yaml to add the extra volumes configuration:
seafile:\n...\nextraVolumes:\n backend:\n - name: seafileLicense\n volumeInfo:\n secret:\n secretName: seafile-license\n items:\n - key: seafile-license.txt\n path: seafile-license.txt\n subPath: seafile-license.txt\n mountPath: /shared/seafile/seafile-license.txt\n readOnly: true\n frontend:\n - name: seafileLicense\n volumeInfo:\n secret:\n secretName: seafile-license\n items:\n - key: seafile-license.txt\n path: seafile-license.txt\n subPath: seafile-license.txt\n mountPath: /shared/seafile/seafile-license.txt\n readOnly: true\n Finally you can upgrade your chart by:
helm upgrade --install seafile seafile/cluster --version 13.0 --namespace seafile --create-namespace --values my-values.yaml\n Seafile Helm Chart is designed to provide fast deployment and version control. You can update and roll back versions using the following steps:
Update Helm repo
helm repo update\n Tip
The repo update command may not take effect immediately, as the previous repo index may still be cached.
Download (optional) and modify the new my-values.yaml
wget -O my-values.yaml https://haiwen.github.io/seafile-helm-chart/values/<seafile-version>/cluster.yaml\n\nnano my-values.yaml\n About version of Seafile Helm Chart and Seafile
The version of Seafile Helm Chart is the same as the major version of Seafile, i.e.:
By default, it will follow the latest Chart and the latest Seafile
Upgrade release to a new version
helm upgrade --install seafile seafile/cluster --namespace seafile --create-namespace --values my-values.yaml --version <release-version>\n (Rollback) If you would like to roll back to a previously running release, you can use the following command to roll back your current instances:
helm rollback seafile -n seafile <revision>\n You can uninstall the chart with the following command:
helm delete seafile --namespace seafile\n"},{"location":"setup/helm_chart_cluster/#advanced-operations","title":"Advanced operations","text":"Please refer here for further advanced operations.
"},{"location":"setup/helm_chart_single_node/","title":"Setup Seafile with a single K8S pod with Seafile Helm Chart","text":"This manual explains how to deploy and run Seafile server on a Linux server using Seafile Helm Chart (chart thereafter) in a single pod (i.e., single node mode). Comparing to Setup by K8S resource files, deployment with helm chart can simplify the deployment process and provide more flexible deployment control, which the way we recommend in deployment with K8S.
For specific environment and configuration requirements, please refer to the description of the Docker-based Seafile single-node deployment. Please also refer to the description of the K8S tool section here.
"},{"location":"setup/helm_chart_single_node/#preparation","title":"Preparation","text":"For persisting data using in the docker-base deployment, /opt/seafile-data, is still adopted in this manual. What's more, all K8S YAML files will be placed in /opt/seafile-k8s-yaml (replace it when following these instructions if you would like to use another path).
Note that our documentation does not cover the deployment of basic services (e.g., Redis, MySQL and Elasticsearch) or Seafile-compatible components (e.g., SeaDoc) on K8S. If you need to install these services on K8S, you can adapt them following the approach in this document.
"},{"location":"setup/helm_chart_single_node/#system-requirements","title":"System requirements","text":"Please refer here for the details of system requirements about Seafile service. By the way, this will apply to all nodes where Seafile pods may appear in your K8S cluster. In general, we recommend that each node should have at least 2G RAM and a 2-core CPU (> 2GHz).
"},{"location":"setup/helm_chart_single_node/#install-seafile-helm-chart","title":"Install Seafile helm chart","text":"Create namespace
kubectl create namespace seafile\n Create a secret for sensitive data
Seafile ProSeafile CEkubectl create secret generic seafile-secret --namespace seafile \\\n--from-literal=JWT_PRIVATE_KEY='<required>' \\\n--from-literal=SEAFILE_MYSQL_DB_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_ADMIN_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_MYSQL_ROOT_PASSWORD='<required>' \\\n--from-literal=REDIS_PASSWORD='' \\\n--from-literal=S3_SECRET_KEY='' \\\n--from-literal=S3_SSE_C_KEY=''\n kubectl create secret generic seafile-secret --namespace seafile \\\n--from-literal=JWT_PRIVATE_KEY='<required>' \\\n--from-literal=SEAFILE_MYSQL_DB_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_ADMIN_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_MYSQL_ROOT_PASSWORD='<required>' \\\n--from-literal=REDIS_PASSWORD=''\n where the JWT_PRIVATE_KEY can be generated by pwgen -s 40 1
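If pwgen is not installed on the host, a key of the same length can be generated with openssl instead (a sketch; any cryptographically random 40-character string works as the JWT_PRIVATE_KEY):

```shell
# 20 random bytes, hex-encoded, yield a 40-character key --
# the same length as the output of `pwgen -s 40 1`
JWT_PRIVATE_KEY=$(openssl rand -hex 20)

# Sanity-check the length before placing the value into the secret
echo "${#JWT_PRIVATE_KEY}"
```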
Download and modify my-values.yaml according to your configuration. You can follow here for the details:
wget -O my-values.yaml https://haiwen.github.io/seafile-helm-chart/values/13.0/pro.yaml\n\nnano my-values.yaml\n wget -O my-values.yaml https://haiwen.github.io/seafile-helm-chart/values/13.0/ce.yaml\n\nnano my-values.yaml\n Tip
It is not necessary to use the full my-values.yaml we provided (i.e., you can create an empty my-values.yaml and add only the required fields, as the others have default values defined in our chart), since copying everything reduces the flexibility of deploying with Helm; however, the full file documents the formats in which Seafile Helm Chart reads these configurations, as well as all the environment variables and secret variables that can be read directly. In addition, you can also create a custom storageClassName for the persistence directory used by Seafile. You only need to specify storageClassName in the seafile.configs.seafileDataVolume object in my-values.yaml:
seafile:\n configs:\n seafileDataVolume:\n storageClassName: <your seafile storage class name>\n ...\n Then install the chart using the following command:
Seafile ProSeafile CEhelm repo add seafile https://haiwen.github.io/seafile-helm-chart/repo\nhelm upgrade --install seafile seafile/pro --version 13.0 --namespace seafile --create-namespace --values my-values.yaml\n helm repo add seafile https://haiwen.github.io/seafile-helm-chart/repo\nhelm upgrade --install seafile seafile/ce --version 13.0 --namespace seafile --create-namespace --values my-values.yaml\n Seafile helm chart 13.0 supports variable validity checking
Starting from Seafile helm chart 13.0, the validity of variables in my-values.yaml is checked at deployment time. When a variable validity check fails, you may encounter the following error message:
You have enabled <Some feature> but <Variable> is not specified and is not allowed to be empty\n If you encounter such a message, please check the relevant configuration in my-values.yaml.
After installing the chart, the Seafile pod should start up automatically.
About Seafile service
The default service type of Seafile is LoadBalancer. You should provide a K8S load balancer for Seafile, or specify at least one external IP that can be accessed from external networks.
Important for deployment
By default, Seafile (Pro only) will access Elasticsearch via the specific service name: - Elasticsearch: elasticsearch with port 9200
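On the Seafile side, this default is consumed by the search settings in seafevents.conf. The fragment below is only a sketch of that section (the values shown mirror the defaults; check the configuration generated in your own deployment):

```ini
[INDEX FILES]
enabled = true
# Hostname and port of the Elasticsearch K8S service;
# adjust these if your service uses a different name or port
es_host = elasticsearch
es_port = 9200
```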
If the above services are:
Please modify the files in /opt/seafile-data/seafile/conf to correct the configurations for the above services, otherwise the Seafile server cannot start normally. Then restart the Seafile server:
kubectl delete pods -n seafile $(kubectl get pods -n seafile -o jsonpath='{.items[*].metadata.name}' | grep seafile)\n"},{"location":"setup/helm_chart_single_node/#activating-the-seafile-license-pro","title":"Activating the Seafile License (Pro)","text":"If you have a seafile-license.txt license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile-data. If you have modified the path, save the license file under your custom path.
If the license file has a different name or cannot be read, Seafile server will start in trial mode with at most three users.
Then restart Seafile:
kubectl delete pods -n seafile $(kubectl get pods -n seafile -o jsonpath='{.items[*].metadata.name}' | grep seafile)\n A safer way to use your Seafile license file
You can also store your license file in a secret resource in your K8S cluster, which is a safer way:
kubectl create secret generic seafile-license --from-file=seafile-license.txt=$PATH_TO_YOUR_LICENSE_FILE --namespace seafile\n Then modify my-values.yaml to add the extra volumes configuration:
seafile:\n...\nextraVolumes:\n - name: seafileLicense\n volumeInfo:\n secret:\n secretName: seafile-license\n items:\n - key: seafile-license.txt\n path: seafile-license.txt\n subPath: seafile-license.txt\n mountPath: /shared/seafile/seafile-license.txt\n readOnly: true\n Finally you can upgrade your chart by:
Seafile ProSeafile CEhelm upgrade --install seafile seafile/pro --version 13.0 --namespace seafile --create-namespace --values my-values.yaml\n helm upgrade --install seafile seafile/ce --version 13.0 --namespace seafile --create-namespace --values my-values.yaml\n"},{"location":"setup/helm_chart_single_node/#version-control","title":"Version control","text":"Seafile Helm Chart is designed to provide fast deployment and version control. You can update and roll back versions using the following steps:
Update Helm repo
helm repo update\n Tip
The repo update command may not take effect immediately, as the previous repo index may still be cached.
Download (optional) and modify the new my-values.yaml
wget -O my-values.yaml https://haiwen.github.io/seafile-helm-chart/values/<seafile-version>/pro.yaml\n\nnano my-values.yaml\n wget -O my-values.yaml https://haiwen.github.io/seafile-helm-chart/values/<seafile-version>/ce.yaml\n\nnano my-values.yaml\n About version of Seafile Helm Chart and Seafile
The version of Seafile Helm Chart is the same as the major version of Seafile, i.e.:
By default, it will follow the latest Chart and the latest Seafile
Upgrade release to a new version
Seafile ProSeafile CEhelm upgrade --install seafile seafile/pro --namespace seafile --create-namespace --values my-values.yaml --version <release-version>\n helm upgrade --install seafile seafile/ce --namespace seafile --create-namespace --values my-values.yaml --version <release-version>\n (Rollback) If you would like to roll back to a previously running release, you can use the following command to roll back your current instances:
helm rollback seafile -n seafile <revision>\n You can uninstall the chart with the following command:
helm delete seafile --namespace seafile\n"},{"location":"setup/helm_chart_single_node/#advanced-operations","title":"Advanced operations","text":"Please refer here for further advanced operations.
"},{"location":"setup/k8s_advanced_management/","title":"Seafile K8S advanced management","text":"This document mainly describes how to manage and maintain Seafile deployed through our K8S deployment document. At the same time, if you are already proficient in using kubectl commands to manage K8S resources, you can also customize the deployment solutions we provide.
Namespaces for Seafile K8S deployment
Our documentation provides two deployment solutions for both single-node and cluster deployment (via Seafile Helm Chart and K8S resource files), both of which can be highly customized.
Regardless of which deployment method you use, in our newer manuals (usually in versions after Seafile 12.0.9), Seafile-related K8S resources (including related pods, services, and persistent volumes, etc.) are defined in the seafile namespace. In previous versions, you may have deployed Seafile in the default namespace; in that case, when referring to this document for Seafile K8S resource management, be sure to remove -n seafile from the commands.
Similar to a Docker installation, you can manage containers through kubectl commands. For example, you can use the following commands to check whether the relevant resources started successfully and whether the relevant services can be accessed normally. First, execute the following command and note the pod name prefixed with seafile- (such as seafile-748b695648-d6l4g):
kubectl get pods -n seafile\n You can check the logs of a pod with
kubectl logs seafile-748b695648-d6l4g -n seafile\n and enter a container by
kubectl exec -it seafile-748b695648-d6l4g -n seafile -- bash\n Also, you can restart the services by the following commands:
kubectl delete pods -n seafile $(kubectl get pods -n seafile -o jsonpath='{.items[*].metadata.name}' | grep seafile)\n"},{"location":"setup/k8s_advanced_management/#k8s-gateway-and-https","title":"K8S Gateway and HTTPS","text":"Since the Ingress feature is frozen in newer versions of K8S, this article introduces how to use the newer K8S Gateway feature to implement Seafile service exposure and load balancing.
Still use Nginx-Ingress
If your K8S is still using Nginx-Ingress, you can follow here to set up the ingress controller and HTTPS. We sincerely thank Datamate for providing an example of this configuration.
For the details and features of K8S Gateway, please refer to the official K8S documentation; you can simply install it with
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.1.0/standard-install.yaml\n The Gateway API requires configuration of three API categories in its resource model: - GatewayClass:\u00a0Defines a group of gateways with the same configuration, managed by the controller that implements the class. - Gateway:\u00a0Defines an instance of traffic handling infrastructure, which can be thought of as a load balancer. - HTTPRoute:\u00a0Defines HTTP-specific rules for mapping traffic from gateway listeners to representations of backend network endpoints. These endpoints are typically represented as\u00a0Services.
The GatewayClass resource serves the same purpose as IngressClass in the old Ingress API, similar to StorageClass in the Storage API. It defines the categories of Gateways that can be created. Typically, this resource is provided by your infrastructure platform, such as EKS or GKE. It can also be provided by a third-party ingress controller, such as Nginx-gateway or Istio-gateway.
Here we take Nginx-gateway as the example; you can install it by following the official documentation. After installation, you can view the installation status with the following command:
# `gc` means the `gatewayclass`, and it is the same as `kubectl get gatewayclass`\nkubectl get gc \n\n#NAME CONTROLLER ACCEPTED AGE\n#nginx gateway.nginx.org/nginx-gateway-controller True 22s\n Typically, after you install GatewayClass, your cloud provider will provide you with a load balancing IP, which is visible in GatewayClass. If this IP is not assigned, you can manually bind it to an IP that can be accessed from the external network.
kubectl edit svc nginx-gateway -n nginx-gateway\n and modify the following section:
...\nspec:\n ...\n externalIPs:\n - <your external IP>\n externalTrafficPolicy: Cluster\n ...\n...\n"},{"location":"setup/k8s_advanced_management/#gateway","title":"Gateway","text":"Gateway is used to describe an instance of traffic processing infrastructure. Usually, Gateway defines a network endpoint that can be used to process traffic, that is, to filter, balance, and split traffic to Services and other backends. For example, it can represent a cloud load balancer, or a cluster proxy server configured to accept HTTP traffic. As above, please refer to the official documentation for a detailed description of Gateway. Here is only a simple reference configuration for Seafile:
# nano seafile-gateway/gateway.yaml\n\napiVersion: gateway.networking.k8s.io/v1\nkind: Gateway\nmetadata:\n name: seafile-gateway\nspec:\n gatewayClassName: nginx\n listeners:\n - name: seafile-http\n protocol: HTTP\n port: 80\n"},{"location":"setup/k8s_advanced_management/#httproute","title":"HTTPRoute","text":"The HTTPRoute category specifies the routing behavior of HTTP requests from the Gateway listener to the backend network endpoints. For service backends, the implementation can represent the backend network endpoint as a service IP or a backing endpoint of the service. An HTTPRoute represents the configuration that will be applied to the underlying Gateway implementation. For example, defining a new HTTPRoute may result in configuring additional traffic routes in a cloud load balancer or in-cluster proxy server. As above, please refer to the official documentation for a detailed description of the HTTPRoute resource. Here is a reference configuration that applies only to this document.
# nano seafile-gateway/httproute.yaml\n\napiVersion: gateway.networking.k8s.io/v1\nkind: HTTPRoute\nmetadata:\n name: seafile-httproute\nspec:\n parentRefs:\n - group: gateway.networking.k8s.io\n kind: Gateway\n name: seafile-gateway\n hostnames:\n - \"<your domain>\"\n rules:\n - matches:\n - path:\n type: PathPrefix\n value: /\n backendRefs:\n - name: seafile\n port: 80\n After installing or defining GatewayClass, Gateway and HTTPRoute, you can now enable this feature with the following command and view your Seafile server at the URL http://seafile.example.com/:
kubectl apply -f seafile-gateway -n seafile\n"},{"location":"setup/k8s_advanced_management/#enable-https-optional","title":"Enable HTTPS (Optional)","text":"When using K8S Gateway, a common way to enable HTTPS is to add relevant information about the TLS listener in Gateway resource. You can refer here for futher details. We will provide a simple way here so that you can quickly enable HTTPS for your Seafile K8S.
Create a secret resource (seafile-tls-cert) for your TLS certificates:
kubectl create secret tls seafile-tls-cert \\\n--cert=<your path to fullchain.pem> \\\n--key=<your path to privkey.pem>\n 2. Use the TLS in your Gateway resource and enable HTTPS: # nano seafile-gateway/gateway.yaml\n\n...\nspec:\n ...\n listeners:\n - name: seafile-http\n ...\n tls:\n certificateRefs:\n - kind: Secret\n group: \"\"\n name: seafile-tls-cert\n...\n Modify seahub_settings.py:
SERVICE_URL = \"https://<your domain>/\"\n Restart Seafile K8S Gateway:
kubectl delete -f seafile-gateway -n seafile\nkubectl apply -f seafile-gateway -n seafile\n Now you can access your Seafile service in https://<your domain>/
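Optionally, plain-HTTP requests can be redirected to HTTPS with an additional HTTPRoute using the standard Gateway API RequestRedirect filter. The following is a sketch only: the names seafile-gateway and seafile-http are assumed to match the Gateway and listener defined in the earlier examples, and sectionName must match your listener name:

```yaml
# nano seafile-gateway/httproute-redirect.yaml

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: seafile-https-redirect
spec:
  parentRefs:
  - name: seafile-gateway
    sectionName: seafile-http   # attach only to the plain-HTTP listener
  hostnames:
  - "<your domain>"
  rules:
  - filters:
    - type: RequestRedirect
      requestRedirect:
        scheme: https
        statusCode: 301
```

Apply it together with the other files in the seafile-gateway directory (kubectl apply -f seafile-gateway -n seafile).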
Similar to single-node deployment, you can browse the log files of Seafile running directly in the persistent volume directory (i.e., <path>/seafile/logs). The difference is that when using K8S to deploy a Seafile cluster (especially in a cloud environment), the persistent volume created is usually shared and synchronized for all nodes. However, the logs generated by the Seafile service do not record the specific node information where these logs are located, so browsing the files in the above folder may make it difficult to identify which node these logs are generated from. Therefore, one solution proposed here is:
Record the generated logs to the standard output. In this way, the logs can be distinguished per node by kubectl logs (but all types of logs will now be output together). You can enable this feature (it is enabled by default in the K8S Seafile cluster but not in the K8S single-pod Seafile) by setting SEAFILE_LOG_TO_STDOUT to true in seafile-env.yaml:
...\ndata:\n ...\n SEAFILE_LOG_TO_STDOUT: \"true\"\n ...\n Then restart the Seafile server:
kubectl delete pods -n seafile $(kubectl get pods -n seafile -o jsonpath='{.items[*].metadata.name}' | grep seafile)\n Although the logs from step 1 can now be distinguished per node, they are aggregated and output together, which is inconvenient for log retrieval. You therefore have to route the standard output logs (i.e., distinguish logs by component name) and re-record them in a new file or upload them to a log aggregation system (e.g., Loki).
Currently in the K8S environment, the commonly used log routing plugins are:
Fluent Bit and Promtail are more lightweight (i.e., they consume fewer system resources), while Promtail only supports transferring logs to Loki. Therefore, this document will mainly introduce log routing through Fluent Bit, which is a fast, lightweight logs and metrics agent. It is also a CNCF graduated sub-project under the umbrella of Fluentd, licensed under the terms of the Apache License v2.0. You should first deploy Fluent Bit in your K8S cluster by following the official document. Then modify the Fluent-Bit pod settings to mount a new directory to load the configuration files:
#kubectl edit ds fluent-bit\n\n...\nspec:\n ...\n spec:\n ...\n containers:\n - name: fluent-bit\n volumeMounts:\n ...\n - mountPath: /fluent-bit/etc/seafile\n name: fluent-bit-seafile\n - mountPath: /\n ...\n ...\n volumes:\n ...\n - hostPath:\n path: /opt/fluent-bit\n name: fluent-bit-seafile\n and
#kubectl edit cm fluent-bit\n\ndata:\n ...\n fluent-bit.conf: |\n [SERVICE]\n ...\n Parsers_File /fluent-bit/etc/seafile/confs/parsers.conf\n ...\n @INCLUDE /fluent-bit/etc/seafile/confs/*-log.conf\n In this example, we use /opt/fluent-bit/confs (it has to be non-shared). The parsers are defined in /opt/fluent-bit/confs/parsers.conf, and each log type (e.g., seahub's log, seafevents' log) is defined in /opt/fluent-bit/confs/*-log.conf. Each .conf file defines several Fluent-Bit data pipeline components:
Warning
For PARSER, it can only be stored in /opt/fluent-bit/confs/parsers.conf, otherwise Fluent-Bit cannot start up normally.
According to the above, a container will generate a log file (usually in /var/log/containers/<container-name>-xxxxxx.log), so you need to prepare an input and add the following configuration (for more details, please refer to the official document about the TAIL input plugin) in /opt/fluent-bit/confs/seafile-log.conf:
[INPUT]\n Name tail\n Path /var/log/containers/seafile-frontend-*.log\n Buffer_Chunk_Size 2MB\n Buffer_Max_Size 10MB\n Docker_Mode On\n Docker_Mode_Flush 5\n Tag seafile.*\n Parser Docker # for definition, please see the next section as well\n\n[INPUT]\n Name tail\n Path /var/log/containers/seafile-backend-*.log\n Buffer_Chunk_Size 2MB\n Buffer_Max_Size 10MB\n Docker_Mode On\n Docker_Mode_Flush 5\n Tag seafile.*\n Parser Docker\n The above defines two inputs, which monitor the seafile-frontend and seafile-backend services respectively. They are written together here because, for a given node, you may not know when it will run the frontend service and when it will run the backend service, but both share the same tag prefix seafile..
Each input has to use a parser to parse the logs and pass them to the filters. Here, a parser named Docker is created to parse the logs generated by the K8S-docker-runtime container. The parser is placed in /opt/fluent-bit/confs/parsers.conf (for more details, please refer to the official document about the JSON parser):
[PARSER]\n Name Docker\n Format json\n Time_Key time\n Time_Format %Y-%m-%dT%H:%M:%S.%LZ\n Log records after parsing
The logs of the Docker container are saved in /var/log/containers in Json format (see the sample below), which is why we use the Json format in the above parser.
{\"log\":\"[seaf-server] [2025-01-17 07:43:48] [INFO] seafile-session.c(86): fileserver: web_token_expire_time = 3600\\n\",\"stream\":\"stdout\",\"time\":\"2025-01-17T07:43:48.294638442Z\"}\n{\"log\":\"[seaf-server] [2025-01-17 07:43:48] [INFO] seafile-session.c(98): fileserver: max_index_processing_threads= 3\\n\",\"stream\":\"stdout\",\"time\":\"2025-01-17T07:43:48.294810145Z\"}\n{\"log\":\"[seaf-server] [2025-01-17 07:43:48] [INFO] seafile-session.c(111): fileserver: fixed_block_size = 8388608\\n\",\"stream\":\"stdout\",\"time\":\"2025-01-17T07:43:48.294879777Z\"}\n{\"log\":\"[seaf-server] [2025-01-17 07:43:48] [INFO] seafile-session.c(123): fileserver: max_indexing_threads = 1\\n\",\"stream\":\"stdout\",\"time\":\"2025-01-17T07:43:48.295002479Z\"}\n{\"log\":\"[seaf-server] [2025-01-17 07:43:48] [INFO] seafile-session.c(138): fileserver: put_head_commit_request_timeout = 10\\n\",\"stream\":\"stdout\",\"time\":\"2025-01-17T07:43:48.295082733Z\"}\n{\"log\":\"[seaf-server] [2025-01-17 07:43:48] [INFO] seafile-session.c(150): fileserver: skip_block_hash = 0\\n\",\"stream\":\"stdout\",\"time\":\"2025-01-17T07:43:48.295195843Z\"}\n{\"log\":\"[seaf-server] [2025-01-17 07:43:48] [INFO] ../common/seaf-utils.c(553): Use database Mysql\\n\",\"stream\":\"stdout\",\"time\":\"2025-01-17T07:43:48.29704895Z\"}\n When these logs are obtained by the importer and parsed by the parser, they will become independent log records with the following fields:
log: The original log content (i.e., the same as you see in kubectl logs seafile-xxx -n seafile) with an extra line break at the end (i.e., \n). This is also the field we need to save or upload to the log aggregation system in the end.stream: The stream the log came from; stdout means the standard output.time: The time when the log was recorded in the corresponding stream (ISO 8601 format).Add two filters in /opt/fluent-bit/confs/seafile-log.conf for record filtering and routing. Here, the record_modifier filter selects the useful keys in the log records (as described in the tip above, only the log field is needed) and the rewrite_tag filter routes logs according to specific rules:
[FILTER] \n Name record_modifier\n Match seafile.*\n Allowlist_key log\n\n\n[FILTER]\n Name rewrite_tag\n Match seafile.*\n Rule $log ^.*\[seaf-server\].*$ seaf-server false # for seafile's logs\n Rule $log ^.*\[seahub\].*$ seahub false # for seahub's logs\n Rule $log ^.*\[seafevents\].*$ seafevents false # for seafevents' logs\n Rule $log ^.*\[seafile-slow-rpc\].*$ seafile-slow-rpc false # for slow-rpc's logs\n"},{"location":"setup/k8s_advanced_management/#output-logs-to-loki","title":"Output logs to Loki","text":"Loki is a multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost effective and easy to operate. The Fluent-Bit loki built-in output plugin allows you to send your logs or events to a Loki service. It supports data enrichment with Kubernetes labels, custom label keys and Tenant ID, among others.
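Since the Rule patterns in the rewrite_tag filter above are ordinary regular expressions, they can be sanity-checked offline with grep -E before deploying. The sample lines below are illustrative, not real Seafile output:

```shell
# Illustrative log lines in the bracketed-component format Seafile emits
SEAF='[seaf-server] [2025-01-17 07:43:48] [INFO] filelock-mgr.c(1397): Cleaning expired file locks.'
HUB='[seahub] [2025-01-17 07:44:02] [INFO] illustrative seahub line'

# Same patterns as the rewrite_tag rules; a match means the record
# would be re-tagged seaf-server / seahub respectively
printf '%s\n' "$SEAF" | grep -Eq '^.*\[seaf-server\].*$' && echo 'routed to: seaf-server'
printf '%s\n' "$HUB"  | grep -Eq '^.*\[seahub\].*$'      && echo 'routed to: seahub'
```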
Alternative Fluent-Bit Loki plugin by Grafana
For sending logs to Loki, there are two plugins for Fluent-Bit:
Since each output does not have a distinguishing mark in the configuration files (Fluent-Bit treats each plugin as part of a tag-based workflow), define one output per log type:
Seaf-server log: Add an output to /opt/fluent-bit/confs/seaf-server-log.conf:
[OUTPUT]\n Name loki\n Match seaf-server\n Host <your Loki's host>\n port <your Loki's port>\n labels job=fluentbit, node_name=<your-node-name>, node_id=<your-node-id> # node_name and node_id are optional, but recommended for identifying the source node\n seahub log: Add an output to /opt/fluent-bit/confs/seahub-log.conf:
[OUTPUT]\n Name loki\n Match seahub\n Host <your Loki's host>\n port <your Loki's port>\n labels job=fluentbit, node_name=<your-node-name>, node_id=<your-node-id> # node_name and node_id are optional, but recommended for identifying the source node\n seafevents log: Add an output to /opt/fluent-bit/confs/seafevents-log.conf:
[OUTPUT]\n Name loki\n Match seafevents\n Host <your Loki's host>\n port <your Loki's port>\n labels job=fluentbit, node_name=<your-node-name>, node_id=<your-node-id> # node_name and node_id are optional, but recommended for identifying the source node\n seafile-slow-rpc log: Add an output to /opt/fluent-bit/confs/seafile-slow-rpc-log.conf:
[OUTPUT]\n Name loki\n Match seafile-slow-rpc\n Host <your Loki's host>\n port <your Loki's port>\n labels job=fluentbit, node_name=<your-node-name>, node_id=<your-node-id> # node_name and node_id are optional, but recommended for identifying the source node\n Cloud Loki instance
If you are using a cloud Loki instance, you can follow the Fluent-Bit Loki plugin document to fill in all necessary fields. The following additional fields are usually required by a cloud Loki service:
tls, tls.verify, http_user, http_passwd. This manual explains how to deploy and run Seafile server on a Linux server using Kubernetes (k8s hereafter) in a single pod (i.e., single-node mode). This document is therefore an extended description of the Docker-based Seafile single-node deployment (supporting both CE and Pro).
For specific environment and configuration requirements, please refer to the description of the Docker-based Seafile single-node deployment. Please also refer to the description of the K8S tool section here.
"},{"location":"setup/k8s_single_node/#system-requirements","title":"System requirements","text":"Please refer here for the details of system requirements about Seafile service. By the way, this will apply to all nodes where Seafile pods may appear in your K8S cluster. In general, we recommend that each node should have at least 2G RAM and a 2-core CPU (> 2GHz).
"},{"location":"setup/k8s_single_node/#gettings-started","title":"Gettings started","text":"For persisting data using in the docker-base deployment, /opt/seafile-data, is still adopted in this manual. What's more, all K8S YAML files will be placed in /opt/seafile-k8s-yaml (replace it when following these instructions if you would like to use another path).
Note that this document does not cover K8S deployment methods for basic services (e.g., Redis, MySQL and Elasticsearch) or Seafile-compatible components (e.g., SeaDoc). If you need to install these services on K8S, you can adapt the approach described in this document.
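The JWT_PRIVATE_KEY required by the secret in the next section must be a random string of at least 32 characters. A minimal way to generate one, assuming openssl is available (the `pwgen -s 40 1` suggested elsewhere in this manual works equally well):

```shell
# Generate a 40-character random key for JWT_PRIVATE_KEY.
# `openssl rand -hex 20` emits exactly 40 hex characters.
JWT_PRIVATE_KEY=$(openssl rand -hex 20)
echo "$JWT_PRIVATE_KEY"
```

Paste the printed value into the `--from-literal=JWT_PRIVATE_KEY='...'` argument below.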
"},{"location":"setup/k8s_single_node/#create-namespace-and-secretmap","title":"Create namespace and secretMap","text":"Seafile ProSeafile CEkubectl create ns seafile\n\nkubectl create secret generic seafile-secret --namespace seafile \\\n--from-literal=JWT_PRIVATE_KEY='<required>' \\\n--from-literal=SEAFILE_MYSQL_DB_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_ADMIN_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_MYSQL_ROOT_PASSWORD='<required>' \\\n--from-literal=REDIS_PASSWORD='' \\\n--from-literal=S3_SECRET_KEY='' \\\n--from-literal=S3_SSE_C_KEY='' \n kubectl create ns seafile\n\nkubectl create secret generic seafile-secret --namespace seafile \\\n--from-literal=JWT_PRIVATE_KEY='<required>' \\\n--from-literal=SEAFILE_MYSQL_DB_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_ADMIN_PASSWORD='<required>' \\\n--from-literal=INIT_SEAFILE_MYSQL_ROOT_PASSWORD='<required>' \\\n--from-literal=REDIS_PASSWORD=''\n"},{"location":"setup/k8s_single_node/#down-load-the-yaml-files-for-seafile-server","title":"Down load the YAML files for Seafile Server","text":"Pro editionCommunity edition mkdir -p /opt/seafile-k8s-yaml\n\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/pro/seafile-deployment.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/pro/seafile-persistentvolume.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/pro/seafile-persistentvolumeclaim.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/pro/seafile-service.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/pro/seafile-env.yaml\n mkdir -p /opt/seafile-k8s-yaml\n\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/ce/seafile-deployment.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/ce/seafile-persistentvolume.yaml\nwget -P /opt/seafile-k8s-yaml 
https://manual.seafile.com/13.0/repo/k8s/ce/seafile-persistentvolumeclaim.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/ce/seafile-service.yaml\nwget -P /opt/seafile-k8s-yaml https://manual.seafile.com/13.0/repo/k8s/ce/seafile-env.yaml\n Here we assume you have downloaded the YAML files to /opt/seafile-k8s-yaml. They mainly include:
seafile-deployment.yaml for Seafile server pod management and creation, seafile-service.yaml for exposing Seafile services to the external network, seafile-persistentvolume.yaml for defining the location of a volume used for persistent storage on the host, and seafile-persistentvolumeclaim.yaml for declaring the use of persistent storage in the container. Use PV bound from a storage class
If you would like to use a persistent volume (PV) automatically allocated by a storage class, please modify seafile-persistentvolumeclaim.yaml and specify storageClassName. In that case, the PV defined by seafile-persistentvolume.yaml can be removed:
rm /opt/seafile-k8s-yaml/seafile-persistentvolume.yaml\n For further configuration details, you can refer to the official documents.
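For reference, a claim bound to a storage class might look like the following sketch. The claim name, storageClassName and storage size here are assumptions; keep the metadata used by the downloaded seafile-persistentvolumeclaim.yaml and substitute your cluster's storage class:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: seafile-data-pvc        # illustrative; keep the name from the downloaded YAML
  namespace: seafile
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard    # assumed; use your cluster's storage class
  resources:
    requests:
      storage: 100Gi            # assumed size; adjust as needed
```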
"},{"location":"setup/k8s_single_node/#modify-seafile-envyaml","title":"Modifyseafile-env.yaml","text":"Similar to Docker-base deployment, Seafile cluster in K8S deployment also supports use files to configure startup progress, you can modify common environment variables by
nano /opt/seafile-k8s-yaml/seafile-env.yaml\n Warning
The fields marked with <...> are required. Please make sure these items are filled in, otherwise the Seafile server may not run properly.
You can start the Seafile server and place the resources in the namespace seafile for easier management by
kubectl apply -f /opt/seafile-k8s-yaml/ -n seafile\n Important for Pro edition
By default, Seafile (Pro) will access Elasticsearch with the specific service name:
elasticsearch with port 9200. If the above services are different in your deployment:
Please modify /opt/seafile-data/seafile/conf/seafevents.conf to correct the configurations for the above services, otherwise the Seafile server cannot start normally. Then restart the Seafile server:
kubectl delete pods -n seafile $(kubectl get pods -n seafile -o jsonpath='{.items[*].metadata.name}' | grep seafile)\n"},{"location":"setup/k8s_single_node/#activating-the-seafile-license-pro","title":"Activating the Seafile License (Pro)","text":"If you have a seafile-license.txt license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile-data. If you have modified the path, save the license file under your custom path.
If the license file has a different name or cannot be read, the Seafile server will start in trial mode, which allows at most THREE users.
Then restart Seafile:
kubectl delete pods -n seafile $(kubectl get pods -n seafile -o jsonpath='{.items[*].metadata.name}' | grep seafile)\n"},{"location":"setup/k8s_single_node/#uninstall-seafile-k8s","title":"Uninstall Seafile K8S","text":"You can uninstall Seafile from K8S with the following command:
kubectl delete -f /opt/seafile-k8s-yaml/ -n seafile\n"},{"location":"setup/k8s_single_node/#advanced-operations","title":"Advanced operations","text":"Please refer here for further advanced operations.
"},{"location":"setup/migrate_backends_data/","title":"Migrate data between different backends","text":"Seafile supports data migration between filesystem, s3, ceph, swift and Alibaba oss by a built-in script. Before migration, you have to ensure that both S3 hosts can be accessed normally.
Migration to or from S3
Since version 11, when you migrate from S3 to other storage servers or from other storage servers to S3, you have to use the V4 authentication protocol. This is because version 11 upgrades to the Boto3 library, which fails to list objects from S3 when it is configured to use the V2 authentication protocol.
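For reference, an S3 backend section in seafile.conf using the V4 protocol typically looks like the following sketch. The bucket name, keys and region are placeholders, and the same options apply to the fs_object_backend and block_backend sections:

```ini
[commit_object_backend]
name = s3
bucket = my-commit-objects
key_id = <your-key-id>
key = <your-secret-key>
use_v4_signature = true
aws_region = us-east-1
```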
"},{"location":"setup/migrate_backends_data/#copy-seafileconf-and-use-new-s3-configurations","title":"Copyseafile.conf and use new S3 configurations","text":"During the migration process, Seafile needs to know where the data will be migrated to. The easiest way is to copy the original seafile.conf to a new path, and then use the new S3 configurations in this file.
Warning
For deployment with Docker, the new seafile.conf has to be put in the persistent directory (e.g., /opt/seafile-data/seafile.conf) used by the Seafile service. Otherwise the script cannot locate the new configuration file.
cp /opt/seafile-data/seafile/conf/seafile.conf /opt/seafile-data/seafile.conf\n\nnano /opt/seafile-data/seafile.conf\n cp /opt/seafile/conf/seafile.conf /opt/seafile.conf\n\nnano /opt/seafile.conf\n Then you can follow here to use the new S3 configurations in the new seafile.conf. If you want to migrate to a local file system instead, the new seafile.conf looks as follows:
# ... other configurations\n\n[commit_object_backend]\nname = fs\ndir = /var/data_backup\n\n[fs_object_backend]\nname = fs\ndir = /var/data_backup\n\n[block_backend]\nname = fs\ndir = /var/data_backup\n"},{"location":"setup/migrate_backends_data/#stop-seafile-server","title":"Stop Seafile Server","text":"Although the data migration process does not affect the operation of the Seafile service, data modified in the original storage during this process may not be synchronized with the migrated data. Therefore, we recommend that you stop the Seafile service before executing the migration procedure.
Deploy with DockerDeploy from binary packagedocker exec -it seafile bash\ncd /opt/seafile/seafile-server-latest\n./seahub.sh stop\n./seafile.sh stop\n cd /opt/seafile/seafile-server-latest\n./seahub.sh stop\n./seafile.sh stop\n"},{"location":"setup/migrate_backends_data/#run-migratesh-to-initially-migrate-objects","title":"Run migrate.sh to initially migrate objects","text":"This step will migrate most of the objects from the source storage to the destination storage. You don't need to stop the Seafile service at this stage, as it may take quite a long time to finish. Since the service is not stopped, some new objects may be added to the source storage during migration. Those objects will be handled in the next step:
Speed up migrating a large number of objects
If you have millions of objects in the storage (especially fs objects), it may take quite a long time to migrate all objects, and more than half of that time is spent checking whether an object already exists in the destination storage. In this situation, you can increase the nworker and maxsize variables in migrate.py:
class ThreadPool(object):\n def __init__(self, do_work, nworker=20):\n self.do_work = do_work\n self.nworker = nworker\n self.task_queue = Queue.Queue(maxsize = 2000)\n However, if the two values (i.e., nworker and maxsize) are too large, the improvement in data migration speed may be limited because the disk I/O bottleneck has been reached.
Encrypted storage backend data (deprecated)
If you have an encrypted storage backend, you can use this script to migrate and decrypt the data from that backend to a new one. You can add the --decrypt option when calling the script, which will decrypt the data while reading it, and then write the unencrypted data to the new backend:
./migrate.sh /opt --decrypt\n Deploy with DockerDeploy from binary package # make sure you are in the container and in directory `/opt/seafile/seafile-server-latest`\n./migrate.sh /shared\n\n# exit container and stop it\nexit\ndocker compose down\n # make sure you are in the directory `/opt/seafile/seafile-server-latest`\n./migrate.sh /opt\n Success
You can see the following message if the migration process is done:
2025-01-15 05:49:39,408 Start to fetch [commits] object from destination\n2025-01-15 05:49:39,422 Start to fetch [fs] object from destination\n2025-01-15 05:49:39,442 Start to fetch [blocks] object from destination\n2025-01-15 05:49:39,677 [commits] [0] objects exist in destination\n2025-01-15 05:49:39,677 Start to migrate [commits] object\n2025-01-15 05:49:39,749 [blocks] [0] objects exist in destination\n2025-01-15 05:49:39,755 Start to migrate [blocks] object\n2025-01-15 05:49:39,752 [fs] [0] objects exist in destination\n2025-01-15 05:49:39,762 Start to migrate [fs] object\n2025-01-15 05:49:40,602 Complete migrate [commits] object\n2025-01-15 05:49:40,626 Complete migrate [blocks] object\n2025-01-15 05:49:40,790 Complete migrate [fs] object\nDone.\n"},{"location":"setup/migrate_backends_data/#replace-the-original-seafileconf-and-start-seafile","title":"Replace the original seafile.conf and start Seafile","text":"After running the script, we recommend that you check whether your data already exists on the new S3 storage backend server (i.e., the migration is successful, and the number and size of files should be the same). Then you can remove the data from the old S3 storage backend and replace the original seafile.conf with the new one:
mv /opt/seafile-data/seafile.conf /opt/seafile-data/seafile/conf/seafile.conf\n mv /opt/seafile.conf /opt/seafile/conf/seafile.conf\n Finally, you can start Seafile server:
Deploy with DockerDeploy from binary packagedocker compose up -d\n # make sure you are in the directory `/opt/seafile/seafile-server-latest`\n./seahub.sh start\n./seafile.sh start\n"},{"location":"setup/migrate_ce_to_pro_with_docker/","title":"Migrate CE to Pro with Docker","text":""},{"location":"setup/migrate_ce_to_pro_with_docker/#preparation","title":"Preparation","text":".env and seafile-server.yml of Seafile Pro.wget -O .env https://manual.seafile.com/13.0/repo/docker/pro/env\nwget https://manual.seafile.com/13.0/repo/docker/pro/seafile-server.yml\nwget https://manual.seafile.com/13.0/repo/docker/pro/elasticsearch.yml\n"},{"location":"setup/migrate_ce_to_pro_with_docker/#migrate","title":"Migrate","text":""},{"location":"setup/migrate_ce_to_pro_with_docker/#stop-the-seafile-ce","title":"Stop the Seafile CE","text":"docker compose down\n Tip
To ensure data security, it is recommended that you back up your MySQL data
"},{"location":"setup/migrate_ce_to_pro_with_docker/#put-your-licence-file","title":"Put your licence file","text":"Copy the seafile-license.txt to the volume directory of the Seafile CE's data. If the directory is /opt/seafile-data, so you should put it in the /opt/seafile-data/seafile/.
Modify .env based on the old configurations from the old .env file. The following fields deserve special attention; the others should be the same as the old configurations:
SEAFILE_IMAGE The Seafile Pro docker image, whose tag must be equal to or newer than the old Seafile CE docker tag seafileltd/seafile-pro-mc:13.0-latest SEAFILE_ELASTICSEARCH_VOLUME The volume directory of Elasticsearch data /opt/seafile-elasticsearch/data Other fields (e.g., SEAFILE_VOLUME, SEAFILE_MYSQL_VOLUME, SEAFILE_MYSQL_DB_USER, SEAFILE_MYSQL_DB_PASSWORD) must be consistent with the old configurations.
Tip
For the configurations used to do the initialization (e.g., INIT_SEAFILE_ADMIN_EMAIL, INIT_SEAFILE_MYSQL_ROOT_PASSWORD), you can remove them from .env as well.
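Putting the two fields from the table above together, the relevant part of the new .env might look like this sketch (the values shown are the defaults from the table; keep your own old values for all other fields):

```ini
SEAFILE_IMAGE=seafileltd/seafile-pro-mc:13.0-latest
SEAFILE_ELASTICSEARCH_VOLUME=/opt/seafile-elasticsearch/data
```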
seafile-server.yml and .env","text":"Replace the old seafile-server.yml and .env by the new and modified files, i.e. (if your old seafile-server.yml and .env are in the /opt)
mv -b seafile-server.yml /opt/seafile-server.yml\nmv -b .env /opt/.env\n"},{"location":"setup/migrate_ce_to_pro_with_docker/#modify-seafeventsconf","title":"Modify seafevents.conf","text":"Add [INDEX FILES] section in /opt/seafile-data/seafile/conf/seafevents.conf manually:
Additional system resource requirements
Seafile PE docker requires a minimum of 4 cores and 4GB RAM because Elasticsearch is deployed simultaneously. If you do not have enough system resources, you can use an alternative search engine, SeaSearch, a more lightweight search engine built on the open-source search engine ZincSearch, as the indexer.
[INDEX FILES]\nes_host = elasticsearch\nes_port = 9200\nenabled = true\ninterval = 10m\n"},{"location":"setup/migrate_ce_to_pro_with_docker/#start-seafile-pro","title":"Start Seafile Pro","text":"Run the following command to start the Seafile Pro container:
docker compose up -d\n Now you have a Seafile Professional service.
"},{"location":"setup/migrate_non_docker_to_docker/","title":"Migrate from non-docker Seafile deployment to docker","text":"Note
The recommended steps to migrate from non-docker deployment to docker deployment on two different machines are:
Run the following commands in /opt/seafile/seafile-server-latest:
Note
For installations using python virtual environment, activate it if it isn't already active:
source python-venv/bin/activate\n Tip
If you have integrated some components (e.g., SeaDoc) in your Seafile server, please shut them down to avoid losing unsaved data.
su seafile\n./seafile.sh stop\n./seahub.sh stop\n"},{"location":"setup/migrate_non_docker_to_docker/#stop-nginx-cache-server-eg-redis-elasticsearch","title":"Stop Nginx, cache server (e.g., Redis), ElasticSearch","text":"You have to stop the above services to avoid losing data before migrating.
systemctl stop nginx && systemctl disable nginx\nsystemctl stop redis && systemctl disable redis\ndocker stop es && docker remove es\n"},{"location":"setup/migrate_non_docker_to_docker/#backup-mysql-database-and-seafile-server","title":"Backup MySQL database and Seafile server","text":"Please follow here to backup:
You can follow here to deploy Seafile with Docker. Please use your old configurations when modifying .env, and make sure the Seafile server is running normally after deployment.
Use external MySQL service or the old MySQL service
This document describes migrating Seafile from a non-Docker deployment to a Docker deployment on two different machines. We suggest using the Docker Compose MariaDB service (version 10.11 by default) as the database service after migration. If you would like to use an existing MySQL service (usually when you migrate on the same host, or when the old MySQL service is a dependency of other services), you have to follow here to deploy Seafile.
"},{"location":"setup/migrate_non_docker_to_docker/#recovery-libraries-data-for-seafile-docker","title":"Recovery libraries data for Seafile Docker","text":"Firstly, you should stop the Seafile server before recovering Seafile libraries data:
docker compose down\n Then recover the data from the backup:
cp /backup/data/* /opt/seafile-data/seafile\n"},{"location":"setup/migrate_non_docker_to_docker/#recover-the-database-only-for-the-new-mysql-service-used-in-seafile-docker","title":"Recover the Database (only for the new MySQL service used in Seafile docker)","text":"Start the database service only:
docker compose up -d --no-deps db\n Follow here to recover the database data.
Exit the container and stop the Mariadb service
docker compose down\n Finally, the migration is complete. You can start the Docker-based Seafile server by restarting the service:
docker compose up -d\n You can now shut down the old MySQL service if it is not a dependency of other services.
"},{"location":"setup/overview/","title":"Seafile Docker overview","text":"Seafile docker based installation consist of the following components (docker images):
SSL configuration. You can run Seafile as a non-root user in Docker.
Note: In non-root mode, the seafile user is automatically created in the container, with uid 8000 and gid 8000.
First deploy Seafile with Docker, then destroy the containers:
docker compose down\n Then add NON_ROOT=true to .env.
NON_ROOT=true\n Then modify /opt/seafile-data/seafile/ permissions.
chmod -R a+rwx /opt/seafile-data/seafile/\n Start Seafile:
docker compose up -d\n Now you can run Seafile as seafile user.
Tip
When doing maintenance, other scripts in docker are also required to be run as seafile user, e.g. su seafile -c ./seaf-gc.sh
You can use one of the following methods to start Seafile container on system bootup.
"},{"location":"setup/seafile_docker_autostart/#modify-docker-composeservice","title":"Modify docker-compose.service","text":"Add docker-compose.service
vim /etc/systemd/system/docker-compose.service
[Unit]\nDescription=Docker Compose Application Service\nRequires=docker.service\nAfter=docker.service\n\n[Service]\nType=forking\nRemainAfterExit=yes\nWorkingDirectory=/opt/ \nExecStart=/usr/bin/docker compose up -d\nExecStop=/usr/bin/docker compose down\nTimeoutStartSec=0\n\n[Install]\nWantedBy=multi-user.target\n Note
WorkingDirectory is the absolute path to the seafile-server.yml file directory.
Set the docker-compose.service file to 644 permissions
chmod 644 /etc/systemd/system/docker-compose.service\n Load autostart configuration
systemctl daemon-reload\nsystemctl enable docker-compose.service\n Add configuration restart: unless-stopped for each container in components of Seafile docker. Take seafile-server.yml for example
services:\n db:\n image: mariadb:10.11\n container_name: seafile-mysql-1\n restart: unless-stopped\n\n redis:\n image: redis\n container_name: seafile-redis\n restart: unless-stopped\n\n elasticsearch:\n image: elasticsearch:8.6.2\n container_name: seafile-elasticsearch\n restart: unless-stopped\n\n seafile:\n image: seafileltd/seafile-pro-mc:12.0-latest\n container_name: seafile\n restart: unless-stopped\n Tip
Add restart: unless-stopped, and the Seafile container will automatically start when Docker starts. If the Seafile container does not exist (execute docker compose down), the container will not start automatically.
Please refer here for system requirements about Seafile CE. In general, we recommend that you have at least 2G RAM and a 2-core CPU (> 2GHz).
"},{"location":"setup/setup_ce_by_docker/#getting-started","title":"Getting started","text":"The following assumptions and conventions are used in the rest of this document:
/opt/seafile is the directory for storing Seafile docker compose files. If you decide to put Seafile in a different directory, adjust all paths accordingly./opt/seafile-mysql and /opt/seafile-data, respectively. It is not recommended to change these paths. If you do, account for it when following these instructions.
"},{"location":"setup/setup_ce_by_docker/#download-and-modify-env","title":"Download and modify.env","text":"To deploy Seafile with Docker, you have to .env, seafile-server.yml and caddy.yml in a directory (e.g., /opt/seafile):
mkdir /opt/seafile\ncd /opt/seafile\n\nwget -O .env https://manual.seafile.com/13.0/repo/docker/ce/env\nwget https://manual.seafile.com/13.0/repo/docker/ce/seafile-server.yml\nwget https://manual.seafile.com/13.0/repo/docker/seadoc.yml\nwget https://manual.seafile.com/13.0/repo/docker/caddy.yml\n\nnano .env\n The following fields merit particular attention:
Variable Description Default ValueSEAFILE_VOLUME The volume directory of Seafile data /opt/seafile-data SEAFILE_MYSQL_VOLUME The volume directory of MySQL data /opt/seafile-mysql/db SEAFILE_CADDY_VOLUME The volume directory of Caddy data used to store certificates obtained from Let's Encrypt's /opt/seafile-caddy INIT_SEAFILE_MYSQL_ROOT_PASSWORD The root password of MySQL (Only required on first deployment) SEAFILE_MYSQL_DB_HOST The host of MySQL db SEAFILE_MYSQL_DB_PORT The port of MySQL 3306 SEAFILE_MYSQL_DB_USER The user of MySQL (database - user can be found in conf/seafile.conf) seafile SEAFILE_MYSQL_DB_PASSWORD The user seafile password of MySQL (required) SEAFILE_MYSQL_DB_CCNET_DB_NAME The database name of ccnet ccnet_db SEAFILE_MYSQL_DB_SEAFILE_DB_NAME The database name of seafile seafile_db SEAFILE_MYSQL_DB_SEAHUB_DB_NAME The database name of seahub seahub_db JWT_PRIVATE_KEY JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters is required for Seafile, which can be generated by using pwgen -s 40 1 (required) SEAFILE_SERVER_HOSTNAME Seafile server hostname or domain (required) SEAFILE_SERVER_PROTOCOL Seafile server protocol (http or https) http CACHE_PROVIDER The type of cache server used for Seafile. The available options are redis and memcached. Since Seafile 13, it is recommended to use redis as the cache service to support new features, and memcached will no longer be integrated into Seafile Docker by default. redis REDIS_HOST Redis server host redis REDIS_PORT Redis server port 6379 REDIS_PASSWORD Redis server password (none) MEMCACHED_HOST Memcached server host memcached MEMCACHED_PORT Memcached server port 11211 TIME_ZONE Time zone UTC ENABLE_NOTIFICATION_SERVER Enable (true) or disable (false) notification feature for Seafile false NOTIFICATION_SERVER_URL The notification server url (none) MD_FILE_COUNT_LIMIT (only valid when deployed metadata server). 
The maximum number of files in a repository that the metadata feature allows. If the number of files in a repository exceeds this value, the metadata management function will not be enabled for the repository. For a repository with metadata management enabled, if the number of records in it reaches this value but there are still some files that are not recorded in metadata server, the metadata management of the unrecorded files will be skipped. 100000 INIT_SEAFILE_ADMIN_EMAIL Admin username me@example.com (Recommend modifications) INIT_SEAFILE_ADMIN_PASSWORD Admin password asecret (Recommend modifications) NON_ROOT Run Seafile container without a root user false"},{"location":"setup/setup_ce_by_docker/#start-seafile-server","title":"Start Seafile server","text":"Start Seafile server with the following command
docker compose up -d\n ERROR: Named volume \"xxx\" is used in service \"xxx\" but no declaration was found in the volumes section
You may encounter this problem when your Docker (or docker-compose) version is out of date. You can upgrade or reinstall the Docker service to solve this problem according to the Docker official documentation.
Note
You must run the above command in the directory with the .env. If .env file is elsewhere, please run
docker compose --env-file /path/to/.env up -d\n Success
After starting the services, you can see the initialization progress by tracing the logs of container seafile (i.e., docker logs seafile -f)
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n----------------------------------------\nNow creating seafevents database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating ccnet database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seafile database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seahub database tables ...\n\n----------------------------------------\n\ncreating seafile-server-latest symbolic link ... done\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n And then you can see the following messages which the Seafile server starts successfully:
Starting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\n----------------------------------------\nSuccessfully created seafile admin\n----------------------------------------\n\nSeahub is started\n\nDone.\n Finally, you can go to http://seafile.example.com to use Seafile.
/opt/seafile-data","text":"Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container, in our case we keep various log files and upload directory outside. This allows you to rebuild containers easily without losing important information.
/opt/seafile-data/seafile/logs/seafile.log; /var/log inside the container. /opt/seafile-data/logs/var-log/nginx contains the logs of Nginx in the Seafile container. To monitor container logs (from outside of the container), please use the following commands:
# if the `.env` file is in current directory:\ndocker compose logs --follow\n# if the `.env` file is elsewhere:\ndocker compose --env-file /path/to/.env logs --follow\n\n# you can also specify container name:\ndocker compose logs seafile --follow\n# or, if the `.env` file is elsewhere:\ndocker compose --env-file /path/to/.env logs seafile --follow\n The Seafile logs are under /shared/logs/seafile in the container, or /opt/seafile-data/logs/seafile on the server that runs Docker.
The system logs are under /shared/logs/var-log, or /opt/seafile-data/logs/var-log on the server that runs Docker.
To monitor all Seafile logs simultaneously (from outside of the container), run
sudo tail -f $(find /opt/seafile-data/ -type f -name *.log 2>/dev/null)\n"},{"location":"setup/setup_ce_by_docker/#more-configuration-options","title":"More configuration options","text":"The config files are under /opt/seafile-data/seafile/conf. You can modify the configurations according to configuration section
Ensure the container is running, then enter this command:
docker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh\n Enter the username and password according to the prompts. You now have a new admin account.
"},{"location":"setup/setup_ce_by_docker/#backup-and-recovery","title":"Backup and recovery","text":"Follow the instructions in Backup and restore for Seafile Docker
"},{"location":"setup/setup_ce_by_docker/#garbage-collection","title":"Garbage collection","text":"When files are deleted, the blocks comprising those files are not immediately removed as there may be other files that reference those blocks (due to the magic of deduplication). To remove them, Seafile requires a 'garbage collection' process to be run, which detects which blocks no longer used and purges them.
"},{"location":"setup/setup_ce_by_docker/#faq","title":"FAQ","text":""},{"location":"setup/setup_ce_by_docker/#seafile-service-and-container-maintenance","title":"Seafile service and container maintenance","text":"Q: If I want enter into the Docker container, which command I can use?
A: You can enter into the docker container using the command:
docker exec -it seafile /bin/bash\n Q: I forgot the Seafile admin email address/password, how do I create a new admin account?
A: You can create a new admin account by running
docker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh\n The Seafile service must be up when running the superuser command.
Q: If, for whatever reason, the installation fails, how do I to start from a clean slate again?
A: Remove the directories /opt/seafile, /opt/seafile-data and /opt/seafile-mysql and start again.
Q: Something goes wrong during the start of the containers. How can I find out more?
A: You can view the docker logs using this command: docker compose logs -f.
Q: How does Seafile use cache?
A: Seafile uses cache to improve performance in many situations. The content includes but is not limited to user session information, avatars, profiles, records from the database, etc. Since Seafile Docker 13, Redis is the default cache server to support the new features (please refer to the upgrade notes). It is integrated in Seafile Docker 13 and can be configured directly via environment variables in .env (no additional settings are required by default).
Q: Is the Redis integrated in Seafile Docker safe? Does it have an access password?
A: Although the Redis integrated in Seafile Docker does not have a password set by default, it can only be accessed through the Docker private network and does not expose the service port externally. Of course, you can also set a password for it if necessary. Set REDIS_PASSWORD in .env and remove the following comment markers in seafile-server.yml to set the integrated Redis password:
services:\n ...\n redis:\n image: ${SEAFILE_REDIS_IMAGE:-redis}\n container_name: seafile-redis\n # remove the following comment markers\n command:\n - /bin/sh\n - -c\n - redis-server --requirepass \"$${REDIS_PASSWORD:?Variable is not set or empty}\"\n networks:\n - seafile-net\n ...\n Q: For some reason, I still have to use Memcached as my cache server. How can I do this?
A: If you still want to use Memcached (no longer provided since Seafile Docker 13), just follow the steps below:
Set CACHE_PROVIDER to memcached and modify the MEMCACHED_xxx variables in .env, then remove the redis part and the redis dependency in the seafile service section in seafile-server.yml. By the way, you can change the cache server after the service has started (by setting environment variables in .env), but the corresponding configuration files (e.g., seahub_settings.py, seafile.conf and seafevents.conf) will not be updated automatically. To avoid ambiguity, we recommend that you also update these configuration files.
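For the first step, the relevant .env entries would look like the following sketch, assuming the default Memcached host name and port used by the Seafile compose files:

```ini
# .env — switch the cache backend from Redis to Memcached
CACHE_PROVIDER=memcached
MEMCACHED_HOST=memcached
MEMCACHED_PORT=11211
```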
This manual explains how to deploy and run Seafile Server Professional Edition (Seafile PE) on a Linux server using Docker and Docker Compose. The deployment has been tested for Debian/Ubuntu and CentOS, but Seafile PE should also work on other Linux distributions.
"},{"location":"setup/setup_pro_by_docker/#system-requirements","title":"System requirements","text":"Please refer here for system requirements about Seafile PE. In general, we recommend that you have at least 4G RAM and a 4-core CPU (> 2GHz).
About license
Seafile PE can be used without a paid license with up to three users. Licenses for more users can be purchased in the Seafile Customer Center, or contact Seafile Sales at sales@seafile.com. For further details, please refer to the license page of Seafile PE.
"},{"location":"setup/setup_pro_by_docker/#setup","title":"Setup","text":"The following assumptions and conventions are used in the rest of this document:
/opt/seafile is the directory of Seafile for storing Seafile docker files. If you decide to put Seafile in a different directory, adjust all paths accordingly.Use the official installation guide for your OS to install Docker.
"},{"location":"setup/setup_pro_by_docker/#downloading-the-seafile-image","title":"Downloading the Seafile Image","text":"Success
Since v12.0, Seafile PE versions are hosted on DockerHub and do not require a username and password to download. Older Seafile PE versions (back to Seafile 7.0) are available from a private Docker repository. You can get the username and password on the download page in the Customer Center.
docker pull seafileltd/seafile-pro-mc:13.0-latest\n"},{"location":"setup/setup_pro_by_docker/#downloading-and-modifying-env","title":"Downloading and Modifying .env","text":"Seafile uses .env, seafile-server.yml and caddy.yml files for configuration.
mkdir /opt/seafile\ncd /opt/seafile\n\nwget -O .env https://manual.seafile.com/13.0/repo/docker/pro/env\nwget https://manual.seafile.com/13.0/repo/docker/pro/seafile-server.yml\nwget https://manual.seafile.com/13.0/repo/docker/pro/elasticsearch.yml\nwget https://manual.seafile.com/13.0/repo/docker/seadoc.yml\nwget https://manual.seafile.com/13.0/repo/docker/caddy.yml\n\nnano .env\n The following fields merit particular attention:
Variable Description Default ValueSEAFILE_VOLUME The volume directory of Seafile data /opt/seafile-data SEAFILE_MYSQL_VOLUME The volume directory of MySQL data /opt/seafile-mysql/db SEAFILE_CADDY_VOLUME The volume directory of Caddy data used to store certificates obtained from Let's Encrypt's /opt/seafile-caddy SEAFILE_ELASTICSEARCH_VOLUME The volume directory of Elasticsearch data /opt/seafile-elasticsearch/data INIT_SEAFILE_MYSQL_ROOT_PASSWORD The root password of MySQL (Only required on first deployment) SEAFILE_MYSQL_DB_HOST The host of MySQL db SEAFILE_MYSQL_DB_PORT The port of MySQL 3306 SEAFILE_MYSQL_DB_USER The user of MySQL (database - user can be found in conf/seafile.conf) seafile SEAFILE_MYSQL_DB_PASSWORD The user seafile password of MySQL (required) SEAFILE_MYSQL_DB_CCNET_DB_NAME The database name of ccnet ccnet_db SEAFILE_MYSQL_DB_SEAFILE_DB_NAME The database name of seafile seafile_db SEAFILE_MYSQL_DB_SEAHUB_DB_NAME The database name of seahub seahub_db JWT_PRIVATE_KEY JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters is required for Seafile, which can be generated by using pwgen -s 40 1 (required) SEAFILE_SERVER_HOSTNAME Seafile server hostname or domain (required) SEAFILE_SERVER_PROTOCOL Seafile server protocol (http or https) http CACHE_PROVIDER The type of cache server used for Seafile. The available options are redis and memcached. Since Seafile 13, it is recommended to use redis as the cache service to support new features, and memcached will no longer be integrated into Seafile Docker by default. 
redis REDIS_HOST Redis server host redis REDIS_PORT Redis server port 6379 REDIS_PASSWORD Redis server password (none) MEMCACHED_HOST Memcached server host memcached MEMCACHED_PORT Memcached server port 11211 TIME_ZONE Time zone UTC INIT_SEAFILE_ADMIN_EMAIL Synchronously set admin username during initialization me@example.com INIT_SEAFILE_ADMIN_PASSWORD Synchronously set admin password during initialization asecret SEAF_SERVER_STORAGE_TYPE What kind of the Seafile data for storage. Available options are disk (i.e., local disk), s3 and multiple (see the details of multiple storage backends) disk S3_COMMIT_BUCKET S3 storage backend commit objects bucket (required when SEAF_SERVER_STORAGE_TYPE=s3) S3_FS_BUCKET S3 storage backend fs objects bucket (required when SEAF_SERVER_STORAGE_TYPE=s3) S3_BLOCK_BUCKET S3 storage backend block objects bucket (required when SEAF_SERVER_STORAGE_TYPE=s3) S3_KEY_ID S3 storage backend key ID (required when SEAF_SERVER_STORAGE_TYPE=s3) S3_SECRET_KEY S3 storage backend secret key (required when SEAF_SERVER_STORAGE_TYPE=s3) S3_AWS_REGION Region of your buckets us-east-1 S3_HOST Host of your buckets (required when not use AWS) S3_USE_HTTPS Use HTTPS connections to S3 if enabled true S3_USE_V4_SIGNATURE Use the v4 protocol of S3 if enabled true S3_PATH_STYLE_REQUEST This option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is in virtual host style, such as https://bucketname.s3.amazonaws.com/object. But this style relies on advanced DNS server setup. So most self-hosted storage systems only implement the path style format. false S3_SSE_C_KEY A string of 32 characters can be generated by openssl rand -base64 24. It can be any 32-character long random string. It's required to use V4 authentication protocol and https if you enable SSE-C. 
(none) ENABLE_NOTIFICATION_SERVER Enable (true) or disable (false) notification feature for Seafile false NOTIFICATION_SERVER_URL The notification server url (none) MD_FILE_COUNT_LIMIT (only valid when deployed metadata server). The maximum number of files in a repository that the metadata feature allows. If the number of files in a repository exceeds this value, the metadata management function will not be enabled for the repository. For a repository with metadata management enabled, if the number of records in it reaches this value but there are still some files that are not recorded in metadata server, the metadata management of the unrecorded files will be skipped. 100000 NON_ROOT Run Seafile container without a root user false Easier to configure S3 for Seafile and its components
Since Seafile Pro 13.0, to make it easier to deploy Seafile's related extension components and other services in the future, a section is provided in .env to store the S3 configurations for Seafile and some extension components (such as SeaSearch and the metadata server). You can locate it by the title bar Storage configurations for S3.
S3 configurations in .env only support single S3 storage backend mode
The Seafile server only supports configuring S3 in .env for the single S3 storage backend mode (i.e., when SEAF_SERVER_STORAGE_TYPE=s3). If you would like to use another storage backend (e.g., Ceph, Swift) or settings that can only be made in seafile.conf (such as multiple storage backends), please set SEAF_SERVER_STORAGE_TYPE to multiple, and set MD_STORAGE_TYPE and SS_STORAGE_TYPE according to your configuration.
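If pwgen or openssl is not available, the required JWT_PRIVATE_KEY and S3_SSE_C_KEY values described above can also be generated with a short Python sketch (the helper names are illustrative, not part of Seafile):

```python
import base64
import secrets
import string

def gen_jwt_private_key(length=40):
    # Equivalent of `pwgen -s 40 1`: a secure random alphanumeric string.
    # Seafile only requires at least 32 characters.
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

def gen_sse_c_key():
    # Equivalent of `openssl rand -base64 24`: 24 random bytes
    # base64-encode to exactly 32 characters (no padding, since 24 % 3 == 0).
    return base64.b64encode(secrets.token_bytes(24)).decode("ascii")

print(gen_jwt_private_key())  # 40-character alphanumeric string
print(gen_sse_c_key())        # 32-character base64 string
```

Paste the printed values into JWT_PRIVATE_KEY and S3_SSE_C_KEY in .env.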
Finally, set the directory permissions of the Elasticsearch volume:
mkdir -p /opt/seafile-elasticsearch/data\nchmod 777 -R /opt/seafile-elasticsearch/data\n"},{"location":"setup/setup_pro_by_docker/#starting-the-docker-containers","title":"Starting the Docker Containers","text":"Run docker compose in detached mode:
docker compose up -d\n ERROR: Named volume \"xxx\" is used in service \"xxx\" but no declaration was found in the volumes section
You may encounter this problem when your Docker (or docker-compose) version is out of date. You can upgrade or reinstall the Docker service to solve this problem according to the Docker official documentation.
Note
You must run the above command in the directory containing the .env file. If the .env file is elsewhere, please run
docker compose --env-file /path/to/.env up -d\n Success
After starting the services, you can see the initialization progress by tracing the logs of container seafile (i.e., docker logs seafile -f)
---------------------------------\nThis is your configuration\n---------------------------------\n\n server name: seafile\n server ip/domain: seafile.example.com\n\n seafile data dir: /opt/seafile/seafile-data\n fileserver port: 8082\n\n database: create new\n ccnet database: ccnet_db\n seafile database: seafile_db\n seahub database: seahub_db\n database user: seafile\n\n\nGenerating seafile configuration ...\n\ndone\nGenerating seahub configuration ...\n\n----------------------------------------\nNow creating seafevents database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating ccnet database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seafile database tables ...\n\n----------------------------------------\n----------------------------------------\nNow creating seahub database tables ...\n\n----------------------------------------\n\ncreating seafile-server-latest symbolic link ... done\n\n-----------------------------------------------------------------\nYour seafile server configuration has been finished successfully.\n-----------------------------------------------------------------\n Then you will see the following messages, indicating that the Seafile server has started successfully:
Starting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\n----------------------------------------\nSuccessfully created seafile admin\n----------------------------------------\n\nSeahub is started\n\nDone.\n Finally, you can go to http://seafile.example.com to use Seafile.
A 502 Bad Gateway error means that the system has not yet completed the initialization
"},{"location":"setup/setup_pro_by_docker/#find-logs","title":"Find logs","text":"To view Seafile docker logs, please use the following command
docker compose logs -f\n The Seafile logs are under /shared/logs/seafile in the docker, or /opt/seafile-data/logs/seafile in the server that run the docker.
The system logs are under /shared/logs/var-log inside the container, or /opt/seafile-data/logs/var-log on the host server running the container.
If you have a seafile-license.txt license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile-data. If you have modified the path, save the license file under your custom path.
If the license file has a different name or cannot be read, the Seafile server will start in trial mode, limited to at most three users.
Then restart Seafile:
docker compose down\n\ndocker compose up -d\n"},{"location":"setup/setup_pro_by_docker/#seafile-directory-structure","title":"Seafile directory structure","text":""},{"location":"setup/setup_pro_by_docker/#path-optseafile-data","title":"Path /opt/seafile-data","text":"Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container; in our case we keep various log files and the upload directory outside. This allows you to rebuild containers easily without losing important information.
For example, the Seafile logs are in /opt/seafile-data/seafile/logs/seafile.log, and the container's /var/log is mapped out as well; you can find the nginx logs in /opt/seafile-data/logs/var-log/nginx/. The command docker container list should list the containers specified in the .env.
The directory layout of the Seafile container's volume should look as follows:
$ tree /opt/seafile-data -L 2\n/opt/seafile-data\n\u251c\u2500\u2500 logs\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 var-log\n\u251c\u2500\u2500 nginx\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 conf\n\u2514\u2500\u2500 seafile\n \u00a0\u00a0 \u251c\u2500\u2500 ccnet\n \u00a0\u00a0 \u251c\u2500\u2500 conf\n \u00a0\u00a0 \u251c\u2500\u2500 logs\n \u00a0\u00a0 \u251c\u2500\u2500 pro-data\n \u00a0\u00a0 \u251c\u2500\u2500 seafile-data\n \u00a0\u00a0 \u2514\u2500\u2500 seahub-data\n All Seafile config files are stored in /opt/seafile-data/seafile/conf. The nginx config file is in /opt/seafile-data/nginx/conf.
Any modification of a configuration file requires a restart of Seafile to take effect:
docker compose restart\n All Seafile log files are stored in /opt/seafile-data/seafile/logs whereas all other log files are in /opt/seafile-data/logs/var-log.
Follow the instructions in Backup and restore for Seafile Docker
"},{"location":"setup/setup_pro_by_docker/#garbage-collection","title":"Garbage Collection","text":"When files are deleted, the blocks comprising those files are not immediately removed as there may be other files that reference those blocks (due to the magic of deduplication). To remove them, Seafile requires a 'garbage collection' process to be run, which detects which blocks no longer used and purges them.
"},{"location":"setup/setup_pro_by_docker/#faq","title":"FAQ","text":""},{"location":"setup/setup_pro_by_docker/#seafile-service-and-container-maintenance","title":"Seafile service and container maintenance","text":"Q: If I want enter into the Docker container, which command I can use?
A: You can enter the Docker container using the command:
docker exec -it seafile /bin/bash\n Q: I forgot the Seafile admin email address/password, how do I create a new admin account?
A: You can create a new admin account by running
docker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh\n The Seafile service must be up when running the superuser command.
Q: If, for whatever reason, the installation fails, how do I start from a clean slate again?
A: Remove the directories /opt/seafile, /opt/seafile-data and /opt/seafile-mysql and start again.
Q: Something goes wrong during the start of the containers. How can I find out more?
A: You can view the docker logs using this command: docker compose logs -f.
Q: How does Seafile use caching?
A: Seafile uses a cache to improve performance in many situations. The cached content includes, but is not limited to, user session information, avatars, profiles, database records, etc. Since Seafile Docker 13, Redis is the default cache server, as it is required to support the new features (please refer to the upgrade notes). It is integrated in Seafile Docker 13 and can be configured directly via environment variables in .env (no additional settings are required by default).
Q: Is the Redis integrated in Seafile Docker safe? Does it have an access password?
A: Although the Redis integrated by Seafile Docker does not have a password set by default, it can only be accessed through the Docker private network and will not expose the service port externally. Of course, you can also set a password for it if necessary. You can set REDIS_PASSWORD in .env and remove the following comment markers in seafile-server.yml to set the integrated Redis' password:
services:\n ...\n redis:\n image: ${SEAFILE_REDIS_IMAGE:-redis}\n container_name: seafile-redis\n # remove the following comment markers\n command:\n - /bin/sh\n - -c\n - redis-server --requirepass \"$${REDIS_PASSWORD:?Variable is not set or empty}\"\n networks:\n - seafile-net\n ...\n Q: For some reason, I still have to use Memcached as my cache server. How can I do this?
A: If you still want to use Memcached (no longer provided since Seafile Docker 13), just follow the steps below:
Set CACHE_PROVIDER to memcached and modify the MEMCACHED_xxx variables in .env, then remove the redis part and the redis dependency in the seafile service section in seafile-server.yml. By the way, you can change the cache server after the service has started (by setting environment variables in .env), but the corresponding configuration files (e.g., seahub_settings.py, seafile.conf and seafevents.conf) will not be updated automatically. To avoid ambiguity, we recommend that you also update these configuration files.
The entire db service needs to be removed (or commented out) in seafile-server.yml if you would like to use an existing MySQL server; otherwise a redundant database service will be running.
service:\n\n # comment out or remove the entire `db` service\n #db:\n #image: ${SEAFILE_DB_IMAGE:-mariadb:10.11}\n #container_name: seafile-mysql\n # ... other parts in service `db`\n\n # do not change other services\n...\n What's more, you have to modify .env to set the MySQL-related fields correctly:
SEAFILE_MYSQL_DB_HOST=192.168.0.2\nSEAFILE_MYSQL_DB_PORT=3306\nINIT_SEAFILE_MYSQL_ROOT_PASSWORD=ROOT_PASSWORD\nSEAFILE_MYSQL_DB_USER=seafile # the user name of the user you like to use for Seafile server\nSEAFILE_MYSQL_DB_PASSWORD=PASSWORD # the password of the user you like to use for Seafile server\n Tip
INIT_SEAFILE_MYSQL_ROOT_PASSWORD is only needed during installation (i.e., the first-time deployment). After Seafile is installed, the user seafile will be used to connect to the MySQL server (with SEAFILE_MYSQL_DB_PASSWORD), and you can then remove INIT_SEAFILE_MYSQL_ROOT_PASSWORD.
Ceph is a scalable distributed storage system. It's recommended to use Ceph's S3 Gateway (RGW) to integrate with Seafile. Seafile can also use Ceph's RADOS object storage layer as a storage backend, but using RADOS requires linking with the librados library, which may introduce library incompatibility issues during deployment. Furthermore, the S3 Gateway provides an easier-to-manage HTTP-based interface. If you want to integrate with the S3 gateway, please refer to the \"Use S3-compatible Object Storage\" section in this documentation. The documentation below is for integrating with RADOS.
"},{"location":"setup/setup_with_ceph/#copy-ceph-conf-file-and-client-keyring","title":"Copy ceph conf file and client keyring","text":"Seafile acts as a client to Ceph/RADOS, so it needs to access ceph cluster's conf file and keyring. You have to copy these files from a ceph admin node's /etc/ceph directory to the seafile machine.
seafile-machine# sudo scp -r user@ceph-admin-node:/etc/ceph /etc\n"},{"location":"setup/setup_with_ceph/#install-and-enable-memcached","title":"Install and enable memcached","text":"For best performance, Seafile requires installing memcached or Redis and enabling caching for objects.
We recommend allocating at least 128MB of memory for the object cache.
"},{"location":"setup/setup_with_ceph/#install-python-ceph-library","title":"Install Python Ceph Library","text":"File search and WebDAV functions rely on Python Ceph library installed in the system.
sudo apt-get install python3-rados\n"},{"location":"setup/setup_with_ceph/#edit-seafile-configuration","title":"Edit seafile configuration","text":"Edit seafile.conf, add the following lines:
[block_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\npool = seafile-blocks\n\n[commit_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\npool = seafile-commits\n\n[fs_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\npool = seafile-fs\n You also need to add memory cache configurations
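As an example of the memory cache configuration mentioned above, a memcached section in seafile.conf typically looks like the following (the server address is a placeholder; adjust it to your deployment, and see the caching documentation for the Redis equivalent):

```ini
[memcached]
memcached_options = --SERVER=192.168.1.134 --POOL-MIN=10 --POOL-MAX=100
```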
It's required to create separate pools for commit, fs, and block objects.
ceph-admin-node# rados mkpool seafile-blocks\nceph-admin-node# rados mkpool seafile-commits\nceph-admin-node# rados mkpool seafile-fs\n Troubleshooting librados incompatibility issues
Since version 8.0, Seafile bundles librados from Ceph 16. On some systems you may find that Seafile fails to connect to your Ceph cluster. In such a case, you can usually solve it by removing the bundled librados libraries and using the ones installed in the OS.
To do this, you have to remove a few bundled libraries:
cd seafile-server-latest/seafile/lib\nrm librados.so.2 libstdc++.so.6 libnspr4.so\n"},{"location":"setup/setup_with_ceph/#use-arbitary-ceph-user","title":"Use arbitrary Ceph user","text":"The above configuration will use the default (client.admin) user to connect to Ceph. You may want to use some other Ceph user to connect. This is supported in Seafile. To specify the Ceph user, you have to add a ceph_client_id option to seafile.conf, as follows:
[block_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Specify the Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-blocks\n\n[commit_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Specify the Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-commits\n\n[fs_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Specify the Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-fs\n\n# Memcached or Redis configs\n......\n You can create a Ceph user for Seafile on your Ceph cluster like this:
ceph auth add client.seafile \\\n mds 'allow' \\\n mon 'allow r' \\\n osd 'allow rwx pool=seafile-blocks, allow rwx pool=seafile-commits, allow rwx pool=seafile-fs'\n You also have to add this user's keyring path to /etc/ceph/ceph.conf:
[client.seafile]\nkeyring = <path to user's keyring file>\n"},{"location":"setup/setup_with_multiple_storage_backends/","title":"Multiple Storage Backend","text":"There are some use cases where supporting multiple storage backends in the Seafile server is needed, such as:
Store different types of files into different storage backends:
Combine multiple storage backends to extend storage scalability:
About data of library
To use this feature, you need to:
Set SEAF_SERVER_STORAGE_TYPE=multiple in .env and define the storage classes in seafile.conf. As Seafile server versions before 6.3 don't support multiple storage classes, you have to explicitly enable this new feature and define storage classes with a different syntax than the one used for a single storage backend.
By default, Seafile does not enable multiple storage classes, so you have to create a configuration file for the storage classes, then specify it and enable the feature in seafile.conf:
Create the storage classes file:
nano /opt/seafile-data/seafile/conf/seafile_storage_classes.json\n For an example of this file, please refer to the next section
Modify seafile.conf
[storage]\nenable_storage_classes = true\nstorage_classes_file = /shared/conf/seafile_storage_classes.json\n enable_storage_classes: If this is set to true, the storage class feature is enabled. You must define the storage classes in a JSON file provided in the next configuration option. storage_classes_file: Specifies the path for the JSON file that contains the storage class definition.
The path of storage_classes_file inside the Seafile container usually differs from the path on the host, so we suggest you put this file into Seafile's configuration directory and use /shared/conf instead of /opt/seafile-data/seafile/conf in seafile.conf. Otherwise, you have to add another persistent volume mapping in seafile-server.yml. If your Seafile server is not deployed with Docker, we still suggest you put this file into the Seafile configuration directory.
Variables Descriptionsstorage_id A unique internal string ID used to identify the storage class. It is not visible to users. For example, \"primary storage\". name A user-visible name for the storage class. is_default Indicates whether this storage class is the default one. commits The storage used for storing commit objects for this class. fs The storage used for storing fs objects for this class. blocks The storage used for storing block objects for this class. Note
is_default is effective in two cases (see the mapping policies below). commits, fs, and blocks can be stored in different storages; this provides the most flexible way to define storage classes (e.g., a file system, Ceph, or S3). Here is an example, which uses the local file system, S3 (default), Swift and Ceph at the same time.
[\n {\n \"storage_id\": \"hot_storage\",\n \"name\": \"Hot Storage\",\n \"is_default\": true,\n \"commits\": {\n \"backend\": \"s3\",\n \"bucket\": \"seafile-commits\",\n \"key\": \"<your key>\",\n \"key_id\": \"<your key id>\"\n },\n \"fs\": {\n \"backend\": \"s3\",\n \"bucket\": \"seafile-fs\",\n \"key\": \"<your key>\",\n \"key_id\": \"<your key id>\"\n },\n \"blocks\": {\n \"backend\": \"s3\",\n \"bucket\": \"seafile-blocks\",\n \"key\": \"<your key>\",\n \"key_id\": \"<your key id>\"\n }\n },\n {\n \"storage_id\": \"cold_storage\",\n \"name\": \"Cold Storage\",\n \"is_default\": false,\n \"fs\": {\n \"backend\": \"fs\",\n \"dir\": \"/share/seafile/seafile-data\" // /opt/seafile/seafile-data for binary-install Seafile\n },\n \"commits\": {\n \"backend\": \"fs\",\n \"dir\": \"/share/seafile/seafile-data\"\n },\n \"blocks\": {\n \"backend\": \"fs\",\n \"dir\": \"/share/seafile/seafile-data\"\n }\n },\n {\n \"storage_id\": \"swift_storage\",\n \"name\": \"Swift Storage\",\n \"fs\": {\n \"backend\": \"swift\",\n \"tenant\": \"<your tenant>\",\n \"user_name\": \"<your username>\",\n \"password\": \"<your password>\",\n \"container\": \"seafile-commits\",\n \"auth_host\": \"<Swift auth host>:<port, default 5000>\",\n \"auth_ver\": \"v2.0\"\n },\n \"commits\": {\n \"backend\": \"swift\",\n \"tenant\": \"<your tenant>\",\n \"user_name\": \"<your username>\",\n \"password\": \"<your password>\",\n \"container\": \"seafile-commits\",\n \"auth_host\": \"<Swift auth host>:<port, default 5000>\",\n \"auth_ver\": \"v2.0\"\n },\n \"blocks\": {\n \"backend\": \"swift\",\n \"tenant\": \"<your tenant>\",\n \"user_name\": \"<your username>\",\n \"password\": \"<your password>\",\n \"container\": \"seafile-commits\",\n \"auth_host\": \"<Swift auth host>:<port, default 5000>\",\n \"auth_ver\": \"v2.0\",\n \"region\": \"RegionTwo\"\n }\n },\n {\n \"storage_id\": \"ceph_storage\",\n \"name\": \"ceph Storage\",\n \"fs\": {\n \"backend\": \"ceph\",\n \"ceph_config\": 
\"/etc/ceph/ceph.conf\",\n \"pool\": \"seafile-fs\"\n },\n \"commits\": {\n \"backend\": \"ceph\",\n \"ceph_config\": \"/etc/ceph/ceph.conf\",\n \"pool\": \"seafile-commits\"\n },\n \"blocks\": {\n \"backend\": \"ceph\",\n \"ceph_config\": \"/etc/ceph/ceph.conf\",\n \"pool\": \"seafile-blocks\"\n }\n }\n]\n Tip
As you may have seen, the commits, fs and blocks information syntax is similar to what is used in [commit_object_backend], [fs_object_backend] and [block_backend] section of seafile.conf for a single backend storage. You can refer to the detailed syntax in the documentation for the storage you use (e.g., S3 Storage for S3).
If you use file system as storage for fs, commits or blocks, you must explicitly provide the path for the seafile-data directory. The objects will be stored in storage/commits, storage/fs, storage/blocks under this path.
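Because JSON allows neither comments nor trailing commas, a malformed seafile_storage_classes.json is a common source of errors. The structure can be sanity-checked before restarting with a minimal Python sketch (Seafile performs its own, stricter parsing; the field names follow the table above):

```python
import json

# Fields every storage class must define, per the table above.
REQUIRED = {"storage_id", "name", "commits", "fs", "blocks"}

def validate_storage_classes(text):
    classes = json.loads(text)  # raises ValueError on comments/trailing commas
    assert isinstance(classes, list), "top level must be a JSON array"
    for cls in classes:
        missing = REQUIRED - set(cls)
        assert not missing, f"{cls.get('storage_id')}: missing {missing}"
    defaults = sum(bool(c.get("is_default")) for c in classes)
    assert defaults <= 1, "at most one default storage class"
    return len(classes)

sample = """[
  {"storage_id": "hot_storage", "name": "Hot Storage", "is_default": true,
   "commits": {"backend": "s3", "bucket": "seafile-commits"},
   "fs": {"backend": "s3", "bucket": "seafile-fs"},
   "blocks": {"backend": "s3", "bucket": "seafile-blocks"}}
]"""
print(validate_storage_classes(sample))  # → 1 storage class parsed
```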
Library mapping policies decide the storage class a library uses. Currently we provide 3 policies for 3 different use cases:
The storage class of a library is decided on creation and stored in a database table. The storage class of a library won't change if the mapping policy is changed later.
Before choosing your mapping policy, you need to enable the storage classes feature in seahub_settings.py:
ENABLE_STORAGE_CLASSES = True\n"},{"location":"setup/setup_with_multiple_storage_backends/#user-chosen","title":"User Chosen","text":"This policy lets the users choose which storage class to use when creating a new library. The users can select any storage class that's been defined in the JSON file.
To use this policy, add the following option in seahub_settings.py:
STORAGE_CLASS_MAPPING_POLICY = 'USER_SELECT'\n If you enable storage class support but don't explicitly set STORAGE_CLASS_MAPPING_POLIICY in seahub_settings.py, this policy is used by default.
Due to storage cost or management considerations, sometimes a system admin wants to make different types of users use different storage backends (or classes). You can configure a user's storage classes based on their role.
A new option storage_ids is added to the role configuration in seahub_settings.py to assign storage classes to each role. If only one storage class is assigned to a role, the users with this role cannot choose a storage class for their libraries; if more than one class is assigned, the users can choose among them. If no storage class is assigned to a role, the default class specified in the JSON file will be used.
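The selection rule described above can be sketched as follows (illustrative Python, not Seafile's actual code):

```python
def storage_choices(role_conf, default_class):
    # One assigned class -> the user is forced to it; several -> the user
    # chooses among them; none -> fall back to the JSON default class.
    ids = role_conf.get("storage_ids", [])
    return ids if ids else [default_class]

print(storage_choices({"storage_ids": ["hot_storage"]}, "hot_storage"))
print(storage_choices({"storage_ids": ["hot_storage", "cold_storage"]}, "hot_storage"))
print(storage_choices({}, "hot_storage"))  # default class from the JSON file
```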
Here are the sample options in seahub_settings.py to use this policy:
ENABLE_STORAGE_CLASSES = True\nSTORAGE_CLASS_MAPPING_POLICY = 'ROLE_BASED'\n\nENABLED_ROLE_PERMISSIONS = {\n 'default': {\n 'can_add_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_invite_guest': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'storage_ids': ['old_version_id', 'hot_storage', 'cold_storage', 'a_storage'],\n },\n 'guest': {\n 'can_add_repo': True,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_invite_guest': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'storage_ids': ['hot_storage', 'cold_storage'],\n },\n}\n"},{"location":"setup/setup_with_multiple_storage_backends/#library-id-based-mapping","title":"Library ID Based Mapping","text":"This policy maps libraries to storage classes based on their library IDs. The ID of a library is a UUID. In this way, the data in the system can be evenly distributed among the storage classes.
Note
This policy is not designed to be a complete distributed storage solution. It doesn't handle automatic migration of library data between storage classes. If you add more storage classes to the configuration, existing libraries will stay in their original storage classes, while new libraries can be distributed among the new storage classes (backends). You still have to plan the total storage capacity of your system at the beginning.
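The even distribution can be illustrated with a quick simulation. This is not Seafile's internal algorithm, just a sketch of the idea that hashing a uniformly random UUID spreads libraries roughly evenly across backends:

```python
import hashlib
import uuid

backends = ["backend_a", "backend_b", "backend_c"]  # hypothetical storage IDs

def backend_for(repo_id):
    # Hash the library UUID and map it onto one of the backends.
    digest = hashlib.sha256(repo_id.encode()).hexdigest()
    return backends[int(digest, 16) % len(backends)]

counts = {b: 0 for b in backends}
for _ in range(3000):
    counts[backend_for(str(uuid.uuid4()))] += 1
print(counts)  # each backend receives roughly 1000 of the 3000 libraries
```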
To use this policy, first add the following option in seahub_settings.py:
STORAGE_CLASS_MAPPING_POLICY = 'REPO_ID_MAPPING'\n Then you can add the option for_new_library to the backends which are expected to store new libraries in the JSON file:
[\n {\n \"storage_id\": \"new_backend\",\n \"name\": \"New store\",\n \"for_new_library\": true,\n \"is_default\": false,\n \"fs\": {\n \"backend\": \"fs\", \n \"dir\": \"/storage/seafile/new-data\"\n },\n \"commits\": {\n \"backend\": \"fs\", \n \"dir\": \"/storage/seafile/new-data\"\n },\n \"blocks\": {\n \"backend\": \"fs\", \n \"dir\": \"/storage/seafile/new-data\"\n }\n }\n]\n"},{"location":"setup/setup_with_multiple_storage_backends/#multiple-storage-backend-data-migration","title":"Multiple Storage Backend Data Migration","text":"Migration from S3
Since version 11, when you migrate from S3 to other storage servers, you have to use the V4 authentication protocol. This is because version 11 upgrades to the Boto3 library, which fails to list objects from S3 when it's configured to use the V2 authentication protocol.
Run the migrate-repo.sh script to migrate library data between different storage backends.
./migrate-repo.sh [repo_id] origin_storage_id destination_storage_id\n repo_id is optional; if it is not specified, all libraries will be migrated.
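When only some libraries need to move, a small wrapper can invoke the script once per ID. A sketch: the repo IDs and storage IDs below are placeholders, and the runner argument only exists so the loop can be exercised without the real script present:

```python
import subprocess

def migrate_repos(repo_ids, origin, destination, runner=None):
    """Call migrate-repo.sh once per library ID (sketch, not part of Seafile)."""
    run = runner or (lambda args: subprocess.run(args, check=True))
    for repo_id in repo_ids:
        run(["./migrate-repo.sh", repo_id, origin, destination])

# Real usage (placeholder values):
# migrate_repos(["4c731e5c-f589-4eaa-889f-14c00d4893cb"],
#               "old_backend", "new_backend")
```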
Specify a path prefix
You can set the OBJECT_LIST_FILE_PATH environment variable to specify a path prefix to store the migrated object list before running the migration script.
For example:
export OBJECT_LIST_FILE_PATH=/opt/test\n This will create three files in the specified path (/opt):
test_4c731e5c-f589-4eaa-889f-14c00d4893cb.fs, test_4c731e5c-f589-4eaa-889f-14c00d4893cb.commits and test_4c731e5c-f589-4eaa-889f-14c00d4893cb.blocks. Setting the OBJECT_LIST_FILE_PATH environment variable has two purposes:
Run the remove-objs.sh script (before migration, you need to set the OBJECT_LIST_FILE_PATH environment variable) to delete all objects in a library in the specified storage backend.
./remove-objs.sh repo_id storage_id\n"},{"location":"setup/setup_with_s3/","title":"Setup With S3 Storage","text":"From Seafile 13, there are two ways to configure S3 storage (single S3 storage backend) for Seafile server:
via environment variables in .env, or via the config file (seafile.conf). Setup note for binary package deployments (Pro)
If your Seafile server is deployed from binary packages, you have to complete the following steps first:
Install boto3 on your machine:
sudo pip install boto3\n Install and configure memcached or Redis.
For best performance, Seafile requires a memory cache for objects. We recommend allocating at least 128 MB of memory to memcached or Redis.
The configuration options differ between S3 storage providers, so we describe them in separate sections. You also need to add memory cache configurations.
From Seafile 13, S3 can also be configured through environment variables, which is more convenient. You can refer to the detailed description of this part in the introduction of the .env file. Generally,
you need to create at least 3 buckets (for S3_COMMIT_BUCKET, S3_FS_BUCKET and S3_BLOCK_BUCKET), set SEAF_SERVER_STORAGE_TYPE to s3, and modify .env according to the following table:

| Variable | Description | Default |
| --- | --- | --- |
| S3_COMMIT_BUCKET | S3 storage backend commit objects bucket | (required) |
| S3_FS_BUCKET | S3 storage backend fs objects bucket | (required) |
| S3_BLOCK_BUCKET | S3 storage backend block objects bucket | (required) |
| S3_KEY_ID | S3 storage backend key ID | (required) |
| S3_SECRET_KEY | S3 storage backend secret key | (required) |
| S3_AWS_REGION | Region of your buckets | us-east-1 |
| S3_HOST | Host of your buckets | (required when not using AWS) |
| S3_USE_HTTPS | Use HTTPS connections to S3 if enabled | true |
| S3_USE_V4_SIGNATURE | Use the v4 protocol of S3 if enabled | true |
| S3_PATH_STYLE_REQUEST | This option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is in virtual host style, such as https://bucketname.s3.amazonaws.com/object, but this style relies on advanced DNS server setup, so most self-hosted storage systems only implement the path style format. | false |
| S3_SSE_C_KEY | A string of 32 characters, which can be generated by openssl rand -base64 24. It can be any 32-character long random string. Using the V4 authentication protocol and HTTPS is required if you enable SSE-C. | (none) |

Bucket naming conventions
Whether you use AWS or any other S3-compatible object storage, we recommend that you follow the S3 naming rules. Before you create buckets on S3, please read the S3 naming rules first. In particular, do not use capital letters in bucket names (no camel-case naming such as MyCommitObjects).
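A simplified pre-flight check for the most common naming mistakes can be sketched in a few lines. Assumption: this regex covers only the basic rules (length 3 to 63; lowercase letters, digits, hyphens and dots; starts and ends with a letter or digit) and not every corner case of the full S3 specification:

```python
import re

# Simplified subset of the S3 bucket naming rules.
BUCKET_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def looks_like_valid_bucket(name: str) -> bool:
    return bool(BUCKET_RE.match(name))

# Lowercase, hyphenated names pass; camel-case names do not.
```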
About S3_SSE_C_KEY
S3_SSE_C_KEY is a string of 32 characters.
You can generate sse_c_key with the following command. Note that the key doesn't have to be base64 encoded; it can be any 32-character long random string. The example below just shows one possible way to generate such a key.
openssl rand -base64 24\n However, if you have existing data in your S3 storage bucket, turning on the above configuration will make your data inaccessible. That's because Seafile server doesn't support mixing encrypted and non-encrypted objects in the same bucket. You have to create a new bucket and migrate your data to it by following the storage backend migration documentation.
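If openssl is not at hand, the same kind of key can be produced with Python's standard library: 24 random bytes base64-encode to exactly 32 characters, matching what openssl rand -base64 24 prints:

```python
import base64
import os

# 24 random bytes -> 32 base64 characters (no padding, since 24 % 3 == 0).
key = base64.b64encode(os.urandom(24)).decode("ascii")
print(key)
```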
Other extensions with S3 support
In addition to Seafile server, the following extensions (if already installed) will share the same S3 authorization information in .env with Seafile server:
SS_STORAGE_TYPE=s3 and S3_SS_BUCKET (SeaSearch); MD_STORAGE_TYPE=s3 and S3_MD_BUCKET. AWS: SEAF_SERVER_STORAGE_TYPE=s3\nS3_COMMIT_BUCKET=my-commit-objects\nS3_FS_BUCKET=my-fs-objects\nS3_BLOCK_BUCKET=my-block-objects\nS3_KEY_ID=your-key-id\nS3_SECRET_KEY=your-secret-key\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=false\nS3_AWS_REGION=eu-central-1\nS3_HOST=\nS3_USE_HTTPS=true\n Exoscale: SEAF_SERVER_STORAGE_TYPE=s3\nS3_COMMIT_BUCKET=my-commit-objects\nS3_FS_BUCKET=my-fs-objects\nS3_BLOCK_BUCKET=my-block-objects\nS3_KEY_ID=your-key-id\nS3_SECRET_KEY=your-secret-key\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=true\nS3_AWS_REGION=eu-central-1 # will be ignored when S3_HOST is specified\nS3_HOST=sos-de-fra-1.exo.io\nS3_USE_HTTPS=true\n Hetzner: SEAF_SERVER_STORAGE_TYPE=s3\nS3_COMMIT_BUCKET=my-commit-objects\nS3_FS_BUCKET=my-fs-objects\nS3_BLOCK_BUCKET=my-block-objects\nS3_KEY_ID=your-key-id\nS3_SECRET_KEY=your-secret-key\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=true\nS3_AWS_REGION=eu-central-1 # will be ignored when S3_HOST is specified\nS3_HOST=fsn1.your-objectstorage.com\nS3_USE_HTTPS=true\n There are other S3-compatible cloud storage providers on the market, such as Backblaze and Wasabi. Configuration for those providers is just a bit different from AWS. We cannot guarantee that the following configuration works for all providers; if you have problems, please contact our support
SEAF_SERVER_STORAGE_TYPE=s3\nS3_COMMIT_BUCKET=my-commit-objects\nS3_FS_BUCKET=my-fs-objects\nS3_BLOCK_BUCKET=my-block-objects\nS3_KEY_ID=your-key-id\nS3_SECRET_KEY=your-secret-key\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=true\nS3_AWS_REGION=eu-central-1 # will be ignored when S3_HOST is specified\nS3_HOST=<access endpoint for storage provider>\nS3_USE_HTTPS=true\n Many self-hosted object storage systems are now compatible with the S3 API, such as OpenStack Swift, Ceph's RADOS Gateway and Minio. You can use these S3-compatible storage systems as backend for Seafile. Here is an example config:
SEAF_SERVER_STORAGE_TYPE=s3\nS3_COMMIT_BUCKET=my-commit-objects\nS3_FS_BUCKET=my-fs-objects\nS3_BLOCK_BUCKET=my-block-objects\nS3_KEY_ID=your-key-id\nS3_SECRET_KEY=your-secret-key\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=true\nS3_AWS_REGION=eu-central-1 # will be ignored when S3_HOST is specified\nS3_HOST=<your s3 api endpoint host>:<your s3 api endpoint port>\nS3_USE_HTTPS=true # according to your S3 configuration\n"},{"location":"setup/setup_with_s3/#setup-with-config-file","title":"Setup with config file","text":"Seafile configures S3 storage by adding or modifying the following section in seafile.conf:
[xxx_object_backend]\nname = s3\nbucket = my-xxx-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\nuse_https = true\n... ; other optional configurations\n As when configuring via .env, you have to create at least 3 buckets for Seafile, corresponding to the sections commit_object_backend, fs_object_backend and block_backend. For the configuration of each backend section, please refer to the following table:
| Option | Description |
| --- | --- |
| bucket | Bucket name for commit, fs, and block objects. Make sure it follows the S3 naming rules (see the notes below the table). |
| key_id | The key_id is required to authenticate you to S3. You can find it in the "security credentials" section on your AWS account page or from your storage provider. |
| key | The key is required to authenticate you to S3. You can find it in the "security credentials" section on your AWS account page or from your storage provider. |
| use_v4_signature | There are two versions of authentication protocols that can be used with S3 storage: Version 2 (older, may still be supported by some regions) and Version 4 (current, used by most regions). If you don't set this option, Seafile will use the v2 protocol. Using the v4 protocol is suggested. |
| use_https | Use HTTPS to connect to S3. Using HTTPS is recommended. |
| aws_region | (Optional) If you use the v4 protocol and AWS S3, set this option to the region you chose when you created the buckets. If it's not set and you're using the v4 protocol, Seafile will use us-east-1 as the default. This option is ignored if you use the v2 protocol. |
| host | (Optional) The endpoint by which you access the storage service. Usually it starts with the region name. You are required to provide the host address if you use a storage provider other than AWS; otherwise Seafile will use AWS's address (i.e., s3.us-east-1.amazonaws.com). |
| sse_c_key | (Optional) A string of 32 characters, which can be generated by openssl rand -base64 24. It can be any 32-character long random string. Using the V4 authentication protocol and HTTPS is required if you enable SSE-C. |
| path_style_request | (Optional) This option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is in virtual host style, such as https://bucketname.s3.amazonaws.com/object. But this style relies on advanced DNS server setup. |
Most self-hosted storage systems therefore only implement the path style format, so we recommend setting this option to true for self-hosted storage."},{"location":"setup/setup_with_s3/#example-configurations_1","title":"Example configurations","text":"Example configurations are given below for AWS, Exoscale, Hetzner, other public hosted S3 storage, and self-hosted S3 storage. AWS: [commit_object_backend]\nname = s3\nbucket = my-commit-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = eu-central-1\nuse_https = true\n\n[fs_object_backend]\nname = s3\nbucket = my-fs-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = eu-central-1\nuse_https = true\n\n[block_backend]\nname = s3\nbucket = my-block-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = eu-central-1\nuse_https = true\n Exoscale: [commit_object_backend]\nname = s3\nbucket = your-bucket-name\nhost = sos-de-fra-1.exo.io\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n\n[fs_object_backend]\nname = s3\nbucket = your-bucket-name\nhost = sos-de-fra-1.exo.io\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n\n[block_backend]\nname = s3\nbucket = your-bucket-name\nhost = sos-de-fra-1.exo.io\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n Hetzner: [commit_object_backend]\nname = s3\nbucket = your-bucket-name\nhost = fsn1.your-objectstorage.com\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n\n[fs_object_backend]\nname = s3\nbucket = your-bucket-name\nhost = fsn1.your-objectstorage.com\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n\n[block_backend]\nname = s3\nbucket = your-bucket-name\nhost = fsn1.your-objectstorage.com\nkey_id = ...\nkey = ...\nuse_https = true\npath_style_request = true\n There are other S3-compatible cloud storage providers on the market, such as Backblaze and Wasabi. Configuration for those providers is just a bit different from AWS. 
We cannot guarantee that the following configuration works for all providers; if you have problems, please contact our support
[commit_object_backend]\nname = s3\nbucket = my-commit-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\n# v2 authentication protocol will be used if not set\nuse_v4_signature = true\n# required for v4 protocol. ignored for v2 protocol.\naws_region = <region name for storage provider>\nuse_https = true\n\n[fs_object_backend]\nname = s3\nbucket = my-fs-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = <region name for storage provider>\nuse_https = true\n\n[block_backend]\nname = s3\nbucket = my-block-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = <region name for storage provider>\nuse_https = true\n Many self-hosted object storage systems are now compatible with the S3 API, such as OpenStack Swift, Ceph's RADOS Gateway and Minio. You can use these S3-compatible storage systems as backend for Seafile. Here is an example config:
[commit_object_backend]\nname = s3\nbucket = my-commit-objects\nkey_id = your-key-id\nkey = your-secret-key\nhost = <your s3 api endpoint host>:<your s3 api endpoint port>\npath_style_request = true\nuse_v4_signature = true\nuse_https = true\n\n[fs_object_backend]\nname = s3\nbucket = my-fs-objects\nkey_id = your-key-id\nkey = your-secret-key\nhost = <your s3 api endpoint host>:<your s3 api endpoint port>\npath_style_request = true\nuse_v4_signature = true\nuse_https = true\n\n[block_backend]\nname = s3\nbucket = my-block-objects\nkey_id = your-key-id\nkey = your-secret-key\nhost = <your s3 api endpoint host>:<your s3 api endpoint port>\npath_style_request = true\nuse_v4_signature = true\nuse_https = true\n Use server-side encryption with customer-provided keys (SSE-C) in Seafile
Since Pro 11.0, you can use SSE-C with S3. Add the sse_c_key option to seafile.conf (as shown in the variables table above):
[commit_object_backend]\nname = s3\n......\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\n\n[fs_object_backend]\nname = s3\n......\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\n\n[block_backend]\nname = s3\n......\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\n"},{"location":"setup/setup_with_s3/#run-and-test","title":"Run and Test","text":"Now you can start Seafile and test
"},{"location":"setup/setup_with_swift/","title":"Setup With OpenStack Swift","text":"This backend uses the native Swift API. Previously, users could only use the S3-compatibility layer of Swift; that approach is now obsolete.
Since version 6.3, OpenStack Swift v3.0 API is supported.
"},{"location":"setup/setup_with_swift/#prepare","title":"Prepare","text":"To setup Seafile Professional Server with Swift:
Edit seafile.conf, add the following lines:
[block_backend]\nname = swift\ntenant = yourTenant\nuser_name = user\npassword = secret\ncontainer = seafile-blocks\nauth_host = 192.168.56.31:5000\nauth_ver = v3.0\nregion = yourRegion\n\n[commit_object_backend]\nname = swift\ntenant = yourTenant\nuser_name = user\npassword = secret\ncontainer = seafile-commits\nauth_host = 192.168.56.31:5000\nauth_ver = v3.0\nregion = yourRegion\n\n[fs_object_backend]\nname = swift\ntenant = yourTenant\nuser_name = user\npassword = secret\ncontainer = seafile-fs\nauth_host = 192.168.56.31:5000\nauth_ver = v3.0\nregion = yourRegion\n You also need to add memory cache configurations
The above config is just an example. You should replace the options according to your own environment.
Seafile supports Swift with Keystone as the authentication mechanism. The auth_host option is the address and port of the Keystone service. The region option is used to select the publicURL; if you don't configure it, the first publicURL in the returned authentication information is used.
Since Professional Edition 6.2.1, Seafile also supports Tempauth and Swauth. The auth_ver option should be set to v1.0; tenant and region are no longer needed.
It's required to create separate containers for commit, fs, and block objects.
"},{"location":"setup/setup_with_swift/#use-https-connections-to-swift","title":"Use HTTPS connections to Swift","text":"Since Pro 5.0.4, you can use HTTPS connections to Swift. Add the following options to seafile.conf:
[commit_object_backend]\nname = swift\n......\nuse_https = true\n\n[fs_object_backend]\nname = swift\n......\nuse_https = true\n\n[block_backend]\nname = swift\n......\nuse_https = true\n Because the server package is built on CentOS 6, if you're using Debian/Ubuntu you have to copy the system CA bundle to CentOS's CA bundle path. Otherwise Seafile can't find the CA bundle and the SSL connection will fail.
sudo mkdir -p /etc/pki/tls/certs\nsudo cp /etc/ssl/certs/ca-certificates.crt /etc/pki/tls/certs/ca-bundle.crt\nsudo ln -s /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/cert.pem\n"},{"location":"setup/setup_with_swift/#run-and-test","title":"Run and Test","text":"Now you can start Seafile by ./seafile.sh start and ./seahub.sh start and visit the website.
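To double-check where your system's OpenSSL expects the CA bundle (and hence whether the copy above landed in the right place), Python's standard ssl module can report the compiled-in default paths:

```python
import ssl

# Print the default certificate locations this OpenSSL build was compiled with.
paths = ssl.get_default_verify_paths()
print(paths.openssl_cafile)  # the CA bundle file OpenSSL looks for
print(paths.openssl_capath)  # the CA directory OpenSSL looks for
```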
This page shows the minimal requirements of Seafile.
About the system requirements
The system requirements in this document are the minimum hardware requirements suggested for smooth operation of Seafile (network connectivity is not discussed here). Unless otherwise specified, they apply to all deployment scenarios; for binary installations, however, the libraries we provide in the documents only support the following operating systems:
Important: Information about Docker-based deployment integration services
For each case, we list the services integrated by a standard Docker installation. If these services are already installed, or you do not need them in your deployment, refer to the corresponding documentation and disable them in the Docker compose files. However, we do not recommend reducing the system resources below our suggestions, unless otherwise specified.
However, if you use other installation methods (e.g., binary deployment, K8S deployment), you have to make sure these services are installed, because those methods do not include their installation.
If you need to install other extensions not included here (e.g., OnlyOffice), you should increase the system requirements appropriately above our recommendations.
CPU and Memory requirements:
| Deployment Scenarios | CPU Requirements | Memory Requirements | Indexer / Search Engine |
| --- | --- | --- | --- |
| Docker deployment | 4 cores | 4 GB | Default |
| All | 4 cores | 4 GB | With existing ElasticSearch service on the same machine / node |
| All | 2 cores | 2 GB | With existing ElasticSearch service on another machine / node |
| All | 2 cores | 2 GB | SeaSearch as the search engine instead of ElasticSearch |

Hard disk requirements: more than 50 GB is recommended
More details on the file indexer used in Seafile PE
By default, Seafile Pro will use Elasticsearch as the file indexer.
Please make sure the mmapfs counts do not cause exceptions like out of memory; the limit can be increased with the following command (see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html for further details):
sysctl -w vm.max_map_count=262144 #run as root\n or modify /etc/sysctl.conf and reboot to set this value permanently:
nano /etc/sysctl.conf\n\n# modify vm.max_map_count\nvm.max_map_count=262144\n If your machine does not meet these requirements, 2 cores and 2 GB RAM are the minimum, provided you choose one of the following two approaches after the first-time deployment:
Use SeaSearch, a lightweight search engine built on the open source search engine ZincSearch, as the indexer
Deploy Elasticsearch on another machine, and modify es_host and es_port in seafevents.conf
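Coming back to the vm.max_map_count setting above: a quick way to verify the effective value is to read it from /proc on Linux (equivalent to sysctl -n vm.max_map_count). A small sketch:

```python
def parse_map_count(text: str) -> int:
    """Parse the single integer that /proc/sys/vm/max_map_count contains."""
    return int(text.strip())

def current_map_count(path: str = "/proc/sys/vm/max_map_count") -> int:
    with open(path) as f:
        return parse_map_count(f.read())

# Example check against the recommended minimum (Linux only):
# assert current_map_count() >= 262144
```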
More details about the number of nodes
More suggestions in Seafile cluster
We assume you have already deployed Redis (or alternatively Memcached, though Redis is recommended), MariaDB, and a file indexer (e.g., ElasticSearch) on separate machines, and that you use S3-like object storage.
Generally, when deploying Seafile in a cluster, we recommend that you use a storage backend (such as AWS S3) to store Seafile data. However, according to the Seafile image startup rules and K8S persistent storage strategy, you still need to prepare a persistent directory for configuring the startup of the Seafile container.
Since Seafile 12.0, all reverse proxy, HTTPS, etc. handling for single-node Docker-based deployments is done by Caddy. If you need to use another reverse proxy service, you can refer to this document to modify the relevant configuration files.
"},{"location":"setup/use_other_reverse_proxy/#services-that-require-reverse-proxy","title":"Services that require reverse proxy","text":"Before making changes to the configuration files, you have to know the services used by Seafile and related components (Table 1 hereafter).
Tip
The services shown in the table below are all based on the single-node integrated deployment in accordance with the Seafile official documentation.
If these services are deployed in standalone mode (such as seadoc and notification-server), or deployed in the official documentation of third-party plugins (such as onlyoffice and collabora), you can skip modifying the configuration files of these services (because Caddy is not used as a reverse proxy for such deployment approaches).
If you have not integrated the services in Table 1, please install them in standalone mode, or by following the official documentation of the third-party plugins, when you need these services.
| YML | Service | Suggested exposed port | Service listen port | Requires WebSocket |
| --- | --- | --- | --- | --- |
| seafile-server.yml | seafile | 80 | 80 | No |
| seadoc.yml | seadoc | 8888 | 80 | Yes |
| notification-server.yml | notification-server | 8083 | 8083 | Yes |
| collabora.yml | collabora | 6232 | 9980 | No |
| onlyoffice.yml | onlyoffice | 6233 | 80 | No |
| thumbnail-server.yml | thumbnail | 8084 | 80 | No |
"},{"location":"setup/use_other_reverse_proxy/#modify-yml-files","title":"Modify YML files","text":"Refer to Table 1 for the related service exposed ports. Add a ports section for the corresponding services
services:\n <the service need to be modified>:\n ...\n ports:\n - \"<Suggest exposed port>:<Service listen port>\"\n Delete all fields related to Caddy reverse proxy (in label section)
Tip
Some .yml files (e.g., collabora.yml) also have port-exposing information for Caddy at the top of the file, which also needs to be removed.
We take seafile-server.yml for example (Pro edition):
services:\n # ... other services\n\n seafile:\n image: ${SEAFILE_IMAGE:-seafileltd/seafile-pro-mc:13.0-latest}\n container_name: seafile\n ports:\n - \"80:80\"\n volumes:\n - ${SEAFILE_VOLUME:-/opt/seafile-data}:/shared\n environment:\n ... # environment variables map, do not change\n\n # please remove the `label` section\n #label: ... <- remove this section\n\n depends_on:\n ... # dependencies, do not change\n ...\n\n# ... other options\n"},{"location":"setup/use_other_reverse_proxy/#add-reverse-proxy-for-related-services","title":"Add reverse proxy for related services","text":"Modify nginx.conf and add a reverse proxy for the services seafile and seadoc:
Note
If your proxy server's host is not the same as the host Seafile is deployed on, please replace 127.0.0.1 with your Seafile server's host
location / {\n proxy_pass http://127.0.0.1:80;\n proxy_read_timeout 310s;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header Connection \"\";\n proxy_http_version 1.1;\n\n client_max_body_size 0;\n}\n location /sdoc-server/ {\n proxy_pass http://127.0.0.1:8888/;\n proxy_redirect off;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n\n client_max_body_size 100m;\n}\n\nlocation /socket.io {\n proxy_pass http://127.0.0.1:8888;\n proxy_http_version 1.1;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection 'upgrade';\n proxy_redirect off;\n\n proxy_buffers 8 32k;\n proxy_buffer_size 64k;\n\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header Host $http_host;\n proxy_set_header X-NginX-Proxy true;\n}\n location /notification {\n proxy_pass http://127.0.0.1:8083;\n proxy_http_version 1.1;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection \"upgrade\";\n access_log /var/log/nginx/notification.access.log seafileformat;\n error_log /var/log/nginx/notification.error.log;\n}\n map $http_x_forwarded_proto $the_scheme {\n default $http_x_forwarded_proto;\n \"\" $scheme;\n}\nmap $http_x_forwarded_host $the_host {\n default $http_x_forwarded_host;\n \"\" $host;\n}\nmap $http_upgrade $proxy_connection {\n default upgrade;\n \"\" close;\n}\nlocation /onlyofficeds/ {\n proxy_pass http://127.0.0.1:6233/;\n proxy_http_version 1.1;\n client_max_body_size 100M;\n proxy_read_timeout 3600s;\n proxy_connect_timeout 3600s;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection $proxy_connection;\n proxy_set_header X-Forwarded-Host $the_host/onlyofficeds;\n proxy_set_header X-Forwarded-Proto $the_scheme;\n proxy_set_header 
X-Forwarded-For $proxy_add_x_forwarded_for;\n}\n location /thumbnail {\n proxy_pass http://127.0.0.1:8084;\n proxy_http_version 1.1;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Proto $scheme;\n\n access_log /var/log/nginx/thumbnail.access.log;\n error_log /var/log/nginx/thumbnail.error.log;\n}\n"},{"location":"setup/use_other_reverse_proxy/#modify-env","title":"Modify .env","text":"Remove caddy.yml from field COMPOSE_FILE in .env, e.g.
COMPOSE_FILE='seafile-server.yml' # remove caddy.yml\n"},{"location":"setup/use_other_reverse_proxy/#restart-services-and-nginx","title":"Restart services and nginx","text":"docker compose down\ndocker compose up -d\nsudo nginx -s reload\n"},{"location":"setup/use_seasearch/","title":"Use SeaSearch as search engine (Pro)","text":"SeaSearch, a file indexer that is more lightweight and efficient than Elasticsearch, has been supported since Seafile 12.
For Seafile deployed from a binary package
We currently only support Docker-based deployment for SeaSearch Server, so this document describes the configuration assuming the Seafile server is deployed with Docker.
If your Seafile server is deployed from a binary package, please refer here for how to start or stop the Seafile server.
For Seafile cluster
In theory, only the backend node has to be restarted if your Seafile server is deployed in cluster mode, but we still suggest configuring and restarting all nodes to ensure consistency and synchronization across the cluster
"},{"location":"setup/use_seasearch/#deploy-seasearch-service","title":"Deploy SeaSearch service","text":"SeaSearch service is currently mainly deployed via docker. We have integrated it into the relevant docker-compose file. You only need to download it to the same directory as seafile-server.yml:
wget https://manual.seafile.com/13.0/repo/docker/pro/seasearch.yml\n"},{"location":"setup/use_seasearch/#modify-env","title":"Modify .env","text":"We have configured the relevant variables in .env. Here you must pay special attention to the following variable information, which will affect the SeaSearch initialization process. For variables in .env of SeaSearch service, please refer here for the details. We use /opt/seasearch-data as the persistent directory of SeaSearch (the information of administrator are same as Seafile's admin by default from Seafile 13):
For Apple's Chips
Since Apple's chips (such as M2) do not support MKL, you need to set the relevant image to xxx-nomkl:latest, e.g.:
SEASEARCH_IMAGE=seafileltd/seasearch-nomkl:latest\n COMPOSE_FILE='...,seasearch.yml' # ... means other docker-compose files\n\n#SEASEARCH_IMAGE=seafileltd/seasearch-nomkl:1.0-latest # for Apple's Chip\nSEASEARCH_IMAGE=seafileltd/seasearch:1.0-latest\n\nSS_DATA_PATH=/opt/seasearch-data\nINIT_SS_ADMIN_USER=<admin-username> \nINIT_SS_ADMIN_PASSWORD=<admin-password>\n\n\n# if you would like to use S3 for saving seasearch data\nSS_STORAGE_TYPE=s3\nS3_SS_BUCKET=...\nS3_KEY_ID=<your-key-id>\nS3_SECRET_KEY=<your-secret-key>\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=false\nS3_AWS_REGION=us-east-1\nS3_HOST=\nS3_USE_HTTPS=true\nS3_SSE_C_KEY=\n"},{"location":"setup/use_seasearch/#modify-seafile-serveryml-to-disable-elasticsearch-service","title":"Modify seafile-server.yml to disable elasticSearch service","text":"If you would like to use SeaSearch as the search engine, the ElasticSearch service is no longer used and can be removed: delete elasticsearch.yml from the COMPOSE_FILE list variable in the .env file.
seafevents.conf","text":"First, get your authorization token, which is the base64 encoding of the INIT_SS_ADMIN_USER and INIT_SS_ADMIN_PASSWORD values defined in .env. It is used for authorization when calling the SeaSearch API:
echo -n 'username:password' | base64\n\n# example output\nYWRtaW46YWRtaW5fcGFzc3dvcmQ=\n Add the following section in seafevents.conf to enable the Seafile backend service to access the SeaSearch APIs
SeaSearch server deployed on a different machine from Seafile
If your SeaSearch server is deployed on a different machine from Seafile, please replace http://seasearch:4080 with the URL <scheme>://<address>:<port> of your SeaSearch server
[SEASEARCH]\nenabled = true\nseasearch_url = http://seasearch:4080\nseasearch_token = <your auth token>\ninterval = 10m\n\n# if you would like to enable full-text indexing (i.e., search for document content), also set the option below to true (supported from 13.0 Pro)\nindex_office_pdf = true\n To disable ElasticSearch, set enabled = false in the INDEX FILES section:
[INDEX FILES]\nenabled = false\n...\n docker compose down\ndocker compose up -d\n After starting the SeaSearch service, you can check the following logs to verify that SeaSearch runs normally and that Seafile calls it successfully:
docker logs -f seafile-seasearch and /opt/seasearch-data/log/seafevents.log. After starting the SeaSearch Server for the first time
You can remove the initial admin account information in .env (e.g., INIT_SS_ADMIN_USER, INIT_SS_ADMIN_PASSWORD), which is only used during the SeaSearch initialization process (i.e., the first time the services start). But make sure you have recorded it somewhere else in case you forget the password.
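The seasearch_token placed in seafevents.conf is just this base64 encoding of the admin credentials, so it can also be generated programmatically. A sketch, assuming the token is sent as a Basic-style Authorization header (replace the credentials with your own):

```python
import base64

def seasearch_token(user: str, password: str) -> str:
    # Same as: echo -n 'user:password' | base64
    return base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")

token = seasearch_token("admin", "admin_password")
print(token)  # YWRtaW46YWRtaW5fcGFzc3dvcmQ=
headers = {"Authorization": f"Basic {token}"}  # assumed header scheme
```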
By default, SeaSearch uses a word-based tokenizer designed for English/German/French. You can add the following configuration to use a tokenizer designed for Chinese.
[SEASEARCH]\nenabled = true\n...\nlang = chinese\n"},{"location":"setup_binary/cluster_deployment/","title":"Cluster Deployment","text":"Tip
Since version 8.0, the recommended way to install a Seafile cluster is using Docker
"},{"location":"setup_binary/cluster_deployment/#cluster-requirements","title":"Cluster requirements","text":"Please refer here for the details about the cluster requirements for all nodes in Seafile cluster. In general, we recommend that each node should have at least 2G RAM and a 2-core CPU (> 2GHz).
"},{"location":"setup_binary/cluster_deployment/#preparation-all-nodes","title":"Preparation (all nodes)","text":""},{"location":"setup_binary/cluster_deployment/#install-prerequisites","title":"Install prerequisites","text":"Please follow here to install prerequisites
Note
The cache server (the first step) is not necessary if you do not wish to deploy it on this node.
"},{"location":"setup_binary/cluster_deployment/#create-user-seafile","title":"Create userseafile","text":"Create a new user and follow the instructions on the screen:
adduser seafile\n Change ownership of the created directory to the new user:
chown -R seafile: /opt/seafile\n All the following steps are done as user seafile.
Change to user seafile:
su seafile\n"},{"location":"setup_binary/cluster_deployment/#placing-the-seafile-pe-license-in-optseafile","title":"Placing the Seafile PE license in /opt/seafile","text":"Save the license file in Seafile's programm directory /opt/seafile. Make sure that the name is seafile-license.txt.
If the license file has a different name or cannot be read, Seafile server will start with in trailer mode with most THREE users
"},{"location":"setup_binary/cluster_deployment/#setup-and-configure-nginx-only-for-frontend-nodes","title":"Setup and configure Nginx (only for frontend nodes)","text":"For security reasons, the Seafile frontend service will only listen to requests from the local port 8000. You need to use Nginx to reverse proxy this port to port 80 for external access:
Install Nginx
sudo apt update\nsudo apt install nginx\n Create the configuration file for the current node
sudo nano /etc/nginx/sites-available/seafile.conf\n and, add the following contents into this file:
log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $upstream_response_time';\n\nserver {\n listen 80;\n server_name <current node's IP>;\n\n proxy_set_header X-Forwarded-For $remote_addr;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n # used for view/edit office file via Office Online Server\n client_max_body_size 0;\n\n access_log /var/log/nginx/seahub.access.log seafileformat;\n error_log /var/log/nginx/seahub.error.log;\n }\n\n location /seafhttp {\n rewrite ^/seafhttp(.*)$ $1 break;\n proxy_pass http://127.0.0.1:8082;\n client_max_body_size 0;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n\n proxy_read_timeout 36000s;\n proxy_send_timeout 36000s;\n\n send_timeout 36000s;\n\n access_log /var/log/nginx/seafhttp.access.log seafileformat;\n error_log /var/log/nginx/seafhttp.error.log;\n }\n location /media {\n root /opt/seafile/seafile-server-latest/seahub;\n }\n}\n Link the configurations file to sites-enabled directory:
sudo ln -s /etc/nginx/sites-available/seafile.conf /etc/nginx/sites-enabled/\n Test and enable configuration
sudo nginx -t\nsudo nginx -s reload\n It is convenient to set up the Seafile service to start on system boot. Follow this documentation to set it up.
"},{"location":"setup_binary/cluster_deployment/#firewall-settings","title":"Firewall Settings","text":"There are 2 firewall rule changes for Seafile cluster:
Please follow Installation of Seafile Server Professional Edition to setup:
/opt/seafile/conf","text":""},{"location":"setup_binary/cluster_deployment/#env","title":".env","text":"Tip
JWT_PRIVATE_KEY: a random string of at least 32 characters, which can be generated from:
pwgen -s 40 1\n JWT_PRIVATE_KEY=<Your jwt private key>\nSEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=seafile.example.com\nSEAFILE_MYSQL_DB_HOST=<your database host>\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=<your MySQL password>\nSEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db\n"},{"location":"setup_binary/cluster_deployment/#seafileconf","title":"seafile.conf","text":"Add or modify the following configuration to seafile.conf:
[memcached]\nmemcached_options = --SERVER=<your memcached ip>[:<your memcached port>] --POOL-MIN=10 --POOL-MAX=100\n [redis]\nredis_host = <your redis ip>\nredis_port = <your redis port, default 6379>\nmax_connections = 100\n Enable cluster mode
[cluster]\nenabled = true\n More options in cluster section
The Seafile server also opens a port for the load balancers to run health checks. Seafile by default uses port 11001. You can change this by adding the following config:
[cluster]\nhealth_check_port = 12345\n Enable backend storage:
You must set up and use a memory cache when deploying a Seafile cluster. Please add or modify the following configuration in seahub_settings.py:
CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '<your Memcached host>:<your Memcached port, default 11211>',\n },\n}\n Please refer to Django's documentation about using a Redis cache to add Redis configurations to seahub_settings.py.
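If you use Redis instead of Memcached, recent Django versions ship a built-in Redis cache backend. A sketch of what the corresponding seahub_settings.py entry could look like, assuming Django 4.0+ (where django.core.cache.backends.redis.RedisCache is available); the host and port are placeholders:

```python
# seahub_settings.py -- hypothetical Redis variant of the CACHES setting.
# Assumes Django >= 4.0; the host and port below are placeholders.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.redis.RedisCache',
        'LOCATION': 'redis://<your Redis host>:<your Redis port, default 6379>',
    },
}
```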
Add the following option to seahub_settings.py. It tells Seahub to store avatars in the database, cache avatars in Memcached, and keep the CSS cache in local memory.
AVATAR_FILE_STORAGE = 'seahub.base.database_storage.DatabaseStorage'\n Modify the [INDEX FILES] section to enable full text search, using Elasticsearch as an example:
[INDEX FILES]\nenabled = true\ninterval = 10m\nhighlight = fvh\nindex_office_pdf = true\nes_host = <your ElasticSearch host>\nes_port = <your ElasticSearch port, default 9200>\n"},{"location":"setup_binary/cluster_deployment/#update-seahub-database","title":"Update Seahub Database","text":"In a cluster environment, we have to store avatars in the database instead of on local disk.
mysql -h<your MySQL host> -P<your MySQL port> -useafile -p<user seafile's password>\n\n# enter MySQL environment\nUSE seahub_db;\n\nCREATE TABLE `avatar_uploaded` (`filename` TEXT NOT NULL, `filename_md5` CHAR(32) NOT NULL PRIMARY KEY, `data` MEDIUMTEXT NOT NULL, `size` INTEGER NOT NULL, `mtime` datetime NOT NULL);\n"},{"location":"setup_binary/cluster_deployment/#run-and-test-the-single-node","title":"Run and Test the Single Node","text":"Once you have finished configuring this single node, start it to test if it runs properly:
Note
For installations using python virtual environment, activate it if it isn't already active
source python-venv/bin/activate\n cd /opt/seafile/seafile-server-latest\nsu seafile\n./seafile.sh start\n./seahub.sh start\n Success
The first time you start seahub, the script will prompt you to create an admin account for your Seafile server. Then you will see the following message in your console:
Starting seafile server, please wait ...\nSeafile server started\n\nDone.\n\nStarting seahub at port 8000 ...\n\nSeahub is started\n\nDone.\n Finally, you can visit http://ip-address-of-this-node:80 and login with the admin account to test if this node is working fine or not.
If the first frontend node works fine, you can compress the whole directory /opt/seafile into a tarball and copy it to all other Seafile server nodes. You can simply uncompress it and start the server by:
Note
For installations using python virtual environment, activate it if it isn't already active
source python-venv/bin/activate\n cd /opt/seafile/seafile-server-latest\nsu seafile\n./seafile.sh start\n./seahub.sh start\n"},{"location":"setup_binary/cluster_deployment/#backend-node","title":"Backend node","text":"In the backend node, you need to execute the following command to start Seafile server. CLUSTER_MODE=backend means this node is seafile backend server.
Note
For installations using python virtual environment, activate it if it isn't already active
source python-venv/bin/activate\n export CLUSTER_MODE=backend\ncd /opt/seafile/seafile-server-latest\nsu seafile\n./seafile.sh start\n./seafile-background-tasks.sh start\n"},{"location":"setup_binary/cluster_deployment/#load-balancer-setting","title":"Load Balancer Setting","text":"Note
Since Seafile Pro server 6.0.0, cluster deployment requires \"sticky session\" settings in the load balancer. Otherwise sometimes folder download on the web UI can't work properly. Read the \"Load Balancer Setting\" section below for details
Generally speaking, we recommend that you use a load balancing service to access the Seafile cluster and bind your domain name (such as seafile.cluster.com) to the load balancing service. Usually, you can use:
Deploy your own load balancing service; this document gives examples for two common load balancers:
In the AWS ELB management console, after you've added the Seafile server instances to the instance list, you should complete two more configuration steps.
First, set up the HTTP(S) listeners. Ports 443 and 80 of the ELB should be forwarded to ports 80 or 443 of the Seafile servers.
Then, set up the health check.
Refer to AWS documentation about how to setup sticky sessions.
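Conceptually, sticky sessions pin each client to one backend via a cookie set on the first response, while new clients are still spread round-robin. The routing logic can be illustrated with a short sketch (a hypothetical helper for illustration only, not Seafile, ELB, or HAProxy code):

```python
import itertools

class StickyBalancer:
    """Round-robin for new clients; returning clients keep their backend."""

    def __init__(self, servers):
        self._next = itertools.cycle(servers)
        self._sessions = {}  # cookie value -> backend server

    def route(self, cookie=None):
        # A known cookie pins the request to the previously chosen backend.
        if cookie in self._sessions:
            return self._sessions[cookie], cookie
        # Otherwise pick the next backend round-robin and issue a new cookie.
        server = next(self._next)
        cookie = "SERVERID-%d" % len(self._sessions)
        self._sessions[cookie] = server
        return server, cookie

balancer = StickyBalancer(["frontend1:80", "frontend2:80"])
server, cookie = balancer.route()           # first request: round-robin choice
assert balancer.route(cookie)[0] == server  # follow-up requests: same backend
```

This illustrates why stickiness matters: without it, a multi-request operation such as a folder download can land on a different node mid-way and fail.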
"},{"location":"setup_binary/cluster_deployment/#nginx","title":"Nginx","text":"Install Nginx on the host where you want to deploy the load balancing service
sudo apt update\nsudo apt install nginx\n Create the configuration file for the Seafile cluster
sudo nano /etc/nginx/sites-available/seafile-cluster\n and, add the following contents into this file:
upstream seafile_cluster {\n server <IP: your frontend node 1>:80;\n server <IP: your frontend node 2>:80;\n ...\n}\n\nserver {\n listen 80;\n server_name <your domain>;\n\n location / {\n proxy_pass http://seafile_cluster;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_next_upstream error timeout http_502 http_503 http_504;\n }\n}\n Link the configuration file to the sites-enabled directory:
sudo ln -s /etc/nginx/sites-available/seafile-cluster /etc/nginx/sites-enabled/\n Test and enable configuration
sudo nginx -t\nsudo nginx -s reload\n This is a sample /etc/haproxy/haproxy.cfg:
(Assume your health check port is 11001)
global\n log 127.0.0.1 local1 notice\n maxconn 4096\n user haproxy\n group haproxy\n\ndefaults\n log global\n mode http\n retries 3\n maxconn 2000\n timeout connect 10000\n timeout client 300000\n timeout server 36000000\n\nlisten seafile 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n cookie SERVERID insert indirect nocache\n server seafileserver01 192.168.1.165:80 check port 11001 cookie seafileserver01\n server seafileserver02 192.168.1.200:80 check port 11001 cookie seafileserver02\n"},{"location":"setup_binary/cluster_deployment/#see-how-it-runs","title":"See how it runs","text":"Now you should be able to test your cluster. Open https://seafile.example.com in your browser and enjoy. You can also synchronize files with Seafile clients.
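The check port 11001 directives above make HAProxy probe Seafile's health-check port before routing traffic to a node. Conceptually such a probe is just a TCP connect with a timeout; a minimal sketch (the commented host and port are the example values from the config above):

```python
import socket

def is_healthy(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe the health-check port of one cluster node.
# is_healthy("192.168.1.165", 11001)
```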
"},{"location":"setup_binary/cluster_deployment/#the-final-configuration-of-the-front-end-nodes","title":"The final configuration of the front-end nodes","text":"Here is a summary of the configurations on the front-end nodes related to the cluster setup (for version 7.1+).
For seafile.conf:
[cluster]\nenabled = true\n The enabled option will prevent the start of background tasks by ./seafile.sh start in the front-end node. The tasks should be explicitly started by ./seafile-background-tasks.sh start at the back-end node.
For seahub_settings.py:
AVATAR_FILE_STORAGE = 'seahub.base.database_storage.DatabaseStorage'\n For seafevents.conf:
[INDEX FILES]\nenabled = true\ninterval = 10m\nhighlight = fvh # This configuration is for improving searching speed\nes_host = <IP of background node>\nes_port = 9200\n The [INDEX FILES] section is needed to let the front-end node know the file search feature is enabled.
You can enable HTTPS in your load balancing service, using a certificate manager (e.g., Certbot) to acquire certificates and enable HTTPS for your Seafile cluster. After enabling HTTPS, you have to change the relevant URLs in seahub_settings.py and .env from the http:// prefix to https://.
You can follow here to deploy the SeaDoc server, and then modify SEADOC_SERVER_URL in your .env file
After completing the installation of Seafile Server Professional Edition, communication between the Seafile server and clients runs over (unencrypted) HTTP. While HTTP is ok for testing purposes, switching to HTTPS is imperative for production use.
HTTPS requires an SSL certificate from a Certificate Authority (CA). Unless you already have an SSL certificate, we recommend that you get yours from Let's Encrypt using Certbot. If you have an SSL certificate from another CA, skip the section \"Getting a Let's Encrypt certificate\".
A second requirement is a reverse proxy supporting SSL. Nginx, a popular and resource-friendly web server and reverse proxy, is a good option. Nginx's documentation is available at http://nginx.org/en/docs/.
"},{"location":"setup_binary/https_with_nginx/#setup","title":"Setup","text":"The setup of Seafile using Nginx as a reverse proxy with HTTPS is demonstrated using the sample host name seafile.example.com.
This manual assumes the following requirements:
If your setup differs from these requirements, adjust the following instructions accordingly.
The setup proceeds in two steps: First, Nginx is installed. Second, a SSL certificate is integrated in the Nginx configuration.
"},{"location":"setup_binary/https_with_nginx/#installing-nginx","title":"Installing Nginx","text":"Install Nginx using the package repositories:
sudo apt install nginx -y\n After the installation, start the server and enable it so that Nginx starts at system boot:
sudo systemctl start nginx\nsudo systemctl enable nginx\n"},{"location":"setup_binary/https_with_nginx/#preparing-nginx","title":"Preparing Nginx","text":"Create a configuration file for seafile in /etc/nginx/sites-available/:
touch /etc/nginx/sites-available/seafile.conf\n Delete the default files in /etc/nginx/sites-enabled/ and /etc/nginx/sites-available:
rm /etc/nginx/sites-enabled/default\nrm /etc/nginx/sites-available/default\n Create a symbolic link:
ln -s /etc/nginx/sites-available/seafile.conf /etc/nginx/sites-enabled/seafile.conf\n"},{"location":"setup_binary/https_with_nginx/#configuring-nginx","title":"Configuring Nginx","text":"Copy the following sample Nginx config file into the just created seafile.conf (i.e., nano /etc/nginx/sites-available/seafile.conf) and modify the content to fit your needs:
log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $upstream_response_time';\n\nserver {\n listen 80;\n server_name seafile.example.com;\n\n proxy_set_header X-Forwarded-For $remote_addr;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n # used for view/edit office file via Office Online Server\n client_max_body_size 0;\n\n access_log /var/log/nginx/seahub.access.log seafileformat;\n error_log /var/log/nginx/seahub.error.log;\n }\n\n location /seafhttp {\n rewrite ^/seafhttp(.*)$ $1 break;\n proxy_pass http://127.0.0.1:8082;\n client_max_body_size 0;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n\n proxy_read_timeout 36000s;\n proxy_send_timeout 36000s;\n\n send_timeout 36000s;\n\n access_log /var/log/nginx/seafhttp.access.log seafileformat;\n error_log /var/log/nginx/seafhttp.error.log;\n }\n location /media {\n root /opt/seafile/seafile-server-latest/seahub;\n }\n}\n The following options must be modified in the CONF file:
Optional customizable options in the seafile.conf are:
listen - if the Seafile server should be available on a non-standard port
/ - if Seahub is configured to start on a different port than 8000
/seafhttp - if seaf-server is configured to start on a different port than 8082
client_max_body_size - the maximum upload size

The default value for client_max_body_size is 1M. Uploading larger files will result in the error message HTTP error code 413 (\"Request Entity Too Large\"). It is recommended to synchronize the value of client_max_body_size with the parameter max_upload_size in section [fileserver] of seafile.conf. Optionally, the value can also be set to 0 to disable this feature. Client uploads are only partly affected by this limit. With a limit of 100 MiB they can safely upload files of any size.
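When synchronizing client_max_body_size with max_upload_size, it helps to convert Nginx size strings to bytes. A small helper, assuming only the k/m/g suffixes that Nginx documents (this is illustrative, not part of Seafile):

```python
def nginx_size_to_bytes(value):
    """Convert an Nginx size string such as '1M' or '100k' to bytes.

    A value of '0' means the limit is disabled.
    """
    units = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}
    value = value.strip().lower()
    if value and value[-1] in units:
        return int(value[:-1]) * units[value[-1]]
    return int(value)

assert nginx_size_to_bytes("1M") == 1048576   # the Nginx default
assert nginx_size_to_bytes("0") == 0          # limit disabled
```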
Finally, make sure your seafile.conf does not contain syntax errors and restart Nginx for the configuration changes to take effect:
nginx -t\nnginx -s reload\n"},{"location":"setup_binary/https_with_nginx/#getting-a-lets-encrypt-certificate","title":"Getting a Let's Encrypt certificate","text":"Getting a Let's Encrypt certificate is straightforward thanks to Certbot. Certbot is a free, open source software tool for requesting, receiving, and renewing Let's Encrypt certificates.
First, go to the Certbot website and choose your webserver and OS.
Second, follow the detailed instructions then shown.
We recommend that you get just a certificate and that you modify the Nginx configuration yourself:
sudo certbot certonly --nginx\n Follow the instructions on the screen.
Upon successful verification, Certbot saves the certificate files in a directory named after the host name in /etc/letsencrypt/live. For the host name seafile.example.com, the files are stored in /etc/letsencrypt/live/seafile.example.com.
Tip
Normally, your nginx configuration can be automatically managed by a certificate manager (e.g., CertBot) after you install the certificate. If you find that your nginx is already listening on port 443 through the certificate manager after installing the certificate, you can skip this step.
Add a server block for port 443 and an HTTP-to-HTTPS redirect to the seafile.conf configuration file in /etc/nginx.
This is a (shortened) sample configuration for the host name seafile.example.com:
log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $upstream_response_time';\n\nserver {\n listen 80;\n server_name seafile.example.com;\n rewrite ^ https://$http_host$request_uri? permanent; # Forced redirect from HTTP to HTTPS\n\n server_tokens off; # Prevents the Nginx version from being displayed in the HTTP response header\n}\n\nserver {\n listen 443 ssl;\n ssl_certificate /etc/letsencrypt/live/seafile.example.com/fullchain.pem; # Path to your fullchain.pem\n ssl_certificate_key /etc/letsencrypt/live/seafile.example.com/privkey.pem; # Path to your privkey.pem\n server_name seafile.example.com;\n server_tokens off;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n proxy_set_header X-Forwarded-Proto https;\n\n... # No changes beyond this point compared to the Nginx configuration without HTTPS\n Finally, make sure your seafile.conf does not contain syntax errors and restart Nginx for the configuration changes to take effect:
nginx -t\nnginx -s reload\n"},{"location":"setup_binary/https_with_nginx/#large-file-uploads","title":"Large file uploads","text":"Tip for uploading very large files (> 4GB): By default, Nginx buffers large request bodies in a temp file. After the body is completely received, Nginx sends the body to the upstream server (seaf-server in our case). But when the file size is very large, the buffering mechanism doesn't work well; it may stop proxying the body in the middle. So if you want to support uploads larger than 4GB, we suggest you install Nginx version >= 1.8.0 and add the following options to the Nginx config file:
location /seafhttp {\n ... ...\n proxy_request_buffering off;\n }\n If you have WebDAV enabled it is recommended to add the same:
location /seafdav {\n ... ...\n proxy_request_buffering off;\n }\n"},{"location":"setup_binary/https_with_nginx/#modify-env","title":"Modify .env","text":"Modify the following field to https
SEAFILE_SERVER_PROTOCOL=https\n"},{"location":"setup_binary/https_with_nginx/#modifying-seafileconf-optional","title":"Modifying seafile.conf (optional)","text":"To improve security, the file server should only be accessible via Nginx.
Add the following line in the [fileserver] block on seafile.conf in /opt/seafile/conf:
host = 127.0.0.1 ## default is 0.0.0.0\n After this change, the file server only accepts requests from Nginx.
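To double-check the change, seafile.conf can be parsed with Python's standard configparser; a sketch (the path is the standard /opt/seafile/conf location used in this manual, and the helper itself is hypothetical):

```python
import configparser

def fileserver_host(path="/opt/seafile/conf/seafile.conf"):
    """Return the [fileserver] host value, or the 0.0.0.0 default if unset."""
    cfg = configparser.ConfigParser(inline_comment_prefixes=("#",))
    cfg.read(path)  # silently yields defaults if the file is missing
    return cfg.get("fileserver", "host", fallback="0.0.0.0")

# After the change above, this should report 127.0.0.1:
# fileserver_host()
```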
"},{"location":"setup_binary/https_with_nginx/#starting-seafile-and-seahub","title":"Starting Seafile and Seahub","text":"Restart the seaf-server and Seahub for the config changes to take effect:
su seafile\ncd /opt/seafile/seafile-server-latest\n./seafile.sh restart\n./seahub.sh restart # or \"./seahub.sh start-fastcgi\" if you're using fastcgi\n"},{"location":"setup_binary/https_with_nginx/#additional-modern-settings-for-nginx-optional","title":"Additional modern settings for Nginx (optional)","text":""},{"location":"setup_binary/https_with_nginx/#activating-ipv6","title":"Activating IPv6","text":"This requires IPv6 on the server; otherwise Nginx will not start! An AAAA DNS record is also required for IPv6 usage.
listen 443;\nlisten [::]:443;\n"},{"location":"setup_binary/https_with_nginx/#activating-http2","title":"Activating HTTP2","text":"Activate HTTP2 for more performance. Only available for SSL and nginx version>=1.9.5. Simply add http2.
listen 443 http2;\nlisten [::]:443 http2;\n"},{"location":"setup_binary/https_with_nginx/#advanced-tls-configuration-for-nginx-optional","title":"Advanced TLS configuration for Nginx (optional)","text":"The TLS configuration in the sample Nginx configuration file above receives a B overall rating on SSL Labs. By modifying the TLS configuration in seafile.conf, this rating can be significantly improved.
The following sample Nginx configuration file for the host name seafile.example.com contains additional security-related directives. (Note that this sample file uses a generic path for the SSL certificate files.) Some of the directives require further steps as explained below.
server {\n listen 80;\n server_name seafile.example.com;\n rewrite ^ https://$http_host$request_uri? permanent; # Forced redirect from HTTP to HTTPS\n server_tokens off;\n }\n server {\n listen 443 ssl;\n ssl_certificate /etc/ssl/cacert.pem; # Path to your cacert.pem\n ssl_certificate_key /etc/ssl/privkey.pem; # Path to your privkey.pem\n server_name seafile.example.com;\n server_tokens off;\n\n # HSTS for protection against man-in-the-middle-attacks\n add_header Strict-Transport-Security \"max-age=31536000; includeSubDomains\";\n\n # DH parameters for Diffie-Hellman key exchange\n ssl_dhparam /etc/nginx/dhparam.pem;\n\n # Supported protocols and ciphers for general purpose server with good security and compatability with most clients\n ssl_protocols TLSv1.2 TLSv1.3;\n ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;\n ssl_prefer_server_ciphers off;\n\n # Supported protocols and ciphers for server when clients > 5years (i.e., Windows Explorer) must be supported\n #ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;\n #ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA;\n #ssl_prefer_server_ciphers on;\n\n ssl_session_timeout 5m;\n ssl_session_cache shared:SSL:5m;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n 
proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto https;\n\n access_log /var/log/nginx/seahub.access.log;\n error_log /var/log/nginx/seahub.error.log;\n\n proxy_read_timeout 1200s;\n\n client_max_body_size 0;\n }\n\n location /seafhttp {\n rewrite ^/seafhttp(.*)$ $1 break;\n proxy_pass http://127.0.0.1:8082;\n client_max_body_size 0;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n\n proxy_read_timeout 36000s;\n proxy_send_timeout 36000s;\n send_timeout 36000s;\n }\n\n location /media {\n root /home/user/haiwen/seafile-server-latest/seahub;\n }\n }\n"},{"location":"setup_binary/https_with_nginx/#enabling-http-strict-transport-security","title":"Enabling HTTP Strict Transport Security","text":"Enable HTTP Strict Transport Security (HSTS) to prevent man-in-the-middle attacks by adding this directive:
add_header Strict-Transport-Security \"max-age=31536000; includeSubDomains\" always;\n HSTS instructs web browsers to automatically use HTTPS. That means, after the first visit of the HTTPS version of Seahub, the browser will only use https to access the site.
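To verify the policy a server actually sends, you can inspect the Strict-Transport-Security response header; a small parser for its max-age directive (a hypothetical helper, not part of Seafile or Nginx):

```python
def hsts_max_age(header_value):
    """Return max-age in seconds from a Strict-Transport-Security header, or None."""
    for directive in header_value.split(";"):
        name, _, value = directive.strip().partition("=")
        if name.lower() == "max-age":
            return int(value)
    return None

assert hsts_max_age("max-age=31536000; includeSubDomains") == 31536000  # one year
```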
"},{"location":"setup_binary/https_with_nginx/#using-perfect-forward-secrecy","title":"Using Perfect Forward Secrecy","text":"Enable Diffie-Hellman (DH) key-exchange. Generate DH parameters and write them in a .pem file using the following command:
openssl dhparam 2048 > /etc/nginx/dhparam.pem # Generates DH parameter of length 2048 bits\n The generation of the DH parameters may take some time depending on the server's processing power.
Add the following directive in the HTTPS server block:
ssl_dhparam /etc/nginx/dhparam.pem;\n"},{"location":"setup_binary/https_with_nginx/#restricting-tls-protocols-and-ciphers","title":"Restricting TLS protocols and ciphers","text":"Disallow the use of old TLS protocols and ciphers. Mozilla provides a configuration generator for optimizing the conflicting objectives of security and compatibility. Visit https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx for more information.
"},{"location":"setup_binary/installation/","title":"Installation of Seafile Server Professional Edition","text":"This manual explains how to deploy and run Seafile Server Professional Edition (Seafile PE) on a Linux server from a pre-built package using MySQL/MariaDB as database. The deployment has been tested for Debian/Ubuntu.
"},{"location":"setup_binary/installation/#requirements","title":"Requirements","text":"Please refer here for system requirements about Seafile PE. In general, we recommend at least 4G RAM and a 4-core CPU (> 2GHz).
Seafile PE can be used without a paid license with up to three users. Licenses for more users can be purchased in the Seafile Customer Center, by contacting Seafile Sales at sales@seafile.com, or through one of our partners.
"},{"location":"setup_binary/installation/#setup","title":"Setup","text":""},{"location":"setup_binary/installation/#installing-and-preparing-the-sql-database","title":"Installing and preparing the SQL database","text":"Seafile supports MySQL and MariaDB. We recommend that you use the preferred SQL database management engine included in the package repositories of your distribution.
You can find step-by-step how-tos for installing MySQL and MariaDB in the tutorials on the Digital Ocean website.
Seafile uses the mysql_native_password plugin for authentication. The versions of MySQL and MariaDB installed on Ubuntu/Debian use a different authentication plugin by default. It is therefore required to change the authentication plugin to mysql_native_password for the root user prior to the installation of Seafile. The above-mentioned tutorials explain how to do it.
Tip
The standard directory /opt/seafile is assumed for the rest of this manual. If you decide to put Seafile in another directory, some commands need to be modified accordingly
Install cache server (e.g., Redis)
sudo apt-get update\nsudo apt-get install -y redis-server libhiredis-dev\n Install Python and related libraries
Ubuntu 24.04Debian 13Debian 12Note
Debian 12 and Ubuntu 24.04 are now discouraging system-wide installation of python modules with pip. It is preferred now to install modules into a virtual environment which keeps them separate from the files installed by the system package manager, and enables different versions to be installed for different applications. With these python virtual environments (venv for short) to work, you have to activate the venv to make the packages installed in it available to the programs you run. That is done here with source python-venv/bin/activate.
sudo apt-get install -y python3 python3-dev python3-setuptools python3-pip python3-ldap python3-rados libmysqlclient-dev libmemcached-dev ldap-utils libldap2-dev python3.12-venv default-libmysqlclient-dev build-essential pkg-config\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the vitual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 boto3 oss2 twilio configparser pytz \\\n sqlalchemy==2.0.* pymysql==1.1.* jinja2 django-pylibmc pylibmc psd-tools lxml \\\n django==5.2.* cffi==1.17.1 future==1.0.* mysqlclient==2.2.* captcha==0.7.* django_simple_captcha==0.6.* \\\n pyjwt==2.10.* djangosaml2==1.11.* pysaml2==7.5.* pycryptodome==3.23.* python-ldap==3.4.* pillow==11.3.* pillow-heif==1.0.*\n Note
Debian 12 and Ubuntu 24.04 are now discouraging system-wide installation of python modules with pip. It is preferred now to install modules into a virtual environment which keeps them separate from the files installed by the system package manager, and enables different versions to be installed for different applications. With these python virtual environments (venv for short) to work, you have to activate the venv to make the packages installed in it available to the programs you run. That is done here with source python-venv/bin/activate.
sudo apt-get install -y python3 python3-dev python3-setuptools python3-pip python3-ldap python3-rados libmariadb-dev-compat libmemcached-dev ldap-utils libldap2-dev libsasl2-dev pkg-config python3.13-venv\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the vitual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 boto3 oss2 twilio configparser pytz \\\n sqlalchemy==2.0.* pymysql==1.1.* jinja2 django-pylibmc pylibmc psd-tools lxml \\\n django==5.2.* cffi==1.17.1 future==1.0.* mysqlclient==2.2.* captcha==0.7.* django_simple_captcha==0.6.* \\\n pyjwt==2.10.* djangosaml2==1.11.* pysaml2==7.5.* pycryptodome==3.23.* python-ldap==3.4.* pillow==11.3.* pillow-heif==1.0.*\n Note
Debian 12 and Ubuntu 24.04 are now discouraging system-wide installation of python modules with pip. It is preferred now to install modules into a virtual environment which keeps them separate from the files installed by the system package manager, and enables different versions to be installed for different applications. With these python virtual environments (venv for short) to work, you have to activate the venv to make the packages installed in it available to the programs you run. That is done here with source python-venv/bin/activate.
sudo apt-get install -y python3 python3-dev python3-setuptools python3-pip python3-ldap python3-rados libmariadb-dev-compat libmemcached-dev ldap-utils libldap2-dev libsasl2-dev pkg-config python3.11-venv \n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the vitual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 boto3 oss2 twilio configparser pytz \\\n sqlalchemy==2.0.* pymysql==1.1.* jinja2 django-pylibmc pylibmc psd-tools lxml \\\n django==5.2.* cffi==1.17.1 future==1.0.* mysqlclient==2.2.* captcha==0.7.* django_simple_captcha==0.6.* \\\n pyjwt==2.10.* djangosaml2==1.11.* pysaml2==7.5.* pycryptodome==3.23.* python-ldap==3.4.* pillow==11.3.* pillow-heif==1.0.*\n Elasticsearch, the indexing server, cannot be run as root. More generally, it is good practice not to run applications as root.
Create a new user and follow the instructions on the screen:
Ubuntu 24.04: adduser seafile\n Debian 13/12: /usr/sbin/adduser seafile\n Change ownership of the created directory to the new user:
chown -R seafile: /opt/seafile\n All the following steps are done as user seafile.
Change to user seafile:
su seafile\n"},{"location":"setup_binary/installation/#placing-the-seafile-pe-license","title":"Placing the Seafile PE license","text":"Save the license file in Seafile's program directory /opt/seafile. Make sure that the name is seafile-license.txt.
If the license file has a different name or cannot be read, Seafile Server will start in trial mode, limited to at most THREE users
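As a sketch, the placement boils down to copying the license under the exact expected name. The source path is a placeholder; the snippet below simulates `/opt/seafile` with a temporary directory purely to illustrate the naming requirement:

```shell
# Simulated in a scratch directory; on a real server the target is /opt/seafile
# and the source is wherever you saved the license file you received.
seafile_dir="$(mktemp -d)"
license_src="$(mktemp)"          # placeholder for your real license file

# The file must be named exactly seafile-license.txt
cp "$license_src" "$seafile_dir/seafile-license.txt"
test -r "$seafile_dir/seafile-license.txt" && echo "license in place"
```

On a real server, also make sure the file is readable by the `seafile` user (e.g. `chown seafile: /opt/seafile/seafile-license.txt`).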
"},{"location":"setup_binary/installation/#downloading-the-install-package","title":"Downloading the install package","text":"The install packages for Seafile PE are available for download in the Seafile Customer Center. To access the Customer Center, a user account is necessary. Registration is free.
Beginning with Seafile PE 7.0.17, the Seafile Customer Center provides two install packages for every version (using Seafile PE 13.0.10 as an example):
The former is suitable for installation on Ubuntu/Debian servers.
Download the install package using wget (replace the x.x.x with the version you wish to download):
# Debian/Ubuntu\nwget -O 'seafile-pro-server_x.x.x_x86-64_Ubuntu.tar.gz' 'VERSION_SPECIFIC_LINK_FROM_SEAFILE_CUSTOMER_CENTER'\n We use Seafile version 13.0.10 as an example in the remainder of these instructions.
"},{"location":"setup_binary/installation/#uncompressing-the-package","title":"Uncompressing the package","text":"The install package is downloaded as a compressed tarball which needs to be uncompressed.
Uncompress the package using tar:
# Debian/Ubuntu\ntar xf seafile-pro-server_13.0.10_x86-64_Ubuntu.tar.gz\n Now you have:
$ tree -L 2 /opt/seafile\n.\n\u251c\u2500\u2500 seafile-license.txt\n\u251c\u2500\u2500 python-venv # you will not see this directory if you use ubuntu 22/debian 10\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 bin\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 include\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lib\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lib64 -> lib\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 pyvenv.cfg\n\u251c\u2500\u2500 seafile-pro-server-13.0.10\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 check_init_admin.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 index_op.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate-repo.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate-repo.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate_ldapusers.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 parse_seahub_db.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 pro\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 remove-objs.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 remove-objs.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 reset-admin.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 run_index_master.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 run_index_worker.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 runtime\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-backup-cmd.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-backup-cmd.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-encrypt.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fsck.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fuse.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-gc.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-import.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile-background-tasks.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile-monitor.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 
setup-seafile-mysql.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sql\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 upgrade\n\u2514\u2500\u2500 seafile-pro-server_13.0.10_x86-64_Ubuntu.tar.gz\n Tip
The names of the install packages differ for Seafile CE and Seafile PE. Using Seafile CE and Seafile PE 13.0.10 as an example, the names are as follows:
seafile-server_13.0.10_x86-64.tar.gz, uncompressing into the folder seafile-server-13.0.10; seafile-pro-server_13.0.10_x86-64.tar.gz, uncompressing into the folder seafile-pro-server-13.0.10. The install package comes with a script that sets Seafile up for you. Specifically, the script creates the required directories and extracts all files in the right place. It can also create a MySQL user and the three databases that Seafile's components require:
While the ccnet server was merged into seafile-server in Seafile 8.0, its database is still required for the time being
Run the script as user seafile:
Note
For installations using a Python virtual environment, activate it if it isn't already active:
source python-venv/bin/activate\n cd seafile-pro-server-13.0.10\n./setup-seafile-mysql.sh\n Configure your Seafile Server by specifying the following three parameters:
| Option | Description | Note |\n|---|---|---|\n| server name | Name of the Seafile Server | 3-15 characters; only English letters, digits and underscore ('_') are allowed |\n| server's ip or domain | IP address or domain name used by the Seafile Server | Seafile client programs will access the server using this address |\n| fileserver port | TCP port used by the Seafile fileserver | Default port is 8082; it is recommended to use this port and to only change it if it is used by another service |\n\nIn the next step, choose whether to create new databases for Seafile or to use existing databases. The creation of new databases requires the root password for the SQL server.
Note
If you don't have the root password, you need someone who has the privileges, e.g., the database admin, to create the three databases required by Seafile, as well as a MySQL user who can access the databases. For example, to create three databases ccnet_db / seafile_db / seahub_db for ccnet/seafile/seahub respectively, and a MySQL user \"seafile\" to access these databases run the following SQL queries:
create database `ccnet_db` character set = 'utf8';\ncreate database `seafile_db` character set = 'utf8';\ncreate database `seahub_db` character set = 'utf8';\n\ncreate user 'seafile'@'localhost' identified by 'seafile';\n\nGRANT ALL PRIVILEGES ON `ccnet_db`.* to `seafile`@localhost;\nGRANT ALL PRIVILEGES ON `seafile_db`.* to `seafile`@localhost;\nGRANT ALL PRIVILEGES ON `seahub_db`.* to `seafile`@localhost;\n [1] Create new ccnet/seafile/seahub databases[2] Use existing ccnet/seafile/seahub databases The script creates these databases and a MySQL user that Seafile Server will use to access them. To this effect, you need to answer these questions:
| Question | Description | Note |\n|---|---|---|\n| mysql server host | Host address of the MySQL server | Default is localhost |\n| mysql server port | TCP port used by the MySQL server | Default port is 3306; almost every MySQL server uses this port |\n| mysql root password | Password of the MySQL root account | The root password is required to create new databases and a MySQL user |\n| mysql user for Seafile | MySQL user created by the script, used by Seafile's components to access the databases | Default is seafile; the user is created unless it already exists |\n| mysql password for Seafile user | Password for the user above, written into Seafile's config files | The percent sign ('%') is not allowed |\n| ccnet database name | Name of the database used by ccnet | Default is \"ccnet_db\"; the database is created if it does not exist |\n| seafile database name | Name of the database used by Seafile | Default is \"seafile_db\"; the database is created if it does not exist |\n| seahub database name | Name of the database used by Seahub | Default is \"seahub_db\"; the database is created if it does not exist |\n\nThe prompts you need to answer:
| Question | Description | Note |\n|---|---|---|\n| mysql server host | Host address of the MySQL server | Default is localhost |\n| mysql server port | TCP port used by the MySQL server | Default port is 3306; almost every MySQL server uses this port |\n| mysql user for Seafile | User used by Seafile's components to access the databases | The user must exist |\n| mysql password for Seafile user | Password for the user above | |\n| ccnet database name | Name of the database used by ccnet; default is \"ccnet_db\" | The database must exist |\n| seafile database name | Name of the database used by Seafile; default is \"seafile_db\" | The database must exist |\n| seahub database name | Name of the database used by Seahub; default is \"seahub_db\" | The database must exist |\n\nIf the setup is successful, you see the following output:
The directory layout then looks as follows:
/opt/seafile\n\u251c\u2500\u2500 seafile-license.txt\n\u251c\u2500\u2500 ccnet\n\u251c\u2500\u2500 conf\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 gunicorn.conf.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafdav.conf\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafevents.conf\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile.conf\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 seahub_settings.py\n\u251c\u2500\u2500 pro-data\n\u251c\u2500\u2500 python-venv # you will not see this directory if you use ubuntu 22/debian 10\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 bin\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 include\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lib\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 lib64 -> lib\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 pyvenv.cfg\n\u251c\u2500\u2500 seafile-data\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 library-template\n\u251c\u2500\u2500 seafile-pro-server-13.0.10\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 check_init_admin.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 index_op.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate-repo.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate-repo.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate_ldapusers.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 parse_seahub_db.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 pro\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 remove-objs.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 remove-objs.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 reset-admin.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 run_index_master.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 run_index_worker.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 runtime\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-backup-cmd.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-backup-cmd.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-encrypt.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fsck.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fuse.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 
seaf-gc.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-import.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile-background-tasks.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile-monitor.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sql\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 upgrade\n\u251c\u2500\u2500 seafile-pro-server_13.0.10_x86-64_Ubuntu.tar.gz\n\u251c\u2500\u2500 seafile-server-latest -> seafile-pro-server-13.0.10\n\u2514\u2500\u2500 seahub-data\n \u2514\u2500\u2500 avatars\n The folder seafile-server-latest is a symbolic link to the current Seafile Server folder. When later you upgrade to a new version, the upgrade scripts update this link to point to the latest Seafile Server folder.
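The symlink mechanism can be illustrated with a small, self-contained sketch (simulated in a temporary directory; on a real server the link lives in /opt/seafile and is managed by the upgrade scripts):

```shell
# Simulate the layout in a scratch directory (stand-in for /opt/seafile)
top="$(mktemp -d)"
mkdir "$top/seafile-pro-server-13.0.10"

# seafile-server-latest is a relative symlink to the current server folder;
# an upgrade script simply re-points it at the new version's folder.
ln -s seafile-pro-server-13.0.10 "$top/seafile-server-latest"

readlink "$top/seafile-server-latest"   # prints: seafile-pro-server-13.0.10
```

Because scripts and service files reference seafile-server-latest, they keep working across upgrades without path changes.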
.env file in conf/ directory","text":"nano /opt/seafile/conf/.env\n Tip
JWT_PRIVATE_KEY: a random string with a length of no less than 32 characters, which can be generated with:
pwgen -s 40 1\n JWT_PRIVATE_KEY=<Your jwt private key>\nSEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=seafile.example.com\nSEAFILE_MYSQL_DB_HOST=<your database host>\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=<your MySQL password>\nSEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db\n\n## Cache\nCACHE_PROVIDER=redis # options: redis (recommended), memcached\n\n### Redis\nREDIS_HOST=redis\nREDIS_PORT=6379\nREDIS_PASSWORD=\n\n### Memcached\nMEMCACHED_HOST=memcached\nMEMCACHED_PORT=11211\n"},{"location":"setup_binary/installation/#setup-memory-cache","title":"Setup Memory Cache","text":"A memory cache is mandatory for the Pro edition. You may use Memcached or Redis as the cache server.
MemcachedRedisUse the following commands to install memcached and the corresponding libraries on your system:
# on Debian/Ubuntu 18.04+\napt-get install memcached libmemcached-dev -y\npip3 install --timeout=3600 pylibmc django-pylibmc\n\nsystemctl enable --now memcached\n Add or modify the following configuration to /opt/seafile/conf/.env:
## Cache\nCACHE_PROVIDER=memcached\n\n### Memcached\nMEMCACHED_HOST=memcached\nMEMCACHED_PORT=11211\n Redis is supported since version 11.0
Use the following commands to install Redis and the corresponding libraries on your system:
# on Debian/Ubuntu 18.04+\napt-get install -y redis-server libhiredis-dev\npip3 install redis django-redis\n\nsystemctl enable --now redis-server\n Add or modify the following configuration to /opt/seafile/conf/.env:
## Cache\nCACHE_PROVIDER=redis\n\n### Redis\nREDIS_HOST=redis\nREDIS_PORT=6379\nREDIS_PASSWORD=\n"},{"location":"setup_binary/installation/#enabling-httphttps-optional-but-recommended","title":"Enabling HTTP/HTTPS (Optional but Recommended)","text":"You need to set up at least HTTP to make Seafile's web interface work. This manual provides instructions for enabling HTTP/HTTPS with the most popular web servers and reverse proxies (e.g., Nginx).
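As a minimal sketch of what such a reverse proxy setup can look like with Nginx (the server name is a placeholder, and the upstream ports are the defaults used elsewhere in this manual; consult the dedicated HTTP/HTTPS sections for a complete, hardened configuration including TLS):

```nginx
server {
    listen 80;
    server_name seafile.example.com;   # placeholder - use your own domain

    # Seahub (web interface), started on 127.0.0.1:8000 by seahub.sh
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    # Seafile fileserver (uploads/downloads), default port 8082
    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8082;
        client_max_body_size 0;   # do not limit upload size at the proxy
    }
}
```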
"},{"location":"setup_binary/installation/#starting-seafile-server","title":"Starting Seafile Server","text":"Run the following commands in /opt/seafile/seafile-server-latest:
Note
For installations using a Python virtual environment, activate it if it isn't already active:
source python-venv/bin/activate\n su seafile\n./seafile.sh start # Start Seafile service\n./seahub.sh start # Start seahub website, port defaults to 127.0.0.1:8000\n Success
The first time you start Seahub, the script prompts you to create an admin account for your Seafile Server. Enter the email address of the admin user followed by the password, i.e.:
What is the email for the admin account?\n[ admin email ] <please input your admin's email>\n\nWhat is the password for the admin account?\n[ admin password ] <please input your admin's password>\n\nEnter the password again:\n[ admin password again ] <please input your admin's password again>\n Now you can access Seafile via the web interface at the host address (e.g., https://seafile.example.com).
"},{"location":"setup_binary/installation/#enabling-full-text-search","title":"Enabling full text search","text":"Seafile uses the indexing server ElasticSearch to enable full text search.
"},{"location":"setup_binary/installation/#deploying-elasticsearch","title":"Deploying ElasticSearch","text":"Our recommendation for deploying ElasticSearch is using Docker. Detailed information about installing Docker on various Linux distributions is available at Docker Docs.
Seafile PE 9.0 only supports ElasticSearch 7.x. Seafile PE 10.0, 11.0, 12.0 and 13.0 only support ElasticSearch 8.x.
We use ElasticSearch version 8.15.0 as an example in this section. Version 8.15.0 and newer versions have been successfully tested with Seafile.
Pull the Docker image:
sudo docker pull elasticsearch:8.15.0\n Create a folder for persistent data created by ElasticSearch and change its permission:
sudo mkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\n Now start the ElasticSearch container using the docker run command:
sudo docker run -d \\\n--name es \\\n-p 9200:9200 \\\n-e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" \\\n-e \"ES_JAVA_OPTS=-Xms2g -Xmx2g\" -e \"xpack.security.enabled=false\" \\\n--restart=always \\\n-v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data \\\n-d elasticsearch:8.15.0\n Security notice
We sincerely thank Mohammed Adel of Safe Decision Co. for suggesting this notice.
By default, Elasticsearch only listens on 127.0.0.1. However, this restriction may no longer hold once Docker publishes the service port, leaving your Elasticsearch service exposed to the external network and vulnerable to attackers accessing and extracting sensitive data. We recommend that you manually configure the firewall on the Docker host, for example:
sudo iptables -A INPUT -p tcp -s <your seafile server ip> --dport 9200 -j ACCEPT\nsudo iptables -A INPUT -p tcp --dport 9200 -j DROP\n The above commands allow only the host running your Seafile service to connect to Elasticsearch; all other addresses are blocked. If you deploy Elasticsearch from binary packages, refer to the official documentation instead to set the address that Elasticsearch binds to.
"},{"location":"setup_binary/installation/#modifying-seafevents","title":"Modifying seafevents","text":"Add the following configuration to seafevents.conf:
[INDEX FILES]\nes_host = <your elasticsearch server's IP, e.g., 127.0.0.1> # IP address of ElasticSearch host\nes_port = 9200 # port of ElasticSearch host\n Finally, restart Seafile:
su seafile\n./seafile.sh restart && ./seahub.sh restart \n"},{"location":"setup_binary/outline/","title":"Deploy Seafile Pro Edition","text":"Binary-based deployment of Seafile Community Edition is not supported since version 13.0
Since version 13.0, binary-based deployment for community edition is no longer supported.
There are two ways to deploy Seafile Pro Edition. Since version 8.0, the recommended way to install Seafile Pro Edition is using Docker.
For example Debian 12
Create systemd service files, changing ${seafile_dir} to your Seafile installation location and seafile to the user that runs Seafile (if appropriate). Then reload the systemd daemon: systemctl daemon-reload.
First, create a script that activates the Python virtual environment. It goes in the ${seafile_dir} directory itself, i.e. not in \"seafile-server-latest\" but in the directory above it. Throughout this manual the examples use /opt/seafile for this directory, but you might have chosen a different directory.
sudo vim /opt/seafile/run_with_venv.sh\n The content of the file is:
#!/bin/bash\n# Activate the python virtual environment (venv) before starting one of the seafile scripts\n\ndir_name=\"$(dirname \"$0\")\"\nsource \"${dir_name}/python-venv/bin/activate\"\nscript=\"$1\"\nshift 1\n\necho \"${dir_name}/seafile-server-latest/${script}\" \"$@\"\n\"${dir_name}/seafile-server-latest/${script}\" \"$@\"\n Make this script executable: sudo chmod 755 /opt/seafile/run_with_venv.sh\n"},{"location":"setup_binary/start_seafile_at_system_bootup/#seafile-component","title":"Seafile component","text":"sudo vim /etc/systemd/system/seafile.service\n The content of the file is:
[Unit]\nDescription=Seafile\n# add mysql.service or postgresql.service depending on your database to the line below\nAfter=network.target\n\n[Service]\nType=forking\nExecStart=bash ${seafile_dir}/run_with_venv.sh seafile.sh start\nExecStop=bash ${seafile_dir}/seafile-server-latest/seafile.sh stop\nLimitNOFILE=infinity\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n"},{"location":"setup_binary/start_seafile_at_system_bootup/#seahub-component","title":"Seahub component","text":"sudo vim /etc/systemd/system/seahub.service\n The content of the file is:
[Unit]\nDescription=Seafile hub\nAfter=network.target seafile.service\n\n[Service]\nType=forking\nExecStart=bash ${seafile_dir}/run_with_venv.sh seahub.sh start\nExecStop=bash ${seafile_dir}/seafile-server-latest/seahub.sh stop\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n"},{"location":"setup_binary/start_seafile_at_system_bootup/#for-systems-running-systemd-without-python-virtual-environment","title":"For systems running systemd without python virtual environment","text":"For example Debian 8 through Debian 11, Linux Ubuntu 15.04 and newer
Create systemd service files, changing ${seafile_dir} to your Seafile installation location and seafile to the user that runs Seafile (if appropriate). Then reload the systemd daemon: systemctl daemon-reload.
"},{"location":"setup_binary/start_seafile_at_system_bootup/#seafile-component_1","title":"Seafile component","text":"sudo vim /etc/systemd/system/seafile.service\n The content of the file is:
[Unit]\nDescription=Seafile\n# add mysql.service or postgresql.service depending on your database to the line below\nAfter=network.target\n\n[Service]\nType=forking\nExecStart=${seafile_dir}/seafile-server-latest/seafile.sh start\nExecStop=${seafile_dir}/seafile-server-latest/seafile.sh stop\nLimitNOFILE=infinity\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n"},{"location":"setup_binary/start_seafile_at_system_bootup/#seahub-component_1","title":"Seahub component","text":"Create systemd service file /etc/systemd/system/seahub.service
sudo vim /etc/systemd/system/seahub.service\n The content of the file is:
[Unit]\nDescription=Seafile hub\nAfter=network.target seafile.service\n\n[Service]\nType=forking\nExecStart=${seafile_dir}/seafile-server-latest/seahub.sh start\nExecStop=${seafile_dir}/seafile-server-latest/seahub.sh stop\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n"},{"location":"setup_binary/start_seafile_at_system_bootup/#seafile-cli-client-optional","title":"Seafile cli client (optional)","text":"Create systemd service file /etc/systemd/system/seafile-client.service
You need to create this service file only if you have the Seafile console client installed and want to run it at system boot.
sudo vim /etc/systemd/system/seafile-client.service\n The content of the file is:
[Unit]\nDescription=Seafile client\n# Uncomment the next line if you are running the seafile client on the same computer as the server\n# After=seafile.service\n# Otherwise, uncomment the next one\n# After=network.target\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/seaf-cli start\nExecStop=/usr/bin/seaf-cli stop\nRemainAfterExit=yes\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n"},{"location":"setup_binary/start_seafile_at_system_bootup/#enable-service-start-on-system-boot","title":"Enable service start on system boot","text":"sudo systemctl enable seafile.service\nsudo systemctl enable seahub.service\nsudo systemctl enable seafile-client.service # optional\n"},{"location":"setup_binary/using_logrotate/","title":"Set up logrotate for server","text":""},{"location":"setup_binary/using_logrotate/#how-it-works","title":"How it works","text":"seaf-server supports reopening its logfiles upon receiving a SIGUSR1 signal.
This feature is very useful when you need to rotate logfiles without shutting down the server. All you need to do is rotate the logfile on the fly.
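The mechanism can be sketched with a small, Seafile-independent shell example: a process holds its logfile open, the rotator renames the file, and a SIGUSR1 handler reopens it at the original path (the paths here are temporary stand-ins):

```shell
# A process writing to a logfile via fd 3
logfile="$(mktemp)"
exec 3>>"$logfile"

# On SIGUSR1, reopen the logfile at its original path
trap 'exec 3>>"$logfile"' USR1

echo "before rotation" >&3

mv "$logfile" "${logfile}.1"   # the rotator renames the file...
kill -USR1 $$                  # ...then signals the process to reopen it

echo "after rotation" >&3      # written to a fresh file at the old path
```

This is exactly what the postrotate commands of a logrotate configuration do for seaf-server: rename the log, then send SIGUSR1 to the pid recorded in the pidfile.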
"},{"location":"setup_binary/using_logrotate/#default-logrotate-configuration-directory","title":"Default logrotate configuration directory","text":"For Debian, the default directory for logrotate should be /etc/logrotate.d/
Assuming your seaf-server's logfile is written to /opt/seafile/logs/seafile.log and your seaf-server's pidfile is /opt/seafile/pids/seaf-server.pid:
The configuration for logrotate could be like this:
/opt/seafile/logs/seafile.log\n/opt/seafile/logs/seahub.log\n/opt/seafile/logs/seafdav.log\n/opt/seafile/logs/fileserver-access.log\n/opt/seafile/logs/fileserver-error.log\n/opt/seafile/logs/fileserver.log\n/opt/seafile/logs/file_updates_sender.log\n/opt/seafile/logs/repo_old_file_auto_del_scan.log\n/opt/seafile/logs/seahub_email_sender.log\n/opt/seafile/logs/index.log\n{\n daily\n missingok\n rotate 7\n # compress\n # delaycompress\n dateext\n dateformat .%Y-%m-%d\n notifempty\n # create 644 root root\n sharedscripts\n postrotate\n if [ -f /opt/seafile/pids/seaf-server.pid ]; then\n kill -USR1 `cat /opt/seafile/pids/seaf-server.pid`\n fi\n\n if [ -f /opt/seafile/pids/fileserver.pid ]; then\n kill -USR1 `cat /opt/seafile/pids/fileserver.pid`\n fi\n\n if [ -f /opt/seafile/pids/seahub.pid ]; then\n kill -HUP `cat /opt/seafile/pids/seahub.pid`\n fi\n\n if [ -f /opt/seafile/pids/seafdav.pid ]; then\n kill -HUP `cat /opt/seafile/pids/seafdav.pid`\n fi\n\n find /opt/seafile/logs/ -mtime +7 -name \"*.log*\" -exec rm -f {} \\;\n endscript\n}\n You can save this file, in Debian for example, at /etc/logrotate.d/seafile.
The Seafile configuration files are located in the /opt/seafile-data/seafile/conf/ directory.
You should remove the [DATABASE] configuration block.
You should remove the [database] and [memcached] configuration blocks.
You should remove the SERVICE_URL, DATABASES = {...}, CACHES = {...}, COMPRESS_CACHE_BACKEND and FILE_SERVER_ROOT configuration blocks.
The following configurations are removed or renamed to new ones.
SEAFILE_MEMCACHED_IMAGE=docker.seafile.top/seafileltd/memcached:1.6.29\n\nINIT_S3_STORAGE_BACKEND_CONFIG=false\nINIT_S3_COMMIT_BUCKET=<your-commit-objects>\nINIT_S3_FS_BUCKET=<your-fs-objects>\nINIT_S3_BLOCK_BUCKET=<your-block-objects>\nINIT_S3_KEY_ID=<your-key-id>\nINIT_S3_SECRET_KEY=<your-secret-key>\nINIT_S3_USE_V4_SIGNATURE=true\nINIT_S3_AWS_REGION=us-east-1\nINIT_S3_HOST=\nINIT_S3_USE_HTTPS=true\n\nNOTIFICATION_SERVER_VOLUME=/opt/notification-data\n\nSS_S3_USE_V4_SIGNATURE=false\nSS_S3_ACCESS_ID=<your access id>\nSS_S3_ACCESS_SECRET=<your access secret>\nSS_S3_ENDPOINT=\nSS_S3_BUCKET=<your bucket name>\nSS_S3_USE_HTTPS=true\nSS_S3_PATH_STYLE_REQUEST=true\nSS_S3_AWS_REGION=us-east-1\nSS_S3_SSE_C_KEY=<your SSE-C key>\n"},{"location":"upgrade/seafile_obsolete_configurations/#seafile-11-to-12-obsolete-configurations","title":"Seafile 11 to 12 Obsolete Configurations","text":""},{"location":"upgrade/seafile_obsolete_configurations/#ccnetconf","title":"ccnet.conf","text":"You should remove the entire ccnet.conf configuration file.
You should remove the [notification] configuration block.
There are three types of upgrade, i.e., major version upgrade, minor version upgrade and maintenance version upgrade. This page contains general instructions for the three types of upgrade.
Please check the upgrade notes for any special configuration or changes before/while upgrading.
Suppose you are using version 5.1.0 and would like to upgrade to version 6.1.0. First download and extract the new version. You should have a directory layout similar to this:
seafile\n -- seafile-server-5.1.0\n -- seafile-server-6.1.0\n -- ccnet\n -- seafile-data\n Now upgrade to version 6.1.0.
Shutdown Seafile server if it's running
cd seafile/seafile-server-latest\n./seahub.sh stop\n./seafile.sh stop\n# or via service\n/etc/init.d/seafile-server stop\n Check the upgrade scripts in seafile-server-6.1.0 directory.
cd seafile/seafile-server-6.1.0\nls upgrade/upgrade_*\n You will get a list of upgrade files:
...\nupgrade_5.0_5.1.sh\nupgrade_5.1_6.0.sh\nupgrade_6.0_6.1.sh\n Starting from your current version, run the upgrade scripts one by one:
upgrade/upgrade_5.1_6.0.sh\nupgrade/upgrade_6.0_6.1.sh\n Start Seafile server
cd seafile/seafile-server-latest/\n./seafile.sh start\n./seahub.sh start # or \"./seahub.sh start-fastcgi\" if you're using fastcgi\n# or via service\n/etc/init.d/seafile-server start\n If the new version works fine, the old version can be removed
rm -rf seafile-server-5.1.0/\n"},{"location":"upgrade/upgrade/#minor-version-upgrade-eg-from-61x-to-62y","title":"Minor version upgrade (e.g. from 6.1.x to 6.2.y)","text":"Suppose you are using version 6.1.0 and would like to upgrade to version 6.2.0. First download and extract the new version. You should have a directory layout similar to this:
seafile\n -- seafile-server-6.1.0\n -- seafile-server-6.2.0\n -- ccnet\n -- seafile-data\n Now upgrade to version 6.2.0.
cd seafile/seafile-server-latest\n./seahub.sh stop\n./seafile.sh stop\n# or via service\n/etc/init.d/seafile-server stop\n Check the upgrade scripts in seafile-server-6.2.0 directory.
cd seafile/seafile-server-6.2.0\nls upgrade/upgrade_*\n You will get a list of upgrade files:
...\nupgrade/upgrade_5.1_6.0.sh\nupgrade/upgrade_6.0_6.1.sh\nupgrade/upgrade_6.1_6.2.sh\n Starting from your current version, run the upgrade scripts one by one:
upgrade/upgrade_6.1_6.2.sh\n Start Seafile server
./seafile.sh start\n./seahub.sh start\n# or via service\n/etc/init.d/seafile-server start\n If the new version works, the old version can be removed
rm -rf seafile-server-6.1.0/\n"},{"location":"upgrade/upgrade/#maintenance-version-upgrade-eg-from-622-to-623","title":"Maintenance version upgrade (e.g. from 6.2.2 to 6.2.3)","text":"A maintenance upgrade is for example an upgrade from 6.2.2 to 6.2.3.
For this type of upgrade, you only need to update the symbolic links (for avatars and a few other folders). A script to perform this upgrade is provided with Seafile server (for historical reasons, the script is called minor-upgrade.sh):
cd seafile-server-6.2.3/upgrade/ && ./minor-upgrade.sh\n Start Seafile
If the new version works, the old version can be removed
rm -rf seafile-server-6.2.2/\n Seafile adds new features in major and minor versions. It is likely that some database tables need to be modified or the search index needs to be updated. In general, upgrading a cluster involves the following steps:
In general, to upgrade a cluster, you need:
/opt/seafile/seafile-server-latest/upgrade/upgrade_x_x_x_x.sh) in one frontend node. Before upgrading, please shut down your Seafile server:
docker compose down\n"},{"location":"upgrade/upgrade_a_cluster/#step-2-download-the-newest-seafile-serveryml-file","title":"Step 2) Download the newest seafile-server.yml file","text":"Before downloading the newest seafile-server.yml, please backup your original one:
mv seafile-server.yml seafile-server.yml.bak\n Then download the new seafile-server.yml according to the following commands:
wget https://manual.seafile.com/13.0/repo/docker/cluster/seafile-server.yml\n"},{"location":"upgrade/upgrade_a_cluster/#step-3-modify-env-update-image-version-and-some-configurations","title":"Step 3) Modify .env, update image version and some configurations","text":""},{"location":"upgrade/upgrade_a_cluster/#step-31-update-image-version-to-seafile-13","title":"Step 3.1) Update image version to Seafile 13","text":"SEAFILE_IMAGE=seafileltd/seafile-pro-mc:13.0-latest\n"},{"location":"upgrade/upgrade_a_cluster/#step-32-add-configurations-for-cache","title":"Step 3.2) Add configurations for cache","text":"From Seafile 13, the cache configuration can be set directly via environment variables (you can define them in the .env file). Moreover, Redis is now recommended as the primary cache server, as it supports some new features (please refer to the upgrade notes; more details about Redis in Seafile Docker can be found here).
## Cache\nCACHE_PROVIDER=redis\n\n### Redis\nREDIS_HOST=redis\nREDIS_PORT=6379\nREDIS_PASSWORD=\n ## Cache\nCACHE_PROVIDER=memcached\n\n### Memcached\nMEMCACHED_HOST=memcached\nMEMCACHED_PORT=11211\n"},{"location":"upgrade/upgrade_a_cluster/#step-33-add-configurations-for-database","title":"Step 3.3) Add configurations for database","text":"SEAFILE_MYSQL_DB_HOST=db\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=PASSWORD\nSEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db\n"},{"location":"upgrade/upgrade_a_cluster/#step-34-add-configurations-for-storage-backend","title":"Step 3.4) Add configurations for storage backend","text":"Seafile 13.0 adds a new environment variable SEAF_SERVER_STORAGE_TYPE to determine the storage backend of the seaf-server component. You can delete the variable or set it to empty (SEAF_SERVER_STORAGE_TYPE=) to use the old way, i.e., determining the storage backend from seafile.conf.
seafile.conf Set SEAF_SERVER_STORAGE_TYPE to disk (default value):
SEAF_SERVER_STORAGE_TYPE=disk\n Set SEAF_SERVER_STORAGE_TYPE to s3, and add your s3 configurations:
SEAF_SERVER_STORAGE_TYPE=s3\n\nS3_COMMIT_BUCKET=<your commit bucket name>\nS3_FS_BUCKET=<your fs bucket name>\nS3_BLOCK_BUCKET=<your block bucket name>\nS3_KEY_ID=<your-key-id>\nS3_SECRET_KEY=<your-secret-key>\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=false\nS3_AWS_REGION=us-east-1\nS3_HOST=\nS3_USE_HTTPS=true\nS3_SSE_C_KEY=\n Set SEAF_SERVER_STORAGE_TYPE to multiple. In this case, you don't need to change the storage configuration in seafile.conf.
SEAF_SERVER_STORAGE_TYPE=multiple\n If you would like to use the storage configuration in seafile.conf, please remove the default value of SEAF_SERVER_STORAGE_TYPE in .env:
SEAF_SERVER_STORAGE_TYPE=\n"},{"location":"upgrade/upgrade_a_cluster/#step-4-remove-obsolete-configurations","title":"Step 4) Remove obsolete configurations","text":"Although environment variables (i.e., those in .env) take priority over the settings in the config files, we recommend removing or adjusting the cache configuration in the following files to avoid ambiguity:
Back up the old configuration files:
cp /opt/seafile/shared/seafile/conf/seafile.conf /opt/seafile/shared/seafile/conf/seafile.conf.bak\ncp /opt/seafile/shared/seafile/conf/seahub_settings.py /opt/seafile/shared/seafile/conf/seahub_settings.py.bak\n Clean up redundant configuration items in the configuration files:
Edit /opt/seafile/shared/seafile/conf/seafile.conf and remove the entire [memcached], [database], [commit_object_backend], [fs_object_backend], [notification] and [block_backend] sections if they have been correctly specified in .env. Edit /opt/seafile/shared/seafile/conf/seahub_settings.py and remove the entire DATABASES = {...} and CACHES = {...} blocks. In most cases, seafile.conf then only contains the listen port 8082 of the Seafile file server.
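This cleanup can be scripted. A minimal sketch, assuming a simple INI-style layout with each section header on its own line (the helper name and usage are illustrative, not part of the manual):

```shell
# Print a config file with one section (e.g. [memcached]) removed.
# Usage: remove_section memcached seafile.conf > seafile.conf.new
remove_section() {
  awk -v sec="[$1]" '
    $0 == sec     { skip = 1; next }   # matching header: start skipping
    /^\[/ && skip { skip = 0 }         # next section header: stop skipping
    !skip         { print }
  ' "$2"
}
```

Review the output before replacing the original file; keeping the backup copies is still advisable.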
Note
According to this upgrade document, a frontend service will be started here. If you plan to use this node as a backend node, you need to modify this item in .env and set it to backend:
CLUSTER_MODE=backend\n docker compose up -d\n"},{"location":"upgrade/upgrade_a_cluster/#step-6-upgrade-seafile","title":"Step 6) Upgrade Seafile","text":"docker exec -it seafile bash\n# enter the container `seafile`\n\n# stop servers\ncd /opt/seafile/seafile-server-latest\n./seafile.sh stop\n./seahub.sh stop\n\n# upgrade seafile\ncd upgrade\n./upgrade_12.0_13.0.sh\n Success
After upgrading Seafile, you will see the following messages in your console:
Updating seafile/seahub database ...\n\n[INFO] You are using MySQL\n[INFO] updating seafile database...\n[INFO] updating seahub database...\n[INFO] updating seafevents database...\nDone\n\nmigrating avatars ...\n\nDone\n\nupdating /opt/seafile/seafile-server-latest symbolic link to /opt/seafile/seafile-pro-server-13.0.11 ...\n\n\n\n-----------------------------------------------------------------\nUpgraded your seafile server successfully.\n-----------------------------------------------------------------\n Then you can exit the container with the exit command
docker compose down\ndocker compose up -d\n Tip
Run docker logs -f seafile to check whether the current node's service is running normally. Download the newest seafile-server.yml file and modify .env as on the first node (for a backend node, set CLUSTER_MODE=backend)
Start the Seafile server:
docker compose up -d\n Stop the seafile service in all nodes
docker compose down\n Download the docker-compose files for Seafile 12
wget -O .env https://manual.seafile.com/12.0/repo/docker/cluster/env\nwget https://manual.seafile.com/12.0/repo/docker/cluster/seafile-server.yml\n
Generate a JWT key
pwgen -s 40 1\n\n# e.g., EkosWcXonPCrpPE9CFsnyQLLPqoPhSJZaqA3JMFw\n Fill in the following fields according to the configuration used in Seafile 11:
SEAFILE_SERVER_HOSTNAME=<your load balancer's host>\nSEAFILE_SERVER_PROTOCOL=https # or http\nSEAFILE_MYSQL_DB_HOST=<your mysql host>\nSEAFILE_MYSQL_DB_USER=seafile # if you don't use `seafile` as your Seafile server's account, please correct it\nSEAFILE_MYSQL_DB_PASSWORD=<your mysql password for user `seafile`>\nJWT_PRIVATE_KEY=<your JWT key generated in Sec. 3.1>\n Remove the variables used in cluster initialization
Since Seafile has been initialized in Seafile 11, the variables related to Seafile cluster initialization can be removed from .env:
Start Seafile on a node
Note
According to this upgrade document, a frontend service will be started here. If you plan to use this node as a backend node, you need to modify this item in .env and set it to backend:
CLUSTER_MODE=backend\n docker compose up -d\n Upgrade Seafile
docker exec -it seafile bash\n# enter the container `seafile`\n\n# stop servers\ncd /opt/seafile/seafile-server-latest\n./seafile.sh stop\n./seahub.sh stop\n\n# upgrade seafile\ncd upgrade\n./upgrade_11.0_12.0.sh\n Success
After upgrading Seafile, you will see the following messages in your console:
Updating seafile/seahub database ...\n\n[INFO] You are using MySQL\n[INFO] updating seafile database...\n[INFO] updating seahub database...\n[INFO] updating seafevents database...\nDone\n\nmigrating avatars ...\n\nDone\n\nupdating /opt/seafile/seafile-server-latest symbolic link to /opt/seafile/seafile-pro-server-12.0.6 ...\n\n\n\n-----------------------------------------------------------------\nUpgraded your seafile server successfully.\n-----------------------------------------------------------------\n Then you can exit the container with the exit command
Restart current node
docker compose down\n docker compose up -d\n Tip
Run docker logs -f seafile to check whether the current node's service is running normally. Operations for other nodes
Download and modify .env as on the first node (for a backend node, set CLUSTER_MODE=backend)
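Setting a node's role comes down to one line in .env. A hedged sketch (assumes GNU sed; the function name is illustrative):

```shell
# Force CLUSTER_MODE=backend in a .env file, appending the line if absent.
# Usage: set_backend /opt/seafile/.env
set_backend() {
  if grep -q '^CLUSTER_MODE=' "$1"; then
    sed -i 's/^CLUSTER_MODE=.*/CLUSTER_MODE=backend/' "$1"
  else
    echo 'CLUSTER_MODE=backend' >> "$1"
  fi
}
```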
Start the Seafile server:
docker compose up -d\n Seafile adds new features in major and minor versions. It is likely that some database tables need to be modified or the search index needs to be updated. In general, upgrading a cluster consists of the following steps:
In general, to upgrade a cluster, you need:
A maintenance upgrade is simple: you only need to run the script ./upgrade/minor_upgrade.sh on each node to update the symbolic link.
Clean Database
If the Activity table in MySQL has a large number of rows, clear it first (see Clean Database); otherwise, the database upgrade will take a long time.
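A hedged sketch of the statements typically involved; the Activity table name assumes the default seahub_db schema, and you should take a database backup before truncating anything:

```shell
# Emit the SQL used to inspect and then empty the Activity table.
# Pipe it into your mysql client, e.g.:
#   activity_cleanup_sql | mysql -u seafile -p seahub_db
activity_cleanup_sql() {
  printf 'SELECT COUNT(*) FROM Activity;\n'
  printf 'TRUNCATE TABLE Activity;\n'
}
```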
Stop Seafile server
Note
For installations using a Python virtual environment, activate it if it isn't already active
source python-venv/bin/activate\n Frontend nodeBackend node cd /opt/seafile/seafile-server-latest\nsu seafile\n./seafile.sh stop\n./seahub.sh stop\n cd /opt/seafile/seafile-server-latest\nsu seafile\n./seafile.sh stop\n./seafile-background-tasks.sh stop\n Install new Python libraries
Download and uncompress the package
Run the upgrade script in a single node
seafile-pro-server-12.x.x/upgrade/upgrade_11.0_12.0.sh\n Follow here to create the .env file in conf/ directory
Start Seafile server
Frontend nodeBackend nodecd /opt/seafile/seafile-server-latest\nsu seafile\n./seafile.sh start\n./seahub.sh start\n cd /opt/seafile/seafile-server-latest\nsu seafile\n./seafile.sh start\n./seafile-background-tasks.sh start\n (Optional) Refer here to upgrade notification server
(Optional) Refer here to upgrade SeaDoc server
For a maintenance upgrade, e.g. from version 10.0.1 to version 10.0.4, just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
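The version bump itself is a one-line rewrite of docker-compose.yml. A sketch, assuming the community-edition image line has the form image: seafileltd/seafile-mc:&lt;tag&gt; (file name, tag, and helper name are illustrative):

```shell
# Print docker-compose.yml with the Seafile image tag replaced.
# Usage: bump_image docker-compose.yml 10.0.4 > docker-compose.yml.new
bump_image() {
  sed -E "s|(image: seafileltd/seafile-mc:).*|\1$2|" "$1"
}
# afterwards: docker compose down && docker compose pull && docker compose up -d
```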
For a major version upgrade, e.g. from 10.0 to 11.0, see the instructions below.
Please check the upgrade notes for any special configuration or changes before/while upgrading.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-120-to-130","title":"Upgrade from 12.0 to 13.0","text":"From Seafile Docker 13.0, elasticsearch.yml has been separated from seafile-server.yml, and Seafile supports reading its cache configuration from environment variables
Before upgrading, please shut down your Seafile server
docker compose down\n"},{"location":"upgrade/upgrade_docker/#step-2-download-the-newest-yml-files","title":"Step 2) Download the newest .yml files","text":""},{"location":"upgrade/upgrade_docker/#step-21-download-seafile-serveryml","title":"Step 2.1) Download seafile-server.yml","text":"Before downloading the newest seafile-server.yml, please back up your original one:
mv seafile-server.yml seafile-server.yml.bak\n Then download the new seafile-server.yml with the following command:
wget https://manual.seafile.com/13.0/repo/docker/ce/seafile-server.yml\n wget https://manual.seafile.com/13.0/repo/docker/pro/seafile-server.yml\n"},{"location":"upgrade/upgrade_docker/#step-22-download-yml-file-for-notification-server","title":"Step 2.2) Download .yml file for notification server","text":"Deployment with SeafileStandalone deployment wget https://manual.seafile.com/13.0/repo/docker/notification-server.yml\n wget https://manual.seafile.com/13.0/repo/docker/notification-server/notification-server.yml\n"},{"location":"upgrade/upgrade_docker/#step-23-download-yml-file-for-search-engine-pro-edition","title":"Step 2.3) Download .yml file for search engine (Pro edition)","text":"ElasticSearchSeaSearch From Seafile Docker 13.0 (Pro), the ElasticSearch service will be controlled by a separate resource file (i.e., elasticsearch.yml). If you are using Seafile Pro and still plan to use ElasticSearch, please download the elasticsearch.yml:
wget https://manual.seafile.com/13.0/repo/docker/pro/elasticsearch.yml\n If you are using SeaSearch as the search engine, please download the newest seasearch.yml file:
mv seasearch.yml seasearch.yml.bak\nwget https://manual.seafile.com/13.0/repo/docker/pro/seasearch.yml\n"},{"location":"upgrade/upgrade_docker/#step-24-download-yml-file-for-seadoc-optional","title":"Step 2.4) Download .yml file for SeaDoc (optional)","text":"If you use the SeaDoc extension, the seadoc.yml file needs to be updated too:
wget https://manual.seafile.com/13.0/repo/docker/seadoc.yml\n"},{"location":"upgrade/upgrade_docker/#step-3-modify-env-update-image-version-and-add-cache-configurations","title":"Step 3) Modify .env, update image version and add cache configurations","text":""},{"location":"upgrade/upgrade_docker/#step-31-update-image-version-to-seafile-13","title":"Step 3.1) Update image version to Seafile 13","text":"Seafile CESeafile Pro SEAFILE_IMAGE=seafileltd/seafile-mc:13.0-latest\nSEADOC_IMAGE=seafileltd/sdoc-server:2.0-latest\nNOTIFICATION_SERVER_IMAGE=seafileltd/notification-server:13.0-latest\n # -- add `elasticsearch.yml` if you are still using ElasticSearch\n# COMPOSE_FILE='...,elasticsearch.yml'\n\n# -- if you are using SeaSearch, please also update the SeaSearch image\n# SEASEARCH_IMAGE=seafileltd/seasearch:1.0-latest # or seafileltd/seasearch-nomkl:1.0-latest for Apple chips\n\nSEAFILE_IMAGE=seafileltd/seafile-pro-mc:13.0-latest\nSEADOC_IMAGE=seafileltd/sdoc-server:2.0-latest\nNOTIFICATION_SERVER_IMAGE=seafileltd/notification-server:13.0-latest\n"},{"location":"upgrade/upgrade_docker/#step-32-add-configurations-for-cache","title":"Step 3.2) Add configurations for cache","text":"From Seafile 13, the database and cache configuration can be set directly via environment variables (you can define them in .env). In addition, Redis is now recommended as the primary cache server, as some new features depend on it (please refer to the upgrade notes; you can also find more details about Redis in Seafile Docker here).
## Cache\nCACHE_PROVIDER=redis\n\n### Redis\nREDIS_HOST=redis\nREDIS_PORT=6379\nREDIS_PASSWORD=\n ## Cache\nCACHE_PROVIDER=memcached\n\n### Memcached\nMEMCACHED_HOST=memcached\nMEMCACHED_PORT=11211\n"},{"location":"upgrade/upgrade_docker/#step-33-add-configuration-for-notification-server","title":"Step 3.3) Add configuration for notification server","text":"If you are using notification server in Seafile 12, please specify the notification server url in .env:
ENABLE_NOTIFICATION_SERVER=true\n ENABLE_NOTIFICATION_SERVER=true\nNOTIFICATION_SERVER_URL=http://<your notification server host>:8083\nINNER_NOTIFICATION_SERVER_URL=$NOTIFICATION_SERVER_URL\n"},{"location":"upgrade/upgrade_docker/#step-34-add-configurations-for-storage-backend-pro","title":"Step 3.4) Add configurations for storage backend (Pro)","text":"Seafile 13.0 adds a new environment variable SEAF_SERVER_STORAGE_TYPE that determines the storage backend of the seaf-server component. You can delete the variable or set it to empty (SEAF_SERVER_STORAGE_TYPE=) to keep the old behavior, i.e., reading the storage backend from seafile.conf.
seafile.conf Set SEAF_SERVER_STORAGE_TYPE to disk (default value):
SEAF_SERVER_STORAGE_TYPE=disk\n Set SEAF_SERVER_STORAGE_TYPE to s3, and add your s3 configurations:
SEAF_SERVER_STORAGE_TYPE=s3\n\nS3_COMMIT_BUCKET=<your commit bucket name>\nS3_FS_BUCKET=<your fs bucket name>\nS3_BLOCK_BUCKET=<your block bucket name>\nS3_SS_BUCKET=<your seasearch bucket name> # for seasearch\nS3_MD_BUCKET=<your metadata bucket name> # for metadata-server\nS3_KEY_ID=<your-key-id>\nS3_SECRET_KEY=<your-secret-key>\nS3_USE_V4_SIGNATURE=true\nS3_PATH_STYLE_REQUEST=false\nS3_AWS_REGION=us-east-1\nS3_HOST=\nS3_USE_HTTPS=true\nS3_SSE_C_KEY=\n Set SEAF_SERVER_STORAGE_TYPE to multiple. In this case, you don't need to change the storage configuration in seafile.conf.
SEAF_SERVER_STORAGE_TYPE=multiple\n If you would like to use the storage configuration in seafile.conf, please remove the default value of SEAF_SERVER_STORAGE_TYPE in .env:
SEAF_SERVER_STORAGE_TYPE=\n"},{"location":"upgrade/upgrade_docker/#step-4-remove-obsolete-configurations","title":"Step 4) Remove obsolete configurations","text":"Although environment variables (i.e., those in .env) take priority over the settings in the config files, we recommend removing or adjusting the cache configuration in the following files to avoid ambiguity:
Back up the old configuration files:
# please replace /opt/seafile-data with your $SEAFILE_VOLUME\n\ncp /opt/seafile-data/seafile/conf/seafile.conf /opt/seafile-data/seafile/conf/seafile.conf.bak\ncp /opt/seafile-data/seafile/conf/seahub_settings.py /opt/seafile-data/seafile/conf/seahub_settings.py.bak\n
Edit /opt/seafile-data/seafile/conf/seafile.conf and remove the entire [memcached], [database], [commit_object_backend], [fs_object_backend], [notification] and [block_backend] sections if they have been correctly specified in .env. Edit /opt/seafile-data/seafile/conf/seahub_settings.py and remove the entire DATABASES = {...} and CACHES = {...} blocks. In most cases, seafile.conf then only contains the listen port 8082 of the Seafile file server.
docker compose up -d\n"},{"location":"upgrade/upgrade_docker/#upgrade-from-110-to-120","title":"Upgrade from 11.0 to 12.0","text":"Note: If the Activity table in MySQL has a large number of rows, clear it first (see Clean Database); otherwise, the database upgrade will take a long time.
From Seafile Docker 12.0, we recommend that you use .env and seafile-server.yml files for configuration.
mv docker-compose.yml docker-compose.yml.bak\n"},{"location":"upgrade/upgrade_docker/#download-seafile-120-docker-files","title":"Download Seafile 12.0 Docker files","text":"Download .env, seafile-server.yml and caddy.yml, and modify .env file according to the old configuration in docker-compose.yml.bak
wget -O .env https://manual.seafile.com/12.0/repo/docker/ce/env\nwget https://manual.seafile.com/12.0/repo/docker/ce/seafile-server.yml\nwget https://manual.seafile.com/12.0/repo/docker/caddy.yml\n The following fields merit particular attention: Variable Description Default Value SEAFILE_VOLUME The volume directory of Seafile data /opt/seafile-data SEAFILE_MYSQL_VOLUME The volume directory of MySQL data /opt/seafile-mysql/db SEAFILE_CADDY_VOLUME The volume directory of Caddy data used to store certificates obtained from Let's Encrypt's /opt/seafile-caddy SEAFILE_MYSQL_DB_USER The user of MySQL (database - user can be found in conf/seafile.conf) seafile SEAFILE_MYSQL_DB_PASSWORD The user seafile password of MySQL (required) SEAFILE_MYSQL_DB_CCNET_DB_NAME The database name of ccnet ccnet_db SEAFILE_MYSQL_DB_SEAFILE_DB_NAME The database name of seafile seafile_db SEAFILE_MYSQL_DB_SEAHUB_DB_NAME The database name of seahub seahub_db JWT_PRIVATE_KEY JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters is required for Seafile, which can be generated by using pwgen -s 40 1 (required) SEAFILE_SERVER_HOSTNAME Seafile server hostname or domain (required) SEAFILE_SERVER_PROTOCOL Seafile server protocol (http or https) http TIME_ZONE Time zone UTC wget -O .env https://manual.seafile.com/12.0/repo/docker/pro/env\nwget https://manual.seafile.com/12.0/repo/docker/pro/seafile-server.yml\nwget https://manual.seafile.com/12.0/repo/docker/caddy.yml\n The following fields merit particular attention: Variable Description Default Value SEAFILE_VOLUME The volume directory of Seafile data /opt/seafile-data SEAFILE_MYSQL_VOLUME The volume directory of MySQL data /opt/seafile-mysql/db SEAFILE_CADDY_VOLUME The volume directory of Caddy data used to store certificates obtained from Let's Encrypt's /opt/seafile-caddy SEAFILE_ELASTICSEARCH_VOLUME (Only valid for Seafile PE) The volume directory of Elasticsearch data /opt/seafile-elasticsearch/data SEAFILE_MYSQL_DB_USER 
The user of MySQL (database - user can be found in conf/seafile.conf) seafile SEAFILE_MYSQL_DB_PASSWORD The user seafile password of MySQL (required) JWT_PRIVATE_KEY JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters is required for Seafile, which can be generated by using pwgen -s 40 1 (required) SEAFILE_SERVER_HOSTNAME Seafile server hostname or domain (required) SEAFILE_SERVER_PROTOCOL Seafile server protocol (http or https) http TIME_ZONE Time zone UTC Note
seafile.conf).INIT_SEAFILE_MYSQL_ROOT_PASSWORD, INIT_SEAFILE_ADMIN_EMAIL, INIT_SEAFILE_ADMIN_PASSWORD), you can remove them from the .env file. SSL is now handled by the Caddy server. If you used SSL before, you will also need to modify seafile.nginx.conf: change server listen 443 to 80.
Backup the original seafile.nginx.conf file:
cp seafile.nginx.conf seafile.nginx.conf.bak\n Remove the server listen 80 section:
#server {\n# listen 80;\n# server_name _ default_server;\n\n # allow certbot to connect to challenge location via HTTP Port 80\n # otherwise renewal request will fail\n# location /.well-known/acme-challenge/ {\n# alias /var/www/challenges/;\n# try_files $uri =404;\n# }\n\n# location / {\n# rewrite ^ https://seafile.example.com$request_uri? permanent;\n# }\n#}\n Change server listen 443 to 80:
server {\n#listen 443 ssl;\nlisten 80;\n\n# ssl_certificate /shared/ssl/pkg.seafile.top.crt;\n# ssl_certificate_key /shared/ssl/pkg.seafile.top.key;\n\n# ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;\n\n ...\n Start with docker compose up.
"},{"location":"upgrade/upgrade_docker/#upgrade-notification-server","title":"Upgrade notification server","text":"If you have deployed the notification server, note that it has been moved to its own Docker image. You need to redeploy it according to the Notification Server document
"},{"location":"upgrade/upgrade_docker/#upgrade-seadoc-from-08-to-10-for-seafile-v120","title":"Upgrade SeaDoc from 0.8 to 1.0 for Seafile v12.0","text":"If you have deployed SeaDoc v0.8 with Seafile v11.0, you can upgrade it to 1.0 using the following steps:
From version 1.0, SeaDoc uses the seahub_db database to store its operation logs and no longer needs the extra database sdoc_db. The database tables in seahub_db are created automatically when you upgrade Seafile server from v11.0 to v12.0. You can simply delete sdoc_db.
"},{"location":"upgrade/upgrade_docker/#remove-seadoc-configs-in-seafilenginxconf-file","title":"Remove SeaDoc configs in seafile.nginx.conf file","text":"If you have deployed an older SeaDoc version, you should remove the /sdoc-server/ and /socket.io configs from the seafile.nginx.conf file.
# location /sdoc-server/ {\n# add_header Access-Control-Allow-Origin *;\n# add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;\n# add_header Access-Control-Allow-Headers \"deviceType,token, authorization, content-type\";\n# if ($request_method = 'OPTIONS') {\n# add_header Access-Control-Allow-Origin *;\n# add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;\n# add_header Access-Control-Allow-Headers \"deviceType,token, authorization, content-type\";\n# return 204;\n# }\n# proxy_pass http://sdoc-server:7070/;\n# proxy_redirect off;\n# proxy_set_header Host $host;\n# proxy_set_header X-Real-IP $remote_addr;\n# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n# proxy_set_header X-Forwarded-Host $server_name;\n# proxy_set_header X-Forwarded-Proto $scheme;\n# client_max_body_size 100m;\n# }\n# location /socket.io {\n# proxy_pass http://sdoc-server:7070;\n# proxy_http_version 1.1;\n# proxy_set_header Upgrade $http_upgrade;\n# proxy_set_header Connection 'upgrade';\n# proxy_redirect off;\n# proxy_buffers 8 32k;\n# proxy_buffer_size 64k;\n# proxy_set_header X-Real-IP $remote_addr;\n# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n# proxy_set_header Host $http_host;\n# proxy_set_header X-NginX-Proxy true;\n# }\n"},{"location":"upgrade/upgrade_docker/#deploy-a-new-seadoc-server","title":"Deploy a new SeaDoc server","text":"Please see the document Setup SeaDoc to install SeaDoc with Seafile.
"},{"location":"upgrade/upgrade_docker/#other-configuration-changes","title":"Other configuration changes","text":""},{"location":"upgrade/upgrade_docker/#enable-passing-of-remote_user","title":"Enable passing of REMOTE_USER","text":"The REMOTE_USER header is not passed to Seafile by default; you need to change gunicorn.conf.py if you need the REMOTE_USER header for SSO.
forwarder_headers = 'SCRIPT_NAME,PATH_INFO,REMOTE_USER'\n"},{"location":"upgrade/upgrade_docker/#supplement-or-remove-allowed_hosts-in-seahub_settingspy","title":"Supplement or remove ALLOWED_HOSTS in seahub_settings.py","text":"Since version 12.0, the seaf-server component needs to send internal requests to the seahub component to check permissions; if ALLOWED_HOSTS is set incorrectly, file downloads fail with a 400 error. In this case, you can either remove ALLOWED_HOSTS in seahub_settings.py or add 127.0.0.1 to the ALLOWED_HOSTS list:
# seahub_settings.py\n\nALLOWED_HOSTS = ['...(your domain)', '127.0.0.1']\n"},{"location":"upgrade/upgrade_docker/#upgrade-from-100-to-110","title":"Upgrade from 10.0 to 11.0","text":"Download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version. Taking the community edition as an example, you have to modify
...\nservice:\n ...\n seafile:\n image: seafileltd/seafile-mc:10.0-latest\n ...\n ...\n to
service:\n ...\n seafile:\n image: seafileltd/seafile-mc:11.0-latest\n ...\n ...\n It is also recommended that you upgrade mariadb and memcached to newer versions as in the v11.0 docker-compose.yml file. Specifically, in version 11.0, we use the following versions:
What's more, you have to migrate configuration for LDAP and OAuth according to here
Start with docker compose up.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-90-to-100","title":"Upgrade from 9.0 to 10.0","text":"Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
If you are using pro edition with ElasticSearch, SAML SSO and storage backend features, follow the upgrading manual on how to update the configuration for these features.
If you want to use the new notification server and rate control (pro edition only), please refer to the upgrading manual.
"},{"location":"upgrade/upgrade_docker/#upgrade-from-80-to-90","title":"Upgrade from 8.0 to 9.0","text":"Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up.
"},{"location":"upgrade/upgrade_docker/#lets-encrypt-ssl-certificate","title":"Let's encrypt SSL certificate","text":"Since version 9.0.6, we use Acme V3 (not acme-tiny) to get certificate.
If there is a certificate generated by an old version, you need to back up and move the old certificate directory and the seafile.nginx.conf before starting.
mv /opt/seafile/shared/ssl /opt/seafile/shared/ssl-bak\n\nmv /opt/seafile/shared/nginx/conf/seafile.nginx.conf /opt/seafile/shared/nginx/conf/seafile.nginx.conf.bak\n Starting the new container will automatically apply a certificate.
docker compose down\ndocker compose up -d\n Please wait a moment for the certificate to be applied, then you can modify the new seafile.nginx.conf as you want. Execute the following command to make the nginx configuration take effect.
docker exec seafile nginx -s reload\n A cron job inside the container will automatically renew the certificate.
"},{"location":"upgrade/upgrade_notes_for_10.0.x/","title":"Upgrade notes for 10.0","text":"These notes give additional information about changes. Please always follow the main upgrade guide.
For docker based version, please check upgrade Seafile Docker image
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#important-release-changes","title":"Important release changes","text":""},{"location":"upgrade/upgrade_notes_for_10.0.x/#enable-notification-server","title":"Enable notification server","text":"The notification server enables desktop syncing and drive clients to get notification of library changes immediately using websocket. There are two benefits:
The notification server works with Seafile syncing client 9.0+ and drive client 3.0+.
Please follow the document to enable notification server
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#memcached-section-in-the-seafileconf-pro-edition-only","title":"Memcached section in the seafile.conf (pro edition only)","text":"If you use storage backend or cluster, make sure the memcached section is in the seafile.conf.
Since version 10.0, all memcached options are consolidated into the single option below.
Modify the seafile.conf:
[memcached]\nmemcached_options = --SERVER=<the IP of Memcached Server> --POOL-MIN=10 --POOL-MAX=100\n"},{"location":"upgrade/upgrade_notes_for_10.0.x/#saml-sso-change-pro-edition-only","title":"SAML SSO change (pro edition only)","text":"The configuration for SAML SSO in Seafile is greatly simplified. Now only three options are needed:
ENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n ...\n}\n Please check the new document on SAML SSO
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#rate-control-in-role-settings-pro-edition-only","title":"Rate control in role settings (pro edition only)","text":"Starting from version 10.0, Seafile allows administrators to configure upload and download speed limits for users with different roles through the following two steps:
seahub_settings.py.ENABLED_ROLE_PERMISSIONS = {\n 'default': {\n ...\n 'upload_rate_limit': 2000, # unit: kb/s\n 'download_rate_limit': 4000,\n ...\n },\n 'guest': {\n ...\n 'upload_rate_limit': 100,\n 'download_rate_limit': 200,\n ...\n },\n}\n seafile-server-latest directory to make the configuration take effect../seahub.sh python-env python3 seahub/manage.py set_user_role_upload_download_rate_limit\n"},{"location":"upgrade/upgrade_notes_for_10.0.x/#elasticsearch-change-pro-edition-only","title":"ElasticSearch change (pro edition only)","text":"Elasticsearch is upgraded to version 8.x, which fixes and improves several issues with the file search function.
Since elasticsearch 7.x, the default number of shards has changed from 5 to 1, because too many index shards over-consume system resources; but when a single shard's data is too large, search performance also suffers. Starting from version 10.0, Seafile supports customizing the number of shards in the configuration file.
You can use the following command to query the current size of each shard to determine the best number of shards for you:
curl 'http{s}://<es IP>:9200/_cat/shards/repofiles?v'\n The official recommendation is that the size of each shard should be between 10 GB and 50 GB: https://www.elastic.co/guide/en/elasticsearch/reference/8.6/size-your-shards.html#shard-size-recommendation.
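From the reported index size you can estimate a shard count. A rough arithmetic sketch targeting ~30 GB per shard, the midpoint of the recommendation (the helper is illustrative, not an official formula):

```shell
# Suggest a shard count so each shard holds roughly 30 GB.
# Usage: suggest_shards <total-index-size-in-GB>
suggest_shards() {
  echo $(( ($1 + 29) / 30 ))   # integer division, rounded up
}
```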
Modify the seafevents.conf:
[INDEX FILES]\n...\nshards = 10 # default is 5\n...\n"},{"location":"upgrade/upgrade_notes_for_10.0.x/#new-python-libraries","title":"New Python libraries","text":"Note: you should install the Python libraries system-wide, as the root user or via sudo.
For Ubuntu 20.04/22.04
sudo pip3 install future==0.18.* mysqlclient==2.1.* pillow==10.2.* captcha==0.5.* django_simple_captcha==0.5.20 djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1\n For Debian 11
sudo pip3 install future==0.18.* mysqlclient==2.1.* pillow==9.3.* captcha==0.4 django_simple_captcha==0.5.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1\n"},{"location":"upgrade/upgrade_notes_for_10.0.x/#upgrade-to-100x","title":"Upgrade to 10.0.x","text":"Stop Seafile-9.0.x server.
Start from Seafile 10.0.x, run the script:
upgrade/upgrade_9.0_10.0.sh\n If you are using the pro edition, modify the memcached option in seafile.conf and the SAML SSO configuration if needed.
You can choose one of the methods to upgrade your index data.
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#method-one-reindex-the-old-index-data","title":"Method one, reindex the old index data","text":"1. Download Elasticsearch image:
docker pull elasticsearch:7.17.9\n Create a new folder to store ES data and give the folder permissions:
mkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\n Start ES docker image:
sudo docker run -d --name es-7.17 -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data -d elasticsearch:7.17.9\n PS: ES_JAVA_OPTS can be adjusted according to your need.
2. Create an index with 8.x compatible mappings:
# create repo_head index\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repo_head_8?pretty=true' -d '\n{\n \"mappings\" : {\n \"properties\" : {\n \"commit\" : {\n \"type\" : \"keyword\",\n \"index\" : false\n },\n \"repo\" : {\n \"type\" : \"keyword\",\n \"index\" : false\n },\n \"updatingto\" : {\n \"type\" : \"keyword\",\n \"index\" : false\n }\n }\n }\n}'\n\n# create repofiles index, number_of_shards is the number of shards, here is set to 5, you can also modify it to the most suitable number of shards\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repofiles_8/?pretty=true' -d '\n{\n \"settings\" : {\n \"index\" : {\n \"number_of_shards\" : \"5\",\n \"analysis\" : {\n \"analyzer\" : {\n \"seafile_file_name_ngram_analyzer\" : {\n \"filter\" : [\n \"lowercase\"\n ],\n \"type\" : \"custom\",\n \"tokenizer\" : \"seafile_file_name_ngram_tokenizer\"\n }\n },\n \"tokenizer\" : {\n \"seafile_file_name_ngram_tokenizer\" : {\n \"type\" : \"ngram\",\n \"min_gram\" : \"3\",\n \"max_gram\" : \"4\"\n }\n }\n }\n }\n },\n \"mappings\" : {\n \"properties\" : {\n \"content\" : {\n \"type\" : \"text\",\n \"term_vector\" : \"with_positions_offsets\"\n },\n \"filename\" : {\n \"type\" : \"text\",\n \"fields\" : {\n \"ngram\" : {\n \"type\" : \"text\",\n \"analyzer\" : \"seafile_file_name_ngram_analyzer\"\n }\n }\n },\n \"is_dir\" : {\n \"type\" : \"boolean\"\n },\n \"mtime\" : {\n \"type\" : \"date\"\n },\n \"path\" : {\n \"type\" : \"keyword\"\n },\n \"repo\" : {\n \"type\" : \"keyword\"\n },\n \"size\" : {\n \"type\" : \"long\"\n },\n \"suffix\" : {\n \"type\" : \"keyword\"\n }\n }\n }\n}'\n 3. Set the refresh_interval to -1 and the number_of_replicas to 0 for efficient reindex:
curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repo_head_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repofiles_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\n 4. Use the reindex API to copy documents from the 7.x index into the new index:
curl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?wait_for_completion=false&pretty=true' -d '\n{\n \"source\": {\n \"index\": \"repo_head\"\n },\n \"dest\": {\n \"index\": \"repo_head_8\"\n }\n}'\n\ncurl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?wait_for_completion=false&pretty=true' -d '\n{\n \"source\": {\n \"index\": \"repofiles\"\n },\n \"dest\": {\n \"index\": \"repofiles_8\"\n }\n}'\n 5. Use the following command to check if the reindex task is complete:
# Get the task_id of the reindex task:\n$ curl 'http{s}://{es server IP}:9200/_tasks?actions=*reindex&pretty'\n# Check whether the reindex task is complete:\n$ curl 'http{s}://{es server IP}:9200/_tasks/<task_id>?pretty'\n 6. Reset the refresh_interval and number_of_replicas to the values used in the old index:
curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repo_head_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repofiles_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\n 7. Wait for the elasticsearch status to change to green (or yellow if it is a single node).
curl 'http{s}://{es server IP}:9200/_cluster/health?pretty'\n 8. Use the aliases API to delete the old index and add an alias with the old index name to the new index:
curl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_aliases?pretty' -d '\n{\n \"actions\": [\n {\"remove_index\": {\"index\": \"repo_head\"}},\n {\"remove_index\": {\"index\": \"repofiles\"}},\n {\"add\": {\"index\": \"repo_head_8\", \"alias\": \"repo_head\"}},\n {\"add\": {\"index\": \"repofiles_8\", \"alias\": \"repofiles\"}}\n ]\n}'\n 9. Stop and remove the 7.17 container, pull the 8.x image, and run it:
$ docker stop es-7.17\n\n$ docker rm es-7.17\n\n$ docker pull elasticsearch:8.6.2\n\n$ sudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data elasticsearch:8.6.2\n"},{"location":"upgrade/upgrade_notes_for_10.0.x/#method-two-rebuild-the-index-and-discard-the-old-index-data","title":"Method two, rebuild the index and discard the old index data","text":"1. Pull Elasticsearch image:
docker pull elasticsearch:8.5.3\n Create a new folder to store ES data and give the folder permissions:
mkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\n Start ES docker image:
sudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data elasticsearch:8.5.3\n 2. Modify the seafevents.conf:
[INDEX FILES]\n...\nexternal_es_server = true\nes_host = http{s}://{es server IP}\nes_port = 9200\nshards = 10 # default is 5.\n...\n Restart Seafile server:
su seafile\ncd seafile-server-latest/\n./seafile.sh stop && ./seahub.sh stop\n./seafile.sh start && ./seahub.sh start\n 3. Delete the old index data:
rm -rf /opt/seafile-elasticsearch/data/*\n 4. Create new index data:
$ cd /opt/seafile/seafile-server-latest\n$ ./pro/pro.py search --update\n"},{"location":"upgrade/upgrade_notes_for_10.0.x/#method-three-if-you-are-in-a-cluster-environment","title":"Method three, if you are in a cluster environment","text":"1. Deploy Elasticsearch 8.x according to method two. Deploy a new backend node with Seafile 10.0 and modify its seafevents.conf file. Do not start the Seafile background service on this node; just manually run the command ./pro/pro.py search --update.
2. Upgrade the other nodes to Seafile 10.0 version and use the new Elasticsearch 8.x server.
3. Then shut down the old backend node and the old version of Elasticsearch.
"},{"location":"upgrade/upgrade_notes_for_11.0.x/","title":"Upgrade notes for 11.0","text":"These notes give additional information about changes. Please always follow the main upgrade guide.
For docker based version, please check upgrade Seafile Docker image
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#important-release-changes","title":"Important release changes","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-of-user-identity","title":"Change of user identity","text":"Previous Seafile versions directly used a user's email address or SSO identity as their internal user ID.
Seafile 11.0 introduces virtual user IDs - random, internal identifiers like \"adc023e7232240fcbb83b273e1d73d36@auth.local\". For new users, a virtual ID will be generated instead of directly using their email. A mapping between the email and virtual ID will be stored in the \"profile_profile\" database table. For SSO users, the mapping between SSO ID and virtual ID is stored in the \"social_auth_usersocialauth\" table.
Overall, this brings more flexibility in handling user accounts and identity changes. Existing users keep their old IDs.
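The ID scheme described above can be sketched as follows (an illustration only, not Seafile's actual implementation; the helper name is hypothetical):

```python
import uuid

def make_virtual_id():
    # Random 32-character hex string plus the "@auth.local" suffix,
    # mirroring IDs like "adc023e7232240fcbb83b273e1d73d36@auth.local".
    return uuid.uuid4().hex + "@auth.local"

vid = make_virtual_id()
print(vid)       # random each run, e.g. "…@auth.local"
print(len(vid))  # 43
```

Seafile itself maps a user's real email (or SSO ID) to such an ID in the database tables mentioned above.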
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#reimplementation-of-ldap-integration","title":"Reimplementation of LDAP Integration","text":"Previous Seafile versions handled LDAP authentication in the ccnet-server component. In Seafile 11.0, LDAP is reimplemented within the Seahub Python codebase.
LDAP configuration has been moved from ccnet.conf to seahub_settings.py. The ccnet_db.LDAPImported table is no longer used - LDAP users are now stored in ccnet_db.EmailUsers along with other users.
Benefits of this new implementation:
You need to run the migrate_ldapusers.py script to merge the ccnet_db.LDAPImported table into the ccnet_db.EmailUsers table. The settings files need to be changed manually (see more details below).
If you use OAuth authentication, the configuration needs to be changed slightly.
If you use SAML, you don't need to change configuration files. For SAML2, in version 10, the name_id field is returned from the SAML server and is used as the username (the email field in ccnet_db.EmailUsers). In version 11, for old users, Seafile will find the old user and create a name_id to name_id mapping in social_auth_usersocialauth. For new users, Seafile will create a new user with a random ID and add a name_id to random ID mapping in social_auth_usersocialauth. In addition, we have added an option to disable login with a username and password for SAML users by setting DISABLE_ADFS_USER_PWD_LOGIN = True in seahub_settings.py.
Seafile 11.0 drops support for SQLite as the database. You should migrate from SQLite to MySQL before upgrading to version 11.0.
There are several reasons driving this change:
To migrate from SQLite to MySQL, you can follow the document Migrate from SQLite to MySQL. If you run into issues during the migration, just post a thread in our forum; we are glad to help.
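The core idea of such a migration is to re-emit each SQLite table as MySQL-compatible INSERT statements. A minimal sketch (this is not the official migration script; the table and data here are made up for the demo):

```python
import sqlite3

def dump_table_as_inserts(conn, table):
    # Emit MySQL-compatible INSERT statements for one SQLite table.
    cur = conn.execute(f"SELECT * FROM {table}")
    cols = ", ".join(d[0] for d in cur.description)
    stmts = []
    for row in cur:
        vals = ", ".join("NULL" if v is None else repr(v) for v in row)
        stmts.append(f"INSERT INTO {table} ({cols}) VALUES ({vals});")
    return stmts

# Demo with an in-memory database standing in for a real ccnet/seahub DB.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE EmailUsers (id INTEGER, email TEXT)")
conn.execute("INSERT INTO EmailUsers VALUES (1, 'a@example.com')")
for stmt in dump_table_as_inserts(conn, "EmailUsers"):
    print(stmt)
```

The official document handles schema differences and escaping properly, so prefer it for a real migration.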
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#elasticsearch-change-pro-edition-only","title":"ElasticSearch change (pro edition only)","text":"Elasticsearch version is not changed in Seafile version 11.0
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#new-saml-prerequisites-multi_tenancy-only","title":"New SAML prerequisites (MULTI_TENANCY only)","text":"For Ubuntu 20.04/22.04
sudo apt-get update\nsudo apt-get install -y dnsutils\n"},{"location":"upgrade/upgrade_notes_for_11.0.x/#django-csrf-protection-issue","title":"Django CSRF protection issue","text":"Django 4.* introduced a new check on the Origin HTTP header in CSRF verification: it compares the Origin header with the Host header, and if they differ, an error is triggered.
If you deploy Seafile behind a proxy, use a non-standard port, or deploy Seafile in a cluster, the Origin and Host headers received by Django are likely to differ, because the Host header is typically rewritten by the proxy. This mismatch results in a CSRF error.
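The comparison is roughly equivalent to the following (a simplified illustration, not Django's actual code):

```python
def origin_matches_host(origin, host, scheme="https"):
    # Django 4.x rejects the request when the Origin header does not
    # match the scheme://host it reconstructs from the Host header.
    return origin == f"{scheme}://{host}"

# Browser sends the public origin, but the proxy forwards an internal
# Host value to Seafile: mismatch -> CSRF error.
print(origin_matches_host("https://seafile.example.com", "127.0.0.1:8000"))      # False
print(origin_matches_host("https://seafile.example.com", "seafile.example.com")) # True
```

CSRF_TRUSTED_ORIGINS works around this by explicitly whitelisting the public origin.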
You can add CSRF_TRUSTED_ORIGINS to seahub_settings.py to solve the problem:
CSRF_TRUSTED_ORIGINS = [\"https://<your-domain>\"]\n"},{"location":"upgrade/upgrade_notes_for_11.0.x/#new-python-libraries","title":"New Python libraries","text":"Note, you should install Python libraries system wide using root user or sudo mode.
For Ubuntu 20.04/22.04
sudo apt-get update\nsudo apt-get install -y python3-dev ldap-utils libldap2-dev\n\nsudo pip3 install future==0.18.* mysqlclient==2.1.* pillow==10.2.* sqlalchemy==2.0.18 captcha==0.5.* django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 python-ldap==3.4.3\n"},{"location":"upgrade/upgrade_notes_for_11.0.x/#upgrade-to-110x","title":"Upgrade to 11.0.x","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#1-stop-seafile-100x-server","title":"1) Stop Seafile-10.0.x server.","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#2-start-from-seafile-110x-run-the-script","title":"2) Start from Seafile 11.0.x, run the script:","text":"upgrade/upgrade_10.0_11.0.sh\n"},{"location":"upgrade/upgrade_notes_for_11.0.x/#3modify-configurations-and-migrate-ldap-records","title":"3\uff09Modify configurations and migrate LDAP records","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-configurations-for-ldap","title":"Change configurations for LDAP","text":"The configuration items of LDAP login and LDAP sync tasks are migrated from ccnet.conf to seahub_settings.py. The name of the configuration item is based on the 10.0 version, and the characters 'LDAP_' or 'MULTI_LDAP_1' are added. Examples are as follows:
# Basic configuration items for LDAP login\nENABLE_LDAP = True\nLDAP_SERVER_URL = 'ldap://192.168.0.125' # The URL of LDAP server\nLDAP_BASE_DN = 'ou=test,dc=seafile,dc=ren' # The root node of users who can \n # log in to Seafile in the LDAP server\nLDAP_ADMIN_DN = 'administrator@seafile.ren' # DN of the administrator used \n # to query the LDAP server for information\nLDAP_ADMIN_PASSWORD = 'Hello@123' # Password of LDAP_ADMIN_DN\nLDAP_PROVIDER = 'ldap' # Identify the source of the user, used in \n # the table social_auth_usersocialauth, defaults to 'ldap'\nLDAP_LOGIN_ATTR = 'userPrincipalName' # User's attribute used to log in to Seafile, \n # can be mail or userPrincipalName, cannot be changed\nLDAP_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' # Additional filter conditions,\n # users who meet the filter conditions can log in, otherwise they cannot log in\n# For update user info when login\nLDAP_CONTACT_EMAIL_ATTR = '' # For update user's contact_email\nLDAP_USER_ROLE_ATTR = '' # For update user's role\nLDAP_USER_FIRST_NAME_ATTR = 'givenName' # For update user's first name\nLDAP_USER_LAST_NAME_ATTR = 'sn' # For update user's last name\nLDAP_USER_NAME_REVERSE = False # Whether to reverse the user's first and last name\n The following configuration items are only for Pro Edition:
# Configuration items for LDAP sync tasks.\nLDAP_SYNC_INTERVAL = 60 # LDAP sync task period, in minutes\n\n# LDAP user sync configuration items.\nENABLE_LDAP_USER_SYNC = True # Whether to enable user sync\nLDAP_USER_OBJECT_CLASS = 'person' # This is the name of the class used to search for user objects. \n # In Active Directory, it's usually \"person\". The default value is \"person\".\nLDAP_DEPT_ATTR = '' # LDAP user's department info\nLDAP_UID_ATTR = '' # LDAP user's login_id attribute\nLDAP_AUTO_REACTIVATE_USERS = True # Whether to auto activate deactivated user\nLDAP_USE_PAGED_RESULT = False # Whether to use pagination extension\nIMPORT_NEW_USER = True # Whether to import new users when sync user\nACTIVATE_USER_WHEN_IMPORT = True # Whether to activate the user when importing new user\nENABLE_EXTRA_USER_INFO_SYNC = True # Whether to enable sync of additional user information,\n # including user's full name, contact_email, department, and Windows login name, etc.\nDEACTIVE_USER_IF_NOTFOUND = False # Set to \"true\" if you want to deactivate a user \n # when he/she was deleted in AD server.\n\n# LDAP group sync configuration items.\nENABLE_LDAP_GROUP_SYNC = True # Whether to enable group sync\nLDAP_GROUP_FILTER = '' # Group sync filter\nLDAP_SYNC_DEPARTMENT_FROM_OU = True # Whether to enable sync departments from OU.\nLDAP_GROUP_OBJECT_CLASS = 'group' # This is the name of the class used to search for group objects.\nLDAP_GROUP_MEMBER_ATTR = 'member' # The attribute field to use when loading the group's members. 
\n # For most directory servers, the attribute is \"member\", \n # which is the default value. For \"posixGroup\", it should be set to \"memberUid\".\nLDAP_USER_ATTR_IN_MEMBERUID = 'uid' # The user attribute set in the 'memberUid' option, \n # which is used in \"posixGroup\". The default value is \"uid\".\nLDAP_GROUP_UUID_ATTR = 'objectGUID' # Used to uniquely identify groups in LDAP\nLDAP_USE_GROUP_MEMBER_RANGE_QUERY = False # When a group contains too many members, \n # AD will only return part of them. Set this option to TRUE\n # to make LDAP sync work with large groups.\nLDAP_SYNC_GROUP_AS_DEPARTMENT = False # Whether to sync groups as top-level departments in Seafile\nLDAP_DEPT_NAME_ATTR = '' # Used to get the department name.\nLDAP_CREATE_DEPARTMENT_LIBRARY = False # If you decide to sync the group as a department,\n # you can set this option to \"true\". In this way, when \n # the group is synchronized for the first time, a library\n # is automatically created for the department, and the \n # library's name is the department's name.\nLDAP_DEPT_REPO_PERM = 'rw' # Set the permissions of the department repo, default permission is 'rw'.\nLDAP_DEFAULT_DEPARTMENT_QUOTA = -2 # You can set a default space quota for each department\n # when you synchronize a group for the first time. The \n # quota is set to unlimited if this option is not set.\n # Unit is MB.\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\" if the sync process should delete a group that is not found in the LDAP server.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\" if the sync process should delete a department that is not found in the LDAP server.\n If you sync users from LDAP to Seafile and want Seafile to find the existing account instead of creating a new one when a user logs in via SSO (ADFS, OAuth, or Shibboleth), you can set SSO_LDAP_USE_SAME_UID = True:
SSO_LDAP_USE_SAME_UID = True\n Note: here the UID means the unique user ID; in LDAP it is the attribute you use for LDAP_LOGIN_ATTR (not LDAP_UID_ATTR), and in ADFS it is the uid attribute. You need to make sure you use the same attribute for the two settings.
Run the following script to migrate users from the LDAPImported table to the EmailUsers table:
cd <install-path>/seafile-server-latest\npython3 migrate_ldapusers.py\n For Seafile docker
docker exec -it seafile /usr/bin/python3 /opt/seafile/seafile-server-latest/migrate_ldapusers.py\n"},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-configuration-for-oauth","title":"Change configuration for OAuth:","text":"In the new version, the OAuth login configuration should keep the email attribute unchanged to be compatible with new and old user logins. In version 11.0, a new uid attribute is added to be used as a user's external unique ID. The uid will be stored in social_auth_usersocialauth to map to the internal virtual ID. For old users, the original email is used as the internal virtual ID. The example is as follows:
# Version 10.0 or earlier\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"),\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n\n# Since 11.0 version, added 'uid' attribute.\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # In the new version, the email attribute configuration should be kept unchanged to be compatible with old and new user logins\n \"uid\": (True, \"uid\"), # Seafile uses 'uid' as the external unique identifier of the user. Different OAuth systems have different attributes, which may be: 'uid' or 'username', etc.\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n When a user logs in, Seafile will first use the \"id -> email\" map to find the old user and then create a \"uid -> uid\" map for this old user. After all users have logged in once, you can delete the configuration \"id\": (True, \"email\"). You can also manually add records in social_auth_usersocialauth to map external uids to old users.
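The lookup order described above can be sketched like this (a simplified model of the mapping tables, not Seafile's actual code; all names and sample values here are hypothetical):

```python
def resolve_user(payload, email_map, uid_map):
    # uid_map models social_auth_usersocialauth (external uid -> internal ID);
    # email_map models the legacy "id -> email" lookup for pre-11.0 users.
    uid = payload.get("uid")
    if uid in uid_map:                       # returning user, already mapped
        return uid_map[uid]
    legacy = email_map.get(payload.get("id"))
    if legacy is not None:                   # old user: create the uid mapping
        uid_map[uid] = legacy
        return legacy
    new_id = payload["uid"] + "@auth.local"  # new user: stands in for a random virtual ID
    uid_map[uid] = new_id
    return new_id

email_map = {"1001": "alice@corp.com"}       # old user created before 11.0
uid_map = {}
print(resolve_user({"id": "1001", "uid": "alice"}, email_map, uid_map))  # alice@corp.com
print(resolve_user({"id": "1001", "uid": "alice"}, email_map, uid_map))  # alice@corp.com (now via uid_map)
```

Once every old user has logged in once, the email-based branch is never taken, which is why the \"id\": (True, \"email\") entry can then be removed.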
We have documented common issues encountered by users when upgrading to version 11.0 in our FAQ https://cloud.seatable.io/dtable/external-links/7b976c85f504491cbe8e/?tid=0000&vid=0000.
If you encounter any issues, please check it first.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/","title":"Upgrade notes for 12.0","text":"These notes give additional information about changes. Please always follow the main upgrade guide.
For docker based version, please check upgrade Seafile Docker image
Seafile version 12.0 has following major changes:
Configuration changes:
A .env file is needed to contain some configuration items that have to be shared by different components of Seafile. We name it .env to be consistent with the docker based installation's .env file. ccnet.conf is removed; some of its configuration items are moved to the .env file, while others are read from items in seafile.conf with the same name. The new role permissions can_create_wiki and can_publish_wiki control whether a role can create a Wiki and publish a Wiki; the old role permission can_publish_repo is removed. You need to change gunicorn.conf.py if you need the REMOTE_USER header for SSO. Other changes:
Breaking changes
Deploying Seafile with the binary package is now deprecated and will probably no longer be supported in version 13.0. We recommend migrating your existing Seafile deployment to a docker based one.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#elasticsearch-change-pro-edition-only","title":"ElasticSearch change (pro edition only)","text":"Elasticsearch version is not changed in Seafile version 12.0
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#new-system-libraries","title":"New system libraries","text":"Ubuntu 24.04/22.04Debian 11apt-get install -y default-libmysqlclient-dev build-essential pkg-config libmemcached-dev\n apt-get install -y libsasl2-dev\n"},{"location":"upgrade/upgrade_notes_for_12.0.x/#new-python-libraries","title":"New Python libraries","text":"Note, you should install Python libraries system wide using root user or sudo mode.
Ubuntu 24.04 / Debian 12Ubuntu 22.04 / Debian 11sudo pip3 install future==1.0.* mysqlclient==2.2.* pillow==10.4.* sqlalchemy==2.0.* pillow_heif==0.18.0 \\\ngevent==24.2.* captcha==0.6.* django_simple_captcha==0.6.* djangosaml2==1.9.* \\\npysaml2==7.3.* pycryptodome==3.20.* cffi==1.17.0 python-ldap==3.4.*\n sudo pip3 install future==1.0.* mysqlclient==2.1.* pillow==10.4.* sqlalchemy==2.0.* pillow_heif==0.18.0 \\\ngevent==24.2.* captcha==0.6.* django_simple_captcha==0.6.* djangosaml2==1.9.* \\\npysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 python-ldap==3.2.0\n"},{"location":"upgrade/upgrade_notes_for_12.0.x/#upgrade-to-120-for-binary-installation","title":"Upgrade to 12.0 (for binary installation)","text":"The following instruction is for binary package based installation. If you use Docker based installation, please see Upgrade Docker
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#1-clean-database-tables-before-upgrade","title":"1) Clean database tables before upgrade","text":"If you have a large number of Activity in MySQL, clear this table first Clean Database. Otherwise, the database upgrade will take a long time.
Install the new system libraries and Python libraries for your operating system as documented above.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#3-stop-seafile-110x-server","title":"3) Stop Seafile-11.0.x server","text":"In the folder of Seafile 11.0.x, run the commands:
./seahub.sh stop\n./seafile.sh stop\n"},{"location":"upgrade/upgrade_notes_for_12.0.x/#4-run-seafile-120x-upgrade-script","title":"4) Run Seafile 12.0.x upgrade script","text":"In the folder of Seafile 12.0.x, run the upgrade script
upgrade/upgrade_11.0_12.0.sh\n"},{"location":"upgrade/upgrade_notes_for_12.0.x/#5-create-the-env-file-in-conf-directory","title":"5) Create the .env file in conf/ directory","text":"conf/.env
TIME_ZONE=UTC\nJWT_PRIVATE_KEY=xxx\nSEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=seafile.example.com\nSEAFILE_MYSQL_DB_HOST=db # your MySQL host\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=<your MySQL password>\nSEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db\n Tip
JWT_PRIVATE_KEY is a random string with a length of no less than 32 characters; it can be generated by:
pwgen -s 40 1\n"},{"location":"upgrade/upgrade_notes_for_12.0.x/#6-start-seafile-120x-server","title":"6) Start Seafile-12.0.x server","text":"In the folder of Seafile 12.0.x, run the command:
./seafile.sh start # starts seaf-server\n./seahub.sh start # starts seahub\n"},{"location":"upgrade/upgrade_notes_for_12.0.x/#7-optional-upgrade-notification-server","title":"7) (Optional) Upgrade notification server","text":"Since seafile 12.0, we use docker to deploy the notification server. Please follow the document of notification server to re-deploy notification server.
Note
Notification server is designed to work with a Docker based deployment. To make it work with the Seafile binary package on the same server, you will need to add proper Nginx rules for the notification server.
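For example, a proxy rule along these lines could forward notification traffic (a sketch only — the port and path are assumptions based on a default notification server setup; adjust them to your deployment):

```nginx
location /notification {
    proxy_pass http://127.0.0.1:8083;
    # WebSocket upgrade headers, required for long-lived connections
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```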
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#8-optional-upgrade-seadoc-from-08-to-10","title":"8) (Optional) Upgrade SeaDoc from 0.8 to 1.0","text":"If you have deployed SeaDoc v0.8 with Seafile v11.0, you can upgrade it to 1.0 use the following two steps:
SeaDoc and Seafile binary package
Deploying SeaDoc and Seafile binary package on the same server is no longer officially supported. You will need to add Nginx rules for SeaDoc server properly.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#81-delete-sdoc_db","title":"8.1) Delete sdoc_db","text":"From version 1.0, SeaDoc is using seahub_db database to store its operation logs and no longer need an extra database sdoc_db. The database tables in seahub_db are created automatically when you upgrade Seafile server from v11.0 to v12.0. You can simply delete sdoc_db.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#82-deploy-a-new-seadoc-server","title":"8.2) Deploy a new SeaDoc server","text":"Please see the document Setup SeaDoc to install SeaDoc on a separate machine and integrate with your binary packaged based Seafile server v12.0.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#9-optional-update-gunicornconfpy-file-in-conf-directory","title":"9) (Optional) Updategunicorn.conf.py file in conf/ directory","text":"If you deployed single sign on (SSO) by Shibboleth protocol, the following line should be added to the gunicorn config file.
forwarder_headers = 'SCRIPT_NAME,PATH_INFO,REMOTE_USER'\n"},{"location":"upgrade/upgrade_notes_for_12.0.x/#10-optional-other-configuration-changes","title":"10) (Optional) Other configuration changes","text":""},{"location":"upgrade/upgrade_notes_for_12.0.x/#enable-passing-of-remote_user","title":"Enable passing of REMOTE_USER","text":"The REMOTE_USER header is not passed to Seafile by default; you need to change gunicorn.conf.py if you need the REMOTE_USER header for SSO.
forwarder_headers = 'SCRIPT_NAME,PATH_INFO,REMOTE_USER'\n"},{"location":"upgrade/upgrade_notes_for_12.0.x/#supplement-or-remove-allowed_hosts-in-seahub_settingspy","title":"Supplement or remove ALLOWED_HOSTS in seahub_settings.py","text":"Since version 12.0, the seaf-server component needs to send internal requests to the seahub component to check permissions, and a 400 error is reported when downloading files if ALLOWED_HOSTS is set incorrectly. In this case, you can either remove ALLOWED_HOSTS from seahub_settings.py or add 127.0.0.1 to the ALLOWED_HOSTS list:
# seahub_settings.py\n\nALLOWED_HOSTS = ['...(your domain)', '127.0.0.1']\n"},{"location":"upgrade/upgrade_notes_for_12.0.x/#faq","title":"FAQ","text":"We have documented common issues encountered by users when upgrading to version 12.0 in our FAQ https://cloud.seatable.io/dtable/external-links/7b976c85f504491cbe8e/?tid=0000&vid=0000.
If you encounter any issues, please check it first.
"},{"location":"upgrade/upgrade_notes_for_13.0.x/","title":"Upgrade notes for 13.0","text":"These notes give additional information about changes. Please always follow the main upgrade guide.
For docker based version, please check upgrade Seafile Docker image
Seafile version 13.0 has following major changes:
Configuration changes:
In .env, it is recommended to use environment variables to configure the database and memcached.
Breaking changes
Deploying Seafile with the binary package is no longer supported for the community edition. We recommend migrating your existing Seafile deployment to a docker based one.
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#elasticsearch-change-pro-edition-only","title":"ElasticSearch change (pro edition only)","text":"Elasticsearch version is not changed in Seafile version 13.0
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#new-system-libraries-to-be-updated","title":"New system libraries (TO be updated)","text":"Ubuntu 24.04/22.04Debian 11apt-get install -y default-libmysqlclient-dev build-essential pkg-config libmemcached-dev\n apt-get install -y libsasl2-dev\n"},{"location":"upgrade/upgrade_notes_for_13.0.x/#new-python-libraries-to-be-updated","title":"New Python libraries (TO be updated)","text":"Note, you should install Python libraries system wide using root user or sudo mode.
Ubuntu 24.04 / Debian 12Ubuntu 22.04 / Debian 11sudo pip3 install future==1.0.* mysqlclient==2.2.* pillow==10.4.* sqlalchemy==2.0.* pillow_heif==0.18.0 \\\ngevent==24.2.* captcha==0.6.* django_simple_captcha==0.6.* djangosaml2==1.9.* \\\npysaml2==7.3.* pycryptodome==3.20.* cffi==1.17.0 python-ldap==3.4.*\n sudo pip3 install future==1.0.* mysqlclient==2.1.* pillow==10.4.* sqlalchemy==2.0.* pillow_heif==0.18.0 \\\ngevent==24.2.* captcha==0.6.* django_simple_captcha==0.6.* djangosaml2==1.9.* \\\npysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 python-ldap==3.2.0\n"},{"location":"upgrade/upgrade_notes_for_13.0.x/#upgrade-to-130-for-binary-installation","title":"Upgrade to 13.0 (for binary installation)","text":"The following instruction is for binary package based installation. If you use Docker based installation, please see Upgrade Docker
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#1-clean-database-tables-before-upgrade","title":"1) Clean database tables before upgrade","text":"If you have a large number of Activity in MySQL, clear this table first Clean Database. Otherwise, the database upgrade will take a long time.
Install the new system libraries and Python libraries for your operating system as documented above.
"},{"location":"upgrade/upgrade_notes_for_13.0.x/#3-stop-seafile-110x-server","title":"3) Stop Seafile-11.0.x server","text":"In the folder of Seafile 11.0.x, run the commands:
./seahub.sh stop\n./seafile.sh stop\n"},{"location":"upgrade/upgrade_notes_for_13.0.x/#4-run-seafile-120x-upgrade-script","title":"4) Run Seafile 12.0.x upgrade script","text":"In the folder of Seafile 12.0.x, run the upgrade script
upgrade/upgrade_11.0_12.0.sh\n"},{"location":"upgrade/upgrade_notes_for_13.0.x/#5-create-the-env-file-in-conf-directory","title":"5) Create the .env file in conf/ directory","text":"conf/.env
TIME_ZONE=UTC\nJWT_PRIVATE_KEY=xxx\nSEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=seafile.example.com\nSEAFILE_MYSQL_DB_HOST=db # your MySQL host\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=<your MySQL password>\nSEAFILE_MYSQL_DB_CCNET_DB_NAME=ccnet_db\nSEAFILE_MYSQL_DB_SEAFILE_DB_NAME=seafile_db\nSEAFILE_MYSQL_DB_SEAHUB_DB_NAME=seahub_db\n Tip
JWT_PRIVATE_KEY is a random string with a length of no less than 32 characters; it can be generated by:
pwgen -s 40 1\n"},{"location":"upgrade/upgrade_notes_for_13.0.x/#6-start-seafile-120x-server","title":"6) Start Seafile-12.0.x server","text":"In the folder of Seafile 12.0.x, run the command:
./seafile.sh start # starts seaf-server\n./seahub.sh start # starts seahub\n"},{"location":"upgrade/upgrade_notes_for_13.0.x/#7-optional-upgrade-notification-server","title":"7) (Optional) Upgrade notification server","text":""},{"location":"upgrade/upgrade_notes_for_13.0.x/#8-optional-upgrade-seadoc-from-10-to-20","title":"8) (Optional) Upgrade SeaDoc from 1.0 to 2.0","text":""},{"location":"upgrade/upgrade_notes_for_13.0.x/#faq","title":"FAQ","text":"We have documented common issues encountered by users when upgrading to version 12.0 in our FAQ https://cloud.seatable.io/dtable/external-links/7b976c85f504491cbe8e/?tid=0000&vid=0000.
If you encounter any issues, please check it first.
"},{"location":"upgrade/upgrade_notes_for_9.0.x/","title":"Upgrade notes for 9.0","text":"These notes give additional information about changes. Please always follow the main upgrade guide.
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#important-release-changes","title":"Important release changes","text":"9.0 version includes following major changes:
The new file-server written in golang serves HTTP requests to upload/download/sync files. It provides three advantages:
You can turn the golang file-server on by adding the following configuration to seafile.conf:
[fileserver]\nuse_go_fileserver = true\n"},{"location":"upgrade/upgrade_notes_for_9.0.x/#new-python-libraries","title":"New Python libraries","text":"Note, you should install Python libraries system wide using root user or sudo mode.
sudo pip3 install pycryptodome==3.12.0 cffi==1.14.0\n"},{"location":"upgrade/upgrade_notes_for_9.0.x/#upgrade-to-90x","title":"Upgrade to 9.0.x","text":"Start from Seafile 9.0.x, run the script:
upgrade/upgrade_8.0_9.0.sh\n Start Seafile-9.0.x server.
If your Elasticsearch data is not large, it is recommended to deploy the latest 7.x version of Elasticsearch and then rebuild the index. The specific steps are as follows:
Download ElasticSearch image
docker pull elasticsearch:7.16.2\n Create a new folder to store ES data and give the folder permissions
mkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\n Note: You must properly grant permission to access the es data directory, and run the Elasticsearch container as the root user, refer to here.
Start ES docker image
sudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms2g -Xmx2g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data elasticsearch:7.16.2\n Delete old index data
rm -rf /opt/seafile/pro-data/search/data/*\n Modify seafevents.conf
[INDEX FILES]\nexternal_es_server = true\nes_host = your server's IP (use 127.0.0.1 if deployed locally)\nes_port = 9200\n Restart seafile
su seafile\ncd seafile-server-latest/\n./seafile.sh stop && ./seahub.sh stop\n./seafile.sh start && ./seahub.sh start\n"},{"location":"upgrade/upgrade_notes_for_9.0.x/#method-two-reindex-the-existing-data","title":"Method two, reindex the existing data","text":"If your data volume is relatively large, it will take a long time to rebuild indexes for all Seafile databases, so you can reindex the existing data. This requires the following steps:
The detailed process is as follows:
Download the Elasticsearch image:
docker pull elasticsearch:7.16.2\n Note: for Seafile 9.0, you need to manually create on the host machine the directory that is mapped into the Elasticsearch container and give it 777 permissions; otherwise Elasticsearch will report path-permission errors on startup. The command is as follows:
mkdir -p /opt/seafile-elasticsearch/data\n Move the original index data into the new folder and give it the required permissions:
mv /opt/seafile/pro-data/search/data/* /opt/seafile-elasticsearch/data/\nchmod -R 777 /opt/seafile-elasticsearch/data/\n Note: you must grant proper permissions on the ES data directory and run the Elasticsearch container as the root user.
Start the Elasticsearch container:
sudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data elasticsearch:7.16.2\n Note: ES_JAVA_OPTS can be adjusted to fit your needs.
Create the new indices with 7.x-compatible mappings:
curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repo_head?include_type_name=false&pretty=true' -d '\n{\n \"mappings\" : {\n \"properties\" : {\n \"commit\" : {\n \"type\" : \"text\",\n \"index\" : false\n },\n \"repo\" : {\n \"type\" : \"text\",\n \"index\" : false\n },\n \"updatingto\" : {\n \"type\" : \"text\",\n \"index\" : false\n }\n }\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repofiles/?include_type_name=false&pretty=true' -d '\n{\n \"settings\" : {\n \"index\" : {\n \"number_of_shards\" : 5,\n \"number_of_replicas\" : 1,\n \"analysis\" : {\n \"analyzer\" : {\n \"seafile_file_name_ngram_analyzer\" : {\n \"filter\" : [\n \"lowercase\"\n ],\n \"type\" : \"custom\",\n \"tokenizer\" : \"seafile_file_name_ngram_tokenizer\"\n }\n },\n \"tokenizer\" : {\n \"seafile_file_name_ngram_tokenizer\" : {\n \"type\" : \"ngram\",\n \"min_gram\" : \"3\",\n \"max_gram\" : \"4\"\n }\n }\n }\n }\n },\n \"mappings\" : {\n \"properties\" : {\n \"content\" : {\n \"type\" : \"text\",\n \"term_vector\" : \"with_positions_offsets\"\n },\n \"filename\" : {\n \"type\" : \"text\",\n \"fields\" : {\n \"ngram\" : {\n \"type\" : \"text\",\n \"analyzer\" : \"seafile_file_name_ngram_analyzer\"\n }\n }\n },\n \"is_dir\" : {\n \"type\" : \"boolean\"\n },\n \"mtime\" : {\n \"type\" : \"date\"\n },\n \"path\" : {\n \"type\" : \"keyword\"\n },\n \"repo\" : {\n \"type\" : \"keyword\"\n },\n \"size\" : {\n \"type\" : \"long\"\n },\n \"suffix\" : {\n \"type\" : \"keyword\"\n }\n }\n }\n}'\n Set the refresh_interval to -1 and the number_of_replicas to 0 for efficient reindexing:
curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repo_head/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repofiles/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\n Use the reindex API to copy documents from the 5.x index into the new index.
curl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?pretty' -d '\n{\n \"source\": {\n \"index\": \"repo_head\",\n \"type\": \"repo_commit\"\n },\n \"dest\": {\n \"index\": \"new_repo_head\",\n \"type\": \"_doc\"\n }\n}'\n\ncurl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?pretty' -d '\n{\n \"source\": {\n \"index\": \"repofiles\",\n \"type\": \"file\"\n },\n \"dest\": {\n \"index\": \"new_repofiles\",\n \"type\": \"_doc\"\n }\n}'\n Reset the refresh_interval and number_of_replicas to the values used in the old index.
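Reindexing a large index can take a while, and the curl call above may time out on the client side even though the task keeps running on the server. If you want to follow progress, Elasticsearch's task management API can list running reindex tasks (a sketch using the same host placeholder as above):

```shell
# List currently running reindex tasks and their progress counters
curl 'http{s}://{es server IP}:9200/_tasks?detailed=true&actions=*reindex&pretty'
```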
curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repo_head/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repofiles/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\n Wait for the index status to change to green.
curl http{s}://{es server IP}:9200/_cluster/health?pretty\n Use the aliases API to delete the old indices and add aliases with the old index names pointing to the new indices:
curl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_aliases?pretty' -d '\n{\n \"actions\": [\n {\"remove_index\": {\"index\": \"repo_head\"}},\n {\"remove_index\": {\"index\": \"repofiles\"}},\n {\"add\": {\"index\": \"new_repo_head\", \"alias\": \"repo_head\"}},\n {\"add\": {\"index\": \"new_repofiles\", \"alias\": \"repofiles\"}}\n ]\n}'\n After reindex, modify the configuration in Seafile.
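Before updating the Seafile configuration, you can optionally confirm that the alias switch succeeded with the cat aliases API:

```shell
# The old names should now resolve to the new indices
curl 'http{s}://{es server IP}:9200/_cat/aliases/repo_head,repofiles?v'
```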
Modify seafevents.conf
[INDEX FILES]\nexternal_es_server = true\nes_host = your server's IP\nes_port = 9200\n Restart seafile
su seafile\ncd seafile-server-latest/\n./seafile.sh stop && ./seahub.sh stop\n./seafile.sh start && ./seahub.sh start\n"},{"location":"upgrade/upgrade_notes_for_9.0.x/#method-three-if-you-are-in-a-cluster-environment","title":"Method three, if you are in a cluster environment","text":"Deploy a new Elasticsearch 7.x service, deploy a new backend node with Seafile 9.0, and connect it to the new Elasticsearch 7.x. Do not start the Seafile background service on this node; instead, manually run the command ./pro/pro.py search --update. After the index has been created, upgrade the other nodes to Seafile 9.0 and point them at the new Elasticsearch 7.x. Finally, deactivate the old backend node and the old version of Elasticsearch.