Seafile is an open source cloud storage system for file sync, share and document collaboration. The different components of the Seafile project are released under different licenses:

Forum: https://forum.seafile.com Follow us @seafile https://twitter.com/seafile

The source code of Seafile is ISO/IEC 9899:1999 (E) (a.k.a. C99) compatible. Take a look at the code standard. Please check https://www.seafile.com/en/roadmap/

You can build Seafile from our source code package or from the GitHub repo directly. Client Server The following list is what you need to install on your development machine. You should install all of them before you build Seafile. Package names are given according to Ubuntu 14.04; for other Linux distros, please look up the corresponding names yourself. For a fresh Fedora 20 / 23 installation, the following will install all dependencies via YUM:

First you should get the latest source of libsearpc/ccnet/seafile/seafile-client: Download the source tarball of the latest tag from For example, if the latest released Seafile client is 8.0.0, then just use the v8.0.0 tags of the four projects. You should get four tarballs: Now uncompress them:

To build the Seafile client, you first need to build libsearpc, ccnet and seafile. In order to support the notification server, you need to build libwebsockets first. You can set a custom install location when installing; you can then start the client with

The following setups are required for building and packaging the Sync Client on macOS: The following directory structures are expected when building the Sync Client: The source code of these projects can be downloaded at github.com/haiwen/libsearpc, github.com/haiwen/seafile, and github.com/haiwen/seafile-client. Note: the build commands are included in the packaging script, so you can skip them when packaging.
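The tarball step above can be sketched as follows (the v8.0.0 tag is the example from the text; in practice each project, e.g. libsearpc, has its own version numbers, so substitute the actual release tags):

```sh
# Hypothetical tags for illustration -- check each project's releases page.
for proj in libsearpc ccnet seafile seafile-client; do
    wget "https://github.com/haiwen/$proj/archive/refs/tags/v8.0.0.tar.gz" \
         -O "$proj-v8.0.0.tar.gz"
    tar xf "$proj-v8.0.0.tar.gz"
done
```

After uncompressing, build the projects in dependency order: libsearpc first, then ccnet and seafile, then seafile-client.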
To build libsearpc: To build seafile: To build seafile-client: From Seafile 11.0, you can build the Seafile release package with the seafile-build script. You can check the README.md file in the same folder for detailed instructions. The old version is below:

Table of contents: Requirements: libevhtp is an HTTP server library on top of libevent. It's used in the seafile file server. After compiling all the libraries, run Create a new directory Download these tarballs to Install all these libraries to To build the seafile server, there are four sub-projects involved: The build process has two steps: Seafile manages the releases in tags on GitHub. Assume we are packaging for seafile server 6.0.1; then the tags are: First set up the Now that we have all the tarballs prepared, we can run the After the script finishes, we get a Use the built seafile server package to go over the steps of Deploying Seafile with SQLite. The test should cover at least these steps:

This is the document for deploying the Seafile open source development environment in an Ubuntu 22.04 docker container. Log in to a Linux server as After installing docker, start a container to deploy the seafile open source development environment. Note: the following commands are all executed in the seafile-ce-env docker container. Update the base system and install base dependencies: Install Node 16 from nodesource: Install other Python 3 dependencies: SQL for creating the databases Then, you can visit http://127.0.0.1:8000/ to use Seafile.

For deploying the frontend development environment, you need to: 1. check out seahub to the master branch; 2. add the following configuration to /root/dev/conf/seahub_settings.py; 3. install js modules; 4. run npm run dev; 5. start seaf-server and seahub.

The following setups are required for building and packaging the Sync Client on Windows: vcpkg Python 3.7 Certificates Note: certificates for Windows applications are issued by a third-party certificate authority.
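Starting the development container can be sketched as follows (the container name seafile-ce-env comes from the text; the image tag and port mappings are assumptions):

```sh
# Start an Ubuntu 22.04 container for the Seafile dev environment
docker run -itd --name seafile-ce-env \
    -p 8000:8000 -p 8082:8082 \
    ubuntu:22.04 /bin/bash

# All subsequent setup commands are executed inside this container
docker exec -it seafile-ce-env /bin/bash
```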
Support for Breakpad can be added with the following steps: install the gyp tool, compile breakpad, compile the dump_syms tool, create a VS solution, copy VC merge modules. The following directory structures are expected when building the Sync Client: The source code of these projects can be downloaded at github.com/haiwen/libsearpc, github.com/haiwen/seafile, github.com/haiwen/seafile-client, and github.com/haiwen/seafile-shell-ext. Note: the build commands are included in the packaging script, so you can skip them when packaging. To build libsearpc: To build seafile: To build seafile-client: To build seafile-shell-ext:

Note: Two new options are added in version 4.4, both in seahub_settings.py. This version contains no database table change. LDAP improvements and fixes New features: Pro only: Fixes: Note: this version contains no database table change from v4.2, but the old search index will be deleted and regenerated. Note when upgrading from v4.2 and using a cluster, a new option About "Open via Client": The web interface will call the Seafile desktop client via the "seafile://" protocol to use a local program to open a file. If the file is already synced, the local file will be opened. Otherwise it is downloaded, and uploaded after modification. Requires client version 4.3.0+ Usability improvements Pro only features: Others

Note: because Seafile changed the way office preview works in version 4.2.2, you need to clean the old generated files using the command: In the old way, the whole file was converted to HTML5 before returning to the client. By converting an office file to HTML5 page by page, the first page is displayed faster. By displaying each page in a separate frame, the quality for some files is improved too.
Improved account management Important New features Others Pro only updates Usability Security Improvement Platform Pro only updates Updates in community edition too Important Small Pro edition only: Syncing Platform Web Web Platform Web Platform Misc WebDAV Platform Web Web for Admin Platform Web Web for Admin API Web API Platform

You can check the Seafile release table to find the lifetime of each release and the currently supported OS: https://cloud.seatable.io/dtable/external-links/a85d4221e41344c19566/?tid=0000&vid=0000

Upgrade: Please check our document for how to upgrade to 11.0: https://manual.seafile.com/upgrade/upgrade_notes_for_11.0.x/

Seafile SDoc editor 0.8 Seafile SDoc editor 0.7 SDoc editor 0.6 Major changes UI Improvements Pro edition only changes Other changes

Upgrade: Please check our document for how to upgrade to 10.0: https://manual.seafile.com/upgrade/upgrade_notes_for_10.0.x/ Note: after upgrading to this version, you need to upgrade the Python libraries on your server: "pillow==10.2.* captcha==0.5.* django_simple_captcha==0.5.20"

Upgrade: Please check our document for how to upgrade to 9.0: https://manual.seafile.com/upgrade/upgrade_notes_for_9.0.x/ Note: the included lxml library is removed for compatibility reasons. The library is used in the published libraries feature and the WebDAV feature. You need to install lxml manually after upgrading to 9.0.7. Use command

The new file server written in golang serves HTTP requests to upload/download/sync files. It provides three advantages: You can turn the golang file server on by adding the following configuration to seafile.conf

Deprecated Deprecated Upgrade: Please check our document for how to upgrade to 8.0: https://manual.seafile.com/upgrade/upgrade_notes_for_8.0.x/

Potential breaking change in Seafile Pro 8.0.3: You can set the maximum number of files contained in a library that can be synced by the Seafile client. The default is 100000.
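Per the seafile.conf reference for recent releases, the golang file server switch looks like this (treat the option name as an assumption if you run an older version):

```ini
[fileserver]
# Serve upload/download/sync requests with the golang file server
use_go_fileserver = true
```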
When you download a repo, the Seafile client will request the fs id list, and you can control the timeout period of this request through

Upgrade: Please check our document for how to upgrade to 7.1: upgrade notes for 7.1.x

Potential breaking change in Seafile Pro 7.1.16: You can set the maximum number of files contained in a library that can be synced by the Seafile client. The default is 100000. When you download a repo, the Seafile client will request the fs id list, and you can control the timeout period of this request through

Since seafile-pro 7.0.0, we have upgraded Elasticsearch to 5.6. As Elasticsearch 5.6 relies on the Java 8 environment and can't run as root, you need to run Seafile with a non-root user and upgrade the Java version. Please check our document for how to upgrade to 7.0: upgrade notes for 7.0.x

In version 6.3, Django is upgraded to version 1.11. Django 1.8, which is used in version 6.2, was deprecated in April 2018. With this upgrade, the fast-cgi mode is no longer supported. You need to configure Seafile behind Nginx/Apache in WSGI mode. The way to run Seahub on another port has also changed. You need to modify the configuration file Version 6.3 also changed the database table for file comments; if you have used this feature, you need to migrate old file comments using the following commands after upgrading to 6.3: Note, this command should be run while the Seafile server is running.

Version 6.3 changed '/shib-login' to '/sso'. If you use Shibboleth, you need to update your Apache/Nginx config. Please check the updated document: shibboleth config v6.3

Version 6.3 adds a new option for file search ( This option improves search speed significantly (10x) when the search result contains large pdf/doc files. But you need to rebuild the search index if you want to add this option.

New features From 6.2, it is recommended to use proxy mode for communication between Seahub and Nginx/Apache.
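A sketch of the two seafile.conf options referred to above (option names per the Pro 7.1.16+ documentation; the values are illustrative):

```ini
[fileserver]
# Maximum number of files in a library that clients may sync (default 100000)
max_sync_file_count = 100000
# Timeout, in seconds, for the client's fs id list request at download time
fs_id_list_request_timeout = 300
```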
Two steps are needed if you'd like to switch to WSGI mode: The configuration of Nginx is as follows: The configuration of Apache is as follows: You can follow the document on minor upgrade.

Web UI Improvement: Improvement for admins: System changes: You can follow the document on minor upgrade.

Special note for upgrading a cluster: In version 6.0, the folder download mechanism has been updated. This requires that, in a cluster deployment, the seafile-data/httptemp folder must be on an NFS share. You can make this folder a symlink to the NFS share. The httptemp folder only contains temp files for downloading/uploading files on the web UI, so there is no reliability requirement for the NFS share. You can export it from any node in the cluster.

Improvement for admin Other Pro only features cloud file browser others

This version has a few bugs. We will fix them soon.

Note: The Seafile client now supports HiDPI under Windows; you should remove the QT_DEVICE_PIXEL_RATIO setting if you had set one previously. In the old version, you would sometimes see a strange directory such as "Documents~1" synced to the server; this is because the old version did not handle long paths correctly.

In the previous version, when you open an office file in Windows, it is locked by the operating system. If another person modifies this file on another computer, syncing is stopped until you close the locked file. In this new version, the syncing process will continue. The locked file will not be synced to the local computer, but other files will not be affected.

You have to update the clients on all PCs. If one PC does not use v3.1.11, when the "deleting folder" information is synced to this PC, it will fail to delete the folder completely, and the folder will be synced back to the other PCs. So the other PCs will see the folder reappear.

Note: This version contains a bug that prevents you from logging in to your private servers.
1.8.1 1.8.0 1.7.3 1.7.2 1.7.1 1.7.0 1.6.2 1.6.1 1.6.0 1.5.3 1.5.2 1.5.1 1.5.0

Note when upgrading to 5.0 from 4.4: You can follow the document on major upgrade (http://manual.seafile.com/deploy/upgrade.html). In Seafile 5.0, we have moved all config files to the folder If you want to downgrade from v5.0 to v4.4, you should manually copy these files back to their original place, then run minor_upgrade.sh to upgrade symbolic links back to version 4.4. The 5.0 server is compatible with v4.4 and v4.3 desktop clients. Common issues (solved) when upgrading to v5.0:

Improve seaf-fsck Sharing link UI changes: Config changes: Trash: Admin: Security: New features: Fixes: Usability Improvement Others

Note when upgrading to 4.2 from 4.1: If you deploy Seafile in a non-root domain, you need to add the following extra settings in seahub_settings.py:

Usability Security Improvement Platform Important Small Important Small improvements Syncing Platform Web Web Platform Web Platform Platform Web WebDAV Platform Web Web for Admin Platform Web Web for Admin API Web API Platform Web Daemon Web Daemon Web For Admin API Seafile Web Seafile Daemon API

You can check the Seafile release table to find the lifetime of each release and the currently supported OS: https://cloud.seatable.io/dtable/external-links/a85d4221e41344c19566/?tid=0000&vid=0000

Upgrade: Please check our document for how to upgrade to 11.0: https://manual.seafile.com/upgrade/upgrade_notes_for_11.0.x/

Seafile Seafile SDoc editor 0.8 Seafile SDoc editor 0.7 Seafile SDoc editor 0.6 Seafile Seafile SDoc editor 0.5 Seafile SDoc editor 0.4 Seafile SDoc editor 0.3 Seafile SDoc editor 0.2

Upgrade: Please check our document for how to upgrade to 10.0: https://manual.seafile.com/upgrade/upgrade_notes_for_10.0.x/

Note: the included lxml library is removed for compatibility reasons. The library is used in the published libraries feature and the WebDAV feature. You need to install lxml manually after upgrading to 9.0.7.
Use command The new file server written in golang serves HTTP requests to upload/download/sync files. It provides three advantages: You can turn the golang file server on by adding the following configuration to seafile.conf

Please check our document for how to upgrade to 8.0: https://manual.seafile.com/upgrade/upgrade_notes_for_8.0.x/

Feature changes: PostgreSQL support is dropped, as we have rewritten the database access code to remove a copyright issue.

Upgrade: Please check our document for how to upgrade to 7.1: https://manual.seafile.com/upgrade/upgrade_notes_for_7.1.x/

Feature changes: In version 6.3, users can create public or private Wikis. In version 7.0, private Wikis are replaced by the column mode view. Every library has a column mode view, so users don't need to explicitly create private Wikis. Public Wikis are now renamed to published libraries.

Upgrade: Just follow our document on major version upgrade. No special steps are needed.

In version 6.3, Django is upgraded to version 1.11. Django 1.8, which is used in version 6.2, was deprecated in April 2018. With this upgrade, the fast-cgi mode is no longer supported. You need to configure Seafile behind Nginx/Apache in WSGI mode. The way to run Seahub on another port has also changed. You need to modify the configuration file Version 6.3 also changed the database table for file comments; if you have used this feature, you need to migrate old file comments using the following commands after upgrading to 6.3: Note, this command should be run while the Seafile server is running.

From 6.2, it is recommended to use WSGI mode for communication between Seahub and Nginx/Apache.
Two steps are needed if you'd like to switch to WSGI mode: The configuration of Nginx is as follows: The configuration of Apache is as follows: If you upgrade from 6.0 and you'd like to use the video thumbnail feature, you need to install the ffmpeg package:

Web UI Improvement: Improvement for admins: System changes: Note: If you ever used 6.0.0, 6.0.1 or 6.0.2 with SQLite as the database and encountered a problem with desktop/mobile client login, follow https://github.com/haiwen/seafile/pull/1738 to fix the problem.

Improvement for admin Other Warning:

Note: when upgrading from 5.1.3 or a lower version to 5.1.4+, you need to install python-urllib3 (or python2-urllib3 for Arch Linux) manually:

Note: downloading multiple files at once will be added in the next release.

Note: in this version, the group discussion is not re-implemented yet. It will be available when the stable version is released.

There are three config files in the community edition: You can also modify most of the config items via the web interface. The config items are saved in a database table (seahub-db/constance_config). They have a higher priority over the items in config files.

Ccnet is the internal RPC framework used by the Seafile server, and it also manages the user database. A few useful options are in ccnet.conf. The ccnet component was merged into seaf-server in version 7.1, but the configuration file is still needed. When you configure ccnet to use MySQL, the default connection pool size is 100, which should be enough for most use cases. You can change this value by adding the following options to ccnet.conf:

Since Seafile 10.0.2, you can enable encrypted connections to the MySQL server by adding the following configuration options: When set

Note: The subject line may vary between different releases; this is based on Release 2.0.1. Restart Seahub so that your changes take effect.
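A sketch of the ccnet.conf [Database] section with an enlarged connection pool (field names follow the manual's MySQL example; host, credentials and values are placeholders):

```ini
[Database]
ENGINE = mysql
HOST = 127.0.0.1
PORT = 3306
USER = seafile
PASSWD = password
DB = ccnet_db
CONNECTION_CHARSET = utf8
# Default connection pool size is 100
MAX_CONNECTIONS = 200
```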
Subject seahub/seahub/auth/forms.py line:103 Body seahub/seahub/templates/registration/password_reset_email.html Note: You can copy password_reset_email.html to Subject seahub/seahub/views/sysadmin.py line:424 Body seahub/seahub/templates/sysadmin/user_add_email.html Note: You can copy user_add_email.html to Subject seahub/seahub/views/sysadmin.py line:368 Body seahub/seahub/templates/sysadmin/user_reset_email.html Note: You can copy user_reset_email.html to Subject seahub/seahub/share/views.py line:668 Body seahub/seahub/templates/shared_link_email.html

The In the file NOTE: Access the AWS elasticsearch service using HTTPS. Important: Every entry in this configuration file is case-sensitive. You need to restart seafile and seahub so that your changes take effect.

You may set a default quota (e.g. 2GB) for all users. To do this, just add the following lines to This setting applies to all users. If you want to set a quota for a specific user, you may log in to the seahub website as administrator, then set it on the "System Admin" page.

Since Pro 10.0.9, you can set the maximum number of files allowed in a library; when this limit is exceeded, files cannot be uploaded to this library. There is no limit by default.

If you don't want to keep all file revision history, you may set a default history length limit for all libraries.

The default time for automatic cleanup of the library trash is 30 days. You can modify this time by adding the following configuration:

Seafile uses a system trash, where deleted libraries are moved to. In this way, accidentally deleted libraries can be recovered by the system admin.

Seafile Pro Edition uses memory caches in various cases to improve performance. Some session information is also saved into the memory cache to be shared among the cluster nodes. Memcached or Redis can be used as the memory cache. If you use memcached: If you use redis: Redis support is added in version 11.0. Currently only single-node Redis is supported.
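The quota, history and trash settings described above live in seafile.conf; a sketch with illustrative values (option names per the current configuration reference):

```ini
[quota]
# Default per-user quota, in GB
default = 2

[history]
# Keep at most 60 days of file revision history
keep_days = 60

[library_trash]
# Auto-clean the library trash after 40 days (default is 30)
expire_days = 40
```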
Redis Sentinel or Cluster is not supported yet.

The configuration of the seafile fileserver is in the Since Community Edition 6.2 and Pro Edition 6.1.9, you can set the number of worker threads to serve HTTP requests. The default value is 10, which is a good value for most use cases.

Change upload/download settings. After a file is uploaded via the web interface, or the cloud file browser in the client, it needs to be divided into fixed-size blocks and stored into the storage backend. We call this procedure "indexing". By default, the file server uses 1 thread to sequentially index the file and store the blocks one by one. This is suitable for most cases, but if you're using S3/Ceph/Swift backends, you may have more bandwidth in the storage backend for storing multiple blocks in parallel. We provide an option to define the number of concurrent threads in indexing:

When users upload files in the web interface (seahub), the file server divides the file into fixed-size blocks. The default block size for web-uploaded files is 8MB. The block size can be set here.

When users upload files in the web interface, the file server assigns a token to authorize the upload operation. This token is valid for 1 hour by default. When uploading a large file via WAN, the upload time can be longer than 1 hour. You can change the token expire time to a larger value.

You can download a folder as a zip archive from seahub, but some zip software on Windows doesn't support UTF-8, in which case you can use the "windows_encoding" setting to solve it.

The "httptemp" directory contains temporary files created during file upload and zip download. In some cases the temporary files are not cleaned up after the file transfer was interrupted. Starting from version 7.1.5, the file server will regularly scan the "httptemp" directory to remove files created a long time ago.

New in Seafile Pro 7.1.16 and Pro 8.0.3: You can set the maximum number of files contained in a library that can be synced by the Seafile client.
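The fileserver options discussed above can be sketched in seafile.conf like this (the values are illustrative, not recommendations):

```ini
[fileserver]
# Worker threads serving HTTP requests (default 10)
worker_threads = 15
# Concurrent indexing threads, useful for S3/Ceph/Swift backends
max_indexing_threads = 10
# Block size for web-uploaded files, in MB (default 8)
fixed_block_size = 8
# Upload token lifetime in seconds (default 3600)
web_token_expire_time = 7200
```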
The default is 100000. When you download a repo, the Seafile client will request the fs id list, and you can control the timeout period of this request through Since Pro 8.0.4, you can set both options to -1 to allow unlimited size and timeout.

If you use object storage as the storage backend, when a large file is frequently downloaded, the same blocks need to be fetched from the storage backend to the Seafile server. This may waste bandwidth and cause high load on the internal network. Since Seafile Pro 8.0.5, we added block caching to improve the situation. Note that this configuration is only effective for downloading files through the web page or API, not for syncing files.

When a large number of files are uploaded through the web page and API, it is expensive to calculate block IDs based on the block contents. Since Seafile-pro-9.0.6, you can add the If you want to limit the type of files when uploading files, since Seafile Pro 10.0.0, you can set Since seafile 10.0.1, when you use the go fileserver, you can set Since Seafile 11.0.7 Pro, you can ask the file server to check every file uploaded with the web APIs for viruses. Find more options about virus scanning at virus scan.

The whole database configuration is stored in the When you configure the seafile server to use MySQL, the default connection pool size is 100, which should be enough for most use cases. Since Seafile 10.0.2, you can enable encrypted connections to the MySQL server by adding the following configuration options: When set

The Seafile Pro server auto-expires file locks after some time, to prevent a locked file from staying locked for too long. The expire time can be tuned in the seafile.conf file. The default is 12 hours.

Since Seafile-pro-9.0.6, you can add a cache for getting locked files (to reduce server load caused by sync clients).
At the same time, you also need to configure the following memcache options for the cache to take effect:

You may configure Seafile to use various kinds of object storage backends. You may also configure Seafile to use multiple storage backends at the same time. When you deploy Seafile in a cluster, you should add the following configuration:

Since Seafile-pro-6.3.10, you can enable seaf-server's RPC slow log to do performance analysis. The slow log is enabled by default. If you want to configure related options, add the options to seafile.conf: You can find Since 9.0.2 Pro, the signal to trigger log rotation has been changed to

Even though Nginx logs all requests with certain details, such as the URL, response code and upstream process time, it's sometimes desirable to have more context about the requests, such as the user id for each request. Such information can only be logged from the file server itself. Since 9.0.2 Pro, an access log feature is added to the fileserver. To enable the access log, add the options below to seafile.conf: The log format is as follows: You can use

Seafile 9.0 introduces a new fileserver implemented in the Go programming language. To enable it, you can set the options below in seafile.conf: The Go fileserver has 3 advantages over the traditional fileserver implemented in the C language:

The Go fileserver caches fs objects in memory. On the one hand, this avoids repeated creation and destruction of repeatedly accessed objects; on the other hand, it slows down the speed at which objects are released, which prevents Go's GC mechanism from consuming too much CPU time. You can set the size of memory used by the fs cache through the following options.

Since Seafile 9.0.7, you can enable the profile function of the Go fileserver by adding the following configuration options: This interface can be used through the pprof tool provided by the Go language. See https://pkg.go.dev/net/http/pprof for details.
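A sketch of the RPC slow log options in seafile.conf (option names per the Pro documentation; the threshold value is illustrative):

```ini
[slow_log]
# The RPC slow log is enabled by default
ENABLE_SLOW_LOG = true
# Log RPC calls slower than this threshold, in milliseconds
RPC_SLOW_THRESHOLD = 250
```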
Note that you have to first install Go on the client that issues the commands below. The password parameter should match the one you set in the configuration.

Since Seafile 10.0.0, you can enable the notification server by adding the following configuration options: You can generate jwt_private_key with the following command: If you use nginx, then you also need to add the following configuration for nginx: Or add the configuration for Apache:

Create a folder During upgrading, the Seafile upgrade script will create the symbolic link automatically to preserve your customization. Add your logo file to Overwrite The default width and height for the logo are 149px and 32px; you may need to change these to match your logo. Add your favicon file to Overwrite Add your css file to Overwrite Note: Since version 2.1. First go to the custom folder, then run the following commands Modify the

You can add an extra note in the sharing dialog in seahub_settings.py Result:

Since Pro 7.0.9, Seafile supports adding some custom navigation entries to the home page for quick access. This requires you to add the following configuration information to the Note: The Then restart the Seahub service for the change to take effect. Once you log in to the Seafile system homepage again, you will see the new navigation entry under the Result: Result:

Note: You can also modify most of the config items via the web interface. The config items are saved in a database table (seahub-db/constance_config). They have a higher priority over the items in config files. If you want to disable settings via the web interface, you can add

Refer to the email sending documentation.

Seahub caches items (avatars, profiles, etc.) on the file system by default (/tmp/seahub_cache/). You can replace this with Memcached or Redis. Add the following configuration to Redis support is added in version 11.0. Please refer to Django's documentation about using Redis cache.

The following options affect user registration, password and session.
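One way to generate a value for jwt_private_key is a random base64 string (an assumption here: any sufficiently long random secret is accepted; openssl is one common tool for producing it):

```shell
# Generate a 32-byte random secret, base64-encoded (44 characters)
jwt_key=$(openssl rand -base64 32)
echo "jwt_private_key = $jwt_key"
```

Paste the printed line into the notification server section of your configuration.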
Options for libraries: Options for online file preview: You should enable cloud mode if you use Seafile with an unknown user base. It disables the organization tab on Seahub's website to ensure that users can't access the user list. Cloud mode provides some nice features like sharing content with unregistered users and sending invitations to them. Therefore you also want to enable user registration. Through the global address book (since version 4.2.3) you can search for every user account, so you probably want to disable it.

Since version 6.2, you can define a custom function to modify the result of the user search function. For example, if you want to limit users to only searching for users in the same institution, you can define Code example: NOTE: you should NOT change the name of

Since version 6.2.5 Pro, if you enable the ENABLE_SHARE_TO_ALL_GROUPS feature on the sysadmin settings page, you can also define a custom function to return the groups a user can share a library to. For example, if you want to let a user share a library to both their own groups and the groups of user Code example: NOTE: you should NOT change the name of

There are currently five types of emails sent in Seafile: The first four types of email are sent immediately. The last type is sent by a background task running periodically. Please add the following lines to If you are using Gmail as the email server, use the following lines:

Note: If your email service still does not work, you can check the log file Note 2: If you want to use the email service without authentication, leave Note 3: About using an SSL connection (using port 465): Port 587 is used to establish a connection using STARTTLS, and port 465 is used to establish an SSL connection. Starting from Django 1.8, it supports both. If you want to use SSL on port 465, set

You can change the reply-to field of email by adding the following settings to seahub_settings.py. This only affects email sending for file share links.
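As an illustration of such a custom search function (assumptions: per the current manual the hook is a function named custom_search_user placed in seahub_custom_functions/__init__.py, and example.edu stands in for your institution's mail domain; shown here standalone, without Seahub imports):

```python
# seahub_custom_functions/__init__.py (sketch)
# Restrict user search to accounts of one institution's mail domain.
# Seahub looks the hook up by name, so the function name must not change.

def custom_search_user(request, emails):
    """Given the candidate email list, return only same-institution users."""
    return [email for email in emails if email.endswith('@example.edu')]
```

In a real deployment the filter would typically compare the institution stored in the user's profile against that of the requesting user, instead of a hard-coded domain.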
The background task runs periodically to check whether a user has new unread notifications. If there are any, it sends a reminder email to that user. The background email sending task is controlled by

The simplest way to customize the email messages is setting the Note: The subject line may vary between different releases; this is based on Release 5.0.0. Restart Seahub so that your changes take effect.

seahub/seahub/templates/email_base.html Note: You can copy email_base.html to Subject seahub/seahub/auth/forms.py line:127 Body seahub/seahub/templates/registration/password_reset_email.html Note: You can copy password_reset_email.html to Subject seahub/seahub/views/sysadmin.py line:424 Body seahub/seahub/templates/sysadmin/user_add_email.html Note: You can copy user_add_email.html to Subject seahub/seahub/views/sysadmin.py line:1224 Body seahub/seahub/templates/sysadmin/user_reset_email.html Note: You can copy user_reset_email.html to Subject seahub/seahub/share/views.py line:913 Body seahub/seahub/templates/shared_link_email.html seahub/seahub/templates/shared_upload_link_email.html Note: You can copy shared_link_email.html to Subject Body seahub/seahub/notifications/templates/notifications/notice_email.html

We provide two ways to deploy Seafile services. Since version 8.0, Docker is the recommended way.

LDAP/AD Integration: Seafile supports a few Single Sign On authentication protocols. See Single Sign On for a summary. Seafile Server supports the following external authentication types: Since version 11.0, switching between the types is possible, but any switch requires modifications to Seafile's databases. Note: Before manually manipulating your database, make a database backup, so you can restore your system if anything goes wrong! See more about making a database backup.

As an organisation grows and its IT infrastructure matures, the migration from local authentication to external authentication like LDAP, SAML or OAuth is a common requirement.
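The switch to external authentication described in this section boils down to two database statements; a heavily hedged sketch (table and column names follow the manual's MySQL example; the user jane@example.com and the provider name ldap are hypothetical):

```sql
-- 1. Disable the local password by replacing its hash with '!'
UPDATE ccnet_db.EmailUser
   SET passwd = '!'
 WHERE email = 'jane@example.com';

-- 2. Map the user to the external authentication provider
INSERT INTO seahub_db.social_auth_usersocialauth
       (username, provider, uid, extra_data)
VALUES ('jane@example.com', 'ldap', 'jane', '');
```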
Fortunately, the switch is comparatively simple. Configure and test the desired external authentication. Note the name of the Determine the ID of the user to be migrated in ccnet_db.EmailUser. For users created before version 11, the ID should be the user's email; for users created after version 11, the ID should be a string like Replace the password hash with an exclamation mark. Create a new entry in The login with the password stored in the local database is not possible anymore. After logging in via external authentication, the user has access to all their previous libraries.

This example shows how to migrate the user with the username This is what the database looks like before these commands must be executed: Note: The Afterwards the databases should look like this:

First configure the two external authentications and test them with a dummy user. Then, to migrate all the existing users, you only need to make changes to the

First, delete the entry in the Then you can reset the user's password, e.g. via the web interface. The user will be assigned a local password, and from then on authentication will be done against Seafile's local database. More details about this option will follow soon.

Kerberos is a widely used single sign on (SSO) protocol. Support for auto login uses a Kerberos service. For server configuration, please read the remote user authentication documentation. You have to configure Apache to authenticate with Kerberos. This is out of the scope of this documentation; you can for example refer to this webpage. The client machine has to join the AD domain.

In a Windows domain, the Kerberos Key Distribution Center (KDC) is implemented on the domain service. Since the client machine has been authenticated by the KDC when a Windows user logs in, a Kerberos ticket will be generated for the current user without needing another login in the browser.
When a program using the WinHttp API tries to connect to a server, it can log in automatically through Integrated Windows Authentication. Internet Explorer and SeaDrive both use this mechanism. The details of Integrated Windows Authentication are described below: In short: The Internet Options have to be configured as follows: Open "Internet Options", select the "Security" tab, select the "Local Intranet" zone. Note: The above configuration requires a reboot to take effect. Next, we shall test the auto login function in Internet Explorer: visit the website and click the "Single Sign-On" link. It should log in directly; otherwise, auto login is malfunctioning. Note: The address in the test must be the same as the address specified in the keytab file. Otherwise, the client machine can't get a valid ticket from Kerberos. SeaDrive will use the Kerberos login configuration from the Windows Registry under The system-wide configuration path is located at SeaDrive can be installed silently with the following command (requires admin privileges): The configuration of Internet Options: https://docs.microsoft.com/en-us/troubleshoot/browsers/how-to-configure-group-policy-preference-settings The configuration of the Windows Registry: https://thesolving.com/server-room/how-to-deploy-a-registry-key-via-group-policy/ This manual explains how to deploy and run Seafile Server on a Linux server using Kubernetes (k8s hereafter). The two volumes for persisting data, The two tools, kubectl and a k8s control plane tool (i.e., kubeadm), are required and can be installed with the official installation guide. Note that for a multi-node deployment, the k8s control plane needs to be installed on each node. After installation, you need to start the k8s control plane service on each node and refer to the k8s official manual for creating a cluster.
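The cluster bootstrap referred to above can be sketched roughly as follows. This is only an illustration of the generic kubeadm workflow, not a Seafile-specific requirement; the pod network CIDR and the choice of CNI plugin are assumptions:

```
# on the control plane node
kubeadm init --pod-network-cidr=10.244.0.0/16
# install a CNI network plugin of your choice, then verify
kubectl get nodes
# on each worker node, join the cluster with the token printed by kubeadm init
kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash <hash>
```

Consult the official Kubernetes documentation for the authoritative bootstrap procedure.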
Since this manual still uses the same image as the docker deployment, we need to add the following repository to k8s: Seafile mainly involves three different services, namely the database service, the cache service and the seafile service. Since these three services do not have a direct dependency relationship, we need to separate them from the entire docker-compose.yml (in this manual, we use Seafile 11 PRO) and divide them into three pods. For each pod, we need to define a series of YAML files for k8s to read, and we will store these YAMLs in Please replace Please replace the above configurations, such as the database root password and the Seafile admin account. You can use the following command to deploy the pods: Similar to a docker installation, you can also manage containers through some kubectl commands. For example, you can use the following commands to check whether the relevant resources have started successfully and whether the relevant services can be accessed normally. First, execute the following command and remember the pod name with You can check the status of a pod by and enter a container by If you modify some configurations in After completing the installation of Seafile Server Community Edition and Seafile Server Professional Edition, communication between the Seafile server and clients runs over (unencrypted) HTTP. While HTTP is ok for testing purposes, switching to HTTPS is imperative for production use. HTTPS requires an SSL certificate from a Certificate Authority (CA). Unless you already have an SSL certificate, we recommend that you get your SSL certificate from Let's Encrypt using Certbot. If you have an SSL certificate from another CA, skip the section "Getting a Let's Encrypt certificate". A second requirement is a reverse proxy supporting SSL. Apache, a popular web server and reverse proxy, is a good option. The full documentation of Apache is available at https://httpd.apache.org/docs/. The recommended reverse proxy, however, is Nginx.
You can find instructions for enabling HTTPS with Nginx here. The setup of Seafile using Apache as a reverse proxy with HTTPS is demonstrated using the sample host name This manual assumes the following requirements: If your setup differs from these requirements, adjust the following instructions accordingly. The setup proceeds in two steps: First, Apache is installed. Second, an SSL certificate is integrated into the Apache configuration. Install and enable the Apache modules: Important: Due to the security advisory published by the Django team, we recommend disabling GZip compression to mitigate the BREACH attack. No version earlier than Apache 2.4 should be used. Modify the Apache config file. For CentOS, this is Getting a Let's Encrypt certificate is straightforward thanks to Certbot. Certbot is a free, open source software tool for requesting, receiving, and renewing Let's Encrypt certificates. First, go to the Certbot website and choose your web server and OS. Second, follow the detailed instructions then shown. We recommend that you get just a certificate and that you modify the Apache configuration yourself: Follow the instructions on the screen. Upon successful verification, Certbot saves the certificate files in a directory named after the host name in To use HTTPS, you need to enable mod_ssl: Then modify your Apache configuration file. Here is a sample: Finally, make sure the virtual host file does not contain syntax errors and restart Apache for the configuration changes to take effect: The The Note: The To improve security, the file server should only be accessible via Apache. Add the following line in the [fileserver] block on After this change, the file server only accepts requests from Apache. Restart the seaf-server and Seahub for the config changes to take effect: If there are problems with paths or files containing spaces, make sure to have at least Apache 2.4.12.
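The Apache sample referenced in this section was omitted here; the following is a minimal sketch of a typical Seafile HTTPS virtual host. The host name, the certificate paths, and the upstream ports 8000 (Seahub) and 8082 (file server) are assumptions based on a default Seafile setup and must be adapted:

```apache
<VirtualHost *:443>
    ServerName seafile.example.com
    SSLEngine On
    SSLCertificateFile /etc/letsencrypt/live/seafile.example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/seafile.example.com/privkey.pem

    ProxyPreserveHost On
    RewriteEngine On

    # seafile file server
    ProxyPass /seafhttp http://127.0.0.1:8082
    ProxyPassReverse /seafhttp http://127.0.0.1:8082
    RewriteRule ^/seafhttp - [QSA,L]

    # Seahub
    ProxyPass / http://127.0.0.1:8000/
    ProxyPassReverse / http://127.0.0.1:8000/
</VirtualHost>
```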
References After completing the installation of Seafile Server Community Edition and Seafile Server Professional Edition, communication between the Seafile server and clients runs over (unencrypted) HTTP. While HTTP is ok for testing purposes, switching to HTTPS is imperative for production use. HTTPS requires an SSL certificate from a Certificate Authority (CA). Unless you already have an SSL certificate, we recommend that you get your SSL certificate from Let's Encrypt using Certbot. If you have an SSL certificate from another CA, skip the section "Getting a Let's Encrypt certificate". A second requirement is a reverse proxy supporting SSL. Nginx, a popular and resource-friendly web server and reverse proxy, is a good option. Nginx's documentation is available at http://nginx.org/en/docs/. If you prefer Apache, you can find instructions for enabling HTTPS with Apache here. The setup of Seafile using Nginx as a reverse proxy with HTTPS is demonstrated using the sample host name This manual assumes the following requirements: If your setup differs from these requirements, adjust the following instructions accordingly. The setup proceeds in two steps: First, Nginx is installed. Second, an SSL certificate is integrated into the Nginx configuration. Install Nginx using the package repositories: After the installation, start the server and enable it so that Nginx starts at system boot: The configuration of a proxy server in Nginx differs slightly between CentOS and Debian/Ubuntu. Additionally, the restrictive default settings of SELinux's configuration on CentOS require a modification.
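On CentOS, the SELinux adjustment mentioned can be sketched as follows (assuming the standard config path /etc/selinux/config; run as root):

```
setenforce 0                                                    # switch to permissive mode immediately
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config  # persist the setting across reboots
```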
Switch SELinux into permissive mode and perpetuate the setting: Create a configuration file for seafile in Create a configuration file for seafile in Delete the default files in Create a symbolic link: Copy the following sample Nginx config file into the just created The following options must be modified in the CONF file: Optional customizable options in the seafile.conf are: The default value for Finally, make sure your seafile.conf does not contain syntax errors and restart Nginx for the configuration changes to take effect: Getting a Let's Encrypt certificate is straightforward thanks to Certbot. Certbot is a free, open source software tool for requesting, receiving, and renewing Let's Encrypt certificates. First, go to the Certbot website and choose your web server and OS. Second, follow the detailed instructions then shown. We recommend that you get just a certificate and that you modify the Nginx configuration yourself: Follow the instructions on the screen. Upon successful verification, Certbot saves the certificate files in a directory named after the host name in Add a server block for port 443 and an HTTP-to-HTTPS redirect to the This is a (shortened) sample configuration for the host name seafile.example.com: Finally, make sure your seafile.conf does not contain syntax errors and restart Nginx for the configuration changes to take effect: Tip for uploading very large files (> 4GB): By default, Nginx buffers a large request body in a temp file. After the body is completely received, Nginx sends the body to the upstream server (seaf-server in our case). But when the file size is very large, the buffering mechanism doesn't seem to work well; it may stop proxying the body midway.
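A minimal sketch of the Nginx server block described in this section. The host name and the upstream ports 8000 (Seahub) and 8082 (file server) are assumptions based on a default Seafile setup; `proxy_request_buffering off` relates to the large-upload buffering issue just described and needs Nginx >= 1.8.0:

```nginx
server {
    listen 80;
    server_name seafile.example.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 1200s;
        client_max_body_size 0;        # no upload size limit
    }

    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8082;
        client_max_body_size 0;
        proxy_request_buffering off;   # stream large uploads to seaf-server
    }
}
```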
So if you want to support file uploads larger than 4GB, we suggest you install Nginx version >= 1.8.0 and add the following options to the Nginx config file: If you have WebDAV enabled, it is recommended to add the same: The The Note: The To improve security, the file server should only be accessible via Nginx. Add the following line in the After this change, the file server only accepts requests from Nginx. Restart the seaf-server and Seahub for the config changes to take effect: IPv6 is required on the server, otherwise the server will not start! The AAAA DNS record is also required for IPv6 usage. Activate HTTP/2 for more performance. It is only available with SSL and Nginx version >= 1.9.5. Simply add The TLS configuration in the sample Nginx configuration file above receives a B overall rating on SSL Labs. By modifying the TLS configuration in The following sample Nginx configuration file for the host name seafile.example.com contains additional security-related directives. (Note that this sample file uses a generic path for the SSL certificate files.) Some of the directives require further steps, as explained below. Enable HTTP Strict Transport Security (HSTS) to prevent man-in-the-middle attacks by adding this directive: HSTS instructs web browsers to automatically use HTTPS. That means, after the first visit of the HTTPS version of Seahub, the browser will only use HTTPS to access the site. Enable Diffie-Hellman (DH) key exchange. Generate DH parameters and write them to a .pem file using the following command: The generation of the DH parameters may take some time depending on the server's processing power. Add the following directive in the HTTPS server block: Disallow the use of old TLS protocols and ciphers. Mozilla provides a configuration generator for optimizing the conflicting objectives of security and compatibility. Visit https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx for more information. NOTE: Since version 7.0, this documentation is deprecated.
Users should use Apache as a proxy server for Kerberos authentication, then configure Seahub following the instructions in Remote User Authentication. Kerberos is a widely used single sign on (SSO) protocol. Seafile server supports authentication via Kerberos. It allows users to log in to Seafile without entering credentials again if they have a Kerberos ticket. In this documentation, we assume the reader is familiar with Kerberos installation and configuration. Seahub provides a special URL to handle Kerberos login. The URL is The configuration includes three steps: Store the keytab under the name defined below and make it accessible only to the Apache user (e.g. httpd or www-data, with chmod 600). You should create a new location in your virtual host configuration for Kerberos. After restarting Apache, you should see in the Apache logs that user@REALM is used when accessing https://seafile.example.com/krb5-login/. Seahub extracts the username from the Now we have to tell Seahub what to do with the authentication information passed in by Kerberos. Add the following option to seahub_settings.py. After restarting Apache and the Seafile services, you can test the Kerberos login workflow. Note: This documentation is for the Community Edition. If you're using the Pro Edition, please refer to the Seafile Pro documentation. When Seafile is integrated with LDAP, users in the system can be divided into two tiers: Users within Seafile's internal user database. Some attributes are attached to these users, such as whether the user is a system admin and whether the account is activated. Users in the LDAP server. These are all the intended users of Seafile inside the LDAP server. Seafile doesn't manipulate these users directly. It has to import them into its internal database before setting attributes on them. When Seafile counts the number of users in the system, it only counts the activated users in its internal database.
The only requirement for Seafile to use LDAP for authentication is that there must be a unique identifier for each user in the LDAP server. This identifier should also be user-friendly, as users will use it as the username when logging in. Below are some common options for this unique identifier: Note: the identifier is stored in the table Add the following options to Meaning of some options: LDAP_USER_ROLE_ATTR: the LDAP user's role attribute LDAP_USER_FIRST_NAME_ATTR: the attribute for the user's first name. It's "givenName" by default. Tips for choosing To determine the If you want to allow all users to use Seafile, you can use If you want to limit users to a certain OU (Organization Unit), you can run AD supports Multiple base DNs are useful when your company has more than one OU that uses Seafile. You can specify a list of base DNs in the A search filter is very useful when you have a large organization but only a portion of the people want to use Seafile. The filter can be given by setting The final filter used for searching for users is For example, add the option below to The final search filter would be Note that the case of attribute names in the above example is significant. The You can use the First, you should find out the DN for the group. Again, we'll use the Add the option below to If your LDAP service supports TLS connections, you can configure Since Seafile Professional edition 6.0.0, you can integrate Seafile with Collabora Online to preview office files. Prepare an Ubuntu 20.04 or 22.04 64-bit server with docker installed. Assign a domain name to this server; we use collabora-online.seafile.com here. Obtain and install valid TLS/SSL certificates for this server; we use Let's Encrypt.
Then use Nginx to serve Collabora Online, config file example (source https://sdk.collaboraonline.com/docs/installation/Proxy_settings.html): then use the following command to set up/start Collabora Online (source https://sdk.collaboraonline.com/docs/installation/CODE_Docker_image.html#code-docker-image): NOTE: the For more information about Collabora Online and how to deploy it, please refer to https://www.collaboraoffice.com NOTE: You must enable HTTPS with valid TLS/SSL certificates on Seafile to use Collabora Online. Add the following config option to seahub_settings.py: Then restart Seafile. Click an office file in the Seafile web interface and you will see the online preview rendered by LibreOffice Online. Here is an example: Understanding how the integration works will help you debug problems. When a user visits a file page: If you have a problem, please check the Nginx log for Seahub (for step 3) and Collabora Online to see which step went wrong. NOTE: This tutorial only applies to the Seafile CE edition. First make sure the Python module for MySQL is installed. On Ubuntu/Debian, use Steps to migrate Seafile from SQLite to MySQL: Stop Seafile and Seahub. Download sqlite2mysql.sh and sqlite2mysql.py to the top directory of your Seafile installation path. For example, Run This script will produce three files: Then create the three databases ccnet_db, seafile_db, seahub_db and a seafile user. Import the ccnet data into MySQL. Import the seafile data into MySQL. Import the seahub data into MySQL.
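The three import steps above can be sketched as shell commands. The dump file names and credentials here are assumptions based on what the migration scripts typically produce; adapt them to your setup:

```
mysql -u seafile -p ccnet_db   < ccnet-db.sql
mysql -u seafile -p seafile_db < seafile-db.sql
mysql -u seafile -p seahub_db  < seahub-db.sql
```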
Modify the configuration files: Append the following lines to ccnet.conf: Note: Use Replace the database section in Append the following lines to Restart Seafile and Seahub. NOTE: User notifications will be cleared during migration due to slight differences between MySQL and SQLite. If you only see the busy icon when clicking the notifications button beside your avatar, please remove This error typically occurs because the table currently being created contains a foreign key that references a table whose primary key has not yet been created. Therefore, please check the table creation order in the SQL file. The correct order is: and Currently, the status updates of files and libraries on the client and web interface are based on polling the server. The latest status cannot be reflected in real time on the client due to polling delays. The client needs to periodically refresh the library modification, file locking, subdirectory permission and other information, which causes additional performance overhead on the server. When a directory is opened on the web interface, the lock status of a file cannot be updated in real time, and the page needs to be refreshed. The notification server uses the WebSocket protocol and maintains a two-way communication connection with the client or the web interface. When the above changes occur, seaf-server notifies the notification server of the changes, and the notification server can then notify the client or the web interface in real time. This not only improves the real-time performance, but also reduces the performance overhead on the server. Note: the notification server cannot work if you configure the Seafile server with a SQLite database. Since seafile-10.0.0, you can configure a notification server to send real-time notifications to clients.
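For the JWT key used by the notification server, any sufficiently random string works; one possible way to generate it (an assumption, not the only method) is via OpenSSL:

```shell
# Generate a random 32-byte, base64-encoded key suitable for the
# jwt_private_key option in the [notification] section of seafile.conf.
openssl rand -base64 32
```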
In order to run the notification server, you need to add the following configurations under seafile.conf: You can generate jwt_private_key with the following command: We generally recommend deploying the notification server behind Nginx; the notification server can be supported by adding the following Nginx configuration: Or add the configuration for Apache: NOTE: according to the Apache ProxyPass documentation, the final configuration for Apache should look like: After that, you can run the notification server with the following command: When the notification server is working, you can access If the client works with the notification server, there should be a log message in seafile.log or seadrive.log. There are no additional features for the notification server in the Pro Edition. It works the same as in the Community Edition. If you enable clustering, you need to deploy the notification server on one of the servers, or on a separate server. The load balancer should forward WebSocket requests to this node. On each Seafile frontend node, the notification server configuration should be the same as in the Community Edition: You need to configure the load balancer according to the following forwarding rules: Here is a configuration that uses HAProxy to support the notification server. The HAProxy version needs to be >= 2.0. You should use similar configurations for other load balancers. Since CE version 6.2.3, Seafile supports user login via OAuth. Before using OAuth, the Seafile administrator should first register an OAuth2 client application on your authorization server, then add some configurations to seahub_settings.py. Here we use GitHub as an example. First you should register an OAuth2 client application on GitHub; the official documentation from GitHub is very detailed. Add the following configurations to seahub_settings.py: NOTE: There are some more explanations about the settings.
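A sketch of such a seahub_settings.py configuration for GitHub, with option names assumed from recent Seafile documentation; the client ID, client secret, and callback URL are placeholders you must replace with the values from your registered OAuth2 application:

```python
ENABLE_OAUTH = True
OAUTH_CLIENT_ID = "your-client-id"
OAUTH_CLIENT_SECRET = "your-client-secret"
OAUTH_REDIRECT_URL = 'https://seafile.example.com/oauth/callback/'
OAUTH_PROVIDER_DOMAIN = 'github.com'
OAUTH_AUTHORIZATION_URL = 'https://github.com/login/oauth/authorize'
OAUTH_TOKEN_URL = 'https://github.com/login/oauth/access_token'
OAUTH_USER_INFO_URL = 'https://api.github.com/user'
OAUTH_SCOPE = ["user"]
OAUTH_ATTRIBUTE_MAP = {
    "id": (True, "uid"),               # claim -> (required, Seafile attribute)
    "email": (False, "contact_email"),
    "name": (False, "name"),
}
```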
OAUTH_PROVIDER / OAUTH_PROVIDER_DOMAIN OAUTH_ATTRIBUTE_MAP This variable describes which claims from the response of the user info endpoint are to be filled into which attributes of the new Seafile user. The format is shown below: If the remote resource server, like GitHub, uses email to identify a unique user too, Seafile will use the GitHub id directly; the OAUTH_ATTRIBUTE_MAP setting for GitHub should look like this: The key part Since version 11.0, Seafile uses If you upgrade from a version below 11.0, you need to have both fields configured, i.e., your configuration should look like: In this way, when a user logs in, Seafile will first use the "id -> email" map to find the old user and then create a "uid -> uid" map for this old user. After all users have logged in once, you can delete the configuration If you use a newly deployed 11.0 Seafile instance, you don't need the For GitHub, To enable OAuth via GitLab: Create an application in GitLab (under Admin area->Applications). Fill in the required fields: Name: a name you specify Redirect URI: the callback URL, see below Trusted: skip the confirmation dialog page. Select this to not ask users whether they want to authorize Seafile to access their account data. Scopes: Select Press submit and copy the client id and secret you receive on the confirmation page and use them in this template for your seahub_settings.py: For users of Azure Cloud, as there is no Please see this tutorial for the complete deployment process of OAuth against Azure Cloud. From 8.0.0, Seafile supports the OCM protocol. With OCM, a user can share a library to another server that has OCM enabled too. Seafile currently supports sharing between Seafile servers with a version greater than 8.0, and sharing from Nextcloud to Seafile since 9.0. Note that these two functions cannot be enabled at the same time. Add the following configuration to OCM_REMOTE_SERVERS is a list of servers that you allow your users to share libraries with.
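A sketch of the OCM settings for seahub_settings.py; the option names are assumed from the Seafile documentation and the server entries are placeholders:

```python
ENABLE_OCM = True
OCM_SEAFILE_PROVIDER = 'Seafile'
OCM_REMOTE_SERVERS = [
    {
        "server_name": "dev",
        "server_url": "https://seafile.example.com/",  # trailing slash assumed
    },
]
```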
Add the following configuration to In the library sharing dialog, jump to "Share to other server"; you can share this library to users of another server with "Read-Only" or "Read-Write" permission. You can also view sharing records and cancel sharing. You can jump to the "Shared from other servers" page to view the libraries shared by other servers and cancel the sharing, and enter a library to view, download or upload files. From version 6.1.0 on (including CE), Seafile supports OnlyOffice to view/edit office files online. In order to use OnlyOffice, you must first deploy an OnlyOffice server. You can deploy OnlyOffice to the same machine as Seafile using the Note: Using the official documentation to deploy to the same machine as the Seafile server is no longer recommended after 12.0. From Seafile 12.0, OnlyOffice's JWT verification is forcibly enabled. Secure communication between Seafile and OnlyOffice is granted by a shared secret. You can get the JWT secret with the following command: Download the insert Also modify By default, OnlyOffice uses port 6233 for communication between Seafile and the Document Server. You can modify the bound port by specifying The following configuration options are only for OnlyOffice experts.
You can create and mount a custom configuration file called For example, you can configure OnlyOffice to automatically save by copying the following code block into this file: Mount this config file into your onlyoffice block in For more information you can check the official documentation: https://api.onlyoffice.com/editors/signature/ and https://github.com/ONLYOFFICE/Docker-DocumentServer#available-configuration-parameters By default, OnlyOffice will use the database information related to First, you need to make sure the database service is started, then enter the seafile-mysql container In the container, you need to create the database After the installation process is finished, visit this page to make sure you have deployed OnlyOffice successfully: Firstly, run If it shows this error message and you haven't enabled JWT while using a local network, then it's likely due to an error triggered proactively by the OnlyOffice server for enhanced security (https://github.com/ONLYOFFICE/DocumentServer/issues/2268#issuecomment-1600787905). So, as mentioned in the post, we highly recommend enabling JWT in your integration to fix this problem. Starting from OnlyOffice Docker-DocumentServer version 7.2, JWT is enabled by default on the OnlyOffice server. So, for security reasons, please configure OnlyOffice to use a JWT secret. In general, you only need to specify the values of the following fields in For deployments using the The Seafile Add-in for Outlook natively supports authentication via username and password. In order to authenticate with SSO, the add-in utilizes the SSO support integrated in Seafile's web interface Seahub. Specifically, this is how SSO with the add-in works: This document explains how to configure Seafile and the reverse proxy and how to deploy the PHP script. SSO authentication must be configured in Seafile. Seafile Server must be version 8.0 or above. The packages php, composer, firebase-jwt, and guzzle must be installed.
PHP can usually be downloaded and installed via the distribution's official repositories. firebase-jwt and guzzle are installed using composer. First, install the php package and check the installed version: Second, install composer. You can find an up-to-date install manual at https://getcomposer.org/ for CentOS, Debian, and Ubuntu. Third, use composer to install firebase-jwt and guzzle in a new directory in Add this block to the config file Replace SHARED_SECRET with a secret of your own. The configuration depends on the proxy server in use. If you use nginx, add the following location block to the nginx configuration: This sample block assumes that PHP 7.4 is installed. If you have a different PHP version on your system, modify the version in the fastcgi_pass unix socket path. Note: The alias path can be altered. We advise against it unless there are good reasons. If you do, make sure you modify the path accordingly in all subsequent steps. Finally, check the nginx configuration and restart nginx: The PHP script and corresponding configuration files will be saved in the new directory created earlier. Change into it and add a PHP config file: Paste the following content into the First, replace SEAFILE_SERVER_URL with the URL of your Seafile Server and SHARED_SECRET with the key used in Configuring Seahub. Second, add either the user credentials of a Seafile user with admin rights or the API token of such a user. In the next step, create the Paste the following code block: Note: Unlike config.php, no replacements or modifications are necessary in this file. The directory layout in Seafile and Seahub are now configured to support SSO in the Seafile Add-in for Outlook. You can now test SSO authentication in the add-in. Hit the SSO button in the settings of the Seafile add-in. Starting from 7.0.0, Seafile can integrate with various Single Sign On systems via a proxy server.
Examples include Apache as a Shibboleth proxy, LemonLDAP as a proxy to LDAP servers, or Apache as a Kerberos proxy. Seafile can retrieve user information from special request headers (HTTP_REMOTE_USER, HTTP_X_AUTH_USER, etc.) set by the proxy servers. After the proxy server (Apache/Nginx) has successfully authenticated the user, the user information is set in the request header, and Seafile creates and logs in the user based on this information. Note: Make sure that the proxy server has a corresponding security mechanism to protect against forged request header attacks. Please add the following settings to Then restart Seafile. Shibboleth is a widely used single sign on (SSO) protocol. Seafile supports authentication via Shibboleth. It allows users from another organization to log in to Seafile without registering an account on the service provider. In this documentation, we assume the reader is familiar with Shibboleth installation and configuration. For an introduction to Shibboleth concepts, please refer to https://shibboleth.atlassian.net/wiki/spaces/CONCEPT/overview. The Shibboleth Service Provider (SP) should be installed on the same server as the Seafile server. The official SP from https://shibboleth.net/ is implemented as an Apache module. The module handles all Shibboleth authentication details. The Seafile server receives the authentication information (username) from the HTTP request. The username can then be used as the login name for the user. Seahub provides a special URL to handle Shibboleth login. The URL is Since Shibboleth support requires Apache, if you want to use Nginx, you need two servers: one for non-Shibboleth access, and another configured with Apache to allow Shibboleth login. In a cluster environment, you can configure your load balancer to direct traffic to different servers according to the URL. Only the URL The configuration includes three steps: We use CentOS 7 as an example. You should create a new virtual host configuration for Shibboleth, and then restart Apache.
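A hypothetical sketch of such a virtual host fragment; the location paths, handler name, and directives follow a typical Shibboleth SP installation and are assumptions to be adapted to your environment:

```apache
<Location /Shibboleth.sso>
    SetHandler shib
</Location>

<Location /sso>
    AuthType shibboleth
    ShibRequestSetting requireSession true
    Require valid-user
</Location>
```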
Installation and configuration of Shibboleth are out of the scope of this documentation. You can refer to the official Shibboleth documentation. Open Change Seahub extracts the username from the In Seafile, only one of the following two attributes can be used as the username: Change Change Open Uncomment the attribute elements to get more user info: After restarting Apache, you should be able to get the Service Provider metadata by accessing https://your-seafile-domain/Shibboleth.sso/Metadata. This metadata should be uploaded to the Identity Provider (IdP) server. Add the following configuration to seahub_settings.py. Seahub can process additional user attributes from Shibboleth. These attributes are saved into Seahub's database as the user's properties. None of them are mandatory. The internal user properties Seahub currently supports are: You can specify the mapping between Shibboleth attributes and Seahub's user properties in seahub_settings.py: In the above config, the hash key is the Shibboleth attribute name, and the second element in the hash value is Seahub's property name. You can adjust the Shibboleth attribute names for your own needs. Note that you may have to change attribute-map.xml in your Shibboleth SP, so that the desired attributes are passed to Seahub. And you have to make sure the IdP sends these attributes to the SP. We also added an option Shibboleth has a field called affiliation. It is a list like: We are able to set the user role from Shibboleth. For details about user roles, please refer to https://download.seafile.com/published/seafile-manual/deploy_pro/roles_permissions.md To enable this, modify Then add a new config to define the affiliation role map. After Shibboleth login, Seafile calculates the user's role from the affiliation and SHIBBOLETH_AFFILIATION_ROLE_MAP. After restarting Apache and the Seahub service ( If you encounter problems when logging in, follow these steps to get debug info (for Seafile pro 6.3.13).
Open Insert the following code in line 59 Insert the following code in line 65 The complete code after these changes is as follows: Then restart Seafile and log in again; you will see debug info in the web page. Seafile supports most of the popular single sign on authentication protocols. Some are included in the Community Edition, some are only in the Pro Edition. In the Community Edition: Kerberos authentication can be integrated by using Apache as a proxy server and following the instructions in Remote User Authentication and Auto Login SeaDrive on Windows. In the Pro Edition: Firstly, you should create a script to activate the Python virtual environment, which goes in the ${seafile_dir} directory. Put another way, it does not go in "seafile-server-latest", but the directory above that. Throughout this manual the examples use /opt/seafile for this directory, but you might have chosen to use a different directory. The content of the file is: Make this script executable. The content of the file is: The content of the file is: Create the systemd service files, change ${seafile_dir} to your Seafile installation location and seafile to the user who runs Seafile (if appropriate). Then you need to reload systemd's daemons: systemctl daemon-reload. The content of the file is: Create the systemd service file /etc/systemd/system/seahub.service The content of the file is: Create the systemd service file /etc/systemd/system/seafile-client.service You need to create this service file only if you have the Seafile console client and you want to run it on system boot. The content of the file is: Files in the Seafile system are split into blocks, which means that what is stored on your Seafile server is not complete files, but blocks. This design facilitates effective data deduplication. However, administrators sometimes want to access the files directly on the server. You can use seaf-fuse to do this. Seaf-fuse has been available since Seafile Server '''2.1.0'''. '''Note:''' * Encrypted folders can't be accessed by seaf-fuse.
* Currently the implementation is '''read-only''', which means you can't modify the files through the mounted folder. * On Debian/CentOS systems, you need to be in the "fuse" group to have the permission to mount a FUSE folder. Assume we want to mount to '''Note:''' Before starting seaf-fuse, you should have started the seafile server with Now you can list the content of From the above list you can see that under the folder of a user there are subfolders, each of which represents a library of that user, and has a name of this format: '''{library_id}-{library-name}'''. If you get an error message saying "Permission denied" when running seaf-server and seafile-controller support reopening logfiles by receiving a This feature is very useful when you need to rotate logfiles but don't want to shut down the server. All you need to do is rotate the logfile on the fly. For Debian, the default directory for logrotate should be Assuming your seaf-server's logfile is set up to The configuration for logrotate could be like this: You can save this file, in Debian for example, at This manual explains how to deploy and run Seafile Server Community Edition (Seafile CE) on a Linux server from a pre-built package using MySQL/MariaDB as the database. The deployment has been tested on Debian/Ubuntu and CentOS, but Seafile should also work on other Linux distributions. Tip: If you have little experience with Seafile Server, we recommend that you use an installation script for deploying Seafile. Seafile CE for the x86 architecture requires a minimum of 2 cores and 2GB RAM. There is a community-supported package for installation on the Raspberry Pi. Seafile supports MySQL and MariaDB. We recommend that you use the preferred SQL database management engine included in the package repositories of your distribution. This means: You can find step-by-step how-tos for installing MySQL and MariaDB in the tutorials on the Digital Ocean website. 
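Returning to log rotation: the logrotate configuration mentioned earlier could look like the following sketch. The log and pid file paths are assumptions to adapt to your installation; it relies on seaf-server reopening its logfile when it receives a signal.

```
# /etc/logrotate.d/seafile -- illustrative sketch; adapt paths to your setup
/opt/seafile/logs/seaf-server.log {
    daily
    missingok
    rotate 7
    compress
    delaycompress
    notifempty
    postrotate
        # ask seaf-server to reopen its logfile after rotation
        [ ! -f /opt/seafile/pids/seaf-server.pid ] || kill -USR1 `cat /opt/seafile/pids/seaf-server.pid`
    endscript
}
```

Verify the signal name and pid file location against your manual version before deploying this.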
Seafile uses the mysql_native_password plugin for authentication. The versions of MySQL and MariaDB installed on CentOS 8, Debian 10, and Ubuntu 20.04 use a different authentication plugin by default. It is therefore required to change the authentication plugin to mysql_native_password for the root user prior to the installation of Seafile. The above-mentioned tutorials explain how to do it. For Seafile 8.0.x For Seafile 9.0.x Note: CentOS 8 is no longer supported. For Seafile 10.0.x For Seafile 11.0.x (Debian 11, Ubuntu 22.04, etc.) For Seafile 11.0.x on Debian 12 and Ubuntu 24.04 with virtual env Debian 12 and Ubuntu 24.04 now discourage system-wide installation of python modules with pip. It is now preferred to install modules into a virtual environment, which keeps them separate from the files installed by the system package manager and enables different versions to be installed for different applications. For these python virtual environments (venv for short) to work, you have to activate the venv to make the packages installed in it available to the programs you run. That is done here with "source python-venv/bin/activate". The standard directory for Seafile's program files is The program directory can be changed. The standard directory It is good practice not to run applications as root. Create a new user and follow the instructions on the screen: Change ownership of the created directory to the new user: All the following steps are done as user seafile. Change to user seafile: Download the install package from the download page on Seafile's website using wget. We use Seafile CE version 8.0.4 as an example in the rest of this manual. The install package is downloaded as a compressed tarball which needs to be uncompressed. Uncompress the package using tar: Now you have: The install package comes with a script that sets Seafile up for you. Specifically, the script creates the required directories and extracts all files in the right place. 
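For the authentication plugin change described above, a hedged example follows (MySQL 8 syntax; the password is a placeholder, and MariaDB versions may use slightly different syntax, so check your server's documentation first):

```sql
-- Switch the root user to the mysql_native_password plugin (MySQL 8 style).
-- Replace 'your-password' with the actual root password.
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your-password';
FLUSH PRIVILEGES;
```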
The script can also create a MySQL user and the three databases that Seafile's components require: Note: While the ccnet server was merged into the seafile-server in Seafile 8.0, the corresponding database is still required for the time being. Run the script as user seafile: Configure your Seafile Server by specifying the following three parameters: In the next step, choose whether to create new databases for Seafile or to use existing databases. The creation of new databases requires the root password for the SQL server. When choosing "[1] Create new ccnet/seafile/seahub databases", the script creates these databases and a MySQL user that Seafile Server will use to access them. To this effect, you need to answer these questions: When choosing "[2] Use existing ccnet/seafile/seahub databases", these are the prompts you need to answer: If the setup is successful, you see the following output: The directory layout then looks as follows: The folder Note: If you don't have the root password, you need someone who has the privileges, e.g., the database admin, to create the three databases required by Seafile, as well as a MySQL user who can access the databases. For example, to create three databases Seahub caches items (avatars, profiles, etc.) on the file system by default (/tmp/seahub_cache/). You can replace this with Memcached or Redis. Use the following commands to install memcached and the corresponding libraries on your system: Add the following configuration to Redis is supported since version 11.0. First, install Redis with the package installers in your OS. Then refer to Django's documentation about using Redis cache to add Redis configurations to Seafile's config files. The config files created by the setup script are prepared for Seafile running behind a reverse proxy. 
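For the Memcached option mentioned above, the cache configuration added to seahub_settings.py typically looks like the following sketch (the backend and address shown are the common defaults from the Seafile manual; adjust them to your setup):

```python
# seahub_settings.py -- hedged sketch of the Memcached cache configuration.
# '127.0.0.1:11211' is the default local memcached address; change it if
# your memcached server runs elsewhere.
CACHES = {
    'default': {
        'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',
        'LOCATION': '127.0.0.1:11211',
    },
}
```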
To access Seafile's web interface and to create working sharing links without a reverse proxy, you need to modify two configuration files in Run the following commands in The first time you start Seahub, the script prompts you to create an admin account for your Seafile Server. Enter the email address of the admin user followed by the password. Now you can access Seafile via the web interface at the host address and port 8000 (e.g., http://1.2.3.4:8000) Note: On CentOS, the firewall blocks traffic on port 8000 by default. If seafile.sh and/or seahub.sh fail to run successfully, use Use It is strongly recommended to switch from unencrypted HTTP (via port 8000) to encrypted HTTPS (via port 443). This manual provides instructions for enabling HTTPS for the two most popular web servers and reverse proxies: Since Community Edition 5.1.2 and Professional Edition 5.1.4, Seafile supports using Syslog. Add the following configuration to Restart the seafile server, and you will find the following logs in Add the following configuration to Restart the seafile server, and you will find the following logs in Add the following configurations to You need to install the ffmpeg package to make video thumbnails work correctly: Ubuntu 16.04 CentOS 7 Debian Jessie Now configure accordingly in There are two ways to deploy Seafile Pro Edition. Since version 8.0, the recommended way to install Seafile Pro Edition is using Docker. You can add/edit roles and permissions for administrators. Seafile has four built-in admin roles: default_admin, which has all permissions. system_admin, which can only view system info and configure the system. daily_admin, which can only view system info, view statistics, manage libraries/users/groups, and view user logs. audit_admin, which can only view system info and admin logs. All administrators will have Seafile supports eight permissions for now. Its configuration is very similar to that of common user roles; you can customize it by adding the following settings to When you have both Java 6 and Java 7 installed, the default Java may not be Java 7. 
Do this by typing If the default Java is Java 6, then do On Debian/Ubuntu: On CentOS/RHEL: The above command will ask you to choose one of the installed Java versions as default. You should choose Java 7 here. After that, re-run Reference link To use ADFS to log in to your Seafile, you need the following components: A Windows Server with ADFS installed. For configuring and installing ADFS you can see this article. A valid SSL certificate for the ADFS server, and here we use adfs-server.adfs.com as the domain name example. A valid SSL certificate for the Seafile server, and here we use demo.seafile.com as the domain name example. You can generate them by:

```
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout sp.key -out sp.crt
sudo apt install xmlsec1
sudo pip install cryptography djangosaml2==0.15.0
```

```python
from os import path
import saml2
import saml2.saml

CERTS_DIR = '/seahub-data/certs'
SP_SERVICE_URL = 'https://demo.seafile.com'
XMLSEC_BINARY = '/usr/local/bin/xmlsec1'
ATTRIBUTE_MAP_DIR = '/seafile-server-latest/seahub-extra/seahub_extra/adfs_auth/attribute-maps'
SAML_ATTRIBUTE_MAPPING = {
    'DisplayName': ('display_name', ),
    'ContactEmail': ('contact_email', ),
    'Deparment': ('department', ),
    'Telephone': ('telephone', ),
}
```

Update the 'idp' section in SAML_CONFIG according to your situation, and leave the others as default:

```python
ENABLE_ADFS_LOGIN = True
EXTRA_AUTHENTICATION_BACKENDS = (
    'seahub_extra.adfs_auth.backends.Saml2Backend',
)
SAML_USE_NAME_ID_AS_USERNAME = True
LOGIN_REDIRECT_URL = '/saml2/complete/'
SAML_CONFIG = {
    # full path to the xmlsec1 binary program
    'xmlsec_binary': XMLSEC_BINARY,
}
```

Relying Party Trust is the connection between Seafile and ADFS. Log into the ADFS server and open the ADFS management. 
Double click Trust Relationships, then right click Relying Party Trusts and select Add Relying Party Trust…. Select Import data about the relying party published online or on a local network, input Then Next until Finish. Add Relying Party Claim Rules Relying Party Claim Rules are used for attribute communication between Seafile and users in the Windows Domain. Important: Users in the Windows domain must have the E-mail value set. Right-click on the relying party trust and select Edit Claim Rules... On the Issuance Transform Rules tab select Add Rules... Select Send LDAP Attribute as Claims as the claim rule template to use. Give the claim a name such as LDAP Attributes. Set the Attribute Store to Active Directory, the LDAP Attribute to E-Mail-Addresses, and the Outgoing Claim Type to E-mail Address. Select Finish. Click Add Rule... again. Select Transform an Incoming Claim. Give it a name such as Email to Name ID. The Incoming claim type should be E-mail Address (it must match the Outgoing Claim Type in rule #1). The Outgoing claim type is Name ID (this is required by Seafile's settings policy), and the Outgoing name ID format is Email. Pass through all claim values and click Finish. https://support.zendesk.com/hc/en-us/articles/203663886-Setting-up-single-sign-on-using-Active-Directory-with-ADFS-and-SAML-Plus-and-Enterprise- http://wiki.servicenow.com/?title=Configuring_ADFS_2.0_to_Communicate_with_SAML_2.0#gsc.tab=0 https://github.com/rohe/pysaml2/blob/master/src/saml2/saml.py The following section needs to be added to docker-compose.yml in the services section Add this to seafile.conf Wait a few minutes until ClamAV has finished initializing. Now ClamAV can be used. You should run Clamd with root permission to scan any files. Edit the conf The output must include: Update: Since Seafile Pro server 6.0.0, cluster deployment requires "sticky session" settings in the load balancer. Otherwise, folder download on the web UI sometimes doesn't work properly. 
Read the "Load Balancer Setting" section below for details. The Seafile cluster solution employs a 3-tier architecture: This architecture scales horizontally. That means you can handle more traffic by adding more machines. The architecture is visualized in the following picture. There are two main components on the Seafile server node: the web server (Nginx/Apache) and the Seafile app server. The web server passes requests from the clients to the Seafile app server. The Seafile app servers work independently. They don't know about each other's state. That means each app server can fail independently without affecting other app server instances. The load balancer is responsible for detecting failure and re-routing requests. Even though Seafile app servers work independently, they still have to share some session information. All shared session information is stored in memory cache. Thus, all Seafile app servers have to connect to the same memory cache server (cluster). Since Pro Edition 11.0, both memcached and Redis can be used as memory cache. Before 11.0, only memcached is supported. More details about memory cache configuration are available later. The background server is the workhorse for various background tasks, including full-text indexing, office file preview, virus scanning, and LDAP syncing. It should usually be run on a dedicated server for better performance. Currently only one background task server can be running in the entire cluster. If more than one background server is running, they may conflict with each other when doing some tasks. If you need HA for the background task server, you can consider using Keepalived to build a hot backup for it. More details can be found in background server setup. All Seafile app servers access the same set of user data. The user data has two parts: one in the MySQL database and the other in the backend storage cluster (S3, Ceph etc.). All app servers serve the data equally to the clients. 
All app servers have to connect to the same database or database cluster. We recommend using MariaDB Galera Cluster if you need a database cluster. There are a few steps to deploy a Seafile cluster: At least 3 Linux servers with at least 4 cores and 8GB RAM. Two servers work as frontend servers, while one server works as the background task server. Virtual machines are sufficient for most cases. In a small cluster, you can re-use the 3 Seafile servers to run the memcached cluster and MariaDB cluster. For larger clusters, you can have 3 more dedicated servers to run the memcached cluster and MariaDB cluster. Because the load on these two clusters is not high, they can share the hardware to save cost. Documentation about how to set up the memcached cluster and MariaDB cluster can be found here. Since version 11.0, Redis can also be used as the memory cache server. But currently only single-node Redis is supported. On each node, you need to install some python libraries. First make sure you have installed Python 2.7, then: If you receive an error stating "Wheel installs require setuptools >= ...", run this between the pip and boto lines above. You should make sure the config files on every Seafile server are consistent. Put the license you get under the top level directory. In our wiki, we use the directory Now you have: Please follow Download and Setup Seafile Professional Server With MySQL to set up a single Seafile server node. Note: Use the load balancer's address or domain name for the server address. Don't use the local IP address of each Seafile server machine. This ensures the user will always access your service via the load balancers. After the setup process is done, you still have to make a few manual changes to the config files. If you use a single memcached server, you have to add the following configuration to If you use a memcached cluster, the recommended way to set up memcached clusters can be found here. You'll set up two memcached servers, in active/standby mode. 
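As a hedged sketch, the single-memcached-server configuration mentioned above usually looks like this in seafile.conf (the IP address and pool sizes are placeholders; check the cluster documentation for your Seafile version):

```
[cluster]
enabled = true
memcached_options = --SERVER=192.168.1.134 --POOL-MIN=10 --POOL-MAX=100
```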
A floating IP address will be assigned to the current active memcached node. So you have to configure the address in seafile.conf accordingly. If you are using Redis as cache, add the following configuration: Currently only single-node Redis is supported. Redis Sentinel or Cluster is not supported yet. (Optional) The Seafile server also opens a port for the load balancers to run health checks. Seafile by default uses port 11001. You can change this by adding the following config option to You must set up and use memory cache when deploying a Seafile cluster. Refer to "memory cache" to configure memory cache in Seahub. Also add the following options to seahub_settings.py. These settings tell Seahub to store avatars in the database and cache avatars in memcached, and to store the css CACHE in local memory. Add the following to Here is an example Note: In a cluster environment, we have to store avatars in the database instead of on a local disk. You also need to add the settings for backend cloud storage systems to the config files. Nginx/Apache with HTTP needs to be set up on each machine running a Seafile server. This makes sure only port 80 needs to be exposed to the load balancer. (HTTPS should be set up at the load balancer) Please check the following documents on how to set up HTTP with Nginx/Apache. Note, you only need the HTTP setup part of the documents. (HTTPS is not needed) Once you have finished configuring this single node, start it to test if it runs properly: Note: The first time you start seahub, the script will prompt you to create an admin account for your Seafile server. Open your browser, visit Now you have one node working fine; let's continue to configure more nodes. Suppose your Seafile installation directory is On each node, run In the back-end node, you need to execute the following command to start the Seafile server. CLUSTER_MODE=backend means this node is a seafile backend server. It would be convenient to set up the Seafile service to start on system boot. 
Follow this documentation to set it up on all nodes. Besides the standard ports of a seafile server, there are 2 firewall rule changes for a Seafile cluster: Now that your cluster is already running, fire up the load balancer and welcome your users. Since version 6.0.0, Seafile Pro requires "sticky session" settings in the load balancer. You should refer to the manual of your load balancer for how to set up sticky sessions. In the AWS ELB management console, after you've added the Seafile server instances to the instance list, you should do two more configurations. First you should set up HTTP(S) listeners. Ports 443 and 80 of the ELB should be forwarded to ports 80 or 443 of the Seafile servers. Then you set up the health check. Refer to the AWS documentation about how to set up sticky sessions. This is a sample (assume your health check port is Now you should be able to test your cluster. Open https://seafile.example.com in your browser and enjoy. You can also synchronize files with Seafile clients. If the above works, the next step would be Enable search and background tasks in a cluster. Here is the summary of configurations at the front-end node that relate to the cluster setup. (for version 7.1+) For seafile.conf: The For seahub_settings.py: For seafevents.conf: The following options can be set in seafevents.conf to control the behaviors of file search. You need to restart seafile and seahub to make them take effect. Full text search is not enabled by default to save system resources. If you want to enable it, you need to follow the instructions below. First you have to set the value of Then restart the seafile server You need to delete the existing search index and recreate it. You can rebuild the search index by running: If this does not work, you can try the following steps: Create an elasticsearch service on AWS according to the documentation. 
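As a hedged sketch of the full-text search options discussed above, a typical seafevents.conf fragment looks like this (the values shown are common examples, not authoritative defaults; `interval` controls how often the index is updated):

```
[INDEX FILES]
enabled = true
interval = 10m
# whether to index the text contents of office/PDF files as well
index_office_pdf = true
```

Verify the exact option names against the search section of your manual version before enabling them.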
Configure seafevents.conf: NOTE: The version of the Python third-party package The search index is updated every 10 minutes by default. So before the first index update is performed, you get nothing no matter what you search for. To be able to search immediately, This is because the server cannot index encrypted files, since they are encrypted. The search functionality is based on Elasticsearch, which is a java process. You can modify the memory size by modifying the jvm configuration file. For example, to use 2G of memory, modify the following configuration in the Restart the seafile service to make the above changes take effect: If you use a cluster to deploy Seafile, you can use distributed indexing to achieve real-time indexing and improve indexing efficiency. The indexing process is as follows: First, install redis on all frontend nodes (if you use a redis cloud service, skip this step and modify the configuration files directly): For Ubuntu: For CentOS: Then, install the python redis third-party package on all frontend nodes: Next, modify the Next, modify the Next, restart Seafile to make the configuration take effect: First, prepare a seafes master node and several seafes slave nodes; the number of slave nodes depends on your needs. Deploy Seafile on these nodes, and copy the configuration files in the Next, create a configuration file Execute Next, create a configuration file Execute Note The index worker connects to the backend storage directly. You don't need to run seaf-server on the index worker node. To rebuild the search index, execute in the To list the number of indexing tasks currently remaining, execute in the The above commands need to be run on the master node. This manual explains how to deploy and run Seafile Server Professional Edition (Seafile PE) on a Linux server from a pre-built package using MySQL/MariaDB as the database. The deployment has been tested on Debian/Ubuntu and CentOS, but Seafile PE should also work on other Linux distributions. 
Tip: If you have little experience with Seafile Server, we recommend that you use an installation script for deploying Seafile Server. Seafile PE requires a minimum of 2 cores and 2GB RAM. If elasticsearch is installed on the same server, the minimum requirements are 4 cores and 4 GB RAM. Seafile PE can be used without a paid license with up to three users. Licenses for more users can be purchased in the Seafile Customer Center; alternatively, contact Seafile Sales at sales@seafile.com or one of our partners. These instructions assume that the MySQL/MariaDB server and client are installed and a MySQL/MariaDB root user can authenticate using the mysql_native_password plugin. For Seafile 8.0.x For Seafile 9.0.x For Seafile 10.0.x For Seafile 11.0.x (Debian 11, Ubuntu 22.04, CentOS 8, etc.) Note: The recommended deployment option for Seafile PE on CentOS/Red Hat is Docker. For Seafile 11.0.x on Debian 12 and Ubuntu 24.04 with virtual env Debian 12 and Ubuntu 24.04 now discourage system-wide installation of python modules with pip. It is now preferred to install modules into a virtual environment, which keeps them separate from the files installed by the system package manager and enables different versions to be installed for different applications. For these python virtual environments (venv for short) to work, you have to activate the venv to make the packages installed in it available to the programs you run. That is done here with "source python-venv/bin/activate". The Java Runtime Environment (JRE) is a requirement for full text search with ElasticSearch. It is used in extracting contents from PDF and Office files. The standard directory for Seafile's program files is The program directory can be changed. The standard directory Elasticsearch, the indexing server, cannot be run as root. More generally, it is good practice not to run applications as root. 
Create a new user and follow the instructions on the screen: Change ownership of the created directory to the new user: All the following steps are done as user seafile. Change to user seafile: Save the license file in Seafile's program directory The install packages for Seafile PE are available for download in the Seafile Customer Center. To access the Customer Center, a user account is necessary. The registration is free. Beginning with Seafile PE 7.0.17, the Seafile Customer Center provides two install packages for every version (using Seafile PE 8.0.4 as an example): The former is suitable for installation on Ubuntu/Debian servers, the latter for CentOS servers. Download the install package using wget (replace the x.x.x with the version you wish to download): We use Seafile version 8.0.4 as an example in the remainder of these instructions. The install package is downloaded as a compressed tarball which needs to be uncompressed. Uncompress the package using tar: Now you have: Note: The names of the install packages differ for Seafile CE and Seafile PE. Using Seafile CE and Seafile PE 8.0.4 as an example, the names are as follows: The setup process of Seafile PE is the same as for Seafile CE. See Installation of Seafile Server Community Edition with MySQL/MariaDB. After the successful completion of the setup script, the directory layout of Seafile PE looks as follows (some folders only get created after the first start, e.g. For Seafile 7.1.x and later Memory cache is mandatory for the pro edition. You may use Memcached or Redis as the cache server. Use the following commands to install memcached and the corresponding libraries on your system: Add the following configuration to Redis is supported since version 11.0. First, install Redis with the package installers in your OS. Then refer to Django's documentation about using Redis cache to add Redis configurations to You need at least to set up HTTP to make Seafile's web interface work. 
This manual provides instructions for enabling HTTP/HTTPS for the two most popular web servers and reverse proxies: Run the following commands in The first time you start Seahub, the script prompts you to create an admin account for your Seafile Server. Enter the email address of the admin user followed by the password. Now you can access Seafile via the web interface at the host address (e.g., http://1.2.3.4:80). Seafile uses the indexing server ElasticSearch to enable full text search. In versions prior to Seafile 9.0, Seafile's install packages included ElasticSearch. A separate deployment was not necessary. Due to licensing conditions, ElasticSearch 7.x can no longer be bundled in Seafile's install package. As a consequence, a separate deployment of ElasticSearch is required to enable full text search in Seafile's newest versions. Our recommendation for deploying ElasticSearch is using Docker. Detailed information about installing Docker on various Linux distributions is available at Docker Docs. Seafile PE 9.0 only supports ElasticSearch 7.x. Seafile PE 10.0 and 11.0 only support ElasticSearch 8.x. We use ElasticSearch version 7.16.2 as an example in this section. Version 7.16.2 and newer versions have been successfully tested with Seafile. Pull the Docker image: Create a folder for the persistent data created by ElasticSearch and change its permission: Now start the ElasticSearch container using the docker run command: Add the following configuration to Finally, restart Seafile: In a seafile cluster, only one server should run the background tasks, including: Let's assume you have three nodes in your cluster: A, B, and C. If you followed the steps on setting up a cluster, node B and node C should already be configured as frontend nodes. You can copy the configuration of node B as a base for node A. 
Then do the following steps: On Ubuntu/Debian: On CentOS/Red Hat: Edit seafevents.conf and ensure this line does NOT exist: Edit seafevents.conf, adding the following configuration: host is the IP address of the background node; make sure the frontend nodes can access the background node via IP:6000. Edit seafile.conf to enable virus scan according to the virus scan document In your firewall rules for node A, you should open port 9200 (for search requests) and port 6000 for the office converter. For versions older than 6.1, On nodes B and C, you need to: Edit Edit seahub_settings.py and add a line: Type the following commands to start the background node (note, one additional command To stop the background node, type: You should also configure Seafile background tasks to start on system bootup. For systemd based OS, you can add Then enable this task in systemd: Here is the summary of configurations at the background node that relate to the cluster setup. For seafile.conf: For seafevents.conf: If you followed the steps on setting up a cluster, node B and node C should already be configured as frontend nodes. You can copy the configuration of node B as a base for node A. Then do the following steps: Since 9.0, the ElasticSearch program is not part of the Seafile package. You should deploy the ElasticSearch service separately. Then edit Edit seafile.conf to enable virus scan according to the virus scan document On nodes B and C, you need to: Edit Edit seahub_settings.py and add a line: Type the following commands to start the background node (note, one additional command To stop the background node, type: You should also configure Seafile background tasks to start on system bootup. For systemd based OS, you can add Then enable this task in systemd: Here is the summary of configurations at the background node that relate to the cluster setup. 
For seafile.conf: For seafevents.conf: When Seafile is integrated with LDAP, users in the system can be divided into two tiers: Users within Seafile's internal user database. Some attributes are attached to these users, such as whether it's a system admin user, and whether it's activated. Users in the LDAP server. These are all the intended users of Seafile inside the LDAP server. Seafile doesn't manipulate these users directly. It has to import them into its internal database before setting attributes on them. When Seafile counts the number of users in the system, it only counts the activated users in its internal database. The only requirement for Seafile to use LDAP for authentication is that there must be a unique identifier for each user in the LDAP server. This id should also be user-friendly, as the users will use it as the username when logging in. Below are some usual options for this unique identifier: Note, the identifier is stored in table Add the following options to Meaning of some options: LDAP_USER_ROLE_ATTR: the LDAP user's role attribute LDAP_USER_FIRST_NAME_ATTR: the attribute for the user's first name. It's "givenName" by default. Tips for choosing To determine the If you want to allow all users to use Seafile, you can use If you want to limit users to a certain OU (Organization Unit), you run AD supports In Seafile Pro, besides importing users into the internal database when they log in, you can also configure Seafile to periodically sync user information from the LDAP server into the internal database. The user's full name, department and contact email address can be synced to the internal database. Users can use this information to more easily search for a specific user. The user's Windows or Unix login id can be synced to the internal database. This allows the user to log in with their familiar login id. When a user is removed from LDAP, the corresponding user in Seafile will be deactivated. Otherwise, they could still sync files with a Seafile client or access the web interface. 
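As a hedged sketch of the basic LDAP options discussed above (the option names follow the seahub_settings.py style used since Seafile 11; every value is a placeholder for your own directory, so verify each against your manual version):

```python
# seahub_settings.py -- hedged sketch of a basic LDAP configuration.
# All server addresses, DNs and passwords below are placeholders.
ENABLE_LDAP = True
LDAP_SERVER_URL = 'ldap://ldap.example.com'
LDAP_BASE_DN = 'ou=users,dc=example,dc=com'
LDAP_ADMIN_DN = 'cn=admin,dc=example,dc=com'
LDAP_ADMIN_PASSWORD = 'secret'
LDAP_PROVIDER = 'ldap'
LDAP_LOGIN_ATTR = 'mail'            # the unique, user-friendly identifier
LDAP_USER_FIRST_NAME_ATTR = 'givenName'
LDAP_USER_LAST_NAME_ATTR = 'sn'
```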
After synchronization is complete, you can see the user's full name, department and contact email on their profile page. Add the following options to Meaning of some options: The users imported with the above configuration will be activated by default. Some organizations with a large number of users may want to import user information (such as user full names) without activating the imported users. Activating all imported users would require licenses for all users in LDAP, which may not be affordable. Seafile provides a combination of options for such a use case. You can modify the below option in This prevents Seafile from activating imported users. Then, add the below option to This option will automatically activate users when they log in to Seafile for the first time. When you set the However, sometimes it's desirable to auto-reactivate such users. You can modify the below option in To test your LDAP sync configuration, you can run the sync command manually. To trigger LDAP sync manually: For Seafile Docker The importing or syncing process maps groups from the LDAP directory server to groups in Seafile's internal database. This process is one-way. Any changes to groups in the database won't propagate back to LDAP; any changes to groups in the database, except for "setting a member as group admin", will be overwritten in the next LDAP sync operation. If you want to add or delete members, you can only do that on the LDAP server. The creator of imported groups will be set to the system admin. There are two modes of operation: Periodical: the syncing process will be executed at a fixed interval. Manual: there is a script you can run to trigger the syncing once. Before enabling LDAP group sync, you should have configured LDAP authentication. See Basic LDAP Integration for details. 
The following are LDAP group sync related options: Meaning of some options: Note: The search base for groups is the option Some LDAP servers, such as Active Directory, allow a group to be a member of another group. This is called "group nesting". If we find a nested group B in group A, we recursively add all the members of group B into group A. Group B is still imported as a separate group. That is, all members of group B are also members of group A. In some LDAP servers, such as OpenLDAP, it's common practice to use Posix groups to store group membership. To import Posix groups as Seafile groups, set A department in Seafile is a special group. In addition to what you can do with a group, there are two key new features for departments: Department supports hierarchy. A department can have any number of levels of sub-departments. Department can have storage quota. Seafile supports syncing OUs (Organizational Units) from AD/LDAP to departments. The sync process keeps the hierarchical structure of the OUs. Options for syncing departments from OUs: Periodical sync won't happen immediately after you restart the seafile server. It gets scheduled after the first sync interval. For example, if you set the sync interval to 30 minutes, the first auto sync will happen 30 minutes after you restart. To sync immediately, you need to trigger it manually. After the sync is run, you should see log messages like the following in logs/seafevents.log. And you should be able to see the groups in the system admin page. To trigger LDAP sync manually, For Seafile Docker Multiple base DNs are useful when your company has more than one OU using Seafile. You can specify a list of base DNs in the Search filter is very useful when you have a large organization but only a portion of people want to use Seafile.
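The group sync options discussed above might look like the following hedged sketch in seahub_settings.py. The option names are assumptions based on the Seafile Pro manual; double-check them for your version before enabling.

```python
# Hedged sketch of LDAP group sync options (Pro edition) in seahub_settings.py.
# Option names are assumptions; verify against your version's documentation.
ENABLE_LDAP_GROUP_SYNC = True          # enable periodic group sync
LDAP_GROUP_OBJECT_CLASS = 'group'      # objectClass used to search for groups
LDAP_GROUP_MEMBER_ATTR = 'member'      # attribute listing group members
LDAP_SYNC_GROUP_AS_DEPARTMENT = False  # set True to import OUs as departments
```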
The filter can be given by setting The final filter used for searching for users is For example, add the option below to The final search filter would be Note that the case of attribute names in the above example is significant. The You can use the First, you should find out the DN for the group. Again, we'll use the Add the option below to If your LDAP service supports TLS connections, you can configure LDAP protocol version 3 supports the "paged results" (PR) extension. When you have a large number of users, this option can greatly improve the performance of listing users. Most directory servers nowadays support this extension. In Seafile Pro Edition, add this option to Seafile Pro Edition supports auto following referrals in LDAP search. This is useful for partitioned LDAP or AD servers, where users may be spread across multiple directory servers. For more information about referrals, you can refer to this article. To configure, add the option below to Seafile Pro Edition supports multiple LDAP servers; you can configure two LDAP servers to work with Seafile. With multiple LDAP servers, when getting or searching for an LDAP user, Seafile iterates over all configured LDAP servers until a match is found; when listing all LDAP users, it iterates over all LDAP servers to collect all users; and LDAP sync will sync all user/group info from all configured LDAP servers into Seafile. Currently, only two LDAP servers are supported. If you want to use multiple LDAP servers, please replace Note: Some config options are still shared among all LDAP servers, as follows: If you sync users from LDAP to Seafile and want Seafile to find the existing account for a user who logs in via SSO (ADFS or OAuth), instead of creating a new one, you can set Note, here the UID means the unique user ID; in LDAP it is the attribute you use for Seafile Pro Edition supports syncing roles from LDAP or Active Directory. To enable this feature, add the option below to Note: You should only define one of the two functions.
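The role sync described above can be customized with your own mapping function written in Python. Below is a hypothetical sketch of such a function; the function name `ldap_role_mapping` and the `conf/seahub_custom_functions/__init__.py` location follow the pattern described in the manual, but treat both as assumptions to verify for your version.

```python
# Hypothetical sketch of a custom role-mapping function, e.g. placed in
# conf/seahub_custom_functions/__init__.py. The function name and location
# are assumptions; check your Seafile version's documentation.
def ldap_role_mapping(role):
    """Map a raw LDAP role attribute value to a Seafile role name."""
    mapping = {
        'staff': 'employee',
        'manager': 'employee',
        'student': 'guest',
    }
    # Fall back to the default role for unknown values.
    return mapping.get(role, 'default')
```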
You can rewrite the function (in Python) to make your own mapping rules. If the file or function doesn't exist, the first entry in role_list will be synced. For high availability, it is recommended to set up a memcached cluster and a MariaDB Galera cluster for a Seafile cluster. This documentation will provide information on how to do this with 3 servers. You can either use 3 dedicated servers or use the 3 Seafile server nodes. Seafile servers share session information within memcached. So when you set up a Seafile cluster, there needs to be a memcached server (cluster) running. The simplest way is to use a single-node memcached server. But when this server fails, some functions in the web UI of Seafile cannot work. So for HA, it's usually desirable to have more than one memcached server. We recommend setting up two independent memcached servers, in active/standby mode. A floating IP address (or Virtual IP address in some contexts) is assigned to the current active node. When the active node goes down, Keepalived will migrate the virtual IP to the standby node. So you actually use a single-node memcached, but use Keepalived (or other alternatives) to provide high availability. After installing memcached on each server, you need to make some modifications to the memcached config file. NOTE: Please configure memcached to start on system startup. Install and configure Keepalived. Modify the keepalived config file On active node On standby node NOTE: Please adjust the network device names accordingly. virtual_ipaddress is the floating IP address in use. A MariaDB cluster helps you to remove a single point of failure from the cluster architecture. Every update in the database cluster is synchronously replicated to all instances. You can choose between two different setups: We refer to the documentation from the MariaDB team: Seafile supports data migration between filesystem, S3, Ceph, Swift and Alibaba OSS.
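The Keepalived active/standby setup for memcached described above can be sketched as follows. The interface name, router id, priority and floating IP are placeholders to adapt to your environment.

```conf
# Hedged sketch of /etc/keepalived/keepalived.conf on the active node.
vrrp_instance VI_1 {
    state MASTER            # use BACKUP on the standby node
    interface eth0          # adjust the network device name
    virtual_router_id 51
    priority 100            # use a lower priority on the standby node
    advert_int 1
    virtual_ipaddress {
        192.168.1.100       # the floating IP fronting memcached
    }
}
```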
Data migration takes 3 steps: We need to add new backend configurations to this file (including If you want to migrate to a local file system, the seafile.conf temporary configuration example is as follows: Replace the configurations with your own values. If you have millions of objects in the storage (especially fs objects), it may take quite a long time to migrate all objects. More than half of the time is spent on checking whether an object exists in the destination storage. Since Pro edition 7.0.8, a feature has been added to speed up this check. Before running the migration script, please set this env variable: 3 files will be created: When you run the script for the first time, the object list file will be filled with existing objects in the destination. Then, when you run the script for the second time, it will load the existing object list from the file, instead of querying the destination. And newly migrated objects will also be added to the file. During migration, the migration process checks whether an object exists by checking the pre-loaded object list, instead of asking the destination, which will greatly speed up the migration process. It's suggested that you don't interrupt the script during the "fetch object list" stage when you run it for the first time. Otherwise the object list in the file will be incomplete. Another trick to speed up the migration is to increase the number of worker threads and the size of the task queue in the migration script. You can modify the The number of workers can be set to relatively large values, since they're mostly waiting for I/O operations to finish. If you have an encrypted storage backend (a deprecated feature no longer supported now), you can use this script to migrate and decrypt the data from that backend to a new one. You can add the This step will migrate most of the objects from the source storage to the destination storage. You don't need to stop the Seafile service at this stage, as it may take quite a long time to finish.
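The object-list optimization above boils down to exporting one environment variable before starting the migration script. A sketch, where the file path is a placeholder and the script invocation depends on your installation:

```shell
# Hedged sketch: cache the destination object list to speed up re-runs
# (Pro >= 7.0.8). The path is a placeholder.
export OBJECT_LIST_FILE_PATH=/opt/migrate-obj-list
# then run the migration script from seafile-server-latest, e.g.:
# ./migrate.sh /path/to/new/conf/dir
```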
Since the service is not stopped, some new objects may be added to the source storage during migration. Those objects will be handled in the next step. We assume you have installed seafile pro server under Please note that this script is completely reentrant. So you can stop and restart it, or run it many times. It will check whether an object exists in the destination before sending it. New objects added during the last migration step will be migrated in this step. To prevent new objects from being added, you have to stop the Seafile service during the final migration operation. This usually takes a short time. If you have a large number of objects, please follow the optimization instructions in the previous section. You just have to stop the Seafile and Seahub services, then run the migration script again. After running the script, we need to replace the original seafile.conf with the new one: now we only have configurations about the backend; more config options, e.g. memcached and quota, can then be copied from the original seafile.conf file. After replacing seafile.conf, you can restart the seafile server and access the data on the new backend. It's quite likely you have deployed the Seafile Community Server and want to switch to the Professional Server, or vice versa. But there are some restrictions: That means, if you are using Community Server version 9.0 and want to switch to the Professional Server 10.0, you must first upgrade to Community Server version 10.0, and then follow the guides below to switch to the Professional Server 10.0. (The last tiny version number in 10.0.x is not important.) The package poppler-utils is required for full text search of pdf files. On Ubuntu/Debian: We assume you already have deployed Seafile Community Server 10.0.0 under If the license you received is not named seafile-license.txt, rename it to seafile-license.txt. Then put the license file under the top level directory.
In our example, it is You should uncompress the tarball to the top level directory of your installation, in our example it is Now you have: You should notice the difference between the names of the Community Server and Professional Server. Take the 10.0.0 64bit version as an example: The migration script is going to do the following for you: Now you have: Using a memory cache is mandatory in Pro Edition. You may use Memcached or Redis as cache server. Use the following commands to install memcached and the corresponding libraries on your system: Add the following configuration to Redis is supported since version 11.0. First, install Redis with package installers in your OS. Then refer to Django's documentation about using Redis cache to add Redis configurations to Stop Seafile Professional Server if it's running Run the minor-upgrade script to fix symbolic links Start Seafile Community Server Starting from version 5.1, you can add institutions into Seafile and assign users to institutions. Each institution can have one or more administrators. This feature is to ease user administration when multiple organizations (universities) share a single Seafile instance. Unlike multi-tenancy, the users are not isolated. A user from one institution can share files with another institution. In or if After restarting Seafile, a system admin can add institutions by adding institution names in the admin panel. They can also click into an institution, which will list all users whose If you are using Shibboleth, you can map a Shibboleth attribute into an institution. For example, the following configuration maps the organization attribute to institution. The multi-tenancy feature is designed for hosting providers that want to host several customers in a single Seafile instance. You can create multiple organizations. Organizations are separated from each other. Users can't share libraries between organizations.
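Enabling the multi-tenancy mode described above is done in seahub_settings.py. A hedged sketch — these option names follow the Seafile manual, but verify them for your version:

```python
# Hedged sketch: enabling multi-tenancy in seahub_settings.py.
# Option names are assumptions; verify against your Seafile version.
CLOUD_MODE = True      # run Seafile in cloud/hosting mode
MULTI_TENANCY = True   # enable per-organization separation
```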
An organization can be created via system admin in "admin panel->organization->Add organization". Every organization has a URL prefix. This field is for future usage. When a user creates an organization, a URL like org1 will be automatically assigned. After creating an organization, the first user will become the admin of that organization. The organization admin can add other users. Note, the system admin can't add users. The system admin has to complete the following steps. First, install the xmlsec1 package: Second, prepare the SP (Seafile) certificate directory and SP certificates: Create sp certs dir The SP certificate can be generated by the openssl command, or you can apply to a certificate authority; it is up to you. For example, generate the SP certs using the following command: Note: The Finally, add the following configuration to seahub_settings.py and then restart Seafile: Note: If the xmlsec1 binary is not located in View where the xmlsec1 binary is located: Note: If certificates are not placed in Please refer to this document. There are some use cases where supporting multiple storage backends in the Seafile server is needed, such as: The library data in the Seafile server is spread across multiple storage backends in the unit of libraries. All the data in a library will be located in the same storage backend. The mapping from a library to its storage backend is stored in a database table. Different mapping policies can be chosen based on the use case. To use this feature, you need to: In the Seafile server, a storage backend is represented by the concept of "storage class". A storage class is defined by specifying the following information: commit, fs, and blocks can be stored in different storages. This provides the most flexible way to define storage classes.
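A storage-class definition of the kind described above might look like the following JSON sketch. The field names follow the multiple-storage-backend documentation, but treat ids, paths and credentials as placeholders and verify the schema for your Seafile version.

```json
[
    {
        "storage_id": "hot_storage",
        "name": "Hot Storage",
        "is_default": true,
        "commits": { "backend": "fs", "dir": "/storage/seafile/commits" },
        "fs":      { "backend": "fs", "dir": "/storage/seafile/fs" },
        "blocks":  { "backend": "s3", "bucket": "my-block-objects",
                     "key_id": "your-key-id", "key": "your-secret-key" }
    }
]
```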
As the Seafile server before version 6.3 doesn't support multiple storage classes, you have to explicitly enable this new feature and define storage classes with a different syntax than the storage backend definitions used before. First, you have to enable this feature in seafile.conf. You also need to add memory cache configurations to If installing Seafile as Docker containers, place the For example, if the configuration of the Then place the JSON file within any sub-directory of You also need to add memory cache configurations to The JSON file is an array of objects. Each object defines a storage class. The fields in the definition correspond to the information we need to specify for a storage class. Below is an example: As you may have seen, the If you use a file system as storage for Note: Currently file system, S3 and Swift backends are supported. Ceph/RADOS is also supported since version 7.0.14. Library mapping policies decide the storage class a library uses. Currently we provide 3 policies for 3 different use cases. The storage class of a library is decided on creation and stored in a database table. The storage class of a library won't change if the mapping policy is changed later. Before choosing your mapping policy, you need to enable the storage classes feature in seahub_settings.py: This policy lets the users choose which storage class to use when creating a new library. The users can select any storage class that's been defined in the JSON file. To use this policy, add the following options in seahub_settings.py: If you enable storage class support but don't explicitly set Due to storage cost or management considerations, sometimes a system admin wants to make different types of users use different storage backends (or classes). You can configure a user's storage classes based on their roles. A new option Here are the sample options in seahub_settings.py to use this policy: This policy maps libraries to storage classes based on their library IDs.
The ID of a library is a UUID. In this way, the data in the system can be evenly distributed among the storage classes. Note that this policy is not designed to be a complete distributed storage solution. It doesn't handle automatic migration of library data between storage classes. If you need to add more storage classes to the configuration, existing libraries will stay in their original storage classes. New libraries can be distributed among the new storage classes (backends). You still have to plan the total storage capacity of your system at the beginning. To use this policy, you first add the following options in seahub_settings.py: Then you can add the option Run the repo_id is optional; if not specified, all libraries will be migrated. Before running the migration script, you can set the For example: This will create three files in the specified path (/opt): Run the In Seafile Professional Server Version 4.4.0 (or above), you can use Microsoft Office Online Server (formerly named Office Web Apps) to preview documents online. Office Online Server provides the best preview for all Office format files. It also supports collaborative editing of Office files directly in the web browser. For organizations with a Microsoft Office Volume License, it's free to use Office Online Server. For more information about Office Online Server and how to deploy it, please refer to https://technet.microsoft.com/en-us/library/jj219455(v=office.16).aspx. Notice: Seafile only supports Office Online Server 2016 and above. To use Office Online Server for preview, please add the following config option to seahub_settings.py. Then restart After you click the document you specified in seahub_settings.py, you will see the new preview page. Understanding how the web app integration works is going to help you debug the problem. When a user visits a file page: Please check the Nginx log for Seahub (for step 3) and Office Online Server to see which step went wrong.
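The Office Online Server integration described above is configured in seahub_settings.py. A hedged sketch — the option names follow the Seafile Pro manual, and the discovery URL is a placeholder for your own Office Online Server:

```python
# Hedged sketch of Office Online Server options in seahub_settings.py.
# Option names are assumptions to verify; the URL is a placeholder.
ENABLE_OFFICE_WEB_APP = True
OFFICE_WEB_APP_BASE_URL = 'https://oos.example.com/hosting/discovery'
OFFICE_WEB_APP_FILE_EXTENSION = ('doc', 'docx', 'ppt', 'pptx', 'xls', 'xlsx')
# Optional: allow in-browser editing as well as preview for these types.
ENABLE_OFFICE_WEB_APP_EDIT = True
OFFICE_WEB_APP_EDIT_FILE_EXTENSION = ('docx', 'pptx', 'xlsx')
```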
You should make sure you have configured at least a few GB of paging files in your Windows system. Otherwise the IIS worker processes may die randomly when handling Office Online requests. You can add/edit roles and permissions for users. A role is just a group of users with some pre-defined permissions; you can toggle user roles in the user list page at the admin panel. The Since version 10.0, Since version 11.0.9 pro, Seafile comes with two built-in roles While a guest user can only read files/folders in the system, here are the permissions for a guest user: If you want to edit the permissions of built-in roles, e.g. letting default users invite guests or guest users view repos in the organization, you can add the following lines to A user who has In order to use this feature, in addition to granting After restarting, users who have Users can invite a guest user by providing his/her email address, and the system will email the invitation link to the user. Tip: If you want to block certain email addresses for the invitation, you can define a blacklist, e.g. After that, the email address "a@a.com", any email address ending with "@a-a-a.com" and any email address ending with "@foo.com" or "@bar.com" will not be allowed. If you want to add a new role and assign this role to some users, e.g. a new role In this document, we use the Microsoft Azure SAML single sign-on app and Microsoft on-premise ADFS to show how Seafile integrates SAML 2.0. Other SAML 2.0 providers should be similar. First, install the xmlsec1 package: Second, prepare the SP (Seafile) certificate directory and SP certificates: Create certs dir The SP certificate can be generated by the openssl command, or you can apply to a certificate authority; it is up to you.
For example, generate the SP certs using the following command: Note: The If you use the Microsoft Azure SAML app to achieve single sign-on, please follow the steps below: First, add the SAML single sign-on app and assign users, refer to: add an Azure AD SAML application, create and assign users. Second, set up the Identifier, Reply URL, and Sign on URL of the SAML app based on your service URL, refer to: enable single sign-on for saml app. The formats of the Identifier, Reply URL, and Sign on URL are: https://example.com/saml2/metadata/, https://example.com/saml2/acs/, https://example.com/, e.g.: Next, edit SAML attributes & claims. Keep the default attributes & claims of the SAML app unchanged; the uid attribute must be added, the mail and name attributes are optional, e.g.: Next, download the base64 format SAML app's certificate and rename it to idp.crt: and put it under the certs directory( Next, copy the metadata URL of the SAML app: and paste it into the Next, add Note: If the xmlsec1 binary is not located in View where the xmlsec1 binary is located: Note: If certificates are not placed in Finally, open the browser and enter the Seafile login page, click If you use Microsoft ADFS to achieve single sign-on, please follow the steps below: First, please make sure the following preparations are done: A Windows Server with ADFS installed. For configuring and installing ADFS you can see this article. A valid SSL certificate for the ADFS server, and here we use A valid SSL certificate for the Seafile server, and here we use Second, download the base64 format certificate and upload it: Navigate to the AD FS management window. In the left sidebar menu, navigate to Services > Certificates. Locate the Token-signing certificate. Right-click the certificate and select View Certificate. In the dialog box, select the Details tab. Click Copy to File. In the Certificate Export Wizard that opens, click Next. Select Base-64 encoded X.509 (.CER), then click Next. Name it idp.crt, then click Next.
Click Finish to complete the download. And then put it under the certs directory( Next, add the following configurations to seahub_settings.py and then restart Seafile: Next, add a relying party trust: Log into the ADFS server and open the ADFS management. Under Actions, click Add Relying Party Trust. On the Welcome page, choose Claims aware and click Start. Select Import data about the relying party published online or on a local network, type your metadata URL in Federation metadata address (host name or URL), and then click Next. Your metadata URL format is: On the Specify Display Name page, type a name in Display name, e.g. In the Choose an access control policy window, select Permit everyone, then click Next. Review your settings, then click Next. Click Close. Next, create claims rules: Open the ADFS management, click Relying Party Trusts. Right-click your trust, and then click Edit Claim Issuance Policy. On the Issuance Transform Rules tab click Add Rules. Click the Claim rule template dropdown menu and select Send LDAP Attributes as Claims, and then click Next. In the Claim rule name field, type the display name for this rule, such as Seafile Claim rule. Click the Attribute store dropdown menu and select Active Directory. In the LDAP Attribute column, click the dropdown menu and select User-Principal-Name. In the Outgoing Claim Type column, click the dropdown menu and select UPN. And then click Finish. Click Add Rule again. Click the Claim rule template dropdown menu and select Transform an Incoming Claim, and then click Next. In the Claim rule name field, type the display name for this rule, such as UPN to Name ID. Click the Incoming claim type dropdown menu and select UPN (it must match the Outgoing Claim Type in rule Click OK to add both new rules. Note: When creating claims rules, you can also select other LDAP Attributes, such as E-Mail-Addresses, depending on your ADFS service.
Finally, open the browser and enter the Seafile login page, click This feature is deprecated. We recommend you to use the encryption feature provided by the storage system. Since Seafile Professional Server 5.1.3, we support storage backend encryption functionality. When enabled, all seafile objects (commit, fs, block) will be encrypted with the AES-256 CBC algorithm before being written to the storage backend. Currently supported backends are: file system, Ceph, Swift and S3. Note that all objects will be encrypted with the same global key/iv pair. The key/iv pair has to be generated by the system admin and stored safely. If the key/iv pair is lost, the data cannot be recovered. Go to /seafile-server-latest, execute By default, the key/iv pair will be saved to a file named seaf-key.txt in the current directory. You can use the '-p' option to change the path. Add the following configuration to seafile.conf: Now the encryption feature should be working. If you have existing data in the Seafile server, you have to migrate/encrypt the existing data. You must stop the Seafile server before migrating the data. Create new configuration and data directories for the encrypted data. If you configured an S3/Swift/Ceph backend, edit /conf-enc/seafile.conf. You must use a different bucket/container/pool to store the encrypted data. Then add the following configuration to /conf-enc/seafile.conf Go to /seafile-server-latest, use the seaf-encrypt.sh script to migrate the data. Run If there are error messages after executing seaf-encrypt.sh, you can fix the problem and run the script again. Objects that have already been migrated will not be copied again. Go to , execute the following commands: Restart the Seafile Server. If everything works okay, you can remove the backup directories. Seafile Professional Edition SOFTWARE LICENSE AGREEMENT NOTICE: READ THE FOLLOWING TERMS AND CONDITIONS CAREFULLY BEFORE YOU DOWNLOAD, INSTALL OR USE Seafile Ltd.'S PROPRIETARY SOFTWARE.
BY INSTALLING OR USING THE SOFTWARE, YOU AGREE TO BE BOUND BY THE FOLLOWING TERMS AND CONDITIONS. IF YOU DO NOT AGREE TO THE FOLLOWING TERMS AND CONDITIONS, DO NOT INSTALL OR USE THE SOFTWARE. "Seafile Ltd." means Seafile Ltd. "You and Your" means the party licensing the Software hereunder. "Software" means the computer programs provided under the terms of this license by Seafile Ltd. together with any documentation provided therewith. The License granted for Software under this Agreement authorizes You on a non-exclusive basis to use the Software. The Software is licensed, not sold to You, and Seafile Ltd. reserves all rights not expressly granted to You in this Agreement. The License is personal to You and may not be assigned by You to any third party. Subject to the receipt by Seafile Ltd. of the applicable license fees, You have the right to use the Software as follows: The inclusion of source code with the License is explicitly not for your use to customize a solution or re-use in your own projects or products. The benefit of including the source code is for purposes of security auditing. You may modify the code only for emergency bug fixes that impact security or performance and only for use within your enterprise. You may not create or distribute derivative works based on the Software or any part thereof. If you need enhancements to the software features, you should suggest them to Seafile Ltd. for version improvements. You acknowledge that all copies of the Software in any form are the sole property of Seafile Ltd. You have no right, title or interest to any such Software or copies thereof except as provided in this Agreement. You hereby acknowledge and agree that the Software constitutes and contains valuable proprietary products and trade secrets of Seafile Ltd., embodying substantial creative efforts and confidential information, ideas, and expressions.
You agree to treat, and take precautions to ensure that your employees and other third parties treat, the Software as confidential in accordance with the confidentiality requirements herein. EXCEPT AS OTHERWISE SET FORTH IN THIS AGREEMENT THE SOFTWARE IS PROVIDED TO YOU \"AS IS\", AND Seafile Ltd. MAKES NO EXPRESS OR IMPLIED WARRANTIES WITH RESPECT TO ITS FUNCTIONALITY, CONDITION, PERFORMANCE, OPERABILITY OR USE. WITHOUT LIMITING THE FOREGOING, Seafile Ltd. DISCLAIMS ALL IMPLIED WARRANTIES INCLUDING, WITHOUT LIMITATION, ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR FREEDOM FROM INFRINGEMENT. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO THE ABOVE EXCLUSIONS MAY NOT APPLY TO YOU. THE LIMITED WARRANTY HEREIN GIVES YOU SPECIFIC LEGAL RIGHTS, AND YOU MAY ALSO HAVE OTHER RIGHTS THAT VARY FROM ONE JURISDICTION TO ANOTHER. YOU ACKNOWLEDGE AND AGREE THAT THE CONSIDERATION WHICH Seafile Ltd. IS CHARGING HEREUNDER DOES NOT INCLUDE ANY CONSIDERATION FOR ASSUMPTION BY Seafile Ltd. OF THE RISK OF YOUR CONSEQUENTIAL OR INCIDENTAL DAMAGES WHICH MAY ARISE IN CONNECTION WITH YOUR USE OF THE SOFTWARE. ACCORDINGLY, YOU AGREE THAT Seafile Ltd. SHALL NOT BE RESPONSIBLE TO YOU OR ANY THIRD PARTY FOR ANY LOSS-OF-PROFIT, LOST SAVINGS, INDIRECT, INCIDENTAL, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF A LICENSING OR USE OF THE SOFTWARE. You agree to defend, indemnify and hold Seafile Ltd. and its employees, agents, representatives and assigns harmless from and against any claims, proceedings, damages, injuries, liabilities, costs, attorney's fees relating to or arising out of Your use of the Software or any breach of this Agreement. Your license is effective until terminated. You may terminate it at any time by destroying the Software or returning all copies of the Software to Seafile Ltd.. 
Your license will terminate immediately without notice if You breach any of the terms and conditions of this Agreement, including non-payment or incomplete payment of the license fee. Upon termination of this Agreement for any reason: You will uninstall all copies of the Software; You will immediately cease and desist all use of the Software; and will destroy all copies of the software in your possession. Seafile Ltd. has the right, but no obligation, to periodically update the Software, at its complete discretion, without the consent or obligation to You or any licensee or user. YOU HEREBY ACKNOWLEDGE THAT YOU HAVE READ THIS AGREEMENT, UNDERSTAND IT AND AGREE TO BE BOUND BY ITS TERMS AND CONDITIONS. In a Seafile cluster, one common way to share data among the Seafile server instances is to use NFS. You should only share the file objects (located in How to set up an NFS server and client is beyond the scope of this wiki. Here are a few references: Suppose your seafile server installation directory is This way the instances will share the same To setup Seafile Professional Server with Amazon S3: The configuration options differ for different S3 storage. We'll describe the configurations in separate sections. AWS S3 is the original S3 storage provider. Edit You also need to add memory cache configurations. We'll explain the configurations below: For file search and webdav to work with the v4 signature mechanism, you need to add the following lines to ~/.boto Since Pro 11.0, you can use SSE-C with S3. Add the following options to seafile.conf: There are other S3-compatible cloud storage providers in the market, such as Backblaze and Wasabi. Configuration for those providers is just a bit different from AWS. We cannot guarantee that the following configuration works for all providers. If you have problems please contact our support. Edit You also need to add memory cache configurations.
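As a concrete reference for the AWS S3 setup described above, here is a hedged seafile.conf sketch. Bucket names, credentials and the region are placeholders, and the section/option names follow the Seafile object-storage documentation — double-check them for your version:

```ini
# Hedged sketch: AWS S3 backend sections in seafile.conf (placeholders only).
[commit_object_backend]
name = s3
bucket = my-commit-objects
key_id = your-aws-key-id
key = your-aws-secret-key
use_v4_signature = true
aws_region = eu-central-1

[fs_object_backend]
name = s3
bucket = my-fs-objects
key_id = your-aws-key-id
key = your-aws-secret-key
use_v4_signature = true
aws_region = eu-central-1

[block_backend]
name = s3
bucket = my-block-objects
key_id = your-aws-key-id
key = your-aws-secret-key
use_v4_signature = true
aws_region = eu-central-1
```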
We'll explain the configurations below: For file search and webdav to work with the v4 signature mechanism, you need to add the following lines to ~/.boto Many self-hosted object storage systems are now compatible with the S3 API, such as OpenStack Swift and Ceph's RADOS Gateway. You can use these S3-compatible storage systems as backends for Seafile. Here is an example config: You also need to add memory cache configurations. We'll explain the configurations below: Below are a few options that are not shown in the example configuration above: To use HTTPS connections to S3, add the following options to seafile.conf: Because the server package is built on CentOS 6, if you're using Debian/Ubuntu, you have to copy the system CA bundle to CentOS's CA bundle path. Otherwise Seafile can't find the CA bundle and the SSL connection will fail. Another important note is that you must not use '.' in your bucket names. Otherwise the wildcard certificate for AWS S3 cannot be resolved. This is a limitation of AWS. Now you can start Seafile by Ceph is a scalable distributed storage system. It's recommended to use Ceph's S3 Gateway (RGW) to integrate with Seafile. Seafile can also use Ceph's RADOS object storage layer as a storage backend. But using RADOS requires linking with the librados library, which may introduce library incompatibility issues during deployment. Furthermore, the S3 Gateway provides an easier-to-manage HTTP-based interface. If you want to integrate with the S3 gateway, please refer to the "Use S3-compatible Object Storage" section in this documentation. The documentation below is for integrating with RADOS. Seafile acts as a client to Ceph/RADOS, so it needs access to the ceph cluster's conf file and keyring. You have to copy these files from a ceph admin node's /etc/ceph directory to the seafile machine. For best performance, Seafile requires installing memcached or Redis and enabling caching for objects. We recommend allocating at least 128MB of memory for the object cache.
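The memory cache recommendation above is configured in seafile.conf. A hedged sketch for a local memcached instance — the section and option names follow the Seafile manual, and the server address and pool sizes are placeholders:

```ini
# Hedged sketch: object-cache configuration in seafile.conf for a local
# memcached instance. Adjust the server address and pool sizes as needed.
[memcached]
memcached_options = --SERVER=127.0.0.1 --POOL-MIN=10 --POOL-MAX=100
```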
File search and WebDAV functions rely on the Python Ceph library installed in the system. On Debian/Ubuntu (Seafile 7.1+): On Debian/Ubuntu (Seafile 7.0 or below): On RedHat/CentOS (Seafile 7.0 or below): Edit You also need to add memory cache configurations. It's required to create separate pools for commit, fs, and block objects. Since version 8.0, Seafile bundles librados from Ceph 16. On some systems you may find that Seafile fails to connect to your Ceph cluster. In such a case, you can usually solve it by removing the bundled librados libraries and using the one installed in the OS. To do this, you have to remove a few bundled libraries: The above configuration will use the default (client.admin) user to connect to Ceph. You may want to use some other Ceph user to connect. This is supported in Seafile. To specify the Ceph user, you have to add a You can create a Ceph user for Seafile on your Ceph cluster like this: You also have to add this user's keyring path to /etc/ceph/ceph.conf: To set up Seafile Professional Server with Alibaba OSS: Edit You also need to add memory cache configurations. It's required to create separate buckets for commit, fs, and block objects. For performance and to save network traffic costs, you should create the buckets within the region where the Seafile server is running. The key_id and key are required to authenticate you to OSS. You can find the key_id and key in the "security credentials" section of your OSS management page. The region is the region where the bucket you created is located, such as beijing, hangzhou, shenzhen, etc. Before version 6.0.9, Seafile only supported using OSS services in the classic network environment. The OSS service address in the VPC (Virtual Private Network) environment is different from the classic network, so you need to specify the OSS access address in the configuration. Version 6.0.9 added support for configuring the OSS access address, and thereby support for VPC OSS services.
Use the following configuration: Compared with the configuration under the classic network, the above configuration uses the You also need to add memory cache configurations. To use HTTPS connections to OSS, add the following options to seafile.conf: This backend uses the native Swift API. Previously, users could only use the S3-compatibility layer of Swift. That way is now obsolete. The old documentation is still available here. Since version 6.3, the OpenStack Swift v3.0 API is supported. To set up Seafile Professional Server with Swift: Edit You also need to add memory cache configurations. The above config is just an example. You should replace the options according to your own environment. Seafile supports Swift with Keystone as the authentication mechanism. The Seafile also supports Tempauth and Swauth since Professional Edition 6.2.1. The It's required to create separate containers for commit, fs, and block objects. Since Pro 5.0.4, you can use HTTPS connections to Swift. Add the following options to seafile.conf: Because the server package is built on CentOS 6, if you're using Debian/Ubuntu, you have to copy the system CA bundle to CentOS's CA bundle path. Otherwise Seafile can't find the CA bundle and the SSL connection will fail. Now you can start Seafile by Starting from version 6.0, the system admin can add Terms and Conditions in the admin panel, which all users must accept before using the site. In order to use this feature, please add the following line to After restarting, there will be a "Terms and Conditions" section in the sidebar of the admin panel. Seafile can scan uploaded files for malicious content in the background. When configured to run periodically, the scan process scans all existing libraries on the server. In each scan, the process only scans files newly uploaded or updated since the last scan. For each file, the process executes a user-specified virus scan command to check whether the file is a virus or not. Most anti-virus programs provide a command line utility for Linux.
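As a preview of the configuration described next, a minimal virus-scan section for ClamAV might look like the sketch below. The option names follow the Seafile Pro documentation; verify them against your version, and note that the exit codes shown are the ones clamscan conventionally uses.

```ini
; Sketch: background virus scanning with ClamAV in seafevents.conf.
[virus_scan]
scan_command = clamscan
virus_code = 1        ; exit code the scanner returns when a virus is found
nonvirus_code = 0     ; exit code the scanner returns for a clean file
scan_interval = 60    ; minutes between background scans
```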
To enable this feature, add the following options to More details about the options: An example for ClamAV (http://www.clamav.net/) is provided below: To test whether your configuration works, you can trigger a scan manually: If a virus is detected, you can see the scan records and delete infected files on the Virus Scan page in the admin area. INFO: If you directly use the clamav command line tool, scanning files takes a lot of time. If you want to speed it up, we recommend running ClamAV as a daemon. Please refer to Run ClamAV as a Daemon When running ClamAV as a daemon, the Since Pro Edition 6.0.0, a few more options have been added to provide finer-grained control over virus scanning. The file extensions should start with '.'. The extensions are case insensitive. By default, files with the following extensions will be ignored: The list you provide overrides the default list. You may also configure Seafile to scan files for viruses when they are uploaded. This only works for files uploaded via the web interface or web APIs. Files uploaded with the syncing or SeaDrive clients cannot be scanned on upload due to performance considerations. You may scan files uploaded from shared upload links by adding the option below to Since Pro Edition 11.0.7, you may scan all files uploaded via web APIs by adding the option below to Assume you have installed Kaspersky Anti-Virus for Linux File Server on the Seafile server machine. If the user that runs Seafile server is not root, it should have sudo privileges so it can run kav4fs-control without entering a password. Add the following content to /etc/sudoers: As the return code of kav4fs does not reflect the file scan result, we use a shell wrapper script that parses the scan output and returns different exit codes to reflect the result.
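The wrapper idea can be sketched as a small shell function that classifies the scanner's summary output and maps it to exit codes Seafile understands. Note this is an illustration: the "Threats found:" line is a hypothetical output format, and the kav4fs-control path in the comment is an assumption; adapt both to what your installation actually prints.

```shell
#!/bin/sh
# Return 1 if the scan summary reports one or more threats, 0 otherwise.
# The "Threats found:" line is an assumed output format, not kav4fs verbatim.
classify() {
    if echo "$1" | grep -q 'Threats found:[[:space:]]*[1-9]'; then
        return 1    # virus found
    fi
    return 0        # clean
}

# A real wrapper would run the scanner and classify its output, e.g.:
#   OUTPUT=$(sudo /opt/kaspersky/kav4fs/bin/kav4fs-control --scan-file "$1")
#   classify "$OUTPUT"
#   exit $?
```

The point of the function is that the exit code is derived from the parsed summary, not from the scanner's own (uninformative) return code.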
Save the following content to a file such as Grant execute permissions for the script (make sure it is owned by the user Seafile is running as): The meaning of the script's return codes: Add the following content to Build Seafile Seafile Open API Seafile Implement Details Seafile internally uses a data model similar to Git's. It consists of Seafile's high performance comes from the architectural design: it stores file data and metadata in object storage (or the file system), while keeping only a small amount of metadata about the libraries in the relational database. An overview of the architecture is depicted below. We'll describe the data model in more detail. A repo is also called a library. Every repo has a unique ID (UUID), and attributes like description, creator, and password. The metadata for a repo is stored in There are a few tables in the Commit objects save the change history of a repo. Each update from the web interface, and each sync upload operation, creates a new commit object. A commit object contains the following information: commit ID, library name, creator of this commit (a.k.a. the modifier), creation time of this commit (a.k.a. modification time), root fs object ID, parent commit ID. The root fs object ID points to the root FS object, from which we can traverse a file system snapshot of the repo. The parent commit ID points to the last commit previous to the current commit. The If you use the file system as storage backend, commit objects are stored in the path There are two types of FS objects, The The FS object IDs are calculated based on the contents of the object. That means if a folder or a file is not changed, the same objects will be reused across multiple commits. This allows us to create snapshots very efficiently. If you use the file system as storage backend, fs objects are stored in the path A file is further divided into blocks with variable lengths. We use the Content Defined Chunking algorithm to divide a file into blocks.
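The chunking-plus-content-addressing idea can be sketched in Python. This is an illustration only: the window size, average chunk size, and rolling hash below are toy parameters chosen for a small demo, not Seafile's actual algorithm or block size (Seafile's blocks average around 8MB).

```python
import hashlib

def cdc_chunks(data: bytes, window=16, mask=0x3FF, min_size=64, max_size=4096):
    """Split data into variable-length chunks. A boundary is declared when a
    rolling sum over the last `window` bytes hits a bit pattern (probability
    ~1/(mask+1) per byte), so boundaries depend on content, not offsets."""
    chunks, start, rolling = [], 0, 0
    for i, byte in enumerate(data):
        rolling += byte
        size = i - start + 1
        if size > window:
            rolling -= data[i - window]   # drop the byte leaving the window
        if size >= min_size and ((rolling & mask) == mask or size >= max_size):
            chunks.append(data[start:i + 1])
            start, rolling = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])       # trailing remainder
    return chunks

def block_id(block: bytes) -> str:
    """Content-addressed block ID: identical blocks get identical IDs,
    which is what makes deduplication across file versions possible."""
    return hashlib.sha1(block).hexdigest()
```

Because a chunk's ID is a function of its content alone, a small edit to a large file only changes the IDs of the chunks it touches; once the chunker resynchronizes, the remaining blocks are reused, as described above.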
A clear overview of this algorithm can be found at http://pdos.csail.mit.edu/papers/lbfs:sosp01/lbfs.pdf. On average, a block's size is around 8MB. This mechanism makes it possible to deduplicate data between different versions of frequently updated files, improving storage efficiency. It also enables transferring data to/from multiple servers in parallel. If you use the file system as storage backend, block objects are stored in the path A "virtual repo" is a special repo that is created in the cases below: A virtual repo can be understood as a view of part of the data in its parent library. For example, when sharing a folder, the virtual repo only provides access to the shared folder in that library. A virtual repo uses the same underlying data as the parent library, so virtual repos use the same A virtual repo has its own change history, so it has separate There is a 1. Locate the translation files in the seafile-server-latest/seahub directory: For example, if you want to improve the Russian translation, find the corresponding strings to be edited in either of the following three files: If there is no translation for your language, create a new folder matching your language code and copy-paste the contents of another language folder into your newly created one. (Don't copy from the 'en' folder because the files therein do not contain the strings to be translated.) 2. Edit the files using a UTF-8 editor. 3. Save your changes. 4. (Only necessary when you created a new language code folder) Add a new entry for your language to the language block in the 5. (Only necessary when you edited either django.po or djangojs.po) Apply the changes made in django.po and djangojs.po by running the following two commands in Note: msgfmt is included in the gettext package. Additionally, run the following two commands in the seafile-server-latest directory: 6.
Restart Seahub to load the changes made in django.po and djangojs.po; reload the Markdown editor to check your modifications in the seafile-editor.json file. Please submit translations via Transifex: https://www.transifex.com/projects/p/seahub/ Steps: Steps: Modify

```python
STATICFILES_DIRS = (
    # Put strings here, like "/home/html/static" or "C:/www/django/static".
    # Always use forward slashes, even on Windows.
    # Don't forget to use absolute paths, not relative paths.
    '%s/static' % PROJECT_ROOT,
)
```

Execute the command Restore

```python
STATICFILES_DIRS = (
    # Put strings here, like "/home/html/static" or "C:/www/django/static".
    # Always use forward slashes, even on Windows.
    # Don't forget to use absolute paths, not relative paths.
    '%s/static' % PROJECT_ROOT,
    '%s/frontend/build' % PROJECT_ROOT,
)
```

Restart Seahub This issue has been fixed since version 11.0 The API document can be accessed in the following location: The Admin API document can be accessed in the following location: The following assumptions and conventions are used in the rest of this document: Use the official installation guide for your OS to install Docker. From Seafile Docker 12.0, we recommend that you use NOTE: Different versions of Seafile have different compose files. The following fields merit particular attention: NOTE: SSL is now handled by the caddy server from 12.0. Start Seafile server with the following command Wait a few minutes for the first-time initialization, then visit Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container; in our case we keep various log files and the upload directory outside. This allows you to rebuild containers easily without losing important information.
To monitor container logs (from outside of the container), please use the following commands: The Seafile logs are under The system logs are under To monitor all Seafile logs simultaneously (from outside of the container), run If you want to use an existing mysql-server, you can modify the NOTE: The config files are under After modification, you need to restart the container: Ensure the container is running, then enter this command: Enter the username and password according to the prompts. You now have a new admin account. You can run Seafile as a non-root user in Docker. (NOTE: Programs such as First add the Then modify Then destroy the containers and run them again: Now you can run Seafile as Follow the instructions in Backup and restore for Seafile Docker When files are deleted, the blocks comprising those files are not immediately removed, as there may be other files that reference those blocks (due to the magic of deduplication). To remove them, Seafile requires a 'garbage collection' process to be run, which detects which blocks are no longer used and purges them. (NOTE: for technical reasons, the GC process does not guarantee that every single orphan block will be deleted.) The required scripts can be found in the From Seafile 12.0, SSL is handled by Caddy. Caddy is a modern open source web server that mainly routes external traffic to the internal services in Seafile Docker. The default caddy image is The recommended steps to migrate from a non-docker deployment to a docker deployment are: The following document assumes that the deployment path of your non-Docker version of Seafile is /opt/seafile. If you use another path, adjust the paths in the commands accordingly before running them. Note, you can also refer to the Seafile backup and recovery documentation, deploy Seafile Docker on another machine, and then copy the old configuration, database, and seafile-data to the new machine to complete the migration.
The advantage of this is that even if an error occurs during the migration process, the existing system will not be destroyed. Stop the locally deployed Seafile, Nginx and Memcached The non-Docker version uses the local MySQL. If the Docker version of Seafile is to connect to this MySQL, you need to grant the corresponding access permissions. The following commands assume that you use Copy the original config files to the directory to be mapped by the docker version of Seafile Modify the MySQL configuration in Modify the memcached configuration in Download docker-compose.yml to There are two ways to let Seafile Docker use the old seafile-data You can copy or move the old seafile-data folder ( You can mount the old seafile-data folder ( The added line Start Seafile docker and check if everything is okay: From inside a docker container it is not possible to connect to the host database via localhost, but via The following iptables commands protect MariaDB/MySQL: Keep in mind this is not boot-safe! For Debian-based Linux distros you can bring up a local IP by adding it in For SUSE-based distros, edit If using MariaDB, the server can only bind to one IP address (192.158.1.38 or 0.0.0.0 (internet)). So if you bind your MariaDB server to that new address, other applications might need some reconfiguration. Then edit the Host line in ccnet.conf, seafile.conf and seahub_settings.py under /opt/seafile-data/seafile/conf/ to that IP and execute the following commands: You can use one of the following methods to start the Seafile container on system bootup. Note: Add configuration Note: Add A Seafile docker-based installation consists of the following components (docker images): Seafile Docker cluster deployment requires "sticky session" settings in the load balancer. Otherwise, folder downloads in the web UI sometimes won't work properly. Read the Load Balancer Setting for details.
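For example, if Nginx is the load balancer, `ip_hash` is a simple way to get sticky sessions. This is a sketch: the upstream name and node addresses are placeholders for your frontend nodes.

```
upstream seafile_cluster {
    ip_hash;                    # pin each client IP to one frontend node
    server 192.168.0.137:80;    # frontend node 1 (placeholder address)
    server 192.168.0.138:80;    # frontend node 2 (placeholder address)
}

server {
    listen 80;
    location / {
        proxy_pass http://seafile_cluster;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Other balancers (HAProxy, AWS ELB) have their own stickiness settings; the requirement is only that one client's requests keep hitting the same node.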
System: Ubuntu 20.04 docker-compose: 1.25.0 Seafile Server: 2 frontend nodes, 1 backend node We assume you have already deployed memcache, MariaDB, ElasticSearch in separate machines and use S3 like object storage. Create the three databases ccnet_db, seafile_db, and seahub_db required by Seafile on MariaDB/MySQL, and authorize the `seafile` user to be able to access these three databases: You also need to create a table in `seahub_db` Create the mount directory Create the docker-compose.yml file Note: CLUSTER_SERVER=true means seafile cluster mode, CLUSTER_MODE=frontend means this node is seafile frontend server. Start the seafile docker container 1. Manually generate configuration files 2. Modify the mysql configuration options (user, host, password) in configuration files such as ccnet.conf, seafevents.conf, seafile.conf and seahub_settings.py. 3. Modify the memcached configuration option in seahub_settings.py 4. Modify the [INDEX FILES] configuration options in seafevents.conf 5. Add some configurations in seahub_settings.py 6. Add cluster special configuration in seafile.conf 7. Add memory cache configuration in seafile.conf Enter the container, and then execute the following commands to import tables Start Seafile service When you start it for the first time, seafile will guide you to set up an admin user. When deploying the second frontend node, you can directly copy all the directories generated by the first frontend node, including the docker-compose.yml file and modified configuration files, and then start the seafile docker container. Create the mount directory Create the docker-compose.yml file Note: CLUSTER_SERVER=true means seafile cluster mode, CLUSTER_MODE=backend means this node is seafile backend server. Start the seafile docker container Copy configuration files of the frontend node, and then start Seafile server of the backend node Modify the seafile.conf file on each node to configure S3 storage. 
vim seafile.conf Execute the following commands on the two Seafile frontend servers: Note: Correctly modify the IP addresses (Front-End01-IP and Front-End02-IP) of the frontend servers in the above configuration file. Choose one of the above two servers as the master node, and the other as the slave node. Perform the following operations on the master node: Note: Correctly configure the virtual IP address and network interface device name in the above file. Perform the following operations on the standby node: Finally, run the following commands on the two Seafile frontend servers to start the corresponding services: At this point, the Seafile cluster has been deployed. The following section needs to be added to docker-compose.yml in the services section Add this to seafile.nginx.conf Add this to seahub_settings.py Wait a few minutes until OnlyOffice has finished initializing. Now OnlyOffice can be used. This manual explains how to deploy and run Seafile Server Professional Edition (Seafile PE) on a Linux server using Docker and Docker Compose. The deployment has been tested on Debian/Ubuntu and CentOS, but Seafile PE should also work on other Linux distributions. Seafile PE requires a minimum of 2 cores and 2GB RAM. If Elasticsearch is installed on the same server, the minimum requirements are 4 cores and 4GB RAM, and make sure the mmap counts do not cause exceptions like out-of-memory errors; the limit can be increased with the following command (see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html for further details): or modify /etc/sysctl.conf and reboot to set this value permanently: Seafile PE can be used without a paid license with up to three users. Licenses for more users can be purchased in the Seafile Customer Center, or contact Seafile Sales at sales@seafile.com. The following assumptions and conventions are used in the rest of this document: Use the official installation guide for your OS to install Docker.
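The mmap limit mentioned above for Elasticsearch is typically raised as follows; 262144 is the value recommended in the Elasticsearch documentation, and both commands require root:

```
# Raise the limit immediately (lost on reboot):
sysctl -w vm.max_map_count=262144

# Persist it across reboots:
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
```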
Log into Seafile's private repository and pull the Seafile image: When prompted, enter the username and password of the private repository. They are available on the download page in the Customer Center. NOTE: Older Seafile PE versions are also available in the repository (back to Seafile 7.0). To pull an older version, replace the '12.0-latest' tag with the desired version. From Seafile Docker 12.0, we recommend that you use NOTE: Different versions of Seafile have different compose files. The following fields merit particular attention: NOTE: SSL is now handled by the caddy server from 12.0. To conclude, set the directory permissions of the Elasticsearch volume: Run docker compose in detached mode: NOTE: You must run the above command in the directory with the Wait a few moments for the database to initialize. You can now access Seafile at the host name specified in the Compose file. (A 502 Bad Gateway error means that the system has not yet completed the initialization.) To view Seafile docker logs, please use the following command The Seafile logs are under The system logs are under If you have a Then restart Seafile: Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container; in our case we keep various log files and the upload directory outside. This allows you to rebuild containers easily without losing important information. The command The directory layout of the Seafile container's volume should look as follows: All Seafile config files are stored in Any modification of a configuration file requires a restart of Seafile to take effect: All Seafile log files are stored in If you want to use an existing mysql-server, you can modify the NOTE: You can run Seafile as a non-root user in Docker.
(NOTE: Programs such as First add the Then modify Then destroy the containers and run them again: Now you can run Seafile as Follow the instructions in Backup and restore for Seafile Docker When files are deleted, the blocks comprising those files are not immediately removed, as there may be other files that reference those blocks (due to the magic of deduplication). To remove them, Seafile requires a 'garbage collection' process to be run, which detects which blocks are no longer used and purges them. (NOTE: for technical reasons, the GC process does not guarantee that every single orphan block will be deleted.) The required scripts can be found in the You need to manually add the OnlyOffice config to You need to manually add the Clamav config to Q: I forgot the Seafile admin email address/password, how do I create a new admin account? A: You can create a new admin account by running The Seafile service must be up when running the superuser command. Q: If, for whatever reason, the installation fails, how do I start from a clean slate again? A: Remove the directories /opt/seafile, /opt/seafile-data, /opt/seafile-elasticsearch, and /opt/seafile-mysql and start again. Q: Something went wrong during the start of the containers. How can I find out more? A: You can view the docker logs using this command: Q: I forgot the admin password. How do I create a new admin account? A: Make sure the seafile container is running and enter NOTE: Different versions of Seafile have different compose files. To ensure data security, it is recommended that you back up your MySQL data. Copy the Replace the old The Seafile Pro container needs to be running during the migration process, which means that end users may be able to access the Seafile service during this process. To avoid the data inconsistency this could cause, it is recommended that you take the necessary measures to temporarily prohibit users from accessing the Seafile service. For example, modify the firewall policy.
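As an example of such a temporary firewall measure, one might block the web ports for the duration of the migration. This is a sketch (run as root); adjust the ports to your setup, and note the rules do not survive a reboot:

```
# Temporarily block web access to Seafile during the migration:
iptables -I INPUT -p tcp --dport 80 -j DROP
iptables -I INPUT -p tcp --dport 443 -j DROP

# After the migration has finished, remove the rules again:
iptables -D INPUT -p tcp --dport 80 -j DROP
iptables -D INPUT -p tcp --dport 443 -j DROP
```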
Run the following command to run the Seafile-Pro container: Then run the migration script by executing the following command: After the migration script runs successfully, modify Restart the Seafile Pro container. Now you have a Seafile Professional service. The Seafile WebDAV and FUSE extensions make it easy for Seafile to work with third party applications. For example, you can use the Documents app on iOS to access files in Seafile via the WebDAV interface. Files in the Seafile system are split into blocks, which means that what is stored on your Seafile server is not complete files, but blocks. This design facilitates effective data deduplication. However, administrators sometimes want to access the files directly on the server. You can use seaf-fuse to do this. Note: Assume we want to mount to Note: Before starting seaf-fuse, you should have started the Seafile server with seaf-fuse supports the standard mount options for FUSE. For example, you can specify ownership for the mounted folder: By default, seaf-fuse enables block caching to cache block objects, thereby reducing access to backend storage, but this feature occupies local disk space. Since Seafile-pro-10.0.0, you can disable the block cache by adding the following options: You can find the complete list of supported options in Now you can list the content of From the above list you can see that under the folder of a user there are subfolders, each of which represents a library of that user and has a name of this format: '''{library_id}-{library-name}'''. If you get an error message saying "Permission denied" when running Assume we want to mount to Add the following content Start the Seafile server and enter the container Start seaf-fuse in the container In the document below, we assume your Seafile installation folder is The configuration file is Every time the configuration is modified, you need to restart the Seafile server for it to take effect.
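For reference, a typical SeafDAV configuration looks like the sketch below. The option names follow the Seafile documentation; verify them against your version, and the port and share name shown are the common defaults.

```ini
; Sketch: WebDAV settings in seafdav.conf.
[WEBDAV]
enabled = true
port = 8080            ; port SeafDAV listens on
share_name = /seafdav  ; URL path under which WebDAV is served
```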
Your WebDAV client would visit the Seafile WebDAV server at In Pro Edition 7.1.8 and Community Edition 7.1.5, an option was added to append the library ID to the library name returned by SeafDAV. For SeafDAV, the configuration of Nginx is as follows: For SeafDAV, the configuration of Apache is as follows: Please note first that there are some known performance limitations when you map a Seafile WebDAV server as a local file system (or network drive). So WebDAV is more suitable for infrequent file access. If you want better performance, please use the sync client instead. Windows Explorer supports HTTPS connections, but it requires a valid certificate on the server. It's generally recommended to use Windows Explorer to map a WebDAV server as a network drive. If you use a self-signed certificate, you have to add the certificate's CA to Windows' system CA store. On Linux you have more choices. You can use a file manager such as Nautilus to connect to the WebDAV server, or you can use davfs2 from the command line. To use davfs2 The -o option sets the owner of the mounted directory to so that it's writable for non-root users. It's recommended to disable the LOCK operation for davfs2. You have to edit /etc/davfs2/davfs2.conf Finder's support for WebDAV is also slow and not very stable, so it is recommended to use WebDAV client software such as Cyberduck. By default, SeafDAV is disabled. Check whether you have If you deploy SeafDAV behind Nginx/Apache, make sure to change the value of First, check the If you have enabled debug, there will also be the following log. This issue usually occurs when you have configured HTTPS, but the request was forwarded, resulting in the You can solve this by manually changing the value of to This happens when you map WebDAV as a network drive and try to copy a file larger than about 50MB from the network drive to a local folder. This is because Windows Explorer limits the size of files downloaded from a WebDAV server.
To raise this limit, change a registry entry on the client machine. There is a registry key named SeaDoc is an extension of Seafile that provides an online collaborative document editor. SeaDoc is designed around the following key ideas: SeaDoc excels at: The SeaDoc architecture is shown below: Here is the workflow when a user opens an sdoc file in the browser Seafile version 11.0 or later is required to work with SeaDoc. SeaDoc has the following deployment methods: Download the docker-compose.yml sample file to your host. Then modify the file according to your environment. The following fields need to be modified: SeaDoc and Seafile share the MySQL service. Create the database sdoc_db in Seafile's MySQL and authorize the user. Note, SeaDoc will only create one database table to store operation logs. Then follow the section: Start SeaDoc. Modify SeaDoc and Seafile share the MySQL service. Create the database sdoc_db in Seafile's MySQL. Note, SeaDoc will only create one database table to store operation logs. Start the SeaDoc server with the following command Now you can use SeaDoc! Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container; in our case we keep various log files and the upload directory outside. This allows you to rebuild containers easily without losing important information. From Seafile 12.0, SSL is handled by Caddy. Caddy is a modern open source web server that mainly routes external traffic to the internal services in Seafile Docker. The default caddy image is As the system admin, you can enter the admin panel by clicking Backup and recovery: Recover corrupt files after server hard shutdown or system crash: You can run Seafile GC to remove unused files: When you set up the Seahub website, you should have set up an admin account. After you log in as an admin, you may add/delete users and file libraries.
Since version 11.0, if you need to change a user's external ID, you can manually modify the database table For versions below 11.0, if you really want to change a user's ID, you should create a new user and then use this admin API to migrate the data from the old user to the new user: https://download.seafile.com/published/web-api/v2.1-admin/accounts.md#user-content-Migrate%20Account. Administrators can reset a user's password on the "System Admin" page. In a private server, the default settings don't allow users to reset their password by email. If you want to enable this, you first have to set up notification email. You may run Under the seafile-server-latest directory, run There are generally two parts of data to back up If you set up the Seafile server according to our manual, you should have a directory layout like: All your library data is stored under the '/opt/seafile' directory. Seafile also stores some important metadata in a few databases. The names and locations of these databases depend on which database software you use. For SQLite, the database files are also under the '/opt/seafile' directory. The locations are: For MySQL, the databases are created by the administrator, so the names can differ from one deployment to another. There are 3 databases: The backup is a three-step procedure: The second sequence is better in the sense that it avoids library corruption. Like other backup solutions, some new data can be lost in recovery. There is always a backup window. However, if your storage backup mechanism can finish quickly enough, using the first sequence can retain more data. We assume your seafile data directory is in It's recommended to back up the database to a separate file each time. Don't overwrite older database backups for at least a week. MySQL Assume your database names are SQLite You need to stop the Seafile server before backing up an SQLite database.
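A MySQL backup run along these lines might look like the following sketch. The database names are the defaults used in this manual; the backup directory and credentials are placeholders, and `mysqldump -p` will prompt interactively for the password.

```
backup_dir=/backup/databases          # placeholder backup location
timestamp=$(date +"%Y-%m-%d-%H-%M-%S")

# Dump each database to its own timestamped file, so older backups
# are kept alongside new ones rather than overwritten.
for db in ccnet_db seafile_db seahub_db; do
    mysqldump -u root -p --opt "$db" > "$backup_dir/$db.$timestamp.sql"
done
```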
The data files are all stored in the To directly copy the whole data directory, This produces a separate copy of the data directory each time. You can delete older backup copies after a new one is completed. If you have a lot of data, copying the whole data directory would take a long time. You can use rsync to do incremental backups. This command backs up the data directory to Now suppose your primary Seafile server is broken and you're switching to a new machine. Use the backup data to restore your Seafile instance: Now, with the latest valid database backup files at hand, you can restore them. MySQL SQLite We assume your seafile volumes path is in The data files to be backed up: Use the following command to clear expired session records in the Seahub database: Use the following command to clear the activity records: The corresponding items in UserActivity will be deleted automatically by MariaDB when the foreign keys in the Activity table are deleted. Use the following command to clean the login records: Use the following command to clean the file access records: Use the following command to clean the file update records: Use the following command to clean the permission change audit records: Use the following command to clean the file history records: Use the following command to simultaneously clean up records older than 90 days in the tables Activity, sysadmin_extra_userloginlog, FileAudit, FileUpdate, FileHistory, PermAudit and FileTrash: Since version 6.2, we offer a command to clear outdated library records in the Seahub database, e.g. records that are not deleted after a library is deleted. This is because users can restore a deleted library, so we can't delete these records at library deletion time. This command has been improved in version 10.0, including: It clears the invalid data in small batches, avoiding consuming too much database resource in a short time.
Dry-run mode: if you just want to see how much invalid data can be deleted without actually deleting any data, you can use the dry-run option, e.g. There are two tables in the Seafile database that are related to library sync tokens. When you have many sync clients connected to the server, these two tables can have a large number of rows. Many of them are no longer actively used. You may clean the tokens that have not been used recently with the following SQL query: xxxx is the UNIX timestamp for the time before which tokens will be deleted. To be safe, you can first check how many tokens will be removed: Since version 7.0.8 pro, we offer a command to export the file access log. Since version 7.0.8 pro, Seafile provides commands to export reports via the command line. Since version 7.0.8 pro, we offer a command to export the user storage report. On the server side, Seafile stores the files in the libraries in an internal format. Seafile has its own representation of directories and files (similar to Git). With the default installation, these internal objects are stored in the server's file system directly (such as Ext4, NTFS). But most file systems don't assure the integrity of file contents after a hard shutdown or system crash. So if new Seafile internal objects are being written when the system crashes, they can be corrupted after the system reboots. This will make part of the corresponding library inaccessible. Note: If you store the seafile-data directory in a battery-backed NAS (like EMC or NetApp), or use the S3 backend available in the Pro edition, the internal objects won't be corrupted. We provide a seaf-fsck.sh script to check the integrity of libraries. The seaf-fsck tool accepts the following arguments: There are three modes of operation for seaf-fsck: Running seaf-fsck.sh without any arguments will run a read-only integrity check for all libraries. 
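The token-cleanup query described earlier can be sketched like this. The table and column names (RepoUserToken, RepoTokenPeerInfo, sync_time) are assumptions about the seafile_db schema; verify them against your version, and run the counting SELECT variant before deleting anything. The script only prints the SQL, substituting a computed UNIX timestamp for the "xxxx" placeholder; pipe the output into `mysql seafile_db` once you are satisfied.

```shell
# Sketch: generate SQL that removes sync tokens unused for 90 days.
# Table/column names are assumptions -- check your schema first.
CUTOFF=$(date -d '90 days ago' +%s)   # the "xxxx" UNIX timestamp
cat <<SQL
DELETE t, i FROM RepoUserToken t, RepoTokenPeerInfo i
WHERE t.token = i.token AND i.sync_time < $CUTOFF;
SQL
```

Replacing DELETE with a SELECT COUNT(*) over the same join and condition gives the safety check mentioned above.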
If you want to check integrity for specific libraries, just append the library IDs as arguments: The output looks like: The corrupted files and directories are reported. Sometimes you can see output like the following: This means the "head commit" (current state of the library) recorded in the database is not consistent with the library data. In such cases, fsck will try to find the last consistent state and check the integrity in that state. Tips: If you have many libraries, it's helpful to save the fsck output into a log file for later analysis. Corruption repair in seaf-fsck basically works in two steps: Running the following command repairs all the libraries: Most of the time you run the read-only integrity check first to find out which libraries are corrupted, and then you repair specific libraries with the following command: After repairing, seaf-fsck records the list of corrupted files and folders in the library history, so it's much easier to locate corrupted paths. To check all libraries and find out which library is corrupted, the system admin can run seaf-fsck.sh without any argument and save the output to a log file. Search for the keyword "Fail" in the log file to locate corrupted libraries. You can run seaf-fsck to check all libraries while your Seafile server is running. It won't damage or change any files. When the system admin finds a library is corrupted, he/she should run seaf-fsck.sh with "--repair" for the library. After the command fixes the library, the admin should inform the user to recover files from other places. There are two ways: Starting from Pro edition 7.1.5, an option is added to speed up FSCK. Most of the running time of seaf-fsck is spent on calculating hashes for file contents. This hash is compared with the block object ID; if they're not consistent, the block is detected as corrupted. In many cases, though, the file contents are not corrupted at all. Some objects are just missing from the system. 
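Putting the workflow above together as a command sketch (the install path follows this manual's layout, and the library ID is a placeholder):

```shell
cd /opt/seafile/seafile-server-latest

# 1. Read-only check of all libraries, saved to a log for later analysis:
./seaf-fsck.sh > /tmp/seaf-fsck-$(date +%F).log 2>&1

# 2. Locate corrupted libraries in the log:
grep -i fail /tmp/seaf-fsck-$(date +%F).log

# 3. Repair one specific corrupted library:
./seaf-fsck.sh --repair <library-id>
```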
So it's enough to only check for object existence. This will greatly speed up the fsck process. To skip checking file contents, add the "--shallow" or "-s" option to seaf-fsck. You can use seaf-fsck to export all the files in libraries to an external file system (such as Ext4). This procedure doesn't rely on the seafile database. As long as you have your seafile-data directory, you can always export your files from Seafile to an external file system. The command syntax is The argument Currently only unencrypted libraries can be exported. Encrypted libraries will be skipped. Seafile uses storage de-duplication technology to reduce storage usage. The underlying data blocks are not removed immediately after you delete a file or a library. As a result, the number of unused data blocks will increase on the Seafile server. To release the storage space occupied by unused blocks, you have to run a "garbage collection" program to clean up unused blocks on your server. The GC program cleans up two types of unused blocks: Before running GC, you must shut down the Seafile program on your server if you use the Community Edition, because new blocks written into Seafile while GC is running may be mistakenly deleted by the GC program. The Professional Edition supports online GC operation: you don't need to shut down the Seafile program if you are using MySQL. At the bottom of the page there is a script that you can use to run the cleanup manually or, e.g., once a week as a cron job. To see how much garbage can be collected without actually removing any garbage, use the dry-run option: The output should look like: If you give specific library IDs, only those libraries will be checked; otherwise all libraries will be checked. Notice that at the end of the output there is a "repos have blocks to be removed" section. It contains the list of libraries that have garbage blocks. 
Later when you run GC without the --dry-run option, you can use these library IDs as input arguments to the GC program. To actually remove garbage blocks, run without the --dry-run option: If library IDs are specified, only those libraries will be checked for garbage. As described before, there are two types of garbage blocks to be removed. Sometimes just removing the first type of blocks (those that belong to deleted libraries) is good enough. In this case, the GC program won't bother to check the libraries for outdated historic blocks. The "-r" option implements this feature: Libraries deleted by the users are not immediately removed from the system. Instead, they're moved into a "trash" in the system admin page. Before they're cleared from the trash, their blocks won't be garbage collected. Since Pro server 8.0.6 and community edition 9.0, you can remove garbage fs objects. It should be run without the --dry-run option: Note: This command has a bug before Pro Edition 10.0.15 and Community Edition 11.0.7. It could cause virtual libraries (e.g. shared folders) to fail to merge into their parent libraries. Please avoid using this option in the affected versions. Please contact our support team if you are affected by this bug. You can specify the number of threads used by GC with the "-t" option. The "-t" option can be used together with all other options. Each thread will do GC on one library. For example, the following command will use 20 threads to GC all libraries: Since the threads are concurrent, the output of each thread may mix with the others. The library ID is printed in each line of output. Since GC usually runs quite slowly, as it needs to traverse the entire library history, you can use multiple threads to run GC in parallel. For even larger deployments, it's also desirable to run GC on multiple servers in parallel. 
A simple pattern to divide the workload among multiple GC servers is to assign libraries to servers based on library ID. Since Pro edition 7.1.5, this is supported. You can add the "--id-prefix" option to seaf-gc.sh to specify the library ID prefix. For example, the command below will only process libraries whose ID starts with "a123". To use this script you need: Create the script file (change the location to your liking): Use your favorite text editor and paste the following code: Make sure the script has execution rights; to do that, run this command. Then open crontab as the root user Add the following line (change the location of your script accordingly!) The script will then run every Sunday at 2:00 AM. To perform garbage collection inside the seafile docker container, you must run the Starting from version 6.0, we added Two-Factor Authentication to enhance account security. There are two ways to enable this feature: System admins can tick the check-box in the "Password" section of the system settings page, or just add the following settings to After that, there will be a "Two-Factor Authentication" section in the user profile page. Users can use the Google Authenticator app on their smartphone to scan the QR code. Seafile Server consists of the following two components: The picture below shows how Seafile clients access files when you configure Seafile behind Nginx/Apache. Seafile manages files using libraries. Every library has an owner, who can share the library with other users or with groups. The sharing can be read-only or read-write. Read-only libraries can be synced to the local desktop. The modifications at the client will not be synced back. If a user has modified some file contents, he can use "resync" to revert the modifications. Sharing controls whether a user or group can see a library, while sub-folder permissions are used to modify permissions on specific folders. 
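The crontab line described above might look like this (the script path is only an example location; use wherever you created the script):

```
0 2 * * 0 /opt/seafile/seafile-gc.sh
```

The five fields mean minute 0, hour 2, any day of month, any month, weekday 0 (Sunday), which is the "every Sunday at 2:00 AM" schedule from the text.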
Suppose you share a library as read-only to a group and then want specific sub-folders to be read-write for a few users; you can set read-write permissions on sub-folders for some users and groups. Note: In the Pro Edition, Seafile offers four audit logs in the system admin panel: The logging feature is turned off by default before version 6.0. Add the following option to The audit log data is saved in Fail2ban is an intrusion prevention software framework which protects computer servers from brute-force attacks. Written in the Python programming language, it is able to run on POSIX systems that have an interface to a packet-control system or firewall installed locally, for example, iptables or TCP Wrapper. (Definition from wikipedia - https://en.wikipedia.org/wiki/Fail2ban) This setup protects your seafile website against brute-force attempts. Each time a user/computer tries to connect and fails 3 times, a new line will be written to your seafile logs ( Fail2ban will check this log file and will ban all failed authentications with a new rule in your firewall. WARNING: Without this your Fail2Ban filter will not work. You need to add the following settings to seahub_settings.py, but change it to your own time zone. WARNING: this file may override some parameters from your Edit Finally, just restart fail2ban and check your firewall (iptables for me): Fail2ban will create a new chain for this jail. So you should see these new lines: To do a simple test (but you have to be an administrator on your seafile server), go to your seafile webserver URL and try 3 authentications with a wrong password. Once you have done that, you are banned from the http and https ports in iptables, thanks to fail2ban. To check that: on fail2ban on iptables: To unban your IP address, just execute this command: Since three (3) failed login attempts result in one line added to seahub.log, a Fail2Ban jail with the setting maxretry = 3 actually corresponds to nine (9) failed login attempts. 
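A sketch of what the jail definition in /etc/fail2ban/jail.local could look like. The jail and filter names, log path and ban settings below are assumptions to adapt to your own filter and installation layout:

```
[seafile]
enabled  = true
port     = http,https
filter   = seafile-auth
logpath  = /opt/seafile/logs/seahub.log
maxretry = 3
```

With maxretry = 3 and one log line per three failed attempts, the ban triggers after nine failed logins, as explained above.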
Seafile uses HTTP(S) to sync files between client and server (since version 4.1.0). Seafile provides a feature called encrypted library to protect your privacy. The file encryption/decryption is performed client-side when using the desktop client for file synchronization. The password of an encrypted library is not stored on the server. Even the system admin of the server can't view the file contents. There are a few limitations about this feature: The client-side encryption works on the iOS client since version 2.1.6. The Android client supports client-side encryption since version 2.1.0. When you create an encrypted library, you'll need to provide a password for it. All the data in that library will be encrypted with the password before being uploaded to the server (see limitations above). The encryption procedure is: The above encryption procedure can be executed on the desktop and the mobile client. The Seahub browser client uses a different encryption procedure that happens at the server. Because of this, your password will be transferred to the server. When you sync an encrypted library to the desktop, the client needs to verify your password. When you create the library, a "magic token" is derived from the password and library id. This token is stored with the library on the server side. The client uses this token to check whether your password is correct before you sync the library. The magic token is generated by the PBKDF2 algorithm with 1000 iterations of SHA256 hashing. For maximum security, the plain-text password won't be saved on the client side either. The client only saves the key/iv pair derived from the "file key", which is used to decrypt the data. So if you forget the password, you won't be able to recover it or access your data on the server. When a file download link is clicked, a random URL is generated for the user to access the file from the fileserver. This URL can only be accessed once. After that, all access to the URL will be denied. 
So even if someone else happens to know about the URL, he can't access it anymore. User login passwords are stored in hashed form only. Note that the user login password is different from the passwords used in encrypted libraries. In the database, its format is The record is divided into 4 parts by the $ sign. To calculate the hash: There are three types of upgrade, i.e., major version upgrade, minor version upgrade and maintenance version upgrade. This page contains general instructions for the three types of upgrade. Please check the upgrade notes for any special configuration or changes before/while upgrading. Suppose you are using version 5.1.0 and would like to upgrade to version 6.1.0. First download and extract the new version. You should have a directory layout similar to this: Now upgrade to version 6.1.0. Shut down the Seafile server if it's running Check the upgrade scripts in the seafile-server-6.1.0 directory. You will get a list of upgrade files: Starting from your current version, run the scripts one by one Start Seafile server If the new version works fine, the old version can be removed Suppose you are using version 6.1.0 and would like to upgrade to version 6.2.0. First download and extract the new version. You should have a directory layout similar to this: Now upgrade to version 6.2.0. Check the upgrade scripts in the seafile-server-6.2.0 directory. You will get a list of upgrade files: Starting from your current version, run the scripts one by one Start Seafile server If the new version works, the old version can be removed A maintenance upgrade is, for example, an upgrade from 6.2.2 to 6.2.3. For this type of upgrade, you only need to update the symbolic links (for avatar and a few other folders). A script to perform a minor upgrade is provided with Seafile server (for historical reasons, the script is called Start Seafile If the new version works, the old version can be removed Seafile adds new features in major and minor versions. 
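The four-field record mentioned above (algorithm, iterations, salt and hash, joined by the $ sign) can be pulled apart like this. The record value below is made up for illustration and is not a real hash:

```shell
# Illustrative password record: <algorithm>$<iterations>$<salt>$<hash>.
# The values are fabricated -- only the 4-part layout matters here.
record='PBKDF2SHA256$10000$examplesalt$examplehashvalue'

# Split the record on "$" into its four parts:
IFS='$' read -r algo iterations salt hash <<EOF
$record
EOF
echo "$algo with $iterations iterations"
```

Storing the algorithm and iteration count inside each record is what lets the server strengthen hashing parameters later without invalidating existing records.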
It is likely that some database tables need to be modified or the search index needs to be updated. In general, upgrading a cluster contains the following steps: In general, to upgrade a cluster, you need: Doing a maintenance upgrade is simple: you only need to run the script In the background node, Seahub no longer needs to be started. Nginx is not needed either. The way the office converter works has changed: Seahub in the front-end nodes directly accesses a service in the background node. seahub_settings.py seafevents.conf seahub_settings.py is not needed. But you can leave it unchanged. seafevents.conf No special upgrade operations. In version 6.2.11, the included Django was upgraded. The memcached configuration needed to be upgraded if you were using a cluster. If you upgrade from a version below 6.1.11, don't forget to change your memcache configuration. If the configuration in your Now you need to change to: No special upgrade operations. In version 6.1, we upgraded the included ElasticSearch server. The old server listens on port 9500; the new server listens on port 9200. Please change your firewall settings. In version 6.0, the folder download mechanism has been updated. This requires that, in a cluster deployment, the seafile-data/httptemp folder must be in an NFS share. You can make this folder a symlink to the NFS share. The httptemp folder only contains temp files for downloading/uploading files on the web UI. So there is no reliability requirement for the NFS share. You can export it from any node in the cluster. Because Django is upgraded to 1.8, the COMPRESS_CACHE_BACKEND should be changed v5.0 introduces some database schema changes, and all configuration files (ccnet.conf, seafile.conf, seafevents.conf, seahub_settings.py) are moved to a central config directory. Perform the following steps to upgrade: After the upgrade, you should see that the configuration files have been moved to the conf/ folder. There are no database or search index upgrades from v4.3 to v4.4. 
Perform the following steps to upgrade: v4.3 contains no database table change from v4.2. But the old search index will be deleted and regenerated. A new option COMPRESS_CACHE_BACKEND = 'django.core.cache.backends.locmem.LocMemCache' should be added to seahub_settings.py The secret key in seahub_settings.py needs to be regenerated; the old secret key lacks enough randomness. Perform the following steps to upgrade: Seafile adds new features in major and minor versions. It is likely that some database tables need to be modified or the search index needs to be updated. In general, upgrading a cluster contains the following steps: In general, to upgrade a cluster, you need: A maintenance upgrade only requires downloading the new image, stopping the old docker container, and modifying the Seafile image version in docker-compose.yml to the new version. Then start with docker compose up. Migrate your configuration for LDAP and OAuth according to https://manual.seafile.com/upgrade/upgrade_notes_for_11.0.x If you are using the ElasticSearch, SAML SSO or storage backend features, follow the upgrading manual on how to update the configuration for these features: https://manual.seafile.com/upgrade/upgrade_notes_for_10.0.x If you want to use the new notification server and rate control (pro edition only), please refer to the upgrading manual: https://manual.seafile.com/upgrade/upgrade_notes_for_10.0.x If you are using ElasticSearch, follow the upgrading manual on how to update the configuration: https://manual.seafile.com/upgrade/upgrade_notes_for_9.0.x For a maintenance upgrade, like from version 10.0.1 to version 10.0.4, just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up. For a major version upgrade, like from 10.0 to 11.0, see the instructions below. Please check the upgrade notes for any special configuration or changes before/while upgrading. 
Download the new image, stop the old docker container, and modify the Seafile image version in docker-compose.yml to the new version. Taking the community edition as an example, you have to modify to It is also recommended that you upgrade mariadb and memcached to newer versions as in the v11.0 docker-compose.yml file. Specifically, in version 11.0, we use the following versions: What's more, you have to migrate the configuration for LDAP and OAuth according to https://manual.seafile.com/upgrade/upgrade_notes_for_11.0.x Start with docker compose up. Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up. If you are using the pro edition with the ElasticSearch, SAML SSO or storage backend features, follow the upgrading manual on how to update the configuration for these features: https://manual.seafile.com/upgrade/upgrade_notes_for_10.0.x If you want to use the new notification server and rate control (pro edition only), please refer to the upgrading manual: https://manual.seafile.com/upgrade/upgrade_notes_for_10.0.x Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up. Since version 9.0.6, we use Acme V3 (not acme-tiny) to get the certificate. If there is a certificate generated by an old version, you need to back up and move the old certificate directory and the seafile.nginx.conf before starting. Starting the new container will automatically apply a certificate. Please wait a moment for the certificate to be applied, then you can modify the new seafile.nginx.conf as you want. Execute the following command to make the nginx configuration take effect. A cron job inside the container will automatically renew the certificate. 
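For example, the image line in docker-compose.yml changes like this (the tags shown are examples for the community edition image):

```
# before
image: seafileltd/seafile-mc:10.0-latest
# after
image: seafileltd/seafile-mc:11.0-latest
```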
Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up. Just download the new image, stop the old docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up. From Seafile Docker 12.0, we recommend that you use First, back up the original docker-compose.yml file: Then download .env, seafile-server.yml and caddy.yml, and modify the .env file according to the old configuration in For community edition: For pro edition: The following fields merit particular attention: SSL is now handled by the caddy server. If you have used SSL before, you will also need to modify the seafile.nginx.conf. Change server listen 443 to 80. Back up the original seafile.nginx.conf file: Remove the Change Start with docker compose up. If you have deployed the SeaDoc extension in version 11.0, please use the following steps to upgrade it to version 1.0. SeaDoc 1.0 is for working with Seafile 12.0. SeaDoc and Seafile are deployed in the same directory. SeaDoc itself is stateless. You can simply delete the old configuration file and directory of v0.8, then deploy SeaDoc 1.0 as follows. In version 1.0, we use the .env file to configure the SeaDoc docker image, instead of modifying the docker-compose.yml file directly. Download seadoc.yml to the Seafile For community edition: For pro edition: The following fields merit particular attention: If you have deployed an older SeaDoc version, you should remove Start Seafile server and SeaDoc server with the following command These notes give additional information about changes. Please always follow the main upgrade guide. For the docker based version, please check upgrade Seafile Docker image The notification server enables the desktop syncing and drive clients to get notifications of library changes immediately using websockets. 
There are two benefits: The notification server works with Seafile syncing client 9.0+ and drive client 3.0+. Please follow the document to enable the notification server: https://manual.seafile.com/config/seafile-conf/#notification-server-configuration If you use a storage backend or cluster, make sure the memcached section is in seafile.conf. Since version 10.0, all memcached options are consolidated into the one below. Modify seafile.conf: The configuration for SAML SSO in Seafile is greatly simplified. Now only three options are needed: Please check the new document on SAML SSO Starting from version 10.0, Seafile allows administrators to configure upload and download speed limits for users with different roles through the following two steps: Elasticsearch is upgraded to version 8.x, which fixes and improves some issues with the file search function. Since elasticsearch 7.x, the default number of shards has changed from 5 to 1, because too many index shards over-occupy system resources; but when a single shard's data is too large, search performance is also reduced. Starting from version 10.0, Seafile supports customizing the number of shards in the configuration file. You can use the following command to query the current size of each shard to determine the best number of shards for you: The official recommendation is that the size of each shard should be between 10G and 50G: https://www.elastic.co/guide/en/elasticsearch/reference/8.6/size-your-shards.html#shard-size-recommendation. Modify seafevents.conf: Note, you should install Python libraries system-wide using the root user or sudo mode. For Ubuntu 20.04/22.04 For Debian 11 Stop the Seafile-9.0.x server. Start from Seafile 10.0.x, run the script: If you are using the pro edition, modify the memcached option in seafile.conf and the SAML SSO configuration if needed. You can choose one of the methods to upgrade your index data. 1. 
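The consolidated memcached section in seafile.conf looks like this sketch; the server address is an example, and you should double-check the option values against your deployment:

```
[memcached]
memcached_options = --SERVER=192.168.1.134:11211 --POOL-MIN=10 --POOL-MAX=100
```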
Download Elasticsearch image: Create a new folder to store ES data and give the folder permissions: Start ES docker image: PS: 2. Create an index with 8.x compatible mappings: 3. Set the 4. Use the reindex API to copy documents from the 7.x index into the new index: 5. Use the following command to check if the reindex task is complete: 6. Reset the 7. Wait for the elasticsearch status to change to 8. Use the aliases API to delete the old index and add an alias with the old index name to the new index: 9. Deactivate the 7.17 container, pull the 8.x image and run: 1. Pull Elasticsearch image: Create a new folder to store ES data and give the folder permissions: Start ES docker image: 2. Modify seafevents.conf: Restart Seafile server: 3. Delete old index data 4. Create new index data: 1. Deploy elasticsearch 8.x according to method two. Use Seafile 10.0 version to deploy a new backend node and modify the 2. Upgrade the other nodes to Seafile 10.0 version and use the new Elasticsearch 8.x server. 3. Then deactivate the old backend node and the old version of Elasticsearch. These notes give additional information about changes. Please always follow the main upgrade guide. For the docker based version, please check upgrade Seafile Docker image Previous Seafile versions directly used a user's email address or SSO identity as their internal user ID. Seafile 11.0 introduces virtual user IDs - random, internal identifiers like "adc023e7232240fcbb83b273e1d73d36@auth.local". For new users, a virtual ID will be generated instead of directly using their email. A mapping between the email and virtual ID will be stored in the "profile_profile" database table. For SSO users, the mapping between SSO ID and virtual ID is stored in the "social_auth_usersocialauth" table. Overall, this brings more flexibility in handling user accounts and identity changes. Existing users keep their same old ID. Previous Seafile versions handled LDAP authentication in the ccnet-server component. 
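The reindex call in step 4 can be sketched with Elasticsearch's standard reindex API. The host and the index names `old_index`/`new_index` are placeholders for your own values:

```shell
curl -X POST 'http://localhost:9200/_reindex?wait_for_completion=false' \
  -H 'Content-Type: application/json' -d '
{
  "source": { "index": "old_index" },
  "dest":   { "index": "new_index" }
}'
```

With `wait_for_completion=false`, Elasticsearch returns a task ID that you can poll via the task API, which is the completeness check in step 5.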
In Seafile 11.0, LDAP is reimplemented within the Seahub Python codebase. LDAP configuration has been moved from ccnet.conf to seahub_settings.py. The ccnet_db.LDAPImported table is no longer used - LDAP users are now stored in ccnet_db.EmailUsers along with other users. Benefits of this new implementation: You need to run If you use OAuth authentication, the configuration needs to be changed a bit. If you use SAML, you don't need to change configuration files. For SAML2, in version 10, the name_id field is returned from the SAML server and is used as the username (the email field in ccnet_db.EmailUsers). In version 11, for old users, Seafile will find the old user and create a name_id to name_id mapping in social_auth_usersocialauth. For new users, Seafile will create a new user with a random ID and add a name_id to the random ID mapping in social_auth_usersocialauth. In addition, we have added a feature where you can configure to disable login with a username and password for SAML users by using the config of Seafile 11.0 dropped support for SQLite as the database. It is better to migrate from the SQLite database to a MySQL database before upgrading to version 11.0. There are several reasons driving this change: To migrate from the SQLite database to a MySQL database, you can follow the document Migrate from SQLite to MySQL. If you have issues in the migration, just post a thread in our forum. We are glad to help you. The Elasticsearch version is not changed in Seafile version 11.0 For Ubuntu 20.04/22.04 Django 4.* has introduced a new check for the origin HTTP header in CSRF verification. It now compares the values of the origin field and the host field in the HTTP header. If they are different, an error is triggered. If you deploy Seafile behind a proxy, or if you use a non-standard port, or if you deploy Seafile in a cluster, it is likely that the origin field and the host field in the HTTP header received by Django are different. 
This is because the host field in the HTTP header is likely to be modified by the proxy, and the mismatch results in a CSRF error. You can add CSRF_TRUSTED_ORIGINS to seahub_settings.py to solve the problem: Note, you should install Python libraries system-wide using the root user or sudo mode. For Ubuntu 20.04/22.04 The configuration items for LDAP login and LDAP sync tasks are migrated from ccnet.conf to seahub_settings.py. The names of the configuration items are based on the 10.0 version, with the characters 'LDAP_' or 'MULTI_LDAP_1' added. Examples are as follows: The following configuration items are only for Pro Edition: If you sync users from LDAP to Seafile and want Seafile, when a user logs in via SSO (ADFS or OAuth), to find the existing account for this user instead of creating a new one, you can set Note, here the UID means the unique user ID; in LDAP it is the attribute you use for Run the following script to migrate users in For Seafile docker In the new version, the OAuth login configuration should keep the email attribute unchanged to be compatible with new and old user logins. In version 11.0, a new uid attribute is added to be used as a user's external unique ID. The uid will be stored in social_auth_usersocialauth to map to the internal virtual ID. For old users, the original email is used as the internal virtual ID. The example is as follows: When a user logs in, Seafile will first use the "id -> email" map to find the old user and then create a "uid -> uid" map for this old user. After all users have logged in once, you can delete the configuration We have documented common issues encountered by users when upgrading to version 11.0 in our FAQ https://cloud.seatable.io/dtable/external-links/7b976c85f504491cbe8e/?tid=0000&vid=0000. If you encounter any issue, please give it a check. These notes give additional information about changes. Please always follow the main upgrade guide. 
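A minimal CSRF_TRUSTED_ORIGINS example for seahub_settings.py; replace the domain with your own service URL (in Django 4 the entry must include the scheme):

```
CSRF_TRUSTED_ORIGINS = ['https://seafile.example.com']
```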
For the docker based version, please check upgrade Seafile Docker image Seafile version 12.0 has the following major changes: Other changes: Breaking changes Deploying SeaDoc and the Seafile binary package on the same server is no longer supported. You can: Deploying Seafile with the binary package is now deprecated and will probably no longer be supported in version 13.0. We recommend that you migrate your existing Seafile deployment to a docker based one. The Elasticsearch version is not changed in Seafile version 12.0 Note, you should install Python libraries system-wide using the root user or sudo mode. For Ubuntu 22.04/24.04 The following instruction is for the binary package based installation. If you use the Docker based installation, please see conf/.env Note: JWT_PRIVATE_KEY is a random string with a length of no less than 32 characters; generation example: If you have deployed SeaDoc v0.8 with Seafile v11.0, you can upgrade it to 1.0 using the following two steps: Note, deploying SeaDoc and the Seafile binary package on the same server is no longer supported. If you really want to deploy SeaDoc and Seafile server on the same machine, you should deploy Seafile server with Docker. From version 1.0, SeaDoc uses the seahub_db database to store its operation logs and no longer needs the extra database sdoc_db. The database tables in seahub_db are created automatically when you upgrade Seafile server from v11.0 to v12.0. You can simply delete sdoc_db. Please see the document Setup SeaDoc to install SeaDoc on a separate machine and integrate it with your binary package based Seafile server v12.0. We have documented common issues encountered by users when upgrading to version 12.0 in our FAQ https://cloud.seatable.io/dtable/external-links/7b976c85f504491cbe8e/?tid=0000&vid=0000. If you encounter any issue, please give it a check. These notes give additional information about changes. Please always follow the main upgrade guide. 
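One way to generate a suitable JWT_PRIVATE_KEY value; any random string of at least 32 characters works, so tools like pwgen are equally fine:

```shell
# Produces 64 hexadecimal characters, comfortably above the
# 32-character minimum required for JWT_PRIVATE_KEY.
JWT_PRIVATE_KEY=$(openssl rand -hex 32)
echo "$JWT_PRIVATE_KEY"
```

Put the printed value into the JWT_PRIVATE_KEY line of your .env file.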
If you are currently using the Seafile Community Edition, please refer to Upgrade notes for CE-7.0.x. If you are currently using Seafile Professional, please refer to Upgrade notes for Pro-7.0.x. These notes give additional information about changes. Please always follow the main upgrade guide. From version 7.1.0, Seafile depends on Python 3 and is not compatible with Python 2. Therefore you cannot upgrade directly from Seafile 6.x.x to 7.1.x. If your current version of Seafile is not 7.0.x, you must first download the 7.0.x installation package and upgrade to 7.0.x before performing the subsequent operations. To support both Python 3.6 and 3.7, we no longer bundle Python libraries with the Seafile package. You need to install most of the libraries on your own as below. Note, you should install Python libraries system wide using the root user or sudo mode. Since Seafile 7.1.x, SeafDAV no longer supports FastCGI, only WSGI. This means that if you are using the SeafDAV functionality behind an Nginx or Apache reverse proxy, you need to change FastCGI to WSGI. For SeafDAV, the configuration of Nginx is as follows: For SeafDAV, the configuration of Apache is as follows: The implementation of the builtin office file preview has been changed. You should update your configuration according to: https://download.seafile.com/published/seafile-manual/deploy_pro/office_documents_preview.md#user-content-Version%207.1+ If you are using the Ceph storage backend, you need to install a new Python library. On Debian/Ubuntu (Seafile 7.1+): If you have customized the login page or other HTML pages, your customized pages may not work anymore, as we have removed some old JavaScript libraries. Please re-customize based on the newest version. Note, the following patch is already included in versions pro-7.1.8 and ce-7.1.5. 
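A minimal sketch of a WSGI-style Nginx block for SeafDAV could look like the following (port 8080 is assumed here as the SeafDAV listen port; adjust it and the paths to your setup):

```nginx
location /seafdav {
    proxy_pass           http://127.0.0.1:8080/seafdav;
    proxy_set_header     Host $host;
    proxy_set_header     X-Real-IP $remote_addr;
    proxy_set_header     X-Forwarded-For $proxy_add_x_forwarded_for;
    # allow large WebDAV uploads
    client_max_body_size 0;
}
```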
Two customers have reported that after upgrading to version 7.1, users who log in via Shibboleth single sign on get a wrong name if the name contains a special character. We suspect it is a Shibboleth problem, as it does not send the name to Seafile in UTF-8 encoding. (https://issues.shibboleth.net/jira/browse/SSPCPP-2) The solution is to modify the code in seahub/thirdpart/shibboleth/middleware.py: If you have this problem too, please let us know. The upgrade script will try to create a missing table and remove an unused index. The following SQL errors are just warnings and can be ignored: Please check whether the seahub process is running on your server. If it is running, there should be an error log in seahub.log for the internal server error. If the seahub process is not running, you can modify conf/gunicorn.conf, change The most common issue is that you use an old memcache configuration that depends on python-memcache. The new way is The old way is These notes give additional information about changes. Please always follow the main upgrade guide. From 8.0, the ccnet-server component is removed, but ccnet.conf is still needed. Note, you should install Python libraries system wide using the root user or sudo mode. If you are using Shibboleth and have configured please change it to As support for old-style middleware using Start from Seafile 7.1.x, run the script: Start the Seafile-8.0.x server. These notes give additional information about changes. Please always follow the main upgrade guide. The 9.0 version includes the following major changes: The new file server written in golang serves HTTP requests to upload/download/sync files. It provides three advantages: You can turn the golang file server on by adding the following configuration in seafile.conf Note, you should install Python libraries system wide using the root user or sudo mode. Start from Seafile 9.0.x, run the script: Start the Seafile-9.0.x server. 
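For reference, the "new way" of configuring memcached, based on pylibmc, typically looks like this in seahub_settings.py (a sketch; the host and port are examples):

```python
# seahub_settings.py -- memcached via django-pylibmc (the "new way",
# replacing the old python-memcache based configuration).
CACHES = {
    'default': {
        'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',
        'LOCATION': '127.0.0.1:11211',
    },
}
```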
If your elasticsearch data is not large, it is recommended to deploy the latest 7.x version of ElasticSearch and then rebuild the new index. The specific steps are as follows: Download the ElasticSearch image. Create a new folder to store ES data and give the folder permissions. Note: You must properly grant permission to access the ES data directory, and run the Elasticsearch container as the root user; refer to here. Start the ES docker image. Delete the old index data. Modify seafevents.conf. Restart seafile. If your data volume is relatively large, it will take a long time to rebuild indexes for all Seafile databases, so you can reindex the existing data instead. This requires the following steps. The detailed process is as follows. Download the ElasticSearch image: PS: For seafile version 9.0, you need to manually create the elasticsearch mapping path on the host machine and give it 777 permission, otherwise elasticsearch will report path permission problems when starting; the command is as follows. Move the original data to the new folder and give the folder permissions. Note: You must properly grant permission to access the ES data directory, and run the Elasticsearch container as the root user; refer to here. Start the ES docker image. Note: Create an index with 7.x compatible mappings. Set the Use the reindex API to copy documents from the 5.x index into the new index. Reset the Wait for the index status to change to Use the aliases API to delete the old index and add an alias with the old index name to the new index. After reindexing, modify the configuration in Seafile. Modify seafevents.conf. Restart seafile. Alternatively, deploy a new ElasticSearch 7.x service, use Seafile 9.0 to deploy a new backend node, and connect it to ElasticSearch 7.x. The backend node does not start the Seafile background service; just manually run the command 
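The reindex flow described above can be sketched as a sequence of Elasticsearch REST calls (the index names are placeholders; consult the Elasticsearch reindex and aliases API documentation for the exact mappings and settings):

```
PUT  /new_index            # create an index with 7.x compatible mappings
POST /_reindex             # copy documents from the old index
     { "source": { "index": "old_index" },
       "dest":   { "index": "new_index" } }
POST /_aliases             # drop the old index, alias its name to the new one
     { "actions": [
         { "remove_index": { "index": "old_index" } },
         { "add": { "index": "new_index", "alias": "old_index" } } ] }
```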
Support for Breakpad can be added by running the following steps: install the gyp tool, compile breakpad, compile the dump_syms tool, create the VS solution, copy the VC merge modules. The following directory structures are expected when building the Sync Client: The source code of these projects can be downloaded at github.com/haiwen/libsearpc, github.com/haiwen/seafile, github.com/haiwen/seafile-client, and github.com/haiwen/seafile-shell-ext. Note: the building commands have been included in the packaging script; you can skip the building commands while packaging. To build libsearpc: To build seafile: To build seafile-client: To build seafile-shell-ext: Note: Two new options are added in version 4.4, both in seahub_settings.py. This version contains no database table change. LDAP improvements and fixes New features: Pro only: Fixes: Note: this version contains no database table change from v4.2, but the old search index will be deleted and regenerated. Note when upgrading from v4.2 and using a cluster, a new option About "Open via Client": The web interface will call the Seafile desktop client via the "seafile://" protocol to use a local program to open a file. If the file is already synced, the local file will be opened. Otherwise it is downloaded, and uploaded after modification. Requires client version 4.3.0+. Usability improvements Pro only features: Others Note: because Seafile has changed the way office preview works in version 4.2.2, you need to clean the old generated files using the command: In the old way, the whole file was converted to HTML5 before returning to the client. By converting an office file to HTML5 page by page, the first page is displayed faster. By displaying each page in a separate frame, the quality for some files is improved too. 
Improved account management Important New features Others Pro only updates Usability Security Improvement Platform Pro only updates Updates in community edition too Important Small Pro edition only: Syncing Platform Web Web Platform Web Platform Misc WebDAV Platform Web Web for Admin Platform Web Web for Admin API Web API Platform You can check the Seafile release table to find the lifetime of each release and the currently supported OS: https://cloud.seatable.io/dtable/external-links/a85d4221e41344c19566/?tid=0000&vid=0000 Upgrade Please check our document for how to upgrade to 11.0: https://manual.seafile.com/upgrade/upgrade_notes_for_11.0.x/ Seafile SDoc editor 0.8 Seafile SDoc editor 0.7 SDoc editor 0.6 Major changes UI Improvements Pro edition only changes Other changes Upgrade Please check our document for how to upgrade to 10.0: https://manual.seafile.com/upgrade/upgrade_notes_for_10.0.x/ Note, after upgrading to this version, you need to upgrade the Python libraries on your server: "pillow==10.2.* captcha==0.5.* django_simple_captcha==0.5.20" Upgrade Please check our document for how to upgrade to 9.0: https://manual.seafile.com/upgrade/upgrade_notes_for_9.0.x/ Note: the included lxml library is removed for compatibility reasons. The library is used in the published libraries feature and the WebDAV feature. You need to install lxml manually after upgrading to 9.0.7. Use the command The new file server written in golang serves HTTP requests to upload/download/sync files. It provides three advantages: You can turn the golang file server on by adding the following configuration in seafile.conf Deprecated Deprecated Upgrade Please check our document for how to upgrade to 8.0: https://manual.seafile.com/upgrade/upgrade_notes_for_8.0.x/ Potential breaking change in Seafile Pro 8.0.3: You can set the maximum number of files contained in a library that can be synced by the Seafile client. The default is 100000. 
When you download a repo, the Seafile client will request the fs id list, and you can control the timeout period of this request through Upgrade Please check our document for how to upgrade to 7.1: upgrade notes for 7.1.x Potential breaking change in Seafile Pro 7.1.16: You can set the maximum number of files contained in a library that can be synced by the Seafile client. The default is 100000. When you download a repo, the Seafile client will request the fs id list, and you can control the timeout period of this request through Since seafile-pro 7.0.0, we have upgraded Elasticsearch to 5.6. As Elasticsearch 5.6 relies on the Java 8 environment and can't run as root, you need to run Seafile with a non-root user and upgrade the Java version. Please check our document for how to upgrade to 7.0: upgrade notes for 7.0.x In version 6.3, Django is upgraded to version 1.11. Django 1.8, which is used in version 6.2, was deprecated in April 2018. With this upgrade, the fast-cgi mode is no longer supported. You need to configure Seafile behind Nginx/Apache in WSGI mode. The way to run Seahub on another port has also changed. You need to modify the configuration file Version 6.3 also changed the database table for file comments; if you have used this feature, you need to migrate old file comments using the following commands after upgrading to 6.3: Note, this command should be run while the Seafile server is running. Version 6.3 changed '/shib-login' to '/sso'. If you use Shibboleth, you need to update your Apache/Nginx config. Please check the updated document: shibboleth config v6.3 Version 6.3 adds a new option for file search ( This option improves search speed significantly (10x) when the search result contains large pdf/doc files. But you need to rebuild the search index if you want to add this option. New features From 6.2, it is recommended to use proxy mode for communication between Seahub and Nginx/Apache. 
Two steps are needed if you'd like to switch to WSGI mode: The configuration of Nginx is as follows: The configuration of Apache is as follows: You can follow the document on minor upgrade. Web UI Improvement: Improvement for admins: System changes: You can follow the document on minor upgrade. Special note for upgrading a cluster: In version 6.0, the folder download mechanism has been updated. This requires that, in a cluster deployment, the seafile-data/httptemp folder must be on an NFS share. You can make this folder a symlink to the NFS share. The httptemp folder only contains temp files for downloading/uploading files on the web UI, so there is no reliability requirement for the NFS share. You can export it from any node in the cluster. Improvement for admin Other Pro only features cloud file browser others This version has a few bugs. We will fix them soon. Note: the Seafile client now supports HiDPI under Windows; you should remove the QT_DEVICE_PIXEL_RATIO setting if you had set one previously. In the old version, you would sometimes see a strange directory such as "Documents~1" synced to the server; this is because the old version did not handle long paths correctly. In the previous version, when you open an office file in Windows, it is locked by the operating system. If another person modifies this file on another computer, syncing is stopped until you close the locked file. In this new version, the syncing process will continue. The locked file will not be synced to the local computer, but other files will not be affected. You have to update all the clients on all the PCs. If one PC does not use v3.1.11, when the "deleting folder" information is synced to this PC, it will fail to delete the folder completely, and the folder will be synced back to the other PCs. So the other PCs will see the folder reappear again. Note: This version contains a bug that prevents you from logging into your private servers. 
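As an illustration of the proxy (WSGI) mode discussed above, a minimal Nginx block forwarding to Seahub might look like this (Seahub's default gunicorn port 8000 is assumed; adapt to your deployment):

```nginx
location / {
    proxy_pass         http://127.0.0.1:8000;
    proxy_set_header   Host $host;
    proxy_set_header   X-Real-IP $remote_addr;
    proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
}
```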
1.8.1 1.8.0 1.7.3 1.7.2 1.7.1 1.7.0 1.6.2 1.6.1 1.6.0 1.5.3 1.5.2 1.5.1 1.5.0 Note when upgrading to 5.0 from 4.4: You can follow the document on major upgrade (http://manual.seafile.com/deploy/upgrade.html) In Seafile 5.0, we have moved all config files to the folder If you want to downgrade from v5.0 to v4.4, you should manually copy these files back to the original place, then run minor_upgrade.sh to upgrade symbolic links back to version 4.4. The 5.0 server is compatible with v4.4 and v4.3 desktop clients. Common issues (solved) when upgrading to v5.0: Improve seaf-fsck Sharing link UI changes: Config changes: Trash: Admin: Security: New features: Fixes: Usability Improvement Others Note when upgrading to 4.2 from 4.1: If you deploy Seafile in a non-root domain, you need to add the following extra settings in seahub_settings.py: Usability Security Improvement Platform Important Small Important Small improvements Syncing Platform Web Web Platform Web Platform Platform Web WebDAV Platform Web Web for Admin Platform Web Web for Admin API Web API Platform Web Daemon Web Daemon Web For Admin API Seafile Web Seafile Daemon API You can check the Seafile release table to find the lifetime of each release and the currently supported OS: https://cloud.seatable.io/dtable/external-links/a85d4221e41344c19566/?tid=0000&vid=0000 Upgrade Please check our document for how to upgrade to 11.0: https://manual.seafile.com/upgrade/upgrade_notes_for_11.0.x/ Seafile Seafile SDoc editor 0.8 Seafile SDoc editor 0.7 Seafile SDoc editor 0.6 Seafile Seafile SDoc editor 0.5 Seafile SDoc editor 0.4 Seafile SDoc editor 0.3 Seafile SDoc editor 0.2 Upgrade Please check our document for how to upgrade to 10.0: https://manual.seafile.com/upgrade/upgrade_notes_for_10.0.x/ Note: the included lxml library is removed for compatibility reasons. The library is used in the published libraries feature and the WebDAV feature. You need to install lxml manually after upgrading to 9.0.7. 
Use the command The new file server written in golang serves HTTP requests to upload/download/sync files. It provides three advantages: You can turn the golang file server on by adding the following configuration in seafile.conf Please check our document for how to upgrade to 8.0: https://manual.seafile.com/upgrade/upgrade_notes_for_8.0.x/ Feature changes PostgreSQL support is dropped, as we have rewritten the database access code to remove a copyright issue. Upgrade Please check our document for how to upgrade to 7.1: https://manual.seafile.com/upgrade/upgrade_notes_for_7.1.x/ Feature changes In version 6.3, users can create public or private Wikis. In version 7.0, private Wikis are replaced by the column mode view. Every library has a column mode view, so users don't need to explicitly create private Wikis. Public Wikis are now renamed to published libraries. Upgrade Just follow our document on major version upgrade. No special steps are needed. In version 6.3, Django is upgraded to version 1.11. Django 1.8, which is used in version 6.2, was deprecated in April 2018. With this upgrade, the fast-cgi mode is no longer supported. You need to configure Seafile behind Nginx/Apache in WSGI mode. The way to run Seahub on another port has also changed. You need to modify the configuration file Version 6.3 also changed the database table for file comments; if you have used this feature, you need to migrate old file comments using the following commands after upgrading to 6.3: Note, this command should be run while the Seafile server is running. From 6.2, it is recommended to use WSGI mode for communication between Seahub and Nginx/Apache. 
Two steps are needed if you'd like to switch to WSGI mode: The configuration of Nginx is as follows: The configuration of Apache is as follows: If you upgrade from 6.0 and you'd like to use the video thumbnail feature, you need to install the ffmpeg package: Web UI Improvement: Improvement for admins: System changes: Note: If you ever used 6.0.0, 6.0.1 or 6.0.2 with SQLite as the database and encountered a problem with desktop/mobile client login, follow https://github.com/haiwen/seafile/pull/1738 to fix the problem. Improvement for admin Other Warning: Note: when upgrading from 5.1.3 or a lower version to 5.1.4+, you need to install python-urllib3 (or python2-urllib3 for Arch Linux) manually: Note: downloading multiple files at once will be added in the next release. Note: in this version, the group discussion is not re-implemented yet. It will be available when the stable version is released. There are three config files in the community edition: You can also modify most of the config items via the web interface. The config items are saved in a database table (seahub-db/constance_config). They have a higher priority over the items in the config files. Ccnet is the internal RPC framework used by the Seafile server and also manages the user database. A few useful options are in ccnet.conf. The ccnet component was merged into seaf-server in version 7.1, but the configuration file is still needed. When you configure ccnet to use MySQL, the default connection pool size is 100, which should be enough for most use cases. You can change this value by adding the following options to ccnet.conf: Since Seafile 10.0.2, you can enable encrypted connections to the MySQL server by adding the following configuration options: When set Note: The subject line may vary between different releases; this is based on Release 2.0.1. Restart Seahub so that your changes take effect. 
Subject seahub/seahub/auth/forms.py line:103 Body seahub/seahub/templates/registration/password_reset_email.html Note: You can copy password_reset_email.html to Subject seahub/seahub/views/sysadmin.py line:424 Body seahub/seahub/templates/sysadmin/user_add_email.html Note: You can copy user_add_email.html to Subject seahub/seahub/views/sysadmin.py line:368 Body seahub/seahub/templates/sysadmin/user_reset_email.html Note: You can copy user_reset_email.html to Subject seahub/seahub/share/views.py line:668 Body seahub/seahub/templates/shared_link_email.html The In the file NOTE: Access the AWS elasticsearch service using HTTPS Important: Every entry in this configuration file is case-sensitive. You need to restart seafile and seahub so that your changes take effect. You may set a default quota (e.g. 2GB) for all users. To do this, just add the following lines to This setting applies to all users. If you want to set a quota for a specific user, you may log in to the seahub website as administrator, then set it in the "System Admin" page. Since the Pro 10.0.9 version, you can set the maximum number of files allowed in a library; when this limit is exceeded, files cannot be uploaded to this library. There is no limit by default. If you don't want to keep all file revision history, you may set a default history length limit for all libraries. The default time for automatic cleanup of the library trash is 30 days. You can modify this time by adding the following configuration: Seafile uses a system trash, where deleted libraries are moved to. In this way, accidentally deleted libraries can be recovered by the system admin. Seafile Pro Edition uses memory caches in various cases to improve performance. Some session information is also saved into the memory cache to be shared among the cluster nodes. Memcached or Redis can be used for the memory cache. If you use memcached: If you use redis: Redis support is added in version 11.0. Currently only single-node Redis is supported. 
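Sketches of the seafile.conf fragments referred to above (the values are examples; the section and option names follow the Seafile manual, but verify them against your version):

```ini
[quota]
# default quota per user, in GB
default = 2

[history]
# keep at most 60 days of file revision history
keep_days = 60

[library_trash]
# auto-clean library trash entries older than 30 days
expire_days = 30
```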
Redis Sentinel or Cluster is not supported yet. The configuration of the seafile fileserver is in the Since Community Edition 6.2 and Pro Edition 6.1.9, you can set the number of worker threads to serve HTTP requests. The default value is 10, which is a good value for most use cases. Change upload/download settings. After a file is uploaded via the web interface, or the cloud file browser in the client, it needs to be divided into fixed size blocks and stored into the storage backend. We call this procedure "indexing". By default, the file server uses 1 thread to sequentially index the file and store the blocks one by one. This is suitable for most cases. But if you're using S3/Ceph/Swift backends, you may have more bandwidth in the storage backend for storing multiple blocks in parallel. We provide an option to define the number of concurrent threads in indexing: When users upload files in the web interface (seahub), the file server divides the file into fixed size blocks. The default block size for web uploaded files is 8MB. The block size can be set here. When users upload files in the web interface, the file server assigns a token to authorize the upload operation. This token is valid for 1 hour by default. When uploading a large file via WAN, the upload time can be longer than 1 hour. You can change the token expire time to a larger value. You can download a folder as a zip archive from seahub, but some zip software on Windows doesn't support UTF-8, in which case you can use the "windows_encoding" setting to solve it. The "httptemp" directory contains temporary files created during file upload and zip download. In some cases the temporary files are not cleaned up after the file transfer was interrupted. Starting from the 7.1.5 version, the file server will regularly scan the "httptemp" directory to remove files created a long time ago. New in Seafile Pro 7.1.16 and Pro 8.0.3: You can set the maximum number of files contained in a library that can be synced by the Seafile client. 
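To illustrate the fixed-size block division described above (this is an illustrative sketch, not Seafile's actual code): with the default 8 MB block size, a file is split into ceil(size / 8 MB) blocks.

```python
# Illustrative sketch of fixed-size block division (not Seafile source code).
BLOCK_SIZE = 8 * 1024 * 1024  # default 8 MB block size for web uploads

def block_count(file_size: int) -> int:
    """Number of fixed-size blocks a file of file_size bytes is split into."""
    # ceiling division; an empty file still occupies one block entry
    return max(1, -(-file_size // BLOCK_SIZE))
```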
The default is 100000. When you download a repo, the Seafile client will request the fs id list, and you can control the timeout period of this request through Since the Pro 8.0.4 version, you can set both options to -1, to allow unlimited size and timeout. If you use object storage as the storage backend, when a large file is frequently downloaded, the same blocks need to be fetched from the storage backend to the Seafile server. This may waste bandwidth and cause high load on the internal network. Since the Seafile Pro 8.0.5 version, we added block caching to improve the situation. Note that this configuration is only effective for downloading files through the web page or API, but not for syncing files. When a large number of files are uploaded through the web page and API, it is expensive to calculate block IDs based on the block contents. Since Seafile-pro-9.0.6, you can add the If you want to limit the type of files when uploading files, since the Seafile Pro 10.0.0 version, you can set Since seafile 10.0.1, when you use the go fileserver, you can set Since Seafile 11.0.7 Pro, you can ask the file server to check every file uploaded with web APIs for viruses. Find more options about virus scanning at virus scan. The whole database configuration is stored in the When you configure the seafile server to use MySQL, the default connection pool size is 100, which should be enough for most use cases. Since Seafile 10.0.2, you can enable encrypted connections to the MySQL server by adding the following configuration options: When set The Seafile Pro server auto expires file locks after some time, to prevent a locked file from being locked for too long. The expire time can be tuned in the seafile.conf file. The default is 12 hours. Since Seafile-pro-9.0.6, you can add a cache for getting locked files (to reduce server load caused by sync clients). 
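The two sync limits discussed above live in the [fileserver] section of seafile.conf; a sketch (option names follow the Seafile manual for Pro 7.1.16+/8.0.3+; the values shown are examples):

```ini
[fileserver]
# maximum number of files in a library that clients may sync (-1 = unlimited)
max_sync_file_count = 100000
# timeout in seconds for the fs id list request on repo download (-1 = unlimited)
fs_id_list_request_timeout = 300
```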
At the same time, you also need to configure the following memcache options for the cache to take effect: You may configure Seafile to use various kinds of object storage backends. You may also configure Seafile to use multiple storage backends at the same time. When you deploy Seafile in a cluster, you should add the following configuration: Since Seafile-pro-6.3.10, you can enable seaf-server's RPC slow log to do performance analysis. The slow log is enabled by default. If you want to configure related options, add the options to seafile.conf: You can find Since 9.0.2 Pro, the signal to trigger log rotation has been changed to Even though Nginx logs all requests with certain details, such as URL, response code and upstream process time, it's sometimes desirable to have more context about the requests, such as the user id for each request. Such information can only be logged from the file server itself. Since 9.0.2 Pro, an access log feature is added to the fileserver. To enable the access log, add the options below to seafile.conf: The log format is as follows: You can use Seafile 9.0 introduces a new fileserver implemented in the Go programming language. To enable it, you can set the options below in seafile.conf: Go fileserver has 3 advantages over the traditional fileserver implemented in the C language: Go fileserver caches fs objects in memory. On the one hand, this avoids repeated creation and destruction of repeatedly accessed objects; on the other hand, it also slows down the speed at which objects are released, which prevents Go's GC mechanism from consuming too much CPU time. You can set the size of the memory used by the fs cache through the following options. Since Seafile 9.0.7, you can enable the profile function of go fileserver by adding the following configuration options: This interface can be used through the pprof tool provided by the Go language. See https://pkg.go.dev/net/http/pprof for details. 
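Enabling the Go fileserver described above is a small switch in seafile.conf; a sketch (the fs cache option name is our reading of the manual, so verify it for your version):

```ini
[fileserver]
use_go_fileserver = true
# upper bound of memory used by the fs object cache (example value)
fs_cache_limit = 2G
```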
Note that you have to first install Go on the client that issues the commands below. The password parameter should match the one you set in the configuration. Since Seafile 10.0.0, you can enable the notification server by adding the following configuration options: You can generate jwt_private_key with the following command: If you use nginx, then you also need to add the following configuration for nginx: Or add the configuration for Apache: Create a folder During upgrading, the Seafile upgrade script will create the symbolic link automatically to preserve your customization. Add your logo file to Overwrite The default width and height for the logo are 149px and 32px; you may need to change that according to yours. Add your favicon file to Overwrite Add your css file to Overwrite Note: Since version 2.1. First go to the custom folder, then run the following commands Modify the You can add an extra note to the sharing dialog in seahub_settings.py Result: Since Pro 7.0.9, Seafile supports adding custom navigation entries to the home page for quick access. This requires you to add the following configuration information to the Note: The Then restart the Seahub service for the changes to take effect. Once you log in to the Seafile system homepage again, you will see the new navigation entry under the Result: Result: Note: You can also modify most of the config items via the web interface. The config items are saved in a database table (seahub-db/constance_config). They have a higher priority over the items in the config files. If you want to disable settings via the web interface, you can add Refer to the email sending documentation. Seahub caches items (avatars, profiles, etc.) on the file system by default (/tmp/seahub_cache/). You can replace this with Memcached or Redis. Add the following configuration to Redis support is added in version 11.0. Please refer to Django's documentation about using the Redis cache. The following options affect user registration, passwords and sessions. 
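The custom navigation entries take a shape like the following in seahub_settings.py (a sketch based on the manual; the icon, label, and link are placeholders):

```python
# seahub_settings.py -- extra home-page navigation entries (example values)
CUSTOM_NAV_ITEMS = [
    {
        'icon': 'sf2-icon-star',     # an sf2-icon-* class name
        'desc': 'Company wiki',      # label shown in the navigation
        'link': 'https://wiki.example.com/',
    },
]
```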
Options for libraries: Options for online file preview: You should enable cloud mode if you use Seafile with an unknown user base. It disables the organization tab in Seahub's website to ensure that users can't access the user list. Cloud mode provides some nice features like sharing content with unregistered users and sending invitations to them. Therefore you also want to enable user registration. Through the global address book (since version 4.2.3) you can do a search for every user account. So you probably want to disable it. Since version 6.2, you can define a custom function to modify the result of the user search function. For example, if you want to limit users to only searching for users in the same institution, you can define Code example: NOTE, you should NOT change the name of Since version 6.2.5 pro, if you enable the ENABLE_SHARE_TO_ALL_GROUPS feature on the sysadmin settings page, you can also define a custom function to return the groups a user can share a library to. For example, if you want to let a user share a library to both their own groups and the groups of user Code example: NOTE, you should NOT change the name of There are currently five types of emails sent in Seafile: The first four types of email are sent immediately. The last type is sent by a background task running periodically. Please add the following lines to If you are using Gmail as the email server, use the following lines: Note: If your email service still does not work, you can check the log file Note2: If you want to use the email service without authentication, leave Note3: About using SSL connection (using port 465) Port 587 is used to establish a connection using STARTTLS and port 465 is used to establish an SSL connection. Starting from Django 1.8, it supports both. If you want to use SSL on port 465, set You can change the reply-to field of emails by adding the following settings to seahub_settings.py. This only affects email sending for file share links.
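The custom user search function described above can be sketched self-contained. In a real deployment the function lives in `conf/seahub_custom_functions/__init__.py` and looks users up via Seahub's `Profile` model; here a plain dict stands in for that lookup, and the first parameter is simplified to the searching user's email (the real hook receives the Django request object), so the filtering logic can be shown on its own:

```python
# Hypothetical, simplified sketch of a custom user search function.
# USER_INSTITUTION is a stand-in for Seahub's Profile table.
USER_INSTITUTION = {
    "alice@example.com": "uni-a",
    "bob@example.com": "uni-a",
    "carol@example.com": "uni-b",
}

def custom_search_user(current_user, emails):
    """Return only the candidate emails that belong to the same
    institution as the user performing the search."""
    inst = USER_INSTITUTION.get(current_user)
    return [e for e in emails if USER_INSTITUTION.get(e) == inst]
```

With the data above, a search by alice only returns users of "uni-a".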
The background task will run periodically to check whether a user has new unread notifications. If there are any, it will send a reminder email to that user. The background email sending task is controlled by The simplest way to customize the email messages is setting the Note: The subject line may vary between releases; this is based on Release 5.0.0. Restart Seahub so that your changes take effect. seahub/seahub/templates/email_base.html Note: You can copy email_base.html to Subject seahub/seahub/auth/forms.py line:127 Body seahub/seahub/templates/registration/password_reset_email.html Note: You can copy password_reset_email.html to Subject seahub/seahub/views/sysadmin.py line:424 Body seahub/seahub/templates/sysadmin/user_add_email.html Note: You can copy user_add_email.html to Subject seahub/seahub/views/sysadmin.py line:1224 Body seahub/seahub/templates/sysadmin/user_reset_email.html Note: You can copy user_reset_email.html to Subject seahub/seahub/share/views.py line:913 Body seahub/seahub/templates/shared_link_email.html seahub/seahub/templates/shared_upload_link_email.html Note: You can copy shared_link_email.html to Subject Body seahub/seahub/notifications/templates/notifications/notice_email.html We provide two ways to deploy Seafile services. Since version 8.0, Docker is the recommended way. LDAP/AD Integration Seafile supports a few Single Sign On authentication protocols. See Single Sign On for a summary. Seafile Server supports the following external authentication types: Since version 11.0, switching between the types is possible, but any switch requires modifications of Seafile's databases. Note: Before manually manipulating your database, make a database backup, so you can restore your system if anything goes wrong! See more about making a database backup. As an organisation grows and its IT infrastructure matures, the migration from local authentication to external authentication like LDAP, SAML, OAUTH is a common requirement.
Fortunately, the switch is comparatively simple. Configure and test the desired external authentication. Note the name of the Determine the ID of the user to be migrated in ccnet_db.EmailUser. For users created before version 11, the ID should be the user's email; for users created after version 11, the ID should be a string like Replace the password hash with an exclamation mark. Create a new entry in Logging in with the password stored in the local database is no longer possible. After logging in via external authentication, the user has access to all their previous libraries. This example shows how to migrate the user with the username This is what the database looks like before the commands are executed: Note: The Afterwards, the databases should look like this: First configure the two external authentications and test them with a dummy user. Then, to migrate all the existing users you only need to make changes to the First, delete the entry in the Then you can reset the user's password, e.g. via the web interface. The user will be assigned a local password, and from then on authentication will be done against Seafile's local database. More details about this option will follow soon. Kerberos is a widely used single sign on (SSO) protocol. Support for auto login uses a Kerberos service. For server configuration, please read the remote user authentication documentation. You have to configure Apache to authenticate with Kerberos. This is out of the scope of this documentation. You can for example refer to this webpage. The client machine has to join the AD domain. In a Windows domain, the Kerberos Key Distribution Center (KDC) is implemented on the domain service. Since the client machine has been authenticated by the KDC when a Windows user logs in, a Kerberos ticket will be generated for the current user without the need for another login in the browser.
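The migration steps described above might be sketched in SQL roughly as follows. This is an illustrative sketch only: the user ID, provider name, and the exact columns of `social_auth_usersocialauth` are assumptions based on a Seafile 11 layout, so verify them against your own database before running anything:

```sql
-- Disable local password login by replacing the hash with '!'
UPDATE ccnet_db.EmailUser
SET passwd = '!'
WHERE email = 'bob@example.com';  -- placeholder user ID

-- Map the user to the external authentication provider
INSERT INTO seahub_db.social_auth_usersocialauth
    (username, uid, provider, extra_data)
VALUES
    ('bob@example.com', 'bob', 'ldap', '');  -- placeholder values
```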
When a program using the WinHttp API tries to connect to a server, it can perform a login automatically through Integrated Windows Authentication. Internet Explorer and SeaDrive both use this mechanism. The details of Integrated Windows Authentication are described below: In short: The Internet Options has to be configured as follows: Open \"Internet Options\", select the \"Security\" tab, select the \"Local Intranet\" zone. Note: The above configuration requires a reboot to take effect. Next, we shall test the auto login function on Internet Explorer: visit the website and click the \"Single Sign-On\" link. You should be logged in directly; otherwise, auto login is malfunctioning. Note: The address in the test must be the same as the address specified in the keytab file. Otherwise, the client machine can't get a valid ticket from Kerberos. SeaDrive will use the Kerberos login configuration from the Windows Registry under The system wide configuration path is located at SeaDrive can be installed silently with the following command (requires admin privileges): The configuration of Internet Options: https://docs.microsoft.com/en-us/troubleshoot/browsers/how-to-configure-group-policy-preference-settings The configuration of Windows Registry: https://thesolving.com/server-room/how-to-deploy-a-registry-key-via-group-policy/ This manual explains how to deploy and run Seafile Server on a Linux server using Kubernetes (k8s hereafter). The two volumes for persisting data, The two tools, kubectl and a k8s control plane tool (i.e., kubeadm), are required and can be installed with the official installation guide. Note that if it is a multi-node deployment, the k8s control plane needs to be installed on each node. After installation, you need to start the k8s control plane service on each node and refer to the k8s official manual for creating a cluster.
Since this manual still uses the same image as the Docker deployment, we need to add the following repository to k8s: Seafile mainly involves three different services, namely the database service, cache service and Seafile service. Since these three services do not have a direct dependency relationship, we need to separate them from the entire docker-compose.yml (in this manual, we use Seafile 11 PRO) and divide them into three pods. For each pod, we need to define a series of YAML files for k8s to read, and we will store these YAMLs in Please replace the above configurations, such as the database root password and the admin account in Seafile. You can use the following command to deploy pods: Similar to the Docker installation, you can also manage containers through some kubectl commands. For example, you can use the following command to check whether the relevant resources are started successfully and whether the relevant services can be accessed normally. First, execute the following command and remember the pod name with You can check the status of a pod by and enter a container by If you modify some configurations in After completing the installation of Seafile Server Community Edition and Seafile Server Professional Edition, communication between the Seafile server and clients runs over (unencrypted) HTTP. While HTTP is ok for testing purposes, switching to HTTPS is imperative for production use. HTTPS requires an SSL certificate from a Certificate Authority (CA). Unless you already have an SSL certificate, we recommend that you get your SSL certificate from Let's Encrypt using Certbot. If you have an SSL certificate from another CA, skip the section \"Getting a Let's Encrypt certificate\". A second requirement is a reverse proxy supporting SSL. Apache, a popular web server and reverse proxy, is a good option. The full documentation of Apache is available at https://httpd.apache.org/docs/. The recommended reverse proxy is Nginx.
You can find instructions for enabling HTTPS with Nginx here. The setup of Seafile using Apache as a reverse proxy with HTTPS is demonstrated using the sample host name This manual assumes the following requirements: If your setup differs from these requirements, adjust the following instructions accordingly. The setup proceeds in two steps: First, Apache is installed. Second, an SSL certificate is integrated in the Apache configuration. Install and enable apache modules: Important: Due to the security advisory published by the Django team, we recommend disabling GZip compression to mitigate the BREACH attack. No version earlier than Apache 2.4 should be used. Modify the Apache config file. For CentOS, this is Getting a Let's Encrypt certificate is straightforward thanks to Certbot. Certbot is a free, open source software tool for requesting, receiving, and renewing Let's Encrypt certificates. First, go to the Certbot website and choose your web server and OS. Second, follow the detailed instructions then shown. We recommend that you get just a certificate and that you modify the Apache configuration yourself: Follow the instructions on the screen. Upon successful verification, Certbot saves the certificate files in a directory named after the host name in To use HTTPS, you need to enable mod_ssl: Then modify your Apache configuration file. Here is a sample: Finally, make sure the virtual host file does not contain syntax errors and restart Apache for the configuration changes to take effect: The The Note: The To improve security, the file server should only be accessible via Apache. Add the following line in the [fileserver] block of After this change, the file server only accepts requests from Apache. Restart the seaf-server and Seahub for the config changes to take effect: If there are problems with paths or files containing spaces, make sure to have at least Apache 2.4.12.
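The Apache reverse-proxy setup described above might look roughly like the following virtual host sketch. The host name and upstream ports are assumptions (Seahub on 8000, the file server on 8082, as in a default installation); adapt paths and certificate directives to your system:

```apache
# Illustrative sketch of an Apache virtual host for Seafile.
<VirtualHost *:80>
    ServerName seafile.example.com
    DocumentRoot /var/www
    AllowEncodedSlashes On
    RewriteEngine On

    # seafile fileserver (default port 8082)
    ProxyPass /seafhttp http://127.0.0.1:8082
    ProxyPassReverse /seafhttp http://127.0.0.1:8082
    RewriteRule ^/seafhttp - [QSA,L]

    # seahub (default port 8000)
    ProxyPass / http://127.0.0.1:8000/
    ProxyPassReverse / http://127.0.0.1:8000/
</VirtualHost>
```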
References After completing the installation of Seafile Server Community Edition and Seafile Server Professional Edition, communication between the Seafile server and clients runs over (unencrypted) HTTP. While HTTP is ok for testing purposes, switching to HTTPS is imperative for production use. HTTPS requires an SSL certificate from a Certificate Authority (CA). Unless you already have an SSL certificate, we recommend that you get your SSL certificate from Let's Encrypt using Certbot. If you have an SSL certificate from another CA, skip the section \"Getting a Let's Encrypt certificate\". A second requirement is a reverse proxy supporting SSL. Nginx, a popular and resource-friendly web server and reverse proxy, is a good option. Nginx's documentation is available at http://nginx.org/en/docs/. If you prefer Apache, you can find instructions for enabling HTTPS with Apache here. The setup of Seafile using Nginx as a reverse proxy with HTTPS is demonstrated using the sample host name This manual assumes the following requirements: If your setup differs from these requirements, adjust the following instructions accordingly. The setup proceeds in two steps: First, Nginx is installed. Second, an SSL certificate is integrated in the Nginx configuration. Install Nginx using the package repositories: After the installation, start the server and enable it so that Nginx starts at system boot: The configuration of a proxy server in Nginx differs slightly between CentOS and Debian/Ubuntu. Additionally, the restrictive default settings of SELinux's configuration on CentOS require a modification.
Switch SELinux into permissive mode and perpetuate the setting: Create a configuration file for seafile in Create a configuration file for seafile in Delete the default files in Create a symbolic link: Copy the following sample Nginx config file into the just created The following options must be modified in the CONF file: Optional customizable options in the seafile.conf are: The default value for Finally, make sure your seafile.conf does not contain syntax errors and restart Nginx for the configuration changes to take effect: Getting a Let's Encrypt certificate is straightforward thanks to Certbot. Certbot is a free, open source software tool for requesting, receiving, and renewing Let's Encrypt certificates. First, go to the Certbot website and choose your web server and OS. Second, follow the detailed instructions then shown. We recommend that you get just a certificate and that you modify the Nginx configuration yourself: Follow the instructions on the screen. Upon successful verification, Certbot saves the certificate files in a directory named after the host name in Add a server block for port 443 and an http-to-https redirect to the This is a (shortened) sample configuration for the host name seafile.example.com: Finally, make sure your seafile.conf does not contain syntax errors and restart Nginx for the configuration changes to take effect: Tip for uploading very large files (> 4GB): By default Nginx will buffer large request bodies in a temporary file. After the body is completely received, Nginx will send the body to the upstream server (seaf-server in our case). But when the file size is very large, the buffering mechanism doesn't seem to work well; it may stop proxying the body in the middle.
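The Nginx proxy configuration described above can be sketched as a server block along the following lines; the host name and upstream ports (Seahub on 8000, the file server on 8082) are the defaults assumed by this sketch:

```nginx
# Illustrative sketch of an Nginx server block for Seafile.
server {
    listen 80;
    server_name seafile.example.com;

    # seahub
    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 1200s;
        client_max_body_size 0;   # no upload size limit at the proxy
    }

    # seafile fileserver
    location /seafhttp {
        rewrite ^/seafhttp(.*)$ $1 break;
        proxy_pass http://127.0.0.1:8082;
        client_max_body_size 0;
        proxy_connect_timeout 36000s;
        proxy_read_timeout 36000s;
    }
}
```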
So if you want to support uploading files larger than 4GB, we suggest you install Nginx version >= 1.8.0 and add the following options to the Nginx config file: If you have WebDAV enabled it is recommended to add the same: The The Note: The To improve security, the file server should only be accessible via Nginx. Add the following line in the After this change, the file server only accepts requests from Nginx. Restart the seaf-server and Seahub for the config changes to take effect: IPv6 is required on the server, otherwise the server will not start! The AAAA DNS record is also required for IPv6 usage. Activate HTTP/2 for better performance. Only available with SSL and Nginx version >= 1.9.5. Simply add The TLS configuration in the sample Nginx configuration file above receives a B overall rating on SSL Labs. By modifying the TLS configuration in The following sample Nginx configuration file for the host name seafile.example.com contains additional security-related directives. (Note that this sample file uses a generic path for the SSL certificate files.) Some of the directives require further steps as explained below. Enable HTTP Strict Transport Security (HSTS) to prevent man-in-the-middle attacks by adding this directive: HSTS instructs web browsers to automatically use HTTPS. That means, after the first visit of the HTTPS version of Seahub, the browser will only use HTTPS to access the site. Enable Diffie-Hellman (DH) key-exchange. Generate DH parameters and write them in a .pem file using the following command: The generation of the DH parameters may take some time depending on the server's processing power. Add the following directive in the HTTPS server block: Disallow the use of old TLS protocols and ciphers. Mozilla provides a configuration generator for optimizing the conflicting objectives of security and compatibility. Visit https://wiki.mozilla.org/Security/Server_Side_TLS#Nginx for more information. NOTE: Since version 7.0, this documentation is deprecated.
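The upload and hardening directives discussed above can be sketched as Nginx config fragments; the dhparam path is an assumption for this sketch:

```nginx
# Inside the HTTPS server block -- illustrative hardening fragment.
listen 443 ssl http2;   # HTTP/2, requires Nginx >= 1.9.5
# HSTS: instruct browsers to use HTTPS only
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
# DH parameters, generated e.g. with: openssl dhparam -out dhparam.pem 2048
ssl_dhparam /etc/nginx/dhparam.pem;

# Inside location /seafhttp -- avoid buffering very large uploads.
proxy_request_buffering off;   # requires Nginx >= 1.8.0
```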
Users should use Apache as a proxy server for Kerberos authentication. Then configure Seahub following the instructions in Remote User Authentication. Kerberos is a widely used single sign on (SSO) protocol. Seafile server supports authentication via Kerberos. It allows users to log in to Seafile without entering credentials again if they have a Kerberos ticket. In this documentation, we assume the reader is familiar with Kerberos installation and configuration. Seahub provides a special URL to handle Kerberos login. The URL is The configuration includes three steps: Store the keytab under the name defined below and make it accessible only to the apache user (e.g. httpd or www-data, and chmod 600). You should create a new location in your virtual host configuration for Kerberos. After restarting Apache, you should see in the Apache logs that user@REALM is used when accessing https://seafile.example.com/krb5-login/. Seahub extracts the username from the Now we have to tell Seahub what to do with the authentication information passed in by Kerberos. Add the following option to seahub_settings.py. After restarting Apache and Seafile services, you can test the Kerberos login workflow. Note: This documentation is for the Community Edition. If you're using Pro Edition, please refer to the Seafile Pro documentation. When Seafile is integrated with LDAP, users in the system can be divided into two tiers: Users within Seafile's internal user database. Some attributes are attached to these users, such as whether it is a system admin user and whether it is activated. Users in LDAP server. These are all the intended users of Seafile inside the LDAP server. Seafile doesn't manipulate these users directly. It has to import them into its internal database before setting attributes on them. When Seafile counts the number of users in the system, it only counts the activated users in its internal database.
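The Kerberos-protected location described above might be sketched as follows, assuming the mod_auth_kerb module; the keytab path and service name are placeholders for this sketch:

```apache
# Illustrative Apache location for the Kerberos login URL.
<Location /krb5-login/>
    SSLRequireSSL
    AuthType Kerberos
    AuthName "Kerberos Login"
    KrbMethodNegotiate On
    KrbMethodK5Passwd Off
    KrbServiceName HTTP/seafile.example.com
    Krb5KeyTab /etc/httpd/conf/http.keytab   # chmod 600, owned by apache user
    Require valid-user
</Location>
```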
The only requirement for Seafile to use LDAP for authentication is that there must be a unique identifier for each user in the LDAP server. This id should also be user-friendly as the users will use it as their username when logging in. Below are some usual options for this unique identifier: Note, the identifier is stored in table Add the following options to Meaning of some options: LDAP_USER_ROLE_ATTR: LDAP user's role attribute LDAP_USER_FIRST_NAME_ATTR: Attribute for user's first name. It's \"givenName\" by default. Tips for choosing To determine the If you want to allow all users to use Seafile, you can use If you want to limit users to a certain OU (Organization Unit), you run AD supports Multiple base DN is useful when your company has more than one OUs to use Seafile. You can specify a list of base DN in the Search filter is very useful when you have a large organization but only a portion of people want to use Seafile. The filter can be given by setting The final filter used for searching for users is For example, add below option to The final search filter would be Note that the case of attribute names in the above example is significant. The You can use the First, you should find out the DN for the group. Again, we'll use the Add below option to If your LDAP service supports TLS connections, you can configure Since Seafile Professional Edition 6.0.0, you can integrate Seafile with Collabora Online to preview office files. Prepare an Ubuntu 20.04 or 22.04 64bit server with Docker installed. Assign a domain name to this server, we use collabora-online.seafile.com here. Obtain and install valid TLS/SSL certificates for this server; we use Let's Encrypt.
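An LDAP configuration along the lines described above might look like this in seahub_settings.py, assuming the option names of Seafile 11+; server URL, DNs, password, and the optional filter are placeholders:

```python
# seahub_settings.py -- illustrative LDAP fragment (Seafile 11+ names).
ENABLE_LDAP = True
LDAP_SERVER_URL = 'ldap://ldap.example.com'          # placeholder
LDAP_BASE_DN = 'ou=users,dc=example,dc=com'          # placeholder
LDAP_ADMIN_DN = 'cn=admin,dc=example,dc=com'         # placeholder
LDAP_ADMIN_PASSWORD = 'secret'                       # placeholder
LDAP_PROVIDER = 'ldap'
LDAP_LOGIN_ATTR = 'mail'   # the unique, user-friendly identifier
# Optional: restrict login to members of a certain group.
LDAP_FILTER = 'memberOf=cn=seafile,ou=groups,dc=example,dc=com'
```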
Then use Nginx to serve collabora online, config file example (source https://sdk.collaboraonline.com/docs/installation/Proxy_settings.html): then use the following command to set up/start Collabora Online (source https://sdk.collaboraonline.com/docs/installation/CODE_Docker_image.html#code-docker-image): NOTE: the For more information about Collabora Online and how to deploy it, please refer to https://www.collaboraoffice.com NOTE: You must enable https with valid TLS/SSL certificates with Seafile to use Collabora Online. Add the following config option to seahub_settings.py: Then restart Seafile. Click an office file in the Seafile web interface, you will see the online preview rendered by LibreOffice online. Here is an example: Understanding how the integration works will help you debug problems. When a user visits a file page: If you have a problem, please check the Nginx log for Seahub (for step 3) and Collabora Online to see which step is wrong. NOTE: The tutorial is only related to Seafile CE edition. First make sure the Python module for MySQL is installed. On Ubuntu/Debian, use Steps to migrate Seafile from SQLite to MySQL: Stop Seafile and Seahub. Download sqlite2mysql.sh and sqlite2mysql.py to the top directory of your Seafile installation path. For example, Run This script will produce three files: Then create 3 databases ccnet_db, seafile_db, seahub_db and a seafile user. Import ccnet data to MySQL. Import seafile data to MySQL. Import seahub data to MySQL.
Modify configuration files: Append the following lines to ccnet.conf: Note: Use Replace the database section in Append the following lines to Restart Seafile and Seahub NOTE User notifications will be cleared during migration due to the slight difference between MySQL and SQLite, if you only see the busy icon when clicking the notifications button beside your avatar, please remove This error typically occurs because the current table being created contains a foreign key that references a table whose primary key has not yet been created. Therefore, please check the database table creation order in the SQL file. The correct order is: and Currently, the status updates of files and libraries on the client and web interface are based on polling the server. The latest status cannot be reflected in real time on the client due to polling delays. The client needs to periodically refresh the library modification, file locking, subdirectory permissions and other information, which causes additional performance overhead on the server. When a directory is opened on the web interface, the lock status of the file cannot be updated in real time, and the page needs to be refreshed. The notification server uses the WebSocket protocol and maintains a two-way communication connection with the client or the web interface. When the above changes occur, seaf-server will notify the notification server of the changes. Then the notification server can notify the client or the web interface in real time. This not only improves the real-time performance, but also reduces the performance overhead of the server. Note, the notification server cannot work if you configure Seafile server with a SQLite database. Since Seafile 10.0.0, you can configure a notification server to send real-time notifications to clients.
In order to run the notification server, you need to add the following configurations under seafile.conf: You can generate jwt_private_key with the following command: We generally recommend deploying the notification server behind Nginx; the notification server can be supported by adding the following Nginx configuration: Or add the configuration for Apache: NOTE: according to the Apache ProxyPass documentation, the final configuration for Apache should be like: After that, you can run the notification server with the following command: When the notification server is working, you can access If the client works with the notification server, there should be a log message in seafile.log or seadrive.log. There are no additional features for the notification server in the Pro Edition. It works the same as in the Community Edition. If you enable clustering, you need to deploy the notification server on one of the servers, or a separate server. The load balancer should forward WebSocket requests to this node. On each Seafile frontend node, the notification server configuration should be the same as in the Community Edition: You need to configure the load balancer according to the following forwarding rules: Here is a configuration that uses HAProxy to support the notification server. HAProxy version needs to be >= 2.0. You should use similar configurations for other load balancers. Since CE version 6.2.3, Seafile supports user login via OAuth. Before using OAuth, the Seafile administrator should first register an OAuth2 client application on your authorization server, then add some configurations to seahub_settings.py. Here we use GitHub as an example. First you should register an OAuth2 client application on GitHub; the official document from GitHub is very detailed. Add the following configurations to seahub_settings.py: NOTE: There are some more explanations about the settings.
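The notification server section of seafile.conf described above can be sketched as follows; the listen address, port, and key are placeholders for this sketch (the key would be generated, e.g., with `openssl rand -base64 32`):

```ini
# seafile.conf -- illustrative notification server fragment
[notification]
enabled = true
# address and port the notification server listens on
host = 127.0.0.1
port = 8083
log_level = info
# shared secret used to sign JWTs; paste your generated key here
jwt_private_key = <generated-secret>
```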
OAUTH_PROVIDER / OAUTH_PROVIDER_DOMAIN OAUTH_ATTRIBUTE_MAP This variable describes which claims from the response of the user info endpoint are to be filled into which attributes of the new Seafile user. The format is shown below: If the remote resource server, like GitHub, uses email to identify a unique user too, Seafile will use the GitHub id directly; the OAUTH_ATTRIBUTE_MAP setting for GitHub should be like this: The key part Since version 11.0, Seafile uses If you upgrade from a version below 11.0, you need to have both fields configured, i.e., your configuration should be like: In this way, when a user logs in, Seafile will first use the \"id -> email\" map to find the old user and then create a \"uid -> uid\" map for this old user. After all users have logged in once, you can delete the configuration If you use a newly deployed 11.0 Seafile instance, you don't need the For GitHub, To enable OAuth via GitLab. Create an application in GitLab (under Admin area->Applications). Fill in the required fields: Name: a name you specify Redirect URI: The callback URL, see below Trusted: Skip confirmation dialog page. Select this to not ask users whether they want to authorize Seafile to access their account data. Scopes: Select Press submit and copy the client id and secret you receive on the confirmation page and use them in this template for your seahub_settings.py: For users of Azure Cloud, as there is no Please see this tutorial for the complete deployment process of OAuth against Azure Cloud. From 8.0.0, Seafile supports the OCM protocol. With OCM, users can share libraries with other servers that have OCM enabled too. Seafile currently supports sharing between Seafile servers with version greater than 8.0, and sharing from Nextcloud to Seafile since 9.0. Note that these two functions cannot be enabled at the same time. Add the following configuration to OCM_REMOTE_SERVERS is a list of servers that you allow your users to share libraries with.
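An OAuth configuration along the lines of the GitHub example above might be sketched as follows in seahub_settings.py; the client id/secret and redirect URL are placeholders, and the `uid` mapping assumes the Seafile 11+ behaviour described above:

```python
# seahub_settings.py -- illustrative OAuth fragment (GitHub example).
ENABLE_OAUTH = True
OAUTH_CLIENT_ID = 'your-client-id'             # placeholder
OAUTH_CLIENT_SECRET = 'your-client-secret'     # placeholder
OAUTH_REDIRECT_URL = 'https://seafile.example.com/oauth/callback/'
OAUTH_PROVIDER_DOMAIN = 'github.com'
OAUTH_AUTHORIZATION_URL = 'https://github.com/login/oauth/authorize'
OAUTH_TOKEN_URL = 'https://github.com/login/oauth/access_token'
OAUTH_USER_INFO_URL = 'https://api.github.com/user'
OAUTH_SCOPE = ['user']
OAUTH_ATTRIBUTE_MAP = {
    'id': (True, 'uid'),             # required, unique identifier
    'email': (False, 'contact_email'),
    'name': (False, 'name'),
}
```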
Add the following configuration to In the library sharing dialog, jump to \"Share to other server\", you can share this library to users of another server with \"Read-Only\" or \"Read-Write\" permission. You can also view shared records and cancel sharing. You can jump to the \"Shared from other servers\" page to view the libraries shared by other servers and cancel the sharing. And enter the library to view, download or upload files. From version 6.1.0+ on (including CE), Seafile supports OnlyOffice to view/edit office files online. In order to use OnlyOffice, you must first deploy an OnlyOffice server. You can deploy OnlyOffice to the same machine as Seafile using the Note: Using the official documentation to deploy to the same machine as Seafile server is no longer recommended after 12.0. From Seafile 12.0, OnlyOffice's JWT verification will be forced to enable Secure communication between Seafile and OnlyOffice is granted by a shared secret. You can get the JWT secret by the following command Download the insert Also modify By default OnlyOffice will use port 6233 for communication between Seafile and the Document Server. You can modify the bound port by specifying The following configuration options are only for OnlyOffice experts.
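The OnlyOffice integration settings referred to above can be sketched as follows; the Document Server URL and the JWT secret are placeholders for this sketch:

```python
# seahub_settings.py -- illustrative OnlyOffice fragment.
ENABLE_ONLYOFFICE = True
VERIFY_ONLYOFFICE_CERTIFICATE = True
# URL of the Document Server's api.js (placeholder host)
ONLYOFFICE_APIJS_URL = 'https://onlyoffice.example.com/web-apps/apps/api/documents/api.js'
ONLYOFFICE_FILE_EXTENSION = ('doc', 'docx', 'ppt', 'pptx',
                             'xls', 'xlsx', 'odt', 'odp', 'ods')
ONLYOFFICE_EDIT_FILE_EXTENSION = ('docx', 'pptx', 'xlsx')
# must match the secret configured on the OnlyOffice server
ONLYOFFICE_JWT_SECRET = 'shared-jwt-secret'    # placeholder
```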
You can create and mount a custom configuration file called For example, you can configure OnlyOffice to automatically save by copying the following code block into this file: Mount this config file into your onlyoffice block in For more information you can check the official documentation: https://api.onlyoffice.com/editors/signature/ and https://github.com/ONLYOFFICE/Docker-DocumentServer#available-configuration-parameters By default, OnlyOffice will use the database information related to First, you need to make sure the database service is started, and enter the seafile-mysql container In the container, you need to create the database After the installation process is finished, visit this page to make sure you have deployed OnlyOffice successfully: Firstly, run If it shows this error message and you haven't enabled JWT while using a local network, then it's likely due to an error triggered proactively by the OnlyOffice server for enhanced security. (https://github.com/ONLYOFFICE/DocumentServer/issues/2268#issuecomment-1600787905) So, as mentioned in the post, we highly recommend enabling JWT in your integrations to fix this problem. Starting from OnlyOffice Docker-DocumentServer version 7.2, JWT is enabled by default on the OnlyOffice server. So, for security reasons, please Configure OnlyOffice to use JWT Secret. In general, you only need to specify the values of the following fields in For deployments using the The Seafile Add-in for Outlook natively supports authentication via username and password. In order to authenticate with SSO, the add-in utilizes the SSO support integrated in Seafile's web interface, Seahub. Specifically, this is how SSO with the add-in works: This document explains how to configure Seafile and the reverse proxy and how to deploy the PHP script. SSO authentication must be configured in Seafile. Seafile Server must be version 8.0 or above. The packages php, composer, firebase-jwt, and guzzle must be installed.
PHP can usually be downloaded and installed via the distribution's official repositories. firebase-jwt and guzzle are installed using composer. First, install the php package and check the installed version: Second, install composer. You can find an up-to-date installation manual at https://getcomposer.org/ for CentOS, Debian, and Ubuntu. Third, use composer to install firebase-jwt and guzzle in a new directory in Add this block to the config file Replace SHARED_SECRET with a secret of your own. The configuration depends on the proxy server in use. If you use nginx, add the following location block to the nginx configuration: This sample block assumes that PHP 7.4 is installed. If you have a different PHP version on your system, modify the version in the fastcgi_pass unix socket path. Note: The alias path can be altered. We advise against it unless there are good reasons. If you do, make sure you modify the path accordingly in all subsequent steps. Finally, check the nginx configuration and restart nginx: The PHP script and corresponding configuration files will be saved in the new directory created earlier. Change into it and add a PHP config file: Paste the following content in the First, replace SEAFILE_SERVER_URL with the URL of your Seafile Server and SHARED_SECRET with the key used in Configuring Seahub. Second, add either the user credentials of a Seafile user with admin rights or the API-token of such a user. In the next step, create the Paste the following code block: Note: Contrary to the config.php, no replacements or modifications are necessary in this file. The directory layout in Seafile and Seahub are now configured to support SSO in the Seafile Add-in for Outlook. You can now test SSO authentication in the add-in. Hit the SSO button in the settings of the Seafile add-in. Starting from 7.0.0, Seafile can integrate with various Single Sign On systems via a proxy server.
Examples include Apache as Shibboleth proxy, or LemonLDAP as a proxy to LDAP servers, or Apache as Kerberos proxy. Seafile can retrieve user information from special request headers (HTTP_REMOTE_USER, HTTP_X_AUTH_USER, etc.) set by the proxy servers. After the proxy server (Apache/Nginx) has successfully authenticated the user, the user information is set in the request headers, and Seafile creates and logs in the user based on this information. Note: Make sure that the proxy server has a corresponding security mechanism to protect against forged request header attacks. Please add the following settings to Then restart Seafile. Shibboleth is a widely used single sign-on (SSO) protocol. Seafile supports authentication via Shibboleth. It allows users from another organization to log in to Seafile without registering an account on the service provider. In this documentation, we assume the reader is familiar with Shibboleth installation and configuration. For an introduction to Shibboleth concepts, please refer to https://shibboleth.atlassian.net/wiki/spaces/CONCEPT/overview . The Shibboleth Service Provider (SP) should be installed on the same server as the Seafile server. The official SP from https://shibboleth.net/ is implemented as an Apache module. The module handles all Shibboleth authentication details. Seafile server receives authentication information (username) from the HTTP request. The username can then be used as the login name for the user. Seahub provides a special URL to handle Shibboleth login. The URL is Since Shibboleth support requires Apache, if you want to use Nginx, you need two servers, one for non-Shibboleth access, and another configured with Apache to allow Shibboleth login. In a cluster environment, you can configure your load balancer to direct traffic to different servers according to the URL. Only the URL The configuration includes 3 steps: We use CentOS 7 as an example. You should create a new virtual host configuration for Shibboleth, and then restart Apache.
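The header-based authentication settings in `seahub_settings.py` typically look like this (a sketch based on Seafile's remote-user options; verify the exact option names against your version's documentation):

```python
# Sketch for seahub_settings.py -- authenticate via a header set by the proxy.
ENABLE_REMOTE_USER_AUTHENTICATION = True

# Which request header carries the username (HTTP_REMOTE_USER, HTTP_X_AUTH_USER, ...)
REMOTE_USER_HEADER = 'HTTP_REMOTE_USER'

# Automatically create accounts for users that do not exist yet
REMOTE_USER_CREATE_UNKNOWN_USER = True
```
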
Installation and configuration of Shibboleth is out of the scope of this documentation. You can refer to the official Shibboleth documentation. Open Change Seahub extracts the username from the In Seafile, only one of the following two attributes can be used for the username: Change Change Open Uncomment the attribute elements for getting more user info: After restarting Apache, you should be able to get the Service Provider metadata by accessing https://your-seafile-domain/Shibboleth.sso/Metadata. This metadata should be uploaded to the Identity Provider (IdP) server. Add the following configuration to seahub_settings.py. Seahub can process additional user attributes from Shibboleth. These attributes are saved into Seahub's database as user properties. None of them are mandatory. The internal user properties Seahub now supports are: You can specify the mapping between Shibboleth attributes and Seahub's user properties in seahub_settings.py: In the above config, the hash key is the Shibboleth attribute name, and the second element in the hash value is Seahub's property name. You can adjust the Shibboleth attribute names for your own needs. Note that you may have to change attribute-map.xml in your Shibboleth SP, so that the desired attributes are passed to Seahub. And you have to make sure the IdP sends these attributes to the SP. We also added an option Shibboleth has a field called affiliation. It is a list like: We are able to set the user role from Shibboleth. For details about user roles, please refer to https://download.seafile.com/published/seafile-manual/deploy_pro/roles_permissions.md To enable this, modify Then add a new config to define the affiliation role map, After Shibboleth login, Seafile will calculate the user's role from the affiliation and SHIBBOLETH_AFFILIATION_ROLE_MAP. After restarting Apache and the Seahub service ( If you encounter problems when logging in, follow these steps to get debug info (for Seafile Pro 6.3.13).
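An affiliation role map could be sketched like this (the domains and role names below are placeholder examples, not values from your deployment):

```python
# Sketch for seahub_settings.py -- map Shibboleth affiliations to Seafile roles.
# All domains and role names are placeholders.
SHIBBOLETH_AFFILIATION_ROLE_MAP = {
    'employee@example.com': 'staff',
    'member@example.com': 'staff',
    'student@example.com': 'student',
    # 'patterns' allows wildcard matching, evaluated in order
    'patterns': (
        ('*@*.example.com', 'guest'),
        ('*@*', 'default'),
    ),
}
```
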
Open Insert the following code at line 59 Insert the following code at line 65 The complete code after these changes is as follows: Then restart Seafile and log in again; you will see debug info on the web page. Seafile supports most of the popular single sign-on authentication protocols. Some are included in the Community Edition, some are only in the Pro Edition. In the Community Edition: Kerberos authentication can be integrated by using Apache as a proxy server and following the instructions in Remote User Authentication and Auto Login SeaDrive on Windows. In Pro Edition: Firstly, you should create a script to activate the python virtual environment, which goes in the ${seafile_dir} directory. Put another way, it does not go in "seafile-server-latest", but the directory above that. Throughout this manual the examples use /opt/seafile for this directory, but you might have chosen to use a different directory. The content of the file is: Make this script executable: The content of the file is: The content of the file is: Create systemd service files, change ${seafile_dir} to your seafile installation location and seafile to the user who runs Seafile (if appropriate). Then you need to reload systemd's daemons: systemctl daemon-reload. The content of the file is: Create the systemd service file /etc/systemd/system/seahub.service The content of the file is: Create the systemd service file /etc/systemd/system/seafile-client.service You need to create this service file only if you have the seafile console client and you want to run it on system boot. The content of the file is: Files in the Seafile system are split into blocks, which means that what is stored on your Seafile server are not complete files, but blocks. This design facilitates effective data deduplication. However, administrators sometimes want to access the files directly on the server. You can use seaf-fuse to do this. Seaf-fuse has been available since Seafile Server '''2.1.0'''. '''Note:''' * Encrypted folders can't be accessed by seaf-fuse.
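A seafile.service unit might be sketched as follows (the installation path and the service user are examples, matching the /opt/seafile convention used elsewhere in this manual — adjust them to your setup):

```ini
# Sketch: /etc/systemd/system/seafile.service
[Unit]
Description=Seafile server
After=network.target mysql.service

[Service]
Type=forking
# Paths assume an installation under /opt/seafile run by the "seafile" user
ExecStart=/opt/seafile/seafile-server-latest/seafile.sh start
ExecStop=/opt/seafile/seafile-server-latest/seafile.sh stop
User=seafile
Group=seafile

[Install]
WantedBy=multi-user.target
```

After placing the unit file, run `systemctl daemon-reload` and `systemctl enable seafile` as described above.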
* Currently the implementation is '''read-only''', which means you can't modify the files through the mounted folder. * On Debian/CentOS systems, you need to be in the "fuse" group to have the permission to mount a FUSE folder. Assume we want to mount to '''Note:''' Before starting seaf-fuse, you should have started the seafile server with Now you can list the content of From the above list you can see, under the folder of a user there are subfolders, each of which represents a library of that user, and has a name of this format: '''{library_id}-{library-name}'''. If you get an error message saying "Permission denied" when running seaf-server and seafile-controller support reopening log files on receiving a This feature is very useful when you need to rotate log files but don't want to shut down the server. All you need to do now is rotate the log file on the fly. For Debian, the default directory for logrotate should be Assuming your seaf-server's log file is set up to The configuration for logrotate could be like this: You can save this file, in Debian for example, at This manual explains how to deploy and run Seafile Server Community Edition (Seafile CE) on a Linux server from a pre-built package using MySQL/MariaDB as database. The deployment has been tested on Debian/Ubuntu and CentOS, but Seafile should also work on other Linux distributions. Tip: If you have little experience with Seafile Server, we recommend that you use an installation script for deploying Seafile. Seafile CE for the x86 architecture requires a minimum of 2 cores and 2GB RAM. There is a community-supported package for installation on the Raspberry Pi. Seafile supports MySQL and MariaDB. We recommend that you use the preferred SQL database management engine included in the package repositories of your distribution. This means: You can find step-by-step how-tos for installing MySQL and MariaDB in the tutorials on the Digital Ocean website.
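Such a logrotate policy could be sketched like this (the log path and pid-file location are assumptions based on the /opt/seafile layout used in this manual; the essential part is the postrotate step that sends the signal so seaf-server reopens its log file):

```shell
# Sketch: write a logrotate policy for seaf-server to /tmp for inspection.
# In production this would go to e.g. /etc/logrotate.d/seafile; all paths
# below are examples -- adjust to your installation.
cat > /tmp/seafile-logrotate <<'EOF'
/opt/seafile/logs/seafile.log
{
        daily
        missingok
        rotate 7
        compress
        delaycompress
        # seaf-server reopens its log file when it receives SIGUSR1
        postrotate
                [ ! -f /opt/seafile/pids/seaf-server.pid ] || kill -USR1 `cat /opt/seafile/pids/seaf-server.pid`
        endscript
}
EOF
echo "wrote /tmp/seafile-logrotate"
```
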
Seafile uses the mysql_native_password plugin for authentication. The versions of MySQL and MariaDB installed on CentOS 8, Debian 10, and Ubuntu 20.04 use a different authentication plugin by default. It is therefore required to change the authentication plugin to mysql_native_password for the root user prior to the installation of Seafile. The above mentioned tutorials explain how to do it. For Seafile 8.0.x For Seafile 9.0.x Note: CentOS 8 is no longer supported. For Seafile 10.0.x For Seafile 11.0.x (Debian 11, Ubuntu 22.04, etc.) For Seafile 11.0.x on Debian 12 and Ubuntu 24.04 with virtual env Debian 12 and Ubuntu 24.04 now discourage system-wide installation of Python modules with pip. It is now preferred to install modules into a virtual environment, which keeps them separate from the files installed by the system package manager, and enables different versions to be installed for different applications. For these Python virtual environments (venv for short) to work, you have to activate the venv to make the packages installed in it available to the programs you run. That is done here with "source python-venv/bin/activate". The standard directory for Seafile's program files is The program directory can be changed. The standard directory It is good practice not to run applications as root. Create a new user and follow the instructions on the screen: Change ownership of the created directory to the new user: All the following steps are done as user seafile. Change to user seafile: Download the install package from the download page on Seafile's website using wget. We use Seafile CE version 8.0.4 as an example in the rest of this manual. The install package is downloaded as a compressed tarball which needs to be uncompressed. Uncompress the package using tar: Now you have: The install package comes with a script that sets Seafile up for you. Specifically, the script creates the required directories and extracts all files in the right place.
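Switching the root user to mysql_native_password can be sketched as follows (MySQL 8 syntax shown; the exact statement varies slightly between MySQL and MariaDB versions, and the password is a placeholder):

```sql
-- Sketch: make the root user authenticate with mysql_native_password (MySQL 8).
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'your-password';
FLUSH PRIVILEGES;
```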
It can also create a MySQL user and the three databases that Seafile's components require: Note: While the ccnet server was merged into the seafile-server in Seafile 8.0, the corresponding database is still required for the time being. Run the script as user seafile: Configure your Seafile Server by specifying the following three parameters: In the next step, choose whether to create new databases for Seafile or to use existing databases. The creation of new databases requires the root password for the SQL server. When choosing "[1] Create new ccnet/seafile/seahub databases", the script creates these databases and a MySQL user that Seafile Server will use to access them. To this effect, you need to answer these questions: When choosing "[2] Use existing ccnet/seafile/seahub databases", these are the prompts you need to answer: If the setup is successful, you see the following output: The directory layout then looks as follows: The folder Note: If you don't have the root password, you need someone who has the privileges, e.g., the database admin, to create the three databases required by Seafile, as well as a MySQL user who can access the databases. For example, to create three databases Seahub caches items (avatars, profiles, etc.) on the file system by default (/tmp/seahub_cache/). You can replace this with Memcached or Redis. Use the following commands to install memcached and the corresponding libraries on your system: Add the following configuration to Redis is supported since version 11.0. First, install Redis with the package installers of your OS. Then refer to Django's documentation about using Redis cache to add Redis configurations to Seafile's config files as created by the setup script are prepared for Seafile running behind a reverse proxy.
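The memcached configuration in `seahub_settings.py` typically looks like this (a sketch; the backend class and address should be checked against your Seafile version's documentation):

```python
# Sketch for seahub_settings.py -- use memcached as Seahub's cache backend.
CACHES = {
    'default': {
        'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',
        'LOCATION': '127.0.0.1:11211',
    },
}
```
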
To access Seafile's web interface and to create working sharing links without a reverse proxy, you need to modify two configuration files in Run the following commands in The first time you start Seahub, the script prompts you to create an admin account for your Seafile Server. Enter the email address of the admin user followed by the password. Now you can access Seafile via the web interface at the host address and port 8000 (e.g., http://1.2.3.4:8000) Note: On CentOS, the firewall blocks traffic on port 8000 by default. If seafile.sh and/or seahub.sh fail to run successfully, use Use It is strongly recommended to switch from unencrypted HTTP (via port 8000) to encrypted HTTPS (via port 443). This manual provides instructions for enabling HTTPS for the two most popular web servers and reverse proxies: Since Community Edition 5.1.2 and Professional Edition 5.1.4, Seafile supports using syslog. Add the following configuration to Restart the seafile server, and you will find the following logs in Add the following configuration to Restart the seafile server, and you will find the following logs in Add the following configurations to You need to install the ffmpeg package to make video thumbnails work correctly: Ubuntu 16.04 CentOS 7 Debian Jessie Now configure accordingly in There are two ways to deploy Seafile Pro Edition. Since version 8.0, the recommended way to install Seafile Pro Edition is using Docker. You can add/edit roles and permissions for administrators. Seafile has four built-in admin roles: default_admin, has all permissions. system_admin, can only view system info and config system. daily_admin, can only view system info, view statistics, manage library/user/group, and view user logs. audit_admin, can only view system info and admin logs. All administrators will have Seafile supports eight permissions for now. Its configuration is very similar to that of common user roles; you can customize it by adding the following settings to When you have both Java 6 and Java 7 installed, the default Java may not be Java 7.
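Enabling syslog output is a small switch in `seafile.conf` (a sketch; verify the option name against your version's documentation):

```ini
# Sketch for seafile.conf -- send seaf-server logs to syslog
[general]
enable_syslog = true
```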
Do this by typing If the default Java is Java 6, then do On Debian/Ubuntu: On CentOS/RHEL: The above command will ask you to choose one of the installed Java versions as default. You should choose Java 7 here. After that, re-run Reference link To use ADFS to log in to your Seafile, you need the following components: A Windows Server with ADFS installed. For configuring and installing ADFS you can see this article. A valid SSL certificate for the ADFS server; here we use adfs-server.adfs.com as the example domain name. A valid SSL certificate for the Seafile server; here we use demo.seafile.com as the example domain name. You can generate them by: ``` openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout sp.key -out sp.crt sudo apt install xmlsec1 sudo pip install cryptography djangosaml2==0.15.0 from os import path import saml2 import saml2.saml CERTS_DIR = '/seahub-data/certs' SP_SERVICE_URL = 'https://demo.seafile.com' XMLSEC_BINARY = '/usr/local/bin/xmlsec1' ATTRIBUTE_MAP_DIR = '/seafile-server-latest/seahub-extra/seahub_extra/adfs_auth/attribute-maps' SAML_ATTRIBUTE_MAPPING = { 'DisplayName': ('display_name', ), 'ContactEmail': ('contact_email', ), 'Deparment': ('department', ), 'Telephone': ('telephone', ), } ``` Update the 'idp' section in SAML_CONFIG according to your situation, and leave the others as default: ``` ENABLE_ADFS_LOGIN = True EXTRA_AUTHENTICATION_BACKENDS = ( 'seahub_extra.adfs_auth.backends.Saml2Backend', ) SAML_USE_NAME_ID_AS_USERNAME = True LOGIN_REDIRECT_URL = '/saml2/complete/' SAML_CONFIG = { # full path to the xmlsec1 binary program 'xmlsec_binary': XMLSEC_BINARY, } ``` Relying Party Trust is the connection between Seafile and ADFS. Log into the ADFS server and open the ADFS management.
Double click Trust Relationships, then right click Relying Party Trusts, select Add Relying Party Trust... Select Import data about the relying party published online or on a local network, input Then Next until Finish. Add Relying Party Claim Rules Relying Party Claim Rules are used for attribute communication between Seafile and users in the Windows domain. Important: Users in the Windows domain must have the E-mail value set. Right-click on the relying party trust and select Edit Claim Rules... On the Issuance Transform Rules tab select Add Rules... Select Send LDAP Attribute as Claims as the claim rule template to use. Give the claim a name such as LDAP Attributes. Set the Attribute Store to Active Directory, the LDAP Attribute to E-Mail-Addresses, and the Outgoing Claim Type to E-mail Address. Select Finish. Click Add Rule... again. Select Transform an Incoming Claim. Give it a name such as Email to Name ID. The Incoming claim type should be E-mail Address (it must match the Outgoing Claim Type in rule #1). The Outgoing claim type is Name ID (this is required by Seafile's settings), and the Outgoing name ID format is Email. Pass through all claim values and click Finish. https://support.zendesk.com/hc/en-us/articles/203663886-Setting-up-single-sign-on-using-Active-Directory-with-ADFS-and-SAML-Plus-and-Enterprise- http://wiki.servicenow.com/?title=Configuring_ADFS_2.0_to_Communicate_with_SAML_2.0#gsc.tab=0 https://github.com/rohe/pysaml2/blob/master/src/saml2/saml.py The following section needs to be added to docker-compose.yml in the services section Add this to seafile.conf Wait a few minutes until ClamAV has finished initializing. Then ClamAV can be used. You should run Clamd with root permission to be able to scan any files. Edit the conf The output must include: Update: Since Seafile Pro server 6.0.0, cluster deployment requires "sticky session" settings in the load balancer. Otherwise folder download on the web UI sometimes doesn't work properly.
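The virus-scan section added to `seafile.conf` typically takes this shape (a sketch based on Seafile's ClamAV integration; verify the option names against your version's documentation):

```ini
# Sketch for seafile.conf -- scan uploaded files with clamdscan.
[virus_scan]
scan_command = clamdscan
# clamdscan exits 1 when a virus is found, 0 when the file is clean
virus_code = 1
nonvirus_code = 0
```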
Read the "Load Balancer Setting" section below for details. The Seafile cluster solution employs a 3-tier architecture: This architecture scales horizontally. That means you can handle more traffic by adding more machines. The architecture is visualized in the following picture. There are two main components on the Seafile server node: the web server (Nginx/Apache) and the Seafile app server. The web server passes requests from the clients to the Seafile app server. The Seafile app servers work independently. They don't know about each other's state. That means each app server can fail independently without affecting other app server instances. The load balancer is responsible for detecting failure and re-routing requests. Even though Seafile app servers work independently, they still have to share some session information. All shared session information is stored in the memory cache. Thus, all Seafile app servers have to connect to the same memory cache server (cluster). Since Pro Edition 11.0, both memcached and Redis can be used as memory cache. Before 11.0, only memcached is supported. More details about memory cache configuration are available later. The background server is the workhorse for various background tasks, including full-text indexing, office file preview, virus scanning, and LDAP syncing. It should usually be run on a dedicated server for better performance. Currently only one background task server can be running in the entire cluster. If more than one background server is running, they may conflict with each other when doing some tasks. If you need HA for the background task server, you can consider using Keepalived to build a hot backup for it. More details can be found in background server setup. All Seafile app servers access the same set of user data. The user data has two parts: one in the MySQL database and the other in the backend storage cluster (S3, Ceph, etc.). All app servers serve the data equally to the clients.
All app servers have to connect to the same database or database cluster. We recommend using MariaDB Galera Cluster if you need a database cluster. There are a few steps to deploy a Seafile cluster: At least 3 Linux servers with at least 4 cores and 8GB RAM. Two servers work as frontend servers, while one server works as the background task server. Virtual machines are sufficient for most cases. In a small cluster, you can re-use the 3 Seafile servers to run the memcached cluster and MariaDB cluster. For larger clusters, you can have 3 more dedicated servers to run the memcached cluster and MariaDB cluster. Because the load on these two clusters is not high, they can share the hardware to save cost. Documentation about how to set up a memcached cluster and MariaDB cluster can be found here. Since version 11.0, Redis can also be used as the memory cache server. But currently only single-node Redis is supported. On each node, you need to install some python libraries. First make sure you have installed Python 2.7, then: If you receive an error stating "Wheel installs require setuptools >= ...", run this between the pip and boto lines above You should make sure the config files on every Seafile server are consistent. Put the license you get under the top level directory. In our wiki, we use the directory Now you have: Please follow Download and Setup Seafile Professional Server With MySQL to set up a single Seafile server node. Note: Use the load balancer's address or domain name for the server address. Don't use the local IP address of each Seafile server machine. This ensures the user will always access your service via the load balancers. After the setup process is done, you still have to make a few manual changes to the config files. If you use a single memcached server, you have to add the following configuration to If you use a memcached cluster, the recommended way to set up memcached clusters can be found here. You'll set up two memcached servers in active/standby mode.
A floating IP address will be assigned to the current active memcached node, so you have to configure the address in seafile.conf accordingly. If you are using Redis as cache, add the following configurations: Currently only single-node Redis is supported. Redis Sentinel or Cluster is not supported yet. (Optional) The Seafile server also opens a port for the load balancers to run health checks. Seafile by default uses port 11001. You can change this by adding the following config option to You must set up and use memory cache when deploying a Seafile cluster. Refer to "memory cache" to configure memory cache in Seahub. Also add the following options to seahub_settings.py. These settings tell Seahub to store avatars in the database and cache avatars in memcached, and to store the css CACHE in local memory. Add the following to Here is an example Note: In a cluster environment, we have to store avatars in the database instead of on a local disk. You also need to add the settings for backend cloud storage systems to the config files. You need to set up Nginx/Apache with HTTP on each machine running Seafile server. This makes sure only port 80 needs to be exposed to the load balancer. (HTTPS should be set up at the load balancer.) Please check the following documents on how to set up HTTP with Nginx/Apache. Note, you only need the HTTP setup part of the documents. (HTTPS is not needed.) Once you have finished configuring this single node, start it to test if it runs properly: Note: The first time you start seahub, the script will prompt you to create an admin account for your Seafile server. Open your browser, visit Now you have one node working fine, let's continue to configure more nodes. Suppose your Seafile installation directory is On each node, run In the backend node, you need to execute the following command to start the Seafile server. CLUSTER_MODE=backend means this node is a Seafile backend server. It would be convenient to set up the Seafile service to start on system boot.
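The cluster-related part of `seafile.conf` on a frontend node could be sketched as follows (the memcached address is an example; the health-check port shows the default):

```ini
# Sketch for seafile.conf on a frontend node (values are examples).
[cluster]
enabled = true
memcached_options = --SERVER=192.168.1.134:11211 --POOL-MIN=10 --POOL-MAX=100
# port answered for load-balancer health checks (default 11001)
health_check_port = 11001
```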
Follow this documentation to set it up on all nodes. Besides the standard ports of a Seafile server, there are 2 firewall rule changes for a Seafile cluster: Now that your cluster is already running, fire up the load balancer and welcome your users. Since version 6.0.0, Seafile Pro requires "sticky session" settings in the load balancer. You should refer to the manual of your load balancer for how to set up sticky sessions. In the AWS ELB management console, after you've added the Seafile server instances to the instance list, you should do two more configurations. First you should set up HTTP(S) listeners. Ports 443 and 80 of the ELB should be forwarded to ports 80 or 443 of the Seafile servers. Then you set up a health check Refer to the AWS documentation about how to set up sticky sessions. This is a sample (Assume your health check port is Now you should be able to test your cluster. Open https://seafile.example.com in your browser and enjoy. You can also synchronize files with Seafile clients. If the above works, the next step would be Enable search and background tasks in a cluster. Here is the summary of configurations at the front-end node that are related to the cluster setup. (for version 7.1+) For seafile.conf: The For seahub_settings.py: For seafevents.conf: The The following options can be set in seafevents.conf to control the behavior of file search. You need to restart seafile and seahub to make them take effect. Full text search is not enabled by default to save system resources. If you want to enable it, you need to follow the instructions below. First you have to set the value of Then restart the seafile server You need to delete the existing search index and recreate it. You can rebuild the search index by running: If this does not work, you can try the following steps: Create an elasticsearch service on AWS according to the documentation.
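The search-related options in `seafevents.conf` typically take this shape (a sketch; the interval value is an example and the default):

```ini
# Sketch for seafevents.conf -- full text search settings.
[INDEX FILES]
enabled = true
# how often the search index is updated
interval = 10m
# also index office/pdf file contents, not only file names
index_office_pdf = true
```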
Configure the seafevents.conf: NOTE: The version of the Python third-party package The search index is updated every 10 minutes by default. So before the first index update is performed, you get nothing no matter what you search. To be able to search immediately, This is because the server cannot index encrypted files, since they are encrypted. The search functionality is based on Elasticsearch, which is a Java process. You can modify the memory size by modifying the JVM configuration file. For example, to use 2GB of memory, modify the following configuration in the Restart the seafile service to make the above changes take effect: If you use a cluster to deploy Seafile, you can use distributed indexing to achieve real-time indexing and improve indexing efficiency. The indexing process is as follows: First, install redis on all frontend nodes (if you use a Redis cloud service, skip this step and modify the configuration files directly): For Ubuntu: For CentOS: Then, install the python redis third-party package on all frontend nodes: Next, modify the Next, modify the Next, restart Seafile to make the configuration take effect: First, prepare a seafes master node and several seafes slave nodes; the number of slave nodes depends on your needs. Deploy Seafile on these nodes, and copy the configuration files in the Next, create a configuration file Execute Next, create a configuration file Execute Note The index worker connects to the backend storage directly. You don't need to run seaf-server on the index worker node. To rebuild the search index, execute in the To list the number of indexing tasks currently remaining, execute in the The above commands need to be run on the master node. This manual explains how to deploy and run Seafile Server Professional Edition (Seafile PE) on a Linux server from a pre-built package using MySQL/MariaDB as database. The deployment has been tested on Debian/Ubuntu and CentOS, but Seafile PE should also work on other Linux distributions.
Tip: If you have little experience with Seafile Server, we recommend that you use an installation script for deploying Seafile Server. Seafile PE requires a minimum of 2 cores and 2GB RAM. If elasticsearch is installed on the same server, the minimum requirements are 4 cores and 4 GB RAM. Seafile PE can be used without a paid license with up to three users. Licenses for more users can be purchased in the Seafile Customer Center; alternatively, contact Seafile Sales at sales@seafile.com or one of our partners. These instructions assume that the MySQL/MariaDB server and client are installed and that a MySQL/MariaDB root user can authenticate using the mysql_native_password plugin. For Seafile 8.0.x For Seafile 9.0.x For Seafile 10.0.x For Seafile 11.0.x (Debian 11, Ubuntu 22.04, CentOS 8, etc.) Note: The recommended deployment option for Seafile PE on CentOS/Redhat is Docker. For Seafile 11.0.x on Debian 12 and Ubuntu 24.04 with virtual env Debian 12 and Ubuntu 24.04 now discourage system-wide installation of Python modules with pip. It is now preferred to install modules into a virtual environment, which keeps them separate from the files installed by the system package manager, and enables different versions to be installed for different applications. For these Python virtual environments (venv for short) to work, you have to activate the venv to make the packages installed in it available to the programs you run. That is done here with "source python-venv/bin/activate". The Java Runtime Environment (JRE) is a requirement for full text search with ElasticSearch. It is used in extracting contents from PDF and Office files. The standard directory for Seafile's program files is The program directory can be changed. The standard directory Elasticsearch, the indexing server, cannot be run as root. More generally, it is good practice not to run applications as root.
Create a new user and follow the instructions on the screen: Change ownership of the created directory to the new user: All the following steps are done as user seafile. Change to user seafile: Save the license file in Seafile's program directory The install packages for Seafile PE are available for download in the Seafile Customer Center. To access the Customer Center, a user account is necessary. Registration is free. Beginning with Seafile PE 7.0.17, the Seafile Customer Center provides two install packages for every version (using Seafile PE 8.0.4 as an example): The former is suitable for installation on Ubuntu/Debian servers, the latter for CentOS servers. Download the install package using wget (replace the x.x.x with the version you wish to download): We use Seafile version 8.0.4 as an example in the remainder of these instructions. The install package is downloaded as a compressed tarball which needs to be uncompressed. Uncompress the package using tar: Now you have: Note: The names of the install packages differ for Seafile CE and Seafile PE. Using Seafile CE and Seafile PE 8.0.4 as an example, the names are as follows: The setup process of Seafile PE is the same as that of Seafile CE. See Installation of Seafile Server Community Edition with MySQL/MariaDB. After the successful completion of the setup script, the directory layout of Seafile PE looks as follows (some folders only get created after the first start, e.g. For Seafile 7.1.x and later Memory cache is mandatory for the Pro Edition. You may use Memcached or Redis as cache server. Use the following commands to install memcached and the corresponding libraries on your system: Add the following configuration to Redis is supported since version 11.0. First, install Redis with the package installers of your OS. Then refer to Django's documentation about using Redis cache to add Redis configurations to You need to at least set up HTTP to make Seafile's web interface work.
This manual provides instructions for enabling HTTP/HTTPS for the two most popular web servers and reverse proxies: Run the following commands in The first time you start Seahub, the script prompts you to create an admin account for your Seafile Server. Enter the email address of the admin user followed by the password. Now you can access Seafile via the web interface at the host address (e.g., http://1.2.3.4:80). Seafile uses the indexing server ElasticSearch to enable full text search. In versions prior to Seafile 9.0, Seafile's install packages included ElasticSearch, so a separate deployment was not necessary. Due to licensing conditions, ElasticSearch 7.x can no longer be bundled in Seafile's install package. As a consequence, a separate deployment of ElasticSearch is required to enable full text search in the newest Seafile versions. Our recommendation for deploying ElasticSearch is to use Docker. Detailed information about installing Docker on various Linux distributions is available at Docker Docs. Seafile PE 9.0 only supports ElasticSearch 7.x. Seafile PE 10.0 and 11.0 only support ElasticSearch 8.x. We use ElasticSearch version 7.16.2 as an example in this section. Version 7.16.2 and newer versions have been successfully tested with Seafile. Pull the Docker image: Create a folder for persistent data created by ElasticSearch and change its permission: Now start the ElasticSearch container using the docker run command: Add the following configuration to Finally, restart Seafile: In a Seafile cluster, only one server should run the background tasks, including: Let's assume you have three nodes in your cluster: A, B, and C. If you followed the steps on setting up a cluster, nodes B and C should already be configured as frontend nodes. You can copy the configuration of node B as a base for node A.
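The docker run command could be sketched as follows (the heap size and data directory are example values; adjust them to your hardware and the folder created in the permission step):

```shell
# Sketch: run ElasticSearch 7.16.2 in single-node mode for Seafile PE 9.0.
# /opt/seafile-elasticsearch/data must exist and be writable (see the
# permission step above); the heap size (-Xms/-Xmx) is an example value.
sudo docker run -d --name es \
    -p 9200:9200 \
    -e "discovery.type=single-node" \
    -e "ES_JAVA_OPTS=-Xms1g -Xmx1g" \
    -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data \
    elasticsearch:7.16.2
```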
Then do the following steps: On Ubuntu/Debian: On CentOS/Red Hat: Edit seafevents.conf and ensure this line does NOT exist: Edit seafevents.conf, adding the following configuration: host is the IP address of the background node; make sure the frontend nodes can access the background node via IP:6000. Edit seafile.conf to enable virus scan according to the virus scan document In your firewall rules for node A, you should open the port 9200 (for search requests) and port 6000 for the office converter. For versions older than 6.1: On nodes B and C, you need to: Edit Edit seahub_settings.py and add a line: Type the following commands to start the background node (Note, one additional command To stop the background node, type: You should also configure Seafile background tasks to start on system bootup. For systemd-based OSes, you can add Then enable this task in systemd: Here is the summary of configurations at the background node that are related to the clustering setup. For seafile.conf: For seafevents.conf: If you followed the steps on setting up a cluster, nodes B and C should already be configured as frontend nodes. You can copy the configuration of node B as a base for node A. Then do the following steps: Since 9.0, the ElasticSearch program is not part of the Seafile package. You should deploy the ElasticSearch service separately. Then edit Edit seafile.conf to enable virus scan according to the virus scan document On nodes B and C, you need to: Edit Edit seahub_settings.py and add a line: Type the following commands to start the background node (Note, one additional command To stop the background node, type: You should also configure Seafile background tasks to start on system bootup. For systemd-based OSes, you can add Then enable this task in systemd: Here is the summary of configurations at the background node that are related to the clustering setup.
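As an illustrative sketch of the seafevents.conf fragment on the background node (the bind address and port are assumptions; 6000 mirrors the office-converter port the text says frontend nodes must be able to reach):

```ini
[OFFICE CONVERTER]
# Illustrative: run the office converter on the background node and
# listen on all interfaces so frontend nodes can reach it on port 6000.
enabled = true
host = 0.0.0.0
port = 6000
```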
For seafile.conf: For seafevents.conf: When Seafile is integrated with LDAP, users in the system can be divided into two tiers: Users within Seafile's internal user database. Some attributes are attached to these users, such as whether the user is a system admin and whether the account is activated. Users in the LDAP server. These are all the intended users of Seafile inside the LDAP server. Seafile doesn't manipulate these users directly. It has to import them into its internal database before setting attributes on them. When Seafile counts the number of users in the system, it only counts the activated users in its internal database. The only requirement for Seafile to use LDAP for authentication is that there must be a unique identifier for each user in the LDAP server. This ID should also be user-friendly, as users will use it as their username when logging in. Below are some usual options for this unique identifier: Note, the identifier is stored in table Add the following options to Meaning of some options: LDAP_USER_ROLE_ATTR: LDAP user's role attribute LDAP_USER_FIRST_NAME_ATTR: Attribute for user's first name. It's "givenName" by default. Tips for choosing To determine the If you want to allow all users to use Seafile, you can use If you want to limit users to a certain OU (Organization Unit), you run AD supports In Seafile Pro, besides importing users into the internal database when they log in, you can also configure Seafile to periodically sync user information from the LDAP server into the internal database. User's full name, department and contact email address can be synced to the internal database. Users can use this information to more easily search for a specific user. User's Windows or Unix login ID can be synced to the internal database. This allows the user to log in with their familiar login ID. When a user is removed from LDAP, the corresponding user in Seafile will be deactivated. Otherwise, they could still sync files with the Seafile client or access the web interface.
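A minimal sketch of the basic LDAP options in seahub_settings.py, assuming an Active Directory setup; every host, DN, password and attribute name here is a hypothetical placeholder, not from this document:

```python
# Illustrative LDAP settings (all values are assumptions).
ENABLE_LDAP = True
LDAP_SERVER_URL = 'ldap://ad.example.com'
LDAP_BASE_DN = 'ou=staff,dc=example,dc=com'
LDAP_ADMIN_DN = 'administrator@example.com'
LDAP_ADMIN_PASSWORD = 'secret'
# The unique, user-friendly identifier discussed above.
LDAP_LOGIN_ATTR = 'userPrincipalName'
```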
After synchronization is complete, you can see the user's full name, department and contact email on their profile page. Add the following options to Meaning of some options: The users imported with the above configuration will be activated by default. Some organizations with a large number of users may want to import user information (such as user full name) without activating the imported users. Activating all imported users will require licenses for all users in LDAP, which may not be affordable. Seafile provides a combination of options for such a use case. You can modify the option below in This prevents Seafile from activating imported users. Then, add the option below to This option will automatically activate users when they log in to Seafile for the first time. When you set the However, sometimes it's desirable to auto-reactivate such users. You can modify the option below in To test your LDAP sync configuration, you can run the sync command manually. To trigger LDAP sync manually: For Seafile Docker The importing or syncing process maps groups from the LDAP directory server to groups in Seafile's internal database. This process is one-way. Any changes to groups in the database won't propagate back to LDAP; Any changes to groups in the database, except for "setting a member as group admin", will be overwritten in the next LDAP sync operation. If you want to add or delete members, you can only do that on the LDAP server. The creator of imported groups will be set to the system admin. There are two modes of operation: Periodical: the syncing process will be executed at a fixed interval Manual: there is a script you can run to trigger the syncing once Before enabling LDAP group sync, you should have configured LDAP authentication. See Basic LDAP Integration for details.
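For a binary-package installation, the manual sync trigger is typically invoked from the installation directory; this is a sketch and the path is an assumption:

```shell
# Illustrative: run from seafile-server-latest of a binary-package
# installation to trigger an LDAP sync once.
./pro/pro.py ldapsync
```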
The following are LDAP group sync related options: Meaning of some options: Note: The search base for groups is the option Some LDAP servers, such as Active Directory, allow a group to be a member of another group. This is called "group nesting". If we find a nested group B in group A, we should recursively add all the members from group B into group A. And group B should still be imported as a separate group. That is, all members of group B are also members in group A. In some LDAP servers, such as OpenLDAP, it's common practice to use Posix groups to store group membership. To import Posix groups as Seafile groups, set A department in Seafile is a special group. In addition to what you can do with a group, there are two key new features for departments: Departments support hierarchy. A department can have any number of levels of sub-departments. Departments can have storage quotas. Seafile supports syncing OU (Organizational Units) from AD/LDAP to departments. The sync process keeps the hierarchical structure of the OUs. Options for syncing departments from OU: Periodical sync won't happen immediately after you restart the Seafile server. It gets scheduled after the first sync interval. For example, if you set the sync interval to 30 minutes, the first auto sync will happen 30 minutes after you restart. To sync immediately, you need to manually trigger it. After the sync is run, you should see log messages like the following in logs/seafevents.log. And you should be able to see the groups on the system admin page. To trigger LDAP sync manually, For Seafile Docker Multiple base DN is useful when your company has more than one OU using Seafile. You can specify a list of base DN in the Search filter is very useful when you have a large organization but only a portion of the people want to use Seafile.
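A hedged sketch of the group-sync options in seahub_settings.py; the object class and member attribute assume a typical Active Directory schema and are illustrative:

```python
# Illustrative group-sync settings (attribute names are assumptions
# based on a common Active Directory schema).
ENABLE_LDAP_GROUP_SYNC = True
LDAP_GROUP_OBJECT_CLASS = 'group'
LDAP_GROUP_MEMBER_ATTR = 'member'
# Minutes between periodic syncs; the first run happens one interval
# after the server starts, as described above.
LDAP_SYNC_INTERVAL = 30
```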
The filter can be given by setting The final filter used for searching for users is For example, add the option below to The final search filter would be Note that the case of attribute names in the above example is significant. The You can use the First, you should find out the DN for the group. Again, we'll use the Add the option below to If your LDAP service supports TLS connections, you can configure LDAP protocol version 3 supports the "paged results" (PR) extension. When you have a large number of users, this option can greatly improve the performance of listing users. Most directory servers nowadays support this extension. In Seafile Pro Edition, add this option to Seafile Pro Edition supports automatically following referrals in LDAP search. This is useful for partitioned LDAP or AD servers, where users may be spread across multiple directory servers. For more information about referrals, you can refer to this article. To configure, add the option below to Seafile Pro Edition supports multiple LDAP servers; you can configure two LDAP servers to work with Seafile. With multiple LDAP servers, when getting or searching for an LDAP user, Seafile iterates over all configured LDAP servers until a match is found; when listing all LDAP users, it iterates over all LDAP servers to get all users; for LDAP sync, it syncs all user/group info from all configured LDAP servers to Seafile. Currently, only two LDAP servers are supported. If you want to use multiple LDAP servers, please replace Note: There are still some config options that are shared by all LDAP servers, as follows: If you sync users from LDAP to Seafile, and want Seafile to find the existing account for a user when that user logs in via SSO (ADFS or OAuth) instead of creating a new one, you can set Note, here the UID means the unique user ID, in LDAP it is the attribute you use for Seafile Pro Edition supports syncing roles from LDAP or Active Directory. To enable this feature, add the option below to Note: You should only define one of the two functions.
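An illustrative sketch combining a multiple-base-DN list and a search filter; the OUs, group DN and separator shown are assumptions for illustration only:

```python
# Illustrative: two OUs separated by ';' (all DNs are assumptions).
LDAP_BASE_DN = 'ou=developers,dc=example,dc=com;ou=marketing,dc=example,dc=com'
# Illustrative: restrict login to members of one AD group.
LDAP_FILTER = 'memberOf=CN=seafile-users,CN=Users,DC=example,DC=com'
```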
You can rewrite the function (in Python) to make your own mapping rules. If the file or function doesn't exist, the first entry in role_list will be synced. For high availability, it is recommended to set up a memcached cluster and a MariaDB Galera cluster for a Seafile cluster. This documentation will provide information on how to do this with 3 servers. You can either use 3 dedicated servers or use the 3 Seafile server nodes. Seafile servers share session information within memcached. So when you set up a Seafile cluster, there needs to be a memcached server (cluster) running. The simplest way is to use a single-node memcached server. But when this server fails, some functions in the web UI of Seafile cannot work. So for HA, it's usually desirable to have more than one memcached server. We recommend setting up two independent memcached servers, in active/standby mode. A floating IP address (or Virtual IP address in some contexts) is assigned to the current active node. When the active node goes down, Keepalived will migrate the virtual IP to the standby node. So you actually use a single-node memcached, but use Keepalived (or other alternatives) to provide high availability. After installing memcached on each server, you need to make some modifications to the memcached config file. NOTE: Please configure memcached to start on system startup. Install and configure Keepalived. Modify the Keepalived config file On active node On standby node NOTE: Please adjust the network device names accordingly. virtual_ipaddress is the floating IP address in use. A MariaDB cluster helps you remove a single point of failure from the cluster architecture. Every update in the database cluster is synchronously replicated to all instances. You can choose between two different setups: We refer to the documentation from the MariaDB team: Seafile supports data migration between file system, S3, Ceph, Swift and Alibaba OSS.
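A hedged sketch of the keepalived.conf fragment on the active node (interface name, router ID, priority and the floating IP are all assumptions; the standby node would use state BACKUP and a lower priority):

```
vrrp_instance VI_1 {
    state MASTER            # use BACKUP on the standby node
    interface eth0          # adjust the network device name
    virtual_router_id 51
    priority 100            # use a lower value on the standby node
    advert_int 1
    virtual_ipaddress {
        192.168.1.100       # the floating IP assigned to the active node
    }
}
```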
Data migration takes 3 steps: We need to add new backend configurations to this file (including If you want to migrate to a local file system, the seafile.conf temporary configuration example is as follows: Replace the configurations with your own choice. If you have millions of objects in the storage (especially fs objects), it may take quite a long time to migrate all objects. More than half of the time is spent on checking whether an object exists in the destination storage. Since Pro edition 7.0.8, a feature was added to speed up the checking. Before running the migration script, please set this env variable: 3 files will be created: When you run the script for the first time, the object list file will be filled with existing objects in the destination. Then, when you run the script for the second time, it will load the existing object list from the file, instead of querying the destination. And newly migrated objects will also be added to the file. During migration, the migration process checks whether an object exists by checking the pre-loaded object list, instead of asking the destination, which greatly speeds up the migration process. It's suggested that you don't interrupt the script during the "fetch object list" stage when you run it for the first time. Otherwise the object list in the file will be incomplete. Another trick to speed up the migration is to increase the number of worker threads and the size of the task queue in the migration script. You can modify the The number of workers can be set to relatively large values, since they're mostly waiting for I/O operations to finish. If you have an encrypted storage backend (a deprecated feature that is no longer supported), you can use this script to migrate and decrypt the data from that backend to a new one. You can add the This step will migrate most of the objects from the source storage to the destination storage. You don't need to stop the Seafile service at this stage as it may take quite a long time to finish.
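The environment variable mentioned above can be sketched as follows; the variable name matches what this feature uses in recent Pro editions, but the path is an assumption:

```shell
# Illustrative: directory where the migration script keeps its object
# list state files (path is an assumption; create it beforehand).
export OBJECT_LIST_FILE_PATH=/opt/migrate-state
```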
Since the service is not stopped, some new objects may be added to the source storage during migration. Those objects will be handled in the next step. We assume you have installed Seafile Pro server under Please note that this script is completely reentrant. So you can stop and restart it, or run it many times. It will check whether an object exists in the destination before sending it. New objects added during the last migration step will be migrated in this step. To prevent new objects being added, you have to stop the Seafile service during the final migration operation. This usually takes a short time. If you have a large number of objects, please follow the optimization instructions in the previous section. You just have to stop the Seafile and Seahub services, then run the migration script again. After running the script, we need to replace the original seafile.conf with the new one: now it only has configurations about the backend; more config options, e.g. memcache and quota, can then be copied from the original seafile.conf file. After replacing seafile.conf, you can restart the Seafile server and access the data on the new backend. It's quite likely you have deployed the Seafile Community Server and want to switch to the Professional Server, or vice versa. But there are some restrictions: That means, if you are using Community Server version 9.0, and want to switch to the Professional Server 10.0, you must first upgrade to Community Server version 10.0, and then follow the guides below to switch to the Professional Server 10.0. (The last minor version number in 10.0.x is not important.) The package poppler-utils is required for full text search of PDF files. On Ubuntu/Debian: We assume you already have deployed Seafile Community Server 10.0.0 under If the license you received is not named seafile-license.txt, rename it to seafile-license.txt. Then put the license file under the top level directory.
In our example, it is You should uncompress the tarball to the top level directory of your installation, in our example it is Now you have: You should notice the difference between the names of the Community Server and Professional Server. Take the 10.0.0 64bit version as an example: The migration script is going to do the following for you: Now you have: Using memory cache is mandatory in Pro Edition. You may use Memcached or Redis as the cache server. Use the following commands to install memcached and the corresponding libraries on your system: Add the following configuration to Redis is supported since version 11.0. First, install Redis with the package installer of your OS. Then refer to Django's documentation about using Redis cache to add Redis configurations to Stop Seafile Professional Server if it's running Run the minor-upgrade script to fix symbolic links Start Seafile Community Server Starting from version 5.1, you can add institutions into Seafile and assign users into institutions. Each institution can have one or more administrators. This feature is to ease user administration when multiple organizations (universities) share a single Seafile instance. Unlike multi-tenancy, the users are not isolated. A user from one institution can share files with another institution. In or if After restarting Seafile, a system admin can add institutions by adding the institution name in the admin panel. The admin can also click into an institution, which will list all users whose If you are using Shibboleth, you can map a Shibboleth attribute into an institution. For example, the following configuration maps the organization attribute to the institution. The multi-tenancy feature is designed for hosting providers that want to host several customers in a single Seafile instance. You can create multiple organizations. Organizations are isolated from each other. Users can't share libraries between organizations.
An organization can be created by the system admin in “admin panel->organization->Add organization”. Every organization has a URL prefix. This field is for future usage. When a user creates an organization, a URL like org1 will be automatically assigned. After creating an organization, the first user will become the admin of that organization. The organization admin can add other users. Note, the system admin can't add users. The system admin has to complete the following steps. First, install the xmlsec1 package: Second, prepare the SP (Seafile) certificate directory and SP certificates: Create sp certs dir The SP certificate can be generated by the openssl command, or you can obtain one from a certificate authority; it is up to you. For example, generate the SP certs using the following command: Note: The Finally, add the following configuration to seahub_settings.py and then restart Seafile: Note: If the xmlsec1 binary is not located in View where the xmlsec1 binary is located: Note: If certificates are not placed in Please refer to this document. There are some use cases where supporting multiple storage backends in Seafile server is needed, such as: The library data in Seafile server are spread across multiple storage backends at the granularity of libraries. All the data in a library will be located in the same storage backend. The mapping from library to its storage backend is stored in a database table. Different mapping policies can be chosen based on the use case. To use this feature, you need to: In Seafile server, a storage backend is represented by the concept of "storage class". A storage class is defined by specifying the following information: commit, fs, and blocks can be stored in different storages. This provides the most flexible way to define storage classes.
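A hedged sketch of the JSON file defining one storage class; the storage_id, display name and paths are illustrative assumptions, and a real file may list several such objects:

```json
[
  {
    "storage_id": "hot_storage",
    "name": "Hot Storage",
    "is_default": true,
    "commits": {"backend": "fs", "dir": "/storage/seafile/seafile-data"},
    "fs":      {"backend": "fs", "dir": "/storage/seafile/seafile-data"},
    "blocks":  {"backend": "fs", "dir": "/storage/seafile/seafile-data"}
  }
]
```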
As Seafile server before version 6.3 doesn't support multiple storage classes, you have to explicitly enable this new feature and define storage classes with a different syntax than how storage backends were defined before. First, you have to enable this feature in seafile.conf. You also need to add memory cache configurations to If installing Seafile as Docker containers, place the For example, if the configuration of the Then place the JSON file within any sub-directory of You also need to add memory cache configurations to The JSON file is an array of objects. Each object defines a storage class. The fields in the definition correspond to the information we need to specify for a storage class. Below is an example: As you may have seen, the If you use file system as storage for Note: Currently file system, S3 and Swift backends are supported. Ceph/RADOS is also supported since version 7.0.14. Library mapping policies decide the storage class a library uses. Currently we provide 3 policies for 3 different use cases. The storage class of a library is decided on creation and stored in a database table. The storage class of a library won't change if the mapping policy is changed later. Before choosing your mapping policy, you need to enable the storage classes feature in seahub_settings.py: This policy lets the users choose which storage class to use when creating a new library. The users can select any storage class that's been defined in the JSON file. To use this policy, add the following options in seahub_settings.py: If you enable storage class support but don't explicitly set Due to storage cost or management considerations, sometimes a system admin wants to make different types of users use different storage backends (or classes). You can configure a user's storage classes based on their roles. A new option Here are the sample options in seahub_settings.py to use this policy: This policy maps libraries to storage classes based on the library ID.
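A sketch of the seahub_settings.py fragment for the user-select policy; the policy name reflects the "let users choose" policy described above, while the JSON file path is an assumption:

```python
# Illustrative: enable storage classes and let users pick one when
# creating a library.
ENABLE_STORAGE_CLASSES = True
STORAGE_CLASS_MAPPING_POLICY = 'USER_SELECT'
# Assumed location of the storage-class definition file.
STORAGE_CLASSES_FILE = '/opt/seafile/conf/storage_classes.json'
```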
The ID of a library is a UUID. In this way, the data in the system can be evenly distributed among the storage classes. Note that this policy is not designed to be a complete distributed storage solution. It doesn't handle automatic migration of library data between storage classes. If you need to add more storage classes to the configuration, existing libraries will stay in their original storage classes. New libraries can be distributed among the new storage classes (backends). You still have to plan the total storage capacity of your system at the beginning. To use this policy, you first add the following options in seahub_settings.py: Then you can add option Run the repo_id is optional; if not specified, all libraries will be migrated. Before running the migration script, you can set the For example: This will create three files in the specified path (/opt): Run the In Seafile Professional Server Version 4.4.0 (or above), you can use Microsoft Office Online Server (formerly named Office Web Apps) to preview documents online. Office Online Server provides the best preview for all Office format files. It also supports collaborative editing of Office files directly in the web browser. For organizations with a Microsoft Office Volume License, it's free to use Office Online Server. For more information about Office Online Server and how to deploy it, please refer to https://technet.microsoft.com/en-us/library/jj219455(v=office.16).aspx. Notice: Seafile only supports Office Online Server 2016 and above. To use Office Online Server for preview, please add the following config option to seahub_settings.py. Then restart After you click the document you specified in seahub_settings.py, you will see the new preview page. Understanding how the web app integration works is going to help you debug the problem. When a user visits a file page: Please check the Nginx log for Seahub (for step 3) and Office Online Server to see which step is wrong.
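A hedged sketch of the Office Online Server options in seahub_settings.py; the discovery URL and extension list are illustrative assumptions:

```python
# Illustrative Office Online Server integration settings.
ENABLE_OFFICE_WEB_APP = True
# Assumed discovery endpoint of your Office Online Server.
OFFICE_WEB_APP_BASE_URL = 'https://oos.example.com/hosting/discovery'
# File types to open with Office Online Server (assumed subset).
OFFICE_WEB_APP_FILE_EXTENSION = ('doc', 'docx', 'ppt', 'pptx', 'xls', 'xlsx')
```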
You should make sure you have configured at least a few GB of paging files in your Windows system. Otherwise the IIS worker processes may die randomly when handling Office Online requests. You can add/edit roles and permissions for users. A role is just a group of users with some pre-defined permissions; you can toggle user roles on the user list page in the admin panel. The Since version 10.0, Since version 11.0.9 pro, Seafile comes with two built-in roles While a guest user can only read files/folders in the system, here are the permissions for a guest user: If you want to edit the permissions of built-in roles, e.g. default users can invite guests, or guest users can view repos in the organization, you can add the following lines to A user who has In order to use this feature, in addition to granting After restarting, users who have Users can invite a guest user by providing their email address; the system will email the invite link to the user. Tip: If you want to block certain email addresses for the invitation, you can define a blacklist, e.g. After that, email address "a@a.com", any email address ending with "@a-a-a.com" and any email address ending with "@foo.com" or "@bar.com" will not be allowed. If you want to add a new role and assign some users to this role, e.g. new role In this document, we use the Microsoft Azure SAML single sign-on app and Microsoft on-premise ADFS to show how Seafile integrates SAML 2.0. Other SAML 2.0 providers should be similar. First, install the xmlsec1 package: Second, prepare the SP (Seafile) certificate directory and SP certificates: Create certs dir The SP certificate can be generated by the openssl command, or you can obtain one from a certificate authority; it is up to you.
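An illustrative subset of a role-permission override in seahub_settings.py; the permission keys shown are assumptions used only to sketch the shape of the setting:

```python
# Illustrative: tweak permissions of the two built-in roles
# (keys shown are assumptions, not an exhaustive list).
ENABLED_ROLE_PERMISSIONS = {
    'default': {
        'can_add_repo': True,
        'can_invite_guest': True,   # let default users invite guests
    },
    'guest': {
        'can_view_org': True,       # let guests view repos in the organization
    },
}
```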
For example, generate the SP certs using the following command: Note: The If you use a Microsoft Azure SAML app to achieve single sign-on, please follow the steps below: First, add the SAML single sign-on app and assign users, refer to: add an Azure AD SAML application, create and assign users. Second, set up the Identifier, Reply URL, and Sign on URL of the SAML app based on your service URL, refer to: enable single sign-on for the SAML app. The format of the Identifier, Reply URL, and Sign on URL is: https://example.com/saml2/metadata/, https://example.com/saml2/acs/, https://example.com/, e.g.: Next, edit SAML attributes & claims. Keep the default attributes & claims of the SAML app unchanged, the uid attribute must be added, the mail and name attributes are optional, e.g.: Next, download the base64 format SAML app's certificate and rename it to idp.crt: and put it under the certs directory( Next, copy the metadata URL of the SAML app: and paste it into the Next, add Note: If the xmlsec1 binary is not located in View where the xmlsec1 binary is located: Note: If certificates are not placed in Finally, open the browser and enter the Seafile login page, click If you use Microsoft ADFS to achieve single sign-on, please follow the steps below: First, please make sure the following preparations are done: A Windows Server with ADFS installed. For configuring and installing ADFS you can see this article. A valid SSL certificate for the ADFS server, and here we use A valid SSL certificate for the Seafile server, and here we use Second, download the base64 format certificate and upload it: Navigate to the AD FS management window. In the left sidebar menu, navigate to Services > Certificates. Locate the Token-signing certificate. Right-click the certificate and select View Certificate. In the dialog box, select the Details tab. Click Copy to File. In the Certificate Export Wizard that opens, click Next. Select Base-64 encoded X.509 (.CER), then click Next. Name it idp.crt, then click Next.
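A hedged sketch of the SAML options in seahub_settings.py for an Azure app; the metadata URL placeholder and the certs path are assumptions, not values from this document:

```python
# Illustrative SAML settings (all values are assumptions).
ENABLE_ADFS_LOGIN = True
# Assumed: the metadata URL copied from the Azure SAML app.
SAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/<tenant-id>/federationmetadata/2007-06/federationmetadata.xml'
# Assumed location of the certs directory containing idp.crt.
SAML_CERTS_DIR = '/opt/seafile/seahub-data/certs'
```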
Click Finish to complete the download. And then put it under the certs directory( Next, add the following configurations to seahub_settings.py and then restart Seafile: Next, add a relying party trust: Log into the ADFS server and open the ADFS management. Under Actions, click Add Relying Party Trust. On the Welcome page, choose Claims aware and click Start. Select Import data about the relying party published online or on a local network, type your metadata URL in Federation metadata address (host name or URL), and then click Next. Your metadata URL format is: On the Specify Display Name page type a name in Display name, e.g. In the Choose an access control policy window, select Permit everyone, then click Next. Review your settings, then click Next. Click Close. Next, create claims rules: Open the ADFS management, click Relying Party Trusts. Right-click your trust, and then click Edit Claim Issuance Policy. On the Issuance Transform Rules tab click Add Rules. Click the Claim rule template dropdown menu and select Send LDAP Attributes as Claims, and then click Next. In the Claim rule name field, type the display name for this rule, such as Seafile Claim rule. Click the Attribute store dropdown menu and select Active Directory. In the LDAP Attribute column, click the dropdown menu and select User-Principal-Name. In the Outgoing Claim Type column, click the dropdown menu and select UPN. And then click Finish. Click Add Rule again. Click the Claim rule template dropdown menu and select Transform an Incoming Claim, and then click Next. In the Claim rule name field, type the display name for this rule, such as UPN to Name ID. Click the Incoming claim type dropdown menu and select UPN (it must match the Outgoing Claim Type in rule Click OK to add both new rules. Note: When creating claims rules, you can also select other LDAP Attributes, such as E-Mail-Addresses, depending on your ADFS service.
Finally, open the browser and enter the Seafile login page, click This feature is deprecated. We recommend using the encryption feature provided by the storage system. Since Seafile Professional Server 5.1.3, we support storage backend encryption functionality. When enabled, all Seafile objects (commit, fs, block) will be encrypted with the AES 256 CBC algorithm, before writing them to the storage backend. Currently supported backends are: file system, Ceph, Swift and S3. Note that all objects will be encrypted with the same global key/iv pair. The key/iv pair has to be generated by the system admin and stored safely. If the key/iv pair is lost, the data cannot be recovered. Go to /seafile-server-latest, execute By default, the key/iv pair will be saved to a file named seaf-key.txt in the current directory. You can use the '-p' option to change the path. Add the following configuration to seafile.conf: Now the encryption feature should be working. If you have existing data in the Seafile server, you have to migrate/encrypt the existing data. You must stop the Seafile server before migrating the data. Create new configuration and data directories for the encrypted data. If you configured an S3/Swift/Ceph backend, edit /conf-enc/seafile.conf. You must use a different bucket/container/pool to store the encrypted data. Then add the following configuration to /conf-enc/seafile.conf Go to /seafile-server-latest, use the seaf-encrypt.sh script to migrate the data. Run If there are error messages after executing seaf-encrypt.sh, you can fix the problem and run the script again. Objects that have already been migrated will not be copied again. Go to , execute the following commands: Restart Seafile Server. If everything works okay, you can remove the backup directories.
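A sketch of the seafile.conf fragment pointing at the generated key file; the path is an assumption and should match wherever you stored the output of the key-generation step:

```ini
[store_crypt]
# Illustrative: path to the key/iv file produced by the key
# generation step (location is an assumption; store it safely).
key_path = /opt/seafile/seaf-key.txt
```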
BY INSTALLING OR USING THE SOFTWARE, YOU AGREE TO BE BOUND BY THE FOLLOWING TERMS AND CONDITIONS. IF YOU DO NOT AGREE TO THE FOLLOWING TERMS AND CONDITIONS, DO NOT INSTALL OR USE THE SOFTWARE. "Seafile Ltd." means Seafile Ltd. "You and Your" means the party licensing the Software hereunder. "Software" means the computer programs provided under the terms of this license by Seafile Ltd. together with any documentation provided therewith. The License granted for Software under this Agreement authorizes You on a non-exclusive basis to use the Software. The Software is licensed, not sold to You and Seafile Ltd. reserves all rights not expressly granted to You in this Agreement. The License is personal to You and may not be assigned by You to any third party. Subject to the receipt by Seafile Ltd. of the applicable license fees, You have the right to use the Software as follows: The inclusion of source code with the License is explicitly not for your use to customize a solution or re-use in your own projects or products. The benefit of including the source code is for purposes of security auditing. You may modify the code only for emergency bug fixes that impact security or performance and only for use within your enterprise. You may not create or distribute derivative works based on the Software or any part thereof. If you need enhancements to the software features, you should suggest them to Seafile Ltd. for version improvements. You acknowledge that all copies of the Software in any form are the sole property of Seafile Ltd.. You have no right, title or interest to any such Software or copies thereof except as provided in this Agreement. You hereby acknowledge and agree that the Software constitutes and contains valuable proprietary products and trade secrets of Seafile Ltd., embodying substantial creative efforts and confidential information, ideas, and expressions.
You agree to treat, and take precautions to ensure that your employees and other third parties treat, the Software as confidential in accordance with the confidentiality requirements herein. EXCEPT AS OTHERWISE SET FORTH IN THIS AGREEMENT THE SOFTWARE IS PROVIDED TO YOU \"AS IS\", AND Seafile Ltd. MAKES NO EXPRESS OR IMPLIED WARRANTIES WITH RESPECT TO ITS FUNCTIONALITY, CONDITION, PERFORMANCE, OPERABILITY OR USE. WITHOUT LIMITING THE FOREGOING, Seafile Ltd. DISCLAIMS ALL IMPLIED WARRANTIES INCLUDING, WITHOUT LIMITATION, ANY IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE OR FREEDOM FROM INFRINGEMENT. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION OF IMPLIED WARRANTIES, SO THE ABOVE EXCLUSIONS MAY NOT APPLY TO YOU. THE LIMITED WARRANTY HEREIN GIVES YOU SPECIFIC LEGAL RIGHTS, AND YOU MAY ALSO HAVE OTHER RIGHTS THAT VARY FROM ONE JURISDICTION TO ANOTHER. YOU ACKNOWLEDGE AND AGREE THAT THE CONSIDERATION WHICH Seafile Ltd. IS CHARGING HEREUNDER DOES NOT INCLUDE ANY CONSIDERATION FOR ASSUMPTION BY Seafile Ltd. OF THE RISK OF YOUR CONSEQUENTIAL OR INCIDENTAL DAMAGES WHICH MAY ARISE IN CONNECTION WITH YOUR USE OF THE SOFTWARE. ACCORDINGLY, YOU AGREE THAT Seafile Ltd. SHALL NOT BE RESPONSIBLE TO YOU OR ANY THIRD PARTY FOR ANY LOSS-OF-PROFIT, LOST SAVINGS, INDIRECT, INCIDENTAL, SPECIAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF A LICENSING OR USE OF THE SOFTWARE. You agree to defend, indemnify and hold Seafile Ltd. and its employees, agents, representatives and assigns harmless from and against any claims, proceedings, damages, injuries, liabilities, costs, attorney's fees relating to or arising out of Your use of the Software or any breach of this Agreement. Your license is effective until terminated. You may terminate it at any time by destroying the Software or returning all copies of the Software to Seafile Ltd.. 
Your license will terminate immediately without notice if You breach any of the terms and conditions of this Agreement, including non-payment or incomplete payment of the license fee. Upon termination of this Agreement for any reason: You will uninstall all copies of the Software; You will immediately cease and desist all use of the Software; and You will destroy all copies of the Software in your possession. Seafile Ltd. has the right, but no obligation, to periodically update the Software, at its complete discretion, without the consent of, or obligation to, You or any licensee or user. YOU HEREBY ACKNOWLEDGE THAT YOU HAVE READ THIS AGREEMENT, UNDERSTAND IT AND AGREE TO BE BOUND BY ITS TERMS AND CONDITIONS. In a Seafile cluster, one common way to share data among the Seafile server instances is to use NFS. You should only share the file objects (located in How to set up an NFS server and client is beyond the scope of this wiki. Here are a few references: Suppose your Seafile server installation directory is This way the instances will share the same To set up Seafile Professional Server with Amazon S3: The configuration options differ for different S3 storage providers. We'll describe the configurations in separate sections. AWS S3 is the original S3 storage provider. Edit You also need to add memory cache configurations. We'll explain the configurations below: For file search and WebDAV to work with the v4 signature mechanism, you need to add the following lines to ~/.boto Since Pro 11.0, you can use SSE-C with S3. Add the following options to seafile.conf: There are other S3-compatible cloud storage providers on the market, such as Backblaze and Wasabi. Configuration for those providers is just a bit different from AWS. We can't guarantee that the following configuration works for all providers. If you have problems, please contact our support. Edit You also need to add memory cache configurations.
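The seafile.conf S3 and cache sections discussed above typically look like the sketch below. This is illustrative only, assuming AWS S3 with v4 signatures; the bucket names, keys and region are placeholders:

```
[commit_object_backend]
name = s3
bucket = my-commit-objects
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
aws_region = eu-central-1
use_https = true

[fs_object_backend]
name = s3
bucket = my-fs-objects
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
aws_region = eu-central-1
use_https = true

[block_backend]
name = s3
bucket = my-block-objects
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
aws_region = eu-central-1
use_https = true

[memcached]
memcached_options = --SERVER=localhost --POOL-MIN=10 --POOL-MAX=100
```

Three separate buckets are used, one per object type, matching the separation of commit, fs and block objects described elsewhere in this manual.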
We'll explain the configurations below: For file search and WebDAV to work with the v4 signature mechanism, you need to add the following lines to ~/.boto Many self-hosted object storage systems are now compatible with the S3 API, such as OpenStack Swift and Ceph's RADOS Gateway. You can use these S3-compatible storage systems as a backend for Seafile. Here is an example config: You also need to add memory cache configurations. We'll explain the configurations below: Below are a few options that are not shown in the example configuration above: To use HTTPS connections to S3, add the following options to seafile.conf: Because the server package is built on CentOS 6, if you're using Debian/Ubuntu, you have to copy the system CA bundle to CentOS's CA bundle path. Otherwise Seafile can't find the CA bundle and the SSL connection will fail. Another important note is that you must not use '.' in your bucket names. Otherwise the wildcard certificate for AWS S3 cannot be resolved. This is a limitation of AWS. Now you can start Seafile by Ceph is a scalable distributed storage system. It's recommended to use Ceph's S3 Gateway (RGW) to integrate with Seafile. Seafile can also use Ceph's RADOS object storage layer as a storage backend. But using RADOS requires linking with the librados library, which may introduce library incompatibility issues during deployment. Furthermore, the S3 Gateway provides an easier-to-manage HTTP-based interface. If you want to integrate with the S3 gateway, please refer to the "Use S3-compatible Object Storage" section in this documentation. The documentation below is for integrating with RADOS. Seafile acts as a client to Ceph/RADOS, so it needs access to the Ceph cluster's conf file and keyring. You have to copy these files from a Ceph admin node's /etc/ceph directory to the Seafile machine. For best performance, Seafile requires installing memcached or Redis and enabling caching for objects. We recommend allocating at least 128MB of memory for the object cache.
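For a self-hosted S3-compatible gateway, the example config mentioned above usually points `host` at the gateway instead of AWS. A sketch only; the address, credentials and bucket name are placeholders, and option support varies by Seafile version:

```
[commit_object_backend]
name = s3
bucket = seafile-commits
key_id = your-key-id
key = your-secret-key
host = 192.168.1.100:8080
path_style_request = true
use_https = false
```

The `[fs_object_backend]` and `[block_backend]` sections follow the same pattern with their own buckets. `path_style_request = true` is commonly needed for self-hosted gateways that do not support virtual-host-style bucket addressing.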
File search and WebDAV functions rely on the Python Ceph library installed in the system. On Debian/Ubuntu (Seafile 7.1+): On Debian/Ubuntu (Seafile 7.0 or below): On RedHat/CentOS (Seafile 7.0 or below): Edit You also need to add memory cache configurations. It's required to create separate pools for commit, fs, and block objects. Since version 8.0, Seafile bundles librados from Ceph 16. On some systems you may find that Seafile fails to connect to your Ceph cluster. In that case, you can usually solve it by removing the bundled librados libraries and using the one installed in the OS. To do this, you have to remove a few bundled libraries: The above configuration will use the default (client.admin) user to connect to Ceph. You may want to use some other Ceph user to connect. This is supported in Seafile. To specify the Ceph user, you have to add a You can create a Ceph user for Seafile on your Ceph cluster like this: You also have to add this user's keyring path to /etc/ceph/ceph.conf: To set up Seafile Professional Server with Alibaba OSS: Edit You also need to add memory cache configurations. It's required to create separate buckets for commit, fs, and block objects. For performance and to save network traffic costs, you should create the buckets in the region where the Seafile server is running. The key_id and key are required to authenticate you to OSS. You can find the key_id and key in the "security credentials" section on your OSS management page. The region is the region where the bucket you created is located, such as beijing, hangzhou, shenzhen, etc. Before version 6.0.9, Seafile only supported using OSS services in the classic network environment. The OSS service address in the VPC (Virtual Private Cloud) environment is different from the classic network, so you need to specify the OSS access address in the configuration. Version 6.0.9 and later support configuring the OSS access address, adding support for VPC OSS services.
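The Ceph/RADOS sections of seafile.conf referenced above typically look like the fragment below, with one pool per object type as the text requires. A sketch; the pool names are placeholders:

```
[commit_object_backend]
name = ceph
ceph_config = /etc/ceph/ceph.conf
pool = seafile-commits

[fs_object_backend]
name = ceph
ceph_config = /etc/ceph/ceph.conf
pool = seafile-fs

[block_backend]
name = ceph
ceph_config = /etc/ceph/ceph.conf
pool = seafile-blocks
```

The pools themselves can be created beforehand with `ceph osd pool create <name>` on the Ceph cluster.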
Use the following configuration: Compared with the configuration under the classic network, the above configuration uses the You also need to add memory cache configurations. To use HTTPS connections to OSS, add the following options to seafile.conf: This backend uses the native Swift API. Previously, users could only use the S3-compatibility layer of Swift. That way is obsolete now. The old documentation is still available here. Since version 6.3, the OpenStack Swift v3.0 API is supported. To set up Seafile Professional Server with Swift: Edit You also need to add memory cache configurations. The above config is just an example. You should replace the options according to your own environment. Seafile supports Swift with Keystone as the authentication mechanism. The Seafile also supports Tempauth and Swauth since Professional Edition 6.2.1. The It's required to create separate containers for commit, fs, and block objects. Since Pro 5.0.4, you can use HTTPS connections to Swift. Add the following options to seafile.conf: Because the server package is built on CentOS 6, if you're using Debian/Ubuntu, you have to copy the system CA bundle to CentOS's CA bundle path. Otherwise Seafile can't find the CA bundle and the SSL connection will fail. Now you can start Seafile by Starting from version 6.0, the system admin can add Terms and Conditions in the admin panel, and all users need to accept them before using the site. In order to use this feature, please add the following line to After restarting, there will be a "Terms and Conditions" section in the sidebar of the admin panel. Seafile can scan uploaded files for malicious content in the background. When configured to run periodically, the scan process scans all existing libraries on the server. In each scan, the process only scans files newly uploaded/updated since the last scan. For each file, the process executes a user-specified virus scan command to check whether the file contains a virus. Most anti-virus programs provide a command line utility for Linux.
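A typical seafevents.conf fragment for the background scan described above, using ClamAV's `clamscan` (a sketch; adjust the interval to your needs). `clamscan` exits with 1 when a virus is found and 0 when none is, which is what `virus_code` and `nonvirus_code` map here:

```
[virus_scan]
scan_command = clamscan
virus_code = 1
nonvirus_code = 0
scan_interval = 5
```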
To enable this feature, add the following options to More details about the options: An example for ClamAV (http://www.clamav.net/) is provided below: To test whether your configuration works, you can trigger a scan manually: If a virus was detected, you can see scan records and delete infected files on the Virus Scan page in the admin area. INFO: If you use the clamav command line tool directly to scan files, scanning will take a lot of time. If you want to speed it up, we recommend running ClamAV as a daemon. Please refer to Run ClamAV as a Daemon When running ClamAV as a daemon, the Since Pro edition 6.0.0, a few more options have been added to provide finer grained control over virus scanning. The file extensions should start with '.'. The extensions are case insensitive. By default, files with the following extensions will be ignored: The list you provide will override the default list. You may also configure Seafile to scan files for viruses when they are uploaded. This only works for files uploaded via the web interface or web APIs. Files uploaded with the syncing or SeaDrive clients cannot be scanned on upload due to performance considerations. You may scan files uploaded from shared upload links by adding the option below to Since Pro Edition 11.0.7, you may scan all files uploaded via web APIs by adding the option below to Assume you have installed Kaspersky Anti-Virus for Linux File Server on the Seafile Server machine. If the user that runs Seafile Server is not root, it should have sudo privileges so that it can run kav4fs-control without entering a password. Add the following content to /etc/sudoers: As the return code of kav4fs cannot reflect the file scan result, we use a shell wrapper script that parses the scan output and, based on the result, returns different return codes to reflect the scan result.
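The core of such a wrapper is just mapping the scanner's text output to exit codes. A sketch, under the assumption that kav4fs-control prints "Threats found: N" and "Scan errors: N" summary lines (verify against your kav4fs version; the install path is also an assumption):

```shell
#!/bin/bash
# kav_scan_wrapper.sh <file> -- hypothetical wrapper for kav4fs-control.
# Exit codes, to be matched by virus_code/nonvirus_code in seafevents.conf:
#   0 = no virus, 1 = virus found, 2 = scan failed
classify() {
    # $1: full text output of the scanner
    if echo "$1" | grep -Eq 'Threats found:[[:space:]]*[1-9]'; then
        return 1
    fi
    if echo "$1" | grep -Eq 'Scan errors:[[:space:]]*[1-9]'; then
        return 2
    fi
    return 0
}

# Real invocation (assumed path; relies on the sudoers entry from the text):
#   output=$(sudo /opt/kaspersky/kav4fs/bin/kav4fs-control --scan-file "$1" 2>&1)
#   classify "$output"
#   exit $?
```

Point `scan_command` in seafevents.conf at this script and set `virus_code = 1` and `nonvirus_code = 0` accordingly.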
Save the following contents to a file such as Grant execute permissions for the script (make sure it is owned by the user Seafile is running as): The meaning of the script return codes: Add the following content to Build Seafile Seafile Open API Seafile Implement Details Seafile internally uses a data model similar to Git's. It consists of Seafile's high performance comes from the architectural design: file metadata is stored in object storage (or a file system), while only a small amount of metadata about the libraries is stored in a relational database. An overview of the architecture can be depicted as below. We'll describe the data model in more detail. A repo is also called a library. Every repo has a unique ID (UUID), and attributes like description, creator, and password. The metadata for a repo is stored in There are a few tables in the Commit objects save the change history of a repo. Each update from the web interface, or each sync upload operation, will create a new commit object. A commit object contains the following information: commit ID, library name, creator of this commit (a.k.a. the modifier), creation time of this commit (a.k.a. modification time), root fs object ID, parent commit ID. The root fs object ID points to the root FS object, from which we can traverse a file system snapshot of the repo. The parent commit ID points to the last commit previous to the current commit. The If you use a file system as the storage backend, commit objects are stored in the path There are two types of FS objects, The The FS object IDs are calculated based on the contents of the object. That means if a folder or a file is not changed, the same objects will be reused across multiple commits. This allows us to create snapshots very efficiently. If you use a file system as the storage backend, fs objects are stored in the path A file is further divided into blocks with variable lengths. We use a Content Defined Chunking algorithm to divide files into blocks.
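The chunking idea above can be sketched in a few lines of Python. This is a toy illustration, not Seafile's actual implementation: real CDC uses a cheap rolling fingerprint (e.g. Rabin fingerprinting) rather than hashing a window at every offset, and much larger size parameters:

```python
import hashlib

def chunk(data: bytes, window: int = 16, min_size: int = 64,
          mask: int = 0x3F) -> list[bytes]:
    """Cut `data` where a fingerprint of the trailing window matches `mask`.
    Boundaries depend only on content, so an insertion near the start shifts
    early chunks while later chunks resynchronize and can be deduplicated."""
    chunks, start = [], 0
    for i in range(len(data)):
        if i - start < min_size:        # enforce a minimum chunk size
            continue
        fp = int.from_bytes(hashlib.sha1(data[i - window:i]).digest()[:4], "big")
        if fp & mask == mask:           # boundary hit (probability 1/(mask+1))
            chunks.append(data[start:i])
            start = i
    chunks.append(data[start:])         # trailing chunk (may be short)
    return chunks

def object_id(chunk_bytes: bytes) -> str:
    # Content-addressed storage: identical chunks share one object ID,
    # so unchanged data is stored only once across file versions.
    return hashlib.sha1(chunk_bytes).hexdigest()
```

Because boundaries are content-defined rather than offset-defined, editing the middle of a large file only produces new blocks around the edit; the rest keep their IDs and are deduplicated.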
A clear overview of this algorithm can be found at http://pdos.csail.mit.edu/papers/lbfs:sosp01/lbfs.pdf. On average, a block's size is around 8MB. This mechanism makes it possible to deduplicate data between different versions of frequently updated files, improving storage efficiency. It also enables transferring data to/from multiple servers in parallel. If you use a file system as the storage backend, block objects are stored in the path A "virtual repo" is a special repo that is created in the cases below: A virtual repo can be understood as a view of part of the data in its parent library. For example, when sharing a folder, the virtual repo only provides access to the shared folder in that library. Virtual repos use the same underlying data as the parent library. So virtual repos use the same A virtual repo has its own change history. So it has separate There is a 1. Locate the translation files in the seafile-server-latest/seahub directory: For example, if you want to improve the Russian translation, find the corresponding strings to be edited in either of the following three files: If there is no translation for your language, create a new folder matching your language code and copy-paste the contents of another language folder into your newly created one. (Don't copy from the 'en' folder because the files therein do not contain the strings to be translated.) 2. Edit the files using a UTF-8 editor. 3. Save your changes. 4. (Only necessary when you created a new language code folder) Add a new entry for your language to the language block in the 5. (Only necessary when you edited either django.po or djangojs.po) Apply the changes made in django.po and djangojs.po by running the following two commands in Note: msgfmt is included in the gettext package. Additionally, run the following two commands in the seafile-server-latest directory: 6.
Restart Seahub to load changes made in django.po and djangojs.po; reload the Markdown editor to check your modifications in the seafile-editor.json file. Please submit translations via Transifex: https://www.transifex.com/projects/p/seahub/ Steps: Steps: Modify

```python
STATICFILES_DIRS = (
    # Put strings here, like "/home/html/static" or "C:/www/django/static".
    # Always use forward slashes, even on Windows.
    # Don't forget to use absolute paths, not relative paths.
    '%s/static' % PROJECT_ROOT,
)
```

Execute the command Restore

```python
STATICFILES_DIRS = (
    # Put strings here, like "/home/html/static" or "C:/www/django/static".
    # Always use forward slashes, even on Windows.
    # Don't forget to use absolute paths, not relative paths.
    '%s/static' % PROJECT_ROOT,
    '%s/frontend/build' % PROJECT_ROOT,
)
```

Restart Seahub. (This issue has been fixed since version 11.0.) The API document can be accessed in the following location: The Admin API document can be accessed in the following location: The following assumptions and conventions are used in the rest of this document: Use the official installation guide for your OS to install Docker. From Seafile Docker 12.0, we recommend that you use NOTE: Different versions of Seafile have different compose files. The following fields merit particular attention: NOTE: SSL is now handled by the Caddy server from 12.0 on. Start Seafile server with the following command Wait a few minutes for the first-time initialization, then visit Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container; in our case we keep various log files and the upload directory outside. This allows you to rebuild containers easily without losing important information.
To monitor container logs (from outside of the container), please use the following commands: The Seafile logs are under The system logs are under To monitor all Seafile logs simultaneously (from outside of the container), run If you want to use an existing MySQL server, you can modify the NOTE: The config files are under After modification, you need to restart the container: Ensure the container is running, then enter this command: Enter the username and password according to the prompts. You now have a new admin account. You can run Seafile as a non-root user in Docker. (NOTE: Programs such as First add the Then modify Then destroy the containers and run them again: Now you can run Seafile as Follow the instructions in Backup and restore for Seafile Docker When files are deleted, the blocks comprising those files are not immediately removed, as there may be other files that reference those blocks (due to the magic of deduplication). To remove them, Seafile requires a 'garbage collection' process to be run, which detects which blocks are no longer used and purges them. (NOTE: for technical reasons, the GC process does not guarantee that every single orphan block will be deleted.) The required scripts can be found in the From Seafile 12.0, SSL is handled by Caddy. Caddy is a modern open source web server that mainly binds external traffic and internal services in Seafile Docker. The default Caddy image is The recommended steps to migrate from a non-docker deployment to a docker deployment are: The following document assumes that the deployment path of your non-Docker version of Seafile is /opt/seafile. If you use another path, be careful to modify the paths in the commands before running them. Note, you can also refer to the Seafile backup and recovery documentation: deploy Seafile Docker on another machine, and then copy the old configuration, database, and seafile-data to the new machine to complete the migration.
The advantage of this is that even if an error occurs during the migration process, the existing system will not be destroyed. Stop the locally deployed Seafile, Nginx, and Memcached. The non-Docker version uses the local MySQL. If the Docker version of Seafile is to connect to this MySQL, you need to grant the corresponding access permissions. The following commands assume that you use Copy the original config files to the directory to be mapped by the Docker version of Seafile Modify the MySQL configuration in Modify the memcached configuration in Download docker-compose.yml to There are two ways to let Seafile Docker use the old seafile-data You can copy or move the old seafile-data folder ( You can mount the old seafile-data folder ( The added line Start Seafile Docker and check if everything is okay: From inside a Docker container it is not possible to connect to the host database via localhost, but only via Following iptables commands protect MariaDB/MySQL: Keep in mind this is not boot-safe! For Debian-based Linux distros you can add a local IP by adding in For SUSE-based distros, edit If using MariaDB, the server can only bind to one IP address (192.158.1.38 or 0.0.0.0 (internet)). So if you bind your MariaDB server to that new address, other applications might need some reconfiguration. Then edit /opt/seafile-data/seafile/conf/ -> ccnet.conf seafile.conf seahub_settings.py in the Host line to that IP and execute the following commands: You can use one of the following methods to start the Seafile container on system bootup. Note: Add configuration Note: Add A Seafile Docker based installation consists of the following components (docker images): Seafile Docker cluster deployment requires "sticky session" settings in the load balancer. Otherwise folder downloads on the web UI sometimes won't work properly. Read the Load Balancer Setting for details.
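A load-balancer fragment for the sticky sessions mentioned above might look like the following HAProxy sketch (server names and addresses are placeholders; adapt it to your own load balancer):

```
backend seafile_servers
    balance roundrobin
    # Sticky sessions: pin each browser to one frontend via an inserted cookie
    cookie SERVERID insert indirect nowait
    server seafile01 192.168.0.137:80 check cookie seafile01
    server seafile02 192.168.0.138:80 check cookie seafile02
```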
System: Ubuntu 20.04 docker-compose: 1.25.0 Seafile Server: 2 frontend nodes, 1 backend node We assume you have already deployed Memcached, MariaDB, and Elasticsearch on separate machines and use S3-like object storage. Create the three databases ccnet_db, seafile_db, and seahub_db required by Seafile on MariaDB/MySQL, and authorize the `seafile` user to access these three databases: You also need to create a table in `seahub_db` Create the mount directory Create the docker-compose.yml file Note: CLUSTER_SERVER=true means Seafile cluster mode; CLUSTER_MODE=frontend means this node is a Seafile frontend server. Start the Seafile docker container 1. Manually generate configuration files 2. Modify the MySQL configuration options (user, host, password) in configuration files such as ccnet.conf, seafevents.conf, seafile.conf and seahub_settings.py. 3. Modify the memcached configuration option in seahub_settings.py 4. Modify the [INDEX FILES] configuration options in seafevents.conf 5. Add some configurations in seahub_settings.py 6. Add cluster-specific configuration in seafile.conf 7. Add memory cache configuration in seafile.conf Enter the container, and then execute the following commands to import the tables Start the Seafile service When you start it for the first time, Seafile will guide you to set up an admin user. When deploying the second frontend node, you can directly copy all the directories generated by the first frontend node, including the docker-compose.yml file and modified configuration files, and then start the Seafile docker container. Create the mount directory Create the docker-compose.yml file Note: CLUSTER_SERVER=true means Seafile cluster mode; CLUSTER_MODE=backend means this node is a Seafile backend server. Start the Seafile docker container Copy the configuration files of the frontend node, and then start the Seafile server of the backend node Modify the seafile.conf file on each node to configure S3 storage.
vim seafile.conf Execute the following commands on the two Seafile frontend servers: Note: Correctly modify the IP addresses (Front-End01-IP and Front-End02-IP) of the frontend servers in the above configuration file. Choose one of the above two servers as the master node, and the other as the slave node. Perform the following operations on the master node: Note: Correctly configure the virtual IP address and network interface device name in the above file. Perform the following operations on the standby node: Finally, run the following commands on the two Seafile frontend servers to start the corresponding services: So far, the Seafile cluster has been deployed. The following section needs to be added to docker-compose.yml in the services section Add this to seafile.nginx.conf Add this to seahub_settings.py Wait some minutes until OnlyOffice has finished initializing. Now OnlyOffice can be used. This manual explains how to deploy and run Seafile Server Professional Edition (Seafile PE) on a Linux server using Docker and Docker Compose. The deployment has been tested on Debian/Ubuntu and CentOS, but Seafile PE should also work on other Linux distributions. Seafile PE requires a minimum of 2 cores and 2GB RAM. If Elasticsearch is installed on the same server, the minimum requirements are 4 cores and 4 GB RAM, and make sure the mmapfs counts do not cause exceptions like out of memory, which can be increased with the following command (see https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html for further details): or modify /etc/sysctl.conf and reboot to set this value permanently: Seafile PE can be used without a paid license with up to three users. Licenses for more users can be purchased in the Seafile Customer Center, or contact Seafile Sales at sales@seafile.com. The following assumptions and conventions are used in the rest of this document: Use the official installation guide for your OS to install Docker.
Log into Seafile's private repository and pull the Seafile image: When prompted, enter the username and password of the private repository. They are available on the download page in the Customer Center. NOTE: Older Seafile PE versions are also available in the repository (back to Seafile 7.0). To pull an older version, replace the '12.0-latest' tag with the desired version. From Seafile Docker 12.0, we recommend that you use NOTE: Different versions of Seafile have different compose files. The following fields merit particular attention: NOTE: SSL is now handled by the Caddy server from 12.0 on. To conclude, set the directory permissions of the Elasticsearch volume: Run docker compose in detached mode: NOTE: You must run the above command in the directory with the Wait a few moments for the database to initialize. You can now access Seafile at the host name specified in the Compose file. (A 502 Bad Gateway error means that the system has not yet completed the initialization.) To view Seafile docker logs, please use the following command The Seafile logs are under The system logs are under If you have a Then restart Seafile: Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container; in our case we keep various log files and the upload directory outside. This allows you to rebuild containers easily without losing important information. The command The directory layout of the Seafile container's volume should look as follows: All Seafile config files are stored in Any modification of a configuration file requires a restart of Seafile to take effect: All Seafile log files are stored in If you want to use an existing MySQL server, you can modify the NOTE: You can run Seafile as a non-root user in Docker.
(NOTE: Programs such as First add the Then modify Then destroy the containers and run them again: Now you can run Seafile as Follow the instructions in Backup and restore for Seafile Docker When files are deleted, the blocks comprising those files are not immediately removed, as there may be other files that reference those blocks (due to the magic of deduplication). To remove them, Seafile requires a 'garbage collection' process to be run, which detects which blocks are no longer used and purges them. (NOTE: for technical reasons, the GC process does not guarantee that every single orphan block will be deleted.) The required scripts can be found in the You need to manually add the OnlyOffice config to You need to manually add the Clamav config to Q: I forgot the Seafile admin email address/password, how do I create a new admin account? A: You can create a new admin account by running The Seafile service must be up when running the superuser command. Q: If, for whatever reason, the installation fails, how do I start from a clean slate again? A: Remove the directories /opt/seafile, /opt/seafile-data, /opt/seafile-elasticsearch, and /opt/seafile-mysql and start again. Q: Something goes wrong during the start of the containers. How can I find out more? A: You can view the docker logs using this command: Q: I forgot the admin password. How do I create a new admin account? A: Make sure the seafile container is running and enter NOTE: Different versions of Seafile have different compose files. To ensure data security, it is recommended that you back up your MySQL data. Copy the Replace the old The Seafile Pro container needs to be running during the migration process, which means that end users may be able to access the Seafile service during this process. To avoid the data confusion this could cause, it is recommended that you take the necessary measures to temporarily prohibit users from accessing the Seafile service, for example by modifying the firewall policy.
Run the following command to run the Seafile-Pro container: Then run the migration script by executing the following command: After the migration script runs successfully, modify Restart the Seafile Pro container. Now you have a Seafile Professional service. Seafile WebDAV and FUSE extensions make it easy for Seafile to work with third party applications. For example, you can use the Documents app on iOS to access files in Seafile via the WebDAV interface. Files in the Seafile system are split into blocks, which means that what is stored on your Seafile server are not complete files, but blocks. This design facilitates effective data deduplication. However, administrators sometimes want to access the files directly on the server. You can use seaf-fuse to do this. Note: Assume we want to mount to Note: Before starting seaf-fuse, you should have started the Seafile server with seaf-fuse supports standard mount options for FUSE. For example, you can specify ownership for the mounted folder: seaf-fuse enables the block cache function by default to cache block objects, thereby reducing access to backend storage, but this function will occupy local disk space. Since Seafile-pro-10.0.0, you can disable the block cache by adding the following options: You can find the complete list of supported options in Now you can list the content of As you can see from the above list, under the folder of a user there are subfolders, each of which represents a library of that user, and has a name of this format: '''{library_id}-{library-name}'''. If you get an error message saying "Permission denied" when running Assume we want to mount to Add the following content Start Seafile server and enter the container Start seaf-fuse in the container In the document below, we assume your Seafile installation folder is The configuration file is Every time the configuration is modified, you need to restart the Seafile server for it to take effect.
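The WebDAV configuration file mentioned above typically contains a fragment like this sketch (the port and share name shown are common defaults, but verify them for your Seafile version):

```
[WEBDAV]
enabled = true
port = 8080
share_name = /seafdav
```

After editing, restart the Seafile server so SeafDAV picks up the change.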
Your WebDAV client would visit the Seafile WebDAV server at In Pro edition 7.1.8 and community edition 7.1.5, an option was added to append the library ID to the library name returned by SeafDAV. For SeafDAV, the configuration of Nginx is as follows: For SeafDAV, the configuration of Apache is as follows: Please first note that there are some known performance limitations when you map a Seafile WebDAV server as a local file system (or network drive). So WebDAV is more suitable for infrequent file access. If you want better performance, please use the sync client instead. Windows Explorer supports HTTPS connections, but it requires a valid certificate on the server. It's generally recommended to use Windows Explorer to map a WebDAV server as a network drive. If you use a self-signed certificate, you have to add the certificate's CA into Windows' system CA store. On Linux you have more choices. You can use a file manager such as Nautilus to connect to the WebDAV server, or you can use davfs2 from the command line. To use davfs2 The -o option sets the owner of the mounted directory to so that it's writable for non-root users. It's recommended to disable the LOCK operation for davfs2. You have to edit /etc/davfs2/davfs2.conf Finder's support for WebDAV is not very stable and is slow, so it is recommended to use a WebDAV client application such as Cyberduck. By default, SeafDAV is disabled. Check whether you have If you deploy SeafDAV behind Nginx/Apache, make sure to change the value of First, check the If you have enabled debug logging, there will also be the following log. This issue usually occurs when you have configured HTTPS, but the request was forwarded, resulting in the You can solve this by manually changing the value of to This happens when you map WebDAV as a network drive and try to copy a file larger than about 50MB from the network drive to a local folder. This is because Windows Explorer has a limit on the file size downloaded from a WebDAV server.
To raise this limit, change a registry entry on the client machine. There is a registry key named SeaDoc is an extension of Seafile that provides an online collaborative document editor. SeaDoc is designed around the following key ideas: SeaDoc excels at: The SeaDoc architecture is demonstrated below: Here is the workflow when a user opens an sdoc file in the browser Seafile version 11.0 or later is required to work with SeaDoc. SeaDoc has the following deployment methods: Download the docker-compose.yml sample file to your host, then modify the file according to your environment. The following fields need to be modified: SeaDoc and Seafile share the MySQL service. Create the database sdoc_db in Seafile's MySQL and authorize the user. Note, SeaDoc will only create one database table to store operation logs. Then follow the section: Start SeaDoc. Modify SeaDoc and Seafile share the MySQL service. Create the database sdoc_db in Seafile's MySQL. Note, SeaDoc will only create one database table to store operation logs. Start the SeaDoc server with the following command Now you can use SeaDoc! Placeholder spot for shared volumes. You may elect to store certain persistent information outside of a container; in our case we keep various log files and the upload directory outside. This allows you to rebuild containers easily without losing important information. From Seafile 12.0, SSL is handled by Caddy. Caddy is a modern open source web server that mainly binds external traffic to internal services in Seafile Docker. The default Caddy image is As the system admin, you can enter the admin panel by clicking Backup and recovery: Recover corrupt files after a server hard shutdown or system crash: You can run Seafile GC to remove unused files: When you set up the Seahub website, you should have set up an admin account. After you log in as an admin, you may add/delete users and file libraries. 
Since version 11.0, if you need to change a user's external ID, you can manually modify the database table For versions below 11.0, if you really want to change a user's ID, you should create a new user and then use this admin API to migrate the data from the old user to the new user: https://download.seafile.com/published/web-api/v2.1-admin/accounts.md#user-content-Migrate%20Account. An administrator can reset the password for a user in the "System Admin" page. In a private server, the default settings don't allow users to reset their password by email. If you want to enable this, you first have to set up notification email. You may run Under the seafile-server-latest directory, run There are generally two parts of data to back up If you set up Seafile server according to our manual, you should have a directory layout like: All your library data is stored under the '/opt/seafile' directory. Seafile also stores some important metadata in a few databases. The names and locations of these databases depend on which database software you use. For SQLite, the database files are also under the '/opt/seafile' directory. The locations are: For MySQL, the databases are created by the administrator, so the names can differ from one deployment to another. There are 3 databases: The backup is a three-step procedure: The second sequence is better in the sense that it avoids library corruption. Like other backup solutions, some new data can be lost in recovery; there is always a backup window. However, if your storage backup mechanism can finish quickly enough, using the first sequence can retain more data. We assume your Seafile data directory is in It's recommended to back up the database to a separate file each time. Don't overwrite older database backups for at least a week. MySQL Assume your database names are SQLite You need to stop the Seafile server before backing up the SQLite database. 
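The MySQL part of the backup described above can be sketched as follows. The database names `ccnet_db`, `seafile_db`, and `seahub_db` are examples; use the names from your own deployment.

```shell
# Dump each database to a timestamped file; don't overwrite older dumps.
BACKUP_DIR=/backup/databases
mkdir -p "$BACKUP_DIR"
for db in ccnet_db seafile_db seahub_db; do
    mysqldump -u root -p --opt "$db" > "$BACKUP_DIR/$db.$(date +%F).sql"
done
```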
The data files are all stored in the To directly copy the whole data directory, This produces a separate copy of the data directory each time. You can delete older backup copies after a new one is completed. If you have a lot of data, copying the whole data directory would take long. You can use rsync to do incremental backups. This command backs up the data directory to Now suppose your primary Seafile server is broken and you're switching to a new machine. Use the backup data to restore your Seafile instance: Now, with the latest valid database backup files at hand, you can restore them. MySQL SQLite We assume your Seafile volumes path is in The data files to be backed up: Use the following command to clear expired session records in the Seahub database: Use the following command to clear the activity records: The corresponding items in UserActivity will be deleted automatically by MariaDB when the foreign keys in the Activity table are deleted. Use the following command to clean the login records: Use the following command to clean the file access records: Use the following command to clean the file update records: Use the following command to clean the permission change audit records: Use the following command to clean the file history records: Use the following command to simultaneously clean up records older than 90 days in the Activity, sysadmin_extra_userloginlog, FileAudit, FileUpdate, FileHistory, PermAudit and FileTrash tables: Since version 6.2, we offer a command to clear outdated library records in the Seahub database, e.g. records that are not deleted after a library is deleted. This is because users can restore a deleted library, so we can't delete these records at library deletion time. This command has been improved in version 10.0: It clears the invalid data in small batches, avoiding consuming too much database resource in a short time. 
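The two data-directory backup strategies mentioned above can be sketched like this; the source and target paths are assumptions.

```shell
# Full copy: an independent snapshot of the data directory each run.
cp -a /opt/seafile /backup/data/seafile-$(date +%F)

# Incremental backup: rsync only transfers changes into one mirror target.
rsync -az /opt/seafile/ /backup/data/seafile/
```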
Dry-run mode: if you just want to see how much invalid data can be deleted without actually deleting any data, you can use the dry-run option, e.g. There are two tables in the Seafile database that are related to library sync tokens. When you have many sync clients connected to the server, these two tables can have a large number of rows, many of which are no longer actively used. You may clean the tokens that have not been used in a recent period with the following SQL query: xxxx is the UNIX timestamp for the time before which tokens will be deleted. To be safe, you can first check how many tokens will be removed: Since version 7.0.8 pro, we offer a command to export the file access log. Since version 7.0.8 pro, Seafile provides commands to export reports via the command line. Since version 7.0.8 pro, we offer a command to export the user storage report. On the server side, Seafile stores the files in the libraries in an internal format. Seafile has its own representation of directories and files (similar to Git). With the default installation, these internal objects are stored directly in the server's file system (such as Ext4 or NTFS). But most file systems don't assure the integrity of file contents after a hard shutdown or system crash. So if new Seafile internal objects are being written when the system crashes, they can be corrupted after the system reboots. This will make part of the corresponding library inaccessible. Note: If you store the seafile-data directory on a battery-backed NAS (like EMC or NetApp), or use the S3 backend available in the Pro edition, the internal objects won't be corrupted. We provide a seaf-fsck.sh script to check the integrity of libraries. The seaf-fsck tool accepts the following arguments: There are three modes of operation for seaf-fsck: Running seaf-fsck.sh without any arguments will run a read-only integrity check for all libraries. 
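The token-cleanup query mentioned above might look like the following, assuming the two tables are named `RepoUserToken` and `RepoTokenPeerInfo` with a `sync_time` column, as in the Seafile manual; verify the table names against your own database first. `xxxx` stands for the UNIX-timestamp cutoff, as in the text.

```sql
-- Safety check: count tokens not used since the cutoff.
SELECT COUNT(*) FROM RepoUserToken t, RepoTokenPeerInfo i
WHERE t.token = i.token AND i.sync_time < xxxx;

-- Then delete them.
DELETE t, i FROM RepoUserToken t, RepoTokenPeerInfo i
WHERE t.token = i.token AND i.sync_time < xxxx;
```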
If you want to check the integrity of specific libraries, just append the library IDs as arguments: The output looks like: The corrupted files and directories are reported. Sometimes you can see output like the following: This means the "head commit" (current state of the library) recorded in the database is not consistent with the library data. In such a case, fsck will try to find the last consistent state and check the integrity in that state. Tip: If you have many libraries, it's helpful to save the fsck output into a log file for later analysis. Corruption repair in seaf-fsck basically works in two steps: Running the following command repairs all the libraries: Most of the time, you run the read-only integrity check first to find out which libraries are corrupted, and then repair specific libraries with the following command: After repairing, seaf-fsck includes in the library history the list of files and folders that were corrupted, so it's much easier to locate corrupted paths. To check all libraries and find out which ones are corrupted, the system admin can run seaf-fsck.sh without any arguments and save the output to a log file. Search for the keyword "Fail" in the log file to locate corrupted libraries. You can run seaf-fsck to check all libraries while your Seafile server is running; it won't damage or change any files. When the system admin finds that a library is corrupted, he/she should run seaf-fsck.sh with "--repair" for the library. After the command fixes the library, the admin should ask the user to recover files from other places. There are two ways: Starting from Pro edition 7.1.5, an option is added to speed up fsck. Most of the running time of seaf-fsck is spent calculating hashes of file contents. This hash is compared with the block object ID; if they're inconsistent, the block is detected as corrupted. In many cases, file contents are not actually corrupted; some objects are just missing from the system. 
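The fsck workflow above, as a command sketch (the installation path and the log file location are assumptions):

```shell
cd /opt/seafile/seafile-server-latest

# Read-only check of all libraries, saving output for later analysis:
./seaf-fsck.sh > /tmp/fsck.log 2>&1
grep Fail /tmp/fsck.log          # locate corrupted libraries

# Check specific libraries only:
./seaf-fsck.sh <library-id-1> <library-id-2>

# Repair a corrupted library:
./seaf-fsck.sh --repair <library-id>
```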
So it's enough to only check for object existence, which greatly speeds up the fsck process. To skip checking file contents, add the "--shallow" or "-s" option to seaf-fsck. You can use seaf-fsck to export all the files in libraries to an external file system (such as Ext4). This procedure doesn't rely on the Seafile database. As long as you have your seafile-data directory, you can always export your files from Seafile to an external file system. The command syntax is The argument Currently only unencrypted libraries can be exported; encrypted libraries will be skipped. Seafile uses storage de-duplication technology to reduce storage usage. The underlying data blocks are not removed immediately after you delete a file or a library. As a result, the number of unused data blocks increases on the Seafile server. To release the storage space occupied by unused blocks, you have to run a "garbage collection" (GC) program to clean up unused blocks on your server. The GC program cleans up two types of unused blocks: If you use the community edition, you must shut down the Seafile program on your server before running GC, because new blocks written into Seafile while GC is running may be mistakenly deleted by the GC program. If you use the Professional edition with MySQL, online GC is supported and you don't need to shut down the Seafile program. At the bottom of the page there is a script that you can use to run the cleanup manually or, for example, once a week as a cron job. To see how much garbage can be collected without actually removing any garbage, use the dry-run option: The output should look like: If you give specific library IDs, only those libraries will be checked; otherwise all libraries will be checked. Notice that at the end of the output there is a "repos have blocks to be removed" section. It contains the list of libraries that have garbage blocks. 
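The dry-run step above, as a command sketch (paths are assumptions; the stop step applies to the community edition):

```shell
cd /opt/seafile/seafile-server-latest

# Community edition: stop Seafile before GC.
./seafile.sh stop

# Report how much garbage could be collected, without deleting anything:
./seaf-gc.sh --dry-run
```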
Later, when you run GC without the --dry-run option, you can use these library IDs as input arguments to the GC program. To actually remove garbage blocks, run without the --dry-run option: If library IDs are specified, only those libraries will be checked for garbage. As described before, there are two types of garbage blocks to be removed. Sometimes just removing the first type of blocks (those that belong to deleted libraries) is good enough. In this case, the GC program won't bother to check the libraries for outdated historic blocks. The "-r" option implements this feature: Libraries deleted by users are not immediately removed from the system. Instead, they're moved into a "trash" in the system admin page. Before they're cleared from the trash, their blocks won't be garbage collected. Since Pro server 8.0.6 and community edition 9.0, you can remove garbage fs objects. It should be run without the --dry-run option: Note: This command had a bug before Pro Edition 10.0.15 and Community Edition 11.0.7. It could cause virtual libraries (e.g. shared folders) to fail to merge into their parent libraries. Please avoid using this option in the affected versions, and contact our support team if you are affected by this bug. You can specify the number of threads used by GC with the "-t" option, which can be used together with all other options. Each thread runs GC on one library. For example, the following command will use 20 threads to GC all libraries: Since the threads are concurrent, the output of each thread may mix with the others; the library ID is printed in each line of output. GC usually runs quite slowly, as it needs to traverse the entire library history, so you can use multiple threads to run GC in parallel. For even larger deployments, it's also desirable to run GC on multiple servers in parallel. 
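The GC options discussed above, as a command sketch:

```shell
# Actually remove garbage blocks (all libraries, or pass specific IDs):
./seaf-gc.sh

# Only remove blocks of deleted libraries; skip scanning library history:
./seaf-gc.sh -r

# Run with 20 worker threads, one library per thread:
./seaf-gc.sh -t 20
```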
A simple pattern to divide the workload among multiple GC servers is to assign libraries to servers based on library ID. Since Pro edition 7.1.5, this is supported. You can add the "--id-prefix" option to seaf-gc.sh to specify the library ID prefix. For example, the command below will only process libraries whose ID has the prefix "a123". To use this script you need: Create the script file (change the location to your liking): Use your favorite text editor and paste the following code: Make sure that the script has been given execution rights; to do that, run this command. Then open crontab as the root user Add the following line (change the location of your script accordingly!) The script will then run every Sunday at 2:00 AM. To perform garbage collection inside the Seafile Docker container, you must run the Starting from version 6.0, we added Two-Factor Authentication to enhance account security. There are two ways to enable this feature: The system admin can tick the check-box in the "Password" section of the system settings page, or just add the following settings to After that, there will be a "Two-Factor Authentication" section on the user profile page. Users can use the Google Authenticator app on their smartphone to scan the QR code. Seafile Server consists of the following two components: The picture below shows how Seafile clients access files when you configure Seafile behind Nginx/Apache. Seafile manages files using libraries. Every library has an owner, who can share the library with other users or with groups. The sharing can be read-only or read-write. Read-only libraries can be synced to the local desktop; modifications at the client will not be synced back. If a user has modified some file contents, he can use "resync" to revert the modifications. Sharing controls whether a user or group can see a library, while sub-folder permissions are used to modify permissions on specific folders. 
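A minimal cron-driven cleanup script along the lines described above might look like this; the script path and install path are assumptions, and the stop/start steps apply to the community edition.

```shell
#!/bin/bash
# /opt/seafile/seafile-gc.sh -- weekly GC cleanup (example paths)
cd /opt/seafile/seafile-server-latest
./seafile.sh stop      # community edition: stop before GC
./seaf-gc.sh
./seafile.sh start
./seahub.sh start
```

Make it executable with `chmod +x /opt/seafile/seafile-gc.sh`, then add this crontab line (as root) to run it every Sunday at 2:00 AM: `0 2 * * 0 /opt/seafile/seafile-gc.sh`.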
Supposing you share a library as read-only with a group and then want specific sub-folders to be read-write for a few users, you can set read-write permissions on sub-folders for some users and groups. Note: In the Pro Edition, Seafile offers four audit logs in the system admin panel: The logging feature is turned off by default before version 6.0. Add the following option to The audit log data is saved in Fail2ban is an intrusion prevention software framework which protects computer servers from brute-force attacks. Written in the Python programming language, it is able to run on POSIX systems that have an interface to a packet-control system or firewall installed locally, for example, iptables or TCP Wrapper. (Definition from Wikipedia - https://en.wikipedia.org/wiki/Fail2ban) This protects your Seafile website against brute-force attempts. Each time a user/computer tries to connect and fails 3 times, a new line will be written to your Seafile logs ( Fail2ban will check this log file and will ban all failed authentications with a new rule in your firewall. WARNING: Without this your Fail2ban filter will not work. You need to add the following settings to seahub_settings.py, but change it to your own time zone. WARNING: this file may override some parameters from your Edit Finally, just restart fail2ban and check your firewall (iptables in this example): Fail2ban will create a new chain for this jail, so you should see these new lines: To do a simple test (but you have to be an administrator on your Seafile server), go to your Seafile webserver URL and try 3 authentications with a wrong password. Once you have done that, you are banned from the http and https ports in iptables, thanks to fail2ban. To check that: in fail2ban in iptables: To unban your IP address, just execute this command: As three (3) failed attempts to log in result in one line added to seahub.log, a Fail2ban jail with the setting maxretry = 3 is the same as nine (9) failed attempts to log in. 
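A sketch of the jail and filter described above. The log path and the exact log-line pattern are assumptions that may differ between Seafile versions; check a real line in your seahub.log before relying on the regex.

```ini
; /etc/fail2ban/jail.d/seafile.local (example)
[seafile]
enabled  = true
port     = http,https
filter   = seafile-auth
logpath  = /opt/seafile/logs/seahub.log
maxretry = 3

; /etc/fail2ban/filter.d/seafile-auth.conf (example)
; [Definition]
; failregex = Login attempt limit reached.*, ip: <HOST>
; ignoreregex =
```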
Seafile uses HTTP(S) to sync files between client and server (since version 4.1.0). Seafile provides a feature called encrypted library to protect your privacy. The file encryption/decryption is performed on the client side when using the desktop client for file synchronization. The password of an encrypted library is not stored on the server, so even the system admin of the server can't view the file contents. There are a few limitations of this feature: Client-side encryption works on the iOS client since version 2.1.6. The Android client supports client-side encryption since version 2.1.0. When you create an encrypted library, you'll need to provide a password for it. All the data in that library will be encrypted with the password before being uploaded to the server (see limitations above). The encryption procedure is: The above encryption procedure can be executed on the desktop and the mobile client. The Seahub browser client uses a different encryption procedure that happens on the server; because of this, your password will be transferred to the server. When you sync an encrypted library to the desktop, the client needs to verify your password. When you create the library, a "magic token" is derived from the password and library ID. This token is stored with the library on the server side. The client uses this token to check whether your password is correct before you sync the library. The magic token is generated by the PBKDF2 algorithm with 1000 iterations of SHA256 hashing. For maximum security, the plain-text password isn't saved on the client side either. The client only saves the key/IV pair derived from the "file key", which is used to decrypt the data. So if you forget the password, you won't be able to recover it or access your data on the server. When a file download link is clicked, a random URL is generated for the user to access the file from the fileserver. This URL can only be accessed once; after that, all access to the URL is denied. 
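The magic-token derivation described above can be illustrated with Python's standard library. This is a sketch only: Seafile's actual salt construction and encoding may differ, and the function name and the library ID below are illustrative; the point is the PBKDF2-SHA256/1000-iteration scheme and the compare-to-verify flow.

```python
import hashlib

def derive_magic_token(password: str, library_id: str, iterations: int = 1000) -> str:
    """Illustrative PBKDF2-HMAC-SHA256 derivation (1000 iterations),
    keyed on the password with the library ID acting as the salt.
    NOTE: simplified scheme, not Seafile's exact on-disk format."""
    dk = hashlib.pbkdf2_hmac(
        "sha256",
        password.encode("utf-8"),
        library_id.encode("utf-8"),
        iterations,
    )
    return dk.hex()

# The server stores the token at library creation; later, the client
# re-derives it from the entered password and compares.
lib_id = "6a1bf369-1234-4b2c-8d6e-0a1b2c3d4e5f"   # hypothetical library ID
stored = derive_magic_token("correct horse", lib_id)
assert derive_magic_token("correct horse", lib_id) == stored   # password accepted
assert derive_magic_token("wrong pass", lib_id) != stored      # password rejected
```

Because only this derived token (not the password) lives on the server, a lost password cannot be recovered from it.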
So even if someone else happens to learn the URL, they can't access it anymore. User login passwords are stored in hashed form only. Note that the user login password is different from the passwords used in encrypted libraries. In the database, its format is The record is divided into 4 parts by the $ sign. To calculate the hash: There are three types of upgrade: major version upgrade, minor version upgrade and maintenance version upgrade. This page contains general instructions for the three types of upgrade. Please check the upgrade notes for any special configuration or changes before/while upgrading. Suppose you are using version 5.1.0 and would like to upgrade to version 6.1.0. First download and extract the new version. You should have a directory layout similar to this: Now upgrade to version 6.1.0. Shut down Seafile server if it's running Check the upgrade scripts in the seafile-server-6.1.0 directory. You will get a list of upgrade files: Starting from your current version, run the scripts one by one Start Seafile server If the new version works fine, the old version can be removed Suppose you are using version 6.1.0 and would like to upgrade to version 6.2.0. First download and extract the new version. You should have a directory layout similar to this: Now upgrade to version 6.2.0. Check the upgrade scripts in the seafile-server-6.2.0 directory. You will get a list of upgrade files: Starting from your current version, run the scripts one by one Start Seafile server If the new version works, the old version can be removed A maintenance upgrade is, for example, an upgrade from 6.2.2 to 6.2.3. For this type of upgrade, you only need to update the symbolic links (for avatar and a few other folders). A script to perform a minor upgrade is provided with Seafile server (for historical reasons, the script is called Start Seafile If the new version works, the old version can be removed Seafile adds new features in major and minor versions. 
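The "run the upgrade scripts one by one" step above might look like this. The script names follow the `upgrade_X.Y_X.Z.sh` convention used by Seafile releases; the exact file names in your `upgrade/` directory are what counts.

```shell
# Stop the server, then run each upgrade script in version order.
cd /opt/seafile/seafile-server-6.1.0
./upgrade/upgrade_5.1_6.0.sh
./upgrade/upgrade_6.0_6.1.sh

# Start the new version.
cd /opt/seafile/seafile-server-latest
./seafile.sh start
./seahub.sh start
```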
It is likely that some database tables need to be modified or the search index needs to be updated. In general, upgrading a cluster involves the following steps: In general, to upgrade a cluster, you need: A maintenance upgrade is simple; you only need to run the script In the background node, Seahub no longer needs to be started. Nginx is not needed either. The way the office converter works has changed: the Seahub in the front-end nodes directly accesses a service on the background node. seahub_settings.py seafevents.conf seahub_settings.py is not needed, but you can leave it unchanged. seafevents.conf No special upgrade operations. In version 6.2.11, the included Django was upgraded. The memcached configuration needs to be upgraded if you are using a cluster. If you upgrade from a version below 6.1.11, don't forget to change your memcached configuration. If the configuration in your Now you need to change it to: No special upgrade operations. In version 6.1, we upgraded the included ElasticSearch server. The old server listens on port 9500, the new server listens on port 9200. Please change your firewall settings. In version 6.0, the folder download mechanism has been updated. This requires that, in a cluster deployment, the seafile-data/httptemp folder must be on an NFS share. You can make this folder a symlink to the NFS share. The httptemp folder only contains temporary files for downloading/uploading files in the web UI, so there is no reliability requirement for the NFS share; you can export it from any node in the cluster. Because Django was upgraded to 1.8, the COMPRESS_CACHE_BACKEND should be changed v5.0 introduces some database schema changes, and all configuration files (ccnet.conf, seafile.conf, seafevents.conf, seahub_settings.py) are moved to a central config directory. Perform the following steps to upgrade: After the upgrade, you should see that the configuration files have been moved to the conf/ folder. There is no database or search index upgrade from v4.3 to v4.4. 
Perform the following steps to upgrade: v4.3 contains no database table changes from v4.2, but the old search index will be deleted and regenerated. A new option COMPRESS_CACHE_BACKEND = 'django.core.cache.backends.locmem.LocMemCache' should be added to seahub_settings.py The secret key in seahub_settings.py needs to be regenerated; the old secret key lacks enough randomness. Perform the following steps to upgrade: Seafile adds new features in major and minor versions. It is likely that some database tables need to be modified or the search index needs to be updated. In general, upgrading a cluster involves the following steps: In general, to upgrade a cluster, you need: A maintenance upgrade only requires you to download the new image, stop the old Docker container, and modify the Seafile image version in docker-compose.yml to the new version. Start with docker compose up. Migrate your configuration for LDAP and OAuth according to https://manual.seafile.com/upgrade/upgrade_notes_for_11.0.x If you are using the ElasticSearch, SAML SSO or storage backend features, follow the upgrade manual on how to update the configuration for these features: https://manual.seafile.com/upgrade/upgrade_notes_for_10.0.x If you want to use the new notification server and rate control (Pro edition only), please refer to the upgrade manual: https://manual.seafile.com/upgrade/upgrade_notes_for_10.0.x If you are using ElasticSearch, follow the upgrade manual on how to update the configuration: https://manual.seafile.com/upgrade/upgrade_notes_for_9.0.x For a maintenance upgrade, like from version 10.0.1 to version 10.0.4, just download the new image, stop the old Docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up. For a major version upgrade, like from 10.0 to 11.0, see the instructions below. Please check the upgrade notes for any special configuration or changes before/while upgrading. 
Download the new image, stop the old Docker container, and modify the Seafile image version in docker-compose.yml to the new version. Taking the community edition as an example, you have to modify to It is also recommended that you upgrade MariaDB and memcached to newer versions, as in the v11.0 docker-compose.yml file. Specifically, in version 11.0 we use the following versions: In addition, you have to migrate the configuration for LDAP and OAuth according to https://manual.seafile.com/upgrade/upgrade_notes_for_11.0.x Start with docker compose up. Just download the new image, stop the old Docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up. If you are using the Pro edition with ElasticSearch, SAML SSO or storage backend features, follow the upgrade manual on how to update the configuration for these features: https://manual.seafile.com/upgrade/upgrade_notes_for_10.0.x If you want to use the new notification server and rate control (Pro edition only), please refer to the upgrade manual: https://manual.seafile.com/upgrade/upgrade_notes_for_10.0.x Just download the new image, stop the old Docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up. Since version 9.0.6, we use ACME v3 (not acme-tiny) to get certificates. If there is a certificate generated by an old version, you need to back up and move the old certificate directory and the seafile.nginx.conf before starting. Starting the new container will automatically apply for a certificate. Please wait a moment for the certificate to be issued, then you can modify the new seafile.nginx.conf as you want. Execute the following command to make the nginx configuration take effect. A cron job inside the container will automatically renew the certificate. 
Just download the new image, stop the old Docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up. Just download the new image, stop the old Docker container, modify the Seafile image version in docker-compose.yml to the new version, then start with docker compose up. From Seafile Docker 12.0, we recommend that you use First, back up the original docker-compose.yml file: Then download .env, seafile-server.yml and caddy.yml, and modify the .env file according to the old configuration in For the community edition: For the Pro edition: The following fields merit particular attention: SSL is now handled by the Caddy server. If you used SSL before, you will also need to modify seafile.nginx.conf: change server listen 443 to 80. Back up the original seafile.nginx.conf file: Remove the Change Start with docker compose up. If you have deployed the SeaDoc extension in version 11.0, please use the following steps to upgrade it to version 1.0. SeaDoc 1.0 is for working with Seafile 12.0. SeaDoc and Seafile are deployed in the same directory. SeaDoc has no state in itself, so you can simply delete the old configuration file and directory of v0.8, then deploy SeaDoc 1.0 as follows. In version 1.0, we use the .env file to configure the SeaDoc Docker image, instead of modifying the docker-compose.yml file directly. Download seadoc.yml to the Seafile For the community edition: For the Pro edition: The following fields merit particular attention: If you have deployed an older version of SeaDoc, you should remove Start the Seafile server and SeaDoc server with the following command These notes give additional information about changes. Please always follow the main upgrade guide. For the Docker based version, please check upgrade Seafile Docker image The notification server enables desktop syncing and Drive clients to get notifications of library changes immediately using WebSocket. 
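An excerpt of the kind of `.env` file described above. All values are placeholders, and variable names can differ between releases; always start from the sample `.env` shipped with your Seafile Docker version.

```
# .env (excerpt) -- placeholder values, check against the sample file
SEAFILE_IMAGE=seafileltd/seafile-mc:12.0-latest
SEAFILE_SERVER_HOSTNAME=seafile.example.com
SEAFILE_SERVER_PROTOCOL=https
SEAFILE_MYSQL_ROOT_PASSWORD=<your-db-root-password>
INIT_SEAFILE_ADMIN_EMAIL=admin@example.com
INIT_SEAFILE_ADMIN_PASSWORD=<your-admin-password>
```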
There are two benefits: The notification server works with Seafile sync client 9.0+ and Drive client 3.0+. Please follow the document to enable the notification server: https://manual.seafile.com/config/seafile-conf/#notification-server-configuration If you use a storage backend or a cluster, make sure the memcached section is in seafile.conf. Since version 10.0, all memcached options are consolidated into the one below. Modify seafile.conf: The configuration for SAML SSO in Seafile is greatly simplified; now only three options are needed: Please check the new document on SAML SSO Starting from version 10.0, Seafile allows administrators to configure upload and download speed limits for users with different roles through the following two steps: Elasticsearch is upgraded to version 8.x, which fixed and improved some issues with the file search function. Since Elasticsearch 7.x, the default number of shards has changed from 5 to 1, because too many index shards over-occupy system resources; but when a single shard's data is too large, search performance also suffers. Starting from version 10.0, Seafile supports customizing the number of shards in the configuration file. You can use the following command to query the current size of each shard to determine the best number of shards for you: The official recommendation is that the size of each shard should be between 10G and 50G: https://www.elastic.co/guide/en/elasticsearch/reference/8.6/size-your-shards.html#shard-size-recommendation. Modify seafevents.conf: Note, you should install Python libraries system-wide using the root user or sudo mode. For Ubuntu 20.04/22.04 For Debian 11 Stop the Seafile 9.0.x server. Starting from Seafile 10.0.x, run the script: If you are using the Pro edition, modify the memcached option in seafile.conf and the SAML SSO configuration if needed. You can choose one of the following methods to upgrade your index data. 1. 
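The shard-size query mentioned above can be issued with Elasticsearch's `_cat/shards` API; the host and port are assumptions for a local ES instance.

```shell
# List index shards with their on-disk size, to judge the best shard count
# (official guidance: roughly 10G-50G per shard).
curl 'http://localhost:9200/_cat/shards?v&h=index,shard,prirep,store'
```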
Download the Elasticsearch image: Create a new folder to store ES data and give the folder permissions: Start the ES Docker image: PS: 2. Create an index with 8.x compatible mappings: 3. Set the 4. Use the reindex API to copy documents from the 7.x index into the new index: 5. Use the following command to check whether the reindex task is complete: 6. Reset the 7. Wait for the Elasticsearch status to change to 8. Use the aliases API to delete the old index and add an alias with the old index name to the new index: 9. Deactivate the 7.17 container, pull the 8.x image and run it: 1. Pull the Elasticsearch image: Create a new folder to store ES data and give the folder permissions: Start the ES Docker image: 2. Modify seafevents.conf: Restart the Seafile server: 3. Delete the old index data 4. Create the new index data: 1. Deploy Elasticsearch 8.x according to method two. Use Seafile 10.0 to deploy a new backend node and modify the 2. Upgrade the other nodes to Seafile 10.0 and use the new Elasticsearch 8.x server. 3. Then deactivate the old backend node and the old version of Elasticsearch. These notes give additional information about changes. Please always follow the main upgrade guide. For the Docker based version, please check upgrade Seafile Docker image Previous Seafile versions directly used a user's email address or SSO identity as their internal user ID. Seafile 11.0 introduces virtual user IDs - random internal identifiers like "adc023e7232240fcbb83b273e1d73d36@auth.local". For new users, a virtual ID will be generated instead of directly using their email. A mapping between the email and the virtual ID is stored in the "profile_profile" database table. For SSO users, the mapping between the SSO ID and the virtual ID is stored in the "social_auth_usersocialauth" table. Overall, this brings more flexibility in handling user accounts and identity changes. Existing users keep their old IDs. Previous Seafile versions handled LDAP authentication in the ccnet-server component. 
In Seafile 11.0, LDAP is reimplemented within the Seahub Python codebase. LDAP configuration has been moved from ccnet.conf to seahub_settings.py. The ccnet_db.LDAPImported table is no longer used - LDAP users are now stored in ccnet_db.EmailUsers along with other users. Benefits of this new implementation: You need to run If you use OAuth authentication, the configuration needs to be changed a bit. If you use SAML, you don't need to change configuration files. For SAML2, in version 10, the name_id field is returned from the SAML server and is used as the username (the email field in ccnet_db.EmailUsers). In version 11, for old users, Seafile will find the old user and create a name_id to name_id mapping in social_auth_usersocialauth. For new users, Seafile will create a new user with a random ID and add a name_id to random ID mapping in social_auth_usersocialauth. In addition, we have added a feature where you can disable login with a username and password for SAML users by using a configuration option. Seafile 11.0 dropped support for SQLite as the database. It is better to migrate from SQLite to MySQL before upgrading to version 11.0. There are several reasons driving this change: To migrate from SQLite to MySQL, you can follow the document Migrate from SQLite to MySQL. If you have issues in the migration, just post a thread in our forum. We are glad to help you. The Elasticsearch version is not changed in Seafile version 11.0. For Ubuntu 20.04/22.04 Django 4.* has introduced a new check for the origin HTTP header in CSRF verification. It now compares the values of the origin field and the host field in the HTTP header. If they are different, an error is triggered. If you deploy Seafile behind a proxy, or if you use a non-standard port, or if you deploy Seafile in a cluster, it is likely that the origin field and the host field in the HTTP header received by Django are different. 
This is because the host field in the HTTP header is likely to be modified by the proxy. The mismatch results in a CSRF error. You can add CSRF_TRUSTED_ORIGINS to seahub_settings.py to solve the problem: Note, you should install Python libraries system-wide as the root user or in sudo mode. For Ubuntu 20.04/22.04 The configuration items of LDAP login and LDAP sync tasks are migrated from ccnet.conf to seahub_settings.py. The names of the configuration items are based on the 10.0 version, with the prefix 'LDAP_' or 'MULTI_LDAP_1' added. Examples are as follows: The following configuration items are only for the Pro Edition: If you sync users from LDAP to Seafile and want Seafile to find the existing account for a user who logs in via SSO (ADFS or OAuth) instead of creating a new one, you can set Note, here the UID means the unique user ID; in LDAP it is the attribute you use for Run the following script to migrate users in For Seafile docker In the new version, the OAuth login configuration should keep the email attribute unchanged to be compatible with both new and old user logins. In version 11.0, a new uid attribute is added to be used as a user's external unique ID. The uid is stored in social_auth_usersocialauth to map to the internal virtual ID. For old users, the original email is used as the internal virtual ID. The example is as follows: When a user logs in, Seafile will first use the \"id -> email\" mapping to find the old user and then create a \"uid -> uid\" mapping for this old user. After all users have logged in once, you can delete the configuration We have documented common issues encountered by users when upgrading to version 11.0 in our FAQ https://cloud.seatable.io/dtable/external-links/7b976c85f504491cbe8e/?tid=0000&vid=0000. If you encounter any issue, please give it a check. These notes give additional information about changes. Please always follow the main upgrade guide. 
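For illustration, the CSRF_TRUSTED_ORIGINS fix can look like the following, where the domain `seafile.example.com` is a placeholder and the settings file path is relative for the sake of the example:

```shell
# Append the trusted origin to seahub_settings.py
# (domain and file path are placeholders for your deployment).
cat >> seahub_settings.py <<'EOF'
CSRF_TRUSTED_ORIGINS = ['https://seafile.example.com']
EOF
```

The value must include the scheme (https://), since Django 4 compares the full origin.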
For the Docker based version, please check upgrade Seafile Docker image Seafile version 12.0 has the following major changes: Other changes: Breaking changes Deploying SeaDoc and the Seafile binary package on the same server is no longer supported. You can: Deploying Seafile with the binary package is now deprecated and will probably no longer be supported in version 13.0. We recommend migrating your existing Seafile deployment to a Docker based one. The Elasticsearch version is not changed in Seafile version 12.0. Note, you should install Python libraries system-wide as the root user or in sudo mode. For Ubuntu 22.04/24.04 The following instruction is for the binary package based installation. If you use a Docker based installation, please see conf/.env Note: JWT_PRIVATE_KEY is a random string with a length of no less than 32 characters; generation example: If you have deployed SeaDoc v0.8 with Seafile v11.0, you can upgrade it to 1.0 using the following two steps: Note, deploying SeaDoc and the Seafile binary package on the same server is no longer supported. If you really want to deploy SeaDoc and Seafile server on the same machine, you should deploy Seafile server with Docker. From version 1.0, SeaDoc uses the seahub_db database to store its operation logs and no longer needs an extra database sdoc_db. The database tables in seahub_db are created automatically when you upgrade Seafile server from v11.0 to v12.0. You can simply delete sdoc_db. Please see the document Setup SeaDoc to install SeaDoc on a separate machine and integrate it with your binary package based Seafile server v12.0. We have documented common issues encountered by users when upgrading to version 12.0 in our FAQ https://cloud.seatable.io/dtable/external-links/7b976c85f504491cbe8e/?tid=0000&vid=0000. If you encounter any issue, please give it a check. These notes give additional information about changes. Please always follow the main upgrade guide. 
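One possible way to generate a JWT_PRIVATE_KEY of the required length (using openssl here is an assumption; any generator producing 32 or more random characters works):

```shell
# Generate a random string of at least 32 characters for JWT_PRIVATE_KEY
# (36 random bytes base64-encode to 48 characters).
JWT_KEY=$(openssl rand -base64 36 | tr -d '\n')
echo "JWT_PRIVATE_KEY=${JWT_KEY}"
```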
If you are currently using the Seafile Community Edition, please refer to Upgrade notes for CE-7.0.x. If you are currently using Seafile Professional, please refer to Upgrade notes for Pro-7.0.x. These notes give additional information about changes. Please always follow the main upgrade guide. From version 7.1.0, Seafile depends on Python 3 and is not compatible with Python 2. Therefore you cannot upgrade directly from Seafile 6.x.x to 7.1.x. If your current version of Seafile is not 7.0.x, you must first download the 7.0.x installation package and upgrade to 7.0.x before performing the subsequent operations. To support both Python 3.6 and 3.7, we no longer bundle Python libraries with the Seafile package. You need to install most of the libraries on your own, as below. Note, you should install Python libraries system-wide as the root user or in sudo mode. Since Seafile 7.1.x, SeafDAV no longer supports FastCGI, only WSGI. This means that if you are using SeafDAV and have deployed an Nginx or Apache reverse proxy, you need to change the FastCGI configuration to WSGI. For SeafDAV, the configuration of Nginx is as follows: For SeafDAV, the configuration of Apache is as follows: The implementation of the builtin office file preview has been changed. You should update your configuration according to: https://download.seafile.com/published/seafile-manual/deploy_pro/office_documents_preview.md#user-content-Version%207.1+ If you are using the Ceph storage backend, you need to install a new Python library. On Debian/Ubuntu (Seafile 7.1+): If you have customized the login page or other HTML pages, your customized pages may not work anymore, as we have removed some old JavaScript libraries. Please re-customize based on the newest version. Note, the following patch is included in versions pro-7.1.8 and ce-7.1.5 already. 
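As a rough sketch of a WSGI-style Nginx block for SeafDAV: the port 8080 and the header set below are assumptions (8080 is the default SeafDAV port in most guides); consult the manual for the authoritative configuration:

```shell
# Write a sample Nginx location block that proxies /seafdav to the
# WSGI SeafDAV server (port and file name are placeholders).
cat > seafdav-nginx.conf.sample <<'EOF'
location /seafdav {
    proxy_pass         http://127.0.0.1:8080;
    proxy_set_header   Host $host;
    proxy_set_header   X-Real-IP $remote_addr;
    proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size 0;
}
EOF
```

The key difference from the old setup is that the `fastcgi_pass`/`fastcgi_param` directives are replaced by a plain `proxy_pass` to the WSGI listener.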
Two customers have reported that after upgrading to version 7.1, users who log in via Shibboleth single sign-on get a wrong name if the name contains a special character. We suspect it is a Shibboleth problem, as it does not send the name in UTF-8 encoding to Seafile. (https://issues.shibboleth.net/jira/browse/SSPCPP-2) The solution is to modify the code in seahub/thirdpart/shibboleth/middleware.py: If you have this problem too, please let us know. The upgrade script will try to create a missing table and remove an unused index. The following SQL errors are just warnings and can be ignored: Please check whether the seahub process is running on your server. If it is running, there should be an error log in seahub.log for the internal server error. If the seahub process is not running, you can modify conf/gunicorn.conf, change The most common issue is that you use an old memcached configuration that depends on python-memcache. The new way is The old way is These notes give additional information about changes. Please always follow the main upgrade guide. From 8.0, the ccnet-server component is removed, but ccnet.conf is still needed. Note, you should install Python libraries system-wide as the root user or in sudo mode. If you are using Shibboleth and have configured please change it to As support for old-style middleware using Start from Seafile 7.1.x, run the script: Start Seafile-8.0.x server. These notes give additional information about changes. Please always follow the main upgrade guide. Version 9.0 includes the following major changes: The new file server written in Golang serves HTTP requests to upload/download/sync files. It provides three advantages: You can turn the Golang file server on by adding the following configuration in seafile.conf Note, you should install Python libraries system-wide as the root user or in sudo mode. Start from Seafile 9.0.x, run the script: Start Seafile-9.0.x server. 
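The new-style cache configuration referred to above uses Django's CACHES setting with the pylibmc backend. A sketch, assuming the django-pylibmc package and a local memcached on port 11211 (both placeholders; check the manual for your exact values):

```shell
# Append the new-style memcached cache settings to seahub_settings.py
# (file path and memcached address are placeholders).
cat >> seahub_settings.py <<'EOF'
CACHES = {
    'default': {
        'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',
        'LOCATION': '127.0.0.1:11211',
    },
}
EOF
```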
If your Elasticsearch data is not large, it is recommended to deploy the latest 7.x version of Elasticsearch and then rebuild the new index. The specific steps are as follows Download the Elasticsearch image Create a new folder to store ES data and give the folder permissions Note: You must properly grant permission to access the ES data directory, and run the Elasticsearch container as the root user; refer to here. Start the ES docker image Delete the old index data Modify seafevents.conf Restart Seafile If your data volume is relatively large, it will take a long time to rebuild indexes for all Seafile databases, so you can reindex the existing data instead. This requires the following steps The detailed process is as follows Download the Elasticsearch image: PS: For Seafile version 9.0, you need to manually create the Elasticsearch mapping path on the host machine and give it 777 permission, otherwise Elasticsearch will report path permission problems when starting; the command is as follows Move the original data to the new folder and give the folder permissions Note: You must properly grant permission to access the ES data directory, and run the Elasticsearch container as the root user; refer to here. Start the ES docker image Note: Create an index with 7.x compatible mappings. Set the Use the reindex API to copy documents from the 5.x index into the new index. Reset the Wait for the index status to change to Use the aliases API to delete the old index and add an alias with the old index name to the new index. After the reindex, modify the configuration in Seafile. Modify seafevents.conf Restart Seafile Deploy a new Elasticsearch 7.x service, use Seafile 9.0 to deploy a new backend node, and connect it to Elasticsearch 7.x. The backend node does not start the Seafile background service; just manually run the command
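Index maintenance on a Pro edition backend node is typically run through pro.py; the following is a sketch under the assumption that Seafile is installed under /opt/seafile (the path and these exact subcommands should be verified against the manual for your version):

```shell
# On the new backend node, rebuild the search index against the new
# Elasticsearch server (install path is a placeholder).
cd /opt/seafile/seafile-server-latest
./pro/pro.py search --clear    # remove existing index data
./pro/pro.py search --update   # index all libraries from scratch
```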
"},{"location":"#contact-information","title":"Contact information","text":"
"},{"location":"changelog/","title":"Changelog","text":""},{"location":"changelog/#changelogs","title":"Changelogs","text":"
"},{"location":"contribution/","title":"Contribution","text":""},{"location":"contribution/#licensing","title":"Licensing","text":"
"},{"location":"contribution/#discussion","title":"Discussion","text":"
"},{"location":"contribution/#code-style","title":"Code Style","text":"
"},{"location":"build_seafile/linux/","title":"Linux","text":""},{"location":"build_seafile/linux/#preparation","title":"Preparation","text":"
sudo apt-get install autoconf automake libtool libevent-dev libcurl4-openssl-dev libgtk2.0-dev uuid-dev intltool libsqlite3-dev valac libjansson-dev cmake qtchooser qtbase5-dev libqt5webkit5-dev qttools5-dev qttools5-dev-tools libssl-dev\n
"},{"location":"build_seafile/linux/#building","title":"Building","text":"$ sudo yum install wget gcc libevent-devel openssl-devel gtk2-devel libuuid-devel sqlite-devel jansson-devel intltool cmake libtool vala gcc-c++ qt5-qtbase-devel qt5-qttools-devel qt5-qtwebkit-devel libcurl-devel\n
# without alias wget= might not work\nshopt -s expand_aliases\n\nexport version=8.0.0\nalias wget='wget --content-disposition -nc'\nwget https://github.com/haiwen/libsearpc/archive/v3.2-latest.tar.gz\nwget https://github.com/haiwen/ccnet/archive/v${version}.tar.gz \nwget https://github.com/haiwen/seafile/archive/v${version}.tar.gz\nwget https://github.com/haiwen/seafile-client/archive/v${version}.tar.gz\ntar xf libsearpc-3.2-latest.tar.gz\ntar xf ccnet-${version}.tar.gz\ntar xf seafile-${version}.tar.gz\ntar xf seafile-client-${version}.tar.gz\n
"},{"location":"build_seafile/linux/#libsearpc","title":"libsearpc","text":"export PREFIX=/usr\nexport PKG_CONFIG_PATH=\"$PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH\"\nexport PATH=\"$PREFIX/bin:$PATH\"\n
"},{"location":"build_seafile/linux/#seafile","title":"seafile","text":"cd libsearpc-3.2-latest\n./autogen.sh\n./configure --prefix=$PREFIX\nmake\nsudo make install\ncd ..\ngit clone --branch=v4.3.0 https://github.com/warmcat/libwebsockets\ncd libwebsockets\nmkdir build\ncd build\ncmake ..\nmake\nsudo make install\ncd ../..\nYou can set --enable-ws to no to disable the notification server. After that, you can build seafile:
"},{"location":"build_seafile/linux/#seafile-client","title":"seafile-client","text":"cd seafile-${version}/\n./autogen.sh\n./configure --prefix=$PREFIX --disable-fuse\nmake\nsudo make install\ncd ..\n
"},{"location":"build_seafile/linux/#custom-prefix","title":"custom prefix","text":"cd seafile-client-${version}\ncmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=$PREFIX .\nmake\nsudo make install\ncd ..\n$PREFIX, i.e. /opt, you may need a script to set the path variables correctly: cat >$PREFIX/bin/seafile-applet.sh <<END\n#!/bin/bash\nexport LD_LIBRARY_PATH=\"$PREFIX/lib:$LD_LIBRARY_PATH\"\nexport PATH=\"$PREFIX/bin:$PATH\"\nexec seafile-applet \$@\nEND\ncat >$PREFIX/bin/seaf-cli.sh <<END\n#!/bin/bash\nexport LD_LIBRARY_PATH=\"$PREFIX/lib:$LD_LIBRARY_PATH\"\nexport PATH=\"$PREFIX/bin:$PATH\"\nexport PYTHONPATH=$PREFIX/lib/python2.7/site-packages\nexec seaf-cli \$@\nEND\nchmod +x $PREFIX/bin/seafile-applet.sh $PREFIX/bin/seaf-cli.sh\n$PREFIX/bin/seafile-applet.sh.
"},{"location":"build_seafile/osx/#building-sync-client","title":"Building Sync Client","text":"
universal_archs arm64 x86_64: specifies the architectures for which MacPorts compiles. +universal: MacPorts installs universal versions of all ports. sudo port install autoconf automake pkgconfig libtool glib2 libevent vala openssl git jansson cmake libwebsockets.
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/opt/local/lib/pkgconfig:/usr/local/lib/pkgconfig\nexport PATH=/opt/local/bin:/usr/local/bin:/opt/local/Library/Frameworks/Python.framework/Versions/3.10/bin:$PATH\nexport LDFLAGS=\"-L/opt/local/lib -L/usr/local/lib\"\nexport CFLAGS=\"-I/opt/local/include -I/usr/local/include\"\nexport CPPFLAGS=\"-I/opt/local/include -I/usr/local/include\"\nexport LD_LIBRARY_PATH=/opt/lib:/usr/local/lib:/opt/local/lib/:/usr/local/lib/:$LD_LIBRARY_PATH\n\nQT_BASE=$HOME/Qt/6.2.4/macos\nexport PATH=$QT_BASE/bin:$PATH\nexport PKG_CONFIG_PATH=$QT_BASE/lib/pkgconfig:$PKG_CONFIG_PATH\nexport NOTARIZE_APPLE_ID=\"Your notarize account\"\nexport NOTARIZE_PASSWORD=\"Your notarize password\"\nexport NOTARIZE_TEAM_ID=\"Your notarize team id\"\nseafile-workspace/\nseafile-workspace/libsearpc/\nseafile-workspace/seafile/\nseafile-workspace/seafile-client/\n$ cd seafile-workspace/libsearpc/\n$ ./autogen.sh\n$ ./configure --disable-compile-demo --enable-compile-universal=yes\n$ make\n$ make install\n$ cd seafile-workspace/seafile/\n$ ./autogen.sh\n$ ./configure --disable-fuse --enable-compile-universal=yes\n$ make\n$ make install\n
"},{"location":"build_seafile/osx/#packaging","title":"Packaging","text":"$ cd seafile-workspace/seafile-client/\n$ cmake -GXcode -B. -S.\n$ xcodebuild -target seafile-applet -configuration Release\n
"},{"location":"build_seafile/rpi/","title":"How to Build Seafile Server Release Package","text":"python3 build-mac-local-py3.py --brand=\"\" --version=1.0.0 --nostrip --universal\nThe seafile-build.sh script is compatible with more platforms, including Raspberry Pi, arm-64, and x86-64.
"},{"location":"build_seafile/rpi/#setup-the-build-environment","title":"Setup the build environment","text":"
"},{"location":"build_seafile/rpi/#install-packages","title":"Install packages","text":"
"},{"location":"build_seafile/rpi/#compile-development-libraries","title":"Compile development libraries","text":""},{"location":"build_seafile/rpi/#libevhtp","title":"libevhtp","text":"sudo apt-get install build-essential\nsudo apt-get install libevent-dev libcurl4-openssl-dev libglib2.0-dev uuid-dev intltool libsqlite3-dev libmysqlclient-dev libarchive-dev libtool libjansson-dev valac libfuse-dev re2c flex python-setuptools cmake\ngit clone https://www.github.com/haiwen/libevhtp.git\ncd libevhtp\ncmake -DEVHTP_DISABLE_SSL=ON -DEVHTP_BUILD_SHARED=OFF .\nmake\nsudo make install\nldconfig to update the system libraries cache:
"},{"location":"build_seafile/rpi/#install-python-libraries","title":"Install python libraries","text":"sudo ldconfig\n/home/pi/dev/seahub_thirdpart:mkdir -p ~/dev/seahub_thirdpart\n/tmp/:
/home/pi/dev/seahub_thirdpart:
"},{"location":"build_seafile/rpi/#prepare-seafile-source-code","title":"Prepare seafile source code","text":"cd ~/dev/seahub_thirdpart\nexport PYTHONPATH=.\npip install -t ~/dev/seahub_thirdpart/ /tmp/pytz-2016.1.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/Django-1.8.10.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django-statici18n-1.1.3.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/djangorestframework-3.3.2.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django_compressor-1.4.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/jsonfield-1.0.3.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django-post_office-2.0.6.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/gunicorn-19.4.5.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/flup-1.0.2.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/chardet-2.3.0.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/python-dateutil-1.5.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/six-1.9.0.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django-picklefield-0.3.2.tar.gz\nwget -O /tmp/django_constance.zip https://github.com/haiwen/django-constance/archive/bde7f7c.zip\npip install -t ~/dev/seahub_thirdpart/ /tmp/django_constance.zip\npip install -t ~/dev/seahub_thirdpart/ /tmp/jdcal-1.2.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/et_xmlfile-1.0.1.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/openpyxl-2.3.0.tar.gz\n
"},{"location":"build_seafile/rpi/#fetch-git-tags-and-prepare-source-tarballs","title":"Fetch git tags and prepare source tarballs","text":"build-server.py script to build the server package from the source tarballs.
v6.0.1-server tag.v3.0-latest tag (libsearpc has been quite stable and basically has no further development, so the tag is always v3.0-latest)PKG_CONFIG_PATH environment variable (so we don't need to make and make install libsearpc/ccnet/seafile into the system):
"},{"location":"build_seafile/rpi/#libsearpc","title":"libsearpc","text":"export PKG_CONFIG_PATH=/home/pi/dev/seafile/lib:$PKG_CONFIG_PATH\nexport PKG_CONFIG_PATH=/home/pi/dev/libsearpc:$PKG_CONFIG_PATH\nexport PKG_CONFIG_PATH=/home/pi/dev/ccnet:$PKG_CONFIG_PATH\n
"},{"location":"build_seafile/rpi/#ccnet","title":"ccnet","text":"cd ~/dev\ngit clone https://github.com/haiwen/libsearpc.git\ncd libsearpc\ngit reset --hard v3.0-latest\n./autogen.sh\n./configure\nmake dist\n
"},{"location":"build_seafile/rpi/#seafile","title":"seafile","text":"cd ~/dev\ngit clone https://github.com/haiwen/ccnet-server.git ccnet\ncd ccnet\ngit reset --hard v6.0.1-server\n./autogen.sh\n./configure\nmake dist\n
"},{"location":"build_seafile/rpi/#seahub","title":"seahub","text":"cd ~/dev\ngit clone https://github.com/haiwen/seafile-server.git seafile\ncd seafile\ngit reset --hard v6.0.1-server\n./autogen.sh\n./configure\nmake dist\n
"},{"location":"build_seafile/rpi/#seafobj","title":"seafobj","text":"cd ~/dev\ngit clone https://github.com/haiwen/seahub.git\ncd seahub\ngit reset --hard v6.0.1-server\n./tools/gen-tarball.py --version=6.0.1 --branch=HEAD\n
"},{"location":"build_seafile/rpi/#seafdav","title":"seafdav","text":"cd ~/dev\ngit clone https://github.com/haiwen/seafobj.git\ncd seafobj\ngit reset --hard v6.0.1-server\nmake dist\n
"},{"location":"build_seafile/rpi/#copy-the-source-tar-balls-to-the-same-folder","title":"Copy the source tar balls to the same folder","text":"cd ~/dev\ngit clone https://github.com/haiwen/seafdav.git\ncd seafdav\ngit reset --hard v6.0.1-server\nmake\n
"},{"location":"build_seafile/rpi/#run-the-packaging-script","title":"Run the packaging script","text":"mkdir ~/seafile-sources\ncp ~/dev/libsearpc/libsearpc-<version>-tar.gz ~/seafile-sources\ncp ~/dev/ccnet/ccnet-<version>-tar.gz ~/seafile-sources\ncp ~/dev/seafile/seafile-<version>-tar.gz ~/seafile-sources\ncp ~/dev/seahub/seahub-<version>-tar.gz ~/seafile-sources\n\ncp ~/dev/seafobj/seafobj.tar.gz ~/seafile-sources\ncp ~/dev/seafdav/seafdav.tar.gz ~/seafile-sources\nbuild-server.py script to build the server package.\nmkdir ~/seafile-server-pkgs\n~/dev/seafile/scripts/build-server.py --libsearpc_version=<libsearpc_version> --ccnet_version=<ccnet_version> --seafile_version=<seafile_version> --seahub_version=<seahub_version> --thirdpartdir=/home/pi/dev/seahub_thirdpart --srcdir=/home/pi/seafile-sources --outputdir=/home/pi/seafile-server-pkgs\nseafile-server_6.0.1_pi.tar.gz in ~/seafile-server-pkgs folder.
"},{"location":"build_seafile/rpi/#test-upgrading-from-a-previous-version","title":"Test upgrading from a previous version","text":"seafile.sh start and seahub.sh start, you can login from a browser.
"},{"location":"build_seafile/server/","title":"Server development","text":"root user, then:
"},{"location":"build_seafile/server/#run-a-container","title":"Run a container","text":"mkdir -p /root/seafile-ce-docker/source-code\nmkdir -p /root/seafile-ce-docker/conf\nmkdir -p /root/seafile-ce-docker/logs\nmkdir -p /root/seafile-ce-docker/mysql-data\nmkdir -p /root/seafile-ce-docker/seafile-data/library-template\ndocker run --mount type=bind,source=/root/seafile-ce-docker/source-code,target=/root/dev/source-code \\\n --mount type=bind,source=/root/seafile-ce-docker/conf,target=/root/dev/conf \\\n --mount type=bind,source=/root/seafile-ce-docker/logs,target=/root/dev/logs \\\n --mount type=bind,source=/root/seafile-ce-docker/seafile-data,target=/root/dev/seafile-data \\\n --mount type=bind,source=/root/seafile-ce-docker/mysql-data,target=/var/lib/mysql \\\n -it -p 8000:8000 -p 8082:8082 -p 3000:3000 --name seafile-ce-env ubuntu:22.04 bash\napt-get update && apt-get upgrade -y\n\napt-get install -y ssh libevent-dev libcurl4-openssl-dev libglib2.0-dev uuid-dev intltool libsqlite3-dev libmysqlclient-dev libarchive-dev libtool libjansson-dev valac libfuse-dev python3-dateutil cmake re2c flex sqlite3 python3-pip python3-simplejson git libssl-dev libldap2-dev libonig-dev vim vim-scripts wget cmake gcc autoconf automake mysql-client librados-dev libxml2-dev curl sudo telnet netcat unzip netbase ca-certificates apt-transport-https build-essential libxslt1-dev libffi-dev libpcre3-dev libz-dev xz-utils nginx pkg-config poppler-utils libmemcached-dev sudo ldap-utils libldap2-dev libjwt-dev\ncurl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg\necho \"deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_16.x nodistro main\" | sudo tee /etc/apt/sources.list.d/nodesource.list\napt-get install -y nodejs\n
"},{"location":"build_seafile/server/#install-mariadb-and-create-databases","title":"Install MariaDB and Create Databases","text":"apt-get install -y python3 python3-dev python3-pip python3-setuptools python3-ldap\n\npython3 -m pip install --upgrade pip\n\npip3 install Django==4.2.* django-statici18n==2.3.* django_webpack_loader==1.7.* django_picklefield==3.1 django_formtools==2.4 django_simple_captcha==0.6.* djangosaml2==1.5.* djangorestframework==3.14.* python-dateutil==2.8.* pyjwt==2.6.* pycryptodome==3.16.* python-cas==1.6.* pysaml2==7.2.* requests==2.28.* requests_oauthlib==1.3.* future==0.18.* gunicorn==20.1.* mysqlclient==2.1.* qrcode==7.3.* pillow==10.2.* chardet==5.1.* cffi==1.15.1 captcha==0.5.* openpyxl==3.0.* Markdown==3.4.* bleach==5.0.* python-ldap==3.4.* sqlalchemy==2.0.18 redis mock pytest pymysql configparser pylibmc django-pylibmc nose exam splinter pytest-django\napt-get install -y mariadb-server\nservice mariadb start\nmysqladmin -u root password your_password\n
"},{"location":"build_seafile/server/#download-source-code","title":"Download Source Code","text":"mysql -uroot -pyour_password -e \"CREATE DATABASE ccnet CHARACTER SET utf8;\"\nmysql -uroot -pyour_password -e \"CREATE DATABASE seafile CHARACTER SET utf8;\"\nmysql -uroot -pyour_password -e \"CREATE DATABASE seahub CHARACTER SET utf8;\"\n
"},{"location":"build_seafile/server/#compile-and-install-seaf-server","title":"Compile and Install seaf-server","text":"cd ~/\ncd ~/dev/source-code\n\ngit clone https://github.com/haiwen/libevhtp.git\ngit clone https://github.com/haiwen/libsearpc.git\ngit clone https://github.com/haiwen/seafile-server.git\ngit clone https://github.com/haiwen/seafevents.git\ngit clone https://github.com/haiwen/seafobj.git\ngit clone https://github.com/haiwen/seahub.git\n\ncd libevhtp/\ngit checkout tags/1.1.7 -b tag-1.1.7\n\ncd ../libsearpc/\ngit checkout tags/v3.3-latest -b tag-v3.3-latest\n\ncd ../seafile-server\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n\ncd ../seafevents\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n\ncd ../seafobj\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n\ncd ../seahub\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n
"},{"location":"build_seafile/server/#create-conf-files","title":"Create Conf Files","text":"cd ../libevhtp\ncmake -DEVHTP_DISABLE_SSL=ON -DEVHTP_BUILD_SHARED=OFF .\nmake\nmake install\nldconfig\n\ncd ../libsearpc\n./autogen.sh\n./configure\nmake\nmake install\nldconfig\n\ncd ../seafile-server\n./autogen.sh\n./configure --disable-fuse\nmake\nmake install\nldconfig\n
"},{"location":"build_seafile/server/#start-seaf-server","title":"Start seaf-server","text":"cd ~/dev/conf\n\ncat > ccnet.conf <<EOF\n[Database]\nENGINE = mysql\nHOST = localhost\nPORT = 3306\nUSER = root\nPASSWD = 123456\nDB = ccnet\nCONNECTION_CHARSET = utf8\nCREATE_TABLES = true\nEOF\n\ncat > seafile.conf <<EOF\n[database]\ntype = mysql\nhost = localhost\nport = 3306\nuser = root\npassword = 123456\ndb_name = seafile\nconnection_charset = utf8\ncreate_tables = true\nEOF\n\ncat > seafevents.conf <<EOF\n[DATABASE]\ntype = mysql\nusername = root\npassword = 123456\nname = seahub\nhost = localhost\nEOF\n\ncat > seahub_settings.py <<EOF\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.mysql',\n 'NAME': 'seahub',\n 'USER': 'root',\n 'PASSWORD': '123456',\n 'HOST': 'localhost',\n 'PORT': '3306',\n }\n}\nFILE_SERVER_ROOT = 'http://127.0.0.1:8082'\nSERVICE_URL = 'http://127.0.0.1:8000'\nEOF\n
"},{"location":"build_seafile/server/#start-seafevents-and-seahub","title":"Start seafevents and seahub","text":""},{"location":"build_seafile/server/#prepare-environment-variables","title":"Prepare environment variables","text":"seaf-server -F /root/dev/conf -d /root/dev/seafile-data -l /root/dev/logs/seafile.log >> /root/dev/logs/seafile.log 2>&1 &\n
"},{"location":"build_seafile/server/#start-seafevents","title":"Start seafevents","text":"export CCNET_CONF_DIR=/root/dev/conf\nexport SEAFILE_CONF_DIR=/root/dev/seafile-data\nexport SEAFILE_CENTRAL_CONF_DIR=/root/dev/conf\nexport SEAHUB_DIR=/root/dev/source-code/seahub\nexport SEAHUB_LOG_DIR=/root/dev/logs\nexport PYTHONPATH=/usr/local/lib/python3.10/dist-packages/:/usr/local/lib/python3.10/site-packages/:/root/dev/source-code/:/root/dev/source-code/seafobj/:/root/dev/source-code/seahub/thirdpart:$PYTHONPATH\n
"},{"location":"build_seafile/server/#start-seahub","title":"Start seahub","text":""},{"location":"build_seafile/server/#create-seahub-database-tables","title":"Create seahub database tables","text":"cd /root/dev/source-code/seafevents/\npython3 main.py --loglevel=debug --logfile=/root/dev/logs/seafevents.log --config-file /root/dev/conf/seafevents.conf >> /root/dev/logs/seafevents.log 2>&1 &\n
"},{"location":"build_seafile/server/#create-user","title":"Create user","text":"cd /root/dev/source-code/seahub/\npython3 manage.py migrate\n
"},{"location":"build_seafile/server/#start-seahub_1","title":"Start seahub","text":"python3 manage.py createsuperuser\npython3 manage.py runserver 0.0.0.0:8000\ncd /root/dev/source-code/seahub\n\ngit fetch origin master:master\ngit checkout master\nimport os\nPROJECT_ROOT = '/root/dev/source-code/seahub'\nWEBPACK_LOADER = {\n 'DEFAULT': {\n 'BUNDLE_DIR_NAME': 'frontend/',\n 'STATS_FILE': os.path.join(PROJECT_ROOT,\n 'frontend/webpack-stats.dev.json'),\n }\n}\nDEBUG = True\ncd /root/dev/source-code/seahub/frontend\n\nnpm install\ncd /root/dev/source-code/seahub/frontend\n\nnpm run dev\n
"},{"location":"build_seafile/windows/#breakpad","title":"Breakpad","text":"
# Example of the install command:\n$ ./vcpkg.exe install curl[core,openssl]:x64-windows\n
"},{"location":"build_seafile/windows/#building-sync-client","title":"Building Sync Client","text":"$ git clone --depth=1 git@github.com:chromium/gyp.git\n$ python setup.py install\n$ git clone --depth=1 git@github.com:google/breakpad.git\n$ cd breakpad\n$ git clone https://github.com/google/googletest.git testing\n$ cd ..\n# create vs solution, this may throw an error \"module collections.abc has no attribute OrderedDict\", you should open the msvs.py and replace 'collections.abc' with 'collections'.\n$ gyp --no-circular-check breakpad\\src\\client\\windows\\breakpad_client.gyp\n
gyp --no-circular-check breakpad\\src\\tools\\windows\\tools_windows.gyp\n
copy C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Redist\\MSVC\\v142\\MergeModules\\MergeModules\\Microsoft_VC142_CRT_x64.msm C:\\packagelib\nseafile-workspace/\nseafile-workspace/libsearpc/\nseafile-workspace/seafile/\nseafile-workspace/seafile-client/\nseafile-workspace/seafile-shell-ext/\n$ cd seafile-workspace/libsearpc/\n$ devenv libsearpc.sln /build \"Release|x64\"\n$ cd seafile-workspace/seafile/\n$ devenv seafile.sln /build \"Release|x64\"\n$ devenv msi/custom/seafile_custom.sln /build \"Release|x64\"\n$ cd seafile-workspace/seafile-client/\n$ devenv third_party/quazip/quazip.sln /build \"Release|x64\"\n$ devenv seafile-client.sln /build \"Release|x64\"\n
"},{"location":"build_seafile/windows/#packaging","title":"Packaging","text":"$ cd seafile-workspace/seafile-shell-ext/\n$ devenv extensions/seafile_ext.sln /build \"Release|x64\"\n$ devenv seadrive-thumbnail-ext/seadrive_thumbnail_ext.sln /build \"Release|x64\"\n
"},{"location":"changelog/changelog-for-seafile-professional-server-old/","title":"Seafile Professional Server Changelog (old)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#44","title":"4.4","text":"$ cd seafile-workspace/seafile-client/third_party/quazip\n$ devenv quazip.sln /build \"Release|x64\"\n$ cd seafile-workspace/seafile/scripts/build\n$ python build-msi-vs.py 1.0.0\n
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#448-20151217","title":"4.4.8 (2015.12.17)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#447-20151120","title":"4.4.7 (2015.11.20)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#446-20151109","title":"4.4.6 (2015.11.09)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#444-20151029","title":"4.4.4 (2015.10.29)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#443-20151020","title":"4.4.3 (2015.10.20)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#442-20151019","title":"4.4.2 (2015.10.19)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#441-beta-20150924","title":"4.4.1 beta (2015.09.24)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#440-beta-20150921","title":"4.4.0 beta (2015.09.21)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#43","title":"4.3","text":"COMPRESS_CACHE_BACKEND = 'locmem://' should be added to seahub_settings.py
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#433-20150821","title":"4.3.3 (2015.08.21)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#432-20150812","title":"4.3.2 (2015.08.12)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#431-20150731","title":"4.3.1 (2015.07.31)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#42","title":"4.2","text":"THUMBNAIL_DEFAULT_SIZE = 24, instead of THUMBNAIL_DEFAULT_SIZE = '24'
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#424-20150708","title":"4.2.4 (2015.07.08)","text":"rm -rf /tmp/seafile-office-output/html/\n
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#423-20150707","title":"4.2.3 (2015.07.07)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#422-20150703","title":"4.2.2 (2015.07.03)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#420-20150529","title":"4.2.0 (2015.05.29)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#41","title":"4.1","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#412-20150507","title":"4.1.2 (2015.05.07)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#411-20150416","title":"4.1.1 (2015.04.16)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#410-20150401","title":"4.1.0 (2015.04.01)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#40","title":"4.0","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#406-20150306","title":"4.0.6 (2015.03.06)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#405-20150213","title":"4.0.5 (2015.02.13)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#404-20150205","title":"4.0.4 (2015.02.05)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#403-20150115","title":"4.0.3 (2015.01.15)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#402-20150106","title":"4.0.2 (2015.01.06)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#401-20141229","title":"4.0.1 (2014.12.29)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#400-20141213","title":"4.0.0 (2014.12.13)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#31","title":"3.1","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#3113-20141125","title":"3.1.13 (2014.11.25)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#3112-20141117","title":"3.1.12 (2014.11.17)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#3111-20141103","title":"3.1.11 (2014.11.03)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#3110-20141027","title":"3.1.10 (2014.10.27)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#319-20141013","title":"3.1.9 (2014.10.13)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#317-318","title":"3.1.7, 3.1.8","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#316-20140916","title":"3.1.6 (2014.09.16)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#315-20140913","title":"3.1.5 (2014.09.13)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#314-20140911","title":"3.1.4 (2014.09.11)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#313-20140829","title":"3.1.3 (2014.08.29)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#312-20140827","title":"3.1.2 (2014.08.27)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#311-20140818","title":"3.1.1 (2014.08.18)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#310-20140815","title":"3.1.0 (2014.08.15)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#30","title":"3.0","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#307","title":"3.0.7","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#306","title":"3.0.6","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#305","title":"3.0.5","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#304","title":"3.0.4","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#303","title":"3.0.3","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#302","title":"3.0.2","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#301","title":"3.0.1","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#300","title":"3.0.0","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#22","title":"2.2","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#221","title":"2.2.1","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#21","title":"2.1","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#215","title":"2.1.5","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#214-1","title":"2.1.4-1","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#214","title":"2.1.4","text":"pro.py search --clear command
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#213","title":"2.1.3","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#212","title":"2.1.2","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#211","title":"2.1.1","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#20","title":"2.0","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#205","title":"2.0.5","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#204","title":"2.0.4","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#203","title":"2.0.3","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#201","title":"2.0.1","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#200","title":"2.0.0","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#18","title":"1.8","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#183","title":"1.8.3","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#182","title":"1.8.2","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#181","title":"1.8.1","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#180","title":"1.8.0","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#17","title":"1.7","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#1704","title":"1.7.0.4","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#170","title":"1.7.0","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/","title":"Seafile Professional Server Changelog","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#11014-2024-08-22","title":"11.0.14 (2024-08-22)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#11013-2024-08-14","title":"11.0.13 (2024-08-14)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#11012-2024-08-07","title":"11.0.12 (2024-08-07)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#11011-2024-07-24","title":"11.0.11 (2024-07-24)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#11010-2024-07-09","title":"11.0.10 (2024-07-09)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1109-2024-06-25","title":"11.0.9 (2024-06-25)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1108-2024-06-20","title":"11.0.8 (2024-06-20)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1107-2024-06-03","title":"11.0.7 (2024-06-03)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1106-beta-2024-04-19","title":"11.0.6 beta (2024-04-19)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1105-beta-2024-03-20","title":"11.0.5 beta (2024-03-20)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1104-beta-and-sdoc-editor-05-2024-02-01","title":"11.0.4 beta and SDoc editor 0.5 (2024-02-01)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#100","title":"10.0","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#10015-2024-03-21","title":"10.0.15 (2024-03-21)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#10014-2024-02-27","title":"10.0.14 (2024-02-27)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#10013-2024-02-05","title":"10.0.13 (2024-02-05)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#10011-2023-11-09","title":"10.0.11 (2023-11-09)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#10010-2023-10-17","title":"10.0.10 (2023-10-17)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1009-2023-08-25","title":"10.0.9 (2023-08-25)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1008-2023-08-01","title":"10.0.8 (2023-08-01)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1007-2023-07-25","title":"10.0.7 (2023-07-25)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1006-2023-06-27","title":"10.0.6 (2023-06-27)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1005-2023-06-12","title":"10.0.5 (2023-06-12)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1004-2023-05-17","title":"10.0.4 (2023-05-17)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1003-beta-2023-04-12","title":"10.0.3 beta (2023-04-12)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1002-beta-2023-04-12","title":"10.0.2 beta (2023-04-12)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1000-beta-2023-04-12","title":"10.0.0 beta (2023-04-12)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#90","title":"9.0","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#9015-2023-03-01","title":"9.0.15 (2023-03-01)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#9014-2023-01-06","title":"9.0.14 (2023-01-06)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#9013-2022-11-11","title":"9.0.13 (2022-11-11)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#9012-2022-11-04","title":"9.0.12 (2022-11-04)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#9011-2022-10-27","title":"9.0.11 (2022-10-27)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#9010-2022-10-12","title":"9.0.10 (2022-10-12)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#909-2022-09-22","title":"9.0.9 (2022-09-22)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#908-2022-09-09","title":"9.0.8 (2022-09-09)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#907-20220811","title":"9.0.7 (2022/08/11)","text":"pip3 install lxml to install it.
"},{"location":"changelog/changelog-for-seafile-professional-server/#906-20220706","title":"9.0.6 (2022/07/06)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#905-20220321","title":"9.0.5 (2022/03/21)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#904-20220124","title":"9.0.4 (2022/01/24)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#903-beta-20211228","title":"9.0.3 beta (2021/12/28)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#902-beta-20211215","title":"9.0.2 beta (2021/12/15)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#901","title":"9.0.1","text":"[fileserver]\nuse_go_fileserver = true\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#8016-20211228","title":"8.0.16 (2021/12/28)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#8015-20211206","title":"8.0.15 (2021/12/06)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#8014-20211117","title":"8.0.14 (2021/11/17)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#8012-20211103","title":"8.0.12 (2021/11/03)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#8011-20210926","title":"8.0.11 (2021/09/26)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#8010-20210909","title":"8.0.10 (2021/09/09)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#809-20210826","title":"8.0.9 (2021/08/26)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#808-20210806","title":"8.0.8 (2021/08/06)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#807-20210719","title":"8.0.7 (2021/07/19)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#806-20210715","title":"8.0.6 (2021/07/15)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#805-20210625","title":"8.0.5 (2021/06/25)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#804-20210520","title":"8.0.4 (2021/05/20)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#803-20210427","title":"8.0.3 (2021/04/27)","text":"
fs_id_list_request_timeout configuration, which defaults to 5 minutes. These two options are added to prevent long fs-id-list requests from overloading the server. If you have large libraries on the server, such requests can cause an \"internal server error\" to be returned to the client; set limits large enough for these two options.
"},{"location":"changelog/changelog-for-seafile-professional-server/#802-20210421","title":"8.0.2 (2021/04/21)","text":"[fileserver]\nmax_sync_file_count = 100000\nfs_id_list_request_timeout = 300\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#801-20210407","title":"8.0.1 (2021/04/07)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#800-beta-20210302","title":"8.0.0 beta (2021/03/02)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#71","title":"7.1","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7121-20210713","title":"7.1.21 (2021/07/13)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7120-20210702","title":"7.1.20 (2021/07/02)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7119-20210604","title":"7.1.19 (2021/06/04)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7118-20210513","title":"7.1.18 (2021/05/13)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7117-20210426","title":"7.1.17 (2021/04/26)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7116-20210419","title":"7.1.16 (2021/04/19)","text":"
fs_id_list_request_timeout configuration, which defaults to 5 minutes. These two options are added to prevent long fs-id-list requests from overloading the server. If you have large libraries on the server, such requests can cause an \"internal server error\" to be returned to the client; set limits large enough for these two options.
"},{"location":"changelog/changelog-for-seafile-professional-server/#7115-20210318","title":"7.1.15 (2021/03/18)","text":"[fileserver]\nmax_sync_file_count = 100000\nfs_id_list_request_timeout = 300\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#7114-20210226","title":"7.1.14 (2021/02/26)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7113-20210208","title":"7.1.13 (2021/02/08)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7112-20210203","title":"7.1.12 (2021/02/03)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7111-20210128","title":"7.1.11 (2021/01/28)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7110-20200111","title":"7.1.10 (2020/01/11)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#719-20201202","title":"7.1.9 (2020/12/02)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#718-20201012","title":"7.1.8 (2020/10/12)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#717-20200828","title":"7.1.7 (2020/08/28)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#716-20200728","title":"7.1.6 (2020/07/28)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#715-20200630","title":"7.1.5 (2020/06/30)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#714-20200514","title":"7.1.4 (2020/05/14)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#713-20200408","title":"7.1.3 (2020/04/08)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#711-beta-20200227","title":"7.1.1 Beta (2020/02/27)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#710-beta-20200219","title":"7.1.0 Beta (2020/02/19)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#70","title":"7.0","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7018-20200521","title":"7.0.18 (2020/05/21)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7017-20200428","title":"7.0.17 (2020/04/28)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7016-20200401","title":"7.0.16 (2020/04/01)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7015-deprecated","title":"7.0.15 (Deprecated)","text":""},{"location":"changelog/changelog-for-seafile-professional-server/#7014-20200306","title":"7.0.14 (2020/03/06)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7013-20200116","title":"7.0.13 (2020/01/16)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7012-20200110","title":"7.0.12 (2020/01/10)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7011-20191115","title":"7.0.11 (2019/11/15)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7010-20191022","title":"7.0.10 (2019/10/22)","text":"-Xms1g -Xmx1g
"},{"location":"changelog/changelog-for-seafile-professional-server/#709-20190920","title":"7.0.9 (2019/09/20)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#708-20190826","title":"7.0.8 (2019/08/26)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#707-20190729","title":"7.0.7 (2019/07/29)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#706-20190722","title":"7.0.6 (2019/07/22)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#705-20190716","title":"7.0.5 (2019/07/16)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#704-20190705","title":"7.0.4 (2019/07/05)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#703-20190613","title":"7.0.3 (2019/06/13)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#702-beta-20190517","title":"7.0.2 beta (2019/05/17)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#701-beta-20190418","title":"7.0.1 beta (2019/04/18)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#63","title":"6.3","text":"conf/gunicorn.conf instead of running ./seahub.sh start <another-port>../seahub.sh python-env seahub/manage.py migrate_file_comment\nseafevents.conf):[INDEX FILES]\n...\nhighlight = fvh\n...\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#6313-20190320","title":"6.3.13 (2019/03/20)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#6312-20190221","title":"6.3.12 (2019/02/21)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#6311-20190115","title":"6.3.11 (2019/01/15)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#6310-20190102","title":"6.3.10 (2019/01/02)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#639-20181213","title":"6.3.9 (2018/12/13)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#638-20181210","title":"6.3.8 (2018/12/10)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#637-20181016","title":"6.3.7 (2018/10/16)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#636-20180921","title":"6.3.6 (2018/09/21)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#635-20180918","title":"6.3.5 (2018/09/18)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#634-20180816","title":"6.3.4 (2018/08/16)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#633-20180815","title":"6.3.3 (2018/08/15)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#632-20180730","title":"6.3.2 (2018/07/30)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#631-20180725","title":"6.3.1 (2018/07/25)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#630-beta-20180628","title":"6.3.0 Beta (2018/06/28)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#62","title":"6.2","text":"
./seahub.sh start instead of ./seahub.sh start-fastcgilocation / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n # used for view/edit office file via Office Online Server\n client_max_body_size 0;\n\n access_log /var/log/nginx/seahub.access.log;\n error_log /var/log/nginx/seahub.error.log;\n }\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#6213-2018518","title":"6.2.13 (2018.5.18)","text":" # seahub\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#6212-2018420","title":"6.2.12 (2018.4.20)","text":"file already exists error for the first time.
"},{"location":"changelog/changelog-for-seafile-professional-server/#6211-2018419","title":"6.2.11 (2018.4.19)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#6210-2018320","title":"6.2.10 (2018.3.20)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#629-20180210","title":"6.2.9 (2018.02.10)","text":"per_page parameter to 10 when search file via api.
"},{"location":"changelog/changelog-for-seafile-professional-server/#628-20180202","title":"6.2.8 (2018.02.02)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#627-20180122","title":"6.2.7 (2018.01.22)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#625-626-deprecated","title":"6.2.5, 6.2.6 (deprecated)","text":""},{"location":"changelog/changelog-for-seafile-professional-server/#624-20171220","title":"6.2.4 (2017.12.20)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#623-20171219","title":"6.2.3 (2017.12.19)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#622-20171212","title":"6.2.2 (2017.12.12)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#621-beta-20171122","title":"6.2.1 beta (2017.11.22)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#620-beta-20171016","title":"6.2.0 beta (2017.10.16)","text":"repo_owner field to library search web api.
"},{"location":"changelog/changelog-for-seafile-professional-server/#61","title":"6.1","text":"ENABLE_REPO_SNAPSHOT_LABEL = True to turn the feature on)
"},{"location":"changelog/changelog-for-seafile-professional-server/#618-20170818","title":"6.1.8 (2017.08.18)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#617-20170817","title":"6.1.7 (2017.08.17)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#614-20170711","title":"6.1.4 (2017.07.11)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#613-20170706","title":"6.1.3 (2017.07.06)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#612-deprecated","title":"6.1.2 (deprecated)","text":""},{"location":"changelog/changelog-for-seafile-professional-server/#611-20170619","title":"6.1.1 (2017.06.19)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#610-beta-20170606","title":"6.1.0 beta (2017.06.06)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#60","title":"6.0","text":"ENABLE_WIKI = True in seahub_settings.py)cd /data/haiwen/\nln -s /nfs-share/seafile-httptemp seafile-data/httptemp\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#6012-20170417","title":"6.0.12 (2017.04.17)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#6011-deprecated","title":"6.0.11 (Deprecated)","text":""},{"location":"changelog/changelog-for-seafile-professional-server/#6010-20170407","title":"6.0.10 (2017.04.07)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#609-20170401","title":"6.0.9 (2017.04.01)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#608-20170223","title":"6.0.8 (2017.02.23)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#607-20170118","title":"6.0.7 (2017.01.18)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#606-20170111","title":"6.0.6 (2017.01.11)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#605-20161219","title":"6.0.5 (2016.12.19)","text":"# -*- coding: utf-8 -*- to seahub_settings.py, so that admin can use non-ascii characters in the file.
"},{"location":"changelog/changelog-for-seafile-professional-server/#604-20161129","title":"6.0.4 (2016.11.29)","text":"[Audit] and [AUDIT] in seafevent.conf
"},{"location":"changelog/changelog-for-seafile-professional-server/#603-20161117","title":"6.0.3 (2016.11.17)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#602-20161020","title":"6.0.2 (2016.10.20)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#601-beta","title":"6.0.1 beta","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#600-beta","title":"6.0.0 beta","text":"
"},{"location":"changelog/client-changelog/","title":"Seafile Client Changelog","text":""},{"location":"changelog/client-changelog/#90","title":"9.0","text":""},{"location":"changelog/client-changelog/#908-20240812","title":"9.0.8 (2024/08/12)","text":"
"},{"location":"changelog/client-changelog/#907-20240723","title":"9.0.7 (2024/07/23)","text":"
"},{"location":"changelog/client-changelog/#906-20240523","title":"9.0.6 (2024/05/23)","text":"
"},{"location":"changelog/client-changelog/#905-20240305","title":"9.0.5 (2024/03/05)","text":"
"},{"location":"changelog/client-changelog/#904-20230913","title":"9.0.4 (2023/09/13)","text":"
"},{"location":"changelog/client-changelog/#903-20230705","title":"9.0.3 (2023/07/05)","text":"
"},{"location":"changelog/client-changelog/#902-20230427","title":"9.0.2 (2023/04/27)","text":"
"},{"location":"changelog/client-changelog/#901-20230324","title":"9.0.1 (2023/03/24)","text":"
"},{"location":"changelog/client-changelog/#900-20230320","title":"9.0.0 (2023/03/20)","text":"
"},{"location":"changelog/client-changelog/#80","title":"8.0","text":""},{"location":"changelog/client-changelog/#8010-20221228","title":"8.0.10 (2022/12/28)","text":"
"},{"location":"changelog/client-changelog/#809-20221114","title":"8.0.9 (2022/11/14)","text":"
"},{"location":"changelog/client-changelog/#808-20220705","title":"8.0.8 (2022/07/05)","text":"
"},{"location":"changelog/client-changelog/#807-20220429","title":"8.0.7 (2022/04/29)","text":"
"},{"location":"changelog/client-changelog/#806-20220304","title":"8.0.6 (2022/03/04)","text":"
"},{"location":"changelog/client-changelog/#805-20211118","title":"8.0.5 (2021/11/18)","text":"
"},{"location":"changelog/client-changelog/#804-20210922","title":"8.0.4 (2021/09/22)","text":"
"},{"location":"changelog/client-changelog/#803-20210703","title":"8.0.3 (2021/07/03)","text":"
"},{"location":"changelog/client-changelog/#802-20210521","title":"8.0.2 (2021/05/21)","text":"
"},{"location":"changelog/client-changelog/#801-20201215","title":"8.0.1 (2020/12/15)","text":"
"},{"location":"changelog/client-changelog/#800-beta-20201128","title":"8.0.0 beta (2020/11/28)","text":"
"},{"location":"changelog/client-changelog/#70","title":"7.0","text":""},{"location":"changelog/client-changelog/#7010-20201016","title":"7.0.10 (2020/10/16)","text":"
"},{"location":"changelog/client-changelog/#709-20200730","title":"7.0.9 (2020/07/30)","text":"
"},{"location":"changelog/client-changelog/#708-20200603","title":"7.0.8 (2020/06/03)","text":"
"},{"location":"changelog/client-changelog/#707-20200403","title":"7.0.7 (2020/04/03)","text":"
"},{"location":"changelog/client-changelog/#706-20200214","title":"7.0.6 (2020/02/14)","text":"._
"},{"location":"changelog/client-changelog/#705-20200114","title":"7.0.5 (2020/01/14)","text":"
"},{"location":"changelog/client-changelog/#704-20191120","title":"7.0.4 (2019/11/20)","text":"
"},{"location":"changelog/client-changelog/#703-20191031","title":"7.0.3 (2019/10/31)","text":"
"},{"location":"changelog/client-changelog/#702-20190812","title":"7.0.2 (2019/08/12)","text":"
"},{"location":"changelog/client-changelog/#701-20190711","title":"7.0.1 (2019/07/11)","text":"
"},{"location":"changelog/client-changelog/#700-20190604","title":"7.0.0 (2019/06/04)","text":"
"},{"location":"changelog/client-changelog/#62","title":"6.2","text":""},{"location":"changelog/client-changelog/#6210-20190115","title":"6.2.10 (2019/01/15)","text":"
"},{"location":"changelog/client-changelog/#629-20181210","title":"6.2.9 (2018/12/10)","text":"
"},{"location":"changelog/client-changelog/#628-20181205","title":"6.2.8 (2018/12/05)","text":"
"},{"location":"changelog/client-changelog/#627-20181122","title":"6.2.7 (2018/11/22)","text":"
"},{"location":"changelog/client-changelog/#625-20180914","title":"6.2.5 (2018/09/14)","text":"
"},{"location":"changelog/client-changelog/#624-20180803","title":"6.2.4 (2018/08/03)","text":"
"},{"location":"changelog/client-changelog/#623-20180730","title":"6.2.3 (2018/07/30)","text":"
"},{"location":"changelog/client-changelog/#622-621-beta-20180713","title":"6.2.2 6.2.1 Beta (2018/07/13)","text":"
"},{"location":"changelog/client-changelog/#620-beta-20180703","title":"6.2.0 Beta (2018/07/03)","text":"
"},{"location":"changelog/client-changelog/#61","title":"6.1","text":""},{"location":"changelog/client-changelog/#618-20180508","title":"6.1.8 (2018/05/08)","text":"
"},{"location":"changelog/client-changelog/#617-20180329","title":"6.1.7 (2018/03/29)","text":"
"},{"location":"changelog/client-changelog/#616-20180313","title":"6.1.6 (2018/03/13)","text":"
"},{"location":"changelog/client-changelog/#615-20180206","title":"6.1.5 (2018/02/06)","text":"
"},{"location":"changelog/client-changelog/#614-20171220","title":"6.1.4 (2017/12/20)","text":"
"},{"location":"changelog/client-changelog/#613-20171103","title":"6.1.3 (2017/11/03)","text":"
"},{"location":"changelog/client-changelog/#612-20171028","title":"6.1.2 (2017/10/28)","text":"
"},{"location":"changelog/client-changelog/#611-20170920","title":"6.1.1 (2017/09/20)","text":"
"},{"location":"changelog/client-changelog/#610-20170802","title":"6.1.0 (2017/08/02)","text":"
"},{"location":"changelog/client-changelog/#60","title":"6.0","text":""},{"location":"changelog/client-changelog/#607-20170623","title":"6.0.7 (2017/06/23)","text":"
"},{"location":"changelog/client-changelog/#606-20170508","title":"6.0.6 (2017/05/08)","text":"
"},{"location":"changelog/client-changelog/#604-20170221","title":"6.0.4 (2017/02/21)","text":"
"},{"location":"changelog/client-changelog/#603-20170211","title":"6.0.3 (2017/02/11)","text":"
"},{"location":"changelog/client-changelog/#602-deprecated","title":"6.0.2 (deprecated)","text":"
"},{"location":"changelog/client-changelog/#600-20161014","title":"6.0.0 (2016/10/14)","text":"
"},{"location":"changelog/client-changelog/#51","title":"5.1","text":""},{"location":"changelog/client-changelog/#514-20160729","title":"5.1.4 (2016/07/29)","text":"
"},{"location":"changelog/client-changelog/#513-20160627","title":"5.1.3 (2016/06/27)","text":"
"},{"location":"changelog/client-changelog/#512-20160607","title":"5.1.2 (2016/06/07)","text":"
"},{"location":"changelog/client-changelog/#511-20160504","title":"5.1.1 (2016/05/04)","text":"
"},{"location":"changelog/client-changelog/#510-20160411","title":"5.1.0 (2016/04/11)","text":"
"},{"location":"changelog/client-changelog/#50","title":"5.0","text":""},{"location":"changelog/client-changelog/#507-20160329","title":"5.0.7 (2016/03/29)","text":"
"},{"location":"changelog/client-changelog/#506-20160308","title":"5.0.6 (2016/03/08)","text":"
"},{"location":"changelog/client-changelog/#505-20160220","title":"5.0.5 (2016/02/20)","text":"
"},{"location":"changelog/client-changelog/#504-20160126","title":"5.0.4 (2016/01/26)","text":"
"},{"location":"changelog/client-changelog/#503-20160113","title":"5.0.3 (2016/01/13)","text":"
"},{"location":"changelog/client-changelog/#502-20160111","title":"5.0.2 (2016/01/11)","text":"
"},{"location":"changelog/client-changelog/#501-20151221","title":"5.0.1 (2015/12/21)","text":"
"},{"location":"changelog/client-changelog/#500-20151125","title":"5.0.0 (2015/11/25)","text":"
"},{"location":"changelog/client-changelog/#44","title":"4.4","text":""},{"location":"changelog/client-changelog/#442-20151020","title":"4.4.2 (2015/10/20)","text":"
"},{"location":"changelog/client-changelog/#441-20151014","title":"4.4.1 (2015/10/14)","text":"
"},{"location":"changelog/client-changelog/#440-20150918","title":"4.4.0 (2015/09/18)","text":"
"},{"location":"changelog/client-changelog/#43","title":"4.3","text":""},{"location":"changelog/client-changelog/#434-20150914","title":"4.3.4 (2015/09/14)","text":"
"},{"location":"changelog/client-changelog/#433-20150825","title":"4.3.3 (2015/08/25)","text":"
"},{"location":"changelog/client-changelog/#432-20150819","title":"4.3.2 (2015/08/19)","text":"
"},{"location":"changelog/client-changelog/#431-20150811","title":"4.3.1 (2015/08/11)","text":"
"},{"location":"changelog/client-changelog/#430-beta-20150803","title":"4.3.0 beta (2015/08/03)","text":"
"},{"location":"changelog/client-changelog/#42","title":"4.2","text":""},{"location":"changelog/client-changelog/#428-20150711","title":"4.2.8 (2015/07/11)","text":"
"},{"location":"changelog/client-changelog/#427-20150708","title":"4.2.7 (2015/07/08)","text":"
"},{"location":"changelog/client-changelog/#426-20150625","title":"4.2.6 (2015/06/25)","text":"
"},{"location":"changelog/client-changelog/#425-20150624","title":"4.2.5 (2015/06/24)","text":"
"},{"location":"changelog/client-changelog/#424-20150611","title":"4.2.4 (2015/06/11)","text":"
"},{"location":"changelog/client-changelog/#423-20150529","title":"4.2.3 (2015/05/29)","text":"
"},{"location":"changelog/client-changelog/#422-20150526","title":"4.2.2 (2015/05/26)","text":"
"},{"location":"changelog/client-changelog/#421-20150514","title":"4.2.1 (2015/05/14)","text":"
"},{"location":"changelog/client-changelog/#420-20150507","title":"4.2.0 (2015/05/07)","text":"
"},{"location":"changelog/client-changelog/#41","title":"4.1","text":""},{"location":"changelog/client-changelog/#416-20150421","title":"4.1.6 (2015/04/21)","text":"
"},{"location":"changelog/client-changelog/#415-20150409","title":"4.1.5 (2015/04/09)","text":"
"},{"location":"changelog/client-changelog/#414-20150327","title":"4.1.4 (2015/03/27)","text":"
"},{"location":"changelog/client-changelog/#413-20150323","title":"4.1.3 (2015/03/23)","text":"
"},{"location":"changelog/client-changelog/#412-20150319-deprecated","title":"4.1.2 (2015/03/19) (deprecated)","text":"
"},{"location":"changelog/client-changelog/#411-20150303","title":"4.1.1 (2015/03/03)","text":"
"},{"location":"changelog/client-changelog/#410-beta-20150129","title":"4.1.0 beta (2015/01/29)","text":"
"},{"location":"changelog/client-changelog/#40","title":"4.0","text":""},{"location":"changelog/client-changelog/#407-20150122","title":"4.0.7 (2015/01/22)","text":"
"},{"location":"changelog/client-changelog/#405-20141224","title":"4.0.5 (2014/12/24)","text":"
"},{"location":"changelog/client-changelog/#404-20141215","title":"4.0.4 (2014/12/15)","text":"
"},{"location":"changelog/client-changelog/#402-20141129","title":"4.0.2 (2014/11/29)","text":"
"},{"location":"changelog/client-changelog/#401-20141118","title":"4.0.1 (2014/11/18)","text":"
"},{"location":"changelog/client-changelog/#400-20141110","title":"4.0.0 (2014/11/10)","text":"
"},{"location":"changelog/client-changelog/#31","title":"3.1","text":""},{"location":"changelog/client-changelog/#3112-20141201","title":"3.1.12 (2014/12/01)","text":"
"},{"location":"changelog/client-changelog/#3111-20141115","title":"3.1.11 (2014/11/15)","text":"
"},{"location":"changelog/client-changelog/#318-20141028","title":"3.1.8 (2014/10/28)","text":"
"},{"location":"changelog/client-changelog/#317-20140928","title":"3.1.7 (2014/09/28)","text":"
"},{"location":"changelog/client-changelog/#316-20140919","title":"3.1.6 (2014/09/19)","text":"
"},{"location":"changelog/client-changelog/#315-20140814","title":"3.1.5 (2014/08/14)","text":"
"},{"location":"changelog/client-changelog/#314-20140805","title":"3.1.4 (2014/08/05)","text":"
"},{"location":"changelog/client-changelog/#313-20140804","title":"3.1.3 (2014/08/04)","text":"
"},{"location":"changelog/client-changelog/#312-20140801","title":"3.1.2 (2014/08/01)","text":"
"},{"location":"changelog/client-changelog/#311-20140728","title":"3.1.1 (2014/07/28)","text":"
"},{"location":"changelog/client-changelog/#310-20140724","title":"3.1.0 (2014/07/24)","text":"
"},{"location":"changelog/client-changelog/#30","title":"3.0","text":""},{"location":"changelog/client-changelog/#304","title":"3.0.4","text":"
"},{"location":"changelog/client-changelog/#303","title":"3.0.3","text":"
"},{"location":"changelog/client-changelog/#302","title":"3.0.2","text":"
"},{"location":"changelog/client-changelog/#301","title":"3.0.1","text":"
"},{"location":"changelog/client-changelog/#300","title":"3.0.0","text":"
"},{"location":"changelog/client-changelog/#22","title":"2.2","text":""},{"location":"changelog/client-changelog/#220","title":"2.2.0","text":"
"},{"location":"changelog/client-changelog/#21","title":"2.1","text":""},{"location":"changelog/client-changelog/#212","title":"2.1.2","text":"
"},{"location":"changelog/client-changelog/#211","title":"2.1.1","text":"
"},{"location":"changelog/client-changelog/#210","title":"2.1.0","text":"
"},{"location":"changelog/client-changelog/#20","title":"2.0","text":""},{"location":"changelog/client-changelog/#208","title":"2.0.8","text":"
"},{"location":"changelog/client-changelog/#207-dont-use-it","title":"2.0.7 (Don't use it)","text":"
"},{"location":"changelog/client-changelog/#206","title":"2.0.6","text":"
"},{"location":"changelog/client-changelog/#205","title":"2.0.5","text":"
"},{"location":"changelog/client-changelog/#204","title":"2.0.4","text":"
"},{"location":"changelog/client-changelog/#203","title":"2.0.3","text":"
"},{"location":"changelog/client-changelog/#202","title":"2.0.2","text":"
"},{"location":"changelog/client-changelog/#200","title":"2.0.0","text":"
"},{"location":"changelog/client-changelog/#18","title":"1.8","text":"
"},{"location":"changelog/client-changelog/#17","title":"1.7","text":"
"},{"location":"changelog/client-changelog/#16","title":"1.6","text":"
"},{"location":"changelog/client-changelog/#15","title":"1.5","text":"
"},{"location":"changelog/drive-client-changelog/","title":"SeaDrive Client Changelog","text":""},{"location":"changelog/drive-client-changelog/#3011-20240910","title":"3.0.11 (2024/09/10)","text":"
"},{"location":"changelog/drive-client-changelog/#3010-20240618","title":"3.0.10 (2024/06/18)","text":"
"},{"location":"changelog/drive-client-changelog/#309-20240425","title":"3.0.9 (2024/04/25)","text":"
"},{"location":"changelog/drive-client-changelog/#308-20240221","title":"3.0.8 (2024/02/21)","text":"
"},{"location":"changelog/drive-client-changelog/#307-20231204","title":"3.0.7 (2023/12/04)","text":"
"},{"location":"changelog/drive-client-changelog/#306-20230915","title":"3.0.6 (2023/09/15)","text":"
"},{"location":"changelog/drive-client-changelog/#305-20230815","title":"3.0.5 (2023/08/15)","text":"
"},{"location":"changelog/drive-client-changelog/#304-20230610","title":"3.0.4 (2023/06/10)","text":"
"},{"location":"changelog/drive-client-changelog/#303-20230525","title":"3.0.3 (2023/05/25)","text":"
"},{"location":"changelog/drive-client-changelog/#302-beta-20230324","title":"3.0.2 Beta (2023/03/24)","text":"
"},{"location":"changelog/drive-client-changelog/#2027-for-windows-20230324","title":"2.0.27 for Windows (2023/03/24)","text":"
"},{"location":"changelog/drive-client-changelog/#2026-20221228","title":"2.0.26 (2022/12/28)","text":"
"},{"location":"changelog/drive-client-changelog/#2025-windows-20221203","title":"2.0.25 (Windows) (2022/12/03)","text":"
"},{"location":"changelog/drive-client-changelog/#2024-windows-20221114","title":"2.0.24 (Windows) (2022/11/14)","text":"
"},{"location":"changelog/drive-client-changelog/#2024-macos-20221109","title":"2.0.24 (macOS) (2022/11/09)","text":"
"},{"location":"changelog/drive-client-changelog/#2023-20220818","title":"2.0.23 (2022/08/18)","text":"
"},{"location":"changelog/drive-client-changelog/#2022-20220623","title":"2.0.22 (2022/06/23)","text":"
"},{"location":"changelog/drive-client-changelog/#2021-windows-20220321","title":"2.0.21 (Windows) (2022/03/21)","text":"
"},{"location":"changelog/drive-client-changelog/#2020-20220304","title":"2.0.20 (2022/03/04)","text":"
"},{"location":"changelog/drive-client-changelog/#2019-windows-20211229","title":"2.0.19 (Windows) (2021/12/29)","text":"
"},{"location":"changelog/drive-client-changelog/#2018-macos-20211029","title":"2.0.18 (macOS) (2021/10/29)","text":"
"},{"location":"changelog/drive-client-changelog/#2018-windows-20211026","title":"2.0.18 (Windows) (2021/10/26)","text":"
"},{"location":"changelog/drive-client-changelog/#2017-20210930","title":"2.0.17 (2021/09/30)","text":"
"},{"location":"changelog/drive-client-changelog/#2016-2021813","title":"2.0.16 (2021/8/13)","text":"
"},{"location":"changelog/drive-client-changelog/#2015-2021720","title":"2.0.15 (2021/7/20)","text":"
"},{"location":"changelog/drive-client-changelog/#2014-2021526","title":"2.0.14 (2021/5/26)","text":"
"},{"location":"changelog/drive-client-changelog/#2013-2021323","title":"2.0.13 (2021/3/23)","text":"
"},{"location":"changelog/drive-client-changelog/#2012-2021129","title":"2.0.12 (2021/1/29)","text":"
"},{"location":"changelog/drive-client-changelog/#2010-20201229","title":"2.0.10 (2020/12/29)","text":"
"},{"location":"changelog/drive-client-changelog/#209-20201120","title":"2.0.9 (2020/11/20)","text":"
"},{"location":"changelog/drive-client-changelog/#208-20201114","title":"2.0.8 (2020/11/14)","text":"
"},{"location":"changelog/drive-client-changelog/#207-20201031","title":"2.0.7 (2020/10/31)","text":"
"},{"location":"changelog/drive-client-changelog/#206-20200924","title":"2.0.6 (2020/09/24)","text":"
"},{"location":"changelog/drive-client-changelog/#1012-20200825","title":"1.0.12 (2020/08/25)","text":"
"},{"location":"changelog/drive-client-changelog/#205-20200730","title":"2.0.5 (2020/07/30)","text":"
"},{"location":"changelog/drive-client-changelog/#204-20200713","title":"2.0.4 (2020/07/13)","text":"
"},{"location":"changelog/drive-client-changelog/#203-20200617","title":"2.0.3 (2020/06/17)","text":"
"},{"location":"changelog/drive-client-changelog/#202-20200523","title":"2.0.2 (2020/05/23)","text":"
"},{"location":"changelog/drive-client-changelog/#201-for-windows-20200413","title":"2.0.1 for Windows (2020/04/13)","text":"
"},{"location":"changelog/drive-client-changelog/#200-for-windows-20200320","title":"2.0.0 for Windows (2020/03/20)","text":"
"},{"location":"changelog/drive-client-changelog/#1011-20200207","title":"1.0.11 (2020/02/07)","text":"
"},{"location":"changelog/drive-client-changelog/#1010-20191223","title":"1.0.10 (2019/12/23)","text":"
"},{"location":"changelog/drive-client-changelog/#108-20191105","title":"1.0.8 (2019/11/05)","text":"
"},{"location":"changelog/drive-client-changelog/#107-20190821","title":"1.0.7 (2019/08/21)","text":"
"},{"location":"changelog/drive-client-changelog/#106-20190701","title":"1.0.6 (2019/07/01)","text":"
"},{"location":"changelog/drive-client-changelog/#105-20190611","title":"1.0.5 (2019/06/11)","text":"
"},{"location":"changelog/drive-client-changelog/#104-20190423","title":"1.0.4 (2019/04/23)","text":"
"},{"location":"changelog/drive-client-changelog/#103-20190318","title":"1.0.3 (2019/03/18)","text":"
"},{"location":"changelog/drive-client-changelog/#101-20190114","title":"1.0.1 (2019/01/14)","text":"
"},{"location":"changelog/drive-client-changelog/#100-20181119","title":"1.0.0 (2018/11/19)","text":"
"},{"location":"changelog/drive-client-changelog/#095-20180910","title":"0.9.5 (2018/09/10)","text":"
"},{"location":"changelog/drive-client-changelog/#094-20180818","title":"0.9.4 (2018/08/18)","text":"
"},{"location":"changelog/drive-client-changelog/#093-20180619","title":"0.9.3 (2018/06/19)","text":"
"},{"location":"changelog/drive-client-changelog/#092-20180505","title":"0.9.2 (2018/05/05)","text":"
"},{"location":"changelog/drive-client-changelog/#091-20180424","title":"0.9.1 (2018/04/24)","text":"
"},{"location":"changelog/drive-client-changelog/#090-20180424","title":"0.9.0 (2018/04/24)","text":"
"},{"location":"changelog/drive-client-changelog/#086-20180319","title":"0.8.6 (2018/03/19)","text":"
"},{"location":"changelog/drive-client-changelog/#085-20180103","title":"0.8.5 (2018/01/03)","text":"
"},{"location":"changelog/drive-client-changelog/#084-20171201","title":"0.8.4 (2017/12/01)","text":"
"},{"location":"changelog/drive-client-changelog/#083-20171124","title":"0.8.3 (2017/11/24)","text":"
"},{"location":"changelog/drive-client-changelog/#081-20171103","title":"0.8.1 (2017/11/03)","text":"
"},{"location":"changelog/drive-client-changelog/#080-20170916","title":"0.8.0 (2017/09/16)","text":"
"},{"location":"changelog/drive-client-changelog/#071-20170623","title":"0.7.1 (2017/06/23)","text":"
"},{"location":"changelog/drive-client-changelog/#070-20170607","title":"0.7.0 (2017/06/07)","text":"
"},{"location":"changelog/drive-client-changelog/#062-20170422","title":"0.6.2 (2017/04/22)","text":"
"},{"location":"changelog/drive-client-changelog/#061-20170327","title":"0.6.1 (2017/03/27)","text":"
"},{"location":"changelog/drive-client-changelog/#060-20170325","title":"0.6.0 (2017/03/25)","text":"S: because a few programs will automatically try to create files in S:
"},{"location":"changelog/drive-client-changelog/#052-20170309","title":"0.5.2 (2017/03/09)","text":"
"},{"location":"changelog/drive-client-changelog/#051-20170216","title":"0.5.1 (2017/02/16)","text":"
"},{"location":"changelog/drive-client-changelog/#050-20170118","title":"0.5.0 (2017/01/18)","text":"
"},{"location":"changelog/drive-client-changelog/#042-20161216","title":"0.4.2 (2016/12/16)","text":"
"},{"location":"changelog/drive-client-changelog/#041-20161107","title":"0.4.1 (2016/11/07)","text":"
"},{"location":"changelog/drive-client-changelog/#040-20161105","title":"0.4.0 (2016/11/05)","text":"
"},{"location":"changelog/drive-client-changelog/#031-20161022","title":"0.3.1 (2016/10/22)","text":"
"},{"location":"changelog/drive-client-changelog/#030-20161014","title":"0.3.0 (2016/10/14)","text":"
"},{"location":"changelog/drive-client-changelog/#020-20160915","title":"0.2.0 (2016/09/15)","text":"
"},{"location":"changelog/drive-client-changelog/#010-20160902","title":"0.1.0 (2016/09/02)","text":"
"},{"location":"changelog/server-changelog-old/","title":"Seafile Server Changelog (old)","text":""},{"location":"changelog/server-changelog-old/#50","title":"5.0","text":"conf, including:
"},{"location":"changelog/server-changelog-old/#505-20160302","title":"5.0.5 (2016.03.02)","text":"
"},{"location":"changelog/server-changelog-old/#503-20151217","title":"5.0.3 (2015.12.17)","text":"
"},{"location":"changelog/server-changelog-old/#502-20151204","title":"5.0.2 (2015.12.04)","text":"
"},{"location":"changelog/server-changelog-old/#501-beta-20151112","title":"5.0.1 beta (2015.11.12)","text":"
"},{"location":"changelog/server-changelog-old/#500-beta-20151103","title":"5.0.0 beta (2015.11.03)","text":"[[ Pagename]].
conf
"},{"location":"changelog/server-changelog-old/#44","title":"4.4","text":""},{"location":"changelog/server-changelog-old/#446-20151109","title":"4.4.6 (2015.11.09)","text":"
"},{"location":"changelog/server-changelog-old/#444-20151027","title":"4.4.4 (2015.10.27)","text":"
"},{"location":"changelog/server-changelog-old/#443-20151015","title":"4.4.3 (2015.10.15)","text":"
"},{"location":"changelog/server-changelog-old/#442-20151012","title":"4.4.2 (2015.10.12)","text":"
"},{"location":"changelog/server-changelog-old/#441-20150924","title":"4.4.1 (2015.09.24)","text":"
"},{"location":"changelog/server-changelog-old/#440-20150916","title":"4.4.0 (2015.09.16)","text":"
"},{"location":"changelog/server-changelog-old/#43","title":"4.3","text":""},{"location":"changelog/server-changelog-old/#432-20150820","title":"4.3.2 (2015.08.20)","text":"
"},{"location":"changelog/server-changelog-old/#431-20150729","title":"4.3.1 (2015.07.29)","text":"
"},{"location":"changelog/server-changelog-old/#430-20150721","title":"4.3.0 (2015.07.21)","text":"
"},{"location":"changelog/server-changelog-old/#42","title":"4.2","text":"THUMBNAIL_DEFAULT_SIZE = 24, instead of THUMBNAIL_DEFAULT_SIZE = '24'
"},{"location":"changelog/server-changelog-old/#423-20150618","title":"4.2.3 (2015.06.18)","text":"COMPRESS_URL = MEDIA_URL\nSTATIC_URL = MEDIA_URL + '/assets/'\n
"},{"location":"changelog/server-changelog-old/#422-20150529","title":"4.2.2 (2015.05.29)","text":"
"},{"location":"changelog/server-changelog-old/#421-20150527","title":"4.2.1 (2015.05.27)","text":"
"},{"location":"changelog/server-changelog-old/#420-beta-20150513","title":"4.2.0 beta (2015.05.13)","text":"
"},{"location":"changelog/server-changelog-old/#41","title":"4.1","text":""},{"location":"changelog/server-changelog-old/#412-20150331","title":"4.1.2 (2015.03.31)","text":"
"},{"location":"changelog/server-changelog-old/#411-20150325","title":"4.1.1 (2015.03.25)","text":"
"},{"location":"changelog/server-changelog-old/#410-beta-20150318","title":"4.1.0 beta (2015.03.18)","text":"
"},{"location":"changelog/server-changelog-old/#40","title":"4.0","text":""},{"location":"changelog/server-changelog-old/#406-20150204","title":"4.0.6 (2015.02.04)","text":"
"},{"location":"changelog/server-changelog-old/#405-20150114","title":"4.0.5 (2015.01.14)","text":"
"},{"location":"changelog/server-changelog-old/#404-20150106","title":"4.0.4 (2015.01.06)","text":"
"},{"location":"changelog/server-changelog-old/#403-20141230","title":"4.0.3 (2014.12.30)","text":"
"},{"location":"changelog/server-changelog-old/#402-20141226","title":"4.0.2 (2014.12.26)","text":"
"},{"location":"changelog/server-changelog-old/#401-20141129","title":"4.0.1 (2014.11.29)","text":"
"},{"location":"changelog/server-changelog-old/#400-20141110","title":"4.0.0 (2014.11.10)","text":"
"},{"location":"changelog/server-changelog-old/#31","title":"3.1","text":""},{"location":"changelog/server-changelog-old/#317-20141020","title":"3.1.7 (2014.10.20)","text":"
"},{"location":"changelog/server-changelog-old/#316-20140911","title":"3.1.6 (2014.09.11)","text":"
"},{"location":"changelog/server-changelog-old/#315-20140829","title":"3.1.5 (2014.08.29)","text":"
"},{"location":"changelog/server-changelog-old/#314-20140826","title":"3.1.4 (2014.08.26)","text":"
"},{"location":"changelog/server-changelog-old/#313-20140818","title":"3.1.3 (2014.08.18)","text":"
"},{"location":"changelog/server-changelog-old/#312-20140807","title":"3.1.2 (2014.08.07)","text":"
"},{"location":"changelog/server-changelog-old/#311-20140801","title":"3.1.1 (2014.08.01)","text":"
"},{"location":"changelog/server-changelog-old/#310-20140724","title":"3.1.0 (2014.07.24)","text":"
"},{"location":"changelog/server-changelog-old/#30","title":"3.0","text":""},{"location":"changelog/server-changelog-old/#304-20140607","title":"3.0.4 (2014.06.07)","text":"
"},{"location":"changelog/server-changelog-old/#303","title":"3.0.3","text":"
"},{"location":"changelog/server-changelog-old/#302","title":"3.0.2","text":"
"},{"location":"changelog/server-changelog-old/#301","title":"3.0.1","text":"
"},{"location":"changelog/server-changelog-old/#300","title":"3.0.0","text":"
"},{"location":"changelog/server-changelog-old/#300-beta2","title":"3.0.0 beta2","text":"
"},{"location":"changelog/server-changelog-old/#300-beta","title":"3.0.0 beta","text":"
"},{"location":"changelog/server-changelog-old/#22","title":"2.2","text":""},{"location":"changelog/server-changelog-old/#221","title":"2.2.1","text":"
"},{"location":"changelog/server-changelog-old/#220","title":"2.2.0","text":"
"},{"location":"changelog/server-changelog-old/#21","title":"2.1","text":""},{"location":"changelog/server-changelog-old/#215","title":"2.1.5","text":"
"},{"location":"changelog/server-changelog-old/#214","title":"2.1.4","text":"
"},{"location":"changelog/server-changelog-old/#213","title":"2.1.3","text":"
"},{"location":"changelog/server-changelog-old/#212","title":"2.1.2","text":"<a>, <table>, <img> and a few other html elements in markdown to avoid XSS attack.
"},{"location":"changelog/server-changelog-old/#211","title":"2.1.1","text":"
"},{"location":"changelog/server-changelog-old/#210","title":"2.1.0","text":"
"},{"location":"changelog/server-changelog-old/#20","title":"2.0","text":""},{"location":"changelog/server-changelog-old/#204","title":"2.0.4","text":"
"},{"location":"changelog/server-changelog-old/#203","title":"2.0.3","text":"
"},{"location":"changelog/server-changelog-old/#202","title":"2.0.2","text":"
"},{"location":"changelog/server-changelog-old/#201","title":"2.0.1","text":"
"},{"location":"changelog/server-changelog-old/#200","title":"2.0.0","text":"
"},{"location":"changelog/server-changelog-old/#18","title":"1.8","text":""},{"location":"changelog/server-changelog-old/#185","title":"1.8.5","text":"
"},{"location":"changelog/server-changelog-old/#183","title":"1.8.3","text":"
"},{"location":"changelog/server-changelog-old/#182","title":"1.8.2","text":"
"},{"location":"changelog/server-changelog-old/#181","title":"1.8.1","text":"
"},{"location":"changelog/server-changelog-old/#180","title":"1.8.0","text":"
"},{"location":"changelog/server-changelog-old/#17","title":"1.7","text":""},{"location":"changelog/server-changelog-old/#1702-for-linux-32-bit","title":"1.7.0.2 for Linux 32 bit","text":"
"},{"location":"changelog/server-changelog-old/#1701-for-linux-32-bit","title":"1.7.0.1 for Linux 32 bit","text":"
"},{"location":"changelog/server-changelog-old/#170","title":"1.7.0","text":"
"},{"location":"changelog/server-changelog-old/#16","title":"1.6","text":""},{"location":"changelog/server-changelog-old/#161","title":"1.6.1","text":"
"},{"location":"changelog/server-changelog-old/#160","title":"1.6.0","text":"
"},{"location":"changelog/server-changelog-old/#15","title":"1.5","text":""},{"location":"changelog/server-changelog-old/#152","title":"1.5.2","text":"
"},{"location":"changelog/server-changelog-old/#151","title":"1.5.1","text":"
"},{"location":"changelog/server-changelog-old/#150","title":"1.5.0","text":"
"},{"location":"changelog/server-changelog/","title":"Seafile Server Changelog","text":"
"},{"location":"changelog/server-changelog/#11011-2024-08-07","title":"11.0.11 (2024-08-07)","text":"
"},{"location":"changelog/server-changelog/#11010-2024-08-06","title":"11.0.10 (2024-08-06)","text":"
"},{"location":"changelog/server-changelog/#1109-2024-05-30","title":"11.0.9 (2024-05-30)","text":"
"},{"location":"changelog/server-changelog/#1108-2024-04-22","title":"11.0.8 (2024-04-22)","text":"
"},{"location":"changelog/server-changelog/#1107-2024-04-18","title":"11.0.7 (2024-04-18)","text":"
"},{"location":"changelog/server-changelog/#1106-2024-03-14","title":"11.0.6 (2024-03-14)","text":"
"},{"location":"changelog/server-changelog/#1105-2024-01-31","title":"11.0.5 (2024-01-31)","text":"
"},{"location":"changelog/server-changelog/#1104-2024-01-26","title":"11.0.4 (2024-01-26)","text":"
"},{"location":"changelog/server-changelog/#1103-2023-12-19","title":"11.0.3 (2023-12-19)","text":"
"},{"location":"changelog/server-changelog/#1102-2023-11-20","title":"11.0.2 (2023-11-20)","text":"
"},{"location":"changelog/server-changelog/#1101-beta-2023-10-18","title":"11.0.1 beta (2023-10-18)","text":"
"},{"location":"changelog/server-changelog/#1100-beta-cancelled","title":"11.0.0 beta (cancelled)","text":"
"},{"location":"changelog/server-changelog/#100","title":"10.0","text":"
"},{"location":"changelog/server-changelog/#1000-beta-2023-02-22","title":"10.0.0 beta (2023-02-22)","text":"
"},{"location":"changelog/server-changelog/#90","title":"9.0","text":""},{"location":"changelog/server-changelog/#9010-2022-12-07","title":"9.0.10 (2022-12-07)","text":"
"},{"location":"changelog/server-changelog/#909-2022-09-22","title":"9.0.9 (2022-09-22)","text":"
"},{"location":"changelog/server-changelog/#908-2022-09-07","title":"9.0.8 (2022-09-07)","text":"
"},{"location":"changelog/server-changelog/#907-2022-08-10","title":"9.0.7 (2022-08-10)","text":"/accounts/login redirect by ?next= parameterpip3 install lxml to install it.
"},{"location":"changelog/server-changelog/#906-2022-06-22","title":"9.0.6 (2022-06-22)","text":"
"},{"location":"changelog/server-changelog/#905-2022-05-13","title":"9.0.5 (2022-05-13)","text":"
"},{"location":"changelog/server-changelog/#904-2022-02-21","title":"9.0.4 (2022-02-21)","text":"
"},{"location":"changelog/server-changelog/#903-2022-02-15","title":"9.0.3 (2022-02-15)","text":"
"},{"location":"changelog/server-changelog/#902-2021-12-10","title":"9.0.2 (2021-12-10)","text":"
"},{"location":"changelog/server-changelog/#901-beta-2021-11-20","title":"9.0.1 beta (2021-11-20)","text":"
"},{"location":"changelog/server-changelog/#900-beta-2021-11-11","title":"9.0.0 beta (2021-11-11)","text":"
"},{"location":"changelog/server-changelog/#80","title":"8.0","text":"[fileserver]\nuse_go_fileserver = true\n
"},{"location":"changelog/server-changelog/#807-20210809","title":"8.0.7 (2021/08/09)","text":"
"},{"location":"changelog/server-changelog/#806-20210714","title":"8.0.6 (2021/07/14)","text":"
"},{"location":"changelog/server-changelog/#805-20210514","title":"8.0.5 (2021/05/14)","text":"
"},{"location":"changelog/server-changelog/#804-20210325","title":"8.0.4 (2021/03/25)","text":"
"},{"location":"changelog/server-changelog/#803-20210127","title":"8.0.3 (2021/01/27)","text":"
"},{"location":"changelog/server-changelog/#802-20210104","title":"8.0.2 (2021/01/04)","text":"
"},{"location":"changelog/server-changelog/#801-beta-20210104","title":"8.0.1 beta (2021/01/04)","text":"
"},{"location":"changelog/server-changelog/#800-beta-20201127","title":"8.0.0 beta (2020/11/27)","text":"
"},{"location":"changelog/server-changelog/#71","title":"7.1","text":"
"},{"location":"changelog/server-changelog/#714-20200519","title":"7.1.4 (2020/05/19)","text":"
"},{"location":"changelog/server-changelog/#713-20200326","title":"7.1.3 (2020/03/26)","text":"
"},{"location":"changelog/server-changelog/#712-beta-20200305","title":"7.1.2 beta (2020/03/05)","text":"
"},{"location":"changelog/server-changelog/#711-beta-20191223","title":"7.1.1 beta (2019/12/23)","text":"
"},{"location":"changelog/server-changelog/#710-beta-20191205","title":"7.1.0 beta (2019/12/05)","text":"
"},{"location":"changelog/server-changelog/#70","title":"7.0","text":"
"},{"location":"changelog/server-changelog/#704-20190726","title":"7.0.4 (2019/07/26)","text":"
"},{"location":"changelog/server-changelog/#703-20190705","title":"7.0.3 (2019/07/05)","text":"
"},{"location":"changelog/server-changelog/#702-20190613","title":"7.0.2 (2019/06/13)","text":"
"},{"location":"changelog/server-changelog/#701-beta-20190531","title":"7.0.1 beta (2019/05/31)","text":"
"},{"location":"changelog/server-changelog/#700-beta-20190523","title":"7.0.0 beta (2019/05/23)","text":"
"},{"location":"changelog/server-changelog/#63","title":"6.3","text":"conf/gunicorn.conf instead of running ./seahub.sh start <another-port>../seahub.sh python-env seahub/manage.py migrate_file_comment\n
"},{"location":"changelog/server-changelog/#633-20180907","title":"6.3.3 (2018/09/07)","text":"
"},{"location":"changelog/server-changelog/#632-20180709","title":"6.3.2 (2018/07/09)","text":"
"},{"location":"changelog/server-changelog/#631-20180624","title":"6.3.1 (2018/06/24)","text":"
"},{"location":"changelog/server-changelog/#630-beta-20180526","title":"6.3.0 beta (2018/05/26)","text":"
"},{"location":"changelog/server-changelog/#62","title":"6.2","text":"
./seahub.sh start instead of ./seahub.sh start-fastcgilocation / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n # used for view/edit office file via Office Online Server\n client_max_body_size 0;\n\n access_log /var/log/nginx/seahub.access.log;\n error_log /var/log/nginx/seahub.error.log;\n }\n
"},{"location":"changelog/server-changelog/#625-20180123","title":"6.2.5 (2018/01/23)","text":" # seahub\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n
"},{"location":"changelog/server-changelog/#624-20180116","title":"6.2.4 (2018/01/16)","text":"
"},{"location":"changelog/server-changelog/#623-20171115","title":"6.2.3 (2017/11/15)","text":"
"},{"location":"changelog/server-changelog/#622-20170925","title":"6.2.2 (2017/09/25)","text":"
"},{"location":"changelog/server-changelog/#621-20170922","title":"6.2.1 (2017/09/22)","text":"
"},{"location":"changelog/server-changelog/#620-beta-20170914","title":"6.2.0 beta (2017/09/14)","text":"
"},{"location":"changelog/server-changelog/#61","title":"6.1","text":"ENABLE_REPO_SNAPSHOT_LABEL = True to turn the feature on)
"},{"location":"changelog/server-changelog/#612-20170815","title":"6.1.2 (2017.08.15)","text":"# for ubuntu 16.04\napt-get install ffmpeg\npip install pillow moviepy\n\n# for Centos 7\nyum -y install epel-release\nrpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro\nyum -y install ffmpeg ffmpeg-devel\npip install pillow moviepy\n
"},{"location":"changelog/server-changelog/#611-20170615","title":"6.1.1 (2017.06.15)","text":"
"},{"location":"changelog/server-changelog/#610-beta-20170511","title":"6.1.0 beta (2017.05.11)","text":"
"},{"location":"changelog/server-changelog/#60","title":"6.0","text":"
"},{"location":"changelog/server-changelog/#608-20170216","title":"6.0.8 (2017.02.16)","text":"
# -*- coding: utf-8 -*- to seahub_settings.py, so that admin can use non-ascii characters in the file.
"},{"location":"changelog/server-changelog/#607-20161216","title":"6.0.7 (2016.12.16)","text":"
"},{"location":"changelog/server-changelog/#606-20161116","title":"6.0.6 (2016.11.16)","text":"
"},{"location":"changelog/server-changelog/#605-20161017","title":"6.0.5 (2016.10.17)","text":"
"},{"location":"changelog/server-changelog/#604-20160922","title":"6.0.4 (2016.09.22)","text":"
"},{"location":"changelog/server-changelog/#603-20160903","title":"6.0.3 (2016.09.03)","text":"
"},{"location":"changelog/server-changelog/#602-20160902","title":"6.0.2 (2016.09.02)","text":"
"},{"location":"changelog/server-changelog/#601-beta-20160822","title":"6.0.1 beta (2016.08.22)","text":"
"},{"location":"changelog/server-changelog/#600-beta-20160802","title":"6.0.0 beta (2016.08.02)","text":"
"},{"location":"changelog/server-changelog/#51","title":"5.1","text":"
"},{"location":"changelog/server-changelog/#514-20160723","title":"5.1.4 (2016.07.23)","text":"# for Ubuntu\nsudo apt-get install python-urllib3\n# for CentOS\nsudo yum install python-urllib3\n
"},{"location":"changelog/server-changelog/#513-20160530","title":"5.1.3 (2016.05.30)","text":"
"},{"location":"changelog/server-changelog/#512-20160513","title":"5.1.2 (2016.05.13)","text":"
"},{"location":"changelog/server-changelog/#511-20160408","title":"5.1.1 (2016.04.08)","text":"
"},{"location":"changelog/server-changelog/#510-beta-20160322","title":"5.1.0 beta (2016.03.22)","text":"
"},{"location":"config/","title":"Server Configuration and Customization","text":""},{"location":"config/#config-files","title":"Config Files","text":"
"},{"location":"config/ccnet-conf/","title":"ccnet.conf","text":"
"},{"location":"config/ccnet-conf/#using-encrypted-connections","title":"Using Encrypted Connections","text":"[Database]\n......\n# Use larger connection pool\nMAX_CONNECTIONS = 200\n[Database]\nUSE_SSL = true\nSKIP_VERIFY = false\nCA_PATH = /etc/mysql/ca.pem\nuse_ssl to true and skip_verify to false, it will check whether the MySQL server certificate is legal through the CA configured in ca_path. The ca_path is a trusted CA certificate path for signing MySQL server certificates. When skip_verify is true, there is no need to add the ca_path option. The MySQL server certificate won't be verified at this time.seahub-data/custom/templates/registration/password_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade. seahub-data/custom/templates/sysadmin/user_add_email.html and modify the new one. In this way, the customization will be maintained after upgrade. seahub-data/custom/templates/sysadmin/user_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade. .env file will be used to specify the components used by the Seafile-docker instance and the environment variables required by each component. The default contents list in below
"},{"location":"config/env/#seafile-docker-configurations","title":"Seafile-docker configurations","text":""},{"location":"config/env/#components-configurations","title":"Components configurations","text":"COMPOSE_FILE='seafile-server.yml,caddy.yml'\nCOMPOSE_PATH_SEPARATOR=','\n\n\nSEAFILE_IMAGE=docker.seadrive.org/seafileltd/seafile-pro-mc:12.0-latest\nSEAFILE_DB_IMAGE=mariadb:10.11\nSEAFILE_MEMCACHED_IMAGE=memcached:1.6.29\nSEAFILE_ELASTICSEARCH_IMAGE=elasticsearch:8.15.0 # pro edition only\nSEAFILE_CADDY_IMAGE=lucaslorentz/caddy-docker-proxy:2.9\n\nSEAFILE_VOLUME=/opt/seafile-data\nSEAFILE_MYSQL_VOLUME=/opt/seafile-mysql/db\nSEAFILE_ELASTICSEARCH_VOLUME=/opt/seafile-elasticsearch/data # pro edition only\nSEAFILE_CADDY_VOLUME=/opt/seafile-caddy\n\nSEAFILE_MYSQL_DB_HOST=db\nSEAFILE_MYSQL_ROOT_PASSWORD=ROOT_PASSWORD\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=PASSWORD\n\nTIME_ZONE=Etc/UTC\n\nJWT_PRIVATE_KEY=\n\nSEAFILE_SERVER_HOSTNAME=example.seafile.com\nSEAFILE_SERVER_PROTOCOL=http\n\nSEAFILE_ADMIN_EMAIL=me@example.com\nSEAFILE_ADMIN_PASSWORD=asecret\n\n\nSEADOC_IMAGE=seafileltd/sdoc-server:1.0-latest\nSEADOC_VOLUME=/opt/seadoc-data\n\nENABLE_SEADOC=false\nSEADOC_SERVER_URL=http://example.seafile.com/sdoc-server\n
"},{"location":"config/env/#docker-images-configurations","title":"Docker images configurations","text":"COMPOSE_FILE: .yml files for components of Seafile-docker, each .yml must be separated by the symbol defined in COMPOSE_PATH_SEPARATOR. The core components are involved in seafile-server.yml and caddy.yml which must be taken in this term.COMPOSE_PATH_SEPARATOR: The symbol used to separate the .yml files in term COMPOSE_FILE, default is ','.
"},{"location":"config/env/#persistent-volume-configurations","title":"Persistent Volume Configurations","text":"SEAFILE_IMAGE: The image of Seafile-server, default is docker.seadrive.org/seafileltd/seafile-pro-mc:12.0-latest.SEAFILE_DB_IMAGE: Database server image, default is mariadb:10.11.SEAFILE_MEMCACHED_IMAGE: Cached server image, default is memcached:1.6.29SEAFILE_ELASTICSEARCH_IMAGE: Only valid in pro edition. The elasticsearch image, default is elasticsearch:8.15.0.SEAFILE_CADDY_IMAGE: Caddy server image, default is lucaslorentz/caddy-docker-proxy:2.9.SEADOC_IMAGE: Only valid after integrating SeaDoc. SeaDoc server image, default is seafileltd/sdoc-server:1.0-latest.
"},{"location":"config/env/#mysql-configurations","title":"Mysql configurations","text":"SEAFILE_VOLUME: The volume directory of Seafile data, default is /opt/seafile-data.SEAFILE_MYSQL_VOLUME: The volume directory of MySQL data, default is /opt/seafile-mysql/db.SEAFILE_CADDY_VOLUME: The volume directory of Caddy data used to store certificates obtained from Let's Encrypt's, default is /opt/seafile-caddy.SEAFILE_ELASTICSEARCH_VOLUME: Only valid in pro edition. The volume directory of Elasticsearch data, default is /opt/seafile-elasticsearch/data.SEADOC_VOLUME: Only valid after integrating SeaDoc. The volume directory of SeaDoc server data, default is /opt/seadoc-data.
"},{"location":"config/env/#seafile-server-configurations","title":"Seafile-server configurations","text":"SEAFILE_MYSQL_DB_HOST: The host address of Mysql, default is the pre-defined service name db in Seafile-docker instance.SEAFILE_MYSQL_ROOT_PASSWORD: The root password of MySQL.SEAFILE_MYSQL_DB_USER: The user of MySQL (database - user can be found in conf/seafile.conf).SEAFILE_MYSQL_DB_PASSWORD: The user seafile password of MySQL.
"},{"location":"config/env/#seadoc-configurations-only-valid-after-integrating-seadoc","title":"SeaDoc configurations (only valid after integrating SeaDoc)","text":"SEAFILE_MYSQL_DB_PASSWORD: The user seafile password of MySQLJWT: JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters, generate example: pwgen -s 40 1SEAFILE_SERVER_HOSTNAME: Seafile server hostname or domainSEAFILE_SERVER_PROTOCOL: Seafile server protocol (http or https)TIME_ZONE: Time zone (default UTC)SEAFILE_ADMIN_EMAIL: Admin usernameSEAFILE_ADMIN_PASSWORD: Admin password
"},{"location":"config/seafevents-conf/","title":"Configurable Options","text":"ENABLE_SEADOC: Enable the SeaDoc server or not, default is false.SEADOC_SERVER_URL: Only valid in ENABLE_SEADOC=true. Url of Seadoc server (e.g., http://example.seafile.com/sdoc-server).seafevents.conf:
"},{"location":"config/seafevents-conf/#the-following-configurations-for-pro-edition-only","title":"The following configurations for Pro Edition only","text":"[DATABASE]\ntype = mysql\nhost = 192.168.0.2\nport = 3306\nusername = seafile\npassword = password\nname = seahub_db\n\n[STATISTICS]\n## must be \"true\" to enable statistics\nenabled = false\n\n[SEAHUB EMAIL]\n## must be \"true\" to enable user email notifications when there are new unread notifications\nenabled = true\n\n## interval of sending Seahub email. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval = 30m\n\n[FILE HISTORY]\nenabled = true\nthreshold = 5\nsuffix = md,txt,...\n\n## From seafile 7.0.0\n## Recording file history to database for fast access is enabled by default for 'Markdown, .txt, ppt, pptx, doc, docx, xls, xlsx'. \n## After enable the feature, the old histories version for markdown, doc, docx files will not be list in the history page.\n## (Only new histories that stored in database will be listed) But the users can still access the old versions in the library snapshots.\n## For file types not listed in the suffix , histories version will be scanned from the library history as before.\n## The feature default is enable. You can set the 'enabled = false' to disable the feature.\n\n## The 'threshold' is the time threshold for recording the historical version of a file, in minutes, the default is 5 minutes. \n## This means that if the interval between two adjacent file saves is less than 5 minutes, the two file changes will be merged and recorded as a historical version. \n## When set to 0, there is no time limit, which means that each save will generate a separate historical version.\n\n## If you need to modify the file list format, you can add 'suffix = md, txt, ...' 
configuration items.\n[AUDIT]\n## Audit log is disabled by default.\n## This leads to additional SQL tables being filled up, so make sure your SQL server is able to handle it.\nenabled = true\n\n[INDEX FILES]\n## must be \"true\" to enable search\nenabled = true\n\n## The interval the search index is updated. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval=10m\n\n## From Seafile 6.3.0 pro, to speed up full-text search, you should set\nhighlight = fvh\n\n## If true, indexes the contents of office/pdf files while updating search index\n## Note: If you change this option from \"false\" to \"true\", then you need to clear the search index and update the index again.\n## Refer to file search manual for details.\nindex_office_pdf=false\n\n## The default size limit for doc, docx, ppt, pptx, xls, xlsx and pdf files. Files larger than this will not be indexed.\n## Since version 6.2.0\n## Unit: MB\noffice_file_size_limit = 10\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch through username and password, you need to configure username and password for the Elasticsearch server\nusername = elastic # username to connect to Elasticsearch\npassword = elastic_password # password to connect to Elasticsearch\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch via HTTPS, you need to configure HTTPS for the Elasticsearch server\nscheme = https # The default is http. If the Elasticsearch server is not configured with HTTPS, the scheme and cafile do not need to be configured\ncafile = path/to/cert.pem # The certificate path for user authentication. 
If the Elasticsearch server does not enable certificate authentication, this does not need to be configured\n\n## From version 11.0.5 Pro, you can customize Elasticsearch index names for distinct instances when integrating multiple Seafile servers with a single Elasticsearch server.\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\n\n## The default loglevel is `warning`.\n## Since version 11.0.4\nloglevel = info\n\n[EVENTS PUBLISH]\n## must be \"true\" to enable publishing event messages\nenabled = false\n## message format: repo-update\\t{{repo_id}}\\t{{commit_id}}\n## Currently only the Redis message queue is supported\nmq_type = redis\n\n[REDIS]\n## Redis uses database 0 and the \"repo_update\" channel\nserver = 192.168.1.1\nport = 6379\npassword = q!1w@#123\n\n[AUTO DELETION]\nenabled = true # Default is false, when enabled, users can use the file auto deletion feature\ninterval = 86400 # The unit is second(s), the default frequency is one day, that is, it runs once a day\n
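The threshold behaviour described in the [FILE HISTORY] comments above can be sketched in Python (a model of the policy only, not Seafile's actual implementation): saves closer together than the threshold collapse into one recorded version.

```python
def recorded_versions(save_times, threshold=5):
    # save_times: sorted save timestamps in minutes.
    # threshold = 0 means every save becomes a separate version.
    versions = []
    for t in save_times:
        if not versions or threshold == 0 or t - versions[-1] >= threshold:
            versions.append(t)
    return versions

# Saves at minutes 0, 2 and 4 merge into one version; the save at 10 is separate.
print(recorded_versions([0, 2, 4, 10]))  # [0, 10]
```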
"},{"location":"config/seafile-conf/#storage-quota-setting","title":"Storage Quota Setting","text":"./seahub.sh restart\n./seafile.sh restart\nseafile.conf file[quota]\n# default user quota in GB, integer only\ndefault = 2\n
"},{"location":"config/seafile-conf/#default-history-length-limit","title":"Default history length limit","text":"[quota]\nlibrary_file_limit = 100000\n
"},{"location":"config/seafile-conf/#default-trash-expiration-time","title":"Default trash expiration time","text":"[history]\nkeep_days = days of history to keep\n
"},{"location":"config/seafile-conf/#system-trash","title":"System Trash","text":"[library_trash]\nexpire_days = 60\n[memcached]\n# Replace `localhost` with the memcached address:port if you're using remote memcached\n# POOL-MIN and POOL-MAX is used to control connection pool size. Usually the default is good enough.\nmemcached_options = --SERVER=localhost --POOL-MIN=10 --POOL-MAX=100\n[redis]\n# your redis server address\nredis_host = 127.0.0.1\n# your redis server port\nredis_port = 6379\n# size of connection pool to redis, default is 100\nmax_connections = 100\n[fileserver] section of the file seafile.conf[fileserver]\n# bind address for fileserver\n# default to 0.0.0.0, if deployed without proxy: no access restriction\n# set to 127.0.0.1, if used with local proxy: only access by local\nhost = 127.0.0.1\n# tcp port for fileserver\nport = 8082\n[fileserver]\nworker_threads = 15\n[fileserver]\n# Set maximum upload file size to 200M.\n# If not configured, there is no file size limit for uploading.\nmax_upload_size=200\n\n# Set maximum download directory size to 200M.\n# Default is 100M.\nmax_download_dir_size=200\n[fileserver]\nmax_indexing_threads = 10\n[fileserver]\n#Set block size to 2MB\nfixed_block_size=2\n[fileserver]\n#Set uploading time limit to 3600s\nweb_token_expire_time=3600\n[zip]\n# The file name encoding of the downloaded zip file.\nwindows_encoding = iso-8859-1\n[fileserver]\n# After how much time a temp file will be removed. The unit is in seconds. Default to 3 days.\nhttp_temp_file_ttl = x\n# File scan interval. The unit is in seconds. Default to 1 hour.\nhttp_temp_scan_interval = x\nfs_id_list_request_timeout configuration, which defaults to 5 minutes. These two options are added to prevent long fs-id-list requests from overloading the server.[fileserver]\nmax_sync_file_count = 100000\nfs_id_list_request_timeout = 300\n
use_block_cache option in the [fileserver] group. It's not enabled by default. The block_cache_size_limit option is used to limit the size of the cache. Its default value is 10GB. The blocks are cached in the seafile-data/block-cache directory. When the total size of cached files exceeds the limit, seaf-server will clean up older files until the size reduces to 70% of the limit. The cleanup interval is 5 minutes. You should have a good estimate of how much space you need for the cache directory. Otherwise, on frequent downloads this directory can be quickly filled up.block_cache_file_types configuration is used to choose the file types that are cached. The default value of block_cache_file_types is mp4;mov.[fileserver]\nuse_block_cache = true\n# Set block cache size limit to 100MB\nblock_cache_size_limit = 100\nblock_cache_file_types = mp4;mov\nskip_block_hash option to use a random string as block ID. Note that this option will prevent fsck from checking block content integrity. You should specify the --shallow option to fsck to not check content integrity.[fileserver]\nskip_block_hash = true\nfile_ext_white_list option in the [fileserver] group. This option is a list of file types; only the file types in this list are allowed to be uploaded. It's not enabled by default. [fileserver]\nfile_ext_white_list = md;mp4;mov\nupload_limit and download_limit options in the [fileserver] group to limit the speed of file upload and download. It's not enabled by default. [fileserver]\n# The unit is in KB/s.\nupload_limit = 100\ndownload_limit = 100\n
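The eviction rule described above (clean up oldest cached files until the total size drops to 70% of block_cache_size_limit) can be illustrated with a short sketch; the names and data layout here are hypothetical, not seaf-server's internals.

```python
def evict_block_cache(entries, size_limit):
    # entries: list of (mtime, size) tuples for cached files.
    # Returns the entries kept after eviction.
    total = sum(size for _, size in entries)
    if total <= size_limit:
        return list(entries)
    kept = sorted(entries)          # oldest (smallest mtime) first
    target = size_limit * 0.7       # shrink to 70% of the limit
    while kept and total > target:
        _, size = kept.pop(0)       # evict the oldest file
        total -= size
    return kept

# 120 cached against a limit of 100: oldest files are evicted until <= 70.
print(evict_block_cache([(1, 40), (2, 40), (3, 40)], 100))  # [(3, 40)]
```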
"},{"location":"config/seafile-conf/#database-configuration","title":"Database configuration","text":"[fileserver]\n# default is false\ncheck_virus_on_web_upload = true\n[database] section of the configuration file, whether you use SQLite or MySQL.[database]\ntype=mysql\nhost=127.0.0.1\nuser=root\npassword=root\ndb_name=seafile_db\nconnection_charset=utf8\nmax_connections=100\n[database]\nuse_ssl = true\nskip_verify = false\nca_path = /etc/mysql/ca.pem\nuse_ssl to true and skip_verify to false, it will check whether the MySQL server certificate is legal through the CA configured in ca_path. The ca_path is a trusted CA certificate path for signing MySQL server certificates. When skip_verify is true, there is no need to add the ca_path option. The MySQL server certificate won't be verified at this time.[file_lock]\ndefault_expire_hours = 6\n[file_lock]\nuse_locked_file_cache = true\n
"},{"location":"config/seafile-conf/#storage-backends","title":"Storage Backends","text":"[memcached]\nmemcached_options = --SERVER=<the IP of Memcached Server> --POOL-MIN=10 --POOL-MAX=100\n
"},{"location":"config/seafile-conf/#enable-slow-log","title":"Enable Slow Log","text":"[cluster]\nenabled = true\n[slow_log]\n# default to true\nenable_slow_log = true\n# the unit of all slow log thresholds is millisecond.\n# default to 5000 milliseconds, only RPC queries processed for longer than 5000 milliseconds will be logged.\nrpc_slow_threshold = 5000\nseafile_slow_rpc.log in logs/slow_logs. You can also use log-rotate to rotate the log files. You just need to send SIGUSR2 to seaf-server process. The slow log file will be closed and reopened.SIGUSR1. This signal will trigger rotation for all log files opened by seaf-server. You should change your log rotate settings accordingly.[fileserver]\n# default to false. If enabled, fileserver-access.log will be written to log directory.\nenable_access_log = true\nstart time - user id - url - response code - process time\nSIGUSR1 to trigger log rotation.[fileserver]\nuse_go_fileserver = true\n
max_sync_file_count to limit the size of a library to be synced. The default is 100K. With Go fileserver you can set this option to a much higher number, such as 1 million.max_download_dir_size is thus no longer needed by Go fileserver.
"},{"location":"config/seafile-conf/#profiling-go-fileserver-performance","title":"Profiling Go Fileserver Performance","text":"[fileserver]\n# The unit is in M. Default to 2G.\nfs_cache_limit = 100\n# profile_password is required, change it for your need\n[fileserver]\nenable_profiling = true\nprofile_password = 8kcUz1I2sLaywQhCRtn2x1\n
"},{"location":"config/seafile-conf/#notification-server-configuration","title":"Notification server configuration","text":"go tool pprof http://localhost:8082/debug/pprof/heap?password=8kcUz1I2sLaywQhCRtn2x1\ngo tool pprof http://localhost:8082/debug/pprof/profile?password=8kcUz1I2sLaywQhCRtn2x1\n# jwt_private_key are required.You should generate it manually.\n[notification]\nenabled = true\n# the listen IP of notification server. (Do not modify the host when using Nginx or Apache, as Nginx or Apache will proxy the requests to this address)\nhost = 127.0.0.1\n# the port of notification server\nport = 8083\n# the log level of notification server\nlog_level = info\n# jwt_private_key is used to generate jwt token and authenticate seafile server\njwt_private_key = M@O8VWUb81YvmtWLHGB2I_V7di5-@0p(MF*GrE!sIws23F\n# generate jwt_private_key\nopenssl rand -base64 32\nserver {\n ...\n\n location /notification/ping {\n proxy_pass http://127.0.0.1:8083/ping;\n access_log /var/log/nginx/notification.access.log seafileformat;\n error_log /var/log/nginx/notification.error.log;\n }\n location /notification {\n proxy_pass http://127.0.0.1:8083/;\n proxy_http_version 1.1;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection \"upgrade\";\n access_log /var/log/nginx/notification.access.log seafileformat;\n error_log /var/log/nginx/notification.error.log;\n }\n\n ...\n}\n
"},{"location":"config/seahub_customization/","title":"Seahub customization","text":""},{"location":"config/seahub_customization/#customize-seahub-logo-and-css","title":"Customize Seahub Logo and CSS","text":" ProxyPass /notification/ping http://127.0.0.1:8083/ping/\n ProxyPassReverse /notification/ping http://127.0.0.1:8083/ping/\n\n ProxyPass /notification ws://127.0.0.1:8083/\n ProxyPassReverse /notification ws://127.0.0.1:8083/\n<seafile-install-path>/seahub-data/custom. Create a symbolic link in seafile-server-latest/seahub/media by ln -s ../../../seahub-data/custom custom.custom/LOGO_PATH in seahub_settings.pyLOGO_PATH = 'custom/mylogo.png'\n
"},{"location":"config/seahub_customization/#customize-favicon","title":"Customize Favicon","text":"LOGO_WIDTH = 149\nLOGO_HEIGHT = 32\ncustom/FAVICON_PATH in seahub_settings.py
"},{"location":"config/seahub_customization/#customize-seahub-css","title":"Customize Seahub CSS","text":"FAVICON_PATH = 'custom/favicon.png'\ncustom/, for example, custom.cssBRANDING_CSS in seahub_settings.py
"},{"location":"config/seahub_customization/#customize-help-page","title":"Customize help page","text":"BRANDING_CSS = 'custom/custom.css'\ncd <seafile-install-path>/seahub-data/custom\nmkdir templates\nmkdir templates/help\ncp ../../seafile-server-latest/seahub/seahub/help/templates/help/install.html templates/help/\ntemplates/help/install.html file and save it. You will see the new help page.ADDITIONAL_SHARE_DIALOG_NOTE = {\n 'title': 'Attention! Read before shareing files:',\n 'content': 'Do not share personal or confidential official data with **.'\n}\nconf/seahub_settings.py configuration file:CUSTOM_NAV_ITEMS = [\n {'icon': 'sf2-icon-star',\n 'desc': 'Custom navigation 1',\n 'link': 'https://www.seafile.com'\n },\n {'icon': 'sf2-icon-wiki-view',\n 'desc': 'Custom navigation 2',\n 'link': 'https://www.seafile.com/help'\n },\n {'icon': 'sf2-icon-wrench',\n 'desc': 'Custom navigation 3',\n 'link': 'http://www.example.com'\n },\n]\nicon field currently only supports icons in Seafile that begin with sf2-icon. You can find the list of icons here: Tools navigation bar on the left.ADDITIONAL_APP_BOTTOM_LINKS = {\n 'seafile': 'https://example.seahub.com/seahub',\n 'dtable-web': 'https://example.seahub.com/web'\n}\nADDITIONAL_ABOUT_DIALOG_LINKS = {\n 'seafile': 'https://example.seahub.com/seahub',\n 'dtable-web': 'https://example.seahub.com/dtable-web'\n}\nENABLE_SETTINGS_VIA_WEB = False to seahub_settings.py.# on Debian/Ubuntu 18.04+\napt-get install memcached libmemcached-dev -y\npip3 install --timeout=3600 pylibmc django-pylibmc\n\nsystemctl enable --now memcached\nseahub_settings.py.
"},{"location":"config/seahub_settings_py/#redis","title":"Redis","text":"CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '127.0.0.1:11211',\n },\n}\n
"},{"location":"config/seahub_settings_py/#user-management-options","title":"User management options","text":"# For security consideration, please set to match the host/domain of your site, e.g., ALLOWED_HOSTS = ['.example.com'].\n# Please refer https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts for details.\nALLOWED_HOSTS = ['.myseafile.com']\n\n\n# Whether to use a secure cookie for the CSRF cookie\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-cookie-secure\nCSRF_COOKIE_SECURE = True\n\n# The value of the SameSite flag on the CSRF cookie\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-cookie-samesite\nCSRF_COOKIE_SAMESITE = 'Strict'\n\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-trusted-origins\nCSRF_TRUSTED_ORIGINS = ['https://www.myseafile.com']\n
"},{"location":"config/seahub_settings_py/#library-snapshot-label-feature","title":"Library snapshot label feature","text":"# Enalbe or disalbe registration on web. Default is `False`.\nENABLE_SIGNUP = False\n\n# Activate or deactivate user when registration complete. Default is `True`.\n# If set to `False`, new users need to be activated by admin in admin panel.\nACTIVATE_AFTER_REGISTRATION = False\n\n# Whether to send email when a system admin adding a new member. Default is `True`.\nSEND_EMAIL_ON_ADDING_SYSTEM_MEMBER = True\n\n# Whether to send email when a system admin resetting a user's password. Default is `True`.\nSEND_EMAIL_ON_RESETTING_USER_PASSWD = True\n\n# Send system admin notify email when user registration is complete. Default is `False`.\nNOTIFY_ADMIN_AFTER_REGISTRATION = True\n\n# Remember days for login. Default is 7\nLOGIN_REMEMBER_DAYS = 7\n\n# Attempt limit before showing a captcha when login.\nLOGIN_ATTEMPT_LIMIT = 3\n\n# deactivate user account when login attempts exceed limit\n# Since version 5.1.2 or pro 5.1.3\nFREEZE_USER_ON_LOGIN_FAILED = False\n\n# mininum length for user's password\nUSER_PASSWORD_MIN_LENGTH = 6\n\n# LEVEL based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above.\nUSER_PASSWORD_STRENGTH_LEVEL = 3\n\n# default False, only check USER_PASSWORD_MIN_LENGTH\n# when True, check password strength level, STRONG(or above) is allowed\nUSER_STRONG_PASSWORD_REQUIRED = False\n\n# Force user to change password when admin add/reset a user.\n# Added in 5.1.1, deafults to True.\nFORCE_PASSWORD_CHANGE = True\n\n# Age of cookie, in seconds (default: 2 weeks).\nSESSION_COOKIE_AGE = 60 * 60 * 24 * 7 * 2\n\n# Whether a user's session cookie expires when the Web browser is closed.\nSESSION_EXPIRE_AT_BROWSER_CLOSE = False\n\n# Whether to save the session data on every request. 
Default is `False`\nSESSION_SAVE_EVERY_REQUEST = False\n\n# Whether to enable the feature \"published library\". Default is `False`\n# Since 6.1.0 CE\nENABLE_WIKI = True\n\n# In old versions, if you use Single Sign On, the password is not saved in Seafile.\n# Users can't use WebDAV because Seafile can't check whether the password is correct.\n# Since version 6.3.8, you can enable this option to let users set a specific password for WebDAV login.\n# Users who log in via SSO can use this password to log in to WebDAV.\n# Enable the feature. pycryptodome should be installed first.\n# sudo pip install pycryptodome==3.12.0\nENABLE_WEBDAV_SECRET = True\nWEBDAV_SECRET_MIN_LENGTH = 8\n\n# LEVEL for the password, based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above.\nWEBDAV_SECRET_STRENGTH_LEVEL = 1\n\n\n# Since version 7.0.9, you can force all users to log in with two-factor authentication.\n# The prerequisite is that the administrator should 'enable two factor authentication' in the 'System Admin -> Settings' page.\n# Then you can add the following configuration information to the configuration file.\nENABLE_FORCE_2FA_TO_ALL_USERS = True\n
"},{"location":"config/seahub_settings_py/#library-options","title":"Library options","text":"# Turn on this option to let users to add a label to a library snapshot. Default is `False`\nENABLE_REPO_SNAPSHOT_LABEL = False\n# if enable create encrypted library\nENABLE_ENCRYPTED_LIBRARY = True\n\n# version for encrypted library\n# should only be `2` or `4`.\n# version 3 is insecure (using AES128 encryption) so it's not recommended any more.\nENCRYPTED_LIBRARY_VERSION = 2\n\n# mininum length for password of encrypted library\nREPO_PASSWORD_MIN_LENGTH = 8\n\n# force use password when generate a share/upload link (since version 8.0.9)\nSHARE_LINK_FORCE_USE_PASSWORD = False\n\n# mininum length for password for share link (since version 4.4)\nSHARE_LINK_PASSWORD_MIN_LENGTH = 8\n\n# LEVEL for the password of a share/upload link\n# based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above. (since version 8.0.9)\nSHARE_LINK_PASSWORD_STRENGTH_LEVEL = 3\n\n# Default expire days for share link (since version 6.3.8)\n# Once this value is configured, the user can no longer generate an share link with no expiration time.\n# If the expiration value is not set when the share link is generated, the value configured here will be used.\nSHARE_LINK_EXPIRE_DAYS_DEFAULT = 5\n\n# minimum expire days for share link (since version 6.3.6)\n# SHARE_LINK_EXPIRE_DAYS_MIN should be less than SHARE_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nSHARE_LINK_EXPIRE_DAYS_MIN = 3 # default is 0, no limit.\n\n# maximum expire days for share link (since version 6.3.6)\n# SHARE_LINK_EXPIRE_DAYS_MIN should be greater than SHARE_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nSHARE_LINK_EXPIRE_DAYS_MAX = 8 # default is 0, no limit.\n\n# Default expire days for upload link (since version 7.1.6)\n# Once this value is configured, the user can no longer generate an upload link with no expiration time.\n# If the expiration 
value is not set when the upload link is generated, the value configured here will be used.\nUPLOAD_LINK_EXPIRE_DAYS_DEFAULT = 5\n\n# minimum expire days for upload link (since version 7.1.6)\n# UPLOAD_LINK_EXPIRE_DAYS_MIN should be less than UPLOAD_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nUPLOAD_LINK_EXPIRE_DAYS_MIN = 3 # default is 0, no limit.\n\n# maximum expire days for upload link (since version 7.1.6)\n# UPLOAD_LINK_EXPIRE_DAYS_MAX should be greater than UPLOAD_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nUPLOAD_LINK_EXPIRE_DAYS_MAX = 8 # default is 0, no limit.\n\n# force user login when viewing file/folder share links (since version 6.3.6)\nSHARE_LINK_LOGIN_REQUIRED = True\n\n# enable watermark when viewing (not editing) a file in the web browser (since version 6.3.6)\nENABLE_WATERMARK = True\n\n# Disable sync with any folder. Default is `False`\n# NOTE: since version 4.2.4\nDISABLE_SYNC_WITH_ANY_FOLDER = True\n\n# Enable or disable library history setting\nENABLE_REPO_HISTORY_SETTING = True\n\n# Enable or disable users sharing a library to any group\n# Since version 6.2.0\nENABLE_SHARE_TO_ALL_GROUPS = True\n\n# Enable or disable users cleaning the trash (default is True)\n# Since version 6.3.6\nENABLE_USER_CLEAN_TRASH = True\n\n# Add a report abuse button on download links. (since version 7.1.0)\n# Users can report abuse on the share link page, filling in the report type, contact information, and description.\n# Default is false.\nENABLE_SHARE_LINK_REPORT_ABUSE = True\n
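How the three expire-day settings interact can be sketched with a hypothetical validation helper (assuming MIN <= DEFAULT <= MAX, with 0 meaning no limit, as in the comments above; this is an illustration, not Seafile's code):

```python
def resolve_expire_days(requested, default=5, minimum=3, maximum=8):
    # requested=None: fall back to SHARE_LINK_EXPIRE_DAYS_DEFAULT.
    # Otherwise the value must lie within [minimum, maximum]; 0 disables a bound.
    if requested is None:
        return default
    if minimum and requested < minimum:
        raise ValueError('expire days must be >= %d' % minimum)
    if maximum and requested > maximum:
        raise ValueError('expire days must be <= %d' % maximum)
    return requested

print(resolve_expire_days(None))  # 5
print(resolve_expire_days(7))     # 7
```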
"},{"location":"config/seahub_settings_py/#cloud-mode","title":"Cloud Mode","text":"# Online preview maximum file size, defaults to 30M.\nFILE_PREVIEW_MAX_SIZE = 30 * 1024 * 1024\n\n# Extensions of previewed text files.\n# NOTE: since version 6.1.1\nTEXT_PREVIEW_EXT = \"\"\"ac, am, bat, c, cc, cmake, cpp, cs, css, diff, el, h, html,\nhtm, java, js, json, less, make, org, php, pl, properties, py, rb,\nscala, script, sh, sql, txt, text, tex, vi, vim, xhtml, xml, log, csv,\ngroovy, rst, patch, go\"\"\"\n\n\n# Seafile only generates thumbnails for images smaller than the following size.\n# Since version 6.3.8 pro, suport the psd online preview.\nTHUMBNAIL_IMAGE_SIZE_LIMIT = 30 # MB\n\n# Enable or disable thumbnail for video. ffmpeg and moviepy should be installed first.\n# For details, please refer to https://manual.seafile.com/deploy/video_thumbnails.html\n# NOTE: this option is deprecated in version 7.1\nENABLE_VIDEO_THUMBNAIL = False\n\n# Use the frame at 5 second as thumbnail\n# NOTE: this option is deprecated in version 7.1\nTHUMBNAIL_VIDEO_FRAME_TIME = 5\n\n# Absolute filesystem path to the directory that will hold thumbnail files.\nTHUMBNAIL_ROOT = '/haiwen/seahub-data/thumbnail/thumb/'\n\n# Default size for picture preview. Enlarge this size can improve the preview quality.\n# NOTE: since version 6.1.1\nTHUMBNAIL_SIZE_FOR_ORIGINAL = 1024\n
"},{"location":"config/seahub_settings_py/#single-sign-on","title":"Single Sign On","text":"# Enable cloude mode and hide `Organization` tab.\nCLOUD_MODE = True\n\n# Disable global address book\nENABLE_GLOBAL_ADDRESSBOOK = False\n
"},{"location":"config/seahub_settings_py/#other-options","title":"Other options","text":"# Enable authentication with ADFS\n# Default is False\n# Since 6.0.9\nENABLE_ADFS_LOGIN = True\n\n# Force user login through ADFS instead of email and password\n# Default is False\n# Since 11.0.7\nDISABLE_ADFS_USER_PWD_LOGIN = True\n\n# Enable authentication wit Kerberos\n# Default is False\nENABLE_KRB5_LOGIN = True\n\n# Enable authentication with Shibboleth\n# Default is False\nENABLE_SHIBBOLETH_LOGIN = True\n\n# Enable client to open an external browser for single sign on\n# When it is false, the old buitin browser is opened for single sign on\n# When it is true, the default browser of the operation system is opened\n# The benefit of using system browser is that it can support hardware 2FA\n# Since 11.0.0, and sync client 9.0.5, drive client 3.0.8\nCLIENT_SSO_VIA_LOCAL_BROWSER = True # default is False\nCLIENT_SSO_UUID_EXPIRATION = 5 * 60 # in seconds\n
"},{"location":"config/seahub_settings_py/#pro-edition-only-options","title":"Pro edition only options","text":"# This is outside URL for Seahub(Seafile Web). \n# The domain part (i.e., www.example.com) will be used in generating share links and download/upload file via web.\n# Note: Outside URL means \"if you use Nginx, it should be the Nginx's address\"\n# Note: SERVICE_URL is moved to seahub_settings.py since 9.0.0\nSERVICE_URL = 'http://www.example.com:8000'\n\n# Disable settings via Web interface in system admin->settings\n# Default is True\n# Since 5.1.3\nENABLE_SETTINGS_VIA_WEB = False\n\n# Choices can be found here:\n# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name\n# although not all choices may be available on all operating systems.\n# If running in a Windows environment this must be set to the same as your\n# system time zone.\nTIME_ZONE = 'UTC'\n\n# Language code for this installation. All choices can be found here:\n# http://www.i18nguy.com/unicode/language-identifiers.html\n# Default language for sending emails.\nLANGUAGE_CODE = 'en'\n\n# Custom language code choice.\nLANGUAGES = (\n ('en', 'English'),\n ('zh-cn', '\u7b80\u4f53\u4e2d\u6587'),\n ('zh-tw', '\u7e41\u9ad4\u4e2d\u6587'),\n)\n\n# Set this to your website/company's name. This is contained in email notifications and welcome message when user login for the first time.\nSITE_NAME = 'Seafile'\n\n# Browser tab's title\nSITE_TITLE = 'Private Seafile'\n\n# If you don't want to run seahub website on your site's root path, set this option to your preferred path.\n# e.g. setting it to '/seahub/' would run seahub on http://example.com/seahub/.\nSITE_ROOT = '/'\n\n# Max number of files when user upload file/folder.\n# Since version 6.0.4\nMAX_NUMBER_OF_FILES_FOR_FILEUPLOAD = 500\n\n# Control the language that send email. 
Defaults to the user's current language.\n# Since version 6.1.1\nSHARE_LINK_EMAIL_LANGUAGE = ''\n\n# Interval for the browser to request unread notifications\n# Since PRO 6.1.4 or CE 6.1.2\nUNREAD_NOTIFICATIONS_REQUEST_INTERVAL = 3 * 60 # seconds\n\n# Whether to allow users to delete their account, change login password or update basic user\n# info on the profile page.\n# Since PRO 6.3.10\nENABLE_DELETE_ACCOUNT = False\nENABLE_UPDATE_USER_INFO = False\nENABLE_CHANGE_PASSWORD = False\n\n# Get web api auth token on profile page.\nENABLE_GET_AUTH_TOKEN_BY_SESSION = True\n\n# Since 8.0.6 CE/PRO version.\n# URL redirected to after the user logs out of Seafile.\n# Usually configured as the Single Logout url.\nLOGOUT_REDIRECT_URL = 'http{s}://www.example-url.com'\n\n\n# Enable system admin to add T&C; all users need to accept the terms before using. Defaults to `False`.\n# Since version 6.0\nENABLE_TERMS_AND_CONDITIONS = True\n\n# Enable two factor authentication for accounts. Defaults to `False`.\n# Since version 6.0\nENABLE_TWO_FACTOR_AUTH = True\n\n# Enable users to select a template when creating a library.\n# When a user selects a template, Seafile will create the folders related to the pattern automatically.\n# Since version 6.0\nLIBRARY_TEMPLATES = {\n 'Technology': ['/Develop/Python', '/Test'],\n 'Finance': ['/Current assets', '/Fixed assets/Computer']\n}\n\n# Enable a user to change password on the 'Settings' page. Defaults to `True`\n# Since version 6.2.11\nENABLE_CHANGE_PASSWORD = True\n\n# Whether to show the contact email when searching for users.\nENABLE_SHOW_CONTACT_EMAIL_WHEN_SEARCH_USER = True\n
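To illustrate what a LIBRARY_TEMPLATES entry means in practice: selecting a template creates the listed folders, including their intermediate parents. The dict below mirrors the example above; `expand_template` is a hypothetical helper for illustration, not a Seafile API:

```python
# Illustrative sketch: which folder paths a LIBRARY_TEMPLATES entry implies.
LIBRARY_TEMPLATES = {
    'Technology': ['/Develop/Python', '/Test'],
    'Finance': ['/Current assets', '/Fixed assets/Computer'],
}

def expand_template(name: str) -> list:
    """Return every folder (including intermediate parents) for a template."""
    folders = set()
    for path in LIBRARY_TEMPLATES[name]:
        parts = path.strip('/').split('/')
        for i in range(1, len(parts) + 1):
            folders.add('/' + '/'.join(parts[:i]))
    return sorted(folders)

print(expand_template('Technology'))
# ['/Develop', '/Develop/Python', '/Test']
```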
"},{"location":"config/seahub_settings_py/#restful-api","title":"RESTful API","text":"# Whether to show the used traffic in user's profile popup dialog. Default is True\nSHOW_TRAFFIC = True\n\n# Allow administrator to view user's file in UNENCRYPTED libraries\n# through Libraries page in System Admin. Default is False.\nENABLE_SYS_ADMIN_VIEW_REPO = True\n\n# For un-login users, providing an email before downloading or uploading on shared link page.\n# Since version 5.1.4\nENABLE_SHARE_LINK_AUDIT = True\n\n# Check virus after upload files to shared upload links. Defaults to `False`.\n# Since version 6.0\nENABLE_UPLOAD_LINK_VIRUS_CHECK = True\n\n# Send email to these email addresses when a virus is detected.\n# This list can be any valid email address, not necessarily the emails of Seafile user.\n# Since version 6.0.8\nVIRUS_SCAN_NOTIFY_LIST = ['user_a@seafile.com', 'user_b@seafile.com']\n
"},{"location":"config/seahub_settings_py/#seahub-custom-functions","title":"Seahub Custom Functions","text":"# API throttling related settings. Enlarger the rates if you got 429 response code during API calls.\nREST_FRAMEWORK = {\n 'DEFAULT_THROTTLE_RATES': {\n 'ping': '600/minute',\n 'anon': '5/minute',\n 'user': '300/minute',\n },\n 'UNICODE_JSON': False,\n}\n\n# Throtting whitelist used to disable throttle for certain IPs.\n# e.g. REST_FRAMEWORK_THROTTING_WHITELIST = ['127.0.0.1', '192.168.1.1']\n# Please make sure `REMOTE_ADDR` header is configured in Nginx conf according to https://manual.seafile.com/deploy/deploy_with_nginx.html.\nREST_FRAMEWORK_THROTTING_WHITELIST = []\ncustom_search_user function in {seafile install path}/conf/seahub_custom_functions/__init__.pyimport os\nimport sys\n\ncurrent_path = os.path.dirname(os.path.abspath(__file__))\nseahub_dir = os.path.join(current_path, \\\n '../../seafile-server-latest/seahub/seahub')\nsys.path.append(seahub_dir)\n\nfrom seahub.profile.models import Profile\ndef custom_search_user(request, emails):\n\n institution_name = ''\n\n username = request.user.username\n profile = Profile.objects.get_profile_by_user(username)\n if profile:\n institution_name = profile.institution\n\n inst_users = [p.user for p in\n Profile.objects.filter(institution=institution_name)]\n\n filtered_emails = []\n for email in emails:\n if email in inst_users:\n filtered_emails.append(email)\n\n return filtered_emails\ncustom_search_user and seahub_custom_functions/__init__.pytest@test.com, you can define a custom_get_groups function in {seafile install path}/conf/seahub_custom_functions/__init__.pyimport os\nimport sys\n\ncurrent_path = os.path.dirname(os.path.abspath(__file__))\nseaserv_dir = os.path.join(current_path, \\\n '../../seafile-server-latest/seafile/lib64/python2.7/site-packages')\nsys.path.append(seaserv_dir)\n\ndef custom_get_groups(request):\n\n from seaserv import ccnet_api\n\n groups = []\n username = 
request.user.username\n\n # for current user\n groups += ccnet_api.get_groups(username)\n\n # for 'test@test.com' user\n groups += ccnet_api.get_groups('test@test.com')\n\n return groups\ncustom_get_groups and seahub_custom_functions/__init__.py
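The throttle rates above ('600/minute', '5/minute', …) follow Django REST Framework's "number/period" convention, where only the first letter of the period is inspected. A simplified sketch of how such a rate string is interpreted (modeled on DRF's SimpleRateThrottle.parse_rate, reduced here to the essentials):

```python
# Illustrative sketch: parse a DRF-style throttle rate like '300/minute'.
def parse_rate(rate: str):
    """Return (allowed_requests, window_in_seconds) for a rate string."""
    num, period = rate.split('/')
    # Only the first letter of the period matters: s, m, h, or d
    seconds = {'s': 1, 'm': 60, 'h': 3600, 'd': 86400}[period[0]]
    return int(num), seconds

print(parse_rate('300/minute'))  # (300, 60)
print(parse_rate('100/day'))     # (100, 86400)
```

Because only the first letter counts, '5/min' and '5/minute' are equivalent.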
"},{"location":"config/sending_email/","title":"Sending Email Notifications on Seahub","text":""},{"location":"config/sending_email/#types-of-email-sending-in-seafile","title":"Types of Email Sending in Seafile","text":"./seahub.sh restart\n
seahub_settings.py to enable email sending.EMAIL_USE_TLS = False\nEMAIL_HOST = 'smtp.example.com' # smtp server\nEMAIL_HOST_USER = 'username@example.com' # username and domain\nEMAIL_HOST_PASSWORD = 'password' # password\nEMAIL_PORT = 25\nDEFAULT_FROM_EMAIL = EMAIL_HOST_USER\nSERVER_EMAIL = EMAIL_HOST_USER\nEMAIL_USE_TLS = True\nEMAIL_HOST = 'smtp.gmail.com'\nEMAIL_HOST_USER = 'username@gmail.com'\nEMAIL_HOST_PASSWORD = 'password'\nEMAIL_PORT = 587\nDEFAULT_FROM_EMAIL = EMAIL_HOST_USER\nSERVER_EMAIL = EMAIL_HOST_USER\nlogs/seahub.log to see what may cause the problem. For a complete email notification list, please refer to email notification list.EMAIL_HOST_USER and EMAIL_HOST_PASSWORD blank (''). (But notice that the emails then will be sent without a From: address.)EMAIL_USE_SSL = True instead of EMAIL_USE_TLS.reply to of email","text":"
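Before restarting Seahub, it can save a round trip to sanity-check the EMAIL_* values. The setting names below are the standard Django ones that Seahub uses; the checking function itself is a hypothetical sketch and does not contact any mail server:

```python
# Illustrative sketch (not part of Seafile): flag common mistakes in an
# EMAIL_* configuration before it goes into seahub_settings.py.
def check_email_settings(cfg: dict) -> list:
    """Return a list of problems found in an EMAIL_* configuration."""
    problems = []
    if cfg.get('EMAIL_USE_TLS') and cfg.get('EMAIL_USE_SSL'):
        problems.append('EMAIL_USE_TLS and EMAIL_USE_SSL are mutually exclusive')
    if cfg.get('EMAIL_USE_TLS') and cfg.get('EMAIL_PORT') == 25:
        problems.append('STARTTLS is usually served on port 587, not 25')
    if not cfg.get('EMAIL_HOST'):
        problems.append('EMAIL_HOST is required')
    return problems

# The Gmail example from above passes the check
gmail = {'EMAIL_USE_TLS': True, 'EMAIL_HOST': 'smtp.gmail.com', 'EMAIL_PORT': 587}
print(check_email_settings(gmail))  # []
```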
"},{"location":"config/sending_email/#config-background-email-sending-task-pro-edition-only","title":"Config background email sending task (Pro Edition Only)","text":"# Set reply-to header to user's email or not, defaults to ``False``. For details,\n# please refer to http://www.w3.org/Protocols/rfc822/\nADD_REPLY_TO_HEADER = True\nseafevents.conf.
"},{"location":"config/sending_email/#customize-email-messages","title":"Customize email messages","text":"[SEAHUB EMAIL]\n\n## must be \"true\" to enable user email notifications when there are new unread notifications\nenabled = true\n\n## interval of sending seahub email. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval = 30m\nSITE_NAME variable in seahub_settings.py. If it is not enough for your case, you can customize the email templates.seahub-data/custom/templates/email_base.html and modify the new one. In this way, the customization will be maintained after upgrade. send_html_email(_(\"Reset Password on %s\") % site_name,\n email_template_name, c, None, [user.username])\nseahub-data/custom/templates/registration/password_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade.send_html_email(_(u'Password has been reset on %s') % SITE_NAME,\n 'sysadmin/user_reset_email.html', c, None, [email])\nseahub-data/custom/templates/sysadmin/user_add_email.html and modify the new one. In this way, the customization will be maintained after upgrade.send_html_email(_(u'Password has been reset on %s') % SITE_NAME,\n 'sysadmin/user_reset_email.html', c, None, [email])\nseahub-data/custom/templates/sysadmin/user_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade.try:\n if file_shared_type == 'f':\n c['file_shared_type'] = _(u\"file\")\n send_html_email(_(u'A file is shared to you on %s') % SITE_NAME,\n 'shared_link_email.html',\n c, from_email, [to_email],\n reply_to=reply_to\n )\n else:\n c['file_shared_type'] = _(u\"directory\")\n send_html_email(_(u'A directory is shared to you on %s') % SITE_NAME,\n 'shared_link_email.html',\n c, from_email, [to_email],\n reply_to=reply_to)\nseahub-data/custom/templates/shared_link_email.html and modify the new one. 
In this way, the customization will be maintained after upgrade.send_html_email(_('New notice on %s') % settings.SITE_NAME,\n 'notifications/notice_email.html', c,\n None, [to_user])\n
"},{"location":"deploy/#manually-deployment-options","title":"Manually deployment options","text":"
"},{"location":"deploy/#ldap-and-ad-integration","title":"LDAP and AD integration","text":"
"},{"location":"deploy/#trouble-shooting","title":"Trouble shooting","text":"
"},{"location":"deploy/#upgrade-seafile-server","title":"Upgrade Seafile Server","text":"
"},{"location":"deploy/auth_switch/","title":"Switch authentication type","text":"
provider you use in the config file. The user to be migrated should already be able to log in with this new authentication type, but they will be created as a new user with a new unique identifier, so they will not have access to their existing libraries. Note the uid from the social_auth_usersocialauth table. Delete this new, still empty user again.xxx@auth.local.social_auth_usersocialauth with the xxx@auth.local, your provider and the uid.12ae56789f1e4c8d8e1c31415867317c@auth.local from local database authentication to OAuth. The OAuth authentication is configured in seahub_settings.py with the provider name authentik-oauth. The uid of the user inside the Identity Provider is HR12345.mysql> select email,left(passwd,25) from EmailUser where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+------------------------------+\n| email | left(passwd,25) |\n+---------------------------------------------+------------------------------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | PBKDF2SHA256$10000$4cdda6... |\n+---------------------------------------------+------------------------------+\n\nmysql> update EmailUser set passwd = '!' where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n\nmysql> insert into `social_auth_usersocialauth` (`username`, `provider`, `uid`, `extra_data`) values ('12ae56789f1e4c8d8e1c31415867317c@auth.local', 'authentik-oauth', 'HR12345', '');\nThe extra_data field stores the user's information returned from the provider. For most providers, the extra_data field is usually an empty string. Since version 11.0.3-Pro, the default value of the extra_data field is NULL.
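The two statements in the walkthrough above are fully determined by three values: the user's internal `xxx@auth.local` ID, the provider name, and the external uid. A purely illustrative helper that assembles them (not an official Seafile tool; back up the database before running any such statements):

```python
# Illustrative sketch: build the UPDATE and INSERT statements from the
# walkthrough for switching one user to an external provider.
def migration_sql(internal_email: str, provider: str, uid: str) -> list:
    disable_pwd = (
        "update EmailUser set passwd = '!' "
        f"where email = '{internal_email}';"
    )
    link_sso = (
        "insert into `social_auth_usersocialauth` "
        "(`username`, `provider`, `uid`, `extra_data`) "
        f"values ('{internal_email}', '{provider}', '{uid}', '');"
    )
    return [disable_pwd, link_sso]

for stmt in migration_sql('12ae56789f1e4c8d8e1c31415867317c@auth.local',
                          'authentik-oauth', 'HR12345'):
    print(stmt)
```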
"},{"location":"deploy/auth_switch/#migrating-from-one-external-authentication-to-another","title":"Migrating from one external authentication to another","text":"mysql> select email,passwd from EmailUser where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+------- +\n| email | passwd |\n+---------------------------------------------+--------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | ! |\n+---------------------------------------------+--------+\n\nmysql> select username,provider,uid from social_auth_usersocialauth where username = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+-----------------+---------+\n| username | provider | uid |\n+---------------------------------------------+-----------------+---------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | authentik-oauth | HR12345 |\n+---------------------------------------------+-----------------+---------+\nsocial_auth_usersocialauth table. No entries need to be deleted or created. You only need to modify the existing ones. The xxx@auth.local remains the same, you only need to replace the provider and the uid.social_auth_usersocialauth table that belongs to the particular user.
"},{"location":"deploy/auto_login_seadrive/#auto-login-on-internet-explorer","title":"Auto Login on Internet Explorer","text":"
HKEY_CURRENT_USER/SOFTWARE/SeaDrive.Key : PreconfigureServerAddr\nType : REG_SZ\nValue : <the url of seafile server>\n\nKey : PreconfigureUseKerberosLogin\nType : REG_SZ\nValue : <0|1> // 0 for normal login, 1 for SSO login\nHKEY_LOCAL_MACHINE/SOFTWARE/Wow6432Node/SeaDrive.
"},{"location":"deploy/auto_login_seadrive/#auto-login-via-group-policy","title":"Auto Login via Group Policy","text":"msiexec /i seadrive.msi /quiet /qn /log install.log\n/opt/seafile-data and /opt/seafile-mysql, are still adopted in this manual. What's more, all k8s YAML files will be placed in /opt/seafile-k8s-yaml. It is not recommended to change these paths. If you do, account for it when following these instructions.
"},{"location":"deploy/deploy_with_k8s/#yaml","title":"YAML","text":"kubectl create secret docker-registry regcred --docker-server=docker.seadrive.org/seafileltd --docker-username=seafile --docker-password=zjkmid6rQibdZ=uJMuWS\n/opt/seafile-k8s-yaml. This series of YAML mainly includes Deployment for pod management and creation, Service for exposing services to the external network, PersistentVolume for defining the location of a volume used for persistent storage on the host and Persistentvolumeclaim for declaring the use of persistent storage in the container. For futher configuration details, you can refer the official documents.apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: mariadb\nspec:\n selector:\n matchLabels:\n app: mariadb\n replicas: 1\n template:\n metadata:\n labels:\n app: mariadb\n spec:\n containers:\n - name: mariadb\n image: mariadb:10.11\n env:\n - name: MARIADB_ROOT_PASSWORD\n value: \"db_dev\"\n - name: MARIADB_AUTO_UPGRADE\n value: \"true\"\n ports:\n - containerPort: 3306\n volumeMounts:\n - name: mariadb-data\n mountPath: /var/lib/mysql\n volumes:\n - name: mariadb-data\n persistentVolumeClaim:\n claimName: mariadb-data\nMARIADB_ROOT_PASSWORD to your own mariadb password. In the above Deployment configuration file, no restart policy for the pod is specified. The default restart policy is Always. If you need to modify it, add the following to the spec attribute:
"},{"location":"deploy/deploy_with_k8s/#mariadb-serviceyaml","title":"mariadb-service.yaml","text":"restartPolicy: OnFailure\n\n#Note:\n# Always: always restart (include normal exit)\n# OnFailure: restart only with unexpected exit\n# Never: do not restart\n
"},{"location":"deploy/deploy_with_k8s/#mariadb-persistentvolumeyaml","title":"mariadb-persistentvolume.yaml","text":"apiVersion: v1\nkind: Service\nmetadata:\n name: mariadb\nspec:\n selector:\n app: mariadb\n ports:\n - protocol: TCP\n port: 3306\n targetPort: 3306\n
"},{"location":"deploy/deploy_with_k8s/#mariadb-persistentvolumeclaimyaml","title":"mariadb-persistentvolumeclaim.yaml","text":"apiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: mariadb-data\nspec:\n capacity:\n storage: 1Gi\n accessModes:\n - ReadWriteOnce\n hostPath:\n path: /opt/seafile-mysql/db\n
"},{"location":"deploy/deploy_with_k8s/#memcached","title":"memcached","text":""},{"location":"deploy/deploy_with_k8s/#memcached-deploymentyaml","title":"memcached-deployment.yaml","text":"apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: mariadb-data\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi\n
"},{"location":"deploy/deploy_with_k8s/#memcached-serviceyaml","title":"memcached-service.yaml","text":"apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: memcached\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: memcached\n template:\n metadata:\n labels:\n app: memcached\n spec:\n containers:\n - name: memcached\n image: memcached:1.6.18\n args: [\"-m\", \"256\"]\n ports:\n - containerPort: 11211\n
"},{"location":"deploy/deploy_with_k8s/#seafile","title":"Seafile","text":""},{"location":"deploy/deploy_with_k8s/#seafile-deploymentyaml","title":"seafile-deployment.yaml","text":"apiVersion: v1\nkind: Service\nmetadata:\n name: memcached\nspec:\n selector:\n app: memcached\n ports:\n - protocol: TCP\n port: 11211\n targetPort: 11211\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: seafile\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: seafile\n template:\n metadata:\n labels:\n app: seafile\n spec:\n containers:\n - name: seafile\n # image: seafileltd/seafile-mc:9.0.10\n # image: seafileltd/seafile-mc:11.0-latest\n image: docker.seadrive.org/seafileltd/seafile-pro-mc:11.0-latest\n env:\n - name: DB_HOST\n value: \"mariadb\"\n - name: DB_ROOT_PASSWD\n value: \"db_dev\" #db's password\n - name: TIME_ZONE\n value: \"Europe/Berlin\"\n - name: SEAFILE_ADMIN_EMAIL\n value: \"admin@seafile.com\" #admin email\n - name: SEAFILE_ADMIN_PASSWORD\n value: \"admin_password\" #admin password\n - name: SEAFILE_SERVER_LETSENCRYPT\n value: \"false\"\n - name: SEAFILE_SERVER_HOSTNAME\n value: \"you_seafile_domain\" #hostname\n ports:\n - containerPort: 80\n # - containerPort: 443\n # name: seafile-secure\n volumeMounts:\n - name: seafile-data\n mountPath: /shared\n volumes:\n - name: seafile-data\n persistentVolumeClaim:\n claimName: seafile-data\n restartPolicy: Always\n # to get image from protected repository\n imagePullSecrets:\n - name: regcred\n
"},{"location":"deploy/deploy_with_k8s/#seafile-persistentvolumeyaml","title":"seafile-persistentvolume.yaml","text":"apiVersion: v1\nkind: Service\nmetadata:\n name: seafile\nspec:\n selector:\n app: seafile\n type: LoadBalancer\n ports:\n - protocol: TCP\n port: 80\n targetPort: 80\n nodePort: 30000\n
"},{"location":"deploy/deploy_with_k8s/#seafile-persistentvolumeclaimyaml","title":"seafile-persistentvolumeclaim.yaml","text":"apiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: seafile-data\nspec:\n capacity:\n storage: 10Gi\n accessModes:\n - ReadWriteOnce\n hostPath:\n path: /opt/seafile-data\n
"},{"location":"deploy/deploy_with_k8s/#deploy-pods","title":"Deploy pods","text":"apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: seafile-data\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi\n
"},{"location":"deploy/deploy_with_k8s/#container-management","title":"Container management","text":"kubectl apply -f /opt/seafile-k8s-yaml/\nseafile- as the prefix (such as seafile-748b695648-d6l4g)kubectl get pods\nkubectl logs seafile-748b695648-d6l4g\nkubectl exec -it seafile-748b695648-d6l4g -- bash\n/opt/seafile-data/conf and need to restart the container, the following command can be refered:
"},{"location":"deploy/https_with_apache/","title":"Enabling HTTPS with Apache","text":"kubectl delete deployments --all\nkubectl apply -f /opt/seafile-k8s-yaml/\nseafile.example.com.
# Ubuntu\n$ sudo a2enmod rewrite\n$ sudo a2enmod proxy_http\nvhost.conf. For Debian/Ubuntu, this is sites-enabled/000-default.
"},{"location":"deploy/https_with_apache/#getting-a-lets-encrypt-certificate","title":"Getting a Let's Encrypt certificate","text":"<VirtualHost *:80>\n ServerName seafile.example.com\n # Use \"DocumentRoot /var/www/html\" for CentOS\n # Use \"DocumentRoot /var/www\" for Debian/Ubuntu\n DocumentRoot /var/www\n Alias /media /opt/seafile/seafile-server-latest/seahub/media\n\n AllowEncodedSlashes On\n\n RewriteEngine On\n\n <Location /media>\n Require all granted\n </Location>\n\n #\n # seafile fileserver\n #\n ProxyPass /seafhttp http://127.0.0.1:8082\n ProxyPassReverse /seafhttp http://127.0.0.1:8082\n RewriteRule ^/seafhttp - [QSA,L]\n\n #\n # seahub\n #\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPreserveHost On\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n</VirtualHost>\nsudo certbot --apache certonly\n/etc/letsencrypt/live. For the host name seafile.example.com, the files are stored in /etc/letsencrypt/live/seafile.example.com. $ sudo a2enmod ssl\n<VirtualHost *:443>\n ServerName seafile.example.com\n DocumentRoot /var/www\n\n SSLEngine On\n SSLCertificateFile /etc/letsencrypt/live/seafile.example.com/fullchain.pem; # Path to your fullchain.pem\n SSLCertificateKeyFile /etc/letsencrypt/live/seafile.example.com/privkey.pem; # Path to your privkey.pem\n\n Alias /media /opt/seafile/seafile-server-latest/seahub/media\n\n <Location /media>\n Require all granted\n </Location>\n\n RewriteEngine On\n\n #\n # seafile fileserver\n #\n ProxyPass /seafhttp http://127.0.0.1:8082\n ProxyPassReverse /seafhttp http://127.0.0.1:8082\n RewriteRule ^/seafhttp - [QSA,L]\n\n #\n # seahub\n #\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPreserveHost On\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n</VirtualHost>\n
"},{"location":"deploy/https_with_apache/#modifying-seahub_settingspy","title":"Modifying seahub_settings.py","text":"sudo service apache2 restart\nSERVICE_URL in seahub_settings.py informs Seafile about the chosen domain, protocol and port. Change the SERVICE_URLso as to account for the switch from HTTP to HTTPS and to correspond to your host name (the http://must not be removed):SERVICE_URL = 'https://seafile.example.com'\nFILE_SERVER_ROOT in seahub_settings.py informs Seafile about the location of and the protocol used by the file server. Change the FILE_SERVER_ROOTso as to account for the switch from HTTP to HTTPS and to correspond to your host name (the trailing /seafhttp must not be removed):FILE_SERVER_ROOT = 'https://seafile.example.com/seafhttp'\nSERVICE_URL and FILE_SERVER_ROOT can also be modified in Seahub via System Admininstration > Settings. If they are configured via System Admin and in seahub_settings.py, the value in System Admin will take precedence.seafile.conf in /opt/seafile/conf:host = 127.0.0.1 ## default port 0.0.0.0\n
"},{"location":"deploy/https_with_apache/#troubleshooting","title":"Troubleshooting","text":"$ su seafile\n$ cd /opt/seafile/seafile-server-latest\n$ ./seafile.sh restart\n$ ./seahub.sh restart\n
"},{"location":"deploy/https_with_nginx/","title":"Enabling HTTPS with Nginx","text":"seafile.example.com.
# CentOS\n$ sudo yum install nginx -y\n\n# Debian/Ubuntu\n$ sudo apt install nginx -y\n
"},{"location":"deploy/https_with_nginx/#preparing-nginx","title":"Preparing Nginx","text":"# CentOS/Debian/Ubuntu\n$ sudo systemctl start nginx\n$ sudo systemctl enable nginx\n$ sudo setenforce permissive\n$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config\n/etc/nginx/conf.d:
"},{"location":"deploy/https_with_nginx/#preparing-nginx-on-debianubuntu","title":"Preparing Nginx on Debian/Ubuntu","text":"$ touch /etc/nginx/conf.d/seafile.conf\n/etc/nginx/sites-available/:$ touch /etc/nginx/sites-available/seafile.conf\n/etc/nginx/sites-enabled/ and /etc/nginx/sites-available: $ rm /etc/nginx/sites-enabled/default\n$ rm /etc/nginx/sites-available/default\n
"},{"location":"deploy/https_with_nginx/#configuring-nginx","title":"Configuring Nginx","text":"$ ln -s /etc/nginx/sites-available/seafile.conf /etc/nginx/sites-enabled/seafile.conf\nseafile.conf and modify the content to fit your needs:log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $upstream_response_time';\n\nserver {\n listen 80;\n server_name seafile.example.com;\n\n proxy_set_header X-Forwarded-For $remote_addr;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n # used for view/edit office file via Office Online Server\n client_max_body_size 0;\n\n access_log /var/log/nginx/seahub.access.log seafileformat;\n error_log /var/log/nginx/seahub.error.log;\n }\n\n location /seafhttp {\n rewrite ^/seafhttp(.*)$ $1 break;\n proxy_pass http://127.0.0.1:8082;\n client_max_body_size 0;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n\n proxy_connect_timeout 36000s;\n proxy_read_timeout 36000s;\n proxy_send_timeout 36000s;\n\n send_timeout 36000s;\n\n access_log /var/log/nginx/seafhttp.access.log seafileformat;\n error_log /var/log/nginx/seafhttp.error.log;\n }\n location /media {\n root /opt/seafile/seafile-server-latest/seahub;\n }\n}\n
listen) - if Seafile server should be available on a non-standard port/ - if Seahub is configured to start on a different port than 8000/seafhttp - if seaf-server is configured to start on a different port than 8082client_max_body_size)client_max_body_size is 1M. Uploading larger files will result in an HTTP 413 error (\"Request Entity Too Large\"). It is recommended to synchronize the value of client_max_body_size with the parameter max_upload_size in section [fileserver] of seafile.conf. Optionally, the value can also be set to 0 to disable this feature. Client uploads are only partly affected by this limit. With a limit of 100 MiB they can safely upload files of any size.
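Keeping client_max_body_size in sync with max_upload_size is easier when both are expressed in MB: nginx sizes use k/m/g suffixes, while seafile.conf's max_upload_size is a plain number of MB. A small conversion sketch (hypothetical helper, not part of either tool):

```python
# Illustrative sketch: convert an nginx size ('100m', '1g', '0') to MB.
def nginx_size_to_mb(value: str) -> int:
    """Return the size in megabytes; 0 means 'unlimited' in nginx."""
    value = value.strip().lower()
    if value == '0':
        return 0
    units = {'k': 1 / 1024, 'm': 1, 'g': 1024}
    suffix = value[-1]
    if suffix in units:
        return int(int(value[:-1]) * units[suffix])
    raise ValueError('expected a size like 100m or 1g')

print(nginx_size_to_mb('100m'))  # 100
print(nginx_size_to_mb('1g'))    # 1024
```

For example, `client_max_body_size 100m;` in nginx should pair with `max_upload_size = 100` in seafile.conf.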
"},{"location":"deploy/https_with_nginx/#getting-a-lets-encrypt-certificate","title":"Getting a Let's Encrypt certificate","text":"$ nginx -t\n$ nginx -s reload\n$ sudo certbot certonly --nginx\n/etc/letsencrypt/live. For the host name seafile.example.com, the files are stored in /etc/letsencrypt/live/seafile.example.com. seafile.conf configuration file in /etc/nginx. log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $upstream_response_time';\n\nserver {\n listen 80;\n server_name seafile.example.com;\n rewrite ^ https://$http_host$request_uri? permanent; # Forced redirect from HTTP to HTTPS\n\n server_tokens off; # Prevents the Nginx version from being displayed in the HTTP response header\n}\n\nserver {\n listen 443 ssl;\n ssl_certificate /etc/letsencrypt/live/seafile.example.com/fullchain.pem; # Path to your fullchain.pem\n ssl_certificate_key /etc/letsencrypt/live/seafile.example.com/privkey.pem; # Path to your privkey.pem\n server_name seafile.example.com;\n server_tokens off;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n proxy_set_header X-Forwarded-Proto https;\n\n... # No changes beyond this point compared to the Nginx configuration without HTTPS\n
"},{"location":"deploy/https_with_nginx/#large-file-uploads","title":"Large file uploads","text":"nginx -t\nnginx -s reload\n location /seafhttp {\n ... ...\n proxy_request_buffering off;\n }\n
"},{"location":"deploy/https_with_nginx/#modifying-seahub_settingspy","title":"Modifying seahub_settings.py","text":" location /seafdav {\n ... ...\n proxy_request_buffering off;\n }\nSERVICE_URL in seahub_settings.py informs Seafile about the chosen domain, protocol and port. Change the SERVICE_URLso as to account for the switch from HTTP to HTTPS and to correspond to your host name (the http:// must not be removed):SERVICE_URL = 'https://seafile.example.com'\nFILE_SERVER_ROOT in seahub_settings.py informs Seafile about the location of and the protocol used by the file server. Change the FILE_SERVER_ROOT so as to account for the switch from HTTP to HTTPS and to correspond to your host name (the trailing /seafhttp must not be removed):FILE_SERVER_ROOT = 'https://seafile.example.com/seafhttp'\nSERVICE_URL and FILE_SERVER_ROOT can also be modified in Seahub via System Admininstration > Settings. If they are configured via System Admin and in seahub_settings.py, the value in System Admin will take precedence.[fileserver] block on seafile.conf in /opt/seafile/conf:host = 127.0.0.1 ## default port 0.0.0.0\n
"},{"location":"deploy/https_with_nginx/#additional-modern-settings-for-nginx-optional","title":"Additional modern settings for Nginx (optional)","text":""},{"location":"deploy/https_with_nginx/#activating-ipv6","title":"Activating IPv6","text":"$ su seafile\n$ cd /opt/seafile/seafile-server-latest\n$ ./seafile.sh restart\n$ ./seahub.sh restart # or \"./seahub.sh start-fastcgi\" if you're using fastcgi\n
"},{"location":"deploy/https_with_nginx/#activating-http2","title":"Activating HTTP2","text":"listen 443;\nlisten [::]:443;\nhttp2.
"},{"location":"deploy/https_with_nginx/#advanced-tls-configuration-for-nginx-optional","title":"Advanced TLS configuration for Nginx (optional)","text":"listen 443 http2;\nlisten [::]:443 http2;\nseafile.conf, this rating can be significantly improved.
"},{"location":"deploy/https_with_nginx/#enabling-http-strict-transport-security","title":"Enabling HTTP Strict Transport Security","text":" server {\n listen 80;\n server_name seafile.example.com;\n rewrite ^ https://$http_host$request_uri? permanent; # Forced redirect from HTTP to HTTPS\n server_tokens off;\n }\n server {\n listen 443 ssl;\n ssl_certificate /etc/ssl/cacert.pem; # Path to your cacert.pem\n ssl_certificate_key /etc/ssl/privkey.pem; # Path to your privkey.pem\n server_name seafile.example.com;\n server_tokens off;\n\n # HSTS for protection against man-in-the-middle-attacks\n add_header Strict-Transport-Security \"max-age=31536000; includeSubDomains\";\n\n # DH parameters for Diffie-Hellman key exchange\n ssl_dhparam /etc/nginx/dhparam.pem;\n\n # Supported protocols and ciphers for general purpose server with good security and compatability with most clients\n ssl_protocols TLSv1.2 TLSv1.3;\n ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;\n ssl_prefer_server_ciphers off;\n\n # Supported protocols and ciphers for server when clients > 5years (i.e., Windows Explorer) must be supported\n #ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;\n #ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA;\n #ssl_prefer_server_ciphers on;\n\n 
ssl_session_timeout 5m;\n ssl_session_cache shared:SSL:5m;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto https;\n\n access_log /var/log/nginx/seahub.access.log;\n error_log /var/log/nginx/seahub.error.log;\n\n proxy_read_timeout 1200s;\n\n client_max_body_size 0;\n }\n\n location /seafhttp {\n rewrite ^/seafhttp(.*)$ $1 break;\n proxy_pass http://127.0.0.1:8082;\n client_max_body_size 0;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_connect_timeout 36000s;\n proxy_read_timeout 36000s;\n proxy_send_timeout 36000s;\n send_timeout 36000s;\n }\n\n location /media {\n root /home/user/haiwen/seafile-server-latest/seahub;\n }\n }\nadd_header Strict-Transport-Security \"max-age=31536000; includeSubDomains\" always;\n$ openssl dhparam 2048 > /etc/nginx/dhparam.pem # Generates DH parameter of length 2048 bits\n
"},{"location":"deploy/https_with_nginx/#restricting-tls-protocols-and-ciphers","title":"Restricting TLS protocols and ciphers","text":"ssl_dhparam /etc/nginx/dhparam.pem;\nhttps://your-server/krb5-login. Only this URL needs to be configured under Kerberos protection. All other URLs don't go through the Kerberos module. The overall workflow for a user to login with Kerberos is as follows:
https://your-server/krb5-login.
"},{"location":"deploy/kerberos_config/#get-keytab-for-apache","title":"Get keytab for Apache","text":"<IfModule mod_ssl.c>\n <VirtualHost _default_:443>\n ServerName seafile.example.com\n DocumentRoot /var/www\n...\n <Location /krb5-login/>\n SSLRequireSSL\n AuthType Kerberos\n AuthName \"Kerberos EXAMPLE.ORG\"\n KrbMethodNegotiate On\n KrbMethodK5Passwd On\n Krb5KeyTab /etc/apache2/conf.d/http.keytab\n #ErrorDocument 401 '<html><meta http-equiv=\"refresh\" content=\"0; URL=/accounts/login\"><body>Kerberos authentication did not pass.</body></html>'\n Require valid-user\n </Location>\n...\n </VirtualHost>\n</IfModule>\nREMOTE_USER environment variable.
"},{"location":"deploy/kerberos_config/#verify","title":"Verify","text":"ENABLE_KRB5_LOGIN = True\n
user-login-name@domain-name, e.g. john@example.com. It's not a real email address, but it works fine as the unique identifier.social_auth_usersocialauth to map the identifier to internal user ID in Seafile. When this ID is changed in LDAP for a user, you only need to update social_auth_usersocialauth table.seahub_settings.py. Examples are as follows:ENABLE_LDAP = True\nLDAP_SERVER_URL = 'ldap://192.168.0.1' \nLDAP_BASE_DN = 'ou=test,dc=seafile,dc=ren' \nLDAP_ADMIN_DN = 'administrator@example.com' \nLDAP_ADMIN_PASSWORD = 'yourpassword' \nLDAP_PROVIDER = 'ldap' \nLDAP_LOGIN_ATTR = 'email' \nLDAP_CONTACT_EMAIL_ATTR = '' \nLDAP_USER_ROLE_ATTR = '' \nLDAP_USER_FIRST_NAME_ATTR = 'givenName' \nLDAP_USER_LAST_NAME_ATTR = 'sn' \nLDAP_USER_NAME_REVERSE = False \nLDAP_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' \n
cn=admin,dc=example,dc=comLDAP_BASE_DN and LDAP_ADMIN_DN:
"},{"location":"deploy/ldap_in_11.0/#advanced-ldap-integration-options","title":"Advanced LDAP Integration Options","text":""},{"location":"deploy/ldap_in_11.0/#multiple-base","title":"Multiple BASE","text":"LDAP_BASE_DN, you first have to navigate your organization hierachy on the domain controller GUI.
cn=users,dc=yourdomain,dc=com as LDAP_BASE_DN (with proper adjustment for your own needs).dsquery command on the domain controller to find out the DN for this OU. For example, if the OU is staff, you can run dsquery ou -name staff. More information can be found here.user@domain.name format for the LDAP_ADMIN_DN option. For example, you can use administrator@example.com for LDAP_ADMIN_DN. Sometimes the domain controller doesn't recognize this format. You can still use the dsquery command to find out the user's DN. For example, if the user name is 'seafileuser', run dsquery user -name seafileuser. More information here.LDAP_BASE_DN option. The DNs are separated by \";\", e.g.
"},{"location":"deploy/ldap_in_11.0/#additional-search-filter","title":"Additional Search Filter","text":"LDAP_BASE_DN = 'ou=developers,dc=example,dc=com;ou=marketing,dc=example,dc=com'\nLDAP_FILTER option. The value of this option follows standard LDAP search filter syntax (https://msdn.microsoft.com/en-us/library/aa746475(v=vs.85).aspx).(&($LOGIN_ATTR=*)($LDAP_FILTER)). $LOGIN_ATTR and $LDAP_FILTER will be replaced by your option values.seahub_settings.py:LDAP_FILTER = 'memberOf=CN=group,CN=developers,DC=example,DC=com'\n(&(mail=*)(memberOf=CN=group,CN=developers,DC=example,DC=com))memberOf attribute is only available in Active Directory.LDAP_FILTER option to limit user scope to a certain AD group.
dsquery command on the domain controller. For example, if group name is 'seafilegroup', run dsquery group -name seafilegroup.seahub_settings.py:
"},{"location":"deploy/ldap_in_11.0/#using-tls-connection-to-ldap-server","title":"Using TLS connection to LDAP server","text":"LDAP_FILTER = 'memberOf={output of dsquery command}'\nLDAP_SERVER_URL as the access address of the ldaps protocol to use TLS to connect to the LDAP service, for example:
"},{"location":"deploy/libreoffice_online/","title":"Integrate Seafile with Collabora Online (LibreOffice Online)","text":"LDAP_SERVER_URL = 'ldaps://192.168.0.1:636/'\nserver {\n listen 443 ssl;\n server_name collabora-online.seafile.com;\n\n ssl_certificate /etc/letsencrypt/live/collabora-online.seafile.com/fullchain.pem;\n ssl_certificate_key /etc/letsencrypt/live/collabora-online.seafile.com/privkey.pem;\n\n # static files\n location ^~ /browser {\n proxy_pass https://127.0.0.1:9980;\n proxy_set_header Host $http_host;\n }\n\n # WOPI discovery URL\n location ^~ /hosting/discovery {\n proxy_pass https://127.0.0.1:9980;\n proxy_set_header Host $http_host;\n }\n\n # Capabilities\n location ^~ /hosting/capabilities {\n proxy_pass https://127.0.0.1:9980;\n proxy_set_header Host $http_host;\n }\n\n # main websocket\n location ~ ^/cool/(.*)/ws$ {\n proxy_pass https://127.0.0.1:9980;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection \"Upgrade\";\n proxy_set_header Host $http_host;\n proxy_read_timeout 36000s;\n }\n\n # download, presentation and image upload\n location ~ ^/(c|l)ool {\n proxy_pass https://127.0.0.1:9980;\n proxy_set_header Host $http_host;\n }\n\n # Admin Console websocket\n location ^~ /cool/adminws {\n proxy_pass https://127.0.0.1:9980;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection \"Upgrade\";\n proxy_set_header Host $http_host;\n proxy_read_timeout 36000s;\n }\n}\ndocker pull collabora/code\ndocker run -t -d -p 127.0.0.1:9980:9980 -e \"aliasgroup1=https://<your-dot-escaped-domain>:443\" -e \"username=***\" -e \"password=***\" --name code --restart always collabora/code\ndomain args is the domain name of your Seafile server, if your Seafile server's domain name is demo.seafile.com, the command should be:docker run -t -d -p 127.0.0.1:9980:9980 -e \"aliasgroup1=https://demo.seafile.com:443\" -e \"username=***\" -e \"password=***\" --name code --restart always collabora/code\n# From 6.1.0 CE version on, 
Seafile supports viewing/editing **doc**, **ppt**, **xls** files via LibreOffice\n# Add this setting to view/edit **doc**, **ppt**, **xls** files\nOFFICE_SERVER_TYPE = 'CollaboraOffice'\n\n# Enable LibreOffice Online\nENABLE_OFFICE_WEB_APP = True\n\n# URL of LibreOffice Online's discovery page\n# The discovery page tells Seafile how to interact with LibreOffice Online when viewing a file online\n# You should change `https://collabora-online.seafile.com/hosting/discovery` to your actual LibreOffice Online server address\nOFFICE_WEB_APP_BASE_URL = 'https://collabora-online.seafile.com/hosting/discovery'\n\n# Expiration of WOPI access token\n# WOPI access token is a string used by Seafile to determine the file's\n# identity and permissions when using LibreOffice Online to view it online\n# For security reasons, this token should expire after a set time period\nWOPI_ACCESS_TOKEN_EXPIRATION = 30 * 60 # seconds\n\n# List of file formats that you want to view through LibreOffice Online\n# You can change this value according to your preferences\n# And of course you should make sure your LibreOffice Online supports previewing\n# the files with the specified extensions\nOFFICE_WEB_APP_FILE_EXTENSION = ('odp', 'ods', 'odt', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt', 'pptm', 'pptx', 'doc', 'docm', 'docx')\n\n# Enable editing files through LibreOffice Online\nENABLE_OFFICE_WEB_APP_EDIT = True\n\n# Types of files that should be editable through LibreOffice Online\nOFFICE_WEB_APP_EDIT_FILE_EXTENSION = ('odp', 'ods', 'odt', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt', 'pptm', 'pptx', 'doc', 'docm', 'docx')\n
sudo apt-get install python-mysqldb or sudo apt-get install python3-mysqldb to install it./opt/seafile.sqlite2mysql.sh:chmod +x sqlite2mysql.sh\n./sqlite2mysql.sh\nccnet-db.sql, seafile-db.sql, seahub-db.sql.mysql> create database ccnet_db character set = 'utf8';\nmysql> create database seafile_db character set = 'utf8';\nmysql> create database seahub_db character set = 'utf8';\nmysql> use ccnet_db;\nmysql> source ccnet-db.sql;\nmysql> use seafile_db;\nmysql> source seafile-db.sql;\nmysql> use seahub_db;\nmysql> source seahub-db.sql;\n[Database]\nENGINE=mysql\nHOST=127.0.0.1\nPORT = 3306\nUSER=root\nPASSWD=root\nDB=ccnet_db\nCONNECTION_CHARSET=utf8\n127.0.0.1, don't use localhost.seafile.conf with the following lines:[database]\ntype=mysql\nhost=127.0.0.1\nport = 3306\nuser=root\npassword=root\ndb_name=seafile_db\nconnection_charset=utf8\nseahub_settings.py:DATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.mysql',\n 'USER' : 'root',\n 'PASSWORD' : 'root',\n 'NAME' : 'seahub_db',\n 'HOST' : '127.0.0.1',\n 'PORT': '3306',\n # This is only needed for MySQL older than 5.5.5.\n # For MySQL newer than 5.5.5 INNODB is the default already.\n 'OPTIONS': {\n \"init_command\": \"SET storage_engine=INNODB\",\n }\n }\n}\nuser_notifications table manually by:
"},{"location":"deploy/migrate_from_sqlite_to_mysql/#faq","title":"FAQ","text":""},{"location":"deploy/migrate_from_sqlite_to_mysql/#encountered-errno-150-foreign-key-constraint-is-incorrectly-formed","title":"Encountered use seahub_db;\ndelete from notifications_usernotification;\nerrno: 150 \"Foreign key constraint is incorrectly formed\"","text":"auth_user\nauth_group\nauth_permission\nauth_group_permissions\nauth_user_groups\nauth_user_user_permissions\n
"},{"location":"deploy/notification-server/","title":"Notification Server Overview","text":"post_office_emailtemplate\npost_office_email\npost_office_attachment\npost_office_attachment_emails\n
"},{"location":"deploy/notification-server/#how-to-configure-and-run","title":"How to configure and run","text":"# jwt_private_key are required.You should generate it manually.\n[notification]\nenabled = true\n# the ip of notification server. (Do not modify the host when using Nginx or Apache, as Nginx or Apache will proxy the requests to this address)\nhost = 127.0.0.1\n# the port of notification server\nport = 8083\n# the log level of notification server\n# You can set log_level to debug to print messages sent to clients.\nlog_level = info\n# jwt_private_key is used to generate jwt token and authenticate seafile server\njwt_private_key = M@O8VWUb81YvmtWLHGB2I_V7di5-@0p(MF*GrE!sIws23F\n# generate jwt_private_key\nopenssl rand -base64 32\nmap $http_upgrade $connection_upgrade {\ndefault upgrade;\n'' close;\n}\n\nserver {\n location /notification/ping {\n proxy_pass http://127.0.0.1:8083/ping;\n access_log /var/log/nginx/notif.access.log;\n error_log /var/log/nginx/notif.error.log;\n }\n\n location /notification {\n proxy_pass http://127.0.0.1:8083/;\n proxy_http_version 1.1;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection $connection_upgrade;\n access_log /var/log/nginx/notif.access.log;\n error_log /var/log/nginx/notif.error.log;\n }\n}\n ProxyPass /notification/ping http://127.0.0.1:8083/ping/\n ProxyPassReverse /notification/ping http://127.0.0.1:8083/ping/\n\n ProxyPass /notification ws://127.0.0.1:8083/\n ProxyPassReverse /notification ws://127.0.0.1:8083/\nThe configured ProxyPass and ProxyPassMatch rules are checked in the order of configuration. The first rule that matches wins.\nSo usually you should sort conflicting ProxyPass rules starting with the longest URLs first.\nOtherwise, later rules for longer URLS will be hidden by any earlier rule which uses a leading substring of the URL. 
Note that there is some relation with worker sharing.\n #\n # notification server\n #\n ProxyPass /notification/ping http://127.0.0.1:8083/ping/\n ProxyPassReverse /notification/ping http://127.0.0.1:8083/ping/\n\n ProxyPass /notification ws://127.0.0.1:8083/\n ProxyPassReverse /notification ws://127.0.0.1:8083/\n\n #\n # seafile fileserver\n #\n ProxyPass /seafhttp http://127.0.0.1:8082\n ProxyPassReverse /seafhttp http://127.0.0.1:8082\n RewriteRule ^/seafhttp - [QSA,L]\n\n #\n # seahub\n #\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPreserveHost On\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n
"},{"location":"deploy/notification-server/#checking-notification-server-status","title":"Checking notification server status","text":"./seafile.sh restart\nhttp://127.0.0.1:8083/ping from your browser, which will answer {\"ret\": \"pong\"}. If you have a proxy configured, you can access https://{server}/notification/ping from your browser instead.
"},{"location":"deploy/notification-server/#notification-server-in-seafile-cluster","title":"Notification Server in Seafile cluster","text":"Notification server is enabled on the remote server xxxx\n[notification]\nenabled = true\n# the ip of notification server.\nhost = 192.168.1.134\n# the port of notification server\nport = 8083\n# the log level of notification server\nlog_level = info\n# jwt_private_key is used to generate jwt token and authenticate seafile server\njwt_private_key = M@O8VWUb81YvmtWLHGB2I_V7di5-@0p(MF*GrE!sIws23F\n
/notification/ping requests to notification server via http protocol./notification to notification server.
"},{"location":"deploy/oauth/","title":"OAuth Authentication","text":""},{"location":"deploy/oauth/#oauth","title":"OAuth","text":"#/etc/haproxy/haproxy.cfg\n\n# Other existing haproxy configurations\n......\n\nfrontend seafile\n bind 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n acl notif_ping_request url_sub -i /notification/ping\n acl ws_requests url -i /notification\n acl hdr_connection_upgrade hdr(Connection) -i upgrade\n acl hdr_upgrade_websocket hdr(Upgrade) -i websocket\n use_backend ws_backend if hdr_connection_upgrade hdr_upgrade_websocket\n use_backend notif_ping_backend if notif_ping_request\n use_backend ws_backend if ws_requests\n default_backend backup_nodes\n\nbackend backup_nodes\n cookie SERVERID insert indirect nocache\n server seafileserver01 192.168.0.137:80\n\nbackend notif_ping_backend\n option forwardfor\n server ws 192.168.0.137:8083\n\nbackend ws_backend\n option forwardfor # This sets X-Forwarded-For\n server ws 192.168.0.137:8083\nENABLE_OAUTH = True\n\n# If create new user when he/she logs in Seafile for the first time, defalut `True`.\nOAUTH_CREATE_UNKNOWN_USER = True\n\n# If active new user when he/she logs in Seafile for the first time, defalut `True`.\nOAUTH_ACTIVATE_USER_AFTER_CREATION = True\n\n# Usually OAuth works through SSL layer. If your server is not parametrized to allow HTTPS, some method will raise an \"oauthlib.oauth2.rfc6749.errors.InsecureTransportError\". Set this to `True` to avoid this error.\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\n# Client id/secret generated by authorization server when you register your client application.\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\n\n# Callback url when user authentication succeeded. 
Note, the redirect url you input when you register your client application MUST be exactly the same as this value.\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\n# The following should NOT be changed if you are using Github as OAuth provider.\nOAUTH_PROVIDER_DOMAIN = 'github.com' \nOAUTH_PROVIDER = 'github.com'\n\nOAUTH_AUTHORIZATION_URL = 'https://github.com/login/oauth/authorize'\nOAUTH_TOKEN_URL = 'https://github.com/login/oauth/access_token'\nOAUTH_USER_INFO_URL = 'https://api.github.com/user'\nOAUTH_SCOPE = [\"user\",]\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # Please keep the 'email' option unchanged to be compatible with the login of users of version 11.0 and earlier.\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n \"uid\": (True, \"uid\"), # Seafile v11.0 + \n}\nOAUTH_PROVIDER_DOMAIN will be deprecated, and it can be replaced by OAUTH_PROVIDER. This variable is used in the database to identify third-party providers, either as a domain or as an easy-to-remember string less than 32 characters. OAUTH_ATTRIBUTE_MAP = {\n <:Attribute in the OAuth provider>: (<:Is required or not in Seafile?>, <:Attribute in Seafile >)\n}\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # it is deprecated\n \"uid / id / username\": (True, \"uid\") \n\n # extra info you want to update in Seafile\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"), \n}\nid stands for a unique identifier of the user in Github; this tells Seafile which attribute the remote resource server uses to identify its user. The value part True indicates whether this field is mandatory in Seafile.uid as the external unique identifier of the user. It stores uid in the table social_auth_usersocialauth and maps it to the internal unique identifier used in Seafile. Different OAuth systems have different attributes, which may be: id or uid or username, etc. And the id/email config id: (True, email) is deprecated. 
OAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"),\n \"uid\": (True, \"uid\") ,\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"), \n}\n\"id\": (True, \"email\").\"id\": (True, \"email\") item. Your configuration should be like:
"},{"location":"deploy/oauth/#sample-settings-for-google","title":"Sample settings for Google","text":"OAUTH_ATTRIBUTE_MAP = {\n \"uid\": (True, \"uid\") ,\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"), \n}\n
"},{"location":"deploy/oauth/#sample-settings-for-github","title":"Sample settings for Github","text":"ENABLE_OAUTH = True\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\n# The following shoud NOT be changed if you are using Google as OAuth provider.\nOAUTH_PROVIDER_DOMAIN = 'google.com'\nOAUTH_AUTHORIZATION_URL = 'https://accounts.google.com/o/oauth2/v2/auth'\nOAUTH_TOKEN_URL = 'https://www.googleapis.com/oauth2/v4/token'\nOAUTH_USER_INFO_URL = 'https://www.googleapis.com/oauth2/v1/userinfo'\nOAUTH_SCOPE = [\n \"openid\",\n \"https://www.googleapis.com/auth/userinfo.email\",\n \"https://www.googleapis.com/auth/userinfo.profile\",\n]\nOAUTH_ATTRIBUTE_MAP = {\n \"sub\": (True, \"uid\"),\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\nemail is not the unique identifier for an user, but id is in most cases, so we use id as settings example in our manual. As Seafile uses email to identify an unique user account for now, so we combine id and OAUTH_PROVIDER_DOMAIN, which is github.com in your case, to an email format string and then create this account if not exist. Change the setting as followings:
"},{"location":"deploy/oauth/#sample-settings-for-gitlab","title":"Sample settings for GitLab","text":"ENABLE_OAUTH = True\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\nOAUTH_PROVIDER_DOMAIN = 'github.com'\nOAUTH_AUTHORIZATION_URL = 'https://github.com/login/oauth/authorize'\nOAUTH_TOKEN_URL = 'https://github.com/login/oauth/access_token'\nOAUTH_USER_INFO_URL = 'https://api.github.com/user'\nOAUTH_SCOPE = [\"user\",]\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, 'uid'),\n \"email\": (False, \"contact_email\"),\n \"name\": (False, \"name\"),\n}\n
OAUTH_REDIRECT_URLopenid and read_user in the scopes list.
"},{"location":"deploy/oauth/#sample-settings-for-azure-cloud","title":"Sample settings for Azure Cloud","text":"ENABLE_OAUTH = True\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\nOAUTH_REDIRECT_URL = \"https://your-seafile/oauth/callback/\"\n\nOAUTH_PROVIDER_DOMAIN = 'your-domain'\nOAUTH_AUTHORIZATION_URL = 'https://gitlab.your-domain/oauth/authorize'\nOAUTH_TOKEN_URL = 'https://gitlab.your-domain/oauth/token'\nOAUTH_USER_INFO_URL = 'https://gitlab.your-domain/api/v4/user'\nOAUTH_SCOPE = [\"openid\", \"read_user\"]\nOAUTH_ATTRIBUTE_MAP = {\n \"email\": (True, \"uid\"),\n \"name\": (False, \"name\")\n}\nid field returned from Azure Cloud's user info endpoint, so we use a special configuration for OAUTH_ATTRIBUTE_MAP setting (others are the same as Github/Google):OAUTH_ATTRIBUTE_MAP = {\n \"email\": (True, \"uid\"),\n \"name\": (False, \"name\")\n}\nseahub_settings.py.# Enable OCM\nENABLE_OCM = True\nOCM_PROVIDER_ID = '71687320-6219-47af-82f3-32012707a5ae' # the unique id of this server\nOCM_REMOTE_SERVERS = [\n {\n \"server_name\": \"dev\",\n \"server_url\": \"https://seafile-domain-1/\", # should end with '/'\n },\n {\n \"server_name\": \"download\",\n \"server_url\": \"https://seafile-domain-2/\", # should end with '/'\n },\n]\nseahub_settings.py.
"},{"location":"deploy/ocm/#usage","title":"Usage","text":""},{"location":"deploy/ocm/#share-library-to-other-server","title":"Share library to other server","text":"# Enable OCM\nENABLE_OCM_VIA_WEBDAV = True\nOCM_PROVIDER_ID = '71687320-6219-47af-82f3-32012707a5ae' # the unique id of this server\nOCM_REMOTE_SERVERS = [\n {\n \"server_name\": \"nextcloud\",\n \"server_url\": \"https://nextcloud-domain-1/\", # should end with '/'\n }\n]\nonlyoffice.yml provided by Seafile according to this document, or you can deploy it to a different machine according to OnlyOffice official document.
"},{"location":"deploy/only_office/#deployment-of-onlyoffice","title":"Deployment of OnlyOffice","text":"pwgen -s 40 1\nonlyoffice.ymlwget https://manual.seafile.com/12/docker/docker-compose/onlyoffice.yml\nonlyoffice.yml into COMPOSE_FILE list (i.e., COMPOSE_FILE='...,onlyoffice.yml'), and add the following configurations of onlyoffice in .env file.# OnlyOffice image\nONLYOFFICE_IMAGE=onlyoffice/documentserver:8.1.0.1\n\n# Persistent storage directory of OnlyOffice\nONLYOFFICE_VOLUME=/opt/onlyoffice\n\n# OnlyOffice document server port\nONLYOFFICE_PORT=6233\n\n# jwt secret, generated by `pwgen -s 40 1` \nONLYOFFICE_JWT_SECRET=<your jwt secret>\nseahub_settings.pyENABLE_ONLYOFFICE = True\nONLYOFFICE_APIJS_URL = 'https://seafile.example.com:6233/web-apps/apps/api/documents/api.js'\nONLYOFFICE_FILE_EXTENSION = ('doc', 'docx', 'ppt', 'pptx', 'xls', 'xlsx', 'odt', 'fodt', 'odp', 'fodp', 'ods', 'fods', 'csv', 'ppsx', 'pps')\nONLYOFFICE_JWT_SECRET = '<your jwt secret>'\nONLYOFFICE_PORT, and port in the term ONLYOFFICE_APIJS_URL in seahub_settings.py has been modified together.local-production-linux.json to force some settings.nano local-production-linux.json\n{\n \"services\": {\n \"CoAuthoring\": {\n \"autoAssembly\": {\n \"enable\": true,\n \"interval\": \"5m\"\n }\n }\n },\n \"FileConverter\": {\n \"converter\": {\n \"downloadAttemptMaxCount\": 3\n }\n }\n}\nonlyoffice.yml:service:\n ...\n onlyoffice:\n ...\n volumes:\n ...\n - <Your path to local-production-linux.json>:/etc/onlyoffice/documentserver/local-production-linux.json\n...\nSEAFILE_MYSQL_* in .env. If you need to specify another existing database, please modify it in onlyoffice.ymldocker compose up -d\ndocker exec -it seafile-mysql bash\nonlyoffice and add corresponding permissions for the seafile user
"},{"location":"deploy/only_office/#restart-seafile-docker-instance-and-test-that-onlyoffice-is-running","title":"Restart Seafile-docker instance and test that OnlyOffice is running","text":"create database if not exists onlyoffice charset utf8mb4;\nGRANT ALL PRIVILEGES ON `onlyoffice`.* to `seafile`@`%.%.%.%`;\ndocker-compose down\ndocker-compose up -d\nhttp{s}://{your Seafile server's domain or IP}:6233/welcome, you will get Document Server is running info at this page.docker logs -f seafile-onlyoffice, then open an office file. After the \"Download failed.\" error appears on the page, observe the logs for the following error:==> /var/log/onlyoffice/documentserver/converter/out.log <==\n...\nError: DNS lookup {local IP} (family:undefined, host:undefined) is not allowed. Because, It is a private IP address.\n...\nseahub_settings.py and then restart the service.
"},{"location":"deploy/only_office/#about-ssl","title":"About SSL","text":"ENABLE_ONLYOFFICE = True\nONLYOFFICE_APIJS_URL = 'http{s}://<Your OnlyOffice host url>/web-apps/apps/api/documents/api.js'\nONLYOFFICE_FILE_EXTENSION = ('doc', 'docx', 'ppt', 'pptx', 'xls', 'xlsx', 'odt', 'fodt', 'odp', 'fodp', 'ods', 'fods', 'csv', 'ppsx', 'pps')\nONLYOFFICE_JWT_SECRET = '<your jwt secret>'\nonlyoffice.yml file in this document, SSL is primarily handled by the Caddy. If the OnlyOffice document server and Seafile server are not on the same machine, please refer to the official document to configure SSL for OnlyOffice.
http(s)://SEAFILE_SERVER_URL/outlook/http(s)://SEAFILE_SERVER_URL/accounts/login/ including a redirect request to /outlook/ following a successful authentication (e.g., https://demo.seafile.com/accounts/login/?next=/jwt-sso/?page=/outlook/)# CentOS/RedHat\n$ sudo yum install -y php-fpm php-curl\n$ php --version\n\n# Debian/Ubuntu\n$ sudo apt install -y php-fpm php-curl\n$ php --version\n/var/www:
"},{"location":"deploy/outlook_addin_config/#configuring-seahub","title":"Configuring Seahub","text":"$ mkdir -p /var/www/outlook-sso\n$ cd /var/www/outlook-sso\n$ composer require firebase/php-jwt guzzlehttp/guzzle\nseahub_settings.py using a text editor:ENABLE_JWT_SSO = True\nJWT_SSO_SECRET_KEY = 'SHARED_SECRET'\nENABLE_SYS_ADMIN_GENERATE_USER_AUTH_TOKEN = True\nlocation /outlook {\n alias /var/www/outlook-sso/public;\n index index.php;\n location ~ \\.php$ {\n fastcgi_split_path_info ^(.+\\.php)(/.+)$;\n fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;\n fastcgi_param SCRIPT_FILENAME $request_filename;\n fastcgi_index index.php;\n include fastcgi_params;\n }\n}\n
"},{"location":"deploy/outlook_addin_config/#deploying-the-php-script","title":"Deploying the PHP script","text":"$ nginx -t\n$ nginx -s reload\n$ cd /var/www/outlook-sso\n$ nano config.php\nconfig.php:<?php\n\n# general settings\n$seafile_url = 'SEAFILE_SERVER_URL';\n$jwt_shared_secret = 'SHARED_SECRET';\n\n# Option 1: provide credentials of a seafile admin user\n$seafile_admin_account = [\n 'username' => '',\n 'password' => '',\n];\n\n# Option 2: provide the api-token of a seafile admin user\n$seafile_admin_token = '';\n\n?>\nindex.php and copy & paste the PHP script:mkdir /var/www/outlook-sso/public\n$ cd /var/www/outlook-sso/public\n$ nano index.php\n<?php\n/** IMPORTANT: there is no need to change anything in this file ! **/\n\nrequire_once __DIR__ . '/../vendor/autoload.php';\nrequire_once __DIR__ . '/../config.php';\n\nif(!empty($_GET['jwt-token'])){\n try {\n $decoded = Firebase\\JWT\\JWT::decode($_GET['jwt-token'], new Firebase\\JWT\\Key($jwt_shared_secret, 'HS256'));\n }\n catch (Exception $e){\n echo json_encode([\"error\" => \"wrong JWT-Token\"]);\n die();\n }\n\n try {\n // init connetion to seafile api\n $client = new GuzzleHttp\\Client(['base_uri' => $seafile_url]);\n\n // get admin api-token with his credentials (if not set)\n if(empty($seafile_admin_token)){\n $request = $client->request('POST', '/api2/auth-token/', ['form_params' => $seafile_admin_account]);\n $response = json_decode($request->getBody());\n $seafile_admin_token = $response->token;\n }\n\n // get api-token of the user\n $request = $client->request('POST', '/api/v2.1/admin/generate-user-auth-token/', [\n 'json' => ['email' => $decoded->email],\n 'headers' => ['Authorization' => 'Token '. 
$seafile_admin_token]\n ]);\n $response = json_decode($request->getBody());\n\n // create the output for the outlook plugin (json like response)\n echo json_encode([\n 'exp' => $decoded->exp,\n 'email' => $decoded->email,\n 'name' => $decoded->name,\n 'token' => $response->token,\n ]);\n } catch (GuzzleHttp\\Exception\\ClientException $e){\n echo $e->getResponse()->getBody();\n }\n}\nelse{ // no jwt-token. therefore redirect to the login page of seafile\n header(\"Location: \". $seafile_url .\"/accounts/login/?next=/jwt-sso/?page=/outlook\");\n} ?>\n/var/www/outlook-sso/ should now look as follows:$ tree -L 2 /var/www/outlook-sso\n/var/www/outlook-sso/\n\u251c\u2500\u2500 composer.json\n\u251c\u2500\u2500 composer.lock\n\u251c\u2500\u2500 config.php\n\u251c\u2500\u2500 public\n| \u2514\u2500\u2500 index.php\n\u2514\u2500\u2500 vendor\n \u251c\u2500\u2500 autoload.php\n \u251c\u2500\u2500 composer\n \u2514\u2500\u2500 firebase\nconf/seahub_settings.py to enable this feature.ENABLE_REMOTE_USER_AUTHENTICATION = True\n\n# Optional, HTTP header, which is configured in your web server conf file,\n# used for Seafile to get user's unique id, default value is 'HTTP_REMOTE_USER'.\nREMOTE_USER_HEADER = 'HTTP_REMOTE_USER'\n\n# Optional, when the value of HTTP_REMOTE_USER is not a valid email address,\n# Seafile will build an email-like unique id from the value of 'REMOTE_USER_HEADER'\n# and this domain, e.g. 
user1@example.com.\nREMOTE_USER_DOMAIN = 'example.com'\n\n# Optional, whether to create new user in Seafile system, default value is True.\n# If this setting is disabled, users that don't already exist in the Seafile DB cannot log in.\n# The admin has to first import the users from external systems like LDAP.\nREMOTE_USER_CREATE_UNKNOWN_USER = True\n\n# Optional, whether to activate new user in Seafile system, default value is True.\n# If this setting is disabled, users will be unable to log in by default.\n# The administrator needs to manually activate such users.\nREMOTE_USER_ACTIVATE_USER_AFTER_CREATION = True\n\n# Optional, map user attribute in HTTP header and Seafile's user attribute.\nREMOTE_USER_ATTRIBUTE_MAP = {\n 'HTTP_DISPLAYNAME': 'name',\n 'HTTP_MAIL': 'contact_email',\n\n # for user info\n \"HTTP_GIVENNAME\": 'givenname',\n \"HTTP_SN\": 'surname',\n \"HTTP_ORGANIZATION\": 'institution',\n\n # for user role\n 'HTTP_Shibboleth-affiliation': 'affiliation',\n}\n\n# Map affiliation to user role. Though the config name is SHIBBOLETH_AFFILIATION_ROLE_MAP,\n# it is not restricted to Shibboleth\nSHIBBOLETH_AFFILIATION_ROLE_MAP = {\n 'employee@uni-mainz.de': 'staff',\n 'member@uni-mainz.de': 'staff',\n 'student@uni-mainz.de': 'student',\n 'employee@hu-berlin.de': 'guest',\n 'patterns': (\n ('*@hu-berlin.de', 'guest1'),\n ('*@*.de', 'guest2'),\n ('*', 'guest'),\n ),\n}\nhttps://your-seafile-domain/sso. Only this URL needs to be configured under Shibboleth protection. All other URLs don't go through the Shibboleth module. The overall workflow for a user to log in with Shibboleth is as follows:
https://your-seafile-domain/sso.https://your-seafile-domain/sso.HTTP_REMOTE_USER header) and brings the user to her/his home page.https://your-seafile-domain/sso needs to be directed to Apache.
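The REMOTE_USER_HEADER / REMOTE_USER_DOMAIN settings above determine the unique id Seafile derives from the header set by Apache. A minimal sketch of that derivation in Python (the helper name and the simple '@' check are illustrative assumptions, not Seafile's actual code):

```python
def build_unique_id(remote_user: str, domain: str = 'example.com') -> str:
    # If the header value already looks like an email address, use it as-is;
    # otherwise append REMOTE_USER_DOMAIN to build an email-like unique id.
    # (assumption: a plain '@' check stands in for real address validation)
    if '@' in remote_user:
        return remote_user
    return f'{remote_user}@{domain}'

print(build_unique_id('user1'))           # → user1@example.com
print(build_unique_id('u@corp.example'))  # → u@corp.example
```

With REMOTE_USER_DOMAIN set, a bare account name like `user1` thus becomes `user1@example.com`, while values that are already email-like pass through unchanged.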
"},{"location":"deploy/shibboleth_authentication/#install-and-configure-shibboleth-service-provider","title":"Install and Configure Shibboleth Service Provider","text":"
"},{"location":"deploy/shibboleth_authentication/#install-and-configure-shibboleth","title":"Install and Configure Shibboleth","text":"<IfModule mod_ssl.c>\n <VirtualHost _default_:443>\n ServerName your-seafile-domain\n DocumentRoot /var/www\n Alias /media /opt/seafile/seafile-server-latest/seahub/media\n\n ErrorLog ${APACHE_LOG_DIR}/seahub.error.log\n CustomLog ${APACHE_LOG_DIR}/seahub.access.log combined\n\n SSLEngine on\n SSLCertificateFile /path/to/ssl-cert.pem\n SSLCertificateKeyFile /path/to/ssl-key.pem\n\n <Location /Shibboleth.sso>\n SetHandler shib\n AuthType shibboleth\n ShibRequestSetting requireSession 1\n Require valid-user\n </Location>\n\n <Location /sso>\n SetHandler shib\n AuthType shibboleth\n ShibUseHeaders On\n ShibRequestSetting requireSession 1\n Require valid-user\n </Location>\n\n RewriteEngine On\n <Location /media>\n Require all granted\n </Location>\n\n # seafile fileserver\n ProxyPass /seafhttp http://127.0.0.1:8082\n ProxyPassReverse /seafhttp http://127.0.0.1:8082\n RewriteRule ^/seafhttp - [QSA,L]\n\n # seahub\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n\n # for http\n # RequestHeader set REMOTE_USER %{REMOTE_USER}e\n # for https\n RequestHeader set REMOTE_USER %{REMOTE_USER}s\n </VirtualHost>\n</IfModule>\n/etc/shibboleth/shibboleth2.xml and change some property. After you have done all the followings, don't forget to restart Shibboleth(SP)ApplicationDefaults element","text":"entityID and REMOTE_USER property:<!-- The ApplicationDefaults element is where most of Shibboleth's SAML bits are defined. -->\n<ApplicationDefaults entityID=\"https://your-seafile-domain/sso\"\n REMOTE_USER=\"mail\"\n cipherSuites=\"DEFAULT:!EXP:!LOW:!aNULL:!eNULL:!DES:!IDEA:!SEED:!RC4:!3DES:!kRSA:!SSLv2:!SSLv3:!TLSv1:!TLSv1.1\">\nREMOTE_USER environment variable. 
So you should modify your SP's shibboleth2.xml config file, so that Shibboleth translates your desired attribute into the REMOTE_USER environment variable.eppn, and mail. eppn stands for \"Edu Person Principal Name\". It is usually the UserPrincipalName attribute in Active Directory. It's not necessarily a valid email address. mail is the user's email address. You should set REMOTE_USER to either one of these attributes.SSO element","text":"entityID property:
"},{"location":"deploy/shibboleth_authentication/#metadataprovider-element","title":"<!--\nConfigures SSO for a default IdP. To properly allow for >1 IdP, remove\nentityID property and adjust discoveryURL to point to discovery service.\nYou can also override entityID on /Login query string, or in RequestMap/htaccess.\n-->\n<SSO entityID=\"https://your-IdP-domain\">\n <!--discoveryProtocol=\"SAMLDS\" discoveryURL=\"https://wayf.ukfederation.org.uk/DS\"-->\n SAML2\n</SSO>\nMetadataProvider element","text":"url and backingFilePath property:
"},{"location":"deploy/shibboleth_authentication/#attribute-mapxml","title":"attribute-map.xml","text":"<!-- Example of remotely supplied batch of signed metadata. -->\n<MetadataProvider type=\"XML\" validate=\"true\"\n url=\"http://your-IdP-metadata-url\"\n backingFilePath=\"your-IdP-metadata.xml\" maxRefreshDelay=\"7200\">\n <MetadataFilter type=\"RequireValidUntil\" maxValidityInterval=\"2419200\"/>\n <MetadataFilter type=\"Signature\" certificate=\"fedsigner.pem\" verifyBackup=\"false\"/>\n/etc/shibboleth/attribute-map.xml and change some property. After you have done all the followings, don't forget to restart Shibboleth(SP)Attribute element","text":"
"},{"location":"deploy/shibboleth_authentication/#upload-shibbolethsps-metadata","title":"Upload Shibboleth(SP)'s metadata","text":"<!-- Older LDAP-defined attributes (SAML 2.0 names followed by SAML 1 names)... -->\n<Attribute name=\"urn:oid:2.16.840.1.113730.3.1.241\" id=\"displayName\"/>\n<Attribute name=\"urn:oid:0.9.2342.19200300.100.1.3\" id=\"mail\"/>\n\n<Attribute name=\"urn:mace:dir:attribute-def:displayName\" id=\"displayName\"/>\n<Attribute name=\"urn:mace:dir:attribute-def:mail\" id=\"mail\"/>\nENABLE_SHIB_LOGIN = True\nSHIBBOLETH_USER_HEADER = 'HTTP_REMOTE_USER'\n# basic user attributes\nSHIBBOLETH_ATTRIBUTE_MAP = {\n \"HTTP_DISPLAYNAME\": (False, \"display_name\"),\n \"HTTP_MAIL\": (False, \"contact_email\"),\n}\nEXTRA_MIDDLEWARE = (\n 'shibboleth.middleware.ShibbolethRemoteUserMiddleware',\n)\nEXTRA_AUTHENTICATION_BACKENDS = (\n 'shibboleth.backends.ShibbolethRemoteUserBackend',\n)\n
SHIBBOLETH_ATTRIBUTE_MAP = {\n \"HTTP_GIVENNAME\": (False, \"givenname\"),\n \"HTTP_SN\": (False, \"surname\"),\n \"HTTP_MAIL\": (False, \"contact_email\"),\n \"HTTP_ORGANIZATION\": (False, \"institution\"),\n}\nSHIB_ACTIVATE_AFTER_CREATION (defaults to True) which controls the user status after a Shibboleth connection. If this option is set to False, users will be inactive after connection, and system admins will be notified by email to activate that account.employee@uni-mainz.de;member@uni-mainz.de;faculty@uni-mainz.de;staff@uni-mainz.de.SHIBBOLETH_ATTRIBUTE_MAP above and add the Shibboleth-affiliation field; you may need to change Shibboleth-affiliation according to your Shibboleth SP attributes.SHIBBOLETH_ATTRIBUTE_MAP = {\n \"HTTP_GIVENNAME\": (False, \"givenname\"),\n \"HTTP_SN\": (False, \"surname\"),\n \"HTTP_MAIL\": (False, \"contact_email\"),\n \"HTTP_ORGANIZATION\": (False, \"institution\"),\n \"HTTP_Shibboleth-affiliation\": (False, \"affiliation\"),\n}\nSHIBBOLETH_AFFILIATION_ROLE_MAP = {\n 'employee@uni-mainz.de': 'staff',\n 'member@uni-mainz.de': 'staff',\n 'student@uni-mainz.de': 'student',\n 'employee@hu-berlin.de': 'guest',\n 'patterns': (\n ('*@hu-berlin.de', 'guest1'),\n ('*@*.de', 'guest2'),\n ('*', 'guest'),\n ),\n}\n./seahub.sh restart), you can then test the Shibboleth login workflow.seahub_settings.py","text":"
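The affiliation map shown above mixes exact entries with a 'patterns' tuple of wildcards. A hedged sketch of how such a lookup can work, assuming exact entries are consulted before 'patterns' and that the first matching wildcard wins (the helper is illustrative, not Seafile's internal code):

```python
from fnmatch import fnmatch

SHIBBOLETH_AFFILIATION_ROLE_MAP = {
    'employee@uni-mainz.de': 'staff',
    'student@uni-mainz.de': 'student',
    'patterns': (
        ('*@hu-berlin.de', 'guest1'),
        ('*@*.de', 'guest2'),
        ('*', 'guest'),
    ),
}

def role_for(affiliation: str, role_map: dict) -> str:
    # exact entries take precedence over wildcard patterns
    if affiliation in role_map and affiliation != 'patterns':
        return role_map[affiliation]
    for pattern, role in role_map.get('patterns', ()):
        if fnmatch(affiliation, pattern):
            return role  # first matching pattern wins
    return 'default'

print(role_for('employee@uni-mainz.de', SHIBBOLETH_AFFILIATION_ROLE_MAP))  # → staff
print(role_for('alice@hu-berlin.de', SHIBBOLETH_AFFILIATION_ROLE_MAP))     # → guest1
print(role_for('bob@tu-dresden.de', SHIBBOLETH_AFFILIATION_ROLE_MAP))      # → guest2
```

Ordering the patterns from most to least specific matters: with `('*', 'guest')` first, every affiliation would fall into the catch-all role.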
"},{"location":"deploy/shibboleth_authentication/#change-seafiles-code","title":"Change Seafile's code","text":"DEBUG = True\nseafile-server-latest/seahub/thirdpart/shibboleth/middleware.py assert False\nif not username:\n assert False\n#Locate the remote user header.\n# import pprint; pprint.pprint(request.META)\ntry:\n username = request.META[SHIB_USER_HEADER]\nexcept KeyError:\n assert False\n # If specified header doesn't exist then return (leaving\n # request.user set to AnonymousUser by the\n # AuthenticationMiddleware).\n return\n\nif not username:\n assert False\n\np_id = ccnet_api.get_primary_id(username)\nif p_id is not None:\n username = p_id\n
"},{"location":"deploy/start_seafile_at_system_bootup/","title":"Start Seafile at System Bootup","text":""},{"location":"deploy/start_seafile_at_system_bootup/#for-systems-running-systemd-and-python-virtual-environments","title":"For systems running systemd and python virtual environments","text":"
sudo vim /opt/seafile/run_with_venv.sh\n#!/bin/bash\n# Activate the python virtual environment (venv) before starting one of the seafile scripts\n\ndir_name=\"$(dirname $0)\"\nsource \"${dir_name}/python-venv/bin/activate\"\nscript=\"$1\"\nshift 1\n\necho \"${dir_name}/seafile-server-latest/${script}\" \"$@\"\n\"${dir_name}/seafile-server-latest/${script}\" \"$@\"\n
"},{"location":"deploy/start_seafile_at_system_bootup/#seafile-component","title":"Seafile component","text":"sudo chmod 755 /opt/seafile/run_with_venv.sh\nsudo vim /etc/systemd/system/seafile.service\n
"},{"location":"deploy/start_seafile_at_system_bootup/#seahub-component","title":"Seahub component","text":"[Unit]\nDescription=Seafile\n# add mysql.service or postgresql.service depending on your database to the line below\nAfter=network.target\n\n[Service]\nType=forking\nExecStart=bash ${seafile_dir}/run_with_venv.sh seafile.sh start\nExecStop=bash ${seafile_dir}/seafile-server-latest/seafile.sh stop\nLimitNOFILE=infinity\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\nsudo vim /etc/systemd/system/seahub.service\n
"},{"location":"deploy/start_seafile_at_system_bootup/#for-systems-running-systemd-without-python-virtual-environment","title":"For systems running systemd without python virtual environment","text":"[Unit]\nDescription=Seafile hub\nAfter=network.target seafile.service\n\n[Service]\nType=forking\nExecStart=bash ${seafile_dir}/run_with_venv.sh seahub.sh start\nExecStop=bash ${seafile_dir}/seafile-server-latest/seahub.sh stop\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n
sudo vim /etc/systemd/system/seafile.service\n
"},{"location":"deploy/start_seafile_at_system_bootup/#seahub-component_1","title":"Seahub component","text":"[Unit]\nDescription=Seafile\n# add mysql.service or postgresql.service depending on your database to the line below\nAfter=network.target\n\n[Service]\nType=forking\nExecStart=${seafile_dir}/seafile-server-latest/seafile.sh start\nExecStop=${seafile_dir}/seafile-server-latest/seafile.sh stop\nLimitNOFILE=infinity\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\nsudo vim /etc/systemd/system/seahub.service\n
"},{"location":"deploy/start_seafile_at_system_bootup/#seafile-cli-client-optional","title":"Seafile cli client (optional)","text":"[Unit]\nDescription=Seafile hub\nAfter=network.target seafile.service\n\n[Service]\nType=forking\nExecStart=${seafile_dir}/seafile-server-latest/seahub.sh start\nExecStop=${seafile_dir}/seafile-server-latest/seahub.sh stop\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\nsudo vim /etc/systemd/system/seafile-client.service\n
"},{"location":"deploy/start_seafile_at_system_bootup/#enable-service-start-on-system-boot","title":"Enable service start on system boot","text":"[Unit]\nDescription=Seafile client\n# Uncomment the next line you are running seafile client on the same computer as server\n# After=seafile.service\n# Or the next one in other case\n# After=network.target\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/seaf-cli start\nExecStop=/usr/bin/seaf-cli stop\nRemainAfterExit=yes\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"deploy/using_fuse/","title":"Seafile","text":""},{"location":"deploy/using_fuse/#using-fuse","title":"Using Fuse","text":"sudo systemctl enable seafile.service\nsudo systemctl enable seahub.service\nsudo systemctl enable seafile-client.service # optional\nSeaf-fuse is an implementation of the [http://fuse.sourceforge.net FUSE] virtual filesystem. In a word, it mounts all the seafile files to a folder (which is called the '''mount point'''), so that you can access all the files managed by seafile server, just as you access a normal folder on your server./data/seafile-fuse.
"},{"location":"deploy/using_fuse/#start-seaf-fuse-with-the-script","title":"Start seaf-fuse with the script","text":"mkdir -p /data/seafile-fuse\n./seafile.sh start.
"},{"location":"deploy/using_fuse/#stop-seaf-fuse","title":"Stop seaf-fuse","text":"./seaf-fuse.sh start /data/seafile-fuse\n
"},{"location":"deploy/using_fuse/#contents-of-the-mounted-folder","title":"Contents of the mounted folder","text":""},{"location":"deploy/using_fuse/#the-top-level-folder","title":"The top level folder","text":"./seaf-fuse.sh stop\n/data/seafile-fuse.$ ls -lhp /data/seafile-fuse\n\ndrwxr-xr-x 2 root root 4.0K Jan 1 1970 abc@abc.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 1970 foo@foo.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 1970 plus@plus.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 1970 sharp@sharp.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 1970 test@test.com/\n
"},{"location":"deploy/using_fuse/#the-folder-for-each-user","title":"The folder for each user","text":"$ ls -lhp /data/seafile-fuse/abc@abc.com\n\ndrwxr-xr-x 2 root root 924 Jan 1 1970 5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\ndrwxr-xr-x 2 root root 1.6K Jan 1 1970 a09ab9fc-7bd0-49f1-929d-6abeb8491397_My Notes/\n
"},{"location":"deploy/using_fuse/#if-you-get-a-permission-denied-error","title":"If you get a \"Permission denied\" error","text":"$ ls -lhp /data/seafile-fuse/abc@abc.com/5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\n\n-rw-r--r-- 1 root root 501K Jan 1 1970 image.png\n-rw-r--r-- 1 root root 501K Jan 1 1970 sample.jpng\n./seaf-fuse.sh start, most likely you are not in the \"fuse group\". You should:
sudo usermod -a -G fuse <your-user-name>\n
"},{"location":"deploy/using_logrotate/","title":"Set up logrotate for server","text":""},{"location":"deploy/using_logrotate/#how-it-works","title":"How it works","text":"./seaf-fuse.sh start <path> again.SIGUR1 signal./etc/logrotate.d//opt/seafile/logs/seafile.log and your seaf-server's pidfile is setup to /opt/seafile/pids/seaf-server.pid:/opt/seafile/logs/seafile.log\n/opt/seafile/logs/seahub.log\n/opt/seafile/logs/seafdav.log\n/opt/seafile/logs/fileserver-access.log\n/opt/seafile/logs/fileserver-error.log\n/opt/seafile/logs/fileserver.log\n/opt/seafile/logs/file_updates_sender.log\n/opt/seafile/logs/repo_old_file_auto_del_scan.log\n/opt/seafile/logs/seahub_email_sender.log\n/opt/seafile/logs/index.log\n{\n daily\n missingok\n rotate 7\n # compress\n # delaycompress\n dateext\n dateformat .%Y-%m-%d\n notifempty\n # create 644 root root\n sharedscripts\n postrotate\n if [ -f /opt/seafile/pids/seaf-server.pid ]; then\n kill -USR1 `cat /opt/seafile/pids/seaf-server.pid`\n fi\n\n if [ -f /opt/seafile/pids/fileserver.pid ]; then\n kill -USR1 `cat /opt/seafile/pids/fileserver.pid`\n fi\n\n if [ -f /opt/seafile/pids/seahub.pid ]; then\n kill -HUP `cat /opt/seafile/pids/seahub.pid`\n fi\n\n if [ -f /opt/seafile/pids/seafdav.pid ]; then\n kill -HUP `cat /opt/seafile/pids/seafdav.pid`\n fi\n\n find /opt/seafile/logs/ -mtime +7 -name \"*.log*\" -exec rm -f {} \\;\n endscript\n}\n/etc/logrotate.d/seafile.
# Debian 10\nsudo apt-get update\nsudo apt-get install python3 python3-setuptools python3-pip default-libmysqlclient-dev -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.4.3 \\\n django-pylibmc django-simple-captcha python3-ldap mysqlclient\n# Ubuntu 18.04\nsudo apt-get update\nsudo apt-get install python3 python3-setuptools python3-pip -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.4.3 \\\n django-pylibmc django-simple-captcha python3-ldap\n# Ubuntu 20.04\nsudo apt-get update\nsudo apt-get install python3 python3-setuptools python3-pip libmysqlclient-dev memcached libmemcached-dev -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.4.3 \\\n django-pylibmc django-simple-captcha python3-ldap mysqlclient\n# CentOS 8\nsudo yum install python3 python3-setuptools python3-pip python3-devel mysql-devel gcc -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.4.3 \\\n django-pylibmc django-simple-captcha python3-ldap mysqlclient\n# Ubuntu 20.04 (almost the same for Ubuntu 18.04 and Debian 10)\nsudo apt-get update\nsudo apt-get install -y python3 python3-setuptools python3-pip libmysqlclient-dev\nsudo apt-get install -y memcached libmemcached-dev\n\nsudo pip3 install --timeout=3600 django==3.2.* Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.4.3 \\\n django-pylibmc django-simple-captcha python3-ldap mysqlclient pycryptodome==3.12.0 cffi==1.14.0 lxml\n# Ubuntu 22.04 (almost the same for Ubuntu 20.04 and Debian 11, Debian 10)\nsudo apt-get update\nsudo apt-get install -y python3 python3-setuptools python3-pip libmysqlclient-dev\nsudo apt-get install -y memcached libmemcached-dev\n\nsudo pip3 install --timeout=3600 django==3.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==1.4.44 \\\n psd-tools django-pylibmc django_simple_captcha==0.5.20 
djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml\n# Ubuntu 22.04 (almost the same for Ubuntu 20.04 and Debian 11, Debian 10)\nsudo apt-get update\nsudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmysqlclient-dev ldap-utils libldap2-dev\nsudo apt-get install -y memcached libmemcached-dev\n\nsudo pip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \\\n psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml python-ldap==3.4.3\n# Debian 12\nsudo apt-get update\nsudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmariadb-dev-compat ldap-utils libldap2-dev libsasl2-dev python3.11-venv\nsudo apt-get install -y memcached libmemcached-dev\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the virtual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* pymysql pillow==10.0.* pylibmc captcha==0.4 markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 psd-tools django-pylibmc django_simple_captcha==0.5.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml python-ldap==3.4.3\n
"},{"location":"deploy/using_mysql/#creating-the-program-directory","title":"Creating the program directory","text":"# Ubuntu 24.04\nsudo apt-get update\nsudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmysqlclient-dev ldap-utils libldap2-dev python3.12-venv\nsudo apt-get install -y memcached libmemcached-dev\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the vitual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \\\n psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.16.0 lxml python-ldap==3.4.3\n/opt/seafile. Create this directory and change into it:sudo mkdir /opt/seafile\ncd /opt/seafile\n/opt/seafile is assumed for the rest of this manual. If you decide to put Seafile in another directory, modify the commands accordingly.sudo adduser seafile\nsudo chown -R seafile: /opt/seafile\n
"},{"location":"deploy/using_mysql/#downloading-the-install-package","title":"Downloading the install package","text":"su seafile\ntar xf seafile-server_8.0.4_x86-64.tar.gz\n
"},{"location":"deploy/using_mysql/#setting-up-seafile-ce","title":"Setting up Seafile CE","text":"$ tree -L 2\n.\n\u251c\u2500\u2500 seafile-server-8.0.4\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 check_init_admin.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 reset-admin.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 runtime\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fsck.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fuse.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-gc.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sql\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 upgrade\n\u2514\u2500\u2500 seafile-server_8.0.4_x86-64.tar.gz\n
# For installations using python virtual environment, activate it if it isn't already active\nsource python-venv/bin/activate\n\ncd seafile-server-8.0.4\n./setup-seafile-mysql.sh\n$ tree /opt/seafile -L 2\nseafile\n\u251c\u2500\u2500 ccnet\n\u251c\u2500\u2500 conf\n\u2502 \u2514\u2500\u2500 ccnet.conf\n\u2502 \u2514\u2500\u2500 gunicorn.conf.py\n\u2502 \u2514\u2500\u2500 seafdav.conf\n\u2502 \u2514\u2500\u2500 seafile.conf\n\u2502 \u2514\u2500\u2500 seahub_settings.py\n\u251c\u2500\u2500 seafile-data\n\u2502 \u2514\u2500\u2500 library-template\n\u251c\u2500\u2500 seafile-server-8.0.4\n\u2502 \u2514\u2500\u2500 check_init_admin.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 reset-admin.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 runtime\n\u2502 \u2514\u2500\u2500 seaf-fsck.sh\n\u2502 \u2514\u2500\u2500 seaf-gc.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub.sh\n\u2502 \u2514\u2500\u2500 setup-seafile-mysql.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.sh\n\u2502 \u2514\u2500\u2500 sql\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 upgrade\n\u251c\u2500\u2500 seafile-server-latest -> seafile-server-8.0.4\n\u251c\u2500\u2500 seahub-data\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 avatars\nseafile-server-latest is a symbolic link to the current Seafile Server folder. When you later upgrade to a new version, the upgrade scripts update this link to point to the latest Seafile Server folder.ccnet_db / seafile_db / seahub_db for ccnet/seafile/seahub respectively, and a MySQL user \"seafile\" to access these databases run the following SQL queries:
"},{"location":"deploy/using_mysql/#setup-memory-cache","title":"Setup Memory Cache","text":"create database `ccnet_db` character set = 'utf8';\ncreate database `seafile_db` character set = 'utf8';\ncreate database `seahub_db` character set = 'utf8';\n\ncreate user 'seafile'@'localhost' identified by 'seafile';\n\nGRANT ALL PRIVILEGES ON `ccnet_db`.* to `seafile`@localhost;\nGRANT ALL PRIVILEGES ON `seafile_db`.* to `seafile`@localhost;\nGRANT ALL PRIVILEGES ON `seahub_db`.* to `seafile`@localhost;\n# on Debian/Ubuntu 18.04+\napt-get install memcached libmemcached-dev -y\npip3 install --timeout=3600 pylibmc django-pylibmc\n\nsystemctl enable --now memcached\nseahub_settings.py.
"},{"location":"deploy/using_mysql/#use-redis","title":"Use Redis","text":"CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '127.0.0.1:11211',\n },\n}\nseahub_settings.py./opt/seafile/conf:
"},{"location":"deploy/using_mysql/#starting-seafile-server","title":"Starting Seafile Server","text":"SERVICE_URL (i.e., SERVICE_URL = 'http://1.2.3.4:8000/').SERVICE_URL (i.e., SERVICE_URL = http://1.2.3.4:8000/)./opt/seafile/seafile-server-latest:# For installations using python virtual environment, activate it if it isn't already active\nsource python-venv/bin/activate\n\n./seafile.sh start # starts seaf-server\n./seahub.sh start # starts seahub\npgrep to check if seafile/seahub processes are still running:pgrep -f seafile-controller # checks seafile processes\npgrep -f \"seahub\" # checks seahub process\npkill to kill the processes:
"},{"location":"deploy/using_mysql/#stopping-and-restarting-seafile-and-seahub","title":"Stopping and Restarting Seafile and Seahub","text":""},{"location":"deploy/using_mysql/#stopping","title":"Stopping","text":"pkill -f seafile-controller\npkill -f \"seahub\"\n
"},{"location":"deploy/using_mysql/#restarting","title":"Restarting","text":"./seahub.sh stop # stops seahub\n./seafile.sh stop # stops seaf-server\n
"},{"location":"deploy/using_mysql/#enabling-https","title":"Enabling HTTPS","text":"# For installations using python virtual environment, activate it if it isn't already active\nsource python-venv/bin/activate\n\n./seafile.sh restart\n./seahub.sh restart\n
"},{"location":"deploy/using_syslog/","title":"Using syslog","text":""},{"location":"deploy/using_syslog/#configure-seafile-to-use-syslog","title":"Configure Seafile to Use Syslog","text":"general section in seafile.conf:[general]\nenable_syslog = true\n/var/log/syslog:May 10 23:45:19 ubuntu seafile-controller[16385]: seafile-controller.c(154): starting ccnet-server ...\nMay 10 23:45:19 ubuntu seafile-controller[16385]: seafile-controller.c(73): spawn_process: ccnet-server -F /home/plt/haiwen/conf -c /home/plt/haiwen/ccnet -f /home/plt/haiwen/logs/ccnet.log -d -P /home/plt/haiwen/pids/ccnet.pid\n
"},{"location":"deploy/using_syslog/#configure-syslog-for-seafevents-professional-edition-only","title":"Configure Syslog For Seafevents (Professional Edition only)","text":"May 12 01:00:51 ubuntu seaf-server[21552]: ../common/mq-mgr.c(60): [mq client] mq cilent is started\nMay 12 01:00:51 ubuntu seaf-server[21552]: ../common/mq-mgr.c(106): [mq mgr] publish to hearbeat mq: seaf_server.heartbeat\nseafevents.conf:[Syslog]\nenabled = true\n/var/log/syslog
"},{"location":"deploy/using_syslog/#configure-syslog-for-seahub","title":"Configure Syslog For Seahub","text":"May 12 01:00:52 ubuntu seafevents[21542]: [seafevents] database: mysql, name: seahub-pro\nMay 12 01:00:52 ubuntu seafevents[21542]: seafes enabled: True\nMay 12 01:00:52 ubuntu seafevents[21542]: seafes dir: /home/plt/pro-haiwen/seafile-pro-server-5.1.4/pro/python/seafes\nseahub_settings.py:
"},{"location":"deploy/video_thumbnails/","title":"Video thumbnails","text":""},{"location":"deploy/video_thumbnails/#install-ffmpeg-package","title":"Install ffmpeg package","text":"LOGGING = {\n 'version': 1,\n 'disable_existing_loggers': True,\n 'formatters': {\n 'verbose': {\n 'format': '%(process)-5d %(thread)d %(name)-50s %(levelname)-8s %(message)s'\n },\n 'standard': {\n 'format': '%(asctime)s [%(levelname)s] %(name)s:%(lineno)s %(funcName)s %(message)s'\n },\n 'simple': {\n 'format': '[%(asctime)s] %(name)s %(levelname)s %(message)s',\n 'datefmt': '%d/%b/%Y %H:%M:%S'\n },\n },\n 'filters': {\n 'require_debug_false': {\n '()': 'django.utils.log.RequireDebugFalse',\n },\n 'require_debug_true': {\n '()': 'django.utils.log.RequireDebugTrue',\n },\n },\n 'handlers': {\n 'console': {\n 'filters': ['require_debug_true'],\n 'class': 'logging.StreamHandler',\n 'formatter': 'simple'\n },\n 'syslog': {\n 'class': 'logging.handlers.SysLogHandler',\n 'address': '/dev/log',\n 'formatter': 'standard'\n },\n },\n 'loggers': {\n # root logger\n \u00a0 \u00a0 \u00a0 \u00a0# All logs printed by Seahub and any third party libraries will be handled by this logger.\n \u00a0 \u00a0 \u00a0 \u00a0'': {\n 'handlers': ['console', 'syslog'],\n 'level': 'INFO', # Logs when log level is higher than info. Level can be any one of DEBUG, INFO, WARNING, ERROR, CRITICAL.\n 'disabled': False\n },\n # This logger recorded logs printed by Django Framework. 
For example, when you see 5xx page error, you should check the logs recorded by this logger.\n 'django.request': {\n 'handlers': ['console', 'syslog'],\n 'level': 'INFO',\n 'propagate': False,\n },\n },\n}\n# Install ffmpeg\nsudo apt-get update && sudo apt-get -y install ffmpeg\n\n# Now we need to install some modules\npip install pillow moviepy\n# We need to activate the epel repos\nyum -y install epel-release\nrpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro\n\n# Then update the repo and install ffmpeg\nyum -y install ffmpeg ffmpeg-devel\n\n# Now we need to install some modules\npip install pillow moviepy\n
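The LOGGING dict in the Seahub syslog section above routes both the root logger and `django.request` to a SysLogHandler. To exercise the same dictConfig structure on a machine without a syslog socket, the syslog handler can be swapped for a plain StreamHandler; a trimmed, illustrative sketch (the handler substitution is an assumption for testing, not part of the production config):

```python
import io
import logging
import logging.config
from contextlib import redirect_stderr  # StreamHandler writes to stderr by default

LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'standard': {
            'format': '%(asctime)s [%(levelname)s] %(name)s:%(lineno)s %(funcName)s %(message)s'
        },
    },
    'handlers': {
        # stand-in for the SysLogHandler so the config can run anywhere
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'standard',
        },
    },
    'loggers': {
        '': {'handlers': ['console'], 'level': 'INFO'},
        'django.request': {'handlers': ['console'], 'level': 'INFO', 'propagate': False},
    },
}

buf = io.StringIO()
with redirect_stderr(buf):
    logging.config.dictConfig(LOGGING)
    logging.getLogger('django.request').error('5xx page error')

print('5xx page error' in buf.getvalue())  # → True
```

Records below the configured `level` (here DEBUG messages) are dropped, which matches the "Logs when log level is higher than info" comment in the full config.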
"},{"location":"deploy/video_thumbnails/#configure-seafile-to-create-thumbnails","title":"Configure Seafile to create thumbnails","text":"# Add backports repo to /etc/apt/sources.list.d/\n# e.g. the following repo works (June 2017)\nsudo echo \"deb http://httpredir.debian.org/debian $(lsb_release -cs)-backports main non-free\" > /etc/apt/sources.list.d/debian-backports.list\n\n# Then update the repo and install ffmpeg\nsudo apt-get update && sudo apt-get -y install ffmpeg\n\n# Now we need to install some modules\npip install pillow moviepy\nseahub_settings.py
"},{"location":"deploy_pro/","title":"Deploy Seafile Pro Edition","text":"# Enable or disable thumbnail for video. ffmpeg and moviepy should be installed first. \n# For details, please refer to https://manual.seafile.com/deploy/video_thumbnails/\n# NOTE: since version 6.1\nENABLE_VIDEO_THUMBNAIL = True\n\n# Use the frame at 5 second as thumbnail\nTHUMBNAIL_VIDEO_FRAME_TIME = 5 \n\n# Absolute filesystem path to the directory that will hold thumbnail files.\nTHUMBNAIL_ROOT = '/haiwen/seahub-data/thumbnail/thumb/'\n
"},{"location":"deploy_pro/#migration-and-upgrading","title":"Migration and Upgrading","text":"
"},{"location":"deploy_pro/#s3openswiftceph-storage-backends","title":"S3/OpenSwift/Ceph Storage Backends","text":"
"},{"location":"deploy_pro/#cluster","title":"Cluster","text":"
"},{"location":"deploy_pro/admin_roles_permissions/","title":"Roles and Permissions Support","text":"
default_admin role with all permissions by default. If you set an administrator to some other admin role, the administrator will only have the permissions you configured to True.seahub_settings.py.
"},{"location":"deploy_pro/change_default_java/","title":"Change default java","text":"ENABLED_ADMIN_ROLE_PERMISSIONS = {\n 'system_admin': {\n 'can_view_system_info': True,\n 'can_config_system': True,\n },\n 'daily_admin': {\n 'can_view_system_info': True,\n 'can_view_statistic': True,\n 'can_manage_library': True,\n 'can_manage_user': True,\n 'can_manage_group': True,\n 'can_view_user_log': True,\n },\n 'audit_admin': {\n 'can_view_system_info': True,\n 'can_view_admin_log': True,\n },\n 'custom_admin': {\n 'can_view_system_info': True,\n 'can_config_system': True,\n 'can_view_statistic': True,\n 'can_manage_library': True,\n 'can_manage_user': True,\n 'can_manage_group': True,\n 'can_view_user_log': True,\n 'can_view_admin_log': True,\n },\n}\njava -version, and check the output.
sudo update-alternatives --config java\nsudo alternatives --config java\njava -version to make sure the change has taken effect.
"},{"location":"deploy_pro/config_seafile_with_ADFS/#prepare-certs-file","title":"Prepare Certs File","text":"
These x.509 certs are used to sign and encrypt elements like NameID and Metadata for SAML. \n\n Then copy these two files to **<seafile-install-path>/seahub-data/certs**. (if the certs folder does not exist, create it.)\n\n2. x.509 cert from IdP (Identity Provider)\n\n 1. Log into the ADFS server and open the ADFS management.\n\n 1. Double click **Service** and choose **Certificates**.\n\n 1. Export the **Token-Signing** certificate:\n\n 1. Right-click the certificate and select **View Certificate**.\n 1. Select the **Details** tab.\n 1. Click **Copy to File** (select **DER encoded binary X.509**).\n\n 1. Convert this certificate to PEM format, rename it to **idp.crt**\n\n 1. Then copy it to **<seafile-install-path>/seahub-data/certs**.\n\n### Prepare IdP Metadata File\n\n1. Open https://adfs-server.adfs.com/federationmetadata/2007-06/federationmetadata.xml\n\n1. Save this xml file, rename it to **idp_federation_metadata.xml**\n\n1. Copy it to **<seafile-install-path>/seahub-data/certs**.\n\n### Install Requirements on Seafile Server\n\n- For Ubuntu 16.04\n### Config Seafile\n\nAdd the following lines to **seahub_settings.py**\n'allow_unknown_attributes': True,\n\n# your entity id, usually your subdomain plus the url to the metadata view\n'entityid': SP_SERVICE_URL + '/saml2/metadata/',\n\n# directory with attribute mapping\n'attribute_map_dir': ATTRIBUTE_MAP_DIR,\n\n# this block states what services we provide\n'service': {\n # we are just a lonely SP\n 'sp' : {\n \"allow_unsolicited\": True,\n 'name': 'Federated Seafile Service',\n 'name_id_format': saml2.saml.NAMEID_FORMAT_EMAILADDRESS,\n 'endpoints': {\n # url and binding to the assetion consumer service view\n # do not change the binding or service name\n 'assertion_consumer_service': [\n (SP_SERVICE_URL + '/saml2/acs/',\n saml2.BINDING_HTTP_POST),\n ],\n # url and binding to the single logout service view\n # do not change the binding or service name\n 'single_logout_service': [\n (SP_SERVICE_URL + 
'/saml2/ls/',\n saml2.BINDING_HTTP_REDIRECT),\n (SP_SERVICE_URL + '/saml2/ls/post',\n saml2.BINDING_HTTP_POST),\n ],\n },\n\n # attributes that this project needs to identify a user\n 'required_attributes': [\"uid\"],\n\n # attributes that may be useful to have but not required\n 'optional_attributes': ['eduPersonAffiliation', ],\n\n # in this section the list of IdPs we talk to is defined\n 'idp': {\n # we do not need a WAYF service since there is\n # only an IdP defined here. This IdP should be\n # present in our metadata\n\n # the keys of this dictionary are entity ids\n 'https://adfs-server.adfs.com/federationmetadata/2007-06/federationmetadata.xml': {\n 'single_sign_on_service': {\n saml2.BINDING_HTTP_REDIRECT: 'https://adfs-server.adfs.com/adfs/ls/idpinitiatedsignon.aspx',\n },\n 'single_logout_service': {\n saml2.BINDING_HTTP_REDIRECT: 'https://adfs-server.adfs.com/adfs/ls/?wa=wsignout1.0',\n },\n },\n },\n },\n},\n\n# where the remote metadata is stored\n'metadata': {\n 'local': [path.join(CERTS_DIR, 'idp_federation_metadata.xml')],\n},\n\n# set to 1 to output debugging information\n'debug': 1,\n\n# Signing\n'key_file': '', \n'cert_file': path.join(CERTS_DIR, 'idp.crt'), # from IdP\n\n# Encryption\n'encryption_keypairs': [{\n 'key_file': path.join(CERTS_DIR, 'sp.key'), # private part\n 'cert_file': path.join(CERTS_DIR, 'sp.crt'), # public part\n}],\n\n'valid_for': 24, # how long is our metadata valid\n
https://demo.seafile.com/saml2/metadata/ in the Federation metadata address.
'name_id_format': saml2.saml.NAMEID_FORMAT_EMAILADDRESS).
"},{"location":"deploy_pro/deploy_clamav_with_seafile/","title":"Deploy ClamAV with Seafile","text":""},{"location":"deploy_pro/deploy_clamav_with_seafile/#use-clamav-with-docker-based-deployment","title":"Use Clamav with Docker based deployment","text":""},{"location":"deploy_pro/deploy_clamav_with_seafile/#add-clamav-to-docker-composeyml","title":"Add Clamav to docker-compose.yml","text":"
"},{"location":"deploy_pro/deploy_clamav_with_seafile/#modify-seafileconf","title":"Modify seafile.conf","text":"services:\n ...\n\n av:\n image: clamav/clamav:latest\n container_name: seafile-clamav\n networks:\n - seafile-net\n
"},{"location":"deploy_pro/deploy_clamav_with_seafile/#restart-docker-container","title":"Restart docker container","text":"[virus_scan]\nscan_command = clamdscan\nvirus_code = 1\nnonvirus_code = 0\nscan_interval = 5\nscan_size_limit = 20\nthreads = 2\ndocker compose down\ndocker compose up -d \napt-get install clamav-daemon clamav-freshclam\n/etc/clamav/clamd.conf,change the following line:
"},{"location":"deploy_pro/deploy_clamav_with_seafile/#start-the-clamav-daemon","title":"Start the clamav-daemon","text":"LocalSocketGroup root\nUser root\nsystemctl start clamav-daemon\n
$ curl https://secure.eicar.org/eicar.com.txt | clamdscan -\n
"},{"location":"deploy_pro/deploy_in_a_cluster/","title":"Deploy in a cluster","text":"stream: Eicar-Test-Signature FOUND\n
"},{"location":"deploy_pro/deploy_in_a_cluster/#preparation","title":"Preparation","text":""},{"location":"deploy_pro/deploy_in_a_cluster/#hardware-database-memory-cache","title":"Hardware, Database, Memory Cache","text":"sudo easy_install pip\nsudo pip install boto\n
"},{"location":"deploy_pro/deploy_in_a_cluster/#configure-a-single-node","title":"Configure a Single Node","text":"sudo pip install setuptools --no-use-wheel --upgrade\n/data/haiwen/ as the top level directory.tar xf seafile-pro-server_8.0.0_x86-64.tar.gz\n
"},{"location":"deploy_pro/deploy_in_a_cluster/#setup-seafile","title":"Setup Seafile","text":"haiwen\n\u251c\u2500\u2500 seafile-license.txt\n\u2514\u2500\u2500 seafile-pro-server-8.0.0/\nseafile.conf[cluster]\nenabled = true\n\n[memcached]\nmemcached_options = --SERVER=192.168.1.134 --POOL-MIN=10 --POOL-MAX=100\n[cluster]\nenabled = true\n\n[memcached]\nmemcached_options = --SERVER=<floating IP address> --POOL-MIN=10 --POOL-MAX=100\n[cluster]\nenabled = true\n\n[redis]\n# your redis server address\nredis_server = 127.0.0.1\n# your redis server port\nredis_port = 6379\n# size of connection pool to redis, default is 100\nmax_connections = 100\nseafile.conf
"},{"location":"deploy_pro/deploy_in_a_cluster/#seahub_settingspy","title":"seahub_settings.py","text":"[cluster]\nhealth_check_port = 12345\n
"},{"location":"deploy_pro/deploy_in_a_cluster/#seafeventsconf","title":"seafevents.conf","text":"AVATAR_FILE_STORAGE = 'seahub.base.database_storage.DatabaseStorage'\nseafevents.conf to disable file indexing service on the local server. The file indexing service should be started on a dedicated background server.[INDEX FILES]\nexternal_es_server = true\n[INDEX FILES] section:[INDEX FILES]\nenabled = true\ninterval = 10m\nhighlight = fvh # This configuration is only available for Seafile 6.3.0 pro and above.\nindex_office_pdf = true\nexternal_es_server = true\nes_host = background.seafile.com\nes_port = 9200\nenable = true should be left unchanged. For versions older than 6.1, es_port was 9500.
"},{"location":"deploy_pro/deploy_in_a_cluster/#backend-storage-settings","title":"Backend Storage Settings","text":"CREATE TABLE `avatar_uploaded` (`filename` TEXT NOT NULL, `filename_md5` CHAR(32) NOT NULL PRIMARY KEY, `data` MEDIUMTEXT NOT NULL, `size` INTEGER NOT NULL, `mtime` datetime NOT NULL);\n
"},{"location":"deploy_pro/deploy_in_a_cluster/#setup-nginxapache-and-http","title":"Setup Nginx/Apache and HTTP","text":"
"},{"location":"deploy_pro/deploy_in_a_cluster/#run-and-test-the-single-node","title":"Run and Test the Single Node","text":"cd /data/haiwen/seafile-server-latest\n./seafile.sh start\n./seahub.sh start\nhttp://ip-address-of-this-node:80 and login with the admin account./data/haiwen, compress this whole directory into a tarball and copy the tarball to all other Seafile server machines. You can simply uncompress the tarball and use it../seafile.sh and ./seahub.sh to start Seafile server.
"},{"location":"deploy_pro/deploy_in_a_cluster/#start-seafile-service-on-boot","title":"Start Seafile Service on boot","text":"export CLUSTER_MODE=backend\n./seafile.sh start\n./seafile-background-tasks.sh start\n
"},{"location":"deploy_pro/deploy_in_a_cluster/#load-balancer-setting","title":"Load Balancer Setting","text":"/etc/haproxy/haproxy.cfg:11001)
"},{"location":"deploy_pro/deploy_in_a_cluster/#see-how-it-runs","title":"See how it runs","text":"global\n log 127.0.0.1 local1 notice\n maxconn 4096\n user haproxy\n group haproxy\n\ndefaults\n log global\n mode http\n retries 3\n maxconn 2000\n timeout connect 10000\n timeout client 300000\n timeout server 300000\n\nlisten seafile 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n cookie SERVERID insert indirect nocache\n server seafileserver01 192.168.1.165:80 check port 11001 cookie seafileserver01\n server seafileserver02 192.168.1.200:80 check port 11001 cookie seafileserver02\n[cluster]\nenabled = true\nmemcached_options = --SERVER=<IP of memcached node> --POOL-MIN=10 --POOL-MAX=100\nenabled option will prevent the start of background tasks by ./seafile.sh start in the front-end node. The tasks should be explicitly started by ./seafile-background-tasks.sh start at the back-end node.AVATAR_FILE_STORAGE = 'seahub.base.database_storage.DatabaseStorage'\n[INDEX FILES]\nenabled = true\ninterval = 10m\nhighlight = fvh # This configuration is for improving searching speed\nexternal_es_server = true\nes_host = <IP of background node>\nes_port = 9200\n[INDEX FILES] section is needed to let the front-end node know the file search feature is enabled. The external_es_server = true is to tell the front-end node not to start the ElasticSearch but to use the ElasticSearch server at the back-end node.
"},{"location":"deploy_pro/details_about_file_search/#enable-full-text-search-for-officepdf-files","title":"Enable full text search for Office/PDF files","text":"[INDEX FILES]\n## must be \"true\" to enable search\nenabled = true\n\n## The interval the search index is updated. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval=10m\n\n## this is for improving the search speed\nhighlight = fvh \n\n## If true, indexes the contents of office/pdf files while updating search index\n## Note: If you change this option from \"false\" to \"true\", then you need to clear the search index and update the index again.\nindex_office_pdf=false\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch through username and password, you need to configure username and password for the Elasticsearch server\nusername = elastic # username to connect to Elasticsearch\npassword = elastic_password # password to connect to Elasticsearch\n\n## From 9.0.7 pro, Seafile supports connecting to elasticsearch via HTTPS, you need to configure HTTPS for the Elasticsearch server\nscheme = https # The default is http. If the Elasticsearch server is not configured with HTTPS, the scheme and cafile do not need to be configured\ncafile = path/to/cert.pem # The certificate path for user authentication. If the Elasticsearch server does not enable certificate authentication, do not need to be configured\n\n## From version 11.0.5 Pro, you can custom ElasticSearch index names for distinct instances when intergrating multiple Seafile servers to a single ElasticSearch Server.\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\nindex_office_pdf option in seafevents.conf to true. cd /data/haiwen/seafile-pro-server-1.7.0/\n ./seafile.sh restart\n
"},{"location":"deploy_pro/details_about_file_search/#common-problems","title":"Common problems","text":""},{"location":"deploy_pro/details_about_file_search/#how-to-rebuild-the-index-if-something-went-wrong","title":"How to rebuild the index if something went wrong","text":" ./pro/pro.py search --clear\n ./pro/pro.py search --update\n./pro/pro.py search --clear\n./pro/pro.py search --update\n
"},{"location":"deploy_pro/details_about_file_search/#access-the-aws-elasticsearch-service-using-https","title":"Access the AWS elasticsearch service using HTTPS","text":"rm -rf pro-data/search./pro/pro.py search --update
[INDEX FILES]\nenabled = true\ninterval = 10m\nindex_office_pdf=true\nexternal_es_server = true\nes_host = your domain endpoint (for example, https://search-my-domain.us-east-1.es.amazonaws.com)\nes_port = 443\nscheme = https\nusername = master user\npassword = password\nhighlight = fvh\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\nThe version of the elasticsearch Python client cannot be greater than 7.14.0, otherwise the Elasticsearch service cannot be accessed: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/samplecode.html#client-compatibility, https://github.com/elastic/elasticsearch-py/pull/1623.
"},{"location":"deploy_pro/details_about_file_search/#encrypted-files-cannot-be-searched","title":"Encrypted files cannot be searched","text":"cd haiwen/seafile-pro-server-2.0.4\n./pro/pro.py search --update\nseafile-server-latest/pro/elasticsearch/config/jvm.options file:-Xms2g # Minimum available memory\n-Xmx2g # Maximum available memory\n### It is recommended to set the values of the above two configurations to the same size.\n
"},{"location":"deploy_pro/details_about_file_search/#distributed-indexing","title":"Distributed indexing","text":"./seafile.sh restart\n./seahub.sh restart\n$ apt install redis-server\n$ yum install redis\n$ pip install redis\nseafevents.conf on all frontend nodes, add the following config items:[EVENTS PUBLISH]\nmq_type=redis # must be redis\nenabled=true\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password, if not password, do not set this item\nseafevents.conf on the backend node to disable the scheduled indexing task, because the scheduled indexing task and the distributed indexing task conflict.[INDEX FILES]\nenabled=true\n |\n V\nenabled=false \n
"},{"location":"deploy_pro/details_about_file_search/#deploy-distributed-indexing","title":"Deploy distributed indexing","text":"$ ./seafile.sh restart && ./seahub.sh restart\nconf directory from the frontend nodes. The master node and slave nodes do not need to start Seafile, but need to read the configuration files to obtain the necessary information.index-master.conf in the conf directory of the master node, e.g.[DEFAULT]\nmq_type=redis # must be redis\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password, if not password, do not set this item\n./run_index_master.sh [start/stop/restart] in the seafile-server-last directory to control the program to start, stop and restart.index-slave.conf in the conf directory of all slave nodes, e.g.[DEFAULT]\nmq_type=redis # must be redis\nindex_workers=2 # number of threads to create/update indexes, you can increase this value according to your needs\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password, if not password, do not set this item\n./run_index_worker.sh [start/stop/restart] in the seafile-server-last directory to control the program to start, stop and restart.seafile-server-last directory:$ ./pro/pro.py search --clear\n$ ./run_index_master.sh python-env index_op.py --mode resotre_all_repo\nseafile-server-last directory:$ ./run_index_master.sh python-env index_op.py --mode show_all_task\n# Ubuntu 20.04 (on Debian 10/Ubuntu 18.04, it is almost the same)\nsudo apt-get update\nsudo apt-get install -y python3 python3-setuptools python3-pip libmysqlclient-dev\nsudo apt-get install -y memcached libmemcached-dev\nsudo apt-get install -y poppler-utils\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.4.3 \\\n django-pylibmc django-simple-captcha python3-ldap mysqlclient\n# CentOS 8\nsudo yum install python3 python3-setuptools 
python3-pip python3-devel mysql-devel gcc -y\nsudo yum install poppler-utils -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.4.3 \\\n django-pylibmc django-simple-captcha python3-ldap mysqlclient\n# on Ubuntu 20.04 (on Debian 10/Ubuntu 18.04, it is almost the same)\napt-get update\napt-get install -y python3 python3-setuptools python3-pip python3-ldap libmysqlclient-dev\napt-get install -y memcached libmemcached-dev\napt-get install -y poppler-utils\n\npip3 install --timeout=3600 django==3.2.* future mysqlclient pymysql Pillow pylibmc \\ \ncaptcha jinja2 sqlalchemy==1.4.3 psd-tools django-pylibmc django-simple-captcha pycryptodome==3.12.0 cffi==1.14.0 lxml\n# CentOS 8\nsudo yum install python3 python3-setuptools python3-pip python3-devel mysql-devel gcc -y\nsudo yum install poppler-utils -y\n\nsudo pip3 install --timeout=3600 django==3.2.* Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.4.3 \\\n django-pylibmc django-simple-captcha python3-ldap mysqlclient pycryptodome==3.12.0 cffi==1.14.0 lxml\n# on Ubuntu 22.04 (on Ubuntu 20.04/Debian 11/Debian 10, it is almost the same)\napt-get update\napt-get install -y python3 python3-setuptools python3-pip python3-ldap libmysqlclient-dev\napt-get install -y memcached libmemcached-dev\napt-get install -y poppler-utils\n\nsudo pip3 install --timeout=3600 django==3.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==1.4.44 \\\n psd-tools django-pylibmc django_simple_captcha==0.5.20 djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml\n# CentOS 8\nsudo yum install python3 python3-setuptools python3-pip python3-devel mysql-devel gcc -y\nsudo yum install poppler-utils -y\n\nsudo pip3 install --timeout=3600 django==3.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==1.4.44 \\\n psd-tools django-pylibmc 
django_simple_captcha==0.5.20 pycryptodome==3.16.* cffi==1.15.1 lxml\n# on Ubuntu 22.04 (on Ubuntu 20.04/Debian 11/Debian 10, it is almost the same)\napt-get update\napt-get install -y python3 python3-dev python3-setuptools python3-pip python3-ldap libmysqlclient-dev ldap-utils libldap2-dev dnsutils\napt-get install -y memcached libmemcached-dev\napt-get install -y poppler-utils\n\nsudo pip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \\\n psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 python-ldap==3.4.3 lxml\n# CentOS 8\nsudo yum install python3 python3-setuptools python3-pip python3-devel mysql-devel gcc bind-utils -y\nsudo yum install poppler-utils -y\n\nsudo pip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \\\n psd-tools django-pylibmc django_simple_captcha==0.6.* pycryptodome==3.16.* cffi==1.15.1 python-ldap==3.4.3 lxml\n# Debian 12\nsudo apt-get update\nsudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmariadb-dev-compat ldap-utils libldap2-dev libsasl2-dev python3.11-venv\nsudo apt-get install -y memcached libmemcached-dev\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the virtual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* pymysql pillow==10.0.* pylibmc captcha==0.4 markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 psd-tools django-pylibmc 
django_simple_captcha==0.5.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml python-ldap==3.4.3\n
"},{"location":"deploy_pro/download_and_setup_seafile_professional_server/#installing-java-runtime-environment","title":"Installing Java Runtime Environment","text":"# Ubuntu 24.04\nsudo apt-get update\nsudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmysqlclient-dev ldap-utils libldap2-dev python3.12-venv\nsudo apt-get install -y memcached libmemcached-dev\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the vitual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \\\n psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.16.0 lxml python-ldap==3.4.3\n# Debian 10/Debian 11\nsudo apt-get install default-jre -y\n# Ubuntu 16.04/Ubuntu 18.04/Ubuntu 20.04/Ubuntu 22.04\nsudo apt-get install openjdk-8-jre -y\nsudo ln -sf /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java /usr/bin/\n
"},{"location":"deploy_pro/download_and_setup_seafile_professional_server/#creating-the-programm-directory","title":"Creating the programm directory","text":"# CentOS\nsudo yum install java-1.8.0-openjdk -y\n/opt/seafile. Create this directory and change into it:mkdir /opt/seafile\ncd /opt/seafile\n/opt/seafile is assumed for the rest of this manual. If you decide to put Seafile in another directory, some commands need to be modified accordingly.adduser seafile\nchown -R seafile: /opt/seafile\n
"},{"location":"deploy_pro/download_and_setup_seafile_professional_server/#placing-the-seafile-pe-license","title":"Placing the Seafile PE license","text":"su seafile\n/opt/seafile. Make sure that the name is seafile-license.txt. (If the file has a different name or cannot be read, Seafile PE will not start.)
# Debian/Ubuntu\nwget -O 'seafile-pro-server_x.x.x_x86-64_Ubuntu.tar.gz' 'VERSION_SPECIFIC_LINK_FROM_SEAFILE_CUSTOMER_CENTER'\n\n# CentOS\nwget -O 'seafile-pro-server_x.x.x_x86-64_CentOS.tar.gz' 'VERSION_SPECIFIC_LINK_FROM_SEAFILE_CUSTOMER_CENTER'\n# Debian/Ubuntu\ntar xf seafile-pro-server_8.0.4_x86-64_Ubuntu.tar.gz\n# CentOS\ntar xf seafile-pro-server_8.0.4_x86-64_CentOS.tar.gz\n$ tree -L 2 /opt/seafile\n.\n\u251c\u2500\u2500 seafile-license.txt\n\u2514\u2500\u2500 seafile-pro-server-8.0.4\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 check-db-type.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 check_init_admin.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 create-db\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 index_op.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate-repo.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate-repo.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 pro\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 remove-objs.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 remove-objs.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 reset-admin.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 run_index_master.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 run_index_worker.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 runtime\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-backup-cmd.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-backup-cmd.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-encrypt.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fsck.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fuse.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-gc.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-gen-key.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile-background-tasks.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-import.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 
seahub-extra\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sql\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 upgrade\n\u2514\u2500\u2500 seafile-pro-server_8.0.4_x86-64.tar.gz\n
"},{"location":"deploy_pro/download_and_setup_seafile_professional_server/#run-the-setup-script","title":"Run the setup script","text":"seafile-server_8.0.4_x86-86.tar.gz; uncompressing into folder seafile-server-8.0.4seafile-pro-server_8.0.4_x86-86.tar.gz; uncompressing into folder seafile-pro-server-8.0.4logs):
"},{"location":"deploy_pro/download_and_setup_seafile_professional_server/#setup-memory-cache","title":"Setup Memory Cache","text":"$ tree -L 2 /opt/seafile\n.\n\u251c\u2500\u2500 seafile-license.txt # license file\n\u251c\u2500\u2500 ccnet \n\u251c\u2500\u2500 conf # configuration files\n\u2502 \u2514\u2500\u2500 ccnet.conf\n\u2502 \u2514\u2500\u2500 gunicorn.conf.py\n\u2502 \u2514\u2500\u2500 __pycache__\n\u2502 \u2514\u2500\u2500 seafdav.conf\n\u2502 \u2514\u2500\u2500 seafevents.conf\n\u2502 \u2514\u2500\u2500 seafile.conf\n\u2502 \u2514\u2500\u2500 seahub_settings.py\n\u251c\u2500\u2500 logs # log files\n\u251c\u2500\u2500 pids # process id files\n\u251c\u2500\u2500 pro-data # data specific for Seafile PE\n\u251c\u2500\u2500 seafile-data # object database\n\u251c\u2500\u2500 seafile-pro-server-8.0.4\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 check-db-type.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 check_init_admin.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 create-db\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 index_op.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate-repo.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate-repo.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 pro\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 reset-admin.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 run_index_master.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 run_index_worker.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 runtime\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-backup-cmd.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-backup-cmd.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-encrypt.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fsck.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fuse.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-gc.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-gen-key.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 
seafile-background-tasks.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-import.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub-extra\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sql\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 upgrade\n\u251c\u2500\u2500 seafile-server-latest -> seafile-pro-server-8.0.4\n\u251c\u2500\u2500 seahub-data\n \u2514\u2500\u2500 avatars # user avatars\n# on Debian/Ubuntu 18.04+\napt-get install memcached libmemcached-dev -y\npip3 install --timeout=3600 pylibmc django-pylibmc\n\nsystemctl enable --now memcached\nseahub_settings.py.
"},{"location":"deploy_pro/download_and_setup_seafile_professional_server/#use-redis","title":"Use Redis","text":"CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '127.0.0.1:11211',\n },\n}\nseahub_settings.py.
"},{"location":"deploy_pro/download_and_setup_seafile_professional_server/#starting-seafile-server","title":"Starting Seafile Server","text":"/opt/seafile/seafile-server-latest:# For installations using python virtual environment, activate it if it isn't already active\nsource python-venv/bin/activate\n\n./seafile.sh start # Start Seafile service\n./seahub.sh start # Start seahub website, port defaults to 127.0.0.1:8000\nsudo docker pull elasticsearch:7.16.2\nsudo mkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\n
"},{"location":"deploy_pro/download_and_setup_seafile_professional_server/#modifying-seafevents","title":"Modifying seafevents","text":"sudo docker run -d \\\n--name es \\\n-p 9200:9200 \\\n-e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" \\\n-e \"ES_JAVA_OPTS=-Xms2g -Xmx2g\" -e \"xpack.security.enabled=false\" \\\n--restart=always \\\n-v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data \\\n-d elasticsearch:7.16.2\nseafevents.conf:[INDEX FILES]\nexternal_es_server = true # required when ElasticSearch on separate host\nes_host = your elasticsearch server's IP # IP address of ElasticSearch host\n # use 127.0.0.1 if deployed on the same server\nes_port = 9200 # port of ElasticSearch host\ninterval = 10m # frequency of index updates in minutes\nhighlight = fvh # parameter for improving the search performance\n
"},{"location":"deploy_pro/enable_search_and_background_tasks_in_a_cluster/","title":"Enable search and background tasks in a cluster","text":"./seafile.sh restart && ./seahub.sh restart \n
"},{"location":"deploy_pro/enable_search_and_background_tasks_in_a_cluster/#71-80","title":"7.1, 8.0","text":""},{"location":"deploy_pro/enable_search_and_background_tasks_in_a_cluster/#configuring-node-a-the-backend-node","title":"Configuring Node A (the backend node)","text":"sudo apt-get install openjdk-8-jre libreoffice python-uno # or python3-uno for ubuntu 16.04+\nsudo yum install java-1.8.0-openjdk\nsudo yum install libreoffice libreoffice-headless libreoffice-pyuno\nexternal_es_server = true\n[OFFICE CONVERTER]\nenabled = true\nhost = <ip of node background>\nport = 6000\nes_port was 9500.seafevents.conf, add the following lines:[INDEX FILES]\nenabled = true\nexternal_es_server = true\nes_host = <ip of node A>\nes_port = 9200\n\n[OFFICE CONVERTER]\nenabled = true\nhost = <ip of node background>\nport = 6000\n
"},{"location":"deploy_pro/enable_search_and_background_tasks_in_a_cluster/#start-the-background-node","title":"Start the background node","text":"OFFICE_CONVERTOR_ROOT = 'http://<ip of node background>:6000'\nseafile-background-tasks.sh is needed)./seafile.sh start\n./seafile-background-tasks.sh start\n./seafile-background-tasks.sh stop\n./seafile.sh stop\n/etc/systemd/system/seafile-background-tasks.service:[Unit]\nDescription=Seafile Background Tasks Server\nAfter=network.target seahub.service\n\n[Service]\nType=forking\nExecStart=/opt/seafile/seafile-server-latest/seafile-background-tasks.sh start\nExecStop=/opt/seafile/seafile-server-latest/seafile-background-tasks.sh stop\nUser=root\nGroup=root\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"deploy_pro/enable_search_and_background_tasks_in_a_cluster/#the-final-configuration-of-the-background-node","title":"The final configuration of the background node","text":"systemctl enable seafile-background-tasks.service\n[cluster]\nenabled = true\n\n[memcached]\nmemcached_options = --SERVER=<you memcached server host> --POOL-MIN=10 --POOL-MAX=100\n
"},{"location":"deploy_pro/enable_search_and_background_tasks_in_a_cluster/#90","title":"9.0+","text":""},{"location":"deploy_pro/enable_search_and_background_tasks_in_a_cluster/#configuring-node-a-the-backend-node_1","title":"Configuring Node A (the backend node)","text":"[INDEX FILES]\nenabled = true\ninterval = 10m\nhighlight = fvh # this is for improving the search speed\n\n[OFFICE CONVERTER]\nenabled = true\nhost = <ip of node background>\nport = 6000\nseafevents.conf, add the following lines:[INDEX FILES]\nenabled = true\nexternal_es_server = true\nes_host = <ip of elastic search service>\nes_port = 9200\ninterval = 10m\nhighlight = fvh # this is for improving the search speed\nseafevents.conf, add the following lines:[INDEX FILES]\nenabled = true\nexternal_es_server = true\nes_host = <ip of elastic search service>\nes_port = 9200\n
"},{"location":"deploy_pro/enable_search_and_background_tasks_in_a_cluster/#start-the-background-node_1","title":"Start the background node","text":"OFFICE_CONVERTOR_ROOT = 'http://<ip of office preview docker service>'\nseafile-background-tasks.sh is needed)export CLUSTER_MODE=backend\n./seafile.sh start\n./seafile-background-tasks.sh start\n./seafile-background-tasks.sh stop\n./seafile.sh stop\n/etc/systemd/system/seafile-background-tasks.service:[Unit]\nDescription=Seafile Background Tasks Server\nAfter=network.target seafile.service\n\n[Service]\nType=forking\nExecStart=/opt/seafile/seafile-server-latest/seafile-background-tasks.sh start\nExecStop=/opt/seafile/seafile-server-latest/seafile-background-tasks.sh stop\nUser=root\nGroup=root\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"deploy_pro/enable_search_and_background_tasks_in_a_cluster/#the-final-configuration-of-the-background-node_1","title":"The final configuration of the background node","text":"systemctl enable seafile-background-tasks.service\n[cluster]\nenabled = true\n\n[memcached]\nmemcached_options = --SERVER=<you memcached server host> --POOL-MIN=10 --POOL-MAX=100\n
"},{"location":"deploy_pro/ldap_in_11.0/","title":"Configure Seafile Pro Edition to use LDAP","text":""},{"location":"deploy_pro/ldap_in_11.0/#how-does-ldap-user-management-work-in-seafile","title":"How does LDAP User Management work in Seafile","text":"[INDEX FILES]\nenabled = true\nexternal_es_server = true\nes_host = <ip of elastic search service>\nes_port = 9200\ninterval = 10m\nhighlight = fvh # this is for improving the search speed\n
user-login-name@domain-name, e.g. john@example.com. It's not a real email address, but it works fine as the unique identifier.social_auth_usersocialauth to map the identifier to internal user ID in Seafile. When this ID is changed in LDAP for a user, you only need to update social_auth_usersocialauth table.seahub_settings.py. Examples are as follows:ENABLE_LDAP = True\nLDAP_SERVER_URL = 'ldap://192.168.0.1' \nLDAP_BASE_DN = 'ou=test,dc=seafile,dc=ren' \nLDAP_ADMIN_DN = 'administrator@example.com' \nLDAP_ADMIN_PASSWORD = 'yourpassword' \nLDAP_PROVIDER = 'ldap' \nLDAP_LOGIN_ATTR = 'email' \nLDAP_CONTACT_EMAIL_ATTR = '' \nLDAP_USER_ROLE_ATTR = '' \nLDAP_USER_FIRST_NAME_ATTR = 'givenName' \nLDAP_USER_LAST_NAME_ATTR = 'sn' \nLDAP_USER_NAME_REVERSE = False \nLDAP_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' \n
cn=admin,dc=example,dc=comLDAP_BASE_DN and LDAP_ADMIN_DN:
"},{"location":"deploy_pro/ldap_in_11.0/#setting-up-ldap-user-sync-optional","title":"Setting Up LDAP User Sync (optional)","text":"LDAP_BASE_DN, you first have to navigate your organization hierachy on the domain controller GUI.
cn=users,dc=yourdomain,dc=com as LDAP_BASE_DN (with proper adjustment for your own needs).dsquery command on the domain controller to find out the DN for this OU. For example, if the OU is staff, you can run dsquery ou -name staff. More information can be found here.user@domain.name format for the LDAP_ADMIN_DN option. For example, you can use administrator@example.com for LDAP_ADMIN_DN. Sometimes the domain controller doesn't recognize this format. You can still use the dsquery command to find out the user's DN. For example, if the user name is 'seafileuser', run dsquery user -name seafileuser. More information here.seahub_settings.py. Examples are as follows:# Basic configuration items\nENABLE_LDAP = True\n......\n\n# ldap user sync options.\nLDAP_SYNC_INTERVAL = 60 \nENABLE_LDAP_USER_SYNC = True \nLDAP_USER_OBJECT_CLASS = 'person'\nLDAP_DEPT_ATTR = '' \nLDAP_UID_ATTR = '' \nLDAP_AUTO_REACTIVATE_USERS = True \nLDAP_USE_PAGED_RESULT = False \nIMPORT_NEW_USER = True \nACTIVATE_USER_WHEN_IMPORT = True \nDEACTIVE_USER_IF_NOTFOUND = False \nENABLE_EXTRA_USER_INFO_SYNC = True \n
"},{"location":"deploy_pro/ldap_in_11.0/#importing-users-without-activating-them","title":"Importing Users without Activating Them","text":"sAMAccountName can be used as UID_ATTR. The attribute will be stored as login_id in Seafile (in seahub_db.profile_profile table).seahub_settings.py:ACTIVATE_USER_WHEN_IMPORT = False\nseahub_settings.py:ACTIVATE_AFTER_FIRST_LOGIN = True\nDEACTIVE_USER_IF_NOTFOUND option, a user will be deactivated when he/she is not found in LDAP server. By default, even after this user reappears in the LDAP server, it won't be reactivated automatically. This is to prevent auto reactivating a user that was manually deactivated by the system admin.seahub_settings.py:
"},{"location":"deploy_pro/ldap_in_11.0/#manually-trigger-synchronization","title":"Manually Trigger Synchronization","text":"LDAP_AUTO_REACTIVATE_USERS = True\ncd seafile-server-latest\n./pro/pro.py ldapsync\n
"},{"location":"deploy_pro/ldap_in_11.0/#setting-up-ldap-group-sync-optional","title":"Setting Up LDAP Group Sync (optional)","text":""},{"location":"deploy_pro/ldap_in_11.0/#how-it-works","title":"How It Works","text":"docker exec -it seafile /opt/seafile/seafile-server-latest/pro/pro.py ldapsync\n
"},{"location":"deploy_pro/ldap_in_11.0/#configuration","title":"Configuration","text":"# ldap group sync options.\nENABLE_LDAP_GROUP_SYNC = True # Whether to enable group sync\nLDAP_GROUP_OBJECT_CLASS = 'group' # This is the name of the class used to search for group objects.\nLDAP_GROUP_MEMBER_ATTR = 'member' # The attribute field to use when loading the group's members. \n # For most directory servers, the attributes is \"member\" \n # which is the default value.For \"posixGroup\", it should be set to \"memberUid\".\nLDAP_USER_ATTR_IN_MEMBERUID = 'uid' # The user attribute set in 'memberUid' option, \n # which is used in \"posixGroup\".The default value is \"uid\".\nLDAP_GROUP_UUID_ATTR = 'objectGUID' # Used to uniquely identify groups in LDAP\nLDAP_GROUP_FILTER = '' # An additional filter to use when searching group objects.\n # If it's set, the final filter used to run search is \"(&(objectClass=GROUP_OBJECT_CLASS)(GROUP_FILTER))\";\n # otherwise the final filter would be \"(objectClass=GROUP_OBJECT_CLASS)\".\nLDAP_USE_GROUP_MEMBER_RANGE_QUERY = False # When a group contains too many members, \n # AD will only return part of them. Set this option to TRUE\n # to make LDAP sync work with large groups.\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\", sync process will delete the group if not found it in LDAP server.\nLDAP_SYNC_GROUP_AS_DEPARTMENT = False # Whether to sync groups as top-level departments in Seafile.\n # Learn more about departments in Seafile [here](https://help.seafile.com/sharing_collaboration/departments/).\nLDAP_DEPT_NAME_ATTR = '' # Used to get the department name.\n
(&(objectClass=GROUP_OBJECT_CLASS)(GROUP_FILTER)); otherwise the final filter would be (objectClass=GROUP_OBJECT_CLASS).
"},{"location":"deploy_pro/ldap_in_11.0/#sync-ou-as-departments","title":"Sync OU as Departments","text":"LDAP_BASE_DN.LDAP_GROUP_OBJECT_CLASS option to posixGroup. A posixGroup object in LDAP usually contains a multi-value attribute for the list of member UIDs. The name of this attribute can be set with the LDAP_GROUP_MEMBER_ATTR option. It's MemberUid by default. The value of the MemberUid attribute is an ID that can be used to identify a user, which corresponds to an attribute in the user object. The name of this ID attribute is usually uid, but can be set via the LDAP_USER_ATTR_IN_MEMBERUID option. Note that posixGroup doesn't support nested groups.
"},{"location":"deploy_pro/ldap_in_11.0/#periodical-and-manual-sync","title":"Periodical and Manual Sync","text":"LDAP_SYNC_DEPARTMENT_FROM_OU = True # Whether to enable sync departments from OU.\nLDAP_DEPT_NAME_ATTR = 'description' # Used to get the department name.\nLDAP_CREATE_DEPARTMENT_LIBRARY = False # If you decide to sync the group as a department,\n # you can set this option to \"true\". In this way, when \n # the group is synchronized for the first time, a library\n # is automatically created for the department, and the \n # library's name is the department's name.\nLDAP_DEPT_REPO_PERM = 'rw' # Set the permissions of the department repo, default permission is 'rw'.\nLDAP_DEFAULT_DEPARTMENT_QUOTA = -2 # You can set a default space quota for each department\n # when you synchronize a group for the first time. The \n # quota is set to unlimited if this option is not set.\n # Unit is MB.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\", sync process will deleted the department if not found it in LDAP server.\n[2023-03-30 18:15:05,109] [DEBUG] create group 1, and add dn pair CN=DnsUpdateProxy,CN=Users,DC=Seafile,DC=local<->1 success.\n[2023-03-30 18:15:05,145] [DEBUG] create group 2, and add dn pair CN=Domain Computers,CN=Users,DC=Seafile,DC=local<->2 success.\n[2023-03-30 18:15:05,154] [DEBUG] create group 3, and add dn pair CN=Domain Users,CN=Users,DC=Seafile,DC=local<->3 success.\n[2023-03-30 18:15:05,164] [DEBUG] create group 4, and add dn pair CN=Domain Admins,CN=Users,DC=Seafile,DC=local<->4 success.\n[2023-03-30 18:15:05,176] [DEBUG] create group 5, and add dn pair CN=RAS and IAS Servers,CN=Users,DC=Seafile,DC=local<->5 success.\n[2023-03-30 18:15:05,186] [DEBUG] create group 6, and add dn pair CN=Enterprise Admins,CN=Users,DC=Seafile,DC=local<->6 success.\n[2023-03-30 18:15:05,197] [DEBUG] create group 7, and add dn pair CN=dev,CN=Users,DC=Seafile,DC=local<->7 success.\ncd seafile-server-latest\n./pro/pro.py ldapsync\n
"},{"location":"deploy_pro/ldap_in_11.0/#advanced-ldap-integration-options","title":"Advanced LDAP Integration Options","text":""},{"location":"deploy_pro/ldap_in_11.0/#multiple-base","title":"Multiple BASE","text":"docker exec -it seafile /opt/seafile/seafile-server-latest/pro/pro.py ldapsync\nLDAP_BASE_DN option. The DNs are separated by \";\", e.g.
"},{"location":"deploy_pro/ldap_in_11.0/#additional-search-filter","title":"Additional Search Filter","text":"LDAP_BASE_DN = 'ou=developers,dc=example,dc=com;ou=marketing,dc=example,dc=com'\nLDAP_FILTER option. The value of this option follows standard LDAP search filter syntax (https://msdn.microsoft.com/en-us/library/aa746475(v=vs.85).aspx).(&($LOGIN_ATTR=*)($LDAP_FILTER)). $LOGIN_ATTR and $LDAP_FILTER will be replaced by your option values.seahub_settings.py:LDAP_FILTER = 'memberOf=CN=group,CN=developers,DC=example,DC=com'\n(&(mail=*)(memberOf=CN=group,CN=developers,DC=example,DC=com))memberOf attribute is only available in Active Directory.LDAP_FILTER option to limit user scope to a certain AD group.
dsquery command on the domain controller. For example, if group name is 'seafilegroup', run dsquery group -name seafilegroup.seahub_settings.py:
"},{"location":"deploy_pro/ldap_in_11.0/#using-tls-connection-to-ldap-server","title":"Using TLS connection to LDAP server","text":"LDAP_FILTER = 'memberOf={output of dsquery command}'\nLDAP_SERVER_URL as the access address of the ldaps protocol to use TLS to connect to the LDAP service, for example:
"},{"location":"deploy_pro/ldap_in_11.0/#use-paged-results-extension","title":"Use paged results extension","text":"LDAP_SERVER_URL = 'ldaps://192.168.0.1:636/'\nseahub_settings.py to enable PR:
"},{"location":"deploy_pro/ldap_in_11.0/#follow-referrals","title":"Follow referrals","text":"LDAP_USE_PAGED_RESULT = True\nseahub_settings.py, e.g.:
"},{"location":"deploy_pro/ldap_in_11.0/#configure-multi-ldap-servers","title":"Configure Multi-ldap Servers","text":"LDAP_FOLLOW_REFERRALS = True\nLDAP in the options with MULTI_LDAP_1, and then add them to seahub_settings.py, for example:# Basic config options\nENABLE_LDAP = True\n......\n\n# Multi ldap config options\nENABLE_MULTI_LDAP_1 = True\nMULTI_LDAP_1_SERVER_URL = 'ldap://192.168.0.2'\nMULTI_LDAP_1_BASE_DN = 'ou=test,dc=seafile,dc=top'\nMULTI_LDAP_1_ADMIN_DN = 'administrator@example.top'\nMULTI_LDAP_1_ADMIN_PASSWORD = 'Hello@123'\nMULTI_LDAP_1_PROVIDER = 'ldap1'\nMULTI_LDAP_1_LOGIN_ATTR = 'userPrincipalName'\n\n# Optional configs\nMULTI_LDAP_1_USER_FIRST_NAME_ATTR = 'givenName'\nMULTI_LDAP_1_USER_LAST_NAME_ATTR = 'sn'\nMULTI_LDAP_1_USER_NAME_REVERSE = False\nENABLE_MULTI_LDAP_1_EXTRA_USER_INFO_SYNC = True\n\nMULTI_LDAP_1_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' \nMULTI_LDAP_1_USE_PAGED_RESULT = False\nMULTI_LDAP_1_FOLLOW_REFERRALS = True\nENABLE_MULTI_LDAP_1_USER_SYNC = True\nENABLE_MULTI_LDAP_1_GROUP_SYNC = True\nMULTI_LDAP_1_SYNC_DEPARTMENT_FROM_OU = True\n\nMULTI_LDAP_1_USER_OBJECT_CLASS = 'person'\nMULTI_LDAP_1_DEPT_ATTR = ''\nMULTI_LDAP_1_UID_ATTR = ''\nMULTI_LDAP_1_CONTACT_EMAIL_ATTR = ''\nMULTI_LDAP_1_USER_ROLE_ATTR = ''\nMULTI_LDAP_1_AUTO_REACTIVATE_USERS = True\n\nMULTI_LDAP_1_GROUP_OBJECT_CLASS = 'group'\nMULTI_LDAP_1_GROUP_FILTER = ''\nMULTI_LDAP_1_GROUP_MEMBER_ATTR = 'member'\nMULTI_LDAP_1_GROUP_UUID_ATTR = 'objectGUID'\nMULTI_LDAP_1_CREATE_DEPARTMENT_LIBRARY = False\nMULTI_LDAP_1_DEPT_REPO_PERM = 'rw'\nMULTI_LDAP_1_DEFAULT_DEPARTMENT_QUOTA = -2\nMULTI_LDAP_1_SYNC_GROUP_AS_DEPARTMENT = False\nMULTI_LDAP_1_USE_GROUP_MEMBER_RANGE_QUERY = False\nMULTI_LDAP_1_USER_ATTR_IN_MEMBERUID = 'uid'\nMULTI_LDAP_1_DEPT_NAME_ATTR = ''\n......\n
"},{"location":"deploy_pro/ldap_in_11.0/#sso-and-ldap-users-use-the-same-uid","title":"SSO and LDAP users use the same uid","text":"# Common user sync options\nLDAP_SYNC_INTERVAL = 60\nIMPORT_NEW_USER = True # Whether to import new users when sync user\nACTIVATE_USER_WHEN_IMPORT = True # Whether to activate the user when importing new user\nDEACTIVE_USER_IF_NOTFOUND = False # Set to \"true\" if you want to deactivate a user \n # when he/she was deleted in AD server.\n\n# Common group sync options\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\", sync process will delete the group if not found it in LDAP server.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\", sync process will deleted the department if not found it in LDAP server.\nSSO_LDAP_USE_SAME_UID = True:SSO_LDAP_USE_SAME_UID = True\nLDAP_LOGIN_ATTR (not LDAP_UID_ATTR), in ADFS it is uid attribute. You need make sure you use the same attribute for the two settings.seahub_settings.py, e.g.LDAP_USER_ROLE_ATTR = 'title'\nLDAP_USER_ROLE_ATTR is the attribute field to configure roles in LDAP. You can write a custom function to map the role by creating a file seahub_custom_functions.py under conf/ and edit it like:# -*- coding: utf-8 -*-\n\n# The AD roles attribute returns a list of roles (role_list).\n# The following function use the first entry in the list.\ndef ldap_role_mapping(role):\n if 'staff' in role:\n return 'Staff'\n if 'guest' in role:\n return 'Guest'\n if 'manager' in role:\n return 'Manager'\n\n# From version 11.0.11-pro, you can define the following function\n# to calculate a role from the role_list.\ndef ldap_role_list_mapping(role_list):\n if not role_list:\n return ''\n for role in role_list:\n if 'staff' in role:\n return 'Staff'\n if 'guest' in role:\n return 'Guest'\n if 'manager' in role:\n return 'Manager'\n# Under Ubuntu\nvi /etc/memcached.conf\n\n# Start with a cap of 64 megs of memory. 
It's reasonable, and the daemon default\n# Note that the daemon will grow to this size, but does not start out holding this much\n# memory\n# -m 64\n-m 256\n\n# Specify which IP address to listen on. The default is to listen on all IP addresses\n# This parameter is one of the only security measures that memcached has, so make sure\n# it's listening on a firewalled interface.\n-l 0.0.0.0\n\nservice memcached restart\n# For Ubuntu\nsudo apt-get install keepalived -y\n/etc/keepalived/keepalived.conf.cat /etc/keepalived/keepalived.conf\n\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node1\n vrrp_mcast_group4 224.0.100.19\n}\nvrrp_script chk_memcached {\n script \"killall -0 memcached && exit 0 || exit 1\"\n interval 1\n weight -5\n}\n\nvrrp_instance VI_1 {\n state MASTER\n interface ens33\n virtual_router_id 51\n priority 100\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass hello123\n }\n virtual_ipaddress {\n 192.168.1.113/24 dev ens33\n }\n track_script {\n chk_memcached\n }\n}\ncat /etc/keepalived/keepalived.conf\n\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node2\n vrrp_mcast_group4 224.0.100.19\n}\nvrrp_script chk_memcached {\n script \"killall -0 memcached && exit 0 || exit 1\"\n interval 1\n weight -5\n}\n\nvrrp_instance VI_1 {\n state BACKUP\n interface ens33\n virtual_router_id 51\n priority 98\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass hello123\n }\n virtual_ipaddress {\n 192.168.1.113/24 dev ens33\n }\n track_script {\n chk_memcached\n }\n}\n
"},{"location":"deploy_pro/migrate/","title":"Migrate data between different backends","text":"
"},{"location":"deploy_pro/migrate/#create-a-new-temporary-seafileconf","title":"Create a new temporary seafile.conf","text":"[block_backend], [commit_object_backend], [fs_object_backend] options) and save it under a readable path. Let's assume that we are migrating data to S3 and create temporary seafile.conf under /optcat > seafile.conf << EOF\n[commit_object_backend]\nname = s3\nbucket = seacomm\nkey_id = ******\nkey = ******\n\n[fs_object_backend]\nname = s3\nbucket = seafs\nkey_id = ******\nkey = ******\n\n[block_backend]\nname = s3\nbucket = seablk\nkey_id = ******\nkey = ******\nEOF\n\nmv seafile.conf /opt\ncat > seafile.conf << EOF\n[commit_object_backend]\nname = fs\n# the dir configuration is the new seafile-data path\ndir = /var/data_backup\n\n[fs_object_backend]\nname = fs\n# the dir configuration is the new seafile-data path\ndir = /var/data_backup\n\n[block_backend]\nname = fs\n# the dir configuration is the new seafile-data path\ndir = /var/data_backup\n\nEOF\n\nmv seafile.conf /opt\nexport OBJECT_LIST_FILE_PATH=/path/to/object/list/file\n/path/to/object/list/file.commit,/path/to/object/list/file.fs, /path/to/object/list/file.blocks.nworker and maxsize variables in the following code:class ThreadPool(object):\n\ndef __init__(self, do_work, nworker=20):\n self.do_work = do_work\n self.nworker = nworker\n self.task_queue = Queue.Queue(maxsize = 2000)\n--decrypt option, which will decrypt the data while reading it, and then write the unencrypted data to the new backend. Note that you need add this option in all stages of the migration.
"},{"location":"deploy_pro/migrate/#run-migratesh-to-initially-migrate-objects","title":"Run migrate.sh to initially migrate objects","text":"cd ~/haiwen/seafile-server-latest\n./migrate.sh /opt --decrypt\n~/haiwen, enter ~/haiwen/seafile-server-latest and run migrate.sh with parent path of temporary seafile.conf as parameter, here is /opt.cd ~/haiwen/seafile-server-latest\n./migrate.sh /opt\n
"},{"location":"deploy_pro/migrate/#replace-the-original-seafileconf","title":"Replace the original seafile.conf","text":"cd ~/haiwen/seafile-server-latest\n./migrate.sh /opt\nmv /opt/seafile.conf ~/haiwen/conf\n
"},{"location":"deploy_pro/migrate_from_seafile_community_server/#do-the-migration","title":"Do the migration","text":"sudo apt-get install poppler-utils\n/opt/seafile/seafile-server-10.0.0. /opt/seafile/./opt/seafile.tar xf seafile-pro-server_10.0.0_x86-64_Ubuntu.tar.gz\nseafile\n\u251c\u2500\u2500 seafile-license.txt\n\u251c\u2500\u2500 seafile-pro-server-10.0.0/\n\u251c\u2500\u2500 seafile-server-10.0.0/\n\u251c\u2500\u2500 ccnet/\n\u251c\u2500\u2500 seafile-data/\n\u251c\u2500\u2500 seahub-data/\n\u2514\u2500\u2500 conf/\n\u2514\u2500\u2500 logs/\n
"},{"location":"deploy_pro/migrate_from_seafile_community_server/#do-the-migration_1","title":"Do the migration","text":"seafile-server_10.0.0_x86-64_Ubuntu.tar.gz; After uncompressing, the folder is seafile-server-10.0.0seafile-pro-server_10.0.0_x86-64_Ubuntu.tar.gz; After uncompressing, the folder is seafile-pro-server-10.0.0
cd seafile/seafile-server-10.0.0\n./seafile.sh stop\n./seahub.sh stop\n
cd seafile/seafile-pro-server-10.0.0/\n./pro/pro.py setup --migrate\n
"},{"location":"deploy_pro/migrate_from_seafile_community_server/#add-memory-cache-configuration","title":"Add Memory Cache Configuration","text":"seafile\n\u251c\u2500\u2500 seafile-license.txt\n\u251c\u2500\u2500 seafile-pro-server-10.0.0/\n\u251c\u2500\u2500 seafile-server-10.0.0/\n\u251c\u2500\u2500 ccnet/\n\u251c\u2500\u2500 seafile-data/\n\u251c\u2500\u2500 seahub-data/\n\u251c\u2500\u2500 seahub.db\n\u251c\u2500\u2500 seahub_settings.py\n\u2514\u2500\u2500 pro-data/\n# on Debian/Ubuntu 18.04+\napt-get install memcached libmemcached-dev -y\npip3 install --timeout=3600 pylibmc django-pylibmc\n\nsystemctl enable --now memcached\nseahub_settings.py.
"},{"location":"deploy_pro/migrate_from_seafile_community_server/#use-redis","title":"Use Redis","text":"CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '127.0.0.1:11211',\n },\n}\nseahub_settings.py.
"},{"location":"deploy_pro/migrate_from_seafile_community_server/#switch-back-to-community-server","title":"Switch Back to Community Server","text":"cd seafile/seafile-pro-server-10.0.0\n./seafile.sh start\n./seahub.sh start\ncd seafile/seafile-pro-server-10.0.0/\n./seafile.sh stop\n./seahub.sh stop\ncd seafile/seafile-server-10.0.0/\n./upgrade/minor-upgrade.sh\n
"},{"location":"deploy_pro/multi_institutions/","title":"Multiple Organization/Institution User Management","text":"cd haiwen/seafile-server-10.0.0/\n./seafile.sh start\n./seahub.sh start\nseahub_settings.py, add MULTI_INSTITUTION = True to enable multi-institution feature. And add# for 7.1.22 or older\nEXTRA_MIDDLEWARE_CLASSES += (\n 'seahub.institutions.middleware.InstitutionMiddleware',\n)\n\n# for 8.0.0 or newer\nEXTRA_MIDDLEWARE += (\n 'seahub.institutions.middleware.InstitutionMiddleware',\n)\n# for 7.1.22 or older\nEXTRA_MIDDLEWARE_CLASSES = (\n 'seahub.institutions.middleware.InstitutionMiddleware',\n)\n\n# for 8.0.0 or newer\nEXTRA_MIDDLEWARE = (\n 'seahub.institutions.middleware.InstitutionMiddleware',\n)\nEXTRA_MIDDLEWARE_CLASSES or EXTRA_MIDDLEWARE is not defined.profile.institution match the name.
"},{"location":"deploy_pro/multi_tenancy/","title":"Multi-Tenancy Support","text":"SHIBBOLETH_ATTRIBUTE_MAP = {\n \"givenname\": (False, \"givenname\"),\n \"sn\": (False, \"surname\"),\n \"mail\": (False, \"contact_email\"),\n \"organization\": (False, \"institution\"),\n}\n
"},{"location":"deploy_pro/multi_tenancy/#seahub_settingspy","title":"seahub_settings.py","text":"[general]\nmulti_tenancy = true\n
"},{"location":"deploy_pro/multi_tenancy/#usage","title":"Usage","text":"CLOUD_MODE = True\nMULTI_TENANCY = True\n\nORG_MEMBER_QUOTA_ENABLED = True\n\nORG_ENABLE_ADMIN_CUSTOM_NAME = True # Default is True, meaning organization name can be customized\nORG_ENABLE_ADMIN_CUSTOM_LOGO = False # Default is False, if set to True, organization logo can be customized\n\nENABLE_MULTI_ADFS = True # Default is False, if set to True, support per organization custom ADFS/SAML2 login\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n ...\n}\n$ apt update\n$ apt install xmlsec1\n$ mkdir -p /opt/seafile/seahub-data/certs\n$ cd /opt/seafile/seahub-data/certs\n$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout sp.key -out sp.crt\ndays option indicates the validity period of the generated certificate. The unit is day. The system admin needs to update the certificate regularly.ENABLE_MULTI_ADFS = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n ...\n}\n/usr/bin/xmlsec1, you need to add the following configuration in seahub_settings.py:SAML_XMLSEC_BINARY_PATH = '/path/to/xmlsec1'\n$ which xmlsec1\n/opt/seafile/seahub-data/certs, you need to add the following configuration in seahub_settings.py:
"},{"location":"deploy_pro/multi_tenancy/#integration-with-adfssaml-single-sign-on","title":"Integration with ADFS/SAML single sign-on","text":"SAML_CERTS_DIR = '/path/to/certs'\n
"},{"location":"deploy_pro/multiple_storage_backends/#outline","title":"Outline","text":"
storage_id: an internal string ID to identify the storage class. It's not visible to users. For example \"primary storage\".name: A user visible name for the storage class.is_default: whether this storage class is the default. This option are effective in two cases:commits\uff1athe storage for storing the commit objects for this class. It can be any storage that Seafile supports, like file system, ceph, s3.fs\uff1athe storage for storing the fs objects for this class. It can be any storage that Seafile supports, like file system, ceph, s3.blocks\uff1athe storage for storing the block objects for this class. It can be any storage that Seafile supports, like file system, ceph, s3.[storage]\nenable_storage_classes = true\nstorage_classes_file = /opt/seafile_storage_classes.json\n
seafile.conf.seafile_storage_classes.json file on your local disk in a sub-directory of the location that is mounted to the seafile container, and set the storage_classes_file configuration above to a path relative to the /shared/ directory mounted on the seafile container. seafile container in your docker-compose.yml file is similar to the following:# docker-compose.yml\nservices:\n seafile:\n container_name: seafile\n volumes:\n - /opt/seafile-data:/shared\n/opt/seafile-data (such as /opt/seafile-data/conf/) and then configure seafile.conf like so:[storage]\nenable_storage_classes = true\nstorage_classes_file = /shared/conf/seafile_storage_classes.json\nseafile.conf.[\n {\n \"storage_id\": \"hot_storage\",\n \"name\": \"Hot Storage\",\n \"is_default\": true,\n \"commits\": {\n \"backend\": \"s3\",\n \"bucket\": \"seafile-commits\",\n \"key\": \"ZjoJ8RPNDqP1vcdD60U4wAHwUQf2oJYqxN27oR09\",\n \"key_id\": \"AKIAIOT3GCU5VGCCL44A\"\n },\n \"fs\": {\n \"backend\": \"s3\",\n \"bucket\": \"seafile-fs\",\n \"key\": \"ZjoJ8RPNDqP1vcdD60U4wAHwUQf2oJYqxN27oR09\",\n \"key_id\": \"AKIAIOT3GCU5VGCCL44A\"\n },\n \"blocks\": {\n \"backend\": \"s3\",\n \"bucket\": \"seafile-blocks\",\n \"key\": \"ZjoJ8RPNDqP1vcdD60U4wAHwUQf2oJYqxN27oR09\",\n \"key_id\": \"AKIAIOT3GCU5VGCCL44A\"\n }\n },\n {\n \"storage_id\": \"cold_storage\",\n \"name\": \"Cold Storage\",\n \"is_default\": false,\n \"fs\": {\n \"backend\": \"fs\",\n \"dir\": \"/storage/seafile/seafile-data\"\n },\n \"commits\": {\n \"backend\": \"fs\",\n \"dir\": \"/storage/seafile/seafile-data\"\n },\n \"blocks\": {\n \"backend\": \"fs\",\n \"dir\": \"/storage/seafile/seafile-data\"\n }\n },\n {\n \"storage_id\": \"swift_storage\",\n \"name\": \"Swift Storage\",\n \"fs\": {\n \"backend\": \"swift\",\n \"tenant\": \"adminTenant\",\n \"user_name\": \"admin\",\n \"password\": \"openstack\",\n \"container\": \"seafile-commits\",\n \"auth_host\": \"192.168.56.31:5000\",\n \"auth_ver\": \"v2.0\"\n },\n \"commits\": {\n \"backend\": 
\"swift\",\n \"tenant\": \"adminTenant\",\n \"user_name\": \"admin\",\n \"password\": \"openstack\",\n \"container\": \"seafile-fs\",\n \"auth_host\": \"192.168.56.31:5000\",\n \"auth_ver\": \"v2.0\"\n },\n \"blocks\": {\n \"backend\": \"swift\",\n \"tenant\": \"adminTenant\",\n \"user_name\": \"admin\",\n \"password\": \"openstack\",\n \"container\": \"seafile-blocks\",\n \"auth_host\": \"192.168.56.31:5000\",\n \"auth_ver\": \"v2.0\",\n \"region\": \"RegionTwo\"\n }\n },\n {\n \"storage_id\": \"ceph_storage\",\n \"name\": \"ceph Storage\",\n \"fs\": {\n \"backend\": \"ceph\",\n \"ceph_config\": \"/etc/ceph/ceph.conf\",\n \"pool\": \"seafile-fs\"\n },\n \"commits\": {\n \"backend\": \"ceph\",\n \"ceph_config\": \"/etc/ceph/ceph.conf\",\n \"pool\": \"seafile-commits\"\n },\n \"blocks\": {\n \"backend\": \"ceph\",\n \"ceph_config\": \"/etc/ceph/ceph.conf\",\n \"pool\": \"seafile-blocks\"\n }\n }\n]\ncommits, fs and blocks information syntax is similar to what is used in [commit_object_backend], [fs_object_backend] and [block_backend] section of seafile.conf. Refer to the detailed syntax in the documentation for the storage you use. For exampe, if you use S3 storage, refer to S3 Storage.fs, commits or blocks, you must explicitly provide the path for the seafile-data directory. The objects will be stored in storage/commits, storage/fs, storage/blocks under this path.
"},{"location":"deploy_pro/multiple_storage_backends/#user-chosen","title":"User Chosen","text":"ENABLE_STORAGE_CLASSES = True\nSTORAGE_CLASS_MAPPING_POLICY = 'USER_SELECT'\nSTORAGE_CLASS_MAPPING_POLIICY in seahub_settings.py, this policy is used by default.storage_ids is added to the role configuration in seahub_settings.py to assign storage classes to each role. If only one storage class is assigned to a role, the users with this role cannot choose storage class for libraries; otherwise, the users can choose a storage class if more than one class are assigned. If no storage class is assigned to a role, the default class specified in the JSON file will be used.
"},{"location":"deploy_pro/multiple_storage_backends/#library-id-based-mapping","title":"Library ID Based Mapping","text":"ENABLE_STORAGE_CLASSES = True\nSTORAGE_CLASS_MAPPING_POLICY = 'ROLE_BASED'\n\nENABLED_ROLE_PERMISSIONS = {\n 'default': {\n 'can_add_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_invite_guest': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'storage_ids': ['old_version_id', 'hot_storage', 'cold_storage', 'a_storage'],\n },\n 'guest': {\n 'can_add_repo': True,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_invite_guest': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'storage_ids': ['hot_storage', 'cold_storage'],\n },\n}\nSTORAGE_CLASS_MAPPING_POLICY = 'REPO_ID_MAPPING'\nfor_new_library to the backends which are expected to store new libraries in json file:
"},{"location":"deploy_pro/multiple_storage_backends/#multiple-storage-backend-data-migration","title":"Multiple Storage Backend Data Migration","text":"[\n{\n\"storage_id\": \"new_backend\",\n\"name\": \"New store\",\n\"for_new_library\": true,\n\"is_default\": false,\n\"fs\": {\"backend\": \"fs\", \"dir\": \"/storage/seafile/new-data\"},\n\"commits\": {\"backend\": \"fs\", \"dir\": \"/storage/seafile/new-data\"},\n\"blocks\": {\"backend\": \"fs\", \"dir\": \"/storage/seafile/new-data\"}\n}\n]\nmigrate-repo.sh script to migrate library data between different storage backends../migrate-repo.sh [repo_id] origin_storage_id destination_storage_id\n
OBJECT_LIST_FILE_PATH environment variable to specify a path prefix to store the migrated object list.export OBJECT_LIST_FILE_PATH=/opt/test\ntest_4c731e5c-f589-4eaa-889f-14c00d4893cb.fs test_4c731e5c-f589-4eaa-889f-14c00d4893cb.commits test_4c731e5c-f589-4eaa-889f-14c00d4893cb.blocks Setting the OBJECT_LIST_FILE_PATH environment variable has two purposes:
"},{"location":"deploy_pro/multiple_storage_backends/#delete-all-objects-in-a-library-in-the-specified-storage-backend","title":"Delete All Objects In a Library In The Specified Storage Backend","text":"remove-objs.sh script (before migration, you need to set the OBJECT_LIST_FILE_PATH environment variable) to delete all objects in a library in the specified storage backend.
"},{"location":"deploy_pro/office_web_app/","title":"Office Online Server","text":"./remove-objs.sh repo_id storage_id\n# Enable Office Online Server\nENABLE_OFFICE_WEB_APP = True\n\n# Url of Office Online Server's discovery page\n# The discovery page tells Seafile how to interact with Office Online Server when view file online\n# You should change `http://example.office-web-app.com` to your actual Office Online Server server address\nOFFICE_WEB_APP_BASE_URL = 'http://example.office-web-app.com/hosting/discovery'\n\n# Expiration of WOPI access token\n# WOPI access token is a string used by Seafile to determine the file's\n# identity and permissions when use Office Online Server view it online\n# And for security reason, this token should expire after a set time period\nWOPI_ACCESS_TOKEN_EXPIRATION = 60 * 60 * 24 # seconds\n\n# List of file formats that you want to view through Office Online Server\n# You can change this value according to your preferences\n# And of course you should make sure your Office Online Server supports to preview\n# the files with the specified extensions\nOFFICE_WEB_APP_FILE_EXTENSION = ('ods', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt',\n 'pptm', 'pptx', 'doc', 'docm', 'docx')\n\n# Enable edit files through Office Online Server\nENABLE_OFFICE_WEB_APP_EDIT = True\n\n# types of files should be editable through Office Online Server\n# Note, Office Online Server 2016 is needed for editing docx\nOFFICE_WEB_APP_EDIT_FILE_EXTENSION = ('xlsx', 'pptx', 'docx')\n\n\n# HTTPS authentication related (optional)\n\n# Server certificates\n# Path to a CA_BUNDLE file or directory with certificates of trusted CAs\n# NOTE: If set this setting to a directory, the directory must have been processed using the c_rehash utility supplied with OpenSSL.\nOFFICE_WEB_APP_SERVER_CA = '/path/to/certfile'\n\n\n# Client certificates\n# You can specify a single file (containing the private key and the certificate) to use as client side 
certificate\nOFFICE_WEB_APP_CLIENT_PEM = 'path/to/client.pem'\n\n# or you can specify these two file path to use as client side certificate\nOFFICE_WEB_APP_CLIENT_CERT = 'path/to/client.cert'\nOFFICE_WEB_APP_CLIENT_KEY = 'path/to/client.key'\n./seafile.sh restart\n./seahub.sh restart\n
role_quota is used to set the quota for a certain role of users. For example, we can set the quota of employee to 100G by adding 'role_quota': '100g', and leave other roles of users at the default quota.can_add_public_repo sets whether a role can create a public library; the default is \"False\". Note: The can_add_public_repo option will not take effect if you configure global CLOUD_MODE = True.storage_ids permission is used for assigning storage backends to users with a specific role. More details can be found in multiple storage backends.upload_rate_limit and download_rate_limit are added to limit upload and download speed for users with different roles. After configuring the rate limit, run the following command in the seafile-server-latest directory to make the configuration take effect:./seahub.sh python-env python3 seahub/manage.py set_user_role_upload_download_rate_limit\ncan_share_repo is added to limit users' ability to share a library.default and guest; a default user is a normal user with the following permissions: 'default': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': False,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': True,\n 'upload_rate_limit': 0, # unit: kb/s\n 'download_rate_limit': 0,\n },\n
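To illustrate how a role_quota string such as '100g' maps to a byte value, here is a hypothetical helper (this is an illustration of the format only, not Seafile's actual parser):

```python
# Hypothetical helper showing how role_quota strings like '100g' or '512m'
# could be converted to bytes. An empty string means "use the default quota".
UNITS = {'k': 1024, 'm': 1024**2, 'g': 1024**3, 't': 1024**4}

def parse_role_quota(quota):
    """Return the quota in bytes, or None for '' (default quota applies)."""
    quota = quota.strip().lower()
    if not quota:
        return None
    if quota[-1] in UNITS:
        return int(quota[:-1]) * UNITS[quota[-1]]
    return int(quota)  # plain byte count
```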
"},{"location":"deploy_pro/roles_permissions/#edit-build-in-roles","title":"Edit build-in roles","text":" 'guest': {\n 'can_add_repo': False,\n 'can_share_repo': False,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_send_share_link_mail': False,\n 'can_invite_guest': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': False,\n 'upload_rate_limit': 0,\n 'download_rate_limit': 0,\n },\nseahub_settings.py with corresponding permissions set to True.
"},{"location":"deploy_pro/roles_permissions/#more-about-guest-invitation-feature","title":"More about guest invitation feature","text":"ENABLED_ROLE_PERMISSIONS = {\n 'default': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': True,\n 'upload_rate_limit': 2000, # unit: kb/s\n 'download_rate_limit': 4000,\n },\n 'guest': {\n 'can_add_repo': False,\n 'can_share_repo': False,\n 'can_add_group': False,\n 'can_view_org': True,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_send_share_link_mail': False,\n 'can_invite_guest': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': False,\n 'upload_rate_limit': 100,\n 'download_rate_limit': 200,\n }\n}\ncan_invite_guest permission can invite people outside of the organization as guest.can_invite_guest permission to the user, add the following line to seahub_settings.py,ENABLE_GUEST_INVITATION = True\n\n# invitation expire time\nINVITATIONS_TOKEN_AGE = 72 # hours\ncan_invite_guest permission will see \"Invite People\" section at sidebar of home page.INVITATION_ACCEPTER_BLACKLIST = [\"a@a.com\", \"*@a-a-a.com\", r\".*@(foo|bar).com\", ]\nemployee can invite guest and can create public library and have all other permissions a default user has, you can add following lines to seahub_settings.py
"},{"location":"deploy_pro/saml2_in_10.0/","title":"SAML 2.0 in version 10.0+","text":"ENABLED_ROLE_PERMISSIONS = {\n 'default': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': False,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': True,\n 'upload_rate_limit': 2000, # unit: kb/s\n 'download_rate_limit': 4000,\n },\n 'guest': {\n 'can_add_repo': False,\n 'can_share_repo': False,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_send_share_link_mail': False,\n 'can_invite_guest': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': False,\n 'upload_rate_limit': 100,\n 'download_rate_limit': 200,\n },\n 'employee': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': True,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': True,\n 'upload_rate_limit': 500,\n 'download_rate_limit': 800,\n },\n}\n$ apt update\n$ apt install xmlsec1\n$ apt install dnsutils # For multi-tenancy feature\n$ mkdir -p 
/opt/seafile/seahub-data/certs\n$ cd /opt/seafile/seahub-data/certs\n$ openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout sp.key -out sp.crt\ndays option indicates the validity period of the generated certificate. The unit is day. The system admin needs to update the certificate regularly./opt/seafile/seahub-data/certs).SAML_REMOTE_METADATA_URL option in seahub_settings.py, e.g.:SAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx' # copy from SAML app\nENABLE_ADFS_LOGIN, LOGIN_REDIRECT_URL and SAML_ATTRIBUTE_MAPPING options to seahub_settings.py, and then restart Seafile, e.g:ENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n 'seafile_groups': ('', ), # Optional, set this attribute if you need to synchronize groups/departments.\n ...\n\n}\nSAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx' # copy from SAML app\n/usr/bin/xmlsec1, you need to add the following configuration in seahub_settings.py:SAML_XMLSEC_BINARY_PATH = '/path/to/xmlsec1'\n$ which xmlsec1\n/opt/seafile/seahub-data/certs, you need to add the following configuration in seahub_settings.py:SAML_CERTS_DIR = '/path/to/certs'\nSingle Sign-On, and use the user assigned to SAML app to perform a SAML login test.
temp.adfs.com as the domain name example.demo.seafile.com as the domain name example.
/opt/seafile/seahub-data/certs).ENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n 'seafile_groups': ('', ), # Optional, set this attribute if you need to synchronize groups/departments.\n ...\n}\nSAML_REMOTE_METADATA_URL = 'https://temp.adfs.com/federationmetadata/2007-06/federationmetadata.xml' # The format of the ADFS federation metadata URL is: `https://{your ADFS domain name}/federationmetadata/2007-06/federationmetadata.xml`\n
https://example.com/saml2/metadata/, e.g.:
Seafile, under Notes type a description for this relying party trust, and then click Next.
Seafile Claim rule). Click the Outgoing claim type dropdown menu and select Name ID. Click the Outgoing name ID format dropdown menu and select Email. Then click Finish.Single Sign-On to perform an ADFS login test../seaf-gen-key.sh -h. It will print the following usage information: usage :\nseaf-gen-key.sh\n -p <file path to write key iv, default ./seaf-key.txt>\n[store_crypt]\nkey_path = <the key file path generated in previous section>\n
"},{"location":"deploy_pro/seaf_encrypt/#edit-config-files","title":"Edit Config Files","text":"cd seafile-server-latest\ncp -r conf conf-enc\nmkdir seafile-data-enc\ncp -r seafile-data/library-template seafile-data-enc\n# If you use SQLite database\ncp seafile-data/seafile.db seafile-data-enc/\n
"},{"location":"deploy_pro/seaf_encrypt/#migrate-the-data","title":"Migrate the Data","text":"[store_crypt]\nkey_path = <the key file path generated in previous section>\n./seaf-encrypt.sh -f ../conf-enc -e ../seafile-data-enc,Starting seaf-encrypt, please wait ...\n[04/26/16 06:59:40] seaf-encrypt.c(444): Start to encrypt 57 block among 12 repo.\n[04/26/16 06:59:40] seaf-encrypt.c(444): Start to encrypt 102 fs among 12 repo.\n[04/26/16 06:59:41] seaf-encrypt.c(454): Success encrypt all fs.\n[04/26/16 06:59:40] seaf-encrypt.c(444): Start to encrypt 66 commit among 12 repo.\n[04/26/16 06:59:41] seaf-encrypt.c(454): Success encrypt all commit.\n[04/26/16 06:59:41] seaf-encrypt.c(454): Success encrypt all block.\nseaf-encrypt run done\nDone.\nmv conf conf-bak\nmv seafile-data seafile-data-bak\nmv conf-enc conf\nmv seafile-data-enc seafile-data\n
"},{"location":"deploy_pro/seafile_professional_sdition_software_license_agreement/#3-no-derivative-works","title":"3. NO DERIVATIVE WORKS","text":"seafile-data folder) and user avatars as well as thumbnails (located in seahub-data folder) on NFS. Here we'll provide a tutorial about how and what to share.
/data/haiwen, after you run the setup script there should be seafile-data and seahub-data directories in it. Suppose you mount the NFS drive at /seafile-nfs; then follow these steps:
seafile-data and seahub-data folder to /seafile-nfs:mv /data/haiwen/seafile-data /seafile-nfs/\nmv /data/haiwen/seahub-data /seafile-nfs/\n
seafile-data and seahub-data folder cd /data/haiwen\nln -s /seafile-nfs/seafile-data /data/haiwen/seafile-data\nln -s /seafile-nfs/seahub-data /data/haiwen/seahub-data\nseafile-data and seahub-data folder. All other config files and log files will remain independent.
boto library. It's needed to access the S3 service.# Version 10.0 or earlier\nsudo pip install boto\n\n# Since version 11.0\nsudo pip install boto3\n
seafile.conf, add the following lines:[commit_object_backend]\nname = s3\nbucket = my-commit-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = eu-central-1\n\n[fs_object_backend]\nname = s3\nbucket = my-fs-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = eu-central-1\n\n[block_backend]\nname = s3\nbucket = my-block-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = eu-central-1\n
bucket: It's required to create separate buckets for commit, fs, and block objects. When creating your buckets on S3, please first read the S3 bucket naming rules. Note especially not to use UPPERCASE letters in bucket names (don't use camel-style names, such as MyCommitObjects).key_id and key: The key_id and key are required to authenticate you to S3. You can find the key_id and key in the \"security credentials\" section on your AWS account page.use_v4_signature: There are two versions of authentication protocols that can be used with S3 storage. Version 2 is the older one, which may still be supported by some regions; version 4 is the current one used by most regions. If you don't set this option, Seafile will use the v2 protocol. It's suggested to use the v4 protocol.aws_region: If you use the v4 protocol, set this option to the region you chose when you created the buckets. If it's not set and you're using the v4 protocol, Seafile will use us-east-1 as the default. This option will be ignored if you use the v2 protocol.
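The bucket-name restriction above (lowercase only, no camel-case names) can be sketched as a quick validity check. This is a simplified illustration of the rules, not the complete AWS rule set:

```python
import re

# Simplified S3 bucket-name check: 3-63 characters, lowercase letters,
# digits, dots and hyphens, starting and ending with a letter or digit.
# Illustrative only; see the official S3 naming rules for the full list.
BUCKET_RE = re.compile(r'^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$')

def is_valid_bucket_name(name):
    return bool(BUCKET_RE.match(name))
```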
"},{"location":"deploy_pro/setup_with_amazon_s3/#use-server-side-encryption-with-customer-provided-keys-sse-c","title":"Use server-side encryption with customer-provided keys (SSE-C)","text":"[s3]\nuse-sigv4 = True\n[commit_object_backend]\nname = s3\n......\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\n\n[fs_object_backend]\nname = s3\n......\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\n\n[block_backend]\nname = s3\n......\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\nssk_c_key is a 32-byte random string.seafile.conf, add the following lines:[commit_object_backend]\nname = s3\nbucket = my-commit-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\n# v2 authentication protocol will be used if not set\nuse_v4_signature = true\n# required for v4 protocol. ignored for v2 protocol.\naws_region = <region name for storage provider>\n\n[fs_object_backend]\nname = s3\nbucket = my-fs-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = <region name for storage provider>\n\n[block_backend]\nname = s3\nbucket = my-block-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = <region name for storage provider>\n
host: The endpoint by which you access the storage service. Usually it starts with the region name. It's required to provide the host address, otherwise Seafile will use AWS's address.bucket: It's required to create separate buckets for commit, fs, and block objects.key_id and key: The key_id and key are required to authenticate you to S3 storage.use_v4_signature: There are two versions of authentication protocols that can be used with S3 storage. Version 2 is the older one, which may still be supported by some cloud providers; version 4 is the current one used by Amazon S3 and is supported by most providers. If you don't set this option, Seafile will use v2 protocol. It's suggested to use v4 protocol.aws_region: If you use v4 protocol, set this option to the region you chose when you create the buckets. If it's not set and you're using v4 protocol, Seafile will use us-east-1 as the default. This option will be ignored if you use v2 protocol.
"},{"location":"deploy_pro/setup_with_amazon_s3/#self-hosted-s3-storage","title":"Self-hosted S3 Storage","text":"[s3]\nuse-sigv4 = True\n[commit_object_backend]\nname = s3\nbucket = my-commit-objects\nkey_id = your-key-id\nkey = your-secret-key\nhost = 192.168.1.123:8080\npath_style_request = true\n\n[fs_object_backend]\nname = s3\nbucket = my-fs-objects\nkey_id = your-key-id\nkey = your-secret-key\nhost = 192.168.1.123:8080\npath_style_request = true\n\n[block_backend]\nname = s3\nbucket = my-block-objects\nkey_id = your-key-id\nkey = your-secret-key\nhost = 192.168.1.123:8080\npath_style_request = true\n
host: It is the address and port of the S3-compatible service. You cannot prepend \"http\" or \"https\" to the host option. By default it'll use http connections. If you want to use https connections, please set the use_https = true option.bucket: It's required to create separate buckets for commit, fs, and block objects.key_id and key: The key_id and key are required to authenticate you to S3 storage.path_style_request: This option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is in virtual host style, such as https://bucketname.s3.amazonaws.com/object. But this style relies on an advanced DNS server setup, so most self-hosted storage systems only implement the path-style format. We therefore recommend setting this option to true.
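The difference between the two URL formats above can be sketched in a few lines. This is an illustrative helper, not Seafile's internal code:

```python
def object_url(host, bucket, obj, path_style=True, use_https=False):
    """Build an object URL in path style (typical for self-hosted S3)
    or virtual-host style (the AWS default). Illustrative only."""
    scheme = 'https' if use_https else 'http'
    if path_style:
        # e.g. http://192.168.1.123:8080/bucketname/object
        return f'{scheme}://{host}/{bucket}/{obj}'
    # e.g. https://bucketname.s3.amazonaws.com/object
    return f'{scheme}://{bucket}.{host}/{obj}'
```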
"},{"location":"deploy_pro/setup_with_amazon_s3/#use-https-connections-to-s3","title":"Use HTTPS connections to S3","text":"use_v4_signature: There are two versions of authentication protocols that can be used with S3 storage. Version 2 is the protocol supported by most self-hosted storage; version 4 is the current protocol used by AWS S3, but may not be supported by some self-hosted storage. If you don't set this option, Seafile will use v2 protocol. We recommend to use V2 first and if it doesn't work try V4.aws_region: If you use v4 protocol, set this option to the region you chose when you create the buckets. If it's not set and you're using v4 protocol, Seafile will use us-east-1 as the default. This option will be ignored if you use v2 protocol.[commit_object_backend]\nname = s3\n......\nuse_https = true\n\n[fs_object_backend]\nname = s3\n......\nuse_https = true\n\n[block_backend]\nname = s3\n......\nuse_https = true\nsudo mkdir -p /etc/pki/tls/certs\nsudo cp /etc/ssl/certs/ca-certificates.crt /etc/pki/tls/certs/ca-bundle.crt\nsudo ln -s /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/cert.pem\n./seafile.sh start and ./seahub.sh start and visit the website.
"},{"location":"deploy_pro/setup_with_ceph/#install-and-enable-memcached","title":"Install and enable memcached","text":"seafile-machine# sudo scp user@ceph-admin-node:/etc/ceph/ /etc\nsudo apt-get install python3-rados\nsudo apt-get install python-ceph\n
"},{"location":"deploy_pro/setup_with_ceph/#edit-seafile-configuration","title":"Edit seafile configuration","text":"sudo yum install python-rados\nseafile.conf, add the following lines:[block_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\npool = seafile-blocks\n\n[commit_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\npool = seafile-commits\n\n[fs_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\npool = seafile-fs\n
"},{"location":"deploy_pro/setup_with_ceph/#troubleshooting-librados-incompatibility-issues","title":"Troubleshooting librados incompatibility issues","text":"ceph-admin-node# rados mkpool seafile-blocks\nceph-admin-node# rados mkpool seafile-commits\nceph-admin-node# rados mkpool seafile-fs\n
"},{"location":"deploy_pro/setup_with_ceph/#use-arbitary-ceph-user","title":"Use arbitary Ceph user","text":"cd seafile-server-latest/seafile/lib\nrm librados.so.2 libstdc++.so.6 libnspr4.so\nceph_client_id option to seafile.conf, as the following:[block_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Sepcify Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-blocks\n\n[commit_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Sepcify Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-commits\n\n[fs_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Sepcify Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-fs\n\n# Memcached or Reids configs\n......\nceph auth add client.seafile \\\n mds 'allow' \\\n mon 'allow r' \\\n osd 'allow rwx pool=seafile-blocks, allow rwx pool=seafile-commits, allow rwx pool=seafile-fs'\n
"},{"location":"deploy_pro/setup_with_oss/","title":"Setup With Alibaba OSS","text":""},{"location":"deploy_pro/setup_with_oss/#prepare","title":"Prepare","text":"[client.seafile]\nkeyring = <path to user's keyring file>\n
"},{"location":"deploy_pro/setup_with_oss/#modify-seafileconf","title":"Modify Seafile.conf","text":"oss2 library: sudo pip install oss2==2.3.0.For more installation help, please refer to this document.seafile.conf, add the following lines:[commit_object_backend]\nname = oss\nbucket = <your-seafile-commits-bucket>\nkey_id = <your-key-id>\nkey = <your-key>\nregion = beijing\n\n[fs_object_backend]\nname = oss\nbucket = <your-seafile-fs-bucket>\nkey_id = <your-key-id>\nkey = <your-key>\nregion = beijing\n\n[block_backend]\nname = oss\nbucket = <your-seafile-blocks-bucket>\nkey_id = <your-key-id>\nkey = <your-key>\nregion = beijing\n[commit_object_backend]\nname = oss\nbucket = <your-seafile-commits-bucket>\nkey_id = <your-key-id>\nkey = <your-key>\nendpoint = vpc100-oss-cn-beijing.aliyuncs.com\n\n[fs_object_backend]\nname = oss\nbucket = <your-seafile-fs-bucket>\nkey_id = <your-key-id>\nkey = <your-key>\nendpoint = vpc100-oss-cn-beijing.aliyuncs.com\n\n[block_backend]\nname = oss\nbucket = <your-seafile-blocks-bucket>\nkey_id = <your-key-id>\nkey = <your-key>\nendpoint = vpc100-oss-cn-beijing.aliyuncs.com\nendpoint option to replace the region option. The corresponding endpoint address can be found at https://www.alibabacloud.com/help/en/object-storage-service/latest/regions-and-endpoints.endpoint is a general option, you can also set it to the OSS access address under the classic network, and it will work as well.
"},{"location":"deploy_pro/setup_with_swift/","title":"Setup With OpenStack Swift","text":"[commit_object_backend]\nname = oss\n......\nuse_https = true\n\n[fs_object_backend]\nname = oss\n......\nuse_https = true\n\n[block_backend]\nname = oss\n......\nuse_https = true\n
"},{"location":"deploy_pro/setup_with_swift/#modify-seafileconf","title":"Modify Seafile.conf","text":"seafile.conf, add the following lines:[block_backend]\nname = swift\ntenant = yourTenant\nuser_name = user\npassword = secret\ncontainer = seafile-blocks\nauth_host = 192.168.56.31:5000\nauth_ver = v3.0\nregion = yourRegion\n\n[commit_object_backend]\nname = swift\ntenant = yourTenant\nuser_name = user\npassword = secret\ncontainer = seafile-commits\nauth_host = 192.168.56.31:5000\nauth_ver = v3.0\nregion = yourRegion\n\n[fs_object_backend]\nname = swift\ntenant = yourTenant\nuser_name = user\npassword = secret\ncontainer = seafile-fs\nauth_host = 192.168.56.31:5000\nauth_ver = v3.0\nregion = yourRegion\nauth_host option is the address and port of Keystone service.The region option is used to select publicURL,if you don't configure it, use the first publicURL in returning authenticated information.auth_ver option should be set to v1.0, tenant and region are no longer needed.[commit_object_backend]\nname = swift\n......\nuse_https = true\n\n[fs_object_backend]\nname = swift\n......\nuse_https = true\n\n[block_backend]\nname = swift\n......\nuse_https = true\n
"},{"location":"deploy_pro/setup_with_swift/#run-and-test","title":"Run and Test","text":"sudo mkdir -p /etc/pki/tls/certs\nsudo cp /etc/ssl/certs/ca-certificates.crt /etc/pki/tls/certs/ca-bundle.crt\nsudo ln -s /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/cert.pem\n./seafile.sh start and ./seahub.sh start and visit the website.seahub_settings.py,ENABLE_TERMS_AND_CONDITIONS = True\nseafile.conf:[virus_scan]\nscan_command = (command for checking virus)\nvirus_code = (command exit codes when file is virus)\nnonvirus_code = (command exit codes when file is not virus)\nscan_interval = (scanning interval, in unit of minutes, default to 60 minutes)\n
[virus_scan]\nscan_command = clamscan\nvirus_code = 1\nnonvirus_code = 0\ncd seafile-server-latest\n./pro/pro.py virus_scan\nscan_command should be clamdscan in seafile.conf. An example for clamav-daemon is provided below:[virus_scan]\nscan_command = clamdscan\nvirus_code = 1\nnonvirus_code = 0\n[virus_scan]\n......\nscan_size_limit = (size limit for files to be scanned) # The unit is MB.\nscan_skip_ext = (a comma (',') separated list of file extensions to be ignored)\nthreads = (number of concurrent scanning threads, one thread per file, defaults to 4)\n.bmp, .gif, .ico, .png, .jpg, .mp3, .mp4, .wav, .avi, .rmvb, .mkv\nseahub_settings.py:ENABLE_UPLOAD_LINK_VIRUS_CHECK = True\nseafile.conf:
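The virus_code / nonvirus_code convention above maps a scanner's exit code to a scan result. A hypothetical helper illustrating that mapping (not Seafile's internal implementation):

```python
def classify_exit_code(rc, virus_codes=(1,), nonvirus_codes=(0,)):
    """Map a scanner's exit code to a result, following the virus_code /
    nonvirus_code semantics described above. clamscan, for example,
    exits 1 when a virus is found and 0 when the file is clean."""
    if rc in virus_codes:
        return 'virus'
    if rc in nonvirus_codes:
        return 'clean'
    return 'error'  # any other exit code means the scan failed
```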
"},{"location":"deploy_pro/virus_scan_with_kav4fs/","title":"Virus Scan with kav4fs","text":""},{"location":"deploy_pro/virus_scan_with_kav4fs/#prerequisite","title":"Prerequisite","text":"[fileserver]\ncheck_virus_on_web_upload = true\n
"},{"location":"deploy_pro/virus_scan_with_kav4fs/#script","title":"Script","text":"<user of running seafile server> ALL=(ALL:ALL) ALL\n<user of running seafile server> ALL=NOPASSWD: /opt/kaspersky/kav4fs/bin/kav4fs-control\nkav4fs_scan.sh:#!/bin/bash\n\nTEMP_LOG_FILE=`mktemp /tmp/XXXXXXXXXX`\nVIRUS_FOUND=1\nCLEAN=0\nUNDEFINED=2\nKAV4FS='/opt/kaspersky/kav4fs/bin/kav4fs-control'\nif [ ! -x $KAV4FS ]\nthen\n echo \"Binary not executable\"\n exit $UNDEFINED\nfi\n\nsudo $KAV4FS --scan-file \"$1\" > $TEMP_LOG_FILE\nif [ \"$?\" -ne 0 ]\nthen\n echo \"Error due to check file '$1'\"\n exit 3\nfi\nTHREATS_C=`grep 'Threats found:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nRISKWARE_C=`grep 'Riskware found:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nINFECTED=`grep 'Infected:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nSUSPICIOUS=`grep 'Suspicious:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nSCAN_ERRORS_C=`grep 'Scan errors:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nPASSWORD_PROTECTED=`grep 'Password protected:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nCORRUPTED=`grep 'Corrupted:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\n\nrm -f $TEMP_LOG_FILE\n\nif [ $THREATS_C -gt 0 -o $RISKWARE_C -gt 0 -o $INFECTED -gt 0 -o $SUSPICIOUS -gt 0 ]\nthen\n exit $VIRUS_FOUND\nelif [ $SCAN_ERRORS_C -gt 0 -o $PASSWORD_PROTECTED -gt 0 -o $CORRUPTED -gt 0 ]\nthen\n exit $UNDEFINED\nelse\n exit $CLEAN\nfi\nchmod u+x kav4fs_scan.sh\n
"},{"location":"deploy_pro/virus_scan_with_kav4fs/#configuration","title":"Configuration","text":"1: found virus\n0: no virus\nother: scan failed\nseafile.conf:
"},{"location":"develop/","title":"Develop Documents","text":"[virus_scan]\nscan_command = <absolute path of kav4fs_scan.sh>\nvirus_code = 1\nnonvirus_code = 0\nscan_interval = <scanning interval, in unit of minutes, default to 60 minutes>\n
"},{"location":"develop/data_model/","title":"Data Model","text":"Repo, Commit, FS, and Block.seafile_db database and the commit objects (see description in later section).seafile_db database containing important information about each repo.
"},{"location":"develop/data_model/#commit","title":"Commit","text":"Repo: contains the ID for each repo.RepoOwner: contains the owner id for each repo.RepoInfo: it is a \"cache\" table for fast access to repo metadata stored in the commit object. It includes repo name, update time, last modifier.RepoSize: the total size of all files in the repo.RepoFileCount: the file count in the repo.RepoHead: contains the \"head commit ID\". This ID points to the head commit in the storage, which will be described in the next section.RepoHead table contains the latest head commit ID for each repo. From this head commit, we can traverse the repo history.seafile-data/storage/commits/<repo_id>. If you use object storage, commit objects are stored in the commits bucket.SeafDir Object and Seafile Object. SeafDir Object represents a directory, and Seafile Object represents a file.SeafDir object contains metadata for each file/sub-folder, which includes name, last modification time, last modifier, size, and object ID. The object ID points to another SeafDir or Seafile object. The Seafile object contains a block list, which is a list of block IDs for the file.seafile-data/storage/fs/<repo_id>. If you use object storage, commit objects are stored in the fs bucket.seafile-data/storage/blocks/<repo_id>. If you use object storage, commit objects are stored in the blocks bucket.
fs and blocks storage location as its parent.commits storage location from its parent. The changes in a virtual repo and its parent repo will be bidirectionally merged, so that changes from each side can be seen from the other.VirtualRepo table in the seafile_db database. It contains the folder path in the parent repo for each virtual repo.
/locale/<lang-code>/LC_MESSAGES/django.po\u00a0 and \u00a0/locale/<lang-code>/LC_MESSAGES/djangojs.po/media/locales/<lang-code>/seafile-editor.json
/seafile-server-latest/seahub/locale/ru/LC_MESSAGES/django.po/seafile-server-latest/seahub/locale/ru/LC_MESSAGES/djangojs.po/seafile-server-latest/seahub/media/locales/ru/seafile-editor.json/seafile-server-latest/seahub/seahub/settings.py file and save it. LANGUAGES = (\n ...\n ('ru', '\u0420\u0443\u0441\u0441\u043a\u0438\u0439'),\n ...\n)\n/seafile-server-latest/seahub/locale/<lang-code>/LC_MESSAGES:
msgfmt -o django.mo django.pomsgfmt -o djangojs.mo djangojs.po
./seahub.sh python-env python3 seahub/manage.py compilejsi18n -l <lang-code>./seahub.sh python-env python3 seahub/manage.py collectstatic --noinput -i admin -i termsandconditions --no-post-process
"},{"location":"develop/translation/#faq","title":"FAQ","text":""},{"location":"develop/translation/#filenotfounderror","title":"FileNotFoundError","text":"FileNotFoundError occurred when executing the command manage.py collectstatic.FileNotFoundError: [Errno 2] No such file or directory: '/opt/seafile/seafile-server-latest/seahub/frontend/build'\n
STATICFILES_DIRS in /opt/seafile/seafile-server-latest/seahub/seahub/settings.py manuallysh ./seahub.sh python-env python3 seahub/manage.py collectstatic --noinput -i admin -i termsandconditions --no-post-processSTATICFILES_DIRS manuallysh ./seahub.sh restart
"},{"location":"develop/web_api_v2.1/#admin-only","title":"Admin Only","text":"
"},{"location":"docker/deploy_seafile_with_docker/","title":"Deploy Seafile with Docker","text":""},{"location":"docker/deploy_seafile_with_docker/#getting-started","title":"Getting started","text":"
"},{"location":"docker/deploy_seafile_with_docker/#install-docker","title":"Install docker","text":"/opt/seafile-data is the directory of Seafile. If you decide to put Seafile in a different directory \u2014 which you can \u2014 adjust all paths accordingly./opt/seafile-mysql and /opt/seafile-data, respectively. It is not recommended to change these paths. If you do, account for it when following these instructions..env","text":".env, seafile-server.yml and caddy.yml files for configuration.mkdir /opt/seafile\ncd /opt/seafile\n\n# Seafile CE 12.0\nwget -O .env https://manual.seafile.com/12.0/docker/docker-compose/ce/env\nwget https://manual.seafile.com/12.0/docker/docker-compose/ce/seafile-server.yml\nwget https://manual.seafile.com/12.0/docker/docker-compose/ce/caddy.yml\n\nnano .env\n
SEAFILE_VOLUME: The volume directory of Seafile data, default is /opt/seafile-dataSEAFILE_MYSQL_VOLUME: The volume directory of MySQL data, default is /opt/seafile-mysql/dbSEAFILE_CADDY_VOLUME: The volume directory of Caddy data used to store certificates obtained from Let's Encrypt, default is /opt/seafile-caddySEAFILE_MYSQL_ROOT_PASSWORD: The root password of MySQLSEAFILE_MYSQL_DB_USER: The user of MySQL (database - user can be found in conf/seafile.conf)SEAFILE_MYSQL_DB_PASSWORD: The password of the MySQL seafile userJWT: JWT_PRIVATE_KEY, a random string with a length of no less than 32 characters, generation example: pwgen -s 40 1SEAFILE_SERVER_HOSTNAME: Seafile server hostname or domainSEAFILE_SERVER_PROTOCOL: Seafile server protocol (http or https)TIME_ZONE: Time zone (default UTC)SEAFILE_ADMIN_EMAIL: Admin usernameSEAFILE_ADMIN_PASSWORD: Admin password# if `.env` file is in current directory:\ndocker compose up -d\n\n# if `.env` file is elsewhere:\ndocker compose -f /path/to/.env up -d\nhttp://seafile.example.com to open Seafile Web UI./opt/seafile-data","text":"
"},{"location":"docker/deploy_seafile_with_docker/#find-logs","title":"Find logs","text":"/opt/seafile-data/seafile/logs/seafile.log./var/log inside the container. For example, you can find the nginx logs in /opt/seafile-data/logs/var-log/nginx/.# if the `.env` file is in current directory:\ndocker compose logs --follow\n# if the `.env` file is elsewhere:\ndocker compose -f /path/to/.env logs --follow\n\n# you can also specify container name:\ndocker compose logs seafile --follow\n# or, if the `.env` file is elsewhere:\ndocker compose -f /path/to/.env logs seafile --follow\n/shared/logs/seafile in the docker, or /opt/seafile-data/logs/seafile in the server that run the docker./shared/logs/var-log, or /opt/seafile-data/logs/var-log in the server that run the docker.
"},{"location":"docker/deploy_seafile_with_docker/#more-configuration-options","title":"More configuration options","text":""},{"location":"docker/deploy_seafile_with_docker/#use-an-existing-mysql-server","title":"Use an existing mysql-server","text":"sudo tail -f $(find /opt/seafile-data/ -type f -name *.log 2>/dev/null)\n.env as followsSEAFILE_MYSQL_DB_HOST=192.168.0.2\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_ROOT_PASSWORD=ROOT_PASSWORD\nSEAFILE_MYSQL_DB_PASSWORD=PASSWORD\nSEAFILE_MYSQL_ROOT_PASSWORD is needed during installation. Later, after Seafile is installed, the user seafile will be used to connect to the mysql-server (SEAFILE_MYSQL_DB_PASSWORD). You can remove the SEAFILE_MYSQL_ROOT_PASSWORD./opt/seafile-data/seafile/conf. You can modify the configurations according to Seafile manual
"},{"location":"docker/deploy_seafile_with_docker/#add-a-new-admin","title":"Add a new admin","text":"docker compose restart\ndocker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh\nmy_init, Nginx are still run as root inside docker.)NON_ROOT=true to the .env.NON_ROOT=true\n/opt/seafile-data/seafile/ permissions.chmod -R a+rwx /opt/seafile-data/seafile/\ndocker compose down\ndocker compose up -d\nseafile user. (NOTE: Later, when doing maintenance, other scripts in docker are also required to be run as seafile user, e.g. su seafile -c ./seaf-gc.sh)/scripts folder of the docker container. To perform garbage collection, simply run docker exec seafile /scripts/gc.sh. For the community edition, this process will stop the seafile server, but it is a relatively quick process and the seafile server will start automatically once the process has finished. The Professional supports an online garbage collection.docker exec to find errors","text":"
"},{"location":"docker/deploy_seafile_with_docker/#about-ssl-and-caddy","title":"About SSL and Caddy","text":"docker exec -it seafile /bin/bash\nlucaslorentz/caddy-docker-proxy:2.9, which user only needs to correctly configure the following fields in .env to automatically complete the acquisition and update of the certificate:
"},{"location":"docker/non_docker_to_docker/","title":"Migrate from non-docker Seafile deployment to docker","text":"SEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=example.com\n
"},{"location":"docker/non_docker_to_docker/#prepare-mysql-and-the-folders-for-seafile-docker","title":"Prepare MySQL and the folders for Seafile docker","text":""},{"location":"docker/non_docker_to_docker/#add-permissions-to-the-local-mysql-seafile-user","title":"Add permissions to the local MySQL Seafile user","text":"systemctl stop nginx && systemctl disable nginx\nsystemctl stop memcached && systemctl disable memcached\n./seafile.sh stop && ./seahub.sh stop\nseafile as the user to access:
"},{"location":"docker/non_docker_to_docker/#create-the-required-directories-for-seafile-docker-image","title":"Create the required directories for Seafile Docker image","text":"## Note, change the password according to the actual password you use\nGRANT ALL PRIVILEGES ON *.* TO 'seafile'@'%' IDENTIFIED BY 'your-password' WITH GRANT OPTION;\n\n## Grant seafile user can connect the database from any IP address\nGRANT ALL PRIVILEGES ON `ccnet_db`.* to 'seafile'@'%';\nGRANT ALL PRIVILEGES ON `seafile_db`.* to 'seafile'@'%';\nGRANT ALL PRIVILEGES ON `seahub_db`.* to 'seafile'@'%';\n\n## Restart MySQL\nsystemctl restart mariadb\n
"},{"location":"docker/non_docker_to_docker/#prepare-config-files","title":"Prepare config files","text":"mkdir -p /opt/seafile-data/seafile\ncp -r /opt/seafile/conf /opt/seafile-data/seafile\ncp -r /opt/seafile/seahub-data /opt/seafile-data/seafile\n/opt/seafile-data/seafile/conf, including ccnet.conf, seafile.conf, seahub_settings, change HOST=127.0.0.1 to HOST=<local ip>.seahub_settings.py to use the Docker version of Memcached: change it to 'LOCATION': 'memcached:11211' (the network name of Docker version of Memcached is memcached)./opt/seafile-data. Comment out the db part as below:
"},{"location":"docker/non_docker_to_docker/#configure-seafile-docker-to-use-the-old-seafile-data","title":"Configure Seafile Docker to use the old seafile-data","text":"services:\n# db:\n# image: mariadb:10.5\n# container_name: seafile-mysql\n# environment:\n# - MYSQL_ROOT_PASSWORD=db_dev # Required, set the root's password of MySQL service.\n# - MYSQL_LOG_CONSOLE=true\n# volumes:\n# - /opt/seafile-mysql/db:/var/lib/mysql # Required, specifies the path to MySQL data persistent store.\n# networks:\n# - seafile-net\n\n.........\n depends_on:\n# - db \n - memcached\n.........\n/opt/seafile/seafile-data) to /opt/seafile-data/seafile (So you will have /opt/seafile-data/seafile/seafile-data)/opt/seafile/seafile-data) to Seafile docker container directly:.........\n\n seafile:\n image: seafileltd/seafile-mc:8.0.7-1\n container_name: seafile\n ports:\n - \"80:80\"\n# - \"443:443\" # If https is enabled, cancel the comment.\n volumes:\n - /opt/seafile-data:/shared\n - /opt/seafile/seafile-data:/shared/seafile/seafile-data\n .......\n- /opt/seafile/seafile-data:/shared/seafile/seafile-data mount /opt/seafile/seafile-data to /shared/seafile/seafile-data in docker.
"},{"location":"docker/non_docker_to_docker/#security","title":"Security","text":"cd /opt/seafile-data\ndocker compose up -d\n<local ip> you also need to bind your databaseserver to that IP. If this IP is public, it is strongly advised to protect your database port with a firewall. Otherwise your databases are reachable via internet. An alternative might be to start another local IP from RFC 1597 e.g. 192.168.123.45. Afterwards you can bind to that IP.iptables -A INPUT -s 172.16.0.0/12 -j ACCEPT #Allow Dockernetworks\niptables -A INPUT -p tcp -m tcp --dport 3306 -j DROP #Deny Internet\nip6tables -A INPUT -p tcp -m tcp --dport 3306 -j DROP #Deny Internet\n/etc/network/interfaces something like:iface eth0 inet static\n address 192.168.123.45/32\neth0 might be ensXY. Or if you know how to start a dummy interface, thats even better./etc/sysconfig/network/ifcfg-eth0 (ethXY/ensXY/bondXY)/etc/mysql/mariadb.conf.d/50-server.cnf edit the following line to:bind-address = 192.168.123.45\n
"},{"location":"docker/seafile_docker_autostart/","title":"Seafile Docker autostart","text":"service networking reload\nip a #to check whether the ip is present\nservice mysql restart\nss -tulpen | grep 3306 #to check whether the database listens on the correct IP\ncd /opt/seafile-data/\ndocker compose down\ndocker compose up -d\n\n## restart your applications\n
vim /etc/systemd/system/docker-compose.service[Unit]\nDescription=Docker Compose Application Service\nRequires=docker.service\nAfter=docker.service\n\n[Service]\nType=forking\nRemainAfterExit=yes\nWorkingDirectory=/opt/ \nExecStart=/usr/bin/docker compose up -d\nExecStop=/usr/bin/docker compose down\nTimeoutStartSec=0\n\n[Install]\nWantedBy=multi-user.target\nWorkingDirectory is the absolute path to the docker-compose.yml file directory.
chmod 644 /etc/systemd/system/docker-compose.service\n
"},{"location":"docker/seafile_docker_autostart/#method-2","title":"Method 2","text":"systemctl daemon-reload\nsystemctl enable docker-compose.service\nrestart: unless-stopped for each container in docker-compose.yml.services:\n db:\n image: mariadb:10.11\n container_name: seafile-mysql-1\n restart: unless-stopped\n\n memcached:\n image: memcached:1.6.18\n container_name: seafile-memcached\n restart: unless-stopped\n\n elasticsearch:\n image: elasticsearch:8.6.2\n container_name: seafile-elasticsearch\n restart: unless-stopped\n\n seafile:\n image: docker.seadrive.org/seafileltd/seafile-pro-mc:11.0-latest\n container_name: seafile\n restart: unless-stopped\nrestart: unless-stopped, and the Seafile container will automatically start when Docker starts. If the Seafile container does not exist (execute docker compose down), the container will not start automatically.
"},{"location":"docker/cluster/deploy_seafile_cluster_with_docker/","title":"Seafile Docker Cluster Deployment","text":"SSL configuration$ mysql -h{your mysql host} -u[username] -p[password]\n\nmysql>\ncreate user 'seafile'@'%' identified by 'PASSWORD';\n\ncreate database `ccnet_db` character set = 'utf8';\ncreate database `seafile_db` character set = 'utf8';\ncreate database `seahub_db` character set = 'utf8';\n\nGRANT ALL PRIVILEGES ON `ccnet_db`.* to 'seafile'@'%';\nGRANT ALL PRIVILEGES ON `seafile_db`.* to 'seafile'@'%';\nGRANT ALL PRIVILEGES ON `seahub_db`.* to 'seafile'@'%';\n
"},{"location":"docker/cluster/deploy_seafile_cluster_with_docker/#deploy-seafile-service","title":"Deploy Seafile service","text":""},{"location":"docker/cluster/deploy_seafile_cluster_with_docker/#deploy-seafile-frontend-nodes","title":"Deploy seafile frontend nodes","text":"mysql>\nuse seahub_db;\nCREATE TABLE `avatar_uploaded` (\n `filename` text NOT NULL,\n `filename_md5` char(32) NOT NULL,\n `data` mediumtext NOT NULL,\n `size` int(11) NOT NULL,\n `mtime` datetime NOT NULL,\n PRIMARY KEY (`filename_md5`)\n) ENGINE=InnoDB DEFAULT CHARSET=utf8;\n$ mkdir -p /opt/seafile/shared\n$ cd /opt/seafile\n$ vim docker-compose.yml\nservices:\n seafile:\n image: docker.seadrive.org/seafileltd/seafile-pro-mc:latest\n container_name: seafile\n ports:\n - 80:80\n volumes:\n - /opt/seafile/shared:/shared\n environment:\n - CLUSTER_SERVER=true\n - CLUSTER_MODE=frontend\n - TIME_ZONE=UTC # Optional, default is UTC. Should be uncomment and set to your local time zone.\n
"},{"location":"docker/cluster/deploy_seafile_cluster_with_docker/#initial-configuration-files","title":"Initial configuration files","text":"$ cd /opt/seafile\n$ docker compose up -d\n$ docker exec -it seafile bash\n\n# cd /scripts && ./cluster_conf_init.py\n# cd /opt/seafile/conf \nCACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': 'memcached:11211',\n },\n...\n}\n |\n v\n\nCACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '{you memcached server host}:11211',\n },\n...\n}\n[INDEX FILES]\nes_port = {your elasticsearch server port}\nes_host = {your elasticsearch server host}\nexternal_es_server = true\nenabled = true\nhighlight = fvh\ninterval = 10m\n...\nSERVICE_URL = 'http{s}://{your server IP or sitename}/'\nFILE_SERVER_ROOT = 'http{s}://{your server IP or sitename}/seafhttp'\nAVATAR_FILE_STORAGE = 'seahub.base.database_storage.DatabaseStorage'\n[cluster]\nenabled = true\n
"},{"location":"docker/cluster/deploy_seafile_cluster_with_docker/#import-the-tables-of-seahub_db-seafile_db-and-ccnet_db","title":"Import the tables of seahub_db, seafile_db and ccnet_db","text":"[memcached]\nmemcached_options = --SERVER={you memcached server host} --POOL-MIN=10 --POOL-MAX=100\n$ docker exec -it seafile bash\n\n# apt-get update && apt-get install -y mysql-client\n\n# mysql -h{your mysql host} -u[username] -p[password] ccnet_db < /opt/seafile/seafile-server-latest/sql/mysql/ccnet.sql\n# mysql -h{your mysql host} -u[username] -p[password] seafile_db < /opt/seafile/seafile-server-latest/sql/mysql/seafile.sql\n# mysql -h{your mysql host} -u[username] -p[password] seahub_db < /opt/seafile/seafile-server-latest/seahub/sql/mysql.sql\n$ docker exec -it seafile bash\n\n# cd /opt/seafile/seafile-server-latest\n# ./seafile.sh start && ./seahub.sh start\n$ mkdir -p /opt/seafile/shared\n$ cd /opt/seafile\n$ vim docker-compose.yml\nservices:\n seafile:\n image: docker.seadrive.org/seafileltd/seafile-pro-mc:latest\n container_name: seafile\n ports:\n - 80:80\n volumes:\n - /opt/seafile/shared:/shared \n environment:\n - CLUSTER_SERVER=true\n - CLUSTER_MODE=backend\n - TIME_ZONE=UTC # Optional, default is UTC. Should be uncomment and set to your local time zone.\n$ cd /opt/seafile\n$ docker compose up -d\n
"},{"location":"docker/cluster/deploy_seafile_cluster_with_docker/#use-s3-as-backend-storage","title":"Use S3 as backend storage","text":"$ docker exec -it seafile bash\n\n# cd /opt/seafile/seafile-server-latest\n# ./seafile.sh start && ./seafile-background-tasks.sh start\n
"},{"location":"docker/cluster/deploy_seafile_cluster_with_docker/#deployment-load-balance-optional","title":"Deployment load balance (Optional)","text":""},{"location":"docker/cluster/deploy_seafile_cluster_with_docker/#install-haproxy-and-keepalived-services","title":"Install HAproxy and Keepalived services","text":"[commit_object_backend]\nname = s3\nbucket = {your-commit-objects} # The bucket name can only use lowercase letters, numbers, and dashes\nkey_id = {your-key-id}\nkey = {your-secret-key}\nuse_v4_signature = true\naws_region = eu-central-1 # eu-central-1 for Frankfurt region\n\n[fs_object_backend]\nname = s3\nbucket = {your-fs-objects}\nkey_id = {your-key-id}\nkey = {your-secret-key}\nuse_v4_signature = true\naws_region = eu-central-1\n\n[block_backend]\nname = s3\nbucket = {your-block-objects}\nkey_id = {your-key-id}\nkey = {your-secret-key}\nuse_v4_signature = true\naws_region = eu-central-1\n$ apt install haproxy keepalived -y\n\n$ mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak\n\n$ cat > /etc/haproxy/haproxy.cfg << 'EOF'\nglobal\n log 127.0.0.1 local1 notice\n maxconn 4096\n user haproxy\n group haproxy\n\ndefaults\n log global\n mode http\n retries 3\n timeout connect 10000\n timeout client 300000\n timeout server 300000\n\nlisten seafile 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n cookie SERVERID insert indirect nocache\n server seafile01 Front-End01-IP:8001 check port 11001 cookie seafile01\n server seafile02 Front-End02-IP:8001 check port 11001 cookie seafile02\nEOF\n$ cat > /etc/keepalived/keepalived.conf << 'EOF'\n! 
Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node1\n vrrp_mcast_group4 224.0.100.18\n}\n\nvrrp_instance VI_1 {\n state MASTER\n interface eno1 # Set to the device name of a valid network interface on the current server, and the virtual IP will be bound to the network interface\n virtual_router_id 50\n priority 100\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass seafile123\n }\n virtual_ipaddress {\n 172.26.154.45/24 dev eno1 # Configure to the correct virtual IP and network interface device name\n }\n}\nEOF\n$ cat > /etc/keepalived/keepalived.conf << 'EOF'\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node2\n vrrp_mcast_group4 224.0.100.18\n}\n\nvrrp_instance VI_1 {\n state BACKUP\n interface eno1 # Set to the device name of a valid network interface on the current server, and the virtual IP will be bound to the network interface\n virtual_router_id 50\n priority 98\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass seafile123\n }\n virtual_ipaddress {\n 172.26.154.45/24 dev eno1 # Configure to the correct virtual IP and network interface device name\n }\n}\nEOF\n$ systemctl enable --now haproxy\n$ systemctl enable --now keepalived\n
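The heredocs above write the HAProxy and Keepalived configs directly into /etc. A small sketch (using the same placeholder backend names as above) stages the seafile listen block in a scratch file and checks that both frontend nodes are present before it is installed:

```shell
# Stage the HAProxy backend block in a temp file and verify both
# frontend nodes (placeholder IPs, as in the config above) are listed
# before copying it into /etc/haproxy/haproxy.cfg.
TMPCFG=$(mktemp)
cat > "$TMPCFG" << 'EOF'
listen seafile 0.0.0.0:80
    mode http
    cookie SERVERID insert indirect nocache
    server seafile01 Front-End01-IP:8001 check port 11001 cookie seafile01
    server seafile02 Front-End02-IP:8001 check port 11001 cookie seafile02
EOF
grep -c '^ *server ' "$TMPCFG"   # prints 2, one per frontend node
```

`haproxy -c -f /etc/haproxy/haproxy.cfg` can then validate the full file before `systemctl enable --now haproxy`.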
"},{"location":"docker/pro-edition/deploy_onlyoffice_with_docker/#initialize-onlyoffice-local-configuration-file","title":"Initialize OnlyOffice local configuration file","text":"services:\n ...\n\n oods:\n image: onlyoffice/documentserver:latest\n container_name: seafile-oods\n networks:\n - seafile-net\n environment:\n - JWT_ENABLED=true\n - JWT_SECRET=your-secret-string\nmkdir -p /opt/seafile-oods/DocumentServer/\nvim /opt/seafile-oods/DocumentServer/local-production-linux.json\n
"},{"location":"docker/pro-edition/deploy_onlyoffice_with_docker/#add-onlyoffice-to-nginx-conf","title":"Add OnlyOffice to nginx conf","text":"{\n \"services\": {\n \"CoAuthoring\": {\n \"autoAssembly\": {\n \"enable\": true,\n \"interval\": \"5m\"\n }\n }\n },\n \"FileConverter\": {\n \"converter\": {\n \"downloadAttemptMaxCount\": 1\n }\n }\n}\n
"},{"location":"docker/pro-edition/deploy_onlyoffice_with_docker/#modify-seahub_settingspy","title":"Modify seahub_settings.py","text":"# Required for only office document server\nmap $http_x_forwarded_proto $the_scheme {\n default $http_x_forwarded_proto;\n \"\" $scheme;\n}\nmap $http_x_forwarded_host $the_host {\n default $http_x_forwarded_host;\n \"\" $host;\n}\nmap $http_upgrade $proxy_connection {\n default upgrade;\n \"\" close;\n}\nserver {\n listen 80;\n ...\n}\n\nserver {\n listen 443 ssl;\n ...\n\n location /onlyofficeds/ {\n proxy_pass http://oods/;\n proxy_http_version 1.1;\n client_max_body_size 100M;\n proxy_read_timeout 3600s;\n proxy_connect_timeout 3600s;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection $proxy_connection;\n proxy_set_header X-Forwarded-Host $the_host/onlyofficeds;\n proxy_set_header X-Forwarded-Proto $the_scheme;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n }\n}\n
"},{"location":"docker/pro-edition/deploy_onlyoffice_with_docker/#restart-docker-container","title":"Restart docker container","text":"# OnlyOffice\nENABLE_ONLYOFFICE = True\nVERIFY_ONLYOFFICE_CERTIFICATE = True\nONLYOFFICE_APIJS_URL = 'http://<your-seafile-doamin>/onlyofficeds/web-apps/apps/api/documents/api.js'\nONLYOFFICE_FILE_EXTENSION = ('doc', 'docx', 'ppt', 'pptx', 'xls', 'xlsx', 'odt', 'fodt', 'odp', 'fodp', 'ods', 'fods')\nONLYOFFICE_EDIT_FILE_EXTENSION = ('docx', 'pptx', 'xlsx')\nONLYOFFICE_JWT_SECRET = 'your-secret-string'\ndocker compose down\ndocker compose up -d \nsysctl -w vm.max_map_count=262144 #run as root\nnano /etc/sysctl.conf\n\n# modify vm.max_map_count\nvm.max_map_count=262144\n
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#installing-docker","title":"Installing Docker","text":"/opt/seafile-data is the directory of Seafile. If you decide to put Seafile in a different directory - which you can - adjust all paths accordingly.docker login docker.seadrive.org\ndocker pull docker.seadrive.org/seafileltd/seafile-pro-mc:12.0-latest\n.env","text":".env, seafile-server.yml and caddy.yml files for configuration.mkdir /opt/seafile\ncd /opt/seafile\n\n# Seafile PE 12.0\nwget -O .env https://manual.seafile.com/12.0/docker/docker-compose/pro/env\nwget https://manual.seafile.com/12.0/docker/docker-compose/pro/seafile-server.yml\nwget https://manual.seafile.com/12.0/docker/docker-compose/pro/caddy.yml\n\nnano .env\n
SEAFILE_VOLUME: The volume directory of Seafile data, default is /opt/seafile-dataSEAFILE_MYSQL_VOLUME: The volume directory of MySQL data, default is /opt/seafile-mysql/dbSEAFILE_CADDY_VOLUME: The volume directory of Caddy data used to store certificates obtained from Let's Encrypt, default is /opt/seafile-caddySEAFILE_ELASTICSEARCH_VOLUME: The volume directory of Elasticsearch data, default is /opt/seafile-elasticsearch/dataSEAFILE_MYSQL_ROOT_PASSWORD: The root password of MySQLSEAFILE_MYSQL_DB_USER: The user of MySQL (database - user can be found in conf/seafile.conf)SEAFILE_MYSQL_DB_PASSWORD: The password of the MySQL user seafileJWT_PRIVATE_KEY: A random string of no fewer than 32 characters, e.g. generated with pwgen -s 40 1SEAFILE_SERVER_HOSTNAME: Seafile server hostname or domainSEAFILE_SERVER_PROTOCOL: Seafile server protocol (http or https)TIME_ZONE: Time zone (default UTC)SEAFILE_ADMIN_EMAIL: Admin usernameSEAFILE_ADMIN_PASSWORD: Admin password
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#starting-the-docker-containers","title":"Starting the Docker Containers","text":"mkdir -p /opt/seafile-elasticsearch/data\nchmod 777 -R /opt/seafile-elasticsearch/data\ndocker compose up -d\n.env.docker compose logs -f\n/shared/logs/seafile in the docker, or /opt/seafile-data/logs/seafile in the server that run the docker./shared/logs/var-log, or /opt/seafile-data/logs/var-log in the server that run the docker.seafile-license.txt license file, simply put it in the volume of the Seafile container. The volumne's default path in the Compose file is /opt/seafile-data. If you have modified the path, save the license file under your custom path.
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#seafile-directory-structure","title":"Seafile directory structure","text":""},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#optseafile-data","title":"docker compose down\n\ndocker compose up -d\n/opt/seafile-data","text":"
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#reviewing-the-deployment","title":"Reviewing the Deployment","text":"/opt/seafile-data/seafile/logs/seafile.log./var/log inside the container. For example, you can find the nginx logs in /opt/seafile-data/logs/var-log/nginx/.docker container list should list the containers specified in the .env.$ tree /opt/seafile-data -L 2\n/opt/seafile-data\n\u251c\u2500\u2500 logs\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 var-log\n\u251c\u2500\u2500 nginx\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 conf\n\u2514\u2500\u2500 seafile\n \u00a0\u00a0 \u251c\u2500\u2500 ccnet\n \u00a0\u00a0 \u251c\u2500\u2500 conf\n \u00a0\u00a0 \u251c\u2500\u2500 logs\n \u00a0\u00a0 \u251c\u2500\u2500 pro-data\n \u00a0\u00a0 \u251c\u2500\u2500 seafile-data\n \u00a0\u00a0 \u2514\u2500\u2500 seahub-data\n/opt/seafile-data/seafile/conf. The nginx config file is in /opt/seafile-data/nginx/conf.docker compose restart\n/opt/seafile-data/seafile/logs whereas all other log files are in /opt/seafile-data/logs/var-log..env as followsSEAFILE_MYSQL_DB_HOST=192.168.0.2\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_ROOT_PASSWORD=ROOT_PASSWORD\nSEAFILE_MYSQL_DB_PASSWORD=PASSWORD\nSEAFILE_MYSQL_ROOT_PASSWORD is needed during installation. Later, after Seafile is installed, the user seafile will be used to connect to the mysql-server (SEAFILE_MYSQL_DB_PASSWORD). You can remove the SEAFILE_MYSQL_ROOT_PASSWORD.my_init, Nginx are still run as root inside docker.)NON_ROOT=true to the .env.NON_ROOT=true\n/opt/seafile-data/seafile/ permissions.chmod -R a+rwx /opt/seafile-data/seafile/\ndocker compose down\ndocker compose up -d\nseafile user. (NOTE: Later, when doing maintenance, other scripts in docker are also required to be run as seafile user, e.g. su seafile -c ./seaf-gc.sh)/scripts folder of the docker container. To perform garbage collection, simply run docker exec seafile /scripts/gc.sh. 
For the community edition, this process will stop the Seafile server, but it is relatively quick and the server will start again automatically once it has finished. The Professional edition supports online garbage collection..env
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#clamav-with-docker","title":"Clamav with Docker","text":".env
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#other-functions","title":"Other functions","text":""},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#ldapad-integration-for-pro","title":"LDAP/AD Integration for Pro","text":"
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#s3openswiftceph-storage-backends","title":"S3/OpenSwift/Ceph Storage Backends","text":"
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#online-file-preview-and-editing","title":"Online File Preview and Editing","text":"
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#advanced-user-management","title":"Advanced User Management","text":"
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#advanced-authentication","title":"Advanced Authentication","text":"
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#admin-tools","title":"Admin Tools","text":"
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#faq","title":"FAQ","text":"docker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh\ndocker compose logs -f.docker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh.
"},{"location":"docker/pro-edition/migrate_ce_to_pro_with_docker/#migrate","title":"Migrate","text":""},{"location":"docker/pro-edition/migrate_ce_to_pro_with_docker/#stop-the-seafile-ce","title":"Stop the Seafile CE","text":"# Seafile PE 10.0\nwget -O \"docker-compose.yml\" \"https://manual.seafile.com/docker/docker-compose/pro/10.0/docker-compose.yml\"\n\n# Seafile PE 11.0\nwget -O \"docker-compose.yml\" \"https://manual.seafile.com/docker/docker-compose/pro/11.0/docker-compose.yml\"\ndocker compose down\nseafile-license.txt to the volume directory of the Seafile CE's data. If the directory is /opt/seafile-data, so you should put it in the /opt/seafile-data/seafile/.docker-compose.yml file with the new docker-compose.yml file and modify its configuration based on your actual situation:
"},{"location":"docker/pro-edition/migrate_ce_to_pro_with_docker/#do-the-migration","title":"Do the migration","text":"/opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data;docker compose up\ndocker exec -it seafile /opt/seafile/seafile-server-latest/pro/pro.py setup --migrate\nexternal_es_server, es_host, es_port in /opt/seafile-data/seafile/conf/seafevents.conf manually.[INDEX FILES]\nexternal_es_server = true\nes_host = elasticsearch\nes_port = 9200\nenabled = true\ninterval = 10m\ndocker restart seafile\nSeaf-fuse is an implementation of the FUSE virtual filesystem. In a word, it mounts all the seafile files to a folder (which is called the '''mount point'''), so that you can access all the files managed by seafile server, just as you access a normal folder on your server.
"},{"location":"extension/fuse/#use-seaf-fuse-in-binary-based-deployment","title":"Use seaf-fuse in binary based deployment","text":"/data/seafile-fuse.
"},{"location":"extension/fuse/#start-seaf-fuse-with-the-script","title":"Start seaf-fuse with the script","text":"mkdir -p /data/seafile-fuse\n./seafile.sh start../seaf-fuse.sh start /data/seafile-fuse\n./seaf-fuse.sh start -o uid=<uid> /data/seafile-fuse\n./seaf-fuse.sh start --disable-block-cache /data/seafile-fuse\nman fuse.
"},{"location":"extension/fuse/#contents-of-the-mounted-folder","title":"Contents of the mounted folder","text":""},{"location":"extension/fuse/#the-top-level-folder","title":"The top level folder","text":"./seaf-fuse.sh stop\n/data/seafile-fuse.$ ls -lhp /data/seafile-fuse\n\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 abc@abc.com/\ndrwxr-xr-x 2 root root 4.0K Jan 4 2015 foo@foo.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 plus@plus.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 sharp@sharp.com/\ndrwxr-xr-x 2 root root 4.0K Jan 3 2015 test@test.com/\n
"},{"location":"extension/fuse/#the-folder-for-each-user","title":"The folder for each user","text":"$ ls -lhp /data/seafile-fuse/abc@abc.com\n\ndrwxr-xr-x 2 root root 924 Jan 1 1970 5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\ndrwxr-xr-x 2 root root 1.6K Jan 1 1970 a09ab9fc-7bd0-49f1-929d-6abeb8491397_My Notes/\n
"},{"location":"extension/fuse/#if-you-get-a-permission-denied-error","title":"If you get a \"Permission denied\" error","text":"$ ls -lhp /data/seafile-fuse/abc@abc.com/5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\n\n-rw-r--r-- 1 root root 501K Jan 1 2015 image.png\n-rw-r--r-- 1 root root 501K Jan 1 2015 sample.jpng\n./seaf-fuse.sh start, most likely you are not in the \"fuse group\". You should:
sudo usermod -a -G fuse <your-user-name>\n
"},{"location":"extension/fuse/#use-seaf-fuse-in-docker-based-deployment","title":"Use seaf-fuse in Docker based deployment","text":"./seaf-fuse.sh start <path>again./data/seafile-fuse in host.
"},{"location":"extension/fuse/#start-seaf-fuse-with-the-script-in-docker","title":"Start seaf-fuse with the script in docker","text":" seafile:\n ...\n volumes:\n ...\n - type: bind\n source: /data/seafile-fuse\n target: /seafile-fuse\n bind:\n propagation: rshared\n privileged: true\n cap_add:\n - SYS_ADMIN\ndocker compose up -d\n\ndocker exec -it seafile bash\n
"},{"location":"extension/webdav/","title":"WebDAV extension","text":"cd /opt/seafile/seafile-server-latest/\n\n./seaf-fuse.sh start /seafile-fuse\n/opt/seafile./opt/seafile/conf/seafdav.conf. If it is not created already, you can just create the file.[WEBDAV]\n\n# Default is false. Change it to true to enable SeafDAV server.\nenabled = true\n\nport = 8080\ndebug = true\n\n# If you deploy seafdav behind nginx/apache, you need to modify \"share_name\".\nshare_name = /seafdav\n\n# SeafDAV uses Gunicorn as web server.\n# This option maps to Gunicorn's 'workers' setting. https://docs.gunicorn.org/en/stable/settings.html?#workers\n# By default it's set to 5 processes.\nworkers = 5\n\n# This option maps to Gunicorn's 'timeout' setting. https://docs.gunicorn.org/en/stable/settings.html?#timeout\n# By default it's set to 1200 seconds, to support large file uploads.\ntimeout = 1200\n./seafile.sh restart\nhttp://example.com:8080/seafdav
"},{"location":"extension/webdav/#proxy-with-nginx","title":"Proxy with Nginx","text":"show_repo_id=true\n
"},{"location":"extension/webdav/#proxy-with-apache","title":"Proxy with Apache","text":".....\n\n location /seafdav {\n rewrite ^/seafdav$ /seafdav/ permanent;\n }\n\n location /seafdav/ {\n proxy_pass http://127.0.0.1:8080/seafdav/;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_read_timeout 1200s;\n client_max_body_size 0;\n\ufeff\n access_log /var/log/nginx/seafdav.access.log seafileformat;\n error_log /var/log/nginx/seafdav.error.log;\n }\n\n location /:dir_browser {\n proxy_pass http://127.0.0.1:8080/:dir_browser;\n }\n
"},{"location":"extension/webdav/#notes-on-clients","title":"Notes on Clients","text":"......\n <Location /seafdav>\n ProxyPass \"http://127.0.0.1:8080/seafdav\"\n </Location>\n
sudo apt-get install davfs2\nsudo mount -t davfs -o uid=<username> https://example.com/seafdav /media/seafdav/\n
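To make the davfs mount persistent across reboots, an /etc/fstab entry can be added. This sketch writes the entry to a scratch file (the URL and mount point are the examples above); on a real system, append the same line to /etc/fstab with sudo:

```shell
# Stage a persistent davfs entry (scratch file here; on a real system,
# append the line to /etc/fstab and mount with `mount /media/seafdav`).
FSTAB=$(mktemp)
echo 'https://example.com/seafdav /media/seafdav davfs user,noauto 0 0' >> "$FSTAB"
grep davfs "$FSTAB"
```

Credentials then go in ~/.davfs2/secrets so the mount does not prompt interactively.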
"},{"location":"extension/webdav/#mac-os-x","title":"Mac OS X","text":" use_locks 0\nenabled = true in seafdav.conf. If not, modify it and restart seafile server.share_name as the sample configuration above. Restart your seafile server and try again.seafdav.log to see if there is log like the following.\"MOVE ... -> 502 Bad Gateway\n09:47:06.533 - DEBUG : Raising DAVError 502 Bad Gateway: Source and destination must have the same scheme.\nIf you are running behind a reverse proxy, you may have to rewrite the 'Destination' header.\n(See https://github.com/mar10/wsgidav/issues/183)\n\n09:47:06.533 - DEBUG : Caught (502, \"Source and destination must have the same scheme.\\nIf you are running behind a reverse proxy, you may have to rewrite the 'Destination' header.\\n(See https://github.com/mar10/wsgidav/issues/183)\")\nHTTP_X_FORWARDED_PROTO value in the request received by Seafile not being HTTPS.HTTP_X_FORWARDED_PROTO. For example, in nginx, changeproxy_set_header X-Forwarded-Proto $scheme;\n
"},{"location":"extension/webdav/#windows-explorer-reports-file-size-exceeds-the-limit-allowed-and-cannot-be-saved","title":"Windows Explorer reports \"file size exceeds the limit allowed and cannot be saved\"","text":"proxy_set_header X-Forwarded-Proto https;\nFileSizeLimitInBytes under HKEY_LOCAL_MACHINE -> SYSTEM -> CurrentControlSet -> Services -> WebClient -> Parameters.
"},{"location":"extra_setup/setup_seadoc/#architecture","title":"Architecture","text":"
"},{"location":"extra_setup/setup_seadoc/#setup-seadoc","title":"Setup SeaDoc","text":"
"},{"location":"extra_setup/setup_seadoc/#deploy-seadoc-on-a-new-host","title":"Deploy SeaDoc on a new host","text":""},{"location":"extra_setup/setup_seadoc/#download-and-modify-seadoc-docker-composeyml","title":"Download and modify SeaDoc docker-compose.yml","text":"
"},{"location":"extra_setup/setup_seadoc/#create-the-seadoc-database-manually","title":"Create the SeaDoc database manually","text":"DB_HOST: MySQL hostDB_PORT: MySQL portDB_USER: MySQL userDB_PASSWD: MySQL passwordvolumes: The volume directory of SeaDoc dataSDOC_SERVER_HOSTNAME: SeaDoc service URLSEAHUB_SERVICE_URL: Seafile service URLcreate database if not exists sdoc_db charset utf8mb4;\nGRANT ALL PRIVILEGES ON `sdoc_db`.* to `seafile`@`%.%.%.%`;\n# for community edition\nwget https://manual.seafile.com/12.0/docker/docker-compose/ce/seadoc.yml\n\n# for pro edition\nwget https://manual.seafile.com/12.0/docker/docker-compose/pro/seadoc.yml\n.env, and insert seadoc.yml into COMPOSE_FILE, and enable SeaDoc server
"},{"location":"extra_setup/setup_seadoc/#create-the-seadoc-database-manually_1","title":"Create the SeaDoc database manually","text":"COMPOSE_FILE='seafile-server.yml,caddy.yml,seadoc.yml'\n\nENABLE_SEADOC=false\nSEADOC_SERVER_URL=https://example.seafile.com/sdoc-server\ncreate database if not exists sdoc_db charset utf8mb4;\nGRANT ALL PRIVILEGES ON `sdoc_db`.* to `seafile`@`%.%.%.%`;\ndocker compose up -d\n/opt/seadoc-data","text":"
"},{"location":"extra_setup/setup_seadoc/#faq","title":"FAQ","text":""},{"location":"extra_setup/setup_seadoc/#about-ssl","title":"About SSL","text":"lucaslorentz/caddy-docker-proxy:2.9, where you only need to configure the following fields in .env correctly for the certificate to be obtained and renewed automatically:
"},{"location":"maintain/","title":"Administration","text":""},{"location":"maintain/#enter-the-admin-panel","title":"Enter the admin panel","text":"SEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=example.com\nSystem Admin in the popup of avatar.
"},{"location":"maintain/#logs","title":"Logs","text":"
"},{"location":"maintain/#backup-and-recovery","title":"Backup and Recovery","text":"
"},{"location":"maintain/#clean-database","title":"Clean database","text":"
"},{"location":"maintain/#export-report","title":"Export report","text":"
"},{"location":"maintain/account/","title":"Account Management","text":""},{"location":"maintain/account/#user-management","title":"User Management","text":"social_auth_usersocialauth to map the new external ID to internal ID.reset-admin.sh script under the seafile-server directory. This script helps you reset the admin account and password. No data is deleted from the admin account; this only unlocks the account and changes its password../seahub.sh python-env python seahub/manage.py check_user_quota, when a user's quota exceeds 90%, an email will be sent. To enable this, you first have to set up notification email.
/opt/seafile\n --seafile-server-9.0.x # untar from seafile package\n --seafile-data # seafile configuration and data (if you choose the default)\n --seahub-data # seahub data\n --logs\n --conf\n
"},{"location":"maintain/backup_recovery/#backup-steps","title":"Backup steps","text":"
"},{"location":"maintain/backup_recovery/#backup-order-database-first-or-data-directory-first","title":"Backup Order: Database First or Data Directory First","text":"
/opt/seafile for binary package based deployment (or /opt/seafile-data for docker based deployment). Suppose you want to back up to the /backup directory. /backup can be an NFS or Windows share mounted from another machine, or just an external disk. You can create a layout similar to the following in the /backup directory:
"},{"location":"maintain/backup_recovery/#backup-and-restore-for-binary-package-based-deployment","title":"Backup and restore for binary package based deployment","text":""},{"location":"maintain/backup_recovery/#backing-up-databases","title":"Backing up Databases","text":"/backup\n---- databases/ contains database backup files\n---- data/ contains backups of the data directory\nccnet_db, seafile_db and seahub_db. mysqldump automatically locks the tables so you don't need to stop Seafile server when backing up MySQL databases. Since the database tables are usually very small, it won't take long to dump.mysqldump -h [mysqlhost] -u[username] -p[password] --opt ccnet_db > /backup/databases/ccnet-db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nmysqldump -h [mysqlhost] -u[username] -p[password] --opt seafile_db > /backup/databases/seafile-db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nmysqldump -h [mysqlhost] -u[username] -p[password] --opt seahub_db > /backup/databases/seahub-db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n
"},{"location":"maintain/backup_recovery/#backing-up-seafile-library-data","title":"Backing up Seafile library data","text":"sqlite3 /opt/seafile/ccnet/GroupMgr/groupmgr.db .dump > /backup/databases/groupmgr.db.bak.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nsqlite3 /opt/seafile/ccnet/PeerMgr/usermgr.db .dump > /backup/databases/usermgr.db.bak.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nsqlite3 /opt/seafile/seafile-data/seafile.db .dump > /backup/databases/seafile.db.bak.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nsqlite3 /opt/seafile/seahub.db .dump > /backup/databases/seahub.db.bak.`date +\"%Y-%m-%d-%H-%M-%S\"`\n/opt/seafile directory, so just back up the whole directory. You can directly copy the whole directory to the backup destination, or you can use rsync to do incremental backup. cp -R /opt/seafile /backup/data/seafile-`date +\"%Y-%m-%d-%H-%M-%S\"`\nrsync -az /opt/seafile /backup/data\n/backup/data/seafile.
"},{"location":"maintain/backup_recovery/#restore-the-databases","title":"Restore the databases","text":"/backup/data/seafile to the new machine. Let's assume the seafile deployment location on the new machine is also /opt/seafile.mysql -u[username] -p[password] ccnet_db < ccnet-db.sql.2013-10-19-16-00-05\nmysql -u[username] -p[password] seafile_db < seafile-db.sql.2013-10-19-16-00-20\nmysql -u[username] -p[password] seahub_db < seahub-db.sql.2013-10-19-16-01-05\n
"},{"location":"maintain/backup_recovery/#backup-and-restore-for-docker-based-deployment","title":"Backup and restore for Docker based deployment","text":""},{"location":"maintain/backup_recovery/#structure","title":"Structure","text":"cd /opt/seafile\nmv ccnet/PeerMgr/usermgr.db ccnet/PeerMgr/usermgr.db.old\nmv ccnet/GroupMgr/groupmgr.db ccnet/GroupMgr/groupmgr.db.old\nmv seafile-data/seafile.db seafile-data/seafile.db.old\nmv seahub.db seahub.db.old\nsqlite3 ccnet/PeerMgr/usermgr.db < usermgr.db.bak.xxxx\nsqlite3 ccnet/GroupMgr/groupmgr.db < groupmgr.db.bak.xxxx\nsqlite3 seafile-data/seafile.db < seafile.db.bak.xxxx\nsqlite3 seahub.db < seahub.db.bak.xxxx\n/opt/seafile-data. Suppose you want to back up to the /backup directory.
"},{"location":"maintain/backup_recovery/#backing-up-database","title":"Backing up Database","text":"/opt/seafile-data/seafile/conf # configuration files\n/opt/seafile-data/seafile/seafile-data # data of seafile\n/opt/seafile-data/seafile/seahub-data # data of seahub\n
"},{"location":"maintain/backup_recovery/#backing-up-seafile-library-data_1","title":"Backing up Seafile library data","text":""},{"location":"maintain/backup_recovery/#to-directly-copy-the-whole-data-directory","title":"To directly copy the whole data directory","text":"# It's recommended to backup the database to a separate file each time. Don't overwrite older database backups for at least a week.\ncd /backup/databases\ndocker exec -it seafile-mysql mysqldump -u[username] -p[password] --opt ccnet_db > ccnet_db.sql\ndocker exec -it seafile-mysql mysqldump -u[username] -p[password] --opt seafile_db > seafile_db.sql\ndocker exec -it seafile-mysql mysqldump -u[username] -p[password] --opt seahub_db > seahub_db.sql\n
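The comment above recommends keeping older database dumps for at least a week. A hypothetical rotation sketch (the file names and the 7-day retention window are illustrative assumptions, not from the manual) that locates dump files old enough to delete:

```python
import os
import tempfile
import time

KEEP_SECONDS = 7 * 24 * 60 * 60  # keep at least a week of dumps (assumption)

def find_old_dumps(directory, keep_seconds, now=None):
    """List files older than keep_seconds; actual deletion is left to the caller."""
    now = time.time() if now is None else now
    old = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) > keep_seconds:
            old.append(name)
    return old

# Self-contained demo: a temporary directory stands in for /backup/databases
with tempfile.TemporaryDirectory() as d:
    for name, age_days in [("seafile_db.sql.old", 10), ("seafile_db.sql.new", 1)]:
        path = os.path.join(d, name)
        open(path, "w").close()
        past = time.time() - age_days * 24 * 60 * 60
        os.utime(path, (past, past))  # backdate the file's mtime
    stale = find_old_dumps(d, KEEP_SECONDS)

print(stale)  # ['seafile_db.sql.old']
```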
"},{"location":"maintain/backup_recovery/#use-rsync-to-do-incremental-backup","title":"Use rsync to do incremental backup","text":"cp -R /opt/seafile-data/seafile /backup/data/\n
"},{"location":"maintain/backup_recovery/#recovery","title":"Recovery","text":""},{"location":"maintain/backup_recovery/#restore-the-databases_1","title":"Restore the databases","text":"rsync -az /opt/seafile-data/seafile /backup/data/\n
"},{"location":"maintain/backup_recovery/#restore-the-seafile-data","title":"Restore the seafile data","text":"docker cp /backup/databases/ccnet_db.sql seafile-mysql:/tmp/ccnet_db.sql\ndocker cp /backup/databases/seafile_db.sql seafile-mysql:/tmp/seafile_db.sql\ndocker cp /backup/databases/seahub_db.sql seafile-mysql:/tmp/seahub_db.sql\n\ndocker exec -it seafile-mysql /bin/sh -c \"mysql -u[username] -p[password] ccnet_db < /tmp/ccnet_db.sql\"\ndocker exec -it seafile-mysql /bin/sh -c \"mysql -u[username] -p[password] seafile_db < /tmp/seafile_db.sql\"\ndocker exec -it seafile-mysql /bin/sh -c \"mysql -u[username] -p[password] seahub_db < /tmp/seahub_db.sql\"\n
"},{"location":"maintain/clean_database/","title":"Clean Database","text":""},{"location":"maintain/clean_database/#seahub","title":"Seahub","text":""},{"location":"maintain/clean_database/#session","title":"Session","text":"cp -R /backup/data/* /opt/seafile-data/seafile/\n
"},{"location":"maintain/clean_database/#activity","title":"Activity","text":"cd <install-path>/seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py clearsessions\nuse seahub_db;\nDELETE FROM Activity WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"maintain/clean_database/#file-access","title":"File Access","text":"use seahub_db;\nDELETE FROM sysadmin_extra_userloginlog WHERE to_days(now()) - to_days(login_date) > 90;\n
"},{"location":"maintain/clean_database/#file-update","title":"File Update","text":"use seahub_db;\nDELETE FROM FileAudit WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"maintain/clean_database/#permisson","title":"Permission","text":"use seahub_db;\nDELETE FROM FileUpdate WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"maintain/clean_database/#file-history","title":"File History","text":"use seahub_db;\nDELETE FROM PermAudit WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"maintain/clean_database/#command-clean_db_records","title":"Command clean_db_records","text":"use seahub_db;\nDELETE FROM FileHistory WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"maintain/clean_database/#outdated-library-data","title":"Outdated Library Data","text":"cd <install-path>/seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py clean_db_records\ncd <install-path>/seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py clear_invalid_repo_data\n
"},{"location":"maintain/clean_database/#library-sync-tokens","title":"Library Sync Tokens","text":"cd <install-path>/seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py clear_invalid_repo_data --dry-run=true\n
delete t,i from RepoUserToken t, RepoTokenPeerInfo i where t.token=i.token and sync_time < xxxx;\n
"},{"location":"maintain/export_file_access_log/","title":"Export File Access Log","text":"select * from RepoUserToken t, RepoTokenPeerInfo i where t.token=i.token and sync_time < xxxx;\n
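The xxxx placeholder in the sync-token SQL statements above is a UNIX timestamp in seconds. A minimal sketch for computing a cutoff value, assuming sync_time stores epoch seconds and using a hypothetical 90-day inactivity window:

```python
import time

DAYS_INACTIVE = 90  # hypothetical retention window, adjust to your policy
cutoff = int(time.time()) - DAYS_INACTIVE * 24 * 60 * 60

# Substitute the computed value for the xxxx placeholder in the SQL above.
print("... where t.token=i.token and sync_time < %d;" % cutoff)
```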
"},{"location":"maintain/export_report/","title":"Export Report","text":"cd <install-path>/seafile-server-latest\n./seahub.sh python-env python seahub/manage.py export_file_access_log --start-date 2019-06-01 --end-date 2019-07-01\n
"},{"location":"maintain/export_report/#export-user-storage-report","title":"Export User Storage Report","text":"cd <install-path>/seafile-server-latest\n./seahub.sh python-env python seahub/manage.py export_user_traffic_report --date 201906\n
"},{"location":"maintain/export_report/#export-file-access-log","title":"Export File Access Log","text":"cd <install-path>/seafile-server-latest\n./seahub.sh python-env python seahub/manage.py export_user_storage_report\n
"},{"location":"maintain/export_user_storage_report/","title":"Export User Storage Report","text":"cd <install-path>/seafile-server-latest\n./seahub.sh python-env python seahub/manage.py export_file_access_log --start-date 2019-06-01 --end-date 2019-07-01\n
"},{"location":"maintain/logs/","title":"Logs","text":""},{"location":"maintain/logs/#log-files-of-seafile-server","title":"Log files of seafile server","text":"cd <install-path>/seafile-server-latest\n./seahub.sh python-env python seahub/manage.py export_user_storage_report\n
"},{"location":"maintain/logs/#log-files-for-seafile-background-node-in-cluster-mode","title":"Log files for seafile background node in cluster mode","text":"
"},{"location":"maintain/seafile_fsck/","title":"Seafile FSCK","text":"cd seafile-server-latest\n./seaf-fsck.sh [--repair|-r] [--export|-E export_path] [repo_id_1 [repo_id_2 ...]]\n
"},{"location":"maintain/seafile_fsck/#checking-integrity-of-libraries","title":"Checking Integrity of Libraries","text":"cd seafile-server-latest\n./seaf-fsck.sh\ncd seafile-server-latest\n./seaf-fsck.sh [library-id1] [library-id2] ...\n[02/13/15 16:21:07] fsck.c(470): Running fsck for repo ca1a860d-e1c1-4a52-8123-0bf9def8697f.\n[02/13/15 16:21:07] fsck.c(413): Checking file system integrity of repo fsck(ca1a860d)...\n[02/13/15 16:21:07] fsck.c(35): Dir 9c09d937397b51e1283d68ee7590cd9ce01fe4c9 is missing.\n[02/13/15 16:21:07] fsck.c(200): Dir /bf/pk/(9c09d937) is corrupted.\n[02/13/15 16:21:07] fsck.c(105): Block 36e3dd8757edeb97758b3b4d8530a4a8a045d3cb is corrupted.\n[02/13/15 16:21:07] fsck.c(178): File /bf/02.1.md(ef37e350) is corrupted.\n[02/13/15 16:21:07] fsck.c(85): Block 650fb22495b0b199cff0f1e1ebf036e548fcb95a is missing.\n[02/13/15 16:21:07] fsck.c(178): File /01.2.md(4a73621f) is corrupted.\n[02/13/15 16:21:07] fsck.c(514): Fsck finished for repo ca1a860d.\n[02/13/15 16:36:11] Commit 6259251e2b0dd9a8e99925ae6199cbf4c134ec10 is missing\n[02/13/15 16:36:11] fsck.c(476): Repo ca1a860d HEAD commit is corrupted, need to restore to an old version.\n[02/13/15 16:36:11] fsck.c(314): Scanning available commits...\n[02/13/15 16:36:11] fsck.c(376): Find available commit 1b26b13c(created at 2015-02-13 16:10:21) for repo ca1a860d.\n
cd seafile-server-latest\n./seaf-fsck.sh --repair\ncd seafile-server-latest\n./seaf-fsck.sh --repair [library-id1] [library-id2] ...\n
"},{"location":"maintain/seafile_fsck/#speeding-up-fsck-by-not-checking-file-contents","title":"Speeding up FSCK by not checking file contents","text":"cd seafile-server-latest\n./seaf-fsck.sh --export top_export_path [library-id1] [library-id2] ...\ntop_export_path is a directory to place the exported files. Each library will be exported as a sub-directory of the export path. If you don't specify library ids, all libraries will be exported.
seaf-gc.sh --dry-run [repo-id1] [repo-id2] ...\n[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo My Library(ffa57d93)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 265.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo ffa57d93.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 5 commits, 265 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 265 blocks total, about 265 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo aa(f3d0a8d0)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 5.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo f3d0a8d0.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 8 commits, 5 blocks.\n[03/19/15 19:41:49] gc-core.c(264): Populating index for sub-repo 9217622a.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 4 commits, 4 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 5 blocks total, about 9 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo test2(e7d26d93)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 507.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo e7d26d93.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 577 commits, 507 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 
507 blocks total, about 507 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:50] seafserv-gc.c(124): === Repos deleted by users ===\n[03/19/15 19:41:50] seafserv-gc.c(145): === GC is finished ===\n\n[03/19/15 19:41:50] Following repos have blocks to be removed:\nrepo-id1\nrepo-id2\nrepo-id3\nseaf-gc.sh [repo-id1] [repo-id2] ...\nseaf-gc.sh -r\nseaf-gc.sh --rm-fs\n
seaf-gc.sh -t 20\n
"},{"location":"maintain/seafile_gc/#gc-cleanup-script-for-community-version","title":"GC cleanup script for Community Version","text":"seaf-gc.sh --id-prefix a123\n
touch /opt/haiwen/seafile/cleanupScript.sh\n#!/bin/bash\n\n#####\n# Uncomment the following line if you rather want to run the script manually.\n# Display usage if the script is not run as root user\n# if [[ $USER != \"root\" ]]; then\n# echo \"This script must be run as root user!\"\n# exit 1\n# fi\n#\n# echo \"Super User detected!!\"\n# read -p \"Press [ENTER] to start the procedure, this will stop the seafile server!!\"\n#####\n\n# stop the server\necho Stopping the Seafile-Server...\nsystemctl stop seafile.service\nsystemctl stop seahub.service\n\necho Giving the server some time to shut down properly....\nsleep 20\n\n# run the cleanup\necho Seafile cleanup started...\nsudo -u seafile $pathtoseafile/seafile-server-latest/seaf-gc.sh\n\necho Giving the server some time....\nsleep 10\n\n# start the server again\necho Starting the Seafile-Server...\nsystemctl start seafile.service\nsystemctl start seahub.service\n\necho Seafile cleanup done!\nsudo chmod +x /path/to/yourscript.sh\ncrontab -e\n0 2 * * Sun /opt/haiwen/seafile/cleanupScript.sh\n/scripts/gc.sh script. Simply run docker exec <whatever-your-seafile-container-is-called> /scripts/gc.sh.
seahub_settings.py and restart service. ENABLE_TWO_FACTOR_AUTH = True TWO_FACTOR_DEVICE_REMEMBER_DAYS = 30 # optional, default 90 days.
seaf-server: data service daemon, handles raw file upload, download and synchronization. Seafile server by default listens on port 8082. You can configure Nginx/Apache to proxy traffic to the local 8082 port.
"},{"location":"overview/file_permission_management/","title":"File permission management","text":"
"},{"location":"security/auditing/","title":"Access log and auditing","text":"
seafevents.conf to turn it on:[Audit]\n## Audit log is disabled by default.\n## It leads to additional SQL tables being filled up; make sure your SQL server is able to handle it.\nenabled = true\nseahub_db.seahub.log).
"},{"location":"security/fail2ban/#copy-and-edit-jaillocal-file","title":"Copy and edit jail.local file","text":" # TimeZone\n TIME_ZONE = 'Europe/Stockholm'\njail.conf filejail.local with : * ports used by your seafile website (e.g. http,https) ; * logpath (e.g. /home/yourusername/logs/seahub.log) ; * maxretry (the default of 3 is equivalent to 9 real attempts in seafile, because one line is written to the seafile logs for every 3 failed authentications).jail.local in /etc/fail2ban with the following content:","text":"
"},{"location":"security/fail2ban/#create-the-fail2ban-filter-file-seafile-authconf-in-etcfail2banfilterd-with-the-following-content","title":"Create the fail2ban filter file # All standard jails are in the file configuration located\n# /etc/fail2ban/jail.conf\n\n# Warning you may override any other parameter (e.g. banaction,\n# action, port, logpath, etc) in that section within jail.local\n\n# Change logpath with your file log used by seafile (e.g. seahub.log)\n# Also you can change the max retry var (3 attempts = 1 line written in the\n# seafile log)\n# So with maxretry set to 1, the user can try 3 times before their IP is banned\n\n[seafile]\n\nenabled = true\nport = http,https\nfilter = seafile-auth\nlogpath = /home/yourusername/logs/seahub.log\nmaxretry = 3\nseafile-auth.conf in /etc/fail2ban/filter.d with the following content:","text":"
"},{"location":"security/fail2ban/#restart-fail2ban","title":"Restart fail2ban","text":"# Fail2Ban filter for seafile\n#\n\n[INCLUDES]\n\n# Read common prefixes. If any customizations available -- read them from\n# common.local\nbefore = common.conf\n\n[Definition]\n\n_daemon = seaf-server\n\nfailregex = Login attempt limit reached.*, ip: <HOST>\n\nignoreregex = \n\n# DEV Notes:\n#\n# pattern : 2015-10-20 15:20:32,402 [WARNING] seahub.auth.views:155 login Login attempt limit reached, username: <user>, ip: 1.2.3.4, attemps: 3\n# 2015-10-20 17:04:32,235 [WARNING] seahub.auth.views:163 login Login attempt limit reached, ip: 1.2.3.4, attempts: 3\nsudo fail2ban-client reload\nsudo iptables -S\n
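The failregex above can be sanity-checked against the sample log line from the DEV Notes. A small sketch (the <HOST> expansion here is a simplified IPv4-only approximation of what fail2ban actually substitutes):

```python
import re

# fail2ban expands <HOST> to an address-matching group; approximate it for IPv4
FAILREGEX = r"Login attempt limit reached.*, ip: <HOST>"
host_re = FAILREGEX.replace("<HOST>", r"(\d{1,3}(?:\.\d{1,3}){3})")

sample = ("2015-10-20 17:04:32,235 [WARNING] seahub.auth.views:163 login "
          "Login attempt limit reached, ip: 1.2.3.4, attempts: 3")

m = re.search(host_re, sample)
print(m.group(1))  # 1.2.3.4
```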
"},{"location":"security/fail2ban/#tests","title":"Tests","text":"...\n-N fail2ban-seafile\n...\n-A fail2ban-seafile -j RETURN\ndenis@myserver:~$ sudo fail2ban-client status seafile\nStatus for the jail: seafile\n|- filter\n| |- File list: /home/<youruser>/logs/seahub.log\n| |- Currently failed: 0\n| `- Total failed: 1\n`- action\n |- Currently banned: 1\n | `- IP list: 1.2.3.4\n `- Total banned: 1\nsudo iptables -S\n\n...\n-A fail2ban-seafile -s 1.2.3.4/32 -j REJECT --reject-with icmp-port-unreachable\n...\n
"},{"location":"security/fail2ban/#note","title":"Note","text":"sudo fail2ban-client set seafile unbanip 1.2.3.4\n
PBKDF2SHA256$iterations$salt$hash\n
"},{"location":"upgrade/upgrade/","title":"Upgrade manual","text":"PBKDF2(password, salt, iterations). The number of iterations is currently 10000.
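The PBKDF2SHA256$iterations$salt$hash layout described above can be reproduced with Python's standard library. A sketch of encoding and checking a password in that layout; the exact salt and digest encodings used here (plain-text salt, hex digest) are illustrative assumptions, and seahub's source is authoritative:

```python
import hashlib
import os

ITERATIONS = 10000  # value stated in the manual

def encode_password(password, salt, iterations=ITERATIONS):
    """Build a hash string in the PBKDF2SHA256$iterations$salt$hash layout."""
    dk = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), iterations)
    return "PBKDF2SHA256$%d$%s$%s" % (iterations, salt, dk.hex())

def check_password(password, stored):
    """Re-derive the hash from the stored salt/iterations and compare."""
    _algo, iterations, salt, _hash = stored.split("$")
    return encode_password(password, salt, int(iterations)) == stored

stored = encode_password("s3cret", os.urandom(8).hex())
print(check_password("s3cret", stored))  # True
print(check_password("wrong", stored))   # False
```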
"},{"location":"upgrade/upgrade/#special-upgrade-notes","title":"Special upgrade notes","text":"
"},{"location":"upgrade/upgrade/#upgrade-a-binary-package-based-deployment","title":"Upgrade a binary package based deployment","text":""},{"location":"upgrade/upgrade/#major-version-upgrade-eg-from-5xx-to-6yy","title":"Major version upgrade (e.g. from 5.x.x to 6.y.y)","text":"seafile\n -- seafile-server-5.1.0\n -- seafile-server-6.1.0\n -- ccnet\n -- seafile-data\ncd seafile/seafile-server-latest\n./seahub.sh stop\n./seafile.sh stop\n# or via service\n/etc/init.d/seafile-server stop\ncd seafile/seafile-server-6.1.0\nls upgrade/upgrade_*\n...\nupgrade_5.0_5.1.sh\nupgrade_5.1_6.0.sh\nupgrade_6.0_6.1.sh\nupgrade/upgrade_5.1_6.0.sh\nupgrade/upgrade_6.0_6.1.sh\ncd seafile/seafile-server-latest/\n./seafile.sh start\n./seahub.sh start # or \"./seahub.sh start-fastcgi\" if you're using fastcgi\n# or via service\n/etc/init.d/seafile-server start\n
"},{"location":"upgrade/upgrade/#minor-version-upgrade-eg-from-61x-to-62y","title":"Minor version upgrade (e.g. from 6.1.x to 6.2.y)","text":"rm -rf seafile-server-5.1.0/\nseafile\n -- seafile-server-6.1.0\n -- seafile-server-6.2.0\n -- ccnet\n -- seafile-data\n
cd seafile/seafile-server-latest\n./seahub.sh stop\n./seafile.sh stop\n# or via service\n/etc/init.d/seafile-server stop\ncd seafile/seafile-server-6.2.0\nls upgrade/upgrade_*\n...\nupgrade/upgrade_5.1_6.0.sh\nupgrade/upgrade_6.0_6.1.sh\nupgrade/upgrade_6.1_6.2.sh\nupgrade/upgrade_6.1_6.2.sh\n./seafile.sh start\n./seahub.sh start\n# or via service\n/etc/init.d/seafile-server start\n
"},{"location":"upgrade/upgrade/#maintenance-version-upgrade-eg-from-622-to-623","title":"Maintenance version upgrade (e.g. from 6.2.2 to 6.2.3)","text":"rm -rf seafile-server-6.1.0/\n
"},{"location":"upgrade/upgrade_a_cluster/","title":"Upgrade a Seafile cluster","text":""},{"location":"upgrade/upgrade_a_cluster/#major-and-minor-version-upgrade","title":"Major and minor version upgrade","text":"minor-upgrade.sh):cd seafile-server-6.2.3/upgrade/ && ./minor-upgrade.shrm -rf seafile-server-6.2.2/
"},{"location":"upgrade/upgrade_a_cluster/#maintanence-upgrade","title":"Maintenance upgrade","text":"./upgrade/minor_upgrade.sh at each node to update the symbolic link.OFFICE_CONVERTOR_ROOT = 'http://<ip of node background>'\n\u2b07\ufe0f\nOFFICE_CONVERTOR_ROOT = 'http://<ip of node background>:6000'\n
"},{"location":"upgrade/upgrade_a_cluster/#for-backend-node","title":"For backend node","text":"[OFFICE CONVERTER]\nenabled = true\nworkers = 1\nmax-size = 10\n\n\u2b07\ufe0f\n[OFFICE CONVERTER]\nenabled = true\nworkers = 1\nmax-size = 10\nhost = <ip of node background>\nport = 6000\n
"},{"location":"upgrade/upgrade_a_cluster/#from-63-to-70","title":"From 6.3 to 7.0","text":"[OFFICE CONVERTER]\nenabled = true\nworkers = 1\nmax-size = 10\n\n\u2b07\ufe0f\n[OFFICE CONVERTER]\nenabled = true\nworkers = 1\nmax-size = 10\nhost = <ip of node background>\nport = 6000\nseahub_settings.py is:CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '<MEMCACHED SERVER IP>:11211',\n }\n}\n\nCOMPRESS_CACHE_BACKEND = 'django.core.cache.backends.locmem.LocMemCache'\n
"},{"location":"upgrade/upgrade_a_cluster/#from-61-to-62","title":"From 6.1 to 6.2","text":"CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '<MEMCACHED SERVER IP>:11211',\n },\n 'locmem': {\n 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',\n },\n}\nCOMPRESS_CACHE_BACKEND = 'locmem'\ncd /data/haiwen/\nln -s /nfs-share/seafile-httptemp seafile-data/httptemp\n
"},{"location":"upgrade/upgrade_a_cluster/#from-v44-to-v50","title":"From v4.4 to v5.0","text":" - COMPRESS_CACHE_BACKEND = 'locmem://'\n + COMPRESS_CACHE_BACKEND = 'django.core.cache.backends.locmem.LocMemCache'\n
./upgrade/upgrade_4.4_5.0.sh\n
SEAFILE_SKIP_DB_UPGRADE environmental variable turned on:SEAFILE_SKIP_DB_UPGRADE=1 ./upgrade/upgrade_4.4_5.0.sh\n
"},{"location":"upgrade/upgrade_a_cluster/#from-v43-to-v44","title":"From v4.3 to v4.4","text":"conf/\n |__ ccnet.conf\n |__ seafile.conf\n |__ seafevent.conf\n |__ seafdav.conf\n |__ seahub_settings.conf\n
"},{"location":"upgrade/upgrade_a_cluster/#from-v42-to-v43","title":"From v4.2 to v4.3","text":"
"},{"location":"upgrade/upgrade_a_cluster_docker/","title":"Upgrade a Seafile cluster (Docker)","text":""},{"location":"upgrade/upgrade_a_cluster_docker/#major-and-minor-version-upgrade","title":"Major and minor version upgrade","text":"
"},{"location":"upgrade/upgrade_a_cluster_docker/#maintanence-upgrade","title":"Maintenance upgrade","text":"...\nservices:\n ...\n seafile:\n image: seafileltd/seafile-mc:10.0-latest\n ...\n ...\nservices:\n ...\n seafile:\n image: seafileltd/seafile-mc:11.0-latest\n ...\n ...\n
mv /opt/seafile/shared/ssl /opt/seafile/shared/ssl-bak\n\nmv /opt/seafile/shared/nginx/conf/seafile.nginx.conf /opt/seafile/shared/nginx/conf/seafile.nginx.conf.bak\ndocker compose down\ndocker compose up -d\ndocker exec seafile nginx -s reload\n.env and seafile-server.yml files for configuration.mv docker-compose.yml docker-compose.yml.bak\ndocker-compose.yml.bakwget -O .env https://manual.seafile.com/docker/docker-compose/ce/12.0/env\nwget https://manual.seafile.com/docker/docker-compose/ce/12.0/seafile-server.yml\nwget https://manual.seafile.com/docker/docker-compose/ce/12.0/caddy.yml\nwget -O .env https://manual.seafile.com/docker/docker-compose/pro/12.0/env\nwget https://manual.seafile.com/docker/docker-compose/pro/12.0/seafile-server.yml\nwget https://manual.seafile.com/docker/docker-compose/pro/12.0/caddy.yml\n
SEAFILE_VOLUME: The volume directory of Seafile data, default is /opt/seafile-dataSEAFILE_MYSQL_VOLUME: The volume directory of MySQL data, default is /opt/seafile-mysql/dbSEAFILE_CADDY_VOLUME: The volume directory of Caddy data used to store certificates obtained from Let's Encrypt, default is /opt/seafile-caddySEAFILE_ELASTICSEARCH_VOLUME: The volume directory of Elasticsearch dataSEAFILE_MYSQL_ROOT_PASSWORD: The root password of MySQLSEAFILE_MYSQL_DB_PASSWORD: The password of the MySQL user seafileJWT: JWT_PRIVATE_KEY, a random string of no less than 32 characters, e.g. generated with pwgen -s 40 1SEAFILE_SERVER_HOSTNAME: Seafile server hostname or domainSEAFILE_SERVER_PROTOCOL: Seafile server protocol (http or https)cp seafile.nginx.conf seafile.nginx.conf.bak\nserver listen 80 section:#server {\n# listen 80;\n# server_name _ default_server;\n\n # allow certbot to connect to challenge location via HTTP Port 80\n # otherwise renewal request will fail\n# location /.well-known/acme-challenge/ {\n# alias /var/www/challenges/;\n# try_files $uri =404;\n# }\n\n# location / {\n# rewrite ^ https://example.seafile.com$request_uri? permanent;\n# }\n#}\nserver listen 443 to 80:server {\n#listen 443 ssl;\nlisten 80;\n\n# ssl_certificate /shared/ssl/pkg.seafile.top.crt;\n# ssl_certificate_key /shared/ssl/pkg.seafile.top.key;\n\n# ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;\n\n ...\nseafile-server.yml directory, then modify Seafile .env file.wget https://manual.seafile.com/docker/docker-compose/ce/12.0/seadoc.yml\nwget https://manual.seafile.com/docker/docker-compose/pro/12.0/seadoc.yml\nCOMPOSE_FILE='seafile-server.yml,caddy.yml,seadoc.yml'\n\nSEADOC_VOLUME=/opt/seadoc-data\nENABLE_SEADOC=true\nSEADOC_SERVER_URL=http://example.seafile.com/sdoc-server\n
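If pwgen is not available, a JWT_PRIVATE_KEY value can be generated with Python's standard library instead (a sketch; any random string of at least 32 characters works):

```python
import secrets
import string

# 40-character alphanumeric key, similar in spirit to `pwgen -s 40 1`
alphabet = string.ascii_letters + string.digits
jwt_private_key = "".join(secrets.choice(alphabet) for _ in range(40))
print("JWT_PRIVATE_KEY=" + jwt_private_key)
```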
seadoc.yml to the COMPOSE_FILE field./sdoc-server)/sdoc-server/, /socket.io configs in seafile.nginx.conf file.# location /sdoc-server/ {\n# add_header Access-Control-Allow-Origin *;\n# add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;\n# add_header Access-Control-Allow-Headers \"deviceType,token, authorization, content-type\";\n# if ($request_method = 'OPTIONS') {\n# add_header Access-Control-Allow-Origin *;\n# add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;\n# add_header Access-Control-Allow-Headers \"deviceType,token, authorization, content-type\";\n# return 204;\n# }\n# proxy_pass http://sdoc-server:7070/;\n# proxy_redirect off;\n# proxy_set_header Host $host;\n# proxy_set_header X-Real-IP $remote_addr;\n# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n# proxy_set_header X-Forwarded-Host $server_name;\n# proxy_set_header X-Forwarded-Proto $scheme;\n# client_max_body_size 100m;\n# }\n# location /socket.io {\n# proxy_pass http://sdoc-server:7070;\n# proxy_http_version 1.1;\n# proxy_set_header Upgrade $http_upgrade;\n# proxy_set_header Connection 'upgrade';\n# proxy_redirect off;\n# proxy_buffers 8 32k;\n# proxy_buffer_size 64k;\n# proxy_set_header X-Real-IP $remote_addr;\n# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n# proxy_set_header Host $http_host;\n# proxy_set_header X-NginX-Proxy true;\n# }\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/","title":"Upgrade notes for 10.0","text":"docker compose down\n\ndocker compose up -d\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#saml-sso-change-pro-edition-only","title":"SAML SSO change (pro edition only)","text":"[memcached]\nmemcached_options = --SERVER=<the IP of Memcached Server> --POOL-MIN=10 --POOL-MAX=100\nENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n ...\n}\n
seahub_settings.py.ENABLED_ROLE_PERMISSIONS = {\n 'default': {\n ...\n 'upload_rate_limit': 2000, # unit: kb/s\n 'download_rate_limit': 4000,\n ...\n },\n 'guest': {\n ...\n 'upload_rate_limit': 100,\n 'download_rate_limit': 200,\n ...\n },\n}\n
seafile-server-latest directory to make the configuration take effect.
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#elasticsearch-change-pro-edition-only","title":"ElasticSearch change (pro edition only)","text":"./seahub.sh python-env python3 seahub/manage.py set_user_role_upload_download_rate_limit\ncurl 'http{s}://<es IP>:9200/_cat/shards/repofiles?v'\n
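The `_cat/shards?v` call above returns a header row plus one line per shard; to decide whether to raise the `shards` setting you mostly need the shard count. A minimal helper to extract it (the function name is hypothetical; assumes standard `tail` and `grep`):

```shell
# count_shards: count the shard rows in a /_cat/shards?v response body.
# The first line of ?v output is the header row, so skip it.
count_shards() {
    printf '%s\n' "$1" | tail -n +2 | grep -c .
}

# Usage sketch (ES_URL is a placeholder for your Elasticsearch endpoint):
# count_shards "$(curl -s "$ES_URL/_cat/shards/repofiles?v")"
```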
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#new-python-libraries","title":"New Python libraries","text":"[INDEX FILES]\n...\nshards = 10 # default is 5\n...\nsudo pip3 install future==0.18.* mysqlclient==2.1.* pillow==10.2.* captcha==0.5.* django_simple_captcha==0.5.20 djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#upgrade-to-100x","title":"Upgrade to 10.0.x","text":"sudo pip3 install future==0.18.* mysqlclient==2.1.* pillow==9.3.* captcha==0.4 django_simple_captcha==0.5.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1\n
sh upgrade/upgrade_9.0_10.0.sh
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#update-elasticsearch-pro-edition-only","title":"Update Elasticsearch (pro edition only)","text":"docker pull elasticsearch:7.17.9\nmkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\nsudo docker run -d --name es-7.17 -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data -d elasticsearch:7.17.9\nES_JAVA_OPTS can be adjusted according to your need.# create repo_head index\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repo_head_8?pretty=true' -d '\n{\n \"mappings\" : {\n \"properties\" : {\n \"commit\" : {\n \"type\" : \"keyword\",\n \"index\" : false\n },\n \"repo\" : {\n \"type\" : \"keyword\",\n \"index\" : false\n },\n \"updatingto\" : {\n \"type\" : \"keyword\",\n \"index\" : false\n }\n }\n }\n}'\n\n# create repofiles index, number_of_shards is the number of shards, here is set to 5, you can also modify it to the most suitable number of shards\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repofiles_8/?pretty=true' -d '\n{\n \"settings\" : {\n \"index\" : {\n \"number_of_shards\" : \"5\",\n \"analysis\" : {\n \"analyzer\" : {\n \"seafile_file_name_ngram_analyzer\" : {\n \"filter\" : [\n \"lowercase\"\n ],\n \"type\" : \"custom\",\n \"tokenizer\" : \"seafile_file_name_ngram_tokenizer\"\n }\n },\n \"tokenizer\" : {\n \"seafile_file_name_ngram_tokenizer\" : {\n \"type\" : \"ngram\",\n \"min_gram\" : \"3\",\n \"max_gram\" : \"4\"\n }\n }\n }\n }\n },\n \"mappings\" : {\n \"properties\" : {\n \"content\" : {\n \"type\" : \"text\",\n \"term_vector\" : \"with_positions_offsets\"\n },\n \"filename\" : {\n \"type\" : \"text\",\n \"fields\" : {\n \"ngram\" : {\n \"type\" : \"text\",\n \"analyzer\" : \"seafile_file_name_ngram_analyzer\"\n }\n 
}\n },\n \"is_dir\" : {\n \"type\" : \"boolean\"\n },\n \"mtime\" : {\n \"type\" : \"date\"\n },\n \"path\" : {\n \"type\" : \"keyword\"\n },\n \"repo\" : {\n \"type\" : \"keyword\"\n },\n \"size\" : {\n \"type\" : \"long\"\n },\n \"suffix\" : {\n \"type\" : \"keyword\"\n }\n }\n }\n}'\nrefresh_interval to -1 and the number_of_replicas to 0 for efficient reindex:curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repo_head_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repofiles_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\ncurl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?wait_for_completion=false&pretty=true' -d '\n{\n \"source\": {\n \"index\": \"repo_head\"\n },\n \"dest\": {\n \"index\": \"repo_head_8\"\n }\n}'\n\ncurl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?wait_for_completion=false&pretty=true' -d '\n{\n \"source\": {\n \"index\": \"repofiles\"\n },\n \"dest\": {\n \"index\": \"repofiles_8\"\n }\n}'\n# Get the task_id of the reindex task:\n$ curl 'http{s}://{es server IP}:9200/_tasks?actions=*reindex&pretty'\n# Check to see if the reindex task is complete:\n$ curl 'http{s}://{es server IP}:9200/_tasks/:<task_id>?pretty'\nrefresh_interval and number_of_replicas to the values used in the old index:curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repo_head_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repofiles_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\ngreen (or yellow if it is a single 
node).curl 'http{s}://{es server IP}:9200/_cluster/health?pretty'\ncurl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_aliases?pretty' -d '\n{\n \"actions\": [\n {\"remove_index\": {\"index\": \"repo_head\"}},\n {\"remove_index\": {\"index\": \"repofiles\"}},\n {\"add\": {\"index\": \"repo_head_8\", \"alias\": \"repo_head\"}},\n {\"add\": {\"index\": \"repofiles_8\", \"alias\": \"repofiles\"}}\n ]\n}'\n
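Because the reindex requests above run with `wait_for_completion=false`, you must confirm via the tasks API that both reindex tasks have finished before swapping aliases. A rough polling sketch (the helper name and interval are assumptions):

```shell
# reindex_done: succeed when a /_tasks/<task_id> response reports completion.
reindex_done() {
    printf '%s' "$1" | grep -q '"completed" *: *true'
}

# Polling sketch (ES_URL and TASK_ID are placeholders for your server and task):
# while ! reindex_done "$(curl -s "$ES_URL/_tasks/$TASK_ID")"; do
#     sleep 30
# done
```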
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#method-two-rebuild-the-index-and-discard-the-old-index-data","title":"Method two, rebuild the index and discard the old index data","text":"$ docker stop es-7.17\n\n$ docker rm es-7.17\n\n$ docker pull elasticsearch:8.6.2\n\n$ sudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data -d elasticsearch:8.6.2\ndocker pull elasticsearch:8.5.3\nmkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\nsudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data -d elasticsearch:8.5.3\n[INDEX FILES]\n...\nexternal_es_server = true\nes_host = http{s}://{es server IP}\nes_port = 9200\nshards = 10 # default is 5.\n...\nsu seafile\ncd seafile-server-latest/\n./seafile.sh stop && ./seahub.sh stop\n./seafile.sh start && ./seahub.sh start\nrm -rf /opt/seafile-elasticsearch/data/*\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#method-three-if-you-are-in-a-cluster-environment","title":"Method three, if you are in a cluster environment","text":"$ cd /opt/seafile/seafile-server-latest\n$ ./pro/pro.py search --update\nseafevents.conf file. Do not start the Seafile background service on the background node; just manually run the command ./pro/pro.py search --update.
migrate_ldapusers.py script to merge the ccnet_db.LDAPImported table into the ccnet_db.EmailUsers table. The setting files need to be changed manually. (See more details below.)DISABLE_ADFS_USER_PWD_LOGIN = True in seahub_settings.py.
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#django-csrf-protection-issue","title":"Django CSRF protection issue","text":"sudo apt-get update\nsudo apt-get install -y dnsutils\n
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#new-python-libraries","title":"New Python libraries","text":"CSRF_TRUSTED_ORIGINS = [\"https://<your-domain>\"]\n
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#upgrade-to-110x","title":"Upgrade to 11.0.x","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#1-stop-seafile-100x-server","title":"1) Stop Seafile-10.0.x server.","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#2-start-from-seafile-110x-run-the-script","title":"2) Start from Seafile 11.0.x, run the script:","text":"sudo apt-get update\nsudo apt-get install -y python3-dev ldap-utils libldap2-dev\n\nsudo pip3 install future==0.18.* mysqlclient==2.1.* pillow==10.2.* sqlalchemy==2.0.18 captcha==0.5.* django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 python-ldap==3.4.3\n
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#3modify-configurations-and-migrate-ldap-records","title":"3\uff09Modify configurations and migrate LDAP records","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-configurations-for-ldap","title":"Change configurations for LDAP","text":"upgrade/upgrade_10.0_11.0.sh\n# Basic configuration items for LDAP login\nENABLE_LDAP = True\nLDAP_SERVER_URL = 'ldap://192.168.0.125' # The URL of LDAP server\nLDAP_BASE_DN = 'ou=test,dc=seafile,dc=ren' # The root node of users who can \n # log in to Seafile in the LDAP server\nLDAP_ADMIN_DN = 'administrator@seafile.ren' # DN of the administrator used \n # to query the LDAP server for information\nLDAP_ADMIN_PASSWORD = 'Hello@123' # Password of LDAP_ADMIN_DN\nLDAP_PROVIDER = 'ldap' # Identify the source of the user, used in \n # the table social_auth_usersocialauth, defaults to 'ldap'\nLDAP_LOGIN_ATTR = 'userPrincipalName' # User's attribute used to log in to Seafile, \n # can be mail or userPrincipalName, cannot be changed\nLDAP_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' # Additional filter conditions,\n # users who meet the filter conditions can log in, otherwise they cannot log in\n# For update user info when login\nLDAP_CONTACT_EMAIL_ATTR = '' # For update user's contact_email\nLDAP_USER_ROLE_ATTR = '' # For update user's role\nLDAP_USER_FIRST_NAME_ATTR = 'givenName' # For update user's first name\nLDAP_USER_LAST_NAME_ATTR = 'sn' # For update user's last name\nLDAP_USER_NAME_REVERSE = False # Whether to reverse the user's first and last name\n# Configuration items for LDAP sync tasks.\nLDAP_SYNC_INTERVAL = 60 # LDAP sync task period, in minutes\n\n# LDAP user sync configuration items.\nENABLE_LDAP_USER_SYNC = True # Whether to enable user sync\nLDAP_USER_OBJECT_CLASS = 'person' # This is the name of the class used to search for user objects. \n # In Active Directory, it's usually \"person\". 
The default value is \"person\".\nLDAP_DEPT_ATTR = '' # LDAP user's department info\nLDAP_UID_ATTR = '' # LDAP user's login_id attribute\nLDAP_AUTO_REACTIVATE_USERS = True # Whether to auto activate deactivated user\nLDAP_USE_PAGED_RESULT = False # Whether to use pagination extension\nIMPORT_NEW_USER = True # Whether to import new users when sync user\nACTIVATE_USER_WHEN_IMPORT = True # Whether to activate the user when importing new user\nENABLE_EXTRA_USER_INFO_SYNC = True # Whether to enable sync of additional user information,\n # including user's full name, contact_email, department, and Windows login name, etc.\nDEACTIVE_USER_IF_NOTFOUND = False # Set to \"true\" if you want to deactivate a user \n when he/she was deleted in AD server.\n\n# LDAP group sync configuration items.\nENABLE_LDAP_GROUP_SYNC = True # Whether to enable group sync\nLDAP_GROUP_FILTER = '' # Group sync filter\nLDAP_SYNC_DEPARTMENT_FROM_OU = True # Whether to enable sync departments from OU.\nLDAP_GROUP_OBJECT_CLASS = 'group' # This is the name of the class used to search for group objects.\nLDAP_GROUP_MEMBER_ATTR = 'member' # The attribute field to use when loading the group's members. \n # For most directory servers, the attribute is \"member\", \n which is the default value. For \"posixGroup\", it should be set to \"memberUid\".\nLDAP_USER_ATTR_IN_MEMBERUID = 'uid' # The user attribute set in 'memberUid' option, \n # which is used in \"posixGroup\". The default value is \"uid\".\nLDAP_GROUP_UUID_ATTR = 'objectGUID' # Used to uniquely identify groups in LDAP\nLDAP_USE_GROUP_MEMBER_RANGE_QUERY = False # When a group contains too many members, \n # AD will only return part of them. 
Set this option to TRUE\n to make LDAP sync work with large groups.\nLDAP_SYNC_GROUP_AS_DEPARTMENT = False # Whether to sync groups as top-level departments in Seafile\nLDAP_DEPT_NAME_ATTR = '' # Used to get the department name.\nLDAP_CREATE_DEPARTMENT_LIBRARY = False # If you decide to sync the group as a department,\n # you can set this option to \"true\". In this way, when \n # the group is synchronized for the first time, a library\n # is automatically created for the department, and the \n # library's name is the department's name.\nLDAP_DEPT_REPO_PERM = 'rw' # Set the permissions of the department repo, default permission is 'rw'.\nLDAP_DEFAULT_DEPARTMENT_QUOTA = -2 # You can set a default space quota for each department\n # when you synchronize a group for the first time. The \n # quota is set to unlimited if this option is not set.\n # Unit is MB.\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\", sync process will delete the group if it is not found in the LDAP server.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\", sync process will delete the department if it is not found in the LDAP server.\nSSO_LDAP_USE_SAME_UID = True:SSO_LDAP_USE_SAME_UID = True\nLDAP_LOGIN_ATTR (not LDAP_UID_ATTR), in ADFS it is uid attribute. You need to make sure you use the same attribute for the two settings.LDAPImported to EmailUserscd <install-path>/seafile-server-latest\npython3 migrate_ldapusers.py\n
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-configuration-for-oauth","title":"Change configuration for OAuth:","text":"docker exec -it seafile /usr/bin/python3 /opt/seafile/seafile-server-latest/migrate_ldapusers.py\n# Version 10.0 or earlier\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"),\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n\n# Since version 11.0, the 'uid' attribute has been added.\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # In the new version, the email attribute configuration should be kept unchanged to be compatible with old and new user logins\n \"uid\": (True, \"uid\"), # Seafile uses 'uid' as the external unique identifier of the user. Different OAuth systems have different attributes, which may be: 'uid' or 'username', etc.\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n\"id\": (True, \"email\"). You can also manually add records in social_auth_usersocialauth to map external uid to old users.
.env file is needed to contain some configuration items that need to be shared by different components in Seafile. We name it .env to be consistent with the Docker-based installation.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#upgrade-to-120-for-binary-installation","title":"Upgrade to 12.0 (for binary installation)","text":"sudo pip3 install future==1.0.* mysqlclient==2.2.* pillow==10.4.* sqlalchemy==2.0.* gevent==24.2.* captcha==0.6.* django_simple_captcha==0.6.* djangosaml2==1.9.* pysaml2==7.3.* pycryptodome==3.20.* cffi==1.17.0 python-ldap==3.4.* PyMuPDF==1.24.*\n
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#3-create-the-env-file-in-conf-directory","title":"3) Create the upgrade/upgrade_11.0_12.0.sh\n.env file in conf/ directory","text":"JWT_PRIVATE_KEY=xxx\npwgen -s 40 1
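If `pwgen` is not installed, the 40-character value for JWT_PRIVATE_KEY can also be drawn from /dev/urandom; a sketch under the assumption that any 40-character alphanumeric secret is acceptable:

```shell
# Generate a 40-character alphanumeric secret for JWT_PRIVATE_KEY.
# 2048 random bytes comfortably yield more than 40 alphanumeric characters.
JWT_PRIVATE_KEY=$(head -c 2048 /dev/urandom | LC_ALL=C tr -dc 'A-Za-z0-9' | head -c 40)
echo "JWT_PRIVATE_KEY=$JWT_PRIVATE_KEY"
# Append the printed line to conf/.env
```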
sudo apt-get install python3 python3-setuptools python3-pip memcached libmemcached-dev -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.3.8 \\\n django-pylibmc django-simple-captcha python3-ldap\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#seafile-pro","title":"Seafile-Pro","text":"yum install python3 python3-setuptools python3-pip -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.3.8 \\\n django-pylibmc django-simple-captcha python3-ldap\n
apt-get install python3 python3-setuptools python3-pip -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.3.8 \\\n django-pylibmc django-simple-captcha python3-ldap\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#upgrade-to-71x","title":"Upgrade to 7.1.x","text":"yum install python3 python3-setuptools python3-pip -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.3.8 \\\n django-pylibmc django-simple-captcha python3-ldap\n
upgrade/upgrade_7.0_7.1.sh\n
rm -rf /tmp/seahub_cache # Clear the Seahub cache files from disk.\n# If you are using the Memcached service, you need to restart the service to clear the Seahub cache.\nsystemctl restart memcached\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#proxy-seafdav","title":"Proxy Seafdav","text":"
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#for-apache","title":"For Apache","text":".....\n location /seafdav {\n proxy_pass http://127.0.0.1:8080/seafdav;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_read_timeout 1200s;\n client_max_body_size 0;\n\n access_log /var/log/nginx/seafdav.access.log seafileformat;\n error_log /var/log/nginx/seafdav.error.log;\n }\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#builtin-office-file-preview","title":"Builtin office file preview","text":"......\n <Location /seafdav>\n ProxyPass \"http://127.0.0.1:8080/seafdav\"\n </Location>\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#login-page-customization","title":"Login Page Customization","text":"sudo apt-get install python3-rados\n158 if nickname.strip(): # set nickname when it's not empty\n159 p.nickname = nickname\n\nto \n\n158 if nickname.strip(): # set nickname when it's not empty\n159 p.nickname = nickname.encode(\"iso-8859-1\").decode('utf8')\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#internal-server-error-after-upgrade-to-version-71","title":"Internal server error after upgrade to version 7.1","text":"[INFO] updating seahub database...\n/opt/seafile/seafile-server-7.1.1/seahub/thirdpart/pymysql/cursors.py:170: Warning: (1050, \"Table 'base_reposecretkey' already exists\")\n result = self._query(query)\n[WARNING] Failed to execute sql: (1091, \"Can't DROP 'drafts_draft_origin_file_uuid_7c003c98_uniq'; check that column/key exists\")\ndaemon = True to daemon = False, then run ./seahub.sh again. If there are missing Python dependencies, the error will be reported in the terminal.'BACKEND': 'django_pylibmc.memcached.PyLibMCCache'\n
"},{"location":"upgrade/upgrade_notes_for_8.0.x/","title":"Upgrade notes for 8.0","text":"'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',\n
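The backend strings above drop into the CACHES setting in seahub_settings.py; a minimal sketch for the Django-native backend used from 8.0 on (the LOCATION value is an assumption, adjust to your memcached address):

```python
# seahub_settings.py -- cache backend sketch for Seafile 8.0+
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '127.0.0.1:11211',  # assumed local memcached instance
    },
}
```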
apt-get install libmysqlclient-dev\n\nsudo pip3 install -U future mysqlclient sqlalchemy==1.4.3\n
apt-get install default-libmysqlclient-dev \n\nsudo pip3 install future mysqlclient sqlalchemy==1.4.3\n
yum install python3-devel mysql-devel gcc gcc-c++ -y\n\nsudo pip3 install future\nsudo pip3 install mysqlclient==2.0.1 sqlalchemy==1.4.3\n
"},{"location":"upgrade/upgrade_notes_for_8.0.x/#change-shibboleth-setting","title":"Change Shibboleth Setting","text":"yum install python3-devel mysql-devel gcc gcc-c++ -y\n\nsudo pip3 install future mysqlclient sqlalchemy==1.4.3\nEXTRA_MIDDLEWARE_CLASSESEXTRA_MIDDLEWARE_CLASSES = (\n 'shibboleth.middleware.ShibbolethRemoteUserMiddleware',\n)\nEXTRA_MIDDLEWAREEXTRA_MIDDLEWARE = (\n 'shibboleth.middleware.ShibbolethRemoteUserMiddleware',\n)\nsettings.MIDDLEWARE_CLASSES was removed in Django 2.0.
"},{"location":"upgrade/upgrade_notes_for_9.0.x/","title":"Upgrade notes for 9.0","text":"sh upgrade/upgrade_7.1_8.0.sh
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#new-python-libraries","title":"New Python libraries","text":"[fileserver]\nuse_go_fileserver = true\n
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#upgrade-to-90x","title":"Upgrade to 9.0.x","text":"sudo pip3 install pycryptodome==3.12.0 cffi==1.14.0\n
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#update-elasticsearch-pro-edition-only","title":"Update ElasticSearch (pro edition only)","text":""},{"location":"upgrade/upgrade_notes_for_9.0.x/#method-one-rebuild-the-index-and-discard-the-old-index-data","title":"Method one, rebuild the index and discard the old index data","text":"sh upgrade/upgrade_8.0_9.0.shdocker pull elasticsearch:7.16.2\nmkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\nsudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms2g -Xmx2g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data -d elasticsearch:7.16.2\nrm -rf /opt/seafile/pro-data/search/data/*\n[INDEX FILES]\nexternal_es_server = true\nes_host = your server's IP (use 127.0.0.1 if deployed locally)\nes_port = 9200\n
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#method-two-reindex-the-existing-data","title":"Method two, reindex the existing data","text":"su seafile\ncd seafile-server-latest/\n./seafile.sh stop && ./seahub.sh stop\n./seafile.sh start && ./seahub.sh start\n
docker pull elasticsearch:7.16.2\nmkdir -p /opt/seafile-elasticsearch/data \nmv /opt/seafile/pro-data/search/data/* /opt/seafile-elasticsearch/data/\nchmod -R 777 /opt/seafile-elasticsearch/data/\nsudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data -d elasticsearch:7.16.2\nES_JAVA_OPTS can be adjusted according to your need.curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repo_head?include_type_name=false&pretty=true' -d '\n{\n \"mappings\" : {\n \"properties\" : {\n \"commit\" : {\n \"type\" : \"text\",\n \"index\" : false\n },\n \"repo\" : {\n \"type\" : \"text\",\n \"index\" : false\n },\n \"updatingto\" : {\n \"type\" : \"text\",\n \"index\" : false\n }\n }\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repofiles/?include_type_name=false&pretty=true' -d '\n{\n \"settings\" : {\n \"index\" : {\n \"number_of_shards\" : 5,\n \"number_of_replicas\" : 1,\n \"analysis\" : {\n \"analyzer\" : {\n \"seafile_file_name_ngram_analyzer\" : {\n \"filter\" : [\n \"lowercase\"\n ],\n \"type\" : \"custom\",\n \"tokenizer\" : \"seafile_file_name_ngram_tokenizer\"\n }\n },\n \"tokenizer\" : {\n \"seafile_file_name_ngram_tokenizer\" : {\n \"type\" : \"ngram\",\n \"min_gram\" : \"3\",\n \"max_gram\" : \"4\"\n }\n }\n }\n }\n },\n \"mappings\" : {\n \"properties\" : {\n \"content\" : {\n \"type\" : \"text\",\n \"term_vector\" : \"with_positions_offsets\"\n },\n \"filename\" : {\n \"type\" : \"text\",\n \"fields\" : {\n \"ngram\" : {\n \"type\" : \"text\",\n \"analyzer\" : \"seafile_file_name_ngram_analyzer\"\n }\n }\n },\n \"is_dir\" : {\n \"type\" : \"boolean\"\n },\n \"mtime\" : {\n \"type\" : \"date\"\n },\n \"path\" : {\n \"type\" : \"keyword\"\n },\n \"repo\" : {\n \"type\" : \"keyword\"\n 
},\n \"size\" : {\n \"type\" : \"long\"\n },\n \"suffix\" : {\n \"type\" : \"keyword\"\n }\n }\n }\n}'\nrefresh_interval to -1 and the number_of_replicas to 0 for efficient reindexing:curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repo_head/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repofiles/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\ncurl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?pretty' -d '\n{\n \"source\": {\n \"index\": \"repo_head\",\n \"type\": \"repo_commit\"\n },\n \"dest\": {\n \"index\": \"new_repo_head\",\n \"type\": \"_doc\"\n }\n}'\n\ncurl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?pretty' -d '\n{\n \"source\": {\n \"index\": \"repofiles\",\n \"type\": \"file\"\n },\n \"dest\": {\n \"index\": \"new_repofiles\",\n \"type\": \"_doc\"\n }\n}'\nrefresh_interval and number_of_replicas to the values used in the old index.curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repo_head/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repofiles/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\ngreen.curl http{s}://{es server IP}:9200/_cluster/health?pretty\ncurl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_aliases?pretty' -d '\n{\n \"actions\": [\n {\"remove_index\": {\"index\": \"repo_head\"}},\n {\"remove_index\": {\"index\": \"repofiles\"}},\n {\"add\": {\"index\": \"new_repo_head\", \"alias\": \"repo_head\"}},\n {\"add\": {\"index\": \"new_repofiles\", 
\"alias\": \"repofiles\"}}\n ]\n}'\n[INDEX FILES]\nexternal_es_server = true\nes_host = your server's IP\nes_port = 9200\n
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#method-three-if-you-are-in-a-cluster-environment","title":"Method three, if you are in a cluster environment","text":"su seafile\ncd seafile-server-latest/\n./seafile.sh stop && ./seahub.sh stop\n./seafile.sh start && ./seahub.sh start\n./pro/pro.py search --update, and then upgrade the other nodes to Seafile 9.0 and use the new Elasticsearch 7.x after the index is created. Then deactivate the old backend node and the old version of Elasticsearch.
"},{"location":"#contact-information","title":"Contact information","text":"
"},{"location":"changelog/","title":"Changelog","text":""},{"location":"changelog/#changelogs","title":"Changelogs","text":"
"},{"location":"contribution/","title":"Contribution","text":""},{"location":"contribution/#licensing","title":"Licensing","text":"
"},{"location":"contribution/#discussion","title":"Discussion","text":"
"},{"location":"contribution/#code-style","title":"Code Style","text":"
"},{"location":"build_seafile/linux/","title":"Linux","text":""},{"location":"build_seafile/linux/#preparation","title":"Preparation","text":"
sudo apt-get install autoconf automake libtool libevent-dev libcurl4-openssl-dev libgtk2.0-dev uuid-dev intltool libsqlite3-dev valac libjansson-dev cmake qtchooser qtbase5-dev libqt5webkit5-dev qttools5-dev qttools5-dev-tools libssl-dev\n
"},{"location":"build_seafile/linux/#building","title":"Building","text":"$ sudo yum install wget gcc libevent-devel openssl-devel gtk2-devel libuuid-devel sqlite-devel jansson-devel intltool cmake libtool vala gcc-c++ qt5-qtbase-devel qt5-qttools-devel qt5-qtwebkit-devel libcurl-devel openssl-devel\n
# without expand_aliases, the wget alias below might not work\nshopt -s expand_aliases\n\nexport version=8.0.0\nalias wget='wget --content-disposition -nc'\nwget https://github.com/haiwen/libsearpc/archive/v3.2-latest.tar.gz\nwget https://github.com/haiwen/ccnet/archive/v${version}.tar.gz \nwget https://github.com/haiwen/seafile/archive/v${version}.tar.gz\nwget https://github.com/haiwen/seafile-client/archive/v${version}.tar.gz\ntar xf libsearpc-3.2-latest.tar.gz\ntar xf ccnet-${version}.tar.gz\ntar xf seafile-${version}.tar.gz\ntar xf seafile-client-${version}.tar.gz\n
"},{"location":"build_seafile/linux/#libsearpc","title":"libsearpc","text":"export PREFIX=/usr\nexport PKG_CONFIG_PATH=\"$PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH\"\nexport PATH=\"$PREFIX/bin:$PATH\"\n
"},{"location":"build_seafile/linux/#seafile","title":"seafile","text":"cd libsearpc-3.2-latest\n./autogen.sh\n./configure --prefix=$PREFIX\nmake\nsudo make install\ncd ..\ngit clone --branch=v4.3.0 https://github.com/warmcat/libwebsockets\ncd libwebsockets\nmkdir build\ncd build\ncmake ..\nmake\nsudo make install\ncd ..\nSet --enable-ws to no to disable the notification server. After that, you can build seafile:
"},{"location":"build_seafile/linux/#seafile-client","title":"seafile-client","text":"cd seafile-${version}/\n./autogen.sh\n./configure --prefix=$PREFIX --disable-fuse\nmake\nsudo make install\ncd ..\n
"},{"location":"build_seafile/linux/#custom-prefix","title":"custom prefix","text":"cd seafile-client-${version}\ncmake -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=$PREFIX .\nmake\nsudo make install\ncd ..\n$PREFIX, i.e. /opt, you may need a script to set the path variables correctlycat >$PREFIX/bin/seafile-applet.sh <<END\n#!/bin/bash\nexport LD_LIBRARY_PATH=\"$PREFIX/lib:$LD_LIBRARY_PATH\"\nexport PATH=\"$PREFIX/bin:$PATH\"\nexec seafile-applet $@\nEND\ncat >$PREFIX/bin/seaf-cli.sh <<END\nexport LD_LIBRARY_PATH=\"$PREFIX/lib:$LD_LIBRARY_PATH\"\nexport PATH=\"$PREFIX/bin:$PATH\"\nexport PYTHONPATH=$PREFIX/lib/python2.7/site-packages\nexec seaf-cli $@\nEND\nchmod +x $PREFIX/bin/seafile-applet.sh $PREFIX/bin/seaf-cli.sh\n$PREFIX/bin/seafile-applet.sh.
"},{"location":"build_seafile/osx/#building-sync-client","title":"Building Sync Client","text":"
universal_archs arm64 x86_64. Specifies the architectures for which MacPorts builds.+universal. MacPorts installs universal versions of all ports.sudo port install autoconf automake pkgconfig libtool glib2 libevent vala openssl git jansson cmake libwebsockets.
export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/opt/local/lib/pkgconfig:/usr/local/lib/pkgconfig\nexport PATH=/opt/local/bin:/usr/local/bin:/opt/local/Library/Frameworks/Python.framework/Versions/3.10/bin:$PATH\nexport LDFLAGS=\"-L/opt/local/lib -L/usr/local/lib\"\nexport CFLAGS=\"-I/opt/local/include -I/usr/local/include\"\nexport CPPFLAGS=\"-I/opt/local/include -I/usr/local/include\"\nexport LD_LIBRARY_PATH=/opt/lib:/usr/local/lib:/opt/local/lib/:/usr/local/lib/:$LD_LIBRARY_PATH\n\nQT_BASE=$HOME/Qt/6.2.4/macos\nexport PATH=$QT_BASE/bin:$PATH\nexport PKG_CONFIG_PATH=$QT_BASE/lib/pkgconfig:$PKG_CONFIG_PATH\nexport NOTARIZE_APPLE_ID=\"Your notarize account\"\nexport NOTARIZE_PASSWORD=\"Your notarize password\"\nexport NOTARIZE_TEAM_ID=\"Your notarize team id\"\nseafile-workspace/\nseafile-workspace/libsearpc/\nseafile-workspace/seafile/\nseafile-workspace/seafile-client/\n$ cd seafile-workspace/libsearpc/\n$ ./autogen.sh\n$ ./configure --disable-compile-demo --enable-compile-universal=yes\n$ make\n$ make install\n$ cd seafile-workspace/seafile/\n$ ./autogen.sh\n$ ./configure --disable-fuse --enable-compile-universal=yes\n$ make\n$ make install\n
"},{"location":"build_seafile/osx/#packaging","title":"Packaging","text":"$ cd seafile-workspace/seafile-client/\n$ cmake -GXcode -B. -S.\n$ xcodebuild -target seafile-applet -configuration Release\n
"},{"location":"build_seafile/rpi/","title":"How to Build Seafile Server Release Package","text":"python3 build-mac-local-py3.py --brand=\"\" --version=1.0.0 --nostrip --universal\nseafile-build.sh is compatible with more platforms, including Raspberry Pi, arm-64, x86-64.
"},{"location":"build_seafile/rpi/#setup-the-build-environment","title":"Setup the build environment","text":"
"},{"location":"build_seafile/rpi/#install-packages","title":"Install packages","text":"
"},{"location":"build_seafile/rpi/#compile-development-libraries","title":"Compile development libraries","text":""},{"location":"build_seafile/rpi/#libevhtp","title":"libevhtp","text":"sudo apt-get install build-essential\nsudo apt-get install libevent-dev libcurl4-openssl-dev libglib2.0-dev uuid-dev intltool libsqlite3-dev libmysqlclient-dev libarchive-dev libtool libjansson-dev valac libfuse-dev re2c flex python-setuptools cmake\ngit clone https://www.github.com/haiwen/libevhtp.git\ncd libevhtp\ncmake -DEVHTP_DISABLE_SSL=ON -DEVHTP_BUILD_SHARED=OFF .\nmake\nsudo make install\nldconfig to update the system libraries cache:
"},{"location":"build_seafile/rpi/#install-python-libraries","title":"Install python libraries","text":"sudo ldconfig\n/home/pi/dev/seahub_thirdpart:mkdir -p ~/dev/seahub_thirdpart\n/tmp/:
/home/pi/dev/seahub_thirdpart:
"},{"location":"build_seafile/rpi/#prepare-seafile-source-code","title":"Prepare seafile source code","text":"cd ~/dev/seahub_thirdpart\nexport PYTHONPATH=.\npip install -t ~/dev/seahub_thirdpart/ /tmp/pytz-2016.1.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/Django-1.8.10.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django-statici18n-1.1.3.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/djangorestframework-3.3.2.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django_compressor-1.4.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/jsonfield-1.0.3.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django-post_office-2.0.6.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/gunicorn-19.4.5.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/flup-1.0.2.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/chardet-2.3.0.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/python-dateutil-1.5.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/six-1.9.0.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/django-picklefield-0.3.2.tar.gz\nwget -O /tmp/django_constance.zip https://github.com/haiwen/django-constance/archive/bde7f7c.zip\npip install -t ~/dev/seahub_thirdpart/ /tmp/django_constance.zip\npip install -t ~/dev/seahub_thirdpart/ /tmp/jdcal-1.2.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/et_xmlfile-1.0.1.tar.gz\npip install -t ~/dev/seahub_thirdpart/ /tmp/openpyxl-2.3.0.tar.gz\n
"},{"location":"build_seafile/rpi/#fetch-git-tags-and-prepare-source-tarballs","title":"Fetch git tags and prepare source tarballs","text":"build-server.py script to build the server package from the source tarballs.
v6.0.1-server tag.v3.0-latest tag (libsearpc has been quite stable and has essentially no further development, so the tag is always v3.0-latest)PKG_CONFIG_PATH environment variable (so we don't need to make and make install libsearpc/ccnet/seafile into the system):
"},{"location":"build_seafile/rpi/#libsearpc","title":"libsearpc","text":"export PKG_CONFIG_PATH=/home/pi/dev/seafile/lib:$PKG_CONFIG_PATH\nexport PKG_CONFIG_PATH=/home/pi/dev/libsearpc:$PKG_CONFIG_PATH\nexport PKG_CONFIG_PATH=/home/pi/dev/ccnet:$PKG_CONFIG_PATH\n
"},{"location":"build_seafile/rpi/#ccnet","title":"ccnet","text":"cd ~/dev\ngit clone https://github.com/haiwen/libsearpc.git\ncd libsearpc\ngit reset --hard v3.0-latest\n./autogen.sh\n./configure\nmake dist\n
"},{"location":"build_seafile/rpi/#seafile","title":"seafile","text":"cd ~/dev\ngit clone https://github.com/haiwen/ccnet-server.git\ncd ccnet\ngit reset --hard v6.0.1-server\n./autogen.sh\n./configure\nmake dist\n
"},{"location":"build_seafile/rpi/#seahub","title":"seahub","text":"cd ~/dev\ngit clone https://github.com/haiwen/seafile-server.git\ncd seafile\ngit reset --hard v6.0.1-server\n./autogen.sh\n./configure\nmake dist\n
"},{"location":"build_seafile/rpi/#seafobj","title":"seafobj","text":"cd ~/dev\ngit clone https://github.com/haiwen/seahub.git\ncd seahub\ngit reset --hard v6.0.1-server\n./tools/gen-tarball.py --version=6.0.1 --branch=HEAD\n
"},{"location":"build_seafile/rpi/#seafdav","title":"seafdav","text":"cd ~/dev\ngit clone https://github.com/haiwen/seafobj.git\ncd seafobj\ngit reset --hard v6.0.1-server\nmake dist\n
"},{"location":"build_seafile/rpi/#copy-the-source-tar-balls-to-the-same-folder","title":"Copy the source tar balls to the same folder","text":"cd ~/dev\ngit clone https://github.com/haiwen/seafdav.git\ncd seafdav\ngit reset --hard v6.0.1-server\nmake\n
"},{"location":"build_seafile/rpi/#run-the-packaging-script","title":"Run the packaging script","text":"mkdir ~/seafile-sources\ncp ~/dev/libsearpc/libsearpc-<version>-tar.gz ~/seafile-sources\ncp ~/dev/ccnet/ccnet-<version>-tar.gz ~/seafile-sources\ncp ~/dev/seafile/seafile-<version>-tar.gz ~/seafile-sources\ncp ~/dev/seahub/seahub-<version>-tar.gz ~/seafile-sources\n\ncp ~/dev/seafobj/seafobj.tar.gz ~/seafile-sources\ncp ~/dev/seafdav/seafdav.tar.gz ~/seafile-sources\nbuild-server.py script to build the server package.mkdir ~/seafile-server-pkgs\n~/dev/seafile/scripts/build-server.py --libsearpc_version=<libsearpc_version> --ccnet_version=<ccnet_version> --seafile_version=<seafile_version> --seahub_version=<seahub_version> --srcdir= --thirdpartdir=/home/pi/dev/seahub_thirdpart --srcdir=/home/pi/seafile-sources --outputdir=/home/pi/seafile-server-pkgs\nseafile-server_6.0.1_pi.tar.gz in ~/seafile-server-pkgs folder.
"},{"location":"build_seafile/rpi/#test-upgrading-from-a-previous-version","title":"Test upgrading from a previous version","text":"seafile.sh start and seahub.sh start, you can login from a browser.
"},{"location":"build_seafile/server/","title":"Server development","text":"root user, then:
"},{"location":"build_seafile/server/#run-a-container","title":"Run a container","text":"mkdir -p /root/seafile-ce-docker/source-code\nmkdir -p /root/seafile-ce-docker/conf\nmkdir -p /root/seafile-ce-docker/logs\nmkdir -p /root/seafile-ce-docker/mysql-data\nmkdir -p /root/seafile-ce-docker/seafile-data/library-template\ndocker run --mount type=bind,source=/root/seafile-ce-docker/source-code,target=/root/dev/source-code \\\n --mount type=bind,source=/root/seafile-ce-docker/conf,target=/root/dev/conf \\\n --mount type=bind,source=/root/seafile-ce-docker/logs,target=/root/dev/logs \\\n --mount type=bind,source=/root/seafile-ce-docker/seafile-data,target=/root/dev/seafile-data \\\n --mount type=bind,source=/root/seafile-ce-docker/mysql-data,target=/var/lib/mysql \\\n -it -p 8000:8000 -p 8082:8082 -p 3000:3000 --name seafile-ce-env ubuntu:22.04 bash\napt-get update && apt-get upgrade -y\n\napt-get install -y ssh libevent-dev libcurl4-openssl-dev libglib2.0-dev uuid-dev intltool libsqlite3-dev libmysqlclient-dev libarchive-dev libtool libjansson-dev valac libfuse-dev python3-dateutil cmake re2c flex sqlite3 python3-pip python3-simplejson git libssl-dev libldap2-dev libonig-dev vim vim-scripts wget cmake gcc autoconf automake mysql-client librados-dev libxml2-dev curl sudo telnet netcat unzip netbase ca-certificates apt-transport-https build-essential libxslt1-dev libffi-dev libpcre3-dev libz-dev xz-utils nginx pkg-config poppler-utils libmemcached-dev sudo ldap-utils libldap2-dev libjwt-dev\ncurl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg\necho \"deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_16.x nodistro main\" | sudo tee /etc/apt/sources.list.d/nodesource.list\napt-get install -y nodejs\n
"},{"location":"build_seafile/server/#install-mariadb-and-create-databases","title":"Install MariaDB and Create Databases","text":"apt-get install -y python3 python3-dev python3-pip python3-setuptools python3-ldap\n\npython3 -m pip install --upgrade pip\n\npip3 install Django==4.2.* django-statici18n==2.3.* django_webpack_loader==1.7.* django_picklefield==3.1 django_formtools==2.4 django_simple_captcha==0.6.* djangosaml2==1.5.* djangorestframework==3.14.* python-dateutil==2.8.* pyjwt==2.6.* pycryptodome==3.16.* python-cas==1.6.* pysaml2==7.2.* requests==2.28.* requests_oauthlib==1.3.* future==0.18.* gunicorn==20.1.* mysqlclient==2.1.* qrcode==7.3.* pillow==10.2.* chardet==5.1.* cffi==1.15.1 captcha==0.5.* openpyxl==3.0.* Markdown==3.4.* bleach==5.0.* python-ldap==3.4.* sqlalchemy==2.0.18 redis mock pytest pymysql configparser pylibmc django-pylibmc nose exam splinter pytest-django\napt-get install -y mariadb-server\nservice mariadb start\nmysqladmin -u root password your_password\n
"},{"location":"build_seafile/server/#download-source-code","title":"Download Source Code","text":"mysql -uroot -pyour_password -e \"CREATE DATABASE ccnet CHARACTER SET utf8;\"\nmysql -uroot -pyour_password -e \"CREATE DATABASE seafile CHARACTER SET utf8;\"\nmysql -uroot -pyour_password -e \"CREATE DATABASE seahub CHARACTER SET utf8;\"\n
"},{"location":"build_seafile/server/#compile-and-install-seaf-server","title":"Compile and Install seaf-server","text":"cd ~/\ncd ~/dev/source-code\n\ngit clone https://github.com/haiwen/libevhtp.git\ngit clone https://github.com/haiwen/libsearpc.git\ngit clone https://github.com/haiwen/seafile-server.git\ngit clone https://github.com/haiwen/seafevents.git\ngit clone https://github.com/haiwen/seafobj.git\ngit clone https://github.com/haiwen/seahub.git\n\ncd libevhtp/\ngit checkout tags/1.1.7 -b tag-1.1.7\n\ncd ../libsearpc/\ngit checkout tags/v3.3-latest -b tag-v3.3-latest\n\ncd ../seafile-server\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n\ncd ../seafevents\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n\ncd ../seafobj\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n\ncd ../seahub\ngit checkout tags/v11.0.5-server -b tag-v11.0.5-server\n
"},{"location":"build_seafile/server/#create-conf-files","title":"Create Conf Files","text":"cd ../libevhtp\ncmake -DEVHTP_DISABLE_SSL=ON -DEVHTP_BUILD_SHARED=OFF .\nmake\nmake install\nldconfig\n\ncd ../libsearpc\n./autogen.sh\n./configure\nmake\nmake install\nldconfig\n\ncd ../seafile-server\n./autogen.sh\n./configure --disable-fuse\nmake\nmake install\nldconfig\n
"},{"location":"build_seafile/server/#start-seaf-server","title":"Start seaf-server","text":"cd ~/dev/conf\n\ncat > ccnet.conf <<EOF\n[Database]\nENGINE = mysql\nHOST = localhost\nPORT = 3306\nUSER = root\nPASSWD = 123456\nDB = ccnet\nCONNECTION_CHARSET = utf8\nCREATE_TABLES = true\nEOF\n\ncat > seafile.conf <<EOF\n[database]\ntype = mysql\nhost = localhost\nport = 3306\nuser = root\npassword = 123456\ndb_name = seafile\nconnection_charset = utf8\ncreate_tables = true\nEOF\n\ncat > seafevents.conf <<EOF\n[DATABASE]\ntype = mysql\nusername = root\npassword = 123456\nname = seahub\nhost = localhost\nEOF\n\ncat > seahub_settings.py <<EOF\nDATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.mysql',\n 'NAME': 'seahub',\n 'USER': 'root',\n 'PASSWORD': '123456',\n 'HOST': 'localhost',\n 'PORT': '3306',\n }\n}\nFILE_SERVER_ROOT = 'http://127.0.0.1:8082'\nSERVICE_URL = 'http://127.0.0.1:8000'\nEOF\n
"},{"location":"build_seafile/server/#start-seafevents-and-seahub","title":"Start seafevents and seahub","text":""},{"location":"build_seafile/server/#prepare-environment-variables","title":"Prepare environment variables","text":"seaf-server -F /root/dev/conf -d /root/dev/seafile-data -l /root/dev/logs/seafile.log >> /root/dev/logs/seafile.log 2>&1 &\n
"},{"location":"build_seafile/server/#start-seafevents","title":"Start seafevents","text":"export CCNET_CONF_DIR=/root/dev/conf\nexport SEAFILE_CONF_DIR=/root/dev/seafile-data\nexport SEAFILE_CENTRAL_CONF_DIR=/root/dev/conf\nexport SEAHUB_DIR=/root/dev/source-code/seahub\nexport SEAHUB_LOG_DIR=/root/dev/logs\nexport PYTHONPATH=/usr/local/lib/python3.10/dist-packages/:/usr/local/lib/python3.10/site-packages/:/root/dev/source-code/:/root/dev/source-code/seafobj/:/root/dev/source-code/seahub/thirdpart:$PYTHONPATH\n
"},{"location":"build_seafile/server/#start-seahub","title":"Start seahub","text":""},{"location":"build_seafile/server/#create-seahub-database-tables","title":"Create seahub database tables","text":"cd /root/dev/source-code/seafevents/\npython3 main.py --loglevel=debug --logfile=/root/dev/logs/seafevents.log --config-file /root/dev/conf/seafevents.conf >> /root/dev/logs/seafevents.log 2>&1 &\n
"},{"location":"build_seafile/server/#create-user","title":"Create user","text":"cd /root/dev/source-code/seahub/\npython3 manage.py migrate\n
"},{"location":"build_seafile/server/#start-seahub_1","title":"Start seahub","text":"python3 manage.py createsuperuser\npython3 manage.py runserver 0.0.0.0:8000\ncd /root/dev/source-code/seahub\n\ngit fetch origin master:master\ngit checkout master\nimport os\nPROJECT_ROOT = '/root/dev/source-code/seahub'\nWEBPACK_LOADER = {\n 'DEFAULT': {\n 'BUNDLE_DIR_NAME': 'frontend/',\n 'STATS_FILE': os.path.join(PROJECT_ROOT,\n 'frontend/webpack-stats.dev.json'),\n }\n}\nDEBUG = True\ncd /root/dev/source-code/seahub/frontend\n\nnpm install\ncd /root/dev/source-code/seahub/frontend\n\nnpm run dev\n
"},{"location":"build_seafile/windows/#breakpad","title":"Breakpad","text":"
# Example of the install command:\n$ ./vcpkg.exe install curl[core,openssl]:x64-windows\n
"},{"location":"build_seafile/windows/#building-sync-client","title":"Building Sync Client","text":"$ git clone --depth=1 git@github.com:chromium/gyp.git\n$ python setup.py install\n$ git clone --depth=1 git@github.com:google/breakpad.git\n$ cd breakpad\n$ git clone https://github.com/google/googletest.git testing\n$ cd ..\n# create vs solution, this may throw an error \"module collections.abc has no attribute OrderedDict\", you should open the msvs.py and replace 'collections.abc' with 'collections'.\n$ gyp \u2013-no-circular-check breakpad\\src\\client\\windows\\breakpad_client.gyp\n
gyp --no-circular-check breakpad\\src\\tools\\windows\\tools_windows.gyp\n
copy \"C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community\\VC\\Redist\\MSVC\\v142\\MergeModules\\MergeModules\\Microsoft_VC142_CRT_x64.msm\" C:\\packagelib\nseafile-workspace/\nseafile-workspace/libsearpc/\nseafile-workspace/seafile/\nseafile-workspace/seafile-client/\nseafile-workspace/seafile-shell-ext/\n$ cd seafile-workspace/libsearpc/\n$ devenv libsearpc.sln /build \"Release|x64\"\n$ cd seafile-workspace/seafile/\n$ devenv seafile.sln /build \"Release|x64\"\n$ devenv msi/custom/seafile_custom.sln /build \"Release|x64\"\n$ cd seafile-workspace/seafile-client/\n$ devenv third_party/quazip/quazip.sln /build \"Release|x64\"\n$ devenv seafile-client.sln /build \"Release|x64\"\n
"},{"location":"build_seafile/windows/#packaging","title":"Packaging","text":"$ cd seafile-workspace/seafile-shell-ext/\n$ devenv extensions/seafile_ext.sln /build \"Release|x64\"\n$ devenv seadrive-thumbnail-ext/seadrive_thumbnail_ext.sln /build \"Release|x64\"\n
"},{"location":"changelog/changelog-for-seafile-professional-server-old/","title":"Seafile Professional Server Changelog (old)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#44","title":"4.4","text":"$ cd seafile-workspace/seafile-client/third_party/quazip\n$ devenv quazip.sln /build Release|x64\n$ cd seafile-workspace/seafile/scripts/build\n$ python build-msi-vs.py 1.0.0\n
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#448-20151217","title":"4.4.8 (2015.12.17)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#447-20151120","title":"4.4.7 (2015.11.20)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#446-20151109","title":"4.4.6 (2015.11.09)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#444-20151029","title":"4.4.4 (2015.10.29)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#443-20151020","title":"4.4.3 (2015.10.20)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#442-20151019","title":"4.4.2 (2015.10.19)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#441-beta-20150924","title":"4.4.1 beta (2015.09.24)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#440-beta-20150921","title":"4.4.0 beta (2015.09.21)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#43","title":"4.3","text":"COMPRESS_CACHE_BACKEND = 'locmem://' should be added to seahub_settings.py
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#433-20150821","title":"4.3.3 (2015.08.21)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#432-20150812","title":"4.3.2 (2015.08.12)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#431-20150731","title":"4.3.1 (2015.07.31)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#42","title":"4.2","text":"THUMBNAIL_DEFAULT_SIZE = 24, instead of THUMBNAIL_DEFAULT_SIZE = '24'
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#424-20150708","title":"4.2.4 (2015.07.08)","text":"rm -rf /tmp/seafile-office-output/html/\n
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#423-20150707","title":"4.2.3 (2015.07.07)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#422-20150703","title":"4.2.2 (2015.07.03)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#420-20150529","title":"4.2.0 (2015.05.29)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#41","title":"4.1","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#412-20150507","title":"4.1.2 (2015.05.07)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#411-20150416","title":"4.1.1 (2015.04.16)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#410-20150401","title":"4.1.0 (2015.04.01)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#40","title":"4.0","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#406-20150306","title":"4.0.6 (2015.03.06)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#405-20150213","title":"4.0.5 (2015.02.13)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#404-20150205","title":"4.0.4 (2015.02.05)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#403-20150115","title":"4.0.3 (2015.01.15)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#402-20150106","title":"4.0.2 (2015.01.06)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#401-20141229","title":"4.0.1 (2014.12.29)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#400-20141213","title":"4.0.0 (2014.12.13)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#31","title":"3.1","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#3113-20141125","title":"3.1.13 (2014.11.25)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#3112-20141117","title":"3.1.12 (2014.11.17)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#3111-20141103","title":"3.1.11 (2014.11.03)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#3110-20141027","title":"3.1.10 (2014.10.27)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#319-20141013","title":"3.1.9 (2014.10.13)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#317-318","title":"3.1.7, 3.1.8","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#316-20140916","title":"3.1.6 (2014.09.16)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#315-20140913","title":"3.1.5 (2014.09.13)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#314-20140911","title":"3.1.4 (2014.09.11)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#313-20140829","title":"3.1.3 (2014.08.29)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#312-20140827","title":"3.1.2 (2014.08.27)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#311-20140818","title":"3.1.1 (2014.08.18)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#310-20140815","title":"3.1.0 (2014.08.15)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#30","title":"3.0","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#307","title":"3.0.7","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#306","title":"3.0.6","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#305","title":"3.0.5","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#304","title":"3.0.4","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#303","title":"3.0.3","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#302","title":"3.0.2","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#301","title":"3.0.1","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#300","title":"3.0.0","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#22","title":"2.2","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#221","title":"2.2.1","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#21","title":"2.1","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#215","title":"2.1.5","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#214-1","title":"2.1.4-1","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#214","title":"2.1.4","text":"pro.py search --clear command
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#213","title":"2.1.3","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#212","title":"2.1.2","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#211","title":"2.1.1","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#20","title":"2.0","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#205","title":"2.0.5","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#204","title":"2.0.4","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#203","title":"2.0.3","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#201","title":"2.0.1","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#200","title":"2.0.0","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#18","title":"1.8","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#183","title":"1.8.3","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#182","title":"1.8.2","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#181","title":"1.8.1","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#180","title":"1.8.0","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#17","title":"1.7","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#1704","title":"1.7.0.4","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#170","title":"1.7.0","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/","title":"Seafile Professional Server Changelog","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#11014-2024-08-22","title":"11.0.14 (2024-08-22)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#11013-2024-08-14","title":"11.0.13 (2024-08-14)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#11012-2024-08-07","title":"11.0.12 (2024-08-07)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#11011-2024-07-24","title":"11.0.11 (2024-07-24)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#11010-2024-07-09","title":"11.0.10 (2024-07-09)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1109-2024-06-25","title":"11.0.9 (2024-06-25)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1108-2024-06-20","title":"11.0.8 (2024-06-20)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1107-2024-06-03","title":"11.0.7 (2024-06-03)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1106-beta-2024-04-19","title":"11.0.6 beta (2024-04-19)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1105-beta-2024-03-20","title":"11.0.5 beta (2024-03-20)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1104-beta-and-sdoc-editor-05-2024-02-01","title":"11.0.4 beta and SDoc editor 0.5 (2024-02-01)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#100","title":"10.0","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#10015-2024-03-21","title":"10.0.15 (2024-03-21)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#10014-2024-02-27","title":"10.0.14 (2024-02-27)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#10013-2024-02-05","title":"10.0.13 (2024-02-05)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#10011-2023-11-09","title":"10.0.11 (2023-11-09)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#10010-2023-10-17","title":"10.0.10 (2023-10-17)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1009-2023-08-25","title":"10.0.9 (2023-08-25)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1008-2023-08-01","title":"10.0.8 (2023-08-01)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1007-2023-07-25","title":"10.0.7 (2023-07-25)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1006-2023-06-27","title":"10.0.6 (2023-06-27)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1005-2023-06-12","title":"10.0.5 (2023-06-12)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1004-2023-05-17","title":"10.0.4 (2023-05-17)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1003-beta-2023-04-12","title":"10.0.3 beta (2023-04-12)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1002-beta-2023-04-12","title":"10.0.2 beta (2023-04-12)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#1000-beta-2023-04-12","title":"10.0.0 beta (2023-04-12)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#90","title":"9.0","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#9015-2023-03-01","title":"9.0.15 (2023-03-01)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#9014-2023-01-06","title":"9.0.14 (2023-01-06)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#9013-2022-11-11","title":"9.0.13 (2022-11-11)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#9012-2022-11-04","title":"9.0.12 (2022-11-04)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#9011-2022-10-27","title":"9.0.11 (2022-10-27)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#9010-2022-10-12","title":"9.0.10 (2022-10-12)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#909-2022-09-22","title":"9.0.9 (2022-09-22)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#908-2022-09-09","title":"9.0.8 (2022-09-09)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#907-20220811","title":"9.0.7 (2022/08/11)","text":"pip3 install lxml to install it.
"},{"location":"changelog/changelog-for-seafile-professional-server/#906-20220706","title":"9.0.6 (2022/07/06)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#905-20220321","title":"9.0.5 (2022/03/21)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#904-20220124","title":"9.0.4 (2022/01/24)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#903-beta-20211228","title":"9.0.3 beta (2021/12/28)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#902-beta-20211215","title":"9.0.2 beta (2021/12/15)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#901","title":"9.0.1","text":"[fileserver]\nuse_go_fileserver = true\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#8016-20211228","title":"8.0.16 (2021/12/28)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#8015-20211206","title":"8.0.15 (2021/12/06)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#8014-20211117","title":"8.0.14 (2021/11/17)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#8012-20211103","title":"8.0.12 (2021/11/03)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#8011-20210926","title":"8.0.11 (2021/09/26)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#8010-20210909","title":"8.0.10 (2021/09/09)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#809-20210826","title":"8.0.9 (2021/08/26)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#808-20210806","title":"8.0.8 (2021/08/06)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#807-20210719","title":"8.0.7 (2021/07/19)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#806-20210715","title":"8.0.6 (2021/07/15)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#805-20210625","title":"8.0.5 (2021/06/25)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#804-20210520","title":"8.0.4 (2021/05/20)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#803-20210427","title":"8.0.3 (2021/04/27)","text":"
fs_id_list_request_timeout configuration, which defaults to 5 minutes. These two options were added to prevent long fs-id-list requests from overloading the server. If you have large libraries on the server, this can cause an \"internal server error\" to be returned to the client. You have to set a large enough limit for these two options.
"},{"location":"changelog/changelog-for-seafile-professional-server/#802-20210421","title":"8.0.2 (2021/04/21)","text":"[fileserver]\nmax_sync_file_count = 100000\nfs_id_list_request_timeout = 300\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#801-20210407","title":"8.0.1 (2021/04/07)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#800-beta-20210302","title":"8.0.0 beta (2021/03/02)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#71","title":"7.1","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7121-20210713","title":"7.1.21 (2021/07/13)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7120-20210702","title":"7.1.20 (2021/07/02)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7119-20210604","title":"7.1.19 (2021/06/04)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7118-20210513","title":"7.1.18 (2021/05/13)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7117-20210426","title":"7.1.17 (2021/04/26)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7116-20210419","title":"7.1.16 (2021/04/19)","text":"
fs_id_list_request_timeout configuration, which defaults to 5 minutes. These two options were added to prevent long fs-id-list requests from overloading the server. If you have large libraries on the server, this can cause an \"internal server error\" to be returned to the client. You have to set a large enough limit for these two options.
"},{"location":"changelog/changelog-for-seafile-professional-server/#7115-20210318","title":"7.1.15 (2021/03/18)","text":"[fileserver]\nmax_sync_file_count = 100000\nfs_id_list_request_timeout = 300\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#7114-20210226","title":"7.1.14 (2021/02/26)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7113-20210208","title":"7.1.13 (2021/02/08)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7112-20210203","title":"7.1.12 (2021/02/03)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7111-20210128","title":"7.1.11 (2021/01/28)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7110-20200111","title":"7.1.10 (2020/01/11)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#719-20201202","title":"7.1.9 (2020/12/02)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#718-20201012","title":"7.1.8 (2020/10/12)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#717-20200828","title":"7.1.7 (2020/08/28)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#716-20200728","title":"7.1.6 (2020/07/28)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#715-20200630","title":"7.1.5 (2020/06/30)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#714-20200514","title":"7.1.4 (2020/05/14)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#713-20200408","title":"7.1.3 (2020/04/08)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#711-beta-20200227","title":"7.1.1 Beta (2020/02/27)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#710-beta-20200219","title":"7.1.0 Beta (2020/02/19)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#70","title":"7.0","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7018-20200521","title":"7.0.18 (2020/05/21)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7017-20200428","title":"7.0.17 (2020/04/28)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7016-20200401","title":"7.0.16 (2020/04/01)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7015-deprecated","title":"7.0.15 (Deprecated)","text":""},{"location":"changelog/changelog-for-seafile-professional-server/#7014-20200306","title":"7.0.14 (2020/03/06)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7013-20200116","title":"7.0.13 (2020/01/16)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7012-20200110","title":"7.0.12 (2020/01/10)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7011-20191115","title":"7.0.11 (2019/11/15)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#7010-20191022","title":"7.0.10 (2019/10/22)","text":"-Xms1g -Xmx1g
"},{"location":"changelog/changelog-for-seafile-professional-server/#709-20190920","title":"7.0.9 (2019/09/20)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#708-20190826","title":"7.0.8 (2019/08/26)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#707-20190729","title":"7.0.7 (2019/07/29)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#706-20190722","title":"7.0.6 (2019/07/22)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#705-20190716","title":"7.0.5 (2019/07/16)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#704-20190705","title":"7.0.4 (2019/07/05)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#703-20190613","title":"7.0.3 (2019/06/13)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#702-beta-20190517","title":"7.0.2 beta (2019/05/17)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#701-beta-20190418","title":"7.0.1 beta (2019/04/18)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#63","title":"6.3","text":"conf/gunicorn.conf instead of running ./seahub.sh start <another-port>../seahub.sh python-env seahub/manage.py migrate_file_comment\nseafevents.conf):[INDEX FILES]\n...\nhighlight = fvh\n...\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#6313-20190320","title":"6.3.13 (2019/03/20)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#6312-20190221","title":"6.3.12 (2019/02/21)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#6311-20190115","title":"6.3.11 (2019/01/15)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#6310-20190102","title":"6.3.10 (2019/01/02)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#639-20181213","title":"6.3.9 (2018/12/13)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#638-20181210","title":"6.3.8 (2018/12/10)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#637-20181016","title":"6.3.7 (2018/10/16)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#636-20180921","title":"6.3.6 (2018/09/21)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#635-20180918","title":"6.3.5 (2018/09/18)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#634-20180816","title":"6.3.4 (2018/08/16)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#633-20180815","title":"6.3.3 (2018/08/15)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#632-20180730","title":"6.3.2 (2018/07/30)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#631-20180725","title":"6.3.1 (2018/07/25)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#630-beta-20180628","title":"6.3.0 Beta (2018/06/28)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#62","title":"6.2","text":"
./seahub.sh start instead of ./seahub.sh start-fastcgilocation / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n # used for view/edit office file via Office Online Server\n client_max_body_size 0;\n\n access_log /var/log/nginx/seahub.access.log;\n error_log /var/log/nginx/seahub.error.log;\n }\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#6213-2018518","title":"6.2.13 (2018.5.18)","text":" # seahub\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#6212-2018420","title":"6.2.12 (2018.4.20)","text":"\"file already exists\" error for the first time.
"},{"location":"changelog/changelog-for-seafile-professional-server/#6211-2018419","title":"6.2.11 (2018.4.19)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#6210-2018320","title":"6.2.10 (2018.3.20)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#629-20180210","title":"6.2.9 (2018.02.10)","text":"per_page parameter to 10 when searching files via the API.
"},{"location":"changelog/changelog-for-seafile-professional-server/#628-20180202","title":"6.2.8 (2018.02.02)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#627-20180122","title":"6.2.7 (2018.01.22)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#625-626-deprecated","title":"6.2.5, 6.2.6 (deprecated)","text":""},{"location":"changelog/changelog-for-seafile-professional-server/#624-20171220","title":"6.2.4 (2017.12.20)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#623-20171219","title":"6.2.3 (2017.12.19)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#622-20171212","title":"6.2.2 (2017.12.12)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#621-beta-20171122","title":"6.2.1 beta (2017.11.22)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#620-beta-20171016","title":"6.2.0 beta (2017.10.16)","text":"repo_owner field to library search web api.
"},{"location":"changelog/changelog-for-seafile-professional-server/#61","title":"6.1","text":"ENABLE_REPO_SNAPSHOT_LABEL = True to turn the feature on)
"},{"location":"changelog/changelog-for-seafile-professional-server/#618-20170818","title":"6.1.8 (2017.08.18)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#617-20170817","title":"6.1.7 (2017.08.17)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#614-20170711","title":"6.1.4 (2017.07.11)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#613-20170706","title":"6.1.3 (2017.07.06)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#612-deprecated","title":"6.1.2 (deprecated)","text":""},{"location":"changelog/changelog-for-seafile-professional-server/#611-20170619","title":"6.1.1 (2017.06.19)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#610-beta-20170606","title":"6.1.0 beta (2017.06.06)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#60","title":"6.0","text":"ENABLE_WIKI = True in seahub_settings.py)cd /data/haiwen/\nln -s /nfs-share/seafile-httptemp seafile-data/httptemp\n
"},{"location":"changelog/changelog-for-seafile-professional-server/#6012-20170417","title":"6.0.12 (2017.04.17)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#6011-deprecated","title":"6.0.11 (Deprecated)","text":""},{"location":"changelog/changelog-for-seafile-professional-server/#6010-20170407","title":"6.0.10 (2017.04.07)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#609-20170401","title":"6.0.9 (2017.04.01)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#608-20170223","title":"6.0.8 (2017.02.23)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#607-20170118","title":"6.0.7 (2017.01.18)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#606-20170111","title":"6.0.6 (2017.01.11)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#605-20161219","title":"6.0.5 (2016.12.19)","text":"# -*- coding: utf-8 -*- to seahub_settings.py, so that admins can use non-ASCII characters in the file.
"},{"location":"changelog/changelog-for-seafile-professional-server/#604-20161129","title":"6.0.4 (2016.11.29)","text":"[Audit] and [AUDIT] in seafevent.conf
"},{"location":"changelog/changelog-for-seafile-professional-server/#603-20161117","title":"6.0.3 (2016.11.17)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#602-20161020","title":"6.0.2 (2016.10.20)","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#601-beta","title":"6.0.1 beta","text":"
"},{"location":"changelog/changelog-for-seafile-professional-server/#600-beta","title":"6.0.0 beta","text":"
"},{"location":"changelog/client-changelog/","title":"Seafile Client Changelog","text":""},{"location":"changelog/client-changelog/#90","title":"9.0","text":""},{"location":"changelog/client-changelog/#908-20240812","title":"9.0.8 (2024/08/12)","text":"
"},{"location":"changelog/client-changelog/#907-20240723","title":"9.0.7 (2024/07/23)","text":"
"},{"location":"changelog/client-changelog/#906-20240523","title":"9.0.6 (2024/05/23)","text":"
"},{"location":"changelog/client-changelog/#905-20240305","title":"9.0.5 (2024/03/05)","text":"
"},{"location":"changelog/client-changelog/#904-20230913","title":"9.0.4 (2023/09/13)","text":"
"},{"location":"changelog/client-changelog/#903-20230705","title":"9.0.3 (2023/07/05)","text":"
"},{"location":"changelog/client-changelog/#902-20230427","title":"9.0.2 (2023/04/27)","text":"
"},{"location":"changelog/client-changelog/#901-20230324","title":"9.0.1 (2023/03/24)","text":"
"},{"location":"changelog/client-changelog/#900-20230320","title":"9.0.0 (2023/03/20)","text":"
"},{"location":"changelog/client-changelog/#80","title":"8.0","text":""},{"location":"changelog/client-changelog/#8010-20221228","title":"8.0.10 (2022/12/28)","text":"
"},{"location":"changelog/client-changelog/#809-20221114","title":"8.0.9 (2022/11/14)","text":"
"},{"location":"changelog/client-changelog/#808-20220705","title":"8.0.8 (2022/07/05)","text":"
"},{"location":"changelog/client-changelog/#807-20220429","title":"8.0.7 (2022/04/29)","text":"
"},{"location":"changelog/client-changelog/#806-20220304","title":"8.0.6 (2022/03/04)","text":"
"},{"location":"changelog/client-changelog/#805-20211118","title":"8.0.5 (2021/11/18)","text":"
"},{"location":"changelog/client-changelog/#804-20210922","title":"8.0.4 (2021/09/22)","text":"
"},{"location":"changelog/client-changelog/#803-20210703","title":"8.0.3 (2021/07/03)","text":"
"},{"location":"changelog/client-changelog/#802-20210521","title":"8.0.2 (2021/05/21)","text":"
"},{"location":"changelog/client-changelog/#801-20201215","title":"8.0.1 (2020/12/15)","text":"
"},{"location":"changelog/client-changelog/#800-beta-20201128","title":"8.0.0 beta (2020/11/28)","text":"
"},{"location":"changelog/client-changelog/#70","title":"7.0","text":""},{"location":"changelog/client-changelog/#7010-20201016","title":"7.0.10 (2020/10/16)","text":"
"},{"location":"changelog/client-changelog/#709-20200730","title":"7.0.9 (2020/07/30)","text":"
"},{"location":"changelog/client-changelog/#708-20200603","title":"7.0.8 (2020/06/03)","text":"
"},{"location":"changelog/client-changelog/#707-20200403","title":"7.0.7 (2020/04/03)","text":"
"},{"location":"changelog/client-changelog/#706-20200214","title":"7.0.6 (2020/02/14)","text":"._
"},{"location":"changelog/client-changelog/#705-20200114","title":"7.0.5 (2020/01/14)","text":"
"},{"location":"changelog/client-changelog/#704-20191120","title":"7.0.4 (2019/11/20)","text":"
"},{"location":"changelog/client-changelog/#703-20191031","title":"7.0.3 (2019/10/31)","text":"
"},{"location":"changelog/client-changelog/#702-20190812","title":"7.0.2 (2019/08/12)","text":"
"},{"location":"changelog/client-changelog/#701-20190711","title":"7.0.1 (2019/07/11)","text":"
"},{"location":"changelog/client-changelog/#700-20190604","title":"7.0.0 (2019/06/04)","text":"
"},{"location":"changelog/client-changelog/#62","title":"6.2","text":""},{"location":"changelog/client-changelog/#6210-20190115","title":"6.2.10 (2019/01/15)","text":"
"},{"location":"changelog/client-changelog/#629-20181210","title":"6.2.9 (2018/12/10)","text":"
"},{"location":"changelog/client-changelog/#628-20181205","title":"6.2.8 (2018/12/05)","text":"
"},{"location":"changelog/client-changelog/#627-20181122","title":"6.2.7 (2018/11/22)","text":"
"},{"location":"changelog/client-changelog/#625-20180914","title":"6.2.5 (2018/09/14)","text":"
"},{"location":"changelog/client-changelog/#624-20180803","title":"6.2.4 (2018/08/03)","text":"
"},{"location":"changelog/client-changelog/#623-20180730","title":"6.2.3 (2018/07/30)","text":"
"},{"location":"changelog/client-changelog/#622-621-beta-20180713","title":"6.2.2 6.2.1 Beta (2018/07/13)","text":"
"},{"location":"changelog/client-changelog/#620-beta-20180703","title":"6.2.0 Beta (2018/07/03)","text":"
"},{"location":"changelog/client-changelog/#61","title":"6.1","text":""},{"location":"changelog/client-changelog/#618-20180508","title":"6.1.8 (2018/05/08)","text":"
"},{"location":"changelog/client-changelog/#617-20180329","title":"6.1.7 (2018/03/29)","text":"
"},{"location":"changelog/client-changelog/#616-20180313","title":"6.1.6 (2018/03/13)","text":"
"},{"location":"changelog/client-changelog/#615-20180206","title":"6.1.5 (2018/02/06)","text":"
"},{"location":"changelog/client-changelog/#614-20171220","title":"6.1.4 (2017/12/20)","text":"
"},{"location":"changelog/client-changelog/#613-20171103","title":"6.1.3 (2017/11/03)","text":"
"},{"location":"changelog/client-changelog/#612-20171028","title":"6.1.2 (2017/10/28)","text":"
"},{"location":"changelog/client-changelog/#611-20170920","title":"6.1.1 (2017/09/20)","text":"
"},{"location":"changelog/client-changelog/#610-20170802","title":"6.1.0 (2017/08/02)","text":"
"},{"location":"changelog/client-changelog/#60","title":"6.0","text":""},{"location":"changelog/client-changelog/#607-20170623","title":"6.0.7 (2017/06/23)","text":"
"},{"location":"changelog/client-changelog/#606-20170508","title":"6.0.6 (2017/05/08)","text":"
"},{"location":"changelog/client-changelog/#604-20170221","title":"6.0.4 (2017/02/21)","text":"
"},{"location":"changelog/client-changelog/#603-20170211","title":"6.0.3 (2017/02/11)","text":"
"},{"location":"changelog/client-changelog/#602-deprecated","title":"6.0.2 (deprecated)","text":"
"},{"location":"changelog/client-changelog/#600-20161014","title":"6.0.0 (2016/10/14)","text":"
"},{"location":"changelog/client-changelog/#51","title":"5.1","text":""},{"location":"changelog/client-changelog/#514-20160729","title":"5.1.4 (2016/07/29)","text":"
"},{"location":"changelog/client-changelog/#513-20160627","title":"5.1.3 (2016/06/27)","text":"
"},{"location":"changelog/client-changelog/#512-20160607","title":"5.1.2 (2016/06/07)","text":"
"},{"location":"changelog/client-changelog/#511-20160504","title":"5.1.1 (2016/05/04)","text":"
"},{"location":"changelog/client-changelog/#510-20160411","title":"5.1.0 (2016/04/11)","text":"
"},{"location":"changelog/client-changelog/#50","title":"5.0","text":""},{"location":"changelog/client-changelog/#507-20160329","title":"5.0.7 (2016/03/29)","text":"
"},{"location":"changelog/client-changelog/#506-20160308","title":"5.0.6 (2016/03/08)","text":"
"},{"location":"changelog/client-changelog/#505-20160220","title":"5.0.5 (2016/02/20)","text":"
"},{"location":"changelog/client-changelog/#504-20160126","title":"5.0.4 (2016/01/26)","text":"
"},{"location":"changelog/client-changelog/#503-20160113","title":"5.0.3 (2016/01/13)","text":"
"},{"location":"changelog/client-changelog/#502-20160111","title":"5.0.2 (2016/01/11)","text":"
"},{"location":"changelog/client-changelog/#501-20151221","title":"5.0.1 (2015/12/21)","text":"
"},{"location":"changelog/client-changelog/#500-20151125","title":"5.0.0 (2015/11/25)","text":"
"},{"location":"changelog/client-changelog/#44","title":"4.4","text":""},{"location":"changelog/client-changelog/#442-20151020","title":"4.4.2 (2015/10/20)","text":"
"},{"location":"changelog/client-changelog/#441-20151014","title":"4.4.1 (2015/10/14)","text":"
"},{"location":"changelog/client-changelog/#440-20150918","title":"4.4.0 (2015/09/18)","text":"
"},{"location":"changelog/client-changelog/#43","title":"4.3","text":""},{"location":"changelog/client-changelog/#434-20150914","title":"4.3.4 (2015/09/14)","text":"
"},{"location":"changelog/client-changelog/#433-20150825","title":"4.3.3 (2015/08/25)","text":"
"},{"location":"changelog/client-changelog/#432-20150819","title":"4.3.2 (2015/08/19)","text":"
"},{"location":"changelog/client-changelog/#431-20150811","title":"4.3.1 (2015/08/11)","text":"
"},{"location":"changelog/client-changelog/#430-beta-20150803","title":"4.3.0 beta (2015/08/03)","text":"
"},{"location":"changelog/client-changelog/#42","title":"4.2","text":""},{"location":"changelog/client-changelog/#428-20150711","title":"4.2.8 (2015/07/11)","text":"
"},{"location":"changelog/client-changelog/#427-20150708","title":"4.2.7 (2015/07/08)","text":"
"},{"location":"changelog/client-changelog/#426-20150625","title":"4.2.6 (2015/06/25)","text":"
"},{"location":"changelog/client-changelog/#425-20150624","title":"4.2.5 (2015/06/24)","text":"
"},{"location":"changelog/client-changelog/#424-20150611","title":"4.2.4 (2015/06/11)","text":"
"},{"location":"changelog/client-changelog/#423-20150529","title":"4.2.3 (2015/05/29)","text":"
"},{"location":"changelog/client-changelog/#422-20150526","title":"4.2.2 (2015/05/26)","text":"
"},{"location":"changelog/client-changelog/#421-20150514","title":"4.2.1 (2015/05/14)","text":"
"},{"location":"changelog/client-changelog/#420-20150507","title":"4.2.0 (2015/05/07)","text":"
"},{"location":"changelog/client-changelog/#41","title":"4.1","text":""},{"location":"changelog/client-changelog/#416-20150421","title":"4.1.6 (2015/04/21)","text":"
"},{"location":"changelog/client-changelog/#415-20150409","title":"4.1.5 (2015/04/09)","text":"
"},{"location":"changelog/client-changelog/#414-20150327","title":"4.1.4 (2015/03/27)","text":"
"},{"location":"changelog/client-changelog/#413-20150323","title":"4.1.3 (2015/03/23)","text":"
"},{"location":"changelog/client-changelog/#412-20150319-deprecated","title":"4.1.2 (2015/03/19) (deprecated)","text":"
"},{"location":"changelog/client-changelog/#411-20150303","title":"4.1.1 (2015/03/03)","text":"
"},{"location":"changelog/client-changelog/#410-beta-20150129","title":"4.1.0 beta (2015/01/29)","text":"
"},{"location":"changelog/client-changelog/#40","title":"4.0","text":""},{"location":"changelog/client-changelog/#407-20150122","title":"4.0.7 (2015/01/22)","text":"
"},{"location":"changelog/client-changelog/#405-20141224","title":"4.0.5 (2014/12/24)","text":"
"},{"location":"changelog/client-changelog/#404-20141215","title":"4.0.4 (2014/12/15)","text":"
"},{"location":"changelog/client-changelog/#402-20141129","title":"4.0.2 (2014/11/29)","text":"
"},{"location":"changelog/client-changelog/#401-20141118","title":"4.0.1 (2014/11/18)","text":"
"},{"location":"changelog/client-changelog/#400-20141110","title":"4.0.0 (2014/11/10)","text":"
"},{"location":"changelog/client-changelog/#31","title":"3.1","text":""},{"location":"changelog/client-changelog/#3112-20141201","title":"3.1.12 (2014/12/01)","text":"
"},{"location":"changelog/client-changelog/#3111-20141115","title":"3.1.11 (2014/11/15)","text":"
"},{"location":"changelog/client-changelog/#318-20141028","title":"3.1.8 (2014/10/28)","text":"
"},{"location":"changelog/client-changelog/#317-20140928","title":"3.1.7 (2014/09/28)","text":"
"},{"location":"changelog/client-changelog/#316-20140919","title":"3.1.6 (2014/09/19)","text":"
"},{"location":"changelog/client-changelog/#315-20140814","title":"3.1.5 (2014/08/14)","text":"
"},{"location":"changelog/client-changelog/#314-20140805","title":"3.1.4 (2014/08/05)","text":"
"},{"location":"changelog/client-changelog/#313-20140804","title":"3.1.3 (2014/08/04)","text":"
"},{"location":"changelog/client-changelog/#312-20140801","title":"3.1.2 (2014/08/01)","text":"
"},{"location":"changelog/client-changelog/#311-20140728","title":"3.1.1 (2014/07/28)","text":"
"},{"location":"changelog/client-changelog/#310-20140724","title":"3.1.0 (2014/07/24)","text":"
"},{"location":"changelog/client-changelog/#30","title":"3.0","text":""},{"location":"changelog/client-changelog/#304","title":"3.0.4","text":"
"},{"location":"changelog/client-changelog/#303","title":"3.0.3","text":"
"},{"location":"changelog/client-changelog/#302","title":"3.0.2","text":"
"},{"location":"changelog/client-changelog/#301","title":"3.0.1","text":"
"},{"location":"changelog/client-changelog/#300","title":"3.0.0","text":"
"},{"location":"changelog/client-changelog/#22","title":"2.2","text":""},{"location":"changelog/client-changelog/#220","title":"2.2.0","text":"
"},{"location":"changelog/client-changelog/#21","title":"2.1","text":""},{"location":"changelog/client-changelog/#212","title":"2.1.2","text":"
"},{"location":"changelog/client-changelog/#211","title":"2.1.1","text":"
"},{"location":"changelog/client-changelog/#210","title":"2.1.0","text":"
"},{"location":"changelog/client-changelog/#20","title":"2.0","text":""},{"location":"changelog/client-changelog/#208","title":"2.0.8","text":"
"},{"location":"changelog/client-changelog/#207-dont-use-it","title":"2.0.7 (Don't use it)","text":"
"},{"location":"changelog/client-changelog/#206","title":"2.0.6","text":"
"},{"location":"changelog/client-changelog/#205","title":"2.0.5","text":"
"},{"location":"changelog/client-changelog/#204","title":"2.0.4","text":"
"},{"location":"changelog/client-changelog/#203","title":"2.0.3","text":"
"},{"location":"changelog/client-changelog/#202","title":"2.0.2","text":"
"},{"location":"changelog/client-changelog/#200","title":"2.0.0","text":"
"},{"location":"changelog/client-changelog/#18","title":"1.8","text":"
"},{"location":"changelog/client-changelog/#17","title":"1.7","text":"
"},{"location":"changelog/client-changelog/#16","title":"1.6","text":"
"},{"location":"changelog/client-changelog/#15","title":"1.5","text":"
"},{"location":"changelog/drive-client-changelog/","title":"SeaDrive Client Changelog","text":""},{"location":"changelog/drive-client-changelog/#3011-20240910","title":"3.0.11 (2024/09/10)","text":"
"},{"location":"changelog/drive-client-changelog/#3010-20240618","title":"3.0.10 (2024/06/18)","text":"
"},{"location":"changelog/drive-client-changelog/#309-20240425","title":"3.0.9 (2024/04/25)","text":"
"},{"location":"changelog/drive-client-changelog/#308-20240221","title":"3.0.8 (2024/02/21)","text":"
"},{"location":"changelog/drive-client-changelog/#307-20231204","title":"3.0.7 (2023/12/04)","text":"
"},{"location":"changelog/drive-client-changelog/#306-20230915","title":"3.0.6 (2023/09/15)","text":"
"},{"location":"changelog/drive-client-changelog/#305-20230815","title":"3.0.5 (2023/08/15)","text":"
"},{"location":"changelog/drive-client-changelog/#304-20230610","title":"3.0.4 (2023/06/10)","text":"
"},{"location":"changelog/drive-client-changelog/#303-20230525","title":"3.0.3 (2023/05/25)","text":"
"},{"location":"changelog/drive-client-changelog/#302-beta-20230324","title":"3.0.2 Beta (2023/03/24)","text":"
"},{"location":"changelog/drive-client-changelog/#2027-for-windows-20230324","title":"2.0.27 for Windows (2023/03/24)","text":"
"},{"location":"changelog/drive-client-changelog/#2026-20221228","title":"2.0.26 (2022/12/28)","text":"
"},{"location":"changelog/drive-client-changelog/#2025-windows-20221203","title":"2.0.25 (Windows) (2022/12/03)","text":"
"},{"location":"changelog/drive-client-changelog/#2024-windows-20221114","title":"2.0.24 (Windows) (2022/11/14)","text":"
"},{"location":"changelog/drive-client-changelog/#2024-macos-20221109","title":"2.0.24 (macOS) (2022/11/09)","text":"
"},{"location":"changelog/drive-client-changelog/#2023-20220818","title":"2.0.23 (2022/08/18)","text":"
"},{"location":"changelog/drive-client-changelog/#2022-20220623","title":"2.0.22 (2022/06/23)","text":"
"},{"location":"changelog/drive-client-changelog/#2021-windows-20220321","title":"2.0.21 (Windows) (2022/03/21)","text":"
"},{"location":"changelog/drive-client-changelog/#2020-20220304","title":"2.0.20 (2022/03/04)","text":"
"},{"location":"changelog/drive-client-changelog/#2019-windows-20211229","title":"2.0.19 (Windows) (2021/12/29)","text":"
"},{"location":"changelog/drive-client-changelog/#2018-macos-20211029","title":"2.0.18 (macOS) (2021/10/29)","text":"
"},{"location":"changelog/drive-client-changelog/#2018-windows-20211026","title":"2.0.18 (Windows) (2021/10/26)","text":"
"},{"location":"changelog/drive-client-changelog/#2017-20210930","title":"2.0.17 (2021/09/30)","text":"
"},{"location":"changelog/drive-client-changelog/#2016-2021813","title":"2.0.16 (2021/8/13)","text":"
"},{"location":"changelog/drive-client-changelog/#2015-2021720","title":"2.0.15 (2021/7/20)","text":"
"},{"location":"changelog/drive-client-changelog/#2014-2021526","title":"2.0.14 (2021/5/26)","text":"
"},{"location":"changelog/drive-client-changelog/#2013-2021323","title":"2.0.13 (2021/3/23)","text":"
"},{"location":"changelog/drive-client-changelog/#2012-2021129","title":"2.0.12 (2021/1/29)","text":"
"},{"location":"changelog/drive-client-changelog/#2010-20201229","title":"2.0.10 (2020/12/29)","text":"
"},{"location":"changelog/drive-client-changelog/#209-20201120","title":"2.0.9 (2020/11/20)","text":"
"},{"location":"changelog/drive-client-changelog/#208-20201114","title":"2.0.8 (2020/11/14)","text":"
"},{"location":"changelog/drive-client-changelog/#207-20201031","title":"2.0.7 (2020/10/31)","text":"
"},{"location":"changelog/drive-client-changelog/#206-20200924","title":"2.0.6 (2020/09/24)","text":"
"},{"location":"changelog/drive-client-changelog/#1012-20200825","title":"1.0.12 (2020/08/25)","text":"
"},{"location":"changelog/drive-client-changelog/#205-20200730","title":"2.0.5 (2020/07/30)","text":"
"},{"location":"changelog/drive-client-changelog/#204-20200713","title":"2.0.4 (2020/07/13)","text":"
"},{"location":"changelog/drive-client-changelog/#203-20200617","title":"2.0.3 (2020/06/17)","text":"
"},{"location":"changelog/drive-client-changelog/#202-20200523","title":"2.0.2 (2020/05/23)","text":"
"},{"location":"changelog/drive-client-changelog/#201-for-windows-20200413","title":"2.0.1 for Windows (2020/04/13)","text":"
"},{"location":"changelog/drive-client-changelog/#200-for-windows-20200320","title":"2.0.0 for Windows (2020/03/20)","text":"
"},{"location":"changelog/drive-client-changelog/#1011-20200207","title":"1.0.11 (2020/02/07)","text":"
"},{"location":"changelog/drive-client-changelog/#1010-20191223","title":"1.0.10 (2019/12/23)","text":"
"},{"location":"changelog/drive-client-changelog/#108-20191105","title":"1.0.8 (2019/11/05)","text":"
"},{"location":"changelog/drive-client-changelog/#107-20190821","title":"1.0.7 (2019/08/21)","text":"
"},{"location":"changelog/drive-client-changelog/#106-20190701","title":"1.0.6 (2019/07/01)","text":"
"},{"location":"changelog/drive-client-changelog/#105-20190611","title":"1.0.5 (2019/06/11)","text":"
"},{"location":"changelog/drive-client-changelog/#104-20190423","title":"1.0.4 (2019/04/23)","text":"
"},{"location":"changelog/drive-client-changelog/#103-20190318","title":"1.0.3 (2019/03/18)","text":"
"},{"location":"changelog/drive-client-changelog/#101-20190114","title":"1.0.1 (2019/01/14)","text":"
"},{"location":"changelog/drive-client-changelog/#100-20181119","title":"1.0.0 (2018/11/19)","text":"
"},{"location":"changelog/drive-client-changelog/#095-20180910","title":"0.9.5 (2018/09/10)","text":"
"},{"location":"changelog/drive-client-changelog/#094-20180818","title":"0.9.4 (2018/08/18)","text":"
"},{"location":"changelog/drive-client-changelog/#093-20180619","title":"0.9.3 (2018/06/19)","text":"
"},{"location":"changelog/drive-client-changelog/#092-20180505","title":"0.9.2 (2018/05/05)","text":"
"},{"location":"changelog/drive-client-changelog/#091-20180424","title":"0.9.1 (2018/04/24)","text":"
"},{"location":"changelog/drive-client-changelog/#090-20180424","title":"0.9.0 (2018/04/24)","text":"
"},{"location":"changelog/drive-client-changelog/#086-20180319","title":"0.8.6 (2018/03/19)","text":"
"},{"location":"changelog/drive-client-changelog/#085-20180103","title":"0.8.5 (2018/01/03)","text":"
"},{"location":"changelog/drive-client-changelog/#084-20171201","title":"0.8.4 (2017/12/01)","text":"
"},{"location":"changelog/drive-client-changelog/#083-20171124","title":"0.8.3 (2017/11/24)","text":"
"},{"location":"changelog/drive-client-changelog/#081-20171103","title":"0.8.1 (2017/11/03)","text":"
"},{"location":"changelog/drive-client-changelog/#080-20170916","title":"0.8.0 (2017/09/16)","text":"
"},{"location":"changelog/drive-client-changelog/#071-20170623","title":"0.7.1 (2017/06/23)","text":"
"},{"location":"changelog/drive-client-changelog/#070-20170607","title":"0.7.0 (2017/06/07)","text":"
"},{"location":"changelog/drive-client-changelog/#062-20170422","title":"0.6.2 (2017/04/22)","text":"
"},{"location":"changelog/drive-client-changelog/#061-20170327","title":"0.6.1 (2017/03/27)","text":"
"},{"location":"changelog/drive-client-changelog/#060-20170325","title":"0.6.0 (2017/03/25)","text":"S: because a few programs will automatically try to create files in S:
"},{"location":"changelog/drive-client-changelog/#052-20170309","title":"0.5.2 (2017/03/09)","text":"
"},{"location":"changelog/drive-client-changelog/#051-20170216","title":"0.5.1 (2017/02/16)","text":"
"},{"location":"changelog/drive-client-changelog/#050-20170118","title":"0.5.0 (2017/01/18)","text":"
"},{"location":"changelog/drive-client-changelog/#042-20161216","title":"0.4.2 (2016/12/16)","text":"
"},{"location":"changelog/drive-client-changelog/#041-20161107","title":"0.4.1 (2016/11/07)","text":"
"},{"location":"changelog/drive-client-changelog/#040-20161105","title":"0.4.0 (2016/11/05)","text":"
"},{"location":"changelog/drive-client-changelog/#031-20161022","title":"0.3.1 (2016/10/22)","text":"
"},{"location":"changelog/drive-client-changelog/#030-20161014","title":"0.3.0 (2016/10/14)","text":"
"},{"location":"changelog/drive-client-changelog/#020-20160915","title":"0.2.0 (2016/09/15)","text":"
"},{"location":"changelog/drive-client-changelog/#010-20160902","title":"0.1.0 (2016/09/02)","text":"
"},{"location":"changelog/server-changelog-old/","title":"Seafile Server Changelog (old)","text":""},{"location":"changelog/server-changelog-old/#50","title":"5.0","text":"conf, including:
"},{"location":"changelog/server-changelog-old/#505-20160302","title":"5.0.5 (2016.03.02)","text":"
"},{"location":"changelog/server-changelog-old/#503-20151217","title":"5.0.3 (2015.12.17)","text":"
"},{"location":"changelog/server-changelog-old/#502-20151204","title":"5.0.2 (2015.12.04)","text":"
"},{"location":"changelog/server-changelog-old/#501-beta-20151112","title":"5.0.1 beta (2015.11.12)","text":"
"},{"location":"changelog/server-changelog-old/#500-beta-20151103","title":"5.0.0 beta (2015.11.03)","text":"[[ Pagename]].
conf
"},{"location":"changelog/server-changelog-old/#44","title":"4.4","text":""},{"location":"changelog/server-changelog-old/#446-20151109","title":"4.4.6 (2015.11.09)","text":"
"},{"location":"changelog/server-changelog-old/#444-20151027","title":"4.4.4 (2015.10.27)","text":"
"},{"location":"changelog/server-changelog-old/#443-20151015","title":"4.4.3 (2015.10.15)","text":"
"},{"location":"changelog/server-changelog-old/#442-20151012","title":"4.4.2 (2015.10.12)","text":"
"},{"location":"changelog/server-changelog-old/#441-20150924","title":"4.4.1 (2015.09.24)","text":"
"},{"location":"changelog/server-changelog-old/#440-20150916","title":"4.4.0 (2015.09.16)","text":"
"},{"location":"changelog/server-changelog-old/#43","title":"4.3","text":""},{"location":"changelog/server-changelog-old/#432-20150820","title":"4.3.2 (2015.08.20)","text":"
"},{"location":"changelog/server-changelog-old/#431-20150729","title":"4.3.1 (2015.07.29)","text":"
"},{"location":"changelog/server-changelog-old/#430-20150721","title":"4.3.0 (2015.07.21)","text":"
"},{"location":"changelog/server-changelog-old/#42","title":"4.2","text":"THUMBNAIL_DEFAULT_SIZE = 24, instead of THUMBNAIL_DEFAULT_SIZE = '24'
"},{"location":"changelog/server-changelog-old/#423-20150618","title":"4.2.3 (2015.06.18)","text":"COMPRESS_URL = MEDIA_URL\nSTATIC_URL = MEDIA_URL + '/assets/'\n
"},{"location":"changelog/server-changelog-old/#422-20150529","title":"4.2.2 (2015.05.29)","text":"
"},{"location":"changelog/server-changelog-old/#421-20150527","title":"4.2.1 (2015.05.27)","text":"
"},{"location":"changelog/server-changelog-old/#420-beta-20150513","title":"4.2.0 beta (2015.05.13)","text":"
"},{"location":"changelog/server-changelog-old/#41","title":"4.1","text":""},{"location":"changelog/server-changelog-old/#412-20150331","title":"4.1.2 (2015.03.31)","text":"
"},{"location":"changelog/server-changelog-old/#411-20150325","title":"4.1.1 (2015.03.25)","text":"
"},{"location":"changelog/server-changelog-old/#410-beta-20150318","title":"4.1.0 beta (2015.03.18)","text":"
"},{"location":"changelog/server-changelog-old/#40","title":"4.0","text":""},{"location":"changelog/server-changelog-old/#406-20150204","title":"4.0.6 (2015.02.04)","text":"
"},{"location":"changelog/server-changelog-old/#405-20150114","title":"4.0.5 (2015.01.14)","text":"
"},{"location":"changelog/server-changelog-old/#404-20150106","title":"4.0.4 (2015.01.06)","text":"
"},{"location":"changelog/server-changelog-old/#403-20141230","title":"4.0.3 (2014.12.30)","text":"
"},{"location":"changelog/server-changelog-old/#402-20141226","title":"4.0.2 (2014.12.26)","text":"
"},{"location":"changelog/server-changelog-old/#401-20141129","title":"4.0.1 (2014.11.29)","text":"
"},{"location":"changelog/server-changelog-old/#400-20141110","title":"4.0.0 (2014.11.10)","text":"
"},{"location":"changelog/server-changelog-old/#31","title":"3.1","text":""},{"location":"changelog/server-changelog-old/#317-20141020","title":"3.1.7 (2014.10.20)","text":"
"},{"location":"changelog/server-changelog-old/#316-20140911","title":"3.1.6 (2014.09.11)","text":"
"},{"location":"changelog/server-changelog-old/#315-20140829","title":"3.1.5 (2014.08.29)","text":"
"},{"location":"changelog/server-changelog-old/#314-20140826","title":"3.1.4 (2014.08.26)","text":"
"},{"location":"changelog/server-changelog-old/#313-20140818","title":"3.1.3 (2014.08.18)","text":"
"},{"location":"changelog/server-changelog-old/#312-20140807","title":"3.1.2 (2014.08.07)","text":"
"},{"location":"changelog/server-changelog-old/#311-20140801","title":"3.1.1 (2014.08.01)","text":"
"},{"location":"changelog/server-changelog-old/#310-20140724","title":"3.1.0 (2014.07.24)","text":"
"},{"location":"changelog/server-changelog-old/#30","title":"3.0","text":""},{"location":"changelog/server-changelog-old/#304-20140607","title":"3.0.4 (2014.06.07)","text":"
"},{"location":"changelog/server-changelog-old/#303","title":"3.0.3","text":"
"},{"location":"changelog/server-changelog-old/#302","title":"3.0.2","text":"
"},{"location":"changelog/server-changelog-old/#301","title":"3.0.1","text":"
"},{"location":"changelog/server-changelog-old/#300","title":"3.0.0","text":"
"},{"location":"changelog/server-changelog-old/#300-beta2","title":"3.0.0 beta2","text":"
"},{"location":"changelog/server-changelog-old/#300-beta","title":"3.0.0 beta","text":"
"},{"location":"changelog/server-changelog-old/#22","title":"2.2","text":""},{"location":"changelog/server-changelog-old/#221","title":"2.2.1","text":"
"},{"location":"changelog/server-changelog-old/#220","title":"2.2.0","text":"
"},{"location":"changelog/server-changelog-old/#21","title":"2.1","text":""},{"location":"changelog/server-changelog-old/#215","title":"2.1.5","text":"
"},{"location":"changelog/server-changelog-old/#214","title":"2.1.4","text":"
"},{"location":"changelog/server-changelog-old/#213","title":"2.1.3","text":"
"},{"location":"changelog/server-changelog-old/#212","title":"2.1.2","text":"<a>, <table>, <img> and a few other html elements in markdown to avoid XSS attack.
"},{"location":"changelog/server-changelog-old/#211","title":"2.1.1","text":"
"},{"location":"changelog/server-changelog-old/#210","title":"2.1.0","text":"
"},{"location":"changelog/server-changelog-old/#20","title":"2.0","text":""},{"location":"changelog/server-changelog-old/#204","title":"2.0.4","text":"
"},{"location":"changelog/server-changelog-old/#203","title":"2.0.3","text":"
"},{"location":"changelog/server-changelog-old/#202","title":"2.0.2","text":"
"},{"location":"changelog/server-changelog-old/#201","title":"2.0.1","text":"
"},{"location":"changelog/server-changelog-old/#200","title":"2.0.0","text":"
"},{"location":"changelog/server-changelog-old/#18","title":"1.8","text":""},{"location":"changelog/server-changelog-old/#185","title":"1.8.5","text":"
"},{"location":"changelog/server-changelog-old/#183","title":"1.8.3","text":"
"},{"location":"changelog/server-changelog-old/#182","title":"1.8.2","text":"
"},{"location":"changelog/server-changelog-old/#181","title":"1.8.1","text":"
"},{"location":"changelog/server-changelog-old/#180","title":"1.8.0","text":"
"},{"location":"changelog/server-changelog-old/#17","title":"1.7","text":""},{"location":"changelog/server-changelog-old/#1702-for-linux-32-bit","title":"1.7.0.2 for Linux 32 bit","text":"
"},{"location":"changelog/server-changelog-old/#1701-for-linux-32-bit","title":"1.7.0.1 for Linux 32 bit","text":"
"},{"location":"changelog/server-changelog-old/#170","title":"1.7.0","text":"
"},{"location":"changelog/server-changelog-old/#16","title":"1.6","text":""},{"location":"changelog/server-changelog-old/#161","title":"1.6.1","text":"
"},{"location":"changelog/server-changelog-old/#160","title":"1.6.0","text":"
"},{"location":"changelog/server-changelog-old/#15","title":"1.5","text":""},{"location":"changelog/server-changelog-old/#152","title":"1.5.2","text":"
"},{"location":"changelog/server-changelog-old/#151","title":"1.5.1","text":"
"},{"location":"changelog/server-changelog-old/#150","title":"1.5.0","text":"
"},{"location":"changelog/server-changelog/","title":"Seafile Server Changelog","text":"
"},{"location":"changelog/server-changelog/#11011-2024-08-07","title":"11.0.11 (2024-08-07)","text":"
"},{"location":"changelog/server-changelog/#11010-2024-08-06","title":"11.0.10 (2024-08-06)","text":"
"},{"location":"changelog/server-changelog/#1109-2024-05-30","title":"11.0.9 (2024-05-30)","text":"
"},{"location":"changelog/server-changelog/#1108-2024-04-22","title":"11.0.8 (2024-04-22)","text":"
"},{"location":"changelog/server-changelog/#1107-2024-04-18","title":"11.0.7 (2024-04-18)","text":"
"},{"location":"changelog/server-changelog/#1106-2024-03-14","title":"11.0.6 (2024-03-14)","text":"
"},{"location":"changelog/server-changelog/#1105-2024-01-31","title":"11.0.5 (2024-01-31)","text":"
"},{"location":"changelog/server-changelog/#1104-2024-01-26","title":"11.0.4 (2024-01-26)","text":"
"},{"location":"changelog/server-changelog/#1103-2023-12-19","title":"11.0.3 (2023-12-19)","text":"
"},{"location":"changelog/server-changelog/#1102-2023-11-20","title":"11.0.2 (2023-11-20)","text":"
"},{"location":"changelog/server-changelog/#1101-beta-2023-10-18","title":"11.0.1 beta (2023-10-18)","text":"
"},{"location":"changelog/server-changelog/#1100-beta-cancelled","title":"11.0.0 beta (cancelled)","text":"
"},{"location":"changelog/server-changelog/#100","title":"10.0","text":"
"},{"location":"changelog/server-changelog/#1000-beta-2023-02-22","title":"10.0.0 beta (2023-02-22)","text":"
"},{"location":"changelog/server-changelog/#90","title":"9.0","text":""},{"location":"changelog/server-changelog/#9010-2022-12-07","title":"9.0.10 (2022-12-07)","text":"
"},{"location":"changelog/server-changelog/#909-2022-09-22","title":"9.0.9 (2022-09-22)","text":"
"},{"location":"changelog/server-changelog/#908-2022-09-07","title":"9.0.8 (2022-09-07)","text":"
"},{"location":"changelog/server-changelog/#907-2022-08-10","title":"9.0.7 (2022-08-10)","text":"/accounts/login redirect by ?next= parameterpip3 install lxml to install it.
"},{"location":"changelog/server-changelog/#906-2022-06-22","title":"9.0.6 (2022-06-22)","text":"
"},{"location":"changelog/server-changelog/#905-2022-05-13","title":"9.0.5 (2022-05-13)","text":"
"},{"location":"changelog/server-changelog/#904-2022-02-21","title":"9.0.4 (2022-02-21)","text":"
"},{"location":"changelog/server-changelog/#903-2022-02-15","title":"9.0.3 (2022-02-15)","text":"
"},{"location":"changelog/server-changelog/#902-2021-12-10","title":"9.0.2 (2021-12-10)","text":"
"},{"location":"changelog/server-changelog/#901-beta-2021-11-20","title":"9.0.1 beta (2021-11-20)","text":"
"},{"location":"changelog/server-changelog/#900-beta-2021-11-11","title":"9.0.0 beta (2021-11-11)","text":"
"},{"location":"changelog/server-changelog/#80","title":"8.0","text":"[fileserver]\nuse_go_fileserver = true\n
"},{"location":"changelog/server-changelog/#807-20210809","title":"8.0.7 (2021/08/09)","text":"
"},{"location":"changelog/server-changelog/#806-20210714","title":"8.0.6 (2021/07/14)","text":"
"},{"location":"changelog/server-changelog/#805-20210514","title":"8.0.5 (2021/05/14)","text":"
"},{"location":"changelog/server-changelog/#804-20210325","title":"8.0.4 (2021/03/25)","text":"
"},{"location":"changelog/server-changelog/#803-20210127","title":"8.0.3 (2021/01/27)","text":"
"},{"location":"changelog/server-changelog/#802-20210104","title":"8.0.2 (2021/01/04)","text":"
"},{"location":"changelog/server-changelog/#801-beta-20210104","title":"8.0.1 beta (2021/01/04)","text":"
"},{"location":"changelog/server-changelog/#800-beta-20201127","title":"8.0.0 beta (2020/11/27)","text":"
"},{"location":"changelog/server-changelog/#71","title":"7.1","text":"
"},{"location":"changelog/server-changelog/#714-20200519","title":"7.1.4 (2020/05/19)","text":"
"},{"location":"changelog/server-changelog/#713-20200326","title":"7.1.3 (2020/03/26)","text":"
"},{"location":"changelog/server-changelog/#712-beta-20200305","title":"7.1.2 beta (2020/03/05)","text":"
"},{"location":"changelog/server-changelog/#711-beta-20191223","title":"7.1.1 beta (2019/12/23)","text":"
"},{"location":"changelog/server-changelog/#710-beta-20191205","title":"7.1.0 beta (2019/12/05)","text":"
"},{"location":"changelog/server-changelog/#70","title":"7.0","text":"
"},{"location":"changelog/server-changelog/#704-20190726","title":"7.0.4 (2019/07/26)","text":"
"},{"location":"changelog/server-changelog/#703-20190705","title":"7.0.3 (2019/07/05)","text":"
"},{"location":"changelog/server-changelog/#702-20190613","title":"7.0.2 (2019/06/13)","text":"
"},{"location":"changelog/server-changelog/#701-beta-20190531","title":"7.0.1 beta (2019/05/31)","text":"
"},{"location":"changelog/server-changelog/#700-beta-20190523","title":"7.0.0 beta (2019/05/23)","text":"
"},{"location":"changelog/server-changelog/#63","title":"6.3","text":"conf/gunicorn.conf instead of running ./seahub.sh start <another-port>../seahub.sh python-env seahub/manage.py migrate_file_comment\n
"},{"location":"changelog/server-changelog/#633-20180907","title":"6.3.3 (2018/09/07)","text":"
"},{"location":"changelog/server-changelog/#632-20180709","title":"6.3.2 (2018/07/09)","text":"
"},{"location":"changelog/server-changelog/#631-20180624","title":"6.3.1 (2018/06/24)","text":"
"},{"location":"changelog/server-changelog/#630-beta-20180526","title":"6.3.0 beta (2018/05/26)","text":"
"},{"location":"changelog/server-changelog/#62","title":"6.2","text":"
./seahub.sh start instead of ./seahub.sh start-fastcgi. location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n # used for view/edit office file via Office Online Server\n client_max_body_size 0;\n\n access_log /var/log/nginx/seahub.access.log;\n error_log /var/log/nginx/seahub.error.log;\n }\n
"},{"location":"changelog/server-changelog/#625-20180123","title":"6.2.5 (2018/01/23)","text":" # seahub\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n
"},{"location":"changelog/server-changelog/#624-20180116","title":"6.2.4 (2018/01/16)","text":"
"},{"location":"changelog/server-changelog/#623-20171115","title":"6.2.3 (2017/11/15)","text":"
"},{"location":"changelog/server-changelog/#622-20170925","title":"6.2.2 (2017/09/25)","text":"
"},{"location":"changelog/server-changelog/#621-20170922","title":"6.2.1 (2017/09/22)","text":"
"},{"location":"changelog/server-changelog/#620-beta-20170914","title":"6.2.0 beta (2017/09/14)","text":"
"},{"location":"changelog/server-changelog/#61","title":"6.1","text":"ENABLE_REPO_SNAPSHOT_LABEL = True to turn the feature on)
"},{"location":"changelog/server-changelog/#612-20170815","title":"6.1.2 (2017.08.15)","text":"# for ubuntu 16.04\napt-get install ffmpeg\npip install pillow moviepy\n\n# for Centos 7\nyum -y install epel-release\nrpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro\nyum -y install ffmpeg ffmpeg-devel\npip install pillow moviepy\n
"},{"location":"changelog/server-changelog/#611-20170615","title":"6.1.1 (2017.06.15)","text":"
"},{"location":"changelog/server-changelog/#610-beta-20170511","title":"6.1.0 beta (2017.05.11)","text":"
"},{"location":"changelog/server-changelog/#60","title":"6.0","text":"
"},{"location":"changelog/server-changelog/#608-20170216","title":"6.0.8 (2017.02.16)","text":"
# -*- coding: utf-8 -*- to seahub_settings.py, so that admin can use non-ascii characters in the file.
"},{"location":"changelog/server-changelog/#607-20161216","title":"6.0.7 (2016.12.16)","text":"
"},{"location":"changelog/server-changelog/#606-20161116","title":"6.0.6 (2016.11.16)","text":"
"},{"location":"changelog/server-changelog/#605-20161017","title":"6.0.5 (2016.10.17)","text":"
"},{"location":"changelog/server-changelog/#604-20160922","title":"6.0.4 (2016.09.22)","text":"
"},{"location":"changelog/server-changelog/#603-20160903","title":"6.0.3 (2016.09.03)","text":"
"},{"location":"changelog/server-changelog/#602-20160902","title":"6.0.2 (2016.09.02)","text":"
"},{"location":"changelog/server-changelog/#601-beta-20160822","title":"6.0.1 beta (2016.08.22)","text":"
"},{"location":"changelog/server-changelog/#600-beta-20160802","title":"6.0.0 beta (2016.08.02)","text":"
"},{"location":"changelog/server-changelog/#51","title":"5.1","text":"
"},{"location":"changelog/server-changelog/#514-20160723","title":"5.1.4 (2016.07.23)","text":"# for Ubuntu\nsudo apt-get install python-urllib3\n# for CentOS\nsudo yum install python-urllib3\n
"},{"location":"changelog/server-changelog/#513-20160530","title":"5.1.3 (2016.05.30)","text":"
"},{"location":"changelog/server-changelog/#512-20160513","title":"5.1.2 (2016.05.13)","text":"
"},{"location":"changelog/server-changelog/#511-20160408","title":"5.1.1 (2016.04.08)","text":"
"},{"location":"changelog/server-changelog/#510-beta-20160322","title":"5.1.0 beta (2016.03.22)","text":"
"},{"location":"config/","title":"Server Configuration and Customization","text":""},{"location":"config/#config-files","title":"Config Files","text":"
"},{"location":"config/ccnet-conf/","title":"ccnet.conf","text":"
"},{"location":"config/ccnet-conf/#using-encrypted-connections","title":"Using Encrypted Connections","text":"[Database]\n......\n# Use larger connection pool\nMAX_CONNECTIONS = 200\n[Database]\nUSE_SSL = true\nSKIP_VERIFY = false\nCA_PATH = /etc/mysql/ca.pem\nuse_ssl to true and skip_verify to false, it will check whether the MySQL server certificate is legal through the CA configured in ca_path. The ca_path is a trusted CA certificate path for signing MySQL server certificates. When skip_verify is true, there is no need to add the ca_path option. The MySQL server certificate won't be verified at this time.seahub-data/custom/templates/registration/password_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade. seahub-data/custom/templates/sysadmin/user_add_email.html and modify the new one. In this way, the customization will be maintained after upgrade. seahub-data/custom/templates/sysadmin/user_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade. .env file will be used to specify the components used by the Seafile-docker instance and the environment variables required by each component. The default contents list in below
"},{"location":"config/env/#seafile-docker-configurations","title":"Seafile-docker configurations","text":""},{"location":"config/env/#components-configurations","title":"Components configurations","text":"COMPOSE_FILE='seafile-server.yml,caddy.yml'\nCOMPOSE_PATH_SEPARATOR=','\n\n\nSEAFILE_IMAGE=docker.seadrive.org/seafileltd/seafile-pro-mc:12.0-latest\nSEAFILE_DB_IMAGE=mariadb:10.11\nSEAFILE_MEMCACHED_IMAGE=memcached:1.6.29\nSEAFILE_ELASTICSEARCH_IMAGE=elasticsearch:8.15.0 # pro edition only\nSEAFILE_CADDY_IMAGE=lucaslorentz/caddy-docker-proxy:2.9\n\nSEAFILE_VOLUME=/opt/seafile-data\nSEAFILE_MYSQL_VOLUME=/opt/seafile-mysql/db\nSEAFILE_ELASTICSEARCH_VOLUME=/opt/seafile-elasticsearch/data # pro edition only\nSEAFILE_CADDY_VOLUME=/opt/seafile-caddy\n\nSEAFILE_MYSQL_DB_HOST=db\nSEAFILE_MYSQL_ROOT_PASSWORD=ROOT_PASSWORD\nSEAFILE_MYSQL_DB_USER=seafile\nSEAFILE_MYSQL_DB_PASSWORD=PASSWORD\n\nTIME_ZONE=Etc/UTC\n\nJWT_PRIVATE_KEY=\n\nSEAFILE_SERVER_HOSTNAME=example.seafile.com\nSEAFILE_SERVER_PROTOCOL=https\n\nSEAFILE_ADMIN_EMAIL=me@example.com\nSEAFILE_ADMIN_PASSWORD=asecret\n\n\nSEADOC_IMAGE=seafileltd/sdoc-server:1.0-latest\nSEADOC_VOLUME=/opt/seadoc-data\n\nENABLE_SEADOC=false\nSEADOC_SERVER_URL=http://example.seafile.com/sdoc-server\n
"},{"location":"config/env/#docker-images-configurations","title":"Docker images configurations","text":"COMPOSE_FILE: .yml files for components of Seafile-docker, each .yml must be separated by the symbol defined in COMPOSE_PATH_SEPARATOR. The core components are involved in seafile-server.yml and caddy.yml which must be taken in this term.COMPOSE_PATH_SEPARATOR: The symbol used to separate the .yml files in term COMPOSE_FILE, default is ','.
"},{"location":"config/env/#persistent-volume-configurations","title":"Persistent Volume Configurations","text":"SEAFILE_IMAGE: The image of Seafile-server, default is docker.seadrive.org/seafileltd/seafile-pro-mc:12.0-latest.SEAFILE_DB_IMAGE: Database server image, default is mariadb:10.11.SEAFILE_MEMCACHED_IMAGE: Cached server image, default is memcached:1.6.29SEAFILE_ELASTICSEARCH_IMAGE: Only valid in pro edition. The elasticsearch image, default is elasticsearch:8.15.0.SEAFILE_CADDY_IMAGE: Caddy server image, default is lucaslorentz/caddy-docker-proxy:2.9.SEADOC_IMAGE: Only valid after integrating SeaDoc. SeaDoc server image, default is seafileltd/sdoc-server:1.0-latest.
"},{"location":"config/env/#mysql-configurations","title":"Mysql configurations","text":"SEAFILE_VOLUME: The volume directory of Seafile data, default is /opt/seafile-data.SEAFILE_MYSQL_VOLUME: The volume directory of MySQL data, default is /opt/seafile-mysql/db.SEAFILE_CADDY_VOLUME: The volume directory of Caddy data used to store certificates obtained from Let's Encrypt's, default is /opt/seafile-caddy.SEAFILE_ELASTICSEARCH_VOLUME: Only valid in pro edition. The volume directory of Elasticsearch data, default is /opt/seafile-elasticsearch/data.SEADOC_VOLUME: Only valid after integrating SeaDoc. The volume directory of SeaDoc server data, default is /opt/seadoc-data.
"},{"location":"config/env/#seafile-server-configurations","title":"Seafile-server configurations","text":"SEAFILE_MYSQL_DB_HOST: The host address of Mysql, default is the pre-defined service name db in Seafile-docker instance.SEAFILE_MYSQL_ROOT_PASSWORD: The root password of MySQL.SEAFILE_MYSQL_DB_USER: The user of MySQL (database - user can be found in conf/seafile.conf).SEAFILE_MYSQL_DB_PASSWORD: The user seafile password of MySQL.
"},{"location":"config/env/#seadoc-configurations-only-valid-after-integrating-seadoc","title":"SeaDoc configurations (only valid after integrating SeaDoc)","text":"SEAFILE_MYSQL_DB_PASSWORD: The user seafile password of MySQLJWT: JWT_PRIVATE_KEY, A random string with a length of no less than 32 characters, generate example: pwgen -s 40 1SEAFILE_SERVER_HOSTNAME: Seafile server hostname or domainSEAFILE_SERVER_PROTOCOL: Seafile server protocol (http or https)TIME_ZONE: Time zone (default UTC)SEAFILE_ADMIN_EMAIL: Admin usernameSEAFILE_ADMIN_PASSWORD: Admin password
"},{"location":"config/seafevents-conf/","title":"Configurable Options","text":"ENABLE_SEADOC: Enable the SeaDoc server or not, default is false.SEADOC_SERVER_URL: Only valid in ENABLE_SEADOC=true. Url of Seadoc server (e.g., http://example.seafile.com/sdoc-server).seafevents.conf:
"},{"location":"config/seafevents-conf/#the-following-configurations-for-pro-edition-only","title":"The following configurations for Pro Edition only","text":"[DATABASE]\ntype = mysql\nhost = 192.168.0.2\nport = 3306\nusername = seafile\npassword = password\nname = seahub_db\n\n[STATISTICS]\n## must be \"true\" to enable statistics\nenabled = false\n\n[SEAHUB EMAIL]\n## must be \"true\" to enable user email notifications when there are new unread notifications\nenabled = true\n\n## interval of sending Seahub email. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval = 30m\n\n[FILE HISTORY]\nenabled = true\nthreshold = 5\nsuffix = md,txt,...\n\n## From seafile 7.0.0\n## Recording file history to database for fast access is enabled by default for 'Markdown, .txt, ppt, pptx, doc, docx, xls, xlsx'. \n## After enable the feature, the old histories version for markdown, doc, docx files will not be list in the history page.\n## (Only new histories that stored in database will be listed) But the users can still access the old versions in the library snapshots.\n## For file types not listed in the suffix , histories version will be scanned from the library history as before.\n## The feature default is enable. You can set the 'enabled = false' to disable the feature.\n\n## The 'threshold' is the time threshold for recording the historical version of a file, in minutes, the default is 5 minutes. \n## This means that if the interval between two adjacent file saves is less than 5 minutes, the two file changes will be merged and recorded as a historical version. \n## When set to 0, there is no time limit, which means that each save will generate a separate historical version.\n\n## If you need to modify the file list format, you can add 'suffix = md, txt, ...' 
configuration item.\n[AUDIT]\n## Audit log is disabled by default.\n## It leads to additional SQL tables being filled up; make sure your SQL server is able to handle it.\nenabled = true\n\n[INDEX FILES]\n## must be \"true\" to enable search\nenabled = true\n\n## The interval at which the search index is updated. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval=10m\n\n## From Seafile 6.3.0 pro, in order to speed up full-text search, you should set up highlighting:\nhighlight = fvh\n\n## If true, indexes the contents of office/pdf files while updating the search index\n## Note: If you change this option from \"false\" to \"true\", then you need to clear the search index and update the index again.\n## Refer to the file search manual for details.\nindex_office_pdf=false\n\n## The default size limit for doc, docx, ppt, pptx, xls, xlsx and pdf files. Files larger than this will not be indexed.\n## Since version 6.2.0\n## Unit: MB\noffice_file_size_limit = 10\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch with a username and password; you need to configure them for the Elasticsearch server\nusername = elastic # username to connect to Elasticsearch\npassword = elastic_password # password to connect to Elasticsearch\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch via HTTPS; you need to configure HTTPS for the Elasticsearch server\nscheme = https # The default is http. If the Elasticsearch server is not configured with HTTPS, the scheme and cafile do not need to be configured\ncafile = path/to/cert.pem # The certificate path for user authentication. 
This does not need to be configured if the Elasticsearch server does not enable certificate authentication\n\n## From version 11.0.5 Pro, you can customize Elasticsearch index names for distinct instances when integrating multiple Seafile servers with a single Elasticsearch server.\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\n\n## The default loglevel is `warning`.\n## Since version 11.0.4\nloglevel = info\n\n[EVENTS PUBLISH]\n## must be \"true\" to enable publishing event messages\nenabled = false\n## message format: repo-update\t{{repo_id}}\t{{commit_id}}\n## Currently only the Redis message queue is supported\nmq_type = redis\n\n[REDIS]\n## Redis uses database 0 and the \"repo_update\" channel\nserver = 192.168.1.1\nport = 6379\npassword = q!1w@#123\n\n[AUTO DELETION]\nenabled = true # Default is false; when enabled, users can use the file auto deletion feature\ninterval = 86400 # The unit is seconds; the default frequency is one day, i.e. it runs once a day\n
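The [FILE HISTORY] 'threshold' rule described above (saves closer together than the threshold are merged into one history version; threshold 0 records every save) can be sketched as follows. This is an illustrative model only, not Seafile's actual implementation, and the function name is our own:

```python
# Sketch of the [FILE HISTORY] 'threshold' rule: two saves closer together
# than `threshold` minutes are merged into one recorded history version.
# Illustration only, not Seafile's actual code.

def record_versions(save_times_min, threshold=5):
    """save_times_min: sorted save timestamps in minutes; returns the
    timestamps of saves that become separate history versions."""
    versions = []
    for t in save_times_min:
        if threshold == 0 or not versions or t - versions[-1] >= threshold:
            versions.append(t)   # far enough apart: new history version
        # else: merged into the previous recorded version
    return versions

# Saves at 0, 3, 10, 12, 20 min with the default 5-minute threshold:
# 3 merges into 0 and 12 merges into 10.
print(record_versions([0, 3, 10, 12, 20]))     # [0, 10, 20]
# threshold = 0: every save becomes a separate version
print(record_versions([0, 3, 10, 12, 20], 0))  # [0, 3, 10, 12, 20]
```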
"},{"location":"config/seafile-conf/#storage-quota-setting","title":"Storage Quota Setting","text":"./seahub.sh restart\n./seafile.sh restart\nseafile.conf file[quota]\n# default user quota in GB, integer only\ndefault = 2\n
"},{"location":"config/seafile-conf/#default-history-length-limit","title":"Default history length limit","text":"[quota]\nlibrary_file_limit = 100000\n
"},{"location":"config/seafile-conf/#default-trash-expiration-time","title":"Default trash expiration time","text":"[history]\nkeep_days = days of history to keep\n
"},{"location":"config/seafile-conf/#system-trash","title":"System Trash","text":"[library_trash]\nexpire_days = 60\n[memcached]\n# Replace `localhost` with the memcached address:port if you're using remote memcached\n# POOL-MIN and POOL-MAX is used to control connection pool size. Usually the default is good enough.\nmemcached_options = --SERVER=localhost --POOL-MIN=10 --POOL-MAX=100\n[redis]\n# your redis server address\nredis_host = 127.0.0.1\n# your redis server port\nredis_port = 6379\n# size of connection pool to redis, default is 100\nmax_connections = 100\n[fileserver] section of the file seafile.conf[fileserver]\n# bind address for fileserver\n# default to 0.0.0.0, if deployed without proxy: no access restriction\n# set to 127.0.0.1, if used with local proxy: only access by local\nhost = 127.0.0.1\n# tcp port for fileserver\nport = 8082\n[fileserver]\nworker_threads = 15\n[fileserver]\n# Set maximum upload file size to 200M.\n# If not configured, there is no file size limit for uploading.\nmax_upload_size=200\n\n# Set maximum download directory size to 200M.\n# Default is 100M.\nmax_download_dir_size=200\n[fileserver]\nmax_indexing_threads = 10\n[fileserver]\n#Set block size to 2MB\nfixed_block_size=2\n[fileserver]\n#Set uploading time limit to 3600s\nweb_token_expire_time=3600\n[zip]\n# The file name encoding of the downloaded zip file.\nwindows_encoding = iso-8859-1\n[fileserver]\n# After how much time a temp file will be removed. The unit is in seconds. Default to 3 days.\nhttp_temp_file_ttl = x\n# File scan interval. The unit is in seconds. Default to 1 hour.\nhttp_temp_scan_interval = x\nfs_id_list_request_timeout configuration, which defaults to 5 minutes. These two options are added to prevent long fs-id-list requests from overloading the server.[fileserver]\nmax_sync_file_count = 100000\nfs_id_list_request_timeout = 300\n
use_block_cache option in the [fileserver] group. It's not enabled by default. The block_cache_size_limit option is used to limit the size of the cache. Its default value is 10GB. The blocks are cached in the seafile-data/block-cache directory. When the total size of cached files exceeds the limit, seaf-server will clean up older files until the size is reduced to 70% of the limit. The cleanup interval is 5 minutes. You should have a good estimate of how much space you need for the cache directory; otherwise, with frequent downloads this directory can quickly fill up.The block_cache_file_types configuration is used to choose the file types that are cached; its default value is mp4;mov.[fileserver]\nuse_block_cache = true\n# Set block cache size limit to 100MB\nblock_cache_size_limit = 100\nblock_cache_file_types = mp4;mov\nskip_block_hash option to use a random string as the block ID. Note that this option prevents fsck from checking block content integrity. You should pass the --shallow option to fsck to skip content integrity checks.[fileserver]\nskip_block_hash = true\nfile_ext_white_list option in the [fileserver] group. This option is a list of file types; only the file types in this list are allowed to be uploaded. It's not enabled by default. [fileserver]\nfile_ext_white_list = md;mp4;mov\nupload_limit and download_limit options in the [fileserver] group to limit the speed of file upload and download. They are not enabled by default. [fileserver]\n# The unit is in KB/s.\nupload_limit = 100\ndownload_limit = 100\n
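The cleanup policy described above (evict oldest cached files until the total size drops to 70% of block_cache_size_limit) can be modeled in a few lines. This is an illustrative sketch, not Seafile's code; the function name and data shape are our own:

```python
# Model of the block-cache cleanup: when the cache exceeds the limit,
# remove the oldest files until total size <= 70% of the limit.
# Illustration only, not Seafile's actual implementation.

def cleanup(cache, size_limit):
    """cache: list of (mtime, size) entries; returns entries kept,
    evicting oldest first once the limit is exceeded."""
    total = sum(size for _, size in cache)
    if total <= size_limit:
        return list(cache)        # under the limit: nothing to evict
    target = size_limit * 0.7
    kept = sorted(cache)          # oldest (smallest mtime) first
    while kept and total > target:
        _, size = kept.pop(0)     # evict the oldest entry
        total -= size
    return kept

# 100 MB limit, 120 MB cached -> evict oldest entries until <= 70 MB
cache = [(1, 40), (2, 50), (3, 30)]   # (mtime, size in MB)
print(cleanup(cache, 100))            # [(3, 30)]
```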
"},{"location":"config/seafile-conf/#database-configuration","title":"Database configuration","text":"[fileserver]\n# default is false\ncheck_virus_on_web_upload = true\n[database] section of the configuration file, whether you use SQLite or MySQL.[database]\ntype=mysql\nhost=127.0.0.1\nuser=root\npassword=root\ndb_name=seafile_db\nconnection_charset=utf8\nmax_connections=100\n[database]\nuse_ssl = true\nskip_verify = false\nca_path = /etc/mysql/ca.pem\nuse_ssl to true and skip_verify to false, it will check whether the MySQL server certificate is legal through the CA configured in ca_path. The ca_path is a trusted CA certificate path for signing MySQL server certificates. When skip_verify is true, there is no need to add the ca_path option. The MySQL server certificate won't be verified at this time.[file_lock]\ndefault_expire_hours = 6\n[file_lock]\nuse_locked_file_cache = true\n
"},{"location":"config/seafile-conf/#storage-backends","title":"Storage Backends","text":"[memcached]\nmemcached_options = --SERVER=<the IP of Memcached Server> --POOL-MIN=10 --POOL-MAX=100\n
"},{"location":"config/seafile-conf/#enable-slow-log","title":"Enable Slow Log","text":"[cluster]\nenabled = true\n[slow_log]\n# default to true\nenable_slow_log = true\n# the unit of all slow log thresholds is millisecond.\n# default to 5000 milliseconds, only RPC queries processed for longer than 5000 milliseconds will be logged.\nrpc_slow_threshold = 5000\nseafile_slow_rpc.log in logs/slow_logs. You can also use log-rotate to rotate the log files. You just need to send SIGUSR2 to seaf-server process. The slow log file will be closed and reopened.SIGUSR1. This signal will trigger rotation for all log files opened by seaf-server. You should change your log rotate settings accordingly.[fileserver]\n# default to false. If enabled, fileserver-access.log will be written to log directory.\nenable_access_log = true\nstart time - user id - url - response code - process time\nSIGUSR1 to trigger log rotation.[fileserver]\nuse_go_fileserver = true\n
Use `max_sync_file_count` to limit the number of files in a library that can be synced. The default is 100K. With the Go fileserver you can set this option to a much higher number, such as 1 million. The `max_download_dir_size` option is thus no longer needed by the Go fileserver.
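For example, to raise the sync limit with the Go fileserver enabled (the value below is illustrative):

```ini
[fileserver]
use_go_fileserver = true
# allow libraries with up to 1 million files to be synced
max_sync_file_count = 1000000
```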
"},{"location":"config/seafile-conf/#profiling-go-fileserver-performance","title":"Profiling Go Fileserver Performance","text":"[fileserver]\n# The unit is in M. Default to 2G.\nfs_cache_limit = 100\n# profile_password is required, change it for your need\n[fileserver]\nenable_profiling = true\nprofile_password = 8kcUz1I2sLaywQhCRtn2x1\n
"},{"location":"config/seafile-conf/#notification-server-configuration","title":"Notification server configuration","text":"go tool pprof http://localhost:8082/debug/pprof/heap?password=8kcUz1I2sLaywQhCRtn2x1\ngo tool pprof http://localhost:8082/debug/pprof/profile?password=8kcUz1I2sLaywQhCRtn2x1\n# jwt_private_key are required.You should generate it manually.\n[notification]\nenabled = true\n# the listen IP of notification server. (Do not modify the host when using Nginx or Apache, as Nginx or Apache will proxy the requests to this address)\nhost = 127.0.0.1\n# the port of notification server\nport = 8083\n# the log level of notification server\nlog_level = info\n# jwt_private_key is used to generate jwt token and authenticate seafile server\njwt_private_key = M@O8VWUb81YvmtWLHGB2I_V7di5-@0p(MF*GrE!sIws23F\n# generate jwt_private_key\nopenssl rand -base64 32\nserver {\n ...\n\n location /notification/ping {\n proxy_pass http://127.0.0.1:8083/ping;\n access_log /var/log/nginx/notification.access.log seafileformat;\n error_log /var/log/nginx/notification.error.log;\n }\n location /notification {\n proxy_pass http://127.0.0.1:8083/;\n proxy_http_version 1.1;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection \"upgrade\";\n access_log /var/log/nginx/notification.access.log seafileformat;\n error_log /var/log/nginx/notification.error.log;\n }\n\n ...\n}\n
"},{"location":"config/seahub_customization/","title":"Seahub customization","text":""},{"location":"config/seahub_customization/#customize-seahub-logo-and-css","title":"Customize Seahub Logo and CSS","text":" ProxyPass /notification/ping http://127.0.0.1:8083/ping/\n ProxyPassReverse /notification/ping http://127.0.0.1:8083/ping/\n\n ProxyPass /notification ws://127.0.0.1:8083/\n ProxyPassReverse /notification ws://127.0.0.1:8083/\n<seafile-install-path>/seahub-data/custom. Create a symbolic link in seafile-server-latest/seahub/media by ln -s ../../../seahub-data/custom custom.custom/LOGO_PATH in seahub_settings.pyLOGO_PATH = 'custom/mylogo.png'\n
"},{"location":"config/seahub_customization/#customize-favicon","title":"Customize Favicon","text":"LOGO_WIDTH = 149\nLOGO_HEIGHT = 32\ncustom/FAVICON_PATH in seahub_settings.py
"},{"location":"config/seahub_customization/#customize-seahub-css","title":"Customize Seahub CSS","text":"FAVICON_PATH = 'custom/favicon.png'\ncustom/, for example, custom.cssBRANDING_CSS in seahub_settings.py
"},{"location":"config/seahub_customization/#customize-help-page","title":"Customize help page","text":"BRANDING_CSS = 'custom/custom.css'\ncd <seafile-install-path>/seahub-data/custom\nmkdir templates\nmkdir templates/help\ncp ../../seafile-server-latest/seahub/seahub/help/templates/help/install.html templates/help/\ntemplates/help/install.html file and save it. You will see the new help page.ADDITIONAL_SHARE_DIALOG_NOTE = {\n 'title': 'Attention! Read before shareing files:',\n 'content': 'Do not share personal or confidential official data with **.'\n}\nconf/seahub_settings.py configuration file:CUSTOM_NAV_ITEMS = [\n {'icon': 'sf2-icon-star',\n 'desc': 'Custom navigation 1',\n 'link': 'https://www.seafile.com'\n },\n {'icon': 'sf2-icon-wiki-view',\n 'desc': 'Custom navigation 2',\n 'link': 'https://www.seafile.com/help'\n },\n {'icon': 'sf2-icon-wrench',\n 'desc': 'Custom navigation 3',\n 'link': 'http://www.example.com'\n },\n]\nicon field currently only supports icons in Seafile that begin with sf2-icon. You can find the list of icons here: Tools navigation bar on the left.ADDITIONAL_APP_BOTTOM_LINKS = {\n 'seafile': 'https://example.seahub.com/seahub',\n 'dtable-web': 'https://example.seahub.com/web'\n}\nADDITIONAL_ABOUT_DIALOG_LINKS = {\n 'seafile': 'https://example.seahub.com/seahub',\n 'dtable-web': 'https://example.seahub.com/dtable-web'\n}\nENABLE_SETTINGS_VIA_WEB = False to seahub_settings.py.# on Debian/Ubuntu 18.04+\napt-get install memcached libmemcached-dev -y\npip3 install --timeout=3600 pylibmc django-pylibmc\n\nsystemctl enable --now memcached\nseahub_settings.py.
"},{"location":"config/seahub_settings_py/#redis","title":"Redis","text":"CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '127.0.0.1:11211',\n },\n}\n
"},{"location":"config/seahub_settings_py/#user-management-options","title":"User management options","text":"# For security consideration, please set to match the host/domain of your site, e.g., ALLOWED_HOSTS = ['.example.com'].\n# Please refer https://docs.djangoproject.com/en/dev/ref/settings/#allowed-hosts for details.\nALLOWED_HOSTS = ['.myseafile.com']\n\n\n# Whether to use a secure cookie for the CSRF cookie\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-cookie-secure\nCSRF_COOKIE_SECURE = True\n\n# The value of the SameSite flag on the CSRF cookie\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-cookie-samesite\nCSRF_COOKIE_SAMESITE = 'Strict'\n\n# https://docs.djangoproject.com/en/3.2/ref/settings/#csrf-trusted-origins\nCSRF_TRUSTED_ORIGINS = ['https://www.myseafile.com']\n
"},{"location":"config/seahub_settings_py/#library-snapshot-label-feature","title":"Library snapshot label feature","text":"# Enalbe or disalbe registration on web. Default is `False`.\nENABLE_SIGNUP = False\n\n# Activate or deactivate user when registration complete. Default is `True`.\n# If set to `False`, new users need to be activated by admin in admin panel.\nACTIVATE_AFTER_REGISTRATION = False\n\n# Whether to send email when a system admin adding a new member. Default is `True`.\nSEND_EMAIL_ON_ADDING_SYSTEM_MEMBER = True\n\n# Whether to send email when a system admin resetting a user's password. Default is `True`.\nSEND_EMAIL_ON_RESETTING_USER_PASSWD = True\n\n# Send system admin notify email when user registration is complete. Default is `False`.\nNOTIFY_ADMIN_AFTER_REGISTRATION = True\n\n# Remember days for login. Default is 7\nLOGIN_REMEMBER_DAYS = 7\n\n# Attempt limit before showing a captcha when login.\nLOGIN_ATTEMPT_LIMIT = 3\n\n# deactivate user account when login attempts exceed limit\n# Since version 5.1.2 or pro 5.1.3\nFREEZE_USER_ON_LOGIN_FAILED = False\n\n# mininum length for user's password\nUSER_PASSWORD_MIN_LENGTH = 6\n\n# LEVEL based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above.\nUSER_PASSWORD_STRENGTH_LEVEL = 3\n\n# default False, only check USER_PASSWORD_MIN_LENGTH\n# when True, check password strength level, STRONG(or above) is allowed\nUSER_STRONG_PASSWORD_REQUIRED = False\n\n# Force user to change password when admin add/reset a user.\n# Added in 5.1.1, deafults to True.\nFORCE_PASSWORD_CHANGE = True\n\n# Age of cookie, in seconds (default: 2 weeks).\nSESSION_COOKIE_AGE = 60 * 60 * 24 * 7 * 2\n\n# Whether a user's session cookie expires when the Web browser is closed.\nSESSION_EXPIRE_AT_BROWSER_CLOSE = False\n\n# Whether to save the session data on every request. 
Default is `False`\nSESSION_SAVE_EVERY_REQUEST = False\n\n# Whether enable the feature \"published library\". Default is `False`\n# Since 6.1.0 CE\nENABLE_WIKI = True\n\n# In old version, if you use Single Sign On, the password is not saved in Seafile.\n# Users can't use WebDAV because Seafile can't check whether the password is correct.\n# Since version 6.3.8, you can enable this option to let user's to specific a password for WebDAV login.\n# Users login via SSO can use this password to login in WebDAV.\n# Enable the feature. pycryptodome should be installed first.\n# sudo pip install pycryptodome==3.12.0\nENABLE_WEBDAV_SECRET = True\nWEBDAV_SECRET_MIN_LENGTH = 8\n\n# LEVEL for the password, based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above.\nWEBDAV_SECRET_STRENGTH_LEVEL = 1\n\n\n# Since version 7.0.9, you can force a full user to log in with a two factor authentication.\n# The prerequisite is that the administrator should 'enable two factor authentication' in the 'System Admin -> Settings' page.\n# Then you can add the following configuration information to the configuration file.\nENABLE_FORCE_2FA_TO_ALL_USERS = True\n
"},{"location":"config/seahub_settings_py/#library-options","title":"Library options","text":"# Turn on this option to let users to add a label to a library snapshot. Default is `False`\nENABLE_REPO_SNAPSHOT_LABEL = False\n# if enable create encrypted library\nENABLE_ENCRYPTED_LIBRARY = True\n\n# version for encrypted library\n# should only be `2` or `4`.\n# version 3 is insecure (using AES128 encryption) so it's not recommended any more.\nENCRYPTED_LIBRARY_VERSION = 2\n\n# mininum length for password of encrypted library\nREPO_PASSWORD_MIN_LENGTH = 8\n\n# force use password when generate a share/upload link (since version 8.0.9)\nSHARE_LINK_FORCE_USE_PASSWORD = False\n\n# mininum length for password for share link (since version 4.4)\nSHARE_LINK_PASSWORD_MIN_LENGTH = 8\n\n# LEVEL for the password of a share/upload link\n# based on four types of input:\n# num, upper letter, lower letter, other symbols\n# '3' means password must have at least 3 types of the above. (since version 8.0.9)\nSHARE_LINK_PASSWORD_STRENGTH_LEVEL = 3\n\n# Default expire days for share link (since version 6.3.8)\n# Once this value is configured, the user can no longer generate an share link with no expiration time.\n# If the expiration value is not set when the share link is generated, the value configured here will be used.\nSHARE_LINK_EXPIRE_DAYS_DEFAULT = 5\n\n# minimum expire days for share link (since version 6.3.6)\n# SHARE_LINK_EXPIRE_DAYS_MIN should be less than SHARE_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nSHARE_LINK_EXPIRE_DAYS_MIN = 3 # default is 0, no limit.\n\n# maximum expire days for share link (since version 6.3.6)\n# SHARE_LINK_EXPIRE_DAYS_MIN should be greater than SHARE_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nSHARE_LINK_EXPIRE_DAYS_MAX = 8 # default is 0, no limit.\n\n# Default expire days for upload link (since version 7.1.6)\n# Once this value is configured, the user can no longer generate an upload link with no expiration time.\n# If the expiration 
value is not set when the upload link is generated, the value configured here will be used.\nUPLOAD_LINK_EXPIRE_DAYS_DEFAULT = 5\n\n# minimum expire days for upload link (since version 7.1.6)\n# UPLOAD_LINK_EXPIRE_DAYS_MIN should be less than UPLOAD_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nUPLOAD_LINK_EXPIRE_DAYS_MIN = 3 # default is 0, no limit.\n\n# maximum expire days for upload link (since version 7.1.6)\n# UPLOAD_LINK_EXPIRE_DAYS_MAX should be greater than UPLOAD_LINK_EXPIRE_DAYS_DEFAULT (If the latter is set).\nUPLOAD_LINK_EXPIRE_DAYS_MAX = 8 # default is 0, no limit.\n\n# force user login when view file/folder share link (since version 6.3.6)\nSHARE_LINK_LOGIN_REQUIRED = True\n\n# enable water mark when view(not edit) file in web browser (since version 6.3.6)\nENABLE_WATERMARK = True\n\n# Disable sync with any folder. Default is `False`\n# NOTE: since version 4.2.4\nDISABLE_SYNC_WITH_ANY_FOLDER = True\n\n# Enable or disable library history setting\nENABLE_REPO_HISTORY_SETTING = True\n\n# Enable or disable user share library to any group\n# Since version 6.2.0\nENABLE_SHARE_TO_ALL_GROUPS = True\n\n# Enable or disable user to clean trash (default is True)\n# Since version 6.3.6\nENABLE_USER_CLEAN_TRASH = True\n\n# Add a report abuse button on download links. (since version 7.1.0)\n# Users can report abuse on the share link page, fill in the report type, contact information, and description.\n# Default is false.\nENABLE_SHARE_LINK_REPORT_ABUSE = True\n
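When several of the expiry options above are set together, they must be mutually consistent (MIN ≤ DEFAULT ≤ MAX, with 0 meaning no limit). A small sanity check along these lines (illustrative, not part of Seahub):

```python
def check_expire_days(default, minimum=0, maximum=0):
    """Validate a SHARE_LINK_EXPIRE_DAYS_* / UPLOAD_LINK_EXPIRE_DAYS_* triple.

    A value of 0 for minimum/maximum means "no limit".
    """
    if minimum and default and minimum > default:
        return False
    if maximum and default and maximum < default:
        return False
    if minimum and maximum and minimum > maximum:
        return False
    return True
```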
"},{"location":"config/seahub_settings_py/#cloud-mode","title":"Cloud Mode","text":"# Online preview maximum file size, defaults to 30M.\nFILE_PREVIEW_MAX_SIZE = 30 * 1024 * 1024\n\n# Extensions of previewed text files.\n# NOTE: since version 6.1.1\nTEXT_PREVIEW_EXT = \"\"\"ac, am, bat, c, cc, cmake, cpp, cs, css, diff, el, h, html,\nhtm, java, js, json, less, make, org, php, pl, properties, py, rb,\nscala, script, sh, sql, txt, text, tex, vi, vim, xhtml, xml, log, csv,\ngroovy, rst, patch, go\"\"\"\n\n\n# Seafile only generates thumbnails for images smaller than the following size.\n# Since version 6.3.8 pro, suport the psd online preview.\nTHUMBNAIL_IMAGE_SIZE_LIMIT = 30 # MB\n\n# Enable or disable thumbnail for video. ffmpeg and moviepy should be installed first.\n# For details, please refer to https://manual.seafile.com/deploy/video_thumbnails.html\n# NOTE: this option is deprecated in version 7.1\nENABLE_VIDEO_THUMBNAIL = False\n\n# Use the frame at 5 second as thumbnail\n# NOTE: this option is deprecated in version 7.1\nTHUMBNAIL_VIDEO_FRAME_TIME = 5\n\n# Absolute filesystem path to the directory that will hold thumbnail files.\nTHUMBNAIL_ROOT = '/haiwen/seahub-data/thumbnail/thumb/'\n\n# Default size for picture preview. Enlarge this size can improve the preview quality.\n# NOTE: since version 6.1.1\nTHUMBNAIL_SIZE_FOR_ORIGINAL = 1024\n
"},{"location":"config/seahub_settings_py/#single-sign-on","title":"Single Sign On","text":"# Enable cloude mode and hide `Organization` tab.\nCLOUD_MODE = True\n\n# Disable global address book\nENABLE_GLOBAL_ADDRESSBOOK = False\n
"},{"location":"config/seahub_settings_py/#other-options","title":"Other options","text":"# Enable authentication with ADFS\n# Default is False\n# Since 6.0.9\nENABLE_ADFS_LOGIN = True\n\n# Force user login through ADFS instead of email and password\n# Default is False\n# Since 11.0.7\nDISABLE_ADFS_USER_PWD_LOGIN = True\n\n# Enable authentication wit Kerberos\n# Default is False\nENABLE_KRB5_LOGIN = True\n\n# Enable authentication with Shibboleth\n# Default is False\nENABLE_SHIBBOLETH_LOGIN = True\n\n# Enable client to open an external browser for single sign on\n# When it is false, the old buitin browser is opened for single sign on\n# When it is true, the default browser of the operation system is opened\n# The benefit of using system browser is that it can support hardware 2FA\n# Since 11.0.0, and sync client 9.0.5, drive client 3.0.8\nCLIENT_SSO_VIA_LOCAL_BROWSER = True # default is False\nCLIENT_SSO_UUID_EXPIRATION = 5 * 60 # in seconds\n
"},{"location":"config/seahub_settings_py/#pro-edition-only-options","title":"Pro edition only options","text":"# This is outside URL for Seahub(Seafile Web). \n# The domain part (i.e., www.example.com) will be used in generating share links and download/upload file via web.\n# Note: Outside URL means \"if you use Nginx, it should be the Nginx's address\"\n# Note: SERVICE_URL is moved to seahub_settings.py since 9.0.0\nSERVICE_URL = 'http://www.example.com:8000'\n\n# Disable settings via Web interface in system admin->settings\n# Default is True\n# Since 5.1.3\nENABLE_SETTINGS_VIA_WEB = False\n\n# Choices can be found here:\n# http://en.wikipedia.org/wiki/List_of_tz_zones_by_name\n# although not all choices may be available on all operating systems.\n# If running in a Windows environment this must be set to the same as your\n# system time zone.\nTIME_ZONE = 'UTC'\n\n# Language code for this installation. All choices can be found here:\n# http://www.i18nguy.com/unicode/language-identifiers.html\n# Default language for sending emails.\nLANGUAGE_CODE = 'en'\n\n# Custom language code choice.\nLANGUAGES = (\n ('en', 'English'),\n ('zh-cn', '\u7b80\u4f53\u4e2d\u6587'),\n ('zh-tw', '\u7e41\u9ad4\u4e2d\u6587'),\n)\n\n# Set this to your website/company's name. This is contained in email notifications and welcome message when user login for the first time.\nSITE_NAME = 'Seafile'\n\n# Browser tab's title\nSITE_TITLE = 'Private Seafile'\n\n# If you don't want to run seahub website on your site's root path, set this option to your preferred path.\n# e.g. setting it to '/seahub/' would run seahub on http://example.com/seahub/.\nSITE_ROOT = '/'\n\n# Max number of files when user upload file/folder.\n# Since version 6.0.4\nMAX_NUMBER_OF_FILES_FOR_FILEUPLOAD = 500\n\n# Control the language that send email. 
Default to user's current language.\n# Since version 6.1.1\nSHARE_LINK_EMAIL_LANGUAGE = ''\n\n# Interval for browser requests unread notifications\n# Since PRO 6.1.4 or CE 6.1.2\nUNREAD_NOTIFICATIONS_REQUEST_INTERVAL = 3 * 60 # seconds\n\n# Whether to allow user to delete account, change login password or update basic user\n# info on profile page.\n# Since PRO 6.3.10\nENABLE_DELETE_ACCOUNT = False\nENABLE_UPDATE_USER_INFO = False\nENABLE_CHANGE_PASSWORD = False\n\n# Get web api auth token on profile page.\nENABLE_GET_AUTH_TOKEN_BY_SESSION = True\n\n# Since 8.0.6 CE/PRO version.\n# Url redirected to after user logout Seafile.\n# Usually configured as Single Logout url.\nLOGOUT_REDIRECT_URL = 'http{s}://www.example-url.com'\n\n\n# Enable system admin add T&C, all users need to accept terms before using. Defaults to `False`.\n# Since version 6.0\nENABLE_TERMS_AND_CONDITIONS = True\n\n# Enable two factor authentication for accounts. Defaults to `False`.\n# Since version 6.0\nENABLE_TWO_FACTOR_AUTH = True\n\n# Enable user select a template when he/she creates library.\n# When user select a template, Seafile will create folders releated to the pattern automaticly.\n# Since version 6.0\nLIBRARY_TEMPLATES = {\n 'Technology': ['/Develop/Python', '/Test'],\n 'Finance': ['/Current assets', '/Fixed assets/Computer']\n}\n\n# Enable a user to change password in 'settings' page. Default to `True`\n# Since version 6.2.11\nENABLE_CHANGE_PASSWORD = True\n\n# If show contact email when search user.\nENABLE_SHOW_CONTACT_EMAIL_WHEN_SEARCH_USER = True\n
"},{"location":"config/seahub_settings_py/#restful-api","title":"RESTful API","text":"# Whether to show the used traffic in user's profile popup dialog. Default is True\nSHOW_TRAFFIC = True\n\n# Allow administrator to view user's file in UNENCRYPTED libraries\n# through Libraries page in System Admin. Default is False.\nENABLE_SYS_ADMIN_VIEW_REPO = True\n\n# For un-login users, providing an email before downloading or uploading on shared link page.\n# Since version 5.1.4\nENABLE_SHARE_LINK_AUDIT = True\n\n# Check virus after upload files to shared upload links. Defaults to `False`.\n# Since version 6.0\nENABLE_UPLOAD_LINK_VIRUS_CHECK = True\n\n# Send email to these email addresses when a virus is detected.\n# This list can be any valid email address, not necessarily the emails of Seafile user.\n# Since version 6.0.8\nVIRUS_SCAN_NOTIFY_LIST = ['user_a@seafile.com', 'user_b@seafile.com']\n
"},{"location":"config/seahub_settings_py/#seahub-custom-functions","title":"Seahub Custom Functions","text":"# API throttling related settings. Enlarger the rates if you got 429 response code during API calls.\nREST_FRAMEWORK = {\n 'DEFAULT_THROTTLE_RATES': {\n 'ping': '600/minute',\n 'anon': '5/minute',\n 'user': '300/minute',\n },\n 'UNICODE_JSON': False,\n}\n\n# Throtting whitelist used to disable throttle for certain IPs.\n# e.g. REST_FRAMEWORK_THROTTING_WHITELIST = ['127.0.0.1', '192.168.1.1']\n# Please make sure `REMOTE_ADDR` header is configured in Nginx conf according to https://manual.seafile.com/deploy/deploy_with_nginx.html.\nREST_FRAMEWORK_THROTTING_WHITELIST = []\ncustom_search_user function in {seafile install path}/conf/seahub_custom_functions/__init__.pyimport os\nimport sys\n\ncurrent_path = os.path.dirname(os.path.abspath(__file__))\nseahub_dir = os.path.join(current_path, \\\n '../../seafile-server-latest/seahub/seahub')\nsys.path.append(seahub_dir)\n\nfrom seahub.profile.models import Profile\ndef custom_search_user(request, emails):\n\n institution_name = ''\n\n username = request.user.username\n profile = Profile.objects.get_profile_by_user(username)\n if profile:\n institution_name = profile.institution\n\n inst_users = [p.user for p in\n Profile.objects.filter(institution=institution_name)]\n\n filtered_emails = []\n for email in emails:\n if email in inst_users:\n filtered_emails.append(email)\n\n return filtered_emails\ncustom_search_user and seahub_custom_functions/__init__.pytest@test.com, you can define a custom_get_groups function in {seafile install path}/conf/seahub_custom_functions/__init__.pyimport os\nimport sys\n\ncurrent_path = os.path.dirname(os.path.abspath(__file__))\nseaserv_dir = os.path.join(current_path, \\\n '../../seafile-server-latest/seafile/lib64/python2.7/site-packages')\nsys.path.append(seaserv_dir)\n\ndef custom_get_groups(request):\n\n from seaserv import ccnet_api\n\n groups = []\n username = 
request.user.username\n\n # for current user\n groups += ccnet_api.get_groups(username)\n\n # for 'test@test.com' user\n groups += ccnet_api.get_groups('test@test.com')\n\n return groups\ncustom_get_groups and seahub_custom_functions/__init__.py
"},{"location":"config/sending_email/","title":"Sending Email Notifications on Seahub","text":""},{"location":"config/sending_email/#types-of-email-sending-in-seafile","title":"Types of Email Sending in Seafile","text":"./seahub.sh restart\n
Add the following to `seahub_settings.py` to enable email sending:

```python
EMAIL_USE_TLS = False
EMAIL_HOST = 'smtp.example.com'           # smtp server
EMAIL_HOST_USER = 'username@example.com'  # username and domain
EMAIL_HOST_PASSWORD = 'password'          # password
EMAIL_PORT = 25
DEFAULT_FROM_EMAIL = EMAIL_HOST_USER
SERVER_EMAIL = EMAIL_HOST_USER
```

An example for Gmail:

```python
EMAIL_USE_TLS = True
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_HOST_USER = 'username@gmail.com'
EMAIL_HOST_PASSWORD = 'password'
EMAIL_PORT = 587
DEFAULT_FROM_EMAIL = EMAIL_HOST_USER
SERVER_EMAIL = EMAIL_HOST_USER
```

If email sending fails, check `logs/seahub.log` to see what may cause the problem. For a complete email notification list, please refer to the email notification list.

If your email service does not require a username and password, leave `EMAIL_HOST_USER` and `EMAIL_HOST_PASSWORD` blank (`''`). (But notice that the emails will then be sent without a From: address.)

If your email service uses SSL instead of TLS, set `EMAIL_USE_SSL = True` instead of `EMAIL_USE_TLS`.

### Set reply-to of email
"},{"location":"config/sending_email/#config-background-email-sending-task-pro-edition-only","title":"Config background email sending task (Pro Edition Only)","text":"# Set reply-to header to user's email or not, defaults to ``False``. For details,\n# please refer to http://www.w3.org/Protocols/rfc822/\nADD_REPLY_TO_HEADER = True\nseafevents.conf.
"},{"location":"config/sending_email/#customize-email-messages","title":"Customize email messages","text":"[SEAHUB EMAIL]\n\n## must be \"true\" to enable user email notifications when there are new unread notifications\nenabled = true\n\n## interval of sending seahub email. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval = 30m\nSITE_NAME variable in seahub_settings.py. If it is not enough for your case, you can customize the email templates.seahub-data/custom/templates/email_base.html and modify the new one. In this way, the customization will be maintained after upgrade. send_html_email(_(\"Reset Password on %s\") % site_name,\n email_template_name, c, None, [user.username])\nseahub-data/custom/templates/registration/password_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade.send_html_email(_(u'Password has been reset on %s') % SITE_NAME,\n 'sysadmin/user_reset_email.html', c, None, [email])\nseahub-data/custom/templates/sysadmin/user_add_email.html and modify the new one. In this way, the customization will be maintained after upgrade.send_html_email(_(u'Password has been reset on %s') % SITE_NAME,\n 'sysadmin/user_reset_email.html', c, None, [email])\nseahub-data/custom/templates/sysadmin/user_reset_email.html and modify the new one. In this way, the customization will be maintained after upgrade.try:\n if file_shared_type == 'f':\n c['file_shared_type'] = _(u\"file\")\n send_html_email(_(u'A file is shared to you on %s') % SITE_NAME,\n 'shared_link_email.html',\n c, from_email, [to_email],\n reply_to=reply_to\n )\n else:\n c['file_shared_type'] = _(u\"directory\")\n send_html_email(_(u'A directory is shared to you on %s') % SITE_NAME,\n 'shared_link_email.html',\n c, from_email, [to_email],\n reply_to=reply_to)\nseahub-data/custom/templates/shared_link_email.html and modify the new one. 
In this way, the customization will be maintained after upgrade.send_html_email(_('New notice on %s') % settings.SITE_NAME,\n 'notifications/notice_email.html', c,\n None, [to_user])\n
"},{"location":"deploy/#manually-deployment-options","title":"Manually deployment options","text":"
"},{"location":"deploy/#ldap-and-ad-integration","title":"LDAP and AD integration","text":"
"},{"location":"deploy/#trouble-shooting","title":"Trouble shooting","text":"
"},{"location":"deploy/#upgrade-seafile-server","title":"Upgrade Seafile Server","text":"
"},{"location":"deploy/auth_switch/","title":"Switch authentication type","text":"
To migrate a user to a new authentication type, first configure the provider you use in the config file. The user to be migrated should already be able to log in with this new authentication type, but they will be created as a new user with a new unique identifier, so they will not have access to their existing libraries. Note the `uid` from the `social_auth_usersocialauth` table, then delete this new, still empty user again.

Next, for the existing user `xxx@auth.local`, add a row to `social_auth_usersocialauth` with the `xxx@auth.local` username, your provider and the `uid`.

The following example migrates the user `12ae56789f1e4c8d8e1c31415867317c@auth.local` from local database authentication to OAuth. The OAuth authentication is configured in `seahub_settings.py` with the provider name `authentik-oauth`. The `uid` of the user inside the identity provider is `HR12345`.

```sql
mysql> select email,left(passwd,25) from EmailUser where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';
+---------------------------------------------+------------------------------+
| email                                       | left(passwd,25)              |
+---------------------------------------------+------------------------------+
| 12ae56789f1e4c8d8e1c31415867317c@auth.local | PBKDF2SHA256$10000$4cdda6... |
+---------------------------------------------+------------------------------+

mysql> update EmailUser set passwd = '!' where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';

mysql> insert into `social_auth_usersocialauth` (`username`, `provider`, `uid`, `extra_data`) values ('12ae56789f1e4c8d8e1c31415867317c@auth.local', 'authentik-oauth', 'HR12345', '');
```

The `extra_data` field stores the user's information returned from the provider. For most providers, the `extra_data` field is usually an empty string. Since version 11.0.3-Pro, the default value of the `extra_data` field is NULL.
"},{"location":"deploy/auth_switch/#migrating-from-one-external-authentication-to-another","title":"Migrating from one external authentication to another","text":"mysql> select email,passwd from EmailUser where email = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+------- +\n| email | passwd |\n+---------------------------------------------+--------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | ! |\n+---------------------------------------------+--------+\n\nmysql> select username,provider,uid from social_auth_usersocialauth where username = '12ae56789f1e4c8d8e1c31415867317c@auth.local';\n+---------------------------------------------+-----------------+---------+\n| username | provider | uid |\n+---------------------------------------------+-----------------+---------+\n| 12ae56789f1e4c8d8e1c31415867317c@auth.local | authentik-oauth | HR12345 |\n+---------------------------------------------+-----------------+---------+\nsocial_auth_usersocialauth table. No entries need to be deleted or created. You only need to modify the existing ones. The xxx@auth.local remains the same, you only need to replace the provider and the uid.social_auth_usersocialauth table that belongs to the particular user.
"},{"location":"deploy/auto_login_seadrive/#auto-login-on-internet-explorer","title":"Auto Login on Internet Explorer","text":"
Create the following entries under `HKEY_CURRENT_USER/SOFTWARE/SeaDrive`:

```
Key   : PreconfigureServerAddr
Type  : REG_SZ
Value : <the url of seafile server>

Key   : PreconfigureUseKerberosLogin
Type  : REG_SZ
Value : <0|1>  // 0 for normal login, 1 for SSO login
```

The corresponding machine-wide location is `HKEY_LOCAL_MACHINE/SOFTWARE/Wow6432Node/SeaDrive`.
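The same keys can be distributed as a `.reg` file, e.g. via a login script; the server URL and the value `1` below are illustrative placeholders:

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\SOFTWARE\SeaDrive]
"PreconfigureServerAddr"="https://seafile.example.com"
"PreconfigureUseKerberosLogin"="1"
```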
"},{"location":"deploy/auto_login_seadrive/#auto-login-via-group-policy","title":"Auto Login via Group Policy","text":"msiexec /i seadrive.msi /quiet /qn /log install.log\n/opt/seafile-data and /opt/seafile-mysql, are still adopted in this manual. What's more, all k8s YAML files will be placed in /opt/seafile-k8s-yaml. It is not recommended to change these paths. If you do, account for it when following these instructions.
"},{"location":"deploy/deploy_with_k8s/#yaml","title":"YAML","text":"kubectl create secret docker-registry regcred --docker-server=docker.seadrive.org/seafileltd --docker-username=seafile --docker-password=zjkmid6rQibdZ=uJMuWS\n/opt/seafile-k8s-yaml. This series of YAML mainly includes Deployment for pod management and creation, Service for exposing services to the external network, PersistentVolume for defining the location of a volume used for persistent storage on the host and Persistentvolumeclaim for declaring the use of persistent storage in the container. For futher configuration details, you can refer the official documents.apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: mariadb\nspec:\n selector:\n matchLabels:\n app: mariadb\n replicas: 1\n template:\n metadata:\n labels:\n app: mariadb\n spec:\n containers:\n - name: mariadb\n image: mariadb:10.11\n env:\n - name: MARIADB_ROOT_PASSWORD\n value: \"db_dev\"\n - name: MARIADB_AUTO_UPGRADE\n value: \"true\"\n ports:\n - containerPort: 3306\n volumeMounts:\n - name: mariadb-data\n mountPath: /var/lib/mysql\n volumes:\n - name: mariadb-data\n persistentVolumeClaim:\n claimName: mariadb-data\nMARIADB_ROOT_PASSWORD to your own mariadb password. In the above Deployment configuration file, no restart policy for the pod is specified. The default restart policy is Always. If you need to modify it, add the following to the spec attribute:
"},{"location":"deploy/deploy_with_k8s/#mariadb-serviceyaml","title":"mariadb-service.yaml","text":"restartPolicy: OnFailure\n\n#Note:\n# Always: always restart (include normal exit)\n# OnFailure: restart only with unexpected exit\n# Never: do not restart\n
"},{"location":"deploy/deploy_with_k8s/#mariadb-persistentvolumeyaml","title":"mariadb-persistentvolume.yaml","text":"apiVersion: v1\nkind: Service\nmetadata:\n name: mariadb\nspec:\n selector:\n app: mariadb\n ports:\n - protocol: TCP\n port: 3306\n targetPort: 3306\n
"},{"location":"deploy/deploy_with_k8s/#mariadb-persistentvolumeclaimyaml","title":"mariadb-persistentvolumeclaim.yaml","text":"apiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: mariadb-data\nspec:\n capacity:\n storage: 1Gi\n accessModes:\n - ReadWriteOnce\n hostPath:\n path: /opt/seafile-mysql/db\n
"},{"location":"deploy/deploy_with_k8s/#memcached","title":"memcached","text":""},{"location":"deploy/deploy_with_k8s/#memcached-deploymentyaml","title":"memcached-deployment.yaml","text":"apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: mariadb-data\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi\n
"},{"location":"deploy/deploy_with_k8s/#memcached-serviceyaml","title":"memcached-service.yaml","text":"apiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: memcached\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: memcached\n template:\n metadata:\n labels:\n app: memcached\n spec:\n containers:\n - name: memcached\n image: memcached:1.6.18\n args: [\"-m\", \"256\"]\n ports:\n - containerPort: 11211\n
"},{"location":"deploy/deploy_with_k8s/#seafile","title":"Seafile","text":""},{"location":"deploy/deploy_with_k8s/#seafile-deploymentyaml","title":"seafile-deployment.yaml","text":"apiVersion: v1\nkind: Service\nmetadata:\n name: memcached\nspec:\n selector:\n app: memcached\n ports:\n - protocol: TCP\n port: 11211\n targetPort: 11211\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: seafile\nspec:\n replicas: 1\n selector:\n matchLabels:\n app: seafile\n template:\n metadata:\n labels:\n app: seafile\n spec:\n containers:\n - name: seafile\n # image: seafileltd/seafile-mc:9.0.10\n # image: seafileltd/seafile-mc:11.0-latest\n image: docker.seadrive.org/seafileltd/seafile-pro-mc:11.0-latest\n env:\n - name: DB_HOST\n value: \"mariadb\"\n - name: DB_ROOT_PASSWD\n value: \"db_dev\" #db's password\n - name: TIME_ZONE\n value: \"Europe/Berlin\"\n - name: SEAFILE_ADMIN_EMAIL\n value: \"admin@seafile.com\" #admin email\n - name: SEAFILE_ADMIN_PASSWORD\n value: \"admin_password\" #admin password\n - name: SEAFILE_SERVER_LETSENCRYPT\n value: \"false\"\n - name: SEAFILE_SERVER_HOSTNAME\n value: \"you_seafile_domain\" #hostname\n ports:\n - containerPort: 80\n # - containerPort: 443\n # name: seafile-secure\n volumeMounts:\n - name: seafile-data\n mountPath: /shared\n volumes:\n - name: seafile-data\n persistentVolumeClaim:\n claimName: seafile-data\n restartPolicy: Always\n # to get image from protected repository\n imagePullSecrets:\n - name: regcred\n
"},{"location":"deploy/deploy_with_k8s/#seafile-persistentvolumeyaml","title":"seafile-persistentvolume.yaml","text":"apiVersion: v1\nkind: Service\nmetadata:\n name: seafile\nspec:\n selector:\n app: seafile\n type: LoadBalancer\n ports:\n - protocol: TCP\n port: 80\n targetPort: 80\n nodePort: 30000\n
"},{"location":"deploy/deploy_with_k8s/#seafile-persistentvolumeclaimyaml","title":"seafile-persistentvolumeclaim.yaml","text":"apiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: seafile-data\nspec:\n capacity:\n storage: 10Gi\n accessModes:\n - ReadWriteOnce\n hostPath:\n path: /opt/seafile-data\n
"},{"location":"deploy/deploy_with_k8s/#deploy-pods","title":"Deploy pods","text":"apiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n name: seafile-data\nspec:\n accessModes:\n - ReadWriteOnce\n resources:\n requests:\n storage: 10Gi\n
"},{"location":"deploy/deploy_with_k8s/#container-management","title":"Container management","text":"kubectl apply -f /opt/seafile-k8s-yaml/\nseafile- as the prefix (such as seafile-748b695648-d6l4g)kubectl get pods\nkubectl logs seafile-748b695648-d6l4g\nkubectl exec -it seafile-748b695648-d6l4g -- bash\n/opt/seafile-data/conf and need to restart the container, the following command can be refered:
"},{"location":"deploy/https_with_apache/","title":"Enabling HTTPS with Apache","text":"kubectl delete deployments --all\nkubectl apply -f /opt/seafile-k8s-yaml/\nseafile.example.com.
# Ubuntu\n$ sudo a2enmod rewrite\n$ sudo a2enmod proxy_http\nvhost.conf. For Debian/Ubuntu, this is sites-enabled/000-default.
"},{"location":"deploy/https_with_apache/#getting-a-lets-encrypt-certificate","title":"Getting a Let's Encrypt certificate","text":"<VirtualHost *:80>\n ServerName seafile.example.com\n # Use \"DocumentRoot /var/www/html\" for CentOS\n # Use \"DocumentRoot /var/www\" for Debian/Ubuntu\n DocumentRoot /var/www\n Alias /media /opt/seafile/seafile-server-latest/seahub/media\n\n AllowEncodedSlashes On\n\n RewriteEngine On\n\n <Location /media>\n Require all granted\n </Location>\n\n #\n # seafile fileserver\n #\n ProxyPass /seafhttp http://127.0.0.1:8082\n ProxyPassReverse /seafhttp http://127.0.0.1:8082\n RewriteRule ^/seafhttp - [QSA,L]\n\n #\n # seahub\n #\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPreserveHost On\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n</VirtualHost>\nsudo certbot --apache certonly\n/etc/letsencrypt/live. For the host name seafile.example.com, the files are stored in /etc/letsencrypt/live/seafile.example.com. $ sudo a2enmod ssl\n<VirtualHost *:443>\n ServerName seafile.example.com\n DocumentRoot /var/www\n\n SSLEngine On\n SSLCertificateFile /etc/letsencrypt/live/seafile.example.com/fullchain.pem; # Path to your fullchain.pem\n SSLCertificateKeyFile /etc/letsencrypt/live/seafile.example.com/privkey.pem; # Path to your privkey.pem\n\n Alias /media /opt/seafile/seafile-server-latest/seahub/media\n\n <Location /media>\n Require all granted\n </Location>\n\n RewriteEngine On\n\n #\n # seafile fileserver\n #\n ProxyPass /seafhttp http://127.0.0.1:8082\n ProxyPassReverse /seafhttp http://127.0.0.1:8082\n RewriteRule ^/seafhttp - [QSA,L]\n\n #\n # seahub\n #\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPreserveHost On\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n</VirtualHost>\n
"},{"location":"deploy/https_with_apache/#modifying-seahub_settingspy","title":"Modifying seahub_settings.py","text":"sudo service apache2 restart\nSERVICE_URL in seahub_settings.py informs Seafile about the chosen domain, protocol and port. Change the SERVICE_URLso as to account for the switch from HTTP to HTTPS and to correspond to your host name (the http://must not be removed):SERVICE_URL = 'https://seafile.example.com'\nFILE_SERVER_ROOT in seahub_settings.py informs Seafile about the location of and the protocol used by the file server. Change the FILE_SERVER_ROOTso as to account for the switch from HTTP to HTTPS and to correspond to your host name (the trailing /seafhttp must not be removed):FILE_SERVER_ROOT = 'https://seafile.example.com/seafhttp'\nSERVICE_URL and FILE_SERVER_ROOT can also be modified in Seahub via System Admininstration > Settings. If they are configured via System Admin and in seahub_settings.py, the value in System Admin will take precedence.seafile.conf in /opt/seafile/conf:host = 127.0.0.1 ## default port 0.0.0.0\n
"},{"location":"deploy/https_with_apache/#troubleshooting","title":"Troubleshooting","text":"$ su seafile\n$ cd /opt/seafile/seafile-server-latest\n$ ./seafile.sh restart\n$ ./seahub.sh restart\n
"},{"location":"deploy/https_with_nginx/","title":"Enabling HTTPS with Nginx","text":"seafile.example.com.
# CentOS\n$ sudo yum install nginx -y\n\n# Debian/Ubuntu\n$ sudo apt install nginx -y\n
"},{"location":"deploy/https_with_nginx/#preparing-nginx","title":"Preparing Nginx","text":"# CentOS/Debian/Ubuntu\n$ sudo systemctl start nginx\n$ sudo systemctl enable nginx\n$ sudo setenforce permissive\n$ sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config\n/etc/nginx/conf.d:
"},{"location":"deploy/https_with_nginx/#preparing-nginx-on-debianubuntu","title":"Preparing Nginx on Debian/Ubuntu","text":"$ touch /etc/nginx/conf.d/seafile.conf\n/etc/nginx/sites-available/:$ touch /etc/nginx/sites-available/seafile.conf\n/etc/nginx/sites-enabled/ and /etc/nginx/sites-available: $ rm /etc/nginx/sites-enabled/default\n$ rm /etc/nginx/sites-available/default\n
"},{"location":"deploy/https_with_nginx/#configuring-nginx","title":"Configuring Nginx","text":"$ ln -s /etc/nginx/sites-available/seafile.conf /etc/nginx/sites-enabled/seafile.conf\nseafile.conf and modify the content to fit your needs:log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $upstream_response_time';\n\nserver {\n listen 80;\n server_name seafile.example.com;\n\n proxy_set_header X-Forwarded-For $remote_addr;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n # used for view/edit office file via Office Online Server\n client_max_body_size 0;\n\n access_log /var/log/nginx/seahub.access.log seafileformat;\n error_log /var/log/nginx/seahub.error.log;\n }\n\n location /seafhttp {\n rewrite ^/seafhttp(.*)$ $1 break;\n proxy_pass http://127.0.0.1:8082;\n client_max_body_size 0;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n\n proxy_connect_timeout 36000s;\n proxy_read_timeout 36000s;\n proxy_send_timeout 36000s;\n\n send_timeout 36000s;\n\n access_log /var/log/nginx/seafhttp.access.log seafileformat;\n error_log /var/log/nginx/seafhttp.error.log;\n }\n location /media {\n root /opt/seafile/seafile-server-latest/seahub;\n }\n}\n
listen) - if Seafile server should be available on a non-standard port/ - if Seahub is configured to start on a different port than 8000/seafhttp - if seaf-server is configured to start on a different port than 8082client_max_body_size)client_max_body_size is 1M. Uploading larger files will result in an error message HTTP error code 413 (\"Request Entity Too Large\"). It is recommended to synchronize the value of client_max_body_size with the parameter max_upload_size in section [fileserver] of seafile.conf. Optionally, the value can also be set to 0 to disable this feature. Client uploads are only partly affected by this limit. With a limit of 100 MiB they can safely upload files of any size.
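The relationship between the two settings can be sketched in a few lines (the helper name is hypothetical, not part of Seafile or nginx):

```python
def client_max_body_size(max_upload_size_mib: int) -> str:
    """Translate seafile.conf's max_upload_size (in MiB) into the
    matching nginx client_max_body_size value; 0 disables the limit."""
    return "0" if max_upload_size_mib <= 0 else f"{max_upload_size_mib}m"

# max_upload_size = 200 in [fileserver]  ->  client_max_body_size 200m;
print(client_max_body_size(200))  # 200m
```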
"},{"location":"deploy/https_with_nginx/#getting-a-lets-encrypt-certificate","title":"Getting a Let's Encrypt certificate","text":"$ nginx -t\n$ nginx -s reload\n$ sudo certbot certonly --nginx\n/etc/letsencrypt/live. For the host name seafile.example.com, the files are stored in /etc/letsencrypt/live/seafile.example.com. seafile.conf configuration file in /etc/nginx. log_format seafileformat '$http_x_forwarded_for $remote_addr [$time_local] \"$request\" $status $body_bytes_sent \"$http_referer\" \"$http_user_agent\" $upstream_response_time';\n\nserver {\n listen 80;\n server_name seafile.example.com;\n rewrite ^ https://$http_host$request_uri? permanent; # Forced redirect from HTTP to HTTPS\n\n server_tokens off; # Prevents the Nginx version from being displayed in the HTTP response header\n}\n\nserver {\n listen 443 ssl;\n ssl_certificate /etc/letsencrypt/live/seafile.example.com/fullchain.pem; # Path to your fullchain.pem\n ssl_certificate_key /etc/letsencrypt/live/seafile.example.com/privkey.pem; # Path to your privkey.pem\n server_name seafile.example.com;\n server_tokens off;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_read_timeout 1200s;\n\n proxy_set_header X-Forwarded-Proto https;\n\n... # No changes beyond this point compared to the Nginx configuration without HTTPS\n
"},{"location":"deploy/https_with_nginx/#large-file-uploads","title":"Large file uploads","text":"nginx -t\nnginx -s reload\n location /seafhttp {\n ... ...\n proxy_request_buffering off;\n }\n
"},{"location":"deploy/https_with_nginx/#modifying-seahub_settingspy","title":"Modifying seahub_settings.py","text":" location /seafdav {\n ... ...\n proxy_request_buffering off;\n }\nSERVICE_URL in seahub_settings.py informs Seafile about the chosen domain, protocol and port. Change the SERVICE_URLso as to account for the switch from HTTP to HTTPS and to correspond to your host name (the http:// must not be removed):SERVICE_URL = 'https://seafile.example.com'\nFILE_SERVER_ROOT in seahub_settings.py informs Seafile about the location of and the protocol used by the file server. Change the FILE_SERVER_ROOT so as to account for the switch from HTTP to HTTPS and to correspond to your host name (the trailing /seafhttp must not be removed):FILE_SERVER_ROOT = 'https://seafile.example.com/seafhttp'\nSERVICE_URL and FILE_SERVER_ROOT can also be modified in Seahub via System Admininstration > Settings. If they are configured via System Admin and in seahub_settings.py, the value in System Admin will take precedence.[fileserver] block on seafile.conf in /opt/seafile/conf:host = 127.0.0.1 ## default port 0.0.0.0\n
"},{"location":"deploy/https_with_nginx/#additional-modern-settings-for-nginx-optional","title":"Additional modern settings for Nginx (optional)","text":""},{"location":"deploy/https_with_nginx/#activating-ipv6","title":"Activating IPv6","text":"$ su seafile\n$ cd /opt/seafile/seafile-server-latest\n$ ./seafile.sh restart\n$ ./seahub.sh restart # or \"./seahub.sh start-fastcgi\" if you're using fastcgi\n
"},{"location":"deploy/https_with_nginx/#activating-http2","title":"Activating HTTP2","text":"listen 443;\nlisten [::]:443;\nhttp2.
"},{"location":"deploy/https_with_nginx/#advanced-tls-configuration-for-nginx-optional","title":"Advanced TLS configuration for Nginx (optional)","text":"listen 443 http2;\nlisten [::]:443 http2;\nseafile.conf, this rating can be significantly improved.
"},{"location":"deploy/https_with_nginx/#enabling-http-strict-transport-security","title":"Enabling HTTP Strict Transport Security","text":" server {\n listen 80;\n server_name seafile.example.com;\n rewrite ^ https://$http_host$request_uri? permanent; # Forced redirect from HTTP to HTTPS\n server_tokens off;\n }\n server {\n listen 443 ssl;\n ssl_certificate /etc/ssl/cacert.pem; # Path to your cacert.pem\n ssl_certificate_key /etc/ssl/privkey.pem; # Path to your privkey.pem\n server_name seafile.example.com;\n server_tokens off;\n\n # HSTS for protection against man-in-the-middle-attacks\n add_header Strict-Transport-Security \"max-age=31536000; includeSubDomains\";\n\n # DH parameters for Diffie-Hellman key exchange\n ssl_dhparam /etc/nginx/dhparam.pem;\n\n # Supported protocols and ciphers for general purpose server with good security and compatability with most clients\n ssl_protocols TLSv1.2 TLSv1.3;\n ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;\n ssl_prefer_server_ciphers off;\n\n # Supported protocols and ciphers for server when clients > 5years (i.e., Windows Explorer) must be supported\n #ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;\n #ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA256:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA;\n #ssl_prefer_server_ciphers on;\n\n 
ssl_session_timeout 5m;\n ssl_session_cache shared:SSL:5m;\n\n location / {\n proxy_pass http://127.0.0.1:8000;\n proxy_set_header Host $http_host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto https;\n\n access_log /var/log/nginx/seahub.access.log;\n error_log /var/log/nginx/seahub.error.log;\n\n proxy_read_timeout 1200s;\n\n client_max_body_size 0;\n }\n\n location /seafhttp {\n rewrite ^/seafhttp(.*)$ $1 break;\n proxy_pass http://127.0.0.1:8082;\n client_max_body_size 0;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_connect_timeout 36000s;\n proxy_read_timeout 36000s;\n proxy_send_timeout 36000s;\n send_timeout 36000s;\n }\n\n location /media {\n root /home/user/haiwen/seafile-server-latest/seahub;\n }\n }\nadd_header Strict-Transport-Security \"max-age=31536000; includeSubDomains\" always;\n$ openssl dhparam 2048 > /etc/nginx/dhparam.pem # Generates DH parameter of length 2048 bits\n
"},{"location":"deploy/https_with_nginx/#restricting-tls-protocols-and-ciphers","title":"Restricting TLS protocols and ciphers","text":"ssl_dhparam /etc/nginx/dhparam.pem;\nhttps://your-server/krb5-login. Only this URL needs to be configured under Kerberos protection. All other URLs don't go through the Kerberos module. The overall workflow for a user to login with Kerberos is as follows:
https://your-server/krb5-login.
"},{"location":"deploy/kerberos_config/#get-keytab-for-apache","title":"Get keytab for Apache","text":"<IfModule mod_ssl.c>\n <VirtualHost _default_:443>\n ServerName seafile.example.com\n DocumentRoot /var/www\n...\n <Location /krb5-login/>\n SSLRequireSSL\n AuthType Kerberos\n AuthName \"Kerberos EXAMPLE.ORG\"\n KrbMethodNegotiate On\n KrbMethodK5Passwd On\n Krb5KeyTab /etc/apache2/conf.d/http.keytab\n #ErrorDocument 401 '<html><meta http-equiv=\"refresh\" content=\"0; URL=/accounts/login\"><body>Kerberos authentication did not pass.</body></html>'\n Require valid-user\n </Location>\n...\n </VirtualHost>\n</IfModule>\nREMOTE_USER environment variable.
"},{"location":"deploy/kerberos_config/#verify","title":"Verify","text":"ENABLE_KRB5_LOGIN = True\n
user-login-name@domain-name, e.g. john@example.com. It's not a real email address, but it works fine as the unique identifier.social_auth_usersocialauth to map the identifier to internal user ID in Seafile. When this ID is changed in LDAP for a user, you only need to update social_auth_usersocialauth table.seahub_settings.py. Examples are as follows:ENABLE_LDAP = True\nLDAP_SERVER_URL = 'ldap://192.168.0.1' \nLDAP_BASE_DN = 'ou=test,dc=seafile,dc=ren' \nLDAP_ADMIN_DN = 'administrator@example.com' \nLDAP_ADMIN_PASSWORD = 'yourpassword' \nLDAP_PROVIDER = 'ldap' \nLDAP_LOGIN_ATTR = 'email' \nLDAP_CONTACT_EMAIL_ATTR = '' \nLDAP_USER_ROLE_ATTR = '' \nLDAP_USER_FIRST_NAME_ATTR = 'givenName' \nLDAP_USER_LAST_NAME_ATTR = 'sn' \nLDAP_USER_NAME_REVERSE = False \nLDAP_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' \n
cn=admin,dc=example,dc=comLDAP_BASE_DN and LDAP_ADMIN_DN:
"},{"location":"deploy/ldap_in_11.0/#advanced-ldap-integration-options","title":"Advanced LDAP Integration Options","text":""},{"location":"deploy/ldap_in_11.0/#multiple-base","title":"Multiple BASE","text":"LDAP_BASE_DN, you first have to navigate your organization hierachy on the domain controller GUI.
cn=users,dc=yourdomain,dc=com as LDAP_BASE_DN (with proper adjustment for your own needs).dsquery command on the domain controller to find out the DN for this OU. For example, if the OU is staff, you can run dsquery ou -name staff. More information can be found here.user@domain.name format for the LDAP_ADMIN_DN option. For example, you can use administrator@example.com for LDAP_ADMIN_DN. Sometimes the domain controller doesn't recognize this format. You can still use the dsquery command to find out the user's DN. For example, if the user name is 'seafileuser', run dsquery user -name seafileuser. More information here.LDAP_BASE_DN option. The DNs are separated by \";\", e.g.
"},{"location":"deploy/ldap_in_11.0/#additional-search-filter","title":"Additional Search Filter","text":"LDAP_BASE_DN = 'ou=developers,dc=example,dc=com;ou=marketing,dc=example,dc=com'\nLDAP_FILTER option. The value of this option follows standard LDAP search filter syntax (https://msdn.microsoft.com/en-us/library/aa746475(v=vs.85).aspx).(&($LOGIN_ATTR=*)($LDAP_FILTER)). $LOGIN_ATTR and $LDAP_FILTER will be replaced by your option values.seahub_settings.py:LDAP_FILTER = 'memberOf=CN=group,CN=developers,DC=example,DC=com'\n(&(mail=*)(memberOf=CN=group,CN=developers,DC=example,DC=com))memberOf attribute is only available in Active Directory.LDAP_FILTER option to limit user scope to a certain AD group.
dsquery command on the domain controller. For example, if group name is 'seafilegroup', run dsquery group -name seafilegroup.seahub_settings.py:
"},{"location":"deploy/ldap_in_11.0/#using-tls-connection-to-ldap-server","title":"Using TLS connection to LDAP server","text":"LDAP_FILTER = 'memberOf={output of dsquery command}'\nLDAP_SERVER_URL as the access address of the ldaps protocol to use TLS to connect to the LDAP service, for example:
"},{"location":"deploy/libreoffice_online/","title":"Integrate Seafile with Collabora Online (LibreOffice Online)","text":"LDAP_SERVER_URL = 'ldaps://192.168.0.1:636/'\nserver {\n listen 443 ssl;\n server_name collabora-online.seafile.com;\n\n ssl_certificate /etc/letsencrypt/live/collabora-online.seafile.com/fullchain.pem;\n ssl_certificate_key /etc/letsencrypt/live/collabora-online.seafile.com/privkey.pem;\n\n # static files\n location ^~ /browser {\n proxy_pass https://127.0.0.1:9980;\n proxy_set_header Host $http_host;\n }\n\n # WOPI discovery URL\n location ^~ /hosting/discovery {\n proxy_pass https://127.0.0.1:9980;\n proxy_set_header Host $http_host;\n }\n\n # Capabilities\n location ^~ /hosting/capabilities {\n proxy_pass https://127.0.0.1:9980;\n proxy_set_header Host $http_host;\n }\n\n # main websocket\n location ~ ^/cool/(.*)/ws$ {\n proxy_pass https://127.0.0.1:9980;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection \"Upgrade\";\n proxy_set_header Host $http_host;\n proxy_read_timeout 36000s;\n }\n\n # download, presentation and image upload\n location ~ ^/(c|l)ool {\n proxy_pass https://127.0.0.1:9980;\n proxy_set_header Host $http_host;\n }\n\n # Admin Console websocket\n location ^~ /cool/adminws {\n proxy_pass https://127.0.0.1:9980;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection \"Upgrade\";\n proxy_set_header Host $http_host;\n proxy_read_timeout 36000s;\n }\n}\ndocker pull collabora/code\ndocker run -t -d -p 127.0.0.1:9980:9980 -e \"aliasgroup1=https://<your-dot-escaped-domain>:443\" -e \"username=***\" -e \"password=***\" --name code --restart always collabora/code\ndomain args is the domain name of your Seafile server, if your Seafile server's domain name is demo.seafile.com, the command should be:docker run -t -d -p 127.0.0.1:9980:9980 -e \"aliasgroup1=https://demo.seafile.com:443\" -e \"username=***\" -e \"password=***\" --name code --restart always collabora/code\n# From 6.1.0 CE version on, 
Seafile supports viewing/editing **doc**, **ppt**, **xls** files via LibreOffice\n# Add this setting to view/edit **doc**, **ppt**, **xls** files\nOFFICE_SERVER_TYPE = 'CollaboraOffice'\n\n# Enable LibreOffice Online\nENABLE_OFFICE_WEB_APP = True\n\n# Url of LibreOffice Online's discovery page\n# The discovery page tells Seafile how to interact with LibreOffice Online when viewing a file online\n# You should change `https://collabora-online.seafile.com/hosting/discovery` to your actual LibreOffice Online server address\nOFFICE_WEB_APP_BASE_URL = 'https://collabora-online.seafile.com/hosting/discovery'\n\n# Expiration of WOPI access token\n# The WOPI access token is a string used by Seafile to determine the file's\n# identity and permissions when LibreOffice Online views it online\n# For security reasons, this token should expire after a set time period\nWOPI_ACCESS_TOKEN_EXPIRATION = 30 * 60 # seconds\n\n# List of file formats that you want to view through LibreOffice Online\n# You can change this value according to your preferences\n# And of course you should make sure your LibreOffice Online supports previewing\n# the files with the specified extensions\nOFFICE_WEB_APP_FILE_EXTENSION = ('odp', 'ods', 'odt', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt', 'pptm', 'pptx', 'doc', 'docm', 'docx')\n\n# Enable editing files through LibreOffice Online\nENABLE_OFFICE_WEB_APP_EDIT = True\n\n# Types of files that should be editable through LibreOffice Online\nOFFICE_WEB_APP_EDIT_FILE_EXTENSION = ('odp', 'ods', 'odt', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt', 'pptm', 'pptx', 'doc', 'docm', 'docx')\n
sudo apt-get install python-mysqldb or sudo apt-get install python3-mysqldb to install it./opt/seafile.sqlite2mysql.sh:chmod +x sqlite2mysql.sh\n./sqlite2mysql.sh\nccnet-db.sql, seafile-db.sql, seahub-db.sql.mysql> create database ccnet_db character set = 'utf8';\nmysql> create database seafile_db character set = 'utf8';\nmysql> create database seahub_db character set = 'utf8';\nmysql> use ccnet_db;\nmysql> source ccnet-db.sql;\nmysql> use seafile_db;\nmysql> source seafile-db.sql;\nmysql> use seahub_db;\nmysql> source seahub-db.sql;\n[Database]\nENGINE=mysql\nHOST=127.0.0.1\nPORT = 3306\nUSER=root\nPASSWD=root\nDB=ccnet_db\nCONNECTION_CHARSET=utf8\n127.0.0.1, don't use localhost.seafile.conf with following lines:[database]\ntype=mysql\nhost=127.0.0.1\nport = 3306\nuser=root\npassword=root\ndb_name=seafile_db\nconnection_charset=utf8\nseahub_settings.py:DATABASES = {\n 'default': {\n 'ENGINE': 'django.db.backends.mysql',\n 'USER' : 'root',\n 'PASSWORD' : 'root',\n 'NAME' : 'seahub_db',\n 'HOST' : '127.0.0.1',\n 'PORT': '3306',\n # This is only needed for MySQL older than 5.5.5.\n # For MySQL newer than 5.5.5 INNODB is the default already.\n 'OPTIONS': {\n \"init_command\": \"SET storage_engine=INNODB\",\n }\n }\n}\nuser_notitfications table manually by:
"},{"location":"deploy/migrate_from_sqlite_to_mysql/#faq","title":"FAQ","text":""},{"location":"deploy/migrate_from_sqlite_to_mysql/#encountered-errno-150-foreign-key-constraint-is-incorrectly-formed","title":"Encountered use seahub_db;\ndelete from notifications_usernotification;\nerrno: 150 \"Foreign key constraint is incorrectly formed\"","text":"auth_user\nauth_group\nauth_permission\nauth_group_permissions\nauth_user_groups\nauth_user_user_permissions\n
"},{"location":"deploy/notification-server/","title":"Notification Server Overview","text":"post_office_emailtemplate\npost_office_email\npost_office_attachment\npost_office_attachment_emails\n
"},{"location":"deploy/notification-server/#how-to-configure-and-run","title":"How to configure and run","text":"# jwt_private_key are required.You should generate it manually.\n[notification]\nenabled = true\n# the ip of notification server. (Do not modify the host when using Nginx or Apache, as Nginx or Apache will proxy the requests to this address)\nhost = 127.0.0.1\n# the port of notification server\nport = 8083\n# the log level of notification server\n# You can set log_level to debug to print messages sent to clients.\nlog_level = info\n# jwt_private_key is used to generate jwt token and authenticate seafile server\njwt_private_key = M@O8VWUb81YvmtWLHGB2I_V7di5-@0p(MF*GrE!sIws23F\n# generate jwt_private_key\nopenssl rand -base64 32\nmap $http_upgrade $connection_upgrade {\ndefault upgrade;\n'' close;\n}\n\nserver {\n location /notification/ping {\n proxy_pass http://127.0.0.1:8083/ping;\n access_log /var/log/nginx/notif.access.log;\n error_log /var/log/nginx/notif.error.log;\n }\n\n location /notification {\n proxy_pass http://127.0.0.1:8083/;\n proxy_http_version 1.1;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection $connection_upgrade;\n access_log /var/log/nginx/notif.access.log;\n error_log /var/log/nginx/notif.error.log;\n }\n}\n ProxyPass /notification/ping http://127.0.0.1:8083/ping/\n ProxyPassReverse /notification/ping http://127.0.0.1:8083/ping/\n\n ProxyPass /notification ws://127.0.0.1:8083/\n ProxyPassReverse /notification ws://127.0.0.1:8083/\nThe configured ProxyPass and ProxyPassMatch rules are checked in the order of configuration. The first rule that matches wins.\nSo usually you should sort conflicting ProxyPass rules starting with the longest URLs first.\nOtherwise, later rules for longer URLS will be hidden by any earlier rule which uses a leading substring of the URL. 
Note that there is some relation with worker sharing.\n #\n # notification server\n #\n ProxyPass /notification/ping http://127.0.0.1:8083/ping/\n ProxyPassReverse /notification/ping http://127.0.0.1:8083/ping/\n\n ProxyPass /notification ws://127.0.0.1:8083/\n ProxyPassReverse /notification ws://127.0.0.1:8083/\n\n #\n # seafile fileserver\n #\n ProxyPass /seafhttp http://127.0.0.1:8082\n ProxyPassReverse /seafhttp http://127.0.0.1:8082\n RewriteRule ^/seafhttp - [QSA,L]\n\n #\n # seahub\n #\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPreserveHost On\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n
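If openssl is not at hand, the same kind of key used for jwt_private_key (32 random bytes, base64-encoded, as `openssl rand -base64 32` produces) can be generated with Python's standard library:

```python
import base64
import os

def generate_jwt_private_key() -> str:
    """Equivalent of `openssl rand -base64 32`: 32 random bytes,
    base64-encoded, usable as the jwt_private_key value."""
    return base64.b64encode(os.urandom(32)).decode("ascii")

print(generate_jwt_private_key())  # 44 characters of base64
```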
"},{"location":"deploy/notification-server/#checking-notification-server-status","title":"Checking notification server status","text":"./seafile.sh restart\nhttp://127.0.0.1:8083/ping from your browser, which will answer {\"ret\": \"pong\"}. If you have a proxy configured, you can access https://{server}/notification/ping from your browser instead.
"},{"location":"deploy/notification-server/#notification-server-in-seafile-cluster","title":"Notification Server in Seafile cluster","text":"Notification server is enabled on the remote server xxxx\n[notification]\nenabled = true\n# the ip of notification server.\nhost = 192.168.1.134\n# the port of notification server\nport = 8083\n# the log level of notification server\nlog_level = info\n# jwt_private_key is used to generate jwt token and authenticate seafile server\njwt_private_key = M@O8VWUb81YvmtWLHGB2I_V7di5-@0p(MF*GrE!sIws23F\n
/notification/ping requests to notification server via http protocol./notification to notification server.
"},{"location":"deploy/oauth/","title":"OAuth Authentication","text":""},{"location":"deploy/oauth/#oauth","title":"OAuth","text":"#/etc/haproxy/haproxy.cfg\n\n# Other existing haproxy configurations\n......\n\nfrontend seafile\n bind 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n acl notif_ping_request url_sub -i /notification/ping\n acl ws_requests url -i /notification\n acl hdr_connection_upgrade hdr(Connection) -i upgrade\n acl hdr_upgrade_websocket hdr(Upgrade) -i websocket\n use_backend ws_backend if hdr_connection_upgrade hdr_upgrade_websocket\n use_backend notif_ping_backend if notif_ping_request\n use_backend ws_backend if ws_requests\n default_backend backup_nodes\n\nbackend backup_nodes\n cookie SERVERID insert indirect nocache\n server seafileserver01 192.168.0.137:80\n\nbackend notif_ping_backend\n option forwardfor\n server ws 192.168.0.137:8083\n\nbackend ws_backend\n option forwardfor # This sets X-Forwarded-For\n server ws 192.168.0.137:8083\nENABLE_OAUTH = True\n\n# If create new user when he/she logs in Seafile for the first time, defalut `True`.\nOAUTH_CREATE_UNKNOWN_USER = True\n\n# If active new user when he/she logs in Seafile for the first time, defalut `True`.\nOAUTH_ACTIVATE_USER_AFTER_CREATION = True\n\n# Usually OAuth works through SSL layer. If your server is not parametrized to allow HTTPS, some method will raise an \"oauthlib.oauth2.rfc6749.errors.InsecureTransportError\". Set this to `True` to avoid this error.\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\n# Client id/secret generated by authorization server when you register your client application.\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\n\n# Callback url when user authentication succeeded. 
Note, the redirect url you input when you register your client application MUST be exactly the same as this value.\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\n# The following should NOT be changed if you are using Github as OAuth provider.\nOAUTH_PROVIDER_DOMAIN = 'github.com' \nOAUTH_PROVIDER = 'github.com'\n\nOAUTH_AUTHORIZATION_URL = 'https://github.com/login/oauth/authorize'\nOAUTH_TOKEN_URL = 'https://github.com/login/oauth/access_token'\nOAUTH_USER_INFO_URL = 'https://api.github.com/user'\nOAUTH_SCOPE = [\"user\",]\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # Please keep the 'email' option unchanged to be compatible with the login of users of version 11.0 and earlier.\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n \"uid\": (True, \"uid\"), # Seafile v11.0 + \n}\nOAUTH_PROVIDER_DOMAIN will be deprecated, and it can be replaced by OAUTH_PROVIDER. This variable is used in the database to identify third-party providers, either as a domain or as an easy-to-remember string less than 32 characters. OAUTH_ATTRIBUTE_MAP = {\n <:Attribute in the OAuth provider>: (<:Is required or not in Seafile?>, <:Attribute in Seafile >)\n}\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # it is deprecated\n \"uid / id / username\": (True, \"uid\") \n\n # extra info you want to update to Seafile\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"), \n}\nid stands for a unique identifier of the user in GitHub; this tells Seafile which attribute the remote resource server uses to identify its user. The value part True indicates whether this field is mandatory in Seafile.uid as the external unique identifier of the user. It stores uid in the table social_auth_usersocialauth and maps it to the internal unique identifier used in Seafile. Different OAuth systems have different attributes, which may be: id or uid or username, etc. And the id/email config id: (True, email) is deprecated. 
OAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"),\n \"uid\": (True, \"uid\") ,\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"), \n}\n\"id\": (True, \"email\").\"id\": (True, \"email\") item. Your configuration should be like:
"},{"location":"deploy/oauth/#sample-settings-for-google","title":"Sample settings for Google","text":"OAUTH_ATTRIBUTE_MAP = {\n \"uid\": (True, \"uid\") ,\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"), \n}\n
"},{"location":"deploy/oauth/#sample-settings-for-github","title":"Sample settings for Github","text":"ENABLE_OAUTH = True\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\n# The following shoud NOT be changed if you are using Google as OAuth provider.\nOAUTH_PROVIDER_DOMAIN = 'google.com'\nOAUTH_AUTHORIZATION_URL = 'https://accounts.google.com/o/oauth2/v2/auth'\nOAUTH_TOKEN_URL = 'https://www.googleapis.com/oauth2/v4/token'\nOAUTH_USER_INFO_URL = 'https://www.googleapis.com/oauth2/v1/userinfo'\nOAUTH_SCOPE = [\n \"openid\",\n \"https://www.googleapis.com/auth/userinfo.email\",\n \"https://www.googleapis.com/auth/userinfo.profile\",\n]\nOAUTH_ATTRIBUTE_MAP = {\n \"sub\": (True, \"uid\"),\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\nemail is not the unique identifier for an user, but id is in most cases, so we use id as settings example in our manual. As Seafile uses email to identify an unique user account for now, so we combine id and OAUTH_PROVIDER_DOMAIN, which is github.com in your case, to an email format string and then create this account if not exist. Change the setting as followings:
"},{"location":"deploy/oauth/#sample-settings-for-gitlab","title":"Sample settings for GitLab","text":"ENABLE_OAUTH = True\nOAUTH_ENABLE_INSECURE_TRANSPORT = True\n\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\nOAUTH_REDIRECT_URL = 'http{s}://example.com/oauth/callback/'\n\nOAUTH_PROVIDER_DOMAIN = 'github.com'\nOAUTH_AUTHORIZATION_URL = 'https://github.com/login/oauth/authorize'\nOAUTH_TOKEN_URL = 'https://github.com/login/oauth/access_token'\nOAUTH_USER_INFO_URL = 'https://api.github.com/user'\nOAUTH_SCOPE = [\"user\",]\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, 'uid'),\n \"email\": (False, \"contact_email\"),\n \"name\": (False, \"name\"),\n}\n
OAUTH_REDIRECT_URLopenid and read_user in the scopes list.
"},{"location":"deploy/oauth/#sample-settings-for-azure-cloud","title":"Sample settings for Azure Cloud","text":"ENABLE_OAUTH = True\nOAUTH_CLIENT_ID = \"your-client-id\"\nOAUTH_CLIENT_SECRET = \"your-client-secret\"\nOAUTH_REDIRECT_URL = \"https://your-seafile/oauth/callback/\"\n\nOAUTH_PROVIDER_DOMAIN = 'your-domain'\nOAUTH_AUTHORIZATION_URL = 'https://gitlab.your-domain/oauth/authorize'\nOAUTH_TOKEN_URL = 'https://gitlab.your-domain/oauth/token'\nOAUTH_USER_INFO_URL = 'https://gitlab.your-domain/api/v4/user'\nOAUTH_SCOPE = [\"openid\", \"read_user\"]\nOAUTH_ATTRIBUTE_MAP = {\n \"email\": (True, \"uid\"),\n \"name\": (False, \"name\")\n}\nid field returned from Azure Cloud's user info endpoint, so we use a special configuration for OAUTH_ATTRIBUTE_MAP setting (others are the same as Github/Google):OAUTH_ATTRIBUTE_MAP = {\n \"email\": (True, \"uid\"),\n \"name\": (False, \"name\")\n}\nseahub_settings.py.# Enable OCM\nENABLE_OCM = True\nOCM_PROVIDER_ID = '71687320-6219-47af-82f3-32012707a5ae' # the unique id of this server\nOCM_REMOTE_SERVERS = [\n {\n \"server_name\": \"dev\",\n \"server_url\": \"https://seafile-domain-1/\", # should end with '/'\n },\n {\n \"server_name\": \"download\",\n \"server_url\": \"https://seafile-domain-2/\", # should end with '/'\n },\n]\nseahub_settings.py.
"},{"location":"deploy/ocm/#usage","title":"Usage","text":""},{"location":"deploy/ocm/#share-library-to-other-server","title":"Share library to other server","text":"# Enable OCM\nENABLE_OCM_VIA_WEBDAV = True\nOCM_PROVIDER_ID = '71687320-6219-47af-82f3-32012707a5ae' # the unique id of this server\nOCM_REMOTE_SERVERS = [\n {\n \"server_name\": \"nextcloud\",\n \"server_url\": \"https://nextcloud-domain-1/\", # should end with '/'\n }\n]\nonlyoffice.yml provided by Seafile according to this document, or you can deploy it to a different machine according to OnlyOffice official document.
"},{"location":"deploy/only_office/#deployment-of-onlyoffice","title":"Deployment of OnlyOffice","text":"pwgen -s 40 1\nonlyoffice.ymlwget https://manual.seafile.com/12/docker/docker-compose/onlyoffice.yml\nonlyoffice.yml into COMPOSE_FILE list (i.e., COMPOSE_FILE='...,onlyoffice.yml'), and add the following configurations of onlyoffice in .env file.# OnlyOffice image\nONLYOFFICE_IMAGE=onlyoffice/documentserver:8.1.0.1\n\n# Persistent storage directory of OnlyOffice\nONLYOFFICE_VOLUME=/opt/onlyoffice\n\n# OnlyOffice document server port\nONLYOFFICE_PORT=6233\n\n# jwt secret, generated by `pwgen -s 40 1` \nONLYOFFICE_JWT_SECRET=<your jwt secret>\nseahub_settings.pyENABLE_ONLYOFFICE = True\nONLYOFFICE_APIJS_URL = 'https://seafile.example.com:6233/web-apps/apps/api/documents/api.js'\nONLYOFFICE_FILE_EXTENSION = ('doc', 'docx', 'ppt', 'pptx', 'xls', 'xlsx', 'odt', 'fodt', 'odp', 'fodp', 'ods', 'fods', 'csv', 'ppsx', 'pps')\nONLYOFFICE_JWT_SECRET = '<your jwt secret>'\nONLYOFFICE_PORT, and port in the term ONLYOFFICE_APIJS_URL in seahub_settings.py has been modified together.local-production-linux.json to force some settings.nano local-production-linux.json\n{\n \"services\": {\n \"CoAuthoring\": {\n \"autoAssembly\": {\n \"enable\": true,\n \"interval\": \"5m\"\n }\n }\n },\n \"FileConverter\": {\n \"converter\": {\n \"downloadAttemptMaxCount\": 3\n }\n }\n}\nonlyoffice.yml:service:\n ...\n onlyoffice:\n ...\n volumes:\n ...\n - <Your path to local-production-linux.json>:/etc/onlyoffice/documentserver/local-production-linux.json\n...\nSEAFILE_MYSQL_* in .env. If you need to specify another existing database, please modify it in onlyoffice.ymldocker compose up -d\ndocker exec -it seafile-mysql bash\nonlyoffice and add corresponding permissions for the seafile user
"},{"location":"deploy/only_office/#restart-seafile-docker-instance-and-test-that-onlyoffice-is-running","title":"Restart Seafile-docker instance and test that OnlyOffice is running","text":"create database if not exists onlyoffice charset utf8mb4;\nGRANT ALL PRIVILEGES ON `onlyoffice`.* to `seafile`@`%.%.%.%`;\ndocker-compose down\ndocker-compose up -d\nhttp{s}://{your Seafile server's domain or IP}:6233/welcome, you will get Document Server is running info at this page.docker logs -f seafile-onlyoffice, then open an office file. After the \"Download failed.\" error appears on the page, observe the logs for the following error:==> /var/log/onlyoffice/documentserver/converter/out.log <==\n...\nError: DNS lookup {local IP} (family:undefined, host:undefined) is not allowed. Because, It is a private IP address.\n...\nseahub_settings.py and then restart the service.
"},{"location":"deploy/only_office/#about-ssl","title":"About SSL","text":"ENABLE_ONLYOFFICE = True\nONLYOFFICE_APIJS_URL = 'http{s}://<Your OnlyOffice host url>/web-apps/apps/api/documents/api.js'\nONLYOFFICE_FILE_EXTENSION = ('doc', 'docx', 'ppt', 'pptx', 'xls', 'xlsx', 'odt', 'fodt', 'odp', 'fodp', 'ods', 'fods', 'csv', 'ppsx', 'pps')\nONLYOFFICE_JWT_SECRET = '<your jwt secret>'\nonlyoffice.yml file in this document, SSL is primarily handled by the Caddy. If the OnlyOffice document server and Seafile server are not on the same machine, please refer to the official document to configure SSL for OnlyOffice.
http(s)://SEAFILE_SERVER_URL/outlook/http(s)://SEAFILE_SERVER_URL/accounts/login/ including a redirect request to /outlook/ following a successful authentication (e.g., https://demo.seafile.com/accounts/login/?next=/jwt-sso/?page=/outlook/)# CentOS/RedHat\n$ sudo yum install -y php-fpm php-curl\n$ php --version\n\n# Debian/Ubuntu\n$ sudo apt install -y php-fpm php-curl\n$ php --version\n/var/www:
"},{"location":"deploy/outlook_addin_config/#configuring-seahub","title":"Configuring Seahub","text":"$ mkdir -p /var/www/outlook-sso\n$ cd /var/www/outlook-sso\n$ composer require firebase/php-jwt guzzlehttp/guzzle\nseahub_settings.py using a text editor:ENABLE_JWT_SSO = True\nJWT_SSO_SECRET_KEY = 'SHARED_SECRET'\nENABLE_SYS_ADMIN_GENERATE_USER_AUTH_TOKEN = True\nlocation /outlook {\n alias /var/www/outlook-sso/public;\n index index.php;\n location ~ \\.php$ {\n fastcgi_split_path_info ^(.+\\.php)(/.+)$;\n fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;\n fastcgi_param SCRIPT_FILENAME $request_filename;\n fastcgi_index index.php;\n include fastcgi_params;\n }\n}\n
"},{"location":"deploy/outlook_addin_config/#deploying-the-php-script","title":"Deploying the PHP script","text":"$ nginx -t\n$ nginx -s reload\n$ cd /var/www/outlook-sso\n$ nano config.php\nconfig.php:<?php\n\n# general settings\n$seafile_url = 'SEAFILE_SERVER_URL';\n$jwt_shared_secret = 'SHARED_SECRET';\n\n# Option 1: provide credentials of a seafile admin user\n$seafile_admin_account = [\n 'username' => '',\n 'password' => '',\n];\n\n# Option 2: provide the api-token of a seafile admin user\n$seafile_admin_token = '';\n\n?>\nindex.php and copy & paste the PHP script:mkdir /var/www/outlook-sso/public\n$ cd /var/www/outlook-sso/public\n$ nano index.php\n<?php\n/** IMPORTANT: there is no need to change anything in this file ! **/\n\nrequire_once __DIR__ . '/../vendor/autoload.php';\nrequire_once __DIR__ . '/../config.php';\n\nif(!empty($_GET['jwt-token'])){\n try {\n $decoded = Firebase\\JWT\\JWT::decode($_GET['jwt-token'], new Firebase\\JWT\\Key($jwt_shared_secret, 'HS256'));\n }\n catch (Exception $e){\n echo json_encode([\"error\" => \"wrong JWT-Token\"]);\n die();\n }\n\n try {\n // init connetion to seafile api\n $client = new GuzzleHttp\\Client(['base_uri' => $seafile_url]);\n\n // get admin api-token with his credentials (if not set)\n if(empty($seafile_admin_token)){\n $request = $client->request('POST', '/api2/auth-token/', ['form_params' => $seafile_admin_account]);\n $response = json_decode($request->getBody());\n $seafile_admin_token = $response->token;\n }\n\n // get api-token of the user\n $request = $client->request('POST', '/api/v2.1/admin/generate-user-auth-token/', [\n 'json' => ['email' => $decoded->email],\n 'headers' => ['Authorization' => 'Token '. 
$seafile_admin_token]\n ]);\n $response = json_decode($request->getBody());\n\n // create the output for the outlook plugin (json like response)\n echo json_encode([\n 'exp' => $decoded->exp,\n 'email' => $decoded->email,\n 'name' => $decoded->name,\n 'token' => $response->token,\n ]);\n } catch (GuzzleHttp\\Exception\\ClientException $e){\n echo $e->getResponse()->getBody();\n }\n}\nelse{ // no jwt-token. therefore redirect to the login page of seafile\n header(\"Location: \". $seafile_url .\"/accounts/login/?next=/jwt-sso/?page=/outlook\");\n} ?>\n/var/www/outlook-sso/ should now look as follows:$ tree -L 2 /var/www/outlook-sso\n/var/www/outlook-sso/\n\u251c\u2500\u2500 composer.json\n\u251c\u2500\u2500 composer.lock\n\u251c\u2500\u2500 config.php\n\u251c\u2500\u2500 public\n| \u2514\u2500\u2500 index.php\n\u2514\u2500\u2500 vendor\n \u251c\u2500\u2500 autoload.php\n \u251c\u2500\u2500 composer\n \u2514\u2500\u2500 firebase\nconf/seahub_settings.py to enable this feature.ENABLE_REMOTE_USER_AUTHENTICATION = True\n\n# Optional, HTTP header, which is configured in your web server conf file,\n# used for Seafile to get user's unique id, default value is 'HTTP_REMOTE_USER'.\nREMOTE_USER_HEADER = 'HTTP_REMOTE_USER'\n\n# Optional, when the value of HTTP_REMOTE_USER is not a valid email address,\n# Seafile will build an email-like unique id from the value of 'REMOTE_USER_HEADER'\n# and this domain, e.g. 
user1@example.com.\nREMOTE_USER_DOMAIN = 'example.com'\n\n# Optional, whether to create new user in Seafile system, default value is True.\n# If this setting is disabled, users that don't preexist in the Seafile DB cannot log in.\n# The admin has to first import the users from external systems like LDAP.\nREMOTE_USER_CREATE_UNKNOWN_USER = True\n\n# Optional, whether to activate new user in Seafile system, default value is True.\n# If this setting is disabled, the user will be unable to log in by default;\n# the administrator needs to manually activate this user.\nREMOTE_USER_ACTIVATE_USER_AFTER_CREATION = True\n\n# Optional, map user attribute in HTTP header and Seafile's user attribute.\nREMOTE_USER_ATTRIBUTE_MAP = {\n 'HTTP_DISPLAYNAME': 'name',\n 'HTTP_MAIL': 'contact_email',\n\n # for user info\n \"HTTP_GIVENNAME\": 'givenname',\n \"HTTP_SN\": 'surname',\n \"HTTP_ORGANIZATION\": 'institution',\n\n # for user role\n 'HTTP_Shibboleth-affiliation': 'affiliation',\n}\n\n# Map affiliation to user role. Though the config name is SHIBBOLETH_AFFILIATION_ROLE_MAP,\n# it is not restricted to Shibboleth\nSHIBBOLETH_AFFILIATION_ROLE_MAP = {\n 'employee@uni-mainz.de': 'staff',\n 'member@uni-mainz.de': 'staff',\n 'student@uni-mainz.de': 'student',\n 'employee@hu-berlin.de': 'guest',\n 'patterns': (\n ('*@hu-berlin.de', 'guest1'),\n ('*@*.de', 'guest2'),\n ('*', 'guest'),\n ),\n}\nhttps://your-seafile-domain/sso. Only this URL needs to be configured under Shibboleth protection. All other URLs don't go through the Shibboleth module. The overall workflow for a user to log in with Shibboleth is as follows:
https://your-seafile-domain/sso.https://your-seafile-domain/sso.HTTP_REMOTE_USER header) and brings the user to her/his home page.https://your-seafile-domain/sso needs to be directed to Apache.
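The REMOTE_USER_DOMAIN behavior described earlier (building an email-like unique id when the header value is not already an email address) can be sketched as follows; the "@" check is a deliberate simplification and the function is hypothetical, not Seafile's real logic:

```python
# Sketch: when the remote-user header value is not a valid email address,
# build an email-like unique id by appending the configured domain
# (REMOTE_USER_DOMAIN). The "already an email" check is crude on purpose.
def build_unique_id(header_value, domain="example.com"):
    if "@" in header_value:
        return header_value
    return "%s@%s" % (header_value, domain)

print(build_unique_id("user1"))          # -> user1@example.com
print(build_unique_id("jane@corp.org"))  # -> jane@corp.org
```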
"},{"location":"deploy/shibboleth_authentication/#install-and-configure-shibboleth-service-provider","title":"Install and Configure Shibboleth Service Provider","text":"
"},{"location":"deploy/shibboleth_authentication/#install-and-configure-shibboleth","title":"Install and Configure Shibboleth","text":"<IfModule mod_ssl.c>\n <VirtualHost _default_:443>\n ServerName your-seafile-domain\n DocumentRoot /var/www\n Alias /media /opt/seafile/seafile-server-latest/seahub/media\n\n ErrorLog ${APACHE_LOG_DIR}/seahub.error.log\n CustomLog ${APACHE_LOG_DIR}/seahub.access.log combined\n\n SSLEngine on\n SSLCertificateFile /path/to/ssl-cert.pem\n SSLCertificateKeyFile /path/to/ssl-key.pem\n\n <Location /Shibboleth.sso>\n SetHandler shib\n AuthType shibboleth\n ShibRequestSetting requireSession 1\n Require valid-user\n </Location>\n\n <Location /sso>\n SetHandler shib\n AuthType shibboleth\n ShibUseHeaders On\n ShibRequestSetting requireSession 1\n Require valid-user\n </Location>\n\n RewriteEngine On\n <Location /media>\n Require all granted\n </Location>\n\n # seafile fileserver\n ProxyPass /seafhttp http://127.0.0.1:8082\n ProxyPassReverse /seafhttp http://127.0.0.1:8082\n RewriteRule ^/seafhttp - [QSA,L]\n\n # seahub\n SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n ProxyPass / http://127.0.0.1:8000/\n ProxyPassReverse / http://127.0.0.1:8000/\n\n # for http\n # RequestHeader set REMOTE_USER %{REMOTE_USER}e\n # for https\n RequestHeader set REMOTE_USER %{REMOTE_USER}s\n </VirtualHost>\n</IfModule>\n/etc/shibboleth/shibboleth2.xml and change some property. After you have done all the followings, don't forget to restart Shibboleth(SP)ApplicationDefaults element","text":"entityID and REMOTE_USER property:<!-- The ApplicationDefaults element is where most of Shibboleth's SAML bits are defined. -->\n<ApplicationDefaults entityID=\"https://your-seafile-domain/sso\"\n REMOTE_USER=\"mail\"\n cipherSuites=\"DEFAULT:!EXP:!LOW:!aNULL:!eNULL:!DES:!IDEA:!SEED:!RC4:!3DES:!kRSA:!SSLv2:!SSLv3:!TLSv1:!TLSv1.1\">\nREMOTE_USER environment variable. 
So you should modify your SP's shibboleth2.xml config file, so that Shibboleth translates your desired attribute into REMOTE_USER environment variable.eppn, and mail. eppn stands for \"Edu Person Principal Name\". It is usually the UserPrincipalName attribute in Active Directory. It's not necessarily a valid email address. mail is the user's email address. You should set REMOTE_USER to either one of these attributes.SSO element","text":"entityID property:
"},{"location":"deploy/shibboleth_authentication/#metadataprovider-element","title":"<!--\nConfigures SSO for a default IdP. To properly allow for >1 IdP, remove\nentityID property and adjust discoveryURL to point to discovery service.\nYou can also override entityID on /Login query string, or in RequestMap/htaccess.\n-->\n<SSO entityID=\"https://your-IdP-domain\">\n <!--discoveryProtocol=\"SAMLDS\" discoveryURL=\"https://wayf.ukfederation.org.uk/DS\"-->\n SAML2\n</SSO>\nMetadataProvider element","text":"url and backingFilePath property:
"},{"location":"deploy/shibboleth_authentication/#attribute-mapxml","title":"attribute-map.xml","text":"<!-- Example of remotely supplied batch of signed metadata. -->\n<MetadataProvider type=\"XML\" validate=\"true\"\n url=\"http://your-IdP-metadata-url\"\n backingFilePath=\"your-IdP-metadata.xml\" maxRefreshDelay=\"7200\">\n <MetadataFilter type=\"RequireValidUntil\" maxValidityInterval=\"2419200\"/>\n <MetadataFilter type=\"Signature\" certificate=\"fedsigner.pem\" verifyBackup=\"false\"/>\n/etc/shibboleth/attribute-map.xml and change some property. After you have done all the followings, don't forget to restart Shibboleth(SP)Attribute element","text":"
"},{"location":"deploy/shibboleth_authentication/#upload-shibbolethsps-metadata","title":"Upload Shibboleth(SP)'s metadata","text":"<!-- Older LDAP-defined attributes (SAML 2.0 names followed by SAML 1 names)... -->\n<Attribute name=\"urn:oid:2.16.840.1.113730.3.1.241\" id=\"displayName\"/>\n<Attribute name=\"urn:oid:0.9.2342.19200300.100.1.3\" id=\"mail\"/>\n\n<Attribute name=\"urn:mace:dir:attribute-def:displayName\" id=\"displayName\"/>\n<Attribute name=\"urn:mace:dir:attribute-def:mail\" id=\"mail\"/>\nENABLE_SHIB_LOGIN = True\nSHIBBOLETH_USER_HEADER = 'HTTP_REMOTE_USER'\n# basic user attributes\nSHIBBOLETH_ATTRIBUTE_MAP = {\n \"HTTP_DISPLAYNAME\": (False, \"display_name\"),\n \"HTTP_MAIL\": (False, \"contact_email\"),\n}\nEXTRA_MIDDLEWARE = (\n 'shibboleth.middleware.ShibbolethRemoteUserMiddleware',\n)\nEXTRA_AUTHENTICATION_BACKENDS = (\n 'shibboleth.backends.ShibbolethRemoteUserBackend',\n)\n
SHIBBOLETH_ATTRIBUTE_MAP = {\n \"HTTP_GIVENNAME\": (False, \"givenname\"),\n \"HTTP_SN\": (False, \"surname\"),\n \"HTTP_MAIL\": (False, \"contact_email\"),\n \"HTTP_ORGANIZATION\": (False, \"institution\"),\n}\nSHIB_ACTIVATE_AFTER_CREATION (defaults to True) which control the user status after shibboleth connection. If this option set to False, user will be inactive after connection, and system admins will be notified by email to activate that account.employee@uni-mainz.de;member@uni-mainz.de;faculty@uni-mainz.de;staff@uni-mainz.de.SHIBBOLETH_ATTRIBUTE_MAP above and add Shibboleth-affiliation field, you may need to change Shibboleth-affiliation according to your Shibboleth SP attributes.SHIBBOLETH_ATTRIBUTE_MAP = {\n \"HTTP_GIVENNAME\": (False, \"givenname\"),\n \"HTTP_SN\": (False, \"surname\"),\n \"HTTP_MAIL\": (False, \"contact_email\"),\n \"HTTP_ORGANIZATION\": (False, \"institution\"),\n \"HTTP_Shibboleth-affiliation\": (False, \"affiliation\"),\n}\nSHIBBOLETH_AFFILIATION_ROLE_MAP = {\n 'employee@uni-mainz.de': 'staff',\n 'member@uni-mainz.de': 'staff',\n 'student@uni-mainz.de': 'student',\n 'employee@hu-berlin.de': 'guest',\n 'patterns': (\n ('*@hu-berlin.de', 'guest1'),\n ('*@*.de', 'guest2'),\n ('*', 'guest'),\n ),\n}\n./seahub.sh restart), you can then test the shibboleth login workflow.seahub_settings.py","text":"
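The role lookup implied by SHIBBOLETH_AFFILIATION_ROLE_MAP above (exact affiliation keys first, then the wildcard `patterns` tuples in order) could be sketched like this; the resolution order is our reading of the config format, not a copy of Seahub's code:

```python
from fnmatch import fnmatch

SHIBBOLETH_AFFILIATION_ROLE_MAP = {
    'employee@uni-mainz.de': 'staff',
    'member@uni-mainz.de': 'staff',
    'student@uni-mainz.de': 'student',
    'employee@hu-berlin.de': 'guest',
    'patterns': (
        ('*@hu-berlin.de', 'guest1'),
        ('*@*.de', 'guest2'),
        ('*', 'guest'),
    ),
}

def role_for_affiliation(affiliation, role_map):
    # An exact key wins; otherwise the first matching wildcard pattern applies.
    if affiliation != 'patterns' and affiliation in role_map:
        return role_map[affiliation]
    for pattern, role in role_map.get('patterns', ()):
        if fnmatch(affiliation, pattern):
            return role
    return None

print(role_for_affiliation('student@uni-mainz.de', SHIBBOLETH_AFFILIATION_ROLE_MAP))   # -> student
print(role_for_affiliation('someone@hu-berlin.de', SHIBBOLETH_AFFILIATION_ROLE_MAP))   # -> guest1
```

Because patterns are tried in order, the catch-all `('*', 'guest')` entry should come last.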
"},{"location":"deploy/shibboleth_authentication/#change-seafiles-code","title":"Change Seafile's code","text":"DEBUG = True\nseafile-server-latest/seahub/thirdpart/shibboleth/middleware.py assert False\nif not username:\n assert False\n#Locate the remote user header.\n# import pprint; pprint.pprint(request.META)\ntry:\n username = request.META[SHIB_USER_HEADER]\nexcept KeyError:\n assert False\n # If specified header doesn't exist then return (leaving\n # request.user set to AnonymousUser by the\n # AuthenticationMiddleware).\n return\n\nif not username:\n assert False\n\np_id = ccnet_api.get_primary_id(username)\nif p_id is not None:\n username = p_id\n
"},{"location":"deploy/start_seafile_at_system_bootup/","title":"Start Seafile at System Bootup","text":""},{"location":"deploy/start_seafile_at_system_bootup/#for-systems-running-systemd-and-python-virtual-environments","title":"For systems running systemd and python virtual environments","text":"
sudo vim /opt/seafile/run_with_venv.sh\n#!/bin/bash\n# Activate the python virtual environment (venv) before starting one of the seafile scripts\n\ndir_name=\"$(dirname $0)\"\nsource \"${dir_name}/python-venv/bin/activate\"\nscript=\"$1\"\nshift 1\n\necho \"${dir_name}/seafile-server-latest/${script}\" \"$@\"\n\"${dir_name}/seafile-server-latest/${script}\" \"$@\"\n
"},{"location":"deploy/start_seafile_at_system_bootup/#seafile-component","title":"Seafile component","text":"sudo chmod 755 /opt/seafile/run_with_venv.sh\nsudo vim /etc/systemd/system/seafile.service\n
"},{"location":"deploy/start_seafile_at_system_bootup/#seahub-component","title":"Seahub component","text":"[Unit]\nDescription=Seafile\n# add mysql.service or postgresql.service depending on your database to the line below\nAfter=network.target\n\n[Service]\nType=forking\nExecStart=bash ${seafile_dir}/run_with_venv.sh seafile.sh start\nExecStop=bash ${seafile_dir}/seafile-server-latest/seafile.sh stop\nLimitNOFILE=infinity\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\nsudo vim /etc/systemd/system/seahub.service\n
"},{"location":"deploy/start_seafile_at_system_bootup/#for-systems-running-systemd-without-python-virtual-environment","title":"For systems running systemd without python virtual environment","text":"[Unit]\nDescription=Seafile hub\nAfter=network.target seafile.service\n\n[Service]\nType=forking\nExecStart=bash ${seafile_dir}/run_with_venv.sh seahub.sh start\nExecStop=bash ${seafile_dir}/seafile-server-latest/seahub.sh stop\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n
sudo vim /etc/systemd/system/seafile.service\n
"},{"location":"deploy/start_seafile_at_system_bootup/#seahub-component_1","title":"Seahub component","text":"[Unit]\nDescription=Seafile\n# add mysql.service or postgresql.service depending on your database to the line below\nAfter=network.target\n\n[Service]\nType=forking\nExecStart=${seafile_dir}/seafile-server-latest/seafile.sh start\nExecStop=${seafile_dir}/seafile-server-latest/seafile.sh stop\nLimitNOFILE=infinity\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\nsudo vim /etc/systemd/system/seahub.service\n
"},{"location":"deploy/start_seafile_at_system_bootup/#seafile-cli-client-optional","title":"Seafile cli client (optional)","text":"[Unit]\nDescription=Seafile hub\nAfter=network.target seafile.service\n\n[Service]\nType=forking\nExecStart=${seafile_dir}/seafile-server-latest/seahub.sh start\nExecStop=${seafile_dir}/seafile-server-latest/seahub.sh stop\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\nsudo vim /etc/systemd/system/seafile-client.service\n
"},{"location":"deploy/start_seafile_at_system_bootup/#enable-service-start-on-system-boot","title":"Enable service start on system boot","text":"[Unit]\nDescription=Seafile client\n# Uncomment the next line you are running seafile client on the same computer as server\n# After=seafile.service\n# Or the next one in other case\n# After=network.target\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/seaf-cli start\nExecStop=/usr/bin/seaf-cli stop\nRemainAfterExit=yes\nUser=seafile\nGroup=seafile\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"deploy/using_fuse/","title":"Seafile","text":""},{"location":"deploy/using_fuse/#using-fuse","title":"Using Fuse","text":"sudo systemctl enable seafile.service\nsudo systemctl enable seahub.service\nsudo systemctl enable seafile-client.service # optional\nSeaf-fuse is an implementation of the [http://fuse.sourceforge.net FUSE] virtual filesystem. In a word, it mounts all the seafile files to a folder (which is called the '''mount point'''), so that you can access all the files managed by seafile server, just as you access a normal folder on your server./data/seafile-fuse.
"},{"location":"deploy/using_fuse/#start-seaf-fuse-with-the-script","title":"Start seaf-fuse with the script","text":"mkdir -p /data/seafile-fuse\n./seafile.sh start.
"},{"location":"deploy/using_fuse/#stop-seaf-fuse","title":"Stop seaf-fuse","text":"./seaf-fuse.sh start /data/seafile-fuse\n
"},{"location":"deploy/using_fuse/#contents-of-the-mounted-folder","title":"Contents of the mounted folder","text":""},{"location":"deploy/using_fuse/#the-top-level-folder","title":"The top level folder","text":"./seaf-fuse.sh stop\n/data/seafile-fuse.$ ls -lhp /data/seafile-fuse\n\ndrwxr-xr-x 2 root root 4.0K Jan 1 1970 abc@abc.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 1970 foo@foo.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 1970 plus@plus.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 1970 sharp@sharp.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 1970 test@test.com/\n
"},{"location":"deploy/using_fuse/#the-folder-for-each-user","title":"The folder for each user","text":"$ ls -lhp /data/seafile-fuse/abc@abc.com\n\ndrwxr-xr-x 2 root root 924 Jan 1 1970 5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\ndrwxr-xr-x 2 root root 1.6K Jan 1 1970 a09ab9fc-7bd0-49f1-929d-6abeb8491397_My Notes/\n
"},{"location":"deploy/using_fuse/#if-you-get-a-permission-denied-error","title":"If you get a \"Permission denied\" error","text":"$ ls -lhp /data/seafile-fuse/abc@abc.com/5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\n\n-rw-r--r-- 1 root root 501K Jan 1 1970 image.png\n-rw-r--r-- 1 root root 501K Jan 1 1970 sample.jpng\n./seaf-fuse.sh start, most likely you are not in the \"fuse group\". You should:
sudo usermod -a -G fuse <your-user-name>\n
"},{"location":"deploy/using_logrotate/","title":"Set up logrotate for server","text":""},{"location":"deploy/using_logrotate/#how-it-works","title":"How it works","text":"./seaf-fuse.sh start <path> again.SIGUR1 signal./etc/logrotate.d//opt/seafile/logs/seafile.log and your seaf-server's pidfile is setup to /opt/seafile/pids/seaf-server.pid:/opt/seafile/logs/seafile.log\n/opt/seafile/logs/seahub.log\n/opt/seafile/logs/seafdav.log\n/opt/seafile/logs/fileserver-access.log\n/opt/seafile/logs/fileserver-error.log\n/opt/seafile/logs/fileserver.log\n/opt/seafile/logs/file_updates_sender.log\n/opt/seafile/logs/repo_old_file_auto_del_scan.log\n/opt/seafile/logs/seahub_email_sender.log\n/opt/seafile/logs/index.log\n{\n daily\n missingok\n rotate 7\n # compress\n # delaycompress\n dateext\n dateformat .%Y-%m-%d\n notifempty\n # create 644 root root\n sharedscripts\n postrotate\n if [ -f /opt/seafile/pids/seaf-server.pid ]; then\n kill -USR1 `cat /opt/seafile/pids/seaf-server.pid`\n fi\n\n if [ -f /opt/seafile/pids/fileserver.pid ]; then\n kill -USR1 `cat /opt/seafile/pids/fileserver.pid`\n fi\n\n if [ -f /opt/seafile/pids/seahub.pid ]; then\n kill -HUP `cat /opt/seafile/pids/seahub.pid`\n fi\n\n if [ -f /opt/seafile/pids/seafdav.pid ]; then\n kill -HUP `cat /opt/seafile/pids/seafdav.pid`\n fi\n\n find /opt/seafile/logs/ -mtime +7 -name \"*.log*\" -exec rm -f {} \\;\n endscript\n}\n/etc/logrotate.d/seafile.
# Debian 10\nsudo apt-get update\nsudo apt-get install python3 python3-setuptools python3-pip default-libmysqlclient-dev -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.4.3 \\\n django-pylibmc django-simple-captcha python3-ldap mysqlclient\n# Ubuntu 18.04\nsudo apt-get update\nsudo apt-get install python3 python3-setuptools python3-pip -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.4.3 \\\n django-pylibmc django-simple-captcha python3-ldap\n# Ubuntu 20.04\nsudo apt-get update\nsudo apt-get install python3 python3-setuptools python3-pip libmysqlclient-dev memcached libmemcached-dev -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.4.3 \\\n django-pylibmc django-simple-captcha python3-ldap mysqlclient\n# CentOS 8\nsudo yum install python3 python3-setuptools python3-pip python3-devel mysql-devel gcc -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.4.3 \\\n django-pylibmc django-simple-captcha python3-ldap mysqlclient\n# Ubuntu 20.04 (almost the same for Ubuntu 18.04 and Debian 10)\nsudo apt-get update\nsudo apt-get install -y python3 python3-setuptools python3-pip libmysqlclient-dev\nsudo apt-get install -y memcached libmemcached-dev\n\nsudo pip3 install --timeout=3600 django==3.2.* Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.4.3 \\\n django-pylibmc django-simple-captcha python3-ldap mysqlclient pycryptodome==3.12.0 cffi==1.14.0 lxml\n# Ubuntu 22.04 (almost the same for Ubuntu 20.04 and Debian 11, Debian 10)\nsudo apt-get update\nsudo apt-get install -y python3 python3-setuptools python3-pip libmysqlclient-dev\nsudo apt-get install -y memcached libmemcached-dev\n\nsudo pip3 install --timeout=3600 django==3.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==1.4.44 \\\n psd-tools django-pylibmc django_simple_captcha==0.5.20 
djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml\n# Ubuntu 22.04 (almost the same for Ubuntu 20.04 and Debian 11, Debian 10)\nsudo apt-get update\nsudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmysqlclient-dev ldap-utils libldap2-dev\nsudo apt-get install -y memcached libmemcached-dev\n\nsudo pip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \\\n psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml python-ldap==3.4.3\n# Debian 12\nsudo apt-get update\nsudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmariadb-dev-compat ldap-utils libldap2-dev libsasl2-dev python3.11-venv\nsudo apt-get install -y memcached libmemcached-dev\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the virtual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* pymysql pillow==10.0.* pylibmc captcha==0.4 markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 psd-tools django-pylibmc django_simple_captcha==0.5.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml python-ldap==3.4.3\n
"},{"location":"deploy/using_mysql/#creating-the-program-directory","title":"Creating the program directory","text":"# Ubuntu 24.04\nsudo apt-get update\nsudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmysqlclient-dev ldap-utils libldap2-dev python3.12-venv\nsudo apt-get install -y memcached libmemcached-dev\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the vitual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \\\n psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.16.0 lxml python-ldap==3.4.3\n/opt/seafile. Create this directory and change into it:sudo mkdir /opt/seafile\ncd /opt/seafile\n/opt/seafile is assumed for the rest of this manual. If you decide to put Seafile in another directory, modify the commands accordingly.sudo adduser seafile\nsudo chown -R seafile: /opt/seafile\n
"},{"location":"deploy/using_mysql/#downloading-the-install-package","title":"Downloading the install package","text":"su seafile\ntar xf seafile-server_8.0.4_x86-64.tar.gz\n
"},{"location":"deploy/using_mysql/#setting-up-seafile-ce","title":"Setting up Seafile CE","text":"$ tree -L 2\n.\n\u251c\u2500\u2500 seafile-server-8.0.4\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 check_init_admin.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 reset-admin.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 runtime\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fsck.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fuse.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-gc.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sql\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 upgrade\n\u2514\u2500\u2500 seafile-server_8.0.4_x86-64.tar.gz\n
# For installations using python virtual environment, activate it if it isn't already active\nsource python-venv/bin/activate\n\ncd seafile-server-8.0.4\n./setup-seafile-mysql.sh\n$ tree /opt/seafile -L 2\nseafile\n\u251c\u2500\u2500 ccnet\n\u251c\u2500\u2500 conf\n\u2502 \u2514\u2500\u2500 ccnet.conf\n\u2502 \u2514\u2500\u2500 gunicorn.conf.py\n\u2502 \u2514\u2500\u2500 seafdav.conf\n\u2502 \u2514\u2500\u2500 seafile.conf\n\u2502 \u2514\u2500\u2500 seahub_settings.py\n\u251c\u2500\u2500 seafile-data\n\u2502 \u2514\u2500\u2500 library-template\n\u251c\u2500\u2500 seafile-server-8.0.4\n\u2502 \u2514\u2500\u2500 check_init_admin.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 reset-admin.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 runtime\n\u2502 \u2514\u2500\u2500 seaf-fsck.sh\n\u2502 \u2514\u2500\u2500 seaf-gc.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub.sh\n\u2502 \u2514\u2500\u2500 setup-seafile-mysql.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.sh\n\u2502 \u2514\u2500\u2500 sql\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 upgrade\n\u251c\u2500\u2500 seafile-server-latest -> seafile-server-8.0.6\n\u251c\u2500\u2500 seahub-data\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 avatars\nseafile-server-latest is a symbolic link to the current Seafile Server folder. When later you upgrade to a new version, the upgrade scripts update this link to point to the latest Seafile Server folder.ccnet_db / seafile_db / seahub_db for ccnet/seafile/seahub respectively, and a MySQL user \"seafile\" to access these databases run the following SQL queries:
"},{"location":"deploy/using_mysql/#setup-memory-cache","title":"Setup Memory Cache","text":"create database `ccnet_db` character set = 'utf8';\ncreate database `seafile_db` character set = 'utf8';\ncreate database `seahub_db` character set = 'utf8';\n\ncreate user 'seafile'@'localhost' identified by 'seafile';\n\nGRANT ALL PRIVILEGES ON `ccnet_db`.* to `seafile`@localhost;\nGRANT ALL PRIVILEGES ON `seafile_db`.* to `seafile`@localhost;\nGRANT ALL PRIVILEGES ON `seahub_db`.* to `seafile`@localhost;\n# on Debian/Ubuntu 18.04+\napt-get install memcached libmemcached-dev -y\npip3 install --timeout=3600 pylibmc django-pylibmc\n\nsystemctl enable --now memcached\nseahub_settings.py.
"},{"location":"deploy/using_mysql/#use-redis","title":"Use Redis","text":"CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '127.0.0.1:11211',\n },\n}\nseahub_settings.py./opt/seafile/conf:
"},{"location":"deploy/using_mysql/#starting-seafile-server","title":"Starting Seafile Server","text":"SERVICE_URL (i.e., SERVICE_URL = 'http://1.2.3.4:8000/')./opt/seafile/seafile-server-latest:# For installations using python virtual environment, activate it if it isn't already active\nsource python-venv/bin/activate\n\n./seafile.sh start # starts seaf-server\n./seahub.sh start # starts seahub\npgrep to check if seafile/seahub processes are still running:pgrep -f seafile-controller # checks seafile processes\npgrep -f \"seahub\" # checks seahub process\npkill to kill the processes:
"},{"location":"deploy/using_mysql/#stopping-and-restarting-seafile-and-seahub","title":"Stopping and Restarting Seafile and Seahub","text":""},{"location":"deploy/using_mysql/#stopping","title":"Stopping","text":"pkill -f seafile-controller\npkill -f \"seahub\"\n
"},{"location":"deploy/using_mysql/#restarting","title":"Restarting","text":"./seahub.sh stop # stops seahub\n./seafile.sh stop # stops seaf-server\n
"},{"location":"deploy/using_mysql/#enabling-https","title":"Enabling HTTPS","text":"# For installations using python virtual environment, activate it if it isn't already active\nsource python-venv/bin/activate\n\n./seafile.sh restart\n./seahub.sh restart\n
"},{"location":"deploy/using_syslog/","title":"Using syslog","text":""},{"location":"deploy/using_syslog/#configure-seafile-to-use-syslog","title":"Configure Seafile to Use Syslog","text":"general section in seafile.conf:[general]\nenable_syslog = true\n/var/log/syslog:May 10 23:45:19 ubuntu seafile-controller[16385]: seafile-controller.c(154): starting ccnet-server ...\nMay 10 23:45:19 ubuntu seafile-controller[16385]: seafile-controller.c(73): spawn_process: ccnet-server -F /home/plt/haiwen/conf -c /home/plt/haiwen/ccnet -f /home/plt/haiwen/logs/ccnet.log -d -P /home/plt/haiwen/pids/ccnet.pid\n
"},{"location":"deploy/using_syslog/#configure-syslog-for-seafevents-professional-edition-only","title":"Configure Syslog For Seafevents (Professional Edition only)","text":"May 12 01:00:51 ubuntu seaf-server[21552]: ../common/mq-mgr.c(60): [mq client] mq cilent is started\nMay 12 01:00:51 ubuntu seaf-server[21552]: ../common/mq-mgr.c(106): [mq mgr] publish to hearbeat mq: seaf_server.heartbeat\nseafevents.conf:[Syslog]\nenabled = true\n/var/log/syslog
"},{"location":"deploy/using_syslog/#configure-syslog-for-seahub","title":"Configure Syslog For Seahub","text":"May 12 01:00:52 ubuntu seafevents[21542]: [seafevents] database: mysql, name: seahub-pro\nMay 12 01:00:52 ubuntu seafevents[21542]: seafes enabled: True\nMay 12 01:00:52 ubuntu seafevents[21542]: seafes dir: /home/plt/pro-haiwen/seafile-pro-server-5.1.4/pro/python/seafes\nseahub_settings.py:
"},{"location":"deploy/video_thumbnails/","title":"Video thumbnails","text":""},{"location":"deploy/video_thumbnails/#install-ffmpeg-package","title":"Install ffmpeg package","text":"LOGGING = {\n 'version': 1,\n 'disable_existing_loggers': True,\n 'formatters': {\n 'verbose': {\n 'format': '%(process)-5d %(thread)d %(name)-50s %(levelname)-8s %(message)s'\n },\n 'standard': {\n 'format': '%(asctime)s [%(levelname)s] %(name)s:%(lineno)s %(funcName)s %(message)s'\n },\n 'simple': {\n 'format': '[%(asctime)s] %(name)s %(levelname)s %(message)s',\n 'datefmt': '%d/%b/%Y %H:%M:%S'\n },\n },\n 'filters': {\n 'require_debug_false': {\n '()': 'django.utils.log.RequireDebugFalse',\n },\n 'require_debug_true': {\n '()': 'django.utils.log.RequireDebugTrue',\n },\n },\n 'handlers': {\n 'console': {\n 'filters': ['require_debug_true'],\n 'class': 'logging.StreamHandler',\n 'formatter': 'simple'\n },\n 'syslog': {\n 'class': 'logging.handlers.SysLogHandler',\n 'address': '/dev/log',\n 'formatter': 'standard'\n },\n },\n 'loggers': {\n # root logger\n \u00a0 \u00a0 \u00a0 \u00a0# All logs printed by Seahub and any third party libraries will be handled by this logger.\n \u00a0 \u00a0 \u00a0 \u00a0'': {\n 'handlers': ['console', 'syslog'],\n 'level': 'INFO', # Logs when log level is higher than info. Level can be any one of DEBUG, INFO, WARNING, ERROR, CRITICAL.\n 'disabled': False\n },\n # This logger recorded logs printed by Django Framework. 
For example, when you see 5xx page error, you should check the logs recorded by this logger.\n 'django.request': {\n 'handlers': ['console', 'syslog'],\n 'level': 'INFO',\n 'propagate': False,\n },\n },\n}\n# Install ffmpeg\nsudo apt-get update && sudo apt-get -y install ffmpeg\n\n# Now we need to install some modules\npip install pillow moviepy\n# We need to activate the epel repos\nyum -y install epel-release\nrpm --import http://li.nux.ro/download/nux/RPM-GPG-KEY-nux.ro\n\n# Then update the repo and install ffmpeg\nyum -y install ffmpeg ffmpeg-devel\n\n# Now we need to install some modules\npip install pillow moviepy\n
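The LOGGING dictionary shown above follows Python's standard dictConfig schema. A minimal, self-contained sketch of the same mechanism (a StreamHandler writing to a StringIO stands in for the SysLogHandler on /dev/log, and the logger name `seahub.demo` is only an example):

```python
import io
import logging
import logging.config

# Sketch of the dictConfig schema used by the LOGGING setting above.
# Only a StreamHandler is wired up here (to a StringIO so the output is
# easy to inspect); the real seahub_settings.py adds a SysLogHandler
# pointed at /dev/log.
stream = io.StringIO()

LOGGING = {
    'version': 1,
    'disable_existing_loggers': True,
    'formatters': {
        'standard': {
            'format': '%(asctime)s [%(levelname)s] %(name)s %(message)s',
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'standard',
            'stream': stream,
        },
    },
    'loggers': {
        # Root logger: everything Seahub or a third-party library logs
        # at INFO level or above ends up in the handler.
        '': {
            'handlers': ['console'],
            'level': 'INFO',
        },
    },
}

logging.config.dictConfig(LOGGING)
logging.getLogger('seahub.demo').info('hello from the root logger')
output = stream.getvalue()
```

Because the root logger (`''`) handles everything, any logger created elsewhere in the process inherits the same handlers, which is exactly why the Seahub setting above catches third-party output too.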
"},{"location":"deploy/video_thumbnails/#configure-seafile-to-create-thumbnails","title":"Configure Seafile to create thumbnails","text":"# Add backports repo to /etc/apt/sources.list.d/\n# e.g. the following repo works (June 2017)\n# (use tee, since a plain \"sudo echo ... >\" would run the redirection without root privileges)\necho \"deb http://httpredir.debian.org/debian $(lsb_release -cs)-backports main non-free\" | sudo tee /etc/apt/sources.list.d/debian-backports.list\n\n# Then update the repo and install ffmpeg\nsudo apt-get update && sudo apt-get -y install ffmpeg\n\n# Now we need to install some modules\npip install pillow moviepy\nseahub_settings.py
"},{"location":"deploy_pro/","title":"Deploy Seafile Pro Edition","text":"# Enable or disable thumbnail for video. ffmpeg and moviepy should be installed first. \n# For details, please refer to https://manual.seafile.com/deploy/video_thumbnails/\n# NOTE: since version 6.1\nENABLE_VIDEO_THUMBNAIL = True\n\n# Use the frame at 5 second as thumbnail\nTHUMBNAIL_VIDEO_FRAME_TIME = 5 \n\n# Absolute filesystem path to the directory that will hold thumbnail files.\nTHUMBNAIL_ROOT = '/haiwen/seahub-data/thumbnail/thumb/'\n
"},{"location":"deploy_pro/#migration-and-upgrading","title":"Migration and Upgrading","text":"
"},{"location":"deploy_pro/#s3openswiftceph-storage-backends","title":"S3/OpenSwift/Ceph Storage Backends","text":"
"},{"location":"deploy_pro/#cluster","title":"Cluster","text":"
"},{"location":"deploy_pro/admin_roles_permissions/","title":"Roles and Permissions Support","text":"
default_admin role with all permissions by default. If you assign an administrator some other admin role, the administrator will only have the permissions you have set to True.seahub_settings.py.
"},{"location":"deploy_pro/change_default_java/","title":"Change default java","text":"ENABLED_ADMIN_ROLE_PERMISSIONS = {\n 'system_admin': {\n 'can_view_system_info': True,\n 'can_config_system': True,\n },\n 'daily_admin': {\n 'can_view_system_info': True,\n 'can_view_statistic': True,\n 'can_manage_library': True,\n 'can_manage_user': True,\n 'can_manage_group': True,\n 'can_view_user_log': True,\n },\n 'audit_admin': {\n 'can_view_system_info': True,\n 'can_view_admin_log': True,\n },\n 'custom_admin': {\n 'can_view_system_info': True,\n 'can_config_system': True,\n 'can_view_statistic': True,\n 'can_manage_library': True,\n 'can_manage_user': True,\n 'can_manage_group': True,\n 'can_view_user_log': True,\n 'can_view_admin_log': True,\n },\n}\njava -version, and check the output.
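The ENABLED_ADMIN_ROLE_PERMISSIONS mapping above is a plain nested dictionary, so a permission check reduces to two lookups with everything unset defaulting to False. A hypothetical helper (not part of Seahub's API) makes this concrete:

```python
# Hypothetical helper, not Seahub API: look up one permission for one
# admin role in an ENABLED_ADMIN_ROLE_PERMISSIONS-style mapping.
ENABLED_ADMIN_ROLE_PERMISSIONS = {
    'daily_admin': {
        'can_view_system_info': True,
        'can_manage_user': True,
    },
    'audit_admin': {
        'can_view_system_info': True,
        'can_view_admin_log': True,
    },
}

def has_admin_permission(role: str, permission: str) -> bool:
    # Permissions not explicitly set to True are treated as False,
    # matching the "only the permissions you have set to True" rule.
    return ENABLED_ADMIN_ROLE_PERMISSIONS.get(role, {}).get(permission, False)

print(has_admin_permission('daily_admin', 'can_manage_user'))  # True
print(has_admin_permission('audit_admin', 'can_manage_user'))  # False
```

An unknown role (or a role with an empty dict) therefore grants nothing, which is the safe default for a custom admin role you are still configuring.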
sudo update-alternatives --config java\nsudo alternatives --config java\njava -version to make sure the change has taken effect.
"},{"location":"deploy_pro/config_seafile_with_ADFS/#prepare-certs-file","title":"Prepare Certs File","text":"
These x.509 certs are used to sign and encrypt elements like NameID and Metadata for SAML. \n\n Then copy these two files to **<seafile-install-path>/seahub-data/certs**. (if the certs folder not exists, create it.)\n\n2. x.509 cert from IdP (Identity Provider)\n\n 1. Log into the ADFS server and open the ADFS management.\n\n 1. Double click **Service** and choose **Certificates**.\n\n 1. Export the **Token-Signing** certificate:\n\n 1. Right-click the certificate and select **View Certificate**.\n 1. Select the **Details** tab.\n 1. Click **Copy to File** (select **DER encoded binary X.509**).\n\n 1. Convert this certificate to PEM format, rename it to **idp.crt**\n\n 1. Then copy it to **<seafile-install-path>/seahub-data/certs**.\n\n### Prepare IdP Metadata File\n\n1. Open https://adfs-server.adfs.com/federationmetadata/2007-06/federationmetadata.xml\n\n1. Save this xml file, rename it to **idp_federation_metadata.xml**\n\n1. Copy it to **<seafile-install-path>/seahub-data/certs**.\n\n### Install Requirements on Seafile Server\n\n- For Ubuntu 16.04\n### Config Seafile\n\nAdd the following lines to **seahub_settings.py**\n'allow_unknown_attributes': True,\n\n# your entity id, usually your subdomain plus the url to the metadata view\n'entityid': SP_SERVICE_URL + '/saml2/metadata/',\n\n# directory with attribute mapping\n'attribute_map_dir': ATTRIBUTE_MAP_DIR,\n\n# this block states what services we provide\n'service': {\n # we are just a lonely SP\n 'sp' : {\n \"allow_unsolicited\": True,\n 'name': 'Federated Seafile Service',\n 'name_id_format': saml2.saml.NAMEID_FORMAT_EMAILADDRESS,\n 'endpoints': {\n # url and binding to the assetion consumer service view\n # do not change the binding or service name\n 'assertion_consumer_service': [\n (SP_SERVICE_URL + '/saml2/acs/',\n saml2.BINDING_HTTP_POST),\n ],\n # url and binding to the single logout service view\n # do not change the binding or service name\n 'single_logout_service': [\n (SP_SERVICE_URL + 
'/saml2/ls/',\n saml2.BINDING_HTTP_REDIRECT),\n (SP_SERVICE_URL + '/saml2/ls/post',\n saml2.BINDING_HTTP_POST),\n ],\n },\n\n # attributes that this project need to identify a user\n 'required_attributes': [\"uid\"],\n\n # attributes that may be useful to have but not required\n 'optional_attributes': ['eduPersonAffiliation', ],\n\n # in this section the list of IdPs we talk to are defined\n 'idp': {\n # we do not need a WAYF service since there is\n # only an IdP defined here. This IdP should be\n # present in our metadata\n\n # the keys of this dictionary are entity ids\n 'https://adfs-server.adfs.com/federationmetadata/2007-06/federationmetadata.xml': {\n 'single_sign_on_service': {\n saml2.BINDING_HTTP_REDIRECT: 'https://adfs-server.adfs.com/adfs/ls/idpinitiatedsignon.aspx',\n },\n 'single_logout_service': {\n saml2.BINDING_HTTP_REDIRECT: 'https://adfs-server.adfs.com/adfs/ls/?wa=wsignout1.0',\n },\n },\n },\n },\n},\n\n# where the remote metadata is stored\n'metadata': {\n 'local': [path.join(CERTS_DIR, 'idp_federation_metadata.xml')],\n},\n\n# set to 1 to output debugging information\n'debug': 1,\n\n# Signing\n'key_file': '', \n'cert_file': path.join(CERTS_DIR, 'idp.crt'), # from IdP\n\n# Encryption\n'encryption_keypairs': [{\n 'key_file': path.join(CERTS_DIR, 'sp.key'), # private part\n 'cert_file': path.join(CERTS_DIR, 'sp.crt'), # public part\n}],\n\n'valid_for': 24, # how long is our metadata valid\n
https://demo.seafile.com/saml2/metadata/ in the Federation metadata address.
'name_id_format': saml2.saml.NAMEID_FORMAT_EMAILADDRESS).
"},{"location":"deploy_pro/deploy_clamav_with_seafile/","title":"Deploy ClamAV with Seafile","text":""},{"location":"deploy_pro/deploy_clamav_with_seafile/#use-clamav-with-docker-based-deployment","title":"Use Clamav with Docker based deployment","text":""},{"location":"deploy_pro/deploy_clamav_with_seafile/#add-clamav-to-docker-composeyml","title":"Add Clamav to docker-compose.yml","text":"
"},{"location":"deploy_pro/deploy_clamav_with_seafile/#modify-seafileconf","title":"Modify seafile.conf","text":"services:\n ...\n\n av:\n image: clamav/clamav:latest\n container_name: seafile-clamav\n networks:\n - seafile-net\n
"},{"location":"deploy_pro/deploy_clamav_with_seafile/#restart-docker-container","title":"Restart docker container","text":"[virus_scan]\nscan_command = clamdscan\nvirus_code = 1\nnonvirus_code = 0\nscan_interval = 5\nscan_size_limit = 20\nthreads = 2\ndocker compose down\ndocker compose up -d \napt-get install clamav-daemon clamav-freshclam\n/etc/clamav/clamd.conf, change the following line:
"},{"location":"deploy_pro/deploy_clamav_with_seafile/#start-the-clamav-daemon","title":"Start the clamav-daemon","text":"LocalSocketGroup root\nUser root\nsystemctl start clamav-daemon\n
$ curl https://secure.eicar.org/eicar.com.txt | clamdscan -\n
"},{"location":"deploy_pro/deploy_in_a_cluster/","title":"Deploy in a cluster","text":"stream: Eicar-Test-Signature FOUND\n
"},{"location":"deploy_pro/deploy_in_a_cluster/#preparation","title":"Preparation","text":""},{"location":"deploy_pro/deploy_in_a_cluster/#hardware-database-memory-cache","title":"Hardware, Database, Memory Cache","text":"sudo easy_install pip\nsudo pip install boto\n
"},{"location":"deploy_pro/deploy_in_a_cluster/#configure-a-single-node","title":"Configure a Single Node","text":"sudo pip install setuptools --no-use-wheel --upgrade\n/data/haiwen/ as the top level directory.tar xf seafile-pro-server_8.0.0_x86-64.tar.gz\n
"},{"location":"deploy_pro/deploy_in_a_cluster/#setup-seafile","title":"Setup Seafile","text":"haiwen\n\u251c\u2500\u2500 seafile-license.txt\n\u2514\u2500\u2500 seafile-pro-server-8.0.0/\nseafile.conf[cluster]\nenabled = true\n\n[memcached]\nmemcached_options = --SERVER=192.168.1.134 --POOL-MIN=10 --POOL-MAX=100\n[cluster]\nenabled = true\n\n[memcached]\nmemcached_options = --SERVER=<floating IP address> --POOL-MIN=10 --POOL-MAX=100\n[cluster]\nenabled = true\n\n[redis]\n# your redis server address\nredis_server = 127.0.0.1\n# your redis server port\nredis_port = 6379\n# size of connection pool to redis, default is 100\nmax_connections = 100\nseafile.conf
"},{"location":"deploy_pro/deploy_in_a_cluster/#seahub_settingspy","title":"seahub_settings.py","text":"[cluster]\nhealth_check_port = 12345\n
"},{"location":"deploy_pro/deploy_in_a_cluster/#seafeventsconf","title":"seafevents.conf","text":"AVATAR_FILE_STORAGE = 'seahub.base.database_storage.DatabaseStorage'\nseafevents.conf to disable file indexing service on the local server. The file indexing service should be started on a dedicated background server.[INDEX FILES]\nexternal_es_server = true\n[INDEX FILES] section:[INDEX FILES]\nenabled = true\ninterval = 10m\nhighlight = fvh # This configuration is only available for Seafile 6.3.0 pro and above.\nindex_office_pdf = true\nexternal_es_server = true\nes_host = background.seafile.com\nes_port = 9200\nenabled = true should be left unchanged. For versions older than 6.1, es_port was 9500.
"},{"location":"deploy_pro/deploy_in_a_cluster/#backend-storage-settings","title":"Backend Storage Settings","text":"CREATE TABLE `avatar_uploaded` (`filename` TEXT NOT NULL, `filename_md5` CHAR(32) NOT NULL PRIMARY KEY, `data` MEDIUMTEXT NOT NULL, `size` INTEGER NOT NULL, `mtime` datetime NOT NULL);\n
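In the avatar_uploaded table above, filename_md5 is a CHAR(32) primary key. Assuming (as the column name suggests) that it holds the hex MD5 digest of the stored file name, the key can be sketched like this:

```python
import hashlib

def filename_md5(filename: str) -> str:
    # 32 hex characters, matching the CHAR(32) primary key above.
    # Assumption: the key is the MD5 hex digest of the avatar's file name;
    # the path used here is only an illustrative example.
    return hashlib.md5(filename.encode('utf-8')).hexdigest()

digest = filename_md5('avatars/1/resized/80/example.png')
```

Keying on a fixed-length digest rather than the raw TEXT filename keeps the primary key short and uniform regardless of path length.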
"},{"location":"deploy_pro/deploy_in_a_cluster/#setup-nginxapache-and-http","title":"Setup Nginx/Apache and HTTP","text":"
"},{"location":"deploy_pro/deploy_in_a_cluster/#run-and-test-the-single-node","title":"Run and Test the Single Node","text":"cd /data/haiwen/seafile-server-latest\n./seafile.sh start\n./seahub.sh start\nhttp://ip-address-of-this-node:80 and login with the admin account./data/haiwen, compress this whole directory into a tarball and copy the tarball to all other Seafile server machines. You can simply uncompress the tarball and use it../seafile.sh and ./seahub.sh to start Seafile server.
"},{"location":"deploy_pro/deploy_in_a_cluster/#start-seafile-service-on-boot","title":"Start Seafile Service on boot","text":"export CLUSTER_MODE=backend\n./seafile.sh start\n./seafile-background-tasks.sh start\n
"},{"location":"deploy_pro/deploy_in_a_cluster/#load-balancer-setting","title":"Load Balancer Setting","text":"/etc/haproxy/haproxy.cfg:11001)
"},{"location":"deploy_pro/deploy_in_a_cluster/#see-how-it-runs","title":"See how it runs","text":"global\n log 127.0.0.1 local1 notice\n maxconn 4096\n user haproxy\n group haproxy\n\ndefaults\n log global\n mode http\n retries 3\n maxconn 2000\n timeout connect 10000\n timeout client 300000\n timeout server 300000\n\nlisten seafile 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n cookie SERVERID insert indirect nocache\n server seafileserver01 192.168.1.165:80 check port 11001 cookie seafileserver01\n server seafileserver02 192.168.1.200:80 check port 11001 cookie seafileserver02\n[cluster]\nenabled = true\nmemcached_options = --SERVER=<IP of memcached node> --POOL-MIN=10 --POOL-MAX=100\nenabled option will prevent the start of background tasks by ./seafile.sh start in the front-end node. The tasks should be explicitly started by ./seafile-background-tasks.sh start at the back-end node.AVATAR_FILE_STORAGE = 'seahub.base.database_storage.DatabaseStorage'\n[INDEX FILES]\nenabled = true\ninterval = 10m\nhighlight = fvh # This configuration is for improving searching speed\nexternal_es_server = true\nes_host = <IP of background node>\nes_port = 9200\n[INDEX FILES] section is needed to let the front-end node know the file search feature is enabled. The external_es_server = true is to tell the front-end node not to start the ElasticSearch but to use the ElasticSearch server at the back-end node.
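HAProxy's `check port 11001` lines above probe each node's health-check port and only route traffic to nodes that answer. As an illustration only (Seafile ships its own built-in health-check service; this sketch uses an ephemeral localhost port), here is the kind of trivial HTTP responder such a check expects:

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustration of a health-check endpoint like the one HAProxy probes
# with "check port 11001". Port 0 asks the OS for a free ephemeral port.
class Health(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)  # 200 = node is healthy, keep routing to it
        self.end_headers()
        self.wfile.write(b'OK')

    def log_message(self, *args):
        pass  # keep the sketch quiet

server = HTTPServer(('127.0.0.1', 0), Health)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Probe it once, the way a load balancer would.
status = urllib.request.urlopen(f'http://127.0.0.1:{port}/').status
server.shutdown()
```

If a node stops answering on its health-check port, HAProxy marks it down and the `cookie SERVERID` stickiness moves affected clients to a surviving node.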
"},{"location":"deploy_pro/details_about_file_search/#enable-full-text-search-for-officepdf-files","title":"Enable full text search for Office/PDF files","text":"[INDEX FILES]\n## must be \"true\" to enable search\nenabled = true\n\n## The interval the search index is updated. Can be s(seconds), m(minutes), h(hours), d(days)\ninterval=10m\n\n## this is for improving the search speed\nhighlight = fvh \n\n## If true, indexes the contents of office/pdf files while updating search index\n## Note: If you change this option from \"false\" to \"true\", then you need to clear the search index and update the index again.\nindex_office_pdf=false\n\n## From 9.0.7 pro, Seafile supports connecting to Elasticsearch through username and password, you need to configure username and password for the Elasticsearch server\nusername = elastic # username to connect to Elasticsearch\npassword = elastic_password # password to connect to Elasticsearch\n\n## From 9.0.7 pro, Seafile supports connecting to elasticsearch via HTTPS, you need to configure HTTPS for the Elasticsearch server\nscheme = https # The default is http. If the Elasticsearch server is not configured with HTTPS, the scheme and cafile do not need to be configured\ncafile = path/to/cert.pem # The certificate path for user authentication. If the Elasticsearch server does not enable certificate authentication, do not need to be configured\n\n## From version 11.0.5 Pro, you can custom ElasticSearch index names for distinct instances when intergrating multiple Seafile servers to a single ElasticSearch Server.\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\nindex_office_pdf option in seafevents.conf to true. cd /data/haiwen/seafile-pro-server-1.7.0/\n ./seafile.sh restart\n
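The `interval` option above takes a number plus an s/m/h/d suffix. A hypothetical parser (not the code seafevents actually uses) makes the suffix rules concrete:

```python
# Hypothetical parser for seafevents-style interval values ("10m", "2h");
# suffixes per the comment above: s=seconds, m=minutes, h=hours, d=days.
UNITS = {'s': 1, 'm': 60, 'h': 3600, 'd': 86400}

def interval_seconds(value: str) -> int:
    value = value.strip().lower()
    if value and value[-1] in UNITS:
        return int(value[:-1]) * UNITS[value[-1]]
    return int(value)  # bare number: treat as seconds

print(interval_seconds('10m'))  # 600
```

So the default `interval=10m` means the search index is refreshed every 600 seconds.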
"},{"location":"deploy_pro/details_about_file_search/#common-problems","title":"Common problems","text":""},{"location":"deploy_pro/details_about_file_search/#how-to-rebuild-the-index-if-something-went-wrong","title":"How to rebuild the index if something went wrong","text":" ./pro/pro.py search --clear\n ./pro/pro.py search --update\n./pro/pro.py search --clear\n./pro/pro.py search --update\n
"},{"location":"deploy_pro/details_about_file_search/#access-the-aws-elasticsearch-service-using-https","title":"Access the AWS elasticsearch service using HTTPS","text":"rm -rf pro-data/search./pro/pro.py search --update
[INDEX FILES]\nenabled = true\ninterval = 10m\nindex_office_pdf=true\nexternal_es_server = true\nes_host = your domain endpoint (for example, https://search-my-domain.us-east-1.es.amazonaws.com)\nes_port = 443\nscheme = https\nusername = master user\npassword = password\nhighlight = fvh\nrepo_status_index_name = your-repo-status-index-name # default is `repo_head`\nrepo_files_index_name = your-repo-files-index-name # default is `repofiles`\nThe version of the elasticsearch Python client cannot be greater than 7.14.0, otherwise the AWS service cannot be accessed: https://docs.aws.amazon.com/opensearch-service/latest/developerguide/samplecode.html#client-compatibility, https://github.com/elastic/elasticsearch-py/pull/1623.
"},{"location":"deploy_pro/details_about_file_search/#encrypted-files-cannot-be-searched","title":"Encrypted files cannot be searched","text":"cd haiwen/seafile-pro-server-2.0.4\n./pro/pro.py search --update\nseafile-server-latest/pro/elasticsearch/config/jvm.options file:-Xms2g # Minimum available memory\n-Xmx2g # Maximum available memory\n### It is recommended to set the values of the above two configurations to the same size.\n
"},{"location":"deploy_pro/details_about_file_search/#distributed-indexing","title":"Distributed indexing","text":"./seafile.sh restart\n./seahub.sh restart\n$ apt install redis-server\n$ yum install redis\n$ pip install redis\nseafevents.conf on all frontend nodes, add the following config items:[EVENTS PUBLISH]\nmq_type=redis # must be redis\nenabled=true\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password, if not password, do not set this item\nseafevents.conf on the backend node to disable the scheduled indexing task, because the scheduled indexing task and the distributed indexing task conflict.[INDEX FILES]\nenabled=true\n |\n V\nenabled=false \n
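Distributed indexing as configured above is a producer/consumer setup: the master publishes per-library index tasks to Redis and the workers consume them. This stand-in sketch uses a `queue.Queue` instead of Redis (so it needs no server) to show the same shape that `run_index_master.sh` / `run_index_worker.sh` implement:

```python
import queue
import threading

# Illustration only: the master publishes per-library index tasks and
# workers consume them, mirroring the Redis-based master/worker split
# above. A queue.Queue stands in for the Redis message queue.
tasks = queue.Queue()
indexed = []

def worker():
    while True:
        repo_id = tasks.get()
        if repo_id is None:  # sentinel: stop this worker
            break
        indexed.append(repo_id)  # stand-in for "update index for repo"
        tasks.task_done()

# index_workers=2 in index-slave.conf corresponds to worker threads like these
threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()

for repo_id in ['repo-a', 'repo-b', 'repo-c']:
    tasks.put(repo_id)  # master side: publish a task per library
tasks.join()            # wait until every published task is processed

for _ in threads:
    tasks.put(None)     # one sentinel per worker
for t in threads:
    t.join()
```

This is also why the scheduled `[INDEX FILES]` task must be disabled on the backend node: two independent producers updating the same index would conflict.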
"},{"location":"deploy_pro/details_about_file_search/#deploy-distributed-indexing","title":"Deploy distributed indexing","text":"$ ./seafile.sh restart && ./seahub.sh restart\nconf directory from the frontend nodes. The master node and slave nodes do not need to start Seafile, but need to read the configuration files to obtain the necessary information.index-master.conf in the conf directory of the master node, e.g.[DEFAULT]\nmq_type=redis # must be redis\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password, if not password, do not set this item\n./run_index_master.sh [start/stop/restart] in the seafile-server-last directory to control the program to start, stop and restart.index-slave.conf in the conf directory of all slave nodes, e.g.[DEFAULT]\nmq_type=redis # must be redis\nindex_workers=2 # number of threads to create/update indexes, you can increase this value according to your needs\n\n[REDIS]\nserver=127.0.0.1 # your redis server host\nport=6379 # your redis server port\npassword=xxx # your redis server password, if not password, do not set this item\n./run_index_worker.sh [start/stop/restart] in the seafile-server-last directory to control the program to start, stop and restart.seafile-server-last directory:$ ./pro/pro.py search --clear\n$ ./run_index_master.sh python-env index_op.py --mode resotre_all_repo\nseafile-server-last directory:$ ./run_index_master.sh python-env index_op.py --mode show_all_task\n# Ubuntu 20.04 (on Debian 10/Ubuntu 18.04, it is almost the same)\nsudo apt-get update\nsudo apt-get install -y python3 python3-setuptools python3-pip libmysqlclient-dev\nsudo apt-get install -y memcached libmemcached-dev\nsudo apt-get install -y poppler-utils\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.4.3 \\\n django-pylibmc django-simple-captcha python3-ldap mysqlclient\n# CentOS 8\nsudo yum install python3 python3-setuptools 
python3-pip python3-devel mysql-devel gcc -y\nsudo yum install poppler-utils -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.4.3 \\\n django-pylibmc django-simple-captcha python3-ldap mysqlclient\n# on Ubuntu 20.04 (on Debian 10/Ubuntu 18.04, it is almost the same)\napt-get update\napt-get install -y python3 python3-setuptools python3-pip python3-ldap libmysqlclient-dev\napt-get install -y memcached libmemcached-dev\napt-get install -y poppler-utils\n\npip3 install --timeout=3600 django==3.2.* future mysqlclient pymysql Pillow pylibmc \\ \ncaptcha jinja2 sqlalchemy==1.4.3 psd-tools django-pylibmc django-simple-captcha pycryptodome==3.12.0 cffi==1.14.0 lxml\n# CentOS 8\nsudo yum install python3 python3-setuptools python3-pip python3-devel mysql-devel gcc -y\nsudo yum install poppler-utils -y\n\nsudo pip3 install --timeout=3600 django==3.2.* Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.4.3 \\\n django-pylibmc django-simple-captcha python3-ldap mysqlclient pycryptodome==3.12.0 cffi==1.14.0 lxml\n# on Ubuntu 22.04 (on Ubuntu 20.04/Debian 11/Debian 10, it is almost the same)\napt-get update\napt-get install -y python3 python3-setuptools python3-pip python3-ldap libmysqlclient-dev\napt-get install -y memcached libmemcached-dev\napt-get install -y poppler-utils\n\nsudo pip3 install --timeout=3600 django==3.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==1.4.44 \\\n psd-tools django-pylibmc django_simple_captcha==0.5.20 djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml\n# CentOS 8\nsudo yum install python3 python3-setuptools python3-pip python3-devel mysql-devel gcc -y\nsudo yum install poppler-utils -y\n\nsudo pip3 install --timeout=3600 django==3.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==1.4.44 \\\n psd-tools django-pylibmc 
django_simple_captcha==0.5.20 pycryptodome==3.16.* cffi==1.15.1 lxml\n# on Ubuntu 22.04 (on Ubuntu 20.04/Debian 11/Debian 10, it is almost the same)\napt-get update\napt-get install -y python3 python3-dev python3-setuptools python3-pip python3-ldap libmysqlclient-dev ldap-utils libldap2-dev dnsutils\napt-get install -y memcached libmemcached-dev\napt-get install -y poppler-utils\n\nsudo pip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \\\n psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 python-ldap==3.4.3 lxml\n# CentOS 8\nsudo yum install python3 python3-setuptools python3-pip python3-devel mysql-devel gcc bind-utils -y\nsudo yum install poppler-utils -y\n\nsudo pip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \\\n psd-tools django-pylibmc django_simple_captcha==0.6.* pycryptodome==3.16.* cffi==1.15.1 python-ldap==3.4.3 lxml\n# Debian 12\nsudo apt-get update\nsudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmariadb-dev-compat ldap-utils libldap2-dev libsasl2-dev python3.11-venv\nsudo apt-get install -y memcached libmemcached-dev\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the vitual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* pymysql pillow==10.0.* pylibmc captcha==0.4 markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 psd-tools django-pylibmc 
django_simple_captcha==0.5.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 lxml python-ldap==3.4.3\n
"},{"location":"deploy_pro/download_and_setup_seafile_professional_server/#installing-java-runtime-environment","title":"Installing Java Runtime Environment","text":"# Ubuntu 24.04\nsudo apt-get update\nsudo apt-get install -y python3 python3-dev python3-setuptools python3-pip libmysqlclient-dev ldap-utils libldap2-dev python3.12-venv\nsudo apt-get install -y memcached libmemcached-dev\n\nmkdir /opt/seafile\ncd /opt/seafile\n\n# create the virtual environment in the python-venv directory\npython3 -m venv python-venv\n\n# activate the venv\nsource python-venv/bin/activate\n# Notice that this will usually change your prompt so you know the venv is active\n\n# install packages into the active venv with pip (sudo isn't needed because this is installing in the venv, not system-wide).\npip3 install --timeout=3600 django==4.2.* future==0.18.* mysqlclient==2.1.* \\\n pymysql pillow==10.2.* pylibmc captcha==0.5.* markupsafe==2.0.1 jinja2 sqlalchemy==2.0.18 \\\n psd-tools django-pylibmc django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.16.0 lxml python-ldap==3.4.3\n# Debian 10/Debian 11\nsudo apt-get install default-jre -y\n# Ubuntu 16.04/Ubuntu 18.04/Ubuntu 20.04/Ubuntu 22.04\nsudo apt-get install openjdk-8-jre -y\nsudo ln -sf /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java /usr/bin/\n
"},{"location":"deploy_pro/download_and_setup_seafile_professional_server/#creating-the-programm-directory","title":"Creating the program directory","text":"# CentOS\nsudo yum install java-1.8.0-openjdk -y\nThe standard directory is /opt/seafile. Create this directory and change into it:\nmkdir /opt/seafile\ncd /opt/seafile\n/opt/seafile is assumed for the rest of this manual. If you decide to put Seafile in another directory, some commands need to be modified accordingly.\nadduser seafile\nchown -R seafile: /opt/seafile\n
"},{"location":"deploy_pro/download_and_setup_seafile_professional_server/#placing-the-seafile-pe-license","title":"Placing the Seafile PE license","text":"su seafile\nSave the license file in /opt/seafile. Make sure that the file is named seafile-license.txt. (If the file has a different name or cannot be read, Seafile PE will not start.)
# Debian/Ubuntu\nwget -O 'seafile-pro-server_x.x.x_x86-64_Ubuntu.tar.gz' 'VERSION_SPECIFIC_LINK_FROM_SEAFILE_CUSTOMER_CENTER'\n\n# CentOS\nwget -O 'seafile-pro-server_x.x.x_x86-64_CentOS.tar.gz' 'VERSION_SPECIFIC_LINK_FROM_SEAFILE_CUSTOMER_CENTER'\n# Debian/Ubuntu\ntar xf seafile-pro-server_8.0.4_x86-64_Ubuntu.tar.gz\n# CentOS\ntar xf seafile-pro-server_8.0.4_x86-64_CentOS.tar.gz\n$ tree -L 2 /opt/seafile\n.\n\u251c\u2500\u2500 seafile-license.txt\n\u2514\u2500\u2500 seafile-pro-server-8.0.4\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 check-db-type.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 check_init_admin.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 create-db\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 index_op.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate-repo.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate-repo.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 pro\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 remove-objs.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 remove-objs.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 reset-admin.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 run_index_master.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 run_index_worker.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 runtime\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-backup-cmd.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-backup-cmd.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-encrypt.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fsck.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fuse.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-gc.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-gen-key.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile-background-tasks.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-import.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 
seahub-extra\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sql\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 upgrade\n\u2514\u2500\u2500 seafile-pro-server_8.0.4_x86-64.tar.gz\n
"},{"location":"deploy_pro/download_and_setup_seafile_professional_server/#run-the-setup-script","title":"Run the setup script","text":"seafile-server_8.0.4_x86-64.tar.gz; uncompressing into folder seafile-server-8.0.4\nseafile-pro-server_8.0.4_x86-64.tar.gz; uncompressing into folder seafile-pro-server-8.0.4\nlogs):
"},{"location":"deploy_pro/download_and_setup_seafile_professional_server/#setup-memory-cache","title":"Setup Memory Cache","text":"$ tree -L 2 /opt/seafile\n.\n\u251c\u2500\u2500 seafile-license.txt # license file\n\u251c\u2500\u2500 ccnet \n\u251c\u2500\u2500 conf # configuration files\n\u2502 \u2514\u2500\u2500 ccnet.conf\n\u2502 \u2514\u2500\u2500 gunicorn.conf.py\n\u2502 \u2514\u2500\u2500 __pycache__\n\u2502 \u2514\u2500\u2500 seafdav.conf\n\u2502 \u2514\u2500\u2500 seafevents.conf\n\u2502 \u2514\u2500\u2500 seafile.conf\n\u2502 \u2514\u2500\u2500 seahub_settings.py\n\u251c\u2500\u2500 logs # log files\n\u251c\u2500\u2500 pids # process id files\n\u251c\u2500\u2500 pro-data # data specific for Seafile PE\n\u251c\u2500\u2500 seafile-data # object database\n\u251c\u2500\u2500 seafile-pro-server-8.0.4\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 check-db-type.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 check_init_admin.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 create-db\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 index_op.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate-repo.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate-repo.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 migrate.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 pro\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 reset-admin.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 run_index_master.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 run_index_worker.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 runtime\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-backup-cmd.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-backup-cmd.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-encrypt.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fsck.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-fuse.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-gc.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-gen-key.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 
seafile-background-tasks.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seaf-import.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub-extra\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 seahub.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile-mysql.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 setup-seafile.sh\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 sql\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 upgrade\n\u251c\u2500\u2500 seafile-server-latest -> seafile-pro-server-8.0.4\n\u251c\u2500\u2500 seahub-data\n \u2514\u2500\u2500 avatars # user avatars\n# on Debian/Ubuntu 18.04+\napt-get install memcached libmemcached-dev -y\npip3 install --timeout=3600 pylibmc django-pylibmc\n\nsystemctl enable --now memcached\nseahub_settings.py.
"},{"location":"deploy_pro/download_and_setup_seafile_professional_server/#use-redis","title":"Use Redis","text":"CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '127.0.0.1:11211',\n },\n}\nseahub_settings.py.
"},{"location":"deploy_pro/download_and_setup_seafile_professional_server/#starting-seafile-server","title":"Starting Seafile Server","text":"/opt/seafile/seafile-server-latest:# For installations using python virtual environment, activate it if it isn't already active\nsource python-venv/bin/activate\n\n./seafile.sh start # Start Seafile service\n./seahub.sh start # Start seahub website, port defaults to 127.0.0.1:8000\nsudo docker pull elasticsearch:7.16.2\nsudo mkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\n
"},{"location":"deploy_pro/download_and_setup_seafile_professional_server/#modifying-seafevents","title":"Modifying seafevents","text":"sudo docker run -d \\\n--name es \\\n-p 9200:9200 \\\n-e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" \\\n-e \"ES_JAVA_OPTS=-Xms2g -Xmx2g\" -e \"xpack.security.enabled=false\" \\\n--restart=always \\\n-v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data \\\n-d elasticsearch:7.16.2\nseafevents.conf:[INDEX FILES]\nexternal_es_server = true # required when ElasticSearch on separate host\nes_host = your elasticsearch server's IP # IP address of ElasticSearch host\n # use 127.0.0.1 if deployed on the same server\nes_port = 9200 # port of ElasticSearch host\ninterval = 10m # frequency of index updates in minutes\nhighlight = fvh # parameter for improving the search performance\n
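seafevents.conf uses INI syntax, so the [INDEX FILES] section above can be sanity-checked with Python's configparser before restarting; a hedged sketch using the sample values from this page:

```python
import configparser

# Sample [INDEX FILES] section, mirroring the values shown above.
SAMPLE = """\
[INDEX FILES]
external_es_server = true
es_host = 127.0.0.1
es_port = 9200
interval = 10m
highlight = fvh
"""

# inline_comment_prefixes lets configparser tolerate trailing "# ..." notes
# like the ones in the documented example.
cfg = configparser.ConfigParser(inline_comment_prefixes=('#',))
cfg.read_string(SAMPLE)

idx = cfg['INDEX FILES']
print(idx.getboolean('external_es_server'))  # True
print(idx.getint('es_port'))                 # 9200
```

Running a check like this against your real seafevents.conf catches malformed values before they surface as indexing errors in the logs.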
"},{"location":"deploy_pro/enable_search_and_background_tasks_in_a_cluster/","title":"Enable search and background tasks in a cluster","text":"./seafile.sh restart && ./seahub.sh restart \n
"},{"location":"deploy_pro/enable_search_and_background_tasks_in_a_cluster/#71-80","title":"7.1, 8.0","text":""},{"location":"deploy_pro/enable_search_and_background_tasks_in_a_cluster/#configuring-node-a-the-backend-node","title":"Configuring Node A (the backend node)","text":"sudo apt-get install openjdk-8-jre libreoffice python-uno # or python3-uno for ubuntu 16.04+\nsudo yum install java-1.8.0-openjdk\nsudo yum install libreoffice libreoffice-headless libreoffice-pyuno\nexternal_es_server = true\n[OFFICE CONVERTER]\nenabled = true\nhost = <ip of node background>\nport = 6000\nes_port was 9500.seafevents.conf, add the following lines:[INDEX FILES]\nenabled = true\nexternal_es_server = true\nes_host = <ip of node A>\nes_port = 9200\n\n[OFFICE CONVERTER]\nenabled = true\nhost = <ip of node background>\nport = 6000\n
"},{"location":"deploy_pro/enable_search_and_background_tasks_in_a_cluster/#start-the-background-node","title":"Start the background node","text":"OFFICE_CONVERTOR_ROOT = 'http://<ip of node background>:6000'\nseafile-background-tasks.sh is needed)./seafile.sh start\n./seafile-background-tasks.sh start\n./seafile-background-tasks.sh stop\n./seafile.sh stop\n/etc/systemd/system/seafile-background-tasks.service:[Unit]\nDescription=Seafile Background Tasks Server\nAfter=network.target seahub.service\n\n[Service]\nType=forking\nExecStart=/opt/seafile/seafile-server-latest/seafile-background-tasks.sh start\nExecStop=/opt/seafile/seafile-server-latest/seafile-background-tasks.sh stop\nUser=root\nGroup=root\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"deploy_pro/enable_search_and_background_tasks_in_a_cluster/#the-final-configuration-of-the-background-node","title":"The final configuration of the background node","text":"systemctl enable seafile-background-tasks.service\n[cluster]\nenabled = true\n\n[memcached]\nmemcached_options = --SERVER=<your memcached server host> --POOL-MIN=10 --POOL-MAX=100\n
"},{"location":"deploy_pro/enable_search_and_background_tasks_in_a_cluster/#90","title":"9.0+","text":""},{"location":"deploy_pro/enable_search_and_background_tasks_in_a_cluster/#configuring-node-a-the-backend-node_1","title":"Configuring Node A (the backend node)","text":"[INDEX FILES]\nenabled = true\ninterval = 10m\nhighlight = fvh # this is for improving the search speed\n\n[OFFICE CONVERTER]\nenabled = true\nhost = <ip of node background>\nport = 6000\nseafevents.conf, add the following lines:[INDEX FILES]\nenabled = true\nexternal_es_server = true\nes_host = <ip of elastic search service>\nes_port = 9200\ninterval = 10m\nhighlight = fvh # this is for improving the search speed\nseafevents.conf, add the following lines:[INDEX FILES]\nenabled = true\nexternal_es_server = true\nes_host = <ip of elastic search service>\nes_port = 9200\n
"},{"location":"deploy_pro/enable_search_and_background_tasks_in_a_cluster/#start-the-background-node_1","title":"Start the background node","text":"OFFICE_CONVERTOR_ROOT = 'http://<ip of office preview docker service>'\nseafile-background-tasks.sh is needed)export CLUSTER_MODE=backend\n./seafile.sh start\n./seafile-background-tasks.sh start\n./seafile-background-tasks.sh stop\n./seafile.sh stop\n/etc/systemd/system/seafile-background-tasks.service:[Unit]\nDescription=Seafile Background Tasks Server\nAfter=network.target seafile.service\n\n[Service]\nType=forking\nExecStart=/opt/seafile/seafile-server-latest/seafile-background-tasks.sh start\nExecStop=/opt/seafile/seafile-server-latest/seafile-background-tasks.sh stop\nUser=root\nGroup=root\n\n[Install]\nWantedBy=multi-user.target\n
"},{"location":"deploy_pro/enable_search_and_background_tasks_in_a_cluster/#the-final-configuration-of-the-background-node_1","title":"The final configuration of the background node","text":"systemctl enable seafile-background-tasks.service\n[cluster]\nenabled = true\n\n[memcached]\nmemcached_options = --SERVER=<your memcached server host> --POOL-MIN=10 --POOL-MAX=100\n
"},{"location":"deploy_pro/ldap_in_11.0/","title":"Configure Seafile Pro Edition to use LDAP","text":""},{"location":"deploy_pro/ldap_in_11.0/#how-does-ldap-user-management-work-in-seafile","title":"How does LDAP User Management work in Seafile","text":"[INDEX FILES]\nenabled = true\nexternal_es_server = true\nes_host = <ip of elastic search service>\nes_port = 9200\ninterval = 10m\nhighlight = fvh # this is for improving the search speed\n
user-login-name@domain-name, e.g. john@example.com. It's not a real email address, but it works fine as the unique identifier.social_auth_usersocialauth to map the identifier to internal user ID in Seafile. When this ID is changed in LDAP for a user, you only need to update social_auth_usersocialauth table.seahub_settings.py. Examples are as follows:ENABLE_LDAP = True\nLDAP_SERVER_URL = 'ldap://192.168.0.1' \nLDAP_BASE_DN = 'ou=test,dc=seafile,dc=ren' \nLDAP_ADMIN_DN = 'administrator@example.com' \nLDAP_ADMIN_PASSWORD = 'yourpassword' \nLDAP_PROVIDER = 'ldap' \nLDAP_LOGIN_ATTR = 'email' \nLDAP_CONTACT_EMAIL_ATTR = '' \nLDAP_USER_ROLE_ATTR = '' \nLDAP_USER_FIRST_NAME_ATTR = 'givenName' \nLDAP_USER_LAST_NAME_ATTR = 'sn' \nLDAP_USER_NAME_REVERSE = False \nLDAP_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' \n
cn=admin,dc=example,dc=com\nLDAP_BASE_DN and LDAP_ADMIN_DN:
"},{"location":"deploy_pro/ldap_in_11.0/#setting-up-ldap-user-sync-optional","title":"Setting Up LDAP User Sync (optional)","text":"LDAP_BASE_DN, you first have to navigate your organization hierarchy on the domain controller GUI.
cn=users,dc=yourdomain,dc=com as LDAP_BASE_DN (with proper adjustment for your own needs).\ndsquery command on the domain controller to find out the DN for this OU. For example, if the OU is staff, you can run dsquery ou -name staff. More information can be found here.\nuser@domain.name format for the LDAP_ADMIN_DN option. For example you can use administrator@example.com for LDAP_ADMIN_DN. Sometimes the domain controller doesn't recognize this format. You can still use the dsquery command to find out the user's DN. For example, if the user name is 'seafileuser', run dsquery user -name seafileuser. More information here.\nseahub_settings.py. Examples are as follows:\n# Basic configuration items\nENABLE_LDAP = True\n......\n\n# ldap user sync options.\nLDAP_SYNC_INTERVAL = 60\nENABLE_LDAP_USER_SYNC = True\nLDAP_USER_OBJECT_CLASS = 'person'\nLDAP_DEPT_ATTR = ''\nLDAP_UID_ATTR = ''\nLDAP_AUTO_REACTIVATE_USERS = True\nLDAP_USE_PAGED_RESULT = False\nIMPORT_NEW_USER = True\nACTIVATE_USER_WHEN_IMPORT = True\nDEACTIVE_USER_IF_NOTFOUND = False\nENABLE_EXTRA_USER_INFO_SYNC = True\n
"},{"location":"deploy_pro/ldap_in_11.0/#importing-users-without-activating-them","title":"Importing Users without Activating Them","text":"sAMAccountName can be used as UID_ATTR. The attribute will be stored as login_id in Seafile (in seahub_db.profile_profile table).seahub_settings.py:ACTIVATE_USER_WHEN_IMPORT = False\nseahub_settings.py:ACTIVATE_AFTER_FIRST_LOGIN = True\nDEACTIVE_USER_IF_NOTFOUND option, a user will be deactivated when he/she is not found in LDAP server. By default, even after this user reappears in the LDAP server, it won't be reactivated automatically. This is to prevent auto reactivating a user that was manually deactivated by the system admin.seahub_settings.py:
"},{"location":"deploy_pro/ldap_in_11.0/#manually-trigger-synchronization","title":"Manually Trigger Synchronization","text":"LDAP_AUTO_REACTIVATE_USERS = True\ncd seafile-server-latest\n./pro/pro.py ldapsync\n
"},{"location":"deploy_pro/ldap_in_11.0/#setting-up-ldap-group-sync-optional","title":"Setting Up LDAP Group Sync (optional)","text":""},{"location":"deploy_pro/ldap_in_11.0/#how-it-works","title":"How It Works","text":"docker exec -it seafile /opt/seafile/seafile-server-latest/pro/pro.py ldapsync\n
"},{"location":"deploy_pro/ldap_in_11.0/#configuration","title":"Configuration","text":"# ldap group sync options.\nENABLE_LDAP_GROUP_SYNC = True # Whether to enable group sync\nLDAP_GROUP_OBJECT_CLASS = 'group' # This is the name of the class used to search for group objects.\nLDAP_GROUP_MEMBER_ATTR = 'member' # The attribute field to use when loading the group's members. \n # For most directory servers, the attributes is \"member\" \n # which is the default value.For \"posixGroup\", it should be set to \"memberUid\".\nLDAP_USER_ATTR_IN_MEMBERUID = 'uid' # The user attribute set in 'memberUid' option, \n # which is used in \"posixGroup\".The default value is \"uid\".\nLDAP_GROUP_UUID_ATTR = 'objectGUID' # Used to uniquely identify groups in LDAP\nLDAP_GROUP_FILTER = '' # An additional filter to use when searching group objects.\n # If it's set, the final filter used to run search is \"(&(objectClass=GROUP_OBJECT_CLASS)(GROUP_FILTER))\";\n # otherwise the final filter would be \"(objectClass=GROUP_OBJECT_CLASS)\".\nLDAP_USE_GROUP_MEMBER_RANGE_QUERY = False # When a group contains too many members, \n # AD will only return part of them. Set this option to TRUE\n # to make LDAP sync work with large groups.\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\", sync process will delete the group if not found it in LDAP server.\nLDAP_SYNC_GROUP_AS_DEPARTMENT = False # Whether to sync groups as top-level departments in Seafile.\n # Learn more about departments in Seafile [here](https://help.seafile.com/sharing_collaboration/departments/).\nLDAP_DEPT_NAME_ATTR = '' # Used to get the department name.\n
(&(objectClass=GROUP_OBJECT_CLASS)(GROUP_FILTER)); otherwise the final filter would be (objectClass=GROUP_OBJECT_CLASS).
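The group-filter composition described above can be sketched as a small helper function (illustrative only; Seafile builds this filter internally):

```python
def group_search_filter(group_object_class: str, group_filter: str = '') -> str:
    """Compose the LDAP group search filter as described above:
    (&(objectClass=CLASS)(FILTER)) when LDAP_GROUP_FILTER is set,
    otherwise just (objectClass=CLASS)."""
    base = f'(objectClass={group_object_class})'
    if group_filter:
        return f'(&{base}({group_filter}))'
    return base

print(group_search_filter('group'))
# (objectClass=group)
print(group_search_filter('group', 'cn=dev*'))
# (&(objectClass=group)(cn=dev*))
```

The `cn=dev*` filter here is a hypothetical example value, not one from the manual.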
"},{"location":"deploy_pro/ldap_in_11.0/#sync-ou-as-departments","title":"Sync OU as Departments","text":"LDAP_BASE_DN.LDAP_GROUP_OBJECT_CLASS option to posixGroup. A posixGroup object in LDAP usually contains a multi-value attribute for the list of member UIDs. The name of this attribute can be set with the LDAP_GROUP_MEMBER_ATTR option. It's MemberUid by default. The value of the MemberUid attribute is an ID that can be used to identify a user, which corresponds to an attribute in the user object. The name of this ID attribute is usually uid, but can be set via the LDAP_USER_ATTR_IN_MEMBERUID option. Note that posixGroup doesn't support nested groups.
"},{"location":"deploy_pro/ldap_in_11.0/#periodical-and-manual-sync","title":"Periodical and Manual Sync","text":"LDAP_SYNC_DEPARTMENT_FROM_OU = True # Whether to enable sync departments from OU.\nLDAP_DEPT_NAME_ATTR = 'description' # Used to get the department name.\nLDAP_CREATE_DEPARTMENT_LIBRARY = False # If you decide to sync the group as a department,\n # you can set this option to \"true\". In this way, when \n # the group is synchronized for the first time, a library\n # is automatically created for the department, and the \n # library's name is the department's name.\nLDAP_DEPT_REPO_PERM = 'rw' # Set the permissions of the department repo, default permission is 'rw'.\nLDAP_DEFAULT_DEPARTMENT_QUOTA = -2 # You can set a default space quota for each department\n # when you synchronize a group for the first time. The \n # quota is set to unlimited if this option is not set.\n # Unit is MB.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\", sync process will deleted the department if not found it in LDAP server.\n[2023-03-30 18:15:05,109] [DEBUG] create group 1, and add dn pair CN=DnsUpdateProxy,CN=Users,DC=Seafile,DC=local<->1 success.\n[2023-03-30 18:15:05,145] [DEBUG] create group 2, and add dn pair CN=Domain Computers,CN=Users,DC=Seafile,DC=local<->2 success.\n[2023-03-30 18:15:05,154] [DEBUG] create group 3, and add dn pair CN=Domain Users,CN=Users,DC=Seafile,DC=local<->3 success.\n[2023-03-30 18:15:05,164] [DEBUG] create group 4, and add dn pair CN=Domain Admins,CN=Users,DC=Seafile,DC=local<->4 success.\n[2023-03-30 18:15:05,176] [DEBUG] create group 5, and add dn pair CN=RAS and IAS Servers,CN=Users,DC=Seafile,DC=local<->5 success.\n[2023-03-30 18:15:05,186] [DEBUG] create group 6, and add dn pair CN=Enterprise Admins,CN=Users,DC=Seafile,DC=local<->6 success.\n[2023-03-30 18:15:05,197] [DEBUG] create group 7, and add dn pair CN=dev,CN=Users,DC=Seafile,DC=local<->7 success.\ncd seafile-server-latest\n./pro/pro.py ldapsync\n
"},{"location":"deploy_pro/ldap_in_11.0/#advanced-ldap-integration-options","title":"Advanced LDAP Integration Options","text":""},{"location":"deploy_pro/ldap_in_11.0/#multiple-base","title":"Multiple BASE","text":"docker exec -it seafile /opt/seafile/seafile-server-latest/pro/pro.py ldapsync\nLDAP_BASE_DN option. The DNs are separated by \";\", e.g.
"},{"location":"deploy_pro/ldap_in_11.0/#additional-search-filter","title":"Additional Search Filter","text":"LDAP_BASE_DN = 'ou=developers,dc=example,dc=com;ou=marketing,dc=example,dc=com'\nLDAP_FILTER option. The value of this option follows standard LDAP search filter syntax (https://msdn.microsoft.com/en-us/library/aa746475(v=vs.85).aspx).(&($LOGIN_ATTR=*)($LDAP_FILTER)). $LOGIN_ATTR and $LDAP_FILTER will be replaced by your option values.seahub_settings.py:LDAP_FILTER = 'memberOf=CN=group,CN=developers,DC=example,DC=com'\n(&(mail=*)(memberOf=CN=group,CN=developers,DC=example,DC=com))memberOf attribute is only available in Active Directory.LDAP_FILTER option to limit user scope to a certain AD group.
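The user search filter composition described above, (&($LOGIN_ATTR=*)($LDAP_FILTER)), can be sketched the same way (a helper for intuition only, not Seafile code):

```python
def user_search_filter(login_attr: str, ldap_filter: str = '') -> str:
    """(&(LOGIN_ATTR=*)(LDAP_FILTER)) when LDAP_FILTER is set,
    otherwise just (LOGIN_ATTR=*), as described above."""
    base = f'({login_attr}=*)'
    if ldap_filter:
        return f'(&{base}({ldap_filter}))'
    return base

print(user_search_filter('mail', 'memberOf=CN=group,CN=developers,DC=example,DC=com'))
# (&(mail=*)(memberOf=CN=group,CN=developers,DC=example,DC=com))
```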
dsquery command on the domain controller. For example, if group name is 'seafilegroup', run dsquery group -name seafilegroup.seahub_settings.py:
"},{"location":"deploy_pro/ldap_in_11.0/#using-tls-connection-to-ldap-server","title":"Using TLS connection to LDAP server","text":"LDAP_FILTER = 'memberOf={output of dsquery command}'\nLDAP_SERVER_URL as the access address of the ldaps protocol to use TLS to connect to the LDAP service, for example:
"},{"location":"deploy_pro/ldap_in_11.0/#use-paged-results-extension","title":"Use paged results extension","text":"LDAP_SERVER_URL = 'ldaps://192.168.0.1:636/'\nseahub_settings.py to enable PR:
"},{"location":"deploy_pro/ldap_in_11.0/#follow-referrals","title":"Follow referrals","text":"LDAP_USE_PAGED_RESULT = True\nseahub_settings.py, e.g.:
"},{"location":"deploy_pro/ldap_in_11.0/#configure-multi-ldap-servers","title":"Configure Multi-ldap Servers","text":"LDAP_FOLLOW_REFERRALS = True\nLDAP in the options with MULTI_LDAP_1, and then add them to seahub_settings.py, for example:# Basic config options\nENABLE_LDAP = True\n......\n\n# Multi ldap config options\nENABLE_MULTI_LDAP_1 = True\nMULTI_LDAP_1_SERVER_URL = 'ldap://192.168.0.2'\nMULTI_LDAP_1_BASE_DN = 'ou=test,dc=seafile,dc=top'\nMULTI_LDAP_1_ADMIN_DN = 'administrator@example.top'\nMULTI_LDAP_1_ADMIN_PASSWORD = 'Hello@123'\nMULTI_LDAP_1_PROVIDER = 'ldap1'\nMULTI_LDAP_1_LOGIN_ATTR = 'userPrincipalName'\n\n# Optional configs\nMULTI_LDAP_1_USER_FIRST_NAME_ATTR = 'givenName'\nMULTI_LDAP_1_USER_LAST_NAME_ATTR = 'sn'\nMULTI_LDAP_1_USER_NAME_REVERSE = False\nENABLE_MULTI_LDAP_1_EXTRA_USER_INFO_SYNC = True\n\nMULTI_LDAP_1_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' \nMULTI_LDAP_1_USE_PAGED_RESULT = False\nMULTI_LDAP_1_FOLLOW_REFERRALS = True\nENABLE_MULTI_LDAP_1_USER_SYNC = True\nENABLE_MULTI_LDAP_1_GROUP_SYNC = True\nMULTI_LDAP_1_SYNC_DEPARTMENT_FROM_OU = True\n\nMULTI_LDAP_1_USER_OBJECT_CLASS = 'person'\nMULTI_LDAP_1_DEPT_ATTR = ''\nMULTI_LDAP_1_UID_ATTR = ''\nMULTI_LDAP_1_CONTACT_EMAIL_ATTR = ''\nMULTI_LDAP_1_USER_ROLE_ATTR = ''\nMULTI_LDAP_1_AUTO_REACTIVATE_USERS = True\n\nMULTI_LDAP_1_GROUP_OBJECT_CLASS = 'group'\nMULTI_LDAP_1_GROUP_FILTER = ''\nMULTI_LDAP_1_GROUP_MEMBER_ATTR = 'member'\nMULTI_LDAP_1_GROUP_UUID_ATTR = 'objectGUID'\nMULTI_LDAP_1_CREATE_DEPARTMENT_LIBRARY = False\nMULTI_LDAP_1_DEPT_REPO_PERM = 'rw'\nMULTI_LDAP_1_DEFAULT_DEPARTMENT_QUOTA = -2\nMULTI_LDAP_1_SYNC_GROUP_AS_DEPARTMENT = False\nMULTI_LDAP_1_USE_GROUP_MEMBER_RANGE_QUERY = False\nMULTI_LDAP_1_USER_ATTR_IN_MEMBERUID = 'uid'\nMULTI_LDAP_1_DEPT_NAME_ATTR = ''\n......\n
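The renaming rule above ("replace LDAP in the options with MULTI_LDAP_1") can be expressed mechanically; note that ENABLE_* options keep their ENABLE_ prefix and gain the index after MULTI_LDAP. This helper is hypothetical, for illustration only:

```python
def multi_ldap_option(option: str, index: int = 1) -> str:
    """Rewrite a base LDAP option name for an extra LDAP server, e.g.
    LDAP_SERVER_URL -> MULTI_LDAP_1_SERVER_URL and
    ENABLE_LDAP -> ENABLE_MULTI_LDAP_1 (sketch, not part of Seafile)."""
    tag = f'MULTI_LDAP_{index}'
    if option == 'ENABLE_LDAP':
        return f'ENABLE_{tag}'
    if option.startswith('ENABLE_LDAP_'):
        return f'ENABLE_{tag}_{option[len("ENABLE_LDAP_"):]}'
    if option.startswith('LDAP_'):
        return f'{tag}_{option[len("LDAP_"):]}'
    return option

print(multi_ldap_option('LDAP_SERVER_URL'))        # MULTI_LDAP_1_SERVER_URL
print(multi_ldap_option('ENABLE_LDAP_GROUP_SYNC')) # ENABLE_MULTI_LDAP_1_GROUP_SYNC
```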
"},{"location":"deploy_pro/ldap_in_11.0/#sso-and-ldap-users-use-the-same-uid","title":"SSO and LDAP users use the same uid","text":"# Common user sync options\nLDAP_SYNC_INTERVAL = 60\nIMPORT_NEW_USER = True # Whether to import new users when sync user\nACTIVATE_USER_WHEN_IMPORT = True # Whether to activate the user when importing new user\nDEACTIVE_USER_IF_NOTFOUND = False # Set to \"true\" if you want to deactivate a user \n # when he/she was deleted in AD server.\n\n# Common group sync options\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\", sync process will delete the group if not found it in LDAP server.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\", sync process will deleted the department if not found it in LDAP server.\nSSO_LDAP_USE_SAME_UID = True:SSO_LDAP_USE_SAME_UID = True\nLDAP_LOGIN_ATTR (not LDAP_UID_ATTR), in ADFS it is uid attribute. You need make sure you use the same attribute for the two settings.seahub_settings.py, e.g.LDAP_USER_ROLE_ATTR = 'title'\nLDAP_USER_ROLE_ATTR is the attribute field to configure roles in LDAP. You can write a custom function to map the role by creating a file seahub_custom_functions.py under conf/ and edit it like:# -*- coding: utf-8 -*-\n\n# The AD roles attribute returns a list of roles (role_list).\n# The following function use the first entry in the list.\ndef ldap_role_mapping(role):\n if 'staff' in role:\n return 'Staff'\n if 'guest' in role:\n return 'Guest'\n if 'manager' in role:\n return 'Manager'\n\n# From version 11.0.11-pro, you can define the following function\n# to calculate a role from the role_list.\ndef ldap_role_list_mapping(role_list):\n if not role_list:\n return ''\n for role in role_list:\n if 'staff' in role:\n return 'Staff'\n if 'guest' in role:\n return 'Guest'\n if 'manager' in role:\n return 'Manager'\n# Under Ubuntu\nvi /etc/memcached.conf\n\n# Start with a cap of 64 megs of memory. 
It's reasonable, and the daemon default\n# Note that the daemon will grow to this size, but does not start out holding this much\n# memory\n# -m 64\n-m 256\n\n# Specify which IP address to listen on. The default is to listen on all IP addresses\n# This parameter is one of the only security measures that memcached has, so make sure\n# it's listening on a firewalled interface.\n-l 0.0.0.0\n\nservice memcached restart\n# For Ubuntu\nsudo apt-get install keepalived -y\n/etc/keepalived/keepalived.conf.cat /etc/keepalived/keepalived.conf\n\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node1\n vrrp_mcast_group4 224.0.100.19\n}\nvrrp_script chk_memcached {\n script \"killall -0 memcached && exit 0 || exit 1\"\n interval 1\n weight -5\n}\n\nvrrp_instance VI_1 {\n state MASTER\n interface ens33\n virtual_router_id 51\n priority 100\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass hello123\n }\n virtual_ipaddress {\n 192.168.1.113/24 dev ens33\n }\n track_script {\n chk_memcached\n }\n}\ncat /etc/keepalived/keepalived.conf\n\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node2\n vrrp_mcast_group4 224.0.100.19\n}\nvrrp_script chk_memcached {\n script \"killall -0 memcached && exit 0 || exit 1\"\n interval 1\n weight -5\n}\n\nvrrp_instance VI_1 {\n state BACKUP\n interface ens33\n virtual_router_id 51\n priority 98\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass hello123\n }\n virtual_ipaddress {\n 192.168.1.113/24 dev ens33\n }\n track_script {\n chk_memcached\n }\n}\n
"},{"location":"deploy_pro/migrate/","title":"Migrate data between different backends","text":"
"},{"location":"deploy_pro/migrate/#create-a-new-temporary-seafileconf","title":"Create a new temporary seafile.conf","text":"[block_backend], [commit_object_backend], [fs_object_backend] options) and save it under a readable path. Let's assume that we are migrating data to S3 and create temporary seafile.conf under /optcat > seafile.conf << EOF\n[commit_object_backend]\nname = s3\nbucket = seacomm\nkey_id = ******\nkey = ******\n\n[fs_object_backend]\nname = s3\nbucket = seafs\nkey_id = ******\nkey = ******\n\n[block_backend]\nname = s3\nbucket = seablk\nkey_id = ******\nkey = ******\nEOF\n\nmv seafile.conf /opt\ncat > seafile.conf << EOF\n[commit_object_backend]\nname = fs\n# the dir configuration is the new seafile-data path\ndir = /var/data_backup\n\n[fs_object_backend]\nname = fs\n# the dir configuration is the new seafile-data path\ndir = /var/data_backup\n\n[block_backend]\nname = fs\n# the dir configuration is the new seafile-data path\ndir = /var/data_backup\n\nEOF\n\nmv seafile.conf /opt\nexport OBJECT_LIST_FILE_PATH=/path/to/object/list/file\n/path/to/object/list/file.commit,/path/to/object/list/file.fs, /path/to/object/list/file.blocks.nworker and maxsize variables in the following code:class ThreadPool(object):\n\ndef __init__(self, do_work, nworker=20):\n self.do_work = do_work\n self.nworker = nworker\n self.task_queue = Queue.Queue(maxsize = 2000)\n--decrypt option, which will decrypt the data while reading it, and then write the unencrypted data to the new backend. Note that you need add this option in all stages of the migration.
"},{"location":"deploy_pro/migrate/#run-migratesh-to-initially-migrate-objects","title":"Run migrate.sh to initially migrate objects","text":"cd ~/haiwen/seafile-server-latest\n./migrate.sh /opt --decrypt\n~/haiwen, enter ~/haiwen/seafile-server-latest and run migrate.sh with parent path of temporary seafile.conf as parameter, here is /opt.cd ~/haiwen/seafile-server-latest\n./migrate.sh /opt\n
"},{"location":"deploy_pro/migrate/#replace-the-original-seafileconf","title":"Replace the original seafile.conf","text":"cd ~/haiwen/seafile-server-latest\n./migrate.sh /opt\nmv /opt/seafile.conf ~/haiwen/conf\n
"},{"location":"deploy_pro/migrate_from_seafile_community_server/#do-the-migration","title":"Do the migration","text":"sudo apt-get install poppler-utils\n/opt/seafile/seafile-server-10.0.0. /opt/seafile/./opt/seafile.tar xf seafile-pro-server_10.0.0_x86-64_Ubuntu.tar.gz\nseafile\n\u251c\u2500\u2500 seafile-license.txt\n\u251c\u2500\u2500 seafile-pro-server-10.0.0/\n\u251c\u2500\u2500 seafile-server-10.0.0/\n\u251c\u2500\u2500 ccnet/\n\u251c\u2500\u2500 seafile-data/\n\u251c\u2500\u2500 seahub-data/\n\u2514\u2500\u2500 conf/\n\u2514\u2500\u2500 logs/\n
"},{"location":"deploy_pro/migrate_from_seafile_community_server/#do-the-migration_1","title":"Do the migration","text":"seafile-server_10.0.0_x86-64_Ubuntu.tar.gz; After uncompressing, the folder is seafile-server-10.0.0seafile-pro-server_10.0.0_x86-64_Ubuntu.tar.gz; After uncompressing, the folder is seafile-pro-server-10.0.0
cd seafile/seafile-server-10.0.0\n./seafile.sh stop\n./seahub.sh stop\n
cd seafile/seafile-pro-server-10.0.0/\n./pro/pro.py setup --migrate\n
"},{"location":"deploy_pro/migrate_from_seafile_community_server/#add-memory-cache-configuration","title":"Add Memory Cache Configuration","text":"seafile\n\u251c\u2500\u2500 seafile-license.txt\n\u251c\u2500\u2500 seafile-pro-server-10.0.0/\n\u251c\u2500\u2500 seafile-server-10.0.0/\n\u251c\u2500\u2500 ccnet/\n\u251c\u2500\u2500 seafile-data/\n\u251c\u2500\u2500 seahub-data/\n\u251c\u2500\u2500 seahub.db\n\u251c\u2500\u2500 seahub_settings.py\n\u2514\u2500\u2500 pro-data/\n# on Debian/Ubuntu 18.04+\napt-get install memcached libmemcached-dev -y\npip3 install --timeout=3600 pylibmc django-pylibmc\n\nsystemctl enable --now memcached\nseahub_settings.py.
"},{"location":"deploy_pro/migrate_from_seafile_community_server/#use-redis","title":"Use Redis","text":"CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '127.0.0.1:11211',\n },\n}\nseahub_settings.py.
"},{"location":"deploy_pro/migrate_from_seafile_community_server/#switch-back-to-community-server","title":"Switch Back to Community Server","text":"cd seafile/seafile-pro-server-10.0.0\n./seafile.sh start\n./seahub.sh start\ncd seafile/seafile-pro-server-10.0.0/\n./seafile.sh stop\n./seahub.sh stop\ncd seafile/seafile-server-10.0.0/\n./upgrade/minor-upgrade.sh\n
"},{"location":"deploy_pro/multi_institutions/","title":"Multiple Organization/Institution User Management","text":"cd haiwen/seafile-server-10.0.0/\n./seafile.sh start\n./seahub.sh start\nseahub_settings.py, add MULTI_INSTITUTION = True to enable the multi-institution feature, and add the following:# for 7.1.22 or older\nEXTRA_MIDDLEWARE_CLASSES += (\n 'seahub.institutions.middleware.InstitutionMiddleware',\n)\n\n# for 8.0.0 or newer\nEXTRA_MIDDLEWARE += (\n 'seahub.institutions.middleware.InstitutionMiddleware',\n)\n# for 7.1.22 or older\nEXTRA_MIDDLEWARE_CLASSES = (\n 'seahub.institutions.middleware.InstitutionMiddleware',\n)\n\n# for 8.0.0 or newer\nEXTRA_MIDDLEWARE = (\n 'seahub.institutions.middleware.InstitutionMiddleware',\n)\nUse the second form if EXTRA_MIDDLEWARE_CLASSES or EXTRA_MIDDLEWARE is not defined. Users whose profile.institution matches an institution's name are counted into that institution.
"},{"location":"deploy_pro/multi_tenancy/","title":"Multi-Tenancy Support","text":"SHIBBOLETH_ATTRIBUTE_MAP = {\n \"givenname\": (False, \"givenname\"),\n \"sn\": (False, \"surname\"),\n \"mail\": (False, \"contact_email\"),\n \"organization\": (False, \"institution\"),\n}\n
"},{"location":"deploy_pro/multi_tenancy/#seahub_settingspy","title":"seahub_settings.py","text":"[general]\nmulti_tenancy = true\n
"},{"location":"deploy_pro/multi_tenancy/#usage","title":"Usage","text":"CLOUD_MODE = True\nMULTI_TENANCY = True\n\nORG_MEMBER_QUOTA_ENABLED = True\n\nORG_ENABLE_ADMIN_CUSTOM_NAME = True # Default is True, meaning organization name can be customized\nORG_ENABLE_ADMIN_CUSTOM_LOGO = False # Default is False, if set to True, organization logo can be customized\n\nENABLE_MULTI_ADFS = True # Default is False, if set to True, support per organization custom ADFS/SAML2 login\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n ...\n}\n$ apt update\n$ apt install xmlsec1\n$ mkdir -p /opt/seafile/seahub-data/certs\n$ cd /opt/seafile/seahub-data/certs\n$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout sp.key -out sp.crt\ndays option indicates the validity period of the generated certificate. The unit is day. The system admin needs to update the certificate regularly.ENABLE_MULTI_ADFS = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n ...\n}\n/usr/bin/xmlsec1, you need to add the following configuration in seahub_settings.py:SAML_XMLSEC_BINARY_PATH = '/path/to/xmlsec1'\n$ which xmlsec1\n/opt/seafile/seahub-data/certs, you need to add the following configuration in seahub_settings.py:
"},{"location":"deploy_pro/multi_tenancy/#integration-with-adfssaml-single-sign-on","title":"Integration with ADFS/SAML single sign-on","text":"SAML_CERTS_DIR = '/path/to/certs'\n
"},{"location":"deploy_pro/multiple_storage_backends/#outline","title":"Outline","text":"
storage_id: an internal string ID to identify the storage class. It's not visible to users. For example \"primary storage\".name: a user-visible name for the storage class.is_default: whether this storage class is the default. This option is effective in two cases:commits: the storage for storing the commit objects for this class. It can be any storage that Seafile supports, like file system, ceph, s3.fs: the storage for storing the fs objects for this class. It can be any storage that Seafile supports, like file system, ceph, s3.blocks: the storage for storing the block objects for this class. It can be any storage that Seafile supports, like file system, ceph, s3.[storage]\nenable_storage_classes = true\nstorage_classes_file = /opt/seafile_storage_classes.json\n
seafile.conf.seafile_storage_classes.json file on your local disk in a sub-directory of the location that is mounted to the seafile container, and set the storage_classes_file configuration above to a path relative to the /shared/ directory mounted on the seafile container. seafile container in your docker-compose.yml file is similar to the following:# docker-compose.yml\nservices:\n seafile:\n container_name: seafile\n volumes:\n - /opt/seafile-data:/shared\n/opt/seafile-data (such as /opt/seafile-data/conf/) and then configure seafile.conf like so:[storage]\nenable_storage_classes = true\nstorage_classes_file = /shared/conf/seafile_storage_classes.json\nseafile.conf.[\n {\n \"storage_id\": \"hot_storage\",\n \"name\": \"Hot Storage\",\n \"is_default\": true,\n \"commits\": {\n \"backend\": \"s3\",\n \"bucket\": \"seafile-commits\",\n \"key\": \"ZjoJ8RPNDqP1vcdD60U4wAHwUQf2oJYqxN27oR09\",\n \"key_id\": \"AKIAIOT3GCU5VGCCL44A\"\n },\n \"fs\": {\n \"backend\": \"s3\",\n \"bucket\": \"seafile-fs\",\n \"key\": \"ZjoJ8RPNDqP1vcdD60U4wAHwUQf2oJYqxN27oR09\",\n \"key_id\": \"AKIAIOT3GCU5VGCCL44A\"\n },\n \"blocks\": {\n \"backend\": \"s3\",\n \"bucket\": \"seafile-blocks\",\n \"key\": \"ZjoJ8RPNDqP1vcdD60U4wAHwUQf2oJYqxN27oR09\",\n \"key_id\": \"AKIAIOT3GCU5VGCCL44A\"\n }\n },\n {\n \"storage_id\": \"cold_storage\",\n \"name\": \"Cold Storage\",\n \"is_default\": false,\n \"fs\": {\n \"backend\": \"fs\",\n \"dir\": \"/storage/seafile/seafile-data\"\n },\n \"commits\": {\n \"backend\": \"fs\",\n \"dir\": \"/storage/seafile/seafile-data\"\n },\n \"blocks\": {\n \"backend\": \"fs\",\n \"dir\": \"/storage/seafile/seafile-data\"\n }\n },\n {\n \"storage_id\": \"swift_storage\",\n \"name\": \"Swift Storage\",\n \"fs\": {\n \"backend\": \"swift\",\n \"tenant\": \"adminTenant\",\n \"user_name\": \"admin\",\n \"password\": \"openstack\",\n \"container\": \"seafile-commits\",\n \"auth_host\": \"192.168.56.31:5000\",\n \"auth_ver\": \"v2.0\"\n },\n \"commits\": {\n \"backend\": 
\"swift\",\n \"tenant\": \"adminTenant\",\n \"user_name\": \"admin\",\n \"password\": \"openstack\",\n \"container\": \"seafile-fs\",\n \"auth_host\": \"192.168.56.31:5000\",\n \"auth_ver\": \"v2.0\"\n },\n \"blocks\": {\n \"backend\": \"swift\",\n \"tenant\": \"adminTenant\",\n \"user_name\": \"admin\",\n \"password\": \"openstack\",\n \"container\": \"seafile-blocks\",\n \"auth_host\": \"192.168.56.31:5000\",\n \"auth_ver\": \"v2.0\",\n \"region\": \"RegionTwo\"\n }\n },\n {\n \"storage_id\": \"ceph_storage\",\n \"name\": \"ceph Storage\",\n \"fs\": {\n \"backend\": \"ceph\",\n \"ceph_config\": \"/etc/ceph/ceph.conf\",\n \"pool\": \"seafile-fs\"\n },\n \"commits\": {\n \"backend\": \"ceph\",\n \"ceph_config\": \"/etc/ceph/ceph.conf\",\n \"pool\": \"seafile-commits\"\n },\n \"blocks\": {\n \"backend\": \"ceph\",\n \"ceph_config\": \"/etc/ceph/ceph.conf\",\n \"pool\": \"seafile-blocks\"\n }\n }\n]\ncommits, fs and blocks information syntax is similar to what is used in the [commit_object_backend], [fs_object_backend] and [block_backend] sections of seafile.conf. Refer to the detailed syntax in the documentation for the storage you use. For example, if you use S3 storage, refer to S3 Storage.fs, commits or blocks, you must explicitly provide the path for the seafile-data directory. The objects will be stored in storage/commits, storage/fs, storage/blocks under this path.
"},{"location":"deploy_pro/multiple_storage_backends/#user-chosen","title":"User Chosen","text":"ENABLE_STORAGE_CLASSES = True\nSTORAGE_CLASS_MAPPING_POLICY = 'USER_SELECT'\nIf STORAGE_CLASS_MAPPING_POLICY is not set in seahub_settings.py, this policy is used by default.storage_ids is added to the role configuration in seahub_settings.py to assign storage classes to each role. If only one storage class is assigned to a role, the users with this role cannot choose a storage class for libraries; otherwise, the users can choose a storage class if more than one class is assigned. If no storage class is assigned to a role, the default class specified in the JSON file will be used.
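The selection rules above can be sketched as a small helper. This is illustrative only; `storage_choice` and its return shape are not part of Seafile:

```python
def storage_choice(role_storage_ids, default_id):
    """Return (available class IDs, whether the user gets a picker).

    Sketch of the rules above: no classes assigned to the role -> the
    JSON file's default class is used and the user cannot choose; one
    class -> that class with no choice; several -> the user picks one.
    """
    if not role_storage_ids:
        return [default_id], False
    return list(role_storage_ids), len(role_storage_ids) > 1
```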
"},{"location":"deploy_pro/multiple_storage_backends/#library-id-based-mapping","title":"Library ID Based Mapping","text":"ENABLE_STORAGE_CLASSES = True\nSTORAGE_CLASS_MAPPING_POLICY = 'ROLE_BASED'\n\nENABLED_ROLE_PERMISSIONS = {\n 'default': {\n 'can_add_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_invite_guest': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'storage_ids': ['old_version_id', 'hot_storage', 'cold_storage', 'a_storage'],\n },\n 'guest': {\n 'can_add_repo': True,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_invite_guest': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'storage_ids': ['hot_storage', 'cold_storage'],\n },\n}\nSTORAGE_CLASS_MAPPING_POLICY = 'REPO_ID_MAPPING'\nfor_new_library to the backends which are expected to store new libraries in json file:
"},{"location":"deploy_pro/multiple_storage_backends/#multiple-storage-backend-data-migration","title":"Multiple Storage Backend Data Migration","text":"[\n{\n\"storage_id\": \"new_backend\",\n\"name\": \"New store\",\n\"for_new_library\": true,\n\"is_default\": false,\n\"fs\": {\"backend\": \"fs\", \"dir\": \"/storage/seafile/new-data\"},\n\"commits\": {\"backend\": \"fs\", \"dir\": \"/storage/seafile/new-data\"},\n\"blocks\": {\"backend\": \"fs\", \"dir\": \"/storage/seafile/new-data\"}\n}\n]\nmigrate-repo.sh script to migrate library data between different storage backends../migrate-repo.sh [repo_id] origin_storage_id destination_storage_id\n
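When many libraries need to be moved, you can generate one migrate-repo.sh invocation per library ID and run them in sequence. A minimal sketch (the helper name and the list-of-IDs input are assumptions, not part of Seafile):

```python
def migrate_commands(repo_ids, src_storage_id, dst_storage_id):
    # Build one ./migrate-repo.sh call per library, skipping blank entries.
    return [
        f"./migrate-repo.sh {rid} {src_storage_id} {dst_storage_id}"
        for rid in repo_ids
        if rid.strip()
    ]
```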
OBJECT_LIST_FILE_PATH environment variable to specify a path prefix to store the migrated object list.export OBJECT_LIST_FILE_PATH=/opt/test\ntest_4c731e5c-f589-4eaa-889f-14c00d4893cb.fs test_4c731e5c-f589-4eaa-889f-14c00d4893cb.commits test_4c731e5c-f589-4eaa-889f-14c00d4893cb.blocks Setting the OBJECT_LIST_FILE_PATH environment variable has two purposes:
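Given the path prefix and a repo ID, the object-list files follow the naming shown above (prefix, underscore, repo ID, then a per-object-type extension). A small helper to predict the file names (illustrative only):

```python
def object_list_files(path_prefix, repo_id):
    # e.g. prefix /opt/test and repo 4c73... ->
    #   /opt/test_4c73....fs, /opt/test_4c73....commits, /opt/test_4c73....blocks
    return [f"{path_prefix}_{repo_id}.{kind}" for kind in ("fs", "commits", "blocks")]
```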
"},{"location":"deploy_pro/multiple_storage_backends/#delete-all-objects-in-a-library-in-the-specified-storage-backend","title":"Delete All Objects In a Library In The Specified Storage Backend","text":"remove-objs.sh script (before migration, you need to set the OBJECT_LIST_FILE_PATH environment variable) to delete all objects in a library in the specified storage backend.
"},{"location":"deploy_pro/office_web_app/","title":"Office Online Server","text":"./remove-objs.sh repo_id storage_id\n# Enable Office Online Server\nENABLE_OFFICE_WEB_APP = True\n\n# Url of Office Online Server's discovery page\n# The discovery page tells Seafile how to interact with Office Online Server when viewing files online\n# You should change `http://example.office-web-app.com` to your actual Office Online Server server address\nOFFICE_WEB_APP_BASE_URL = 'http://example.office-web-app.com/hosting/discovery'\n\n# Expiration of WOPI access token\n# WOPI access token is a string used by Seafile to determine the file's\n# identity and permissions when Office Online Server is used to view it online\n# For security reasons, this token should expire after a set time period\nWOPI_ACCESS_TOKEN_EXPIRATION = 60 * 60 * 24 # seconds\n\n# List of file formats that you want to view through Office Online Server\n# You can change this value according to your preferences\n# And of course you should make sure your Office Online Server supports previewing\n# the files with the specified extensions\nOFFICE_WEB_APP_FILE_EXTENSION = ('ods', 'xls', 'xlsb', 'xlsm', 'xlsx','ppsx', 'ppt',\n 'pptm', 'pptx', 'doc', 'docm', 'docx')\n\n# Enable editing files through Office Online Server\nENABLE_OFFICE_WEB_APP_EDIT = True\n\n# Types of files that should be editable through Office Online Server\n# Note, Office Online Server 2016 is needed for editing docx\nOFFICE_WEB_APP_EDIT_FILE_EXTENSION = ('xlsx', 'pptx', 'docx')\n\n\n# HTTPS authentication related (optional)\n\n# Server certificates\n# Path to a CA_BUNDLE file or directory with certificates of trusted CAs\n# NOTE: If this setting is a directory, the directory must have been processed using the c_rehash utility supplied with OpenSSL.\nOFFICE_WEB_APP_SERVER_CA = '/path/to/certfile'\n\n\n# Client certificates\n# You can specify a single file (containing the private key and the certificate) to use as client side 
certificate\nOFFICE_WEB_APP_CLIENT_PEM = 'path/to/client.pem'\n\n# or you can specify these two file path to use as client side certificate\nOFFICE_WEB_APP_CLIENT_CERT = 'path/to/client.cert'\nOFFICE_WEB_APP_CLIENT_KEY = 'path/to/client.key'\n./seafile.sh restart\n./seahub.sh restart\n
role_quota is used to set quota for a certain role of users. For example, we can set the quota of employee to 100G by adding 'role_quota': '100g', and leave other roles of users at the default quota.can_add_public_repo sets whether a role can create a public library; default is \"False\". Note: the can_add_public_repo option will not take effect if you configure global CLOUD_MODE = True.storage_ids permission is used for assigning storage backends to users with a specific role. More details can be found in multiple storage backends.upload_rate_limit and download_rate_limit are added to limit upload and download speed for users with different roles. After configuring the rate limit, run the following command in the seafile-server-latest directory to make the configuration take effect:./seahub.sh python-env python3 seahub/manage.py set_user_role_upload_download_rate_limit\ncan_share_repo is added to limit users' ability to share a library.default and guest; a default user is a normal user with the following permissions: 'default': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': False,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': True,\n 'upload_rate_limit': 0, # unit: kb/s\n 'download_rate_limit': 0,\n },\n
"},{"location":"deploy_pro/roles_permissions/#edit-build-in-roles","title":"Edit built-in roles","text":" 'guest': {\n 'can_add_repo': False,\n 'can_share_repo': False,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_send_share_link_mail': False,\n 'can_invite_guest': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': False,\n 'upload_rate_limit': 0,\n 'download_rate_limit': 0,\n },\nseahub_settings.py with corresponding permissions set to True.
"},{"location":"deploy_pro/roles_permissions/#more-about-guest-invitation-feature","title":"More about guest invitation feature","text":"ENABLED_ROLE_PERMISSIONS = {\n 'default': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': True,\n 'upload_rate_limit': 2000, # unit: kb/s\n 'download_rate_limit': 4000,\n },\n 'guest': {\n 'can_add_repo': False,\n 'can_share_repo': False,\n 'can_add_group': False,\n 'can_view_org': True,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_send_share_link_mail': False,\n 'can_invite_guest': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': False,\n 'upload_rate_limit': 100,\n 'download_rate_limit': 200,\n }\n}\ncan_invite_guest permission can invite people outside of the organization as guest.can_invite_guest permission to the user, add the following line to seahub_settings.py,ENABLE_GUEST_INVITATION = True\n\n# invitation expire time\nINVITATIONS_TOKEN_AGE = 72 # hours\ncan_invite_guest permission will see \"Invite People\" section at sidebar of home page.INVITATION_ACCEPTER_BLACKLIST = [\"a@a.com\", \"*@a-a-a.com\", r\".*@(foo|bar).com\", ]\nemployee can invite guest and can create public library and have all other permissions a default user has, you can add following lines to seahub_settings.py
"},{"location":"deploy_pro/saml2_in_10.0/","title":"SAML 2.0 in version 10.0+","text":"ENABLED_ROLE_PERMISSIONS = {\n 'default': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': False,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': True,\n 'upload_rate_limit': 2000, # unit: kb/s\n 'download_rate_limit': 4000,\n },\n 'guest': {\n 'can_add_repo': False,\n 'can_share_repo': False,\n 'can_add_group': False,\n 'can_view_org': False,\n 'can_add_public_repo': False,\n 'can_use_global_address_book': False,\n 'can_generate_share_link': False,\n 'can_generate_upload_link': False,\n 'can_send_share_link_mail': False,\n 'can_invite_guest': False,\n 'can_connect_with_android_clients': False,\n 'can_connect_with_ios_clients': False,\n 'can_connect_with_desktop_clients': False,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': False,\n 'upload_rate_limit': 100,\n 'download_rate_limit': 200,\n },\n 'employee': {\n 'can_add_repo': True,\n 'can_share_repo': True,\n 'can_add_group': True,\n 'can_view_org': True,\n 'can_add_public_repo': True,\n 'can_use_global_address_book': True,\n 'can_generate_share_link': True,\n 'can_generate_upload_link': True,\n 'can_send_share_link_mail': True,\n 'can_invite_guest': True,\n 'can_connect_with_android_clients': True,\n 'can_connect_with_ios_clients': True,\n 'can_connect_with_desktop_clients': True,\n 'storage_ids': [],\n 'role_quota': '',\n 'can_publish_repo': True,\n 'upload_rate_limit': 500,\n 'download_rate_limit': 800,\n },\n}\n$ apt update\n$ apt install xmlsec1\n$ apt install dnsutils # For multi-tenancy feature\n$ mkdir -p 
/opt/seafile/seahub-data/certs\n$ cd /opt/seafile/seahub-data/certs\n$ openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout sp.key -out sp.crt\ndays option indicates the validity period of the generated certificate. The unit is day. The system admin needs to update the certificate regularly./opt/seafile/seahub-data/certs).SAML_REMOTE_METADATA_URL option in seahub_settings.py, e.g.:SAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx' # copy from SAML app\nENABLE_ADFS_LOGIN, LOGIN_REDIRECT_URL and SAML_ATTRIBUTE_MAPPING options to seahub_settings.py, and then restart Seafile, e.g:ENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n 'seafile_groups': ('', ), # Optional, set this attribute if you need to synchronize groups/departments.\n ...\n\n}\nSAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx' # copy from SAML app\n/usr/bin/xmlsec1, you need to add the following configuration in seahub_settings.py:SAML_XMLSEC_BINARY_PATH = '/path/to/xmlsec1'\n$ which xmlsec1\n/opt/seafile/seahub-data/certs, you need to add the following configuration in seahub_settings.py:SAML_CERTS_DIR = '/path/to/certs'\nSingle Sign-On, and use the user assigned to SAML app to perform a SAML login test.
temp.adfs.com as the ADFS domain name example and demo.seafile.com as the Seafile server domain name example.
/opt/seafile/seahub-data/certs).ENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n 'seafile_groups': ('', ), # Optional, set this attribute if you need to synchronize groups/departments.\n ...\n}\nSAML_REMOTE_METADATA_URL = 'https://temp.adfs.com/federationmetadata/2007-06/federationmetadata.xml' # The format of the ADFS federation metadata URL is: `https://{your ADFS domain name}/federationmetadata/2007-06/federationmetadata.xml`\n
https://example.com/saml2/metadata/, e.g.:
Seafile, under Notes type a description for this relying party trust, and then click Next.
Seafile Claim rule). Click the Outgoing claim type dropdown menu and select Name ID. Click the Outgoing name ID format dropdown menu and select Email. And then click Finish.Single Sign-On to perform an ADFS login test../seaf-gen-key.sh -h will print the following usage information: usage :\nseaf-gen-key.sh\n -p <file path to write key iv, default ./seaf-key.txt>\n[store_crypt]\nkey_path = <the key file path generated in the previous section>\n
"},{"location":"deploy_pro/seaf_encrypt/#edit-config-files","title":"Edit Config Files","text":"cd seafile-server-latest\ncp -r conf conf-enc\nmkdir seafile-data-enc\ncp -r seafile-data/library-template seafile-data-enc\n# If you use SQLite database\ncp seafile-data/seafile.db seafile-data-enc/\n
"},{"location":"deploy_pro/seaf_encrypt/#migrate-the-data","title":"Migrate the Data","text":"[store_crypt]\nkey_path = <the key file path generated in the previous section>\n./seaf-encrypt.sh -f ../conf-enc -e ../seafile-data-enc\nStarting seaf-encrypt, please wait ...\n[04/26/16 06:59:40] seaf-encrypt.c(444): Start to encrypt 57 block among 12 repo.\n[04/26/16 06:59:40] seaf-encrypt.c(444): Start to encrypt 102 fs among 12 repo.\n[04/26/16 06:59:41] seaf-encrypt.c(454): Success encrypt all fs.\n[04/26/16 06:59:40] seaf-encrypt.c(444): Start to encrypt 66 commit among 12 repo.\n[04/26/16 06:59:41] seaf-encrypt.c(454): Success encrypt all commit.\n[04/26/16 06:59:41] seaf-encrypt.c(454): Success encrypt all block.\nseaf-encrypt run done\nDone.\nmv conf conf-bak\nmv seafile-data seafile-data-bak\nmv conf-enc conf\nmv seafile-data-enc seafile-data\n
"},{"location":"deploy_pro/seafile_professional_sdition_software_license_agreement/#3-no-derivative-works","title":"3. NO DERIVATIVE WORKS","text":"seafile-data folder) and user avatars as well as thumbnails (located in seahub-data folder) on NFS. Here we'll provide a tutorial about how and what to share.
/data/haiwen, after you run the setup script there should be a seafile-data and seahub-data directory in it. Supposing you mount the NFS drive on /seafile-nfs, you should follow a few steps:
seafile-data and seahub-data folder to /seafile-nfs:mv /data/haiwen/seafile-data /seafile-nfs/\nmv /data/haiwen/seahub-data /seafile-nfs/\n
seafile-data and seahub-data folder cd /data/haiwen\nln -s /seafile-nfs/seafile-data /data/haiwen/seafile-data\nln -s /seafile-nfs/seahub-data /data/haiwen/seahub-data\nseafile-data and seahub-data folder. All other config files and log files will remain independent.
boto library. It's needed to access S3 service.# Version 10.0 or earlier\nsudo pip install boto\n\n# Since 11.0 version\nsudo pip install boto3\n
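The version split above can be captured in a small helper. The function is illustrative, not part of Seafile:

```python
def s3_python_dependency(seafile_version: str) -> str:
    # Seafile 10.0 or earlier needs boto; 11.0 and later need boto3.
    major = int(seafile_version.split(".")[0])
    return "boto3" if major >= 11 else "boto"
```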
seafile.conf, add the following lines:[commit_object_backend]\nname = s3\nbucket = my-commit-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = eu-central-1\n\n[fs_object_backend]\nname = s3\nbucket = my-fs-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = eu-central-1\n\n[block_backend]\nname = s3\nbucket = my-block-objects\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = eu-central-1\n
bucket: It's required to create separate buckets for commit, fs, and block objects. When creating your buckets on S3, please first read S3 bucket naming rules. Note especially not to use UPPERCASE letters in bucket names (don't use camel style names, such as MyCommitObjects).key_id and key: The key_id and key are required to authenticate you to S3. You can find the key_id and key in the \"security credentials\" section on your AWS account page.use_v4_signature: There are two versions of authentication protocols that can be used with S3 storage. Version 2 is the older one, which may still be supported by some regions; version 4 is the current one used by most regions. If you don't set this option, Seafile will use v2 protocol. It's suggested to use v4 protocol.aws_region: If you use v4 protocol, set this option to the region you chose when you created the buckets. If it's not set and you're using v4 protocol, Seafile will use us-east-1 as the default. This option will be ignored if you use v2 protocol.
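A quick local check against the core constraints in the S3 bucket naming rules mentioned above. This is a simplified subset (the full AWS rules also forbid IP-address-like names, certain prefixes, and consecutive dots), so treat it as a sanity check only:

```python
import re

def is_valid_bucket_name(name: str) -> bool:
    # Simplified subset: 3-63 characters, lowercase letters, digits and
    # hyphens only, starting and ending with a letter or digit.
    return bool(re.fullmatch(r"[a-z0-9][a-z0-9-]{1,61}[a-z0-9]", name))
```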
"},{"location":"deploy_pro/setup_with_amazon_s3/#use-server-side-encryption-with-customer-provided-keys-sse-c","title":"Use server-side encryption with customer-provided keys (SSE-C)","text":"[s3]\nuse-sigv4 = True\n[commit_object_backend]\nname = s3\n......\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\n\n[fs_object_backend]\nname = s3\n......\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\n\n[block_backend]\nname = s3\n......\nuse_v4_signature = true\nuse_https = true\nsse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P\nsse_c_key is a 32-byte random string.seafile.conf, add the following lines:[commit_object_backend]\nname = s3\nbucket = my-commit-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\n# v2 authentication protocol will be used if not set\nuse_v4_signature = true\n# required for v4 protocol. ignored for v2 protocol.\naws_region = <region name for storage provider>\n\n[fs_object_backend]\nname = s3\nbucket = my-fs-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = <region name for storage provider>\n\n[block_backend]\nname = s3\nbucket = my-block-objects\nhost = <access endpoint for storage provider>\nkey_id = your-key-id\nkey = your-secret-key\nuse_v4_signature = true\naws_region = <region name for storage provider>\n
host: The endpoint by which you access the storage service. Usually it starts with the region name. It's required to provide the host address, otherwise Seafile will use AWS's address.bucket: It's required to create separate buckets for commit, fs, and block objects.key_id and key: The key_id and key are required to authenticate you to S3 storage.use_v4_signature: There are two versions of authentication protocols that can be used with S3 storage. Version 2 is the older one, which may still be supported by some cloud providers; version 4 is the current one used by Amazon S3 and is supported by most providers. If you don't set this option, Seafile will use v2 protocol. It's suggested to use v4 protocol.aws_region: If you use v4 protocol, set this option to the region you chose when you create the buckets. If it's not set and you're using v4 protocol, Seafile will use us-east-1 as the default. This option will be ignored if you use v2 protocol.
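For the SSE-C setup earlier, sse_c_key must be a 32-character random string (the sample value above is such a string). One way to generate one (a sketch; any cryptographically secure random source works):

```python
import secrets
import string

def gen_sse_c_key(length: int = 32) -> str:
    # Random alphanumeric string suitable for the sse_c_key option.
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))
```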
"},{"location":"deploy_pro/setup_with_amazon_s3/#self-hosted-s3-storage","title":"Self-hosted S3 Storage","text":"[s3]\nuse-sigv4 = True\n[commit_object_backend]\nname = s3\nbucket = my-commit-objects\nkey_id = your-key-id\nkey = your-secret-key\nhost = 192.168.1.123:8080\npath_style_request = true\n\n[fs_object_backend]\nname = s3\nbucket = my-fs-objects\nkey_id = your-key-id\nkey = your-secret-key\nhost = 192.168.1.123:8080\npath_style_request = true\n\n[block_backend]\nname = s3\nbucket = my-block-objects\nkey_id = your-key-id\nkey = your-secret-key\nhost = 192.168.1.123:8080\npath_style_request = true\n
host: It is the address and port of the S3-compatible service. You cannot prepend \"http\" or \"https\" to the host option. By default it will use http connections. If you want to use https connections, please set the use_https = true option.bucket: It's required to create separate buckets for commit, fs, and block objects.key_id and key: The key_id and key are required to authenticate you to S3 storage.path_style_request: This option asks Seafile to use URLs like https://192.168.1.123:8080/bucketname/object to access objects. In Amazon S3, the default URL format is in virtual host style, such as https://bucketname.s3.amazonaws.com/object. But this style relies on advanced DNS server setup, so most self-hosted storage systems only implement the path-style format. We recommend setting this option to true.
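The difference between the two URL styles described above can be illustrated with a small formatter (illustrative only; Seafile constructs these URLs internally):

```python
def object_url(host, bucket, key, path_style=True, use_https=False):
    # path-style:         http(s)://host/bucket/key
    # virtual-host style: http(s)://bucket.host/key
    scheme = "https" if use_https else "http"
    if path_style:
        return f"{scheme}://{host}/{bucket}/{key}"
    return f"{scheme}://{bucket}.{host}/{key}"
```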
"},{"location":"deploy_pro/setup_with_amazon_s3/#use-https-connections-to-s3","title":"Use HTTPS connections to S3","text":"use_v4_signature: There are two versions of authentication protocols that can be used with S3 storage. Version 2 is the protocol supported by most self-hosted storage; version 4 is the current protocol used by AWS S3, but may not be supported by some self-hosted storage. If you don't set this option, Seafile will use v2 protocol. We recommend trying v2 first and, if it doesn't work, v4.aws_region: If you use v4 protocol, set this option to the region you chose when you created the buckets. If it's not set and you're using v4 protocol, Seafile will use us-east-1 as the default. This option will be ignored if you use v2 protocol.[commit_object_backend]\nname = s3\n......\nuse_https = true\n\n[fs_object_backend]\nname = s3\n......\nuse_https = true\n\n[block_backend]\nname = s3\n......\nuse_https = true\nsudo mkdir -p /etc/pki/tls/certs\nsudo cp /etc/ssl/certs/ca-certificates.crt /etc/pki/tls/certs/ca-bundle.crt\nsudo ln -s /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/cert.pem\n./seafile.sh start and ./seahub.sh start and visit the website.
"},{"location":"deploy_pro/setup_with_ceph/#install-and-enable-memcached","title":"Install and enable memcached","text":"seafile-machine# sudo scp user@ceph-admin-node:/etc/ceph/ /etc\nsudo apt-get install python3-rados\nsudo apt-get install python-ceph\n
"},{"location":"deploy_pro/setup_with_ceph/#edit-seafile-configuration","title":"Edit seafile configuration","text":"sudo yum install python-rados\nseafile.conf, add the following lines:[block_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\npool = seafile-blocks\n\n[commit_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\npool = seafile-commits\n\n[fs_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\npool = seafile-fs\n
"},{"location":"deploy_pro/setup_with_ceph/#troubleshooting-librados-incompatibility-issues","title":"Troubleshooting librados incompatibility issues","text":"ceph-admin-node# rados mkpool seafile-blocks\nceph-admin-node# rados mkpool seafile-commits\nceph-admin-node# rados mkpool seafile-fs\n
"},{"location":"deploy_pro/setup_with_ceph/#use-arbitary-ceph-user","title":"Use arbitrary Ceph user","text":"cd seafile-server-latest/seafile/lib\nrm librados.so.2 libstdc++.so.6 libnspr4.so\nceph_client_id option to seafile.conf, as the following:[block_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Specify Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-blocks\n\n[commit_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Specify Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-commits\n\n[fs_object_backend]\nname = ceph\nceph_config = /etc/ceph/ceph.conf\n# Specify Ceph user for Seafile here\nceph_client_id = seafile\npool = seafile-fs\n\n# Memcached or Redis configs\n......\nceph auth add client.seafile \\\n mds 'allow' \\\n mon 'allow r' \\\n osd 'allow rwx pool=seafile-blocks, allow rwx pool=seafile-commits, allow rwx pool=seafile-fs'\n
"},{"location":"deploy_pro/setup_with_oss/","title":"Setup With Alibaba OSS","text":""},{"location":"deploy_pro/setup_with_oss/#prepare","title":"Prepare","text":"[client.seafile]\nkeyring = <path to user's keyring file>\n
"},{"location":"deploy_pro/setup_with_oss/#modify-seafileconf","title":"Modify Seafile.conf","text":"oss2 library: sudo pip install oss2==2.3.0.For more installation help, please refer to this document.seafile.conf, add the following lines:[commit_object_backend]\nname = oss\nbucket = <your-seafile-commits-bucket>\nkey_id = <your-key-id>\nkey = <your-key>\nregion = beijing\n\n[fs_object_backend]\nname = oss\nbucket = <your-seafile-fs-bucket>\nkey_id = <your-key-id>\nkey = <your-key>\nregion = beijing\n\n[block_backend]\nname = oss\nbucket = <your-seafile-blocks-bucket>\nkey_id = <your-key-id>\nkey = <your-key>\nregion = beijing\n[commit_object_backend]\nname = oss\nbucket = <your-seafile-commits-bucket>\nkey_id = <your-key-id>\nkey = <your-key>\nendpoint = vpc100-oss-cn-beijing.aliyuncs.com\n\n[fs_object_backend]\nname = oss\nbucket = <your-seafile-fs-bucket>\nkey_id = <your-key-id>\nkey = <your-key>\nendpoint = vpc100-oss-cn-beijing.aliyuncs.com\n\n[block_backend]\nname = oss\nbucket = <your-seafile-blocks-bucket>\nkey_id = <your-key-id>\nkey = <your-key>\nendpoint = vpc100-oss-cn-beijing.aliyuncs.com\nendpoint option to replace the region option. The corresponding endpoint address can be found at https://www.alibabacloud.com/help/en/object-storage-service/latest/regions-and-endpoints.endpoint is a general option, you can also set it to the OSS access address under the classic network, and it will work as well.
"},{"location":"deploy_pro/setup_with_swift/","title":"Setup With OpenStack Swift","text":"[commit_object_backend]\nname = oss\n......\nuse_https = true\n\n[fs_object_backend]\nname = oss\n......\nuse_https = true\n\n[block_backend]\nname = oss\n......\nuse_https = true\n
"},{"location":"deploy_pro/setup_with_swift/#modify-seafileconf","title":"Modify Seafile.conf","text":"seafile.conf, add the following lines:[block_backend]\nname = swift\ntenant = yourTenant\nuser_name = user\npassword = secret\ncontainer = seafile-blocks\nauth_host = 192.168.56.31:5000\nauth_ver = v3.0\nregion = yourRegion\n\n[commit_object_backend]\nname = swift\ntenant = yourTenant\nuser_name = user\npassword = secret\ncontainer = seafile-commits\nauth_host = 192.168.56.31:5000\nauth_ver = v3.0\nregion = yourRegion\n\n[fs_object_backend]\nname = swift\ntenant = yourTenant\nuser_name = user\npassword = secret\ncontainer = seafile-fs\nauth_host = 192.168.56.31:5000\nauth_ver = v3.0\nregion = yourRegion\nauth_host option is the address and port of the Keystone service.The region option is used to select the publicURL; if it's not configured, the first publicURL in the returned authentication information is used.auth_ver option should be set to v1.0; tenant and region are no longer needed.[commit_object_backend]\nname = swift\n......\nuse_https = true\n\n[fs_object_backend]\nname = swift\n......\nuse_https = true\n\n[block_backend]\nname = swift\n......\nuse_https = true\n
"},{"location":"deploy_pro/setup_with_swift/#run-and-test","title":"Run and Test","text":"sudo mkdir -p /etc/pki/tls/certs\nsudo cp /etc/ssl/certs/ca-certificates.crt /etc/pki/tls/certs/ca-bundle.crt\nsudo ln -s /etc/pki/tls/certs/ca-bundle.crt /etc/pki/tls/cert.pem\n./seafile.sh start and ./seahub.sh start and visit the website.seahub_settings.py,ENABLE_TERMS_AND_CONDITIONS = True\nseafile.conf:[virus_scan]\nscan_command = (command for checking virus)\nvirus_code = (command exit codes when file is virus)\nnonvirus_code = (command exit codes when file is not virus)\nscan_interval = (scanning interval, in unit of minutes, default to 60 minutes)\n
[virus_scan]\nscan_command = clamscan\nvirus_code = 1\nnonvirus_code = 0\ncd seafile-server-latest\n./pro/pro.py virus_scan\nscan_command should be clamdscan in seafile.conf. An example for Clamav-daemon is provided below:[virus_scan]\nscan_command = clamdscan\nvirus_code = 1\nnonvirus_code = 0\n[virus_scan]\n......\nscan_size_limit = (size limit for files to be scanned) # The unit is MB.\nscan_skip_ext = (a comma (',') separated list of file extensions to be ignored)\nthreads = (number of concurrent threads for scan, one thread for one file, default to 4)\n.bmp, .gif, .ico, .png, .jpg, .mp3, .mp4, .wav, .avi, .rmvb, .mkv\nseahub_settings.py:ENABLE_UPLOAD_LINK_VIRUS_CHECK = True\nseafile.conf:
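Putting the optional tuning keys above into one section, a hypothetical seafile.conf fragment might look like this (the values are illustrative, not recommendations):

```ini
[virus_scan]
scan_command = clamscan
virus_code = 1
nonvirus_code = 0
scan_interval = 60        # minutes between scan runs
scan_size_limit = 100     # skip files larger than 100 MB
scan_skip_ext = .mp3,.mp4,.avi
threads = 4               # one concurrent scan thread per file
```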
"},{"location":"deploy_pro/virus_scan_with_kav4fs/","title":"Virus Scan with kav4fs","text":""},{"location":"deploy_pro/virus_scan_with_kav4fs/#prerequisite","title":"Prerequisite","text":"[fileserver]\ncheck_virus_on_web_upload = true\n
"},{"location":"deploy_pro/virus_scan_with_kav4fs/#script","title":"Script","text":"<user of running seafile server> ALL=(ALL:ALL) ALL\n<user of running seafile server> ALL=NOPASSWD: /opt/kaspersky/kav4fs/bin/kav4fs-control\nkav4fs_scan.sh:#!/bin/bash\n\nTEMP_LOG_FILE=`mktemp /tmp/XXXXXXXXXX`\nVIRUS_FOUND=1\nCLEAN=0\nUNDEFINED=2\nKAV4FS='/opt/kaspersky/kav4fs/bin/kav4fs-control'\nif [ ! -x $KAV4FS ]\nthen\n echo \"Binary not executable\"\n exit $UNDEFINED\nfi\n\nsudo $KAV4FS --scan-file \"$1\" > $TEMP_LOG_FILE\nif [ \"$?\" -ne 0 ]\nthen\n echo \"Error due to check file '$1'\"\n exit 3\nfi\nTHREATS_C=`grep 'Threats found:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nRISKWARE_C=`grep 'Riskware found:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nINFECTED=`grep 'Infected:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nSUSPICIOUS=`grep 'Suspicious:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nSCAN_ERRORS_C=`grep 'Scan errors:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nPASSWORD_PROTECTED=`grep 'Password protected:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\nCORRUPTED=`grep 'Corrupted:' $TEMP_LOG_FILE|cut -d':' -f 2|sed 's/ //g'`\n\nrm -f $TEMP_LOG_FILE\n\nif [ $THREATS_C -gt 0 -o $RISKWARE_C -gt 0 -o $INFECTED -gt 0 -o $SUSPICIOUS -gt 0 ]\nthen\n exit $VIRUS_FOUND\nelif [ $SCAN_ERRORS_C -gt 0 -o $PASSWORD_PROTECTED -gt 0 -o $CORRUPTED -gt 0 ]\nthen\n exit $UNDEFINED\nelse\n exit $CLEAN\nfi\nchmod u+x kav4fs_scan.sh\n
"},{"location":"deploy_pro/virus_scan_with_kav4fs/#configuration","title":"Configuration","text":"1: found virus\n0: no virus\nother: scan failed\nseafile.conf:
"},{"location":"develop/","title":"Develop Documents","text":"[virus_scan]\nscan_command = <absolute path of kav4fs_scan.sh>\nvirus_code = 1\nnonvirus_code = 0\nscan_interval = <scanning interval, in unit of minutes, default to 60 minutes>\n
"},{"location":"develop/data_model/","title":"Data Model","text":"Repo, Commit, FS, and Block.seafile_db database and the commit objects (see description in later section).seafile_db database containing important information about each repo.
"},{"location":"develop/data_model/#commit","title":"Commit","text":"Repo: contains the ID for each repo.RepoOwner: contains the owner id for each repo.RepoInfo: it is a \"cache\" table for fast access to repo metadata stored in the commit object. It includes repo name, update time, last modifier.RepoSize: the total size of all files in the repo.RepoFileCount: the file count in the repo.RepoHead: contains the \"head commit ID\". This ID points to the head commit in the storage, which will be described in the next section.RepoHead table contains the latest head commit ID for each repo. From this head commit, we can traverse the repo history.seafile-data/storage/commits/<repo_id>. If you use object storage, commit objects are stored in the commits bucket.SeafDir Object and Seafile Object. SeafDir Object represents a directory, and Seafile Object represents a file.SeafDir object contains metadata for each file/sub-folder, which includes name, last modification time, last modifier, size, and object ID. The object ID points to another SeafDir or Seafile object. The Seafile object contains a block list, which is a list of block IDs for the file.seafile-data/storage/fs/<repo_id>. If you use object storage, fs objects are stored in the fs bucket.seafile-data/storage/blocks/<repo_id>. If you use object storage, block objects are stored in the blocks bucket.
fs and blocks storage location as its parent.commits storage location from its parent. The changes in a virtual repo and its parent repo are bidirectionally merged, so that changes on each side can be seen by the other.VirtualRepo table in seafile_db database. It contains the folder path in the parent repo for each virtual repo.
/locale/<lang-code>/LC_MESSAGES/django.po\u00a0 and \u00a0/locale/<lang-code>/LC_MESSAGES/djangojs.po/media/locales/<lang-code>/seafile-editor.json
/seafile-server-latest/seahub/locale/ru/LC_MESSAGES/django.po/seafile-server-latest/seahub/locale/ru/LC_MESSAGES/djangojs.po/seafile-server-latest/seahub/media/locales/ru/seafile-editor.json/seafile-server-latest/seahub/seahub/settings.py file and save it. LANGUAGES = (\n ...\n ('ru', '\u0420\u0443\u0441\u0441\u043a\u0438\u0439'),\n ...\n)\n/seafile-server-latest/seahub/locale/<lang-code>/LC_MESSAGES:
msgfmt -o django.mo django.pomsgfmt -o djangojs.mo djangojs.po
./seahub.sh python-env python3 seahub/manage.py compilejsi18n -l <lang-code>./seahub.sh python-env python3 seahub/manage.py collectstatic --noinput -i admin -i termsandconditions --no-post-process
"},{"location":"develop/translation/#faq","title":"FAQ","text":""},{"location":"develop/translation/#filenotfounderror","title":"FileNotFoundError","text":"FileNotFoundError occurred when executing the command manage.py collectstatic.FileNotFoundError: [Errno 2] No such file or directory: '/opt/seafile/seafile-server-latest/seahub/frontend/build'\n
STATICFILES_DIRS in /opt/seafile/seafile-server-latest/seahub/seahub/settings.py manuallysh ./seahub.sh python-env python3 seahub/manage.py collectstatic --noinput -i admin -i termsandconditions --no-post-processSTATICFILES_DIRS manuallysh ./seahub.sh restart
"},{"location":"develop/web_api_v2.1/#admin-only","title":"Admin Only","text":"
"},{"location":"docker/deploy_seafile_with_docker/","title":"Deploy Seafile with Docker","text":""},{"location":"docker/deploy_seafile_with_docker/#getting-started","title":"Getting started","text":"
"},{"location":"docker/deploy_seafile_with_docker/#install-docker","title":"Install docker","text":"/opt/seafile-data is the directory of Seafile. If you decide to put Seafile in a different directory \u2014 which you can \u2014 adjust all paths accordingly./opt/seafile-mysql and /opt/seafile-data, respectively. It is not recommended to change these paths. If you do, account for it when following these instructions..env","text":".env, seafile-server.yml and caddy.yml files for configuration.mkdir /opt/seafile\ncd /opt/seafile\n\n# Seafile CE 12.0\nwget -O .env https://manual.seafile.com/12.0/docker/docker-compose/ce/env\nwget https://manual.seafile.com/12.0/docker/docker-compose/ce/seafile-server.yml\nwget https://manual.seafile.com/12.0/docker/docker-compose/ce/caddy.yml\n\nnano .env\n
SEAFILE_VOLUME: The volume directory of Seafile data, default is /opt/seafile-dataSEAFILE_MYSQL_VOLUME: The volume directory of MySQL data, default is /opt/seafile-mysql/dbSEAFILE_CADDY_VOLUME: The volume directory of Caddy data used to store certificates obtained from Let's Encrypt, default is /opt/seafile-caddySEAFILE_MYSQL_ROOT_PASSWORD: The root password of MySQLSEAFILE_MYSQL_DB_USER: The user of MySQL (database - user can be found in conf/seafile.conf)SEAFILE_MYSQL_DB_PASSWORD: The user seafile password of MySQLJWT_PRIVATE_KEY: A random string of at least 32 characters; generation example: pwgen -s 40 1SEAFILE_SERVER_HOSTNAME: Seafile server hostname or domainSEAFILE_SERVER_PROTOCOL: Seafile server protocol (http or https)TIME_ZONE: Time zone (default UTC)SEAFILE_ADMIN_EMAIL: Admin usernameSEAFILE_ADMIN_PASSWORD: Admin password# if `.env` file is in current directory:\ndocker compose up -d\n\n# if `.env` file is elsewhere:\ndocker compose --env-file /path/to/.env up -d\nhttp://seafile.example.com to open Seafile Web UI./opt/seafile-data","text":"
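Putting the variables above together, a filled-in .env might look like this (every value is a placeholder to replace, not a default to keep):

```ini
SEAFILE_VOLUME=/opt/seafile-data
SEAFILE_MYSQL_VOLUME=/opt/seafile-mysql/db
SEAFILE_CADDY_VOLUME=/opt/seafile-caddy
SEAFILE_MYSQL_ROOT_PASSWORD=change-me-root
SEAFILE_MYSQL_DB_USER=seafile
SEAFILE_MYSQL_DB_PASSWORD=change-me-seafile
JWT_PRIVATE_KEY=<random string of at least 32 characters, e.g. from pwgen -s 40 1>
SEAFILE_SERVER_HOSTNAME=seafile.example.com
SEAFILE_SERVER_PROTOCOL=https
TIME_ZONE=Etc/UTC
SEAFILE_ADMIN_EMAIL=admin@example.com
SEAFILE_ADMIN_PASSWORD=change-me-admin
```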
"},{"location":"docker/deploy_seafile_with_docker/#find-logs","title":"Find logs","text":"/opt/seafile-data/seafile/logs/seafile.log./var/log inside the container. For example, you can find the nginx logs in /opt/seafile-data/logs/var-log/nginx/.# if the `.env` file is in current directory:\ndocker compose logs --follow\n# if the `.env` file is elsewhere:\ndocker compose --env-file /path/to/.env logs --follow\n\n# you can also specify container name:\ndocker compose logs seafile --follow\n# or, if the `.env` file is elsewhere:\ndocker compose --env-file /path/to/.env logs seafile --follow\n/shared/logs/seafile in the docker, or /opt/seafile-data/logs/seafile on the server that runs Docker./shared/logs/var-log, or /opt/seafile-data/logs/var-log on the server that runs Docker.
"},{"location":"docker/deploy_seafile_with_docker/#more-configuration-options","title":"More configuration options","text":""},{"location":"docker/deploy_seafile_with_docker/#use-an-existing-mysql-server","title":"Use an existing mysql-server","text":"sudo tail -f $(find /opt/seafile-data/ -type f -name *.log 2>/dev/null)\n.env as followsSEAFILE_MYSQL_DB_HOST=192.168.0.2\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_ROOT_PASSWORD=ROOT_PASSWORD\nSEAFILE_MYSQL_DB_PASSWORD=PASSWORD\nSEAFILE_MYSQL_ROOT_PASSWORD is needed during installation. Later, after Seafile is installed, the user seafile will be used to connect to the mysql-server (SEAFILE_MYSQL_DB_PASSWORD). You can remove the SEAFILE_MYSQL_ROOT_PASSWORD./opt/seafile-data/seafile/conf. You can modify the configurations according to Seafile manual
"},{"location":"docker/deploy_seafile_with_docker/#add-a-new-admin","title":"Add a new admin","text":"docker compose restart\ndocker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh\nmy_init, Nginx are still run as root inside docker.)NON_ROOT=true to the .env.NON_ROOT=true\n/opt/seafile-data/seafile/ permissions.chmod -R a+rwx /opt/seafile-data/seafile/\ndocker compose down\ndocker compose up -d\nseafile user. (NOTE: Later, when doing maintenance, other scripts in docker are also required to be run as seafile user, e.g. su seafile -c ./seaf-gc.sh)/scripts folder of the docker container. To perform garbage collection, simply run docker exec seafile /scripts/gc.sh. For the community edition, this process will stop the seafile server, but it is a relatively quick process and the seafile server will start automatically once the process has finished. The Professional Edition supports online garbage collection.docker exec to find errors","text":"
"},{"location":"docker/deploy_seafile_with_docker/#about-ssl-and-caddy","title":"About SSL and Caddy","text":"docker exec -it seafile /bin/bash\nlucaslorentz/caddy-docker-proxy:2.9, where the user only needs to configure the following fields in .env correctly to automatically obtain and renew the certificate:
"},{"location":"docker/non_docker_to_docker/","title":"Migrate from non-docker Seafile deployment to docker","text":"SEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=example.com\n
"},{"location":"docker/non_docker_to_docker/#prepare-mysql-and-the-folders-for-seafile-docker","title":"Prepare MySQL and the folders for Seafile docker","text":""},{"location":"docker/non_docker_to_docker/#add-permissions-to-the-local-mysql-seafile-user","title":"Add permissions to the local MySQL Seafile user","text":"systemctl stop nginx && systemctl disable nginx\nsystemctl stop memcached && systemctl disable memcached\n./seafile.sh stop && ./seahub.sh stop\nseafile as the user to access:
"},{"location":"docker/non_docker_to_docker/#create-the-required-directories-for-seafile-docker-image","title":"Create the required directories for Seafile Docker image","text":"## Note, change the password according to the actual password you use\nGRANT ALL PRIVILEGES ON *.* TO 'seafile'@'%' IDENTIFIED BY 'your-password' WITH GRANT OPTION;\n\n## Grant the seafile user access to the databases from any IP address\nGRANT ALL PRIVILEGES ON `ccnet_db`.* to 'seafile'@'%';\nGRANT ALL PRIVILEGES ON `seafile_db`.* to 'seafile'@'%';\nGRANT ALL PRIVILEGES ON `seahub_db`.* to 'seafile'@'%';\n\n## Restart MySQL\nsystemctl restart mariadb\n
"},{"location":"docker/non_docker_to_docker/#prepare-config-files","title":"Prepare config files","text":"mkdir -p /opt/seafile-data/seafile\ncp -r /opt/seafile/conf /opt/seafile-data/seafile\ncp -r /opt/seafile/seahub-data /opt/seafile-data/seafile\n/opt/seafile-data/seafile/conf, including ccnet.conf, seafile.conf, seahub_settings, change HOST=127.0.0.1 to HOST=<local ip>.seahub_settings.py to use the Docker version of Memcached: change it to 'LOCATION': 'memcached:11211' (the network name of Docker version of Memcached is memcached)./opt/seafile-data. Comment out the db part as below:
"},{"location":"docker/non_docker_to_docker/#configure-seafile-docker-to-use-the-old-seafile-data","title":"Configure Seafile Docker to use the old seafile-data","text":"services:\n# db:\n# image: mariadb:10.5\n# container_name: seafile-mysql\n# environment:\n# - MYSQL_ROOT_PASSWORD=db_dev # Required, set the root's password of MySQL service.\n# - MYSQL_LOG_CONSOLE=true\n# volumes:\n# - /opt/seafile-mysql/db:/var/lib/mysql # Required, specifies the path to MySQL data persistent store.\n# networks:\n# - seafile-net\n\n.........\n depends_on:\n# - db \n - memcached\n.........\n/opt/seafile/seafile-data) to /opt/seafile-data/seafile (So you will have /opt/seafile-data/seafile/seafile-data)/opt/seafile/seafile-data) to Seafile docker container directly:.........\n\n seafile:\n image: seafileltd/seafile-mc:8.0.7-1\n container_name: seafile\n ports:\n - \"80:80\"\n# - \"443:443\" # If https is enabled, cancel the comment.\n volumes:\n - /opt/seafile-data:/shared\n - /opt/seafile/seafile-data:/shared/seafile/seafile-data\n .......\n- /opt/seafile/seafile-data:/shared/seafile/seafile-data mount /opt/seafile/seafile-data to /shared/seafile/seafile-data in docker.
"},{"location":"docker/non_docker_to_docker/#security","title":"Security","text":"cd /opt/seafile-data\ndocker compose up -d\n<local ip> you also need to bind your database server to that IP. If this IP is public, it is strongly advised to protect your database port with a firewall. Otherwise your databases are reachable from the internet. An alternative might be to start another local IP from RFC 1597, e.g. 192.168.123.45. Afterwards you can bind to that IP.iptables -A INPUT -s 172.16.0.0/12 -j ACCEPT #Allow Docker networks\niptables -A INPUT -p tcp -m tcp --dport 3306 -j DROP #Deny Internet\nip6tables -A INPUT -p tcp -m tcp --dport 3306 -j DROP #Deny Internet\n/etc/network/interfaces something like:iface eth0 inet static\n address 192.168.123.45/32\neth0 might be ensXY. Or if you know how to start a dummy interface, that's even better./etc/sysconfig/network/ifcfg-eth0 (ethXY/ensXY/bondXY)/etc/mysql/mariadb.conf.d/50-server.cnf edit the following line to:bind-address = 192.168.123.45\n
"},{"location":"docker/seafile_docker_autostart/","title":"Seafile Docker autostart","text":"service networking reload\nip a #to check whether the ip is present\nservice mysql restart\nss -tulpen | grep 3306 #to check whether the database listens on the correct IP\ncd /opt/seafile-data/\ndocker compose down\ndocker compose up -d\n\n## restart your applications\n
vim /etc/systemd/system/docker-compose.service[Unit]\nDescription=Docker Compose Application Service\nRequires=docker.service\nAfter=docker.service\n\n[Service]\nType=forking\nRemainAfterExit=yes\nWorkingDirectory=/opt/ \nExecStart=/usr/bin/docker compose up -d\nExecStop=/usr/bin/docker compose down\nTimeoutStartSec=0\n\n[Install]\nWantedBy=multi-user.target\nWorkingDirectory is the absolute path to the docker-compose.yml file directory.
chmod 644 /etc/systemd/system/docker-compose.service\n
"},{"location":"docker/seafile_docker_autostart/#method-2","title":"Method 2","text":"systemctl daemon-reload\nsystemctl enable docker-compose.service\nrestart: unless-stopped for each container in docker-compose.yml.services:\n db:\n image: mariadb:10.11\n container_name: seafile-mysql-1\n restart: unless-stopped\n\n memcached:\n image: memcached:1.6.18\n container_name: seafile-memcached\n restart: unless-stopped\n\n elasticsearch:\n image: elasticsearch:8.6.2\n container_name: seafile-elasticsearch\n restart: unless-stopped\n\n seafile:\n image: docker.seadrive.org/seafileltd/seafile-pro-mc:11.0-latest\n container_name: seafile\n restart: unless-stopped\nrestart: unless-stopped, and the Seafile container will automatically start when Docker starts. If the Seafile container does not exist (execute docker compose down), the container will not start automatically.
"},{"location":"docker/cluster/deploy_seafile_cluster_with_docker/","title":"Seafile Docker Cluster Deployment","text":"SSL configuration$ mysql -h{your mysql host} -u[username] -p[password]\n\nmysql>\ncreate user 'seafile'@'%' identified by 'PASSWORD';\n\ncreate database `ccnet_db` character set = 'utf8';\ncreate database `seafile_db` character set = 'utf8';\ncreate database `seahub_db` character set = 'utf8';\n\nGRANT ALL PRIVILEGES ON `ccnet_db`.* to 'seafile'@'%';\nGRANT ALL PRIVILEGES ON `seafile_db`.* to 'seafile'@'%';\nGRANT ALL PRIVILEGES ON `seahub_db`.* to 'seafile'@'%';\n
"},{"location":"docker/cluster/deploy_seafile_cluster_with_docker/#deploy-seafile-service","title":"Deploy Seafile service","text":""},{"location":"docker/cluster/deploy_seafile_cluster_with_docker/#deploy-seafile-frontend-nodes","title":"Deploy seafile frontend nodes","text":"mysql>\nuse seahub_db;\nCREATE TABLE `avatar_uploaded` (\n `filename` text NOT NULL,\n `filename_md5` char(32) NOT NULL,\n `data` mediumtext NOT NULL,\n `size` int(11) NOT NULL,\n `mtime` datetime NOT NULL,\n PRIMARY KEY (`filename_md5`)\n) ENGINE=InnoDB DEFAULT CHARSET=utf8;\n$ mkdir -p /opt/seafile/shared\n$ cd /opt/seafile\n$ vim docker-compose.yml\nservices:\n seafile:\n image: docker.seadrive.org/seafileltd/seafile-pro-mc:latest\n container_name: seafile\n ports:\n - 80:80\n volumes:\n - /opt/seafile/shared:/shared\n environment:\n - CLUSTER_SERVER=true\n - CLUSTER_MODE=frontend\n - TIME_ZONE=UTC # Optional, default is UTC. Should be uncommented and set to your local time zone.\n
"},{"location":"docker/cluster/deploy_seafile_cluster_with_docker/#initial-configuration-files","title":"Initial configuration files","text":"$ cd /opt/seafile\n$ docker compose up -d\n$ docker exec -it seafile bash\n\n# cd /scripts && ./cluster_conf_init.py\n# cd /opt/seafile/conf \nCACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': 'memcached:11211',\n },\n...\n}\n |\n v\n\nCACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '{your memcached server host}:11211',\n },\n...\n}\n[INDEX FILES]\nes_port = {your elasticsearch server port}\nes_host = {your elasticsearch server host}\nexternal_es_server = true\nenabled = true\nhighlight = fvh\ninterval = 10m\n...\nSERVICE_URL = 'http{s}://{your server IP or sitename}/'\nFILE_SERVER_ROOT = 'http{s}://{your server IP or sitename}/seafhttp'\nAVATAR_FILE_STORAGE = 'seahub.base.database_storage.DatabaseStorage'\n[cluster]\nenabled = true\n
"},{"location":"docker/cluster/deploy_seafile_cluster_with_docker/#import-the-tables-of-seahub_db-seafile_db-and-ccnet_db","title":"Import the tables of seahub_db, seafile_db and ccnet_db","text":"[memcached]\nmemcached_options = --SERVER={your memcached server host} --POOL-MIN=10 --POOL-MAX=100\n$ docker exec -it seafile bash\n\n# apt-get update && apt-get install -y mysql-client\n\n# mysql -h{your mysql host} -u[username] -p[password] ccnet_db < /opt/seafile/seafile-server-latest/sql/mysql/ccnet.sql\n# mysql -h{your mysql host} -u[username] -p[password] seafile_db < /opt/seafile/seafile-server-latest/sql/mysql/seafile.sql\n# mysql -h{your mysql host} -u[username] -p[password] seahub_db < /opt/seafile/seafile-server-latest/seahub/sql/mysql.sql\n$ docker exec -it seafile bash\n\n# cd /opt/seafile/seafile-server-latest\n# ./seafile.sh start && ./seahub.sh start\n$ mkdir -p /opt/seafile/shared\n$ cd /opt/seafile\n$ vim docker-compose.yml\nservices:\n seafile:\n image: docker.seadrive.org/seafileltd/seafile-pro-mc:latest\n container_name: seafile\n ports:\n - 80:80\n volumes:\n - /opt/seafile/shared:/shared \n environment:\n - CLUSTER_SERVER=true\n - CLUSTER_MODE=backend\n - TIME_ZONE=UTC # Optional, default is UTC. Should be uncommented and set to your local time zone.\n$ cd /opt/seafile\n$ docker compose up -d\n
"},{"location":"docker/cluster/deploy_seafile_cluster_with_docker/#use-s3-as-backend-storage","title":"Use S3 as backend storage","text":"$ docker exec -it seafile bash\n\n# cd /opt/seafile/seafile-server-latest\n# ./seafile.sh start && ./seafile-background-tasks.sh start\n
"},{"location":"docker/cluster/deploy_seafile_cluster_with_docker/#deployment-load-balance-optional","title":"Deployment load balance (Optional)","text":""},{"location":"docker/cluster/deploy_seafile_cluster_with_docker/#install-haproxy-and-keepalived-services","title":"Install HAproxy and Keepalived services","text":"[commit_object_backend]\nname = s3\nbucket = {your-commit-objects} # The bucket name can only use lowercase letters, numbers, and dashes\nkey_id = {your-key-id}\nkey = {your-secret-key}\nuse_v4_signature = true\naws_region = eu-central-1 # eu-central-1 for Frankfurt region\n\n[fs_object_backend]\nname = s3\nbucket = {your-fs-objects}\nkey_id = {your-key-id}\nkey = {your-secret-key}\nuse_v4_signature = true\naws_region = eu-central-1\n\n[block_backend]\nname = s3\nbucket = {your-block-objects}\nkey_id = {your-key-id}\nkey = {your-secret-key}\nuse_v4_signature = true\naws_region = eu-central-1\n$ apt install haproxy keepalived -y\n\n$ mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak\n\n$ cat > /etc/haproxy/haproxy.cfg << 'EOF'\nglobal\n log 127.0.0.1 local1 notice\n maxconn 4096\n user haproxy\n group haproxy\n\ndefaults\n log global\n mode http\n retries 3\n timeout connect 10000\n timeout client 300000\n timeout server 300000\n\nlisten seafile 0.0.0.0:80\n mode http\n option httplog\n option dontlognull\n option forwardfor\n cookie SERVERID insert indirect nocache\n server seafile01 Front-End01-IP:8001 check port 11001 cookie seafile01\n server seafile02 Front-End02-IP:8001 check port 11001 cookie seafile02\nEOF\n$ cat > /etc/keepalived/keepalived.conf << 'EOF'\n! 
Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node1\n vrrp_mcast_group4 224.0.100.18\n}\n\nvrrp_instance VI_1 {\n state MASTER\n interface eno1 # Set to the device name of a valid network interface on the current server, and the virtual IP will be bound to the network interface\n virtual_router_id 50\n priority 100\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass seafile123\n }\n virtual_ipaddress {\n 172.26.154.45/24 dev eno1 # Configure to the correct virtual IP and network interface device name\n }\n}\nEOF\n$ cat > /etc/keepalived/keepalived.conf << 'EOF'\n! Configuration File for keepalived\n\nglobal_defs {\n notification_email {\n root@localhost\n }\n notification_email_from keepalived@localhost\n smtp_server 127.0.0.1\n smtp_connect_timeout 30\n router_id node2\n vrrp_mcast_group4 224.0.100.18\n}\n\nvrrp_instance VI_1 {\n state BACKUP\n interface eno1 # Set to the device name of a valid network interface on the current server, and the virtual IP will be bound to the network interface\n virtual_router_id 50\n priority 98\n advert_int 1\n authentication {\n auth_type PASS\n auth_pass seafile123\n }\n virtual_ipaddress {\n 172.26.154.45/24 dev eno1 # Configure to the correct virtual IP and network interface device name\n }\n}\nEOF\n$ systemctl enable --now haproxy\n$ systemctl enable --now keepalived\n
"},{"location":"docker/pro-edition/deploy_onlyoffice_with_docker/#initialize-onlyoffice-local-configuration-file","title":"Initialize OnlyOffice local configuration file","text":"services:\n ...\n\n oods:\n image: onlyoffice/documentserver:latest\n container_name: seafile-oods\n networks:\n - seafile-net\n environment:\n - JWT_ENABLED=true\n - JWT_SECRET=your-secret-string\nmkdir -p /opt/seafile-oods/DocumentServer/\nvim /opt/seafile-oods/DocumentServer/local-production-linux.json\n
"},{"location":"docker/pro-edition/deploy_onlyoffice_with_docker/#add-onlyoffice-to-nginx-conf","title":"Add OnlyOffice to nginx conf","text":"{\n \"services\": {\n \"CoAuthoring\": {\n \"autoAssembly\": {\n \"enable\": true,\n \"interval\": \"5m\"\n }\n }\n },\n \"FileConverter\": {\n \"converter\": {\n \"downloadAttemptMaxCount\": 1\n }\n }\n}\n
"},{"location":"docker/pro-edition/deploy_onlyoffice_with_docker/#modify-seahub_settingspy","title":"Modify seahub_settings.py","text":"# Required for only office document server\nmap $http_x_forwarded_proto $the_scheme {\n default $http_x_forwarded_proto;\n \"\" $scheme;\n}\nmap $http_x_forwarded_host $the_host {\n default $http_x_forwarded_host;\n \"\" $host;\n}\nmap $http_upgrade $proxy_connection {\n default upgrade;\n \"\" close;\n}\nserver {\n listen 80;\n ...\n}\n\nserver {\n listen 443 ssl;\n ...\n\n location /onlyofficeds/ {\n proxy_pass http://oods/;\n proxy_http_version 1.1;\n client_max_body_size 100M;\n proxy_read_timeout 3600s;\n proxy_connect_timeout 3600s;\n proxy_set_header Upgrade $http_upgrade;\n proxy_set_header Connection $proxy_connection;\n proxy_set_header X-Forwarded-Host $the_host/onlyofficeds;\n proxy_set_header X-Forwarded-Proto $the_scheme;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n }\n}\n
"},{"location":"docker/pro-edition/deploy_onlyoffice_with_docker/#restart-docker-container","title":"Restart docker container","text":"# OnlyOffice\nENABLE_ONLYOFFICE = True\nVERIFY_ONLYOFFICE_CERTIFICATE = True\nONLYOFFICE_APIJS_URL = 'http://<your-seafile-domain>/onlyofficeds/web-apps/apps/api/documents/api.js'\nONLYOFFICE_FILE_EXTENSION = ('doc', 'docx', 'ppt', 'pptx', 'xls', 'xlsx', 'odt', 'fodt', 'odp', 'fodp', 'ods', 'fods')\nONLYOFFICE_EDIT_FILE_EXTENSION = ('docx', 'pptx', 'xlsx')\nONLYOFFICE_JWT_SECRET = 'your-secret-string'\ndocker compose down\ndocker compose up -d \nsysctl -w vm.max_map_count=262144 #run as root\nnano /etc/sysctl.conf\n\n# modify vm.max_map_count\nvm.max_map_count=262144\n
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#installing-docker","title":"Installing Docker","text":"/opt/seafile-data is the directory of Seafile. If you decide to put Seafile in a different directory - which you can - adjust all paths accordingly.docker login docker.seadrive.org\ndocker pull docker.seadrive.org/seafileltd/seafile-pro-mc:12.0-latest\n.env","text":".env, seafile-server.yml and caddy.yml files for configuration.mkdir /opt/seafile\ncd /opt/seafile\n\n# Seafile PE 12.0\nwget -O .env https://manual.seafile.com/12.0/docker/docker-compose/pro/env\nwget https://manual.seafile.com/12.0/docker/docker-compose/pro/seafile-server.yml\nwget https://manual.seafile.com/12.0/docker/docker-compose/pro/caddy.yml\n\nnano .env\n
SEAFILE_VOLUME: The volume directory of Seafile data, default is /opt/seafile-dataSEAFILE_MYSQL_VOLUME: The volume directory of MySQL data, default is /opt/seafile-mysql/dbSEAFILE_CADDY_VOLUME: The volume directory of Caddy data used to store certificates obtained from Let's Encrypt, default is /opt/seafile-caddySEAFILE_ELASTICSEARCH_VOLUME: The volume directory of Elasticsearch data, default is /opt/seafile-elasticsearch/dataSEAFILE_MYSQL_ROOT_PASSWORD: The root password of MySQLSEAFILE_MYSQL_DB_USER: The user of MySQL (database - user can be found in conf/seafile.conf)SEAFILE_MYSQL_DB_PASSWORD: The user seafile password of MySQLJWT_PRIVATE_KEY: A random string of at least 32 characters; generation example: pwgen -s 40 1SEAFILE_SERVER_HOSTNAME: Seafile server hostname or domainSEAFILE_SERVER_PROTOCOL: Seafile server protocol (http or https)TIME_ZONE: Time zone (default UTC)SEAFILE_ADMIN_EMAIL: Admin usernameSEAFILE_ADMIN_PASSWORD: Admin password
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#starting-the-docker-containers","title":"Starting the Docker Containers","text":"mkdir -p /opt/seafile-elasticsearch/data\nchmod 777 -R /opt/seafile-elasticsearch/data\ndocker compose up -d\n.env.docker compose logs -f\n/shared/logs/seafile in the docker, or /opt/seafile-data/logs/seafile on the server that runs the docker container./shared/logs/var-log, or /opt/seafile-data/logs/var-log on the server that runs the docker container.seafile-license.txt license file, simply put it in the volume of the Seafile container. The volume's default path in the Compose file is /opt/seafile-data. If you have modified the path, save the license file under your custom path.
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#seafile-directory-structure","title":"Seafile directory structure","text":""},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#optseafile-data","title":"docker compose down\n\ndocker compose up -d\n/opt/seafile-data","text":"
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#reviewing-the-deployment","title":"Reviewing the Deployment","text":"/opt/seafile-data/seafile/logs/seafile.log./var/log inside the container. For example, you can find the nginx logs in /opt/seafile-data/logs/var-log/nginx/.docker container list should list the containers specified in the .env.$ tree /opt/seafile-data -L 2\n/opt/seafile-data\n\u251c\u2500\u2500 logs\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 var-log\n\u251c\u2500\u2500 nginx\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 conf\n\u2514\u2500\u2500 seafile\n \u00a0\u00a0 \u251c\u2500\u2500 ccnet\n \u00a0\u00a0 \u251c\u2500\u2500 conf\n \u00a0\u00a0 \u251c\u2500\u2500 logs\n \u00a0\u00a0 \u251c\u2500\u2500 pro-data\n \u00a0\u00a0 \u251c\u2500\u2500 seafile-data\n \u00a0\u00a0 \u2514\u2500\u2500 seahub-data\n/opt/seafile-data/seafile/conf. The nginx config file is in /opt/seafile-data/nginx/conf.docker compose restart\n/opt/seafile-data/seafile/logs whereas all other log files are in /opt/seafile-data/logs/var-log..env as followsSEAFILE_MYSQL_DB_HOST=192.168.0.2\nSEAFILE_MYSQL_DB_PORT=3306\nSEAFILE_MYSQL_ROOT_PASSWORD=ROOT_PASSWORD\nSEAFILE_MYSQL_DB_PASSWORD=PASSWORD\nSEAFILE_MYSQL_ROOT_PASSWORD is needed during installation. Later, after Seafile is installed, the user seafile will be used to connect to the mysql-server (SEAFILE_MYSQL_DB_PASSWORD). You can remove the SEAFILE_MYSQL_ROOT_PASSWORD.my_init, Nginx are still run as root inside docker.)NON_ROOT=true to the .env.NON_ROOT=true\n/opt/seafile-data/seafile/ permissions.chmod -R a+rwx /opt/seafile-data/seafile/\ndocker compose down\ndocker compose up -d\nseafile user. (NOTE: Later, when doing maintenance, other scripts in docker are also required to be run as seafile user, e.g. su seafile -c ./seaf-gc.sh)/scripts folder of the docker container. To perform garbage collection, simply run docker exec seafile /scripts/gc.sh. 
For the community edition, this process will stop the seafile server, but it is a relatively quick process and the seafile server will start automatically once the process has finished. The Professional Edition supports online garbage collection..env
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#clamav-with-docker","title":"Clamav with Docker","text":".env
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#other-functions","title":"Other functions","text":""},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#ldapad-integration-for-pro","title":"LDAP/AD Integration for Pro","text":"
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#s3openswiftceph-storage-backends","title":"S3/OpenSwift/Ceph Storage Backends","text":"
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#online-file-preview-and-editing","title":"Online File Preview and Editing","text":"
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#advanced-user-management","title":"Advanced User Management","text":"
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#advanced-authentication","title":"Advanced Authentication","text":"
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#admin-tools","title":"Admin Tools","text":"
"},{"location":"docker/pro-edition/deploy_seafile_pro_with_docker/#faq","title":"FAQ","text":"docker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh\ndocker compose logs -f.docker exec -it seafile /opt/seafile/seafile-server-latest/reset-admin.sh.
"},{"location":"docker/pro-edition/migrate_ce_to_pro_with_docker/#migrate","title":"Migrate","text":""},{"location":"docker/pro-edition/migrate_ce_to_pro_with_docker/#stop-the-seafile-ce","title":"Stop the Seafile CE","text":"# Seafile PE 10.0\nwget -O \"docker-compose.yml\" \"https://manual.seafile.com/docker/docker-compose/pro/10.0/docker-compose.yml\"\n\n# Seafile PE 11.0\nwget -O \"docker-compose.yml\" \"https://manual.seafile.com/docker/docker-compose/pro/11.0/docker-compose.yml\"\ndocker compose down\nseafile-license.txt to the volume directory of the Seafile CE's data. If the directory is /opt/seafile-data, you should put it in /opt/seafile-data/seafile/.docker-compose.yml file with the new docker-compose.yml file and modify its configuration based on your actual situation:
"},{"location":"docker/pro-edition/migrate_ce_to_pro_with_docker/#do-the-migration","title":"Do the migration","text":"/opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data;docker compose up\ndocker exec -it seafile /opt/seafile/seafile-server-latest/pro/pro.py setup --migrate\nexternal_es_server, es_host, es_port in /opt/seafile-data/seafile/conf/seafevents.conf manually.[INDEX FILES]\nexternal_es_server = true\nes_host = elasticsearch\nes_port = 9200\nenabled = true\ninterval = 10m\ndocker restart seafile\nSeaf-fuse is an implementation of the FUSE virtual filesystem. In short, it mounts all the files managed by the Seafile server to a folder (called the \"mount point\"), so that you can access them just as you would access a normal folder on your server.
"},{"location":"extension/fuse/#use-seaf-fuse-in-binary-based-deployment","title":"Use seaf-fuse in binary based deployment","text":"/data/seafile-fuse.
"},{"location":"extension/fuse/#start-seaf-fuse-with-the-script","title":"Start seaf-fuse with the script","text":"mkdir -p /data/seafile-fuse\n./seafile.sh start../seaf-fuse.sh start /data/seafile-fuse\n./seaf-fuse.sh start -o uid=<uid> /data/seafile-fuse\n./seaf-fuse.sh start --disable-block-cache /data/seafile-fuse\nman fuse.
"},{"location":"extension/fuse/#contents-of-the-mounted-folder","title":"Contents of the mounted folder","text":""},{"location":"extension/fuse/#the-top-level-folder","title":"The top level folder","text":"./seaf-fuse.sh stop\n/data/seafile-fuse.$ ls -lhp /data/seafile-fuse\n\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 abc@abc.com/\ndrwxr-xr-x 2 root root 4.0K Jan 4 2015 foo@foo.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 plus@plus.com/\ndrwxr-xr-x 2 root root 4.0K Jan 1 2015 sharp@sharp.com/\ndrwxr-xr-x 2 root root 4.0K Jan 3 2015 test@test.com/\n
"},{"location":"extension/fuse/#the-folder-for-each-user","title":"The folder for each user","text":"$ ls -lhp /data/seafile-fuse/abc@abc.com\n\ndrwxr-xr-x 2 root root 924 Jan 1 1970 5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\ndrwxr-xr-x 2 root root 1.6K Jan 1 1970 a09ab9fc-7bd0-49f1-929d-6abeb8491397_My Notes/\n
"},{"location":"extension/fuse/#if-you-get-a-permission-denied-error","title":"If you get a \"Permission denied\" error","text":"$ ls -lhp /data/seafile-fuse/abc@abc.com/5403ac56-5552-4e31-a4f1-1de4eb889a5f_Photos/\n\n-rw-r--r-- 1 root root 501K Jan 1 2015 image.png\n-rw-r--r-- 1 root root 501K Jan 1 2015 sample.jpg\n./seaf-fuse.sh start, most likely you are not in the \"fuse group\". You should:
sudo usermod -a -G fuse <your-user-name>\n
"},{"location":"extension/fuse/#use-seaf-fuse-in-docker-based-deployment","title":"Use seaf-fuse in Docker based deployment","text":"./seaf-fuse.sh start <path> again./data/seafile-fuse on the host.
"},{"location":"extension/fuse/#start-seaf-fuse-with-the-script-in-docker","title":"Start seaf-fuse with the script in docker","text":" seafile:\n ...\n volumes:\n ...\n - type: bind\n source: /data/seafile-fuse\n target: /seafile-fuse\n bind:\n propagation: rshared\n privileged: true\n cap_add:\n - SYS_ADMIN\ndocker compose up -d\n\ndocker exec -it seafile bash\n
"},{"location":"extension/webdav/","title":"WebDAV extension","text":"cd /opt/seafile/seafile-server-latest/\n\n./seaf-fuse.sh start /seafile-fuse\n/opt/seafile./opt/seafile/conf/seafdav.conf. If it is not created already, you can just create the file.[WEBDAV]\n\n# Default is false. Change it to true to enable SeafDAV server.\nenabled = true\n\nport = 8080\ndebug = true\n\n# If you deploy seafdav behind nginx/apache, you need to modify \"share_name\".\nshare_name = /seafdav\n\n# SeafDAV uses Gunicorn as web server.\n# This option maps to Gunicorn's 'workers' setting. https://docs.gunicorn.org/en/stable/settings.html?#workers\n# By default it's set to 5 processes.\nworkers = 5\n\n# This option maps to Gunicorn's 'timeout' setting. https://docs.gunicorn.org/en/stable/settings.html?#timeout\n# By default it's set to 1200 seconds, to support large file uploads.\ntimeout = 1200\n./seafile.sh restart\nhttp://example.com:8080/seafdav
"},{"location":"extension/webdav/#proxy-with-nginx","title":"Proxy with Nginx","text":"show_repo_id=true\n
"},{"location":"extension/webdav/#proxy-with-apache","title":"Proxy with Apache","text":".....\n\n location /seafdav {\n rewrite ^/seafdav$ /seafdav/ permanent;\n }\n\n location /seafdav/ {\n proxy_pass http://127.0.0.1:8080/seafdav/;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_read_timeout 1200s;\n client_max_body_size 0;\n\n access_log /var/log/nginx/seafdav.access.log seafileformat;\n error_log /var/log/nginx/seafdav.error.log;\n }\n\n location /:dir_browser {\n proxy_pass http://127.0.0.1:8080/:dir_browser;\n }\n
"},{"location":"extension/webdav/#notes-on-clients","title":"Notes on Clients","text":"......\n <Location /seafdav>\n ProxyPass \"http://127.0.0.1:8080/seafdav\"\n </Location>\n
sudo apt-get install davfs2\nsudo mount -t davfs -o uid=<username> https://example.com/seafdav /media/seafdav/\n
"},{"location":"extension/webdav/#mac-os-x","title":"Mac OS X","text":" use_locks 0\nenabled = true in seafdav.conf. If not, modify it and restart seafile server.share_name as the sample configuration above. Restart your seafile server and try again.seafdav.log to see if there is log like the following.\"MOVE ... -> 502 Bad Gateway\n09:47:06.533 - DEBUG : Raising DAVError 502 Bad Gateway: Source and destination must have the same scheme.\nIf you are running behind a reverse proxy, you may have to rewrite the 'Destination' header.\n(See https://github.com/mar10/wsgidav/issues/183)\n\n09:47:06.533 - DEBUG : Caught (502, \"Source and destination must have the same scheme.\\nIf you are running behind a reverse proxy, you may have to rewrite the 'Destination' header.\\n(See https://github.com/mar10/wsgidav/issues/183)\")\nHTTP_X_FORWARDED_PROTO value in the request received by Seafile not being HTTPS.HTTP_X_FORWARDED_PROTO. For example, in nginx, changeproxy_set_header X-Forwarded-Proto $scheme;\n
"},{"location":"extension/webdav/#windows-explorer-reports-file-size-exceeds-the-limit-allowed-and-cannot-be-saved","title":"Windows Explorer reports \"file size exceeds the limit allowed and cannot be saved\"","text":"proxy_set_header X-Forwarded-Proto https;\nFileSizeLimitInBytes under HKEY_LOCAL_MACHINE -> SYSTEM -> CurrentControlSet -> Services -> WebClient -> Parameters.
"},{"location":"extra_setup/setup_seadoc/#architecture","title":"Architecture","text":"
"},{"location":"extra_setup/setup_seadoc/#setup-seadoc","title":"Setup SeaDoc","text":"
"},{"location":"extra_setup/setup_seadoc/#deploy-seadoc-on-a-new-host","title":"Deploy SeaDoc on a new host","text":""},{"location":"extra_setup/setup_seadoc/#download-and-modify-seadoc-docker-composeyml","title":"Download and modify SeaDoc docker-compose.yml","text":"
"},{"location":"extra_setup/setup_seadoc/#create-the-seadoc-database-manually","title":"Create the SeaDoc database manually","text":"DB_HOST: MySQL hostDB_PORT: MySQL portDB_USER: MySQL userDB_PASSWD: MySQL passwordvolumes: The volume directory of SeaDoc dataSDOC_SERVER_HOSTNAME: SeaDoc service URLSEAHUB_SERVICE_URL: Seafile service URLcreate database if not exists sdoc_db charset utf8mb4;\nGRANT ALL PRIVILEGES ON `sdoc_db`.* to `seafile`@`%.%.%.%`;\n# for community edition\nwget https://manual.seafile.com/12.0/docker/docker-compose/ce/seadoc.yml\n\n# for pro edition\nwget https://manual.seafile.com/12.0/docker/docker-compose/pro/seadoc.yml\n.env, and insert seadoc.yml into COMPOSE_FILE, and enable SeaDoc server
"},{"location":"extra_setup/setup_seadoc/#create-the-seadoc-database-manually_1","title":"Create the SeaDoc database manually","text":"COMPOSE_FILE='seafile-server.yml,caddy.yml,seadoc.yml'\n\nENABLE_SEADOC=true\nSEADOC_SERVER_URL=https://example.seafile.com/sdoc-server\ncreate database if not exists sdoc_db charset utf8mb4;\nGRANT ALL PRIVILEGES ON `sdoc_db`.* to `seafile`@`%.%.%.%`;\ndocker compose up -d\n/opt/seadoc-data","text":"
"},{"location":"extra_setup/setup_seadoc/#faq","title":"FAQ","text":""},{"location":"extra_setup/setup_seadoc/#about-ssl","title":"About SSL","text":"lucaslorentz/caddy-docker-proxy:2.9, with which the user only needs to configure the following fields in .env correctly to automatically obtain and renew the certificate:
"},{"location":"maintain/","title":"Administration","text":""},{"location":"maintain/#enter-the-admin-panel","title":"Enter the admin panel","text":"SEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=example.com\nSystem Admin in the popup of avatar.
"},{"location":"maintain/#logs","title":"Logs","text":"
"},{"location":"maintain/#backup-and-recovery","title":"Backup and Recovery","text":"
"},{"location":"maintain/#clean-database","title":"Clean database","text":"
"},{"location":"maintain/#export-report","title":"Export report","text":"
"},{"location":"maintain/account/","title":"Account Management","text":""},{"location":"maintain/account/#user-management","title":"User Management","text":"social_auth_usersocialauth to map the new external ID to internal ID.reset-admin.sh script under seafile-server directory. This script helps you reset the admin account and password. Your data will not be deleted from the admin account; this only unlocks the admin account and changes its password../seahub.sh python-env python seahub/manage.py check_user_quota , when the user quota exceeds 90%, an email will be sent. If you want to enable this, you first have to set up notification email.
/opt/seafile\n --seafile-server-9.0.x # untar from seafile package\n --seafile-data # seafile configuration and data (if you choose the default)\n --seahub-data # seahub data\n --logs\n --conf\n
"},{"location":"maintain/backup_recovery/#backup-steps","title":"Backup steps","text":"
"},{"location":"maintain/backup_recovery/#backup-order-database-first-or-data-directory-first","title":"Backup Order: Database First or Data Directory First","text":"
/opt/seafile for binary package based deployment (or /opt/seafile-data for docker based deployment). And you want to backup to /backup directory. The /backup can be an NFS or Windows share mount exported by another machine, or just an external disk. You can create a layout similar to the following in /backup directory:
"},{"location":"maintain/backup_recovery/#backup-and-restore-for-binary-package-based-deployment","title":"Backup and restore for binary package based deployment","text":""},{"location":"maintain/backup_recovery/#backing-up-databases","title":"Backing up Databases","text":"/backup\n---- databases/ contains database backup files\n---- data/ contains backups of the data directory\nccnet_db, seafile_db and seahub_db. mysqldump automatically locks the tables so you don't need to stop Seafile server when backing up MySQL databases. Since the database tables are usually very small, it won't take long to dump.mysqldump -h [mysqlhost] -u[username] -p[password] --opt ccnet_db > /backup/databases/ccnet-db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nmysqldump -h [mysqlhost] -u[username] -p[password] --opt seafile_db > /backup/databases/seafile-db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nmysqldump -h [mysqlhost] -u[username] -p[password] --opt seahub_db > /backup/databases/seahub-db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n
"},{"location":"maintain/backup_recovery/#backing-up-seafile-library-data","title":"Backing up Seafile library data","text":"sqlite3 /opt/seafile/ccnet/GroupMgr/groupmgr.db .dump > /backup/databases/groupmgr.db.bak.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nsqlite3 /opt/seafile/ccnet/PeerMgr/usermgr.db .dump > /backup/databases/usermgr.db.bak.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nsqlite3 /opt/seafile/seafile-data/seafile.db .dump > /backup/databases/seafile.db.bak.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nsqlite3 /opt/seafile/seahub.db .dump > /backup/databases/seahub.db.bak.`date +\"%Y-%m-%d-%H-%M-%S\"`\n/opt/seafile directory, so just back up the whole directory. You can directly copy the whole directory to the backup destination, or you can use rsync to do incremental backup. cp -R /opt/seafile /backup/data/seafile-`date +\"%Y-%m-%d-%H-%M-%S\"`\nrsync -az /opt/seafile /backup/data\n/backup/data/seafile.
"},{"location":"maintain/backup_recovery/#restore-the-databases","title":"Restore the databases","text":"/backup/data/seafile to the new machine. Let's assume the Seafile deployment location on the new machine is also /opt/seafile.mysql -u[username] -p[password] ccnet_db < ccnet-db.sql.2013-10-19-16-00-05\nmysql -u[username] -p[password] seafile_db < seafile-db.sql.2013-10-19-16-00-20\nmysql -u[username] -p[password] seahub_db < seahub-db.sql.2013-10-19-16-01-05\n
"},{"location":"maintain/backup_recovery/#backup-and-restore-for-docker-based-deployment","title":"Backup and restore for Docker based deployment","text":""},{"location":"maintain/backup_recovery/#structure","title":"Structure","text":"cd /opt/seafile\nmv ccnet/PeerMgr/usermgr.db ccnet/PeerMgr/usermgr.db.old\nmv ccnet/GroupMgr/groupmgr.db ccnet/GroupMgr/groupmgr.db.old\nmv seafile-data/seafile.db seafile-data/seafile.db.old\nmv seahub.db seahub.db.old\nsqlite3 ccnet/PeerMgr/usermgr.db < usermgr.db.bak.xxxx\nsqlite3 ccnet/GroupMgr/groupmgr.db < groupmgr.db.bak.xxxx\nsqlite3 seafile-data/seafile.db < seafile.db.bak.xxxx\nsqlite3 seahub.db < seahub.db.bak.xxxx\n/opt/seafile-data. And you want to backup to /backup directory.
"},{"location":"maintain/backup_recovery/#backing-up-database","title":"Backing up Database","text":"/opt/seafile-data/seafile/conf # configuration files\n/opt/seafile-data/seafile/seafile-data # data of seafile\n/opt/seafile-data/seafile/seahub-data # data of seahub\n
"},{"location":"maintain/backup_recovery/#backing-up-seafile-library-data_1","title":"Backing up Seafile library data","text":""},{"location":"maintain/backup_recovery/#to-directly-copy-the-whole-data-directory","title":"To directly copy the whole data directory","text":"# It's recommended to backup the database to a separate file each time. Don't overwrite older database backups for at least a week.\ncd /backup/databases\ndocker exec -it seafile-mysql mysqldump -u[username] -p[password] --opt ccnet_db > ccnet_db.sql\ndocker exec -it seafile-mysql mysqldump -u[username] -p[password] --opt seafile_db > seafile_db.sql\ndocker exec -it seafile-mysql mysqldump -u[username] -p[password] --opt seahub_db > seahub_db.sql\n
"},{"location":"maintain/backup_recovery/#use-rsync-to-do-incremental-backup","title":"Use rsync to do incremental backup","text":"cp -R /opt/seafile-data/seafile /backup/data/\n
"},{"location":"maintain/backup_recovery/#recovery","title":"Recovery","text":""},{"location":"maintain/backup_recovery/#restore-the-databases_1","title":"Restore the databases","text":"rsync -az /opt/seafile-data/seafile /backup/data/\n
"},{"location":"maintain/backup_recovery/#restore-the-seafile-data","title":"Restore the seafile data","text":"docker cp /backup/databases/ccnet_db.sql seafile-mysql:/tmp/ccnet_db.sql\ndocker cp /backup/databases/seafile_db.sql seafile-mysql:/tmp/seafile_db.sql\ndocker cp /backup/databases/seahub_db.sql seafile-mysql:/tmp/seahub_db.sql\n\ndocker exec -it seafile-mysql /bin/sh -c \"mysql -u[username] -p[password] ccnet_db < /tmp/ccnet_db.sql\"\ndocker exec -it seafile-mysql /bin/sh -c \"mysql -u[username] -p[password] seafile_db < /tmp/seafile_db.sql\"\ndocker exec -it seafile-mysql /bin/sh -c \"mysql -u[username] -p[password] seahub_db < /tmp/seahub_db.sql\"\n
"},{"location":"maintain/clean_database/","title":"Clean Database","text":""},{"location":"maintain/clean_database/#seahub","title":"Seahub","text":""},{"location":"maintain/clean_database/#session","title":"Session","text":"cp -R /backup/data/* /opt/seafile-data/seafile/\n
"},{"location":"maintain/clean_database/#activity","title":"Activity","text":"cd <install-path>/seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py clearsessions\nuse seahub_db;\nDELETE FROM Activity WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"maintain/clean_database/#file-access","title":"File Access","text":"use seahub_db;\nDELETE FROM sysadmin_extra_userloginlog WHERE to_days(now()) - to_days(login_date) > 90;\n
"},{"location":"maintain/clean_database/#file-update","title":"File Update","text":"use seahub_db;\nDELETE FROM FileAudit WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"maintain/clean_database/#permisson","title":"Permission","text":"use seahub_db;\nDELETE FROM FileUpdate WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"maintain/clean_database/#file-history","title":"File History","text":"use seahub_db;\nDELETE FROM PermAudit WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"maintain/clean_database/#command-clean_db_records","title":"Command clean_db_records","text":"use seahub_db;\nDELETE FROM FileHistory WHERE to_days(now()) - to_days(timestamp) > 90;\n
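The per-table cleanup statements above all follow the same 90-day pattern. As a sketch, they can be generated in Python; the table and column names are taken from this page, and the retention period is adjustable:

```python
# Audit/history tables in seahub_db and the timestamp column each uses,
# as listed in the cleanup sections above.
AUDIT_TABLES = {
    "Activity": "timestamp",
    "sysadmin_extra_userloginlog": "login_date",
    "FileAudit": "timestamp",
    "FileUpdate": "timestamp",
    "PermAudit": "timestamp",
    "FileHistory": "timestamp",
}

def cleanup_statements(days=90):
    """Generate the DELETE statements shown above for every table."""
    return [
        f"DELETE FROM {table} WHERE to_days(now()) - to_days({column}) > {days};"
        for table, column in AUDIT_TABLES.items()
    ]

for stmt in cleanup_statements():
    print(stmt)
```

Run the printed statements against seahub_db, just like the individual examples above.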
"},{"location":"maintain/clean_database/#outdated-library-data","title":"Outdated Library Data","text":"cd <install-path>/seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py clean_db_records\ncd <install-path>/seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py clear_invalid_repo_data\n
"},{"location":"maintain/clean_database/#library-sync-tokens","title":"Library Sync Tokens","text":"cd <install-path>/seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py clear_invalid_repo_data --dry-run=true\n
delete t,i from RepoUserToken t, RepoTokenPeerInfo i where t.token=i.token and sync_time < xxxx;\n
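The xxxx placeholder above is a cutoff for sync_time. Assuming sync_time is stored as a Unix timestamp in seconds (an assumption of this sketch), a cutoff value can be computed like this; the 90-day retention is an example, not a recommendation:

```python
import time

def sync_token_cutoff(days=90, now=None):
    """Unix-timestamp cutoff to substitute for the xxxx placeholder,
    assuming sync_time holds seconds since the epoch."""
    now = now if now is not None else time.time()
    return int(now - days * 24 * 3600)

cutoff = sync_token_cutoff()
print(f"delete t,i from RepoUserToken t, RepoTokenPeerInfo i "
      f"where t.token=i.token and sync_time < {cutoff};")
```

Run the SELECT variant first with the same cutoff to review which tokens would be removed.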
"},{"location":"maintain/export_file_access_log/","title":"Export File Access Log","text":"select * from RepoUserToken t, RepoTokenPeerInfo i where t.token=i.token and sync_time < xxxx;\n
"},{"location":"maintain/export_report/","title":"Export Report","text":"cd <install-path>/seafile-server-latest\n./seahub.sh python-env python seahub/manage.py export_file_access_log --start-date 2019-06-01 --end-date 2019-07-01\n
"},{"location":"maintain/export_report/#export-user-storage-report","title":"Export User Storage Report","text":"cd <install-path>/seafile-server-latest\n./seahub.sh python-env python seahub/manage.py export_user_traffic_report --date 201906\n
"},{"location":"maintain/export_report/#export-file-access-log","title":"Export File Access Log","text":"cd <install-path>/seafile-server-latest\n./seahub.sh python-env python seahub/manage.py export_user_storage_report\n
"},{"location":"maintain/export_user_storage_report/","title":"Export User Storage Report","text":"cd <install-path>/seafile-server-latest\n./seahub.sh python-env python seahub/manage.py export_file_access_log --start-date 2019-06-01 --end-date 2019-07-01\n
"},{"location":"maintain/logs/","title":"Logs","text":""},{"location":"maintain/logs/#log-files-of-seafile-server","title":"Log files of seafile server","text":"cd <install-path>/seafile-server-latest\n./seahub.sh python-env python seahub/manage.py export_user_storage_report\n
"},{"location":"maintain/logs/#log-files-for-seafile-background-node-in-cluster-mode","title":"Log files for seafile background node in cluster mode","text":"
"},{"location":"maintain/seafile_fsck/","title":"Seafile FSCK","text":"cd seafile-server-latest\n./seaf-fsck.sh [--repair|-r] [--export|-E export_path] [repo_id_1 [repo_id_2 ...]]\n
"},{"location":"maintain/seafile_fsck/#checking-integrity-of-libraries","title":"Checking Integrity of Libraries","text":"cd seafile-server-latest\n./seaf-fsck.sh\ncd seafile-server-latest\n./seaf-fsck.sh [library-id1] [library-id2] ...\n[02/13/15 16:21:07] fsck.c(470): Running fsck for repo ca1a860d-e1c1-4a52-8123-0bf9def8697f.\n[02/13/15 16:21:07] fsck.c(413): Checking file system integrity of repo fsck(ca1a860d)...\n[02/13/15 16:21:07] fsck.c(35): Dir 9c09d937397b51e1283d68ee7590cd9ce01fe4c9 is missing.\n[02/13/15 16:21:07] fsck.c(200): Dir /bf/pk/(9c09d937) is corrupted.\n[02/13/15 16:21:07] fsck.c(105): Block 36e3dd8757edeb97758b3b4d8530a4a8a045d3cb is corrupted.\n[02/13/15 16:21:07] fsck.c(178): File /bf/02.1.md(ef37e350) is corrupted.\n[02/13/15 16:21:07] fsck.c(85): Block 650fb22495b0b199cff0f1e1ebf036e548fcb95a is missing.\n[02/13/15 16:21:07] fsck.c(178): File /01.2.md(4a73621f) is corrupted.\n[02/13/15 16:21:07] fsck.c(514): Fsck finished for repo ca1a860d.\n[02/13/15 16:36:11] Commit 6259251e2b0dd9a8e99925ae6199cbf4c134ec10 is missing\n[02/13/15 16:36:11] fsck.c(476): Repo ca1a860d HEAD commit is corrupted, need to restore to an old version.\n[02/13/15 16:36:11] fsck.c(314): Scanning available commits...\n[02/13/15 16:36:11] fsck.c(376): Find available commit 1b26b13c(created at 2015-02-13 16:10:21) for repo ca1a860d.\n
cd seafile-server-latest\n./seaf-fsck.sh --repair\ncd seafile-server-latest\n./seaf-fsck.sh --repair [library-id1] [library-id2] ...\n
"},{"location":"maintain/seafile_fsck/#speeding-up-fsck-by-not-checking-file-contents","title":"Speeding up FSCK by not checking file contents","text":"cd seafile-server-latest\n./seaf-fsck.sh --export top_export_path [library-id1] [library-id2] ...\ntop_export_path is a directory to place the exported files. Each library will be exported as a sub-directory of the export path. If you don't specify library ids, all libraries will be exported.
seaf-gc.sh --dry-run [repo-id1] [repo-id2] ...\n[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo My Library(ffa57d93)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 265.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo ffa57d93.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 5 commits, 265 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 265 blocks total, about 265 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo aa(f3d0a8d0)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 5.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo f3d0a8d0.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 8 commits, 5 blocks.\n[03/19/15 19:41:49] gc-core.c(264): Populating index for sub-repo 9217622a.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 4 commits, 4 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 5 blocks total, about 9 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo test2(e7d26d93)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 507.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo e7d26d93.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 577 commits, 507 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 
507 blocks total, about 507 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:50] seafserv-gc.c(124): === Repos deleted by users ===\n[03/19/15 19:41:50] seafserv-gc.c(145): === GC is finished ===\n\n[03/19/15 19:41:50] Following repos have blocks to be removed:\nrepo-id1\nrepo-id2\nrepo-id3\nseaf-gc.sh [repo-id1] [repo-id2] ...\nseaf-gc.sh -r\nseaf-gc.sh --rm-fs\n
seaf-gc.sh -t 20\n
"},{"location":"maintain/seafile_gc/#gc-cleanup-script-for-community-version","title":"GC cleanup script for Community Version","text":"seaf-gc.sh --id-prefix a123\n
touch /opt/haiwen/seafile/cleanupScript.sh\n#!/bin/bash\n\n#####\n# Uncomment the following lines if you would rather run the script manually.\n# Abort if the script is not run as root user\n# if [[ $USER != \"root\" ]]; then\n# echo \"This script must be run as root user!\"\n# exit 1\n# fi\n#\n# echo \"Super User detected!!\"\n# read -p \"Press [ENTER] to start the procedure, this will stop the seafile server!!\"\n#####\n\n# path to your Seafile installation, adjust to your setup\npathtoseafile=/opt/haiwen\n\n# stop the server\necho Stopping the Seafile-Server...\nsystemctl stop seafile.service\nsystemctl stop seahub.service\n\necho Giving the server some time to shut down properly....\nsleep 20\n\n# run the cleanup\necho Seafile cleanup started...\nsudo -u seafile $pathtoseafile/seafile-server-latest/seaf-gc.sh\n\necho Giving the server some time....\nsleep 10\n\n# start the server again\necho Starting the Seafile-Server...\nsystemctl start seafile.service\nsystemctl start seahub.service\n\necho Seafile cleanup done!\nsudo chmod +x /path/to/yourscript.sh\ncrontab -e\n0 2 * * Sun /opt/haiwen/seafile/cleanupScript.sh\n/scripts/gc.sh script. Simply run docker exec <whatever-your-seafile-container-is-called> /scripts/gc.sh.
seahub_settings.py and restart service. ENABLE_TWO_FACTOR_AUTH = True TWO_FACTOR_DEVICE_REMEMBER_DAYS = 30 # optional, default 90 days.
seaf-server: data service daemon, handles raw file upload, download and synchronization. Seafile server by default listens on port 8082. You can configure Nginx/Apache to proxy traffic to the local 8082 port.
"},{"location":"overview/file_permission_management/","title":"File permission management","text":"
"},{"location":"security/auditing/","title":"Access log and auditing","text":"
seafevents.conf to turn it on:[Audit]\n## Audit log is disabled by default.\n## Leads to additional SQL tables being filled up, make sure your SQL server is able to handle it.\nenabled = true\nseahub_db.seahub.log).
"},{"location":"security/fail2ban/#copy-and-edit-jaillocal-file","title":"Copy and edit jail.local file","text":" # TimeZone\n TIME_ZONE = 'Europe/Stockholm'\njail.conf filejail.local with : * ports used by your seafile website (e.g. http,https) ; * logpath (e.g. /home/yourusername/logs/seahub.log) ; * maxretry (a default of 3 is equivalent to 9 real attempts in seafile, because one line is written every 3 failed authentications into seafile logs).jail.local in /etc/fail2ban with the following content:","text":"
"},{"location":"security/fail2ban/#create-the-fail2ban-filter-file-seafile-authconf-in-etcfail2banfilterd-with-the-following-content","title":"Create the fail2ban filter file # All standard jails are in the file configuration located\n# /etc/fail2ban/jail.conf\n\n# Warning you may override any other parameter (e.g. banaction,\n# action, port, logpath, etc) in that section within jail.local\n\n# Change logpath to the log file used by seafile (e.g. seahub.log)\n# Also you can change the max retry var (3 attempts = 1 line written in the\n# seafile log)\n# So with maxretry set to 1, the user can try 3 times before their IP is banned\n\n[seafile]\n\nenabled = true\nport = http,https\nfilter = seafile-auth\nlogpath = /home/yourusername/logs/seahub.log\nmaxretry = 3\nseafile-auth.conf in /etc/fail2ban/filter.d with the following content:","text":"
"},{"location":"security/fail2ban/#restart-fail2ban","title":"Restart fail2ban","text":"# Fail2Ban filter for seafile\n#\n\n[INCLUDES]\n\n# Read common prefixes. If any customizations available -- read them from\n# common.local\nbefore = common.conf\n\n[Definition]\n\n_daemon = seaf-server\n\nfailregex = Login attempt limit reached.*, ip: <HOST>\n\nignoreregex = \n\n# DEV Notes:\n#\n# pattern : 2015-10-20 15:20:32,402 [WARNING] seahub.auth.views:155 login Login attempt limit reached, username: <user>, ip: 1.2.3.4, attempts: 3\n# 2015-10-20 17:04:32,235 [WARNING] seahub.auth.views:163 login Login attempt limit reached, ip: 1.2.3.4, attempts: 3\nsudo fail2ban-client reload\nsudo iptables -S\n
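The failregex above can be sanity-checked offline against the sample log lines from the DEV notes. A minimal sketch in Python, where fail2ban's `<HOST>` placeholder is approximated by an IPv4 pattern (an assumption for illustration; fail2ban's real substitution also matches hostnames and IPv6):

```python
import re

# Approximate fail2ban's <HOST> placeholder with an IPv4-matching group.
FAILREGEX = r"Login attempt limit reached.*, ip: (?P<host>\d{1,3}(?:\.\d{1,3}){3})"

# Sample line taken from the filter's DEV notes above.
sample = ("2015-10-20 17:04:32,235 [WARNING] seahub.auth.views:163 login "
          "Login attempt limit reached, ip: 1.2.3.4, attempts: 3")

m = re.search(FAILREGEX, sample)
print(m.group("host"))  # 1.2.3.4
```

If the regex stops matching after a Seahub upgrade changes the log wording, the jail silently stops banning, so re-running a check like this after upgrades is cheap insurance.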
"},{"location":"security/fail2ban/#tests","title":"Tests","text":"...\n-N fail2ban-seafile\n...\n-A fail2ban-seafile -j RETURN\ndenis@myserver:~$ sudo fail2ban-client status seafile\nStatus for the jail: seafile\n|- filter\n| |- File list: /home/<youruser>/logs/seahub.log\n| |- Currently failed: 0\n| `- Total failed: 1\n`- action\n |- Currently banned: 1\n | `- IP list: 1.2.3.4\n `- Total banned: 1\nsudo iptables -S\n\n...\n-A fail2ban-seafile -s 1.2.3.4/32 -j REJECT --reject-with icmp-port-unreachable\n...\n
"},{"location":"security/fail2ban/#note","title":"Note","text":"sudo fail2ban-client set seafile unbanip 1.2.3.4\n
PBKDF2SHA256$iterations$salt$hash\n
"},{"location":"upgrade/upgrade/","title":"Upgrade manual","text":"PBKDF2(password, salt, iterations). The number of iterations is currently 10000.
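The stored format above can be illustrated with Python's standard hashlib. This is a sketch of the scheme only; the hex encodings of salt and hash used here are assumptions for illustration and may differ from Seafile's actual on-disk encoding:

```python
import hashlib
import hmac

def verify_pbkdf2_sha256(password: str, stored: str) -> bool:
    """Check a password against a PBKDF2SHA256$iterations$salt$hash record.

    Assumes salt and hash are hex-encoded (an illustration of the scheme,
    not a drop-in verifier for Seafile's database).
    """
    algo, iterations, salt, expected = stored.split("$")
    assert algo == "PBKDF2SHA256"
    derived = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), bytes.fromhex(salt), int(iterations))
    # Constant-time comparison to avoid timing side channels
    return hmac.compare_digest(derived.hex(), expected)

# Build a record the same way to demonstrate the round trip
salt = "00112233445566778899aabbccddeeff"
digest = hashlib.pbkdf2_hmac("sha256", b"secret", bytes.fromhex(salt), 10000)
record = f"PBKDF2SHA256$10000${salt}${digest.hex()}"
print(verify_pbkdf2_sha256("secret", record))  # True
print(verify_pbkdf2_sha256("wrong", record))   # False
```

Embedding the iteration count in the record is what lets the server raise it later without invalidating existing hashes: old records keep verifying with their stored count.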
"},{"location":"upgrade/upgrade/#special-upgrade-notes","title":"Special upgrade notes","text":"
"},{"location":"upgrade/upgrade/#upgrade-a-binary-package-based-deployment","title":"Upgrade a binary package based deployment","text":""},{"location":"upgrade/upgrade/#major-version-upgrade-eg-from-5xx-to-6yy","title":"Major version upgrade (e.g. from 5.x.x to 6.y.y)","text":"seafile\n -- seafile-server-5.1.0\n -- seafile-server-6.1.0\n -- ccnet\n -- seafile-data\ncd seafile/seafile-server-latest\n./seahub.sh stop\n./seafile.sh stop\n# or via service\n/etc/init.d/seafile-server stop\ncd seafile/seafile-server-6.1.0\nls upgrade/upgrade_*\n...\nupgrade_5.0_5.1.sh\nupgrade_5.1_6.0.sh\nupgrade_6.0_6.1.sh\nupgrade/upgrade_5.1_6.0.sh\nupgrade/upgrade_6.0_6.1.sh\ncd seafile/seafile-server-latest/\n./seafile.sh start\n./seahub.sh start # or \"./seahub.sh start-fastcgi\" if you're using fastcgi\n# or via service\n/etc/init.d/seafile-server start\n
"},{"location":"upgrade/upgrade/#minor-version-upgrade-eg-from-61x-to-62y","title":"Minor version upgrade (e.g. from 6.1.x to 6.2.y)","text":"rm -rf seafile-server-5.1.0/\nseafile\n -- seafile-server-6.1.0\n -- seafile-server-6.2.0\n -- ccnet\n -- seafile-data\n
cd seafile/seafile-server-latest\n./seahub.sh stop\n./seafile.sh stop\n# or via service\n/etc/init.d/seafile-server stop\ncd seafile/seafile-server-6.2.0\nls upgrade/upgrade_*\n...\nupgrade/upgrade_5.1_6.0.sh\nupgrade/upgrade_6.0_6.1.sh\nupgrade/upgrade_6.1_6.2.sh\nupgrade/upgrade_6.1_6.2.sh\n./seafile.sh start\n./seahub.sh start\n# or via service\n/etc/init.d/seafile-server start\n
"},{"location":"upgrade/upgrade/#maintenance-version-upgrade-eg-from-622-to-623","title":"Maintenance version upgrade (e.g. from 6.2.2 to 6.2.3)","text":"rm -rf seafile-server-6.1.0/\n
"},{"location":"upgrade/upgrade_a_cluster/","title":"Upgrade a Seafile cluster","text":""},{"location":"upgrade/upgrade_a_cluster/#major-and-minor-version-upgrade","title":"Major and minor version upgrade","text":"minor-upgrade.sh):cd seafile-server-6.2.3/upgrade/ && ./minor-upgrade.shrm -rf seafile-server-6.2.2/
"},{"location":"upgrade/upgrade_a_cluster/#maintanence-upgrade","title":"Maintenance upgrade","text":"./upgrade/minor_upgrade.sh at each node to update the symbolic link.OFFICE_CONVERTOR_ROOT = 'http://<ip of node background>'\n\u2b07\ufe0f\nOFFICE_CONVERTOR_ROOT = 'http://<ip of node background>:6000'\n
"},{"location":"upgrade/upgrade_a_cluster/#for-backend-node","title":"For backend node","text":"[OFFICE CONVERTER]\nenabled = true\nworkers = 1\nmax-size = 10\n\n\u2b07\ufe0f\n[OFFICE CONVERTER]\nenabled = true\nworkers = 1\nmax-size = 10\nhost = <ip of node background>\nport = 6000\n
"},{"location":"upgrade/upgrade_a_cluster/#from-63-to-70","title":"From 6.3 to 7.0","text":"[OFFICE CONVERTER]\nenabled = true\nworkers = 1\nmax-size = 10\n\n\u2b07\ufe0f\n[OFFICE CONVERTER]\nenabled = true\nworkers = 1\nmax-size = 10\nhost = <ip of node background>\nport = 6000\nseahub_settings.py is:CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '<MEMCACHED SERVER IP>:11211',\n }\n}\n\nCOMPRESS_CACHE_BACKEND = 'django.core.cache.backends.locmem.LocMemCache'\n
"},{"location":"upgrade/upgrade_a_cluster/#from-61-to-62","title":"From 6.1 to 6.2","text":"CACHES = {\n 'default': {\n 'BACKEND': 'django_pylibmc.memcached.PyLibMCCache',\n 'LOCATION': '<MEMCACHED SERVER IP>:11211',\n },\n 'locmem': {\n 'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',\n },\n}\nCOMPRESS_CACHE_BACKEND = 'locmem'\ncd /data/haiwen/\nln -s /nfs-share/seafile-httptemp seafile-data/httptemp\n
"},{"location":"upgrade/upgrade_a_cluster/#from-v44-to-v50","title":"From v4.4 to v5.0","text":" - COMPRESS_CACHE_BACKEND = 'locmem://'\n + COMPRESS_CACHE_BACKEND = 'django.core.cache.backends.locmem.LocMemCache'\n
./upgrade/upgrade_4.4_5.0.sh\n
SEAFILE_SKIP_DB_UPGRADE environment variable set:SEAFILE_SKIP_DB_UPGRADE=1 ./upgrade/upgrade_4.4_5.0.sh\n
"},{"location":"upgrade/upgrade_a_cluster/#from-v43-to-v44","title":"From v4.3 to v4.4","text":"conf/\n |__ ccnet.conf\n |__ seafile.conf\n |__ seafevent.conf\n |__ seafdav.conf\n |__ seahub_settings.conf\n
"},{"location":"upgrade/upgrade_a_cluster/#from-v42-to-v43","title":"From v4.2 to v4.3","text":"
"},{"location":"upgrade/upgrade_a_cluster_docker/","title":"Upgrade a Seafile cluster (Docker)","text":""},{"location":"upgrade/upgrade_a_cluster_docker/#major-and-minor-version-upgrade","title":"Major and minor version upgrade","text":"
"},{"location":"upgrade/upgrade_a_cluster_docker/#maintanence-upgrade","title":"Maintenance upgrade","text":"...\nservice:\n ...\n seafile:\n image: seafileltd/seafile-mc:10.0-latest\n ...\n ...\nservice:\n ...\n seafile:\n image: seafileltd/seafile-mc:11.0-latest\n ...\n ...\n
mv /opt/seafile/shared/ssl /opt/seafile/shared/ssl-bak\n\nmv /opt/seafile/shared/nginx/conf/seafile.nginx.conf /opt/seafile/shared/nginx/conf/seafile.nginx.conf.bak\ndocker compose down\ndocker compose up -d\ndocker exec seafile nginx -s reload\n.env and seafile-server.yml files for configuration.mv docker-compose.yml docker-compose.yml.bak\ndocker-compose.yml.bakwget -O .env https://manual.seafile.com/docker/docker-compose/ce/12.0/env\nwget https://manual.seafile.com/docker/docker-compose/ce/12.0/seafile-server.yml\nwget https://manual.seafile.com/docker/docker-compose/ce/12.0/caddy.yml\nwget -O .env https://manual.seafile.com/docker/docker-compose/pro/12.0/env\nwget https://manual.seafile.com/docker/docker-compose/pro/12.0/seafile-server.yml\nwget https://manual.seafile.com/docker/docker-compose/pro/12.0/caddy.yml\n
SEAFILE_VOLUME: The volume directory of Seafile data, default is /opt/seafile-dataSEAFILE_MYSQL_VOLUME: The volume directory of MySQL data, default is /opt/seafile-mysql/dbSEAFILE_CADDY_VOLUME: The volume directory of Caddy data used to store certificates obtained from Let's Encrypt, default is /opt/seafile-caddySEAFILE_ELASTICSEARCH_VOLUME: The volume directory of Elasticsearch dataSEAFILE_MYSQL_ROOT_PASSWORD: The root password of MySQLSEAFILE_MYSQL_DB_PASSWORD: The password of the MySQL user seafileJWT: JWT_PRIVATE_KEY, a random string of no less than 32 characters, generated for example with: pwgen -s 40 1SEAFILE_SERVER_HOSTNAME: Seafile server hostname or domainSEAFILE_SERVER_PROTOCOL: Seafile server protocol (http or https)cp seafile.nginx.conf seafile.nginx.conf.bak\nserver listen 80 section:#server {\n# listen 80;\n# server_name _ default_server;\n\n # allow certbot to connect to challenge location via HTTP Port 80\n # otherwise renewal request will fail\n# location /.well-known/acme-challenge/ {\n# alias /var/www/challenges/;\n# try_files $uri =404;\n# }\n\n# location / {\n# rewrite ^ https://example.seafile.com$request_uri? permanent;\n# }\n#}\nserver listen 443 to 80:server {\n#listen 443 ssl;\nlisten 80;\n\n# ssl_certificate /shared/ssl/pkg.seafile.top.crt;\n# ssl_certificate_key /shared/ssl/pkg.seafile.top.key;\n\n# ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;\n\n ...\nseafile-server.yml directory, then modify Seafile .env file.wget https://manual.seafile.com/docker/docker-compose/ce/12.0/seadoc.yml\nwget https://manual.seafile.com/docker/docker-compose/pro/12.0/seadoc.yml\nCOMPOSE_FILE='seafile-server.yml,caddy.yml,seadoc.yml'\n\nSEADOC_VOLUME=/opt/seadoc-data\nENABLE_SEADOC=true\nSEADOC_SERVER_URL=http://example.seafile.com/sdoc-server\n
seadoc.yml to the COMPOSE_FILE field./sdoc-server)/sdoc-server/, /socket.io configs in seafile.nginx.conf file.# location /sdoc-server/ {\n# add_header Access-Control-Allow-Origin *;\n# add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;\n# add_header Access-Control-Allow-Headers \"deviceType,token, authorization, content-type\";\n# if ($request_method = 'OPTIONS') {\n# add_header Access-Control-Allow-Origin *;\n# add_header Access-Control-Allow-Methods GET,POST,PUT,DELETE,OPTIONS;\n# add_header Access-Control-Allow-Headers \"deviceType,token, authorization, content-type\";\n# return 204;\n# }\n# proxy_pass http://sdoc-server:7070/;\n# proxy_redirect off;\n# proxy_set_header Host $host;\n# proxy_set_header X-Real-IP $remote_addr;\n# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n# proxy_set_header X-Forwarded-Host $server_name;\n# proxy_set_header X-Forwarded-Proto $scheme;\n# client_max_body_size 100m;\n# }\n# location /socket.io {\n# proxy_pass http://sdoc-server:7070;\n# proxy_http_version 1.1;\n# proxy_set_header Upgrade $http_upgrade;\n# proxy_set_header Connection 'upgrade';\n# proxy_redirect off;\n# proxy_buffers 8 32k;\n# proxy_buffer_size 64k;\n# proxy_set_header X-Real-IP $remote_addr;\n# proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n# proxy_set_header Host $http_host;\n# proxy_set_header X-NginX-Proxy true;\n# }\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/","title":"Upgrade notes for 10.0","text":"docker compose down\n\ndocker compose up -d\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#saml-sso-change-pro-edition-only","title":"SAML SSO change (pro edition only)","text":"[memcached]\nmemcached_options = --SERVER=<the IP of Memcached Server> --POOL-MIN=10 --POOL-MAX=100\nENABLE_ADFS_LOGIN = True\nLOGIN_REDIRECT_URL = '/saml2/complete/'\nSAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx'\nSAML_ATTRIBUTE_MAPPING = {\n 'name': ('display_name', ),\n 'mail': ('contact_email', ),\n ...\n}\n
seahub_settings.py.ENABLED_ROLE_PERMISSIONS = {\n 'default': {\n ...\n 'upload_rate_limit': 2000, # unit: kb/s\n 'download_rate_limit': 4000,\n ...\n },\n 'guest': {\n ...\n 'upload_rate_limit': 100,\n 'download_rate_limit': 200,\n ...\n },\n}\n
seafile-server-latest directory to make the configuration take effect.
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#elasticsearch-change-pro-edition-only","title":"ElasticSearch change (pro edition only)","text":"./seahub.sh python-env python3 seahub/manage.py set_user_role_upload_download_rate_limit\ncurl 'http{s}://<es IP>:9200/_cat/shards/repofiles?v'\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#new-python-libraries","title":"New Python libraries","text":"[INDEX FILES]\n...\nshards = 10 # default is 5\n...\nsudo pip3 install future==0.18.* mysqlclient==2.1.* pillow==10.2.* captcha==0.5.* django_simple_captcha==0.5.20 djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#upgrade-to-100x","title":"Upgrade to 10.0.x","text":"sudo pip3 install future==0.18.* mysqlclient==2.1.* pillow==9.3.* captcha==0.4 django_simple_captcha==0.5.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1\n
sh upgrade/upgrade_9.0_10.0.sh
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#update-elasticsearch-pro-edition-only","title":"Update Elasticsearch (pro edition only)","text":"docker pull elasticsearch:7.17.9\nmkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\nsudo docker run -d --name es-7.17 -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data -d elasticsearch:7.17.9\nES_JAVA_OPTS can be adjusted according to your need.# create repo_head index\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repo_head_8?pretty=true' -d '\n{\n \"mappings\" : {\n \"properties\" : {\n \"commit\" : {\n \"type\" : \"keyword\",\n \"index\" : false\n },\n \"repo\" : {\n \"type\" : \"keyword\",\n \"index\" : false\n },\n \"updatingto\" : {\n \"type\" : \"keyword\",\n \"index\" : false\n }\n }\n }\n}'\n\n# create repofiles index, number_of_shards is the number of shards, here is set to 5, you can also modify it to the most suitable number of shards\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repofiles_8/?pretty=true' -d '\n{\n \"settings\" : {\n \"index\" : {\n \"number_of_shards\" : \"5\",\n \"analysis\" : {\n \"analyzer\" : {\n \"seafile_file_name_ngram_analyzer\" : {\n \"filter\" : [\n \"lowercase\"\n ],\n \"type\" : \"custom\",\n \"tokenizer\" : \"seafile_file_name_ngram_tokenizer\"\n }\n },\n \"tokenizer\" : {\n \"seafile_file_name_ngram_tokenizer\" : {\n \"type\" : \"ngram\",\n \"min_gram\" : \"3\",\n \"max_gram\" : \"4\"\n }\n }\n }\n }\n },\n \"mappings\" : {\n \"properties\" : {\n \"content\" : {\n \"type\" : \"text\",\n \"term_vector\" : \"with_positions_offsets\"\n },\n \"filename\" : {\n \"type\" : \"text\",\n \"fields\" : {\n \"ngram\" : {\n \"type\" : \"text\",\n \"analyzer\" : \"seafile_file_name_ngram_analyzer\"\n }\n 
}\n },\n \"is_dir\" : {\n \"type\" : \"boolean\"\n },\n \"mtime\" : {\n \"type\" : \"date\"\n },\n \"path\" : {\n \"type\" : \"keyword\"\n },\n \"repo\" : {\n \"type\" : \"keyword\"\n },\n \"size\" : {\n \"type\" : \"long\"\n },\n \"suffix\" : {\n \"type\" : \"keyword\"\n }\n }\n }\n}'\nrefresh_interval to -1 and the number_of_replicas to 0 for efficient reindex:curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repo_head_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repofiles_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\ncurl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?wait_for_completion=false&pretty=true' -d '\n{\n \"source\": {\n \"index\": \"repo_head\"\n },\n \"dest\": {\n \"index\": \"repo_head_8\"\n }\n}'\n\ncurl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?wait_for_completion=false&pretty=true' -d '\n{\n \"source\": {\n \"index\": \"repofiles\"\n },\n \"dest\": {\n \"index\": \"repofiles_8\"\n }\n}'\n# Get the task_id of the reindex task:\n$ curl 'http{s}://{es server IP}:9200/_tasks?actions=*reindex&pretty'\n# Check to see if the reindex task is complete:\n$ curl 'http{s}://{es server IP}:9200/_tasks/:<task_id>?pretty'\nrefresh_interval and number_of_replicas to the values used in the old index:curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repo_head_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/repofiles_8/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\ngreen (or yellow if it is a single 
node).curl 'http{s}://{es server IP}:9200/_cluster/health?pretty'\ncurl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_aliases?pretty' -d '\n{\n \"actions\": [\n {\"remove_index\": {\"index\": \"repo_head\"}},\n {\"remove_index\": {\"index\": \"repofiles\"}},\n {\"add\": {\"index\": \"repo_head_8\", \"alias\": \"repo_head\"}},\n {\"add\": {\"index\": \"repofiles_8\", \"alias\": \"repofiles\"}}\n ]\n}'\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#method-two-rebuild-the-index-and-discard-the-old-index-data","title":"Method two, rebuild the index and discard the old index data","text":"$ docker stop es-7.17\n\n$ docker rm es-7.17\n\n$ docker pull elasticsearch:8.6.2\n\n$ sudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data -d elasticsearch:8.6.2\ndocker pull elasticsearch:8.5.3\nmkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\nsudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data -d elasticsearch:8.5.3\n[INDEX FILES]\n...\nexternal_es_server = true\nes_host = http{s}://{es server IP}\nes_port = 9200\nshards = 10 # default is 5.\n...\nsu seafile\ncd seafile-server-latest/\n./seafile.sh stop && ./seahub.sh stop\n./seafile.sh start && ./seahub.sh start\nrm -rf /opt/seafile-elasticsearch/data/*\n
"},{"location":"upgrade/upgrade_notes_for_10.0.x/#method-three-if-you-are-in-a-cluster-environment","title":"Method three, if you are in a cluster environment","text":"$ cd /opt/seafile/seafile-server-latest\n$ ./pro/pro.py search --update\nseafevents.conf file. The background node does not start the Seafile background service, just manually run the command ./pro/pro.py search --update.
migrate_ldapusers.py script to merge the ccnet_db.LDAPImported table into the ccnet_db.EmailUsers table. The setting files need to be changed manually. (See more details below)DISABLE_ADFS_USER_PWD_LOGIN = True in seahub_settings.py.
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#django-csrf-protection-issue","title":"Django CSRF protection issue","text":"sudo apt-get update\nsudo apt-get install -y dnsutils\n
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#new-python-libraries","title":"New Python libraries","text":"CSRF_TRUSTED_ORIGINS = [\"https://<your-domain>\"]\n
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#upgrade-to-110x","title":"Upgrade to 11.0.x","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#1-stop-seafile-100x-server","title":"1) Stop Seafile-10.0.x server.","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#2-start-from-seafile-110x-run-the-script","title":"2) Start from Seafile 11.0.x, run the script:","text":"sudo apt-get update\nsudo apt-get install -y python3-dev ldap-utils libldap2-dev\n\nsudo pip3 install future==0.18.* mysqlclient==2.1.* pillow==10.2.* sqlalchemy==2.0.18 captcha==0.5.* django_simple_captcha==0.6.* djangosaml2==1.5.* pysaml2==7.2.* pycryptodome==3.16.* cffi==1.15.1 python-ldap==3.4.3\n
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#3modify-configurations-and-migrate-ldap-records","title":"3\uff09Modify configurations and migrate LDAP records","text":""},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-configurations-for-ldap","title":"Change configurations for LDAP","text":"upgrade/upgrade_10.0_11.0.sh\n# Basic configuration items for LDAP login\nENABLE_LDAP = True\nLDAP_SERVER_URL = 'ldap://192.168.0.125' # The URL of LDAP server\nLDAP_BASE_DN = 'ou=test,dc=seafile,dc=ren' # The root node of users who can \n # log in to Seafile in the LDAP server\nLDAP_ADMIN_DN = 'administrator@seafile.ren' # DN of the administrator used \n # to query the LDAP server for information\nLDAP_ADMIN_PASSWORD = 'Hello@123' # Password of LDAP_ADMIN_DN\nLDAP_PROVIDER = 'ldap' # Identify the source of the user, used in \n # the table social_auth_usersocialauth, defaults to 'ldap'\nLDAP_LOGIN_ATTR = 'userPrincipalName' # User's attribute used to log in to Seafile, \n # can be mail or userPrincipalName, cannot be changed\nLDAP_FILTER = 'memberOf=CN=testgroup,OU=test,DC=seafile,DC=ren' # Additional filter conditions,\n # users who meet the filter conditions can log in, otherwise they cannot log in\n# For update user info when login\nLDAP_CONTACT_EMAIL_ATTR = '' # For update user's contact_email\nLDAP_USER_ROLE_ATTR = '' # For update user's role\nLDAP_USER_FIRST_NAME_ATTR = 'givenName' # For update user's first name\nLDAP_USER_LAST_NAME_ATTR = 'sn' # For update user's last name\nLDAP_USER_NAME_REVERSE = False # Whether to reverse the user's first and last name\n# Configuration items for LDAP sync tasks.\nLDAP_SYNC_INTERVAL = 60 # LDAP sync task period, in minutes\n\n# LDAP user sync configuration items.\nENABLE_LDAP_USER_SYNC = True # Whether to enable user sync\nLDAP_USER_OBJECT_CLASS = 'person' # This is the name of the class used to search for user objects. \n # In Active Directory, it's usually \"person\". 
The default value is \"person\".\nLDAP_DEPT_ATTR = '' # LDAP user's department info\nLDAP_UID_ATTR = '' # LDAP user's login_id attribute\nLDAP_AUTO_REACTIVATE_USERS = True # Whether to automatically reactivate deactivated users\nLDAP_USE_PAGED_RESULT = False # Whether to use pagination extension\nIMPORT_NEW_USER = True # Whether to import new users when syncing users\nACTIVATE_USER_WHEN_IMPORT = True # Whether to activate the user when importing new users\nENABLE_EXTRA_USER_INFO_SYNC = True # Whether to enable sync of additional user information,\n # including user's full name, contact_email, department, and Windows login name, etc.\nDEACTIVE_USER_IF_NOTFOUND = False # Set to \"true\" if you want to deactivate a user \n # when they have been deleted from the AD server.\n\n# LDAP group sync configuration items.\nENABLE_LDAP_GROUP_SYNC = True # Whether to enable group sync\nLDAP_GROUP_FILTER = '' # Group sync filter\nLDAP_SYNC_DEPARTMENT_FROM_OU = True # Whether to enable syncing departments from OU.\nLDAP_GROUP_OBJECT_CLASS = 'group' # This is the name of the class used to search for group objects.\nLDAP_GROUP_MEMBER_ATTR = 'member' # The attribute field to use when loading the group's members. \n # For most directory servers, the attribute is \"member\", \n # which is the default value. For \"posixGroup\", it should be set to \"memberUid\".\nLDAP_USER_ATTR_IN_MEMBERUID = 'uid' # The user attribute set in the 'memberUid' option, \n # which is used in \"posixGroup\". The default value is \"uid\".\nLDAP_GROUP_UUID_ATTR = 'objectGUID' # Used to uniquely identify groups in LDAP\nLDAP_USE_GROUP_MEMBER_RANGE_QUERY = False # When a group contains too many members, \n # AD will only return part of them. 
Set this option to TRUE\n # to make LDAP sync work with large groups.\nLDAP_SYNC_GROUP_AS_DEPARTMENT = False # Whether to sync groups as top-level departments in Seafile\nLDAP_DEPT_NAME_ATTR = '' # Used to get the department name.\nLDAP_CREATE_DEPARTMENT_LIBRARY = False # If you decide to sync the group as a department,\n # you can set this option to \"true\". In this way, when \n # the group is synchronized for the first time, a library\n # is automatically created for the department, and the \n # library's name is the department's name.\nLDAP_DEPT_REPO_PERM = 'rw' # Set the permissions of the department repo, default permission is 'rw'.\nLDAP_DEFAULT_DEPARTMENT_QUOTA = -2 # You can set a default space quota for each department\n # when you synchronize a group for the first time. The \n # quota is set to unlimited if this option is not set.\n # Unit is MB.\nDEL_GROUP_IF_NOT_FOUND = False # Set to \"true\" and the sync process will delete a group if it is not found in the LDAP server.\nDEL_DEPARTMENT_IF_NOT_FOUND = False # Set to \"true\" and the sync process will delete a department if it is not found in the LDAP server.\nSSO_LDAP_USE_SAME_UID = True:SSO_LDAP_USE_SAME_UID = True\nLDAP_LOGIN_ATTR (not LDAP_UID_ATTR), in ADFS it is the uid attribute. You need to make sure you use the same attribute for the two settings.LDAPImported to EmailUserscd <install-path>/seafile-server-latest\npython3 migrate_ldapusers.py\n
"},{"location":"upgrade/upgrade_notes_for_11.0.x/#change-configuration-for-oauth","title":"Change configuration for OAuth:","text":"docker exec -it seafile /usr/bin/python3 /opt/seafile/seafile-server-latest/migrate_ldapusers.py\n# Version 10.0 or earlier\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"),\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n\n# Since version 11.0, the 'uid' attribute has been added.\nOAUTH_ATTRIBUTE_MAP = {\n \"id\": (True, \"email\"), # In the new version, keep the email attribute configuration unchanged to stay compatible with both old and new user logins\n \"uid\": (True, \"uid\"), # Seafile uses 'uid' as the external unique identifier of the user. Different OAuth systems have different attributes, which may be 'uid' or 'username', etc.\n \"name\": (False, \"name\"),\n \"email\": (False, \"contact_email\"),\n}\n\"id\": (True, \"email\"). You can also manually add records in social_auth_usersocialauth to map the external uid to old users.
.env file needs to contain some configuration items. These configuration items need to be shared by different components in Seafile. We name it .env to be consistent with the Docker based installation.
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#upgrade-to-120-for-binary-installation","title":"Upgrade to 12.0 (for binary installation)","text":"sudo pip3 install future==1.0.* mysqlclient==2.2.* pillow==10.4.* sqlalchemy==2.0.* gevent==24.2.* captcha==0.6.* django_simple_captcha==0.6.* djangosaml2==1.9.* pysaml2==7.3.* pycryptodome==3.20.* cffi==1.17.0 python-ldap==3.4.* PyMuPDF==1.24.*\n
"},{"location":"upgrade/upgrade_notes_for_12.0.x/#3-create-the-env-file-in-conf-directory","title":"3) Create the upgrade/upgrade_11.0_12.0.sh\n.env file in conf/ directory","text":"JWT_PRIVATE_KEY=xxx\nSEAFILE_SERVER_PROTOCOL=https\nSEAFILE_SERVER_HOSTNAME=seafile.example.com\npwgen -s 40 1
sudo apt-get install python3 python3-setuptools python3-pip memcached libmemcached-dev -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.3.8 \\\n django-pylibmc django-simple-captcha python3-ldap\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#seafile-pro","title":"Seafile-Pro","text":"yum install python3 python3-setuptools python3-pip -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.3.8 \\\n django-pylibmc django-simple-captcha python3-ldap\n
apt-get install python3 python3-setuptools python3-pip -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.3.8 \\\n django-pylibmc django-simple-captcha python3-ldap\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#upgrade-to-71x","title":"Upgrade to 7.1.x","text":"yum install python3 python3-setuptools python3-pip -y\n\nsudo pip3 install --timeout=3600 Pillow==9.4.0 pylibmc captcha jinja2 sqlalchemy==1.3.8 \\\n django-pylibmc django-simple-captcha python3-ldap\n
upgrade/upgrade_7.0_7.1.sh\n
rm -rf /tmp/seahub_cache # Clear the Seahub cache files from disk.\n# If you are using the Memcached service, you need to restart the service to clear the Seahub cache.\nsystemctl restart memcached\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#proxy-seafdav","title":"Proxy Seafdav","text":"
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#for-apache","title":"For Apache","text":".....\n location /seafdav {\n proxy_pass http://127.0.0.1:8080/seafdav;\n proxy_set_header Host $host;\n proxy_set_header X-Real-IP $remote_addr;\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\n proxy_set_header X-Forwarded-Host $server_name;\n proxy_set_header X-Forwarded-Proto $scheme;\n proxy_read_timeout 1200s;\n client_max_body_size 0;\n\n access_log /var/log/nginx/seafdav.access.log seafileformat;\n error_log /var/log/nginx/seafdav.error.log;\n }\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#builtin-office-file-preview","title":"Builtin office file preview","text":"......\n <Location /seafdav>\n ProxyPass \"http://127.0.0.1:8080/seafdav\"\n </Location>\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#login-page-customization","title":"Login Page Customization","text":"sudo apt-get install python3-rados\n158 if nickname.strip(): # set nickname when it's not empty\n159 p.nickname = nickname\n\nto \n\n158 if nickname.strip(): # set nickname when it's not empty\n159 p.nickname = nickname.encode(\"iso-8859-1\").decode('utf8')\n
"},{"location":"upgrade/upgrade_notes_for_7.1.x/#internal-server-error-after-upgrade-to-version-71","title":"Internal server error after upgrade to version 7.1","text":"[INFO] updating seahub database...\n/opt/seafile/seafile-server-7.1.1/seahub/thirdpart/pymysql/cursors.py:170: Warning: (1050, \"Table 'base_reposecretkey' already exists\")\n result = self._query(query)\n[WARNING] Failed to execute sql: (1091, \"Can't DROP 'drafts_draft_origin_file_uuid_7c003c98_uniq'; check that column/key exists\")\ndaemon = True to daemon = False, then run ./seahub.sh again. If there are missing Python dependencies, the error will be reported in the terminal.'BACKEND': 'django_pylibmc.memcached.PyLibMCCache'\n
"},{"location":"upgrade/upgrade_notes_for_8.0.x/","title":"Upgrade notes for 8.0","text":"'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',\n
apt-get install libmysqlclient-dev\n\nsudo pip3 install -U future mysqlclient sqlalchemy==1.4.3\n
apt-get install default-libmysqlclient-dev \n\nsudo pip3 install future mysqlclient sqlalchemy==1.4.3\n
yum install python3-devel mysql-devel gcc gcc-c++ -y\n\nsudo pip3 install future\nsudo pip3 install mysqlclient==2.0.1 sqlalchemy==1.4.3\n
"},{"location":"upgrade/upgrade_notes_for_8.0.x/#change-shibboleth-setting","title":"Change Shibboleth Setting","text":"yum install python3-devel mysql-devel gcc gcc-c++ -y\n\nsudo pip3 install future mysqlclient sqlalchemy==1.4.3\nEXTRA_MIDDLEWARE_CLASSESEXTRA_MIDDLEWARE_CLASSES = (\n 'shibboleth.middleware.ShibbolethRemoteUserMiddleware',\n)\nEXTRA_MIDDLEWAREEXTRA_MIDDLEWARE = (\n 'shibboleth.middleware.ShibbolethRemoteUserMiddleware',\n)\nsettings.MIDDLEWARE_CLASSES is removed since django 2.0.
"},{"location":"upgrade/upgrade_notes_for_9.0.x/","title":"Upgrade notes for 9.0","text":"sh upgrade/upgrade_7.1_8.0.sh
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#new-python-libraries","title":"New Python libraries","text":"[fileserver]\nuse_go_fileserver = true\n
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#upgrade-to-90x","title":"Upgrade to 9.0.x","text":"sudo pip3 install pycryptodome==3.12.0 cffi==1.14.0\n
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#update-elasticsearch-pro-edition-only","title":"Update ElasticSearch (pro edition only)","text":""},{"location":"upgrade/upgrade_notes_for_9.0.x/#method-one-rebuild-the-index-and-discard-the-old-index-data","title":"Method one, rebuild the index and discard the old index data","text":"sh upgrade/upgrade_8.0_9.0.shdocker pull elasticsearch:7.16.2\nmkdir -p /opt/seafile-elasticsearch/data && chmod -R 777 /opt/seafile-elasticsearch/data/\nsudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms2g -Xmx2g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data -d elasticsearch:7.16.2\nrm -rf /opt/seafile/pro-data/search/data/*\n[INDEX FILES]\nexternal_es_server = true\nes_host = your server's IP (use 127.0.0.1 if deployed locally)\nes_port = 9200\n
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#method-two-reindex-the-existing-data","title":"Method two, reindex the existing data","text":"su seafile\ncd seafile-server-latest/\n./seafile.sh stop && ./seahub.stop \n./seafile.sh start && ./seahub.start \n
docker pull elasticsearch:7.16.2\nmkdir -p /opt/seafile-elasticsearch/data \nmv /opt/seafile/pro-data/search/data/* /opt/seafile-elasticsearch/data/\nchmod -R 777 /opt/seafile-elasticsearch/data/\nsudo docker run -d --name es -p 9200:9200 -e \"discovery.type=single-node\" -e \"bootstrap.memory_lock=true\" -e \"ES_JAVA_OPTS=-Xms1g -Xmx1g\" -e \"xpack.security.enabled=false\" --restart=always -v /opt/seafile-elasticsearch/data:/usr/share/elasticsearch/data -d elasticsearch:7.16.2\nES_JAVA_OPTS can be adjusted according to your need.curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repo_head?include_type_name=false&pretty=true' -d '\n{\n \"mappings\" : {\n \"properties\" : {\n \"commit\" : {\n \"type\" : \"text\",\n \"index\" : false\n },\n \"repo\" : {\n \"type\" : \"text\",\n \"index\" : false\n },\n \"updatingto\" : {\n \"type\" : \"text\",\n \"index\" : false\n }\n }\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repofiles/?include_type_name=false&pretty=true' -d '\n{\n \"settings\" : {\n \"index\" : {\n \"number_of_shards\" : 5,\n \"number_of_replicas\" : 1,\n \"analysis\" : {\n \"analyzer\" : {\n \"seafile_file_name_ngram_analyzer\" : {\n \"filter\" : [\n \"lowercase\"\n ],\n \"type\" : \"custom\",\n \"tokenizer\" : \"seafile_file_name_ngram_tokenizer\"\n }\n },\n \"tokenizer\" : {\n \"seafile_file_name_ngram_tokenizer\" : {\n \"type\" : \"ngram\",\n \"min_gram\" : \"3\",\n \"max_gram\" : \"4\"\n }\n }\n }\n }\n },\n \"mappings\" : {\n \"properties\" : {\n \"content\" : {\n \"type\" : \"text\",\n \"term_vector\" : \"with_positions_offsets\"\n },\n \"filename\" : {\n \"type\" : \"text\",\n \"fields\" : {\n \"ngram\" : {\n \"type\" : \"text\",\n \"analyzer\" : \"seafile_file_name_ngram_analyzer\"\n }\n }\n },\n \"is_dir\" : {\n \"type\" : \"boolean\"\n },\n \"mtime\" : {\n \"type\" : \"date\"\n },\n \"path\" : {\n \"type\" : \"keyword\"\n },\n \"repo\" : {\n \"type\" : \"keyword\"\n 
},\n \"size\" : {\n \"type\" : \"long\"\n },\n \"suffix\" : {\n \"type\" : \"keyword\"\n }\n }\n }\n}'\nrefresh_interval to -1 and the number_of_replicas to 0 for efficient reindexing:curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repo_head/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repofiles/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : \"-1\",\n \"number_of_replicas\" : 0\n }\n}'\ncurl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?pretty' -d '\n{\n \"source\": {\n \"index\": \"repo_head\",\n \"type\": \"repo_commit\"\n },\n \"dest\": {\n \"index\": \"new_repo_head\",\n \"type\": \"_doc\"\n }\n}'\n\ncurl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_reindex/?pretty' -d '\n{\n \"source\": {\n \"index\": \"repofiles\",\n \"type\": \"file\"\n },\n \"dest\": {\n \"index\": \"new_repofiles\",\n \"type\": \"_doc\"\n }\n}'\nrefresh_interval and number_of_replicas to the values used in the old index.curl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repo_head/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\n\ncurl -X PUT -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/new_repofiles/_settings?pretty' -d '\n{\n \"index\" : {\n \"refresh_interval\" : null,\n \"number_of_replicas\" : 1\n }\n}'\ngreen.curl http{s}://{es server IP}:9200/_cluster/health?pretty\ncurl -X POST -H 'Content-Type: application/json' 'http{s}://{es server IP}:9200/_aliases?pretty' -d '\n{\n \"actions\": [\n {\"remove_index\": {\"index\": \"repo_head\"}},\n {\"remove_index\": {\"index\": \"repofiles\"}},\n {\"add\": {\"index\": \"new_repo_head\", \"alias\": \"repo_head\"}},\n {\"add\": {\"index\": \"new_repofiles\", 
\"alias\": \"repofiles\"}}\n ]\n}'\n[INDEX FILES]\nexternal_es_server = true\nes_host = your server's IP\nes_port = 9200\n
"},{"location":"upgrade/upgrade_notes_for_9.0.x/#method-three-if-you-are-in-a-cluster-environment","title":"Method three, if you are in a cluster environment","text":"su seafile\ncd seafile-server-latest/\n./seafile.sh stop && ./seahub.stop \n./seafile.sh start && ./seahub.start \n./pro/pro.py search --update, and then upgrade the other nodes to Seafile 9.0 version and use the new ElasticSeach 7.x after the index is created. Then deactivate the old backend node and the old version of ElasticSeach.
.env file in conf/ directory (conf/.env)
JWT_PRIVATE_KEY=xxx
SEAFILE_SERVER_PROTOCOL=https
SEAFILE_SERVER_HOSTNAME=seafile.example.com
Note: JWT_PRIVATE_KEY is a random string of at least 32 characters; generate one with, for example: pwgen -s 40 1
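If pwgen is not installed, the key can be generated with openssl instead; a minimal sketch that writes a conf/.env like the one above (the SEAFILE_SERVER_* values are placeholders to replace with your own):

```shell
# Sketch: generate a 40-character JWT key and write conf/.env.
# The protocol/hostname values are placeholders, not defaults.
mkdir -p conf
JWT_PRIVATE_KEY=$(openssl rand -hex 20)   # 20 random bytes -> 40 hex chars
cat > conf/.env <<EOF
JWT_PRIVATE_KEY=${JWT_PRIVATE_KEY}
SEAFILE_SERVER_PROTOCOL=https
SEAFILE_SERVER_HOSTNAME=seafile.example.com
EOF
```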