diff --git a/13.0/search/search_index.json b/13.0/search/search_index.json index ec1c8cd5..f096a73a 100644 --- a/13.0/search/search_index.json +++ b/13.0/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"],"fields":{"title":{"boost":1000.0},"text":{"boost":1.0},"tags":{"boost":1000000.0}}},"docs":[{"location":"","title":"Introduction","text":"

Seafile is an open source cloud storage system for file sync, share and document collaboration. SeaDoc is an extension of Seafile that provides a lightweight online collaborative document feature.

"},{"location":"#license","title":"LICENSE","text":"

The different components of Seafile project are released under different licenses:

"},{"location":"#contact-information","title":"Contact information","text":""},{"location":"404/","title":"404 - Page Not Found or Remove Permanently","text":"

The requested file was not found. If you are still using https://manual.seafile.com/xxx/, please move to https://manual.seafile.com/latest/xxx/ as this path has been deprecated. We apologize for the inconvenience caused.

"},{"location":"changelog/","title":"Changelog","text":""},{"location":"changelog/#changelogs","title":"Changelogs","text":""},{"location":"administration/","title":"Administration","text":""},{"location":"administration/#enter-the-admin-panel","title":"Enter the admin panel","text":"

As the system admin, you can enter the admin panel by clicking System Admin in the avatar popup.

"},{"location":"administration/#account-management","title":"Account management","text":""},{"location":"administration/#logs","title":"Logs","text":""},{"location":"administration/#backup-and-recovery","title":"Backup and Recovery","text":"

Backup and recovery:

Recover corrupt files after server hard shutdown or system crash:

You can run Seafile GC to remove unused files:

"},{"location":"administration/#clean-database","title":"Clean database","text":""},{"location":"administration/#export-report","title":"Export report","text":""},{"location":"administration/account/","title":"Account Management","text":""},{"location":"administration/account/#user-management","title":"User Management","text":"

When you set up the Seahub website, you should have created an admin account. After you log in as the admin, you can add/delete users and file libraries.

"},{"location":"administration/account/#how-to-change-a-users-id","title":"How to change a user's ID","text":"

Since version 11.0, if you need to change a user's external ID, you can manually modify the database table social_auth_usersocialauth to map the new external ID to the internal ID.
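
For example, a hypothetical SQL sketch of such a change (the column names username for the internal ID and uid for the external ID are assumptions; verify them against your actual schema before running anything):

use seahub_db;\n-- column names below are assumptions; check the table structure first\nUPDATE social_auth_usersocialauth SET uid = 'new-external-id' WHERE username = 'internal-user-id';\n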

"},{"location":"administration/account/#resetting-user-password","title":"Resetting User Password","text":"

An administrator can reset a user's password on the \"System Admin\" page.

In a private server, the default settings don't allow users to reset their password by email. If you want to enable this, you must first set up notification email.
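
For reference, a typical notification email configuration sketch in seahub_settings.py looks like the following (host, port, and credentials are placeholders):

EMAIL_USE_TLS = True\nEMAIL_HOST = 'smtp.example.com'        # SMTP server (placeholder)\nEMAIL_HOST_USER = 'notify@example.com' # sending account (placeholder)\nEMAIL_HOST_PASSWORD = 'your-password'  # placeholder\nEMAIL_PORT = 587\nDEFAULT_FROM_EMAIL = EMAIL_HOST_USER\nSERVER_EMAIL = EMAIL_HOST_USER\n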

"},{"location":"administration/account/#forgot-admin-account-or-password","title":"Forgot Admin Account or Password?","text":"

You can run the reset-admin.sh script under the seafile-server-latest directory. This script helps you reset the admin account and password. No data will be deleted from the admin account; the script only unlocks the account and changes its password.
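
For example, assuming a standard installation path:

cd /opt/seafile/seafile-server-latest\n./reset-admin.sh   # follow the interactive prompts\n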

Tip

For Docker based deployments, enter the Seafile container first, then go to /opt/seafile/seafile-server-latest

"},{"location":"administration/account/#user-quota-notice","title":"User Quota Notice","text":"

Under the seafile-server-latest directory, run ./seahub.sh python-env python seahub/manage.py check_user_quota. When a user's quota exceeds 90%, an email will be sent. If you want to enable this, you must first set up notification email.
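
For example, a crontab entry could run the check once a day (the installation path and schedule are assumptions; adjust them to your deployment):

# run the quota check every day at 07:00\n0 7 * * * cd /opt/seafile/seafile-server-latest && ./seahub.sh python-env python seahub/manage.py check_user_quota\n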

"},{"location":"administration/auditing/","title":"Access log and auditing (Pro)","text":"

In the Pro Edition, Seafile offers four audit logs in the system admin panel:

The audit log data is saved in seahub_db.

"},{"location":"administration/backup_recovery/","title":"Backup and Recovery","text":""},{"location":"administration/backup_recovery/#overview","title":"Overview","text":"

There are generally two parts of data to back up:

There are 3 databases:

"},{"location":"administration/backup_recovery/#backup-steps","title":"Backup steps","text":"

The backup is a two step procedure:

  1. Backup the databases;
  2. Backup the seafile data directory;
"},{"location":"administration/backup_recovery/#backup-order-database-first-or-data-directory-first","title":"Backup Order: Database First or Data Directory First","text":"

The second sequence is better in the sense that it avoids library corruption. Like other backup solutions, some new data can be lost in recovery. There is always a backup window. However, if your storage backup mechanism can finish quickly enough, using the first sequence can retain more data.

We assume your Seafile data directory is in /opt/seafile for binary package based deployment (or /opt/seafile-data for Docker based deployment), and that you want to back up to the /backup directory. The /backup directory can be an NFS or Windows share exported by another machine, or just an external disk. You can create a layout similar to the following in the /backup directory:

/backup\n---- databases/  contains database backup files\n---- data/  contains backups of the data directory\n
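
For example, you can create this layout with:

mkdir -p /backup/databases /backup/data\n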
"},{"location":"administration/backup_recovery/#backup-and-restore-for-binary-package-based-deployment","title":"Backup and restore for binary package based deployment","text":""},{"location":"administration/backup_recovery/#backing-up-databases","title":"Backing up Databases","text":"

It's recommended to backup the database to a separate file each time. Don't overwrite older database backups for at least a week.

Assume your database names are ccnet_db, seafile_db and seahub_db. mysqldump automatically locks the tables so you don't need to stop Seafile server when backing up MySQL databases. Since the database tables are usually very small, it won't take long to dump.

mysqldump -h [mysqlhost] -u[username] -p[password] --opt ccnet_db > /backup/databases/ccnet_db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nmysqldump -h [mysqlhost] -u[username] -p[password] --opt seafile_db > /backup/databases/seafile_db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nmysqldump -h [mysqlhost] -u[username] -p[password] --opt seahub_db > /backup/databases/seahub_db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n

mysqldump: command not found

You may encounter this problem on machines with a minimal (from 10.5) or newer (from 11.0) MariaDB server installed, where the mysql* series of commands has been gradually deprecated. If you encounter this error, use the mariadb-dump command, for example:

mariadb-dump -h [mysqlhost] -u[username] -p[password] --opt ccnet_db > /backup/databases/ccnet_db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nmariadb-dump -h [mysqlhost] -u[username] -p[password] --opt seafile_db > /backup/databases/seafile_db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n\nmariadb-dump -h [mysqlhost] -u[username] -p[password] --opt seahub_db > /backup/databases/seahub_db.sql.`date +\"%Y-%m-%d-%H-%M-%S\"`\n
"},{"location":"administration/backup_recovery/#backing-up-seafile-library-data","title":"Backing up Seafile library data","text":"

The data files are all stored in the /opt/seafile directory, so just back up the whole directory. You can directly copy the whole directory to the backup destination, or you can use rsync to do incremental backup.

To directly copy the whole data directory,

cp -R /opt/seafile /backup/data/seafile-`date +\"%Y-%m-%d-%H-%M-%S\"`\n

This produces a separate copy of the data directory each time. You can delete older backup copies after a new one is completed.

If you have a lot of data, copying the whole data directory would take a long time. You can use rsync to do an incremental backup.

rsync -az /opt/seafile /backup/data\n

This command backs up the data directory to /backup/data/seafile.

"},{"location":"administration/backup_recovery/#restore-from-backup","title":"Restore from backup","text":"

Now suppose your primary Seafile server is broken and you're switching to a new machine. Use the backup data to restore your Seafile instance:

  1. Copy /backup/data/seafile to the new machine. Let's assume the Seafile deployment location on the new machine is also /opt/seafile.
  2. Restore the database.
  3. Since the database and data are backed up separately, they may become slightly inconsistent with each other. To correct the potential inconsistency, run the seaf-fsck tool to check data integrity on the new machine, as sketched below. See the seaf-fsck documentation for details.
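
A sketch of steps 1 and 3, assuming the paths used above:

cp -R /backup/data/seafile /opt/\ncd /opt/seafile/seafile-server-latest\n./seaf-fsck.sh\n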
"},{"location":"administration/backup_recovery/#restore-the-databases","title":"Restore the databases","text":"

Now with the latest valid database backup files at hand, you can restore them.

mysql -u[username] -p[password] ccnet_db < ccnet_db.sql.2013-10-19-16-00-05\nmysql -u[username] -p[password] seafile_db < seafile_db.sql.2013-10-19-16-00-20\nmysql -u[username] -p[password] seahub_db < seahub_db.sql.2013-10-19-16-01-05\n

mysql: command not found

You may encounter this problem on machines with a minimal (from 10.5) or newer (from 11.0) MariaDB server installed, where the mysql* series of commands has been gradually deprecated. If you encounter this error, use the mariadb command, for example:

mariadb -u[username] -p[password] ccnet_db < ccnet_db.sql.2013-10-19-16-00-05\nmariadb -u[username] -p[password] seafile_db < seafile_db.sql.2013-10-19-16-00-20\nmariadb -u[username] -p[password] seahub_db < seahub_db.sql.2013-10-19-16-01-05\n
"},{"location":"administration/backup_recovery/#backup-and-restore-for-docker-based-deployment","title":"Backup and restore for Docker based deployment","text":""},{"location":"administration/backup_recovery/#structure","title":"Structure","text":"

We assume your Seafile volumes path is /opt/seafile-data, and that you want to back up to the /backup directory.

The data files to be backed up:

/opt/seafile-data/seafile/conf  # configuration files\n/opt/seafile-data/seafile/seafile-data # data of seafile\n/opt/seafile-data/seafile/seahub-data # data of seahub\n
"},{"location":"administration/backup_recovery/#backing-up-database","title":"Backing up Database","text":"
# It's recommended to backup the database to a separate file each time. Don't overwrite older database backups for at least a week.\ncd /backup/databases\ndocker exec -it seafile-mysql mariadb-dump  -u[username] -p[password] --opt ccnet_db > ccnet_db.sql\ndocker exec -it seafile-mysql mariadb-dump  -u[username] -p[password] --opt seafile_db > seafile_db.sql\ndocker exec -it seafile-mysql mariadb-dump  -u[username] -p[password] --opt seahub_db > seahub_db.sql\n

Tip

Since Seafile 12, the default database image is MariaDB 10.11. You may not be able to find the mysql* commands in the container (e.g. mysqldump: command not found), since the mysql* series of commands has been gradually deprecated. We therefore recommend using the mariadb* series of commands.

However, if you still use the MySQL docker image, you should continue to use mysqldump here:

docker exec -it seafile-mysql mysqldump  -u[username] -p[password] --opt ccnet_db > ccnet_db.sql\ndocker exec -it seafile-mysql mysqldump  -u[username] -p[password] --opt seafile_db > seafile_db.sql\ndocker exec -it seafile-mysql mysqldump  -u[username] -p[password] --opt seahub_db > seahub_db.sql\n
"},{"location":"administration/backup_recovery/#backing-up-seafile-library-data_1","title":"Backing up Seafile library data","text":""},{"location":"administration/backup_recovery/#to-directly-copy-the-whole-data-directory","title":"To directly copy the whole data directory","text":"
cp -R /opt/seafile-data/seafile /backup/data/\n
"},{"location":"administration/backup_recovery/#use-rsync-to-do-incremental-backup","title":"Use rsync to do incremental backup","text":"
rsync -az /opt/seafile-data/seafile /backup/data/\n
"},{"location":"administration/backup_recovery/#recovery","title":"Recovery","text":""},{"location":"administration/backup_recovery/#restore-the-databases_1","title":"Restore the databases","text":"
docker cp /backup/databases/ccnet_db.sql seafile-mysql:/tmp/ccnet_db.sql\ndocker cp /backup/databases/seafile_db.sql seafile-mysql:/tmp/seafile_db.sql\ndocker cp /backup/databases/seahub_db.sql seafile-mysql:/tmp/seahub_db.sql\n\ndocker exec -it seafile-mysql /bin/sh -c \"mariadb -u[username] -p[password] ccnet_db < /tmp/ccnet_db.sql\"\ndocker exec -it seafile-mysql /bin/sh -c \"mariadb -u[username] -p[password] seafile_db < /tmp/seafile_db.sql\"\ndocker exec -it seafile-mysql /bin/sh -c \"mariadb -u[username] -p[password] seahub_db < /tmp/seahub_db.sql\"\n

Tip

Since Seafile 12, the default database image is MariaDB 10.11. You may not be able to find the mysql* commands in the container (e.g. mysql: command not found), since the mysql* series of commands has been gradually deprecated. We therefore recommend using the mariadb* series of commands.

However, if you still use the MySQL docker image, you should continue to use mysql here:

docker exec -it seafile-mysql /bin/sh -c \"mysql -u[username] -p[password] ccnet_db < /tmp/ccnet_db.sql\"\ndocker exec -it seafile-mysql /bin/sh -c \"mysql -u[username] -p[password] seafile_db < /tmp/seafile_db.sql\"\ndocker exec -it seafile-mysql /bin/sh -c \"mysql -u[username] -p[password] seahub_db < /tmp/seahub_db.sql\"\n
"},{"location":"administration/backup_recovery/#restore-the-seafile-data","title":"Restore the seafile data","text":"
# Recommended: use rsync to restore, preserving ownership/permissions/ACL/xattrs.\n# Run a dry-run first to review the changes.\n# Dry-run (no changes made)\nsudo rsync -aHAX --dry-run --itemize-changes /backup/data/seafile/ /opt/seafile-data/seafile/\n\n# Restore (apply changes)\nsudo rsync -aHAX /backup/data/seafile/ /opt/seafile-data/seafile/\n\n# Optional: make the target an exact mirror of the backup\n# (will delete files present in the target but not in the backup;\n# add only after reviewing the dry-run output)\n# sudo rsync -aHAX --delete /backup/data/seafile/ /opt/seafile-data/seafile/\n

Note

Trailing \u201c/\u201d on the source means \u201ccopy the directory CONTENTS\u201d.

Run with sudo to preserve owners, groups, ACLs (-A) and xattrs (-X).

"},{"location":"administration/clean_database/","title":"Clean Database","text":""},{"location":"administration/clean_database/#session","title":"Session","text":"

Use the following command to clear expired session records in Seahub database:

cd seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py clearsessions\n

Tip

For Docker based deployments, enter the Seafile container first, then go to /opt/seafile/seafile-server-latest

"},{"location":"administration/clean_database/#use-clean_db_records-command-to-clean-seahub_db","title":"Use clean_db_records command to clean seahub_db","text":"

Use the following command to clean up records older than 90 days from the tables Activity, sysadmin_extra_userloginlog, FileAudit, FileUpdate, FileHistory, PermAudit, and FileTrash in one go:

./seahub.sh python-env python3 seahub/manage.py clean_db_records\n

You can also clean these tables manually, as described below.

"},{"location":"administration/clean_database/#activity","title":"Activity","text":"

Use the following command to clear the activity records:

use seahub_db;\nDELETE FROM Activity WHERE to_days(now()) - to_days(timestamp) > 90;\nDELETE FROM UserActivity WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"administration/clean_database/#login","title":"Login","text":"

Use the following command to clean the login records:

use seahub_db;\nDELETE FROM sysadmin_extra_userloginlog WHERE to_days(now()) - to_days(login_date) > 90;\n
"},{"location":"administration/clean_database/#file-access","title":"File Access","text":"

Use the following command to clean the file access records:

use seahub_db;\nDELETE FROM FileAudit WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"administration/clean_database/#file-update","title":"File Update","text":"

Use the following command to clean the file update records:

use seahub_db;\nDELETE FROM FileUpdate WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"administration/clean_database/#permisson","title":"Permisson","text":"

Use the following command to clean the permission change audit records:

use seahub_db;\nDELETE FROM PermAudit WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"administration/clean_database/#file-history","title":"File History","text":"

Use the following command to clean the file history records:

use seahub_db;\nDELETE FROM FileHistory WHERE to_days(now()) - to_days(timestamp) > 90;\n
"},{"location":"administration/clean_database/#clean-outdated-library-data","title":"Clean outdated library data","text":"

Since version 6.2, we offer a command to clear outdated library records in the Seafile database, e.g. records that are left behind after a library is deleted. These records cannot be removed at deletion time because users can still restore a deleted library.

./seahub.sh python-env python3 seahub/manage.py clear_invalid_repo_data\n

This command has been improved in version 10.0, including:

  1. It clears the invalid data in small batches, avoiding consuming too many database resources in a short time.

  2. Dry-run mode: if you just want to see how much invalid data can be deleted without actually deleting any data, you can use the dry-run option, e.g.

./seahub.sh python-env python3 seahub/manage.py clear_invalid_repo_data --dry-run=true\n
"},{"location":"administration/clean_database/#clean-library-sync-tokens","title":"Clean library sync tokens","text":"

There are two tables in the Seafile database that are related to library sync tokens.

When you have many sync clients connected to the server, these two tables can contain a large number of rows, many of which are no longer actively used. You can clean up tokens that have not been used recently with the following SQL query:

delete t,i from RepoUserToken t, RepoTokenPeerInfo i where t.token=i.token and sync_time < xxxx;\n

xxxx is the UNIX timestamp for the time before which tokens will be deleted.
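
For example, to remove tokens not used in the last 90 days, you can let MariaDB/MySQL compute the cutoff timestamp for you:

delete t,i from RepoUserToken t, RepoTokenPeerInfo i where t.token=i.token and sync_time < UNIX_TIMESTAMP(NOW() - INTERVAL 90 DAY);\n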

To be safe, you can first check how many tokens will be removed:

select * from RepoUserToken t, RepoTokenPeerInfo i where t.token=i.token and sync_time < xxxx;\n
"},{"location":"administration/export_report/","title":"Export Report","text":"

Since Pro version 7.0.8, Seafile provides commands to export reports via the command line.

Tip

For Docker based deployments, enter the Seafile container first, then go to /opt/seafile/seafile-server-latest

"},{"location":"administration/export_report/#export-user-traffic-report","title":"Export User Traffic Report","text":"
cd seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py export_user_traffic_report --date 201906\n
"},{"location":"administration/export_report/#export-user-storage-report","title":"Export User Storage Report","text":"
cd seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py export_user_storage_report\n
"},{"location":"administration/export_report/#export-file-access-log","title":"Export File Access Log","text":"
cd seafile-server-latest\n./seahub.sh python-env python3 seahub/manage.py export_file_access_log --start-date 2019-06-01 --end-date 2019-07-01\n
"},{"location":"administration/logs/","title":"Seafile server logs","text":""},{"location":"administration/logs/#log-files-of-seafile-server","title":"Log files of seafile server","text":""},{"location":"administration/logs/#log-files-for-seafile-background-node-in-cluster-mode","title":"Log files for seafile background node in cluster mode","text":""},{"location":"administration/logs/#log-files-for-seadoc-server","title":"Log files for seadoc server","text":"

The logs for seadoc server are located in the /opt/seadoc-data/logs directory.

"},{"location":"administration/logs/#log-files-for-seasearch-server","title":"Log files for SeaSearch server","text":"

The logs for seasearch server are located in the /opt/seasearch-data/log directory.

"},{"location":"administration/logs/#log-files-for-nginx","title":"Log files for Nginx","text":"

The logs for Nginx are located in the /opt/seafile-data/seafile/logs directory.

"},{"location":"administration/seafile_fsck/","title":"Seafile FSCK","text":"

On the server side, Seafile stores the files in the libraries in an internal format. Seafile has its own representation of directories and files (similar to Git).

With a default installation, these internal objects are stored directly in the server's file system (such as Ext4 or NTFS). But most file systems don't assure the integrity of file contents after a hard shutdown or system crash. So if new Seafile internal objects are being written when the system crashes, they can be corrupted after the system reboots. This will make part of the corresponding library inaccessible.

Warning

If you store the seafile-data directory on a battery-backed NAS (like EMC or NetApp), or use the S3 backend available in the Pro edition, the internal objects won't be corrupted.

Note

If your Seafile server is deployed with Docker, make sure you have entered the container before executing the following commands:

docker exec -it seafile bash\n

This is also required for the other scripts in this document.

We provide a seaf-fsck.sh script to check the integrity of libraries. The seaf-fsck tool accepts the following arguments:

cd /opt/seafile/seafile-server-latest\n./seaf-fsck.sh [--repair|-r] [--export|-E export_path] [repo_id_1 [repo_id_2 ...]]\n

There are three modes of operation for seaf-fsck:

  1. checking integrity of libraries.
  2. repairing corrupted libraries.
  3. exporting libraries.
"},{"location":"administration/seafile_fsck/#checking-integrity-of-libraries","title":"Checking Integrity of Libraries","text":"

Running seaf-fsck.sh without any arguments will run a read-only integrity check for all libraries.

./seaf-fsck.sh\n

If you want to check integrity for specific libraries, just append the library IDs as arguments:

./seaf-fsck.sh [library-id1] [library-id2] ...\n

The output looks like:

[02/13/15 16:21:07] fsck.c(470): Running fsck for repo ca1a860d-e1c1-4a52-8123-0bf9def8697f.\n[02/13/15 16:21:07] fsck.c(413): Checking file system integrity of repo fsck(ca1a860d)...\n[02/13/15 16:21:07] fsck.c(35): Dir 9c09d937397b51e1283d68ee7590cd9ce01fe4c9 is missing.\n[02/13/15 16:21:07] fsck.c(200): Dir /bf/pk/(9c09d937) is corrupted.\n[02/13/15 16:21:07] fsck.c(105): Block 36e3dd8757edeb97758b3b4d8530a4a8a045d3cb is corrupted.\n[02/13/15 16:21:07] fsck.c(178): File /bf/02.1.md(ef37e350) is corrupted.\n[02/13/15 16:21:07] fsck.c(85): Block 650fb22495b0b199cff0f1e1ebf036e548fcb95a is missing.\n[02/13/15 16:21:07] fsck.c(178): File /01.2.md(4a73621f) is corrupted.\n[02/13/15 16:21:07] fsck.c(514): Fsck finished for repo ca1a860d.\n

The corrupted files and directories are reported in the above message. By the way, you may also see output like the following:

[02/13/15 16:36:11] Commit 6259251e2b0dd9a8e99925ae6199cbf4c134ec10 is missing\n[02/13/15 16:36:11] fsck.c(476): Repo ca1a860d HEAD commit is corrupted, need to restore to an old version.\n[02/13/15 16:36:11] fsck.c(314): Scanning available commits...\n[02/13/15 16:36:11] fsck.c(376): Find available commit 1b26b13c(created at 2015-02-13 16:10:21) for repo ca1a860d.\n

This means the head commit (the current state of the library) recorded in the database is not consistent with the library data. In such a case, fsck will try to find the last consistent state and check the integrity in that state.

Tip

If you have many libraries, it's helpful to save the fsck output into a log file for later analysis.

"},{"location":"administration/seafile_fsck/#repairing-corruption","title":"Repairing Corruption","text":"

Corruption repair in seaf-fsck basically works in two steps:

  1. If the library state (commit) recorded in the database is not found in the data directory, find the last available state from the data directory.
  2. Check data integrity in that specific state. If files or directories are corrupted, set them to empty files or empty directories. The corrupted paths will be reported, so that the user can recover them from somewhere else.

Running the following command repairs all the libraries:

./seaf-fsck.sh --repair\n

Most of the time, you run the read-only integrity check first to find out which libraries are corrupted, and then repair specific libraries with the following command:

./seaf-fsck.sh --repair [library-id1] [library-id2] ...\n

After repairing, seaf-fsck records the list of corrupted files and folders in the library history, so it's much easier to locate the corrupted paths.

"},{"location":"administration/seafile_fsck/#best-practice-for-repairing-a-library","title":"Best Practice for Repairing a Library","text":"

To check all libraries and find out which library is corrupted, the system admin can run seaf-fsck.sh without any argument and save the output to a log file. Search for keyword \"Fail\" in the log file to locate corrupted libraries. You can run seaf-fsck to check all libraries when your Seafile server is running. It won't damage or change any files.

When the system admin finds a corrupted library, they should run seaf-fsck.sh with \"--repair\" for that library. After the command fixes the library, the admin should inform users to recover files from other places. There are two ways:

"},{"location":"administration/seafile_fsck/#speeding-up-fsck-by-not-checking-file-contents","title":"Speeding up FSCK by not checking file contents","text":"

Starting from Pro edition 7.1.5, an option was added to speed up FSCK. Most of the running time of seaf-fsck is spent on calculating hashes of file contents. This hash is compared with the block object ID; if they're not consistent, the block is detected as corrupted.

In many cases the file contents themselves are not corrupted; some objects are simply missing from the system. So it's enough to only check for object existence, which greatly speeds up the fsck process.

To skip checking file contents, add the --shallow or -s option to seaf-fsck.
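
For example:

./seaf-fsck.sh --shallow [library-id1] [library-id2] ...\n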

"},{"location":"administration/seafile_fsck/#exporting-libraries-to-file-system","title":"Exporting Libraries to File System","text":"

You can use seaf-fsck to export all the files in libraries to an external file system (such as Ext4). This procedure doesn't rely on the Seafile database. As long as you have your seafile-data directory, you can always export your files from Seafile to an external file system. The command for this operation is:

./seaf-fsck.sh --export top_export_path [library-id1] [library-id2] ...\n

The argument top_export_path is a directory to place the exported files. Each library will be exported as a sub-directory of the export path. If you don't specify library ids, all libraries will be exported.

Note

Currently only unencrypted libraries can be exported. Encrypted libraries will be skipped.

"},{"location":"administration/seafile_fsck/#checking-file-size","title":"Checking file size","text":"

Starting from version 13.0.9, fsck has an option to check whether the recorded file size matches the actual file content. Some problematic clients may upload incorrect blocks, causing the actual file size to not match the file content. With this option, you can detect files with size mismatches, along with the method and time of their upload.

To check whether the file size matches, add the --check-file-size or -S option to seaf-fsck.
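
For example:

./seaf-fsck.sh --check-file-size [library-id1] [library-id2] ...\n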

"},{"location":"administration/seafile_gc/","title":"Seafile GC","text":"

Seafile uses storage de-duplication technology to reduce storage usage. The underlying data blocks are not removed immediately after you delete a file or a library. As a result, the number of unused data blocks will increase on the Seafile server.

To release the storage space occupied by unused blocks, you have to run a \"garbage collection\" program to clean up unused blocks on your server.

The GC program cleans up two types of unused blocks:

  1. Blocks that no library references, that is, blocks belonging to deleted libraries;
  2. If you set a history length limit on some libraries, the outdated blocks in those libraries will also be removed.
"},{"location":"administration/seafile_gc/#run-gc","title":"Run GC","text":"

Note

If your Seafile server is deployed with Docker, make sure you have entered the container before executing the script:

docker exec -it seafile bash\n

All scripts in this document are located in /opt/seafile/seafile-server-latest:

cd /opt/seafile/seafile-server-latest # valid for both Docker-based and binary-package-based Seafile\n

This is also required for the other scripts in this document.

"},{"location":"administration/seafile_gc/#dry-run-mode","title":"Dry-run Mode","text":"

To see how much garbage can be collected without actually removing any garbage, use the dry-run option:

./seaf-gc.sh --dry-run [repo-id1] [repo-id2] ...\n

The output should look like:

[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo My Library(ffa57d93)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 265.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo ffa57d93.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 5 commits, 265 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 265 blocks total, about 265 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo aa(f3d0a8d0)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 5.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo f3d0a8d0.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 8 commits, 5 blocks.\n[03/19/15 19:41:49] gc-core.c(264): Populating index for sub-repo 9217622a.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 4 commits, 4 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 5 blocks total, about 9 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:49] seafserv-gc.c(115): GC version 1 repo test2(e7d26d93)\n[03/19/15 19:41:49] gc-core.c(394): GC started. Total block number is 507.\n[03/19/15 19:41:49] gc-core.c(75): GC index size is 1024 Byte.\n[03/19/15 19:41:49] gc-core.c(408): Populating index.\n[03/19/15 19:41:49] gc-core.c(262): Populating index for repo e7d26d93.\n[03/19/15 19:41:49] gc-core.c(308): Traversed 577 commits, 507 blocks.\n[03/19/15 19:41:49] gc-core.c(440): Scanning unused blocks.\n[03/19/15 19:41:49] gc-core.c(472): GC finished. 507 blocks total, about 507 reachable blocks, 0 blocks can be removed.\n\n[03/19/15 19:41:50] seafserv-gc.c(124): === Repos deleted by users ===\n[03/19/15 19:41:50] seafserv-gc.c(145): === GC is finished ===\n\n[03/19/15 19:41:50] Following repos have blocks to be removed:\nrepo-id1\nrepo-id2\nrepo-id3\n

If you give specific library ids, only those libraries will be checked; otherwise all libraries will be checked.

repos have blocks to be removed

Notice that at the end of the output there is a \"repos have blocks to be removed\" section. It contains the list of libraries that have garbage blocks. Later, when you run GC without the --dry-run option, you can use these library IDs as input arguments to the GC program.

"},{"location":"administration/seafile_gc/#removing-garbage","title":"Removing Garbage","text":"

To actually remove garbage blocks, run without the --dry-run option:

./seaf-gc.sh [repo-id1] [repo-id2] ...\n

If library IDs are specified, only those libraries will be checked for garbage.

As described before, there are two types of garbage blocks to be removed. Sometimes just removing the first type of blocks (those that belong to deleted libraries) is good enough. In this case, the GC program won't bother to check the libraries for outdated historic blocks. The \"-r\" option implements this feature:

./seaf-gc.sh -r\n

Success

Libraries deleted by the users are not immediately removed from the system. Instead, they're moved into a \"trash\" in the system admin page. Before they're cleared from the trash, their blocks won't be garbage collected.

"},{"location":"administration/seafile_gc/#removing-fs-objects","title":"Removing FS objects","text":"

Since Pro server 8.0.6 and community edition 9.0, you can remove garbage fs objects. It should be run without the --dry-run option:

./seaf-gc.sh --rm-fs\n

Bug reports

This command has a bug before Pro Edition 10.0.15 and Community Edition 11.0.7. It could cause virtual libraries (e.g. shared folders) to fail to merge into their parent libraries. Please avoid using this option in the affected versions, and contact our support team if you are affected by this bug.

"},{"location":"administration/seafile_gc/#using-multiple-threads-in-gc","title":"Using Multiple Threads in GC","text":"

You can specify the thread number in GC. By default,

You can specify the thread number with the \"-t\" option, which can be used together with all other options. Each thread runs GC on one library. For example, the following command will use 20 threads to GC all libraries:

./seaf-gc.sh -t 20\n

Since the threads run concurrently, the output of each thread may mix with the others. The library ID is printed in each line of output to help distinguish them.

"},{"location":"administration/seafile_gc/#run-gc-based-on-library-id-prefix","title":"Run GC based on library ID prefix","text":"

GC usually runs quite slowly, as it needs to traverse the entire library history. You can use multiple threads to run GC in parallel. For even larger deployments, it's also desirable to run GC on multiple servers in parallel.

A simple pattern to divide the workload among multiple GC servers is to assign libraries to servers based on library ID. This is supported since Pro edition 7.1.5. You can add the \"--id-prefix\" option to seaf-gc.sh to specify the library ID prefix. For example, the command below will only process libraries whose ID starts with \"a123\".

./seaf-gc.sh --id-prefix a123\n
"},{"location":"administration/seafile_metrics/","title":"Monitor Seafile with Prometheus","text":"

Seafile provides a standardized interface to expose system operational metrics, enabling integration with Prometheus and Grafana. This allows administrators to monitor Seafile service status in real time, including (but not limited to) I/O queue length and background task latency.

"},{"location":"administration/seafile_metrics/#configuration-steps","title":"Configuration Steps","text":"

To enable metric monitoring for Seafile, follow these steps:

"},{"location":"administration/seafile_metrics/#1-enable-metric-exposure","title":"1. Enable Metric Exposure","text":"

Edit the Seafile configuration file seahub_settings.py (located in the Seafile configuration directory) and add the following configuration items. If the items already exist, update their values accordingly:

# Enable the metric exposure function (set to True to activate)\nENABLE_METRIC = True\n\n# Authentication username\n# Used for HTTP Basic Authentication when accessing Seafile's metric endpoint\nMETRIC_AUTH_USER = \"your_prometheus_username\"\n\n# Authentication password corresponding to the above username\nMETRIC_AUTH_PWD = \"your_prometheus_password\"\n

Note

Replace your_prometheus_username and your_prometheus_password with custom credentials (we recommend using strong, unique passwords for security).

"},{"location":"administration/seafile_metrics/#2-configure-prometheus","title":"2. Configure Prometheus","text":"

After completing the above Seafile configuration, Prometheus can retrieve Seafile metrics via the /metrics endpoint. Key requirements for such configuration:
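
As an illustration, a minimal prometheus.yml scrape configuration could look like the following (the job name, target host, scheme, and credentials are placeholders; adjust them to your deployment):

scrape_configs:\n  - job_name: 'seafile'\n    metrics_path: /metrics\n    scheme: https\n    basic_auth:\n      username: 'your_prometheus_username'\n      password: 'your_prometheus_password'\n    static_configs:\n      - targets: ['seafile.example.com']\n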

For detailed configuration guides of monitoring tools, refer to the official documentation below:

"},{"location":"administration/seafile_metrics/#effect-description","title":"Effect Description","text":"

Once the configuration is complete:

  1. Prometheus will periodically scrape Seafile metrics from the /metrics endpoint (based on the configured scrape interval).
  2. You can create custom visual dashboards in Grafana (e.g., \"Seafile Monitoring Dashboard\" ) to visualize metrics in real time.
  3. Alerts can be set up in Grafana (e.g., trigger an alert when Seafile storage usage exceeds 90%) to proactively monitor system health.
"},{"location":"administration/security_features/","title":"Security Questions","text":""},{"location":"administration/security_features/#how-is-the-connection-between-client-and-server-encrypted","title":"How is the connection between client and server encrypted?","text":"

Seafile uses HTTP(S) to sync files between client and server (since version 4.1.0).

"},{"location":"administration/security_features/#encrypted-library","title":"Encrypted Library","text":"

Seafile provides a feature called encrypted library to protect your privacy. The file encryption/decryption is performed on client-side when using the desktop client for file synchronization. The password of an encrypted library is not stored on the server. Even the system admin of the server can't view the file contents.

There are a few limitations of this feature:

  1. File metadata is NOT encrypted. The metadata includes: the complete list of directory and file names, every file's size, the history of editors, when edits occurred, and which byte ranges were altered.
  2. The client side encryption does currently NOT work while using the web browser and the cloud file explorer of the desktop client. When you are browsing encrypted libraries via the web browser or the cloud file explorer, you need to input the password and the server is going to use the password to decrypt the \"file key\" for the library (see description below) and cache the password in memory for one hour. The plain text password is never stored or cached on the server.
  3. If you create an encrypted library on the web interface, the library password and encryption keys will pass through the server. If you want end-to-end protection, you should create encrypted libraries from desktop client only.
  4. For encryption protocol version 4, each library uses its own salt to derive key/iv pairs. However, all files within a library share the same salt. Likewise, all the files within a library are encrypted with the same key/iv pair. With encryption protocol version 2, all libraries use the same salt, but separate key/iv pairs.
  5. Encrypted library doesn't ensure file integrity. For example, the server admin can still partially change the contents of files in an encrypted library. The client is not able to detect such changes to contents.

The client side encryption has worked on the iOS client since version 2.1.6 and on the Android client since version 2.1.0. But since version 3.0.0, the iOS and Android clients dropped support for client side encryption: you need to send the password to the server to encrypt/decrypt files.

"},{"location":"administration/security_features/#how-does-an-encrypted-library-work","title":"How does an encrypted library work?","text":"

When you create an encrypted library, you'll need to provide a password for it. All the data in that library will be encrypted with the password before uploading it to the server (see limitations above).

"},{"location":"administration/security_features/#encryptiondecryption-procedure","title":"Encryption/Decryption procedure","text":"

There are currently two supported encryption protocol versions for encrypted libraries, version 2 and version 4. The two versions share the same basic procedure, so we describe the procedure first.

  1. Generate a 32-byte long cryptographically strong random number. This will be used as the file encryption key (\"file key\").
  2. Encrypt the file key with the user provided password. We first use a secure hash algorithm to derive a key/iv pair from the password, then use AES 256/CBC to encrypt the file key. The result is called the \"encrypted file key\". This encrypted file key will be sent to and stored on the server. When you need to access the data, you can decrypt the file key from the encrypted file key.
  3. A \"magic token\" is derived from the password and library id, with the same secure hash algorithm. This token is stored with the library and will be use to check passwords before decrypting data later.
  4. All file data is encrypted by the file key with AES 256/CBC. We use PBKDF2-SHA256 with 1000 iterations to derive key/iv pair from the file key. After encryption, the data is uploaded to the server.

The only difference between version 2 and version 4 is the usage of salt for the secure hash algorithm. In version 2, all libraries share the same fixed salt. In version 4, each library uses a separate, randomly generated salt.

"},{"location":"administration/security_features/#secure-hash-algorithms-for-password-verification","title":"Secure hash algorithms for password verification","text":"

A secure hash algorithm is used to derive key/iv pair for encrypting the file key. So it's critical to choose a relatively costly algorithm to prevent brute-force guessing for the password.

Before version 12, a fixed secure hash algorithm (PBKDF2-SHA256 with 1000 iterations) was used, which is far from secure by today's standards.

Since Seafile server version 12, we allow the admin to choose proper secure hash algorithms. Currently two hash algorithms are supported.

"},{"location":"administration/security_features/#client-side-encryption-and-decryption","title":"Client-side encryption and decryption","text":"

The above encryption procedure can be executed on the desktop and the mobile client. The Seahub browser client uses a different encryption procedure that happens at the server; because of this, your password will be transferred to the server.

When you sync an encrypted library to the desktop, the client needs to verify your password. When you create the library, a \"magic token\" is derived from the password and library ID. This token is stored with the library on the server side. The client uses this token to check whether your password is correct before you sync the library. The magic token is generated by the secure hash algorithm chosen when the library was created.

For maximum security, the plain-text password won't be saved on the client side either. The client only saves the key/iv pair derived from the \"file key\", which is used to decrypt the data. So if you forget the password, you won't be able to recover it or access your data on the server.

"},{"location":"administration/security_features/#why-fileserver-delivers-every-content-to-everybody-knowing-the-content-url-of-an-unshared-private-file","title":"Why fileserver delivers every content to everybody knowing the content URL of an unshared private file?","text":"

When a file download link is clicked, a random URL is generated for the user to access the file from fileserver. This URL can only be accessed once; after that, all access to it is denied. So even if someone else happens to learn the URL, they can't access it anymore.

This was changed in Seafile server version 12. Instead of a random URL, a URL like 'https://yourserver.com/seafhttp/repos/{library id}/file_path' is used for downloading the file. Authorization will be done by checking cookies or API tokens on the server side. This makes the URL more cache-friendly while still being secure.

"},{"location":"administration/security_features/#how-does-seafile-store-user-login-password","title":"How does Seafile store user login password?","text":"

User login passwords are stored in hashed form only. Note that the user login password is different from the passwords used in encrypted libraries. In the database, its format is

PBKDF2SHA256$iterations$salt$hash\n

The record is divided into 4 parts by the $ sign.

To calculate the hash:
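
A minimal Python sketch of how such a record could be verified (the hex encoding of the derived hash is an assumption here; check the actual hasher implementation for the exact encoding):

import hashlib, hmac\n\ndef verify_login_password(password, record):\n    # record format: PBKDF2SHA256$iterations$salt$hash (see above)\n    algo, iterations, salt, stored = record.split('$')\n    derived = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'), salt.encode('utf-8'), int(iterations))\n    # hex encoding of the derived key is an assumption\n    return hmac.compare_digest(derived.hex(), stored)\n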

"},{"location":"administration/two_factor_authentication/","title":"Two-Factor Authentication","text":"

Starting from version 6.0, we added Two-Factor Authentication to enhance account security.

There are two ways to enable this feature:

After that, there will be a \"Two-Factor Authentication\" section in the user profile page.

Users can use the Google Authenticator app on their smartphone to scan the QR code.

"},{"location":"changelog/changelog-for-seafile-professional-server-old/","title":"Seafile Professional Server Changelog (old)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#71","title":"7.1","text":"

Upgrade

Please check our document for how to upgrade to 7.1.

"},{"location":"changelog/changelog-for-seafile-professional-server-old/#7122-20210729","title":"7.1.22 (2021/07/29)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#7121-20210713","title":"7.1.21 (2021/07/13)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#7120-20210702","title":"7.1.20 (2021/07/02)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#7119-20210604","title":"7.1.19 (2021/06/04)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#7118-20210513","title":"7.1.18 (2021/05/13)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#7117-20210426","title":"7.1.17 (2021/04/26)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#7116-20210419","title":"7.1.16 (2021/04/19)","text":"

Potential breaking change in Seafile Pro 7.1.16: You can set the maximum number of files contained in a library that can be synced by the Seafile client. The default is 100000. When you download a repo, the Seafile client will request the fs id list, and you can control the timeout period of this request through the fs_id_list_request_timeout option, which defaults to 5 minutes. These two options were added to prevent long fs-id-list requests from overloading the server. If you have large libraries on the server, this can cause an \"internal server error\" to be returned to the client, so you have to set a large enough limit for these two options.

[fileserver]\nmax_sync_file_count = 100000\nfs_id_list_request_timeout = 300\n
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#7115-20210318","title":"7.1.15 (2021/03/18)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#7114-20210226","title":"7.1.14 (2021/02/26)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#7113-20210208","title":"7.1.13 (2021/02/08)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#7112-20210203","title":"7.1.12 (2021/02/03)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#7111-20210128","title":"7.1.11 (2021/01/28)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#7110-20200111","title":"7.1.10 (2020/01/11)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#719-20201202","title":"7.1.9 (2020/12/02)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#718-20201012","title":"7.1.8 (2020/10/12)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#717-20200828","title":"7.1.7 (2020/08/28)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#716-20200728","title":"7.1.6 (2020/07/28)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#715-20200630","title":"7.1.5 (2020/06/30)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#714-20200514","title":"7.1.4 (2020/05/14)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#713-20200408","title":"7.1.3 (2020/04/08)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#711-beta-20200227","title":"7.1.1 Beta (2020/02/27)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#710-beta-20200219","title":"7.1.0 Beta (2020/02/19)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#70","title":"7.0","text":"

Since seafile-pro 7.0.0, we have upgraded Elasticsearch to 5.6. As Elasticsearch 5.6 relies on the Java 8 environment and can't run as root, you need to run Seafile with a non-root user and upgrade the Java version.

"},{"location":"changelog/changelog-for-seafile-professional-server-old/#7019-20200907","title":"7.0.19 (2020/09/07)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#7018-20200521","title":"7.0.18 (2020/05/21)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#7017-20200428","title":"7.0.17 (2020/04/28)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#7016-20200401","title":"7.0.16 (2020/04/01)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#7015-deprecated","title":"7.0.15 (Deprecated)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#7014-20200306","title":"7.0.14 (2020/03/06)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#7013-20200116","title":"7.0.13 (2020/01/16)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#7012-20200110","title":"7.0.12 (2020/01/10)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#7011-20191115","title":"7.0.11 (2019/11/15)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#7010-20191022","title":"7.0.10 (2019/10/22)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#709-20190920","title":"7.0.9 (2019/09/20)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#708-20190826","title":"7.0.8 (2019/08/26)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#707-20190729","title":"7.0.7 (2019/07/29)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#706-20190722","title":"7.0.6 (2019/07/22)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#705-20190716","title":"7.0.5 (2019/07/16)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#704-20190705","title":"7.0.4 (2019/07/05)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#703-20190613","title":"7.0.3 (2019/06/13)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#702-beta-20190517","title":"7.0.2 beta (2019/05/17)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#701-beta-20190418","title":"7.0.1 beta (2019/04/18)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#63","title":"6.3","text":"

In version 6.3, Django is upgraded to version 1.11. Django 1.8, which was used in version 6.2, was deprecated in April 2018.

With this upgrade, the fast-cgi mode is no longer supported. You need to configure Seafile behind Nginx/Apache in WSGI mode.

The way to run Seahub on another port has also changed. You need to modify the configuration file conf/gunicorn.conf instead of running ./seahub.sh start <another-port>.
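
For example, to run Seahub on port 8001, change the bind setting in conf/gunicorn.conf (the address shown is illustrative):

bind = \"127.0.0.1:8001\"\n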

Version 6.3 also changed the database table for file comments. If you have used this feature, you need to migrate old file comments using the following command after upgrading to 6.3:

./seahub.sh python-env seahub/manage.py migrate_file_comment\n

Note that this command should be run while the Seafile server is running.

Version 6.3 changed '/shib-login' to '/sso'. If you use Shibboleth, you need to update your Apache/Nginx config. Please check the updated document: shibboleth config v6.3

Version 6.3 adds a new option for file search (seafevents.conf):

[INDEX FILES]\n...\nhighlight = fvh\n...\n

This option improves search speed significantly (10x) when the search results contain large pdf/doc files. But you need to rebuild the search index if you want to enable this option.

"},{"location":"changelog/changelog-for-seafile-professional-server-old/#6314-20190521","title":"6.3.14 (2019/05/21)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#6313-20190320","title":"6.3.13 (2019/03/20)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#6312-20190221","title":"6.3.12 (2019/02/21)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#6311-20190115","title":"6.3.11 (2019/01/15)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#6310-20190102","title":"6.3.10 (2019/01/02)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#639-20181213","title":"6.3.9 (2018/12/13)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#638-20181210","title":"6.3.8 (2018/12/10)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#637-20181016","title":"6.3.7 (2018/10/16)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#636-20180921","title":"6.3.6 (2018/09/21)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#635-20180918","title":"6.3.5 (2018/09/18)","text":"

New features

"},{"location":"changelog/changelog-for-seafile-professional-server-old/#634-20180816","title":"6.3.4 (2018/08/16)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#633-20180815","title":"6.3.3 (2018/08/15)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#632-20180730","title":"6.3.2 (2018/07/30)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#631-20180725","title":"6.3.1 (2018/07/25)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#630-beta-20180628","title":"6.3.0 Beta (2018/06/28)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#62","title":"6.2","text":"

From 6.2, it is recommended to use WSGI (proxy) mode for communication between Seahub and Nginx/Apache. Two steps are needed if you'd like to switch to WSGI mode:

  1. Change the config file of Nginx/Apache.
  2. Restart Seahub with ./seahub.sh start instead of ./seahub.sh start-fastcgi

The configuration of Nginx is as following:

location / {\n         proxy_pass         http://127.0.0.1:8000;\n         proxy_set_header   Host $host;\n         proxy_set_header   X-Real-IP $remote_addr;\n         proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;\n         proxy_set_header   X-Forwarded-Host $server_name;\n         proxy_read_timeout  1200s;\n\n         # used for view/edit office file via Office Online Server\n         client_max_body_size 0;\n\n         access_log      /var/log/nginx/seahub.access.log;\n         error_log       /var/log/nginx/seahub.error.log;\n    }\n

The configuration of Apache is as following:

    # seahub\n    SetEnvIf Authorization \"(.*)\" HTTP_AUTHORIZATION=$1\n    ProxyPass / http://127.0.0.1:8000/\n    ProxyPassReverse / http://127.0.0.1:8000/\n
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#6213-2018518","title":"6.2.13 (2018.5.18)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#6212-2018420","title":"6.2.12 (2018.4.20)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#6211-2018419","title":"6.2.11 (2018.4.19)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#6210-2018320","title":"6.2.10 (2018.3.20)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#629-20180210","title":"6.2.9 (2018.02.10)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#628-20180202","title":"6.2.8 (2018.02.02)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#627-20180122","title":"6.2.7 (2018.01.22)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#625-626-deprecated","title":"6.2.5, 6.2.6 (deprecated)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#624-20171220","title":"6.2.4 (2017.12.20)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#623-20171219","title":"6.2.3 (2017.12.19)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#622-20171212","title":"6.2.2 (2017.12.12)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#621-beta-20171122","title":"6.2.1 beta (2017.11.22)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#620-beta-20171016","title":"6.2.0 beta (2017.10.16)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#61","title":"6.1","text":"

You can follow the document on minor upgrade.

"},{"location":"changelog/changelog-for-seafile-professional-server-old/#619-20170928","title":"6.1.9 \uff082017.09.28\uff09","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#618-20170818","title":"6.1.8 (2017.08.18)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#617-20170817","title":"6.1.7 (2017.08.17)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#614-20170711","title":"6.1.4 (2017.07.11)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#613-20170706","title":"6.1.3 (2017.07.06)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#612-deprecated","title":"6.1.2 (deprecated)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#611-20170619","title":"6.1.1 (2017.06.19)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#610-beta-20170606","title":"6.1.0 beta (2017.06.06)","text":"

Web UI Improvement:

  1. Add thumbnail for video files (turn off by default)
  2. Improved image file view, using thumbnail to view pictures
  3. Move items by drag & drop
  4. Add create docx/xlsx/pptx in web interface
  5. Add OnlyOffice integration
  6. Show which client modified a file in history; this helps to find which client accidentally modified or deleted a file.

Improvement for admins:

  1. Admin can set default quota for each role
  2. Admin can set users' quota and delete users in bulk in the admin panel
  3. Support using admin panel in mobile platform
  4. Add translation for settings page
  5. Add admin operation logs
  6. Admin can change users' login_id in web interface
  7. Admin can create libraries in admin panel
  8. Admin can set logo and favicon in admin panel

System changes:

  1. Remove wiki by default (to turn it on, set ENABLE_WIKI = True in seahub_settings.py)
  2. Upgrade Django to 1.8.18
  3. Clean Ajax API
  4. Increase share link token length to 20 characters
  5. Upgrade jstree to latest version
  6. Update ElasticSearch to 2.4.5
"},{"location":"changelog/changelog-for-seafile-professional-server-old/#60","title":"6.0","text":"

You can follow the document on minor upgrade.

Special note for upgrading a cluster:

In version 6.0, the folder download mechanism has been updated. This requires that, in a cluster deployment, the seafile-data/httptemp folder must be on an NFS share. You can make this folder a symlink to the NFS share.

cd /data/haiwen/\nln -s /nfs-share/seafile-httptemp seafile-data/httptemp\n

The httptemp folder only contains temporary files for downloading/uploading files via the web UI. So there is no reliability requirement for the NFS share. You can export it from any node in the cluster.

"},{"location":"changelog/changelog-for-seafile-professional-server-old/#6013-20170508","title":"6.0.13 (2017.05.08)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#6012-20170417","title":"6.0.12 (2017.04.17)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#6011-deprecated","title":"6.0.11 (Deprecated)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#6010-20170407","title":"6.0.10 (2017.04.07)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#609-20170401","title":"6.0.9 (2017.04.01)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#608-20170223","title":"6.0.8 (2017.02.23)","text":"

Improvement for admin

Other

"},{"location":"changelog/changelog-for-seafile-professional-server-old/#607-20170118","title":"6.0.7 (2017.01.18)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#606-20170111","title":"6.0.6 (2017.01.11)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#605-20161219","title":"6.0.5 (2016.12.19)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#604-20161129","title":"6.0.4 (2016.11.29)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#603-20161117","title":"6.0.3 (2016.11.17)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#602-20161020","title":"6.0.2 (2016.10.20)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#601-beta","title":"6.0.1 beta","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#600-beta","title":"6.0.0 beta","text":"

Pro only features

"},{"location":"changelog/changelog-for-seafile-professional-server-old/#44","title":"4.4","text":"

Note: Two new options are added in version 4.4, both are in seahub_settings.py

This version contains no database table change.

"},{"location":"changelog/changelog-for-seafile-professional-server-old/#449-20160229","title":"4.4.9 (2016.02.29)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#448-20151217","title":"4.4.8 (2015.12.17)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#447-20151120","title":"4.4.7 (2015.11.20)","text":""},{"location":"changelog/changelog-for-seafile-professional-server-old/#446-20151109","title":"4.4.6 (2015.11.09)","text":"