opt: some tabs contents

This commit is contained in:
Junxiang Huang 2024-10-29 18:23:56 +08:00
parent a5364ad34c
commit 87ee7d6f16
7 changed files with 283 additions and 289 deletions


@ -47,13 +47,13 @@ OAUTH_ATTRIBUTE_MAP = {
}
```
!!! tip "More explanations about the settings"
- **OAUTH_PROVIDER / OAUTH_PROVIDER_DOMAIN**
`OAUTH_PROVIDER_DOMAIN` will be deprecated; it can be replaced by `OAUTH_PROVIDER`. This variable is used in the database to identify the third-party provider, and can be either a domain or an easy-to-remember string of fewer than 32 characters.
- **OAUTH_ATTRIBUTE_MAP**
This variable describes which claims from the response of the user info endpoint are to be filled into which attributes of the new Seafile user. The format is shown below:
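For illustration, a mapping with this tuple format (consistent with the Azure example later on this page) might look like the following in `seahub_settings.py`; the claim names on the left are provider-dependent placeholders:

```python
# Hypothetical example for seahub_settings.py: map claims from the provider's
# user info response (left) to Seafile user attributes (right).
# The first tuple element marks whether the claim is required.
OAUTH_ATTRIBUTE_MAP = {
    "id": (True, "email"),              # required claim; fills the account identifier
    "name": (False, "name"),            # optional
    "email": (False, "contact_email"),  # optional
}
```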
@ -134,7 +134,8 @@ OAUTH_ATTRIBUTE_MAP = {
```
=== "Github"
!!! note
For GitHub, `email` is not the unique identifier for a user, but `id` is in most cases, so we use `id` in the settings example in our manual. As Seafile currently uses an email address to identify a unique user account, we combine `id` and `OAUTH_PROVIDER_DOMAIN` (github.com in this case) into an email-format string and create this account if it does not exist.
```python
ENABLE_OAUTH = True
@ -157,19 +158,20 @@ OAUTH_ATTRIBUTE_MAP = {
```
=== "GitLab"
!!! note
To enable OAuth via GitLab, create an application in GitLab (under Admin area -> Applications).
Fill in the required fields:
- Name: a name you specify
- Redirect URI: the callback URL; see `OAUTH_REDIRECT_URL` below
- Trusted: skip the confirmation dialog page. Select this to *not* ask users whether they want to authorize Seafile to access their account data.
- Scopes: Select `openid` and `read_user` in the scopes list.
Press submit, then copy the client id and secret you receive on the confirmation page and use them in this template for your seahub_settings.py:
```python
ENABLE_OAUTH = True
@ -189,7 +191,8 @@ OAUTH_ATTRIBUTE_MAP = {
```
=== "Azure Cloud"
!!! note
For users of Azure Cloud, since there is no `id` field returned from Azure Cloud's user info endpoint, we use a special configuration for the `OAUTH_ATTRIBUTE_MAP` setting (the others are the same as for GitHub/Google). Please see [this tutorial](https://forum.seafile.com/t/oauth-authentification-against-microsoft-office365-azure-cloud/7999) for the complete deployment process of OAuth against Azure Cloud.
```python
OAUTH_ATTRIBUTE_MAP = {
@ -197,5 +200,3 @@ OAUTH_ATTRIBUTE_MAP = {
"name": (False, "name")
}
```


@ -8,9 +8,9 @@ Seafile currently supports sharing between Seafile servers with version greater
## Configuration
=== "Sharing between Seafile servers"
Add the following configuration to `seahub_settings.py`.
```python
# Enable OCM
@ -28,12 +28,8 @@ Seafile currently supports sharing between Seafile servers with version greater
]
```
=== "Sharing from NextCloud to Seafile"
Add the following configuration to `seahub_settings.py`.
```python
# Enable OCM
ENABLE_OCM_VIA_WEBDAV = True
@ -46,6 +42,8 @@ Seafile currently supports sharing between Seafile servers with version greater
]
```
!!! tip "OCM_REMOTE_SERVERS is a list of servers that you allow your users to share libraries with"
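As an illustrative sketch for `seahub_settings.py` (the `server_name` and `server_url` values are placeholders, not required values), an entry in `OCM_REMOTE_SERVERS` might look like:

```python
# Hypothetical OCM_REMOTE_SERVERS entry; each dict names one remote server
# that your users are allowed to share libraries with.
OCM_REMOTE_SERVERS = [
    {
        "server_name": "dev",
        "server_url": "https://seafile-domain-2/",  # ends with '/'
    },
]
```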
## Usage
### Share library to other server


@ -31,160 +31,161 @@ $ openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -keyout sp.key -out sp.cr
## Integration with ADFS/SAML single sign-on
=== "Microsoft Azure SAML single sign-on app"
    If you use a Microsoft Azure SAML app to achieve single sign-on, please follow the steps below:

    **First**, add a SAML single sign-on app and assign users; refer to: [add an Azure AD SAML application](https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/add-application-portal), [create and assign users](https://docs.microsoft.com/en-us/azure/active-directory/manage-apps/add-application-portal-assign-users).

    **Second**, set up the _Identifier_, _Reply URL_, and _Sign on URL_ of the SAML app based on your service URL; refer to: [enable single sign-on for a SAML app](https://learn.microsoft.com/en-us/azure/active-directory/manage-apps/add-application-portal-setup-sso). The formats of the _Identifier_, _Reply URL_, and _Sign on URL_ are: https://example.com/saml2/metadata/, https://example.com/saml2/acs/, and https://example.com/, e.g.:

    ![](../images/auto-upload/72c7b210-4a91-4e86-ba2e-df5ae0a4a0b0.png)

    **Next**, [edit SAML attributes & claims](https://learn.microsoft.com/en-us/azure/active-directory/develop/saml-claims-customization). Keep the default attributes & claims of the SAML app unchanged; the _uid_ attribute must be added, while the _mail_ and _name_ attributes are optional, e.g.:

    ![](../images/auto-upload/417d-a48a-3e10c46b98f0.png)

    **Next**, download the base64 format SAML app's certificate and rename it to idp.crt:

    ![](../images/auto-upload/0a693563-d511-4c3c-ac30-82a26d10cfab.png)

    and put it under the certs directory (`/opt/seafile/seahub-data/certs`).

    **Next**, copy the metadata URL of the SAML app:

    ![](../images/auto-upload/1426318f-0a61-462d-a514-13768ca0b18c.png)

    and paste it into the `SAML_REMOTE_METADATA_URL` option in seahub_settings.py, e.g.:

    ```python
    SAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx' # copy from SAML app
    ```

    **Next**, add the `ENABLE_ADFS_LOGIN`, `LOGIN_REDIRECT_URL` and `SAML_ATTRIBUTE_MAPPING` options to seahub_settings.py, and then restart Seafile, e.g.:

    ```python
    ENABLE_ADFS_LOGIN = True
    LOGIN_REDIRECT_URL = '/saml2/complete/'
    SAML_ATTRIBUTE_MAPPING = {
        'name': ('display_name', ),
        'mail': ('contact_email', ),
        'seafile_groups': ('', ),  # Optional, set this attribute if you need to synchronize groups/departments.
        ...
    }
    SAML_REMOTE_METADATA_URL = 'https://login.microsoftonline.com/xxx/federationmetadata/2007-06/federationmetadata.xml?appid=xxx' # copy from SAML app
    ```

    !!! note
        - If the xmlsec1 binary is **not located in** `/usr/bin/xmlsec1`, you need to add the following configuration in seahub_settings.py:

        ```python
        SAML_XMLSEC_BINARY_PATH = '/path/to/xmlsec1'
        ```

        View where the xmlsec1 binary is located:

        ```
        $ which xmlsec1
        ```

        - If certificates are **not placed in** `/opt/seafile/seahub-data/certs`, you need to add the following configuration in seahub_settings.py:

        ```python
        SAML_CERTS_DIR = '/path/to/certs'
        ```

    **Finally**, open the browser, enter the Seafile login page, click `Single Sign-On`, and use a user assigned to the SAML app to perform a SAML login test.
=== "On-premise ADFS"
    If you use Microsoft ADFS to achieve single sign-on, please follow the steps below:

    **First**, please make sure the following preparations are done:

    1. A Windows Server with [ADFS](https://learn.microsoft.com/en-us/windows-server/identity/active-directory-federation-services) installed. For configuring and installing ADFS you can see [this article](https://learn.microsoft.com/en-us/windows-server/identity/ad-fs/deployment/deploying-a-federation-server-farm).
    2. A valid SSL certificate for the ADFS server; here we use `temp.adfs.com` as the example domain name.
    3. A valid SSL certificate for the Seafile server; here we use `demo.seafile.com` as the example domain name.

    **Second**, download the base64 format certificate and upload it:

    * Navigate to the _AD FS_ management window. In the left sidebar menu, navigate to **Services** > **Certificates**.
    * Locate the _Token-signing_ certificate. Right-click the certificate and select **View Certificate**.

    ![](../images/auto-upload/7a1eead2-272f-40ec-9768-effc1d4f3273.png)

    * In the dialog box, select the **Details** tab.
    * Click **Copy to File**.
    * In the _Certificate Export Wizard_ that opens, click **Next**.
    * Select **Base-64 encoded X.509 (.CER)**, then click **Next**.
    * Name it **idp.crt**, then click **Next**.
    * Click **Finish** to complete the download.
    * Then put it under the certs directory (`/opt/seafile/seahub-data/certs`).

    **Next**, add the following configurations to seahub_settings.py and then restart Seafile:

    ```python
    ENABLE_ADFS_LOGIN = True
    LOGIN_REDIRECT_URL = '/saml2/complete/'
    SAML_ATTRIBUTE_MAPPING = {
        'name': ('display_name', ),
        'mail': ('contact_email', ),
        'seafile_groups': ('', ),  # Optional, set this attribute if you need to synchronize groups/departments.
        ...
    }
    SAML_REMOTE_METADATA_URL = 'https://temp.adfs.com/federationmetadata/2007-06/federationmetadata.xml' # The format of the ADFS federation metadata URL is: `https://{your ADFS domain name}/federationmetadata/2007-06/federationmetadata.xml`
    ```

    **Next**, add a [relying party trust](https://learn.microsoft.com/en-us/windows-server/identity/ad-fs/operations/create-a-relying-party-trust#to-create-a-claims-aware-relying-party-trust-using-federation-metadata):

    * Log into the ADFS server and open the ADFS management.
    * Under **Actions**, click **Add Relying Party Trust**.
    * On the Welcome page, choose **Claims aware** and click **Start**.
    * Select **Import data about the relying party published online or on a local network**, type your metadata URL in **Federation metadata address (host name or URL)**, and then click **Next**. Your metadata URL format is: `https://example.com/saml2/metadata/`, e.g.:

    ![](../images/auto-upload/4d6412ee-009e-42df-b0eb-081735d873c5.png)

    * On the **Specify Display Name** page, type a name in **Display name**, e.g. `Seafile`; under **Notes**, type a description for this relying party trust; and then click **Next**.
    * In the **Choose an access control policy** window, select **Permit everyone**, then click **Next**.
    * Review your settings, then click **Next**.
    * Click **Close**.

    **Next**, create claims rules:

    * Open the ADFS management, click **Relying Party Trusts**.
    * Right-click your trust, and then click **Edit Claim Issuance Policy**.
    * On the **Issuance Transform Rules** tab, click **Add Rules**.
    * Click the **Claim rule template** dropdown menu, select **Send LDAP Attributes as Claims**, and then click **Next**.
    * In the **Claim rule name** field, type the display name for this rule, such as **Seafile Claim rule**. Click the **Attribute store** dropdown menu and select **Active Directory**. In the **LDAP Attribute** column, click the dropdown menu and select **User-Principal-Name**. In the **Outgoing Claim Type** column, click the dropdown menu and select **UPN**. Then click **Finish**.
    * Click **Add Rule** again.
    * Click the **Claim rule template** dropdown menu, select **Transform an Incoming Claim**, and then click **Next**.
    * In the **Claim rule name** field, type the display name for this rule, such as **UPN to Name ID**. Click the **Incoming claim type** dropdown menu and select **UPN** (it must match the **Outgoing Claim Type** in the `Seafile Claim rule`). Click the **Outgoing claim type** dropdown menu and select **Name ID**. Click the **Outgoing name ID format** dropdown menu and select **Email**. Then click **Finish**.
    * Click **OK** to add both new rules.

    !!! tip "When creating claims rules, you can also select other LDAP Attributes, such as E-Mail-Addresses, depending on your ADFS service"

    **Finally**, open the browser, enter the Seafile login page, and click `Single Sign-On` to perform an ADFS login test.
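As a small illustrative helper (the function name is ours, not part of Seafile), the ADFS federation metadata URL format noted in the config comment above can be built from the ADFS domain name:

```python
def adfs_metadata_url(adfs_domain: str) -> str:
    """Build the ADFS federation metadata URL used for SAML_REMOTE_METADATA_URL."""
    return f"https://{adfs_domain}/federationmetadata/2007-06/federationmetadata.xml"

print(adfs_metadata_url("temp.adfs.com"))
```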


@ -6,206 +6,198 @@ To setup Seafile Professional Server with Amazon S3:
- Setup the basic Seafile Professional Server following the guide on [Download and setup Seafile Professional Server](../setup_binary/installation_pro.md)
- Install the python `boto` library. It's needed to access the S3 service.
    === "Seafile 10.0 or earlier"

        ```
        sudo pip install boto
        ```

    === "Seafile 11.0"

        ```
        sudo pip install boto3
        ```

- Install and configure memcached or Redis. For best performance, Seafile requires a memory cache for objects. We recommend allocating at least 128MB of memory for memcached or Redis.

The configuration options differ for different S3 storage providers. We'll describe the configurations in separate sections.

=== "AWS S3"
    !!! note "You also need to add [memory cache configurations](../config/seafile-conf.md#cache-pro-edition-only)"

    AWS S3 is the original S3 storage provider.

    Edit `seafile.conf`, add the following lines:

    ```
    [commit_object_backend]
    name = s3
    bucket = my-commit-objects
    key_id = your-key-id
    key = your-secret-key
    use_v4_signature = true
    aws_region = eu-central-1

    [fs_object_backend]
    name = s3
    bucket = my-fs-objects
    key_id = your-key-id
    key = your-secret-key
    use_v4_signature = true
    aws_region = eu-central-1

    [block_backend]
    name = s3
    bucket = my-block-objects
    key_id = your-key-id
    key = your-secret-key
    use_v4_signature = true
    aws_region = eu-central-1
    ```

    We'll explain the configurations below:

    | Variable | Description |
    | --- | --- |
    | `bucket` | It's required to create separate buckets for commit, fs, and block objects. When creating your buckets on S3, please first read the [S3 bucket naming rules][1]. Note especially not to use **UPPERCASE** letters in bucket names (don't use camel-style names, such as MyCommitObjects). |
    | `key_id` | The `key_id` is required to authenticate you to S3. You can find the `key_id` in the "security credentials" section on your AWS account page. |
    | `key` | The `key` is required to authenticate you to S3. You can find the `key` in the "security credentials" section on your AWS account page. |
    | `use_v4_signature` | There are two versions of authentication protocols that can be used with S3 storage: Version 2 (older, may still be supported by some regions) and Version 4 (current, used by most regions). If you don't set this option, Seafile will use the v2 protocol. It's suggested to use the v4 protocol. |
    | `aws_region` | If you use the v4 protocol, set this option to the region you chose when you created the buckets. If it's not set and you're using the v4 protocol, Seafile will use `us-east-1` as the default. This option will be ignored if you use the v2 protocol. |

    [1]: https://docs.aws.amazon.com/AmazonS3/latest/userguide/BucketRestrictions.html#bucketnamingrules

    !!! tip
        For file search and webdav to work with the v4 signature mechanism, you need to add the following lines to ~/.boto

        ```
        [s3]
        use-sigv4 = True
        ```

    ### Use server-side encryption with customer-provided keys (SSE-C)

    Since Pro 11.0, you can use SSE-C with S3. Add the following options to seafile.conf:

    ```
    [commit_object_backend]
    name = s3
    ......
    use_v4_signature = true
    use_https = true
    sse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P

    [fs_object_backend]
    name = s3
    ......
    use_v4_signature = true
    use_https = true
    sse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P

    [block_backend]
    name = s3
    ......
    use_v4_signature = true
    use_https = true
    sse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P
    ```

    `sse_c_key` is a 32-byte random string.
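Since the key is just a 32-byte random string, one way to generate it (an illustrative sketch, not a Seafile requirement) is:

```python
# Generate a 32-character alphanumeric key suitable for the sse_c_key option.
import secrets
import string

alphabet = string.ascii_letters + string.digits
sse_c_key = "".join(secrets.choice(alphabet) for _ in range(32))
print(sse_c_key)
```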
Since Pro 11.0, you can use SSE-C to S3. Add the following options to seafile.conf:
## Other Public Hosted S3 Storage
```
[commit_object_backend]
name = s3
......
use_v4_signature = true
use_https = true
sse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P
There are other S3-compatible cloud storage providers in the market, such as Blackblaze and Wasabi. Configuration for those providers are just a bit different from AWS. We don't assure the following configuration works for all providers. If you have problems please contact our support
[fs_object_backend]
name = s3
......
use_v4_signature = true
use_https = true
sse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P
```
[commit_object_backend]
name = s3
bucket = my-commit-objects
host = <access endpoint for storage provider>
key_id = your-key-id
key = your-secret-key
# v2 authentication protocol will be used if not set
use_v4_signature = true
# required for v4 protocol. ignored for v2 protocol.
aws_region = <region name for storage provider>
[block_backend]
name = s3
......
use_v4_signature = true
use_https = true
sse_c_key = XiqMSf3x5ja4LRibBbV0sVntVpdHXl3P
```
[fs_object_backend]
name = s3
bucket = my-fs-objects
host = <access endpoint for storage provider>
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
aws_region = <region name for storage provider>
`ssk_c_key` is a 32-byte random string.
[block_backend]
name = s3
bucket = my-block-objects
host = <access endpoint for storage provider>
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
aws_region = <region name for storage provider>
```
=== "Other Public Hosted S3 Storage"
| variable | description |
|----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `host` | The endpoint by which you access the storage service. Usually it starts with the region name. It's required to provide the host address, otherwise Seafile will use AWS's address. |
| `bucket` | It's required to create separate buckets for commit, fs, and block objects. |
| `key_id` | The key_id is required to authenticate you to S3 storage. |
| `key` | The key is required to authenticate you to S3 storage. (Note: `key_id` and `key` are typically used together for authentication.) |
| `use_v4_signature` | There are two versions of authentication protocols that can be used with S3 storage. Version 2 is the older one, which may still be supported by some cloud providers; version 4 is the current one used by Amazon S3 and is supported by most providers. If you don't set this option, Seafile will use v2 protocol. It's suggested to use v4 protocol. |
| `aws_region` | If you use v4 protocol, set this option to the region you chose when you create the buckets. If it's not set and you're using v4 protocol, Seafile will use `us-east-1` as the default. This option will be ignored if you use v2 protocol. |
There are other S3-compatible cloud storage providers in the market, such as Blackblaze and Wasabi. Configuration for those providers are just a bit different from AWS. We don't assure the following configuration works for all providers. If you have problems please contact our support.
Edit `seafile.conf`, add the following lines:
!!! tip
For file search and webdav to work with the v4 signature mechanism, you need to add following lines to ~/.boto
```
[commit_object_backend]
name = s3
bucket = my-commit-objects
host = <access endpoint for storage provider>
key_id = your-key-id
key = your-secret-key
# v2 authentication protocol will be used if not set
use_v4_signature = true
# required for v4 protocol. ignored for v2 protocol.
aws_region = <region name for storage provider>
[fs_object_backend]
name = s3
bucket = my-fs-objects
host = <access endpoint for storage provider>
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
aws_region = <region name for storage provider>
[block_backend]
name = s3
bucket = my-block-objects
host = <access endpoint for storage provider>
key_id = your-key-id
key = your-secret-key
use_v4_signature = true
aws_region = <region name for storage provider>
[s3]
use-sigv4 = True
```
We'll explain the configurations below:

| variable | description |
|--------------------|-------------|
| `host` | The endpoint by which you access the storage service. It usually starts with the region name. You must provide the host address; otherwise Seafile will use AWS's address. |
| `bucket` | The bucket name. You must create separate buckets for commit, fs, and block objects. |
| `key_id` | The key ID used to authenticate to the S3 storage. |
| `key` | The secret key used, together with `key_id`, to authenticate to the S3 storage. |
| `use_v4_signature` | There are two versions of authentication protocols that can be used with S3 storage. Version 2 is the older one, which some cloud providers may still support; version 4 is the current one used by Amazon S3 and is supported by most providers. If this option is not set, Seafile uses the v2 protocol. We suggest using the v4 protocol. |
| `aws_region` | If you use the v4 protocol, set this option to the region you chose when you created the buckets. If it's not set and you're using the v4 protocol, Seafile uses `us-east-1` as the default. This option is ignored with the v2 protocol. |
=== "Self-hosted S3 Storage"
Many self-hosted object storage systems are now compatible with the S3 API, such as OpenStack Swift and Ceph's RADOS Gateway. You can use these S3-compatible storage systems as the backend for Seafile. Here is an example config:
```
[commit_object_backend]
name = s3
bucket = my-commit-objects
key_id = your-key-id
key = your-secret-key
host = 192.168.1.123:8080
path_style_request = true
[fs_object_backend]
name = s3
bucket = my-fs-objects
key_id = your-key-id
key = your-secret-key
host = 192.168.1.123:8080
path_style_request = true
[block_backend]
name = s3
bucket = my-block-objects
key_id = your-key-id
key = your-secret-key
host = 192.168.1.123:8080
path_style_request = true
```
!!! note "You also need to add [memory cache configurations](../config/seafile-conf.md#cache-pro-edition-only)"
We'll explain the configurations below:
| variable | description |
|--------------------|-------------|
| `host` | The address and port of the S3-compatible service. Do not prepend `http://` or `https://` to this value; Seafile uses HTTP connections by default. To use HTTPS connections, set `use_https = true`. |
| `bucket` | The bucket name. You must create separate buckets for commit, fs, and block objects. |
| `key_id` | The key ID used to authenticate to the S3 storage. |
| `key` | The secret key used, together with `key_id`, to authenticate to the S3 storage. |
| `path_style_request` | Asks Seafile to use URLs like `https://192.168.1.123:8080/bucketname/object` to access objects. Amazon S3's default URL format is virtual host style, such as `https://bucketname.s3.amazonaws.com/object`, but that style relies on advanced DNS setup, so most self-hosted storage systems only implement the path style. We recommend setting this option to `true`. |
Below are a few options that are not shown in the example configuration above:
| variable | description |
|--------------------|-------------|
| `use_v4_signature` | There are two versions of authentication protocols that can be used with S3 storage. Version 2 is supported by most self-hosted storage systems; version 4 is the current protocol used by AWS S3, but may not be supported by some self-hosted systems. If this option is not set, Seafile uses the v2 protocol by default. We recommend trying v2 first and switching to v4 if v2 doesn't work. |
| `aws_region` | If you use the v4 protocol, set this option to the region you chose when you created the buckets. If it's not set and you're using the v4 protocol, Seafile uses `us-east-1` as the default. This option is ignored with the v2 protocol. |
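Putting these options together, a self-hosted bucket section that forces HTTPS and the v4 protocol might look like the sketch below. The host address and region are placeholder assumptions; substitute the values of your own storage service:

```
[commit_object_backend]
name = s3
bucket = my-commit-objects
key_id = your-key-id
key = your-secret-key
# placeholder address of your S3-compatible service
host = 192.168.1.123:9000
path_style_request = true
use_https = true
# only needed if v2 doesn't work with your storage system
use_v4_signature = true
aws_region = us-east-1
```

Repeat the same options for the `[fs_object_backend]` and `[block_backend]` sections with their respective buckets.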
## Use HTTPS connections to S3
=== "Redis"

    !!! success "Redis is supported since version 11.0"

    1. Install Redis with the package manager of your OS.

    2. Refer to [Django's documentation about using Redis cache](https://docs.djangoproject.com/en/4.2/topics/cache/#redis) to add Redis configurations to `seahub_settings.py`.
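For example, a minimal cache entry in `seahub_settings.py` following the pattern in the linked Django documentation might look like this. The `LOCATION` address assumes a local Redis on its default port; adjust it to your deployment:

```python
# Sketch for seahub_settings.py -- uses Django's built-in Redis cache
# backend (available since Django 4.0). The address below assumes Redis
# runs locally on the default port 6379.
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.redis.RedisCache',
        'LOCATION': 'redis://127.0.0.1:6379',
    },
}
```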
From Seafile Docker 12.0, we recommend that you use `.env` and `seafile-server.yml` files for configuration.
### Backup the original docker-compose.yml file
```sh
mv docker-compose.yml docker-compose.yml.bak
```
### Download Seafile 12.0 Docker files

Download [.env](../docker/ce/env), [seafile-server.yml](../docker/ce/seafile-server.yml) and [caddy.yml](../docker/ce/caddy.yml), and modify the `.env` file according to the old configuration in `docker-compose.yml.bak`.
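For example, values carried over from `docker-compose.yml.bak` typically become entries like the following in `.env`. The variable names shown here are assumptions based on the 12.0 template and should be checked against the file you downloaded; all values are placeholders:

```
SEAFILE_SERVER_HOSTNAME=seafile.example.com
SEAFILE_SERVER_PROTOCOL=https
SEAFILE_MYSQL_ROOT_PASSWORD=change_me
INIT_SEAFILE_ADMIN_EMAIL=admin@example.com
INIT_SEAFILE_ADMIN_PASSWORD=change_me_too
```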
=== "Seafile community edition"