path: root/doc/radosgw
Diffstat (limited to 'doc/radosgw')
-rw-r--r--  doc/radosgw/STS.rst  297
-rw-r--r--  doc/radosgw/STSLite.rst  196
-rw-r--r--  doc/radosgw/admin.rst  715
-rw-r--r--  doc/radosgw/adminops.rst  2166
-rw-r--r--  doc/radosgw/api.rst  16
-rw-r--r--  doc/radosgw/archive-sync-module.rst  44
-rw-r--r--  doc/radosgw/barbican.rst  123
-rw-r--r--  doc/radosgw/bucketpolicy.rst  216
-rw-r--r--  doc/radosgw/cloud-sync-module.rst  244
-rw-r--r--  doc/radosgw/cloud-transition.rst  368
-rw-r--r--  doc/radosgw/compression.rst  91
-rw-r--r--  doc/radosgw/config-ref.rst  301
-rw-r--r--  doc/radosgw/d3n_datacache.rst  116
-rw-r--r--  doc/radosgw/dynamicresharding.rst  238
-rw-r--r--  doc/radosgw/elastic-sync-module.rst  181
-rw-r--r--  doc/radosgw/encryption.rst  96
-rw-r--r--  doc/radosgw/frontends.rst  163
-rw-r--r--  doc/radosgw/index.rst  87
-rw-r--r--  doc/radosgw/keycloak.rst  138
-rw-r--r--  doc/radosgw/keystone.rst  179
-rw-r--r--  doc/radosgw/kmip.rst  219
-rw-r--r--  doc/radosgw/layout.rst  208
-rw-r--r--  doc/radosgw/ldap-auth.rst  167
-rw-r--r--  doc/radosgw/lua-scripting.rst  570
-rw-r--r--  doc/radosgw/mfa.rst  102
-rw-r--r--  doc/radosgw/multisite-sync-policy.rst  716
-rw-r--r--  doc/radosgw/multisite.rst  1690
-rw-r--r--  doc/radosgw/multitenancy.rst  169
-rw-r--r--  doc/radosgw/nfs.rst  375
-rw-r--r--  doc/radosgw/notifications.rst  547
-rw-r--r--  doc/radosgw/oidc.rst  97
-rw-r--r--  doc/radosgw/opa.rst  72
-rw-r--r--  doc/radosgw/orphans.rst  117
-rw-r--r--  doc/radosgw/placement.rst  263
-rw-r--r--  doc/radosgw/pools.rst  57
-rw-r--r--  doc/radosgw/qat-accel.rst  155
-rw-r--r--  doc/radosgw/rgw-cache.rst  155
-rw-r--r--  doc/radosgw/role.rst  570
-rw-r--r--  doc/radosgw/s3-notification-compatibility.rst  149
-rw-r--r--  doc/radosgw/s3.rst  98
-rw-r--r--  doc/radosgw/s3/authentication.rst  235
-rw-r--r--  doc/radosgw/s3/bucketops.rst  706
-rw-r--r--  doc/radosgw/s3/commons.rst  113
-rw-r--r--  doc/radosgw/s3/cpp.rst  337
-rw-r--r--  doc/radosgw/s3/csharp.rst  199
-rw-r--r--  doc/radosgw/s3/java.rst  212
-rw-r--r--  doc/radosgw/s3/objectops.rst  558
-rw-r--r--  doc/radosgw/s3/perl.rst  192
-rw-r--r--  doc/radosgw/s3/php.rst  214
-rw-r--r--  doc/radosgw/s3/python.rst  197
-rw-r--r--  doc/radosgw/s3/ruby.rst  364
-rw-r--r--  doc/radosgw/s3/serviceops.rst  69
-rw-r--r--  doc/radosgw/s3select.rst  796
-rw-r--r--  doc/radosgw/session-tags.rst  427
-rw-r--r--  doc/radosgw/swift.rst  79
-rw-r--r--  doc/radosgw/swift/auth.rst  82
-rw-r--r--  doc/radosgw/swift/containerops.rst  341
-rw-r--r--  doc/radosgw/swift/java.rst  175
-rw-r--r--  doc/radosgw/swift/objectops.rst  271
-rw-r--r--  doc/radosgw/swift/python.rst  114
-rw-r--r--  doc/radosgw/swift/ruby.rst  119
-rw-r--r--  doc/radosgw/swift/serviceops.rst  76
-rw-r--r--  doc/radosgw/swift/tempurl.rst  102
-rw-r--r--  doc/radosgw/swift/tutorial.rst  62
-rw-r--r--  doc/radosgw/sync-modules.rst  97
-rw-r--r--  doc/radosgw/troubleshooting.rst  208
-rw-r--r--  doc/radosgw/vault.rst  442
67 files changed, 19258 insertions, 0 deletions
diff --git a/doc/radosgw/STS.rst b/doc/radosgw/STS.rst
new file mode 100644
index 000000000..bc89b89da
--- /dev/null
+++ b/doc/radosgw/STS.rst
@@ -0,0 +1,297 @@
+===========
+STS in Ceph
+===========
+
+The Secure Token Service (STS) is an AWS web service that returns a set of temporary security credentials for authenticating federated users.
+The official AWS documentation can be found here: https://docs.aws.amazon.com/STS/latest/APIReference/Welcome.html.
+
+Ceph Object Gateway implements a subset of STS APIs that provide temporary credentials for identity and access management.
+These temporary credentials can be used to make subsequent S3 calls which will be authenticated by the STS engine in Ceph Object Gateway.
+Permissions of the temporary credentials can be further restricted via an IAM policy passed as a parameter to the STS APIs.
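
For example, a session policy passed through the ``Policy`` parameter can narrow the
temporary credentials down to read-only access on a single bucket. A minimal sketch
(the bucket name is illustrative, and the ``assume_role`` call is shown only as a
comment because it requires a live gateway):

```python
import json

# A session policy that restricts the temporary credentials to
# read-only access on a single (illustrative) bucket, regardless of
# what the role's own permission policy allows.
session_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",
            "arn:aws:s3:::example-bucket/*",
        ],
    }],
})

# The policy would then be passed alongside the other parameters, e.g.:
#   sts_client.assume_role(RoleArn=..., RoleSessionName=...,
#                          Policy=session_policy)
print(session_policy)
```

Note that a session policy can only further restrict what the role's permission
policy already allows; it can never grant additional access.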
+
+STS REST APIs
+=============
+
+The following STS REST APIs have been implemented in Ceph Object Gateway:
+
+1. AssumeRole: Returns a set of temporary credentials that can be used for
+cross-account access. The temporary credentials will have permissions that are
+allowed by both the permission policies attached to the role and the policy
+passed to the AssumeRole API.
+
+Parameters:
+ **RoleArn** (String/ Required): ARN of the Role to Assume.
+
+ **RoleSessionName** (String/ Required): An Identifier for the assumed role
+ session.
+
+ **Policy** (String/ Optional): An IAM Policy in JSON format.
+
+ **DurationSeconds** (Integer/ Optional): The duration in seconds of the session.
+ Its default value is 3600.
+
+ **ExternalId** (String/ Optional): A unique Id that might be used when a role is
+ assumed in another account.
+
+ **SerialNumber** (String/ Optional): The Id number of the MFA device associated
+ with the user making the AssumeRole call.
+
+ **TokenCode** (String/ Optional): The value provided by the MFA device, if the
+ trust policy of the role being assumed requires MFA.
+
+2. AssumeRoleWithWebIdentity: Returns a set of temporary credentials for users that
+have been authenticated by a web or mobile app via an OpenID Connect/OAuth2.0 Identity Provider.
+Currently, Keycloak has been tested and integrated with RGW.
+
+Parameters:
+ **RoleArn** (String/ Required): ARN of the Role to Assume.
+
+ **RoleSessionName** (String/ Required): An Identifier for the assumed role
+ session.
+
+ **Policy** (String/ Optional): An IAM Policy in JSON format.
+
+ **DurationSeconds** (Integer/ Optional): The duration in seconds of the session.
+ Its default value is 3600.
+
+ **ProviderId** (String/ Optional): Fully qualified host component of the domain name
+ of the IDP. Valid only for OAuth2.0 tokens (not for OpenID Connect tokens).
+
+ **WebIdentityToken** (String/ Required): The OpenID Connect/ OAuth2.0 token, which the
+ application gets in return after authenticating its user with an IDP.
+
+Before invoking AssumeRoleWithWebIdentity, an OpenID Connect Provider entity (with which
+the web application authenticates) needs to be created in RGW.
+
+The trust between the IDP and the role is created by adding a condition to the role's trust policy, which
+allows access only to applications which satisfy the given condition.
+All claims of the JWT are supported in the condition of the role's trust policy.
+An example of a policy that uses the 'aud' claim in the condition is of the form::
+
+ '''{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Federated":["arn:aws:iam:::oidc-provider/<URL of IDP>"]},"Action":["sts:AssumeRoleWithWebIdentity"],"Condition":{"StringEquals":{"<URL of IDP> :app_id":"<aud>"}}}]}'''
+
+The app_id in the condition above must match the 'aud' claim of the incoming token.
+
+An example of a policy that uses the 'sub' claim in the condition is of the form::
+
+ '''{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Federated":["arn:aws:iam:::oidc-provider/<URL of IDP>"]},"Action":["sts:AssumeRoleWithWebIdentity"],"Condition":{"StringEquals":{"<URL of IDP> :sub":"<sub>"}}}]}'''
+
+Similarly, an example of a policy that uses the 'azp' claim in the condition is of the form::
+
+ '''{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Federated":["arn:aws:iam:::oidc-provider/<URL of IDP>"]},"Action":["sts:AssumeRoleWithWebIdentity"],"Condition":{"StringEquals":{"<URL of IDP> :azp":"<azp>"}}}]}'''
+
+A shadow user is created corresponding to every federated user. The user id is derived from the 'sub' field of the incoming web token.
+The user is created in a separate namespace, 'oidc', so that the user id doesn't clash with any other user ids in RGW. The format of the user id
+is <tenant>$<user-namespace>$<sub>, where user-namespace is 'oidc' for users that authenticate with OIDC providers.
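
As an illustration of this mapping, the sketch below decodes a JWT payload and forms
the shadow user id. The token and tenant are made up for the example, and RGW's own
signature validation of the token is skipped here:

```python
import base64
import json

def shadow_user_id(jwt_token: str, tenant: str = "") -> str:
    """Derive the RGW shadow-user id <tenant>$oidc$<sub> from a JWT.

    The payload is only base64-decoded here, not verified; RGW itself
    validates the token's signature against the IDP.
    """
    payload_b64 = jwt_token.split(".")[1]
    # Restore the base64 padding stripped by the JWT encoding.
    payload_b64 += "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return f"{tenant}$oidc${payload['sub']}"

# Build a toy (unsigned) token just to demonstrate the claim extraction.
header = base64.urlsafe_b64encode(b'{"alg":"none"}').decode().rstrip("=")
body = base64.urlsafe_b64encode(b'{"sub":"user-123"}').decode().rstrip("=")
token = f"{header}.{body}."
print(shadow_user_id(token))  # -> $oidc$user-123
```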
+
+RGW now supports session tags that can be passed in the web token to the AssumeRoleWithWebIdentity call. More information on session tags can be found at
+:doc:`session-tags`.
+
+STS Configuration
+=================
+
+The following configurable options have to be added for STS integration::
+
+ [client.{your-rgw-name}]
+ rgw_sts_key = {sts key for encrypting the session token}
+ rgw_s3_auth_use_sts = true
+
+Notes:
+
+* By default, STS and S3 APIs co-exist in the same namespace, and both S3
+ and STS APIs can be accessed via the same endpoint in Ceph Object Gateway.
+* The ``rgw_sts_key`` needs to be a hex-string consisting of exactly 16 characters.
+
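
Since the key must be exactly 16 hex characters, one convenient way to generate a
suitable value is shown below (a sketch; any source of 8 random bytes works):

```python
import secrets

# 8 random bytes encode to 16 hexadecimal characters, the exact
# length expected for rgw_sts_key.
sts_key = secrets.token_hex(8)
print(sts_key)
```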
+Examples
+========
+1. In order to get the example to work, make sure that the user TESTER has the ``roles`` capability assigned:
+
+.. code-block:: console
+
+ radosgw-admin caps add --uid="TESTER" --caps="roles=*"
+
+2. The following is an example of the AssumeRole API call, which shows the steps to create a role, attach a policy to it
+   (one that allows access to S3 resources), assume the role to get temporary credentials, and access S3 resources using
+   those credentials. In this example, TESTER1 assumes a role created by TESTER, to access S3 resources owned by TESTER,
+   according to the permission policy attached to the role.
+
+.. code-block:: python
+
+ import boto3
+
+ iam_client = boto3.client('iam',
+ aws_access_key_id=<access_key of TESTER>,
+ aws_secret_access_key=<secret_key of TESTER>,
+ endpoint_url=<IAM URL>,
+ region_name=''
+ )
+
+ policy_document = '''{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":["arn:aws:iam:::user/TESTER1"]},"Action":["sts:AssumeRole"]}]}'''
+
+ role_response = iam_client.create_role(
+ AssumeRolePolicyDocument=policy_document,
+ Path='/',
+ RoleName='S3Access',
+ )
+
+ role_policy = '''{"Version":"2012-10-17","Statement":{"Effect":"Allow","Action":"s3:*","Resource":"arn:aws:s3:::*"}}'''
+
+ response = iam_client.put_role_policy(
+ RoleName='S3Access',
+ PolicyName='Policy1',
+ PolicyDocument=role_policy
+ )
+
+ sts_client = boto3.client('sts',
+ aws_access_key_id=<access_key of TESTER1>,
+ aws_secret_access_key=<secret_key of TESTER1>,
+ endpoint_url=<STS URL>,
+ region_name='',
+ )
+
+ response = sts_client.assume_role(
+ RoleArn=role_response['Role']['Arn'],
+ RoleSessionName='Bob',
+ DurationSeconds=3600
+ )
+
+ s3client = boto3.client('s3',
+ aws_access_key_id = response['Credentials']['AccessKeyId'],
+ aws_secret_access_key = response['Credentials']['SecretAccessKey'],
+ aws_session_token = response['Credentials']['SessionToken'],
+ endpoint_url=<S3 URL>,
+ region_name='',)
+
+ bucket_name = 'my-bucket'
+ s3bucket = s3client.create_bucket(Bucket=bucket_name)
+ resp = s3client.list_buckets()
+
+3. The following is an example of the AssumeRoleWithWebIdentity API call, in which an external app whose users are
+authenticated with an OpenID Connect/OAuth2.0 IDP (Keycloak in this example) assumes a role to get temporary credentials
+and accesses S3 resources according to the permission policy of the role.
+
+.. code-block:: python
+
+ import boto3
+
+ iam_client = boto3.client('iam',
+ aws_access_key_id=<access_key of TESTER>,
+ aws_secret_access_key=<secret_key of TESTER>,
+ endpoint_url=<IAM URL>,
+ region_name=''
+ )
+
+ oidc_response = iam_client.create_open_id_connect_provider(
+    Url=<URL of the OpenID Connect Provider>,
+ ClientIDList=[
+ <Client id registered with the IDP>
+ ],
+ ThumbprintList=[
+ <Thumbprint of the IDP>
+ ]
+ )
+
+ policy_document = '''{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Federated":["arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/demo"]},"Action":["sts:AssumeRoleWithWebIdentity"],"Condition":{"StringEquals":{"localhost:8080/auth/realms/demo:app_id":"customer-portal"}}}]}'''
+ role_response = iam_client.create_role(
+ AssumeRolePolicyDocument=policy_document,
+ Path='/',
+ RoleName='S3Access',
+ )
+
+ role_policy = '''{"Version":"2012-10-17","Statement":{"Effect":"Allow","Action":"s3:*","Resource":"arn:aws:s3:::*"}}'''
+
+ response = iam_client.put_role_policy(
+ RoleName='S3Access',
+ PolicyName='Policy1',
+ PolicyDocument=role_policy
+ )
+
+ sts_client = boto3.client('sts',
+ aws_access_key_id=<access_key of TESTER1>,
+ aws_secret_access_key=<secret_key of TESTER1>,
+ endpoint_url=<STS URL>,
+ region_name='',
+ )
+
+    response = sts_client.assume_role_with_web_identity(
+ RoleArn=role_response['Role']['Arn'],
+ RoleSessionName='Bob',
+ DurationSeconds=3600,
+ WebIdentityToken=<Web Token>
+ )
+
+ s3client = boto3.client('s3',
+ aws_access_key_id = response['Credentials']['AccessKeyId'],
+ aws_secret_access_key = response['Credentials']['SecretAccessKey'],
+ aws_session_token = response['Credentials']['SessionToken'],
+ endpoint_url=<S3 URL>,
+ region_name='',)
+
+ bucket_name = 'my-bucket'
+ s3bucket = s3client.create_bucket(Bucket=bucket_name)
+ resp = s3client.list_buckets()
+
+How to obtain thumbprint of an OpenID Connect Provider IDP
+==========================================================
+1. Take the OpenID Connect provider's URL and append /.well-known/openid-configuration
+to it to obtain the URL of the IDP's configuration document. For example, if the URL
+of the IDP is http://localhost:8000/auth/realms/quickstart, then the URL of the
+configuration document is http://localhost:8000/auth/realms/quickstart/.well-known/openid-configuration
+
+2. Use the following curl command to get the configuration document from the URL described
+in step 1::
+
+ curl -k -v \
+ -X GET \
+ -H "Content-Type: application/x-www-form-urlencoded" \
+ "http://localhost:8000/auth/realms/quickstart/.well-known/openid-configuration" \
+ | jq .
+
+3. From the response of step 2, use the value of "jwks_uri" to get the certificate
+of the IDP, using the following command::
+
+ curl -k -v \
+ -X GET \
+ -H "Content-Type: application/x-www-form-urlencoded" \
+ "http://$KC_SERVER/$KC_CONTEXT/realms/$KC_REALM/protocol/openid-connect/certs" \
+ | jq .
+
+4. Copy the value of "x5c" from the response above into a file certificate.crt, and add
+'-----BEGIN CERTIFICATE-----' at the beginning and '-----END CERTIFICATE-----'
+at the end.
+
+5. Use the following OpenSSL command to get the certificate thumbprint::
+
+ openssl x509 -in certificate.crt -fingerprint -noout
+
+6. The result of the command in step 5 will be a SHA1 fingerprint, like the following::
+
+ SHA1 Fingerprint=F7:D7:B3:51:5D:D0:D3:19:DD:21:9A:43:A9:EA:72:7A:D6:06:52:87
+
+7. Remove the colons from the result above to get the final thumbprint, which can be used
+as input while creating the OpenID Connect Provider entity in IAM::
+
+ F7D7B3515DD0D319DD219A43A9EA727AD6065287
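
The manual steps above can also be scripted: the DER fingerprint reported by OpenSSL
is simply the SHA-1 of the certificate bytes that the JWKS ``x5c`` field
base64-encodes. A sketch (the example input is a placeholder, not a real certificate):

```python
import base64
import hashlib

def thumbprint_from_x5c(x5c: str) -> str:
    """Return the colon-free SHA-1 thumbprint of an x5c certificate.

    The x5c JWKS field is the base64 (DER) encoding of the certificate,
    so hashing the decoded bytes matches the output of
    `openssl x509 -fingerprint` with the colons removed.
    """
    der = base64.b64decode(x5c)
    return hashlib.sha1(der).hexdigest().upper()

# Illustrative only: the real input is the "x5c" entry fetched from
# the IDP's jwks_uri in step 3.
example = base64.b64encode(b"not-a-real-certificate").decode()
print(thumbprint_from_x5c(example))
```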
+
+Roles in RGW
+============
+
+More information on role manipulation can be found at
+:doc:`role`.
+
+OpenID Connect Provider in RGW
+==============================
+
+More information on OpenID Connect Provider entity manipulation
+can be found at
+:doc:`oidc`.
+
+Keycloak integration with Radosgw
+=================================
+
+Steps for integrating Radosgw with Keycloak can be found at
+:doc:`keycloak`.
+
+STSLite
+=======
+STSLite is built on STS, and its documentation can be found at
+:doc:`STSLite`. \ No newline at end of file
diff --git a/doc/radosgw/STSLite.rst b/doc/radosgw/STSLite.rst
new file mode 100644
index 000000000..7880e373f
--- /dev/null
+++ b/doc/radosgw/STSLite.rst
@@ -0,0 +1,196 @@
+=========
+STS Lite
+=========
+
+Ceph Object Gateway provides support for a subset of Amazon Secure Token Service
+(STS) APIs. STS Lite is an extension of STS and builds upon one of its APIs to
+decrease the load on external IDPs like Keystone and LDAP.
+
+A set of temporary security credentials is returned after authenticating
+a set of AWS credentials with the external IDP. These temporary credentials can be used
+to make subsequent S3 calls which will be authenticated by the STS engine in Ceph,
+resulting in less load on the Keystone/LDAP server.
+
+Temporary credentials with limited privileges can also be obtained for a local
+user using the STS Lite API.
+
+STS Lite REST APIs
+==================
+
+The following STS REST API is implemented as part of STS Lite in Ceph Object Gateway:
+
+1. GetSessionToken: Returns a set of temporary credentials for a set of AWS
+credentials. After initial authentication with Keystone/LDAP, the temporary
+credentials returned can be used to make subsequent S3 calls. The temporary
+credentials will have the same permissions as the AWS credentials.
+
+Parameters:
+ **DurationSeconds** (Integer/ Optional): The duration in seconds for which the
+ credentials should remain valid. Its default value is 3600 and its maximum
+ value defaults to 43200, which can be configured using ``rgw_sts_max_session_duration``.
+
+ **SerialNumber** (String/ Optional): The Id number of the MFA device associated
+ with the user making the GetSessionToken call.
+
+ **TokenCode** (String/ Optional): The value provided by the MFA device, if MFA is required.
+
+An administrative user needs to attach a policy to the user that allows invocation
+of the GetSessionToken API using the user's permanent credentials, and that allows
+subsequent S3 operations to be invoked only with the temporary credentials returned
+by GetSessionToken.
+
+The user attaching the policy needs to have admin caps. For example::
+
+ radosgw-admin caps add --uid="TESTER" --caps="user-policy=*"
+
+The following is the policy that needs to be attached to a user 'TESTER1'::
+
+ user_policy = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Deny\",\"Action\":\"s3:*\",\"Resource\":[\"*\"],\"Condition\":{\"BoolIfExists\":{\"sts:authentication\":\"false\"}}},{\"Effect\":\"Allow\",\"Action\":\"sts:GetSessionToken\",\"Resource\":\"*\",\"Condition\":{\"BoolIfExists\":{\"sts:authentication\":\"false\"}}}]}"
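
The same policy is easier to read when built from a Python dict and serialized with
``json.dumps``. A sketch follows; the boto3 call that would attach it is shown only
as a comment, since it needs a live gateway and admin credentials:

```python
import json

# Deny S3 access unless the request is authenticated with STS temporary
# credentials, while still allowing the user to call GetSessionToken.
user_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": ["*"],
            "Condition": {"BoolIfExists": {"sts:authentication": "false"}},
        },
        {
            "Effect": "Allow",
            "Action": "sts:GetSessionToken",
            "Resource": "*",
            "Condition": {"BoolIfExists": {"sts:authentication": "false"}},
        },
    ],
})

# With a boto3 IAM client pointed at the gateway (not shown here), the
# policy would be attached as:
#   iam_client.put_user_policy(UserName='TESTER1',
#                              PolicyName='Policy1',
#                              PolicyDocument=user_policy)
print(user_policy)
```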
+
+
+STS Lite Configuration
+======================
+
+The following configurable options are available for STS Lite integration::
+
+ [client.radosgw.gateway]
+ rgw_sts_key = {sts key for encrypting the session token}
+ rgw_s3_auth_use_sts = true
+
+The above STS configurables can be used with the Keystone configurables if one
+needs to use STS Lite in conjunction with Keystone. The complete set of
+configurable options will be::
+
+ [client.{your-rgw-name}]
+ rgw_sts_key = {sts key for encrypting/ decrypting the session token, exactly 16 hex characters}
+ rgw_s3_auth_use_sts = true
+
+ rgw keystone url = {keystone server url:keystone server admin port}
+ rgw keystone admin project = {keystone admin project name}
+ rgw keystone admin tenant = {keystone service tenant name}
+ rgw keystone admin domain = {keystone admin domain name}
+ rgw keystone api version = {keystone api version}
+ rgw keystone implicit tenants = {true for private tenant for each new user}
+ rgw keystone admin user = {keystone service tenant user name}
+ rgw keystone admin password = {keystone service tenant user password}
+ rgw keystone accepted roles = {accepted user roles}
+ rgw keystone token cache size = {number of tokens to cache}
+ rgw s3 auth use keystone = true
+
+The details of integrating Keystone with Ceph Object Gateway can be found here:
+:doc:`keystone`
+
+The complete set of configurables to use STS Lite with LDAP are::
+
+ [client.{your-rgw-name}]
+ rgw_sts_key = {sts key for encrypting/ decrypting the session token, exactly 16 hex characters}
+ rgw_s3_auth_use_sts = true
+
+ rgw_s3_auth_use_ldap = true
+ rgw_ldap_uri = {LDAP server to use}
+ rgw_ldap_binddn = {Distinguished Name (DN) of the service account}
+ rgw_ldap_secret = {password for the service account}
+ rgw_ldap_searchdn = {base in the directory information tree for searching users}
+ rgw_ldap_dnattr = {attribute being used in the constructed search filter to match a username}
+ rgw_ldap_searchfilter = {search filter}
+
+The details of integrating LDAP with Ceph Object Gateway can be found here:
+:doc:`ldap-auth`
+
+Note: By default, STS and S3 APIs co-exist in the same namespace, and both S3
+and STS APIs can be accessed via the same endpoint in Ceph Object Gateway.
+
+Example showing how to Use STS Lite with Keystone
+=================================================
+
+The following are the steps needed to use STS Lite with Keystone. Boto3 is
+used in the example code below to show the integration of STS Lite with
+Keystone.
+
+1. Generate EC2 credentials :
+
+.. code-block:: console
+
+ openstack ec2 credentials create
+ +------------+--------------------------------------------------------+
+ | Field | Value |
+ +------------+--------------------------------------------------------+
+ | access | b924dfc87d454d15896691182fdeb0ef |
+ | links | {u'self': u'http://192.168.0.15/identity/v3/users/ |
+ | | 40a7140e424f493d8165abc652dc731c/credentials/ |
+ | | OS-EC2/b924dfc87d454d15896691182fdeb0ef'} |
+ | project_id | c703801dccaf4a0aaa39bec8c481e25a |
+ | secret | 6a2142613c504c42a94ba2b82147dc28 |
+ | trust_id | None |
+ | user_id | 40a7140e424f493d8165abc652dc731c |
+ +------------+--------------------------------------------------------+
+
+2. Use the credentials created in step 1 to get back a set of temporary
+ credentials using GetSessionToken API.
+
+.. code-block:: python
+
+ import boto3
+
+ access_key = <ec2 access key>
+ secret_key = <ec2 secret key>
+
+ client = boto3.client('sts',
+ aws_access_key_id=access_key,
+ aws_secret_access_key=secret_key,
+ endpoint_url=<STS URL>,
+ region_name='',
+ )
+
+ response = client.get_session_token(
+ DurationSeconds=43200
+ )
+
+3. The temporary credentials obtained in step 2 can be used for making S3 calls:
+
+.. code-block:: python
+
+ s3client = boto3.client('s3',
+ aws_access_key_id = response['Credentials']['AccessKeyId'],
+ aws_secret_access_key = response['Credentials']['SecretAccessKey'],
+ aws_session_token = response['Credentials']['SessionToken'],
+ endpoint_url=<S3 URL>,
+ region_name='')
+
+ bucket = s3client.create_bucket(Bucket='my-new-shiny-bucket')
+ response = s3client.list_buckets()
+ for bucket in response["Buckets"]:
+ print("{name}\t{created}".format(
+ name = bucket['Name'],
+ created = bucket['CreationDate'],
+ ))
+
+Similar steps can be performed for using GetSessionToken with LDAP.
+
+Limitations and Workarounds
+===========================
+
+1. Keystone currently supports only S3 requests. Hence, in order to successfully
+authenticate an STS request, the following workaround needs to be added to boto,
+in the file botocore/auth.py.
+
+Lines 13-16 have been added as a workaround in the code block below:
+
+.. code-block:: python
+
+ class SigV4Auth(BaseSigner):
+ """
+ Sign a request with Signature V4.
+ """
+ REQUIRES_REGION = True
+
+ def __init__(self, credentials, service_name, region_name):
+ self.credentials = credentials
+ # We initialize these value here so the unit tests can have
+ # valid values. But these will get overridden in ``add_auth``
+ # later for real requests.
+ self._region_name = region_name
+ if service_name == 'sts':
+ self._service_name = 's3'
+ else:
+ self._service_name = service_name
+
diff --git a/doc/radosgw/admin.rst b/doc/radosgw/admin.rst
new file mode 100644
index 000000000..8d70252fe
--- /dev/null
+++ b/doc/radosgw/admin.rst
@@ -0,0 +1,715 @@
+=============
+ Admin Guide
+=============
+
+Once you have your Ceph Object Storage service up and running, you may
+administer the service with user management, access controls, quotas
+and usage tracking among other features.
+
+
+User Management
+===============
+
+Ceph Object Storage user management refers to users of the Ceph Object Storage
+service (i.e., not the Ceph Object Gateway as a user of the Ceph Storage
+Cluster). You must create a user, access key and secret to enable end users to
+interact with Ceph Object Gateway services.
+
+There are two user types:
+
+- **User:** The term 'user' reflects a user of the S3 interface.
+
+- **Subuser:** The term 'subuser' reflects a user of the Swift interface. A subuser
+  is associated with a user.
+
+.. ditaa::
+ +---------+
+ | User |
+ +----+----+
+ |
+ | +-----------+
+ +-----+ Subuser |
+ +-----------+
+
+You can create, modify, view, suspend and remove users and subusers. In addition
+to user and subuser IDs, you may add a display name and an email address for a
+user. You can specify a key and secret, or generate a key and secret
+automatically. When generating or specifying keys, note that user IDs correspond
+to an S3 key type and subuser IDs correspond to a Swift key type. Swift keys
+also have access levels of ``read``, ``write``, ``readwrite`` and ``full``.
+
+
+Create a User
+-------------
+
+To create a user (S3 interface), execute the following::
+
+ radosgw-admin user create --uid={username} --display-name="{display-name}" [--email={email}]
+
+For example::
+
+ radosgw-admin user create --uid=johndoe --display-name="John Doe" --email=john@example.com
+
+.. code-block:: javascript
+
+ { "user_id": "johndoe",
+ "display_name": "John Doe",
+ "email": "john@example.com",
+ "suspended": 0,
+ "max_buckets": 1000,
+ "subusers": [],
+ "keys": [
+ { "user": "johndoe",
+ "access_key": "11BS02LGFB6AL6H1ADMW",
+ "secret_key": "vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY"}],
+ "swift_keys": [],
+ "caps": [],
+ "op_mask": "read, write, delete",
+ "default_placement": "",
+ "placement_tags": [],
+ "bucket_quota": { "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1},
+ "user_quota": { "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1},
+ "temp_url_keys": []}
+
+Creating a user also creates an ``access_key`` and ``secret_key`` entry for use
+with any S3 API-compatible client.
+
+.. important:: Check the key output. Sometimes ``radosgw-admin``
+   generates a JSON escape (``\``) character, and some clients
+   do not know how to handle JSON escape characters. Remedies include
+   removing the JSON escape character (``\``), encapsulating the string
+   in quotes, regenerating the key and ensuring that it
+   does not have a JSON escape character, or specifying the key and secret
+   manually.
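
Parsing the output with a JSON-aware client sidesteps the escape problem entirely,
because ``\/`` is a valid JSON escape for ``/``. A sketch (the secret below is the
sample key with an illustrative escaped slash inserted):

```python
import json

# Sample radosgw-admin output, trimmed to the fields of interest; the
# secret contains the "\/" escape that trips up naive string handling.
raw = '''{
  "user_id": "johndoe",
  "keys": [
    {"user": "johndoe",
     "access_key": "11BS02LGFB6AL6H1ADMW",
     "secret_key": "vzCEkuryfn060dfe\\/e4fgQPqFrncKEIkh3ZcdOANY"}
  ]
}'''

info = json.loads(raw)
secret = info["keys"][0]["secret_key"]
# After parsing, the secret contains a plain "/" and no backslash.
assert "\\" not in secret
print(secret)
```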
+
+
+Create a Subuser
+----------------
+
+To create a subuser (Swift interface) for the user, you must specify the user ID
+(``--uid={username}``), a subuser ID and the access level for the subuser. ::
+
+ radosgw-admin subuser create --uid={uid} --subuser={uid} --access=[ read | write | readwrite | full ]
+
+For example::
+
+ radosgw-admin subuser create --uid=johndoe --subuser=johndoe:swift --access=full
+
+
+.. note:: ``full`` is not ``readwrite``, as it also includes the access control policy.
+
+.. code-block:: javascript
+
+ { "user_id": "johndoe",
+ "display_name": "John Doe",
+ "email": "john@example.com",
+ "suspended": 0,
+ "max_buckets": 1000,
+ "subusers": [
+ { "id": "johndoe:swift",
+ "permissions": "full-control"}],
+ "keys": [
+ { "user": "johndoe",
+ "access_key": "11BS02LGFB6AL6H1ADMW",
+ "secret_key": "vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY"}],
+ "swift_keys": [],
+ "caps": [],
+ "op_mask": "read, write, delete",
+ "default_placement": "",
+ "placement_tags": [],
+ "bucket_quota": { "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1},
+ "user_quota": { "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1},
+ "temp_url_keys": []}
+
+
+Get User Info
+-------------
+
+To get information about a user, you must specify ``user info`` and the user ID
+(``--uid={username}``) . ::
+
+ radosgw-admin user info --uid=johndoe
+
+
+
+Modify User Info
+----------------
+
+To modify information about a user, you must specify the user ID (``--uid={username}``)
+and the attributes you want to modify. Typical modifications are to keys and secrets,
+email addresses, display names and access levels. For example::
+
+ radosgw-admin user modify --uid=johndoe --display-name="John E. Doe"
+
+To modify subuser values, specify ``subuser modify``, user ID and the subuser ID. For example::
+
+ radosgw-admin subuser modify --uid=johndoe --subuser=johndoe:swift --access=full
+
+
+User Enable/Suspend
+-------------------
+
+When you create a user, the user is enabled by default. However, you may suspend
+user privileges and re-enable them at a later time. To suspend a user, specify
+``user suspend`` and the user ID. ::
+
+ radosgw-admin user suspend --uid=johndoe
+
+To re-enable a suspended user, specify ``user enable`` and the user ID. ::
+
+ radosgw-admin user enable --uid=johndoe
+
+.. note:: Suspending a user also suspends any subusers.
+
+
+Remove a User
+-------------
+
+When you remove a user, the user and subuser are removed from the system.
+However, you may remove just the subuser if you wish. To remove a user (and
+subuser), specify ``user rm`` and the user ID. ::
+
+ radosgw-admin user rm --uid=johndoe
+
+To remove the subuser only, specify ``subuser rm`` and the subuser ID. ::
+
+ radosgw-admin subuser rm --subuser=johndoe:swift
+
+
+Options include:
+
+- **Purge Data:** The ``--purge-data`` option purges all data associated
+  with the UID.
+
+- **Purge Keys:** The ``--purge-keys`` option purges all keys associated
+  with the UID.
+
+
+Remove a Subuser
+----------------
+
+When you remove a subuser, you are removing access to the Swift interface.
+The user will remain in the system. To remove the subuser, specify
+``subuser rm`` and the subuser ID. ::
+
+ radosgw-admin subuser rm --subuser=johndoe:swift
+
+
+
+Options include:
+
+- **Purge Keys:** The ``--purge-keys`` option purges all keys associated
+  with the UID.
+
+
+Add / Remove a Key
+------------------------
+
+Both users and subusers require a key to access the S3 or Swift interface. To
+use S3, the user needs a key pair composed of an access key and a
+secret key. To use Swift, the user typically needs a secret
+key (password) and uses it together with the associated user ID. You may create
+a key and either specify or generate the access key and/or secret key. You may
+also remove a key. Options include:
+
+- ``--key-type=<type>`` specifies the key type. The options are: s3, swift
+- ``--access-key=<key>`` manually specifies an S3 access key.
+- ``--secret-key=<key>`` manually specifies an S3 secret key or a Swift secret key.
+- ``--gen-access-key`` automatically generates a random S3 access key.
+- ``--gen-secret`` automatically generates a random S3 secret key or a random Swift secret key.
+
+An example of how to add a specified S3 key pair for a user. ::
+
+ radosgw-admin key create --uid=foo --key-type=s3 --access-key fooAccessKey --secret-key fooSecretKey
+
+.. code-block:: javascript
+
+ { "user_id": "foo",
+ "rados_uid": 0,
+ "display_name": "foo",
+ "email": "foo@example.com",
+ "suspended": 0,
+ "keys": [
+ { "user": "foo",
+ "access_key": "fooAccessKey",
+ "secret_key": "fooSecretKey"}],
+ }
+
+Note that you may create multiple S3 key pairs for a user.
+
+To attach a specified Swift secret key to a subuser. ::
+
+ radosgw-admin key create --subuser=foo:bar --key-type=swift --secret-key barSecret
+
+.. code-block:: javascript
+
+ { "user_id": "foo",
+ "rados_uid": 0,
+ "display_name": "foo",
+ "email": "foo@example.com",
+ "suspended": 0,
+ "subusers": [
+ { "id": "foo:bar",
+ "permissions": "full-control"}],
+ "swift_keys": [
+ { "user": "foo:bar",
+ "secret_key": "asfghjghghmgm"}]}
+
+Note that a subuser can have only one Swift secret key.
+
+Subusers can also be used with S3 APIs if the subuser is associated with an S3 key pair. ::
+
+ radosgw-admin key create --subuser=foo:bar --key-type=s3 --access-key barAccessKey --secret-key barSecretKey
+
+.. code-block:: javascript
+
+ { "user_id": "foo",
+ "rados_uid": 0,
+ "display_name": "foo",
+ "email": "foo@example.com",
+ "suspended": 0,
+ "subusers": [
+ { "id": "foo:bar",
+ "permissions": "full-control"}],
+ "keys": [
+ { "user": "foo:bar",
+ "access_key": "barAccessKey",
+ "secret_key": "barSecretKey"}],
+ }
+
+
+To remove an S3 key pair, specify the access key. ::
+
+ radosgw-admin key rm --uid=foo --key-type=s3 --access-key=fooAccessKey
+
+To remove the swift secret key. ::
+
+ radosgw-admin key rm --subuser=foo:bar --key-type=swift
+
+
+Add / Remove Admin Capabilities
+-------------------------------
+
+The Ceph Storage Cluster provides an administrative API that enables users to
+execute administrative functions via the REST API. By default, users do NOT have
+access to this API. To enable a user to exercise administrative functionality,
+provide the user with administrative capabilities.
+
+To add administrative capabilities to a user, execute the following::
+
+ radosgw-admin caps add --uid={uid} --caps={caps}
+
+
+You can add read, write, or all capabilities for users, buckets, metadata, and
+usage (utilization). For example::
+
+ --caps="[users|buckets|metadata|usage|zone|amz-cache|info|bilog|mdlog|datalog|user-policy|oidc-provider|roles|ratelimit]=[*|read|write|read, write]"
+
+For example::
+
+ radosgw-admin caps add --uid=johndoe --caps="users=*;buckets=*"
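+
+How a caps string of this form decomposes can be illustrated with a small
+parser (an illustrative sketch only; ``parse_caps`` is not part of radosgw):
+
+.. code-block:: python
+
+    def parse_caps(caps):
+        """Parse a caps string such as 'users=*;buckets=read, write' into
+        a dict mapping each capability type to a set of permissions."""
+        result = {}
+        for item in caps.split(';'):
+            name, _, perms = item.partition('=')
+            perm_set = {p.strip() for p in perms.split(',')}
+            if '*' in perm_set:
+                perm_set = {'read', 'write'}
+            result[name.strip()] = perm_set
+        return result
+
+For example, ``parse_caps("users=*;buckets=*")`` grants both read and write
+on users and buckets.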
+
+
+To remove administrative capabilities from a user, execute the following::
+
+ radosgw-admin caps rm --uid=johndoe --caps={caps}
+
+
+Quota Management
+================
+
+The Ceph Object Gateway enables you to set quotas on users and buckets owned by
+users. Quotas include the maximum number of objects in a bucket and the maximum
+storage size a bucket can hold.
+
+- **Bucket:** The ``--bucket`` option allows you to specify a quota for
+ buckets the user owns.
+
+- **Maximum Objects:** The ``--max-objects`` setting allows you to specify
+ the maximum number of objects. A negative value disables this setting.
+
+- **Maximum Size:** The ``--max-size`` option allows you to specify a quota
+ size in B/K/M/G/T, where B is the default. A negative value disables this setting.
+
+- **Quota Scope:** The ``--quota-scope`` option sets the scope for the quota.
+ The options are ``bucket`` and ``user``. Bucket quotas apply to buckets a
+ user owns. User quotas apply to a user.
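+
+The size suffixes accepted by ``--max-size`` can be sketched with a small
+helper (an illustrative sketch assuming binary, 1024-based multiples; this is
+not radosgw code):
+
+.. code-block:: python
+
+    def parse_max_size(value):
+        """Interpret a --max-size value with an optional B/K/M/G/T suffix,
+        where B is the default.  A negative number disables the check."""
+        units = {'B': 1, 'K': 1024, 'M': 1024 ** 2,
+                 'G': 1024 ** 3, 'T': 1024 ** 4}
+        value = value.strip().upper()
+        if value and value[-1] in units:
+            return int(value[:-1]) * units[value[-1]]
+        return int(value)  # bare number: bytes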
+
+
+Set User Quota
+--------------
+
+Before you enable a quota, you must first set the quota parameters.
+For example::
+
+ radosgw-admin quota set --quota-scope=user --uid=<uid> [--max-objects=<num objects>] [--max-size=<max size>]
+
+For example::
+
+ radosgw-admin quota set --quota-scope=user --uid=johndoe --max-objects=1024 --max-size=1024B
+
+
+A negative value for ``--max-objects`` and/or ``--max-size`` means that the
+specific quota attribute check is disabled.
+
+
+Enable/Disable User Quota
+-------------------------
+
+Once you set a user quota, you may enable it. For example::
+
+ radosgw-admin quota enable --quota-scope=user --uid=<uid>
+
+You may disable an enabled user quota. For example::
+
+ radosgw-admin quota disable --quota-scope=user --uid=<uid>
+
+
+Set Bucket Quota
+----------------
+
+Bucket quotas apply to the buckets owned by the specified ``uid``. They are
+independent of the user. ::
+
+ radosgw-admin quota set --uid=<uid> --quota-scope=bucket [--max-objects=<num objects>] [--max-size=<max size>]
+
+A negative value for ``--max-objects`` and/or ``--max-size`` means that the
+specific quota attribute check is disabled.
+
+
+Enable/Disable Bucket Quota
+---------------------------
+
+Once you set a bucket quota, you may enable it. For example::
+
+ radosgw-admin quota enable --quota-scope=bucket --uid=<uid>
+
+You may disable an enabled bucket quota. For example::
+
+ radosgw-admin quota disable --quota-scope=bucket --uid=<uid>
+
+
+Get Quota Settings
+------------------
+
+You may access each user's quota settings via the user information
+API. To read user quota setting information with the CLI interface,
+execute the following::
+
+ radosgw-admin user info --uid=<uid>
+
+
+Update Quota Stats
+------------------
+
+Quota stats are updated asynchronously. To retrieve the latest quota
+statistics for a user, update them manually. ::
+
+ radosgw-admin user stats --uid=<uid> --sync-stats
+
+.. _rgw_user_usage_stats:
+
+Get User Usage Stats
+--------------------
+
+To see how much of the quota a user has consumed, execute the following::
+
+ radosgw-admin user stats --uid=<uid>
+
+.. note:: You should execute ``radosgw-admin user stats`` with the
+ ``--sync-stats`` option to receive the latest data.
+
+Default Quotas
+--------------
+
+You can set default quotas in the config. These defaults are used when
+creating a new user and have no effect on existing users. If the
+relevant default quota is set in config, then that quota is set on the
+new user, and that quota is enabled. See ``rgw bucket default quota max objects``,
+``rgw bucket default quota max size``, ``rgw user default quota max objects``, and
+``rgw user default quota max size`` in `Ceph Object Gateway Config Reference`_.
+
+Quota Cache
+-----------
+
+Quota statistics are cached on each RGW instance. If there are multiple
+instances, then the cache can keep quotas from being perfectly enforced, as
+each instance will have a different view of quotas. The options that control
+this are ``rgw bucket quota ttl``, ``rgw user quota bucket sync interval`` and
+``rgw user quota sync interval``. The higher these values are, the more
+efficient quota operations are, but the more out-of-sync multiple instances
+will be. The lower these values are, the closer to perfect enforcement
+multiple instances will achieve. If all three are 0, then quota caching is
+effectively disabled, and multiple instances will have perfect quota
+enforcement. See `Ceph Object Gateway Config Reference`_.
+
+Reading / Writing Global Quotas
+-------------------------------
+
+You can read and write global quota settings in the period configuration. To
+view the global quota settings::
+
+ radosgw-admin global quota get
+
+The global quota settings can be manipulated with the ``global quota``
+counterparts of the ``quota set``, ``quota enable``, and ``quota disable``
+commands. ::
+
+ radosgw-admin global quota set --quota-scope bucket --max-objects 1024
+ radosgw-admin global quota enable --quota-scope bucket
+
+.. note:: In a multisite configuration, where there is a realm and period
+ present, changes to the global quotas must be committed using ``period
+ update --commit``. If there is no period present, the rados gateway(s) must
+ be restarted for the changes to take effect.
+
+
+Rate Limit Management
+=====================
+
+The Ceph Object Gateway makes it possible to set rate limits on users and
+buckets. "Rate limit" includes the maximum number of read operations (read
+ops) and write operations (write ops) per minute and the number of bytes per
+minute that can be written or read per user or per bucket.
+
+Operations that use the ``GET`` method or the ``HEAD`` method in their REST
+requests are "read requests". All other requests are "write requests".
+
+Each object gateway tracks per-user metrics separately from bucket metrics.
+These metrics are not shared with other gateways. The configured limits should
+be divided by the number of active object gateways. For example, if "user A" is
+to be limited to 10 ops per minute and there are two object gateways in the
+cluster, then the limit on "user A" should be ``5`` (10 ops per minute / 2
+RGWs). If the requests are **not** balanced between RGWs, the rate limit might
+be underutilized. For example: if the ops limit is ``5`` and there are two
+RGWs, **but** the Load Balancer sends load to only one of those RGWs, the
+effective limit is 5 ops, because this limit is enforced per RGW. If the rate
+limit that has been set for the bucket has been reached but the rate limit that
+has been set for the user has not been reached, then the request is cancelled.
+The contrary holds as well: if the rate limit that has been set for the user
+has been reached but the rate limit that has been set for the bucket has not
+been reached, then the request is cancelled.
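+
+The division across gateways described above can be sketched as follows
+(illustrative only; ``per_rgw_limit`` is not a radosgw function):
+
+.. code-block:: python
+
+    def per_rgw_limit(cluster_ops_per_min, num_active_rgws):
+        """Split an intended cluster-wide ops-per-minute budget across
+        gateways.  Each RGW enforces its share independently, so this
+        is accurate only when the load balancer spreads requests
+        evenly across the gateways."""
+        return cluster_ops_per_min // num_active_rgws
+
+With the example above, ``per_rgw_limit(10, 2)`` yields ``5``.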
+
+The accounting of bandwidth happens only after a request has been accepted.
+This means that requests will proceed even if the bucket rate limit or user
+rate limit is reached during the execution of the request. The RGW keeps track
+of a "debt" consisting of bytes used in excess of the configured value; users
+or buckets that incur this kind of debt are prevented from sending more
+requests until the "debt" has been repaid. The maximum size of the "debt" is
+twice the max-read/write-bytes per minute. If "user A" is subject to a 1-byte
+read limit per minute and they attempt to GET an object that is 1 GB in size,
+then the ``GET`` action will fail. After "user A" has completed this 1 GB
+operation, RGW blocks the user's requests for up to two minutes. After this
+time has elapsed, "user A" will be able to send ``GET`` requests again.
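+
+The debt accounting described above can be modeled roughly as follows (a
+simplified sketch, not the actual implementation):
+
+.. code-block:: python
+
+    def read_debt(request_bytes, max_read_bytes_per_min):
+        """Bytes consumed beyond the per-minute limit become 'debt',
+        capped at twice the per-minute byte limit."""
+        over = max(0, request_bytes - max_read_bytes_per_min)
+        return min(over, 2 * max_read_bytes_per_min)
+
+    def minutes_blocked(debt, max_read_bytes_per_min):
+        """Whole minutes until the debt is repaid at the per-minute rate."""
+        return -(-debt // max_read_bytes_per_min)  # ceiling division
+
+For the 1-byte-per-minute example above, a 1 GB ``GET`` leaves the maximum
+debt of 2 bytes, which takes two minutes to repay.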
+
+
+- **Bucket:** The ``--bucket`` option allows you to specify a rate limit for a
+ bucket.
+
+- **User:** The ``--uid`` option allows you to specify a rate limit for a
+ user.
+
+- **Maximum Read Ops:** The ``--max-read-ops`` setting allows you to specify
+ the maximum number of read ops per minute per RGW. A 0 value disables this setting (which means unlimited access).
+
+- **Maximum Read Bytes:** The ``--max-read-bytes`` setting allows you to specify
+ the maximum number of read bytes per minute per RGW. A 0 value disables this setting (which means unlimited access).
+
+- **Maximum Write Ops:** The ``--max-write-ops`` setting allows you to specify
+ the maximum number of write ops per minute per RGW. A 0 value disables this setting (which means unlimited access).
+
+- **Maximum Write Bytes:** The ``--max-write-bytes`` setting allows you to specify
+ the maximum number of write bytes per minute per RGW. A 0 value disables this setting (which means unlimited access).
+
+- **Rate Limit Scope:** The ``--ratelimit-scope`` option sets the scope for the rate limit.
+ The options are ``bucket``, ``user`` and ``anonymous``. Bucket rate limits
+ apply to buckets. The user rate limit applies to a user. The anonymous rate
+ limit applies to unauthenticated users. The anonymous scope is available
+ only for the global rate limit.
+
+
+Set User Rate Limit
+-------------------
+
+Before you enable a rate limit, you must first set the rate limit parameters.
+For example::
+
+ radosgw-admin ratelimit set --ratelimit-scope=user --uid=<uid> <[--max-read-ops=<num ops>] [--max-read-bytes=<num bytes>]
+ [--max-write-ops=<num ops>] [--max-write-bytes=<num bytes>]>
+
+For example::
+
+ radosgw-admin ratelimit set --ratelimit-scope=user --uid=johndoe --max-read-ops=1024 --max-write-bytes=10240
+
+
+A value of 0 for num ops and/or num bytes means that the specific rate
+limit attribute check is disabled.
+
+Get User Rate Limit
+-------------------
+
+Get the currently configured rate limit parameters. For example::
+
+ radosgw-admin ratelimit get --ratelimit-scope=user --uid=<uid>
+
+For example::
+
+ radosgw-admin ratelimit get --ratelimit-scope=user --uid=johndoe
+
+
+A value of 0 for num ops and/or num bytes means that the specific rate
+limit attribute check is disabled.
+
+
+Enable/Disable User Rate Limit
+------------------------------
+
+Once you set a user rate limit, you may enable it. For example::
+
+ radosgw-admin ratelimit enable --ratelimit-scope=user --uid=<uid>
+
+You may disable an enabled user rate limit. For example::
+
+ radosgw-admin ratelimit disable --ratelimit-scope=user --uid=johndoe
+
+
+Set Bucket Rate Limit
+---------------------
+
+Before you enable a rate limit, you must first set the rate limit parameters.
+For example::
+
+ radosgw-admin ratelimit set --ratelimit-scope=bucket --bucket=<bucket> <[--max-read-ops=<num ops>] [--max-read-bytes=<num bytes>]
+ [--max-write-ops=<num ops>] [--max-write-bytes=<num bytes>]>
+
+For example::
+
+ radosgw-admin ratelimit set --ratelimit-scope=bucket --bucket=mybucket --max-read-ops=1024 --max-write-bytes=10240
+
+
+A value of 0 for num ops and/or num bytes means that the specific rate
+limit attribute check is disabled.
+
+Get Bucket Rate Limit
+---------------------
+
+Get the currently configured rate limit parameters. For example::
+
+ radosgw-admin ratelimit get --ratelimit-scope=bucket --bucket=<bucket>
+
+For example::
+
+ radosgw-admin ratelimit get --ratelimit-scope=bucket --bucket=mybucket
+
+
+A value of 0 for num ops and/or num bytes means that the specific rate
+limit attribute check is disabled.
+
+
+Enable/Disable Bucket Rate Limit
+--------------------------------
+
+Once you set a bucket rate limit, you may enable it. For example::
+
+ radosgw-admin ratelimit enable --ratelimit-scope=bucket --bucket=<bucket>
+
+You may disable an enabled bucket rate limit. For example::
+
+ radosgw-admin ratelimit disable --ratelimit-scope=bucket --bucket=mybucket
+
+
+Reading / Writing Global Rate Limit Configuration
+-------------------------------------------------
+
+You can read and write global rate limit settings in the period configuration. To
+view the global rate limit settings::
+
+ radosgw-admin global ratelimit get
+
+The global rate limit settings can be manipulated with the ``global ratelimit``
+counterparts of the ``ratelimit set``, ``ratelimit enable``, and ``ratelimit disable``
+commands. Per-user and per-bucket rate limit configuration overrides the
+global configuration::
+
+ radosgw-admin global ratelimit set --ratelimit-scope bucket --max-read-ops=1024
+ radosgw-admin global ratelimit enable --ratelimit-scope bucket
+
+The global rate limit can be used to configure a rate limit for all
+authenticated users::
+
+ radosgw-admin global ratelimit set --ratelimit-scope user --max-read-ops=1024
+ radosgw-admin global ratelimit enable --ratelimit-scope user
+
+The global rate limit can be used to configure a rate limit for all
+unauthenticated users::
+
+ radosgw-admin global ratelimit set --ratelimit-scope=anonymous --max-read-ops=1024
+ radosgw-admin global ratelimit enable --ratelimit-scope=anonymous
+
+.. note:: In a multisite configuration, where there is a realm and period
+ present, changes to the global rate limit must be committed using ``period
+ update --commit``. If there is no period present, the rados gateway(s) must
+ be restarted for the changes to take effect.
+
+Usage
+=====
+
+The Ceph Object Gateway logs the usage of each user. You can also track
+user usage within date ranges.
+
+To enable usage logging, add ``rgw enable usage log = true`` to the
+``[client.rgw]`` section of ``ceph.conf`` and restart the ``radosgw`` service.
+
+Options include:
+
+- **Start Date:** The ``--start-date`` option allows you to filter usage
+ stats from a particular start date and an optional start time
+ (**format:** ``yyyy-mm-dd [HH:MM:SS]``).
+
+- **End Date:** The ``--end-date`` option allows you to filter usage up
+ to a particular date and an optional end time
+ (**format:** ``yyyy-mm-dd [HH:MM:SS]``).
+
+- **Log Entries:** The ``--show-log-entries`` option allows you to specify
+ whether or not to include log entries with the usage stats
+ (options: ``true`` | ``false``).
+
+.. note:: You may specify time with minutes and seconds, but it is stored
+ with 1 hour resolution.
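+
+The flooring to 1-hour resolution can be expressed as follows (an
+illustrative sketch of the behavior, not radosgw code):
+
+.. code-block:: python
+
+    from datetime import datetime
+
+    def usage_time_bucket(ts):
+        """Floor a usage timestamp to the hour, matching the 1-hour
+        resolution with which usage times are stored."""
+        return ts.replace(minute=0, second=0, microsecond=0)
+
+For example, ``2012-03-01 16:45:30`` is stored under ``2012-03-01 16:00:00``.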
+
+
+Show Usage
+----------
+
+To show usage statistics, use the ``usage show`` command. To show usage for a
+particular user, specify a user ID. You may also specify a start date, an end
+date, and whether or not to show log entries. ::
+
+ radosgw-admin usage show --uid=johndoe --start-date=2012-03-01 --end-date=2012-04-01
+
+You may also show a summary of usage information for all users by omitting a user ID. ::
+
+ radosgw-admin usage show --show-log-entries=false
+
+
+Trim Usage
+----------
+
+With heavy use, usage logs can begin to take up storage space. You can trim
+usage logs for all users and for specific users. You may also specify date
+ranges for trim operations. ::
+
+ radosgw-admin usage trim --start-date=2010-01-01 --end-date=2010-12-31
+ radosgw-admin usage trim --uid=johndoe
+ radosgw-admin usage trim --uid=johndoe --end-date=2013-12-31
+
+
+.. _radosgw-admin: ../../man/8/radosgw-admin/
+.. _Pool Configuration: ../../rados/configuration/pool-pg-config-ref/
+.. _Ceph Object Gateway Config Reference: ../config-ref/
diff --git a/doc/radosgw/adminops.rst b/doc/radosgw/adminops.rst
new file mode 100644
index 000000000..0974b95c5
--- /dev/null
+++ b/doc/radosgw/adminops.rst
@@ -0,0 +1,2166 @@
+.. _radosgw admin ops:
+
+==================
+ Admin Operations
+==================
+
+An admin API request is made to a URI that starts with the configurable
+``admin`` resource entry point. Authorization for the admin API duplicates
+the S3 authorization mechanism. Some operations require that the user hold
+special administrative capabilities. The response entity type (XML or JSON)
+may be specified as the ``format`` option in the request; it defaults to
+JSON if not specified.
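+
+Because authorization duplicates the S3 mechanism, a client signs an admin
+API request exactly as it would sign an S3 request. A minimal sketch of S3
+V2-style header signing follows (simplified: empty content type and MD5; in
+practice use an S3 client library, and note that RGW also accepts AWS V4
+signatures):
+
+.. code-block:: python
+
+    import base64
+    import hmac
+    from hashlib import sha1
+    from email.utils import formatdate
+
+    def sign_admin_request(method, resource, access_key, secret_key):
+        """Build Date and Authorization headers for an admin API call,
+        e.g. sign_admin_request('GET', '/admin/info', ak, sk)."""
+        date = formatdate(usegmt=True)
+        string_to_sign = "\n".join([method, "", "", date, resource])
+        digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
+                          sha1).digest()
+        signature = base64.b64encode(digest).decode()
+        return {"Date": date,
+                "Authorization": "AWS {}:{}".format(access_key, signature)}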
+
+Info
+====
+
+Get RGW cluster/endpoint information.
+
+:caps: info=read
+
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{admin}/info?format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+None.
+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+If successful, the response contains an ``info`` section.
+
+``info``
+
+:Description: A container for all returned information.
+:Type: Container
+
+``cluster_id``
+
+:Description: The (typically unique) identifier for the controlling
+ backing store for the RGW cluster. In the typical case,
+ this is the value returned by ``librados::Rados::cluster_fsid()``.
+:Type: String
+:Parent: ``info``
+
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+None.
+
+
+Get Usage
+=========
+
+Request bandwidth usage information.
+
+Note: this feature is disabled by default. It can be enabled by setting ``rgw
+enable usage log = true`` in the appropriate section of ``ceph.conf``. For
+changes in ``ceph.conf`` to take effect, the radosgw process must be restarted.
+
+:caps: usage=read
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{admin}/usage?format=json HTTP/1.1
+ Host: {fqdn}
+
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user for which the information is requested. If not specified, the request applies to all users.
+:Type: String
+:Example: ``foo_user``
+:Required: No
+
+``start``
+
+:Description: Date and (optional) time that specifies the start time of the requested data.
+:Type: String
+:Example: ``2012-09-25 16:00:00``
+:Required: No
+
+``end``
+
+:Description: Date and (optional) time that specifies the end time of the requested data (non-inclusive).
+:Type: String
+:Example: ``2012-09-25 16:00:00``
+:Required: No
+
+
+``show-entries``
+
+:Description: Specifies whether data entries should be returned.
+:Type: Boolean
+:Example: True [True]
+:Required: No
+
+
+``show-summary``
+
+:Description: Specifies whether data summary should be returned.
+:Type: Boolean
+:Example: True [True]
+:Required: No
+
+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+If successful, the response contains the requested information.
+
+``usage``
+
+:Description: A container for the usage information.
+:Type: Container
+
+``entries``
+
+:Description: A container for the usage entries information.
+:Type: Container
+
+``user``
+
+:Description: A container for the user data information.
+:Type: Container
+
+``owner``
+
+:Description: The name of the user that owns the buckets.
+:Type: String
+
+``bucket``
+
+:Description: The bucket name.
+:Type: String
+
+``time``
+
+:Description: Time lower bound for which data is being specified (rounded to the beginning of the first relevant hour).
+:Type: String
+
+``epoch``
+
+:Description: The time specified in seconds since 1/1/1970.
+:Type: String
+
+``categories``
+
+:Description: A container for stats categories.
+:Type: Container
+
+``entry``
+
+:Description: A container for stats entry.
+:Type: Container
+
+``category``
+
+:Description: Name of request category for which the stats are provided.
+:Type: String
+
+``bytes_sent``
+
+:Description: Number of bytes sent by the RADOS Gateway.
+:Type: Integer
+
+``bytes_received``
+
+:Description: Number of bytes received by the RADOS Gateway.
+:Type: Integer
+
+``ops``
+
+:Description: Number of operations.
+:Type: Integer
+
+``successful_ops``
+
+:Description: Number of successful operations.
+:Type: Integer
+
+``summary``
+
+:Description: A container for stats summary.
+:Type: Container
+
+``total``
+
+:Description: A container for stats summary aggregated total.
+:Type: Container
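+
+To make the nesting concrete, here is how the entities above relate in an
+abbreviated, made-up response body (values are illustrative only):
+
+.. code-block:: python
+
+    import json
+
+    usage_json = """
+    { "entries": [
+        { "user": "foo_user",
+          "buckets": [
+            { "bucket": "mybucket",
+              "categories": [
+                { "category": "get_obj", "bytes_sent": 4096,
+                  "bytes_received": 0, "ops": 10, "successful_ops": 9 },
+                { "category": "put_obj", "bytes_sent": 0,
+                  "bytes_received": 8192, "ops": 5, "successful_ops": 5 }
+              ] } ] } ] }
+    """
+
+    usage = json.loads(usage_json)
+    total_ops = sum(cat["ops"]
+                    for entry in usage["entries"]
+                    for bucket in entry["buckets"]
+                    for cat in bucket["categories"])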
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+TBD.
+
+Trim Usage
+==========
+
+Remove usage information. With no dates specified, removes all usage
+information.
+
+Note: this feature is disabled by default. It can be enabled by setting ``rgw
+enable usage log = true`` in the appropriate section of ``ceph.conf``. For
+changes in ``ceph.conf`` to take effect, the radosgw process must be restarted.
+
+:caps: usage=write
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{admin}/usage?format=json HTTP/1.1
+ Host: {fqdn}
+
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user for which the information is requested. If not specified, the request applies to all users.
+:Type: String
+:Example: ``foo_user``
+:Required: No
+
+``start``
+
+:Description: Date and (optional) time that specifies the start time of the requested data.
+:Type: String
+:Example: ``2012-09-25 16:00:00``
+:Required: No
+
+``end``
+
+:Description: Date and (optional) time that specifies the end time of the requested data (non-inclusive).
+:Type: String
+:Example: ``2012-09-25 16:00:00``
+:Required: No
+
+
+``remove-all``
+
+:Description: Required when ``uid`` is not specified, in order to acknowledge multi-user data removal.
+:Type: Boolean
+:Example: True [False]
+:Required: No
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+TBD.
+
+Get User Info
+=============
+
+Get user information.
+
+:caps: users=read
+
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{admin}/user?format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user for which the information is requested.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+If successful, the response contains the user information.
+
+``user``
+
+:Description: A container for the user data information.
+:Type: Container
+
+``user_id``
+
+:Description: The user id.
+:Type: String
+:Parent: ``user``
+
+``display_name``
+
+:Description: Display name for the user.
+:Type: String
+:Parent: ``user``
+
+``suspended``
+
+:Description: True if the user is suspended.
+:Type: Boolean
+:Parent: ``user``
+
+``max_buckets``
+
+:Description: The maximum number of buckets to be owned by the user.
+:Type: Integer
+:Parent: ``user``
+
+``subusers``
+
+:Description: Subusers associated with this user account.
+:Type: Container
+:Parent: ``user``
+
+``keys``
+
+:Description: S3 keys associated with this user account.
+:Type: Container
+:Parent: ``user``
+
+``swift_keys``
+
+:Description: Swift keys associated with this user account.
+:Type: Container
+:Parent: ``user``
+
+``caps``
+
+:Description: User capabilities.
+:Type: Container
+:Parent: ``user``
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+None.
+
+Create User
+===========
+
+Create a new user. By default, an S3 key pair will be created automatically
+and returned in the response. If only one of ``access-key`` or ``secret-key``
+is provided, the omitted key will be automatically generated. By default, a
+generated key is added to the keyring without replacing an existing key pair.
+If ``access-key`` is specified and refers to an existing key owned by the user
+then it will be modified.
+
+.. versionadded:: Luminous
+
+A ``tenant`` may be specified either as a part of the ``uid`` or as an
+additional request param.
+
+:caps: users=write
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{admin}/user?format=json HTTP/1.1
+ Host: {fqdn}
+
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user ID to be created.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+A tenant name may also be specified as a part of ``uid``, by following the
+syntax ``tenant$user``; refer to :ref:`Multitenancy <rgw-multitenancy>` for more details.
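+
+Splitting such a uid can be sketched as follows (illustrative helper, not
+radosgw code):
+
+.. code-block:: python
+
+    def split_uid(uid):
+        """Split a 'tenant$user' uid into (tenant, user); a uid without
+        a '$' belongs to the default (empty) tenant."""
+        tenant, sep, user = uid.partition('$')
+        if not sep:
+            return '', tenant
+        return tenant, user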
+
+``display-name``
+
+:Description: The display name of the user to be created.
+:Type: String
+:Example: ``foo user``
+:Required: Yes
+
+
+``email``
+
+:Description: The email address associated with the user.
+:Type: String
+:Example: ``foo@bar.com``
+:Required: No
+
+``key-type``
+
+:Description: Key type to be generated, options are: swift, s3 (default).
+:Type: String
+:Example: ``s3`` [``s3``]
+:Required: No
+
+``access-key``
+
+:Description: Specify access key.
+:Type: String
+:Example: ``ABCD0EF12GHIJ2K34LMN``
+:Required: No
+
+
+``secret-key``
+
+:Description: Specify secret key.
+:Type: String
+:Example: ``0AbCDEFg1h2i34JklM5nop6QrSTUV+WxyzaBC7D8``
+:Required: No
+
+``user-caps``
+
+:Description: User capabilities.
+:Type: String
+:Example: ``usage=read, write; users=read``
+:Required: No
+
+``generate-key``
+
+:Description: Generate a new key pair and add to the existing keyring.
+:Type: Boolean
+:Example: True [True]
+:Required: No
+
+``max-buckets``
+
+:Description: Specify the maximum number of buckets the user can own.
+:Type: Integer
+:Example: 500 [1000]
+:Required: No
+
+``suspended``
+
+:Description: Specify whether the user should be suspended.
+:Type: Boolean
+:Example: False [False]
+:Required: No
+
+.. versionadded:: Jewel
+
+``tenant``
+
+:Description: The tenant that the user is a part of.
+:Type: String
+:Example: ``tenant1``
+:Required: No
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+If successful, the response contains the user information.
+
+``user``
+
+:Description: A container for the user data information.
+:Type: Container
+
+``tenant``
+
+:Description: The tenant that the user is a part of.
+:Type: String
+:Parent: ``user``
+
+``user_id``
+
+:Description: The user id.
+:Type: String
+:Parent: ``user``
+
+``display_name``
+
+:Description: Display name for the user.
+:Type: String
+:Parent: ``user``
+
+``suspended``
+
+:Description: True if the user is suspended.
+:Type: Boolean
+:Parent: ``user``
+
+``max_buckets``
+
+:Description: The maximum number of buckets to be owned by the user.
+:Type: Integer
+:Parent: ``user``
+
+``subusers``
+
+:Description: Subusers associated with this user account.
+:Type: Container
+:Parent: ``user``
+
+``keys``
+
+:Description: S3 keys associated with this user account.
+:Type: Container
+:Parent: ``user``
+
+``swift_keys``
+
+:Description: Swift keys associated with this user account.
+:Type: Container
+:Parent: ``user``
+
+``caps``
+
+:Description: User capabilities.
+:Type: Container
+:Parent: ``user``
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``UserExists``
+
+:Description: Attempt to create existing user.
+:Code: 409 Conflict
+
+``InvalidAccessKey``
+
+:Description: Invalid access key specified.
+:Code: 400 Bad Request
+
+``InvalidKeyType``
+
+:Description: Invalid key type specified.
+:Code: 400 Bad Request
+
+``InvalidSecretKey``
+
+:Description: Invalid secret key specified.
+:Code: 400 Bad Request
+
+``KeyExists``
+
+:Description: Provided access key exists and belongs to another user.
+:Code: 409 Conflict
+
+``EmailExists``
+
+:Description: Provided email address exists.
+:Code: 409 Conflict
+
+``InvalidCapability``
+
+:Description: Attempt to grant invalid admin capability.
+:Code: 400 Bad Request
+
+
+Modify User
+===========
+
+Modify a user.
+
+:caps: users=write
+
+Syntax
+~~~~~~
+
+::
+
+ POST /{admin}/user?format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user ID to be modified.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+``display-name``
+
+:Description: The display name of the user to be modified.
+:Type: String
+:Example: ``foo user``
+:Required: No
+
+``email``
+
+:Description: The email address to be associated with the user.
+:Type: String
+:Example: ``foo@bar.com``
+:Required: No
+
+``generate-key``
+
+:Description: Generate a new key pair and add to the existing keyring.
+:Type: Boolean
+:Example: True [False]
+:Required: No
+
+``access-key``
+
+:Description: Specify access key.
+:Type: String
+:Example: ``ABCD0EF12GHIJ2K34LMN``
+:Required: No
+
+``secret-key``
+
+:Description: Specify secret key.
+:Type: String
+:Example: ``0AbCDEFg1h2i34JklM5nop6QrSTUV+WxyzaBC7D8``
+:Required: No
+
+``key-type``
+
+:Description: Key type to be generated, options are: swift, s3 (default).
+:Type: String
+:Example: ``s3``
+:Required: No
+
+``max-buckets``
+
+:Description: Specify the maximum number of buckets the user can own.
+:Type: Integer
+:Example: 500 [1000]
+:Required: No
+
+``suspended``
+
+:Description: Specify whether the user should be suspended.
+:Type: Boolean
+:Example: False [False]
+:Required: No
+
+``op-mask``
+
+:Description: The op-mask of the user to be modified.
+:Type: String
+:Example: ``read, write, delete, *``
+:Required: No
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+If successful, the response contains the user information.
+
+``user``
+
+:Description: A container for the user data information.
+:Type: Container
+
+``user_id``
+
+:Description: The user id.
+:Type: String
+:Parent: ``user``
+
+``display_name``
+
+:Description: Display name for the user.
+:Type: String
+:Parent: ``user``
+
+
+``suspended``
+
+:Description: True if the user is suspended.
+:Type: Boolean
+:Parent: ``user``
+
+
+``max_buckets``
+
+:Description: The maximum number of buckets to be owned by the user.
+:Type: Integer
+:Parent: ``user``
+
+
+``subusers``
+
+:Description: Subusers associated with this user account.
+:Type: Container
+:Parent: ``user``
+
+
+``keys``
+
+:Description: S3 keys associated with this user account.
+:Type: Container
+:Parent: ``user``
+
+
+``swift_keys``
+
+:Description: Swift keys associated with this user account.
+:Type: Container
+:Parent: ``user``
+
+
+``caps``
+
+:Description: User capabilities.
+:Type: Container
+:Parent: ``user``
+
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``InvalidAccessKey``
+
+:Description: Invalid access key specified.
+:Code: 400 Bad Request
+
+``InvalidKeyType``
+
+:Description: Invalid key type specified.
+:Code: 400 Bad Request
+
+``InvalidSecretKey``
+
+:Description: Invalid secret key specified.
+:Code: 400 Bad Request
+
+``KeyExists``
+
+:Description: Provided access key exists and belongs to another user.
+:Code: 409 Conflict
+
+``EmailExists``
+
+:Description: Provided email address exists.
+:Code: 409 Conflict
+
+``InvalidCapability``
+
+:Description: Attempt to grant invalid admin capability.
+:Code: 400 Bad Request
+
+Remove User
+===========
+
+Remove an existing user.
+
+:caps: users=write
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{admin}/user?format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user ID to be removed.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes.
+
+``purge-data``
+
+:Description: When specified the buckets and objects belonging
+ to the user will also be removed.
+:Type: Boolean
+:Example: True
+:Required: No
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+None
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+None.
+
+Create Subuser
+==============
+
+Create a new subuser (primarily useful for clients using the Swift API).
+Note that in general for a subuser to be useful, it must be granted
+permissions by specifying ``access``. As with user creation if
+``subuser`` is specified without ``secret``, then a secret key will
+be automatically generated.
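+
+A generated secret has the same shape as other S3-style secret keys. The
+idea can be sketched as follows (illustrative only; how RGW generates keys
+internally is an implementation detail):
+
+.. code-block:: python
+
+    import secrets
+    import string
+
+    def generate_secret_key(length=40):
+        """Produce a random 40-character secret drawn from a
+        base64-like alphabet."""
+        alphabet = string.ascii_letters + string.digits + '+/'
+        return ''.join(secrets.choice(alphabet) for _ in range(length))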
+
+:caps: users=write
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{admin}/user?subuser&format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user ID under which a subuser is to be created.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+
+``subuser``
+
+:Description: Specify the subuser ID to be created.
+:Type: String
+:Example: ``sub_foo``
+:Required: Yes
+
+``secret-key``
+
+:Description: Specify secret key.
+:Type: String
+:Example: ``0AbCDEFg1h2i34JklM5nop6QrSTUV+WxyzaBC7D8``
+:Required: No
+
+``key-type``
+
+:Description: Key type to be generated, options are: swift (default), s3.
+:Type: String
+:Example: ``swift`` [``swift``]
+:Required: No
+
+``access``
+
+:Description: Set access permissions for sub-user, should be one
+ of ``read, write, readwrite, full``.
+:Type: String
+:Example: ``read``
+:Required: No
+
+``generate-secret``
+
+:Description: Generate the secret key.
+:Type: Boolean
+:Example: True [False]
+:Required: No
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+If successful, the response contains the subuser information.
+
+
+``subusers``
+
+:Description: Subusers associated with the user account.
+:Type: Container
+
+``id``
+
+:Description: Subuser id.
+:Type: String
+:Parent: ``subusers``
+
+``permissions``
+
+:Description: Subuser access to user account.
+:Type: String
+:Parent: ``subusers``
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``SubuserExists``
+
+:Description: Specified subuser exists.
+:Code: 409 Conflict
+
+``InvalidKeyType``
+
+:Description: Invalid key type specified.
+:Code: 400 Bad Request
+
+``InvalidSecretKey``
+
+:Description: Invalid secret key specified.
+:Code: 400 Bad Request
+
+``InvalidAccess``
+
+:Description: Invalid subuser access specified.
+:Code: 400 Bad Request
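
A successful response can be parsed like the following sketch (the JSON shown is an assumed example shaped like the response entities above, not captured output):

```python
import json

# Assumed example body for a successful Create Subuser response,
# shaped like the entities documented above.
response_body = '[{"id": "foo_user:sub_foo", "permissions": "read"}]'

subusers = json.loads(response_body)
# Map each subuser id to its access permissions.
permissions = {s["id"]: s["permissions"] for s in subusers}
print(permissions)
```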
+
+Modify Subuser
+==============
+
+Modify an existing subuser.
+
+:caps: users=write
+
+Syntax
+~~~~~~
+
+::
+
+ POST /{admin}/user?subuser&format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user ID under which the subuser is to be modified.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+``subuser``
+
+:Description: The subuser ID to be modified.
+:Type: String
+:Example: ``sub_foo``
+:Required: Yes
+
+``generate-secret``
+
+:Description: Generate a new secret key for the subuser,
+ replacing the existing key.
+:Type: Boolean
+:Example: True [False]
+:Required: No
+
+``secret``
+
+:Description: Specify secret key.
+:Type: String
+:Example: ``0AbCDEFg1h2i34JklM5nop6QrSTUV+WxyzaBC7D8``
+:Required: No
+
+``key-type``
+
+:Description: Key type to be generated, options are: swift (default), s3.
+:Type: String
+:Example: ``swift`` [``swift``]
+:Required: No
+
+``access``
+
+:Description: Set access permissions for sub-user, should be one
+ of ``read, write, readwrite, full``.
+:Type: String
+:Example: ``read``
+:Required: No
+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+If successful, the response contains the subuser information.
+
+
+``subusers``
+
+:Description: Subusers associated with the user account.
+:Type: Container
+
+``id``
+
+:Description: Subuser id.
+:Type: String
+:Parent: ``subusers``
+
+``permissions``
+
+:Description: Subuser access to user account.
+:Type: String
+:Parent: ``subusers``
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``InvalidKeyType``
+
+:Description: Invalid key type specified.
+:Code: 400 Bad Request
+
+``InvalidSecretKey``
+
+:Description: Invalid secret key specified.
+:Code: 400 Bad Request
+
+``InvalidAccess``
+
+:Description: Invalid subuser access specified.
+:Code: 400 Bad Request
+
+Remove Subuser
+==============
+
+Remove an existing subuser.
+
+:caps: users=write
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{admin}/user?subuser&format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user ID under which the subuser is to be removed.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+
+``subuser``
+
+:Description: The subuser ID to be removed.
+:Type: String
+:Example: ``sub_foo``
+:Required: Yes
+
+``purge-keys``
+
+:Description: Remove keys belonging to the subuser.
+:Type: Boolean
+:Example: True [True]
+:Required: No
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+None.
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+None.
+
+Create Key
+==========
+
+Create a new key. If a ``subuser`` is specified, then by default the created
+key will be of type swift. If only one of ``access-key`` or ``secret-key`` is
+provided, the omitted key will be generated automatically; that is, if only
+``secret-key`` is specified, then ``access-key`` will be generated
+automatically. By default, a generated key is added to the keyring without
+replacing an existing key pair. If ``access-key`` is specified and refers to
+an existing key owned by the user, then that key will be modified. The
+response is a container listing all keys of the same type as the key created.
+Note that when creating a swift key, the ``access-key`` option has no effect.
+Additionally, each user or subuser may hold only one swift key.
+
+:caps: users=write
+
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{admin}/user?key&format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user ID to receive the new key.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+``subuser``
+
+:Description: The subuser ID to receive the new key.
+:Type: String
+:Example: ``sub_foo``
+:Required: No
+
+``key-type``
+
+:Description: Key type to be generated, options are: swift, s3 (default).
+:Type: String
+:Example: ``s3`` [``s3``]
+:Required: No
+
+``access-key``
+
+:Description: Specify the access key.
+:Type: String
+:Example: ``AB01C2D3EF45G6H7IJ8K``
+:Required: No
+
+``secret-key``
+
+:Description: Specify the secret key.
+:Type: String
+:Example: ``0ab/CdeFGhij1klmnopqRSTUv1WxyZabcDEFgHij``
+:Required: No
+
+``generate-key``
+
+:Description: Generate a new key pair and add to the existing keyring.
+:Type: Boolean
+:Example: True [True]
+:Required: No
+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+``keys``
+
+:Description: Keys of the type created, associated with this user account.
+:Type: Container
+
+``user``
+
+:Description: The user account associated with the key.
+:Type: String
+:Parent: ``keys``
+
+``access-key``
+
+:Description: The access key.
+:Type: String
+:Parent: ``keys``
+
+``secret-key``
+
+:Description: The secret key.
+:Type: String
+:Parent: ``keys``
+
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``InvalidAccessKey``
+
+:Description: Invalid access key specified.
+:Code: 400 Bad Request
+
+``InvalidKeyType``
+
+:Description: Invalid key type specified.
+:Code: 400 Bad Request
+
+``InvalidSecretKey``
+
+:Description: Invalid secret key specified.
+:Code: 400 Bad Request
+
+``KeyExists``
+
+:Description: Provided access key exists and belongs to another user.
+:Code: 409 Conflict
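
The generation rules described above can be sketched as a small helper that fills in whichever half of the key pair was not supplied (the helper and key formats are illustrative, not the gateway's actual code):

```python
import secrets
import string

def complete_key_pair(access_key=None, secret_key=None):
    """Fill in whichever half of an S3 key pair was not supplied,
    mirroring the rule above that a missing access-key or secret-key
    is generated automatically. Key formats are illustrative only."""
    if access_key is None:
        # S3 access keys are conventionally 20 uppercase chars/digits.
        alphabet = string.ascii_uppercase + string.digits
        access_key = "".join(secrets.choice(alphabet) for _ in range(20))
    if secret_key is None:
        secret_key = secrets.token_urlsafe(30)
    return access_key, secret_key

ak, sk = complete_key_pair(secret_key="0ab/CdeFGhij1klmnopqRSTUv1WxyZabcDEFgHij")
```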
+
+Remove Key
+==========
+
+Remove an existing key.
+
+:caps: users=write
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{admin}/user?key&format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``access-key``
+
+:Description: The S3 access key belonging to the S3 key pair to remove.
+:Type: String
+:Example: ``AB01C2D3EF45G6H7IJ8K``
+:Required: Yes
+
+``uid``
+
+:Description: The user to remove the key from.
+:Type: String
+:Example: ``foo_user``
+:Required: No
+
+``subuser``
+
+:Description: The subuser to remove the key from.
+:Type: String
+:Example: ``sub_foo``
+:Required: No
+
+``key-type``
+
+:Description: Key type to be removed, options are: swift, s3.
+ NOTE: Required to remove swift key.
+:Type: String
+:Example: ``swift``
+:Required: No
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+None.
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+None.
+
+Get Bucket Info
+===============
+
+Get information about a subset of the existing buckets. If ``uid`` is specified
+without ``bucket`` then all buckets belonging to the user will be returned. If
+``bucket`` alone is specified, information for that particular bucket will be
+retrieved.
+
+:caps: buckets=read
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{admin}/bucket?format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``bucket``
+
+:Description: The bucket to return info on.
+:Type: String
+:Example: ``foo_bucket``
+:Required: No
+
+``uid``
+
+:Description: The user to retrieve bucket information for.
+:Type: String
+:Example: ``foo_user``
+:Required: No
+
+``stats``
+
+:Description: Return bucket statistics.
+:Type: Boolean
+:Example: True [False]
+:Required: No
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+If successful, the request returns a buckets container containing
+the desired bucket information.
+
+``stats``
+
+:Description: Per bucket information.
+:Type: Container
+
+``buckets``
+
+:Description: Contains a list of one or more bucket containers.
+:Type: Container
+
+``bucket``
+
+:Description: Container for single bucket information.
+:Type: Container
+:Parent: ``buckets``
+
+``name``
+
+:Description: The name of the bucket.
+:Type: String
+:Parent: ``bucket``
+
+``pool``
+
+:Description: The pool the bucket is stored in.
+:Type: String
+:Parent: ``bucket``
+
+``id``
+
+:Description: The unique bucket id.
+:Type: String
+:Parent: ``bucket``
+
+``marker``
+
+:Description: Internal bucket tag.
+:Type: String
+:Parent: ``bucket``
+
+``owner``
+
+:Description: The user id of the bucket owner.
+:Type: String
+:Parent: ``bucket``
+
+``usage``
+
+:Description: Storage usage information.
+:Type: Container
+:Parent: ``bucket``
+
+``index``
+
+:Description: Status of bucket index.
+:Type: String
+:Parent: ``bucket``
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``IndexRepairFailed``
+
+:Description: Bucket index repair failed.
+:Code: 409 Conflict
+
+Check Bucket Index
+==================
+
+Check the index of an existing bucket. NOTE: to check multipart object
+accounting with ``check-objects``, ``fix`` must be set to True.
+
+:caps: buckets=write
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{admin}/bucket?index&format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``bucket``
+
+:Description: The bucket to return info on.
+:Type: String
+:Example: ``foo_bucket``
+:Required: Yes
+
+``check-objects``
+
+:Description: Check multipart object accounting.
+:Type: Boolean
+:Example: True [False]
+:Required: No
+
+``fix``
+
+:Description: Also fix the bucket index when checking.
+:Type: Boolean
+:Example: False [False]
+:Required: No
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+``index``
+
+:Description: Status of bucket index.
+:Type: String
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``IndexRepairFailed``
+
+:Description: Bucket index repair failed.
+:Code: 409 Conflict
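
The note above (multipart accounting can only be checked while fixing) can be enforced client-side before a request is sent; a hypothetical sketch:

```python
def check_index_params(bucket, check_objects=False, fix=False):
    """Build the query parameters for a Check Bucket Index request,
    enforcing the note above: multipart object accounting can only be
    checked when the index is also being fixed."""
    if check_objects and not fix:
        raise ValueError("check-objects requires fix=True")
    params = {"bucket": bucket, "format": "json"}
    if check_objects:
        params["check-objects"] = "True"
    if fix:
        params["fix"] = "True"
    return params
```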
+
+Remove Bucket
+=============
+
+Delete an existing bucket.
+
+:caps: buckets=write
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{admin}/bucket?format=json HTTP/1.1
+ Host: {fqdn}
+
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``bucket``
+
+:Description: The bucket to remove.
+:Type: String
+:Example: ``foo_bucket``
+:Required: Yes
+
+``purge-objects``
+
+:Description: Remove a bucket's objects before deletion.
+:Type: Boolean
+:Example: True [False]
+:Required: No
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+None.
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``BucketNotEmpty``
+
+:Description: Attempted to delete non-empty bucket.
+:Code: 409 Conflict
+
+``ObjectRemovalFailed``
+
+:Description: Unable to remove objects.
+:Code: 409 Conflict
+
+Unlink Bucket
+=============
+
+Unlink a bucket from a specified user. Primarily useful for changing
+bucket ownership.
+
+:caps: buckets=write
+
+Syntax
+~~~~~~
+
+::
+
+ POST /{admin}/bucket?format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``bucket``
+
+:Description: The bucket to unlink.
+:Type: String
+:Example: ``foo_bucket``
+:Required: Yes
+
+``uid``
+
+:Description: The user ID to unlink the bucket from.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+None.
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``BucketUnlinkFailed``
+
+:Description: Unable to unlink bucket from specified user.
+:Code: 409 Conflict
+
+Link Bucket
+===========
+
+Link a bucket to a specified user, unlinking the bucket from
+any previous user.
+
+:caps: buckets=write
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{admin}/bucket?format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``bucket``
+
+:Description: The bucket name to link.
+:Type: String
+:Example: ``foo_bucket``
+:Required: Yes
+
+``bucket-id``
+
+:Description: The bucket id to link.
+:Type: String
+:Example: ``dev.6607669.420``
+:Required: No
+
+``uid``
+
+:Description: The user ID to link the bucket to.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+``bucket``
+
+:Description: Container for single bucket information.
+:Type: Container
+
+``name``
+
+:Description: The name of the bucket.
+:Type: String
+:Parent: ``bucket``
+
+``pool``
+
+:Description: The pool the bucket is stored in.
+:Type: String
+:Parent: ``bucket``
+
+``id``
+
+:Description: The unique bucket id.
+:Type: String
+:Parent: ``bucket``
+
+``marker``
+
+:Description: Internal bucket tag.
+:Type: String
+:Parent: ``bucket``
+
+``owner``
+
+:Description: The user id of the bucket owner.
+:Type: String
+:Parent: ``bucket``
+
+``usage``
+
+:Description: Storage usage information.
+:Type: Container
+:Parent: ``bucket``
+
+``index``
+
+:Description: Status of bucket index.
+:Type: String
+:Parent: ``bucket``
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``BucketUnlinkFailed``
+
+:Description: Unable to unlink bucket from specified user.
+:Code: 409 Conflict
+
+``BucketLinkFailed``
+
+:Description: Unable to link bucket to specified user.
+:Code: 409 Conflict
+
+Remove Object
+=============
+
+Remove an existing object. NOTE: Does not require owner to be non-suspended.
+
+:caps: buckets=write
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{admin}/bucket?object&format=json HTTP/1.1
+ Host: {fqdn}
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``bucket``
+
+:Description: The bucket containing the object to be removed.
+:Type: String
+:Example: ``foo_bucket``
+:Required: Yes
+
+``object``
+
+:Description: The object to remove.
+:Type: String
+:Example: ``foo.txt``
+:Required: Yes
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+None.
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``NoSuchObject``
+
+:Description: Specified object does not exist.
+:Code: 404 Not Found
+
+``ObjectRemovalFailed``
+
+:Description: Unable to remove objects.
+:Code: 409 Conflict
+
+
+
+Get Bucket or Object Policy
+===========================
+
+Read the policy of an object or bucket.
+
+:caps: buckets=read
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{admin}/bucket?policy&format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``bucket``
+
+:Description: The bucket to read the policy from.
+:Type: String
+:Example: ``foo_bucket``
+:Required: Yes
+
+``object``
+
+:Description: The object to read the policy from.
+:Type: String
+:Example: ``foo.txt``
+:Required: No
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+If successful, returns the object or bucket policy.
+
+``policy``
+
+:Description: Access control policy.
+:Type: Container
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``IncompleteBody``
+
+:Description: Either bucket was not specified for a bucket policy request or bucket
+ and object were not specified for an object policy request.
+:Code: 400 Bad Request
+
+Add A User Capability
+=====================
+
+Add an administrative capability to a specified user.
+
+:caps: users=write
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{admin}/user?caps&format=json HTTP/1.1
+ Host: {fqdn}
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user ID to add an administrative capability to.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+``user-caps``
+
+:Description: The administrative capability to add to the user.
+:Type: String
+:Example: ``usage=read,write;user=write``
+:Required: Yes
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+If successful, the response contains the user's capabilities.
+
+``user``
+
+:Description: A container for the user data information.
+:Type: Container
+
+``user_id``
+
+:Description: The user id.
+:Type: String
+:Parent: ``user``
+
+``caps``
+
+:Description: User capabilities.
+:Type: Container
+:Parent: ``user``
+
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``InvalidCapability``
+
+:Description: Attempt to grant invalid admin capability.
+:Code: 400 Bad Request
+
+Example Request
+~~~~~~~~~~~~~~~
+
+::
+
+ PUT /{admin}/user?caps&user-caps=usage=read,write;user=write&format=json HTTP/1.1
+ Host: {fqdn}
+ Content-Type: text/plain
+ Authorization: {your-authorization-token}
+
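
The ``user-caps`` string uses ``;`` to separate capabilities and ``=`` to attach permissions, as in the example request above; a small parser sketch (illustrative helper, not part of the API):

```python
def parse_caps(caps_string):
    """Parse a user-caps string such as 'usage=read,write;user=write'
    into a mapping of capability name -> set of permissions."""
    caps = {}
    for entry in caps_string.split(";"):
        name, _, perms = entry.partition("=")
        caps[name.strip()] = {p.strip() for p in perms.split(",")}
    return caps

print(parse_caps("usage=read,write;user=write"))
```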
+
+
+Remove A User Capability
+========================
+
+Remove an administrative capability from a specified user.
+
+:caps: users=write
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{admin}/user?caps&format=json HTTP/1.1
+ Host: {fqdn}
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user ID to remove an administrative capability from.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+``user-caps``
+
+:Description: The administrative capabilities to remove from the user.
+:Type: String
+:Example: ``usage=read, write``
+:Required: Yes
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+If successful, the response contains the user's capabilities.
+
+``user``
+
+:Description: A container for the user data information.
+:Type: Container
+
+``user_id``
+
+:Description: The user id.
+:Type: String
+:Parent: ``user``
+
+``caps``
+
+:Description: User capabilities.
+:Type: Container
+:Parent: ``user``
+
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``InvalidCapability``
+
+:Description: Attempt to remove an invalid admin capability.
+:Code: 400 Bad Request
+
+``NoSuchCap``
+
+:Description: User does not possess specified capability.
+:Code: 404 Not Found
+
+
+Quotas
+======
+
+The Admin Operations API enables you to set quotas on users and on buckets
+owned by users. See `Quota Management`_ for additional details. Quotas include
+the maximum number of objects in a bucket and the maximum storage size in
+megabytes.
+
+To view quotas, the user must have a ``users=read`` capability. To set,
+modify or disable a quota, the user must have ``users=write`` capability.
+See the `Admin Guide`_ for details.
+
+Valid parameters for quotas include:
+
+- **Bucket:** The ``bucket`` option allows you to specify a quota for
+ buckets owned by a user.
+
+- **Maximum Objects:** The ``max-objects`` setting allows you to specify
+ the maximum number of objects. A negative value disables this setting.
+
+- **Maximum Size:** The ``max-size`` option allows you to specify a quota
+ for the maximum number of bytes. The ``max-size-kb`` option allows you
+ to specify it in KiB. A negative value disables this setting.
+
+- **Quota Type:** The ``quota-type`` option sets the scope for the quota.
+ The options are ``bucket`` and ``user``.
+
+- **Enable/Disable Quota:** The ``enabled`` option specifies whether the
+ quota should be enabled. The value should be either 'True' or 'False'.
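
The settings payload for the set operations below is a JSON document mirroring the fields returned by the corresponding read operation. A sketch of what a client might send (the field names here are an assumed shape; take the authoritative names from a real read response):

```python
import json

# Assumed shape of the quota settings document; the authoritative
# field names come from the body returned by the corresponding GET.
quota = {
    "enabled": True,
    "max_objects": 1000,      # a negative value disables the object limit
    "max_size_kb": 10485760,  # 10 GiB expressed in KiB; negative disables
}
body = json.dumps(quota)
```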
+
+Get User Quota
+~~~~~~~~~~~~~~
+
+To get a quota, the user must have ``users`` capability set with ``read``
+permission. ::
+
+ GET /admin/user?quota&uid=<uid>&quota-type=user
+
+
+Set User Quota
+~~~~~~~~~~~~~~
+
+To set a quota, the user must have ``users`` capability set with ``write``
+permission. ::
+
+ PUT /admin/user?quota&uid=<uid>&quota-type=user
+
+
+The content must include a JSON representation of the quota settings
+as encoded in the corresponding read operation.
+
+
+Get Bucket Quota
+~~~~~~~~~~~~~~~~
+
+To get a quota, the user must have ``users`` capability set with ``read``
+permission. ::
+
+ GET /admin/user?quota&uid=<uid>&quota-type=bucket
+
+
+Set Bucket Quota
+~~~~~~~~~~~~~~~~
+
+To set a quota, the user must have ``users`` capability set with ``write``
+permission. ::
+
+ PUT /admin/user?quota&uid=<uid>&quota-type=bucket
+
+The content must include a JSON representation of the quota settings
+as encoded in the corresponding read operation.
+
+
+Set Quota for an Individual Bucket
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To set a quota, the user must have ``buckets`` capability set with ``write``
+permission. ::
+
+ PUT /admin/bucket?quota&uid=<uid>&bucket=<bucket-name>
+
+The content must include a JSON representation of the quota settings
+as mentioned in Set Bucket Quota section above.
+
+
+
+Rate Limit
+==========
+
+The Admin Operations API enables you to get and set rate limit configurations
+on users and buckets, as well as global rate limit configurations. See
+`Rate Limit Management`_ for additional details. Rate limits specify the
+maximum number of read and/or write operations and bytes per minute, applied
+per bucket or per user.
+
+To view rate limit, the user must have a ``ratelimit=read`` capability. To set,
+modify or disable a ratelimit, the user must have ``ratelimit=write`` capability.
+See the `Admin Guide`_ for details.
+
+Valid parameters for rate limits include:
+
+- **Bucket:** The ``bucket`` option allows you to specify a rate limit for
+ a bucket.
+
+- **User:** The ``uid`` option allows you to specify a rate limit for a user.
+
+- **Maximum Read Bytes:** The ``max-read-bytes`` setting allows you to specify
+ the maximum number of read bytes per minute. A 0 value disables this setting.
+
+- **Maximum Write Bytes:** The ``max-write-bytes`` setting allows you to specify
+ the maximum number of write bytes per minute. A 0 value disables this setting.
+
+- **Maximum Read Ops:** The ``max-read-ops`` setting allows you to specify
+ the maximum number of read ops per minute. A 0 value disables this setting.
+
+- **Maximum Write Ops:** The ``max-write-ops`` setting allows you to specify
+ the maximum number of write ops per minute. A 0 value disables this setting.
+
+- **Global:** The ``global`` option allows you to specify a global rate limit.
+ The value should be either 'True' or 'False'.
+
+- **Rate Limit Scope:** The ``ratelimit-scope`` option sets the scope for the
+ rate limit. The options are ``bucket``, ``user``, and ``anonymous``.
+ ``anonymous`` is valid only for setting the global configuration.
+
+- **Enable/Disable Rate Limit:** The ``enabled`` option specifies whether the
+ rate limit should be enabled. The value should be either 'True' or 'False'.
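
Combining the parameters above, a Set User Rate Limit request might be composed like this sketch (a hypothetical helper; a 0 value leaves that limit disabled, and signing is omitted):

```python
from urllib.parse import urlencode

def set_user_ratelimit_query(uid, max_read_ops=0, max_write_ops=0,
                             max_read_bytes=0, max_write_bytes=0,
                             enabled=True):
    """Build the query string for POST /{admin}/ratelimit with
    ratelimit-scope=user; a 0 value disables that particular limit."""
    params = {
        "ratelimit-scope": "user",
        "uid": uid,
        "max-read-ops": max_read_ops,
        "max-write-ops": max_write_ops,
        "max-read-bytes": max_read_bytes,
        "max-write-bytes": max_write_bytes,
        "enabled": str(enabled),
    }
    return urlencode(params)

print(set_user_ratelimit_query("foo_user", max_read_ops=1024))
```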
+
+Get User Rate Limit
+~~~~~~~~~~~~~~~~~~~
+
+To get a rate limit, the user must have ``ratelimit`` capability set with ``read``
+permission. ::
+
+ GET /{admin}/ratelimit?ratelimit-scope=user&uid=<uid>
+
+
+Set User Rate Limit
+~~~~~~~~~~~~~~~~~~~
+
+To set a rate limit, the user must have ``ratelimit`` capability set with ``write``
+permission. ::
+
+ POST /{admin}/ratelimit?ratelimit-scope=user&uid=<uid><[&max-read-bytes=<bytes>][&max-write-bytes=<bytes>][&max-read-ops=<ops>][&max-write-ops=<ops>][enabled=<True|False>]>
+
+
+
+Get Bucket Rate Limit
+~~~~~~~~~~~~~~~~~~~~~
+
+To get a rate limit, the user must have ``users`` capability set with ``read``
+permission. ::
+
+ GET /{admin}/ratelimit?bucket=<bucket>&ratelimit-scope=bucket
+
+
+
+Set Rate Limit for an Individual Bucket
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To set a rate limit, the user must have ``ratelimit`` capability set with ``write``
+permission. ::
+
+ POST /{admin}/ratelimit?bucket=<bucket-name>&ratelimit-scope=bucket<[&max-read-bytes=<bytes>][&max-write-bytes=<bytes>][&max-read-ops=<ops>][&max-write-ops=<ops>]>
+
+
+
+Get Global Rate Limit
+~~~~~~~~~~~~~~~~~~~~~
+
+To get a global rate limit, the user must have ``ratelimit`` capability set with ``read``
+permission. ::
+
+ GET /{admin}/ratelimit?global=<True|False>
+
+
+
+Set Global User Rate Limit
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To set a rate limit, the user must have ``ratelimit`` capability set with ``write``
+permission. ::
+
+ POST /{admin}/ratelimit?ratelimit-scope=user&global=<True|False><[&max-read-bytes=<bytes>][&max-write-bytes=<bytes>][&max-read-ops=<ops>][&max-write-ops=<ops>][enabled=<True|False>]>
+
+
+
+Set Global Rate Limit Bucket
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To set a rate limit, the user must have ``ratelimit`` capability set with ``write``
+permission. ::
+
+ POST /{admin}/ratelimit?ratelimit-scope=bucket&global=<True|False><[&max-read-bytes=<bytes>][&max-write-bytes=<bytes>][&max-read-ops=<ops>][&max-write-ops=<ops>]>
+
+
+
+Set Global Anonymous User Rate Limit
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To set a rate limit, the user must have ``ratelimit`` capability set with ``write``
+permission. ::
+
+ POST /{admin}/ratelimit?ratelimit-scope=anon&global=<True|False><[&max-read-bytes=<bytes>][&max-write-bytes=<bytes>][&max-read-ops=<ops>][&max-write-ops=<ops>][enabled=<True|False>]>
+
+
+
+Standard Error Responses
+========================
+
+``AccessDenied``
+
+:Description: Access denied.
+:Code: 403 Forbidden
+
+``InternalError``
+
+:Description: Internal server error.
+:Code: 500 Internal Server Error
+
+``NoSuchUser``
+
+:Description: User does not exist.
+:Code: 404 Not Found
+
+``NoSuchBucket``
+
+:Description: Bucket does not exist.
+:Code: 404 Not Found
+
+``NoSuchKey``
+
+:Description: No such access key.
+:Code: 404 Not Found
+
+
+
+
+Binding Libraries
+========================
+
+``Golang``
+
+ - `ceph/go-ceph`_
+ - `IrekFasikhov/go-rgwadmin`_
+ - `QuentinPerez/go-radosgw`_
+
+``Java``
+
+ - `twonote/radosgw-admin4j`_
+
+``PHP``
+
+ - `lbausch/php-ceph-radosgw-admin`_
+ - `myENA/php-rgw-api`_
+
+``Python``
+
+ - `UMIACS/rgwadmin`_
+ - `valerytschopp/python-radosgw-admin`_
+
+
+
+.. _Admin Guide: ../admin
+.. _Quota Management: ../admin#quota-management
+.. _IrekFasikhov/go-rgwadmin: https://github.com/IrekFasikhov/go-rgwadmin
+.. _QuentinPerez/go-radosgw: https://github.com/QuentinPerez/go-radosgw
+.. _ceph/go-ceph: https://github.com/ceph/go-ceph/
+.. _Rate Limit Management: ../admin#rate-limit-management
+.. _twonote/radosgw-admin4j: https://github.com/twonote/radosgw-admin4j
+.. _lbausch/php-ceph-radosgw-admin: https://github.com/lbausch/php-ceph-radosgw-admin
+.. _myENA/php-rgw-api: https://github.com/myENA/php-rgw-api
+.. _UMIACS/rgwadmin: https://github.com/UMIACS/rgwadmin
+.. _valerytschopp/python-radosgw-admin: https://github.com/valerytschopp/python-radosgw-admin
diff --git a/doc/radosgw/api.rst b/doc/radosgw/api.rst
new file mode 100644
index 000000000..cb31284e0
--- /dev/null
+++ b/doc/radosgw/api.rst
@@ -0,0 +1,16 @@
+.. _radosgw api:
+
+===============
+librgw (Python)
+===============
+
+.. highlight:: python
+
+The `rgw` python module provides file-like access to rgw.
+
+API Reference
+=============
+
+.. automodule:: rgw
+ :members: LibRGWFS, FileHandle
+
diff --git a/doc/radosgw/archive-sync-module.rst b/doc/radosgw/archive-sync-module.rst
new file mode 100644
index 000000000..b121ee6b1
--- /dev/null
+++ b/doc/radosgw/archive-sync-module.rst
@@ -0,0 +1,44 @@
+===================
+Archive Sync Module
+===================
+
+.. versionadded:: Nautilus
+
+This sync module leverages the versioning feature of the S3 objects in RGW to
+have an archive zone that captures the different versions of the S3 objects
+as they occur over time in the other zones.
+
+An archive zone maintains a history of versions of S3 objects that can
+only be eliminated through the gateways associated with the archive zone.
+
+This functionality is useful in a configuration where several
+non-versioned zones replicate their data and metadata through their zone
+gateways (mirror configuration), providing high availability to the end users,
+while the archive zone captures all the data and metadata updates and
+consolidates them as versions of S3 objects.
+
+Including an archive zone in a multizone configuration gives you the
+flexibility of an S3 object history in a single zone while saving the space
+that the replicas of the versioned S3 objects would consume in the rest of the
+zones.
+
+
+
+Archive Sync Tier Type Configuration
+------------------------------------
+
+How to Configure
+~~~~~~~~~~~~~~~~
+
+See `Multisite Configuration`_ for multisite configuration instructions. The
+archive sync module requires the creation of a new zone. The zone tier type
+needs to be defined as ``archive``:
+
+::
+
+ # radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
+ --rgw-zone={zone-name} \
+ --endpoints={http://fqdn}[,{http://fqdn}] \
+ --tier-type=archive
+
+.. _Multisite Configuration: ../multisite
diff --git a/doc/radosgw/barbican.rst b/doc/radosgw/barbican.rst
new file mode 100644
index 000000000..a90d063fb
--- /dev/null
+++ b/doc/radosgw/barbican.rst
@@ -0,0 +1,123 @@
+==============================
+OpenStack Barbican Integration
+==============================
+
+OpenStack `Barbican`_ can be used as a secure key management service for
+`Server-Side Encryption`_.
+
+.. image:: ../images/rgw-encryption-barbican.png
+
+#. `Configure Keystone`_
+#. `Create a Keystone user`_
+#. `Configure the Ceph Object Gateway`_
+#. `Create a key in Barbican`_
+
+Configure Keystone
+==================
+
+Barbican depends on Keystone for authorization and access control of its keys.
+
+See `OpenStack Keystone Integration`_.
+
+Create a Keystone user
+======================
+
+Create a new user that will be used by the Ceph Object Gateway to retrieve
+keys.
+
+For example::
+
+ user = rgwcrypt-user
+ pass = rgwcrypt-password
+ tenant = rgwcrypt
+
+See OpenStack documentation for `Manage projects, users, and roles`_.
+
+Create a key in Barbican
+========================
+
+See Barbican documentation for `How to Create a Secret`_. Requests to
+Barbican must include a valid Keystone token in the ``X-Auth-Token`` header.
+
+.. note:: Server-side encryption keys must be 256 bits long and base64 encoded.
+
+Example request::
+
+ POST /v1/secrets HTTP/1.1
+ Host: barbican.example.com:9311
+ Accept: */*
+ Content-Type: application/json
+ X-Auth-Token: 7f7d588dd29b44df983bc961a6b73a10
+ Content-Length: 299
+
+ {
+ "name": "my-key",
+ "expiration": "2016-12-28T19:14:44.180394",
+ "algorithm": "aes",
+ "bit_length": 256,
+ "mode": "cbc",
+ "payload": "6b+WOZ1T3cqZMxgThRcXAQBrS5mXKdDUphvpxptl9/4=",
+ "payload_content_type": "application/octet-stream",
+ "payload_content_encoding": "base64"
+ }
+
+Response::
+
+ {"secret_ref": "http://barbican.example.com:9311/v1/secrets/d1e7ef3b-f841-4b7c-90b2-b7d90ca2d723"}
+
+In the response, ``d1e7ef3b-f841-4b7c-90b2-b7d90ca2d723`` is the key id that
+can be used in any `SSE-KMS`_ request.
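
To use the returned key id in an `SSE-KMS`_ request, a client supplies it in the server-side-encryption headers of an S3 object upload; for example (the header names are the standard SSE-KMS ones):

```python
# Key id returned in the Barbican secret_ref above.
key_id = "d1e7ef3b-f841-4b7c-90b2-b7d90ca2d723"

# Standard SSE-KMS headers to attach to an S3 PUT request.
sse_headers = {
    "x-amz-server-side-encryption": "aws:kms",
    "x-amz-server-side-encryption-aws-kms-key-id": key_id,
}
```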
+
+This newly created key is not accessible to user ``rgwcrypt-user``. Read
+access must be granted with an ACL. See `How to Set/Replace ACL`_ for more
+details.
+
+Example request (assuming that the Keystone id of ``rgwcrypt-user`` is
+``906aa90bd8a946c89cdff80d0869460f``)::
+
+ PUT /v1/secrets/d1e7ef3b-f841-4b7c-90b2-b7d90ca2d723/acl HTTP/1.1
+ Host: barbican.example.com:9311
+ Accept: */*
+ Content-Type: application/json
+ X-Auth-Token: 7f7d588dd29b44df983bc961a6b73a10
+ Content-Length: 101
+
+ {
+ "read":{
+ "users":[ "906aa90bd8a946c89cdff80d0869460f" ],
+ "project-access": true
+ }
+ }
+
+Response::
+
+ {"acl_ref": "http://barbican.example.com:9311/v1/secrets/d1e7ef3b-f841-4b7c-90b2-b7d90ca2d723/acl"}
+
+Configure the Ceph Object Gateway
+=================================
+
+Edit the Ceph configuration file to enable Barbican as a KMS and add information
+about the Barbican server and Keystone user::
+
+ rgw crypt s3 kms backend = barbican
+ rgw barbican url = http://barbican.example.com:9311
+ rgw keystone barbican user = rgwcrypt-user
+ rgw keystone barbican password = rgwcrypt-password
+
+When using Keystone API version 2::
+
+ rgw keystone barbican tenant = rgwcrypt
+
+When using API version 3::
+
+ rgw keystone barbican project
+ rgw keystone barbican domain
+
+
+.. _Barbican: https://wiki.openstack.org/wiki/Barbican
+.. _Server-Side Encryption: ../encryption
+.. _OpenStack Keystone Integration: ../keystone
+.. _Manage projects, users, and roles: https://docs.openstack.org/admin-guide/cli-manage-projects-users-and-roles.html#create-a-user
+.. _How to Create a Secret: https://developer.openstack.org/api-guide/key-manager/secrets.html#how-to-create-a-secret
+.. _SSE-KMS: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
+.. _How to Set/Replace ACL: https://developer.openstack.org/api-guide/key-manager/acls.html#how-to-set-replace-acl
diff --git a/doc/radosgw/bucketpolicy.rst b/doc/radosgw/bucketpolicy.rst
new file mode 100644
index 000000000..99f40c5d7
--- /dev/null
+++ b/doc/radosgw/bucketpolicy.rst
@@ -0,0 +1,216 @@
+===============
+Bucket Policies
+===============
+
+.. versionadded:: Luminous
+
+The Ceph Object Gateway supports a subset of the Amazon S3 policy
+language applied to buckets.
+
+
+Creation and Removal
+====================
+
+Bucket policies are managed through standard S3 operations rather than
+radosgw-admin.
+
+For example, one may use s3cmd to set or delete a policy thus::
+
+ $ cat > examplepol
+ {
+ "Version": "2012-10-17",
+ "Statement": [{
+ "Effect": "Allow",
+ "Principal": {"AWS": ["arn:aws:iam::usfolks:user/fred:subuser"]},
+ "Action": "s3:PutObjectAcl",
+ "Resource": [
+ "arn:aws:s3:::happybucket/*"
+ ]
+ }]
+ }
+
+ $ s3cmd setpolicy examplepol s3://happybucket
+ $ s3cmd delpolicy s3://happybucket
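
The same policy can also be built and applied from code. The sketch below constructs the policy document with the Python standard library; the comments show how it could be applied through a boto3 S3 client configured against the gateway (client setup is assumed):

```python
import json

# The example policy above, as a Python structure. The principal,
# tenant ("usfolks"), and bucket names are the hypothetical ones
# used in this section.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam::usfolks:user/fred:subuser"]},
        "Action": "s3:PutObjectAcl",
        "Resource": ["arn:aws:s3:::happybucket/*"],
    }],
}
policy_json = json.dumps(policy)

# With a boto3 client, the policy would be set and removed with:
#   client.put_bucket_policy(Bucket="happybucket", Policy=policy_json)
#   client.delete_bucket_policy(Bucket="happybucket")
```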
+
+
+Limitations
+===========
+
+Currently, we support only the following actions:
+
+- s3:AbortMultipartUpload
+- s3:CreateBucket
+- s3:DeleteBucketPolicy
+- s3:DeleteBucket
+- s3:DeleteBucketWebsite
+- s3:DeleteObject
+- s3:DeleteObjectVersion
+- s3:DeleteReplicationConfiguration
+- s3:GetAccelerateConfiguration
+- s3:GetBucketAcl
+- s3:GetBucketCORS
+- s3:GetBucketLocation
+- s3:GetBucketLogging
+- s3:GetBucketNotification
+- s3:GetBucketPolicy
+- s3:GetBucketRequestPayment
+- s3:GetBucketTagging
+- s3:GetBucketVersioning
+- s3:GetBucketWebsite
+- s3:GetLifecycleConfiguration
+- s3:GetObjectAcl
+- s3:GetObject
+- s3:GetObjectTorrent
+- s3:GetObjectVersionAcl
+- s3:GetObjectVersion
+- s3:GetObjectVersionTorrent
+- s3:GetReplicationConfiguration
+- s3:IPAddress
+- s3:NotIpAddress
+- s3:ListAllMyBuckets
+- s3:ListBucketMultipartUploads
+- s3:ListBucket
+- s3:ListBucketVersions
+- s3:ListMultipartUploadParts
+- s3:PutAccelerateConfiguration
+- s3:PutBucketAcl
+- s3:PutBucketCORS
+- s3:PutBucketLogging
+- s3:PutBucketNotification
+- s3:PutBucketPolicy
+- s3:PutBucketRequestPayment
+- s3:PutBucketTagging
+- s3:PutBucketVersioning
+- s3:PutBucketWebsite
+- s3:PutLifecycleConfiguration
+- s3:PutObjectAcl
+- s3:PutObject
+- s3:PutObjectVersionAcl
+- s3:PutReplicationConfiguration
+- s3:RestoreObject
+
+We do not yet support setting policies on users, groups, or roles.
+
+We use the RGW ‘tenant’ identifier in place of the Amazon twelve-digit
+account ID. In the future we may allow you to assign an account ID to
+a tenant, but for now if you want to use policies between AWS S3 and
+RGW S3 you will have to use the Amazon account ID as the tenant ID when
+creating users.
+
+Under AWS, all tenants share a single namespace. RGW gives every
+tenant its own namespace of buckets. There may be an option to enable
+an AWS-like 'flat' bucket namespace in future versions. At present, to
+access a bucket belonging to another tenant, address it as
+"tenant:bucket" in the S3 request.
+
+In AWS, a bucket policy can grant access to another account, and that
+account owner can then grant access to individual users with user
+permissions. Since we do not yet support user, role, and group
+permissions, account owners will currently need to grant access
+directly to individual users, and granting an entire account access to
+a bucket grants access to all users in that account.
+
+Bucket policies do not yet support string interpolation.
+
+For all requests, condition keys we support are:
+
+- aws:CurrentTime
+- aws:EpochTime
+- aws:PrincipalType
+- aws:Referer
+- aws:SecureTransport
+- aws:SourceIp
+- aws:UserAgent
+- aws:username
+
+We support certain s3 condition keys for bucket and object requests.
+
+.. versionadded:: Mimic
+
+Bucket Related Operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
++-----------------------+----------------------+----------------+
+| Permission | Condition Keys | Comments |
++-----------------------+----------------------+----------------+
+| | s3:x-amz-acl | |
+| | s3:x-amz-grant-<perm>| |
+|s3:createBucket | where perm is one of | |
+| | read/write/read-acp | |
+| | write-acp/ | |
+| | full-control | |
++-----------------------+----------------------+----------------+
+| | s3:prefix | |
+| +----------------------+----------------+
+| s3:ListBucket & | s3:delimiter | |
+| +----------------------+----------------+
+| s3:ListBucketVersions | s3:max-keys | |
++-----------------------+----------------------+----------------+
+| s3:PutBucketAcl | s3:x-amz-acl | |
+| | s3:x-amz-grant-<perm>| |
++-----------------------+----------------------+----------------+
+
+.. _tag_policy:
+
+Object Related Operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
++-----------------------------+-----------------------------------------------+-------------------+
+|Permission |Condition Keys | Comments |
+| | | |
++-----------------------------+-----------------------------------------------+-------------------+
+| |s3:x-amz-acl & s3:x-amz-grant-<perm> | |
+| | | |
+| +-----------------------------------------------+-------------------+
+| |s3:x-amz-copy-source | |
+| | | |
+| +-----------------------------------------------+-------------------+
+| |s3:x-amz-server-side-encryption | |
+| | | |
+| +-----------------------------------------------+-------------------+
+|s3:PutObject |s3:x-amz-server-side-encryption-aws-kms-key-id | |
+| | | |
+| +-----------------------------------------------+-------------------+
+| |s3:x-amz-metadata-directive |PUT & COPY to |
+| | |overwrite/preserve |
+| | |metadata in COPY |
+| | |requests |
+| +-----------------------------------------------+-------------------+
+| |s3:RequestObjectTag/<tag-key> | |
+| | | |
++-----------------------------+-----------------------------------------------+-------------------+
+|s3:PutObjectAcl |s3:x-amz-acl & s3-amz-grant-<perm> | |
+|s3:PutObjectVersionAcl | | |
+| +-----------------------------------------------+-------------------+
+| |s3:ExistingObjectTag/<tag-key> | |
+| | | |
++-----------------------------+-----------------------------------------------+-------------------+
+| |s3:RequestObjectTag/<tag-key> | |
+|s3:PutObjectTagging & +-----------------------------------------------+-------------------+
+|s3:PutObjectVersionTagging |s3:ExistingObjectTag/<tag-key> | |
+| | | |
++-----------------------------+-----------------------------------------------+-------------------+
+|s3:GetObject & |s3:ExistingObjectTag/<tag-key> | |
+|s3:GetObjectVersion | | |
++-----------------------------+-----------------------------------------------+-------------------+
+|s3:GetObjectAcl & |s3:ExistingObjectTag/<tag-key> | |
+|s3:GetObjectVersionAcl | | |
++-----------------------------+-----------------------------------------------+-------------------+
+|s3:GetObjectTagging & |s3:ExistingObjectTag/<tag-key> | |
+|s3:GetObjectVersionTagging | | |
++-----------------------------+-----------------------------------------------+-------------------+
+|s3:DeleteObjectTagging & |s3:ExistingObjectTag/<tag-key> | |
+|s3:DeleteObjectVersionTagging| | |
++-----------------------------+-----------------------------------------------+-------------------+
+
+
+More may be supported soon as we integrate with the recently rewritten
+Authentication/Authorization subsystem.
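
As an illustration of the object-tag condition keys in the table above, a policy can restrict reads to objects carrying a particular tag. A sketch with hypothetical principal, bucket, and tag names:

```python
import json

# Allow GetObject/GetObjectVersion only on objects whose "security"
# tag equals "public", via the s3:ExistingObjectTag/<tag-key> key.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam::usfolks:user/fred"]},
        "Action": ["s3:GetObject", "s3:GetObjectVersion"],
        "Resource": ["arn:aws:s3:::happybucket/*"],
        "Condition": {
            "StringEquals": {"s3:ExistingObjectTag/security": "public"}
        },
    }],
}
policy_json = json.dumps(policy)
```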
+
+Swift
+=====
+
+There is no way to set bucket policies under Swift, but bucket
+policies that have been set govern Swift as well as S3 operations.
+
+Swift credentials are matched against Principals specified in a policy
+in a way specific to whatever backend is being used.
diff --git a/doc/radosgw/cloud-sync-module.rst b/doc/radosgw/cloud-sync-module.rst
new file mode 100644
index 000000000..a601bd503
--- /dev/null
+++ b/doc/radosgw/cloud-sync-module.rst
@@ -0,0 +1,244 @@
+=========================
+Cloud Sync Module
+=========================
+
+.. versionadded:: Mimic
+
+This module syncs zone data to a remote cloud service. The sync is unidirectional; data is not synced back from the
+remote zone. The goal of this module is to enable syncing data to multiple cloud providers. The currently supported
+cloud providers are those that are compatible with AWS (S3).
+
+User credentials for the remote cloud object store service need to be configured. Since many cloud services impose limits
+on the number of buckets that each user can create, the mapping of source objects and buckets is configurable.
+It is possible to configure different targets to different buckets and bucket prefixes. Note that source ACLs will not
+be preserved. It is possible to map permissions of specific source users to specific destination users.
+
+Due to API limitations there is no way to preserve original object modification time and ETag. The cloud sync module
+stores these as metadata attributes on the destination objects.
+
+
+
+Cloud Sync Tier Type Configuration
+-------------------------------------
+
+Trivial Configuration:
+~~~~~~~~~~~~~~~~~~~~~~
+
+::
+
+ {
+ "connection": {
+ "access_key": <access>,
+ "secret": <secret>,
+ "endpoint": <endpoint>,
+ "host_style": <path | virtual>,
+ },
+ "acls": [ { "type": <id | email | uri>,
+ "source_id": <source_id>,
+ "dest_id": <dest_id> } ... ],
+ "target_path": <target_path>,
+ }
+
+
+Non Trivial Configuration:
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+::
+
+ {
+ "default": {
+ "connection": {
+ "access_key": <access>,
+ "secret": <secret>,
+ "endpoint": <endpoint>,
+ "host_style": <path | virtual>,
+ },
+ "acls": [
+ {
+ "type" : <id | email | uri>, # optional, default is id
+ "source_id": <id>,
+ "dest_id": <id>
+ } ... ]
+ "target_path": <path> # optional
+ },
+ "connections": [
+ {
+ "connection_id": <id>,
+ "access_key": <access>,
+ "secret": <secret>,
+ "endpoint": <endpoint>,
+ "host_style": <path | virtual>, # optional
+ } ... ],
+ "acl_profiles": [
+ {
+ "acls_id": <id>, # acl mappings
+ "acls": [ {
+ "type": <id | email | uri>,
+ "source_id": <id>,
+ "dest_id": <id>
+ } ... ]
+ }
+ ],
+ "profiles": [
+ {
+ "source_bucket": <source>,
+ "connection_id": <connection_id>,
+ "acls_id": <mappings_id>,
+ "target_path": <dest>, # optional
+ } ... ],
+ }
+
+
+.. note:: A trivial configuration can coincide with a non-trivial one.
+
+
+* ``connection`` (container)
+
+Represents a connection to the remote cloud service. Contains ``connection_id``, ``access_key``,
+``secret``, ``endpoint``, and ``host_style``.
+
+* ``access_key`` (string)
+
+The remote cloud access key that will be used for a specific connection.
+
+* ``secret`` (string)
+
+The secret key for the remote cloud service.
+
+* ``endpoint`` (string)
+
+URL of remote cloud service endpoint.
+
+* ``host_style`` (path | virtual)
+
+Type of host style to be used when accessing remote cloud endpoint (default: ``path``).
+
+* ``acls`` (array)
+
+Contains a list of ``acl_mappings``.
+
+* ``acl_mapping`` (container)
+
+Each ``acl_mapping`` structure contains ``type``, ``source_id``, and ``dest_id``. These
+will define the ACL mutation that will be done on each object. An ACL mutation allows converting source
+user id to a destination id.
+
+* ``type`` (id | email | uri)
+
+ACL type: ``id`` defines user id, ``email`` defines user by email, and ``uri`` defines user by ``uri`` (group).
+
+* ``source_id`` (string)
+
+ID of user in the source zone.
+
+* ``dest_id`` (string)
+
+ID of user in the destination.
+
+* ``target_path`` (string)
+
+A string that defines how the target path is created. The target path specifies a prefix to which
+the source object name is appended. The configurable target path can include any of the following
+variables:
+
+- ``sid``: unique string that represents the sync instance ID
+- ``zonegroup``: the zonegroup name
+- ``zonegroup_id``: the zonegroup ID
+- ``zone``: the zone name
+- ``zone_id``: the zone id
+- ``bucket``: source bucket name
+- ``owner``: source bucket owner ID
+
+For example: ``target_path = rgwx-${zone}-${sid}/${owner}/${bucket}``
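
The expansion itself happens inside RGW, but its effect can be sketched with Python's ``string.Template``, which uses the same ``${var}`` syntax (the values below are hypothetical):

```python
from string import Template

def expand_target_path(target_path, **variables):
    # Substitute the ${sid}, ${zone}, ${owner}, ${bucket}, ... variables.
    return Template(target_path).substitute(variables)

path = expand_target_path(
    "rgwx-${zone}-${sid}/${owner}/${bucket}",
    zone="us-east", sid="c0ffee", owner="alice", bucket="photos",
)
# path == "rgwx-us-east-c0ffee/alice/photos"
```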
+
+
+* ``acl_profiles`` (array)
+
+An array of ``acl_profile``.
+
+* ``acl_profile`` (container)
+
+Each profile contains ``acls_id`` (string) that represents the profile, and ``acls`` array that
+holds a list of ``acl_mappings``.
+
+* ``profiles`` (array)
+
+A list of profiles. Each profile contains the following:
+
+- ``source_bucket``: either a bucket name, or a bucket prefix (if ends with ``*``) that defines the source bucket(s) for this profile
+- ``target_path``: as defined above
+- ``connection_id``: ID of the connection that will be used for this profile
+- ``acls_id``: ID of ACLs profile that will be used for this profile
+
+
+S3 Specific Configurables:
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Currently cloud sync will only work with backends that are compatible with AWS S3. There are
+a few configurables that can be used to tweak its behavior when accessing these cloud services:
+
+::
+
+ {
+ "multipart_sync_threshold": {object_size},
+ "multipart_min_part_size": {part_size}
+ }
+
+
+* ``multipart_sync_threshold`` (integer)
+
+Objects this size or larger will be synced to the cloud using multipart upload.
+
+* ``multipart_min_part_size`` (integer)
+
+Minimum part size to use when syncing objects using multipart upload.
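
The interplay of the two settings can be sketched as follows. This is illustrative logic only, not RGW's internal implementation:

```python
import math

def plan_sync(object_size, multipart_sync_threshold, multipart_min_part_size):
    """Return the number of upload parts for an object of the given size.

    Objects below the threshold go up as a single PUT; larger objects
    are split into parts of at least multipart_min_part_size bytes.
    A real implementation must also respect S3's 10,000-part limit.
    """
    if object_size < multipart_sync_threshold:
        return 1
    return max(1, math.ceil(object_size / multipart_min_part_size))

# With 32 MiB for both settings, a 100 MiB object becomes 4 parts.
parts = plan_sync(100 * 1024 * 1024, 32 * 1024 * 1024, 32 * 1024 * 1024)
```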
+
+
+How to Configure
+~~~~~~~~~~~~~~~~
+
+See :ref:`multisite` for multisite configuration instructions. The cloud sync module requires the creation of a new zone. The zone
+tier type needs to be defined as ``cloud``:
+
+::
+
+ # radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
+ --rgw-zone={zone-name} \
+ --endpoints={http://fqdn}[,{http://fqdn}]
+ --tier-type=cloud
+
+
+The tier configuration can then be done using the following command
+
+::
+
+ # radosgw-admin zone modify --rgw-zonegroup={zone-group-name} \
+ --rgw-zone={zone-name} \
+ --tier-config={key}={val}[,{key}={val}]
+
+The ``key`` in the configuration specifies the config variable that needs to be updated, and
+the ``val`` specifies its new value. Nested values can be accessed using a period. For example:
+
+::
+
+ # radosgw-admin zone modify --rgw-zonegroup={zone-group-name} \
+ --rgw-zone={zone-name} \
+ --tier-config=connection.access_key={key},connection.secret={secret}
+
+
+Configuration array entries can be accessed by enclosing the specific entry to be referenced in
+square brackets, and a new array entry can be added by using ``[]``. An index value of ``-1`` references
+the last entry in the array. At the moment it is not possible to create a new entry and reference it
+again in the same command.
+For example, creating a new profile for buckets starting with {prefix}:
+
+::
+
+ # radosgw-admin zone modify --rgw-zonegroup={zone-group-name} \
+ --rgw-zone={zone-name} \
+ --tier-config=profiles[].source_bucket={prefix}'*'
+
+ # radosgw-admin zone modify --rgw-zonegroup={zone-group-name} \
+ --rgw-zone={zone-name} \
+ --tier-config=profiles[-1].connection_id={conn_id},profiles[-1].acls_id={acls_id}
+
+
+An entry can be removed by using ``--tier-config-rm={key}``.
diff --git a/doc/radosgw/cloud-transition.rst b/doc/radosgw/cloud-transition.rst
new file mode 100644
index 000000000..c00ad790b
--- /dev/null
+++ b/doc/radosgw/cloud-transition.rst
@@ -0,0 +1,368 @@
+================
+Cloud Transition
+================
+
+This feature enables data transition to a remote cloud service as part of `Lifecycle Configuration <https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html>`__ via :ref:`storage_classes`. The transition is unidirectional; data cannot be transitioned back from the remote zone. The goal of this feature is to enable data transition to multiple cloud providers. The currently supported cloud providers are those that are compatible with AWS (S3).
+
+A special storage class of tier type ``cloud-s3`` is used to configure the remote cloud S3 object store service to which the data needs to be transitioned. These storage classes are defined in terms of zonegroup placement targets and, unlike regular storage classes, do not need a data pool.
+
+User credentials for the remote cloud object store service need to be configured. Note that source ACLs will not
+be preserved. It is possible to map permissions of specific source users to specific destination users.
+
+
+Cloud Storage Class Configuration
+---------------------------------
+
+::
+
+ {
+ "access_key": <access>,
+ "secret": <secret>,
+ "endpoint": <endpoint>,
+ "region": <region>,
+ "host_style": <path | virtual>,
+ "acls": [ { "type": <id | email | uri>,
+ "source_id": <source_id>,
+ "dest_id": <dest_id> } ... ],
+ "target_path": <target_path>,
+ "target_storage_class": <target-storage-class>,
+ "multipart_sync_threshold": {object_size},
+ "multipart_min_part_size": {part_size},
+ "retain_head_object": <true | false>
+ }
+
+
+Cloud Transition Specific Configurables:
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+* ``access_key`` (string)
+
+The remote cloud S3 access key that will be used for a specific connection.
+
+* ``secret`` (string)
+
+The secret key for the remote cloud S3 service.
+
+* ``endpoint`` (string)
+
+URL of remote cloud S3 service endpoint.
+
+* ``region`` (string)
+
+The remote cloud S3 service region name.
+
+* ``host_style`` (path | virtual)
+
+Type of host style to be used when accessing remote cloud S3 endpoint (default: ``path``).
+
+* ``acls`` (array)
+
+Contains a list of ``acl_mappings``.
+
+* ``acl_mapping`` (container)
+
+Each ``acl_mapping`` structure contains ``type``, ``source_id``, and ``dest_id``. These
+will define the ACL mutation that will be done on each object. An ACL mutation allows converting source
+user id to a destination id.
+
+* ``type`` (id | email | uri)
+
+ACL type: ``id`` defines user id, ``email`` defines user by email, and ``uri`` defines user by ``uri`` (group).
+
+* ``source_id`` (string)
+
+ID of user in the source zone.
+
+* ``dest_id`` (string)
+
+ID of user in the destination.
+
+* ``target_path`` (string)
+
+A string that defines how the target path is created. The target path specifies a prefix to which
+the source 'bucket-name/object-name' is appended. If not specified, the target path created is "rgwx-${zonegroup}-${storage-class}-cloud-bucket".
+
+For example: ``target_path = rgwx-archive-${zonegroup}/``
+
+* ``target_storage_class`` (string)
+
+A string that defines the target storage class to which the object transitions. If not specified, the object is transitioned to the STANDARD storage class.
+
+* ``retain_head_object`` (true | false)
+
+If true, retains the metadata of the object transitioned to cloud. If false (default), the object is deleted post transition.
+This option is ignored for current versioned objects. For more details, refer to the "Versioned Objects" section below.
+
+
+S3 Specific Configurables:
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Currently cloud transition will only work with backends that are compatible with AWS S3. There are
+a few configurables that can be used to tweak its behavior when accessing these cloud services:
+
+::
+
+ {
+ "multipart_sync_threshold": {object_size},
+ "multipart_min_part_size": {part_size}
+ }
+
+
+* ``multipart_sync_threshold`` (integer)
+
+Objects this size or larger will be transitioned to the cloud using multipart upload.
+
+* ``multipart_min_part_size`` (integer)
+
+Minimum part size to use when transitioning objects using multipart upload.
+
+
+How to Configure
+~~~~~~~~~~~~~~~~
+
+See :ref:`adding_a_storage_class` for how to configure a storage class for a zonegroup. Cloud transition requires the creation of a special storage class with its tier type defined as ``cloud-s3``.
+
+.. note:: If you have not done any previous `Multisite Configuration`_,
+ a ``default`` zone and zonegroup are created for you, and changes
+ to the zone/zonegroup will not take effect until the Ceph Object
+ Gateways are restarted. If you have created a realm for multisite,
+ the zone/zonegroup changes will take effect once the changes are
+ committed with ``radosgw-admin period update --commit``.
+
+::
+
+ # radosgw-admin zonegroup placement add --rgw-zonegroup={zone-group-name} \
+ --placement-id={placement-id} \
+ --storage-class={storage-class-name} \
+ --tier-type=cloud-s3
+
+For example:
+
+::
+
+ # radosgw-admin zonegroup placement add --rgw-zonegroup=default \
+ --placement-id=default-placement \
+ --storage-class=CLOUDTIER --tier-type=cloud-s3
+ [
+ {
+ "key": "default-placement",
+ "val": {
+ "name": "default-placement",
+ "tags": [],
+ "storage_classes": [
+ "CLOUDTIER",
+ "STANDARD"
+ ],
+ "tier_targets": [
+ {
+ "key": "CLOUDTIER",
+ "val": {
+ "tier_type": "cloud-s3",
+ "storage_class": "CLOUDTIER",
+ "retain_head_object": "false",
+ "s3": {
+ "endpoint": "",
+ "access_key": "",
+ "secret": "",
+ "host_style": "path",
+ "target_storage_class": "",
+ "target_path": "",
+ "acl_mappings": [],
+ "multipart_sync_threshold": 33554432,
+ "multipart_min_part_size": 33554432
+ }
+ }
+ }
+ ]
+ }
+ }
+ ]
+
+
+.. note:: Once a storage class is created with ``--tier-type=cloud-s3``, it cannot later be modified to any other storage class type.
+
+The tier configuration can then be done using the following command
+
+::
+
+ # radosgw-admin zonegroup placement modify --rgw-zonegroup={zone-group-name} \
+ --placement-id={placement-id} \
+ --storage-class={storage-class-name} \
+ --tier-config={key}={val}[,{key}={val}]
+
+The ``key`` in the configuration specifies the config variable that needs to be updated, and
+the ``val`` specifies its new value.
+
+
+For example:
+
+::
+
+ # radosgw-admin zonegroup placement modify --rgw-zonegroup default \
+ --placement-id default-placement \
+ --storage-class CLOUDTIER \
+ --tier-config=endpoint=http://XX.XX.XX.XX:YY,\
+ access_key=<access_key>,secret=<secret>, \
+ multipart_sync_threshold=44432, \
+ multipart_min_part_size=44432, \
+ retain_head_object=true
+
+Nested values can be accessed using a period. For example:
+
+::
+
+ # radosgw-admin zonegroup placement modify --rgw-zonegroup={zone-group-name} \
+ --placement-id={placement-id} \
+ --storage-class={storage-class-name} \
+ --tier-config=acls.source_id=${source-id}, \
+ acls.dest_id=${dest-id}
+
+
+
+Configuration array entries can be accessed by enclosing the specific entry to be referenced in
+square brackets, and a new array entry can be added by using ``[]``.
+For example, creating a new acl array entry:
+
+::
+
+ # radosgw-admin zonegroup placement modify --rgw-zonegroup={zone-group-name} \
+ --placement-id={placement-id} \
+ --storage-class={storage-class-name} \
+ --tier-config=acls[].source_id=${source-id}, \
+ acls[${source-id}].dest_id=${dest-id}, \
+ acls[${source-id}].type=email
+
+An entry can be removed by using ``--tier-config-rm={key}``.
+
+For example,
+
+::
+
+ # radosgw-admin zonegroup placement modify --rgw-zonegroup default \
+ --placement-id default-placement \
+ --storage-class CLOUDTIER \
+ --tier-config-rm=acls.source_id=testid
+
+ # radosgw-admin zonegroup placement modify --rgw-zonegroup default \
+ --placement-id default-placement \
+ --storage-class CLOUDTIER \
+ --tier-config-rm=target_path
+
+The storage class can be removed using the following command
+
+::
+
+ # radosgw-admin zonegroup placement rm --rgw-zonegroup={zone-group-name} \
+ --placement-id={placement-id} \
+ --storage-class={storage-class-name}
+
+For example,
+
+::
+
+ # radosgw-admin zonegroup placement rm --rgw-zonegroup default \
+ --placement-id default-placement \
+ --storage-class CLOUDTIER
+ [
+ {
+ "key": "default-placement",
+ "val": {
+ "name": "default-placement",
+ "tags": [],
+ "storage_classes": [
+ "STANDARD"
+ ]
+ }
+ }
+ ]
+
+Object modification & Limitations
+----------------------------------
+
+The cloud storage class once configured can then be used like any other storage class in the bucket lifecycle rules. For example,
+
+::
+
+ <Transition>
+ <StorageClass>CLOUDTIER</StorageClass>
+ ....
+ ....
+ </Transition>
+
+
+Since the transition is unidirectional, while configuring S3 lifecycle rules, the cloud storage class should be specified last among all the storage classes the object transitions to. Subsequent rules (if any) do not apply post transition to the cloud.
+
+Due to API limitations there is no way to preserve the original object modification time and ETag, but they are stored as metadata attributes on the destination objects, as shown below:
+
+::
+
+ x-amz-meta-rgwx-source: rgw
+ x-amz-meta-rgwx-source-etag: ed076287532e86365e841e92bfc50d8c
+ x-amz-meta-rgwx-source-key: lc.txt
+ x-amz-meta-rgwx-source-mtime: 1608546349.757100363
+ x-amz-meta-rgwx-versioned-epoch: 0
+
+To allow some cloud services to detect the source and map the user-defined 'x-amz-meta-' attributes, the two additional attributes below are added to the objects being transitioned
+
+::
+
+ x-rgw-cloud : true/false
+ (set to "true", by default, if the object is being transitioned from RGW)
+
+ x-rgw-cloud-keep-attrs : true/false
+ (if set to default value "true", the cloud service should map and store all the x-amz-meta-* attributes. If it cannot, then the operation should fail.
+ if set to "false", the cloud service can ignore such attributes and just store the object data being sent.)
+
+
+By default, the source object is deleted post transition. It is possible to retain its metadata, with updated values (such as storage-class and object-size), by setting the config option 'retain_head_object' to true. However, GET on those objects will still fail with an 'InvalidObjectState' error.
+
+For example,
+::
+
+ # s3cmd info s3://bucket/lc.txt
+ s3://bucket/lc.txt (object):
+ File size: 12
+ Last mod: Mon, 21 Dec 2020 10:25:56 GMT
+ MIME type: text/plain
+ Storage: CLOUDTIER
+ MD5 sum: ed076287532e86365e841e92bfc50d8c
+ SSE: none
+ Policy: none
+ CORS: none
+ ACL: M. Tester: FULL_CONTROL
+ x-amz-meta-s3cmd-attrs: atime:1608466266/ctime:1597606156/gid:0/gname:root/md5:ed076287532e86365e841e92bfc50d8c/mode:33188/mtime:1597605793/uid:0/uname:root
+
+ # s3cmd get s3://bucket/lc.txt lc_restore.txt
+ download: 's3://bucket/lc.txt' -> 'lc_restore.txt' [1 of 1]
+ ERROR: S3 error: 403 (InvalidObjectState)
+
+To avoid object name collisions across various buckets, the source bucket name is prepended to the target object name. If the object is versioned, the object version id is appended to the end.
+
+Below is the sample object name format:
+::
+
+ s3://<target_path>/<source_bucket_name>/<source_object_name>(-<source_object_version_id>)
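
A sketch of composing that name in Python (illustrative only; the sample values are hypothetical):

```python
def cloud_object_name(target_path, bucket, key, version_id=None):
    # <target_path>/<source_bucket_name>/<source_object_name>,
    # with "-<version_id>" appended for versioned objects.
    name = f"{target_path}/{bucket}/{key}"
    if version_id:
        name += f"-{version_id}"
    return name

name = cloud_object_name("rgwx-archive-default", "bucket", "lc.txt")
```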
+
+
+Versioned Objects
+~~~~~~~~~~~~~~~~~
+
+For versioned and locked objects, semantics similar to those of LifecycleExpiration apply, as stated below.
+
+* If the object is current, post transitioning to cloud, it is made noncurrent and a delete marker is created.
+
+* If the object is noncurrent and is locked, its transition is skipped.
+
+
+Future Work
+-----------
+
+* Send presigned redirect or read-through the objects transitioned to cloud
+
+* Support s3:RestoreObject operation on cloud transitioned objects.
+
+* Federation between RGW and Cloud services.
+
+* Support transition to other cloud providers (like Azure).
+
+.. _`Multisite Configuration`: ../multisite
diff --git a/doc/radosgw/compression.rst b/doc/radosgw/compression.rst
new file mode 100644
index 000000000..fba0681da
--- /dev/null
+++ b/doc/radosgw/compression.rst
@@ -0,0 +1,91 @@
+===========
+Compression
+===========
+
+.. versionadded:: Kraken
+
+The Ceph Object Gateway supports server-side compression of uploaded objects,
+using any of Ceph's existing compression plugins.
+
+.. note:: The Reef release added a :ref:`feature_compress_encrypted` zonegroup
+ feature to enable compression with `Server-Side Encryption`_.
+
+
+Configuration
+=============
+
+Compression can be enabled on a storage class in the Zone's placement target
+by providing the ``--compression=<type>`` option to the command
+``radosgw-admin zone placement modify``.
+
+The compression ``type`` refers to the name of the compression plugin to use
+when writing new object data. Each compressed object remembers which plugin
+was used, so changing this setting does not hinder the ability to decompress
+existing objects, nor does it force existing objects to be recompressed.
+
+This compression setting applies to all new objects uploaded to buckets using
+this placement target. Compression can be disabled by setting the ``type`` to
+an empty string or ``none``.
+
+For example::
+
+ $ radosgw-admin zone placement modify \
+ --rgw-zone default \
+ --placement-id default-placement \
+ --storage-class STANDARD \
+ --compression zlib
+ {
+ ...
+ "placement_pools": [
+ {
+ "key": "default-placement",
+ "val": {
+ "index_pool": "default.rgw.buckets.index",
+ "storage_classes": {
+ "STANDARD": {
+ "data_pool": "default.rgw.buckets.data",
+ "compression_type": "zlib"
+ }
+ },
+ "data_extra_pool": "default.rgw.buckets.non-ec",
+ "index_type": 0,
+ }
+ }
+ ],
+ ...
+ }
+
+.. note:: A ``default`` zone is created for you if you have not done any
+ previous `Multisite Configuration`_.
+
+
+Statistics
+==========
+
+While all existing commands and APIs continue to report object and bucket
+sizes based on their uncompressed data, compression statistics for a given bucket
+are included in its ``bucket stats``::
+
+ $ radosgw-admin bucket stats --bucket=<name>
+ {
+ ...
+ "usage": {
+ "rgw.main": {
+ "size": 1075028,
+ "size_actual": 1331200,
+ "size_utilized": 592035,
+ "size_kb": 1050,
+ "size_kb_actual": 1300,
+ "size_kb_utilized": 579,
+ "num_objects": 104
+ }
+ },
+ ...
+ }
+
+The ``size_utilized`` and ``size_kb_utilized`` fields represent the total
+size of compressed data, in bytes and kilobytes respectively.
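
From those fields the compression ratio can be computed directly; using the sample numbers above:

```python
# Figures from the sample `bucket stats` output above.
size = 1075028          # logical (uncompressed) bytes
size_utilized = 592035  # stored (compressed) bytes

ratio = size_utilized / size
savings_pct = (1 - ratio) * 100
# The sample bucket compresses to roughly 55% of its logical size,
# i.e. about a 45% space saving.
```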
+
+
+.. _`Server-Side Encryption`: ../encryption
+.. _`Multisite Configuration`: ../multisite
diff --git a/doc/radosgw/config-ref.rst b/doc/radosgw/config-ref.rst
new file mode 100644
index 000000000..916ff4ff5
--- /dev/null
+++ b/doc/radosgw/config-ref.rst
@@ -0,0 +1,301 @@
+======================================
+ Ceph Object Gateway Config Reference
+======================================
+
+The following settings may be added to the Ceph configuration file (usually
+``ceph.conf``) under the ``[client.radosgw.{instance-name}]`` section. Each
+setting has a default value; if a setting is not specified in the Ceph
+configuration file, its default value is used.
+
+Configuration variables set under the ``[client.radosgw.{instance-name}]``
+section will not apply to rgw or radosgw-admin commands without an instance-name
+specified in the command. Thus variables meant to be applied to all RGW
+instances or all radosgw-admin options can be put into the ``[global]`` or the
+``[client]`` section to avoid specifying ``instance-name``.
+
+.. confval:: rgw_frontends
+.. confval:: rgw_data
+.. confval:: rgw_enable_apis
+.. confval:: rgw_cache_enabled
+.. confval:: rgw_cache_lru_size
+.. confval:: rgw_dns_name
+.. confval:: rgw_script_uri
+.. confval:: rgw_request_uri
+.. confval:: rgw_print_continue
+.. confval:: rgw_remote_addr_param
+.. confval:: rgw_op_thread_timeout
+.. confval:: rgw_op_thread_suicide_timeout
+.. confval:: rgw_thread_pool_size
+.. confval:: rgw_num_control_oids
+.. confval:: rgw_init_timeout
+.. confval:: rgw_mime_types_file
+.. confval:: rgw_s3_success_create_obj_status
+.. confval:: rgw_resolve_cname
+.. confval:: rgw_obj_stripe_size
+.. confval:: rgw_extended_http_attrs
+.. confval:: rgw_exit_timeout_secs
+.. confval:: rgw_get_obj_window_size
+.. confval:: rgw_get_obj_max_req_size
+.. confval:: rgw_multipart_min_part_size
+.. confval:: rgw_relaxed_s3_bucket_names
+.. confval:: rgw_list_buckets_max_chunk
+.. confval:: rgw_override_bucket_index_max_shards
+.. confval:: rgw_curl_wait_timeout_ms
+.. confval:: rgw_copy_obj_progress
+.. confval:: rgw_copy_obj_progress_every_bytes
+.. confval:: rgw_max_copy_obj_concurrent_io
+.. confval:: rgw_admin_entry
+.. confval:: rgw_content_length_compat
+.. confval:: rgw_bucket_quota_ttl
+.. confval:: rgw_user_quota_bucket_sync_interval
+.. confval:: rgw_user_quota_sync_interval
+.. confval:: rgw_bucket_default_quota_max_objects
+.. confval:: rgw_bucket_default_quota_max_size
+.. confval:: rgw_user_default_quota_max_objects
+.. confval:: rgw_user_default_quota_max_size
+.. confval:: rgw_verify_ssl
+.. confval:: rgw_max_chunk_size
+
+Lifecycle Settings
+==================
+
+Bucket Lifecycle configuration can be used to manage your objects so they are
+stored effectively throughout their lifetime. Before the Nautilus release,
+lifecycle processing was rate-limited by its single-threaded design. As of
+Nautilus, the Ceph Object Gateway processes bucket lifecycles in parallel
+threads, spread across additional Ceph Object Gateway instances, and enumerates
+index shards in a random order instead of in order.
+
+There are two options in particular to look at when looking to increase the
+aggressiveness of lifecycle processing:
+
+.. confval:: rgw_lc_max_worker
+.. confval:: rgw_lc_max_wp_worker
+
+These values can be tuned to your specific workload to further increase the
+aggressiveness of lifecycle processing. For a workload with a large number of
+buckets (thousands), consider increasing :confval:`rgw_lc_max_worker` from its
+default value of 3, whereas for a workload with a smaller number of buckets but
+a higher number of objects (hundreds of thousands) per bucket, consider
+increasing :confval:`rgw_lc_max_wp_worker` from its default value of 3.
+
+.. note:: Before increasing either of these values, validate current cluster
+   performance and Ceph Object Gateway utilization.
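For example, a hypothetical ``ceph.conf`` fragment raising both values for one instance (the instance name and values are illustrative, not recommendations):

```ini
[client.radosgw.gateway1]
rgw_lc_max_worker = 5      ; lifecycle worker threads (default 3)
rgw_lc_max_wp_worker = 5   ; work-pool threads per worker (default 3)
```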
+
+Garbage Collection Settings
+===========================
+
+The Ceph Object Gateway allocates storage for new objects immediately.
+
+The Ceph Object Gateway purges the storage space used for deleted and overwritten
+objects in the Ceph Storage cluster some time after the gateway deletes the
+objects from the bucket index. The process of purging the deleted object data
+from the Ceph Storage cluster is known as Garbage Collection or GC.
+
+To view the queue of objects awaiting garbage collection, run the following command:
+
+.. prompt:: bash $
+
+ radosgw-admin gc list
+
+.. note:: Specify ``--include-all`` to list all entries, including unexpired
+ Garbage Collection objects.
+
+Garbage collection is a background activity that may
+execute continuously or during times of low loads, depending upon how the
+administrator configures the Ceph Object Gateway. By default, the Ceph Object
+Gateway conducts GC operations continuously. Since GC operations are a normal
+part of Ceph Object Gateway operations, especially with object delete
+operations, objects eligible for garbage collection exist most of the time.
+
+Some workloads may temporarily or permanently outpace the rate of garbage
+collection activity. This is especially true of delete-heavy workloads, where
+many objects get stored for a short period of time and then deleted. For these
+types of workloads, administrators can increase the priority of garbage
+collection operations relative to other operations with the following
+configuration parameters.
+
+.. confval:: rgw_gc_max_objs
+.. confval:: rgw_gc_obj_min_wait
+.. confval:: rgw_gc_processor_max_time
+.. confval:: rgw_gc_processor_period
+.. confval:: rgw_gc_max_concurrent_io
+
+Tuning Garbage Collection for Delete-Heavy Workloads
+----------------------------------------------------
+
+As an initial step toward making garbage collection more aggressive, consider
+increasing the following options from their default values::
+
+  rgw_gc_max_concurrent_io = 20
+  rgw_gc_max_trim_chunk = 64
+
+.. note:: Modifying these values requires a restart of the RGW service.
+
+After increasing these values, monitor cluster performance during garbage
+collection to verify that the change has not introduced any adverse effects.
+
+Multisite Settings
+==================
+
+.. versionadded:: Jewel
+
+You may include the following settings in your Ceph configuration
+file under each ``[client.radosgw.{instance-name}]`` instance.
+
+.. confval:: rgw_zone
+.. confval:: rgw_zonegroup
+.. confval:: rgw_realm
+.. confval:: rgw_run_sync_thread
+.. confval:: rgw_data_log_window
+.. confval:: rgw_data_log_changes_size
+.. confval:: rgw_data_log_obj_prefix
+.. confval:: rgw_data_log_num_shards
+.. confval:: rgw_md_log_max_shards
+.. confval:: rgw_data_sync_poll_interval
+.. confval:: rgw_meta_sync_poll_interval
+.. confval:: rgw_bucket_sync_spawn_window
+.. confval:: rgw_data_sync_spawn_window
+.. confval:: rgw_meta_sync_spawn_window
+
+.. important:: The values of :confval:`rgw_data_log_num_shards` and
+ :confval:`rgw_md_log_max_shards` should not be changed after sync has
+ started.
+
+S3 Settings
+===========
+
+.. confval:: rgw_s3_auth_use_ldap
+
+Swift Settings
+==============
+
+.. confval:: rgw_enforce_swift_acls
+.. confval:: rgw_swift_tenant_name
+.. confval:: rgw_swift_token_expiration
+.. confval:: rgw_swift_url
+.. confval:: rgw_swift_url_prefix
+.. confval:: rgw_swift_auth_url
+.. confval:: rgw_swift_auth_entry
+.. confval:: rgw_swift_account_in_url
+.. confval:: rgw_swift_versioning_enabled
+.. confval:: rgw_trust_forwarded_https
+
+Logging Settings
+================
+
+.. confval:: rgw_log_nonexistent_bucket
+.. confval:: rgw_log_object_name
+.. confval:: rgw_log_object_name_utc
+.. confval:: rgw_usage_max_shards
+.. confval:: rgw_usage_max_user_shards
+.. confval:: rgw_enable_ops_log
+.. confval:: rgw_enable_usage_log
+.. confval:: rgw_ops_log_rados
+.. confval:: rgw_ops_log_socket_path
+.. confval:: rgw_ops_log_data_backlog
+.. confval:: rgw_usage_log_flush_threshold
+.. confval:: rgw_usage_log_tick_interval
+.. confval:: rgw_log_http_headers
+
+Keystone Settings
+=================
+
+.. confval:: rgw_keystone_url
+.. confval:: rgw_keystone_api_version
+.. confval:: rgw_keystone_admin_domain
+.. confval:: rgw_keystone_admin_project
+.. confval:: rgw_keystone_admin_token
+.. confval:: rgw_keystone_admin_token_path
+.. confval:: rgw_keystone_admin_tenant
+.. confval:: rgw_keystone_admin_user
+.. confval:: rgw_keystone_admin_password
+.. confval:: rgw_keystone_admin_password_path
+.. confval:: rgw_keystone_accepted_roles
+.. confval:: rgw_keystone_token_cache_size
+.. confval:: rgw_keystone_verify_ssl
+.. confval:: rgw_keystone_service_token_enabled
+.. confval:: rgw_keystone_service_token_accepted_roles
+.. confval:: rgw_keystone_expired_token_cache_expiration
+
+Server-side encryption Settings
+===============================
+
+.. confval:: rgw_crypt_s3_kms_backend
+
+Barbican Settings
+=================
+
+.. confval:: rgw_barbican_url
+.. confval:: rgw_keystone_barbican_user
+.. confval:: rgw_keystone_barbican_password
+.. confval:: rgw_keystone_barbican_tenant
+.. confval:: rgw_keystone_barbican_project
+.. confval:: rgw_keystone_barbican_domain
+
+HashiCorp Vault Settings
+========================
+
+.. confval:: rgw_crypt_vault_auth
+.. confval:: rgw_crypt_vault_token_file
+.. confval:: rgw_crypt_vault_addr
+.. confval:: rgw_crypt_vault_prefix
+.. confval:: rgw_crypt_vault_secret_engine
+.. confval:: rgw_crypt_vault_namespace
+
+SSE-S3 Settings
+===============
+
+.. confval:: rgw_crypt_sse_s3_backend
+.. confval:: rgw_crypt_sse_s3_vault_secret_engine
+.. confval:: rgw_crypt_sse_s3_key_template
+.. confval:: rgw_crypt_sse_s3_vault_auth
+.. confval:: rgw_crypt_sse_s3_vault_token_file
+.. confval:: rgw_crypt_sse_s3_vault_addr
+.. confval:: rgw_crypt_sse_s3_vault_prefix
+.. confval:: rgw_crypt_sse_s3_vault_namespace
+.. confval:: rgw_crypt_sse_s3_vault_verify_ssl
+.. confval:: rgw_crypt_sse_s3_vault_ssl_cacert
+.. confval:: rgw_crypt_sse_s3_vault_ssl_clientcert
+.. confval:: rgw_crypt_sse_s3_vault_ssl_clientkey
+
+
+QoS settings
+------------
+
+.. versionadded:: Nautilus
+
+The ``civetweb`` frontend uses a thread per connection, so the number of
+connections it accepts is automatically throttled by the
+:confval:`rgw_thread_pool_size` configurable. The newer ``beast`` frontend is
+not restricted by the thread pool size when accepting new connections, so a
+scheduler abstraction was introduced in the Nautilus release to support future
+methods of scheduling requests.
+
+Currently the scheduler defaults to a throttler that limits the number of
+active connections to a configured value. QoS based on mClock is currently in
+an *experimental* phase and is not yet recommended for production. The current
+implementation of the *dmclock_client* op queue divides RGW ops into admin,
+auth (Swift auth, STS), metadata, and data requests.
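For example, under the ``beast`` frontend the default throttler's connection limit could be adjusted in ``ceph.conf`` (the instance name and value are illustrative):

```ini
[client.radosgw.gateway1]
rgw_max_concurrent_requests = 2048   ; default 1024
```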
+
+
+.. confval:: rgw_max_concurrent_requests
+.. confval:: rgw_scheduler_type
+.. confval:: rgw_dmclock_auth_res
+.. confval:: rgw_dmclock_auth_wgt
+.. confval:: rgw_dmclock_auth_lim
+.. confval:: rgw_dmclock_admin_res
+.. confval:: rgw_dmclock_admin_wgt
+.. confval:: rgw_dmclock_admin_lim
+.. confval:: rgw_dmclock_data_res
+.. confval:: rgw_dmclock_data_wgt
+.. confval:: rgw_dmclock_data_lim
+.. confval:: rgw_dmclock_metadata_res
+.. confval:: rgw_dmclock_metadata_wgt
+.. confval:: rgw_dmclock_metadata_lim
+
+.. _Architecture: ../../architecture#data-striping
+.. _Pool Configuration: ../../rados/configuration/pool-pg-config-ref/
+.. _Cluster Pools: ../../rados/operations/pools
+.. _Rados cluster handles: ../../rados/api/librados-intro/#step-2-configuring-a-cluster-handle
+.. _Barbican: ../barbican
+.. _Encryption: ../encryption
+.. _HTTP Frontends: ../frontends
diff --git a/doc/radosgw/d3n_datacache.rst b/doc/radosgw/d3n_datacache.rst
new file mode 100644
index 000000000..12d2850a5
--- /dev/null
+++ b/doc/radosgw/d3n_datacache.rst
@@ -0,0 +1,116 @@
+==================
+D3N RGW Data Cache
+==================
+
+.. contents::
+
+Datacenter-Data-Delivery Network (D3N) uses high-speed storage such as NVMe flash or DRAM to cache
+datasets on the access side.
+Such caching allows big data jobs to use the compute and fast storage resources available on each
+Rados Gateway node at the edge.
+
+Many datacenters include low-cost, centralized storage repositories, called data lakes,
+to store and share terabyte and petabyte-scale datasets.
+By necessity most distributed big-data analytic clusters such as Hadoop and Spark must
+depend on accessing a centrally located data lake that is relatively far away.
+Even with a well-designed datacenter network, cluster-to-data-lake bandwidth is
+typically much lower than the bandwidth of solid-state storage located at an edge node.
+
+| D3N improves the performance of big-data jobs running in analysis clusters by speeding up recurring reads from the data lake.
+| The Rados Gateways act as cache servers for the back-end object store (OSDs), storing data locally for reuse.
+
+Architecture
+============
+
+D3N improves the performance of big-data jobs by speeding up reads of
+repeatedly accessed datasets from the data lake.
+Cache servers are located in the datacenter on the access side of potential
+network and storage bottlenecks.
+D3N's two-layer logical cache forms a traditional caching hierarchy :sup:`*`
+in which caches nearer the client have the lowest access latency and overhead,
+while caches at higher levels in the hierarchy are slower (requiring multiple
+hops to access).
+The layer 1 cache server nearest to the client handles object requests by
+breaking them into blocks, returning any blocks that are cached locally, and
+forwarding missed requests to the block's home location (as determined by
+consistent hashing) in the next layer.
+Cache misses are forwarded to successive logical caching layers until a miss
+at the top layer is resolved by a request to the data lake (RADOS).
+
+:sup:`*` Currently only the layer 1 cache has been upstreamed.
+
+See `MOC D3N (Datacenter-scale Data Delivery Network)`_ and `Red Hat Research D3N Cache for Data Centers`_.
+
+Implementation
+==============
+
+- The D3N cache supports both the `S3` and `Swift` object storage interfaces.
+- D3N currently caches only tail objects, because they are immutable (by
+  default, these are the parts of objects larger than 4MB).
+  (The NGINX-based `RGW Data cache and CDN`_ supports caching objects of all
+  sizes.)
+
+
+Requirements
+------------
+
+- An SSD (``/dev/nvme``, ``/dev/pmem``, ``/dev/shm``) or similar block storage
+  device, formatted (filesystems other than XFS have not been tested) and
+  mounted. It will be used as the cache backing store.
+  (Depending on device performance, multiple RGWs may share a single device,
+  but each requires a discrete directory on the device's filesystem.)
+
+Limitations
+-----------
+
+- D3N will not cache objects compressed by `Rados Gateway Compression`_ (OSD level compression is supported).
+- D3N will not cache objects encrypted by `Rados Gateway Encryption`_.
+- D3N will be disabled if the ``rgw_max_chunk_size`` config variable value differs from the ``rgw_obj_stripe_size`` config variable value.
+
+
+D3N Environment Setup
+=====================
+
+Running
+-------
+
+To enable D3N on existing RGWs, the following configuration entries are required
+in each Rados Gateway's ceph.conf client section, for example for ``[client.rgw.8000]``::
+
+ [client.rgw.8000]
+ rgw_d3n_l1_local_datacache_enabled = true
+ rgw_d3n_l1_datacache_persistent_path = "/mnt/nvme0/rgw_datacache/client.rgw.8000/"
+ rgw_d3n_l1_datacache_size = 10737418240
+
+The above example assumes that the cache backing-store solid state device
+is mounted at `/mnt/nvme0` and has `10 GB` of free space available for the cache.
+
+The persistent path directory has to be created before starting the Gateway.
+(``mkdir -p /mnt/nvme0/rgw_datacache/client.rgw.8000/``)
+
+If another Gateway is co-located on the same machine, configure its persistent
+path to a discrete directory; for example, in the case of ``[client.rgw.8001]``,
+configure
+``rgw_d3n_l1_datacache_persistent_path = "/mnt/nvme0/rgw_datacache/client.rgw.8001/"``
+in the ``[client.rgw.8001]`` ceph.conf client section.
+
+When multiple Gateways are co-located, consider assigning clients with
+different workloads to each Gateway directly, without a load balancer, in order
+to avoid duplicating cached data.
+
+.. note:: Each time the Rados Gateway is restarted, the content of the cache
+   directory is purged.
+
+Logs
+----
+- D3N related log lines in `radosgw.*.log` contain the string ``d3n`` (case insensitive).
+- low level D3N logs can be enabled by the ``debug_rgw_datacache`` subsystem (up to ``debug_rgw_datacache=30``)
+
+
+Config Reference
+================
+
+The following D3N-related settings can be added to the Ceph configuration file
+(usually ``ceph.conf``) under the ``[client.rgw.{instance-name}]`` section.
+
+.. confval:: rgw_d3n_l1_local_datacache_enabled
+.. confval:: rgw_d3n_l1_datacache_persistent_path
+.. confval:: rgw_d3n_l1_datacache_size
+.. confval:: rgw_d3n_l1_eviction_policy
+
+
+.. _MOC D3N (Datacenter-scale Data Delivery Network): https://massopen.cloud/research-and-development/cloud-research/d3n/
+.. _Red Hat Research D3N Cache for Data Centers: https://research.redhat.com/blog/research_project/d3n-multilayer-cache/
+.. _Rados Gateway Compression: ../compression/
+.. _Rados Gateway Encryption: ../encryption/
+.. _RGW Data cache and CDN: ../rgw-cache/
diff --git a/doc/radosgw/dynamicresharding.rst b/doc/radosgw/dynamicresharding.rst
new file mode 100644
index 000000000..b8bd68d9e
--- /dev/null
+++ b/doc/radosgw/dynamicresharding.rst
@@ -0,0 +1,238 @@
+.. _rgw_dynamic_bucket_index_resharding:
+
+===================================
+RGW Dynamic Bucket Index Resharding
+===================================
+
+.. versionadded:: Luminous
+
+A large bucket index can lead to performance problems, which can
+be addressed by sharding bucket indexes.
+Until Luminous, changing the number of bucket shards (resharding)
+had to be done offline, with RGW services disabled.
+Since the Luminous release, Ceph has supported online bucket resharding.
+
+Each bucket index shard can handle its entries efficiently up until
+reaching a certain threshold. If this threshold is
+exceeded the system can suffer from performance issues. The dynamic
+resharding feature detects this situation and automatically increases
+the number of shards used by a bucket's index, resulting in a
+reduction of the number of entries in each shard. This
+process is transparent to the user. Writes to the target bucket are briefly
+blocked during the resharding process, but reads are not.
+
+By default dynamic bucket index resharding can only increase the
+number of bucket index shards to 1999, although this upper-bound is a
+configuration parameter (see Configuration below). When
+possible, the process chooses a prime number of shards in order to
+spread the number of entries across the bucket index
+shards more evenly.
+
+Detection of resharding opportunities runs as a background process
+that periodically
+scans all buckets. A bucket that requires resharding is added to
+a queue. A thread runs in the background and processes the queued
+resharding tasks, one at a time and in order.
+
+Multisite
+=========
+
+Prior to the Reef release, the Ceph Object Gateway (RGW) does not support
+dynamic resharding in a
+multisite environment. For information on dynamic resharding, see
+:ref:`Resharding <feature_resharding>` in the RGW multisite documentation.
+
+Configuration
+=============
+
+Enable/Disable dynamic bucket index resharding:
+
+- ``rgw_dynamic_resharding``: true/false, default: true
+
+Configuration options that control the resharding process:
+
+- ``rgw_max_objs_per_shard``: maximum number of objects per bucket index shard before resharding is triggered, default: 100000
+
+- ``rgw_max_dynamic_shards``: maximum number of bucket index shards that dynamic resharding can increase to, default: 1999
+
+- ``rgw_reshard_bucket_lock_duration``: duration, in seconds, that writes to the bucket are locked during resharding, default: 360 (i.e., 6 minutes)
+
+- ``rgw_reshard_thread_interval``: maximum time, in seconds, between rounds of resharding queue processing, default: 600 seconds (i.e., 10 minutes)
+
+- ``rgw_reshard_num_logs``: number of shards for the resharding queue, default: 16
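A hypothetical ``ceph.conf`` fragment overriding some of the defaults above (the instance name and values are illustrative, not recommendations):

```ini
[client.radosgw.gateway1]
rgw_dynamic_resharding = true
rgw_max_objs_per_shard = 50000       ; reshard earlier than the 100000 default
rgw_reshard_thread_interval = 300    ; process the reshard queue more often (default 600)
```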
+
+Admin commands
+==============
+
+Add a bucket to the resharding queue
+------------------------------------
+
+::
+
+ # radosgw-admin reshard add --bucket <bucket_name> --num-shards <new number of shards>
+
+List resharding queue
+---------------------
+
+::
+
+ # radosgw-admin reshard list
+
+Process tasks on the resharding queue
+-------------------------------------
+
+::
+
+ # radosgw-admin reshard process
+
+Bucket resharding status
+------------------------
+
+::
+
+ # radosgw-admin reshard status --bucket <bucket_name>
+
+The output is a JSON array containing one entry per shard, each with three fields: ``reshard_status``, ``new_bucket_instance_id``, and ``num_shards``.
+
+For example, the output at each dynamic resharding stage is shown below:
+
+``1. Before resharding occurred:``
+::
+
+ [
+ {
+ "reshard_status": "not-resharding",
+ "new_bucket_instance_id": "",
+ "num_shards": -1
+ }
+ ]
+
+``2. During resharding:``
+::
+
+ [
+ {
+ "reshard_status": "in-progress",
+ "new_bucket_instance_id": "1179f470-2ebf-4630-8ec3-c9922da887fd.8652.1",
+ "num_shards": 2
+ },
+ {
+ "reshard_status": "in-progress",
+ "new_bucket_instance_id": "1179f470-2ebf-4630-8ec3-c9922da887fd.8652.1",
+ "num_shards": 2
+ }
+ ]
+
+``3. After resharding completed:``
+::
+
+ [
+ {
+ "reshard_status": "not-resharding",
+ "new_bucket_instance_id": "",
+ "num_shards": -1
+ },
+ {
+ "reshard_status": "not-resharding",
+ "new_bucket_instance_id": "",
+ "num_shards": -1
+ }
+ ]
+
+
+Cancel pending bucket resharding
+--------------------------------
+
+.. note:: Bucket resharding operations cannot be cancelled while executing.
+
+::
+
+   # radosgw-admin reshard cancel --bucket <bucket_name>
+
+Manual immediate bucket resharding
+----------------------------------
+
+::
+
+ # radosgw-admin bucket reshard --bucket <bucket_name> --num-shards <new number of shards>
+
+When choosing a number of shards, the administrator must anticipate each
+bucket's peak number of objects. Ideally one should aim for no
+more than 100000 entries per shard at any given time.
+
+Additionally, a prime number of bucket index shards is more effective at
+evenly distributing bucket index entries.
+For example, 7001 bucket index shards is better than 7000,
+since the former is prime. A variety of web sites list prime
+numbers; search for "list of prime numbers" with your favorite
+search engine to locate one.
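The two rules above (about 100000 entries per shard at peak, and a prime shard count) can be sketched in a few lines of Python. This is a hypothetical helper for illustration, not part of Ceph:

```python
def next_prime(n):
    """Return the smallest prime >= n (trial division; fine for shard counts)."""
    def is_prime(k):
        if k < 2:
            return False
        if k % 2 == 0:
            return k == 2
        i = 3
        while i * i <= k:
            if k % i == 0:
                return False
            i += 2
        return True
    while not is_prime(n):
        n += 1
    return n

def suggest_shards(peak_objects, max_objs_per_shard=100000):
    """Smallest prime shard count keeping each shard under the threshold."""
    return next_prime(-(-peak_objects // max_objs_per_shard))  # ceiling division

# e.g. a bucket expected to peak at 1,000,000 objects -> 11 shards
```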
+
+Troubleshooting
+===============
+
+Clusters prior to Luminous 12.2.11 and Mimic 13.2.5 left behind stale bucket
+instance entries, which were not automatically cleaned up. This issue also
+affected lifecycle policies, which were no longer applied to resharded buckets.
+Both of these issues could be worked around by running ``radosgw-admin`` commands.
+
+Stale instance management
+-------------------------
+
+List the stale instances in a cluster that are ready to be cleaned up.
+
+::
+
+ # radosgw-admin reshard stale-instances list
+
+Clean up the stale instances in a cluster. Note: cleanup of these
+instances should only be done on a single-site cluster.
+
+::
+
+ # radosgw-admin reshard stale-instances rm
+
+
+Lifecycle fixes
+---------------
+
+For clusters with resharded instances, it is highly likely that the old
+lifecycle process flagged and deleted lifecycle processing for buckets whose
+instance changed during a reshard. While this is fixed for buckets
+deployed on newer Ceph releases (from Mimic 13.2.6 and Luminous 12.2.12),
+older buckets that had lifecycle policies and that have undergone
+resharding must be fixed manually.
+
+The command to do so is:
+
+::
+
+ # radosgw-admin lc reshard fix --bucket {bucketname}
+
+
+If the ``--bucket`` argument is not provided, this
+command will try to fix lifecycle policies for all the buckets in the cluster.
+
+Object Expirer fixes
+--------------------
+
+Objects subject to Swift object expiration on older clusters may have
+been dropped from the log pool and never deleted after the bucket was
+resharded. This could happen if an object's expiration time was before the
+cluster was upgraded; objects whose expiration was after the upgrade
+are handled correctly. To manage these expire-stale
+objects, ``radosgw-admin`` provides two subcommands.
+
+Listing:
+
+::
+
+ # radosgw-admin objects expire-stale list --bucket {bucketname}
+
+Displays a list of object names and expiration times in JSON format.
+
+Deleting:
+
+::
+
+ # radosgw-admin objects expire-stale rm --bucket {bucketname}
+
+
+Initiates deletion of such objects, displaying a list of object names, expiration times, and deletion status in JSON format.
diff --git a/doc/radosgw/elastic-sync-module.rst b/doc/radosgw/elastic-sync-module.rst
new file mode 100644
index 000000000..60c806e87
--- /dev/null
+++ b/doc/radosgw/elastic-sync-module.rst
@@ -0,0 +1,181 @@
+=========================
+ElasticSearch Sync Module
+=========================
+
+.. versionadded:: Kraken
+
+.. note::
+ As of 31 May 2020, only Elasticsearch 6 and lower are supported. ElasticSearch 7 is not supported.
+
+This sync module writes the metadata from other zones to `ElasticSearch`_. As
+of Luminous, the metadata stored in ElasticSearch is a JSON document of data
+fields like the following:
+
+::
+
+ {
+ "_index" : "rgw-gold-ee5863d6",
+ "_type" : "object",
+ "_id" : "34137443-8592-48d9-8ca7-160255d52ade.34137.1:object1:null",
+ "_score" : 1.0,
+ "_source" : {
+ "bucket" : "testbucket123",
+ "name" : "object1",
+ "instance" : "null",
+ "versioned_epoch" : 0,
+ "owner" : {
+ "id" : "user1",
+ "display_name" : "user1"
+ },
+ "permissions" : [
+ "user1"
+ ],
+ "meta" : {
+ "size" : 712354,
+ "mtime" : "2017-05-04T12:54:16.462Z",
+ "etag" : "7ac66c0f148de9519b8bd264312c4d64"
+ }
+ }
+ }
+
+
+
+ElasticSearch tier type configurables
+-------------------------------------
+
+* ``endpoint``
+
+Specifies the Elasticsearch server endpoint to access.
+
+* ``num_shards`` (integer)
+
+The number of shards that Elasticsearch will be configured with on
+data sync initialization. Note that this cannot be changed after init.
+Any change here requires rebuild of the Elasticsearch index and reinit
+of the data sync process.
+
+* ``num_replicas`` (integer)
+
+The number of the replicas that Elasticsearch will be configured with
+on data sync initialization.
+
+* ``explicit_custom_meta`` (true | false)
+
+Specifies whether all user custom metadata will be indexed, or whether
+the user will need to configure (at the bucket level) which custom
+metadata entries should be indexed. This is false by default.
+
+* ``index_buckets_list`` (comma separated list of strings)
+
+If empty, all buckets will be indexed. Otherwise, only buckets
+specified here will be indexed. It is possible to provide bucket
+prefixes (e.g., foo\*), or bucket suffixes (e.g., \*bar).
+
+* ``approved_owners_list`` (comma separated list of strings)
+
+If empty, buckets of all owners will be indexed (subject to other
+restrictions), otherwise, only buckets owned by specified owners will
+be indexed. Suffixes and prefixes can also be provided.
+
+* ``override_index_path`` (string)
+
+If not empty, this string will be used as the Elasticsearch index
+path. Otherwise, the index path will be determined and generated on
+sync initialization.
+
+
+End user metadata queries
+-------------------------
+
+.. versionadded:: Luminous
+
+Since the ElasticSearch cluster now stores object metadata, it is important
+that the ElasticSearch endpoint is not exposed to the public and is accessible
+only to the cluster administrators. Exposing metadata queries to the end user
+poses a problem: we want users to be able to query only their own metadata, and
+not that of any other users. This would require the ElasticSearch cluster to
+authenticate users in a way similar to what RGW does, which poses a problem.
+
+As of Luminous, RGW in the metadata master zone can service end-user metadata
+requests. This means the Elasticsearch endpoint need not be exposed publicly,
+and it also solves the authentication and authorization problem, since RGW
+itself can authenticate the end-user requests. For this purpose RGW introduces
+a new query in the bucket APIs that can service Elasticsearch requests. All
+such requests must be sent to the metadata master zone.
+
+Syntax
+~~~~~~
+
+Get an elasticsearch query
+``````````````````````````
+
+::
+
+ GET /{bucket}?query={query-expr}
+
+Request parameters:
+
+- ``max-keys``: maximum number of entries to return
+- ``marker``: pagination marker
+
+``expression := [(]<arg> <op> <value> [)][<and|or> ...]``
+
+``op`` is one of the following: ``<``, ``<=``, ``==``, ``>=``, ``>``
+
+For example ::
+
+ GET /?query=name==foo
+
+This will return all the indexed keys that the user has read permission to
+and that are named ``foo``.
+
+The output will be a list of keys in XML that is similar to the S3
+list buckets response.
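Because the query expression is passed as a URL parameter, characters such as ``=`` and ``<`` must be percent-encoded. A minimal sketch of building such a request path in Python (the bucket and field names are hypothetical):

```python
from urllib.parse import quote

def mdsearch_path(bucket, query_expr, max_keys=None, marker=None):
    """Build the request path for an RGW metadata search query."""
    params = [("query", query_expr)]
    if max_keys is not None:
        params.append(("max-keys", str(max_keys)))
    if marker is not None:
        params.append(("marker", marker))
    # percent-encode every parameter value, including the comparison operators
    qs = "&".join("%s=%s" % (k, quote(v, safe="")) for k, v in params)
    return "/%s?%s" % (bucket, qs)

# e.g. mdsearch_path("mybooks", "name==foo") -> "/mybooks?query=name%3D%3Dfoo"
```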
+
+Configure custom metadata fields
+````````````````````````````````
+
+Define which custom metadata entries should be indexed (under the
+specified bucket), and what are the types of these keys. If explicit
+custom metadata indexing is configured, this is needed so that rgw
+will index the specified custom metadata values. Otherwise it is
+needed in cases where the indexed metadata keys are of a type other
+than string.
+
+::
+
+ POST /{bucket}?mdsearch
+ x-amz-meta-search: <key [; type]> [, ...]
+
+Multiple metadata fields must be comma-separated. A type can be forced for a
+field with a ``;``. The currently allowed types are string (the default),
+integer, and date.
+
+For example, to index the custom object metadata ``x-amz-meta-year`` as an
+integer, ``x-amz-meta-release-date`` as a date, and ``x-amz-meta-title`` as a
+string, you would do the following:
+
+::
+
+ POST /mybooks?mdsearch
+ x-amz-meta-search: x-amz-meta-year;int, x-amz-meta-release-date;date, x-amz-meta-title;string
+
+
+Delete custom metadata configuration
+````````````````````````````````````
+
+Delete custom metadata bucket configuration.
+
+::
+
+ DELETE /<bucket>?mdsearch
+
+Get custom metadata configuration
+`````````````````````````````````
+
+Retrieve custom metadata bucket configuration.
+
+::
+
+ GET /<bucket>?mdsearch
+
+
+.. _`Elasticsearch`: https://github.com/elastic/elasticsearch
diff --git a/doc/radosgw/encryption.rst b/doc/radosgw/encryption.rst
new file mode 100644
index 000000000..e30fe1468
--- /dev/null
+++ b/doc/radosgw/encryption.rst
@@ -0,0 +1,96 @@
+==========
+Encryption
+==========
+
+.. versionadded:: Luminous
+
+The Ceph Object Gateway supports server-side encryption of uploaded objects,
+with 3 options for the management of encryption keys. Server-side encryption
+means that the data is sent over HTTP in its unencrypted form, and the Ceph
+Object Gateway stores that data in the Ceph Storage Cluster in encrypted form.
+
+.. note:: Requests for server-side encryption must be sent over a secure HTTPS
+ connection to avoid sending secrets in plaintext. If a proxy is used
+ for SSL termination, ``rgw trust forwarded https`` must be enabled
+ before forwarded requests will be trusted as secure.
+
+.. note:: Server-side encryption keys must be 256 bits long and base64-encoded.
+
+Customer-Provided Keys
+======================
+
+In this mode, the client passes an encryption key along with each request to
+read or write encrypted data. It is the client's responsibility to manage those
+keys and remember which key was used to encrypt each object.
+
+This is implemented in S3 according to the `Amazon SSE-C`_ specification.
+
+As all key management is handled by the client, no special Ceph configuration
+is needed to support this encryption mode.
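As a sketch of what the client sends in this mode: per the SSE-C specification, each request carries the base64-encoded 256-bit key along with a base64-encoded MD5 digest of the raw key, for example:

```python
import base64
import hashlib
import os

def sse_c_headers(key: bytes) -> dict:
    """Build the SSE-C request headers for a client-managed 256-bit key."""
    if len(key) != 32:
        raise ValueError("SSE-C keys must be 256 bits (32 bytes) long")
    return {
        "x-amz-server-side-encryption-customer-algorithm": "AES256",
        "x-amz-server-side-encryption-customer-key": base64.b64encode(key).decode(),
        "x-amz-server-side-encryption-customer-key-MD5": base64.b64encode(
            hashlib.md5(key).digest()
        ).decode(),
    }

headers = sse_c_headers(os.urandom(32))  # the client must keep this key safe
```

The same headers (with the same key) must accompany every later read of the object, since the gateway does not store the key.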
+
+Key Management Service
+======================
+
+In this mode, an administrator stores keys in a secure key management service.
+These keys are then
+retrieved on demand by the Ceph Object Gateway to serve requests to encrypt
+or decrypt data.
+
+This is implemented in S3 according to the `Amazon SSE-KMS`_ specification.
+
+In principle, any key management service could be used here. Currently
+integration with `Barbican`_, `Vault`_, and `KMIP`_ are implemented.
+
+See `OpenStack Barbican Integration`_, `HashiCorp Vault Integration`_,
+and `KMIP Integration`_.
+
+SSE-S3
+======
+
+In this mode, key management is invisible to the user. The keys are still
+stored in Vault, but they are automatically created and deleted by Ceph and
+retrieved as required to serve requests to encrypt or decrypt data.
+
+This is implemented in S3 according to the `Amazon SSE-S3`_ specification.
+
+In principle, any key management service could be used here, but currently
+only integration with `Vault`_ is implemented.
+
+See `HashiCorp Vault Integration`_.
+
+Bucket Encryption APIs
+======================
+
+The Ceph Object Gateway supports Bucket Encryption APIs to enable server-side
+encryption with Amazon S3-managed keys (SSE-S3) or AWS KMS customer master
+keys (SSE-KMS).
+
+See `PutBucketEncryption`_, `GetBucketEncryption`_, `DeleteBucketEncryption`_
+
+Automatic Encryption (for testing only)
+=======================================
+
+A ``rgw crypt default encryption key`` can be set in ceph.conf to force the
+encryption of all objects that do not otherwise specify an encryption mode.
+
+The configuration expects a base64-encoded 256-bit key. For example::
+
+ rgw crypt default encryption key = 4YSmvJtBv0aZ7geVgAsdpRnLBEwWSWlMIGnRS8a9TSA=
+
+.. important:: This mode is for diagnostic purposes only! The ceph configuration
+ file is not a secure method for storing encryption keys. Keys that are
+ accidentally exposed in this way should be considered compromised.
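If a throwaway key is needed for such diagnostic testing, it can be generated with any cryptographically secure random source; for example, a quick sketch in Python:

```python
import base64
import os

# 32 random bytes = 256 bits, base64-encoded as "rgw crypt" expects
key = base64.b64encode(os.urandom(32)).decode()
print("rgw crypt default encryption key = %s" % key)
```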
+
+
+.. _Amazon SSE-C: https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
+.. _Amazon SSE-KMS: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
+.. _Amazon SSE-S3: https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html
+.. _Barbican: https://wiki.openstack.org/wiki/Barbican
+.. _Vault: https://www.vaultproject.io/docs/
+.. _KMIP: http://www.oasis-open.org/committees/kmip/
+.. _PutBucketEncryption: https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutBucketEncryption.html
+.. _GetBucketEncryption: https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetBucketEncryption.html
+.. _DeleteBucketEncryption: https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteBucketEncryption.html
+.. _OpenStack Barbican Integration: ../barbican
+.. _HashiCorp Vault Integration: ../vault
+.. _KMIP Integration: ../kmip
diff --git a/doc/radosgw/frontends.rst b/doc/radosgw/frontends.rst
new file mode 100644
index 000000000..45b29cb6f
--- /dev/null
+++ b/doc/radosgw/frontends.rst
@@ -0,0 +1,163 @@
+.. _rgw_frontends:
+
+==============
+HTTP Frontends
+==============
+
+.. contents::
+
+The Ceph Object Gateway supports two embedded HTTP frontend libraries
+that can be configured with ``rgw_frontends``. See `Config Reference`_
+for details about the syntax.
+
+Beast
+=====
+
+.. versionadded:: Mimic
+
+The ``beast`` frontend uses the Boost.Beast library for HTTP parsing
+and the Boost.Asio library for asynchronous network I/O.
+
+Options
+-------
+
+``port`` and ``ssl_port``
+
+:Description: Sets the IPv4 & IPv6 listening port number. Can be specified
+              multiple times, as in ``port=80 port=8000``.
+:Type: Integer
+:Default: ``80``
+
+
+``endpoint`` and ``ssl_endpoint``
+
+:Description: Sets the listening address in the form ``address[:port]``, where
+              the address is an IPv4 address string in dotted decimal form, or
+              an IPv6 address in hexadecimal notation surrounded by square
+              brackets. Specifying an IPv6 endpoint listens on IPv6 only. The
+              optional port defaults to 80 for ``endpoint`` and 443 for
+              ``ssl_endpoint``. Can be specified multiple times, as in
+              ``endpoint=[::1] endpoint=192.168.0.100:8000``.
+
+:Type: String
+:Default: None
+
+
+``ssl_certificate``
+
+:Description: Path to the SSL certificate file used for SSL-enabled endpoints.
+ If path is prefixed with ``config://``, the certificate will be
+ pulled from the ceph monitor ``config-key`` database.
+
+:Type: String
+:Default: None
+
+
+``ssl_private_key``
+
+:Description: Optional path to the private key file used for SSL-enabled
+ endpoints. If one is not given, the ``ssl_certificate`` file
+ is used as the private key.
+ If path is prefixed with ``config://``, the certificate will be
+ pulled from the ceph monitor ``config-key`` database.
+
+:Type: String
+:Default: None
+
+``ssl_options``
+
+:Description: Optional colon-separated list of SSL context options:
+
+ ``default_workarounds`` Implement various bug workarounds.
+
+ ``no_compression`` Disable compression.
+
+ ``no_sslv2`` Disable SSL v2.
+
+ ``no_sslv3`` Disable SSL v3.
+
+ ``no_tlsv1`` Disable TLS v1.
+
+ ``no_tlsv1_1`` Disable TLS v1.1.
+
+ ``no_tlsv1_2`` Disable TLS v1.2.
+
+ ``single_dh_use`` Always create a new key when using tmp_dh parameters.
+
+:Type: String
+:Default: ``no_sslv2:no_sslv3:no_tlsv1:no_tlsv1_1``
+
+``ssl_ciphers``
+
+:Description: Optional list of one or more cipher strings separated by colons.
+ The format of the string is described in openssl's ciphers(1)
+ manual.
+
+:Type: String
+:Default: None
+
+``tcp_nodelay``
+
+:Description: If set, the socket option will disable Nagle's algorithm on
+              the connection, which means that packets will be sent as soon
+              as possible instead of waiting for a full buffer or timeout to occur.
+
+              ``1`` Disable Nagle's algorithm for all sockets.
+
+              ``0`` Keep the default: Nagle's algorithm enabled.
+
+:Type: Integer (0 or 1)
+:Default: 0
+
+``max_connection_backlog``
+
+:Description: Optional value to define the maximum size for the queue of
+ connections waiting to be accepted. If not configured, the value
+ from ``boost::asio::socket_base::max_connections`` will be used.
+
+:Type: Integer
+:Default: None
+
+``request_timeout_ms``
+
+:Description: The amount of time in milliseconds that Beast will wait
+              for more incoming or outgoing data before giving up.
+              Setting this value to 0 disables the timeout.
+
+:Type: Integer
+:Default: ``65000``
+
+``rgw_thread_pool_size``
+
+:Description: Sets the number of threads spawned by Beast to handle
+ incoming HTTP connections. This effectively limits the number
+ of concurrent connections that the frontend can service.
+
+:Type: Integer
+:Default: ``512``
+
+``max_header_size``
+
+:Description: The maximum number of header bytes available for a single request.
+
+:Type: Integer
+:Default: ``16384``
+:Maximum: ``65536``
+
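+As an illustrative example (the section name and certificate paths are
+hypothetical), several of the options above can be combined in a single
+``rgw_frontends`` setting in ceph.conf::
+
+    [client.rgw.gateway-node1]
+    rgw_frontends = beast endpoint=0.0.0.0:8000 ssl_endpoint=0.0.0.0:8443 ssl_certificate=/etc/ceph/rgw.crt ssl_private_key=/etc/ceph/rgw.key request_timeout_ms=30000
+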
+
+Generic Options
+===============
+
+Some frontend options are generic and supported by all frontends:
+
+``prefix``
+
+:Description: A prefix string that is inserted into the URI of all
+              requests. For example, a Swift-only frontend could supply
+              a URI prefix of ``/swift``.
+
+:Type: String
+:Default: None
+
+
+.. _Config Reference: ../config-ref
diff --git a/doc/radosgw/index.rst b/doc/radosgw/index.rst
new file mode 100644
index 000000000..704436202
--- /dev/null
+++ b/doc/radosgw/index.rst
@@ -0,0 +1,87 @@
+.. _object-gateway:
+
+=====================
+ Ceph Object Gateway
+=====================
+
+:term:`Ceph Object Gateway` is an object storage interface built on top of
+``librados``. It provides a RESTful gateway between applications and Ceph
+Storage Clusters. :term:`Ceph Object Storage` supports two interfaces:
+
+#. **S3-compatible:** Provides object storage functionality with an interface
+ that is compatible with a large subset of the Amazon S3 RESTful API.
+
+#. **Swift-compatible:** Provides object storage functionality with an interface
+ that is compatible with a large subset of the OpenStack Swift API.
+
+Ceph Object Storage uses the Ceph Object Gateway daemon (``radosgw``), an HTTP
+server designed for interacting with a Ceph Storage Cluster. The Ceph Object
+Gateway provides interfaces that are compatible with both Amazon S3 and
+OpenStack Swift, and it has its own user management. Ceph Object Gateway can
+store data in the same Ceph Storage Cluster in which data from Ceph File System
+clients and Ceph Block Device clients is stored. The S3 API and the Swift API
+share a common namespace, which makes it possible to write data to a Ceph
+Storage Cluster with one API and then retrieve that data with the other API.
+
+.. ditaa::
+
+ +------------------------+ +------------------------+
+ | S3 compatible API | | Swift compatible API |
+ +------------------------+-+------------------------+
+ | radosgw |
+ +---------------------------------------------------+
+ | librados |
+ +------------------------+-+------------------------+
+ | OSDs | | Monitors |
+ +------------------------+ +------------------------+
+
+.. note:: Ceph Object Storage does **NOT** use the Ceph Metadata Server.
+
+
+.. toctree::
+ :maxdepth: 1
+
+ HTTP Frontends <frontends>
+ Multisite Configuration <multisite>
+ Pool Placement and Storage Classes <placement>
+ Multisite Sync Policy Configuration <multisite-sync-policy>
+ Configuring Pools <pools>
+ Config Reference <config-ref>
+ Admin Guide <admin>
+ S3 API <s3>
+   Data caching and CDN <rgw-cache>
+ Swift API <swift>
+ Admin Ops API <adminops>
+ Python binding <api>
+ Export over NFS <nfs>
+ OpenStack Keystone Integration <keystone>
+ OpenStack Barbican Integration <barbican>
+ HashiCorp Vault Integration <vault>
+ KMIP Integration <kmip>
+ Open Policy Agent Integration <opa>
+ Multi-tenancy <multitenancy>
+ Compression <compression>
+ LDAP Authentication <ldap-auth>
+ Server-Side Encryption <encryption>
+ Bucket Policy <bucketpolicy>
+ Dynamic bucket index resharding <dynamicresharding>
+ Multi factor authentication <mfa>
+ Sync Modules <sync-modules>
+ Bucket Notifications <notifications>
+ Data Layout in RADOS <layout>
+ STS <STS>
+ STS Lite <STSLite>
+ Keycloak <keycloak>
+ Session Tags <session-tags>
+ Role <role>
+ Orphan List and Associated Tooling <orphans>
+ OpenID Connect Provider <oidc>
+ troubleshooting
+ Manpage radosgw <../../man/8/radosgw>
+ Manpage radosgw-admin <../../man/8/radosgw-admin>
+ QAT Acceleration for Encryption and Compression <qat-accel>
+ S3-select <s3select>
+ Lua Scripting <lua-scripting>
+ D3N Data Cache <d3n_datacache>
+ Cloud Transition <cloud-transition>
+
diff --git a/doc/radosgw/keycloak.rst b/doc/radosgw/keycloak.rst
new file mode 100644
index 000000000..ec285a62f
--- /dev/null
+++ b/doc/radosgw/keycloak.rst
@@ -0,0 +1,138 @@
+.. _radosgw_keycloak:
+
+=================================
+Integrating Keycloak with RadosGW
+=================================
+
+If Keycloak is set up as an OpenID Connect Identity Provider, it can be used by
+mobile apps and web apps to authenticate their users. By using the web token
+returned by the authentication process, a mobile app or web app can call
+AssumeRoleWithWebIdentity, receive a set of temporary S3 credentials, and use
+those credentials to make S3 calls.
+
+Setting up Keycloak
+===================
+
+Documentation for installing and operating Keycloak can be found here:
+https://www.keycloak.org/guides.
+
+Configuring Keycloak to talk to RGW
+===================================
+
+To configure Keycloak to talk to RGW, add the following configurables::
+
+ [client.radosgw.gateway]
+    rgw sts key = {sts key for encrypting/decrypting the session token}
+ rgw s3 auth use sts = true
+
+Fetching a web token with Keycloak
+==================================
+
+Several examples of apps authenticating with Keycloak can be found here:
+https://github.com/keycloak/keycloak-quickstarts/blob/latest/docs/getting-started.md.
+
+Consider, for example, the app-profile-jee-jsp app from the link
+above. To fetch the access token (web token) for such an application using the
+grant type 'client_credentials', use the client id and client secret as
+follows::
+
+ KC_REALM=demo
+ KC_CLIENT=<client id>
+ KC_CLIENT_SECRET=<client secret>
+ KC_SERVER=<host>:8080
+ KC_CONTEXT=auth
+
+ # Request Tokens for credentials
+ KC_RESPONSE=$( \
+ curl -k -v -X POST \
+ -H "Content-Type: application/x-www-form-urlencoded" \
+ -d "scope=openid" \
+ -d "grant_type=client_credentials" \
+ -d "client_id=$KC_CLIENT" \
+ -d "client_secret=$KC_CLIENT_SECRET" \
+ "http://$KC_SERVER/$KC_CONTEXT/realms/$KC_REALM/protocol/openid-connect/token" \
+ | jq .
+ )
+
+ KC_ACCESS_TOKEN=$(echo $KC_RESPONSE| jq -r .access_token)
+
+It is also possible to fetch an access token for a particular user with the
+grant type 'password'. To fetch such an access token, use client id, client
+secret, username, and password as follows::
+
+ KC_REALM=demo
+ KC_USERNAME=<username>
+ KC_PASSWORD=<userpassword>
+ KC_CLIENT=<client id>
+ KC_CLIENT_SECRET=<client secret>
+ KC_SERVER=<host>:8080
+ KC_CONTEXT=auth
+
+ # Request Tokens for credentials
+ KC_RESPONSE=$( \
+ curl -k -v -X POST \
+ -H "Content-Type: application/x-www-form-urlencoded" \
+ -d "scope=openid" \
+ -d "grant_type=password" \
+ -d "client_id=$KC_CLIENT" \
+ -d "client_secret=$KC_CLIENT_SECRET" \
+ -d "username=$KC_USERNAME" \
+ -d "password=$KC_PASSWORD" \
+ "http://$KC_SERVER/$KC_CONTEXT/realms/$KC_REALM/protocol/openid-connect/token" \
+ | jq .
+ )
+
+ KC_ACCESS_TOKEN=$(echo $KC_RESPONSE| jq -r .access_token)
+
+``KC_ACCESS_TOKEN`` can be used to invoke ``AssumeRoleWithWebIdentity``: see
+:doc:`STS`.
+
+Adding tags to a user in Keycloak
+=================================
+
+To create a user in Keycloak and add tags to it as its attributes, follow these
+steps:
+
+#. Add a user:
+
+ .. image:: ../images/keycloak-adduser.png
+ :align: center
+
+#. Add user details:
+
+ .. image:: ../images/keycloak-userdetails.png
+ :align: center
+
+#. Add user credentials:
+
+ .. image:: ../images/keycloak-usercredentials.png
+ :align: center
+
+#. Add tags to the 'attributes' tab of the user:
+
+ .. image:: ../images/keycloak-usertags.png
+ :align: center
+
+#. Add a protocol mapper that maps the user attribute to a client:
+
+ .. image:: ../images/keycloak-userclientmapper.png
+ :align: center
+
+After these steps have been completed, the tag 'Department' will appear in the
+JWT (web token), under the 'https://aws.amazon.com/tags' namespace.
+
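+The tag can also be inspected locally by base64url-decoding the payload of
+the JWT. The following is a minimal sketch using only the Python standard
+library; the claim layout shown is illustrative, and no signature
+verification is performed::
+
+    import base64
+    import json
+
+    def jwt_payload(token):
+        # A JWT is "header.payload.signature"; each part is
+        # base64url-encoded with its padding stripped.
+        payload = token.split(".")[1]
+        payload += "=" * (-len(payload) % 4)  # restore stripped padding
+        return json.loads(base64.urlsafe_b64decode(payload))
+
+    # e.g. jwt_payload(KC_ACCESS_TOKEN)["https://aws.amazon.com/tags"]
+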
+Tags can be verified by performing token introspection on a JWT. To introspect
+a token, use ``client id`` and ``client secret`` as follows::
+
+ KC_REALM=demo
+ KC_CLIENT=<client id>
+ KC_CLIENT_SECRET=<client secret>
+ KC_SERVER=<host>:8080
+ KC_CONTEXT=auth
+
+ curl -k -v \
+ -X POST \
+ -u "$KC_CLIENT:$KC_CLIENT_SECRET" \
+ -d "token=$KC_ACCESS_TOKEN" \
+ "http://$KC_SERVER/$KC_CONTEXT/realms/$KC_REALM/protocol/openid-connect/token/introspect" \
+ | jq .
diff --git a/doc/radosgw/keystone.rst b/doc/radosgw/keystone.rst
new file mode 100644
index 000000000..20edc3d24
--- /dev/null
+++ b/doc/radosgw/keystone.rst
@@ -0,0 +1,179 @@
+=====================================
+ Integrating with OpenStack Keystone
+=====================================
+
+It is possible to integrate the Ceph Object Gateway with Keystone, the OpenStack
+identity service. This sets up the gateway to accept Keystone as the users'
+authority. A user that Keystone authorizes to access the gateway will also be
+automatically created on the Ceph Object Gateway (if it did not exist
+beforehand). A token that Keystone validates will be considered valid by the
+gateway.
+
+The following configuration options are available for Keystone integration::
+
+ [client.radosgw.gateway]
+ rgw keystone api version = {keystone api version}
+ rgw keystone url = {keystone server url:keystone server admin port}
+ rgw keystone admin token = {keystone admin token}
+ rgw keystone admin token path = {path to keystone admin token} #preferred
+ rgw keystone accepted roles = {accepted user roles}
+ rgw keystone token cache size = {number of tokens to cache}
+ rgw keystone implicit tenants = {true for private tenant for each new user}
+
+It is also possible to configure a Keystone service tenant, user, and password
+for Keystone (for the v2.0 version of the OpenStack Identity API), similar to
+the way OpenStack services tend to be configured. This avoids the need to set
+the shared secret ``rgw keystone admin token`` in the configuration file, which
+should be disabled in production environments. The service tenant credentials
+should have admin privileges; for more details refer to the `OpenStack
+Keystone documentation`_, which explains the process in detail. The requisite
+configuration options are::
+
+ rgw keystone admin user = {keystone service tenant user name}
+ rgw keystone admin password = {keystone service tenant user password}
+    rgw keystone admin password path = {keystone service tenant user password path} # preferred
+ rgw keystone admin tenant = {keystone service tenant name}
+
+
+A Ceph Object Gateway user is mapped into a Keystone ``tenant``. A Keystone user
+has different roles assigned to it on possibly more than a single tenant. When
+the Ceph Object Gateway gets the ticket, it looks at the tenant, and the user
+roles that are assigned to that ticket, and accepts/rejects the request
+according to the ``rgw keystone accepted roles`` configurable.
+
+For a v3 version of the OpenStack Identity API you should replace
+``rgw keystone admin tenant`` with::
+
+ rgw keystone admin domain = {keystone admin domain name}
+ rgw keystone admin project = {keystone admin project name}
+
+For compatibility with previous versions of Ceph, it is also
+possible to set ``rgw keystone implicit tenants`` to either
+``s3`` or ``swift``. This has the effect of splitting
+the identity space such that the indicated protocol will
+only use implicit tenants, and the other protocol will
+never use implicit tenants. Some older versions of Ceph
+only supported implicit tenants with Swift.
+
+Ocata (and later)
+-----------------
+
+Keystone itself needs to be configured to point to the Ceph Object Gateway as an
+object-storage endpoint::
+
+ openstack service create --name=swift \
+ --description="Swift Service" \
+ object-store
+ +-------------+----------------------------------+
+ | Field | Value |
+ +-------------+----------------------------------+
+ | description | Swift Service |
+ | enabled | True |
+ | id | 37c4c0e79571404cb4644201a4a6e5ee |
+ | name | swift |
+ | type | object-store |
+ +-------------+----------------------------------+
+
+ openstack endpoint create --region RegionOne \
+ --publicurl "http://radosgw.example.com:8080/swift/v1" \
+ --adminurl "http://radosgw.example.com:8080/swift/v1" \
+ --internalurl "http://radosgw.example.com:8080/swift/v1" \
+ swift
+ +--------------+------------------------------------------+
+ | Field | Value |
+ +--------------+------------------------------------------+
+ | adminurl | http://radosgw.example.com:8080/swift/v1 |
+ | id | e4249d2b60e44743a67b5e5b38c18dd3 |
+ | internalurl | http://radosgw.example.com:8080/swift/v1 |
+ | publicurl | http://radosgw.example.com:8080/swift/v1 |
+ | region | RegionOne |
+ | service_id | 37c4c0e79571404cb4644201a4a6e5ee |
+ | service_name | swift |
+ | service_type | object-store |
+ +--------------+------------------------------------------+
+
+ $ openstack endpoint show object-store
+ +--------------+------------------------------------------+
+ | Field | Value |
+ +--------------+------------------------------------------+
+ | adminurl | http://radosgw.example.com:8080/swift/v1 |
+ | enabled | True |
+ | id | e4249d2b60e44743a67b5e5b38c18dd3 |
+ | internalurl | http://radosgw.example.com:8080/swift/v1 |
+ | publicurl | http://radosgw.example.com:8080/swift/v1 |
+ | region | RegionOne |
+ | service_id | 37c4c0e79571404cb4644201a4a6e5ee |
+ | service_name | swift |
+ | service_type | object-store |
+ +--------------+------------------------------------------+
+
+.. note:: If your radosgw ``ceph.conf`` sets the configuration option
+ ``rgw swift account in url = true``, your ``object-store``
+ endpoint URLs must be set to include the suffix
+ ``/v1/AUTH_%(tenant_id)s`` (instead of just ``/v1``).
+
+The Keystone URL is the Keystone admin RESTful API URL. The admin token is the
+token that is configured internally in Keystone for admin requests.
+
+OpenStack Keystone may be terminated with a self-signed SSL certificate. For
+radosgw to interact with Keystone in such a case, you can either install
+Keystone's SSL certificate on the node running radosgw, or make radosgw skip
+SSL certificate verification altogether (similar to OpenStack clients with a
+``--insecure`` switch) by setting the value of the
+configurable ``rgw keystone verify ssl`` to false.
+
+
+.. _OpenStack Keystone documentation: http://docs.openstack.org/developer/keystone/configuringservices.html#setting-up-projects-users-and-roles
+
+Cross Project (Tenant) Access
+-----------------------------
+
+In order to let a project (earlier called a 'tenant') access buckets belonging to a different project, the following config option needs to be enabled::
+
+ rgw swift account in url = true
+
+The Keystone object-store endpoint must accordingly be configured to include the AUTH_%(project_id)s suffix::
+
+    openstack endpoint create --region RegionOne \
+       --publicurl "http://radosgw.example.com:8080/swift/v1/AUTH_%(project_id)s" \
+       --adminurl "http://radosgw.example.com:8080/swift/v1/AUTH_%(project_id)s" \
+       --internalurl "http://radosgw.example.com:8080/swift/v1/AUTH_%(project_id)s" \
+       swift
+    +--------------+--------------------------------------------------------------+
+    | Field        | Value                                                        |
+    +--------------+--------------------------------------------------------------+
+    | adminurl     | http://radosgw.example.com:8080/swift/v1/AUTH_%(project_id)s |
+    | id           | e4249d2b60e44743a67b5e5b38c18dd3                             |
+    | internalurl  | http://radosgw.example.com:8080/swift/v1/AUTH_%(project_id)s |
+    | publicurl    | http://radosgw.example.com:8080/swift/v1/AUTH_%(project_id)s |
+    | region       | RegionOne                                                    |
+    | service_id   | 37c4c0e79571404cb4644201a4a6e5ee                             |
+    | service_name | swift                                                        |
+    | service_type | object-store                                                 |
+    +--------------+--------------------------------------------------------------+
+
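+Keystone expands the ``%(project_id)s`` placeholder in the endpoint URL using
+Python-style string interpolation, substituting the project (tenant) ID of
+the authenticated user. A minimal sketch of that expansion (the project ID is
+illustrative)::
+
+    template = "http://radosgw.example.com:8080/swift/v1/AUTH_%(project_id)s"
+    # Keystone fills in the authenticated user's project ID:
+    url = template % {"project_id": "37c4c0e79571404cb4644201a4a6e5ee"}
+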
+Keystone integration with the S3 API
+------------------------------------
+
+It is possible to use Keystone for authentication even when using the
+S3 API (with AWS-like access and secret keys), if the ``rgw s3 auth
+use keystone`` option is set. For details, see
+:doc:`s3/authentication`.
+
+Service token support
+---------------------
+
+Service token support can be enabled in the RadosGW Keystone integration to
+allow an expired user token to be accepted when the request also carries a
+valid service token.
+
+Enable the support with ``rgw keystone service token enabled`` and use the
+``rgw keystone service token accepted roles`` option to specify which roles are considered
+service roles.
+
+The ``rgw keystone expired token cache expiration`` option can be used to tune
+the cache expiration for an expired token allowed with a service token. Note
+that this must be lower than the ``[token]/allow_expired_window`` option in
+the Keystone configuration.
+
+Enabling this will cause an expired token given in the X-Auth-Token header to
+be allowed if it is coupled with an X-Service-Token header that contains a
+valid token with the accepted roles. This allows long-running processes that
+use a user token in X-Auth-Token to continue functioning beyond the
+expiration of the token.
diff --git a/doc/radosgw/kmip.rst b/doc/radosgw/kmip.rst
new file mode 100644
index 000000000..988897121
--- /dev/null
+++ b/doc/radosgw/kmip.rst
@@ -0,0 +1,219 @@
+================
+KMIP Integration
+================
+
+`KMIP`_ can be used as a secure key management service for
+`Server-Side Encryption`_ (SSE-KMS).
+
+.. ditaa::
+
+ +---------+ +---------+ +------+ +-------+
+ | Client | | RadosGW | | KMIP | | OSD |
+ +---------+ +---------+ +------+ +-------+
+ | create secret | | |
+ | key for key ID | | |
+ |-----------------+---------------->| |
+ | | | |
+ | upload object | | |
+ | with key ID | | |
+ |---------------->| request secret | |
+ | | key for key ID | |
+ | |---------------->| |
+ | |<----------------| |
+ | | return secret | |
+ | | key | |
+ | | | |
+ | | encrypt object | |
+ | | with secret key | |
+ | |--------------+ | |
+ | | | | |
+ | |<-------------+ | |
+ | | | |
+ | | store encrypted | |
+ | | object | |
+ | |------------------------------>|
+
+#. `Setting KMIP Access for Ceph`_
+#. `Creating Keys in KMIP`_
+#. `Configure the Ceph Object Gateway`_
+#. `Upload object`_
+
+Before you can use KMIP with Ceph, you will need to do three things:
+associate Ceph with client information in KMIP, configure Ceph to use
+that client information, and create one or more keys in KMIP.
+
+Setting KMIP Access for Ceph
+============================
+
+Setting up Ceph in KMIP depends heavily on the mechanism(s) supported
+by your implementation of KMIP. Two implementations are described
+here:
+
+1. `IBM Security Guardium Key Lifecycle Manager (SKLM)`__. This is a well
+ supported commercial product.
+
+__ SKLM_
+
+2. PyKMIP_. This is a small python project, suitable for experimental
+ and testing use only.
+
+Using IBM SKLM
+--------------
+
+IBM SKLM__ supports client authentication using certificates.
+Certificates may either be self-signed certificates created,
+for instance, using openssl, or certificates may be created
+using SKLM. Ceph should then be configured (see below) to
+use KMIP and an attempt made to use it. This will fail,
+but it will leave an "untrusted client device certificate" in SKLM.
+This can be then upgraded to a registered client using the web
+interface to complete the registration process.
+
+__ SKLM_
+
+Find untrusted clients under ``Advanced Configuration``,
+``Client Device Communication Certificates``. Select
+``Modify SSL/KMIP Certificates for Clients``, then toggle the flag
+``allow the server to trust this certificate and communicate...``.
+
+Using PyKMIP
+------------
+
+PyKMIP_ has no special registration process; it simply
+trusts the certificate. However, the certificate has to
+be issued by a certificate authority that is trusted by
+PyKMIP. PyKMIP also prefers that the certificate contain
+an extension for "extended key usage". However, that
+requirement can be defeated by specifying ``enable_tls_client_auth=False``
+in the server configuration.
+
+Creating Keys in KMIP
+=====================
+
+Some KMIP implementations come with a web interface or other
+administrative tools to create and manage keys. Refer to your
+documentation on that if you wish to use it. The KMIP protocol can also
+be used to create and manage keys. PyKMIP comes with a python client
+library that can be used this way.
+
+In preparation for using the PyKMIP client, you'll need to have a valid
+KMIP client key & certificate, such as the one you created for Ceph.
+
+Next, you'll then need to download and install it::
+
+ virtualenv $HOME/my-kmip-env
+ source $HOME/my-kmip-env/bin/activate
+ pip install pykmip
+
+Then you'll need to prepare a configuration file
+for the client, something like this::
+
+ cat <<EOF >$HOME/my-kmip-configuration
+ [client]
+ host={hostname}
+ port=5696
+ certfile={clientcert}
+ keyfile={clientkey}
+ ca_certs={clientca}
+ ssl_version=PROTOCOL_TLSv1_2
+ EOF
+
+You will need to replace {hostname} with the name of your KMIP host, and
+replace {clientcert}, {clientkey}, and {clientca} with pathnames to
+suitable PEM-encoded certificates, such as the ones you created for
+Ceph to use.
+
+Now, you can run this python script directly from
+the shell::
+
+ python
+ from kmip.pie import client
+ from kmip import enums
+ import ssl
+ import os
+ import sys
+ import json
+ c = client.ProxyKmipClient(config_file=os.environ['HOME']+"/my-kmip-configuration")
+
+ while True:
+ l=sys.stdin.readline()
+ keyname=l.strip()
+ if keyname == "": break
+ with c:
+ key_id = c.create(
+ enums.CryptographicAlgorithm.AES,
+ 256,
+ operation_policy_name='default',
+ name=keyname,
+ cryptographic_usage_mask=[
+ enums.CryptographicUsageMask.ENCRYPT,
+ enums.CryptographicUsageMask.DECRYPT
+ ]
+ )
+ c.activate(key_id)
+ attrs = c.get_attributes(uid=key_id)
+ r = {}
+ for a in attrs[1]:
+ r[str(a.attribute_name)] = str(a.attribute_value)
+ print (json.dumps(r))
+
+If this is all entered at the shell prompt, python will
+prompt with ">>>" and then "..." until the script is read in,
+after which it will read and process names with no prompt
+until a blank line or end of file (^D) is given to it, or
+an error occurs. Of course, you can turn this into a regular
+Python script if you prefer.
+
+Configure the Ceph Object Gateway
+=================================
+
+Edit the Ceph configuration file to enable KMIP as a KMS backend for
+server-side encryption::
+
+    rgw crypt s3 kms backend = kmip
+    rgw crypt kmip ca path = /etc/ceph/kmiproot.crt
+    rgw crypt kmip client cert = /etc/ceph/kmip-client.crt
+    rgw crypt kmip client key = /etc/ceph/private/kmip-client.key
+    rgw crypt kmip kms key template = pykmip-$keyid
+
+You may need to change the paths above to match where
+you actually want to store KMIP certificate data.
+
+The KMIP key template describes how Ceph will modify
+the name given to it before looking it up
+in KMIP. The default is just ``$keyid``.
+If you don't want Ceph to see all your KMIP
+keys, you can use the template to limit Ceph to the
+designated subset of your KMIP key namespace.
+
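+The substitution behaves like a shell-style template: ``$keyid`` in the
+template is replaced with the key ID supplied in the request. A minimal
+Python sketch of the equivalent expansion (not RGW's actual code)::
+
+    from string import Template
+
+    template = Template("pykmip-$keyid")
+    # A request specifying --sse-kms-key-id mybucketkey looks up:
+    print(template.substitute(keyid="mybucketkey"))  # pykmip-mybucketkey
+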
+Upload object
+=============
+
+When uploading an object to the Gateway, provide the SSE key ID in the request.
+As an example, using the AWS command-line client::
+
+    aws --endpoint=http://radosgw:8000 s3 cp plaintext.txt \
+      s3://mybucket/encrypted.txt --sse=aws:kms --sse-kms-key-id mybucketkey
+
+The Object Gateway will fetch the key from KMIP, encrypt the object, and store
+it in the bucket. Any request to download the object will make the Gateway
+automatically retrieve the corresponding key from KMIP and decrypt the object.
+
+Note that the secret will be fetched from kmip using a name constructed
+from the key template, replacing ``$keyid`` with the key provided.
+
+With the ceph configuration given above,
+radosgw would fetch the secret from::
+
+ pykmip-mybucketkey
+
+.. _Server-Side Encryption: ../encryption
+.. _KMIP: http://www.oasis-open.org/committees/kmip/
+.. _SKLM: https://www.ibm.com/products/ibm-security-key-lifecycle-manager
+.. _PyKMIP: https://pykmip.readthedocs.io/en/latest/
diff --git a/doc/radosgw/layout.rst b/doc/radosgw/layout.rst
new file mode 100644
index 000000000..723adf827
--- /dev/null
+++ b/doc/radosgw/layout.rst
@@ -0,0 +1,208 @@
+===========================
+ Rados Gateway Data Layout
+===========================
+
+Although the source code is the ultimate guide, this document helps
+new developers to get up to speed with the implementation details.
+
+Introduction
+------------
+
+Swift offers something called a *container*, which we use interchangeably with
+the term *bucket*, so we say that RGW's buckets implement Swift containers.
+
+This document does not consider how RGW operates on these structures,
+e.g. the use of encode() and decode() methods for serialization and so on.
+
+Conceptual View
+---------------
+
+Although RADOS only knows about pools and objects with their xattrs and
+omap[1], conceptually RGW organizes its data into three different kinds:
+metadata, bucket index, and data.
+
+Metadata
+^^^^^^^^
+
+We have 3 'sections' of metadata: 'user', 'bucket', and 'bucket.instance'.
+You can use the following commands to introspect metadata entries: ::
+
+ $ radosgw-admin metadata list
+ $ radosgw-admin metadata list bucket
+ $ radosgw-admin metadata list bucket.instance
+ $ radosgw-admin metadata list user
+
+ $ radosgw-admin metadata get bucket:<bucket>
+ $ radosgw-admin metadata get bucket.instance:<bucket>:<bucket_id>
+ $ radosgw-admin metadata get user:<user> # get or set
+
+The variables used in the above commands are:
+
+- user: Holds user information
+- bucket: Holds a mapping between bucket name and bucket instance id
+- bucket.instance: Holds bucket instance information[2]
+
+Every metadata entry is kept on a single RADOS object. See below for implementation details.
+
+Note that the metadata is not indexed. When listing a metadata section we do a
+RADOS ``pgls`` operation on the containing pool.
+
+Bucket Index
+^^^^^^^^^^^^
+
+It's a different kind of metadata, and kept separately. The bucket index holds
+a key-value map in RADOS objects. By default it is a single RADOS object per
+bucket, but it is possible since Hammer to shard that map over multiple RADOS
+objects. The map itself is kept in omap, associated with each RADOS object.
+The key of each omap is the name of the objects, and the value holds some basic
+metadata of that object -- metadata that shows up when listing the bucket.
+Also, each omap holds a header, and we keep some bucket accounting metadata
+in that header (number of objects, total size, etc.).
+
+Note that we also hold other information in the bucket index, and it's kept in
+other key namespaces. We can hold the bucket index log there, and for versioned
+objects there is more information that we keep on other keys.
+
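+The sharded form of the index can be pictured as a fixed set of per-shard
+key-value maps, with the shard for a given object name chosen by hashing the
+name modulo the shard count. The sketch below illustrates the idea in Python;
+RGW's actual hash function and omap layout differ::
+
+    import zlib
+
+    NUM_SHARDS = 11
+    # Each dict stands in for the omap of one bucket index RADOS object.
+    shards = [{} for _ in range(NUM_SHARDS)]
+
+    def index_put(object_name, meta):
+        # zlib.crc32 is a stand-in for RGW's string hash.
+        shard = zlib.crc32(object_name.encode()) % NUM_SHARDS
+        shards[shard][object_name] = meta
+
+    index_put("image.png", {"size": 1024, "etag": "..."})
+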
+Data
+^^^^
+
+Object data is kept in one or more RADOS objects for each RGW object.
+
+Object Lookup Path
+------------------
+
+When accessing objects, REST APIs come to RGW with three parameters:
+account information (access key in S3 or account name in Swift),
+bucket or container name, and object name (or key). At present, RGW only
+uses account information to find out the user ID and for access control.
+Only the bucket name and object key are used to address the object in a pool.
+
+The user ID in RGW is a string, typically the actual user name from the user
+credentials and not a hashed or mapped identifier.
+
+When accessing a user's data, the user record is loaded from an object
+"<user_id>" in pool "default.rgw.meta" with namespace "users.uid".
+
+Bucket names are represented in the pool "default.rgw.meta" with namespace
+"root". Bucket record is
+loaded in order to obtain so-called marker, which serves as a bucket ID.
+
+The object is located in pool "default.rgw.buckets.data".
+Object name is "<marker>_<key>",
+for example "default.7593.4_image.png", where the marker is "default.7593.4"
+and the key is "image.png". Since these concatenated names are not parsed,
+only passed down to RADOS, the choice of the separator is not important and
+causes no ambiguity. For the same reason, slashes are permitted in object
+names (keys).
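
The concatenation scheme can be sketched in shell; the marker value here is
illustrative:

```shell
# Build the RADOS object name from the bucket marker and the key,
# following the "<marker>_<key>" scheme.
marker="default.7593.4"
key="image.png"
rados_name="${marker}_${key}"
echo "${rados_name}"

# On a live cluster, the resulting object could then be inspected with:
#   rados -p default.rgw.buckets.data stat "${rados_name}"
```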
+
+It is also possible to create multiple data pools and arrange for different
+users' buckets to be created in different RADOS pools by default, thus
+providing the necessary scaling. The layout and naming of these pools is
+controlled by a 'policy' setting.[3]
+
+An RGW object may consist of several RADOS objects, the first of which
+is the head that contains the metadata, such as manifest, ACLs, content type,
+ETag, and user-defined metadata. The metadata is stored in xattrs.
+The head may also contain up to :confval:`rgw_max_chunk_size` of object data, for efficiency
+and atomicity. The manifest describes how each object is laid out in RADOS
+objects.
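
On a live cluster the head object's metadata can be examined with the
``rados`` tool. A sketch; the object name is illustrative, and attribute names
such as ``user.rgw.manifest`` are internal and may vary between releases:

```shell
# List the xattrs of a head object, then fetch the encoded manifest.
rados -p default.rgw.buckets.data listxattr default.7593.4_image.png
rados -p default.rgw.buckets.data getxattr default.7593.4_image.png user.rgw.manifest
```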
+
+Bucket and Object Listing
+-------------------------
+
+Buckets that belong to a given user are listed in an omap of an object named
+"<user_id>.buckets" (for example, "foo.buckets") in pool "default.rgw.meta"
+with namespace "users.uid".
+These objects are accessed when listing buckets, when updating bucket
+contents, and when updating and retrieving bucket statistics (e.g. for quota).
+
+See the user-visible, encoded class 'cls_user_bucket_entry' and its
+nested class 'cls_user_bucket' for the values of these omap entries.
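
For example, the bucket list of a hypothetical user "foo" could be read
directly from that omap (assuming the default zone's pool names and a running
cluster):

```shell
# List the omap keys of the user's "<user_id>.buckets" object; each
# key is the name of a bucket owned by the user.
rados -p default.rgw.meta --namespace=users.uid listomapkeys foo.buckets
```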
+
+These listings are kept consistent with buckets in pool ".rgw".
+
+Objects that belong to a given bucket are listed in a bucket index,
+as discussed in sub-section 'Bucket Index' above. The default naming
+for index objects is ".dir.<marker>" in pool "default.rgw.buckets.index".
+
+Footnotes
+---------
+
+[1] Omap is a key-value store, associated with an object, in a way similar
+to how Extended Attributes associate with a POSIX file. An object's omap
+is not physically located in the object's storage, but its precise
+implementation is invisible and immaterial to RADOS Gateway.
+In Hammer, LevelDB is used to store omap data within each OSD; later releases
+default to RocksDB but can be configured to use LevelDB.
+
+[2] Before the Dumpling release, the 'bucket.instance' metadata did not
+exist and the 'bucket' metadata contained its information. It is possible
+to encounter such buckets in old installations.
+
+[3] Pool names changed with the Infernalis release.
+If you are looking at an older setup, some details may be different. In
+particular there was a different pool for each of the namespaces that are
+now being used inside the ``default.rgw.meta`` pool.
+
+Appendix: Compendium
+--------------------
+
+Known pools:
+
+.rgw.root
+ Unspecified region, zone, and global information records, one per object.
+
+<zone>.rgw.control
+ notify.<N>
+
+<zone>.rgw.meta
+ Multiple namespaces with different kinds of metadata:
+
+ namespace: root
+ <bucket>
+ .bucket.meta.<bucket>:<marker> # see put_bucket_instance_info()
+
+ The tenant is used to disambiguate buckets, but not bucket instances.
+ Example::
+
+ .bucket.meta.prodtx:test%25star:default.84099.6
+ .bucket.meta.testcont:default.4126.1
+ .bucket.meta.prodtx:testcont:default.84099.4
+ prodtx/testcont
+ prodtx/test%25star
+ testcont
+
+ namespace: users.uid
+    Contains *both* per-user information (RGWUserInfo) in "<user>" objects
+ and per-user lists of buckets in omaps of "<user>.buckets" objects.
+ The "<user>" may contain the tenant if non-empty, for example::
+
+ prodtx$prodt
+ test2.buckets
+ prodtx$prodt.buckets
+ test2
+
+ namespace: users.email
+ Unimportant
+
+ namespace: users.keys
+ 47UA98JSTJZ9YAN3OS3O
+
+ This allows ``radosgw`` to look up users by their access keys during authentication.
+
+ namespace: users.swift
+ test:tester
+
+<zone>.rgw.buckets.index
+ Objects are named ".dir.<marker>", each contains a bucket index.
+ If the index is sharded, each shard appends the shard index after
+ the marker.
+
+<zone>.rgw.buckets.data
+ default.7593.4__shadow_.488urDFerTYXavx4yAd-Op8mxehnvTI_1
+ <marker>_<key>
+
+An example of a marker would be "default.16004.1" or "default.7593.4".
+The current format is "<zone>.<instance_id>.<bucket_id>". But once
+generated, a marker is not parsed again, so its format may change
+freely in the future.
diff --git a/doc/radosgw/ldap-auth.rst b/doc/radosgw/ldap-auth.rst
new file mode 100644
index 000000000..486d0c623
--- /dev/null
+++ b/doc/radosgw/ldap-auth.rst
@@ -0,0 +1,167 @@
+===================
+LDAP Authentication
+===================
+
+.. versionadded:: Jewel
+
+You can delegate the Ceph Object Gateway authentication to an LDAP server.
+
+How it works
+============
+
+The Ceph Object Gateway extracts the user's LDAP credentials from a token. A
+search filter is constructed from the user name. The Ceph Object Gateway uses
+the configured service account to search the directory for a matching entry. If
+an entry is found, the Ceph Object Gateway attempts to bind to the found
+distinguished name with the password from the token. If the credentials are
+valid, the bind will succeed, the Ceph Object Gateway will grant access, and a
+radosgw user will be created with the provided user name.
+
+You can limit the allowed users by setting the base for the search to a
+specific organizational unit or by specifying a custom search filter, for
+example requiring specific group membership, custom object classes, or
+attributes.
+
+The LDAP credentials must be available on the server to perform the LDAP
+authentication. Make sure to set the ``rgw`` log level low enough to hide the
+base-64-encoded credentials / access tokens.
+
+Requirements
+============
+
+- **LDAP or Active Directory:** A running LDAP instance accessible by the Ceph
+ Object Gateway
+- **Service account:** LDAP credentials to be used by the Ceph Object Gateway
+ with search permissions
+- **User account:** At least one user account in the LDAP directory
+- **Do not overlap LDAP and local users:** You should not use the same user
+ names for local users and for users being authenticated by using LDAP. The
+ Ceph Object Gateway cannot distinguish them and it treats them as the same
+ user.
+
+Sanity checks
+=============
+
+Use the ``ldapsearch`` utility to verify the service account or the LDAP connection:
+
+::
+
+ # ldapsearch -x -D "uid=ceph,ou=system,dc=example,dc=com" -W \
+ -H ldaps://example.com -b "ou=users,dc=example,dc=com" 'uid=*' dn
+
+.. note:: Make sure to use the same LDAP parameters as in the Ceph configuration
+   file to rule out possible problems.
+
+Configuring the Ceph Object Gateway to use LDAP authentication
+==============================================================
+
+The following parameters in the Ceph configuration file are related to LDAP
+authentication:
+
+- ``rgw_s3_auth_use_ldap``: Set this to ``true`` to enable S3 authentication with LDAP
+- ``rgw_ldap_uri``: Specifies the LDAP server to use. Make sure to use an
+  ``ldaps://<fqdn>:<port>`` URI so as not to transmit clear-text credentials
+  over the wire.
+- ``rgw_ldap_binddn``: The Distinguished Name (DN) of the service account used
+ by the Ceph Object Gateway
+- ``rgw_ldap_secret``: Path to file containing credentials for ``rgw_ldap_binddn``
+- ``rgw_ldap_searchdn``: Specifies the base in the directory information tree
+  for searching users. This might be your users' organizational unit or some
+ more specific Organizational Unit (OU).
+- ``rgw_ldap_dnattr``: The attribute being used in the constructed search
+ filter to match a username. Depending on your Directory Information Tree
+ (DIT) this would probably be ``uid`` or ``cn``. The generated filter string
+ will be, e.g., ``cn=some_username``.
+- ``rgw_ldap_searchfilter``: If not specified, the Ceph Object Gateway
+  automatically constructs the search filter with the ``rgw_ldap_dnattr``
+  setting. Use this parameter to narrow the list of allowed users in very
+  flexible ways. Consult the *Using a custom search filter to limit user
+  access* section for details.
+
+Using a custom search filter to limit user access
+=================================================
+
+There are two ways to use the ``rgw_ldap_searchfilter`` parameter:
+
+Specifying a partial filter to further limit the constructed search filter
+--------------------------------------------------------------------------
+
+An example for a partial filter:
+
+::
+
+ "objectclass=inetorgperson"
+
+The Ceph Object Gateway will generate the search filter as usual with the
+user name from the token and the value of ``rgw_ldap_dnattr``. The constructed
+filter is then combined with the partial filter from the
+``rgw_ldap_searchfilter`` setting. Depending on the user name and the
+settings, the final search filter might become:
+
+::
+
+ "(&(uid=hari)(objectclass=inetorgperson))"
+
+So user ``hari`` will only be granted access if he is found in the LDAP
+directory, has an object class of ``inetorgperson``, and specified a valid
+password.
+
+Specifying a complete filter
+----------------------------
+
+A complete filter must contain a ``@USERNAME@`` token which will be substituted
+with the user name during the authentication attempt. The ``rgw_ldap_dnattr``
+parameter is not used anymore in this case. For example, to limit valid users
+to a specific group, use the following filter:
+
+::
+
+ "(&(uid=@USERNAME@)(memberOf=cn=ceph-users,ou=groups,dc=mycompany,dc=com))"
+
+.. note:: Using the ``memberOf`` attribute in LDAP searches requires server-side
+   support from your specific LDAP server implementation.
+
+Generating an access token for LDAP authentication
+==================================================
+
+The ``radosgw-token`` utility generates the access token based on the LDAP
+user name and password. It will output a base-64 encoded string which is the
+access token.
+
+::
+
+ # export RGW_ACCESS_KEY_ID="<username>"
+ # export RGW_SECRET_ACCESS_KEY="<password>"
+ # radosgw-token --encode
+
+.. important:: The access token is a base-64 encoded JSON struct and contains
+   the LDAP credentials in clear text.
+
+Alternatively, if the ``radosgw-token`` tool is not installed, the token can
+also be generated manually by base-64-encoding this JSON snippet:
+
+::
+
+ {
+ "RGW_TOKEN": {
+ "version": 1,
+ "type": "ldap",
+ "id": "your_username",
+ "key": "your_clear_text_password_here"
+ }
+ }
+
+Using the access token
+======================
+
+Use your favorite S3 client and specify the token as the access key in your
+client or environment variables.
+
+::
+
+ # export AWS_ACCESS_KEY_ID=<base64-encoded token generated by radosgw-token>
+ # export AWS_SECRET_ACCESS_KEY="" # define this with an empty string, otherwise tools might complain about missing env variables.
+
+.. important:: The access token is a base-64 encoded JSON struct and contains
+               the LDAP credentials in clear text. DO NOT share it unless
+               you want to share your clear text password!
diff --git a/doc/radosgw/lua-scripting.rst b/doc/radosgw/lua-scripting.rst
new file mode 100644
index 000000000..c85f72a6e
--- /dev/null
+++ b/doc/radosgw/lua-scripting.rst
@@ -0,0 +1,570 @@
+=============
+Lua Scripting
+=============
+
+.. versionadded:: Pacific
+
+.. contents::
+
+This feature allows users to assign execution context to Lua scripts. The supported contexts are:
+
+ - ``prerequest`` which will execute a script before each operation is performed
+ - ``postrequest`` which will execute after each operation is performed
+ - ``background`` which will execute within a specified time interval
+ - ``getdata`` which will execute on objects' data when objects are downloaded
+ - ``putdata`` which will execute on objects' data when objects are uploaded
+
+A request (pre or post) or data (get or put) context script may be constrained to operations belonging to a specific tenant's users.
+The request context script can also access fields in the request and modify certain fields, as well as the `Global RGW Table`_.
+The data context script can access the content of the object as well as the request fields and the `Global RGW Table`_.
+All Lua language features can be used in all contexts.
+
+By default, all Lua standard libraries are available in the script. However, to allow other Lua modules to be used in a script, we support adding packages to an allowlist:
+
+ - All packages in the allowlist are re-installed using the luarocks package manager on radosgw restart. Therefore a restart is needed for added or removed packages to take effect
+ - To add a package that contains C source code that needs to be compiled, use the ``--allow-compilation`` flag. In this case a C compiler needs to be available on the host
+ - Lua packages are installed in, and used from, a directory local to the radosgw. This means that Lua packages in the allowlist are separated from any Lua packages available on the host.
+   By default, this directory is ``/tmp/luarocks/<entity name>``. Its prefix part (``/tmp/luarocks/``) can be set to a different location via the ``rgw_luarocks_location`` configuration parameter.
+   Note that this parameter should not be set to one of the default locations where luarocks installs packages (e.g. ``$HOME/.luarocks``, ``/usr/lib64/lua``, ``/usr/share/lua``).
+
+
+.. toctree::
+ :maxdepth: 1
+
+
+Script Management via CLI
+-------------------------
+
+To upload a script:
+
+
+::
+
+ # radosgw-admin script put --infile={lua-file-path} --context={prerequest|postrequest|background|getdata|putdata} [--tenant={tenant-name}]
+
+
+* When uploading a script with the ``background`` context, a tenant name should not be specified.
+* When uploading a script into a cluster deployed with cephadm, use the following command:
+
+::
+
+ # cephadm shell radosgw-admin script put --infile=/rootfs/{lua-file-path} --context={prerequest|postrequest|background|getdata|putdata} [--tenant={tenant-name}]
+
+
+To print the content of the script to standard output:
+
+::
+
+ # radosgw-admin script get --context={prerequest|postrequest|background|getdata|putdata} [--tenant={tenant-name}]
+
+
+To remove the script:
+
+::
+
+ # radosgw-admin script rm --context={prerequest|postrequest|background|getdata|putdata} [--tenant={tenant-name}]
+
+
+Package Management via CLI
+--------------------------
+
+To add a package to the allowlist:
+
+::
+
+ # radosgw-admin script-package add --package={package name} [--allow-compilation]
+
+
+To add a specific version of a package to the allowlist:
+
+::
+
+ # radosgw-admin script-package add --package='{package name} {package version}' [--allow-compilation]
+
+
+* When adding a different version of a package which already exists in the list, the newly
+ added version will override the existing one.
+
+* When adding a package without a version specified, the latest version of the package
+ will be added.
+
+
+To remove a package from the allowlist:
+
+::
+
+ # radosgw-admin script-package rm --package={package name}
+
+
+To remove a specific version of a package from the allowlist:
+
+::
+
+ # radosgw-admin script-package rm --package='{package name} {package version}'
+
+
+* When removing a package without a version specified, any existing versions of the
+ package will be removed.
+
+
+To print the list of packages in the allowlist:
+
+::
+
+ # radosgw-admin script-package list
+
+
+Context Free Functions
+----------------------
+Debug Log
+~~~~~~~~~
+The ``RGWDebugLog()`` function accepts a string and prints it to the debug log with priority 20.
+Each log message is prefixed with ``Lua INFO:``. This function has no return value.
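
As a minimal sketch, a one-line ``prerequest`` script using ``RGWDebugLog()``
could be created and uploaded like this (the file path is arbitrary, and a
running radosgw is assumed):

```shell
# Write a Lua script that logs the current operation, then upload it
# into the "prerequest" context.
cat > /tmp/log_requests.lua <<'EOF'
RGWDebugLog("intercepted operation: " .. Request.RGWOp)
EOF
radosgw-admin script put --infile=/tmp/log_requests.lua --context=prerequest
```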
+
+Request Fields
+-----------------
+
+.. warning:: This feature is experimental. Fields may be removed or renamed in the future.
+
+.. note::
+
+ - Although Lua is a case-sensitive language, field names provided by the radosgw are case-insensitive. Function names remain case-sensitive.
+ - Fields marked "optional" can have a nil value.
+ - Fields marked as "iterable" can be used by the pairs() function and with the # length operator.
+ - All table fields can be used with the bracket operator ``[]``.
+ - ``time`` fields are strings with the following format: ``%Y-%m-%d %H:%M:%S``.
+
+
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| Field | Type | Description | Iterable | Writeable | Optional |
++====================================================+==========+==============================================================+==========+===========+==========+
+| ``Request.RGWOp`` | string | radosgw operation | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.DecodedURI`` | string | decoded URI | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.ContentLength`` | integer | size of the request | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.GenericAttributes`` | table | string to string generic attributes map | yes | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Response`` | table | response to the request | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Response.HTTPStatusCode`` | integer | HTTP status code | no | yes | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Response.HTTPStatus`` | string | HTTP status text | no | yes | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Response.RGWCode`` | integer | radosgw error code | no | yes | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Response.Message`` | string | response message | no | yes | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.SwiftAccountName`` | string | swift account name | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket`` | table | info on the bucket | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Tenant`` | string | tenant of the bucket | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Name`` | string | bucket name (writeable only in ``prerequest`` context) | no | yes | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Marker`` | string | bucket marker (initial id) | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Id`` | string | bucket id | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Count`` | integer | number of objects in the bucket | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Size`` | integer | total size of objects in the bucket | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.ZoneGroupId`` | string | zone group of the bucket | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.CreationTime`` | time | creation time of the bucket | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.MTime`` | time | modification time of the bucket | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Quota`` | table | bucket quota | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Quota.MaxSize`` | integer | bucket quota max size | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Quota.MaxObjects`` | integer | bucket quota max number of objects | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Quota.Enabled``                   | boolean  | bucket quota is enabled                                      | no       | no       | no       |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Quota.Rounded`` | boolean | bucket quota is rounded to 4K | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.PlacementRule`` | table | bucket placement rule | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.PlacementRule.Name`` | string | bucket placement rule name | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.PlacementRule.StorageClass`` | string | bucket placement rule storage class | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.User`` | table | bucket owner | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.User.Tenant`` | string | bucket owner tenant | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.User.Id`` | string | bucket owner id | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Object`` | table | info on the object | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Object.Name`` | string | object name | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Object.Instance`` | string | object version | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Object.Id`` | string | object id | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Object.Size`` | integer | object size | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Object.MTime`` | time | object mtime | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.CopyFrom`` | table | information on copy operation | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.CopyFrom.Tenant`` | string | tenant of the object copied from | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.CopyFrom.Bucket`` | string | bucket of the object copied from | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.CopyFrom.Object`` | table | object copied from. See: ``Request.Object`` | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.ObjectOwner`` | table | object owner | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.ObjectOwner.DisplayName`` | string | object owner display name | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.ObjectOwner.User`` | table | object user. See: ``Request.Bucket.User`` | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.ZoneGroup.Name`` | string | name of zone group | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.ZoneGroup.Endpoint`` | string | endpoint of zone group | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserAcl`` | table | user ACL | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserAcl.Owner`` | table | user ACL owner. See: ``Request.ObjectOwner`` | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserAcl.Grants`` | table | user ACL map of string to grant | yes | no | no |
+| | | note: grants without an Id are not presented when iterated | | | |
+| | | and only one of them can be accessed via brackets | | | |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserAcl.Grants["<name>"]`` | table | user ACL grant | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserAcl.Grants["<name>"].Type`` | integer | user ACL grant type | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserAcl.Grants["<name>"].User`` | table | user ACL grant user | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserAcl.Grants["<name>"].User.Tenant`` | table | user ACL grant user tenant | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserAcl.Grants["<name>"].User.Id`` | table | user ACL grant user id | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserAcl.Grants["<name>"].GroupType`` | integer | user ACL grant group type | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserAcl.Grants["<name>"].Referer`` | string | user ACL grant referer | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.BucketAcl`` | table | bucket ACL. See: ``Request.UserAcl`` | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.ObjectAcl`` | table | object ACL. See: ``Request.UserAcl`` | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Environment`` | table | string to string environment map | yes | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Policy`` | table | policy | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Policy.Text`` | string | policy text | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Policy.Id`` | string | policy Id | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Policy.Statements`` | table | list of string statements | yes | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserPolicies`` | table | list of user policies | yes | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserPolicies[<index>]`` | table | user policy. See: ``Request.Policy`` | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.RGWId`` | string | radosgw host id: ``<host>-<zone>-<zonegroup>`` | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.HTTP`` | table | HTTP header | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.HTTP.Parameters`` | table | string to string parameter map | yes | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.HTTP.Resources`` | table | string to string resource map | yes | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.HTTP.Metadata`` | table | string to string metadata map | yes | yes | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.HTTP.StorageClass`` | string | storage class | no | yes | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.HTTP.Host`` | string | host name | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.HTTP.Method`` | string | HTTP method | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.HTTP.URI`` | string | URI | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.HTTP.QueryString`` | string | HTTP query string | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.HTTP.Domain`` | string | domain name | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Time`` | time | request time | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Dialect`` | string | "S3" or "Swift" | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Id`` | string | request Id | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.TransactionId`` | string | transaction Id | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Tags`` | table | object tags map | yes | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.User`` | table | user that triggered the request | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.User.Tenant`` | string | triggering user tenant | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.User.Id`` | string | triggering user id | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Trace`` | table | info on trace | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Trace.Enable`` | boolean | tracing is enabled | no | yes | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+
+Request Functions
+--------------------
+Operations Log
+~~~~~~~~~~~~~~
+The ``Request.Log()`` function writes the request to the operations log. It takes no parameters. It returns 0 on success and an error code on failure.
+
+Tracing
+~~~~~~~
+Tracing functions can be used only in the ``postrequest`` context.
+
+- ``Request.Trace.SetAttribute(<key>, <value>)`` - sets the attribute for the request's trace.
+ The function takes two arguments: the first is the ``key``, which should be a string, and the second is the ``value``, which can either be a string or a number (integer or double).
+ You may then locate specific traces by using this attribute.
+
+- ``Request.Trace.AddEvent(<name>, <attributes>)`` - adds an event to the first span of the request's trace.
+  An event is defined by an event name, an event time, and zero or more event attributes.
+  The function accepts one or two arguments: the first is a string containing the event ``name``; the second is the event ``attributes``, which is optional for events without attributes.
+  An event's attributes must be a table of strings.
+
+Background Context
+--------------------
+The ``background`` context may be used for purposes such as analytics, monitoring, or caching data for executions in other contexts.
+
+- The default interval between background script executions is 5 seconds.
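+
+As a minimal sketch, a ``background`` script could periodically log a counter that scripts in other contexts store in the ``RGW`` table (the key name here is illustrative):
+
+.. code-block:: lua
+
+  -- runs on each RGW instance, every 5 seconds by default
+  local count = RGW["my-request-counter"]
+  if count then
+    RGWDebugLog("requests counted so far: " .. tostring(count))
+  end
+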
+
+Data Context
+--------------------
+Both the ``getdata`` and ``putdata`` contexts have the following fields:
+
+- ``Data``, which is read-only and iterable (byte by byte). If an object is uploaded or retrieved in multiple chunks, the ``Data`` field holds the data of one chunk at a time.
+- ``Offset``, which holds the offset of the current chunk within the entire object.
+
+The ``Request`` fields and the background ``RGW`` table are also available in these contexts.
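+
+As a minimal sketch, a ``getdata`` context script could log the size and offset of each chunk as an object is read:
+
+.. code-block:: lua
+
+  -- invoked once per chunk of the downloaded object
+  RGWDebugLog("chunk of " .. tostring(#Data) ..
+    " bytes at offset " .. tostring(Offset) ..
+    " of object: " .. Request.Object.Name)
+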
+
+Global RGW Table
+--------------------
+The ``RGW`` Lua table is accessible from all contexts and saves data written to it
+during execution so that it may be read and used later during other executions, from the same context or a different one.
+
+- Each RGW instance has its own private and ephemeral ``RGW`` Lua table that is lost when the daemon restarts. Note that ``background`` context scripts run on every instance.
+- The maximum number of entries in the table is 100,000. Each entry has a string key and a value with a combined length of no more than 1KB.
+  A Lua script will abort with an error if the number of entries or an entry's size exceeds these limits.
+- The ``RGW`` Lua table uses string indices and can store values of type string, integer, double, and boolean.
+
+Increment/Decrement Functions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Since entries in the ``RGW`` table may be accessed from multiple places at the same time, we need a way
+to atomically increment and decrement numeric values in it. For that, the following functions should be used:
+
+- ``RGW.increment(<key>, [value])`` increments the value of ``key`` by ``value`` if a value is provided, or by 1 if not.
+- ``RGW.decrement(<key>, [value])`` decrements the value of ``key`` by ``value`` if a value is provided, or by 1 if not.
+- If the value of ``key`` is not numeric, the execution of the script will fail.
+- If we try to increment or decrement by a non-numeric value, the execution of the script will fail.
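+
+As a sketch of how these functions might be used, a ``prerequest`` context script could count object uploads per bucket (the key prefix here is illustrative):
+
+.. code-block:: lua
+
+  -- count object uploads per bucket in the shared RGW table
+  if Request.RGWOp == "put_obj" then
+    local key = "put-count-" .. Request.Bucket.Name
+    -- atomic increment; safe even when multiple requests run concurrently
+    RGW.increment(key)
+  end
+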
+
+
+Lua Code Samples
+----------------
+- Print information on source and destination objects in case of copy:
+
+.. code-block:: lua
+
+ function print_object(object)
+ RGWDebugLog(" Name: " .. object.Name)
+ RGWDebugLog(" Instance: " .. object.Instance)
+ RGWDebugLog(" Id: " .. object.Id)
+ RGWDebugLog(" Size: " .. object.Size)
+ RGWDebugLog(" MTime: " .. object.MTime)
+ end
+
+ if Request.CopyFrom and Request.Object and Request.CopyFrom.Object then
+ RGWDebugLog("copy from object:")
+ print_object(Request.CopyFrom.Object)
+ RGWDebugLog("to object:")
+ print_object(Request.Object)
+ end
+
+- Print ACLs via a "generic function":
+
+.. code-block:: lua
+
+ function print_owner(owner)
+ RGWDebugLog("Owner:")
+ RGWDebugLog(" Display Name: " .. owner.DisplayName)
+ RGWDebugLog(" Id: " .. owner.User.Id)
+ RGWDebugLog(" Tenant: " .. owner.User.Tenant)
+ end
+
+ function print_acl(acl_type)
+ index = acl_type .. "ACL"
+ acl = Request[index]
+ if acl then
+ RGWDebugLog(acl_type .. "ACL Owner")
+ print_owner(acl.Owner)
+ RGWDebugLog(" there are " .. #acl.Grants .. " grants for owner")
+ for k,v in pairs(acl.Grants) do
+ RGWDebugLog(" Grant Key: " .. k)
+ RGWDebugLog(" Grant Type: " .. v.Type)
+ RGWDebugLog(" Grant Group Type: " .. v.GroupType)
+ RGWDebugLog(" Grant Referer: " .. v.Referer)
+ RGWDebugLog(" Grant User Tenant: " .. v.User.Tenant)
+ RGWDebugLog(" Grant User Id: " .. v.User.Id)
+ end
+ else
+ RGWDebugLog("no " .. acl_type .. " ACL in request: " .. Request.Id)
+ end
+ end
+
+ print_acl("User")
+ print_acl("Bucket")
+ print_acl("Object")
+
+- Use of operations log only in case of errors:
+
+.. code-block:: lua
+
+ if Request.Response.HTTPStatusCode ~= 200 then
+ RGWDebugLog("request is bad, use ops log")
+ rc = Request.Log()
+ RGWDebugLog("ops log return code: " .. rc)
+ end
+
+- Set values into the error message:
+
+.. code-block:: lua
+
+ if Request.Response.HTTPStatusCode == 500 then
+ Request.Response.Message = "<Message> something bad happened :-( </Message>"
+ end
+
+- Add metadata to an object that was not originally sent by the client:
+
+In the ``prerequest`` context we should add:
+
+.. code-block:: lua
+
+ if Request.RGWOp == 'put_obj' then
+ Request.HTTP.Metadata["x-amz-meta-mydata"] = "my value"
+ end
+
+In the ``postrequest`` context we look at the metadata:
+
+.. code-block:: lua
+
+ RGWDebugLog("number of metadata entries is: " .. #Request.HTTP.Metadata)
+ for k, v in pairs(Request.HTTP.Metadata) do
+ RGWDebugLog("key=" .. k .. ", " .. "value=" .. v)
+ end
+
+- Use modules to create a Unix socket based, JSON encoded "access log":
+
+First we should add the following packages to the allowlist:
+
+::
+
+ # radosgw-admin script-package add --package=luajson
+ # radosgw-admin script-package add --package=luasocket --allow-compilation
+
+
+Then restart the radosgw and upload the following script to the ``postrequest`` context:
+
+.. code-block:: lua
+
+ if Request.RGWOp == "get_obj" then
+ local json = require("json")
+ local socket = require("socket")
+ local unix = require("socket.unix")
+ local s = assert(unix())
+ E = {}
+
+ msg = {bucket = (Request.Bucket or (Request.CopyFrom or E).Bucket).Name,
+ time = Request.Time,
+ operation = Request.RGWOp,
+ http_status = Request.Response.HTTPStatusCode,
+ error_code = Request.Response.HTTPStatus,
+ object_size = Request.Object.Size,
+ trans_id = Request.TransactionId}
+
+ assert(s:connect("/tmp/socket"))
+ assert(s:send(json.encode(msg).."\n"))
+ assert(s:close())
+ end
+
+
+- Trace only requests of a specific bucket
+
+Tracing is disabled by default, so we should enable tracing for this specific bucket:
+
+.. code-block:: lua
+
+ if Request.Bucket.Name == "my-bucket" then
+ Request.Trace.Enable = true
+ end
+
+
+If `tracing is enabled <https://docs.ceph.com/en/latest/jaegertracing/#how-to-enable-tracing-in-ceph/>`_ on the RGW, the value of ``Request.Trace.Enable`` is ``true``, so we should disable tracing for all other requests that do not match the bucket name.
+In the ``prerequest`` context:
+
+.. code-block:: lua
+
+ if Request.Bucket.Name ~= "my-bucket" then
+ Request.Trace.Enable = false
+ end
+
+Note that changing ``Request.Trace.Enable`` does not change the tracer's state; it only disables or enables tracing for the specific request.
+
+
+- Add information to request traces
+
+In the ``postrequest`` context, we can add attributes and events to the request's trace.
+
+.. code-block:: lua
+
+ Request.Trace.AddEvent("lua script execution started")
+
+ Request.Trace.SetAttribute("HTTPStatusCode", Request.Response.HTTPStatusCode)
+
+ event_attrs = {}
+ for k,v in pairs(Request.GenericAttributes) do
+ event_attrs[k] = v
+ end
+
+ Request.Trace.AddEvent("second event", event_attrs)
+
+- The entropy value of an object can be used to detect whether the object is encrypted.
+  The following script calculates the entropy and size of uploaded objects and prints them to the debug log.
+
+In the ``putdata`` context, add the following script:
+
+.. code-block:: lua
+
+ function object_entropy()
+ local byte_hist = {}
+ local byte_hist_size = 256
+ for i = 1,byte_hist_size do
+ byte_hist[i] = 0
+ end
+ local total = 0
+
+ for i, c in pairs(Data) do
+ local byte = c:byte() + 1
+ byte_hist[byte] = byte_hist[byte] + 1
+ total = total + 1
+ end
+
+ entropy = 0
+
+ for _, count in ipairs(byte_hist) do
+ if count ~= 0 then
+ local p = 1.0 * count / total
+ entropy = entropy - (p * math.log(p)/math.log(byte_hist_size))
+ end
+ end
+
+ return entropy
+ end
+
+ local full_name = Request.Bucket.Name.."\\"..Request.Object.Name
+ RGWDebugLog("entropy of chunk of: " .. full_name .. " at offset:" .. tostring(Offset) .. " is: " .. tostring(object_entropy()))
+ RGWDebugLog("payload size of chunk of: " .. full_name .. " is: " .. #Data)
+
diff --git a/doc/radosgw/mfa.rst b/doc/radosgw/mfa.rst
new file mode 100644
index 000000000..416f23af1
--- /dev/null
+++ b/doc/radosgw/mfa.rst
@@ -0,0 +1,102 @@
+.. _rgw_mfa:
+
+==========================================
+RGW Support for Multifactor Authentication
+==========================================
+
+.. versionadded:: Mimic
+
+The S3 multifactor authentication (MFA) feature allows
+users to require the use of a one-time password when removing
+objects from certain buckets. The buckets need to be configured
+with versioning and MFA enabled, which can be done through
+the S3 API.
+
+Time-based one-time password (TOTP) tokens can be assigned to a user
+through ``radosgw-admin``. Each token has a secret seed and a serial
+id assigned to it. Tokens added to a user can
+be listed, removed, and re-synchronized.
+
+Multisite
+=========
+
+While the MFA IDs are set on the user's metadata, the
+actual MFA one-time password configuration resides in the local zone's
+OSDs. Therefore, in a multi-site environment it is advisable to use
+different tokens for different zones.
+
+
+Terminology
+=============
+
+- ``TOTP``: Time-based One Time Password
+
+- ``token serial``: a string that represents the ID of a TOTP token
+
+- ``token seed``: the secret seed that is used to calculate the TOTP
+
+- ``totp seconds``: the time resolution used for TOTP generation
+
+- ``totp window``: the number of TOTP tokens that are checked before and after the current token when validating a token
+
+- ``totp pin``: the valid value of a TOTP token at a certain time
+
+
+Admin commands
+==============
+
+Create a new MFA TOTP token
+------------------------------------
+
+::
+
+ # radosgw-admin mfa create --uid=<user-id> \
+ --totp-serial=<serial> \
+ --totp-seed=<seed> \
+ [ --totp-seed-type=<hex|base32> ] \
+ [ --totp-seconds=<num-seconds> ] \
+ [ --totp-window=<twindow> ]
+
+List MFA TOTP tokens
+---------------------
+
+::
+
+ # radosgw-admin mfa list --uid=<user-id>
+
+
+Show MFA TOTP token
+------------------------------------
+
+::
+
+ # radosgw-admin mfa get --uid=<user-id> --totp-serial=<serial>
+
+
+Delete MFA TOTP token
+------------------------
+
+::
+
+ # radosgw-admin mfa remove --uid=<user-id> --totp-serial=<serial>
+
+
+Check MFA TOTP token
+--------------------------------
+
+Test a TOTP token pin. This is needed to validate that the TOTP functions correctly. ::
+
+ # radosgw-admin mfa check --uid=<user-id> --totp-serial=<serial> \
+ --totp-pin=<pin>
+
+
+Re-sync MFA TOTP token
+--------------------------------
+
+To re-sync a TOTP token (in case of time skew), feed it
+two consecutive pins: the previous pin and the current pin. ::
+
+ # radosgw-admin mfa resync --uid=<user-id> --totp-serial=<serial> \
+ --totp-pin=<prev-pin> --totp-pin=<current-pin>
+
+
diff --git a/doc/radosgw/multisite-sync-policy.rst b/doc/radosgw/multisite-sync-policy.rst
new file mode 100644
index 000000000..7a6e7105c
--- /dev/null
+++ b/doc/radosgw/multisite-sync-policy.rst
@@ -0,0 +1,716 @@
+.. _radosgw-multisite-sync-policy:
+
+=====================
+Multisite Sync Policy
+=====================
+
+.. versionadded:: Octopus
+
+Multisite bucket-granularity sync policy provides fine-grained control of data movement between buckets in different zones. It extends the zone sync mechanism. Previously buckets were treated symmetrically: each (data) zone held a mirror of the bucket that was supposed to be identical to all other zones. With the bucket-granularity sync policy it is possible for buckets to diverge, and a bucket can pull data from other buckets (ones that don't share its name or its ID) in different zones. The sync process previously assumed that the bucket sync source and the bucket sync destination always referred to the same bucket; that is no longer the case.
+
+The sync policy supersedes the old coarse zonegroup configuration (``sync_from*``). The sync policy can be configured at the zonegroup level (where, if configured, it replaces the old-style configuration), and it can also be configured at the bucket level.
+
+A sync policy consists of multiple groups, each of which can contain lists of data-flow configurations and lists of pipe configurations. A data flow defines the flow of data between the different zones. It can define a symmetrical data flow, in which multiple zones sync data from each other, or a directional data flow, in which data moves one way, from one zone to another.
+
+A pipe defines the actual buckets that can use these data flows, and the properties associated with them (for example: a source object prefix).
+
+A sync policy group can be in 3 states:
+
++----------------------------+----------------------------------------+
+| Value | Description |
++============================+========================================+
+| ``enabled`` | sync is allowed and enabled |
++----------------------------+----------------------------------------+
+| ``allowed`` | sync is allowed |
++----------------------------+----------------------------------------+
+| ``forbidden`` | sync (as defined by this group) is not |
+| | allowed and can override other groups |
++----------------------------+----------------------------------------+
+
+A policy can be defined at the bucket level. A bucket level sync policy inherits the data flow of the zonegroup policy, and can only define a subset of what the zonegroup allows.
+
+A wildcard zone or a wildcard bucket parameter in the policy refers to all relevant zones or buckets; in the context of a bucket policy, it means the current bucket instance. A disaster-recovery configuration in which entire zones are mirrored does not require configuring anything on the buckets. However, for fine-grained bucket sync it is better to allow the pipes to be synced (``status=allowed``) at the zonegroup level (e.g., using wildcards), and enable the specific sync only at the bucket level (``status=enabled``). If needed, the policy at the bucket level can limit the data movement to specific relevant zones.
+
+.. important:: Any changes to the zonegroup policy need to be applied on the
+   zonegroup master zone, and require a period update and commit. Changes
+   to a bucket policy need to be applied on the zonegroup master
+   zone. These changes are dynamically handled by RGW.
+
+
+S3 Replication API
+~~~~~~~~~~~~~~~~~~
+
+The S3 bucket replication API has also been implemented, and allows users to create replication rules between different buckets. Note, though, that while the AWS replication feature allows bucket replication within the same zone, RGW does not currently allow it. However, the RGW API also adds a ``Zone`` array that allows users to select the zones to which the specific bucket will be synced.
+
+
+Sync Policy Control Reference
+=============================
+
+
+Get Sync Policy
+~~~~~~~~~~~~~~~
+
+To retrieve the current zonegroup sync policy, or a specific bucket policy:
+
+::
+
+ # radosgw-admin sync policy get [--bucket=<bucket>]
+
+
+Create Sync Policy Group
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+To create a sync policy group:
+
+::
+
+ # radosgw-admin sync group create [--bucket=<bucket>] \
+ --group-id=<group-id> \
+ --status=<enabled | allowed | forbidden> \
+
+
+Modify Sync Policy Group
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+To modify a sync policy group:
+
+::
+
+ # radosgw-admin sync group modify [--bucket=<bucket>] \
+ --group-id=<group-id> \
+ --status=<enabled | allowed | forbidden> \
+
+
+Show Sync Policy Group
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+To show a sync policy group:
+
+::
+
+ # radosgw-admin sync group get [--bucket=<bucket>] \
+ --group-id=<group-id>
+
+
+Remove Sync Policy Group
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+To remove a sync policy group:
+
+::
+
+ # radosgw-admin sync group remove [--bucket=<bucket>] \
+ --group-id=<group-id>
+
+
+
+Create Sync Flow
+~~~~~~~~~~~~~~~~
+
+- To create or update directional sync flow:
+
+::
+
+ # radosgw-admin sync group flow create [--bucket=<bucket>] \
+ --group-id=<group-id> \
+ --flow-id=<flow-id> \
+ --flow-type=directional \
+ --source-zone=<source_zone> \
+ --dest-zone=<dest_zone>
+
+
+- To create or update symmetrical sync flow:
+
+::
+
+ # radosgw-admin sync group flow create [--bucket=<bucket>] \
+ --group-id=<group-id> \
+ --flow-id=<flow-id> \
+ --flow-type=symmetrical \
+ --zones=<zones>
+
+
+Where ``--zones`` is a comma-separated list of all the zones to add to the flow.
+
+
+Remove Sync Flow Zones
+~~~~~~~~~~~~~~~~~~~~~~
+
+- To remove directional sync flow:
+
+::
+
+ # radosgw-admin sync group flow remove [--bucket=<bucket>] \
+ --group-id=<group-id> \
+ --flow-id=<flow-id> \
+ --flow-type=directional \
+ --source-zone=<source_zone> \
+ --dest-zone=<dest_zone>
+
+
+- To remove specific zones from symmetrical sync flow:
+
+::
+
+ # radosgw-admin sync group flow remove [--bucket=<bucket>] \
+ --group-id=<group-id> \
+ --flow-id=<flow-id> \
+ --flow-type=symmetrical \
+ --zones=<zones>
+
+
+Where ``--zones`` is a comma-separated list of all the zones to remove from the flow.
+
+
+- To remove symmetrical sync flow:
+
+::
+
+ # radosgw-admin sync group flow remove [--bucket=<bucket>] \
+ --group-id=<group-id> \
+ --flow-id=<flow-id> \
+ --flow-type=symmetrical
+
+
+Create Sync Pipe
+~~~~~~~~~~~~~~~~
+
+To create a sync group pipe, or update its parameters:
+
+
+::
+
+ # radosgw-admin sync group pipe create [--bucket=<bucket>] \
+ --group-id=<group-id> \
+ --pipe-id=<pipe-id> \
+ --source-zones=<source_zones> \
+ [--source-bucket=<source_buckets>] \
+ [--source-bucket-id=<source_bucket_id>] \
+ --dest-zones=<dest_zones> \
+ [--dest-bucket=<dest_buckets>] \
+ [--dest-bucket-id=<dest_bucket_id>] \
+ [--prefix=<source_prefix>] \
+ [--prefix-rm] \
+ [--tags-add=<tags>] \
+ [--tags-rm=<tags>] \
+ [--dest-owner=<owner>] \
+ [--storage-class=<storage_class>] \
+ [--mode=<system | user>] \
+ [--uid=<user_id>]
+
+
+- Zones are either a list of zones, or ``'*'`` (wildcard). Wildcard zones mean any zone that matches the sync flow rules.
+- Buckets are either a bucket name, or ``'*'`` (wildcard). A wildcard bucket means the current bucket.
+- A prefix can be defined to filter source objects.
+- Tags are passed as a comma-separated list of ``key=value`` pairs.
+- The destination owner can be set to force an owner on the destination objects. If user mode is selected, only the destination bucket owner can be set.
+- The destination storage class can also be configured.
+- A user id can be set for user mode; it is the user under which the sync operation will be executed (for permission validation).
+
+
+Remove Sync Pipe
+~~~~~~~~~~~~~~~~
+
+To remove specific sync group pipe parameters, or the entire pipe:
+
+
+::
+
+ # radosgw-admin sync group pipe remove [--bucket=<bucket>] \
+ --group-id=<group-id> \
+ --pipe-id=<pipe-id> \
+ [--source-zones=<source_zones>] \
+ [--source-bucket=<source_buckets>] \
+ [--source-bucket-id=<source_bucket_id>] \
+ [--dest-zones=<dest_zones>] \
+ [--dest-bucket=<dest_buckets>] \
+ [--dest-bucket-id=<dest_bucket_id>]
+
+
+Sync Info
+~~~~~~~~~
+
+To get information about the expected sync sources and targets (as defined by the sync policy):
+
+::
+
+ # radosgw-admin sync info [--bucket=<bucket>] \
+ [--effective-zone-name=<zone>]
+
+
+Since a bucket can define a policy that defines data movement from it towards a different bucket at a different zone, when the policy is created we also generate a list of bucket dependencies that are used as hints when a sync of any particular bucket happens. The fact that a bucket references another bucket does not mean it actually syncs to/from it, as the data flow might not permit it.
+
+
+Examples
+========
+
+The system in these examples includes 3 zones: ``us-east`` (the master zone), ``us-west``, ``us-west-2``.
+
+Example 1: Two Zones, Complete Mirror
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This is similar to the older (pre-``Octopus``) sync capabilities, but done via the new sync policy engine. Note that changes to the zonegroup sync policy require a period update and commit.
+
+
+::
+
+ [us-east] $ radosgw-admin sync group create --group-id=group1 --status=allowed
+ [us-east] $ radosgw-admin sync group flow create --group-id=group1 \
+ --flow-id=flow-mirror --flow-type=symmetrical \
+ --zones=us-east,us-west
+ [us-east] $ radosgw-admin sync group pipe create --group-id=group1 \
+ --pipe-id=pipe1 --source-zones='*' \
+ --source-bucket='*' --dest-zones='*' \
+ --dest-bucket='*'
+ [us-east] $ radosgw-admin sync group modify --group-id=group1 --status=enabled
+ [us-east] $ radosgw-admin period update --commit
+
+ $ radosgw-admin sync info --bucket=buck
+ {
+ "sources": [
+ {
+ "id": "pipe1",
+ "source": {
+ "zone": "us-west",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ "dest": {
+ "zone": "us-east",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ "params": {
+ ...
+ }
+ }
+ ],
+ "dests": [
+ {
+ "id": "pipe1",
+ "source": {
+ "zone": "us-east",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ "dest": {
+ "zone": "us-west",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ ...
+ }
+ ],
+ ...
+ }
+
+
+Note that the ``id`` field in the output above reflects the pipe rule
+that generated the entry; a single rule can generate multiple sync
+entries, as can be seen in the example.
+
+::
+
+ [us-west] $ radosgw-admin sync info --bucket=buck
+ {
+ "sources": [
+ {
+ "id": "pipe1",
+ "source": {
+ "zone": "us-east",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ "dest": {
+ "zone": "us-west",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ ...
+ }
+ ],
+ "dests": [
+ {
+ "id": "pipe1",
+ "source": {
+ "zone": "us-west",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ "dest": {
+ "zone": "us-east",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ ...
+ }
+ ],
+ ...
+ }
+
+
+
+Example 2: Directional, Entire Zone Backup
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This is also similar to the older sync capabilities. Here we add a third zone, ``us-west-2``, that will be a replica of ``us-west``; data will not be replicated back from it.
+
+::
+
+ [us-east] $ radosgw-admin sync group flow create --group-id=group1 \
+ --flow-id=us-west-backup --flow-type=directional \
+ --source-zone=us-west --dest-zone=us-west-2
+ [us-east] $ radosgw-admin period update --commit
+
+
+Note that ``us-west`` has two dests:
+
+::
+
+ [us-west] $ radosgw-admin sync info --bucket=buck
+ {
+ "sources": [
+ {
+ "id": "pipe1",
+ "source": {
+ "zone": "us-east",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ "dest": {
+ "zone": "us-west",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ ...
+ }
+ ],
+ "dests": [
+ {
+ "id": "pipe1",
+ "source": {
+ "zone": "us-west",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ "dest": {
+ "zone": "us-east",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ ...
+ },
+ {
+ "id": "pipe1",
+ "source": {
+ "zone": "us-west",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ "dest": {
+ "zone": "us-west-2",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ ...
+ }
+ ],
+ ...
+ }
+
+
+Whereas ``us-west-2`` has only a source and no destinations:
+
+::
+
+ [us-west-2] $ radosgw-admin sync info --bucket=buck
+ {
+ "sources": [
+ {
+ "id": "pipe1",
+ "source": {
+ "zone": "us-west",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ "dest": {
+ "zone": "us-west-2",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ ...
+ }
+ ],
+ "dests": [],
+ ...
+ }
+
+
+
+Example 3: Mirror a Specific Bucket
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+We use the same group configuration, but this time switch it to the ``allowed`` state, which means that sync is allowed but not enabled.
+
+::
+
+ [us-east] $ radosgw-admin sync group modify --group-id=group1 --status=allowed
+ [us-east] $ radosgw-admin period update --commit
+
+
+Now we create a bucket-level policy rule for the existing bucket ``buck2``. Note that the bucket needs to exist before this policy can be set, and that admin commands that modify bucket policies need to run on the master zone; however, they do not require a period update. There is no need to change the data flow, as it is inherited from the zonegroup policy. A bucket policy flow can only be a subset of the flow defined in the zonegroup policy. The same goes for pipes, although a bucket policy can enable pipes that are not enabled (albeit not forbidden) in the zonegroup policy.
+
+::
+
+ [us-east] $ radosgw-admin sync group create --bucket=buck2 \
+ --group-id=buck2-default --status=enabled
+
+ [us-east] $ radosgw-admin sync group pipe create --bucket=buck2 \
+ --group-id=buck2-default --pipe-id=pipe1 \
+ --source-zones='*' --dest-zones='*'
+
+
+
+Example 4: Limit Bucket Sync To Specific Zones
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This will only sync ``buck3`` to ``us-east`` (from any zone that flow allows to sync into ``us-east``).
+
+::
+
+ [us-east] $ radosgw-admin sync group create --bucket=buck3 \
+ --group-id=buck3-default --status=enabled
+
+ [us-east] $ radosgw-admin sync group pipe create --bucket=buck3 \
+ --group-id=buck3-default --pipe-id=pipe1 \
+ --source-zones='*' --dest-zones=us-east
+
+
+
+Example 5: Sync From a Different Bucket
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Note that bucket sync currently works only across zones, not within the same zone.
+
+Set ``buck4`` to pull data from ``buck5``:
+
+::
+
+ [us-east] $ radosgw-admin sync group create --bucket=buck4 \
+ --group-id=buck4-default --status=enabled
+
+ [us-east] $ radosgw-admin sync group pipe create --bucket=buck4 \
+ --group-id=buck4-default --pipe-id=pipe1 \
+ --source-zones='*' --source-bucket=buck5 \
+ --dest-zones='*'
+
+
+We can also limit it to specific zones. For example, the following will
+only sync data originating in ``us-west``:
+
+::
+
+ [us-east] $ radosgw-admin sync group pipe modify --bucket=buck4 \
+ --group-id=buck4-default --pipe-id=pipe1 \
+ --source-zones=us-west --source-bucket=buck5 \
+ --dest-zones='*'
+
+
+Checking the sync info for ``buck5`` on ``us-west`` is interesting:
+
+::
+
+ [us-west] $ radosgw-admin sync info --bucket=buck5
+ {
+ "sources": [],
+ "dests": [],
+ "hints": {
+ "sources": [],
+ "dests": [
+ "buck4:115b12b3-....14433.2"
+ ]
+ },
+ "resolved-hints-1": {
+ "sources": [],
+ "dests": [
+ {
+ "id": "pipe1",
+ "source": {
+ "zone": "us-west",
+ "bucket": "buck5"
+ },
+ "dest": {
+ "zone": "us-east",
+ "bucket": "buck4:115b12b3-....14433.2"
+ },
+ ...
+ },
+ {
+ "id": "pipe1",
+ "source": {
+ "zone": "us-west",
+ "bucket": "buck5"
+ },
+ "dest": {
+ "zone": "us-west-2",
+ "bucket": "buck4:115b12b3-....14433.2"
+ },
+ ...
+ }
+ ]
+ },
+ "resolved-hints": {
+ "sources": [],
+ "dests": []
+ }
+ }
+
+
+Note that there are resolved hints, which means that the bucket ``buck5`` found out about ``buck4`` syncing from it indirectly, rather than from its own policy (the policy for ``buck5`` itself is empty).
+
+
+Example 6: Sync To Different Bucket
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The same mechanism can work for configuring data to be synced to (rather than synced from, as in the previous example). Note that internally, data is still pulled from the source at the destination zone.
+
+Set ``buck6`` to "push" data to ``buck5``:
+
+::
+
+ [us-east] $ radosgw-admin sync group create --bucket=buck6 \
+ --group-id=buck6-default --status=enabled
+
+ [us-east] $ radosgw-admin sync group pipe create --bucket=buck6 \
+ --group-id=buck6-default --pipe-id=pipe1 \
+ --source-zones='*' --source-bucket='*' \
+ --dest-zones='*' --dest-bucket=buck5
+
+
+A wildcard bucket name means the current bucket in the context of bucket sync policy.
+
+Combined with the configuration in Example 5, we can now write data to ``buck6`` on ``us-east``; the data will sync to ``buck5`` on ``us-west``, and from there it will be distributed to ``buck4`` on ``us-east`` and on ``us-west-2``.
+
+Example 7: Source Filters
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Sync from ``buck8`` to ``buck9``, but only objects that start with ``foo/``:
+
+::
+
+ [us-east] $ radosgw-admin sync group create --bucket=buck8 \
+ --group-id=buck8-default --status=enabled
+
+ [us-east] $ radosgw-admin sync group pipe create --bucket=buck8 \
+ --group-id=buck8-default --pipe-id=pipe-prefix \
+ --prefix=foo/ --source-zones='*' --dest-zones='*' \
+ --dest-bucket=buck9
+
+
+Also sync from ``buck8`` to ``buck9`` any object that has the tags ``color=blue`` or ``color=red``:
+
+::
+
+ [us-east] $ radosgw-admin sync group pipe create --bucket=buck8 \
+ --group-id=buck8-default --pipe-id=pipe-tags \
+ --tags-add=color=blue,color=red --source-zones='*' \
+ --dest-zones='*' --dest-bucket=buck9
+
+
+And we can check the expected sync in ``us-east`` (for example):
+
+::
+
+ [us-east] $ radosgw-admin sync info --bucket=buck8
+ {
+ "sources": [],
+ "dests": [
+ {
+ "id": "pipe-prefix",
+ "source": {
+ "zone": "us-east",
+ "bucket": "buck8:115b12b3-....14433.5"
+ },
+ "dest": {
+ "zone": "us-west",
+ "bucket": "buck9"
+ },
+ "params": {
+ "source": {
+ "filter": {
+ "prefix": "foo/",
+ "tags": []
+ }
+ },
+ ...
+ }
+ },
+ {
+ "id": "pipe-tags",
+ "source": {
+ "zone": "us-east",
+ "bucket": "buck8:115b12b3-....14433.5"
+ },
+ "dest": {
+ "zone": "us-west",
+ "bucket": "buck9"
+ },
+ "params": {
+ "source": {
+ "filter": {
+ "tags": [
+ {
+ "key": "color",
+ "value": "blue"
+ },
+ {
+ "key": "color",
+ "value": "red"
+ }
+ ]
+ }
+ },
+ ...
+ }
+ }
+ ],
+ ...
+ }
+
+
+Note that there aren't any sources, only two different destinations (one for each configuration). When the sync process runs, it selects the relevant rule for each object that it syncs.
+
+Prefixes and tags can be combined, in which case an object needs to match both in order to be synced. A priority param can also be passed; when multiple different rules match (with the same source and destination), it determines which of the rules is used.
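+
+For example, the following sketch combines both filter types in a single
+pipe, using only the flags demonstrated above (the ``pipe-both`` pipe id is
+an arbitrary name). Only objects that start with ``foo/`` *and* carry the
+tag ``color=blue`` would be synced:
+
+::
+
+ [us-east] $ radosgw-admin sync group pipe create --bucket=buck8 \
+ --group-id=buck8-default --pipe-id=pipe-both \
+ --prefix=foo/ --tags-add=color=blue \
+ --source-zones='*' --dest-zones='*' \
+ --dest-bucket=buck9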
+
+
+Example 8: Destination Params: Storage Class
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The storage class of the destination objects can be configured:
+
+::
+
+ [us-east] $ radosgw-admin sync group create --bucket=buck10 \
+ --group-id=buck10-default --status=enabled
+
+ [us-east] $ radosgw-admin sync group pipe create --bucket=buck10 \
+ --group-id=buck10-default \
+ --pipe-id=pipe-storage-class \
+ --source-zones='*' --dest-zones=us-west-2 \
+ --storage-class=CHEAP_AND_SLOW
+
+
+Example 9: Destination Params: Destination Owner Translation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Set the owner of the destination objects to the owner of the destination
+bucket. This requires specifying the uid of the destination bucket owner:
+
+::
+
+ [us-east] $ radosgw-admin sync group create --bucket=buck11 \
+ --group-id=buck11-default --status=enabled
+
+ [us-east] $ radosgw-admin sync group pipe create --bucket=buck11 \
+ --group-id=buck11-default --pipe-id=pipe-dest-owner \
+ --source-zones='*' --dest-zones='*' \
+ --dest-bucket=buck12 --dest-owner=joe
+
+Example 10: Destination Params: User Mode
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+User mode makes sure that the user has permission both to read the objects and to write to the destination bucket. This requires specifying the uid of the user in whose context the operation executes.
+
+::
+
+ [us-east] $ radosgw-admin sync group pipe modify --bucket=buck11 \
+ --group-id=buck11-default --pipe-id=pipe-dest-owner \
+ --mode=user --uid=jenny
+
+
+
diff --git a/doc/radosgw/multisite.rst b/doc/radosgw/multisite.rst
new file mode 100644
index 000000000..c7627371d
--- /dev/null
+++ b/doc/radosgw/multisite.rst
@@ -0,0 +1,1690 @@
+.. _multisite:
+
+==========
+Multi-Site
+==========
+
+Single-zone Configurations and Multi-site Configurations
+========================================================
+
+Single-zone Configurations
+--------------------------
+
+A single-zone configuration typically consists of two things:
+
+#. One "zonegroup", which contains one zone.
+#. One or more ``ceph-radosgw`` instances, with client requests
+ load-balanced between them.
+
+In a typical single-zone configuration, there are multiple ``ceph-radosgw``
+instances that make use of a single Ceph storage cluster.
+
+Varieties of Multi-site Configuration
+-------------------------------------
+
+.. versionadded:: Jewel
+
+Beginning with the Kraken release, Ceph supports several multi-site
+configurations for the Ceph Object Gateway:
+
+- **Multi-zone:** A more advanced topology is the "multi-zone" configuration,
+ which consists of one zonegroup and multiple zones, with each zone
+ consisting of one or more ``ceph-radosgw`` instances. **Each zone is backed
+ by its own Ceph Storage Cluster.**
+
+ The presence of multiple zones in a given zonegroup provides disaster
+ recovery for that zonegroup in the event that one of the zones experiences a
+ significant failure. Beginning with the Kraken release, each zone is active
+ and can receive write operations. A multi-zone configuration that contains
+ multiple active zones enhances disaster recovery and can also be used as a
+ foundation for content delivery networks.
+
+- **Multi-zonegroups:** Ceph Object Gateway supports multiple zonegroups (which
+ were formerly called "regions"). Each zonegroup contains one or more zones.
+ If two zones are in the same zonegroup, and if that zonegroup is in the same
+ realm as a second zonegroup, then the objects stored in the two zones share
+ a global object namespace. This global object namespace ensures unique
+ object IDs across zonegroups and zones.
+
+ Each bucket is owned by the zonegroup where it was created (except where
+ overridden by the :ref:`LocationConstraint<s3_bucket_placement>` on
+ bucket creation), and its object data will only replicate to other zones in
+ that zonegroup. Any requests for data in that bucket that are sent to other
+ zonegroups will be redirected to the zonegroup where the bucket resides.
+
+ It can be useful to create multiple zonegroups when you want to share a
+ namespace of users and buckets across many zones, but isolate the object data
+ to a subset of those zones. It might be that you have several connected sites
+ that share storage, but only require a single backup for purposes of disaster
+ recovery. In such a case, it could make sense to create several zonegroups
+ with only two zones each to avoid replicating all objects to all zones.
+
+ In other cases, it might make more sense to isolate things in separate
+ realms, with each realm having a single zonegroup. Zonegroups provide
+ flexibility by making it possible to control the isolation of data and
+ metadata separately.
+
+- **Multiple Realms:** Beginning with the Kraken release, the Ceph Object
+ Gateway supports "realms", which are containers for zonegroups. Realms make
+ it possible to set policies that apply to multiple zonegroups. Realms have a
+ globally unique namespace and can contain either a single zonegroup or
+ multiple zonegroups. If you choose to make use of multiple realms, you can
+ define multiple namespaces and multiple configurations (this means that each
+ realm can have a configuration that is distinct from the configuration of
+ other realms).
+
+
+Diagram - Replication of Object Data Between Zones
+--------------------------------------------------
+
+The replication of object data between zones within a zonegroup looks
+something like this:
+
+.. image:: ../images/zone-sync.svg
+ :align: center
+
+At the top of this diagram, we see two applications (also known as "clients").
+The application on the right is both writing and reading data from the Ceph
+Cluster, by means of the RADOS Gateway (RGW). The application on the left is
+only *reading* data from the Ceph Cluster, by means of an instance of RADOS
+Gateway (RGW). In both cases (read-and-write and read-only), the transmission
+of data is handled RESTfully.
+
+In the middle of this diagram, we see two zones, each of which contains an
+instance of RADOS Gateway (RGW). These instances of RGW are handling the
+movement of data from the applications to the zonegroup. The arrow from the
+master zone (US-EAST) to the secondary zone (US-WEST) represents an act of data
+synchronization.
+
+At the bottom of this diagram, we see the data distributed into the Ceph
+Storage Cluster.
+
+For additional details on setting up a cluster, see `Ceph Object Gateway for
+Production <https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/ceph_object_gateway_for_production/index/>`__.
+
+Functional Changes from Infernalis
+==================================
+
+Beginning with Kraken, each Ceph Object Gateway can be configured to work in an
+active-active zone mode. This makes it possible to write to non-master zones.
+
+The multi-site configuration is stored within a container called a "realm". The
+realm stores zonegroups, zones, and a time "period" with multiple epochs,
+which are used for tracking changes to the configuration.
+
+Beginning with Kraken, the ``ceph-radosgw`` daemons handle the synchronization
+of data across zones, which eliminates the need for a separate synchronization
+agent. This new approach to synchronization allows the Ceph Object Gateway to
+operate with an "active-active" configuration instead of with an
+"active-passive" configuration.
+
+Requirements and Assumptions
+============================
+
+A multi-site configuration requires at least two Ceph storage clusters. The
+multi-site configuration must have at least two Ceph object gateway instances
+(one for each Ceph storage cluster).
+
+This guide assumes that at least two Ceph storage clusters are in
+geographically separate locations; however, the configuration can work on the
+same site. This guide also assumes two Ceph object gateway servers named
+``rgw1`` and ``rgw2``.
+
+.. important:: Running a single geographically-distributed Ceph storage cluster
+ is NOT recommended unless you have low latency WAN connections.
+
+A multi-site configuration requires a master zonegroup and a master zone. Each
+zonegroup requires a master zone. Zonegroups may have one or more secondary
+or non-master zones.
+
+In this guide, the ``rgw1`` host will serve as the master zone of the master
+zonegroup; and, the ``rgw2`` host will serve as the secondary zone of the
+master zonegroup.
+
+See `Pools`_ for instructions on creating and tuning pools for Ceph Object
+Storage.
+
+See `Sync Policy Config`_ for instructions on defining fine-grained bucket sync
+policy rules.
+
+.. _master-zone-label:
+
+Configuring a Master Zone
+=========================
+
+All gateways in a multi-site configuration retrieve their configurations from a
+``ceph-radosgw`` daemon that is on a host within both the master zonegroup and
+the master zone. To configure your gateways in a multi-site configuration,
+choose a ``ceph-radosgw`` instance to configure the master zonegroup and
+master zone.
+
+Create a Realm
+--------------
+
+A realm contains the multi-site configuration of zonegroups and zones. The
+realm enforces a globally unique namespace within itself.
+
+#. Create a new realm for the multi-site configuration by opening a command
+ line interface on a host that will serve in the master zonegroup and zone.
+ Then run the following command:
+
+ .. prompt:: bash #
+
+ radosgw-admin realm create --rgw-realm={realm-name} [--default]
+
+ For example:
+
+ .. prompt:: bash #
+
+ radosgw-admin realm create --rgw-realm=movies --default
+
+ .. note:: If you intend the cluster to have a single realm, specify the ``--default`` flag.
+
+ If ``--default`` is specified, ``radosgw-admin`` uses this realm by default.
+
+ If ``--default`` is not specified, you must specify either the ``--rgw-realm`` flag or the ``--realm-id`` flag to identify the realm when adding zonegroups and zones.
+
+#. After the realm has been created, ``radosgw-admin`` echoes back the realm
+ configuration. For example:
+
+ ::
+
+ {
+ "id": "0956b174-fe14-4f97-8b50-bb7ec5e1cf62",
+ "name": "movies",
+ "current_period": "1950b710-3e63-4c41-a19e-46a715000980",
+ "epoch": 1
+ }
+
+ .. note:: Ceph generates a unique ID for the realm, which can be used to rename the realm if the need arises.
+
+Create a Master Zonegroup
+--------------------------
+
+A realm must have at least one zonegroup which serves as the master zonegroup
+for the realm.
+
+#. To create a new master zonegroup for the multi-site configuration, open a
+ command-line interface on a host in the master zonegroup and zone. Then
+ run the following command:
+
+ .. prompt:: bash #
+
+ radosgw-admin zonegroup create --rgw-zonegroup={name} --endpoints={url} [--rgw-realm={realm-name}|--realm-id={realm-id}] --master --default
+
+ For example:
+
+ .. prompt:: bash #
+
+ radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --rgw-realm=movies --master --default
+
+ .. note:: If the realm will have only a single zonegroup, specify the ``--default`` flag.
+
+ If ``--default`` is specified, ``radosgw-admin`` uses this zonegroup by default when adding new zones.
+
+ If ``--default`` is not specified, you must use either the ``--rgw-zonegroup`` flag or the ``--zonegroup-id`` flag to identify the zonegroup when adding or modifying zones.
+
+#. After creating the master zonegroup, ``radosgw-admin`` echoes back the
+ zonegroup configuration. For example:
+
+ ::
+
+ {
+ "id": "f1a233f5-c354-4107-b36c-df66126475a6",
+ "name": "us",
+ "api_name": "us",
+ "is_master": "true",
+ "endpoints": [
+ "http:\/\/rgw1:80"
+ ],
+ "hostnames": [],
+ "hostnames_s3website": [],
+ "master_zone": "",
+ "zones": [],
+ "placement_targets": [],
+ "default_placement": "",
+ "realm_id": "0956b174-fe14-4f97-8b50-bb7ec5e1cf62"
+ }
+
+Create a Master Zone
+--------------------
+
+.. important:: Zones must be created on a Ceph Object Gateway node that will be
+ within the zone.
+
+Create a new master zone for the multi-site configuration by opening a command
+line interface on a host that serves in the master zonegroup and zone. Then
+run the following command:
+
+.. prompt:: bash #
+
+ radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
+ --rgw-zone={zone-name} \
+ --master --default \
+ --endpoints={http://fqdn}[,{http://fqdn}]
+
+For example:
+
+.. prompt:: bash #
+
+ radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east \
+ --master --default \
+ --endpoints=http://rgw1:80
+
+
+.. note:: The ``--access-key`` and ``--secret`` aren’t specified. These
+ settings will be added to the zone once the user is created in the
+ next section.
+
+.. important:: The following steps assume a multi-site configuration that uses
+ newly installed systems that aren’t storing data yet. DO NOT DELETE the
+ ``default`` zone and its pools if you are already using the zone to store
+ data, or the data will be deleted and unrecoverable.
+
+Delete Default Zonegroup and Zone
+----------------------------------
+
+#. Delete the ``default`` zone if it exists. Remove it from the default
+ zonegroup first.
+
+ .. prompt:: bash #
+
+ radosgw-admin zonegroup delete --rgw-zonegroup=default --rgw-zone=default
+ radosgw-admin period update --commit
+ radosgw-admin zone delete --rgw-zone=default
+ radosgw-admin period update --commit
+ radosgw-admin zonegroup delete --rgw-zonegroup=default
+ radosgw-admin period update --commit
+
+#. Delete the ``default`` pools in your Ceph storage cluster if they exist.
+
+ .. important:: The following step assumes a multi-site configuration that uses newly installed systems that aren’t currently storing data. DO NOT DELETE the ``default`` zonegroup if you are already using it to store data.
+
+ .. prompt:: bash #
+
+ ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it
+ ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it
+ ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it
+ ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it
+ ceph osd pool rm default.rgw.users.uid default.rgw.users.uid --yes-i-really-really-mean-it
+
+Create a System User
+--------------------
+
+#. The ``ceph-radosgw`` daemons must authenticate before pulling realm and
+ period information. In the master zone, create a "system user" to facilitate
+ authentication between daemons.
+
+ .. prompt:: bash #
+
+ radosgw-admin user create --uid="{user-name}" --display-name="{Display Name}" --system
+
+ For example:
+
+ .. prompt:: bash #
+
+ radosgw-admin user create --uid="synchronization-user" --display-name="Synchronization User" --system
+
+#. Make a note of the ``access_key`` and ``secret_key``. The secondary zones
+ require them to authenticate against the master zone.
+
+#. Add the system user to the master zone:
+
+ .. prompt:: bash #
+
+ radosgw-admin zone modify --rgw-zone={zone-name} --access-key={access-key} --secret={secret}
+ radosgw-admin period update --commit
+
+Update the Period
+-----------------
+
+After updating the master zone configuration, update the period.
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+.. note:: Updating the period changes the epoch, and ensures that other zones
+ will receive the updated configuration.
+
+Update the Ceph Configuration File
+----------------------------------
+
+Update the Ceph configuration file on master zone hosts by adding the
+``rgw_zone`` configuration option and the name of the master zone to the
+instance entry.
+
+::
+
+ [client.rgw.{instance-name}]
+ ...
+ rgw_zone={zone-name}
+
+For example:
+
+::
+
+ [client.rgw.rgw1]
+ host = rgw1
+ rgw frontends = "civetweb port=80"
+ rgw_zone=us-east
+
+Start the Gateway
+-----------------
+
+On the object gateway host, start and enable the Ceph Object Gateway
+service:
+
+.. prompt:: bash #
+
+ systemctl start ceph-radosgw@rgw.`hostname -s`
+ systemctl enable ceph-radosgw@rgw.`hostname -s`
+
+.. _secondary-zone-label:
+
+Configuring Secondary Zones
+===========================
+
+Zones that are within a zonegroup replicate all data in order to ensure that
+every zone has the same data. When creating a secondary zone, run the following
+operations on a host identified to serve the secondary zone.
+
+.. note:: To add a second secondary zone (that is, a second non-master zone
+ within a zonegroup that already contains a secondary zone), follow :ref:`the
+ same procedures that are used for adding a secondary
+ zone<radosgw-multisite-secondary-zone-creating>`. Be sure to specify a
+ different zone name than the name of the first secondary zone.
+
+.. important:: Metadata operations (for example, user creation) must be
+ run on a host within the master zone. Bucket operations can be received
+ by the master zone or the secondary zone, but the secondary zone will
+ redirect bucket operations to the master zone. If the master zone is down,
+ bucket operations will fail.
+
+Pulling the Realm Configuration
+-------------------------------
+
+The URL path, access key, and secret of the master zone in the master
+zonegroup are used to pull the realm configuration to the host. When pulling the
+configuration of a non-default realm, specify the realm using the
+``--rgw-realm`` or ``--realm-id`` configuration options.
+
+.. prompt:: bash #
+
+ radosgw-admin realm pull --url={url-to-master-zone-gateway} \
+ --access-key={access-key} --secret={secret}
+
+.. note:: Pulling the realm configuration also retrieves the remote's current
+ period configuration, and makes it the current period on this host as well.
+
+If this realm is the only realm, run the following command to make it the
+default realm:
+
+.. prompt:: bash #
+
+ radosgw-admin realm default --rgw-realm={realm-name}
+
+.. _radosgw-multisite-secondary-zone-creating:
+
+Creating a Secondary Zone
+-------------------------
+
+.. important:: When a zone is created, it must be on a Ceph Object Gateway node
+ within the zone.
+
+In order to create a secondary zone for the multi-site configuration, open a
+command line interface on a host identified to serve the secondary zone.
+Specify the zonegroup ID, the new zone name, and an endpoint for the zone.
+**DO NOT** use the ``--master`` or ``--default`` flags. Beginning in Kraken,
+all zones run in an active-active configuration by default, which means that a
+gateway client may write data to any zone and the zone will replicate the data
+to all other zones within the zonegroup. If you want to prevent the secondary
+zone from accepting write operations, include the ``--read-only`` flag in the
+command in order to create an active-passive configuration between the master
+zone and the secondary zone. In any case, don't forget to provide the
+``access_key`` and ``secret_key`` of the generated system user that is stored
+in the master zone of the master zonegroup. Run the following command:
+
+.. prompt:: bash #
+
+ radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
+ --rgw-zone={zone-name} \
+ --access-key={system-key} --secret={secret} \
+ --endpoints=http://{fqdn}:80 \
+ [--read-only]
+
+For example:
+
+
+.. prompt:: bash #
+
+ radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-west \
+ --access-key={system-key} --secret={secret} \
+ --endpoints=http://rgw2:80
+
+.. important:: The following steps assume a multi-site configuration that uses
+ newly installed systems that have not yet begun storing data. DO NOT DELETE
+ the ``default`` zone or its pools if you are already using it to store data,
+ or the data will be irretrievably lost.
+
+Delete the default zone if needed:
+
+.. prompt:: bash #
+
+ radosgw-admin zone delete --rgw-zone=default
+
+Finally, delete the default pools in your Ceph storage cluster if needed:
+
+.. prompt:: bash #
+
+ ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it
+ ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it
+ ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it
+ ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it
+ ceph osd pool rm default.rgw.users.uid default.rgw.users.uid --yes-i-really-really-mean-it
+
+Updating the Ceph Configuration File
+------------------------------------
+
+To update the Ceph configuration file on the secondary zone hosts, add the
+``rgw_zone`` configuration option and the name of the secondary zone to the
+instance entry.
+
+::
+
+ [client.rgw.{instance-name}]
+ ...
+ rgw_zone={zone-name}
+
+For example:
+
+::
+
+ [client.rgw.rgw2]
+ host = rgw2
+ rgw frontends = "civetweb port=80"
+ rgw_zone=us-west
+
+Updating the Period
+-------------------
+
+After updating the master zone configuration, update the period:
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+.. note:: Updating the period changes the epoch, and ensures that other zones
+ will receive the updated configuration.
+
+Starting the Gateway
+--------------------
+
+To start the gateway, start and enable the Ceph Object Gateway service by
+running the following commands on the object gateway host:
+
+.. prompt:: bash #
+
+ systemctl start ceph-radosgw@rgw.`hostname -s`
+ systemctl enable ceph-radosgw@rgw.`hostname -s`
+
+Checking Synchronization Status
+-------------------------------
+
+After the secondary zone is up and running, you can check the synchronization
+status. The synchronization process copies users and buckets that were
+created in the master zone to the secondary zone.
+
+.. prompt:: bash #
+
+ radosgw-admin sync status
+
+The output reports the status of synchronization operations. For example:
+
+::
+
+ realm f3239bc5-e1a8-4206-a81d-e1576480804d (earth)
+ zonegroup c50dbb7e-d9ce-47cc-a8bb-97d9b399d388 (us)
+ zone 4c453b70-4a16-4ce8-8185-1893b05d346e (us-west)
+ metadata sync syncing
+ full sync: 0/64 shards
+ metadata is caught up with master
+ incremental sync: 64/64 shards
+ data sync source: 1ee9da3e-114d-4ae3-a8a4-056e8a17f532 (us-east)
+ syncing
+ full sync: 0/128 shards
+ incremental sync: 128/128 shards
+ data is caught up with source
+
+.. note:: Secondary zones accept bucket operations; however, secondary zones
+ redirect bucket operations to the master zone and then synchronize with the
+ master zone to receive the result of the bucket operations. If the master
+ zone is down, bucket operations executed on the secondary zone will fail,
+ but object operations should succeed.
+
+
+Verifying an Object
+-------------------
+
+By default, after the successful synchronization of an object there is no
+subsequent verification of the object. However, you can enable verification by
+setting :confval:`rgw_sync_obj_etag_verify` to ``true``. When this option is
+enabled, an MD5 checksum is used to verify the integrity of the data that
+was transferred from the source to the destination. This ensures the integrity
+of any object that has been fetched from a remote server over HTTP (including
+multisite sync). This option may decrease the performance of your RGW because
+it requires more computation.
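+
+For example, the option could be enabled for all RGW daemons through the
+centralized configuration database (a sketch; adjust the target to match
+your deployment, or set the option in the Ceph configuration file instead):
+
+.. prompt:: bash #
+
+ ceph config set client.rgw rgw_sync_obj_etag_verify true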
+
+
+Maintenance
+===========
+
+Checking the Sync Status
+------------------------
+
+Information about the replication status of a zone can be queried with:
+
+.. prompt:: bash $
+
+ radosgw-admin sync status
+
+::
+
+ realm b3bc1c37-9c44-4b89-a03b-04c269bea5da (earth)
+ zonegroup f54f9b22-b4b6-4a0e-9211-fa6ac1693f49 (us)
+ zone adce11c9-b8ed-4a90-8bc5-3fc029ff0816 (us-2)
+ metadata sync syncing
+ full sync: 0/64 shards
+ incremental sync: 64/64 shards
+ metadata is behind on 1 shards
+ oldest incremental change not applied: 2017-03-22 10:20:00.0.881361s
+ data sync source: 341c2d81-4574-4d08-ab0f-5a2a7b168028 (us-1)
+ syncing
+ full sync: 0/128 shards
+ incremental sync: 128/128 shards
+ data is caught up with source
+ source: 3b5d1a3f-3f27-4e4a-8f34-6072d4bb1275 (us-3)
+ syncing
+ full sync: 0/128 shards
+ incremental sync: 128/128 shards
+ data is caught up with source
+
+The output might be different, depending on the sync status. During sync, the
+shards are of two types:
+
+- **Behind shards** are shards that require a data sync (either a full data
+ sync or an incremental data sync) in order to be brought up to date.
+
+- **Recovery shards** are shards that encountered an error during sync and have
+ been marked for retry. Such errors mostly arise from minor issues, such as
+ a failure to acquire a lock on a bucket, and typically resolve on their
+ own.
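+
+When scripting health checks, lagging shards can be flagged by filtering the
+``sync status`` output. A minimal sketch follows; here the sample output from
+above is supplied through a heredoc for illustration, whereas in practice you
+would pipe ``radosgw-admin sync status`` into ``grep``:
+
```shell
# Flag shards that are behind or recovering in `sync status` output.
# In practice: radosgw-admin sync status | grep -E 'behind|recover'
grep -E 'behind|recover' <<'EOF'
metadata sync syncing
full sync: 0/64 shards
incremental sync: 64/64 shards
metadata is behind on 1 shards
data sync source: 341c2d81-4574-4d08-ab0f-5a2a7b168028 (us-1)
data is caught up with source
EOF
```
+
+``grep`` exits with status 0 when at least one lagging shard line is found,
+which makes the same filter usable directly in a monitoring script.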
+
+Checking the Logs
+-----------------
+
+For multi-site deployments only, you can examine the metadata log (``mdlog``),
+the bucket index log (``bilog``), and the data log (``datalog``). You can list
+them and also trim them. Trimming is not needed in most cases because
+:confval:`rgw_sync_log_trim_interval` is set to 20 minutes by default. It
+should not be necessary to trim the logs unless
+:confval:`rgw_sync_log_trim_interval` has been manually set to 0.
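+
+For example, the metadata log and data log can be listed as follows (a
+sketch; see ``radosgw-admin --help`` for the full set of log subcommands
+and their options, including the corresponding ``trim`` subcommands):
+
+.. prompt:: bash $
+
+ radosgw-admin mdlog list
+ radosgw-admin datalog list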
+
+Changing the Metadata Master Zone
+---------------------------------
+
+.. important:: Care must be taken when changing the metadata master zone by
+ promoting a zone to master. A zone that isn't finished syncing metadata from
+ the current master zone will be unable to serve any remaining entries if it
+ is promoted to master, and those metadata changes will be lost. For this
+ reason, we recommend waiting for a zone's ``radosgw-admin sync status`` to
+ complete the process of synchronizing the metadata before promoting the zone
+ to master.
+
+Similarly, if the current master zone is processing changes to metadata at the
+same time that another zone is being promoted to master, these changes are
+likely to be lost. To avoid losing these changes, we recommend shutting down
+any ``radosgw`` instances on the previous master zone. After the new master
+zone has been promoted, the previous master zone's new period can be fetched
+with ``radosgw-admin period pull`` and the gateway(s) can be restarted.
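+
+For example, on a host in the previous master zone, the new period could be
+fetched before restarting its gateways (a sketch; the credentials are those
+of the system user, as with ``realm pull``):
+
+.. prompt:: bash $
+
+ radosgw-admin period pull --url={url-to-new-master-zone-gateway} \
+ --access-key={access-key} --secret={secret}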
+
+To promote a zone to metadata master, run the following commands on that zone
+(in this example, the zone is zone ``us-2`` in zonegroup ``us``):
+
+.. prompt:: bash $
+
+ radosgw-admin zone modify --rgw-zone=us-2 --master
+ radosgw-admin zonegroup modify --rgw-zonegroup=us --master
+ radosgw-admin period update --commit
+
+This generates a new period, and the radosgw instance(s) in zone ``us-2`` send
+this period to the other zones.
+
+Failover and Disaster Recovery
+==============================
+
+Setting Up Failover to the Secondary Zone
+-----------------------------------------
+
+If the master zone fails, you can fail over to the secondary zone for
+disaster recovery by following these steps:
+
+#. Make the secondary zone the master and default zone. For example:
+
+ .. prompt:: bash #
+
+ radosgw-admin zone modify --rgw-zone={zone-name} --master --default
+
+ By default, Ceph Object Gateway runs in an active-active
+ configuration. However, if the cluster is configured to run in an
+ active-passive configuration, the secondary zone is a read-only zone.
+ To allow the secondary zone to receive write
+ operations, remove its ``--read-only`` status. For example:
+
+ .. prompt:: bash #
+
+ radosgw-admin zone modify --rgw-zone={zone-name} --master --default \
+ --read-only=false
+
+#. Update the period to make the changes take effect.
+
+ .. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+#. Finally, restart the Ceph Object Gateway.
+
+ .. prompt:: bash #
+
+ systemctl restart ceph-radosgw@rgw.`hostname -s`
+
+Reverting from Failover
+-----------------------
+
+If the former master zone recovers, you can revert the failover operation by following these steps:
+
+#. From within the recovered zone, pull the latest realm configuration
+ from the current master zone:
+
+ .. prompt:: bash #
+
+ radosgw-admin realm pull --url={url-to-master-zone-gateway} \
+ --access-key={access-key} --secret={secret}
+
+#. Make the recovered zone the master and default zone:
+
+ .. prompt:: bash #
+
+ radosgw-admin zone modify --rgw-zone={zone-name} --master --default
+
+#. Update the period so that the changes take effect:
+
+ .. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+#. Restart the Ceph Object Gateway in the recovered zone:
+
+ .. prompt:: bash #
+
+ systemctl restart ceph-radosgw@rgw.`hostname -s`
+
+#. If the secondary zone needs to be a read-only configuration, update
+ the secondary zone:
+
+ .. prompt:: bash #
+
+ radosgw-admin zone modify --rgw-zone={zone-name} --read-only
+
+#. Update the period so that the changes take effect:
+
+ .. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+#. Restart the Ceph Object Gateway in the secondary zone:
+
+ .. prompt:: bash #
+
+ systemctl restart ceph-radosgw@rgw.`hostname -s`
+
+.. _rgw-multisite-migrate-from-single-site:
+
+Migrating a Single-Site Deployment to Multi-Site
+=================================================
+
+To migrate from a single-site deployment with a ``default`` zonegroup and zone
+to a multi-site system, follow these steps:
+
+1. Create a realm. Replace ``<name>`` with the realm name:
+
+ .. prompt:: bash #
+
+ radosgw-admin realm create --rgw-realm=<name> --default
+
+2. Rename the default zonegroup and zone. Replace ``<name>`` with the zone name
+ or zonegroup name:
+
+ .. prompt:: bash #
+
+ radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name=<name>
+ radosgw-admin zone rename --rgw-zone default --zone-new-name us-east-1 --rgw-zonegroup=<name>
+
+3. Rename the default zonegroup's ``api_name``. Replace ``<name>`` with the zonegroup name:
+
+ .. prompt:: bash #
+
+ radosgw-admin zonegroup modify --api-name=<name> --rgw-zonegroup=<name>
+
+4. Configure the master zonegroup. Replace ``<name>`` with the realm name or
+ zonegroup name. Replace ``<fqdn>`` with the fully qualified domain name(s)
+ in the zonegroup:
+
+ .. prompt:: bash #
+
+ radosgw-admin zonegroup modify --rgw-realm=<name> --rgw-zonegroup=<name> --endpoints http://<fqdn>:80 --master --default
+
+5. Configure the master zone. Replace ``<name>`` with the realm name, zone
+ name, or zonegroup name. Replace ``<fqdn>`` with the fully qualified domain
+ name(s) in the zonegroup:
+
+ .. prompt:: bash #
+
+ radosgw-admin zone modify --rgw-realm=<name> --rgw-zonegroup=<name> \
+ --rgw-zone=<name> --endpoints http://<fqdn>:80 \
+ --access-key=<access-key> --secret=<secret-key> \
+ --master --default
+
+6. Create a system user. Replace ``<user-id>`` with the username. Replace
+ ``<display-name>`` with a display name. The display name is allowed to
+ contain spaces:
+
+ .. prompt:: bash #
+
+ radosgw-admin user create --uid=<user-id> \
+ --display-name="<display-name>" \
+ --access-key=<access-key> \
+ --secret=<secret-key> --system
+
+7. Commit the updated configuration:
+
+ .. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+8. Restart the Ceph Object Gateway:
+
+ .. prompt:: bash #
+
+ systemctl restart ceph-radosgw@rgw.`hostname -s`
+
+After completing this procedure, proceed to `Configure a Secondary
+Zone <#configure-secondary-zones>`_ and create a secondary zone
+in the master zonegroup.
+
+Multi-Site Configuration Reference
+==================================
+
+The following sections provide additional details and command-line
+usage for realms, periods, zonegroups and zones.
+
+For more details on every available configuration option, see
+``src/common/options/rgw.yaml.in``.
+
+Alternatively, go to the :ref:`mgr-dashboard` configuration page (found under
+`Cluster`), where you can view and set all of the options. While on the page,
+set the level to ``advanced`` and search for RGW to see all basic and advanced
+configuration options.
+
+.. _rgw-realms:
+
+Realms
+------
+
+A realm is a globally unique namespace that consists of one or more zonegroups.
+Zonegroups contain one or more zones. Zones contain buckets. Buckets contain
+objects.
+
+Realms make it possible for the Ceph Object Gateway to support multiple
+namespaces and their configurations on the same hardware.
+
+Each realm is associated with a "period". A period represents the state
+of the zonegroup and zone configuration in time. Each time you make a
+change to a zonegroup or zone, you should update and commit the period.
+
+To ensure backward compatibility with Infernalis and earlier releases, the Ceph
+Object Gateway does not by default create a realm. However, as a best practice,
+we recommend that you create realms when creating new clusters.
+
+Create a Realm
+~~~~~~~~~~~~~~
+
+To create a realm, run ``realm create`` and specify the realm name.
+If the realm is the default, specify ``--default``.
+
+.. prompt:: bash #
+
+ radosgw-admin realm create --rgw-realm={realm-name} [--default]
+
+For example:
+
+.. prompt:: bash #
+
+ radosgw-admin realm create --rgw-realm=movies --default
+
+If ``--default`` is specified, the realm is used implicitly by every
+``radosgw-admin`` call unless ``--rgw-realm=<realm-name>`` is explicitly
+provided.
+
+Make a Realm the Default
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+One realm in the list of realms should be the default realm. There may be only
+one default realm. If there is only one realm and it wasn’t specified as the
+default realm when it was created, make it the default realm. Alternatively, to
+change which realm is the default, run the following command:
+
+.. prompt:: bash #
+
+ radosgw-admin realm default --rgw-realm=movies
+
+.. note:: When a realm is the default, ``radosgw-admin`` commands assume it
+   implicitly, as if ``--rgw-realm=<realm-name>`` had been passed.
+
+Delete a Realm
+~~~~~~~~~~~~~~
+
+To delete a realm, run ``realm rm`` and specify the realm name:
+
+.. prompt:: bash #
+
+ radosgw-admin realm rm --rgw-realm={realm-name}
+
+For example:
+
+.. prompt:: bash #
+
+ radosgw-admin realm rm --rgw-realm=movies
+
+Get a Realm
+~~~~~~~~~~~
+
+To get a realm, run ``realm get`` and specify the realm name:
+
+.. prompt:: bash #
+
+ radosgw-admin realm get --rgw-realm=<name>
+
+For example:
+
+.. prompt:: bash #
+
+ radosgw-admin realm get --rgw-realm=movies [> filename.json]
+
+::
+
+ {
+ "id": "0a68d52e-a19c-4e8e-b012-a8f831cb3ebc",
+ "name": "movies",
+ "current_period": "b0c5bbef-4337-4edd-8184-5aeab2ec413b",
+ "epoch": 1
+ }
+
+Set a Realm
+~~~~~~~~~~~
+
+To set a realm, run ``realm set``, specify the realm name, and use the
+``--infile=`` option (make sure that the ``--infile`` option has an input file
+name as an argument):
+
+.. prompt:: bash #
+
+ radosgw-admin realm set --rgw-realm=<name> --infile=<infilename>
+
+For example:
+
+.. prompt:: bash #
+
+ radosgw-admin realm set --rgw-realm=movies --infile=filename.json
+
+List Realms
+~~~~~~~~~~~
+
+To list realms, run ``realm list``:
+
+.. prompt:: bash #
+
+ radosgw-admin realm list
+
+List Realm Periods
+~~~~~~~~~~~~~~~~~~
+
+To list realm periods, run ``realm list-periods``:
+
+.. prompt:: bash #
+
+ radosgw-admin realm list-periods
+
+Pull a Realm
+~~~~~~~~~~~~
+
+To pull a realm from the node that contains both the master zonegroup and
+master zone to a node that contains a secondary zonegroup or zone, run ``realm
+pull`` on the node that will receive the realm configuration:
+
+.. prompt:: bash #
+
+ radosgw-admin realm pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}
+
+Rename a Realm
+~~~~~~~~~~~~~~
+
+A realm is not part of the period. Consequently, any renaming of the realm is
+applied only locally, and will therefore not get pulled when you run ``realm
+pull``. If you are renaming a realm that contains multiple zones, run the
+``rename`` command on each zone.
+
+To rename a realm, run the following:
+
+.. prompt:: bash #
+
+ radosgw-admin realm rename --rgw-realm=<current-name> --realm-new-name=<new-realm-name>
+
+.. note:: DO NOT use ``realm set`` to change the ``name`` parameter. Doing so
+ changes the internal name only. If you use ``realm set`` to change the
+ ``name`` parameter, then ``--rgw-realm`` still expects the realm's old name.
+
+Zonegroups
+-----------
+
+Zonegroups make it possible for the Ceph Object Gateway to support multi-site
+deployments and a global namespace. Zonegroups were formerly called "regions"
+(in releases prior to and including Infernalis).
+
+A zonegroup defines the geographic location of one or more Ceph Object Gateway
+instances within one or more zones.
+
+The configuration of zonegroups differs from typical configuration procedures,
+because not all of the zonegroup configuration settings are stored to a
+configuration file.
+
+You can list zonegroups, get a zonegroup configuration, and set a zonegroup
+configuration.
+
+Creating a Zonegroup
+~~~~~~~~~~~~~~~~~~~~
+
+Creating a zonegroup consists of specifying the zonegroup name. Newly created
+zonegroups reside in the default realm unless a different realm is specified
+by using the option ``--rgw-realm=<realm-name>``.
+
+If the zonegroup is the default zonegroup, specify the ``--default`` flag. If
+the zonegroup is the master zonegroup, specify the ``--master`` flag. For
+example:
+
+.. prompt:: bash #
+
+ radosgw-admin zonegroup create --rgw-zonegroup=<name> [--rgw-realm=<name>][--master] [--default]
+
+
+.. note:: Use ``zonegroup modify --rgw-zonegroup=<zonegroup-name>`` to modify
+ an existing zonegroup’s settings.
+
+Making a Zonegroup the Default
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+One zonegroup in the list of zonegroups must be the default zonegroup. There
+can be only one default zonegroup. If there is only one zonegroup and it was
+not designated the default zonegroup when it was created, use the following
+command to make it the default. Commands of this form can also be used to
+change which zonegroup is the default.
+
+#. Designate a zonegroup as the default zonegroup:
+
+ .. prompt:: bash #
+
+ radosgw-admin zonegroup default --rgw-zonegroup=comedy
+
+   .. note:: When a zonegroup is the default, ``radosgw-admin`` commands
+      assume it implicitly, as if ``--rgw-zonegroup=<zonegroup-name>`` had
+      been passed (here, ``comedy``).
+
+#. Update the period:
+
+ .. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+Adding a Zone to a Zonegroup
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This procedure explains how to add a zone to a zonegroup.
+
+#. Run the following command to add a zone to a zonegroup:
+
+ .. prompt:: bash #
+
+ radosgw-admin zonegroup add --rgw-zonegroup=<name> --rgw-zone=<name>
+
+#. Update the period:
+
+ .. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+Removing a Zone from a Zonegroup
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+#. Run this command to remove a zone from a zonegroup:
+
+ .. prompt:: bash #
+
+ radosgw-admin zonegroup remove --rgw-zonegroup=<name> --rgw-zone=<name>
+
+#. Update the period:
+
+ .. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+Renaming a Zonegroup
+~~~~~~~~~~~~~~~~~~~~
+
+#. Run this command to rename the zonegroup:
+
+ .. prompt:: bash #
+
+ radosgw-admin zonegroup rename --rgw-zonegroup=<name> --zonegroup-new-name=<name>
+
+#. Update the period:
+
+ .. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+Deleting a Zonegroup
+~~~~~~~~~~~~~~~~~~~~
+
+#. To delete a zonegroup, run the following command:
+
+ .. prompt:: bash #
+
+ radosgw-admin zonegroup delete --rgw-zonegroup=<name>
+
+#. Update the period:
+
+ .. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+Listing Zonegroups
+~~~~~~~~~~~~~~~~~~
+
+A Ceph cluster contains a list of zonegroups. To list the zonegroups, run
+this command:
+
+.. prompt:: bash #
+
+ radosgw-admin zonegroup list
+
+The ``radosgw-admin`` command returns a JSON-formatted list of zonegroups.
+
+::
+
+ {
+ "default_info": "90b28698-e7c3-462c-a42d-4aa780d24eda",
+ "zonegroups": [
+ "us"
+ ]
+ }
+
+Getting a Zonegroup Map
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+To list the details of each zonegroup, run this command:
+
+.. prompt:: bash #
+
+ radosgw-admin zonegroup-map get
+
+.. note:: If you receive a ``failed to read zonegroup map`` error, run
+ ``radosgw-admin zonegroup-map update`` as ``root`` first.
+
+Getting a Zonegroup
+~~~~~~~~~~~~~~~~~~~~
+
+To view the configuration of a zonegroup, run this command:
+
+.. prompt:: bash #
+
+ radosgw-admin zonegroup get [--rgw-zonegroup=<zonegroup>]
+
+The zonegroup configuration looks like this:
+
+::
+
+ {
+ "id": "90b28698-e7c3-462c-a42d-4aa780d24eda",
+ "name": "us",
+ "api_name": "us",
+ "is_master": "true",
+ "endpoints": [
+ "http:\/\/rgw1:80"
+ ],
+ "hostnames": [],
+ "hostnames_s3website": [],
+ "master_zone": "9248cab2-afe7-43d8-a661-a40bf316665e",
+ "zones": [
+ {
+ "id": "9248cab2-afe7-43d8-a661-a40bf316665e",
+ "name": "us-east",
+ "endpoints": [
+ "http:\/\/rgw1"
+ ],
+ "log_meta": "true",
+ "log_data": "true",
+ "bucket_index_max_shards": 0,
+ "read_only": "false"
+ },
+ {
+ "id": "d1024e59-7d28-49d1-8222-af101965a939",
+ "name": "us-west",
+ "endpoints": [
+ "http:\/\/rgw2:80"
+ ],
+ "log_meta": "false",
+ "log_data": "true",
+ "bucket_index_max_shards": 0,
+ "read_only": "false"
+ }
+ ],
+ "placement_targets": [
+ {
+ "name": "default-placement",
+ "tags": []
+ }
+ ],
+ "default_placement": "default-placement",
+ "realm_id": "ae031368-8715-4e27-9a99-0c9468852cfe"
+ }
+
+Setting a Zonegroup
+~~~~~~~~~~~~~~~~~~~~
+
+The process of defining a zonegroup consists of creating a JSON object and
+specifying its settings. The following settings are available (each is
+required unless marked optional):
+
+1. ``name``: The name of the zonegroup. Required.
+
+2. ``api_name``: The API name for the zonegroup. Optional.
+
+3. ``is_master``: Determines whether the zonegroup is the master zonegroup.
+ Required. **note:** You can only have one master zonegroup.
+
+4. ``endpoints``: A list of all the endpoints in the zonegroup. For example,
+ you may use multiple domain names to refer to the same zonegroup. Remember
+ to escape the forward slashes (``\/``). You may also specify a port
+ (``fqdn:port``) for each endpoint. Optional.
+
+5. ``hostnames``: A list of all the hostnames in the zonegroup. For example,
+ you may use multiple domain names to refer to the same zonegroup. Optional.
+ The ``rgw dns name`` setting will be included in this list automatically.
+ Restart the gateway daemon(s) after changing this setting.
+
+6. ``master_zone``: The master zone for the zonegroup. Optional. Uses
+ the default zone if not specified. **note:** You can only have one
+ master zone per zonegroup.
+
+7. ``zones``: A list of all zones within the zonegroup. Each zone has a name
+ (required), a list of endpoints (optional), and a setting that determines
+ whether the gateway will log metadata and data operations (false by
+ default).
+
+8. ``placement_targets``: A list of placement targets (optional). Each
+ placement target contains a name (required) for the placement target
+ and a list of tags (optional) so that only users with the tag can use
+ the placement target (that is, the user’s ``placement_tags`` field in
+ the user info).
+
+9. ``default_placement``: The default placement target for the object index and
+ object data. Set to ``default-placement`` by default. It is also possible
+ to set a per-user default placement in the user info for each user.
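+
+As an illustration, a minimal ``zonegroup.json`` containing only the fields
+discussed above might look like the following sketch (the zonegroup name,
+endpoint, and ``master_zone`` ID are placeholders, not values from a real
+cluster):
+
+::
+
+    {
+      "name": "us",
+      "api_name": "us",
+      "is_master": "true",
+      "endpoints": ["http:\/\/rgw1:80"],
+      "master_zone": "9248cab2-afe7-43d8-a661-a40bf316665e",
+      "zones": [
+        {
+          "name": "us-east",
+          "endpoints": ["http:\/\/rgw1:80"],
+          "log_meta": "true",
+          "log_data": "true"
+        }
+      ],
+      "placement_targets": [
+        { "name": "default-placement", "tags": [] }
+      ],
+      "default_placement": "default-placement"
+    }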
+
+Setting a Zonegroup - Procedure
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+#. To set a zonegroup, create a JSON object that contains the required fields,
+ save the object to a file (for example, ``zonegroup.json``), and run the
+ following command:
+
+ .. prompt:: bash #
+
+ radosgw-admin zonegroup set --infile zonegroup.json
+
+ Where ``zonegroup.json`` is the JSON file you created.
+
+ .. important:: The ``default`` zonegroup ``is_master`` setting is ``true`` by default. If you create an additional zonegroup and want to make it the master zonegroup, you must either set the ``default`` zonegroup ``is_master`` setting to ``false`` or delete the ``default`` zonegroup.
+
+#. Update the period:
+
+ .. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+Setting a Zonegroup Map
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+The process of setting a zonegroup map comprises (1) creating a JSON object
+that consists of one or more zonegroups, and (2) setting the
+``master_zonegroup`` for the cluster. Each zonegroup in the zonegroup map
+consists of a key/value pair where the ``key`` setting is equivalent to the
+``name`` setting for an individual zonegroup configuration and the ``val`` is
+a JSON object consisting of an individual zonegroup configuration.
+
+You may only have one zonegroup with ``is_master`` equal to ``true``, and it
+must be specified as the ``master_zonegroup`` at the end of the zonegroup map.
+The following JSON object is an example of a default zonegroup map:
+
+::
+
+ {
+ "zonegroups": [
+ {
+ "key": "90b28698-e7c3-462c-a42d-4aa780d24eda",
+ "val": {
+ "id": "90b28698-e7c3-462c-a42d-4aa780d24eda",
+ "name": "us",
+ "api_name": "us",
+ "is_master": "true",
+ "endpoints": [
+ "http:\/\/rgw1:80"
+ ],
+ "hostnames": [],
+ "hostnames_s3website": [],
+ "master_zone": "9248cab2-afe7-43d8-a661-a40bf316665e",
+ "zones": [
+ {
+ "id": "9248cab2-afe7-43d8-a661-a40bf316665e",
+ "name": "us-east",
+ "endpoints": [
+ "http:\/\/rgw1"
+ ],
+ "log_meta": "true",
+ "log_data": "true",
+ "bucket_index_max_shards": 0,
+ "read_only": "false"
+ },
+ {
+ "id": "d1024e59-7d28-49d1-8222-af101965a939",
+ "name": "us-west",
+ "endpoints": [
+ "http:\/\/rgw2:80"
+ ],
+ "log_meta": "false",
+ "log_data": "true",
+ "bucket_index_max_shards": 0,
+ "read_only": "false"
+ }
+ ],
+ "placement_targets": [
+ {
+ "name": "default-placement",
+ "tags": []
+ }
+ ],
+ "default_placement": "default-placement",
+ "realm_id": "ae031368-8715-4e27-9a99-0c9468852cfe"
+ }
+ }
+ ],
+ "master_zonegroup": "90b28698-e7c3-462c-a42d-4aa780d24eda",
+ "bucket_quota": {
+ "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1
+ },
+ "user_quota": {
+ "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1
+ }
+ }
+
+#. To set a zonegroup map, run the following command:
+
+ .. prompt:: bash #
+
+ radosgw-admin zonegroup-map set --infile zonegroupmap.json
+
+   In this command, ``zonegroupmap.json`` is the JSON file you created. Ensure
+   that the zones specified in the zonegroup map have already been created.
+
+#. Update the period:
+
+ .. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+.. _radosgw-zones:
+
+Zones
+-----
+
+A zone defines a logical group that consists of one or more Ceph Object
+Gateway instances. All RGW instances in a given zone serve S3 objects that
+are backed by RADOS objects stored in the same set of pools in the same
+cluster.
+
+The procedure for configuring zones differs from typical configuration
+procedures, because not all of the settings end up in a Ceph configuration
+file.
+
+Zones can be listed. You can "get" a zone configuration and "set" a zone
+configuration.
+
+Creating a Zone
+~~~~~~~~~~~~~~~
+
+To create a zone, specify a zone name. If you are creating a master zone,
+specify the ``--master`` flag. Only one zone in a zonegroup may be a master
+zone. To add the zone to a zonegroup, specify the ``--rgw-zonegroup`` option
+with the zonegroup name.
+
+.. prompt:: bash #
+
+ radosgw-admin zone create --rgw-zone=<name> \
+     [--rgw-zonegroup=<zonegroup-name>] \
+     [--endpoints=<endpoint>[,<endpoint>...]] \
+     [--master] [--default] \
+     --access-key $SYSTEM_ACCESS_KEY --secret $SYSTEM_SECRET_KEY
+
+After you have created the zone, update the period:
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+Deleting a Zone
+~~~~~~~~~~~~~~~
+
+To delete a zone, first remove it from the zonegroup:
+
+.. prompt:: bash #
+
+ radosgw-admin zonegroup remove --rgw-zonegroup=<name> --rgw-zone=<name>
+
+Then, update the period:
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+Next, delete the zone:
+
+.. prompt:: bash #
+
+ radosgw-admin zone delete --rgw-zone=<name>
+
+Finally, update the period:
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+.. important:: Do not delete a zone without removing it from a zonegroup first.
+ Otherwise, updating the period will fail.
+
+If the pools for the deleted zone will not be used anywhere else,
+consider deleting the pools. Replace ``<del-zone>`` in the example below
+with the deleted zone’s name.
+
+.. important:: Only delete the pools with prepended zone names. Deleting the
+ root pool (for example, ``.rgw.root``) will remove all of the system’s
+ configuration.
+
+.. important:: When the pools are deleted, all of the data within them is
+   deleted in an unrecoverable manner. Delete the pools only if their
+   contents are no longer needed.
+
+.. prompt:: bash #
+
+ ceph osd pool rm <del-zone>.rgw.control <del-zone>.rgw.control --yes-i-really-really-mean-it
+ ceph osd pool rm <del-zone>.rgw.meta <del-zone>.rgw.meta --yes-i-really-really-mean-it
+ ceph osd pool rm <del-zone>.rgw.log <del-zone>.rgw.log --yes-i-really-really-mean-it
+ ceph osd pool rm <del-zone>.rgw.otp <del-zone>.rgw.otp --yes-i-really-really-mean-it
+ ceph osd pool rm <del-zone>.rgw.buckets.index <del-zone>.rgw.buckets.index --yes-i-really-really-mean-it
+ ceph osd pool rm <del-zone>.rgw.buckets.non-ec <del-zone>.rgw.buckets.non-ec --yes-i-really-really-mean-it
+ ceph osd pool rm <del-zone>.rgw.buckets.data <del-zone>.rgw.buckets.data --yes-i-really-really-mean-it
+
+Modifying a Zone
+~~~~~~~~~~~~~~~~
+
+To modify a zone, specify the zone name and the parameters you wish to
+modify.
+
+.. prompt:: bash #
+
+ radosgw-admin zone modify [options]
+
+Where ``[options]`` may include:
+
+- ``--access-key=<key>``
+- ``--secret/--secret-key=<key>``
+- ``--master``
+- ``--default``
+- ``--endpoints=<list>``
+
+Then, update the period:
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+Listing Zones
+~~~~~~~~~~~~~
+
+As ``root``, to list the zones in a cluster, run the following command:
+
+.. prompt:: bash #
+
+ radosgw-admin zone list
+
+Getting a Zone
+~~~~~~~~~~~~~~
+
+As ``root``, to get the configuration of a zone, run the following command:
+
+.. prompt:: bash #
+
+ radosgw-admin zone get [--rgw-zone=<zone>]
+
+The ``default`` zone looks like this:
+
+::
+
+ { "domain_root": ".rgw",
+ "control_pool": ".rgw.control",
+ "gc_pool": ".rgw.gc",
+ "log_pool": ".log",
+ "intent_log_pool": ".intent-log",
+ "usage_log_pool": ".usage",
+ "user_keys_pool": ".users",
+ "user_email_pool": ".users.email",
+ "user_swift_pool": ".users.swift",
+ "user_uid_pool": ".users.uid",
+ "system_key": { "access_key": "", "secret_key": ""},
+ "placement_pools": [
+ { "key": "default-placement",
+ "val": { "index_pool": ".rgw.buckets.index",
+ "data_pool": ".rgw.buckets"}
+ }
+ ]
+ }
+
+Setting a Zone
+~~~~~~~~~~~~~~
+
+Configuring a zone involves specifying a series of Ceph Object Gateway
+pools. For consistency, we recommend using a pool prefix that is the
+same as the zone name. See
+`Pools <http://docs.ceph.com/en/latest/rados/operations/pools/#pools>`__
+for details of configuring pools.
+
+To set a zone, create a JSON object consisting of the pools, save the
+object to a file (e.g., ``zone.json``); then, run the following
+command, replacing ``{zone-name}`` with the name of the zone:
+
+.. prompt:: bash #
+
+ radosgw-admin zone set --rgw-zone={zone-name} --infile zone.json
+
+Where ``zone.json`` is the JSON file you created.
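+
+As a hedged sketch of such a file, a ``zone.json`` for a zone named
+``us-east`` might specify pools prefixed with the zone name (the exact set
+of pools and keys shown here is illustrative; compare it against the output
+of ``radosgw-admin zone get`` on your cluster):
+
+::
+
+    {
+      "domain_root": "us-east.rgw.meta",
+      "control_pool": "us-east.rgw.control",
+      "gc_pool": "us-east.rgw.log:gc",
+      "log_pool": "us-east.rgw.log",
+      "system_key": { "access_key": "", "secret_key": "" },
+      "placement_pools": [
+        { "key": "default-placement",
+          "val": { "index_pool": "us-east.rgw.buckets.index",
+                   "data_pool": "us-east.rgw.buckets.data" }
+        }
+      ]
+    }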
+
+Then, as ``root``, update the period:
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+Renaming a Zone
+~~~~~~~~~~~~~~~
+
+To rename a zone, specify the zone name and the new zone name.
+
+.. prompt:: bash #
+
+ radosgw-admin zone rename --rgw-zone=<name> --zone-new-name=<name>
+
+Then, update the period:
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+Zonegroup and Zone Settings
+----------------------------
+
+When configuring a default zonegroup and zone, pool names include the zone
+name. For example:
+
+- ``default.rgw.control``
+
+To change the defaults, include the following settings in your Ceph
+configuration file under each ``[client.radosgw.{instance-name}]``
+instance.
+
++-------------------------------------+-----------------------------------+---------+-----------------------+
+| Name | Description | Type | Default |
++=====================================+===================================+=========+=======================+
+| ``rgw_zone`` | The name of the zone for the | String | None |
+| | gateway instance. | | |
++-------------------------------------+-----------------------------------+---------+-----------------------+
+| ``rgw_zonegroup`` | The name of the zonegroup for | String | None |
+| | the gateway instance. | | |
++-------------------------------------+-----------------------------------+---------+-----------------------+
+| ``rgw_zonegroup_root_pool`` | The root pool for the zonegroup. | String | ``.rgw.root`` |
++-------------------------------------+-----------------------------------+---------+-----------------------+
+| ``rgw_zone_root_pool`` | The root pool for the zone. | String | ``.rgw.root`` |
++-------------------------------------+-----------------------------------+---------+-----------------------+
+| ``rgw_default_zone_group_info_oid`` | The OID for storing the default | String | ``default.zonegroup`` |
+| | zonegroup. We do not recommend | | |
+| | changing this setting. | | |
++-------------------------------------+-----------------------------------+---------+-----------------------+
+
+
+Zone Features
+=============
+
+Some multisite features require support from all zones before they can be
+enabled. Each zone lists its ``supported_features``, and each zonegroup lists
+its ``enabled_features``. Before a feature can be enabled in the zonegroup, it
+must be supported by all of its zones.
+
+On creation of new zones and zonegroups, all known features are supported and
+some features (see table below) are enabled by default. After upgrading an
+existing multisite configuration, however, new features must be enabled
+manually.
+
+Supported Features
+------------------
+
++-----------------------------------+---------+----------+
+| Feature | Release | Default |
++===================================+=========+==========+
+| :ref:`feature_resharding` | Reef | Enabled |
++-----------------------------------+---------+----------+
+| :ref:`feature_compress_encrypted` | Reef | Disabled |
++-----------------------------------+---------+----------+
+
+.. _feature_resharding:
+
+resharding
+~~~~~~~~~~
+
+This feature allows buckets to be resharded in a multisite configuration
+without interrupting the replication of their objects. When
+``rgw_dynamic_resharding`` is enabled, it runs on each zone independently, and
+zones may choose different shard counts for the same bucket. When buckets are
+resharded manually with ``radosgw-admin bucket reshard``, only that zone's
+bucket is modified. A zone feature should only be marked as supported after all
+of its RGWs and OSDs have upgraded.
+
+.. note:: Dynamic resharding is not supported in multisite deployments prior to
+ the Reef release.
+
+
+.. _feature_compress_encrypted:
+
+compress-encrypted
+~~~~~~~~~~~~~~~~~~
+
+This feature enables support for combining `Server-Side Encryption`_ and
+`Compression`_ on the same object. Object data is compressed before
+encryption. Prior to Reef, multisite did not replicate such objects
+correctly, so all zones must upgrade to Reef or later before enabling this
+feature.
+
+.. warning:: The compression ratio may leak information about the encrypted data,
+ and allow attackers to distinguish whether two same-sized objects might contain
+ the same data. Due to these security considerations, this feature is disabled
+ by default.
+
+Commands
+--------
+
+Add support for a zone feature
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+On the cluster that contains the given zone:
+
+.. prompt:: bash $
+
+ radosgw-admin zone modify --rgw-zone={zone-name} --enable-feature={feature-name}
+ radosgw-admin period update --commit
+
+
+Remove support for a zone feature
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+On the cluster that contains the given zone:
+
+.. prompt:: bash $
+
+ radosgw-admin zone modify --rgw-zone={zone-name} --disable-feature={feature-name}
+ radosgw-admin period update --commit
+
+Enable a zonegroup feature
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+On any cluster in the realm:
+
+.. prompt:: bash $
+
+ radosgw-admin zonegroup modify --rgw-zonegroup={zonegroup-name} --enable-feature={feature-name}
+ radosgw-admin period update --commit
+
+Disable a zonegroup feature
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+On any cluster in the realm:
+
+.. prompt:: bash $
+
+ radosgw-admin zonegroup modify --rgw-zonegroup={zonegroup-name} --disable-feature={feature-name}
+ radosgw-admin period update --commit
+
+
+.. _`Pools`: ../pools
+.. _`Sync Policy Config`: ../multisite-sync-policy
+.. _`Server-Side Encryption`: ../encryption
+.. _`Compression`: ../compression
diff --git a/doc/radosgw/multitenancy.rst b/doc/radosgw/multitenancy.rst
new file mode 100644
index 000000000..09f5071c1
--- /dev/null
+++ b/doc/radosgw/multitenancy.rst
@@ -0,0 +1,169 @@
+.. _rgw-multitenancy:
+
+=================
+RGW Multi-tenancy
+=================
+
+.. versionadded:: Jewel
+
+The multi-tenancy feature makes it possible to use buckets and users of the
+same name simultaneously by segregating them under so-called ``tenants``.
+This may be useful, for instance, to permit users of the Swift API to
+create buckets with easily conflicting names such as "test" or "trove".
+
+From the Jewel release onward, each user and bucket lies under a tenant.
+For compatibility, a "legacy" tenant with an empty name is provided.
+Whenever a bucket is referred to without an explicit tenant, an implicit
+tenant is used, taken from the user performing the operation. Since
+the pre-existing users are under the legacy tenant, they continue
+to create and access buckets as before. The layout of objects in RADOS
+is extended in a compatible way, ensuring a smooth upgrade to Jewel.
+
+Administering Users With Explicit Tenants
+=========================================
+
+Tenants as such have no operations of their own. They appear and
+disappear as needed when users are administered. To create, modify,
+and remove users with explicit tenants, either supply the additional
+``--tenant`` option or use the ``<tenant>$<user>`` syntax in the
+parameters of the ``radosgw-admin`` command.
+
+Examples
+--------
+
+Create a user ``testx$tester`` to be accessed with S3::
+
+ # radosgw-admin --tenant testx --uid tester --display-name "Test User" --access_key TESTER --secret test123 user create
+
+Create a user ``testx$tester`` to be accessed with Swift::
+
+ # radosgw-admin --tenant testx --uid tester --display-name "Test User" --subuser tester:test --key-type swift --access full user create
+ # radosgw-admin --subuser 'testx$tester:test' --key-type swift --secret test123 key create
+
+.. note:: The subuser with explicit tenant has to be quoted in the shell.
+
+ Tenant names may contain only alphanumeric characters and underscores.
+
+Accessing Buckets with Explicit Tenants
+=======================================
+
+When a client application accesses buckets, it always operates with
+credentials of a particular user. As mentioned above, every user belongs
+to a tenant. Therefore, every operation has an implicit tenant in its
+context, to be used if no tenant is specified explicitly. Thus full
+compatibility is maintained with previous releases, as long as the
+referenced buckets and the referring user belong to the same tenant.
+In other words, special handling is needed *only* when accessing
+another tenant's buckets.
+
+Extensions employed to specify an explicit tenant differ according
+to the protocol and authentication system used.
+
+S3
+--
+
+In the case of S3, a colon character is used to separate tenant and
+bucket. A sample URL would be::
+
+ https://ep.host.dom/tenant:bucket
+
+Here's a simple Python sample:
+
+.. code-block:: python
+ :linenos:
+
+ from boto.s3.connection import S3Connection, OrdinaryCallingFormat
+ c = S3Connection(
+ aws_access_key_id="TESTER",
+ aws_secret_access_key="test123",
+ host="ep.host.dom",
+ calling_format = OrdinaryCallingFormat())
+ bucket = c.get_bucket("test5b:testbucket")
+
+Note that it's not possible to supply an explicit tenant using
+a hostname. Hostnames cannot contain colons, or any other separators
+that are not already valid in bucket names. Using a period creates an
+ambiguous syntax. Therefore, the bucket-in-URL-path format has to be
+used.
+
+Due to the fact that the native S3 API does not deal with
+multi-tenancy and radosgw's implementation does, things get a bit
+involved when dealing with signed URLs and public read ACLs.
+
+* A **signed URL** does contain the ``AWSAccessKeyId`` query
+ parameters, from which radosgw is able to discern the correct user
+ and tenant owning the bucket. In other words, an application
+ generating signed URLs should be able to take just the un-prefixed
+ bucket name, and produce a signed URL that itself contains the
+ bucket name without the tenant prefix. However, it is *possible* to
+ include the prefix if you so choose.
+
+ Thus, accessing a signed URL of an object ``bar`` in a container
+ ``foo`` belonging to the tenant ``7188e165c0ae4424ac68ae2e89a05c50``
+ would be possible either via
+ ``http://<host>:<port>/foo/bar?AWSAccessKeyId=b200fb6634c547199e436a0f93c0c46e&Expires=1542890806&Signature=eok6CYQC%2FDwmQQmqvY5jTg6ehXU%3D``,
+ or via
+ ``http://<host>:<port>/7188e165c0ae4424ac68ae2e89a05c50:foo/bar?AWSAccessKeyId=b200fb6634c547199e436a0f93c0c46e&Expires=1542890806&Signature=eok6CYQC%2FDwmQQmqvY5jTg6ehXU%3D``,
+ depending on whether or not the tenant prefix was passed in on
+ signature generation.
+
+* A bucket with a **public read ACL** is meant to be read by an HTTP
+ client *without* including any query parameters that would allow
+ radosgw to discern tenants. Thus, publicly readable objects must
+ always be accessed using the bucket name with the tenant prefix.
+
+ Thus, if you set a public read ACL on an object ``bar`` in a
+ container ``foo`` belonging to the tenant
+ ``7188e165c0ae4424ac68ae2e89a05c50``, you would need to access that
+ object via the public URL
+ ``http://<host>:<port>/7188e165c0ae4424ac68ae2e89a05c50:foo/bar``.
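+
+For illustration, the query-string signature used in the URLs above can
+be sketched with only the Python standard library. This is a minimal
+sketch of AWS Signature Version 2 query-string signing, assuming the
+hypothetical secret key ``test123`` from the earlier examples; it is not
+intended to reproduce the example signatures verbatim:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_v2(secret_key, method, expires, resource):
    # AWS v2 string-to-sign for a query-string-authenticated GET:
    # method, empty content-md5, empty content-type, expiry, resource.
    string_to_sign = "%s\n\n\n%d\n%s" % (method, expires, resource)
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    # URL-encode the base64 signature for use as a query parameter.
    return quote(base64.b64encode(digest).decode(), safe="")

# The same object can be signed with or without the tenant prefix;
# whichever resource path was signed is the one that must be requested.
sig_plain = sign_v2("test123", "GET", 1542890806, "/foo/bar")
sig_tenant = sign_v2("test123", "GET", 1542890806,
                     "/7188e165c0ae4424ac68ae2e89a05c50:foo/bar")
```

+Because the resource path is part of the string-to-sign, the two forms
+yield different signatures, matching the behavior described above.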
+
+Swift with built-in authenticator
+---------------------------------
+
+TBD -- not in test_multen.py yet
+
+Swift with Keystone
+-------------------
+
+In the default configuration, although native Swift has inherent
+multi-tenancy, radosgw does not enable multi-tenancy for the Swift
+API. This ensures that legacy buckets --- that is, buckets that were
+created before radosgw supported multitenancy --- retain their
+dual-API capability to be queried and modified using either S3 or
+Swift.
+
+If you want to enable multitenancy for Swift, particularly if your
+users only ever authenticate against OpenStack Keystone, you should
+enable Keystone-based multitenancy with the following ``ceph.conf``
+configuration option::
+
+ rgw keystone implicit tenants = true
+
+Once you enable this option, any newly connecting user (whether they
+are using the Swift API, or Keystone-authenticated S3) will prompt
+radosgw to create a user named ``<tenant_id>$<tenant_id>``, where
+``<tenant_id>`` is a Keystone tenant (project) UUID --- for example,
+``7188e165c0ae4424ac68ae2e89a05c50$7188e165c0ae4424ac68ae2e89a05c50``.
+
+Whenever such a user then creates a Swift container, radosgw internally
+translates the given container name into
+``<tenant_id>/<container_name>``, such as
+``7188e165c0ae4424ac68ae2e89a05c50/foo``. This ensures that if there
+are two or more different tenants all creating a container named
+``foo``, radosgw is able to transparently discern them by their tenant
+prefix.
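+
+The translation described above amounts to a simple prefixing rule,
+sketched here for illustration only (the real mapping is internal to
+radosgw, and the second tenant UUID is hypothetical):

```python
def rgw_container_key(tenant_id, container):
    # radosgw internally stores the container under the name
    # "<tenant_id>/<container_name>".
    return "%s/%s" % (tenant_id, container)

# Two tenants each creating a container "foo" yield distinct
# internal names, so radosgw can transparently discern them.
a = rgw_container_key("7188e165c0ae4424ac68ae2e89a05c50", "foo")
b = rgw_container_key("d5636a69d53f453ca6cacbb1cdde3436", "foo")
```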
+
+It is also possible to limit the effects of implicit tenants so that
+they apply only to Swift or only to S3, by setting ``rgw keystone
+implicit tenants`` to either ``swift`` or ``s3``. This will likely be
+of use primarily to users who had previously used implicit tenants
+with older versions of Ceph, where implicit tenants applied only to
+the Swift protocol.
+
+Notes and known issues
+----------------------
+
+To be clear, it is not currently possible to create buckets in other
+tenants. The owner of a newly created bucket is taken from the
+authentication information.
diff --git a/doc/radosgw/nfs.rst b/doc/radosgw/nfs.rst
new file mode 100644
index 000000000..373765e10
--- /dev/null
+++ b/doc/radosgw/nfs.rst
@@ -0,0 +1,375 @@
+===
+NFS
+===
+
+.. versionadded:: Jewel
+
+.. note:: Only the NFSv4 protocol is supported when using a cephadm or rook based deployment.
+
+Ceph Object Gateway namespaces can be exported over the file-based
+NFSv4 protocols, alongside traditional HTTP access
+protocols (S3 and Swift).
+
+In particular, the Ceph Object Gateway can now be configured to
+provide file-based access when embedded in the NFS-Ganesha NFS server.
+
+The simplest and preferred way of managing nfs-ganesha clusters and rgw exports
+is using ``ceph nfs ...`` commands. See :doc:`/mgr/nfs` for more details.
+
+librgw
+======
+
+The librgw.so shared library (Unix) provides a loadable interface to
+Ceph Object Gateway services, and instantiates a full Ceph Object Gateway
+instance on initialization.
+
+In turn, librgw.so exports rgw_file, a stateful API for file-oriented
+access to RGW buckets and objects. The API is general, but its design
+is strongly influenced by the File System Abstraction Layer (FSAL) API
+of NFS-Ganesha, for which it has been primarily designed.
+
+A set of Python bindings is also provided.
+
+Namespace Conventions
+=====================
+
+The implementation conforms to Amazon Web Services (AWS) hierarchical
+namespace conventions, which map UNIX-style path names onto S3 buckets
+and objects.
+
+The top level of the attached namespace consists of S3 buckets,
+represented as NFS directories. Files and directories subordinate to
+buckets are each represented as objects, following S3 prefix and
+delimiter conventions, with '/' being the only supported path
+delimiter [#]_.
+
+For example, if an NFS client has mounted an RGW namespace at "/nfs",
+then a file "/nfs/mybucket/www/index.html" in the NFS namespace
+corresponds to an RGW object "www/index.html" in a bucket/container
+"mybucket."
+
+Although it is generally invisible to clients, the NFS namespace is
+assembled through concatenation of the corresponding paths implied by
+the objects in the namespace. Leaf objects, whether files or
+directories, will always be materialized in an RGW object of the
+corresponding key name, "<name>" if a file, "<name>/" if a directory.
+Non-leaf directories (e.g., "www" above) might only be implied by
+their appearance in the names of one or more leaf objects. Directories
+created within NFS or directly operated on by an NFS client (e.g., via
+an attribute-setting operation such as chown or chmod) always have a
+leaf object representation used to store materialized attributes such
+as Unix ownership and permissions.
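+
+The path-to-object mapping described above can be sketched as follows.
+This is a simplified illustration of the convention (``rgw_file``
+implements the real mapping); the mount point ``/nfs`` follows the
+earlier example:

```python
def rgw_object_for(nfs_path, mount_point="/nfs"):
    # Split "<mount>/<bucket>/<key...>" into (bucket, object key).
    # A directory leaf would instead be stored under the key "<name>/".
    rel = nfs_path[len(mount_point):].lstrip("/")
    bucket, _, key = rel.partition("/")
    return bucket, key

# "/nfs/mybucket/www/index.html" maps to object "www/index.html"
# in bucket "mybucket", per the example above.
bucket, key = rgw_object_for("/nfs/mybucket/www/index.html")
```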
+
+Supported Operations
+====================
+
+The RGW NFS interface supports most operations on files and
+directories, with the following restrictions:
+
+- Links, including symlinks, are not supported.
+- NFS ACLs are not supported.
+
+ + Unix user and group ownership and permissions *are* supported.
+
+- Directories may not be moved/renamed.
+
+ + Files may be moved between directories.
+
+- Only full, sequential *write* I/O is supported.
+
+ + i.e., write operations are constrained to be **uploads**.
+ + Many typical I/O operations such as editing files in place will necessarily fail as they perform non-sequential stores.
+ + Some file utilities *apparently* writing sequentially (e.g., some versions of GNU tar) may fail due to infrequent non-sequential stores.
+  + When mounting via NFS, sequential application I/O can generally be constrained to be written sequentially to the NFS server via a synchronous mount option (e.g., ``-o sync`` on Linux).
+ + NFS clients which cannot mount synchronously (e.g., MS Windows) will not be able to upload files.
+
+Security
+========
+
+The RGW NFS interface provides a hybrid security model with the
+following characteristics:
+
+- NFS protocol security is provided by the NFS-Ganesha server, as negotiated by the NFS server and clients
+
+  + e.g., clients can be trusted (AUTH_SYS), or required to present Kerberos user credentials (RPCSEC_GSS)
+ + RPCSEC_GSS wire security can be integrity only (krb5i) or integrity and privacy (encryption, krb5p)
+ + various NFS-specific security and permission rules are available
+
+ * e.g., root-squashing
+
+- a set of RGW/S3 security credentials (unknown to NFS) is associated with each RGW NFS mount (i.e., NFS-Ganesha EXPORT)
+
+ + all RGW object operations performed via the NFS server will be performed by the RGW user associated with the credentials stored in the export being accessed (currently only RGW and RGW LDAP credentials are supported)
+
+ * additional RGW authentication types such as Keystone are not currently supported
+
+Manually configuring an NFS-Ganesha Instance
+============================================
+
+Each NFS RGW instance is an NFS-Ganesha server instance *embedding*
+a full Ceph RGW instance.
+
+Therefore, the RGW NFS configuration includes Ceph and Ceph Object
+Gateway-specific configuration in a local ceph.conf, as well as
+NFS-Ganesha-specific configuration in the NFS-Ganesha config file,
+ganesha.conf.
+
+ceph.conf
+---------
+
+Required ceph.conf configuration for RGW NFS includes:
+
+* valid [client.rgw.{instance-name}] section
+* valid values for minimal instance configuration, in particular, an installed and correct ``keyring``
+
+Other config variables (e.g., ``rgw data`` and ``rgw backend store``) are
+optional.
+
+A small number of config variables (e.g., ``rgw_nfs_namespace_expire_secs``)
+are unique to RGW NFS.
+
+In particular, front-end selection is handled specially by the librgw.so runtime. By default, only the
+``rgw-nfs`` frontend is started. Additional frontends (e.g., ``beast``) are enabled via the
+``rgw nfs frontends`` config option. Its syntax is identical to the ordinary ``rgw frontends`` option.
+Default options for non-default frontends are specified via ``rgw frontend defaults`` as normal.
+
+ganesha.conf
+------------
+
+A strictly minimal ganesha.conf for use with RGW NFS includes one
+EXPORT block with embedded FSAL block of type RGW::
+
+ EXPORT
+ {
+ Export_ID={numeric-id};
+ Path = "/";
+ Pseudo = "/";
+ Access_Type = RW;
+ SecType = "sys";
+ NFS_Protocols = 4;
+ Transport_Protocols = TCP;
+
+ # optional, permit unsquashed access by client "root" user
+ #Squash = No_Root_Squash;
+
+ FSAL {
+ Name = RGW;
+ User_Id = {s3-user-id};
+ Access_Key_Id ="{s3-access-key}";
+ Secret_Access_Key = "{s3-secret}";
+ }
+ }
+
+``Export_ID`` must have an integer value, e.g., "77"
+
+``Path`` (for RGW) should be "/"
+
+``Pseudo`` defines an NFSv4 pseudo root name (NFSv4 only)
+
+``SecType = sys;`` allows clients to attach without Kerberos
+authentication
+
+``Squash = No_Root_Squash;`` enables the client root user to override
+permissions (Unix convention). When root-squashing is enabled,
+operations attempted by the root user are performed as if by the local
+"nobody" (and "nogroup") user on the NFS-Ganesha server
+
+The RGW FSAL additionally supports RGW-specific configuration
+variables in the RGW config section::
+
+ RGW {
+ cluster = "{cluster name, default 'ceph'}";
+ name = "client.rgw.{instance-name}";
+ ceph_conf = "/opt/ceph-rgw/etc/ceph/ceph.conf";
+ init_args = "-d --debug-rgw=16";
+ }
+
+``cluster`` sets a Ceph cluster name (must match the cluster being exported)
+
+``name`` sets an RGW instance name (must match the cluster being exported)
+
+``ceph_conf`` gives a path to a non-default ceph.conf file to use
+
+
+Other useful NFS-Ganesha configuration:
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Any EXPORT block which should support NFSv3 should include version 3
+in the NFS_Protocols setting. Additionally, NFSv3 is the last major
+version to support the UDP transport. To enable UDP, include it in the
+Transport_Protocols setting. For example::
+
+ EXPORT {
+ ...
+ NFS_Protocols = 3,4;
+ Transport_Protocols = UDP,TCP;
+ ...
+ }
+
+One important family of options pertains to interaction with the Linux
+idmapping service, which is used to normalize user and group names
+across systems. Details of idmapper integration are not provided here.
+
+With Linux NFS clients, NFS-Ganesha can be configured to accept
+client-supplied numeric user and group identifiers with NFSv4, which
+by default stringifies these identifiers. This may be useful in small
+setups and for experimentation::
+
+ NFSV4 {
+ Allow_Numeric_Owners = true;
+ Only_Numeric_Owners = true;
+ }
+
+Troubleshooting
+~~~~~~~~~~~~~~~
+
+NFS-Ganesha configuration problems are usually debugged by running the
+server with debugging options, controlled by the LOG config section.
+
+NFS-Ganesha log messages are grouped into various components, and
+logging can be enabled separately for each component. Valid values for
+component logging include:
+
+* ``FATAL``: critical errors only
+* ``WARN``: unusual conditions
+* ``DEBUG``: mildly verbose trace output
+* ``FULL_DEBUG``: verbose trace output
+
+Example::
+
+ LOG {
+
+ Components {
+ MEMLEAKS = FATAL;
+ FSAL = FATAL;
+ NFSPROTO = FATAL;
+ NFS_V4 = FATAL;
+ EXPORT = FATAL;
+ FILEHANDLE = FATAL;
+ DISPATCH = FATAL;
+ CACHE_INODE = FATAL;
+ CACHE_INODE_LRU = FATAL;
+ HASHTABLE = FATAL;
+ HASHTABLE_CACHE = FATAL;
+ DUPREQ = FATAL;
+ INIT = DEBUG;
+ MAIN = DEBUG;
+ IDMAPPER = FATAL;
+ NFS_READDIR = FATAL;
+ NFS_V4_LOCK = FATAL;
+ CONFIG = FATAL;
+ CLIENTID = FATAL;
+ SESSIONS = FATAL;
+ PNFS = FATAL;
+ RW_LOCK = FATAL;
+ NLM = FATAL;
+ RPC = FATAL;
+ NFS_CB = FATAL;
+ THREAD = FATAL;
+ NFS_V4_ACL = FATAL;
+ STATE = FATAL;
+ FSAL_UP = FATAL;
+ DBUS = FATAL;
+ }
+ # optional: redirect log output
+ # Facility {
+ # name = FILE;
+ # destination = "/tmp/ganesha-rgw.log";
+ # enable = active;
+  # }
+ }
+
+Running Multiple NFS Gateways
+=============================
+
+Each NFS-Ganesha instance acts as a full gateway endpoint, with the
+limitation that currently an NFS-Ganesha instance cannot be configured
+to export HTTP services. As with ordinary gateway instances, any
+number of NFS-Ganesha instances can be started, exporting the same or
+different resources from the cluster. This enables the clustering of
+NFS-Ganesha instances. However, this does not imply high availability.
+
+When regular gateway instances and NFS-Ganesha instances overlap the
+same data resources, they will be accessible from both the standard S3
+API and through the NFS-Ganesha instance as exported. You can
+co-locate the NFS-Ganesha instance with a Ceph Object Gateway instance
+on the same host.
+
+RGW vs RGW NFS
+==============
+
+Exporting an NFS namespace and other RGW namespaces (e.g., S3 or Swift
+via the Civetweb HTTP front-end) from the same program instance is
+currently not supported.
+
+When adding objects and buckets outside of NFS, those objects will
+appear in the NFS namespace within the time set by
+``rgw_nfs_namespace_expire_secs``, which defaults to 300 seconds (5 minutes).
+Override the default value for ``rgw_nfs_namespace_expire_secs`` in the
+Ceph configuration file to change the refresh rate.
+
+If exporting Swift containers that do not conform to valid S3 bucket
+naming requirements, set ``rgw_relaxed_s3_bucket_names`` to true in the
+[client.rgw] section of the Ceph configuration file. For example,
+if a Swift container name contains underscores, it is not a valid S3
+bucket name and will be rejected unless ``rgw_relaxed_s3_bucket_names``
+is set to true.
+
+Configuring NFSv4 clients
+=========================
+
+To access the namespace, mount the configured NFS-Ganesha export(s)
+into desired locations in the local POSIX namespace. As noted, this
+implementation has a few unique restrictions:
+
+- NFS 4.1 and higher protocol flavors are preferred
+
+ + NFSv4 OPEN and CLOSE operations are used to track upload transactions
+
+- To upload data successfully, clients must preserve write ordering
+
+ + on Linux and many Unix NFS clients, use the -osync mount option
+
+Conventions for mounting NFS resources are platform-specific. The
+following conventions work on Linux and some Unix platforms:
+
+From the command line::
+
+ mount -t nfs -o nfsvers=4.1,noauto,soft,sync,proto=tcp <ganesha-host-name>:/ <mount-point>
+
+In /etc/fstab::
+
+  <ganesha-host-name>:/ <mount-point> nfs noauto,soft,nfsvers=4.1,sync,proto=tcp 0 0
+
+Specify the NFS-Ganesha host name and the path to the mount point on
+the client.
+
+Configuring NFSv3 Clients
+=========================
+
+Linux clients can be configured to mount with NFSv3 by supplying
+``nfsvers=3`` and ``noacl`` as mount options. To use UDP as the
+transport, add ``proto=udp`` to the mount options. However, TCP is the
+preferred transport::
+
+ <ganesha-host-name>:/ <mount-point> nfs noauto,noacl,soft,nfsvers=3,sync,proto=tcp 0 0
+
+Configure the NFS Ganesha EXPORT block Protocols setting with version
+3 and the Transports setting with UDP if the mount will use version 3 with UDP.
+
+NFSv3 Semantics
+---------------
+
+Since NFSv3 does not communicate client OPEN and CLOSE operations to
+file servers, RGW NFS cannot use these operations to mark the
+beginning and ending of file upload transactions. Instead, RGW NFS
+starts a new upload when the first write is sent to a file at offset
+0, and finalizes the upload when no new writes to the file have been
+seen for a period of time, by default, 10 seconds. To change this
+timeout, set an alternate value for ``rgw_nfs_write_completion_interval_s``
+in the RGW section(s) of the Ceph configuration file.
+
+References
+==========
+
+.. [#] http://docs.aws.amazon.com/AmazonS3/latest/dev/ListingKeysHierarchy.html
diff --git a/doc/radosgw/notifications.rst b/doc/radosgw/notifications.rst
new file mode 100644
index 000000000..1d18772b2
--- /dev/null
+++ b/doc/radosgw/notifications.rst
@@ -0,0 +1,547 @@
+====================
+Bucket Notifications
+====================
+
+.. versionadded:: Nautilus
+
+.. contents::
+
+Bucket notifications provide a mechanism for sending information out of radosgw
+when certain events happen on the bucket. Notifications can be sent to HTTP
+endpoints, AMQP0.9.1 endpoints, and Kafka endpoints.
+
+A user can create topics. A topic entity is defined by its name and is "per
+tenant". A user can associate its topics (via notification configuration) only
+with buckets it owns.
+
+A notification entity must be created in order to send event notifications for
+a specific bucket. A notification entity can be created either for a subset
+of event types or for all event types (which is the default). The
+notification may also filter out events based on matches of the prefixes and
+suffixes of (1) the keys, (2) the metadata attributes attached to the object,
+or (3) the object tags. Regular-expression matching can also be used on these
+to create filters. There can be multiple notifications for any specific topic,
+and the same topic can be used for multiple notifications.
+
+A REST API has been defined to provide configuration and control
+interfaces for the bucket notification mechanism.
+
+.. toctree::
+ :maxdepth: 1
+
+ S3 Bucket Notification Compatibility <s3-notification-compatibility>
+
+.. note:: To enable the bucket notifications API, the ``rgw_enable_apis`` configuration parameter should contain "notifications".
+
+Notification Reliability
+------------------------
+
+Notifications can be sent synchronously or asynchronously. This section
+describes the latency and reliability that you should expect for synchronous
+and asynchronous notifications.
+
+Synchronous Notifications
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Notifications can be sent synchronously, as part of the operation that
+triggered them. In this mode, the operation is acknowledged (acked) only after
+the notification is sent to the topic's configured endpoint. This means that
+the round trip time of the notification (the time it takes to send the
+notification to the topic's endpoint plus the time it takes to receive the
+acknowledgement) is added to the latency of the operation itself.
+
+.. note:: The original triggering operation is considered successful even if
+ the notification fails with an error, cannot be delivered, or times out.
+
+Asynchronous Notifications
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Notifications can be sent asynchronously. They are committed into persistent
+storage and then asynchronously sent to the topic's configured endpoint. In
+this case, the only latency added to the original operation is the latency
+added when the notification is committed to persistent storage.
+
+.. note:: If the notification fails with an error, cannot be delivered, or
+ times out, it is retried until it is successfully acknowledged.
+
+.. tip:: To minimize the latency added by asynchronous notification, we
+   recommend placing the "log" pool on fast media.
+
+
+Topic Management via CLI
+------------------------
+
+Fetch the configuration of all topics associated with tenants by running the
+following command:
+
+.. prompt:: bash #
+
+ radosgw-admin topic list [--tenant={tenant}]
+
+
+Fetch the configuration of a specific topic by running the following command:
+
+.. prompt:: bash #
+
+ radosgw-admin topic get --topic={topic-name} [--tenant={tenant}]
+
+
+Remove a topic by running the following command:
+
+.. prompt:: bash #
+
+ radosgw-admin topic rm --topic={topic-name} [--tenant={tenant}]
+
+
+Notification Performance Statistics
+-----------------------------------
+
+- ``pubsub_event_triggered``: a running counter of events that have at least one topic associated with them
+- ``pubsub_event_lost``: a running counter of events that had topics associated with them, but that were not pushed to any of the endpoints
+- ``pubsub_push_ok``: a running counter, for all notifications, of events successfully pushed to their endpoints
+- ``pubsub_push_fail``: a running counter, for all notifications, of events that failed to be pushed to their endpoints
+- ``pubsub_push_pending``: the gauge value of events pushed to an endpoint but not acked or nacked yet
+
+.. note::
+
+ ``pubsub_event_triggered`` and ``pubsub_event_lost`` are incremented per
+ event on each notification, but ``pubsub_push_ok`` and ``pubsub_push_fail``
+ are incremented per push action on each notification.
+
+Bucket Notification REST API
+----------------------------
+
+Topics
+~~~~~~
+
+.. note::
+
+ In all topic actions, the parameters are URL-encoded and sent in the
+ message body using this content type:
+ ``application/x-www-form-urlencoded``.
+
+
+.. _Create a Topic:
+
+Create a Topic
+``````````````
+
+This creates a new topic. Provide the topic with push endpoint parameters,
+which will be used later when a notification is created. A response is
+generated. A successful response includes the topic's `ARN
+<https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html>`_
+(the "Amazon Resource Name", a unique identifier used to reference the topic).
+To update a topic, use the same command that you used to create it (but when
+updating, use the name of an existing topic and different endpoint values).
+
+.. tip:: Any notification already associated with the topic must be re-created
+   in order for the topic update to take effect.
+
+::
+
+ POST
+
+ Action=CreateTopic
+ &Name=<topic-name>
+ [&Attributes.entry.1.key=amqp-exchange&Attributes.entry.1.value=<exchange>]
+ [&Attributes.entry.2.key=amqp-ack-level&Attributes.entry.2.value=none|broker|routable]
+ [&Attributes.entry.3.key=verify-ssl&Attributes.entry.3.value=true|false]
+ [&Attributes.entry.4.key=kafka-ack-level&Attributes.entry.4.value=none|broker]
+ [&Attributes.entry.5.key=use-ssl&Attributes.entry.5.value=true|false]
+ [&Attributes.entry.6.key=ca-location&Attributes.entry.6.value=<file path>]
+ [&Attributes.entry.7.key=OpaqueData&Attributes.entry.7.value=<opaque data>]
+ [&Attributes.entry.8.key=push-endpoint&Attributes.entry.8.value=<endpoint>]
+ [&Attributes.entry.9.key=persistent&Attributes.entry.9.value=true|false]
+ [&Attributes.entry.10.key=cloudevents&Attributes.entry.10.value=true|false]
+ [&Attributes.entry.11.key=mechanism&Attributes.entry.11.value=<mechanism>]
+
+Request parameters:
+
+- push-endpoint: This is the URI of an endpoint to send push notifications to.
+- OpaqueData: Opaque data is set in the topic configuration and added to all
+ notifications that are triggered by the topic.
+- persistent: This indicates whether notifications to this endpoint are
+ persistent (=asynchronous) or not persistent. (This is "false" by default.)
+
+- HTTP endpoint
+
+  - URI: ``http[s]://<fqdn>[:<port>]``
+ - port: This defaults to 80 for HTTP and 443 for HTTPS.
+ - verify-ssl: This indicates whether the server certificate is validated by
+ the client. (This is "true" by default.)
+ - cloudevents: This indicates whether the HTTP header should contain
+ attributes according to the `S3 CloudEvents Spec`_. (This is "false" by
+ default.)
+
+- AMQP0.9.1 endpoint
+
+ - URI: ``amqp[s]://[<user>:<password>@]<fqdn>[:<port>][/<vhost>]``
+ - user/password: This defaults to "guest/guest".
+ - user/password: This must be provided only over HTTPS. Topic creation
+ requests will otherwise be rejected.
+ - port: This defaults to 5672 for unencrypted connections and 5671 for
+ SSL-encrypted connections.
+ - vhost: This defaults to "/".
+ - verify-ssl: This indicates whether the server certificate is validated by
+ the client. (This is "true" by default.)
+ - If ``ca-location`` is provided and a secure connection is used, the
+ specified CA will be used to authenticate the broker. The default CA will
+ not be used.
+  - amqp-exchange: The exchange must exist and must be able to route
+    messages based on topics. This parameter is mandatory.
+ - amqp-ack-level: No end2end acking is required. Messages may persist in the
+ broker before being delivered to their final destinations. Three ack methods
+ exist:
+
+ - "none": The message is considered "delivered" if it is sent to the broker.
+ - "broker": The message is considered "delivered" if it is acked by the broker (default).
+ - "routable": The message is considered "delivered" if the broker can route to a consumer.
+
+.. tip:: The topic-name (see :ref:`Create a Topic`) is used for the
+ AMQP topic ("routing key" for a topic exchange).
+
+- Kafka endpoint
+
+  - URI: ``kafka://[<user>:<password>@]<fqdn>[:<port>]``
+ - ``use-ssl``: If this is set to "true", a secure connection is used to
+ connect to the broker. (This is "false" by default.)
+ - ``ca-location``: If this is provided and a secure connection is used, the
+ specified CA will be used instead of the default CA to authenticate the
+ broker.
+  - user/password: This may be provided only over HTTPS. Otherwise, the
+    config parameter ``rgw_allow_notification_secrets_in_cleartext`` must be
+    set to "true" in order to create a topic.
+  - user/password: This should be provided together with ``use-ssl``;
+    otherwise, the broker credentials will be sent over an insecure
+    transport.
+ - ``mechanism`` may be provided together with user/password (default: ``PLAIN``).
+ The supported SASL mechanisms are:
+
+ - PLAIN
+ - SCRAM-SHA-256
+ - SCRAM-SHA-512
+ - GSSAPI
+ - OAUTHBEARER
+
+ - port: This defaults to 9092.
+ - kafka-ack-level: No end2end acking is required. Messages may persist in the
+ broker before being delivered to their final destinations. Two ack methods
+ exist:
+
+ - "none": Messages are considered "delivered" if sent to the broker.
+ - "broker": Messages are considered "delivered" if acked by the broker. (This
+ is the default.)
+
+.. note::
+
+ - The key-value pair of a specific parameter need not reside in the same
+ line as the parameter, and need not appear in any specific order, but it
+ must use the same index.
+ - Attribute indexing need not be sequential and need not start from any
+ specific value.
+ - `AWS Create Topic`_ provides a detailed explanation of the endpoint
+ attributes format. In our case, however, different keys and values are
+ used.
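+
+As an illustration, a form-urlencoded ``CreateTopic`` request body in
+the shape described above can be assembled with the Python standard
+library. The topic name and endpoint values here are hypothetical:

```python
from urllib.parse import urlencode

# Each attribute entry pairs a key with a value under the same index;
# the indices need not be sequential, only matched within an entry.
params = {
    "Action": "CreateTopic",
    "Name": "mytopic",
    "Attributes.entry.1.key": "push-endpoint",
    "Attributes.entry.1.value": "http://ep.host.dom:8080",
    "Attributes.entry.2.key": "persistent",
    "Attributes.entry.2.value": "true",
}

# urlencode() percent-encodes the values, e.g. the "://" in the
# endpoint URI, producing an application/x-www-form-urlencoded body.
body = urlencode(params)
```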
+
+The response has the following format:
+
+::
+
+ <CreateTopicResponse xmlns="https://sns.amazonaws.com/doc/2010-03-31/">
+ <CreateTopicResult>
+ <TopicArn></TopicArn>
+ </CreateTopicResult>
+ <ResponseMetadata>
+ <RequestId></RequestId>
+ </ResponseMetadata>
+ </CreateTopicResponse>
+
+The topic `ARN
+<https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html>`_
+in the response has the following format:
+
+::
+
+ arn:aws:sns:<zone-group>:<tenant>:<topic>
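+
+A client can pick that ARN format apart as follows. This is an
+illustrative helper written for this document, not part of any radosgw
+API, and the example zone-group and topic names are hypothetical:

```python
def parse_topic_arn(arn):
    # Expected format: arn:aws:sns:<zone-group>:<tenant>:<topic>
    prefix, partition, service, zonegroup, tenant, topic = arn.split(":", 5)
    if (prefix, partition, service) != ("arn", "aws", "sns"):
        raise ValueError("not a topic ARN: %s" % arn)
    return zonegroup, tenant, topic

# An empty tenant field corresponds to the legacy (unnamed) tenant.
zonegroup, tenant, topic = parse_topic_arn("arn:aws:sns:default::mytopic")
```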
+
+Get Topic Attributes
+````````````````````
+
+This returns information about a specific topic. This includes push-endpoint
+information, if provided.
+
+::
+
+ POST
+
+ Action=GetTopicAttributes
+ &TopicArn=<topic-arn>
+
+The response has the following format:
+
+::
+
+ <GetTopicAttributesResponse>
+ <GetTopicAttributesResult>
+ <Attributes>
+ <entry>
+ <key>User</key>
+ <value></value>
+ </entry>
+ <entry>
+ <key>Name</key>
+ <value></value>
+ </entry>
+ <entry>
+ <key>EndPoint</key>
+ <value></value>
+ </entry>
+ <entry>
+ <key>TopicArn</key>
+ <value></value>
+ </entry>
+ <entry>
+ <key>OpaqueData</key>
+ <value></value>
+ </entry>
+ </Attributes>
+ </GetTopicAttributesResult>
+ <ResponseMetadata>
+ <RequestId></RequestId>
+ </ResponseMetadata>
+ </GetTopicAttributesResponse>
+
+- User: The name of the user that created the topic.
+- Name: The name of the topic.
+- EndPoint: The JSON-formatted endpoint parameters, including:
+
+  - EndpointAddress: The push-endpoint URL.
+  - EndpointArgs: The push-endpoint args.
+  - EndpointTopic: The topic name to be sent to the endpoint (which can be
+    different than the above topic name).
+  - HasStoredSecret: This is "true" if the endpoint URL contains
+    user/password information. In this case, the request must be made over
+    HTTPS. The "topic get" request will otherwise be rejected.
+  - Persistent: This is "true" if the topic is persistent.
+
+- TopicArn: topic `ARN
+ <https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html>`_.
+- OpaqueData: The opaque data set on the topic.
+
+Get Topic Information
+`````````````````````
+
+This returns information about a specific topic. This includes push-endpoint
+information, if provided. Note that this API is now deprecated in favor of
+the AWS-compliant ``GetTopicAttributes`` API.
+
+::
+
+ POST
+
+ Action=GetTopic
+ &TopicArn=<topic-arn>
+
+The response has the following format:
+
+::
+
+ <GetTopicResponse>
+ <GetTopicResult>
+ <Topic>
+ <User></User>
+ <Name></Name>
+ <EndPoint>
+ <EndpointAddress></EndpointAddress>
+ <EndpointArgs></EndpointArgs>
+ <EndpointTopic></EndpointTopic>
+ <HasStoredSecret></HasStoredSecret>
+ <Persistent></Persistent>
+ </EndPoint>
+ <TopicArn></TopicArn>
+ <OpaqueData></OpaqueData>
+ </Topic>
+ </GetTopicResult>
+ <ResponseMetadata>
+ <RequestId></RequestId>
+ </ResponseMetadata>
+ </GetTopicResponse>
+
+- User: The name of the user that created the topic.
+- Name: The name of the topic.
+- EndpointAddress: The push-endpoint URL.
+- EndpointArgs: The push-endpoint args.
+- EndpointTopic: The topic name to be sent to the endpoint (which can be
+ different than the above topic name).
+- HasStoredSecret: This is "true" if the endpoint URL contains user/password
+ information. In this case, the request must be made over HTTPS. The "topic
+ get" request will otherwise be rejected.
+- Persistent: "true" if topic is persistent.
+- TopicArn: topic `ARN
+ <https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html>`_.
+- OpaqueData: the opaque data set on the topic.
+
+Delete Topic
+````````````
+
+::
+
+ POST
+
+ Action=DeleteTopic
+ &TopicArn=<topic-arn>
+
+This deletes the specified topic.
+
+.. note::
+
+ - Deleting an unknown notification (for example, double delete) is not
+ considered an error.
+ - Deleting a topic does not automatically delete all notifications associated
+ with it.
+
+The response has the following format:
+
+::
+
+ <DeleteTopicResponse xmlns="https://sns.amazonaws.com/doc/2010-03-31/">
+ <ResponseMetadata>
+ <RequestId></RequestId>
+ </ResponseMetadata>
+ </DeleteTopicResponse>
+
+List Topics
+```````````
+
+List all topics associated with a tenant.
+
+::
+
+ POST
+
+ Action=ListTopics
+
+The response has the following format:
+
+::
+
+ <ListTopicsResponse xmlns="https://sns.amazonaws.com/doc/2010-03-31/">
+ <ListTopicsResult>
+ <Topics>
+ <member>
+ <User></User>
+ <Name></Name>
+ <EndPoint>
+ <EndpointAddress></EndpointAddress>
+ <EndpointArgs></EndpointArgs>
+ <EndpointTopic></EndpointTopic>
+ </EndPoint>
+ <TopicArn></TopicArn>
+ <OpaqueData></OpaqueData>
+ </member>
+ </Topics>
+ </ListTopicsResult>
+ <ResponseMetadata>
+ <RequestId></RequestId>
+ </ResponseMetadata>
+ </ListTopicsResponse>
+
+- If the endpoint URL contains user/password information in any part of the
+ topic, the request must be made over HTTPS. The "topic list" request will
+ otherwise be rejected.
+
+Notifications
+~~~~~~~~~~~~~
+
+Detailed under: `Bucket Operations`_.
+
+.. note::
+
+ - "Abort Multipart Upload" request does not emit a notification
+ - Both "Initiate Multipart Upload" and "POST Object" requests will emit an ``s3:ObjectCreated:Post`` notification
+
+Events
+~~~~~~
+
+Events are in JSON format (regardless of the actual endpoint), and are S3-compatible.
+For example:
+
+::
+
+ {"Records":[
+ {
+ "eventVersion":"2.1",
+ "eventSource":"ceph:s3",
+ "awsRegion":"zonegroup1",
+ "eventTime":"2019-11-22T13:47:35.124724Z",
+ "eventName":"ObjectCreated:Put",
+ "userIdentity":{
+ "principalId":"tester"
+ },
+ "requestParameters":{
+ "sourceIPAddress":""
+ },
+ "responseElements":{
+ "x-amz-request-id":"503a4c37-85eb-47cd-8681-2817e80b4281.5330.903595",
+ "x-amz-id-2":"14d2-zone1-zonegroup1"
+ },
+ "s3":{
+ "s3SchemaVersion":"1.0",
+ "configurationId":"mynotif1",
+ "bucket":{
+ "name":"mybucket1",
+ "ownerIdentity":{
+ "principalId":"tester"
+ },
+ "arn":"arn:aws:s3:zonegroup1::mybucket1",
+ "id":"503a4c37-85eb-47cd-8681-2817e80b4281.5332.38"
+ },
+ "object":{
+ "key":"myimage1.jpg",
+ "size":"1024",
+ "eTag":"37b51d194a7513e45b56f6524f2d51f2",
+ "versionId":"",
+ "sequencer": "F7E6D75DC742D108",
+ "metadata":[],
+ "tags":[]
+ }
+ },
+ "eventId":"",
+ "opaqueData":"me@example.com"
+ }
+ ]}
+
+- awsRegion: The zonegroup.
+- eventTime: The timestamp, indicating when the event was triggered.
+- eventName: For the list of supported events see: `S3 Notification
+ Compatibility`_. Note that eventName values do not start with the `s3:`
+ prefix.
+- userIdentity.principalId: The user that triggered the change.
+- requestParameters.sourceIPAddress: Not supported.
+- responseElements.x-amz-request-id: The request ID of the original change.
+- responseElements.x-amz-id-2: The RGW on which the change was made.
+- s3.configurationId: The notification ID that created the event.
+- s3.bucket.name: The name of the bucket.
+- s3.bucket.ownerIdentity.principalId: The owner of the bucket.
+- s3.bucket.arn: The ARN of the bucket.
+- s3.bucket.id: The ID of the bucket. (This is an extension to the S3
+ notification API.)
+- s3.object.key: The object key.
+- s3.object.size: The object size.
+- s3.object.eTag: The object etag.
+- s3.object.versionId: The object version, if the bucket is versioned. When a
+ copy is made, it includes the version of the target object. When a delete
+ marker is created, it includes the version of the delete marker.
+- s3.object.sequencer: The monotonically-increasing identifier of the "change
+ per object" (hexadecimal format).
+- s3.object.metadata: Any metadata set on the object that is sent as
+ ``x-amz-meta-`` (that is, any metadata set on the object that is sent as an
+ extension to the S3 notification API).
+- s3.object.tags: Any tags set on the object. (This is an extension to the S3
+ notification API.)
+- s3.eventId: The unique ID of the event, which could be used for acking. (This
+ is an extension to the S3 notification API.)
+- s3.opaqueData: The "opaque data" set in the topic configuration, which is
+  added to all notifications triggered by the topic. (This is an extension to
+  the S3 notification API.)
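+
+A minimal sketch of consuming such an event on the endpoint side follows.
+The payload is an abbreviated version of the sample above, keeping only
+the fields the sketch uses:

```python
import json

# Abbreviated version of the sample event above.
payload = """{"Records":[{
  "eventName":"ObjectCreated:Put",
  "awsRegion":"zonegroup1",
  "s3":{"configurationId":"mynotif1",
        "bucket":{"name":"mybucket1"},
        "object":{"key":"myimage1.jpg","size":"1024",
                  "eTag":"37b51d194a7513e45b56f6524f2d51f2"}},
  "opaqueData":"me@example.com"}]}"""

for record in json.loads(payload)["Records"]:
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    # Note that "size" is delivered as a JSON string in the sample above.
    size = int(record["s3"]["object"]["size"])
    print(f"{record['eventName']}: {bucket}/{key} ({size} bytes)")
```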
+
+.. _S3 Notification Compatibility: ../s3-notification-compatibility
+.. _AWS Create Topic: https://docs.aws.amazon.com/sns/latest/api/API_CreateTopic.html
+.. _Bucket Operations: ../s3/bucketops
+.. _S3 CloudEvents Spec: https://github.com/cloudevents/spec/blob/main/cloudevents/adapters/aws-s3.md
diff --git a/doc/radosgw/oidc.rst b/doc/radosgw/oidc.rst
new file mode 100644
index 000000000..46593f1d8
--- /dev/null
+++ b/doc/radosgw/oidc.rst
@@ -0,0 +1,97 @@
+===============================
+ OpenID Connect Provider in RGW
+===============================
+
+An entity describing the OpenID Connect Provider needs to be created in RGW, in order to establish trust between the two.
+
+REST APIs for Manipulating an OpenID Connect Provider
+=====================================================
+
+The following REST APIs can be used for creating and managing an OpenID Connect Provider entity in RGW.
+
+In order to invoke the REST admin APIs, a user with admin caps needs to be created.
+
+.. code-block:: shell
+
+ radosgw-admin --uid TESTER --display-name "TestUser" --access_key TESTER --secret test123 user create
+ radosgw-admin caps add --uid="TESTER" --caps="oidc-provider=*"
+
+
+CreateOpenIDConnectProvider
+---------------------------------
+
+Create an OpenID Connect Provider entity in RGW.
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``ClientIDList.member.N``
+
+:Description: List of client IDs that need access to S3 resources.
+:Type: Array of Strings
+
+``ThumbprintList.member.N``
+
+:Description: List of thumbprints of the OpenID Connect IDP's server certificates. A maximum of 5 thumbprints is allowed.
+:Type: Array of Strings
+
+``Url``
+
+:Description: URL of the IDP.
+:Type: String
+
+
+Example::
+
+ POST "<hostname>?Action=CreateOpenIDConnectProvider
+ &ThumbprintList.list.1=F7D7B3515DD0D319DD219A43A9EA727AD6065287
+ &ClientIDList.list.1=app-profile-jsp
+ &Url=http://localhost:8080/auth/realms/quickstart
+
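+The thumbprints in ``ThumbprintList`` are typically the upper-case,
+hex-encoded SHA-1 digests of the IDP's DER-encoded server certificates.
+A sketch of computing one from a PEM certificate follows; the
+``thumbprint`` helper is illustrative and the certificate body below is a
+placeholder, not a real certificate:

```python
import base64
import hashlib

def thumbprint(pem: str) -> str:
    # Strip the PEM armor, decode the body back to DER bytes,
    # and return the upper-case hex SHA-1 digest of the DER data.
    body = "".join(line for line in pem.strip().splitlines()
                   if not line.startswith("-----"))
    der = base64.b64decode(body)
    return hashlib.sha1(der).hexdigest().upper()

# Placeholder certificate body (not real DER) just to exercise the helper.
fake_pem = ("-----BEGIN CERTIFICATE-----\n"
            + base64.b64encode(b"example-der-bytes").decode()
            + "\n-----END CERTIFICATE-----")
print(thumbprint(fake_pem))
```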
+
+DeleteOpenIDConnectProvider
+---------------------------
+
+Deletes an OpenID Connect Provider entity in RGW.
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``OpenIDConnectProviderArn``
+
+:Description: ARN of the IDP which is returned by the Create API.
+:Type: String
+
+Example::
+
+ POST "<hostname>?Action=DeleteOpenIDConnectProvider
+ &OpenIDConnectProviderArn=arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/quickstart
+
+
+GetOpenIDConnectProvider
+---------------------------
+
+Gets information about an IDP.
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``OpenIDConnectProviderArn``
+
+:Description: ARN of the IDP which is returned by the Create API.
+:Type: String
+
+Example::
+
+ POST "<hostname>?Action=GetOpenIDConnectProvider
+ &OpenIDConnectProviderArn=arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/quickstart
+
+ListOpenIDConnectProviders
+--------------------------
+
+Lists information about all IDPs.
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+None
+
+Example::
+
+ POST "<hostname>?Action=ListOpenIDConnectProviders
diff --git a/doc/radosgw/opa.rst b/doc/radosgw/opa.rst
new file mode 100644
index 000000000..f1b76b5ef
--- /dev/null
+++ b/doc/radosgw/opa.rst
@@ -0,0 +1,72 @@
+==============================
+Open Policy Agent Integration
+==============================
+
+Open Policy Agent (OPA) is a lightweight general-purpose policy engine
+that can be co-located with a service. OPA can be integrated as a
+sidecar, host-level daemon, or library.
+
+Services can offload policy decisions to OPA by executing queries. Hence,
+policy enforcement can be decoupled from policy decisions.
+
+Configure OPA
+=============
+
+To configure OPA, load custom policies into OPA that control the resources users
+are allowed to access. Relevant data or context can also be loaded into OPA to make decisions.
+
+Policies and data can be loaded into OPA in the following ways:
+
+* OPA's RESTful APIs
+* OPA's *bundle* feature, which downloads policies and data from remote HTTP servers
+* The filesystem
+
+Configure the Ceph Object Gateway
+=================================
+
+The following configuration options are available for OPA integration::
+
+ rgw use opa authz = {use opa server to authorize client requests}
+ rgw opa url = {opa server url:opa server port}
+ rgw opa token = {opa bearer token}
+ rgw opa verify ssl = {verify opa server ssl certificate}
+
+How the RGW-OPA Integration Works
+=================================
+
+After a user is authenticated, OPA can be used to check if the user is authorized
+to perform the given action on the resource. OPA responds with an allow or deny
+decision which is sent back to the RGW which enforces the decision.
+
+Example request::
+
+ POST /v1/data/ceph/authz HTTP/1.1
+ Host: opa.example.com:8181
+ Content-Type: application/json
+
+ {
+ "input": {
+ "method": "GET",
+ "subuser": "subuser",
+ "user_info": {
+ "user_id": "john",
+ "display_name": "John"
+ },
+ "bucket_info": {
+ "bucket": {
+ "name": "Testbucket",
+ "bucket_id": "testbucket"
+ },
+ "owner": "john"
+ }
+ }
+ }
+
+Response::
+
+ {"result": true}
+
+The above is a sample request sent to OPA. It contains information about the
+user, the resource, and the action to be performed on that resource. Based on the
+policies and data loaded into OPA, OPA verifies whether the request should be allowed or denied.
+In the sample request, RGW makes a POST request to the endpoint */v1/data/ceph/authz*,
+where *ceph* is the package name and *authz* is the rule name.
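+
+As an illustration of the shape of that query, a small helper can assemble
+the URL and JSON input document that RGW sends. The function name, the OPA
+endpoint, and the ``bucket_id`` derivation below are hypothetical, shown
+only to make the input structure concrete:

```python
import json

OPA_BASE_URL = "http://opa.example.com:8181"  # assumed OPA endpoint

def build_authz_query(method, user_id, display_name, bucket, owner):
    """Assemble the URL and JSON input document for a ceph/authz query."""
    # The path encodes the OPA package ("ceph") and rule ("authz").
    url = f"{OPA_BASE_URL}/v1/data/ceph/authz"
    body = {
        "input": {
            "method": method,
            "user_info": {"user_id": user_id, "display_name": display_name},
            "bucket_info": {
                "bucket": {"name": bucket, "bucket_id": bucket.lower()},
                "owner": owner,
            },
        }
    }
    return url, json.dumps(body)

url, body = build_authz_query("GET", "john", "John", "Testbucket", "john")
print(url)
```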
diff --git a/doc/radosgw/orphans.rst b/doc/radosgw/orphans.rst
new file mode 100644
index 000000000..bf6b10edf
--- /dev/null
+++ b/doc/radosgw/orphans.rst
@@ -0,0 +1,117 @@
+==================================
+Orphan List and Associated Tooling
+==================================
+
+.. versionadded:: Luminous
+
+.. contents::
+
+Orphans are RADOS objects that are left behind after their associated
+RGW objects are removed. Normally these RADOS objects are removed
+automatically, either immediately or through a process known as
+"garbage collection". Over the history of RGW, however, there may have
+been bugs that prevented these RADOS objects from being deleted, and
+these RADOS objects may be consuming space on the Ceph cluster without
+being of any use. From the perspective of RGW, we call such RADOS
+objects "orphans".
+
+Orphans Find -- DEPRECATED
+--------------------------
+
+The ``radosgw-admin`` tool has three subcommands that help manage
+orphans; however, these subcommands are (or soon will be) deprecated.
+They are:
+
+.. prompt:: bash #
+
+ radosgw-admin orphans find ...
+ radosgw-admin orphans finish ...
+ radosgw-admin orphans list-jobs ...
+
+There are two key problems with these subcommands. First, these
+subcommands have not been actively maintained, and therefore have not
+tracked RGW as it has evolved in terms of features and updates. As a
+result, confidence that these subcommands can accurately identify
+true orphans is presently low.
+
+Second, these subcommands store intermediate results on the cluster
+itself. This can be problematic when cluster administrators are
+confronting insufficient storage space and want to remove orphans as a
+means of addressing the issue. The intermediate results could strain
+the existing cluster storage capacity even further.
+
+For these reasons "orphans find" has been deprecated.
+
+Orphan List
+-----------
+
+Because "orphans find" has been deprecated, RGW now includes an
+additional tool -- 'rgw-orphan-list'. When run it will list the
+available pools and prompt the user to enter the name of the data
+pool. At that point the tool will, perhaps after an extended period of
+time, produce a local file containing the RADOS objects from the
+designated pool that appear to be orphans. The administrator is free
+to examine this file and then decide on a course of action, perhaps
+removing those RADOS objects from the designated pool.
+
+All intermediate results are stored on the local file system rather
+than the Ceph cluster. So running the 'rgw-orphan-list' tool should
+have no appreciable impact on the amount of cluster storage consumed.
+
+WARNING: Experimental Status
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The 'rgw-orphan-list' tool is new and therefore currently considered
+experimental. The list of orphans produced should be "sanity checked"
+before being used for a large delete operation.
+
+WARNING: Specifying a Data Pool
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If a pool other than an RGW data pool is specified, the results of the
+tool will be erroneous. All RADOS objects found on such a pool will
+falsely be designated as orphans.
+
+WARNING: Unindexed Buckets
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+RGW allows for unindexed buckets, that is buckets that do not maintain
+an index of their contents. This is not a typical configuration, but
+it is supported. Because the 'rgw-orphan-list' tool uses the bucket
+indices to determine what RADOS objects should exist, objects in the
+unindexed buckets will falsely be listed as orphans.
+
+
+RADOS List
+----------
+
+One of the sub-steps in computing a list of orphans is to map each RGW
+object into its corresponding set of RADOS objects. This is done using
+a subcommand of 'radosgw-admin'.
+
+.. prompt:: bash #
+
+ radosgw-admin bucket radoslist [--bucket={bucket-name}]
+
+The subcommand will produce a list of RADOS objects that support all
+of the RGW objects. If a bucket is specified then the subcommand will
+only produce a list of RADOS objects that correspond to the RGW
+objects in the specified bucket.
+
+Note: Shared Bucket Markers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Some administrators will be aware of the coding schemes used to name
+the RADOS objects that correspond to RGW objects, which include a
+"marker" unique to a given bucket.
+
+RADOS objects that correspond with the contents of one RGW bucket,
+however, may contain a marker that specifies a different bucket. This
+behavior is a consequence of the "shallow copy" optimization used by
+RGW. When larger objects are copied from bucket to bucket, only the
+"head" objects are actually copied, and the tail objects are
+shared. Those shared objects will contain the marker of the original
+bucket.
+
+.. _Data Layout in RADOS : ../layout
+.. _Pool Placement and Storage Classes : ../placement
diff --git a/doc/radosgw/placement.rst b/doc/radosgw/placement.rst
new file mode 100644
index 000000000..28c71783d
--- /dev/null
+++ b/doc/radosgw/placement.rst
@@ -0,0 +1,263 @@
+==================================
+Pool Placement and Storage Classes
+==================================
+
+.. contents::
+
+Placement Targets
+=================
+
+.. versionadded:: Jewel
+
+Placement targets control which `Pools`_ are associated with a particular
+bucket. A bucket's placement target is selected on creation, and cannot be
+modified. The ``radosgw-admin bucket stats`` command will display its
+``placement_rule``.
+
+The zonegroup configuration contains a list of placement targets with an
+initial target named ``default-placement``. The zone configuration then maps
+each zonegroup placement target name onto its local storage. This zone
+placement information includes the ``index_pool`` name for the bucket index,
+the ``data_extra_pool`` name for metadata about incomplete multipart uploads,
+and a ``data_pool`` name for each storage class.
+
+.. _storage_classes:
+
+Storage Classes
+===============
+
+.. versionadded:: Nautilus
+
+Storage classes are used to customize the placement of object data. S3 Bucket
+Lifecycle rules can automate the transition of objects between storage classes.
+
+Storage classes are defined in terms of placement targets. Each zonegroup
+placement target lists its available storage classes with an initial class
+named ``STANDARD``. The zone configuration is responsible for providing a
+``data_pool`` pool name for each of the zonegroup's storage classes.
+
+Zonegroup/Zone Configuration
+============================
+
+Placement configuration is performed with ``radosgw-admin`` commands on
+the zonegroups and zones.
+
+The zonegroup placement configuration can be queried with:
+
+::
+
+ $ radosgw-admin zonegroup get
+ {
+ "id": "ab01123f-e0df-4f29-9d71-b44888d67cd5",
+ "name": "default",
+ "api_name": "default",
+ ...
+ "placement_targets": [
+ {
+ "name": "default-placement",
+ "tags": [],
+ "storage_classes": [
+ "STANDARD"
+ ]
+ }
+ ],
+ "default_placement": "default-placement",
+ ...
+ }
+
+The zone placement configuration can be queried with:
+
+::
+
+ $ radosgw-admin zone get
+ {
+ "id": "557cdcee-3aae-4e9e-85c7-2f86f5eddb1f",
+ "name": "default",
+ "domain_root": "default.rgw.meta:root",
+ ...
+ "placement_pools": [
+ {
+ "key": "default-placement",
+ "val": {
+ "index_pool": "default.rgw.buckets.index",
+ "storage_classes": {
+ "STANDARD": {
+ "data_pool": "default.rgw.buckets.data"
+ }
+ },
+ "data_extra_pool": "default.rgw.buckets.non-ec",
+ "index_type": 0,
+ "inline_data": true
+ }
+ }
+ ],
+ ...
+ }
+
+.. note:: If you have not done any previous `Multisite Configuration`_,
+ a ``default`` zone and zonegroup are created for you, and changes
+ to the zone/zonegroup will not take effect until the Ceph Object
+ Gateways are restarted. If you have created a realm for multisite,
+ the zone/zonegroup changes will take effect once the changes are
+ committed with ``radosgw-admin period update --commit``.
+
+Adding a Placement Target
+-------------------------
+
+To create a new placement target named ``temporary``, start by adding it to
+the zonegroup:
+
+::
+
+ $ radosgw-admin zonegroup placement add \
+ --rgw-zonegroup default \
+ --placement-id temporary
+
+Then provide the zone placement info for that target:
+
+::
+
+ $ radosgw-admin zone placement add \
+ --rgw-zone default \
+ --placement-id temporary \
+ --data-pool default.rgw.temporary.data \
+ --index-pool default.rgw.temporary.index \
+ --data-extra-pool default.rgw.temporary.non-ec
+
+.. note:: With default placement target settings, RGW stores an object's first data chunk in the RADOS "head" object along
+ with xattr metadata. The ``--placement-inline-data=false`` flag may be passed with the ``zone placement add`` or
+ ``zone placement modify`` commands to change this behavior for new objects stored on the target.
+ When data is stored inline (default), it may provide an advantage for read/write workloads since the first chunk of
+ an object's data can be retrieved/stored in a single librados call along with object metadata. On the other hand, a
+ target that does not store data inline can provide a performance benefit for RGW client delete requests when
+ the BlueStore DB is located on faster storage than bucket data since it eliminates the need to access
+ slower devices synchronously while processing the client request. In that case, data associated with the deleted
+ objects is removed asynchronously in the background by garbage collection.
+
+.. _adding_a_storage_class:
+
+Adding a Storage Class
+----------------------
+
+To add a new storage class named ``GLACIER`` to the ``default-placement`` target,
+start by adding it to the zonegroup:
+
+::
+
+ $ radosgw-admin zonegroup placement add \
+ --rgw-zonegroup default \
+ --placement-id default-placement \
+ --storage-class GLACIER
+
+Then provide the zone placement info for that storage class:
+
+::
+
+ $ radosgw-admin zone placement add \
+ --rgw-zone default \
+ --placement-id default-placement \
+ --storage-class GLACIER \
+ --data-pool default.rgw.glacier.data \
+ --compression lz4
+
+Customizing Placement
+=====================
+
+Default Placement
+-----------------
+
+By default, new buckets will use the zonegroup's ``default_placement`` target.
+This zonegroup setting can be changed with:
+
+::
+
+ $ radosgw-admin zonegroup placement default \
+ --rgw-zonegroup default \
+ --placement-id new-placement
+
+User Placement
+--------------
+
+A Ceph Object Gateway user can override the zonegroup's default placement
+target by setting a non-empty ``default_placement`` field in the user info.
+Similarly, the ``default_storage_class`` can override the ``STANDARD``
+storage class applied to objects by default.
+
+::
+
+ $ radosgw-admin user info --uid testid
+ {
+ ...
+ "default_placement": "",
+ "default_storage_class": "",
+ "placement_tags": [],
+ ...
+ }
+
+If a zonegroup's placement target contains any ``tags``, users will be unable
+to create buckets with that placement target unless their user info contains
+at least one matching tag in its ``placement_tags`` field. This can be useful
+to restrict access to certain types of storage.
+
+The ``radosgw-admin`` command can modify these fields directly with:
+
+::
+
+ $ radosgw-admin user modify \
+ --uid <user-id> \
+ --placement-id <default-placement-id> \
+ --storage-class <default-storage-class> \
+ --tags <tag1,tag2>
+
+.. _s3_bucket_placement:
+
+S3 Bucket Placement
+-------------------
+
+When creating a bucket with the S3 protocol, a placement target can be
+provided as part of the LocationConstraint to override the default placement
+targets from the user and zonegroup.
+
+Normally, the LocationConstraint must match the zonegroup's ``api_name``:
+
+::
+
+ <LocationConstraint>default</LocationConstraint>
+
+A custom placement target can be added to the ``api_name`` following a colon:
+
+::
+
+ <LocationConstraint>default:new-placement</LocationConstraint>
+
+Swift Bucket Placement
+----------------------
+
+When creating a bucket with the Swift protocol, a placement target can be
+provided in the HTTP header ``X-Storage-Policy``:
+
+::
+
+ X-Storage-Policy: new-placement
+
+Using Storage Classes
+=====================
+
+All placement targets have a ``STANDARD`` storage class which is applied to
+new objects by default. The user can override this default with its
+``default_storage_class``.
+
+To create an object in a non-default storage class, provide that storage class
+name in an HTTP header with the request. The S3 protocol uses the
+``X-Amz-Storage-Class`` header, while the Swift protocol uses the
+``X-Object-Storage-Class`` header.
+
+When using AWS S3 SDKs such as ``boto3``, it is important that non-default
+storage class names match those provided by AWS S3, or else the SDK
+will drop the request and raise an exception.
+
+S3 Object Lifecycle Management can then be used to move object data between
+storage classes using ``Transition`` actions.
+
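+For example, a lifecycle rule that transitions objects to the ``GLACIER``
+storage class added above might look like the following sketch (the rule
+ID and the 30-day threshold are illustrative, not prescriptive):

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>archive-after-30-days</ID>
    <Filter><Prefix></Prefix></Filter>
    <Status>Enabled</Status>
    <Transition>
      <Days>30</Days>
      <StorageClass>GLACIER</StorageClass>
    </Transition>
  </Rule>
</LifecycleConfiguration>
```
+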
+.. _`Pools`: ../pools
+.. _`Multisite Configuration`: ../multisite
diff --git a/doc/radosgw/pools.rst b/doc/radosgw/pools.rst
new file mode 100644
index 000000000..bb1246c1f
--- /dev/null
+++ b/doc/radosgw/pools.rst
@@ -0,0 +1,57 @@
+=====
+Pools
+=====
+
+The Ceph Object Gateway uses several pools for its various storage needs,
+which are listed in the Zone object (see ``radosgw-admin zone get``). A
+single zone named ``default`` is created automatically with pool names
+starting with ``default.rgw.``, but a `Multisite Configuration`_ will have
+multiple zones.
+
+Tuning
+======
+
+When ``radosgw`` first tries to operate on a zone pool that does not
+exist, it will create that pool with the default values from
+``osd pool default pg num`` and ``osd pool default pgp num``. These defaults
+are sufficient for some pools, but others (especially those listed in
+``placement_pools`` for the bucket index and data) will require additional
+tuning. We recommend using the `Ceph Placement Group’s per Pool
+Calculator <https://old.ceph.com/pgcalc/>`__ to calculate a suitable number of
+placement groups for these pools. See
+`Pools <http://docs.ceph.com/en/latest/rados/operations/pools/#pools>`__
+for details on pool creation.
+
+.. _radosgw-pool-namespaces:
+
+Pool Namespaces
+===============
+
+.. versionadded:: Luminous
+
+Pool names particular to a zone follow the naming convention
+``{zone-name}.pool-name``. For example, a zone named ``us-east`` will
+have the following pools:
+
+- ``.rgw.root``
+
+- ``us-east.rgw.control``
+
+- ``us-east.rgw.meta``
+
+- ``us-east.rgw.log``
+
+- ``us-east.rgw.buckets.index``
+
+- ``us-east.rgw.buckets.data``
+
+The zone definitions list several more pools than that, but many of those
+are consolidated through the use of RADOS namespaces. For example, all of
+the following pool entries use namespaces of the ``us-east.rgw.meta`` pool::
+
+ "user_keys_pool": "us-east.rgw.meta:users.keys",
+ "user_email_pool": "us-east.rgw.meta:users.email",
+ "user_swift_pool": "us-east.rgw.meta:users.swift",
+ "user_uid_pool": "us-east.rgw.meta:users.uid",
+
+.. _`Multisite Configuration`: ../multisite
diff --git a/doc/radosgw/qat-accel.rst b/doc/radosgw/qat-accel.rst
new file mode 100644
index 000000000..b275e8a19
--- /dev/null
+++ b/doc/radosgw/qat-accel.rst
@@ -0,0 +1,155 @@
+===============================================
+QAT Acceleration for Encryption and Compression
+===============================================
+
+Intel QAT (QuickAssist Technology) can provide extended accelerated encryption
+and compression services by offloading the actual encryption and compression
+request(s) to the hardware QuickAssist accelerators, which are more efficient
+in terms of cost and power than general purpose CPUs for those specific
+compute-intensive workloads.
+
+See `QAT Support for Compression`_ and `QAT based Encryption for RGW`_.
+
+
+QAT in the Software Stack
+=========================
+
+Application developers can access QuickAssist features through the QAT API.
+The QAT API is the top-level API for QuickAssist technology, and enables easy
+interfacing between the customer application and the QuickAssist acceleration
+driver.
+
+The QAT API accesses the QuickAssist driver, which in turn drives the
+QuickAssist Accelerator hardware. The QuickAssist driver is responsible for
+exposing the acceleration services to the application software.
+
+Applications can call the QAT API directly, or use QAT via frameworks
+that others, including Intel, have enabled (for example, zlib*,
+OpenSSL* libcrypto*, and the Linux* Kernel Crypto Framework).
+
+QAT Environment Setup
+=====================
+1. QuickAssist Accelerator hardware is necessary to make use of accelerated
+   encryption and compression services, and the QAT driver must be loaded in
+   kernel space to drive the hardware.
+
+The driver package can be downloaded from `Intel Quickassist Technology`_.
+
+2. The implementation of QAT based encryption is based directly on the QAT API,
+   which is included in the driver package. QAT support for compression, however,
+   depends on the QATzip project, a user space library built on top of the QAT
+   API. At the time of writing, QATzip accelerates gzip compression and
+   decompression only.
+
+See `QATzip`_.
+
+Implementation
+==============
+1. QAT based Encryption for RGW
+
+`OpenSSL support for RGW encryption`_ has been merged into Ceph, and Intel also
+provides one `QAT Engine`_ for OpenSSL. So, theoretically speaking, QAT based
+encryption in Ceph can be directly supported through the OpenSSL+QAT Engine.
+
+But the QAT Engine for OpenSSL currently supports chained operations only, so
+Ceph is not able to utilize the QAT hardware features for crypto operations
+through the OpenSSL crypto plugin. As a result, a QAT plugin based on the
+native QAT API has been added to the crypto framework.
+
+2. QAT Support for Compression
+
+As mentioned above, QAT support for compression is based on QATzip library in
+user space, which is designed to take full advantage of the performance provided
+by QuickAssist Technology. Unlike QAT based encryption, QAT based compression
+is supported through a tool class for QAT acceleration rather than a compressor
+plugin. The common tool class can transparently accelerate the existing compression
+types, but only zlib compressor can be supported at the time of writing. So
+user is allowed to use it to speed up zlib compressor as long as the QAT
+hardware is available and QAT is capable to handle it.
+
+Configuration
+=============
+#. Prerequisites
+
+ Make sure the QAT driver with version v1.7.L.4.14.0 or higher has been installed.
+ Remember to set an environment variable "ICP_ROOT" for your QAT driver package
+ root directory.
+
+ To enable QAT based encryption and compression, the user needs to modify the QAT
+ configuration files. For example, for the Intel QuickAssist Adapter 8970, revise
+ ``c6xx_dev0.conf``, ``c6xx_dev1.conf`` and ``c6xx_dev2.conf`` in ``/etc/`` and keep them identical, e.g.:
+
+ .. code-block:: ini
+
+ #...
+ # User Process Instance Section
+ ##############################################
+ [CEPH]
+ NumberCyInstances = 1
+ NumberDcInstances = 1
+ NumProcesses = 8
+ LimitDevAccess = 1
+ # Crypto - User instance #0
+ Cy0Name = "SSL0"
+ Cy0IsPolled = 1
+ # List of core affinities
+ Cy0CoreAffinity = 0
+
+ # Data Compression - User instance #0
+ Dc0Name = "Dc0"
+ Dc0IsPolled = 1
+ # List of core affinities
+ Dc0CoreAffinity = 0
+
+#. QAT based Encryption for RGW
+
+ The CMake option ``WITH_QAT=ON`` must be configured. If you build Ceph from
+ source code (see: :ref:`build-ceph`), navigate to your cloned Ceph repository
+ and execute the following:
+
+ .. prompt:: bash $
+
+ cd ceph
+ ./do_cmake.sh -DWITH_QAT=ON
+ cd build
+ ninja
+
+ .. note::
+ The section name of the QAT configuration files must be ``CEPH`` since
+ the section name is set as "CEPH" in Ceph crypto source code.
+
+ Then, edit the Ceph configuration file to make use of QAT based crypto plugin::
+
+ plugin crypto accelerator = crypto_qat
+
+#. QAT Support for Compression
+
+ Before starting, make sure both QAT driver and `QATzip`_ have been installed. Besides
+ "ICP_ROOT", remember to set the environment variable "QZ_ROOT" for the root directory
+ of your QATzip source tree.
+
+ The following CMake options have to be configured to trigger QAT based compression
+ when building Ceph:
+
+ .. prompt:: bash $
+
+ ./do_cmake.sh -DWITH_QAT=ON -DWITH_QATZIP=ON
+
+ Then, set an environment variable to clarify the section name of User Process Instance
+ Section in QAT configuration files, e.g.:
+
+ .. prompt:: bash $
+
+ export QAT_SECTION_NAME=CEPH
+
+ Next, edit the Ceph configuration file to enable QAT support for compression::
+
+ qat compressor enabled=true
+
+
+.. _QAT Support for Compression: https://github.com/ceph/ceph/pull/19714
+.. _QAT based Encryption for RGW: https://github.com/ceph/ceph/pull/19386
+.. _Intel Quickassist Technology: https://01.org/intel-quickassist-technology
+.. _QATzip: https://github.com/intel/QATzip
+.. _OpenSSL support for RGW encryption: https://github.com/ceph/ceph/pull/15168
+.. _QAT Engine: https://github.com/intel/QAT_Engine
diff --git a/doc/radosgw/rgw-cache.rst b/doc/radosgw/rgw-cache.rst
new file mode 100644
index 000000000..116db8ed4
--- /dev/null
+++ b/doc/radosgw/rgw-cache.rst
@@ -0,0 +1,155 @@
+==========================
+RGW Data Caching and CDN
+==========================
+
+.. versionadded:: Octopus
+
+.. contents::
+
+This feature uses Nginx to securely cache objects and to offload work from the Ceph Object Gateway cluster.
+After an object is accessed for the first time, it is stored in the Nginx cache directory.
+When data is already cached, it need not be fetched from RGW; only a permission check is made against RGW to ensure that the requesting user has access.
+This feature is based on several Nginx modules: ngx_http_auth_request_module, the `nginx-aws-auth-module <https://github.com/kaltura/nginx-aws-auth-module>`_, and Openresty (for its Lua capabilities).
+
+Currently, this feature caches only AWSv4 S3 requests: the response to the first GET request is cached,
+subsequent GET requests are served from the cache, and PUT, POST, HEAD, DELETE and COPY requests are passed through to RGW transparently.
+
+
+The feature introduces 2 new APIs: Auth and Cache.
+
+.. note:: The `D3N RGW Data Cache`_ is an alternative data caching mechanism implemented natively in the Rados Gateway.
+
+New APIs
+-------------------------
+
+There are two new APIs for this feature:
+
+Auth API - The cache uses this API to validate that a user may access the cached data.
+
+Cache API - Allows the Range header to be overridden securely, so that Nginx can use its own smart caching on top of S3:
+https://www.nginx.com/blog/smart-efficient-byte-range-caching-nginx/
+This API makes it possible to read ahead in an object when a client requests a specific range of it.
+On subsequent accesses to the cached object, Nginx satisfies requests for already-cached ranges from the cache; uncached ranges are read from RGW (and then cached).
+
+Auth API
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This API validates that a specific authenticated request to the cache is permitted, using RGW's knowledge of the client credentials and the stored access policy.
+It returns success if the encapsulated request would be granted.
+
+Cache API
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This API allows a privileged user, the cache user, to change signed Range headers.
+
+Create the cache user as follows::
+
+   $ radosgw-admin user create --uid=<uid for cache user> --display-name="cache user" --caps="amz-cache=read"
+
+This user can send the Cache API header ``X-Amz-Cache`` to RGW. This header contains the headers of the original request (that is, from before the Range header was changed),
+which means that ``X-Amz-Cache`` is built from several headers.
+The headers that make up ``X-Amz-Cache`` are separated by the character with ASCII code 177, and each header name is separated from its value by the character with ASCII code 178.
+RGW checks that the requesting user is authorized and is a cache user.
+If so, it uses the headers carried in ``X-Amz-Cache`` to revalidate that the original user has permission to access the object.
+During this flow, RGW overrides the Range header.
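For illustration, the encoding described above can be sketched in Python. This is an illustrative sketch only: the header names and values below are hypothetical, and the exact set of headers RGW packs into ``X-Amz-Cache`` is determined by the original request.

```python
# Sketch of the X-Amz-Cache encoding described above (illustrative only).
FIELD_SEP = chr(177)  # separates one packed header from the next
KV_SEP = chr(178)     # separates a header name from its value


def build_x_amz_cache(original_headers):
    """Pack the original request headers into a single X-Amz-Cache value."""
    return FIELD_SEP.join(
        f"{name}{KV_SEP}{value}" for name, value in original_headers.items()
    )


# Hypothetical original request headers (before the Range header is changed):
value = build_x_amz_cache({"Host": "rgw1:8000", "Range": "bytes=0-4095"})
```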
+
+
+Using Nginx with RGW
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Download the Openresty source::
+
+   $ wget https://openresty.org/download/openresty-1.15.8.3.tar.gz
+
+Clone the AWS auth Nginx module::
+
+   $ git clone https://github.com/kaltura/nginx-aws-auth-module
+
+Untar the Openresty package::
+
+   $ tar xvzf openresty-1.15.8.3.tar.gz
+   $ cd openresty-1.15.8.3
+
+Compile Openresty. Make sure that the pcre and openssl development libraries are installed::
+
+   $ sudo yum install pcre-devel openssl-devel gcc curl zlib-devel nginx
+   $ ./configure --add-module=<the nginx-aws-auth-module dir> --with-http_auth_request_module --with-http_slice_module --conf-path=/etc/nginx/nginx.conf
+   $ gmake -j $(nproc)
+   $ sudo gmake install
+   $ sudo ln -sf /usr/local/openresty/bin/openresty /usr/bin/nginx
+
+Put in-place your Nginx configuration files and edit them according to your environment:
+
+All Nginx conf files are under: https://github.com/ceph/ceph/tree/main/examples/rgw/rgw-cache
+
+`nginx.conf` should go to `/etc/nginx/nginx.conf`
+
+`nginx-lua-file.lua` should go to `/etc/nginx/nginx-lua-file.lua`
+
+`nginx-default.conf` should go to `/etc/nginx/conf.d/nginx-default.conf`
+
+The parameters that are most likely to require adjustment according to your environment are located in the file `nginx-default.conf`.
+
+Modify the example values of *proxy_cache_path* and *max_size* at:
+
+::
+
+ proxy_cache_path /data/cache levels=2:2:2 keys_zone=mycache:999m max_size=20G inactive=1d use_temp_path=off;
+
+
+And modify the example *server* values to point to the RGWs URIs:
+
+::
+
+ server rgw1:8000 max_fails=2 fail_timeout=5s;
+ server rgw2:8000 max_fails=2 fail_timeout=5s;
+ server rgw3:8000 max_fails=2 fail_timeout=5s;
+
+It is important to substitute the *access key* and *secret key* in `nginx.conf` with those belonging to a user with the `amz-cache` caps.
+For example, create the `cache` user as follows::
+
+   radosgw-admin user create --uid=cacheuser --display-name="cache user" --caps="amz-cache=read" --access-key <access> --secret <secret>
+
+It is possible to use Nginx slicing, which is a better method for streaming purposes.
+
+To use slicing, use `nginx-slicing.conf` instead of `nginx-default.conf`.
+
+Further information about Nginx slicing:
+
+https://docs.nginx.com/nginx/admin-guide/content-cache/content-caching/#byte-range-caching
+
+
+If you do not want to use prefetch caching, you can replace `nginx-default.conf` with `nginx-noprefetch.conf`.
+Using `noprefetch` means that if a client sends a range request for 0-4095 and then for 0-4096, Nginx caches those requests separately and must fetch the data twice.
+
+
+Run Nginx (Openresty)::
+
+   $ sudo systemctl restart nginx
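Once Nginx is running, any SigV4-capable S3 client can be pointed at the Nginx endpoint instead of directly at RGW. The following is a minimal sketch using boto3; the endpoint, bucket, and credentials are placeholders for your environment.

```python
def get_via_cache(bucket, key, endpoint="http://nginx-host:80",
                  access_key="<access>", secret_key="<secret>"):
    """Fetch an object through the Nginx cache tier instead of RGW directly.

    The first GET populates the Nginx cache; subsequent GETs for the same
    object (or range) are served from the cache after RGW authorizes them.
    """
    import boto3  # any SigV4-capable S3 client works here

    s3 = boto3.client(
        "s3",
        endpoint_url=endpoint,  # the Nginx front end, not RGW
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
    )
    return s3.get_object(Bucket=bucket, Key=key)
```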
+
+Appendix
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+**A note about performance:** In certain cases, such as a development environment, disabling authentication by commenting out the following line in `nginx-default.conf`::
+
+   #auth_request /authentication;
+
+may (depending on the hardware) increase performance significantly, as it forgoes the auth API calls to radosgw.
+
+
+.. _D3N RGW Data Cache: ../d3n_datacache/
diff --git a/doc/radosgw/role.rst b/doc/radosgw/role.rst
new file mode 100644
index 000000000..e97449872
--- /dev/null
+++ b/doc/radosgw/role.rst
@@ -0,0 +1,570 @@
+======
+ Role
+======
+
+A role is similar to a user. It has permission policies attached to it that determine what it can and cannot do. A role can be assumed by any identity that needs it. When a user assumes a role, a set of dynamically created temporary credentials is returned to the user. A role can be used to delegate access to users, applications and services that do not have permissions to access certain S3 resources.
+
+The following radosgw-admin commands can be used to create, delete, or update a role and the permissions associated with it.
+
+Create a Role
+-------------
+
+To create a role, execute the following::
+
+   radosgw-admin role create --role-name={role-name} [--path="{path to the role}"] [--assume-role-policy-doc={trust-policy-document}]
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``role-name``
+
+:Description: Name of the role.
+:Type: String
+
+``path``
+
+:Description: Path to the role. The default value is a slash (/).
+:Type: String
+
+``assume-role-policy-doc``
+
+:Description: The trust relationship policy document that grants an entity permission to assume the role.
+:Type: String
+
+For example::
+
+ radosgw-admin role create --role-name=S3Access1 --path=/application_abc/component_xyz/ --assume-role-policy-doc=\{\"Version\":\"2012-10-17\",\"Statement\":\[\{\"Effect\":\"Allow\",\"Principal\":\{\"AWS\":\[\"arn:aws:iam:::user/TESTER\"\]\},\"Action\":\[\"sts:AssumeRole\"\]\}\]\}
+
+.. code-block:: javascript
+
+ {
+ "id": "ca43045c-082c-491a-8af1-2eebca13deec",
+ "name": "S3Access1",
+ "path": "/application_abc/component_xyz/",
+ "arn": "arn:aws:iam:::role/application_abc/component_xyz/S3Access1",
+ "create_date": "2018-10-17T10:18:29.116Z",
+ "max_session_duration": 3600,
+ "assume_role_policy_document": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER\"]},\"Action\":[\"sts:AssumeRole\"]}]}"
+ }
+
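Because the trust policy must be passed as a single shell argument, the backslash escaping shown above is error-prone. A small sketch of generating the same document programmatically (the principal ARN matches the example above):

```python
import json

# Build the trust policy as a Python structure, then serialize it once;
# this avoids hand-escaping quotes and braces on the command line.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": ["arn:aws:iam:::user/TESTER"]},
            "Action": ["sts:AssumeRole"],
        }
    ],
}
doc = json.dumps(trust_policy, separators=(",", ":"))
# Pass `doc` to --assume-role-policy-doc, quoting it once for the shell.
```

The resulting string round-trips through ``json.loads`` and matches the ``assume_role_policy_document`` value shown in the output above.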
+
+Delete a Role
+-------------
+
+To delete a role, execute the following::
+
+ radosgw-admin role delete --role-name={role-name}
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``role-name``
+
+:Description: Name of the role.
+:Type: String
+
+For example::
+
+ radosgw-admin role delete --role-name=S3Access1
+
+Note: A role can be deleted only when it doesn't have any permission policy attached to it.
+
+Get a Role
+----------
+
+To get information about a role, execute the following::
+
+ radosgw-admin role get --role-name={role-name}
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``role-name``
+
+:Description: Name of the role.
+:Type: String
+
+For example::
+
+ radosgw-admin role get --role-name=S3Access1
+
+.. code-block:: javascript
+
+ {
+ "id": "ca43045c-082c-491a-8af1-2eebca13deec",
+ "name": "S3Access1",
+ "path": "/application_abc/component_xyz/",
+ "arn": "arn:aws:iam:::role/application_abc/component_xyz/S3Access1",
+ "create_date": "2018-10-17T10:18:29.116Z",
+ "max_session_duration": 3600,
+ "assume_role_policy_document": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER\"]},\"Action\":[\"sts:AssumeRole\"]}]}"
+ }
+
+
+List Roles
+----------
+
+To list roles with a specified path prefix, execute the following::
+
+   radosgw-admin role list [--path-prefix={path prefix}]
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``path-prefix``
+
+:Description: Path prefix for filtering roles. If this is not specified, all roles are listed.
+:Type: String
+
+For example::
+
+ radosgw-admin role list --path-prefix="/application"
+
+.. code-block:: javascript
+
+ [
+ {
+ "id": "3e1c0ff7-8f2b-456c-8fdf-20f428ba6a7f",
+ "name": "S3Access1",
+ "path": "/application_abc/component_xyz/",
+ "arn": "arn:aws:iam:::role/application_abc/component_xyz/S3Access1",
+ "create_date": "2018-10-17T10:32:01.881Z",
+ "max_session_duration": 3600,
+ "assume_role_policy_document": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER\"]},\"Action\":[\"sts:AssumeRole\"]}]}"
+ }
+ ]
+
+
+Update Assume Role Policy Document of a role
+--------------------------------------------
+
+To modify a role's assume role policy document, execute the following::
+
+ radosgw-admin role-trust-policy modify --role-name={role-name} --assume-role-policy-doc={trust-policy-document}
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``role-name``
+
+:Description: Name of the role.
+:Type: String
+
+``assume-role-policy-doc``
+
+:Description: The trust relationship policy document that grants an entity permission to assume the role.
+:Type: String
+
+For example::
+
+ radosgw-admin role-trust-policy modify --role-name=S3Access1 --assume-role-policy-doc=\{\"Version\":\"2012-10-17\",\"Statement\":\[\{\"Effect\":\"Allow\",\"Principal\":\{\"AWS\":\[\"arn:aws:iam:::user/TESTER2\"\]\},\"Action\":\[\"sts:AssumeRole\"\]\}\]\}
+
+.. code-block:: javascript
+
+ {
+ "id": "ca43045c-082c-491a-8af1-2eebca13deec",
+ "name": "S3Access1",
+ "path": "/application_abc/component_xyz/",
+ "arn": "arn:aws:iam:::role/application_abc/component_xyz/S3Access1",
+ "create_date": "2018-10-17T10:18:29.116Z",
+ "max_session_duration": 3600,
+ "assume_role_policy_document": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER2\"]},\"Action\":[\"sts:AssumeRole\"]}]}"
+ }
+
+
+In the above example, we are modifying the Principal from TESTER to TESTER2 in its assume role policy document.
+
+Add/ Update a Policy attached to a Role
+---------------------------------------
+
+To add or update the inline policy attached to a role, execute the following::
+
+ radosgw-admin role policy put --role-name={role-name} --policy-name={policy-name} --policy-doc={permission-policy-doc}
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``role-name``
+
+:Description: Name of the role.
+:Type: String
+
+``policy-name``
+
+:Description: Name of the policy.
+:Type: String
+
+``policy-doc``
+
+:Description: The Permission policy document.
+:Type: String
+
+For example::
+
+ radosgw-admin role-policy put --role-name=S3Access1 --policy-name=Policy1 --policy-doc=\{\"Version\":\"2012-10-17\",\"Statement\":\[\{\"Effect\":\"Allow\",\"Action\":\[\"s3:*\"\],\"Resource\":\"arn:aws:s3:::example_bucket\"\}\]\}
+
+For passing ``policy-doc`` as a file::
+
+ radosgw-admin role-policy put --role-name=S3Access1 --policy-name=Policy1 --infile policy-document.json
+
+In the above example, we are attaching a policy 'Policy1' to role 'S3Access1', which allows all s3 actions on 'example_bucket'.
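The ``--infile`` form expects the raw JSON document in the file. A sketch of producing ``policy-document.json`` (the policy matches the example above):

```python
import json

# Write the permission policy from the example above to a file suitable
# for `radosgw-admin role-policy put ... --infile policy-document.json`.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:*"],
            "Resource": "arn:aws:s3:::example_bucket",
        }
    ],
}
with open("policy-document.json", "w") as f:
    json.dump(policy, f)
```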
+
+List Permission Policy Names attached to a Role
+-----------------------------------------------
+
+To list the names of the permission policies attached to a role, execute the following::
+
+   radosgw-admin role policy list --role-name={role-name}
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``role-name``
+
+:Description: Name of the role.
+:Type: String
+
+For example::
+
+ radosgw-admin role-policy list --role-name=S3Access1
+
+.. code-block:: javascript
+
+ [
+ "Policy1"
+ ]
+
+
+Get Permission Policy attached to a Role
+----------------------------------------
+
+To get a specific permission policy attached to a role, execute the following::
+
+ radosgw-admin role policy get --role-name={role-name} --policy-name={policy-name}
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``role-name``
+
+:Description: Name of the role.
+:Type: String
+
+``policy-name``
+
+:Description: Name of the policy.
+:Type: String
+
+For example::
+
+ radosgw-admin role-policy get --role-name=S3Access1 --policy-name=Policy1
+
+.. code-block:: javascript
+
+ {
+ "Permission policy": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Action\":[\"s3:*\"],\"Resource\":\"arn:aws:s3:::example_bucket\"}]}"
+ }
+
+
+Delete Policy attached to a Role
+--------------------------------
+
+To delete a permission policy attached to a role, execute the following::
+
+ radosgw-admin role policy delete --role-name={role-name} --policy-name={policy-name}
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``role-name``
+
+:Description: Name of the role.
+:Type: String
+
+``policy-name``
+
+:Description: Name of the policy.
+:Type: String
+
+For example::
+
+ radosgw-admin role-policy delete --role-name=S3Access1 --policy-name=Policy1
+
+
+Update a role
+-------------
+
+To update a role's max-session-duration, execute the following::
+
+ radosgw-admin role update --role-name={role-name} --max-session-duration={max-session-duration}
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``role-name``
+
+:Description: Name of the role.
+:Type: String
+
+``max-session-duration``
+
+:Description: Maximum session duration for a role.
+:Type: String
+
+For example::
+
+ radosgw-admin role update --role-name=S3Access1 --max-session-duration=43200
+
+Note: This command currently can only be used to update max-session-duration.
+
+REST APIs for Manipulating a Role
+=================================
+
+In addition to the above radosgw-admin commands, the following REST APIs can be used for manipulating a role. For the request parameters and their explanations, refer to the sections above.
+
+In order to invoke the REST admin APIs, a user with admin caps must first be created::
+
+   radosgw-admin --uid TESTER --display-name "TestUser" --access_key TESTER --secret test123 user create
+   radosgw-admin caps add --uid="TESTER" --caps="roles=*"
+
+
+Create a Role
+-------------
+
+Example::
+
+   POST "<hostname>?Action=CreateRole&RoleName=S3Access&Path=/application_abc/component_xyz/&AssumeRolePolicyDocument=\{\"Version\":\"2012-10-17\",\"Statement\":\[\{\"Effect\":\"Allow\",\"Principal\":\{\"AWS\":\[\"arn:aws:iam:::user/TESTER\"\]\},\"Action\":\[\"sts:AssumeRole\"\]\}\]\}"
+
+.. code-block:: XML
+
+ <role>
+ <id>8f41f4e0-7094-4dc0-ac20-074a881ccbc5</id>
+ <name>S3Access</name>
+ <path>/application_abc/component_xyz/</path>
+ <arn>arn:aws:iam:::role/application_abc/component_xyz/S3Access</arn>
+ <create_date>2018-10-23T07:43:42.811Z</create_date>
+ <max_session_duration>3600</max_session_duration>
+ <assume_role_policy_document>{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":["arn:aws:iam:::user/TESTER"]},"Action":["sts:AssumeRole"]}]}</assume_role_policy_document>
+ </role>
+
+
+Delete a Role
+-------------
+
+Example::
+
+   POST "<hostname>?Action=DeleteRole&RoleName=S3Access"
+
+Note: A role can be deleted only when it doesn't have any permission policy attached to it.
+
+Get a Role
+----------
+
+Example::
+
+   POST "<hostname>?Action=GetRole&RoleName=S3Access"
+
+.. code-block:: XML
+
+ <role>
+ <id>8f41f4e0-7094-4dc0-ac20-074a881ccbc5</id>
+ <name>S3Access</name>
+ <path>/application_abc/component_xyz/</path>
+ <arn>arn:aws:iam:::role/application_abc/component_xyz/S3Access</arn>
+ <create_date>2018-10-23T07:43:42.811Z</create_date>
+ <max_session_duration>3600</max_session_duration>
+ <assume_role_policy_document>{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":["arn:aws:iam:::user/TESTER"]},"Action":["sts:AssumeRole"]}]}</assume_role_policy_document>
+ </role>
+
+
+List Roles
+----------
+
+Example::
+
+   POST "<hostname>?Action=ListRoles&PathPrefix=/application"
+
+.. code-block:: XML
+
+ <role>
+ <id>8f41f4e0-7094-4dc0-ac20-074a881ccbc5</id>
+ <name>S3Access</name>
+ <path>/application_abc/component_xyz/</path>
+ <arn>arn:aws:iam:::role/application_abc/component_xyz/S3Access</arn>
+ <create_date>2018-10-23T07:43:42.811Z</create_date>
+ <max_session_duration>3600</max_session_duration>
+ <assume_role_policy_document>{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":["arn:aws:iam:::user/TESTER"]},"Action":["sts:AssumeRole"]}]}</assume_role_policy_document>
+ </role>
+
+
+Update Assume Role Policy Document
+----------------------------------
+
+Example::
+
+   POST "<hostname>?Action=UpdateAssumeRolePolicy&RoleName=S3Access&PolicyDocument=\{\"Version\":\"2012-10-17\",\"Statement\":\[\{\"Effect\":\"Allow\",\"Principal\":\{\"AWS\":\[\"arn:aws:iam:::user/TESTER2\"\]\},\"Action\":\[\"sts:AssumeRole\"\]\}\]\}"
+
+Add/ Update a Policy attached to a Role
+---------------------------------------
+
+Example::
+
+   POST "<hostname>?Action=PutRolePolicy&RoleName=S3Access&PolicyName=Policy1&PolicyDocument=\{\"Version\":\"2012-10-17\",\"Statement\":\[\{\"Effect\":\"Allow\",\"Action\":\[\"s3:CreateBucket\"\],\"Resource\":\"arn:aws:s3:::example_bucket\"\}\]\}"
+
+List Permission Policy Names attached to a Role
+-----------------------------------------------
+
+Example::
+
+   POST "<hostname>?Action=ListRolePolicies&RoleName=S3Access"
+
+.. code-block:: XML
+
+ <PolicyNames>
+ <member>Policy1</member>
+ </PolicyNames>
+
+
+Get Permission Policy attached to a Role
+----------------------------------------
+
+Example::
+
+   POST "<hostname>?Action=GetRolePolicy&RoleName=S3Access&PolicyName=Policy1"
+
+.. code-block:: XML
+
+ <GetRolePolicyResult>
+ <PolicyName>Policy1</PolicyName>
+ <RoleName>S3Access</RoleName>
+ <Permission_policy>{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["s3:CreateBucket"],"Resource":"arn:aws:s3:::example_bucket"}]}</Permission_policy>
+ </GetRolePolicyResult>
+
+
+Delete Policy attached to a Role
+--------------------------------
+
+Example::
+
+   POST "<hostname>?Action=DeleteRolePolicy&RoleName=S3Access&PolicyName=Policy1"
+
+Tag a role
+----------
+A role can have multivalued tags attached to it. These tags can also be passed in as part of the CreateRole REST API.
+Note that AWS does not support multivalued role tags.
+
+Example::
+
+   POST "<hostname>?Action=TagRole&RoleName=S3Access&Tags.member.1.Key=Department&Tags.member.1.Value=Engineering"
+
+.. code-block:: XML
+
+ <TagRoleResponse>
+ <ResponseMetadata>
+ <RequestId>tx000000000000000000004-00611f337e-1027-default</RequestId>
+ </ResponseMetadata>
+ </TagRoleResponse>
+
+
+List role tags
+--------------
+Lists the tags attached to a role.
+
+Example::
+
+   POST "<hostname>?Action=ListRoleTags&RoleName=S3Access"
+
+.. code-block:: XML
+
+ <ListRoleTagsResponse>
+ <ListRoleTagsResult>
+ <Tags>
+ <member>
+ <Key>Department</Key>
+ <Value>Engineering</Value>
+ </member>
+ </Tags>
+ </ListRoleTagsResult>
+ <ResponseMetadata>
+ <RequestId>tx000000000000000000005-00611f337e-1027-default</RequestId>
+ </ResponseMetadata>
+ </ListRoleTagsResponse>
+
+Delete role tags
+----------------
+Deletes one or more tags attached to a role.
+
+Example::
+
+   POST "<hostname>?Action=UntagRole&RoleName=S3Access&TagKeys.member.1=Department"
+
+.. code-block:: XML
+
+ <UntagRoleResponse>
+ <ResponseMetadata>
+ <RequestId>tx000000000000000000007-00611f337e-1027-default</RequestId>
+ </ResponseMetadata>
+ </UntagRoleResponse>
+
+Update Role
+-----------
+
+Example::
+
+   POST "<hostname>?Action=UpdateRole&RoleName=S3Access&MaxSessionDuration=43200"
+
+.. code-block:: XML
+
+ <UpdateRoleResponse>
+ <UpdateRoleResult>
+ <ResponseMetadata>
+ <RequestId>tx000000000000000000007-00611f337e-1027-default</RequestId>
+ </ResponseMetadata>
+ </UpdateRoleResult>
+ </UpdateRoleResponse>
+
+Note: This API currently can only be used to update max-session-duration.
+
+Sample code for tagging, listing tags and untagging a role
+----------------------------------------------------------
+
+The following is sample code for adding tags to a role, listing its tags, and untagging it, using boto3.
+
+.. code-block:: python
+
+ import boto3
+
+ access_key = 'TESTER'
+ secret_key = 'test123'
+
+ iam_client = boto3.client('iam',
+ aws_access_key_id=access_key,
+ aws_secret_access_key=secret_key,
+ endpoint_url='http://s3.us-east.localhost:8000',
+ region_name=''
+ )
+
+ policy_document = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Federated\":[\"arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/quickstart\"]},\"Action\":[\"sts:AssumeRoleWithWebIdentity\"],\"Condition\":{\"StringEquals\":{\"localhost:8080/auth/realms/quickstart:sub\":\"user1\"}}}]}"
+
+ print ("\n Creating Role with tags\n")
+ tags_list = [
+ {'Key':'Department','Value':'Engineering'}
+ ]
+ role_response = iam_client.create_role(
+ AssumeRolePolicyDocument=policy_document,
+ Path='/',
+ RoleName='S3Access',
+ Tags=tags_list,
+ )
+
+ print ("Adding tags to role\n")
+ response = iam_client.tag_role(
+ RoleName='S3Access',
+ Tags= [
+ {'Key':'CostCenter','Value':'123456'}
+ ]
+ )
+ print ("Listing role tags\n")
+ response = iam_client.list_role_tags(
+ RoleName='S3Access'
+ )
+ print (response)
+ print ("Untagging role\n")
+ response = iam_client.untag_role(
+ RoleName='S3Access',
+ TagKeys=[
+ 'Department',
+ ]
+ )
+
+
+
diff --git a/doc/radosgw/s3-notification-compatibility.rst b/doc/radosgw/s3-notification-compatibility.rst
new file mode 100644
index 000000000..1627ed0c4
--- /dev/null
+++ b/doc/radosgw/s3-notification-compatibility.rst
@@ -0,0 +1,149 @@
+=====================================
+S3 Bucket Notifications Compatibility
+=====================================
+
+Ceph's `Bucket Notifications`_ API follows `AWS S3 Bucket Notifications API`_. However, some differences exist, as listed below.
+
+
+.. note::
+
+   Compatibility varies depending on which mechanism is used.
+
+Supported Destination
+---------------------
+
+AWS supports **SNS**, **SQS** and **Lambda** as possible destinations (AWS internal destinations).
+Currently, we support **HTTP/S**, **Kafka** and **AMQP** as destinations.
+
+We use **SNS** ARNs to represent the **HTTP/S**, **Kafka** and **AMQP** destinations.
+
+Notification Configuration XML
+------------------------------
+
+The following tags (and the tags nested inside them) are not supported:
+
++-----------------------------------+----------------------------------------------+
+| Tag | Remaks |
++===================================+==============================================+
+| ``<QueueConfiguration>`` | not needed, we treat all destinations as SNS |
++-----------------------------------+----------------------------------------------+
+| ``<CloudFunctionConfiguration>`` | not needed, we treat all destinations as SNS |
++-----------------------------------+----------------------------------------------+
+
+REST API Extension
+------------------
+
+Ceph's bucket notification API has the following extensions:
+
+- Deletion of a specific notification, or all notifications on a bucket, using the ``DELETE`` verb
+
+ - In S3, all notifications are deleted when the bucket is deleted, or when an empty notification is set on the bucket
+
+- Getting the information on a specific notification (when more than one exists on a bucket)
+
+ - In S3, it is only possible to fetch all notifications on a bucket
+
+- In addition to filtering based on prefix/suffix of object keys we support:
+
+ - Filtering based on regular expression matching
+
+ - Filtering based on metadata attributes attached to the object
+
+ - Filtering based on object tags
+
+- Each of the additional filters extends the S3 API, so using one requires an extension of the client SDK (unless you are using plain HTTP).
+
+- Overlapping filters are allowed, so that the same event may be sent as part of more than one notification.
+
+
+Unsupported Fields in the Event Record
+--------------------------------------
+
+The records sent for bucket notifications follow the format described in `Event Message Structure`_.
+However, the ``requestParameters.sourceIPAddress`` field is sent empty.
+
+
+Event Types
+-----------
+
++--------------------------------------------------------+-----------------------------------------+
+| Event | Note |
++========================================================+=========================================+
+| ``s3:ObjectCreated:*`` | Supported |
++--------------------------------------------------------+-----------------------------------------+
+| ``s3:ObjectCreated:Put`` | Supported |
++--------------------------------------------------------+-----------------------------------------+
+| ``s3:ObjectCreated:Post`` | Supported |
++--------------------------------------------------------+-----------------------------------------+
+| ``s3:ObjectCreated:Copy`` | Supported |
++--------------------------------------------------------+-----------------------------------------+
+| ``s3:ObjectCreated:CompleteMultipartUpload`` | Supported |
++--------------------------------------------------------+-----------------------------------------+
+| ``s3:ObjectRemoved:*`` | Supported |
++--------------------------------------------------------+-----------------------------------------+
+| ``s3:ObjectRemoved:Delete`` | Supported |
++--------------------------------------------------------+-----------------------------------------+
+| ``s3:ObjectRemoved:DeleteMarkerCreated`` | Supported |
++--------------------------------------------------------+-----------------------------------------+
+| ``s3:ObjectLifecycle:Expiration:Current`` | Ceph extension |
++--------------------------------------------------------+-----------------------------------------+
+| ``s3:ObjectLifecycle:Expiration:NonCurrent`` | Ceph extension |
++--------------------------------------------------------+-----------------------------------------+
+| ``s3:ObjectLifecycle:Expiration:DeleteMarker`` | Ceph extension |
++--------------------------------------------------------+-----------------------------------------+
+| ``s3:ObjectLifecycle:Expiration:AbortMultipartUpload`` | Defined, Ceph extension (not generated) |
++--------------------------------------------------------+-----------------------------------------+
+| ``s3:ObjectLifecycle:Transition:Current`` | Ceph extension |
++--------------------------------------------------------+-----------------------------------------+
+| ``s3:ObjectLifecycle:Transition:NonCurrent`` | Ceph extension |
++--------------------------------------------------------+-----------------------------------------+
+| ``s3:ObjectSynced:*`` | Ceph extension |
++--------------------------------------------------------+-----------------------------------------+
+| ``s3:ObjectSynced:Create`` | Ceph Extension |
++--------------------------------------------------------+-----------------------------------------+
+| ``s3:ObjectSynced:Delete`` | Defined, Ceph extension (not generated) |
++--------------------------------------------------------+-----------------------------------------+
+| ``s3:ObjectSynced:DeletionMarkerCreated`` | Defined, Ceph extension (not generated) |
++--------------------------------------------------------+-----------------------------------------+
+| ``s3:ObjectRestore:Post`` | Not applicable |
++--------------------------------------------------------+-----------------------------------------+
+| ``s3:ObjectRestore:Complete`` | Not applicable |
++--------------------------------------------------------+-----------------------------------------+
+| ``s3:ReducedRedundancyLostObject`` | Not applicable |
++--------------------------------------------------------+-----------------------------------------+
+
+.. note::
+
+ The ``s3:ObjectRemoved:DeleteMarkerCreated`` event presents information on the latest version of the object
+
+.. note::
+
+ In case of multipart upload, an ``ObjectCreated:CompleteMultipartUpload`` notification will be sent at the end of the process.
+
+.. note::
+
+ The ``s3:ObjectSynced:Create`` event is sent when an object successfully syncs to a zone. It must be explicitly set for each zone.
+
+Topic Configuration
+-------------------
+In the case of bucket notifications, the topic management API is derived from the `AWS Simple Notification Service API`_.
+Note that most of that API is not applicable to Ceph; only the following actions are implemented:
+
+ - ``CreateTopic``
+ - ``DeleteTopic``
+ - ``ListTopics``
+
+We also have the following extensions to topic configuration:
+
+ - In ``GetTopic`` we allow fetching a specific topic, instead of all user topics
+ - In ``CreateTopic``
+
+ - we allow setting endpoint attributes
+ - we allow setting opaque data that will be sent to the endpoint in the notification
+
+
+.. _AWS Simple Notification Service API: https://docs.aws.amazon.com/sns/latest/api/API_Operations.html
+.. _AWS S3 Bucket Notifications API: https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
+.. _Event Message Structure: https://docs.aws.amazon.com/AmazonS3/latest/dev/notification-content-structure.html
+.. _`Bucket Notifications`: ../notifications
+.. _`boto3 SDK filter extensions`: https://github.com/ceph/ceph/tree/main/examples/rgw/boto3
diff --git a/doc/radosgw/s3.rst b/doc/radosgw/s3.rst
new file mode 100644
index 000000000..cb5eb3adb
--- /dev/null
+++ b/doc/radosgw/s3.rst
@@ -0,0 +1,98 @@
+.. _radosgw s3:
+
+============================
+ Ceph Object Gateway S3 API
+============================
+
+Ceph supports a RESTful API that is compatible with the basic data access model of the `Amazon S3 API`_.
+
+API
+---
+
+.. toctree::
+ :maxdepth: 1
+
+ Common <s3/commons>
+ Authentication <s3/authentication>
+ Service Ops <s3/serviceops>
+ Bucket Ops <s3/bucketops>
+ Object Ops <s3/objectops>
+ C++ <s3/cpp>
+ C# <s3/csharp>
+ Java <s3/java>
+ Perl <s3/perl>
+ PHP <s3/php>
+ Python <s3/python>
+ Ruby <s3/ruby>
+
+
+Features Support
+----------------
+
+The following table describes the support status for current Amazon S3 functional features:
+
++---------------------------------+-----------------+----------------------------------------+
+| Feature | Status | Remarks |
++=================================+=================+========================================+
+| **List Buckets** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Delete Bucket** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Create Bucket** | Supported | Different set of canned ACLs |
++---------------------------------+-----------------+----------------------------------------+
+| **Bucket Lifecycle** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Bucket Replication** | Partial | Permitted only across zones |
++---------------------------------+-----------------+----------------------------------------+
+| **Policy (Buckets, Objects)** | Supported | ACLs & bucket policies are supported |
++---------------------------------+-----------------+----------------------------------------+
+| **Bucket Website** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Bucket ACLs (Get, Put)** | Supported | Different set of canned ACLs |
++---------------------------------+-----------------+----------------------------------------+
+| **Bucket Location** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Bucket Notification** | Supported | See `S3 Notification Compatibility`_ |
++---------------------------------+-----------------+----------------------------------------+
+| **Bucket Object Versions** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Get Bucket Info (HEAD)** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Bucket Request Payment** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Put Object** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Delete Object** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Get Object** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Object ACLs (Get, Put)** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Get Object Info (HEAD)** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **POST Object** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Copy Object** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Multipart Uploads** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Object Tagging** | Supported | See :ref:`tag_policy` for Policy verbs |
++---------------------------------+-----------------+----------------------------------------+
+| **Bucket Tagging** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Storage Class** | Supported | See :ref:`storage_classes` |
++---------------------------------+-----------------+----------------------------------------+
+
+Unsupported Header Fields
+-------------------------
+
+The following common request header fields are not supported:
+
++----------------------------+------------+
+| Name | Type |
++============================+============+
+| **x-amz-id-2** | Response |
++----------------------------+------------+
+
+.. _Amazon S3 API: http://docs.aws.amazon.com/AmazonS3/latest/API/APIRest.html
+.. _S3 Notification Compatibility: ../s3-notification-compatibility
diff --git a/doc/radosgw/s3/authentication.rst b/doc/radosgw/s3/authentication.rst
new file mode 100644
index 000000000..64747cde2
--- /dev/null
+++ b/doc/radosgw/s3/authentication.rst
@@ -0,0 +1,235 @@
+=========================
+ Authentication and ACLs
+=========================
+
+Requests to the RADOS Gateway (RGW) can be either authenticated or
+unauthenticated. RGW assumes unauthenticated requests are sent by an anonymous
+user. RGW supports canned ACLs.
+
+Authentication
+--------------
+Authenticating a request requires including an access key and a Hash-based
+Message Authentication Code (HMAC) in the request before it is sent to the
+RGW server. RGW uses an S3-compatible authentication approach.
+
+::
+
+ HTTP/1.1
+ PUT /buckets/bucket/object.mpeg
+ Host: cname.domain.com
+ Date: Mon, 2 Jan 2012 00:01:01 +0000
+ Content-Encoding: mpeg
+ Content-Length: 9999999
+
+ Authorization: AWS {access-key}:{hash-of-header-and-secret}
+
+In the foregoing example, replace ``{access-key}`` with the value for your access
+key ID followed by a colon (``:``). Replace ``{hash-of-header-and-secret}`` with
+a hash of the header string and the secret corresponding to the access key ID.
+
+To generate the hash of the header string and secret, you must:
+
+#. Get the value of the header string.
+#. Normalize the request header string into canonical form.
+#. Generate an HMAC using a SHA-1 hashing algorithm.
+ See `RFC 2104`_ and `HMAC`_ for details.
+#. Encode the ``hmac`` result as base-64.
+
+To normalize the header into canonical form:
+
+#. Get all fields beginning with ``x-amz-``.
+#. Ensure that the fields are all lowercase.
+#. Sort the fields lexicographically.
+#. Combine multiple instances of the same field name into a
+ single field and separate the field values with a comma.
+#. Replace white space and line breaks in field values with a single space.
+#. Remove white space before and after colons.
+#. Append a new line after each field.
+#. Merge the fields back into the header.
+
+Finally, replace ``{hash-of-header-and-secret}`` with the base-64 encoded HMAC string.
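+
+The normalization and signing steps above can be sketched with Python's
+standard ``hmac`` and ``base64`` modules. This is only an illustration of
+the canonicalization and HMAC-SHA1 procedure (the full S3 string-to-sign
+also includes the HTTP verb, ``Content-MD5``, ``Content-Type``, ``Date``
+and the canonicalized resource); the function names here are our own.

```python
import base64
import hmac
from hashlib import sha1

def canonicalize_amz_headers(headers):
    """Normalize ``x-amz-*`` headers per the steps above."""
    amz = {}
    for name, value in headers.items():
        name = name.strip().lower()              # lowercase field names
        if not name.startswith("x-amz-"):
            continue
        value = " ".join(value.split())          # collapse whitespace/newlines
        amz.setdefault(name, []).append(value)
    # Sort lexicographically, merge duplicate fields with commas,
    # and append a newline after each field.
    return "".join("%s:%s\n" % (name, ",".join(values))
                   for name, values in sorted(amz.items()))

def sign(secret_key, string_to_sign):
    """Return the base-64 encoded HMAC-SHA1 of the string to sign."""
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), sha1).digest()
    return base64.b64encode(digest).decode()

canonical = canonicalize_amz_headers({
    "X-Amz-Meta-Color": "red",
    "x-amz-acl": "public-read\n ",
})
signature = sign("my-secret", canonical)
```

+The resulting ``signature`` is what follows the access key and colon in the
+``Authorization`` header.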
+
+Authentication against OpenStack Keystone
+-----------------------------------------
+
+In a radosgw instance that is configured with authentication against
+OpenStack Keystone, it is possible to use Keystone as an authoritative
+source for S3 API authentication. To do so, you must set:
+
+* the ``rgw keystone`` configuration options explained in :doc:`../keystone`,
+* ``rgw s3 auth use keystone = true``.
+
+In addition, a user wishing to use the S3 API must obtain an AWS-style
+access key and secret key. They can do so with the ``openstack ec2
+credentials create`` command::
+
+ $ openstack --os-interface public ec2 credentials create
+ +------------+---------------------------------------------------------------------------------------------------------------------------------------------+
+ | Field | Value |
+ +------------+---------------------------------------------------------------------------------------------------------------------------------------------+
+ | access | c921676aaabbccdeadbeef7e8b0eeb2c |
+ | links | {u'self': u'https://auth.example.com:5000/v3/users/7ecbebaffeabbddeadbeefa23267ccbb24/credentials/OS-EC2/c921676aaabbccdeadbeef7e8b0eeb2c'} |
+ | project_id | 5ed51981aab4679851adeadbeef6ebf7 |
+ | secret | ******************************** |
+ | trust_id | None |
+ | user_id | 7ecbebaffeabbddeadbeefa23267cc24 |
+ +------------+---------------------------------------------------------------------------------------------------------------------------------------------+
+
+The access and secret key generated in this way can then be used for S3 API
+access to radosgw.
+
+.. note:: Consider that most production radosgw deployments
+ authenticating against OpenStack Keystone are also set up
+ for :doc:`../multitenancy`, for which special
+ considerations apply with respect to S3 signed URLs and
+ public read ACLs.
+
+Access Control Lists (ACLs)
+---------------------------
+
+RGW supports S3-compatible ACL functionality. An ACL is a list of access grants
+that specify which operations a user can perform on a bucket or on an object.
+Each grant has a different meaning when applied to a bucket versus applied to
+an object:
+
++------------------+--------------------------------------------------------+----------------------------------------------+
+| Permission | Bucket | Object |
++==================+========================================================+==============================================+
+| ``READ`` | Grantee can list the objects in the bucket. | Grantee can read the object. |
++------------------+--------------------------------------------------------+----------------------------------------------+
+| ``WRITE`` | Grantee can write or delete objects in the bucket. | N/A |
++------------------+--------------------------------------------------------+----------------------------------------------+
+| ``READ_ACP`` | Grantee can read bucket ACL. | Grantee can read the object ACL. |
++------------------+--------------------------------------------------------+----------------------------------------------+
+| ``WRITE_ACP`` | Grantee can write bucket ACL. | Grantee can write to the object ACL. |
++------------------+--------------------------------------------------------+----------------------------------------------+
+| ``FULL_CONTROL`` | Grantee has full permissions for object in the bucket. | Grantee can read or write to the object ACL. |
++------------------+--------------------------------------------------------+----------------------------------------------+
+
+Internally, S3 operations are mapped to ACL permissions thus:
+
++---------------------------------------+---------------+
+| Operation | Permission |
++=======================================+===============+
+| ``s3:GetObject`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:GetObjectTorrent`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:GetObjectVersion`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:GetObjectVersionTorrent`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:GetObjectTagging`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:GetObjectVersionTagging`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:ListAllMyBuckets`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:ListBucket`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:ListBucketMultipartUploads`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:ListBucketVersions`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:ListMultipartUploadParts`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:AbortMultipartUpload`` | ``WRITE`` |
++---------------------------------------+---------------+
+| ``s3:CreateBucket`` | ``WRITE`` |
++---------------------------------------+---------------+
+| ``s3:DeleteBucket`` | ``WRITE`` |
++---------------------------------------+---------------+
+| ``s3:DeleteObject`` | ``WRITE`` |
++---------------------------------------+---------------+
+| ``s3:DeleteObjectVersion``            | ``WRITE``     |
++---------------------------------------+---------------+
+| ``s3:PutObject`` | ``WRITE`` |
++---------------------------------------+---------------+
+| ``s3:PutObjectTagging`` | ``WRITE`` |
++---------------------------------------+---------------+
+| ``s3:PutObjectVersionTagging`` | ``WRITE`` |
++---------------------------------------+---------------+
+| ``s3:DeleteObjectTagging`` | ``WRITE`` |
++---------------------------------------+---------------+
+| ``s3:DeleteObjectVersionTagging`` | ``WRITE`` |
++---------------------------------------+---------------+
+| ``s3:RestoreObject`` | ``WRITE`` |
++---------------------------------------+---------------+
+| ``s3:GetAccelerateConfiguration`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetBucketAcl`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetBucketCORS`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetBucketLocation`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetBucketLogging`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetBucketNotification`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetBucketPolicy`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetBucketRequestPayment`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetBucketTagging`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetBucketVersioning`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetBucketWebsite`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetLifecycleConfiguration`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetObjectAcl`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetObjectVersionAcl`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetReplicationConfiguration`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetBucketEncryption`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:DeleteBucketPolicy`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:DeleteBucketWebsite`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:DeleteReplicationConfiguration`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutAccelerateConfiguration`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutBucketAcl`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutBucketCORS`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutBucketLogging`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutBucketNotification`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutBucketPolicy`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutBucketRequestPayment`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutBucketTagging`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutBucketVersioning``            | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutBucketWebsite`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutLifecycleConfiguration`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutObjectAcl`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutObjectVersionAcl`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutReplicationConfiguration`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutBucketEncryption`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+
+Some mappings (e.g. ``s3:CreateBucket`` to ``WRITE``) are not
+applicable to S3 operations, but are required to allow Swift and S3 to
+access the same resources when things like Swift user ACLs are in
+play. This is one of the many reasons to prefer S3 bucket policies
+over S3 ACLs when possible.
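+
+As an illustration of that recommendation, the equivalent of an ACL grant
+can usually be expressed as a bucket policy document. The following sketch
+builds such a document with Python's ``json`` module; the bucket name and
+user ARN are placeholders only (see :doc:`../bucketpolicy` for the
+supported grammar):

```python
import json

# Hypothetical names: "mybucket" and user "bob" are placeholders only.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam:::user/bob"]},
        "Action": ["s3:ListBucket", "s3:GetObject"],
        "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"],
    }],
}
policy_json = json.dumps(policy, indent=2)
```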
+
+
+.. _RFC 2104: http://www.ietf.org/rfc/rfc2104.txt
+.. _HMAC: https://en.wikipedia.org/wiki/HMAC
diff --git a/doc/radosgw/s3/bucketops.rst b/doc/radosgw/s3/bucketops.rst
new file mode 100644
index 000000000..17da3a935
--- /dev/null
+++ b/doc/radosgw/s3/bucketops.rst
@@ -0,0 +1,706 @@
+===================
+ Bucket Operations
+===================
+
+PUT Bucket
+----------
+Creates a new bucket. To create a bucket, you must have a user ID and a valid AWS Access Key ID to authenticate requests. You may not
+create buckets as an anonymous user.
+
+Constraints
+~~~~~~~~~~~
+In general, bucket names should follow domain name constraints.
+
+- Bucket names must be unique.
+- Bucket names cannot be formatted as an IP address.
+- Bucket names can be between 3 and 63 characters long.
+- Bucket names must not contain uppercase characters or underscores.
+- Bucket names must start with a lowercase letter or number.
+- Bucket names must be a series of one or more labels. Adjacent labels are separated by a single period (.). Bucket names can contain lowercase letters, numbers, and hyphens. Each label must start and end with a lowercase letter or a number.
+
+.. note:: The above constraints are relaxed when the option
+   ``rgw_relaxed_s3_bucket_names`` is set to ``true``. Relaxed bucket names
+   must still be unique and must not be formatted as an IP address, but they
+   may be up to 255 characters long and may contain letters, numbers,
+   periods, dashes, and underscores.
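+
+The constraints above can be checked client-side before a request is sent.
+The following is a rough sketch of such a validator, covering only the
+strict rules (not the relaxed ``rgw_relaxed_s3_bucket_names`` behavior;
+uniqueness can only be verified by the server):

```python
import re

# One label: starts and ends with a lowercase letter or digit,
# with hyphens allowed in between.
_LABEL = r"[a-z0-9]([a-z0-9-]*[a-z0-9])?"
_BUCKET_RE = re.compile(r"^%s(\.%s)*$" % (_LABEL, _LABEL))
_IP_RE = re.compile(r"^\d{1,3}(\.\d{1,3}){3}$")

def valid_bucket_name(name):
    """Check the strict bucket-name constraints described above."""
    if not 3 <= len(name) <= 63:
        return False
    if _IP_RE.match(name):          # must not look like an IP address
        return False
    return _BUCKET_RE.match(name) is not None
```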
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{bucket} HTTP/1.1
+ Host: cname.domain.com
+ x-amz-acl: public-read-write
+
+ Authorization: AWS {access-key}:{hash-of-header-and-secret}
+
+Parameters
+~~~~~~~~~~
+
+
++---------------+----------------------+-----------------------------------------------------------------------------+------------+
+| Name | Description | Valid Values | Required |
++===============+======================+=============================================================================+============+
+| ``x-amz-acl`` | Canned ACLs. | ``private``, ``public-read``, ``public-read-write``, ``authenticated-read`` | No |
++---------------+----------------------+-----------------------------------------------------------------------------+------------+
+| ``x-amz-bucket-object-lock-enabled`` | Enable object lock on bucket. | ``true``, ``false`` | No |
++--------------------------------------+-------------------------------+---------------------------------------------+------------+
+
+Request Entities
+~~~~~~~~~~~~~~~~
+
++-------------------------------+-----------+----------------------------------------------------------------+
+| Name | Type | Description |
++===============================+===========+================================================================+
+| ``CreateBucketConfiguration`` | Container | A container for the bucket configuration. |
++-------------------------------+-----------+----------------------------------------------------------------+
+| ``LocationConstraint`` | String | A zonegroup api name, with optional :ref:`s3_bucket_placement` |
++-------------------------------+-----------+----------------------------------------------------------------+
+
+
+HTTP Response
+~~~~~~~~~~~~~
+
+If the bucket name is unique, satisfies the constraints, and is unused, the
+operation will succeed. If a bucket with the same name already exists and
+the user is the bucket owner, the operation will succeed. If the bucket
+name is already in use by another user, the operation will fail.
+
++---------------+-----------------------+----------------------------------------------------------+
+| HTTP Status | Status Code | Description |
++===============+=======================+==========================================================+
+| ``409``       | BucketAlreadyExists   | Bucket already exists under a different user's ownership.|
++---------------+-----------------------+----------------------------------------------------------+
+
+DELETE Bucket
+-------------
+
+Deletes a bucket. You can reuse bucket names following a successful bucket removal.
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{bucket} HTTP/1.1
+ Host: cname.domain.com
+
+ Authorization: AWS {access-key}:{hash-of-header-and-secret}
+
+HTTP Response
+~~~~~~~~~~~~~
+
++---------------+---------------+------------------+
+| HTTP Status | Status Code | Description |
++===============+===============+==================+
+| ``204`` | No Content | Bucket removed. |
++---------------+---------------+------------------+
+
+GET Bucket
+----------
+Returns a list of the objects in a bucket.
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{bucket}?max-keys=25 HTTP/1.1
+ Host: cname.domain.com
+
+Parameters
+~~~~~~~~~~
+
++---------------------+-----------+-------------------------------------------------------------------------------------------------+
+| Name | Type | Description |
++=====================+===========+=================================================================================================+
+| ``prefix`` | String | Only returns objects that contain the specified prefix. |
++---------------------+-----------+-------------------------------------------------------------------------------------------------+
+| ``delimiter`` | String | The delimiter between the prefix and the rest of the object name. |
++---------------------+-----------+-------------------------------------------------------------------------------------------------+
+| ``marker`` | String | A beginning index for the list of objects returned. |
++---------------------+-----------+-------------------------------------------------------------------------------------------------+
+| ``max-keys`` | Integer | The maximum number of keys to return. Default is 1000. |
++---------------------+-----------+-------------------------------------------------------------------------------------------------+
+| ``allow-unordered`` | Boolean | Non-standard extension. Allows results to be returned unordered. Cannot be used with delimiter. |
++---------------------+-----------+-------------------------------------------------------------------------------------------------+
+
+HTTP Response
+~~~~~~~~~~~~~
+
++---------------+---------------+--------------------+
+| HTTP Status | Status Code | Description |
++===============+===============+====================+
+| ``200`` | OK | Buckets retrieved |
++---------------+---------------+--------------------+
+
+Bucket Response Entities
+~~~~~~~~~~~~~~~~~~~~~~~~
+``GET /{bucket}`` returns a ``ListBucketResult`` container describing the bucket's contents, with the following fields.
+
++------------------------+-----------+----------------------------------------------------------------------------------+
+| Name | Type | Description |
++========================+===========+==================================================================================+
+| ``ListBucketResult`` | Entity | The container for the list of objects. |
++------------------------+-----------+----------------------------------------------------------------------------------+
+| ``Name`` | String | The name of the bucket whose contents will be returned. |
++------------------------+-----------+----------------------------------------------------------------------------------+
+| ``Prefix`` | String | A prefix for the object keys. |
++------------------------+-----------+----------------------------------------------------------------------------------+
+| ``Marker`` | String | A beginning index for the list of objects returned. |
++------------------------+-----------+----------------------------------------------------------------------------------+
+| ``MaxKeys`` | Integer | The maximum number of keys returned. |
++------------------------+-----------+----------------------------------------------------------------------------------+
+| ``Delimiter`` | String | If set, objects with the same prefix will appear in the ``CommonPrefixes`` list. |
++------------------------+-----------+----------------------------------------------------------------------------------+
+| ``IsTruncated`` | Boolean | If ``true``, only a subset of the bucket's contents were returned. |
++------------------------+-----------+----------------------------------------------------------------------------------+
+| ``CommonPrefixes`` | Container | If multiple objects contain the same prefix, they will appear in this list. |
++------------------------+-----------+----------------------------------------------------------------------------------+
+
+Object Response Entities
+~~~~~~~~~~~~~~~~~~~~~~~~
+The ``ListBucketResult`` contains objects, where each object is within a ``Contents`` container.
+
++------------------------+-----------+------------------------------------------+
+| Name | Type | Description |
++========================+===========+==========================================+
+| ``Contents`` | Object | A container for the object. |
++------------------------+-----------+------------------------------------------+
+| ``Key`` | String | The object's key. |
++------------------------+-----------+------------------------------------------+
+| ``LastModified`` | Date | The object's last-modified date/time. |
++------------------------+-----------+------------------------------------------+
+| ``ETag``              | String    | The object's entity tag (an MD5 hash).   |
++------------------------+-----------+------------------------------------------+
+| ``Size`` | Integer | The object's size. |
++------------------------+-----------+------------------------------------------+
+| ``StorageClass``      | String    | The object's storage class.              |
++------------------------+-----------+------------------------------------------+
+| ``Type`` | String | ``Appendable`` or ``Normal``. |
++------------------------+-----------+------------------------------------------+
+
+Get Bucket Location
+-------------------
+Retrieves the bucket's region. The user needs to be the bucket owner
+to call this. A bucket can be constrained to a region by providing
+``LocationConstraint`` during a PUT request.
+
+Syntax
+~~~~~~
+Add the ``location`` subresource to the bucket resource as shown below.
+
+::
+
+ GET /{bucket}?location HTTP/1.1
+ Host: cname.domain.com
+
+ Authorization: AWS {access-key}:{hash-of-header-and-secret}
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++------------------------+-----------+------------------------------------------+
+| Name | Type | Description |
++========================+===========+==========================================+
+| ``LocationConstraint`` | String | The region where bucket resides, empty |
+| | | string for default region |
++------------------------+-----------+------------------------------------------+
+
+
+
+Get Bucket ACL
+--------------
+Retrieves the bucket access control list. The user needs to be the bucket
+owner or to have been granted ``READ_ACP`` permission on the bucket.
+
+Syntax
+~~~~~~
+Add the ``acl`` subresource to the bucket request as shown below.
+
+::
+
+ GET /{bucket}?acl HTTP/1.1
+ Host: cname.domain.com
+
+ Authorization: AWS {access-key}:{hash-of-header-and-secret}
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| Name | Type | Description |
++===========================+=============+==============================================================================================+
+| ``AccessControlPolicy`` | Container | A container for the response. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``AccessControlList`` | Container | A container for the ACL information. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Owner`` | Container | A container for the bucket owner's ``ID`` and ``DisplayName``. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``ID`` | String | The bucket owner's ID. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``DisplayName`` | String | The bucket owner's display name. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Grant`` | Container | A container for ``Grantee`` and ``Permission``. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Grantee`` | Container | A container for the ``DisplayName`` and ``ID`` of the user receiving a grant of permission. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Permission``            | String      | The permission given to the ``Grantee``.                                                     |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+
+PUT Bucket ACL
+--------------
+Sets an access control list on an existing bucket. The user needs to be the bucket
+owner or to have been granted ``WRITE_ACP`` permission on the bucket.
+
+Syntax
+~~~~~~
+Add the ``acl`` subresource to the bucket request as shown below.
+
+::
+
+ PUT /{bucket}?acl HTTP/1.1
+
+Request Entities
+~~~~~~~~~~~~~~~~
+
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| Name | Type | Description |
++===========================+=============+==============================================================================================+
+| ``AccessControlPolicy`` | Container | A container for the request. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``AccessControlList`` | Container | A container for the ACL information. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Owner`` | Container | A container for the bucket owner's ``ID`` and ``DisplayName``. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``ID`` | String | The bucket owner's ID. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``DisplayName`` | String | The bucket owner's display name. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Grant`` | Container | A container for ``Grantee`` and ``Permission``. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Grantee`` | Container | A container for the ``DisplayName`` and ``ID`` of the user receiving a grant of permission. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Permission``            | String      | The permission given to the ``Grantee``.                                                     |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+
+List Bucket Multipart Uploads
+-----------------------------
+
+``GET /?uploads`` returns a list of the current in-progress multipart
+uploads, i.e., uploads that have been initiated but not yet completed or
+aborted.
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{bucket}?uploads HTTP/1.1
+
+Parameters
+~~~~~~~~~~
+
+You may specify parameters for ``GET /{bucket}?uploads``, but none of them are required.
+
++------------------------+-----------+--------------------------------------------------------------------------------------+
+| Name | Type | Description |
++========================+===========+======================================================================================+
+| ``prefix``             | String    | Returns in-progress uploads whose keys contain the specified prefix.                 |
++------------------------+-----------+--------------------------------------------------------------------------------------+
+| ``delimiter`` | String | The delimiter between the prefix and the rest of the object name. |
++------------------------+-----------+--------------------------------------------------------------------------------------+
+| ``key-marker`` | String | The beginning marker for the list of uploads. |
++------------------------+-----------+--------------------------------------------------------------------------------------+
+| ``max-keys`` | Integer | The maximum number of in-progress uploads. The default is 1000. |
++------------------------+-----------+--------------------------------------------------------------------------------------+
+| ``max-uploads``        | Integer   | The maximum number of multipart uploads. The range is 1-1000. The default is 1000.   |
++------------------------+-----------+--------------------------------------------------------------------------------------+
+| ``upload-id-marker``   | String    | Ignored if ``key-marker`` is not specified. Specifies the ``ID`` of the first        |
+|                        |           | upload to list, in lexicographical order, at or following this ``ID``.               |
++------------------------+-----------+--------------------------------------------------------------------------------------+
+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| Name | Type | Description |
++=========================================+=============+==========================================================================================================+
+| ``ListMultipartUploadsResult`` | Container | A container for the results. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``ListMultipartUploadsResult.Prefix`` | String | The prefix specified by the ``prefix`` request parameter (if any). |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Bucket``                              | String      | The bucket that contains the multipart uploads.                                                          |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``KeyMarker`` | String | The key marker specified by the ``key-marker`` request parameter (if any). |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``UploadIdMarker`` | String | The marker specified by the ``upload-id-marker`` request parameter (if any). |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``NextKeyMarker`` | String | The key marker to use in a subsequent request if ``IsTruncated`` is ``true``. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``NextUploadIdMarker`` | String | The upload ID marker to use in a subsequent request if ``IsTruncated`` is ``true``. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``MaxUploads`` | Integer | The max uploads specified by the ``max-uploads`` request parameter. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Delimiter`` | String | If set, objects with the same prefix will appear in the ``CommonPrefixes`` list. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``IsTruncated`` | Boolean | If ``true``, only a subset of the bucket's upload contents were returned. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Upload`` | Container | A container for ``Key``, ``UploadId``, ``InitiatorOwner``, ``StorageClass``, and ``Initiated`` elements. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Key`` | String | The key of the object once the multipart upload is complete. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``UploadId`` | String | The ``ID`` that identifies the multipart upload. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Initiator`` | Container | Contains the ``ID`` and ``DisplayName`` of the user who initiated the upload. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``DisplayName`` | String | The initiator's display name. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``ID`` | String | The initiator's ID. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Owner`` | Container | A container for the ``ID`` and ``DisplayName`` of the user who owns the uploaded object. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``StorageClass`` | String | The method used to store the resulting object. ``STANDARD`` or ``REDUCED_REDUNDANCY`` |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Initiated`` | Date | The date and time the user initiated the upload. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``CommonPrefixes`` | Container | If multiple objects contain the same prefix, they will appear in this list. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``CommonPrefixes.Prefix`` | String | The substring of the key after the prefix as defined by the ``prefix`` request parameter. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
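The response entities above can be read with the standard library's XML parser. A minimal sketch against an illustrative ``ListMultipartUploadsResult`` body (the bucket, key, and upload ID are placeholders):

```python
import xml.etree.ElementTree as ET

NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"

# Illustrative response body; values are placeholders.
sample = """\
<ListMultipartUploadsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <Bucket>mybucket</Bucket>
  <IsTruncated>false</IsTruncated>
  <Upload>
    <Key>large-object.bin</Key>
    <UploadId>2~abc123</UploadId>
  </Upload>
</ListMultipartUploadsResult>
"""

root = ET.fromstring(sample)
# When IsTruncated is true, pass NextKeyMarker/NextUploadIdMarker back
# as key-marker/upload-id-marker in the next request.
truncated = root.findtext(f"{NS}IsTruncated") == "true"
uploads = [(u.findtext(f"{NS}Key"), u.findtext(f"{NS}UploadId"))
           for u in root.iter(f"{NS}Upload")]
```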
+
+Enable/Suspend Bucket Versioning
+--------------------------------
+
+``PUT /?versioning`` sets the versioning state of an existing bucket. To set the versioning state, you must be the bucket owner.
+
+You can set the versioning state to one of the following values:
+
+- ``Enabled``: Enables versioning for the objects in the bucket. All objects added to the bucket receive a unique version ID.
+- ``Suspended``: Disables versioning for the objects in the bucket. All objects added to the bucket receive the version ID ``null``.
+
+If the versioning state has never been set on a bucket, the bucket has no versioning state, and a GET versioning request does not return a versioning state value.
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{bucket}?versioning HTTP/1.1
+
+Request Entities
+~~~~~~~~~~~~~~~~
+
++-----------------------------+-----------+---------------------------------------------------------------------------+
+| Name | Type | Description |
++=============================+===========+===========================================================================+
+| ``VersioningConfiguration`` | Container | A container for the request. |
++-----------------------------+-----------+---------------------------------------------------------------------------+
+| ``Status`` | String | Sets the versioning state of the bucket. Valid Values: Suspended/Enabled |
++-----------------------------+-----------+---------------------------------------------------------------------------+
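The request body can be built with the standard library; a minimal sketch (the ``versioning_body`` helper is illustrative, not part of any API):

```python
import xml.etree.ElementTree as ET

S3NS = "http://s3.amazonaws.com/doc/2006-03-01/"

def versioning_body(status):
    """Build the VersioningConfiguration request body described above."""
    # Valid values per the table above: "Enabled" or "Suspended".
    if status not in ("Enabled", "Suspended"):
        raise ValueError("status must be Enabled or Suspended")
    root = ET.Element("VersioningConfiguration", xmlns=S3NS)
    ET.SubElement(root, "Status").text = status
    return ET.tostring(root, encoding="unicode")
```

The resulting string is sent as the body of ``PUT /{bucket}?versioning``.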
+
+Put Bucket Object Lock
+----------------------
+
+Places an Object Lock configuration on the specified bucket. The rule specified in the Object Lock configuration will be
+applied by default to every new object placed in the specified bucket.
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{bucket}?object-lock HTTP/1.1
+
+Request Entities
+~~~~~~~~~~~~~~~~
+
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| Name | Type | Description | Required |
++=============================+=============+========================================================================================+==========+
+| ``ObjectLockConfiguration`` | Container | A container for the request. | Yes |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``ObjectLockEnabled`` | String | Indicates whether this bucket has an Object Lock configuration enabled. | Yes |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``Rule`` | Container | The Object Lock rule in place for the specified bucket. | No |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``DefaultRetention`` | Container | The default retention period applied to new objects placed in the specified bucket. | No |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``Mode`` | String | The default Object Lock retention mode. Valid Values: GOVERNANCE/COMPLIANCE | Yes |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``Days`` | Integer | The number of days specified for the default retention period. | No |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``Years`` | Integer | The number of years specified for the default retention period. | No |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
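A minimal sketch of building this request body with the standard library; the ``object_lock_body`` helper is illustrative, not part of any API:

```python
import xml.etree.ElementTree as ET

S3NS = "http://s3.amazonaws.com/doc/2006-03-01/"

def object_lock_body(mode, days=None, years=None):
    """Build the ObjectLockConfiguration request body described above.

    mode must be GOVERNANCE or COMPLIANCE; give either days or years
    (not both) for the default retention period.
    """
    if mode not in ("GOVERNANCE", "COMPLIANCE"):
        raise ValueError("mode must be GOVERNANCE or COMPLIANCE")
    if (days is None) == (years is None):
        raise ValueError("specify exactly one of days or years")
    root = ET.Element("ObjectLockConfiguration", xmlns=S3NS)
    ET.SubElement(root, "ObjectLockEnabled").text = "Enabled"
    retention = ET.SubElement(ET.SubElement(root, "Rule"),
                              "DefaultRetention")
    ET.SubElement(retention, "Mode").text = mode
    if days is not None:
        ET.SubElement(retention, "Days").text = str(days)
    else:
        ET.SubElement(retention, "Years").text = str(years)
    return ET.tostring(root, encoding="unicode")
```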
+
+HTTP Response
+~~~~~~~~~~~~~
+
+If Object Lock was not enabled when the bucket was created, the operation fails.
+
++---------------+-----------------------+----------------------------------------------------------+
+| HTTP Status | Status Code | Description |
++===============+=======================+==========================================================+
+| ``400`` | MalformedXML | The XML is not well-formed |
++---------------+-----------------------+----------------------------------------------------------+
+| ``409`` | InvalidBucketState | The bucket object lock is not enabled |
++---------------+-----------------------+----------------------------------------------------------+
+
+Get Bucket Object Lock
+----------------------
+
+Gets the Object Lock configuration for a bucket. The rule specified in the Object Lock configuration will be applied by
+default to every new object placed in the specified bucket.
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{bucket}?object-lock HTTP/1.1
+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| Name | Type | Description | Required |
++=============================+=============+========================================================================================+==========+
+| ``ObjectLockConfiguration`` | Container   | A container for the response.                                                          | Yes      |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``ObjectLockEnabled`` | String | Indicates whether this bucket has an Object Lock configuration enabled. | Yes |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``Rule`` | Container | The Object Lock rule in place for the specified bucket. | No |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``DefaultRetention`` | Container | The default retention period applied to new objects placed in the specified bucket. | No |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``Mode`` | String | The default Object Lock retention mode. Valid Values: GOVERNANCE/COMPLIANCE | Yes |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``Days`` | Integer | The number of days specified for the default retention period. | No |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``Years`` | Integer | The number of years specified for the default retention period. | No |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
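A minimal sketch of extracting the retention settings from the response with the standard library; the sample body is illustrative:

```python
import xml.etree.ElementTree as ET

NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"

# Illustrative response body; values are placeholders.
sample = """\
<ObjectLockConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <ObjectLockEnabled>Enabled</ObjectLockEnabled>
  <Rule>
    <DefaultRetention>
      <Mode>COMPLIANCE</Mode>
      <Days>90</Days>
    </DefaultRetention>
  </Rule>
</ObjectLockConfiguration>
"""

root = ET.fromstring(sample)
retention = root.find(f"{NS}Rule/{NS}DefaultRetention")
mode = retention.findtext(f"{NS}Mode")
days = retention.findtext(f"{NS}Days")  # None when Years is used instead
```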
+
+Create Notification
+-------------------
+
+Create a notification that publishes events from a specific bucket to a topic.
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{bucket}?notification HTTP/1.1
+
+
+Request Entities
+~~~~~~~~~~~~~~~~
+
+Parameters are XML encoded in the body of the request, in the following format:
+
+::
+
+ <NotificationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
+ <TopicConfiguration>
+ <Id></Id>
+ <Topic></Topic>
+ <Event></Event>
+ <Filter>
+ <S3Key>
+ <FilterRule>
+ <Name></Name>
+ <Value></Value>
+ </FilterRule>
+ </S3Key>
+ <S3Metadata>
+ <FilterRule>
+ <Name></Name>
+ <Value></Value>
+ </FilterRule>
+ </S3Metadata>
+ <S3Tags>
+ <FilterRule>
+ <Name></Name>
+ <Value></Value>
+ </FilterRule>
+ </S3Tags>
+ </Filter>
+ </TopicConfiguration>
+ </NotificationConfiguration>
+
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| Name | Type | Description | Required |
++===============================+===========+======================================================================================+==========+
+| ``NotificationConfiguration`` | Container | Holding list of ``TopicConfiguration`` entities | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``TopicConfiguration`` | Container | Holding ``Id``, ``Topic`` and list of ``Event`` entities | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Id`` | String | Name of the notification | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Topic`` | String | Topic ARN. Topic must be created beforehand | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Event``                     | String    | List of supported events; see: `S3 Notification Compatibility`_. Multiple ``Event``  | No       |
+| | | entities can be used. If omitted, all events are handled | |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Filter`` | Container | Holding ``S3Key``, ``S3Metadata`` and ``S3Tags`` entities | No |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``S3Key`` | Container | Holding a list of ``FilterRule`` entities, for filtering based on object key. | No |
+|                               |           | At most 3 entities may be in the list, with ``Name`` being ``prefix``, ``suffix`` or |          |
+| | | ``regex``. All filter rules in the list must match for the filter to match. | |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``S3Metadata`` | Container | Holding a list of ``FilterRule`` entities, for filtering based on object metadata. | No |
+| | | All filter rules in the list must match the metadata defined on the object. However, | |
+|                               |           | the object still matches if it has other metadata entries not listed in the filter.  |          |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``S3Tags`` | Container | Holding a list of ``FilterRule`` entities, for filtering based on object tags. | No |
+| | | All filter rules in the list must match the tags defined on the object. However, | |
+|                               |           | the object still matches if it has other tags not listed in the filter.              |          |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``S3Key.FilterRule`` | Container | Holding ``Name`` and ``Value`` entities. ``Name`` would be: ``prefix``, ``suffix`` | Yes |
+| | | or ``regex``. The ``Value`` would hold the key prefix, key suffix or a regular | |
+| | | expression for matching the key, accordingly. | |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``S3Metadata.FilterRule`` | Container | Holding ``Name`` and ``Value`` entities. ``Name`` would be the name of the metadata | Yes |
+| | | attribute (e.g. ``x-amz-meta-xxx``). The ``Value`` would be the expected value for | |
+| | | this attribute. | |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``S3Tags.FilterRule`` | Container | Holding ``Name`` and ``Value`` entities. ``Name`` would be the tag key, | Yes |
+| | | and ``Value`` would be the tag value. | |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
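A minimal sketch of building the request body above with the standard library; the ``notification_body`` helper, topic ARN, and filter value are illustrative, not part of any API:

```python
import xml.etree.ElementTree as ET

S3NS = "http://s3.amazonaws.com/doc/2006-03-01/"

def notification_body(notification_id, topic_arn, events=(), key_prefix=None):
    """Build a NotificationConfiguration with one TopicConfiguration."""
    root = ET.Element("NotificationConfiguration", xmlns=S3NS)
    conf = ET.SubElement(root, "TopicConfiguration")
    ET.SubElement(conf, "Id").text = notification_id
    ET.SubElement(conf, "Topic").text = topic_arn
    for event in events:  # omit to have all events handled
        ET.SubElement(conf, "Event").text = event
    if key_prefix is not None:
        # A single S3Key FilterRule with Name "prefix".
        rule = ET.SubElement(
            ET.SubElement(ET.SubElement(conf, "Filter"), "S3Key"),
            "FilterRule")
        ET.SubElement(rule, "Name").text = "prefix"
        ET.SubElement(rule, "Value").text = key_prefix
    return ET.tostring(root, encoding="unicode")

body = notification_body("mynotif", "arn:aws:sns:default::mytopic",
                         events=["s3:ObjectCreated:*"],
                         key_prefix="images/")
```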
+
+
+HTTP Response
+~~~~~~~~~~~~~
+
++---------------+-----------------------+----------------------------------------------------------+
+| HTTP Status | Status Code | Description |
++===============+=======================+==========================================================+
+| ``400`` | MalformedXML | The XML is not well-formed |
++---------------+-----------------------+----------------------------------------------------------+
+| ``400`` | InvalidArgument | Missing Id; Missing/Invalid Topic ARN; Invalid Event |
++---------------+-----------------------+----------------------------------------------------------+
+| ``404`` | NoSuchBucket | The bucket does not exist |
++---------------+-----------------------+----------------------------------------------------------+
+| ``404`` | NoSuchKey | The topic does not exist |
++---------------+-----------------------+----------------------------------------------------------+
+
+
+Delete Notification
+-------------------
+
+Delete a specific, or all, notifications from a bucket.
+
+.. note::
+
+ - Notification deletion is an extension to the S3 notification API
+ - When the bucket is deleted, any notification defined on it is also deleted
+ - Deleting an unknown notification (e.g. double delete) is not considered an error
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{bucket}?notification[=<notification-id>] HTTP/1.1
+
+
+Parameters
+~~~~~~~~~~
+
++------------------------+-----------+----------------------------------------------------------------------------------------+
+| Name | Type | Description |
++========================+===========+========================================================================================+
+| ``notification-id`` | String | Name of the notification. If not provided, all notifications on the bucket are deleted |
++------------------------+-----------+----------------------------------------------------------------------------------------+
+
+HTTP Response
+~~~~~~~~~~~~~
+
++---------------+-----------------------+----------------------------------------------------------+
+| HTTP Status | Status Code | Description |
++===============+=======================+==========================================================+
+| ``404`` | NoSuchBucket | The bucket does not exist |
++---------------+-----------------------+----------------------------------------------------------+
+
+Get/List Notification
+---------------------
+
+Get a specific notification, or list all notifications configured on a bucket.
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{bucket}?notification[=<notification-id>] HTTP/1.1
+
+
+Parameters
+~~~~~~~~~~
+
++------------------------+-----------+----------------------------------------------------------------------------------------+
+| Name | Type | Description |
++========================+===========+========================================================================================+
+| ``notification-id`` | String | Name of the notification. If not provided, all notifications on the bucket are listed |
++------------------------+-----------+----------------------------------------------------------------------------------------+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+The response is XML encoded in the body of the response, in the following format:
+
+::
+
+ <NotificationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
+ <TopicConfiguration>
+ <Id></Id>
+ <Topic></Topic>
+ <Event></Event>
+ <Filter>
+ <S3Key>
+ <FilterRule>
+ <Name></Name>
+ <Value></Value>
+ </FilterRule>
+ </S3Key>
+ <S3Metadata>
+ <FilterRule>
+ <Name></Name>
+ <Value></Value>
+ </FilterRule>
+ </S3Metadata>
+ <S3Tags>
+ <FilterRule>
+ <Name></Name>
+ <Value></Value>
+ </FilterRule>
+ </S3Tags>
+ </Filter>
+ </TopicConfiguration>
+ </NotificationConfiguration>
+
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| Name | Type | Description | Required |
++===============================+===========+======================================================================================+==========+
+| ``NotificationConfiguration`` | Container | Holding list of ``TopicConfiguration`` entities | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``TopicConfiguration`` | Container | Holding ``Id``, ``Topic`` and list of ``Event`` entities | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Id`` | String | Name of the notification | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Topic`` | String | Topic ARN | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Event`` | String | Handled event. Multiple ``Event`` entities may exist | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Filter`` | Container | Holding the filters configured for this notification | No |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
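A minimal sketch of walking this response with the standard library; the sample body is illustrative:

```python
import xml.etree.ElementTree as ET

NS = "{http://s3.amazonaws.com/doc/2006-03-01/}"

# Illustrative response body; values are placeholders.
sample = """\
<NotificationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <TopicConfiguration>
    <Id>mynotif</Id>
    <Topic>arn:aws:sns:default::mytopic</Topic>
    <Event>s3:ObjectCreated:*</Event>
  </TopicConfiguration>
</NotificationConfiguration>
"""

root = ET.fromstring(sample)
for conf in root.iter(f"{NS}TopicConfiguration"):
    notif_id = conf.findtext(f"{NS}Id")
    topic = conf.findtext(f"{NS}Topic")
    events = [e.text for e in conf.iter(f"{NS}Event")]
```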
+
+HTTP Response
+~~~~~~~~~~~~~
+
++---------------+-----------------------+----------------------------------------------------------+
+| HTTP Status | Status Code | Description |
++===============+=======================+==========================================================+
+| ``404`` | NoSuchBucket | The bucket does not exist |
++---------------+-----------------------+----------------------------------------------------------+
+| ``404`` | NoSuchKey | The notification does not exist (if provided) |
++---------------+-----------------------+----------------------------------------------------------+
+
+.. _S3 Notification Compatibility: ../../s3-notification-compatibility
diff --git a/doc/radosgw/s3/commons.rst b/doc/radosgw/s3/commons.rst
new file mode 100644
index 000000000..4b9b4a040
--- /dev/null
+++ b/doc/radosgw/s3/commons.rst
@@ -0,0 +1,113 @@
+=================
+ Common Entities
+=================
+
+.. toctree::
+ :maxdepth: -1
+
+Bucket and Host Name
+--------------------
+There are two different ways of addressing a bucket. The first (preferred) method
+identifies the bucket as the top-level directory in the URI. ::
+
+ GET /mybucket HTTP/1.1
+ Host: cname.domain.com
+
+The second method identifies the bucket via a virtual bucket host name. For example::
+
+ GET / HTTP/1.1
+ Host: mybucket.cname.domain.com
+
+To configure virtual hosted buckets, you can either set ``rgw_dns_name = cname.domain.com`` in ceph.conf, or add ``cname.domain.com`` to the list of ``hostnames`` in your zonegroup configuration. See `Ceph Object Gateway - Multisite Configuration`_ for more on zonegroups.
+
+.. tip:: We prefer the first method, because the second method requires expensive domain certification and DNS wildcards.
+
+.. tip:: You can define multiple hostnames directly with the :confval:`rgw_dns_name` parameter.
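The two addressing modes can be summarized as a tiny helper; this is an illustrative sketch (the ``request_target`` function is not part of any API):

```python
def request_target(bucket, base_host, virtual_hosted=False):
    """Return (request_path, host_header) for the two addressing modes."""
    if virtual_hosted:
        # Bucket name becomes part of the Host header.
        return "/", f"{bucket}.{base_host}"
    # Bucket name is the top-level directory in the URI.
    return f"/{bucket}", base_host
```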
+
+Common Request Headers
+----------------------
+
++--------------------+------------------------------------------+
+| Request Header | Description |
++====================+==========================================+
+| ``CONTENT_LENGTH`` | Length of the request body. |
++--------------------+------------------------------------------+
+| ``DATE`` | Request time and date (in UTC). |
++--------------------+------------------------------------------+
+| ``HOST`` | The name of the host server. |
++--------------------+------------------------------------------+
+| ``AUTHORIZATION`` | Authorization token. |
++--------------------+------------------------------------------+
+
+Common Response Status
+----------------------
+
++---------------+-----------------------------------+
+| HTTP Status | Response Code |
++===============+===================================+
+| ``100`` | Continue |
++---------------+-----------------------------------+
+| ``200`` | Success |
++---------------+-----------------------------------+
+| ``201`` | Created |
++---------------+-----------------------------------+
+| ``202`` | Accepted |
++---------------+-----------------------------------+
+| ``204`` | NoContent |
++---------------+-----------------------------------+
+| ``206`` | Partial content |
++---------------+-----------------------------------+
+| ``304`` | NotModified |
++---------------+-----------------------------------+
+| ``400`` | InvalidArgument |
++---------------+-----------------------------------+
+| ``400`` | InvalidDigest |
++---------------+-----------------------------------+
+| ``400`` | BadDigest |
++---------------+-----------------------------------+
+| ``400`` | InvalidBucketName |
++---------------+-----------------------------------+
+| ``400`` | InvalidObjectName |
++---------------+-----------------------------------+
+| ``400`` | UnresolvableGrantByEmailAddress |
++---------------+-----------------------------------+
+| ``400`` | InvalidPart |
++---------------+-----------------------------------+
+| ``400`` | InvalidPartOrder |
++---------------+-----------------------------------+
+| ``400`` | RequestTimeout |
++---------------+-----------------------------------+
+| ``400`` | EntityTooLarge |
++---------------+-----------------------------------+
+| ``403`` | AccessDenied |
++---------------+-----------------------------------+
+| ``403`` | UserSuspended |
++---------------+-----------------------------------+
+| ``403`` | RequestTimeTooSkewed |
++---------------+-----------------------------------+
+| ``404`` | NoSuchKey |
++---------------+-----------------------------------+
+| ``404`` | NoSuchBucket |
++---------------+-----------------------------------+
+| ``404`` | NoSuchUpload |
++---------------+-----------------------------------+
+| ``405`` | MethodNotAllowed |
++---------------+-----------------------------------+
+| ``408`` | RequestTimeout |
++---------------+-----------------------------------+
+| ``409`` | BucketAlreadyExists |
++---------------+-----------------------------------+
+| ``409`` | BucketNotEmpty |
++---------------+-----------------------------------+
+| ``411`` | MissingContentLength |
++---------------+-----------------------------------+
+| ``412`` | PreconditionFailed |
++---------------+-----------------------------------+
+| ``416`` | InvalidRange |
++---------------+-----------------------------------+
+| ``422`` | UnprocessableEntity |
++---------------+-----------------------------------+
+| ``500`` | InternalError |
++---------------+-----------------------------------+
+
+.. _`Ceph Object Gateway - Multisite Configuration`: ../../multisite
diff --git a/doc/radosgw/s3/cpp.rst b/doc/radosgw/s3/cpp.rst
new file mode 100644
index 000000000..089c9c53a
--- /dev/null
+++ b/doc/radosgw/s3/cpp.rst
@@ -0,0 +1,337 @@
+.. _cpp:
+
+C++ S3 Examples
+===============
+
+Setup
+-----
+
+The following contains includes and globals that will be used in later examples:
+
+.. code-block:: cpp
+
+ #include "libs3.h"
+ #include <stdlib.h>
+ #include <iostream>
+ #include <fstream>
+
+ const char access_key[] = "ACCESS_KEY";
+ const char secret_key[] = "SECRET_KEY";
+ const char host[] = "HOST";
+ const char sample_bucket[] = "sample_bucket";
+ const char sample_key[] = "hello.txt";
+ const char sample_file[] = "resource/hello.txt";
+ const char *security_token = NULL;
+ const char *auth_region = NULL;
+
+ S3BucketContext bucketContext =
+ {
+ host,
+ sample_bucket,
+ S3ProtocolHTTP,
+ S3UriStylePath,
+ access_key,
+ secret_key,
+ security_token,
+ auth_region
+ };
+
+ S3Status responsePropertiesCallback(
+ const S3ResponseProperties *properties,
+ void *callbackData)
+ {
+ return S3StatusOK;
+ }
+
+ static void responseCompleteCallback(
+ S3Status status,
+ const S3ErrorDetails *error,
+ void *callbackData)
+ {
+ return;
+ }
+
+ S3ResponseHandler responseHandler =
+ {
+ &responsePropertiesCallback,
+ &responseCompleteCallback
+ };
+
+
+Creating (and Closing) a Connection
+-----------------------------------
+
+This creates a connection so that you can interact with the server.
+
+.. code-block:: cpp
+
+ S3_initialize("s3", S3_INIT_ALL, host);
+ // Do stuff...
+ S3_deinitialize();
+
+
+Listing Owned Buckets
+---------------------
+
+This gets a list of Buckets that you own.
+This also prints out the bucket name, owner ID, and display name
+for each bucket.
+
+.. code-block:: cpp
+
+ static S3Status listServiceCallback(
+ const char *ownerId,
+ const char *ownerDisplayName,
+ const char *bucketName,
+ int64_t creationDate, void *callbackData)
+ {
+ bool *header_printed = (bool*) callbackData;
+ if (!*header_printed) {
+ *header_printed = true;
+ printf("%-22s", " Bucket");
+ printf(" %-20s %-12s", " Owner ID", "Display Name");
+ printf("\n");
+ printf("----------------------");
+ printf(" --------------------" " ------------");
+ printf("\n");
+ }
+
+ printf("%-22s", bucketName);
+ printf(" %-20s %-12s", ownerId ? ownerId : "", ownerDisplayName ? ownerDisplayName : "");
+ printf("\n");
+
+ return S3StatusOK;
+ }
+
+ S3ListServiceHandler listServiceHandler =
+ {
+ responseHandler,
+ &listServiceCallback
+ };
+ bool header_printed = false;
+ S3_list_service(S3ProtocolHTTP, access_key, secret_key, security_token, host,
+ auth_region, NULL, 0, &listServiceHandler, &header_printed);
+
+
+Creating a Bucket
+-----------------
+
+This creates a new bucket.
+
+.. code-block:: cpp
+
+ S3_create_bucket(S3ProtocolHTTP, access_key, secret_key, NULL, host, sample_bucket, S3CannedAclPrivate, NULL, NULL, &responseHandler, NULL);
+
+
+Listing a Bucket's Content
+--------------------------
+
+This gets a list of objects in the bucket.
+This also prints out each object's name, the file size, and
+last modified date.
+
+.. code-block:: cpp
+
+ static S3Status listBucketCallback(
+ int isTruncated,
+ const char *nextMarker,
+ int contentsCount,
+ const S3ListBucketContent *contents,
+ int commonPrefixesCount,
+ const char **commonPrefixes,
+ void *callbackData)
+ {
+ printf("%-22s", " Object Name");
+ printf(" %-5s %-20s", "Size", " Last Modified");
+ printf("\n");
+ printf("----------------------");
+ printf(" -----" " --------------------");
+ printf("\n");
+
+ for (int i = 0; i < contentsCount; i++) {
+ char timebuf[256];
+ char sizebuf[16];
+ const S3ListBucketContent *content = &(contents[i]);
+ time_t t = (time_t) content->lastModified;
+
+ strftime(timebuf, sizeof(timebuf), "%Y-%m-%dT%H:%M:%SZ", gmtime(&t));
+ sprintf(sizebuf, "%5llu", (unsigned long long) content->size);
+ printf("%-22s %s %s\n", content->key, sizebuf, timebuf);
+ }
+
+ return S3StatusOK;
+ }
+
+ S3ListBucketHandler listBucketHandler =
+ {
+ responseHandler,
+ &listBucketCallback
+ };
+ S3_list_bucket(&bucketContext, NULL, NULL, NULL, 0, NULL, 0, &listBucketHandler, NULL);
+
+The output will look something like this::
+
+ myphoto1.jpg 251262 2011-08-08T21:35:48.000Z
+ myphoto2.jpg 262518 2011-08-08T21:38:01.000Z
+
+
+Deleting a Bucket
+-----------------
+
+.. note::
+
+ The bucket must be empty! Otherwise the request will fail.
+
+.. code-block:: cpp
+
+ S3_delete_bucket(S3ProtocolHTTP, S3UriStylePath, access_key, secret_key, 0, host, sample_bucket, NULL, NULL, 0, &responseHandler, NULL);
+
+
+Creating an Object (from a file)
+--------------------------------
+
+This uploads the local file ``resource/hello.txt`` to the bucket as the object ``hello.txt``.
+
+.. code-block:: cpp
+
+ #include <sys/stat.h>
+ typedef struct put_object_callback_data
+ {
+ FILE *infile;
+ uint64_t contentLength;
+ } put_object_callback_data;
+
+
+ static int putObjectDataCallback(int bufferSize, char *buffer, void *callbackData)
+ {
+ put_object_callback_data *data = (put_object_callback_data *) callbackData;
+
+ int ret = 0;
+
+ if (data->contentLength) {
+ int toRead = ((data->contentLength > (unsigned) bufferSize) ? (unsigned) bufferSize : data->contentLength);
+ ret = fread(buffer, 1, toRead, data->infile);
+ }
+ data->contentLength -= ret;
+ return ret;
+ }
+
+ put_object_callback_data data;
+ struct stat statbuf;
+ if (stat(sample_file, &statbuf) == -1) {
+ fprintf(stderr, "\nERROR: Failed to stat file %s: ", sample_file);
+ perror(0);
+ exit(-1);
+ }
+
+ int contentLength = statbuf.st_size;
+ data.contentLength = contentLength;
+
+ if (!(data.infile = fopen(sample_file, "r"))) {
+ fprintf(stderr, "\nERROR: Failed to open input file %s: ", sample_file);
+ perror(0);
+ exit(-1);
+ }
+
+ S3PutObjectHandler putObjectHandler =
+ {
+ responseHandler,
+ &putObjectDataCallback
+ };
+
+ S3_put_object(&bucketContext, sample_key, contentLength, NULL, NULL, 0, &putObjectHandler, &data);
+ fclose(data.infile);
+
+
+Download an Object (to a file)
+------------------------------
+
+This downloads an object and writes its contents to a file stream (``stdout`` in this example).
+
+.. code-block:: cpp
+
+ static S3Status getObjectDataCallback(int bufferSize, const char *buffer, void *callbackData)
+ {
+ FILE *outfile = (FILE *) callbackData;
+ size_t wrote = fwrite(buffer, 1, bufferSize, outfile);
+ return ((wrote < (size_t) bufferSize) ? S3StatusAbortedByCallback : S3StatusOK);
+ }
+
+ S3GetObjectHandler getObjectHandler =
+ {
+ responseHandler,
+ &getObjectDataCallback
+ };
+ FILE *outfile = stdout;
+ S3_get_object(&bucketContext, sample_key, NULL, 0, 0, NULL, 0, &getObjectHandler, outfile);
+
+
+Delete an Object
+----------------
+
+This deletes an object.
+
+.. code-block:: cpp
+
+ S3ResponseHandler deleteResponseHandler =
+ {
+ 0,
+ &responseCompleteCallback
+ };
+ S3_delete_object(&bucketContext, sample_key, 0, 0, &deleteResponseHandler, 0);
+
+
+Change an Object's ACL
+----------------------
+
+This sets an object's ACL to grant full control to the owner, permission to
+read the ACL to another user, and read access to all users.
+
+
+.. code-block:: cpp
+
+ #include <string.h>
+ char ownerId[] = "owner";
+ char ownerDisplayName[] = "owner";
+ char granteeId[] = "grantee";
+ char granteeDisplayName[] = "grantee";
+
+ S3AclGrant grants[] = {
+ {
+ S3GranteeTypeCanonicalUser,
+ {{}},
+ S3PermissionFullControl
+ },
+ {
+ S3GranteeTypeCanonicalUser,
+ {{}},
+ S3PermissionReadACP
+ },
+ {
+ S3GranteeTypeAllUsers,
+ {{}},
+ S3PermissionRead
+ }
+ };
+
+ strncpy(grants[0].grantee.canonicalUser.id, ownerId, S3_MAX_GRANTEE_USER_ID_SIZE);
+ strncpy(grants[0].grantee.canonicalUser.displayName, ownerDisplayName, S3_MAX_GRANTEE_DISPLAY_NAME_SIZE);
+
+ strncpy(grants[1].grantee.canonicalUser.id, granteeId, S3_MAX_GRANTEE_USER_ID_SIZE);
+ strncpy(grants[1].grantee.canonicalUser.displayName, granteeDisplayName, S3_MAX_GRANTEE_DISPLAY_NAME_SIZE);
+
+ S3_set_acl(&bucketContext, sample_key, ownerId, ownerDisplayName, 3, grants, 0, &responseHandler, 0);
+
+
+Generate Object Download URL (signed)
+-------------------------------------
+
+This generates a signed download URL that will be valid for 5 minutes.
+
+.. code-block:: cpp
+
+ #include <time.h>
+ char buffer[S3_MAX_AUTHENTICATED_QUERY_STRING_SIZE];
+ int64_t expires = time(NULL) + 60 * 5; // Current time + 5 minutes
+
+ S3_generate_authenticated_query_string(buffer, &bucketContext, sample_key, expires, NULL, "GET");
+
diff --git a/doc/radosgw/s3/csharp.rst b/doc/radosgw/s3/csharp.rst
new file mode 100644
index 000000000..af1c6e4b5
--- /dev/null
+++ b/doc/radosgw/s3/csharp.rst
@@ -0,0 +1,199 @@
+.. _csharp:
+
+C# S3 Examples
+==============
+
+Creating a Connection
+---------------------
+
+This creates a connection so that you can interact with the server.
+
+.. code-block:: csharp
+
+ using System;
+ using Amazon;
+ using Amazon.S3;
+ using Amazon.S3.Model;
+
+ string accessKey = "put your access key here!";
+ string secretKey = "put your secret key here!";
+
+ AmazonS3Config config = new AmazonS3Config();
+ config.ServiceURL = "objects.dreamhost.com";
+
+ AmazonS3Client client = new AmazonS3Client(
+ accessKey,
+ secretKey,
+ config
+ );
+
+
+Listing Owned Buckets
+---------------------
+
+This gets a list of Buckets that you own.
+This also prints out the bucket name and creation date of each bucket.
+
+.. code-block:: csharp
+
+ ListBucketsResponse response = client.ListBuckets();
+ foreach (S3Bucket b in response.Buckets)
+ {
+ Console.WriteLine("{0}\t{1}", b.BucketName, b.CreationDate);
+ }
+
+The output will look something like this::
+
+ mahbuckat1 2011-04-21T18:05:39.000Z
+ mahbuckat2 2011-04-21T18:05:48.000Z
+ mahbuckat3 2011-04-21T18:07:18.000Z
+
+
+Creating a Bucket
+-----------------
+This creates a new bucket called ``my-new-bucket``
+
+.. code-block:: csharp
+
+ PutBucketRequest request = new PutBucketRequest();
+ request.BucketName = "my-new-bucket";
+ client.PutBucket(request);
+
+Listing a Bucket's Content
+--------------------------
+
+This gets a list of objects in the bucket.
+This also prints out each object's name, the file size, and last
+modified date.
+
+.. code-block:: csharp
+
+ ListObjectsRequest request = new ListObjectsRequest();
+ request.BucketName = "my-new-bucket";
+ ListObjectsResponse response = client.ListObjects(request);
+ foreach (S3Object o in response.S3Objects)
+ {
+ Console.WriteLine("{0}\t{1}\t{2}", o.Key, o.Size, o.LastModified);
+ }
+
+The output will look something like this::
+
+ myphoto1.jpg 251262 2011-08-08T21:35:48.000Z
+ myphoto2.jpg 262518 2011-08-08T21:38:01.000Z
+
+
+Deleting a Bucket
+-----------------
+
+.. note::
+
+ The bucket must be empty! Otherwise the request will fail.
+
+.. code-block:: csharp
+
+ DeleteBucketRequest request = new DeleteBucketRequest();
+ request.BucketName = "my-new-bucket";
+ client.DeleteBucket(request);
+
+
+Forced Delete for Non-empty Buckets
+-----------------------------------
+
+.. attention::
+
+ not available
+
+
+Creating an Object
+------------------
+
+This creates a file ``hello.txt`` with the string ``"Hello World!"``
+
+.. code-block:: csharp
+
+ PutObjectRequest request = new PutObjectRequest();
+ request.BucketName = "my-new-bucket";
+ request.Key = "hello.txt";
+ request.ContentType = "text/plain";
+ request.ContentBody = "Hello World!";
+ client.PutObject(request);
+
+
+Change an Object's ACL
+----------------------
+
+This makes the object ``hello.txt`` publicly readable and
+``secret_plans.txt`` private.
+
+.. code-block:: csharp
+
+ PutACLRequest request = new PutACLRequest();
+ request.BucketName = "my-new-bucket";
+ request.Key = "hello.txt";
+ request.CannedACL = S3CannedACL.PublicRead;
+ client.PutACL(request);
+
+ PutACLRequest request2 = new PutACLRequest();
+ request2.BucketName = "my-new-bucket";
+ request2.Key = "secret_plans.txt";
+ request2.CannedACL = S3CannedACL.Private;
+ client.PutACL(request2);
+
+
+Download an Object (to a file)
+------------------------------
+
+This downloads the object ``perl_poetry.pdf`` and saves it in
+``C:\Users\larry\Documents``
+
+.. code-block:: csharp
+
+ GetObjectRequest request = new GetObjectRequest();
+ request.BucketName = "my-new-bucket";
+ request.Key = "perl_poetry.pdf";
+ GetObjectResponse response = client.GetObject(request);
+ response.WriteResponseStreamToFile("C:\\Users\\larry\\Documents\\perl_poetry.pdf");
+
+
+Delete an Object
+----------------
+
+This deletes the object ``goodbye.txt``
+
+.. code-block:: csharp
+
+ DeleteObjectRequest request = new DeleteObjectRequest();
+ request.BucketName = "my-new-bucket";
+ request.Key = "goodbye.txt";
+ client.DeleteObject(request);
+
+
+Generate Object Download URLs (signed and unsigned)
+---------------------------------------------------
+
+This generates an unsigned download URL for ``hello.txt``. This works
+because we made ``hello.txt`` public by setting the ACL above.
+This then generates a signed download URL for ``secret_plans.txt`` that
+will work for 1 hour. Signed download URLs will work for the time
+period even if the object is private (when the time period is up, the
+URL will stop working).
+
+.. note::
+
+ The C# S3 Library does not have a method for generating unsigned
+ URLs, so the following example only shows generating signed URLs.
+
+.. code-block:: csharp
+
+ GetPreSignedUrlRequest request = new GetPreSignedUrlRequest();
+ request.BucketName = "my-bucket-name";
+ request.Key = "secret_plans.txt";
+ request.Expires = DateTime.Now.AddHours(1);
+ request.Protocol = Protocol.HTTP;
+ string url = client.GetPreSignedURL(request);
+ Console.WriteLine(url);
+
+The output of this will look something like::
+
+ http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&Expires=1316027075&AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX
+
diff --git a/doc/radosgw/s3/java.rst b/doc/radosgw/s3/java.rst
new file mode 100644
index 000000000..057c09c2c
--- /dev/null
+++ b/doc/radosgw/s3/java.rst
@@ -0,0 +1,212 @@
+.. _java:
+
+Java S3 Examples
+================
+
+Setup
+-----
+
+The following examples may require some or all of the following Java
+classes to be imported:
+
+.. code-block:: java
+
+ import java.io.ByteArrayInputStream;
+ import java.io.File;
+ import java.util.List;
+ import com.amazonaws.auth.AWSCredentials;
+ import com.amazonaws.auth.BasicAWSCredentials;
+ import com.amazonaws.util.StringUtils;
+ import com.amazonaws.services.s3.AmazonS3;
+ import com.amazonaws.services.s3.AmazonS3Client;
+ import com.amazonaws.services.s3.model.Bucket;
+ import com.amazonaws.services.s3.model.CannedAccessControlList;
+ import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
+ import com.amazonaws.services.s3.model.GetObjectRequest;
+ import com.amazonaws.services.s3.model.ObjectListing;
+ import com.amazonaws.services.s3.model.ObjectMetadata;
+ import com.amazonaws.services.s3.model.S3ObjectSummary;
+
+
+If you are just testing the Ceph Object Storage services, consider
+using the HTTP protocol instead of HTTPS.
+
+First, import the ``ClientConfiguration`` and ``Protocol`` classes.
+
+.. code-block:: java
+
+ import com.amazonaws.ClientConfiguration;
+ import com.amazonaws.Protocol;
+
+
+Then, define the client configuration, and add the client configuration
+as an argument for the S3 client.
+
+.. code-block:: java
+
+ AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
+
+ ClientConfiguration clientConfig = new ClientConfiguration();
+ clientConfig.setProtocol(Protocol.HTTP);
+
+ AmazonS3 conn = new AmazonS3Client(credentials, clientConfig);
+ conn.setEndpoint("endpoint.com");
+
+
+Creating a Connection
+---------------------
+
+This creates a connection so that you can interact with the server.
+
+.. code-block:: java
+
+ String accessKey = "insert your access key here!";
+ String secretKey = "insert your secret key here!";
+
+ AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
+ AmazonS3 conn = new AmazonS3Client(credentials);
+ conn.setEndpoint("objects.dreamhost.com");
+
+
+Listing Owned Buckets
+---------------------
+
+This gets a list of Buckets that you own.
+This also prints out the bucket name and creation date of each bucket.
+
+.. code-block:: java
+
+ List<Bucket> buckets = conn.listBuckets();
+ for (Bucket bucket : buckets) {
+ System.out.println(bucket.getName() + "\t" +
+ StringUtils.fromDate(bucket.getCreationDate()));
+ }
+
+The output will look something like this::
+
+ mahbuckat1 2011-04-21T18:05:39.000Z
+ mahbuckat2 2011-04-21T18:05:48.000Z
+ mahbuckat3 2011-04-21T18:07:18.000Z
+
+
+Creating a Bucket
+-----------------
+
+This creates a new bucket called ``my-new-bucket``
+
+.. code-block:: java
+
+ Bucket bucket = conn.createBucket("my-new-bucket");
+
+
+Listing a Bucket's Content
+--------------------------
+This gets a list of objects in the bucket.
+This also prints out each object's name, the file size, and last
+modified date.
+
+.. code-block:: java
+
+ ObjectListing objects = conn.listObjects(bucket.getName());
+ while (true) {
+     // Print the current batch before checking whether more batches remain;
+     // a do/while that fetches the next batch first would drop the last one.
+     for (S3ObjectSummary objectSummary : objects.getObjectSummaries()) {
+         System.out.println(objectSummary.getKey() + "\t" +
+             objectSummary.getSize() + "\t" +
+             StringUtils.fromDate(objectSummary.getLastModified()));
+     }
+     if (!objects.isTruncated()) {
+         break;
+     }
+     objects = conn.listNextBatchOfObjects(objects);
+ }
+
+The output will look something like this::
+
+ myphoto1.jpg 251262 2011-08-08T21:35:48.000Z
+ myphoto2.jpg 262518 2011-08-08T21:38:01.000Z
+
+
+Deleting a Bucket
+-----------------
+
+.. note::
+ The bucket must be empty! Otherwise the request will fail.
+
+.. code-block:: java
+
+ conn.deleteBucket(bucket.getName());
+
+
+Forced Delete for Non-empty Buckets
+-----------------------------------
+.. attention::
+ not available
+
+
+Creating an Object
+------------------
+
+This creates a file ``hello.txt`` with the string ``"Hello World!"``
+
+.. code-block:: java
+
+ ByteArrayInputStream input = new ByteArrayInputStream("Hello World!".getBytes());
+ conn.putObject(bucket.getName(), "hello.txt", input, new ObjectMetadata());
+
+
+Change an Object's ACL
+----------------------
+
+This makes the object ``hello.txt`` publicly readable and
+``secret_plans.txt`` private.
+
+.. code-block:: java
+
+ conn.setObjectAcl(bucket.getName(), "hello.txt", CannedAccessControlList.PublicRead);
+ conn.setObjectAcl(bucket.getName(), "secret_plans.txt", CannedAccessControlList.Private);
+
+
+Download an Object (to a file)
+------------------------------
+
+This downloads the object ``perl_poetry.pdf`` and saves it in
+``/home/larry/documents``
+
+.. code-block:: java
+
+ conn.getObject(
+ new GetObjectRequest(bucket.getName(), "perl_poetry.pdf"),
+ new File("/home/larry/documents/perl_poetry.pdf")
+ );
+
+
+Delete an Object
+----------------
+
+This deletes the object ``goodbye.txt``
+
+.. code-block:: java
+
+ conn.deleteObject(bucket.getName(), "goodbye.txt");
+
+
+Generate Object Download URLs (signed and unsigned)
+---------------------------------------------------
+
+This generates an unsigned download URL for ``hello.txt``. This works
+because we made ``hello.txt`` public by setting the ACL above.
+This then generates a signed download URL for ``secret_plans.txt`` that
+will work for 1 hour. Signed download URLs will work for the time
+period even if the object is private (when the time period is up, the
+URL will stop working).
+
+.. note::
+ The Java library does not have a method for generating unsigned
+ URLs, so the example below just generates a signed URL.
+
+.. code-block:: java
+
+ GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest(bucket.getName(), "secret_plans.txt");
+ System.out.println(conn.generatePresignedUrl(request));
+
+The output will look something like this::
+
+ https://my-bucket-name.objects.dreamhost.com/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&Expires=1316027075&AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX
+
diff --git a/doc/radosgw/s3/objectops.rst b/doc/radosgw/s3/objectops.rst
new file mode 100644
index 000000000..2ac52607f
--- /dev/null
+++ b/doc/radosgw/s3/objectops.rst
@@ -0,0 +1,558 @@
+Object Operations
+=================
+
+Put Object
+----------
+Adds an object to a bucket. You must have write permissions on the bucket to perform this operation.
+
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{bucket}/{object} HTTP/1.1
+
+Request Headers
+~~~~~~~~~~~~~~~
+
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| Name | Description | Valid Values | Required |
++======================+============================================+===============================================================================+============+
+| **content-md5** | A base64 encoded MD5 hash of the message. | A string. No defaults or constraints. | No |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| **content-type** | A standard MIME type. | Any MIME type. Default: ``binary/octet-stream`` | No |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| **x-amz-meta-<...>** | User metadata. Stored with the object. | A string up to 8 KB. No defaults. | No |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| **x-amz-acl** | A canned ACL. | ``private``, ``public-read``, ``public-read-write``, ``authenticated-read`` | No |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+
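+The headers above can be assembled with any HTTP client. The following
+is a minimal sketch in Python; the ``put_object_headers`` helper and the
+sample metadata are illustrative, not part of the API:
+
+.. code-block:: python
+
+    import base64
+    import hashlib
+
+    def put_object_headers(body, content_type="binary/octet-stream",
+                           acl="private", metadata=None):
+        """Build the request headers for PUT /{bucket}/{object}."""
+        headers = {
+            # content-md5 is the base64-encoded (not hex) MD5 digest of the body.
+            "content-md5": base64.b64encode(hashlib.md5(body).digest()).decode(),
+            "content-type": content_type,
+            "x-amz-acl": acl,
+        }
+        # User metadata is stored with the object under the x-amz-meta- prefix.
+        for key, value in (metadata or {}).items():
+            headers["x-amz-meta-" + key] = value
+        return headers
+
+    headers = put_object_headers(b"Hello World!", content_type="text/plain",
+                                 acl="public-read", metadata={"author": "larry"})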
+
+Copy Object
+-----------
+To copy an object, use ``PUT`` and specify a destination bucket and the object name.
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{dest-bucket}/{dest-object} HTTP/1.1
+ x-amz-copy-source: {source-bucket}/{source-object}
+
+Request Headers
+~~~~~~~~~~~~~~~
+
++--------------------------------------+-------------------------------------------------+------------------------+------------+
+| Name | Description | Valid Values | Required |
++======================================+=================================================+========================+============+
+| **x-amz-copy-source** | The source bucket name + object name. | {bucket}/{obj} | Yes |
++--------------------------------------+-------------------------------------------------+------------------------+------------+
+| **x-amz-acl** | A canned ACL. | ``private``, | No |
+| | | ``public-read``, | |
+| | | ``public-read-write``, | |
+| | | ``authenticated-read`` | |
++--------------------------------------+-------------------------------------------------+------------------------+------------+
+| **x-amz-copy-if-modified-since** | Copies only if modified since the timestamp. | Timestamp | No |
++--------------------------------------+-------------------------------------------------+------------------------+------------+
+| **x-amz-copy-if-unmodified-since** | Copies only if unmodified since the timestamp. | Timestamp | No |
++--------------------------------------+-------------------------------------------------+------------------------+------------+
+| **x-amz-copy-if-match** | Copies only if object ETag matches ETag. | Entity Tag | No |
++--------------------------------------+-------------------------------------------------+------------------------+------------+
+| **x-amz-copy-if-none-match** | Copies only if object ETag doesn't match. | Entity Tag | No |
++--------------------------------------+-------------------------------------------------+------------------------+------------+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++------------------------+-------------+-----------------------------------------------+
+| Name | Type | Description |
++========================+=============+===============================================+
+| **CopyObjectResult** | Container | A container for the response elements. |
++------------------------+-------------+-----------------------------------------------+
+| **LastModified** | Date | The last modified date of the source object. |
++------------------------+-------------+-----------------------------------------------+
+| **ETag** | String | The ETag of the new object. |
++------------------------+-------------+-----------------------------------------------+
+
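+The copy headers can be collected in one place before issuing the
+request. The sketch below is in Python; the ``copy_object_headers``
+helper and its argument names are illustrative:
+
+.. code-block:: python
+
+    def copy_object_headers(source_bucket, source_object, acl=None,
+                            if_match=None, if_unmodified_since=None):
+        """Build headers for PUT /{dest-bucket}/{dest-object} copying
+        from {source-bucket}/{source-object}."""
+        headers = {"x-amz-copy-source": source_bucket + "/" + source_object}
+        if acl is not None:
+            headers["x-amz-acl"] = acl
+        if if_match is not None:
+            # Copy only when the source object's ETag matches this value.
+            headers["x-amz-copy-if-match"] = if_match
+        if if_unmodified_since is not None:
+            headers["x-amz-copy-if-unmodified-since"] = if_unmodified_since
+        return headers
+
+    headers = copy_object_headers("mahbuckat1", "myphoto1.jpg", acl="public-read")
+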
+Remove Object
+-------------
+
+Removes an object. Requires WRITE permission set on the containing bucket.
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{bucket}/{object} HTTP/1.1
+
+
+
+Get Object
+----------
+Retrieves an object from a bucket within RADOS.
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{bucket}/{object} HTTP/1.1
+
+Request Headers
+~~~~~~~~~~~~~~~
+
++---------------------------+------------------------------------------------+--------------------------------+------------+
+| Name | Description | Valid Values | Required |
++===========================+================================================+================================+============+
+| **range** | The range of the object to retrieve. | Range: bytes=beginbyte-endbyte | No |
++---------------------------+------------------------------------------------+--------------------------------+------------+
+| **if-modified-since** | Gets only if modified since the timestamp. | Timestamp | No |
++---------------------------+------------------------------------------------+--------------------------------+------------+
+| **if-unmodified-since** | Gets only if not modified since the timestamp. | Timestamp | No |
++---------------------------+------------------------------------------------+--------------------------------+------------+
+| **if-match** | Gets only if object ETag matches ETag. | Entity Tag | No |
++---------------------------+------------------------------------------------+--------------------------------+------------+
+| **if-none-match** | Gets only if object ETag doesn't match. | Entity Tag | No |
++---------------------------+------------------------------------------------+--------------------------------+------------+
+
+Response Headers
+~~~~~~~~~~~~~~~~
+
++-------------------+--------------------------------------------------------------------------------------------+
+| Name | Description |
++===================+============================================================================================+
+| **Content-Range** | Data range, will only be returned if the range header field was specified in the request |
++-------------------+--------------------------------------------------------------------------------------------+
+
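+For ranged reads, the ``range`` request header and the ``Content-Range``
+response header pair up as follows. This is a minimal sketch in Python;
+both helper names are made up for illustration:
+
+.. code-block:: python
+
+    import re
+
+    def range_header(begin_byte, end_byte):
+        """Request only bytes begin_byte..end_byte (inclusive) of the object."""
+        return {"range": "bytes=%d-%d" % (begin_byte, end_byte)}
+
+    def parse_content_range(value):
+        """Split a 'bytes <first>-<last>/<total>' Content-Range value into ints."""
+        match = re.fullmatch(r"bytes (\d+)-(\d+)/(\d+)", value)
+        if match is None:
+            raise ValueError("unexpected Content-Range: " + value)
+        return tuple(int(group) for group in match.groups())
+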
+Get Object Info
+---------------
+
+Returns information about an object. This request will return the same
+header information as with the Get Object request, but will include
+the metadata only, not the object data payload.
+
+Syntax
+~~~~~~
+
+::
+
+ HEAD /{bucket}/{object} HTTP/1.1
+
+Request Headers
+~~~~~~~~~~~~~~~
+
++---------------------------+------------------------------------------------+--------------------------------+------------+
+| Name | Description | Valid Values | Required |
++===========================+================================================+================================+============+
+| **range** | The range of the object to retrieve. | Range: bytes=beginbyte-endbyte | No |
++---------------------------+------------------------------------------------+--------------------------------+------------+
+| **if-modified-since** | Gets only if modified since the timestamp. | Timestamp | No |
++---------------------------+------------------------------------------------+--------------------------------+------------+
+| **if-unmodified-since** | Gets only if not modified since the timestamp. | Timestamp | No |
++---------------------------+------------------------------------------------+--------------------------------+------------+
+| **if-match** | Gets only if object ETag matches ETag. | Entity Tag | No |
++---------------------------+------------------------------------------------+--------------------------------+------------+
+| **if-none-match** | Gets only if object ETag doesn't match. | Entity Tag | No |
++---------------------------+------------------------------------------------+--------------------------------+------------+
+
+Get Object ACL
+--------------
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{bucket}/{object}?acl HTTP/1.1
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| Name | Type | Description |
++===========================+=============+==============================================================================================+
+| ``AccessControlPolicy`` | Container | A container for the response. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``AccessControlList`` | Container | A container for the ACL information. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Owner`` | Container | A container for the object owner's ``ID`` and ``DisplayName``. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``ID`` | String | The object owner's ID. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``DisplayName`` | String | The object owner's display name. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Grant`` | Container | A container for ``Grantee`` and ``Permission``. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Grantee`` | Container | A container for the ``DisplayName`` and ``ID`` of the user receiving a grant of permission. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Permission`` | String | The permission given to the ``Grantee`` object. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+
+
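+The response body is an XML document matching the entities above. A
+minimal way to read it with Python's standard library (the sample
+document and names are illustrative, and XML namespace handling is
+omitted for brevity):
+
+.. code-block:: python
+
+    import xml.etree.ElementTree as ET
+
+    acl_xml = """<AccessControlPolicy>
+      <Owner><ID>larry</ID><DisplayName>Larry</DisplayName></Owner>
+      <AccessControlList>
+        <Grant>
+          <Grantee><ID>curly</ID><DisplayName>Curly</DisplayName></Grantee>
+          <Permission>READ</Permission>
+        </Grant>
+      </AccessControlList>
+    </AccessControlPolicy>"""
+
+    policy = ET.fromstring(acl_xml)
+    owner_id = policy.find("Owner/ID").text
+    grants = [(grant.find("Grantee/ID").text, grant.find("Permission").text)
+              for grant in policy.findall("AccessControlList/Grant")]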
+
+Set Object ACL
+--------------
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{bucket}/{object}?acl HTTP/1.1
+
+Request Entities
+~~~~~~~~~~~~~~~~
+
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| Name | Type | Description |
++===========================+=============+==============================================================================================+
+| ``AccessControlPolicy`` | Container | A container for the response. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``AccessControlList`` | Container | A container for the ACL information. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Owner`` | Container | A container for the object owner's ``ID`` and ``DisplayName``. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``ID`` | String | The object owner's ID. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``DisplayName`` | String | The object owner's display name. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Grant`` | Container | A container for ``Grantee`` and ``Permission``. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Grantee`` | Container | A container for the ``DisplayName`` and ``ID`` of the user receiving a grant of permission. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Permission`` | String | The permission given to the ``Grantee`` object. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+
+
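+The request body is the same ``AccessControlPolicy`` document. It can
+be serialized with Python's standard library; the ``build_acl`` helper
+and its inputs are illustrative, not part of the API:
+
+.. code-block:: python
+
+    import xml.etree.ElementTree as ET
+
+    def build_acl(owner_id, owner_name, grants):
+        """Serialize an AccessControlPolicy from
+        (grantee_id, grantee_name, permission) tuples."""
+        policy = ET.Element("AccessControlPolicy")
+        owner = ET.SubElement(policy, "Owner")
+        ET.SubElement(owner, "ID").text = owner_id
+        ET.SubElement(owner, "DisplayName").text = owner_name
+        acl = ET.SubElement(policy, "AccessControlList")
+        for grantee_id, grantee_name, permission in grants:
+            grant = ET.SubElement(acl, "Grant")
+            grantee = ET.SubElement(grant, "Grantee")
+            ET.SubElement(grantee, "ID").text = grantee_id
+            ET.SubElement(grantee, "DisplayName").text = grantee_name
+            ET.SubElement(grant, "Permission").text = permission
+        return ET.tostring(policy, encoding="unicode")
+
+    body = build_acl("larry", "Larry", [("curly", "Curly", "FULL_CONTROL")])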
+
+Initiate Multi-part Upload
+--------------------------
+
+Initiate a multi-part upload process.
+
+Syntax
+~~~~~~
+
+::
+
+ POST /{bucket}/{object}?uploads HTTP/1.1
+
+Request Headers
+~~~~~~~~~~~~~~~
+
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| Name | Description | Valid Values | Required |
++======================+============================================+===============================================================================+============+
+| **content-md5** | A base64 encoded MD5 hash of the message. | A string. No defaults or constraints. | No |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| **content-type** | A standard MIME type. | Any MIME type. Default: ``binary/octet-stream`` | No |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| **x-amz-meta-<...>** | User metadata. Stored with the object. | A string up to 8kb. No defaults. | No |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| **x-amz-acl** | A canned ACL. | ``private``, ``public-read``, ``public-read-write``, ``authenticated-read`` | No |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| Name | Type | Description |
++=========================================+=============+==========================================================================================================+
+| ``InitiateMultipartUploadResult``       | Container   | A container for the results.                                                                                 |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Bucket`` | String | The bucket that will receive the object contents. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Key`` | String | The key specified by the ``key`` request parameter (if any). |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``UploadId`` | String | The ID specified by the ``upload-id`` request parameter identifying the multipart upload (if any). |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+
+
+Multipart Upload Part
+---------------------
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{bucket}/{object}?partNumber=&uploadId= HTTP/1.1
+
+HTTP Response
+~~~~~~~~~~~~~
+
+The following HTTP response may be returned:
+
++---------------+----------------+--------------------------------------------------------------------------+
+| HTTP Status | Status Code | Description |
++===============+================+==========================================================================+
+| **404** | NoSuchUpload | Specified upload-id does not match any initiated upload on this object |
++---------------+----------------+--------------------------------------------------------------------------+
+
+List Multipart Upload Parts
+---------------------------
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{bucket}/{object}?uploadId=123 HTTP/1.1
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| Name | Type | Description |
++=========================================+=============+==========================================================================================================+
+| ``ListPartsResult`` | Container | A container for the results. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Bucket`` | String | The bucket that will receive the object contents. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Key`` | String | The key specified by the ``key`` request parameter (if any). |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``UploadId`` | String | The ID specified by the ``upload-id`` request parameter identifying the multipart upload (if any). |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Initiator`` | Container | Contains the ``ID`` and ``DisplayName`` of the user who initiated the upload. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``ID`` | String | The initiator's ID. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``DisplayName`` | String | The initiator's display name. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Owner`` | Container | A container for the ``ID`` and ``DisplayName`` of the user who owns the uploaded object. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``StorageClass`` | String | The method used to store the resulting object. ``STANDARD`` or ``REDUCED_REDUNDANCY`` |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``PartNumberMarker`` | String | The part marker to use in a subsequent request if ``IsTruncated`` is ``true``. Precedes the list. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``NextPartNumberMarker`` | String | The next part marker to use in a subsequent request if ``IsTruncated`` is ``true``. The end of the list. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``MaxParts`` | Integer | The max parts allowed in the response as specified by the ``max-parts`` request parameter. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``IsTruncated`` | Boolean | If ``true``, only a subset of the object's upload contents were returned. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Part`` | Container | A container for ``LastModified``, ``PartNumber``, ``ETag`` and ``Size`` elements. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``LastModified`` | Date | Date and time at which the part was uploaded. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``PartNumber`` | Integer | The identification number of the part. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``ETag`` | String | The part's entity tag. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Size`` | Integer | The size of the uploaded part. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+
+
+
+Complete Multipart Upload
+-------------------------
+Assembles uploaded parts and creates a new object, thereby completing a multipart upload.
+
+Syntax
+~~~~~~
+
+::
+
+ POST /{bucket}/{object}?uploadId= HTTP/1.1
+
+Request Entities
+~~~~~~~~~~~~~~~~
+
++----------------------------------+-------------+-----------------------------------------------------+----------+
+| Name | Type | Description | Required |
++==================================+=============+=====================================================+==========+
+| ``CompleteMultipartUpload`` | Container | A container consisting of one or more parts. | Yes |
++----------------------------------+-------------+-----------------------------------------------------+----------+
+| ``Part`` | Container | A container for the ``PartNumber`` and ``ETag``. | Yes |
++----------------------------------+-------------+-----------------------------------------------------+----------+
+| ``PartNumber`` | Integer | The identifier of the part. | Yes |
++----------------------------------+-------------+-----------------------------------------------------+----------+
+| ``ETag`` | String | The part's entity tag. | Yes |
++----------------------------------+-------------+-----------------------------------------------------+----------+
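+
+The request body is an XML document with the structure shown above. As
+an illustration, a minimal Python sketch (standard library only; the
+part numbers and ETags are placeholders) that builds such a body:
+
+.. code-block:: python
+
+    import xml.etree.ElementTree as ET
+
+    def build_complete_body(parts):
+        """Build the CompleteMultipartUpload request body.
+
+        ``parts`` is a list of (part_number, etag) tuples in ascending
+        part-number order, as returned by each Upload Part response.
+        """
+        root = ET.Element('CompleteMultipartUpload')
+        for part_number, etag in parts:
+            part = ET.SubElement(root, 'Part')
+            ET.SubElement(part, 'PartNumber').text = str(part_number)
+            ET.SubElement(part, 'ETag').text = etag
+        return ET.tostring(root, encoding='unicode')
+
+    body = build_complete_body([(1, '"900150983cd24fb0"'),
+                                (2, '"d6963f7d28e17f72"')])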
+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++-------------------------------------+-------------+-------------------------------------------------------+
+| Name | Type | Description |
++=====================================+=============+=======================================================+
+| **CompleteMultipartUploadResult** | Container | A container for the response. |
++-------------------------------------+-------------+-------------------------------------------------------+
+| **Location** | URI | The resource identifier (path) of the new object. |
++-------------------------------------+-------------+-------------------------------------------------------+
+| **Bucket** | String | The name of the bucket that contains the new object. |
++-------------------------------------+-------------+-------------------------------------------------------+
+| **Key** | String | The object's key. |
++-------------------------------------+-------------+-------------------------------------------------------+
+| **ETag** | String | The entity tag of the new object. |
++-------------------------------------+-------------+-------------------------------------------------------+
+
+Abort Multipart Upload
+----------------------
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{bucket}/{object}?uploadId= HTTP/1.1
+
+
+
+Append Object
+-------------
+Append data to an object. You must have write permission on the bucket to
+perform this operation. This operation uploads data in appending mode:
+objects created by the Append Object operation are of type Appendable
+Object, while objects uploaded with the Put Object operation are of type
+Normal Object.
+**Append Object cannot be used if bucket versioning is enabled or suspended.**
+**In multisite configurations, the synced copy becomes a normal object, but you can still append to the original object.**
+**Compression and encryption are disabled for appendable objects.**
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{bucket}/{object}?append&position= HTTP/1.1
+
+Request Headers
+~~~~~~~~~~~~~~~
+
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| Name | Description | Valid Values | Required |
++======================+============================================+===============================================================================+============+
+| **content-md5**      | A base64-encoded MD5 hash of the message.  | A string. No defaults or constraints.                                         | No         |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| **content-type** | A standard MIME type. | Any MIME type. Default: ``binary/octet-stream`` | No |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| **x-amz-meta-<...>** | User metadata. Stored with the object. | A string up to 8kb. No defaults. | No |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| **x-amz-acl** | A canned ACL. | ``private``, ``public-read``, ``public-read-write``, ``authenticated-read`` | No |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+
+Response Headers
+~~~~~~~~~~~~~~~~
+
++--------------------------------+------------------------------------------------------------------+
+| Name | Description |
++================================+==================================================================+
+| **x-rgw-next-append-position** | The next position to use when appending to this object          |
++--------------------------------+------------------------------------------------------------------+
+
+HTTP Response
+~~~~~~~~~~~~~
+
+The following HTTP response may be returned:
+
++---------------+----------------------------+---------------------------------------------------+
+| HTTP Status | Status Code | Description |
++===============+============================+===================================================+
+| **409** | PositionNotEqualToLength | Specified position does not match object length |
++---------------+----------------------------+---------------------------------------------------+
+| **409**       | ObjectNotAppendable        | Specified object cannot be appended               |
++---------------+----------------------------+---------------------------------------------------+
+| **409**       | InvalidBucketState         | Bucket versioning is enabled or suspended         |
++---------------+----------------------------+---------------------------------------------------+
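+
+The rule behind ``PositionNotEqualToLength`` is that each append must
+start exactly at the object's current length; the server then reports
+the new length in ``x-rgw-next-append-position``. A small Python model
+of that rule (illustrative only, not RGW code):
+
+.. code-block:: python
+
+    def apply_append(current_length, position, data_length):
+        """Model RGW's append-position check.
+
+        Returns the new object length (the value reported in
+        ``x-rgw-next-append-position``), or raises ValueError to mirror
+        the 409 PositionNotEqualToLength error.
+        """
+        if position != current_length:
+            raise ValueError('PositionNotEqualToLength')
+        return current_length + data_length
+
+    # Append two chunks; each append starts at the current length.
+    length = apply_append(0, 0, 4)            # first append at position 0
+    length = apply_append(length, length, 8)  # next append at position 4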
+
+
+Put Object Retention
+--------------------
+Places an Object Retention configuration on an object.
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{bucket}/{object}?retention&versionId= HTTP/1.1
+
+Request Entities
+~~~~~~~~~~~~~~~~
+
++---------------------+-------------+-------------------------------------------------------------------------------+------------+
+| Name | Type | Description | Required |
++=====================+=============+===============================================================================+============+
+| ``Retention`` | Container | A container for the request. | Yes |
++---------------------+-------------+-------------------------------------------------------------------------------+------------+
+| ``Mode`` | String | Retention mode for the specified object. Valid Values: GOVERNANCE/COMPLIANCE | Yes |
++---------------------+-------------+-------------------------------------------------------------------------------+------------+
+| ``RetainUntilDate`` | Timestamp | Retention date. Format: 2020-01-05T00:00:00.000Z | Yes |
++---------------------+-------------+-------------------------------------------------------------------------------+------------+
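+
+The request body is a small XML document. As an illustration, a Python
+sketch (standard library only) that builds a ``Retention`` body using
+the timestamp format shown above:
+
+.. code-block:: python
+
+    from datetime import datetime, timezone
+    import xml.etree.ElementTree as ET
+
+    def build_retention_body(mode, retain_until):
+        """Build the Retention request body.
+
+        ``mode`` must be GOVERNANCE or COMPLIANCE; ``retain_until`` is
+        a timezone-aware datetime.
+        """
+        if mode not in ('GOVERNANCE', 'COMPLIANCE'):
+            raise ValueError('invalid retention mode')
+        root = ET.Element('Retention')
+        ET.SubElement(root, 'Mode').text = mode
+        ET.SubElement(root, 'RetainUntilDate').text = \
+            retain_until.strftime('%Y-%m-%dT%H:%M:%S.000Z')
+        return ET.tostring(root, encoding='unicode')
+
+    body = build_retention_body(
+        'GOVERNANCE', datetime(2020, 1, 5, tzinfo=timezone.utc))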
+
+
+Get Object Retention
+--------------------
+Gets an Object Retention configuration on an object.
+
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{bucket}/{object}?retention&versionId= HTTP/1.1
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++---------------------+-------------+-------------------------------------------------------------------------------+------------+
+| Name | Type | Description | Required |
++=====================+=============+===============================================================================+============+
+| ``Retention``       | Container   | A container for the response.                                                 | Yes        |
++---------------------+-------------+-------------------------------------------------------------------------------+------------+
+| ``Mode`` | String | Retention mode for the specified object. Valid Values: GOVERNANCE/COMPLIANCE | Yes |
++---------------------+-------------+-------------------------------------------------------------------------------+------------+
+| ``RetainUntilDate`` | Timestamp | Retention date. Format: 2020-01-05T00:00:00.000Z | Yes |
++---------------------+-------------+-------------------------------------------------------------------------------+------------+
+
+
+Put Object Legal Hold
+---------------------
+Applies a Legal Hold configuration to the specified object.
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{bucket}/{object}?legal-hold&versionId= HTTP/1.1
+
+Request Entities
+~~~~~~~~~~~~~~~~
+
++----------------+-------------+----------------------------------------------------------------------------------------+------------+
+| Name | Type | Description | Required |
++================+=============+========================================================================================+============+
+| ``LegalHold`` | Container | A container for the request. | Yes |
++----------------+-------------+----------------------------------------------------------------------------------------+------------+
+| ``Status`` | String | Indicates whether the specified object has a Legal Hold in place. Valid Values: ON/OFF | Yes |
++----------------+-------------+----------------------------------------------------------------------------------------+------------+
+
+
+Get Object Legal Hold
+---------------------
+Gets an object's current Legal Hold status.
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{bucket}/{object}?legal-hold&versionId= HTTP/1.1
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++----------------+-------------+----------------------------------------------------------------------------------------+------------+
+| Name | Type | Description | Required |
++================+=============+========================================================================================+============+
+| ``LegalHold``  | Container   | A container for the response.                                                          | Yes        |
++----------------+-------------+----------------------------------------------------------------------------------------+------------+
+| ``Status`` | String | Indicates whether the specified object has a Legal Hold in place. Valid Values: ON/OFF | Yes |
++----------------+-------------+----------------------------------------------------------------------------------------+------------+
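+
+As an illustration, a Python sketch (standard library only; real
+responses may carry an XML namespace, omitted here) that reads the
+status out of the response body:
+
+.. code-block:: python
+
+    import xml.etree.ElementTree as ET
+
+    def parse_legal_hold(xml_body):
+        """Return the Status element ('ON' or 'OFF') of a LegalHold body."""
+        return ET.fromstring(xml_body).findtext('Status')
+
+    status = parse_legal_hold('<LegalHold><Status>ON</Status></LegalHold>')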
+
diff --git a/doc/radosgw/s3/perl.rst b/doc/radosgw/s3/perl.rst
new file mode 100644
index 000000000..f12e5c698
--- /dev/null
+++ b/doc/radosgw/s3/perl.rst
@@ -0,0 +1,192 @@
+.. _perl:
+
+Perl S3 Examples
+================
+
+Creating a Connection
+---------------------
+
+This creates a connection so that you can interact with the server.
+
+.. code-block:: perl
+
+ use Amazon::S3;
+ my $access_key = 'put your access key here!';
+ my $secret_key = 'put your secret key here!';
+
+ my $conn = Amazon::S3->new({
+     aws_access_key_id     => $access_key,
+     aws_secret_access_key => $secret_key,
+     host                  => 'objects.dreamhost.com',
+     secure                => 1,
+     retry                 => 1,
+ });
+
+
+Listing Owned Buckets
+---------------------
+
+This gets a list of `Amazon::S3::Bucket`_ objects that you own.
+We'll also print out the bucket name and creation date of each bucket.
+
+.. code-block:: perl
+
+ my @buckets = @{$conn->buckets->{buckets} || []};
+ foreach my $bucket (@buckets) {
+     print $bucket->bucket . "\t" . $bucket->creation_date . "\n";
+ }
+
+The output will look something like this::
+
+ mahbuckat1 2011-04-21T18:05:39.000Z
+ mahbuckat2 2011-04-21T18:05:48.000Z
+ mahbuckat3 2011-04-21T18:07:18.000Z
+
+
+Creating a Bucket
+-----------------
+
+This creates a new bucket called ``my-new-bucket``
+
+.. code-block:: perl
+
+ my $bucket = $conn->add_bucket({ bucket => 'my-new-bucket' });
+
+
+Listing a Bucket's Content
+--------------------------
+
+This gets a list of hashes with info about each object in the bucket.
+We'll also print out each object's name, the file size, and last
+modified date.
+
+.. code-block:: perl
+
+ my @keys = @{$bucket->list_all->{keys} || []};
+ foreach my $key (@keys) {
+     print "$key->{key}\t$key->{size}\t$key->{last_modified}\n";
+ }
+
+The output will look something like this::
+
+ myphoto1.jpg 251262 2011-08-08T21:35:48.000Z
+ myphoto2.jpg 262518 2011-08-08T21:38:01.000Z
+
+
+Deleting a Bucket
+-----------------
+
+.. note::
+ The Bucket must be empty! Otherwise it won't work!
+
+.. code-block:: perl
+
+ $conn->delete_bucket($bucket);
+
+
+Forced Delete for Non-empty Buckets
+-----------------------------------
+
+.. attention::
+
+   Not available in the `Amazon::S3`_ Perl module.
+
+
+Creating an Object
+------------------
+
+This creates a file ``hello.txt`` with the string ``"Hello World!"``
+
+.. code-block:: perl
+
+ $bucket->add_key(
+     'hello.txt', 'Hello World!',
+     { content_type => 'text/plain' },
+ );
+
+
+Change an Object's ACL
+----------------------
+
+This makes the object ``hello.txt`` publicly readable and
+``secret_plans.txt`` private.
+
+.. code-block:: perl
+
+ $bucket->set_acl({
+     key       => 'hello.txt',
+     acl_short => 'public-read',
+ });
+ $bucket->set_acl({
+     key       => 'secret_plans.txt',
+     acl_short => 'private',
+ });
+
+
+Download an Object (to a file)
+------------------------------
+
+This downloads the object ``perl_poetry.pdf`` and saves it in
+``/home/larry/documents/``
+
+.. code-block:: perl
+
+ $bucket->get_key_filename('perl_poetry.pdf', undef,
+     '/home/larry/documents/perl_poetry.pdf');
+
+
+Delete an Object
+----------------
+
+This deletes the object ``goodbye.txt``
+
+.. code-block:: perl
+
+ $bucket->delete_key('goodbye.txt');
+
+Generate Object Download URLs (signed and unsigned)
+---------------------------------------------------
+This generates an unsigned download URL for ``hello.txt``. This works
+because we made ``hello.txt`` public by setting the ACL above.
+Then this generates a signed download URL for ``secret_plans.txt`` that
+will work for 1 hour. Signed download URLs will work for the time
+period even if the object is private (when the time period is up, the
+URL will stop working).
+
+.. note::
+   The `Amazon::S3`_ module does not have a way to generate download
+   URLs, so we use another module instead. Unfortunately, most modules
+   for generating these URLs assume that you are using Amazon, so we
+   have gone with a more obscure module, `Muck::FS::S3`_. It should be
+   the same as Amazon's sample S3 Perl module, but that sample module
+   is not on CPAN. You can either use CPAN to install `Muck::FS::S3`_,
+   or install Amazon's sample S3 module manually. If you go the manual
+   route, you can remove ``Muck::FS::`` from the example below.
+
+.. code-block:: perl
+
+ use Muck::FS::S3::QueryStringAuthGenerator;
+ my $generator = Muck::FS::S3::QueryStringAuthGenerator->new(
+     $access_key,
+     $secret_key,
+     0,  # 0 means use 'http'. set this to 1 for 'https'
+     'objects.dreamhost.com',
+ );
+
+ my $hello_url = $generator->make_bare_url($bucket->bucket, 'hello.txt');
+ print $hello_url . "\n";
+
+ $generator->expires_in(3600); # 1 hour = 3600 seconds
+ my $plans_url = $generator->get($bucket->bucket, 'secret_plans.txt');
+ print $plans_url . "\n";
+
+The output will look something like this::
+
+ http://objects.dreamhost.com:80/my-bucket-name/hello.txt
+ http://objects.dreamhost.com:80/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&Expires=1316027075&AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX
+
+
+.. _`Amazon::S3`: http://search.cpan.org/~tima/Amazon-S3-0.441/lib/Amazon/S3.pm
+.. _`Amazon::S3::Bucket`: http://search.cpan.org/~tima/Amazon-S3-0.441/lib/Amazon/S3/Bucket.pm
+.. _`Muck::FS::S3`: http://search.cpan.org/~mike/Muck-0.02/
+
diff --git a/doc/radosgw/s3/php.rst b/doc/radosgw/s3/php.rst
new file mode 100644
index 000000000..4878a3489
--- /dev/null
+++ b/doc/radosgw/s3/php.rst
@@ -0,0 +1,214 @@
+.. _php:
+
+PHP S3 Examples
+===============
+
+Installing AWS PHP SDK
+----------------------
+
+This installs the AWS SDK for PHP using Composer (see here_ for how to install Composer).
+
+.. _here: https://getcomposer.org/download/
+
+.. code-block:: bash
+
+ $ composer require aws/aws-sdk-php
+
+Creating a Connection
+---------------------
+
+This creates a connection so that you can interact with the server.
+
+.. note::
+
+ The client initialization requires a region so we use ``''``.
+
+.. code-block:: php
+
+ <?php
+
+ use Aws\S3\S3Client;
+
+ define('AWS_KEY', 'place access key here');
+ define('AWS_SECRET_KEY', 'place secret key here');
+ $ENDPOINT = 'http://objects.dreamhost.com';
+
+ // require the amazon sdk from your composer vendor dir
+ require __DIR__.'/vendor/autoload.php';
+
+ // Instantiate the S3 class and point it at the desired host
+ $client = new S3Client([
+     'region' => '',
+     'version' => '2006-03-01',
+     'endpoint' => $ENDPOINT,
+     'credentials' => [
+         'key' => AWS_KEY,
+         'secret' => AWS_SECRET_KEY
+     ],
+     // Set the S3 class to use objects.dreamhost.com/bucket
+     // instead of bucket.objects.dreamhost.com
+     'use_path_style_endpoint' => true
+ ]);
+
+Listing Owned Buckets
+---------------------
+This returns an ``Aws\Result`` instance whose contents can be accessed
+like an array. This also prints out the bucket name and creation date of each bucket.
+
+.. code-block:: php
+
+ <?php
+ $listResponse = $client->listBuckets();
+ $buckets = $listResponse['Buckets'];
+ foreach ($buckets as $bucket) {
+     echo $bucket['Name'] . "\t" . $bucket['CreationDate'] . "\n";
+ }
+
+The output will look something like this::
+
+ mahbuckat1 2011-04-21T18:05:39.000Z
+ mahbuckat2 2011-04-21T18:05:48.000Z
+ mahbuckat3 2011-04-21T18:07:18.000Z
+
+
+Creating a Bucket
+-----------------
+
+This creates a new bucket called ``my-new-bucket`` and returns an
+``Aws\Result`` object.
+
+.. code-block:: php
+
+ <?php
+ $client->createBucket(['Bucket' => 'my-new-bucket']);
+
+
+List a Bucket's Content
+-----------------------
+
+This returns an ``Aws\Result`` instance whose contents can be accessed
+like an array. This then prints out each object's name, the file size, and last modified date.
+
+.. code-block:: php
+
+ <?php
+ $objectsListResponse = $client->listObjects(['Bucket' => $bucketname]);
+ $objects = $objectsListResponse['Contents'] ?? [];
+ foreach ($objects as $object) {
+     echo $object['Key'] . "\t" . $object['Size'] . "\t" . $object['LastModified'] . "\n";
+ }
+
+.. note::
+
+   If there are more than 1000 objects in this bucket, you need to
+   check ``$objectsListResponse['IsTruncated']`` and, if it is true,
+   run the call again with the name of the last key listed as the
+   ``Marker`` parameter. Keep doing this until ``IsTruncated`` is no
+   longer true.
+
+The output will look something like this if the bucket has some files::
+
+ myphoto1.jpg 251262 2011-08-08T21:35:48.000Z
+ myphoto2.jpg 262518 2011-08-08T21:38:01.000Z
+
+
+Deleting a Bucket
+-----------------
+
+This deletes the bucket called ``my-old-bucket`` and returns an
+``Aws\Result`` object.
+
+.. note::
+
+ The Bucket must be empty! Otherwise it won't work!
+
+.. code-block:: php
+
+ <?php
+ $client->deleteBucket(['Bucket' => 'my-old-bucket']);
+
+
+Creating an Object
+------------------
+
+This creates an object ``hello.txt`` with the string ``"Hello World!"``
+
+.. code-block:: php
+
+ <?php
+ $client->putObject([
+     'Bucket' => 'my-bucket-name',
+     'Key' => 'hello.txt',
+     'Body' => "Hello World!"
+ ]);
+
+
+Change an Object's ACL
+----------------------
+
+This makes the object ``hello.txt`` publicly readable and
+``secret_plans.txt`` private.
+
+.. code-block:: php
+
+ <?php
+ $client->putObjectAcl([
+     'Bucket' => 'my-bucket-name',
+     'Key' => 'hello.txt',
+     'ACL' => 'public-read'
+ ]);
+ $client->putObjectAcl([
+     'Bucket' => 'my-bucket-name',
+     'Key' => 'secret_plans.txt',
+     'ACL' => 'private'
+ ]);
+
+
+Delete an Object
+----------------
+
+This deletes the object ``goodbye.txt``
+
+.. code-block:: php
+
+ <?php
+ $client->deleteObject(['Bucket' => 'my-bucket-name', 'Key' => 'goodbye.txt']);
+
+
+Download an Object (to a file)
+------------------------------
+
+This downloads the object ``poetry.pdf`` and saves it in
+``/home/larry/documents/``
+
+.. code-block:: php
+
+ <?php
+ $object = $client->getObject(['Bucket' => 'my-bucket-name', 'Key' => 'poetry.pdf']);
+ file_put_contents('/home/larry/documents/poetry.pdf', $object['Body']->getContents());
+
+Generate Object Download URLs (signed and unsigned)
+---------------------------------------------------
+
+This generates an unsigned download URL for ``hello.txt``.
+This works because we made ``hello.txt`` public by setting
+the ACL above. This then generates a signed download URL
+for ``secret_plans.txt`` that will work for 1 hour.
+Signed download URLs will work for the time period even
+if the object is private (when the time period is up,
+the URL will stop working).
+
+.. code-block:: php
+
+ <?php
+ $hello_url = $client->getObjectUrl('my-bucket-name', 'hello.txt');
+ echo $hello_url."\n";
+
+ $secret_plans_cmd = $client->getCommand('GetObject', ['Bucket' => 'my-bucket-name', 'Key' => 'secret_plans.txt']);
+ $request = $client->createPresignedRequest($secret_plans_cmd, '+1 hour');
+ echo $request->getUri()."\n";
+
+The output of this will look something like::
+
+ http://objects.dreamhost.com/my-bucket-name/hello.txt
+ http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=sandboxAccessKey%2F20190116%2F%2Fs3%2Faws4_request&X-Amz-Date=20190116T125520Z&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Signature=61921f07c73d7695e47a2192cf55ae030f34c44c512b2160bb5a936b2b48d923
+
diff --git a/doc/radosgw/s3/python.rst b/doc/radosgw/s3/python.rst
new file mode 100644
index 000000000..35f682893
--- /dev/null
+++ b/doc/radosgw/s3/python.rst
@@ -0,0 +1,197 @@
+.. _python:
+
+Python S3 Examples
+==================
+
+Creating a Connection
+---------------------
+
+This creates a connection so that you can interact with the server.
+
+.. code-block:: python
+
+ import boto
+ import boto.s3.connection
+ access_key = 'put your access key here!'
+ secret_key = 'put your secret key here!'
+
+ conn = boto.connect_s3(
+     aws_access_key_id = access_key,
+     aws_secret_access_key = secret_key,
+     host = 'objects.dreamhost.com',
+     #is_secure=False,   # uncomment if you are not using ssl
+     calling_format = boto.s3.connection.OrdinaryCallingFormat(),
+ )
+
+
+Listing Owned Buckets
+---------------------
+
+This gets a list of Buckets that you own.
+This also prints out the bucket name and creation date of each bucket.
+
+.. code-block:: python
+
+ for bucket in conn.get_all_buckets():
+     print("{name}\t{created}".format(
+         name = bucket.name,
+         created = bucket.creation_date,
+     ))
+
+The output will look something like this::
+
+ mahbuckat1 2011-04-21T18:05:39.000Z
+ mahbuckat2 2011-04-21T18:05:48.000Z
+ mahbuckat3 2011-04-21T18:07:18.000Z
+
+
+Creating a Bucket
+-----------------
+
+This creates a new bucket called ``my-new-bucket``
+
+.. code-block:: python
+
+ bucket = conn.create_bucket('my-new-bucket')
+
+
+Listing a Bucket's Content
+--------------------------
+
+This gets a list of objects in the bucket.
+This also prints out each object's name, the file size, and last
+modified date.
+
+.. code-block:: python
+
+ for key in bucket.list():
+     print("{name}\t{size}\t{modified}".format(
+         name = key.name,
+         size = key.size,
+         modified = key.last_modified,
+     ))
+
+The output will look something like this::
+
+ myphoto1.jpg 251262 2011-08-08T21:35:48.000Z
+ myphoto2.jpg 262518 2011-08-08T21:38:01.000Z
+
+
+Deleting a Bucket
+-----------------
+
+.. note::
+
+ The Bucket must be empty! Otherwise it won't work!
+
+.. code-block:: python
+
+ conn.delete_bucket(bucket.name)
+
+
+Forced Delete for Non-empty Buckets
+-----------------------------------
+
+.. attention::
+
+   Not available in Python.
+
+
+Creating an Object
+------------------
+
+This creates a file ``hello.txt`` with the string ``"Hello World!"``
+
+.. code-block:: python
+
+ key = bucket.new_key('hello.txt')
+ key.set_contents_from_string('Hello World!')
+
+
+Uploading an Object or a File
+-----------------------------
+
+This creates a file ``logo.png`` with the contents from the file ``"logo.png"``
+
+.. code-block:: python
+
+ key = bucket.new_key('logo.png')
+ key.set_contents_from_filename('logo.png')
+
+
+Change an Object's ACL
+----------------------
+
+This makes the object ``hello.txt`` publicly readable and
+``secret_plans.txt`` private.
+
+.. code-block:: python
+
+ hello_key = bucket.get_key('hello.txt')
+ hello_key.set_canned_acl('public-read')
+ plans_key = bucket.get_key('secret_plans.txt')
+ plans_key.set_canned_acl('private')
+
+
+Download an Object (to a file)
+------------------------------
+
+This downloads the object ``perl_poetry.pdf`` and saves it in
+``/home/larry/documents/``
+
+.. code-block:: python
+
+ key = bucket.get_key('perl_poetry.pdf')
+ key.get_contents_to_filename('/home/larry/documents/perl_poetry.pdf')
+
+
+Delete an Object
+----------------
+
+This deletes the object ``goodbye.txt``
+
+.. code-block:: python
+
+ bucket.delete_key('goodbye.txt')
+
+
+Generate Object Download URLs (signed and unsigned)
+---------------------------------------------------
+
+This generates an unsigned download URL for ``hello.txt``. This works
+because we made ``hello.txt`` public by setting the ACL above.
+This then generates a signed download URL for ``secret_plans.txt`` that
+will work for 1 hour. Signed download URLs will work for the time
+period even if the object is private (when the time period is up, the
+URL will stop working).
+
+.. code-block:: python
+
+ hello_key = bucket.get_key('hello.txt')
+ hello_url = hello_key.generate_url(0, query_auth=False, force_http=True)
+ print(hello_url)
+
+ plans_key = bucket.get_key('secret_plans.txt')
+ plans_url = plans_key.generate_url(3600, query_auth=True, force_http=True)
+ print(plans_url)
+
+The output of this will look something like::
+
+ http://objects.dreamhost.com/my-bucket-name/hello.txt
+ http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&Expires=1316027075&AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX
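+
+The signed URL above uses AWS signature-v2 query-string authentication. As a
+rough sketch of that scheme (standard library only; the host, bucket, and key
+names are placeholders, and boto's real implementation handles more cases):
+
+.. code-block:: python
+
+    import base64
+    import hashlib
+    import hmac
+    import urllib.parse
+
+    def presign_get(host, bucket, key, access_key, secret_key, expires):
+        # StringToSign for a presigned GET: verb, empty md5/type, expiry, resource
+        string_to_sign = "GET\n\n\n{0}\n/{1}/{2}".format(expires, bucket, key)
+        digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
+                          hashlib.sha1).digest()
+        signature = urllib.parse.quote(base64.b64encode(digest), safe='')
+        return ("http://{0}/{1}/{2}?AWSAccessKeyId={3}"
+                "&Expires={4}&Signature={5}").format(
+                    host, bucket, key, access_key, expires, signature)
+
+    print(presign_get('objects.dreamhost.com', 'my-bucket-name',
+                      'secret_plans.txt', 'XXXXXXXXXXXXXXXXXXX',
+                      'my-secret-key', 1316027075))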
+
+Using S3 API Extensions
+-----------------------
+
+To use the boto3 client to test the RadosGW extensions to the S3 API, the
+`extensions file`_ should be placed in the ``~/.aws/models/s3/2006-03-01/`` directory.
+For example, an unordered list of objects can be fetched with:
+
+.. code-block:: python
+
+ print(conn.list_objects(Bucket='my-new-bucket', AllowUnordered=True))
+
+
+Without the extensions file, in the above example, boto3 would complain that the ``AllowUnordered`` argument is invalid.
+
+
+.. _extensions file: https://github.com/ceph/ceph/blob/main/examples/rgw/boto3/service-2.sdk-extras.json
diff --git a/doc/radosgw/s3/ruby.rst b/doc/radosgw/s3/ruby.rst
new file mode 100644
index 000000000..435b3c630
--- /dev/null
+++ b/doc/radosgw/s3/ruby.rst
@@ -0,0 +1,364 @@
+.. _ruby:
+
+Ruby `AWS::SDK`_ Examples (aws-sdk gem ~>2)
+===========================================
+
+Settings
+---------------------
+
+You can set up the connection globally:
+
+.. code-block:: ruby
+
+ Aws.config.update(
+    endpoint: 'https://objects.dreamhost.com',
+ access_key_id: 'my-access-key',
+ secret_access_key: 'my-secret-key',
+ force_path_style: true,
+ region: 'us-east-1'
+ )
+
+
+and instantiate a client object:
+
+.. code-block:: ruby
+
+ s3_client = Aws::S3::Client.new
+
+Listing Owned Buckets
+---------------------
+
+This gets a list of buckets that you own.
+This also prints out the bucket name and creation date of each bucket.
+
+.. code-block:: ruby
+
+ s3_client.list_buckets.buckets.each do |bucket|
+ puts "#{bucket.name}\t#{bucket.creation_date}"
+ end
+
+The output will look something like this::
+
+ mahbuckat1 2011-04-21T18:05:39.000Z
+ mahbuckat2 2011-04-21T18:05:48.000Z
+ mahbuckat3 2011-04-21T18:07:18.000Z
+
+
+Creating a Bucket
+-----------------
+
+This creates a new bucket called ``my-new-bucket``
+
+.. code-block:: ruby
+
+ s3_client.create_bucket(bucket: 'my-new-bucket')
+
+If you want a private bucket, the ``acl`` option accepts ``private``,
+``public-read``, ``public-read-write``, and ``authenticated-read``:
+
+.. code-block:: ruby
+
+ s3_client.create_bucket(bucket: 'my-new-bucket', acl: 'private')
+
+
+Listing a Bucket's Content
+--------------------------
+
+This gets a list of hashes with the contents of each object.
+This also prints out each object's name, the file size, and last
+modified date.
+
+.. code-block:: ruby
+
+   s3_client.list_objects(bucket: 'my-new-bucket').contents.each do |object|
+     puts "#{object.key}\t#{object.size}\t#{object.last_modified}"
+   end
+
+The output will look something like this if the bucket has some files::
+
+ myphoto1.jpg 251262 2011-08-08T21:35:48.000Z
+ myphoto2.jpg 262518 2011-08-08T21:38:01.000Z
+
+
+Deleting a Bucket
+-----------------
+.. note::
+ The Bucket must be empty! Otherwise it won't work!
+
+.. code-block:: ruby
+
+ s3_client.delete_bucket(bucket: 'my-new-bucket')
+
+
+Forced Delete for Non-empty Buckets
+-----------------------------------
+First, you need to clear the bucket:
+
+.. code-block:: ruby
+
+ Aws::S3::Bucket.new('my-new-bucket', client: s3_client).clear!
+
+Then you can delete the bucket:
+
+.. code-block:: ruby
+
+ s3_client.delete_bucket(bucket: 'my-new-bucket')
+
+
+Creating an Object
+------------------
+
+This creates a file ``hello.txt`` with the string ``"Hello World!"``
+
+.. code-block:: ruby
+
+ s3_client.put_object(
+ key: 'hello.txt',
+ body: 'Hello World!',
+ bucket: 'my-new-bucket',
+ content_type: 'text/plain'
+ )
+
+
+Change an Object's ACL
+----------------------
+
+This makes the object ``hello.txt`` publicly readable and ``secret_plans.txt``
+private.
+
+.. code-block:: ruby
+
+ s3_client.put_object_acl(bucket: 'my-new-bucket', key: 'hello.txt', acl: 'public-read')
+
+   s3_client.put_object_acl(bucket: 'my-new-bucket', key: 'secret_plans.txt', acl: 'private')
+
+
+Download an Object (to a file)
+------------------------------
+
+This downloads the object ``poetry.pdf`` and saves it in
+``/home/larry/documents/``
+
+.. code-block:: ruby
+
+ s3_client.get_object(bucket: 'my-new-bucket', key: 'poetry.pdf', response_target: '/home/larry/documents/poetry.pdf')
+
+
+Delete an Object
+----------------
+
+This deletes the object ``goodbye.txt``
+
+.. code-block:: ruby
+
+ s3_client.delete_object(key: 'goodbye.txt', bucket: 'my-new-bucket')
+
+
+Generate Object Download URLs (signed and unsigned)
+---------------------------------------------------
+
+This generates an unsigned download URL for ``hello.txt``. This works
+because we made ``hello.txt`` public by setting the ACL above.
+This then generates a signed download URL for ``secret_plans.txt`` that
+will work for 1 hour. Signed download URLs will work for the time
+period even if the object is private (when the time period is up, the
+URL will stop working).
+
+.. code-block:: ruby
+
+ puts Aws::S3::Object.new(
+ key: 'hello.txt',
+ bucket_name: 'my-new-bucket',
+ client: s3_client
+ ).public_url
+
+ puts Aws::S3::Object.new(
+ key: 'secret_plans.txt',
+     bucket_name: 'my-new-bucket',
+ client: s3_client
+ ).presigned_url(:get, expires_in: 60 * 60)
+
+The output of this will look something like::
+
+ http://objects.dreamhost.com/my-bucket-name/hello.txt
+ http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&Expires=1316027075&AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX
+
+.. _`AWS::SDK`: http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Client.html
+
+
+
+Ruby `AWS::S3`_ Examples (aws-s3 gem)
+=====================================
+
+Creating a Connection
+---------------------
+
+This creates a connection so that you can interact with the server.
+
+.. code-block:: ruby
+
+ AWS::S3::Base.establish_connection!(
+ :server => 'objects.dreamhost.com',
+ :use_ssl => true,
+ :access_key_id => 'my-access-key',
+ :secret_access_key => 'my-secret-key'
+ )
+
+
+Listing Owned Buckets
+---------------------
+
+This gets a list of `AWS::S3::Bucket`_ objects that you own.
+This also prints out the bucket name and creation date of each bucket.
+
+.. code-block:: ruby
+
+ AWS::S3::Service.buckets.each do |bucket|
+ puts "#{bucket.name}\t#{bucket.creation_date}"
+ end
+
+The output will look something like this::
+
+ mahbuckat1 2011-04-21T18:05:39.000Z
+ mahbuckat2 2011-04-21T18:05:48.000Z
+ mahbuckat3 2011-04-21T18:07:18.000Z
+
+
+Creating a Bucket
+-----------------
+
+This creates a new bucket called ``my-new-bucket``
+
+.. code-block:: ruby
+
+ AWS::S3::Bucket.create('my-new-bucket')
+
+
+Listing a Bucket's Content
+--------------------------
+
+This gets a list of hashes with the contents of each object.
+This also prints out each object's name, the file size, and last
+modified date.
+
+.. code-block:: ruby
+
+ new_bucket = AWS::S3::Bucket.find('my-new-bucket')
+ new_bucket.each do |object|
+ puts "#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}"
+ end
+
+The output will look something like this if the bucket has some files::
+
+ myphoto1.jpg 251262 2011-08-08T21:35:48.000Z
+ myphoto2.jpg 262518 2011-08-08T21:38:01.000Z
+
+
+Deleting a Bucket
+-----------------
+.. note::
+ The Bucket must be empty! Otherwise it won't work!
+
+.. code-block:: ruby
+
+ AWS::S3::Bucket.delete('my-new-bucket')
+
+
+Forced Delete for Non-empty Buckets
+-----------------------------------
+
+.. code-block:: ruby
+
+ AWS::S3::Bucket.delete('my-new-bucket', :force => true)
+
+
+Creating an Object
+------------------
+
+This creates a file ``hello.txt`` with the string ``"Hello World!"``
+
+.. code-block:: ruby
+
+ AWS::S3::S3Object.store(
+ 'hello.txt',
+ 'Hello World!',
+ 'my-new-bucket',
+ :content_type => 'text/plain'
+ )
+
+
+Change an Object's ACL
+----------------------
+
+This makes the object ``hello.txt`` publicly readable and ``secret_plans.txt``
+private.
+
+.. code-block:: ruby
+
+ policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')
+ policy.grants = [ AWS::S3::ACL::Grant.grant(:public_read) ]
+ AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)
+
+ policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')
+ policy.grants = []
+ AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)
+
+
+Download an Object (to a file)
+------------------------------
+
+This downloads the object ``poetry.pdf`` and saves it in
+``/home/larry/documents/``
+
+.. code-block:: ruby
+
+   open('/home/larry/documents/poetry.pdf', 'wb') do |file|
+ AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|
+ file.write(chunk)
+ end
+ end
+
+
+Delete an Object
+----------------
+
+This deletes the object ``goodbye.txt``
+
+.. code-block:: ruby
+
+ AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')
+
+
+Generate Object Download URLs (signed and unsigned)
+---------------------------------------------------
+
+This generates an unsigned download URL for ``hello.txt``. This works
+because we made ``hello.txt`` public by setting the ACL above.
+This then generates a signed download URL for ``secret_plans.txt`` that
+will work for 1 hour. Signed download URLs will work for the time
+period even if the object is private (when the time period is up, the
+URL will stop working).
+
+.. code-block:: ruby
+
+ puts AWS::S3::S3Object.url_for(
+ 'hello.txt',
+ 'my-new-bucket',
+ :authenticated => false
+ )
+
+ puts AWS::S3::S3Object.url_for(
+ 'secret_plans.txt',
+ 'my-new-bucket',
+ :expires_in => 60 * 60
+ )
+
+The output of this will look something like::
+
+ http://objects.dreamhost.com/my-bucket-name/hello.txt
+ http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&Expires=1316027075&AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX
+
+.. _`AWS::S3`: http://amazon.rubyforge.org/
+.. _`AWS::S3::Bucket`: http://amazon.rubyforge.org/doc/
+
diff --git a/doc/radosgw/s3/serviceops.rst b/doc/radosgw/s3/serviceops.rst
new file mode 100644
index 000000000..54b6ca375
--- /dev/null
+++ b/doc/radosgw/s3/serviceops.rst
@@ -0,0 +1,69 @@
+Service Operations
+==================
+
+List Buckets
+------------
+``GET /`` returns a list of the buckets created by the user making the
+request. ``GET /`` only returns buckets created by an authenticated user;
+anonymous requests are not supported.
+
+Syntax
+~~~~~~
+::
+
+ GET / HTTP/1.1
+ Host: cname.domain.com
+
+ Authorization: AWS {access-key}:{hash-of-header-and-secret}
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++----------------------------+-------------+-----------------------------------------------------------------+
+| Name | Type | Description |
++============================+=============+=================================================================+
+| ``Buckets`` | Container | Container for list of buckets. |
++----------------------------+-------------+-----------------------------------------------------------------+
+| ``Bucket`` | Container | Container for bucket information. |
++----------------------------+-------------+-----------------------------------------------------------------+
+| ``Name`` | String | Bucket name. |
++----------------------------+-------------+-----------------------------------------------------------------+
+| ``CreationDate`` | Date | UTC time when the bucket was created. |
++----------------------------+-------------+-----------------------------------------------------------------+
+| ``ListAllMyBucketsResult`` | Container | A container for the result. |
++----------------------------+-------------+-----------------------------------------------------------------+
+| ``Owner`` | Container | A container for the bucket owner's ``ID`` and ``DisplayName``. |
++----------------------------+-------------+-----------------------------------------------------------------+
+| ``ID`` | String | The bucket owner's ID. |
++----------------------------+-------------+-----------------------------------------------------------------+
+| ``DisplayName`` | String | The bucket owner's display name. |
++----------------------------+-------------+-----------------------------------------------------------------+
+
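+An illustrative response body (the owner, bucket names, and dates below are
+examples)::
+
+  <ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
+    <Owner>
+      <ID>foo_user</ID>
+      <DisplayName>Foo User</DisplayName>
+    </Owner>
+    <Buckets>
+      <Bucket>
+        <Name>mahbuckat1</Name>
+        <CreationDate>2011-04-21T18:05:39.000Z</CreationDate>
+      </Bucket>
+    </Buckets>
+  </ListAllMyBucketsResult>
+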
+
+Get Usage Stats
+---------------
+
+Gets usage stats per user, similar to the admin command :ref:`rgw_user_usage_stats`.
+
+Syntax
+~~~~~~
+::
+
+ GET /?usage HTTP/1.1
+ Host: cname.domain.com
+
+ Authorization: AWS {access-key}:{hash-of-header-and-secret}
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++----------------------------+-------------+-----------------------------------------------------------------+
+| Name | Type | Description |
++============================+=============+=================================================================+
+| ``Summary`` | Container | Summary of total stats by user. |
++----------------------------+-------------+-----------------------------------------------------------------+
+| ``TotalBytes``             | Integer     | Bytes used by the user.                                         |
+----------------------------+-------------+-----------------------------------------------------------------+
+| ``TotalBytesRounded``      | Integer     | Bytes rounded to the nearest 4k boundary.                       |
+----------------------------+-------------+-----------------------------------------------------------------+
+| ``TotalEntries``           | Integer     | Total number of object entries.                                 |
++----------------------------+-------------+-----------------------------------------------------------------+
diff --git a/doc/radosgw/s3select.rst b/doc/radosgw/s3select.rst
new file mode 100644
index 000000000..d46d4e96f
--- /dev/null
+++ b/doc/radosgw/s3select.rst
@@ -0,0 +1,796 @@
+===============
+ Ceph s3 select
+===============
+
+.. contents::
+
+Overview
+--------
+
+The **S3 Select** engine creates an efficient pipe between clients and Ceph
+back end nodes. The S3 Select engine works best when implemented as closely as
+possible to back end storage.
+
+The S3 Select engine makes it possible to use an SQL-like syntax to select a
+restricted subset of the data stored in an S3 object. The S3 Select engine
+facilitates the use of higher-level analytic applications (for example:
+SPARK-SQL). The ability of the S3 Select engine to target a proper subset of
+structured data within an S3 object decreases latency and increases throughput.
+
+For example: assume that a user needs to extract a single column that is
+filtered by another column, and that these columns are stored in a CSV file in
+an S3 object that is several GB in size. The following query performs this
+extraction: ``select customer-id from s3Object where age>30 and age<65;``
+
+Without the use of S3 Select, the whole S3 object must be retrieved from an OSD
+via RGW before the data is filtered and extracted. Significant network and CPU
+overhead is saved by "pushing down" the query into radosgw.
+
+**The bigger the object and the more accurate the query,
+the better the performance of s3select**.
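+
+As a toy illustration of the example query above (plain Python with made-up
+data, not the engine's implementation), the filtering that is pushed down to
+the server side amounts to:
+
+.. code-block:: python
+
+    import csv
+    import io
+
+    # Toy model of: select customer-id from s3Object where age>30 and age<65;
+    data = io.StringIO(
+        "customer-id,age\n"
+        "c1,25\n"
+        "c2,40\n"
+        "c3,70\n"
+        "c4,50\n"
+    )
+    matches = [row['customer-id']
+               for row in csv.DictReader(data)
+               if 30 < int(row['age']) < 65]
+    print(matches)  # only the matching column values cross the network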
+
+Basic Workflow
+--------------
+
+S3 Select queries are sent to RGW via `AWS-CLI
+<https://docs.aws.amazon.com/cli/latest/reference/s3api/select-object-content.html>`_.
+
+S3 Select passes the authentication and permission parameters as an incoming
+message (POST). ``RGWSelectObj_ObjStore_S3::send_response_data`` is the entry
+point and handles each fetched chunk according to the object key that was
+input. ``send_response_data`` is the first to handle the input query: it
+extracts the query and other CLI parameters.
+
+RGW executes an S3 Select query on each newly fetched chunk (up to 4 MB). The
+current implementation supports CSV objects. CSV rows are sometimes "cut" in
+the middle by the limits of the chunks, and those broken-lines (the first or
+last per chunk) are skipped while processing the query. Such broken lines are
+stored and later merged with the next broken line (which belongs to the next
+chunk), and only then processed.
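+
+The broken-line handling can be sketched as follows (a simplified model with
+tiny chunks, not the actual RGW code):
+
+.. code-block:: python
+
+    def complete_rows(chunks):
+        """Yield only whole rows from a stream of arbitrary byte chunks,
+        merging a trailing partial row with the head of the next chunk."""
+        partial = b""
+        for chunk in chunks:
+            rows = (partial + chunk).split(b"\n")
+            partial = rows.pop()      # last piece may be cut mid-row
+            for row in rows:
+                yield row
+        if partial:
+            yield partial             # flush the final row at end of input
+
+    chunks = [b"a,1\nb,", b"2\nc,3", b"\nd,4"]
+    print(list(complete_rows(chunks)))  # [b'a,1', b'b,2', b'c,3', b'd,4']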
+
+For each processed chunk, an output message is formatted according to the `AWS
+specification
+<https://docs.aws.amazon.com/amazons3/latest/api/archive-restobjectselectcontent.html#archive-restobjectselectcontent-responses>`_
+and sent back to the client. RGW supports the following response:
+``{:event-type,records} {:content-type,application/octet-stream}
+{:message-type,event}``. For aggregation queries, the last chunk should be
+identified as the end of input.
+
+
+Basic Functionalities
+~~~~~~~~~~~~~~~~~~~~~
+
+**S3select** implements a defined subset of functionality that is compliant
+with AWS.
+
+The implemented software architecture supports basic arithmetic expressions,
+logical and compare expressions, including nested function calls and casting
+operators, which gives the user great flexibility.
+
+Review the s3-select-feature-table_ below.
+
+
+Error Handling
+~~~~~~~~~~~~~~
+
+Upon detecting an error, RGW returns 400 Bad Request and sends a specific error
+message back to the client. Currently, there are two main types of error.
+
+**Syntax error**: the s3select parser rejects user requests that do not conform
+to the parser's syntax definitions, as described in this documentation. Upon a
+syntax error, the engine creates an error message that points to the location
+of the error, and RGW sends it back to the client in a specific error response.
+
+**Processing-time error**: the runtime engine may detect errors that occur only
+at processing time. A different error message describes these, and RGW likewise
+sends it back to the client in a specific error response.
+
+.. _s3-select-feature-table:
+
+Features Support
+----------------
+
+Currently only part of the `AWS select command
+<https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-glacier-select-sql-reference-select.html>`_
+is implemented. The following table describes the currently supported
+s3-select functionality:
+
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Feature | Detailed | Example / Description |
++=================================+=================+=======================================================================+
+| Arithmetic operators | ^ * % / + - ( ) | select (int(_1)+int(_2))*int(_9) from s3object; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| | ``%`` modulo | select count(*) from s3object where cast(_1 as int)%2 = 0; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| | ``^`` power-of | select cast(2^10 as int) from s3object; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Compare operators | > < >= <= = != | select _1,_2 from s3object where (int(_1)+int(_3))>int(_5); |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| logical operator | AND OR NOT | select count(*) from s3object where not (int(_1)>123 and int(_5)<200);|
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| logical operator | is null | return true/false for null indication in expression |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| logical operator | is not null | return true/false for null indication in expression |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| logical operator and NULL       | unknown state   | review null-handle_ to see how logical operators behave with null.    |
+|                                 |                 | the following query returns **0**.                                    |
+|                                 |                 |                                                                       |
+|                                 |                 | select count(*) from s3object where null and (3>2);                   |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Arithmetic operator with NULL   | unknown state   | review null-handle_ for the results of binary operations with NULL.   |
+|                                 |                 | the following query returns **0**.                                    |
+|                                 |                 |                                                                       |
+|                                 |                 | select count(*) from s3object where (null+1) and (3>2);               |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| compare with NULL               | unknown state   | review null-handle_ for the results of compare operations with NULL.  |
+|                                 |                 | the following query returns **0**.                                    |
+|                                 |                 |                                                                       |
+|                                 |                 | select count(*) from s3object where (null*1.5) != 3;                  |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| missing column | unknown state | select count(*) from s3object where _1 is null; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| the query filters rows for which the              | select count(*) from s3object where (_1 > 12 and _2 = 0) is not null; |
+| predicate returns a non-null result;              |                                                                       |
+| the predicate returns null when _1                |                                                                       |
+| or _2 is null.                                    |                                                                       |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| projection column | similar to | select case |
+| | switch/case | cast(_1 as int) + 1 |
+| | default | when 2 then "a" |
+| | | when 3 then "b" |
+| | | else "c" end from s3object; |
+| | | |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| projection column | similar to | select case |
+| | if/then/else | when (1+1=(2+1)*3) then 'case_1' |
+| | | when ((4*3)=(12)) then 'case_2' |
+| | | else 'case_else' end, |
+| | | age*2 from s3object; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| logical operator | ``coalesce {expression,expression ...} :: return first non-null argument`` |
+| | |
+| | select coalesce(nullif(5,5),nullif(1,1.0),age+12) from s3object; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| logical operator | ``nullif {expr1,expr2} ::return null in case both arguments are equal,`` |
+| | ``or else the first one`` |
+| | |
+| | select nullif(cast(_1 as int),cast(_2 as int)) from s3object; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| logical operator | ``{expression} in ( .. {expression} ..)`` |
+| | |
+| | select count(*) from s3object |
+| | where 'ben' in (trim(_5),substring(_1,char_length(_1)-3,3),last_name); |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| logical operator | ``{expression} between {expression} and {expression}`` |
+| | |
+| | select count(*) from s3object |
+| | where substring(_3,char_length(_3),1) between "x" and trim(_1) |
+| | and substring(_3,char_length(_3)-1,1) = ":"; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| logical operator | ``{expression} like {match-pattern}`` |
+| | |
+| | select count(*) from s3object where first_name like '%de_'; |
+| | |
+|                                 | select count(*) from s3object where _1 like "%a[r-s]";                                  |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| | ``{expression} like {match-pattern} escape {char}`` |
+| | |
+| logical operator | select count(*) from s3object where "jok_ai" like "%#_ai" escape "#"; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| true / false | select (cast(_1 as int)>123 = true) from s3object |
+| predicate as a projection | where address like '%new-york%'; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| an alias to | select (_1 like "_3_") as *likealias*,_1 from s3object |
+| predicate as a projection | where *likealias* = true and cast(_1 as int) between 800 and 900; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| casting operator | select cast(123 as int)%2 from s3object; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| casting operator | select cast(123.456 as float)%2 from s3object; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| casting operator | select cast('ABC0-9' as string),cast(substr('ab12cd',3,2) as int)*4 from s3object; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| casting operator | select cast(5 as bool) from s3object; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| casting operator | select cast(substring('publish on 2007-01-01',12,10) as timestamp) from s3object; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| non AWS casting operator | select int(_1),int( 1.2 + 3.4) from s3object; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| non AWS casting operator | select float(1.2) from s3object; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| non AWS casting operator        | select to_timestamp('1999-10-10T12:23:44Z') from s3object;                              |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Aggregation Function | sum | select sum(int(_1)) from s3object; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Aggregation Function            | avg             | select avg(cast(_1 as float) + cast(_2 as int)) from s3object;        |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Aggregation Function | min | select min( int(_1) * int(_5) ) from s3object; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Aggregation Function | max | select max(float(_1)),min(int(_5)) from s3object; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Aggregation Function | count | select count(*) from s3object where (int(_1)+int(_3))>int(_5); |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Timestamp Functions | extract | select count(*) from s3object where |
+| | | extract(year from to_timestamp(_2)) > 1950 |
+| | | and extract(year from to_timestamp(_1)) < 1960; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Timestamp Functions | date_add | select count(0) from s3object where |
+| | | date_diff(year,to_timestamp(_1),date_add(day,366, |
+| | | to_timestamp(_1))) = 1; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Timestamp Functions | date_diff | select count(0) from s3object where |
+|                                 |                 | date_diff(month,to_timestamp(_1),to_timestamp(_2)) = 2;               |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Timestamp Functions | utcnow | select count(0) from s3object where |
+| | | date_diff(hours,utcnow(),date_add(day,1,utcnow())) = 24; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Timestamp Functions | to_string | select to_string( |
+| | | to_timestamp("2009-09-17T17:56:06.234567Z"), |
+| | | "yyyyMMdd-H:m:s") from s3object; |
+| | | |
+| | | ``result: "20090917-17:56:6"`` |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| String Functions | substring | select count(0) from s3object where |
+| | | int(substring(_1,1,4))>1950 and int(substring(_1,1,4))<1960; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| substring with ``from`` negative number is valid | select substring("123456789" from -4) from s3object; |
+| considered as first | |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| substring with ``from`` zero ``for`` out-of-bound | select substring("123456789" from 0 for 100) from s3object; |
+| number is valid just as (first,last) | |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| String Functions | trim | select trim(' foobar ') from s3object; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| String Functions | trim | select trim(trailing from ' foobar ') from s3object; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| String Functions | trim | select trim(leading from ' foobar ') from s3object; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| String Functions                | trim            | select trim(both '12' from '1112211foobar22211122') from s3object;   |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| String Functions | lower/upper | select lower('ABcD12#$e') from s3object; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| String Functions | char_length | select count(*) from s3object where char_length(_3)=3; |
+| | character_length| |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Complex queries | select sum(cast(_1 as int)), |
+| | max(cast(_3 as int)), |
+| | substring('abcdefghijklm',(2-1)*3+sum(cast(_1 as int))/sum(cast(_1 as int))+1, |
+| | (count() + count(0))/count(0)) from s3object; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| alias support | | select int(_1) as a1, int(_2) as a2 , (a1+a2) as a3 |
+| | | from s3object where a3>100 and a3<300; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+
+.. _null-handle:
+
+NULL
+~~~~
+NULL is a legitimate value in ceph-s3select, as in other database systems; queries must therefore handle the case where a value is NULL.
+
+In this context NULL means missing/unknown, so **NULL cannot produce a value in ANY arithmetic operation** (``a + NULL`` produces NULL).
+
+The same holds for arithmetic comparison: **any comparison to NULL is NULL**, i.e. unknown.
+Below is a truth table covering the NULL use cases.
+
++---------------------------------+-----------------------------+
+| A is NULL | Result (NULL=UNKNOWN) |
++=================================+=============================+
+| NOT A | NULL |
++---------------------------------+-----------------------------+
+| A OR False | NULL |
++---------------------------------+-----------------------------+
+| A OR True | True |
++---------------------------------+-----------------------------+
+| A OR A | NULL |
++---------------------------------+-----------------------------+
+| A AND False | False |
++---------------------------------+-----------------------------+
+| A AND True | NULL |
++---------------------------------+-----------------------------+
+| A AND A                         | NULL                        |
++---------------------------------+-----------------------------+
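+
+The truth table above can be seen in action. As an illustrative sketch (column ``_1`` and its contents are assumed), rows in which ``_1`` is NULL are not counted by the following query, even though its predicate looks like a tautology, because NULL OR NULL evaluates to NULL:
+
+::
+
+    select count(0) from s3object where int(_1) > 0 or not int(_1) > 0;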
+
+S3-select Function Interfaces
+-----------------------------
+
+Timestamp Functions
+~~~~~~~~~~~~~~~~~~~
+The timestamp functionality described in the `AWS specs
+<https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-glacier-select-sql-reference-date.html>`_
+is fully implemented.
+
+   ``to_timestamp( string )`` : The casting operator converts a string to the
+   timestamp basic type. The to_timestamp operator is able to convert the
+   following string formats into a timestamp:
+   ``YYYY-MM-DDTHH:mm:ss.SSSSSS+/-HH:mm`` , ``YYYY-MM-DDTHH:mm:ss.SSSSSSZ`` ,
+   ``YYYY-MM-DDTHH:mm:ss+/-HH:mm`` , ``YYYY-MM-DDTHH:mm:ssZ`` ,
+   ``YYYY-MM-DDTHH:mm+/-HH:mm`` , ``YYYY-MM-DDTHH:mmZ`` , ``YYYY-MM-DDT`` or
+   ``YYYYT``. Where the time (or part of it) is missing from the string,
+   zeros replace the missing parts, and for a missing month or day the
+   default value is 1. The timezone part is in the format ``+/-HH:mm`` or
+   ``Z``, where the letter "Z" indicates Coordinated Universal Time (UTC).
+   The timezone value can range between -12:00 and +14:00.
+
+   ``extract(date-part from timestamp)`` : The function extracts the
+   date-part from the input timestamp and returns it as an integer.
+   Supported date-parts: year, month, week, day, hour, minute, second,
+   timezone_hour, timezone_minute.
+
+   ``date_add(date-part, quantity, timestamp)`` : The function adds the
+   quantity (integer) to the date-part of the timestamp and returns the
+   result as a timestamp. The timezone is included in the calculation.
+   Supported date-parts: year, month, day, hour, minute, second.
+
+   ``date_diff(date-part, timestamp, timestamp)`` : The function returns an
+   integer: the calculated difference between the two timestamps according
+   to the date-part. The timezone is included in the calculation. Supported
+   date-parts: year, month, day, hour, minute, second.
+
+   ``utcnow()`` : Returns the timestamp of the current time.
+
+   ``to_string(timestamp, format_pattern)`` : Returns a string
+   representation of the input timestamp in the given input string format.
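+
+The functions above compose. As an illustrative sketch (assuming the first column holds a timestamp string), the following counts rows whose timestamp lies more than a year in the past:
+
+::
+
+    select count(0) from s3object where date_diff(day, to_timestamp(_1), utcnow()) > 365;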
+
+to_string parameters
+~~~~~~~~~~~~~~~~~~~~
+
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| Format | Example | Description |
++==============+=================+===================================================================================+
+| yy | 69 | 2-digit year |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| y | 1969 | 4-digit year |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| yyyy | 1969 | Zero-padded 4-digit year |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| M | 1 | Month of year |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| MM | 01 | Zero-padded month of year |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| MMM          | Jan             | Abbreviated month of year name                                                    |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| MMMM | January | Full month of year name |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| MMMMM | J | Month of year first letter (NOTE: not valid for use with to_timestamp function) |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| d | 2 | Day of month (1-31) |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| dd | 02 | Zero-padded day of month (01-31) |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| a | AM | AM or PM of day |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| h | 3 | Hour of day (1-12) |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| hh | 03 | Zero-padded hour of day (01-12) |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| H | 3 | Hour of day (0-23) |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| HH | 03 | Zero-padded hour of day (00-23) |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| m | 4 | Minute of hour (0-59) |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| mm | 04 | Zero-padded minute of hour (00-59) |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| s | 5 | Second of minute (0-59) |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| ss | 05 | Zero-padded second of minute (00-59) |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| S | 0 | Fraction of second (precision: 0.1, range: 0.0-0.9) |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| SS | 6 | Fraction of second (precision: 0.01, range: 0.0-0.99) |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| SSS | 60 | Fraction of second (precision: 0.001, range: 0.0-0.999) |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| SSSSSS       | 60000000        | Fraction of second (maximum precision: 1 nanosecond, range: 0.0-0.999999999)      |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| n | 60000000 | Nano of second |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| X | +07 or Z | Offset in hours or "Z" if the offset is 0 |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| XX or XXXX   | +0700 or Z      | Offset in hours and minutes or "Z" if the offset is 0                             |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| XXX or XXXXX | +07:00 or Z | Offset in hours and minutes or "Z" if the offset is 0 |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| x            | 7               | Offset in hours                                                                   |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| xx or xxxx | 700 | Offset in hours and minutes |
++--------------+-----------------+-----------------------------------------------------------------------------------+
+| xxx or xxxxx | +07:00 | Offset in hours and minutes |
++--------------+-----------------+-----------------------------------------------------------------------------------+
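+
+Several of the patterns above can be combined in a single format string. An illustrative sketch (the timestamp literal is arbitrary) is:
+
+::
+
+    select to_string(to_timestamp("2009-09-17T17:56:06.234567Z"), "MMMM d, yyyy HH:mm:ss") from s3object;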
+
+
+Aggregation Functions
+~~~~~~~~~~~~~~~~~~~~~
+
+``count()`` : Returns an integer: the number of rows matching the condition (if one exists).
+
+``sum(expression)`` : Returns the sum of the expression over all rows matching the condition (if one exists).
+
+``avg(expression)`` : Returns the average of the expression over all rows matching the condition (if one exists).
+
+``max(expression)`` : Returns the maximal result of the expression over all rows matching the condition (if one exists).
+
+``min(expression)`` : Returns the minimal result of the expression over all rows matching the condition (if one exists).
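+
+As an illustrative sketch (the first column is assumed to be numeric), several aggregation functions can be combined in a single projection:
+
+::
+
+    select count(0), sum(int(_1)), avg(int(_1)), min(int(_1)), max(int(_1)) from s3object where int(_1) > 0;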
+
+String Functions
+~~~~~~~~~~~~~~~~
+
+``substring( string from start [ for length ] )`` : Returns an extract of the
+input string, starting at position ``start``; with the optional ``for``
+clause, at most ``length`` characters are returned. Both the
+``substring(string from)`` and ``substring(string from for)`` forms are
+supported.
+
+``char_length`` : Returns the number of characters in a string
+(``character_length`` does the same).
+
+``trim`` : trim ( [[``leading`` | ``trailing`` | ``both`` remove_chars] ``from``] string )
+Trims leading/trailing (or both) characters from the target string; the default is the blank character.
+
+``upper`` / ``lower`` : Convert characters into uppercase/lowercase.
+
+SQL Limit Operator
+~~~~~~~~~~~~~~~~~~
+
+The SQL LIMIT operator is used to limit the number of rows processed by the query.
+Upon reaching the limit set by the user, RGW stops fetching additional chunks.
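+
+As illustrative sketches (the dataset is assumed), LIMIT applies to both non-aggregation and aggregation queries; in the latter case the aggregate is computed over the first rows only:
+
+::
+
+    select _1 from s3object limit 10;
+
+    select sum(int(_1)) from s3object limit 100;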
+
+Alias
+~~~~~
+The **alias** programming construct is an essential part of the s3-select language. It enables much better programming, especially with objects containing many columns or in the case of complex queries.
+
+Upon parsing a statement containing an alias construct, the engine replaces the alias with a reference to the correct projection column; at query execution time the reference is evaluated like any other expression.
+
+There is a risk that a self (or cyclic) reference may occur, causing a stack overflow (endless loop); for that reason, upon evaluating an alias, it is validated for cyclic references.
+
+Aliases also maintain a result cache, meaning that successive uses of a given alias do not evaluate the expression again; the result is instead returned from the cache.
+
+With each new row the cache is invalidated, as the results may then differ.
+
+Testing
+~~~~~~~
+
+``s3select`` contains several testing frameworks which provide large coverage of its functionality.
+
+(1) Comparison against a trusted engine: the C/C++ compiler serves as a
+trusted expression evaluator. Since the syntax of arithmetic and logical
+expressions is identical in s3select and C, the framework runs equal
+expressions in both and validates that their results match.
+A dedicated expression generator produces a different set of expressions for
+each new test session.
+
+(2) Comparison of queries whose syntax differs but which are semantically
+equivalent. This kind of test validates that different runtime flows produce
+an identical result on each run with a different, random dataset.
+
+For example, on a dataset containing random numbers (1-1000),
+the following queries will produce identical results.
+``select count(*) from s3object where char_length(_3)=3;``
+``select count(*) from s3object where cast(_3 as int)>99 and cast(_3 as int)<1000;``
+
+(3) Constant dataset: the conventional way of testing. A query processes a constant dataset, and its result is validated against constant results.
+
+Additional Syntax Support
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+S3select syntax supports table aliases: ``select s._1 from s3object s where s._2 = '4';``
+
+S3select syntax is case insensitive: ``Select SUM(Cast(_1 as int)) FROM S3Object;``
+
+S3select syntax supports statements without a closing semicolon: ``select count(*) from s3object``
+
+
+Sending Query to RGW
+--------------------
+
+Any HTTP client can send an ``s3-select`` request to RGW, which must be compliant with the `AWS Request syntax <https://docs.aws.amazon.com/AmazonS3/latest/API/API_SelectObjectContent.html#API_SelectObjectContent_RequestSyntax>`_.
+
+When sending an ``s3-select`` request to RGW using AWS CLI, clients must follow the `AWS command reference <https://docs.aws.amazon.com/cli/latest/reference/s3api/select-object-content.html>`_.
+Below is an example:
+
+::
+
+    aws --endpoint-url http://localhost:8000 s3api select-object-content \
+      --bucket {BUCKET-NAME} \
+      --expression-type 'SQL' \
+      --scan-range '{"Start" : 1000, "End" : 1000000}' \
+      --input-serialization \
+      '{"CSV": {"FieldDelimiter": "," , "QuoteCharacter": "\"" , "RecordDelimiter" : "\n" , "QuoteEscapeCharacter" : "\\" , "FileHeaderInfo": "USE" }, "CompressionType": "NONE"}' \
+      --output-serialization '{"CSV": {"FieldDelimiter": ":", "RecordDelimiter":"\t", "QuoteFields": "ALWAYS"}}' \
+      --key {OBJECT-NAME} \
+      --request-progress '{"Enabled": true}' \
+      --expression "select count(0) from s3object where int(_1)<10;" output.csv
+
+Input Serialization
+~~~~~~~~~~~~~~~~~~~
+
+**FileHeaderInfo** -> (string)
+Describes the first line of input. Valid values are:
+
+**NONE** : The first line is not a header.
+**IGNORE** : The first line is a header, but you can't use the header values to indicate a column in an expression.
+It is still possible to use the column position (such as _1, _2, …) to indicate the column (``SELECT s._1 FROM S3OBJECT s``).
+**USE** : The first line is a header, and you can use the header value to identify a column in an expression (``SELECT column_name FROM S3OBJECT``).
+
+**QuoteEscapeCharacter** -> (string)
+A single character used for escaping the quotation-mark character inside an already escaped value.
+
+**RecordDelimiter** -> (string)
+A single character used to separate individual records in the input. Instead of the default value, you can specify an arbitrary delimiter.
+
+**FieldDelimiter** -> (string)
+A single character used to separate individual fields in a record. You can specify an arbitrary delimiter.
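+
+Putting the fields together, a minimal sketch of an input-serialization document for a comma-separated object with a usable header line might look like this:
+
+::
+
+    {"CSV": {"FileHeaderInfo": "USE", "RecordDelimiter": "\n", "FieldDelimiter": ",", "QuoteCharacter": "\"", "QuoteEscapeCharacter": "\\"}, "CompressionType": "NONE"}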
+
+Output Serialization
+~~~~~~~~~~~~~~~~~~~~
+
+**AWS CLI example**
+
+::
+
+    aws s3api select-object-content \
+      --bucket "mybucket" \
+      --key keyfile1 \
+      --expression "SELECT * FROM s3object s" \
+      --expression-type 'SQL' \
+      --request-progress '{"Enabled": false}' \
+      --input-serialization '{"CSV": {"FieldDelimiter": ","}, "CompressionType": "NONE"}' \
+      --output-serialization '{"CSV": {"FieldDelimiter": ":", "RecordDelimiter":"\\t", "QuoteFields": "ALWAYS"}}' /dev/stdout
+
+**QuoteFields** -> (string)
+Indicates whether to use quotation marks around output fields.
+**ALWAYS**: Always use quotation marks for output fields.
+**ASNEEDED** (not implemented): Use quotation marks for output fields when needed.
+
+**RecordDelimiter** -> (string)
+A single character used to separate individual records in the output. Instead of the default value, you can specify an arbitrary delimiter.
+
+**FieldDelimiter** -> (string)
+The value used to separate individual fields in a record. You can specify an arbitrary delimiter.
+
+Scan Range Option
+~~~~~~~~~~~~~~~~~
+
+The scan range option of AWS CLI enables the client to scan and process only a selected part of the object.
+This option reduces input/output operations and bandwidth by skipping parts of the object that are not of interest.
+TODO : different data-sources (CSV, JSON, Parquet)
+
+CSV Parsing Behavior
+--------------------
+
+The ``s3-select`` engine contains a CSV parser, which parses s3-objects as follows:
+
+- Each row ends with ``row-delimiter``.
+- ``field-separator`` separates adjacent columns; successive instances of ``field-separator`` define a NULL column.
+- ``quote-character`` overrides ``field-separator``, meaning that ``field-separator`` is treated like any character between quotes.
+- ``escape-character`` disables interpretation of special characters, except for ``row-delimiter``.
+
+Below are examples of CSV parsing rules.
+
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Feature | Description | input ==> tokens |
++=================================+=================+=======================================================================+
+| NULL | successive | ,,1,,2, ==> {null}{null}{1}{null}{2}{null} |
+| | field delimiter | |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| QUOTE | quote character | 11,22,"a,b,c,d",last ==> {11}{22}{"a,b,c,d"}{last} |
+| | overrides | |
+| | field delimiter | |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Escape | escape char | 11,22,str=\\"abcd\\"\\,str2=\\"123\\",last |
+| | overrides | ==> {11}{22}{str="abcd",str2="123"}{last} |
+| | meta-character. | |
+| | escape removed | |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| row delimiter | no close quote, | 11,22,a="str,44,55,66 |
+| | row delimiter is| ==> {11}{22}{a="str,44,55,66} |
+| | closing line | |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| csv header info | FileHeaderInfo | "**USE**" value means each token on first line is column-name, |
+| | tag | "**IGNORE**" value means to skip the first line |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+
+JSON
+--------------------
+
+A JSON reader has been integrated with the ``s3select-engine``, which allows the client to use SQL statements to scan and extract information from JSON documents.
+It should be noted that the data readers and parsers for CSV, Parquet, and JSON documents are separated from the SQL engine itself, so all of these readers use the same SQL engine.
+
+It's important to note that values in a JSON document can be nested in various ways, such as within objects or arrays.
+These objects and arrays can be nested within each other without any limitations.
+When using SQL to query a specific value in a JSON document, the client must specify the location of the value
+via a path in the SELECT statement.
+
+The SQL engine processes the SELECT statement in a row-based fashion.
+It uses the columns specified in the statement to perform its projection calculation, and each row contains values for these columns.
+In other words, the SQL engine processes each row one at a time (and aggregates results), using the values in the columns to perform SQL calculations.
+However, the generic structure of a JSON document does not have a row-and-column structure like CSV or Parquet.
+Instead, it is the SQL statement itself that defines the rows and columns when querying a JSON document.
+
+When querying JSON documents using SQL, the FROM clause in the SELECT statement defines the row boundaries,
+in much the same way that the row delimiter defines rows when querying CSV objects, and row groups define rows when querying Parquet objects.
+The statement "SELECT ... FROM s3object[*].aaa.bb.cc" instructs the reader to search for the path "aaa.bb.cc" and defines the row boundaries based on the occurrences of this path.
+A row begins when the reader encounters the path, and it ends when the reader exits the innermost part of the path, which in this case is the object "cc".
+
+NOTE : The semantics of querying JSON documents may change and may not remain the same as the methodology described here.
+
+TODO : relevant example for object and array values.
+
+A JSON Query Example
+--------------------
+
+::
+
+ {
+ "firstName": "Joe",
+ "lastName": "Jackson",
+ "gender": "male",
+ "age": "twenty",
+ "address": {
+ "streetAddress": "101",
+ "city": "San Diego",
+ "state": "CA"
+ },
+
+ "firstName": "Joe_2",
+ "lastName": "Jackson_2",
+ "gender": "male",
+ "age": 21,
+ "address": {
+ "streetAddress": "101",
+ "city": "San Diego",
+ "state": "CA"
+ },
+
+ "phoneNumbers": [
+ { "type": "home1", "number": "734928_1","addr": 11 },
+ { "type": "home2", "number": "734928_2","addr": 22 },
+ { "type": "home3", "number": "734928_3","addr": 33 },
+ { "type": "home4", "number": "734928_4","addr": 44 },
+ { "type": "home5", "number": "734928_5","addr": 55 },
+ { "type": "home6", "number": "734928_6","addr": 66 },
+ { "type": "home7", "number": "734928_7","addr": 77 },
+ { "type": "home8", "number": "734928_8","addr": 88 },
+ { "type": "home9", "number": "734928_9","addr": 99 },
+ { "type": "home10", "number": "734928_10","addr": 100 }
+ ],
+
+ "key_after_array": "XXX",
+
+ "description" : {
+ "main_desc" : "value_1",
+ "second_desc" : "value_2"
+ }
+ }
+
+    # the from-clause defines a single row.
+    # _1 points to the root object level.
+    # _1.age appears twice in the document-row; the last value is used for the operation.
+    query = "select _1.firstname,_1.key_after_array,_1.age+4,_1.description.main_desc,_1.description.second_desc from s3object[*];";
+    expected_result = Joe_2,XXX,25,value_1,value_2
+
+
+    # the from-clause points to the phonenumbers array (it defines _1).
+    # each element in the phoneNumbers array defines a row.
+    # in this case each element is an object containing 3 keys/values.
+    # the query "cannot access" values outside the phonenumbers array; it can access only values appearing on the _1.phonenumbers path.
+    query = "select cast(substring(_1.number,1,6) as int) *10 from s3object[*].phonenumbers where _1.type='home2';";
+    expected_result = 7349280
+
+
+BOTO3
+-----
+
+Using BOTO3 is natural and easy, since it follows the same API as the AWS CLI.
+
+::
+
+    import pprint
+
+    import boto3
+    from botocore.exceptions import ClientError
+
+    # endpoint, access_key, secret_key and region_name are assumed to be
+    # defined elsewhere in the application.
+
+    def run_s3select(bucket,key,query,column_delim=",",row_delim="\n",quot_char='"',esc_char='\\',csv_header_info="NONE",progress=False):
+
+        s3 = boto3.client('s3',
+            endpoint_url=endpoint,
+            aws_access_key_id=access_key,
+            region_name=region_name,
+            aws_secret_access_key=secret_key)
+
+        result = ""
+        try:
+            r = s3.select_object_content(
+                Bucket=bucket,
+                Key=key,
+                ExpressionType='SQL',
+                InputSerialization = {"CSV": {"RecordDelimiter" : row_delim, "FieldDelimiter" : column_delim,"QuoteEscapeCharacter": esc_char, "QuoteCharacter": quot_char, "FileHeaderInfo": csv_header_info}, "CompressionType": "NONE"},
+                OutputSerialization = {"CSV": {}},
+                Expression=query,
+                RequestProgress = {"Enabled": progress})
+
+        except ClientError as c:
+            # on error, return the error text instead of query results
+            return result + str(c)
+
+        for event in r['Payload']:
+            if 'Records' in event:
+                # accumulate the returned records; do not reset the result
+                # between successive Records messages
+                records = event['Records']['Payload'].decode('utf-8')
+                result += records
+            if 'Progress' in event:
+                print("progress")
+                pprint.pprint(event['Progress'],width=1)
+            if 'Stats' in event:
+                print("Stats")
+                pprint.pprint(event['Stats'],width=1)
+            if 'End' in event:
+                print("End")
+                pprint.pprint(event['End'],width=1)
+
+        return result
+
+
+
+
+ run_s3select(
+ "my_bucket",
+ "my_csv_object",
+ "select int(_1) as a1, int(_2) as a2 , (a1+a2) as a3 from s3object where a3>100 and a3<300;")
+
+
+S3 SELECT Responses
+-------------------
+
+Error Response
+~~~~~~~~~~~~~~
+
+::
+
+ <?xml version="1.0" encoding="UTF-8"?>
+ <Error>
+ <Code>NoSuchKey</Code>
+ <Message>The resource you requested does not exist</Message>
+ <Resource>/mybucket/myfoto.jpg</Resource>
+ <RequestId>4442587FB7D0A2F9</RequestId>
+ </Error>
+
+Report Response
+~~~~~~~~~~~~~~~
+::
+
+ HTTP/1.1 200
+ <?xml version="1.0" encoding="UTF-8"?>
+ <Payload>
+ <Records>
+ <Payload>blob</Payload>
+ </Records>
+ <Stats>
+ <Details>
+ <BytesProcessed>long</BytesProcessed>
+ <BytesReturned>long</BytesReturned>
+ <BytesScanned>long</BytesScanned>
+ </Details>
+ </Stats>
+ <Progress>
+ <Details>
+ <BytesProcessed>long</BytesProcessed>
+ <BytesReturned>long</BytesReturned>
+ <BytesScanned>long</BytesScanned>
+ </Details>
+ </Progress>
+ <Cont>
+ </Cont>
+ <End>
+ </End>
+ </Payload>
+
+Response Description
+~~~~~~~~~~~~~~~~~~~~
+
+For Ceph S3 Select, responses can be messages of the following types:
+
+**Records message**: Can contain a single record, partial records, or multiple records. Depending on the size of the result, a response can contain one or more of these messages.
+
+**Error message**: Upon detecting an error, RGW returns 400 Bad Request, and a specific error message is sent back to the client, according to its type.
+
+**Continuation message**: Ceph S3 periodically sends this message to keep the TCP connection open.
+These messages appear in responses at random. The client must detect the message type and process it accordingly.
+
+**Progress message**: Ceph S3 periodically sends this message if requested. It contains information about the progress of a query that has started but has not yet completed.
+
+**Stats message**: Ceph S3 sends this message at the end of the request. It contains statistics about the query.
+
+**End message**: Indicates that the request is complete, and no more messages will be sent. A client should not assume that a request is complete until it receives an End message.
diff --git a/doc/radosgw/session-tags.rst b/doc/radosgw/session-tags.rst
new file mode 100644
index 000000000..46722c382
--- /dev/null
+++ b/doc/radosgw/session-tags.rst
@@ -0,0 +1,427 @@
+=======================================================
+Session tags for Attribute Based Access Control in STS
+=======================================================
+
+Session tags are key-value pairs that can be passed while federating a user (currently
+they are only supported as part of the web token passed to AssumeRoleWithWebIdentity). The
+session tags are passed along as aws:PrincipalTag in the session credentials (temporary
+credentials) returned by STS. These principal tags consist of the session tags that come in
+as part of the web token and the tags that are attached to the role being assumed. Please note
+that the tags always have to be specified in the following namespace: https://aws.amazon.com/tags.
+
+An example of the session tags that are passed in by the IDP in the web token is as follows:
+
+.. code-block:: python
+
+ {
+ "jti": "947960a3-7e91-4027-99f6-da719b0d4059",
+ "exp": 1627438044,
+ "nbf": 0,
+ "iat": 1627402044,
+ "iss": "http://localhost:8080/auth/realms/quickstart",
+ "aud": "app-profile-jsp",
+ "sub": "test",
+ "typ": "ID",
+ "azp": "app-profile-jsp",
+ "auth_time": 0,
+ "session_state": "3a46e3e7-d198-4a64-8b51-69682bcfc670",
+ "preferred_username": "test",
+ "email_verified": false,
+ "acr": "1",
+ "https://aws.amazon.com/tags": [
+ {
+ "principal_tags": {
+ "Department": [
+ "Engineering",
+ "Marketing"
+ ]
+ }
+ }
+ ],
+ "client_id": "app-profile-jsp",
+ "username": "test",
+ "active": true
+ }
+
+Steps to configure Keycloak to pass tags in the web token are described here:
+:ref:`radosgw_keycloak`.
+
+The trust policy must have 'sts:TagSession' permission if the web token passed
+in by the federated user contains session tags, otherwise the
+AssumeRoleWithWebIdentity action will fail. An example of the trust policy with
+sts:TagSession is as follows:
+
+.. code-block:: python
+
+ {
+ "Version":"2012-10-17",
+ "Statement":[
+ {
+ "Effect":"Allow",
+ "Action":["sts:AssumeRoleWithWebIdentity","sts:TagSession"],
+ "Principal":{"Federated":["arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/quickstart"]},
+ "Condition":{"StringEquals":{"localhost:8080/auth/realms/quickstart:sub":"test"}}
+ }]
+ }
+
+Tag Keys
+========
+
+The following are the tag keys that can be used in the role's trust policy or the role's permission policy:
+
+1. aws:RequestTag: This key is used to compare the key-value pair passed in the request with the key-value pair
+in the role's trust policy. In the case of AssumeRoleWithWebIdentity, the session tags that are passed by the IDP
+in the web token can be used as aws:RequestTag in the role's trust policy, based on which a federated user can be
+allowed to assume a role.
+
+An example of a role trust policy that uses aws:RequestTag is as follows:
+
+.. code-block:: python
+
+ {
+ "Version":"2012-10-17",
+ "Statement":[
+ {
+ "Effect":"Allow",
+ "Action":["sts:AssumeRoleWithWebIdentity","sts:TagSession"],
+ "Principal":{"Federated":["arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/quickstart"]},
+ "Condition":{"StringEquals":{"aws:RequestTag/Department":"Engineering"}}
+ }]
+ }
+
+2. aws:PrincipalTag: This key is used to compare the key-value pair attached to the principal with the key-value pair
+in the policy. In the case of AssumeRoleWithWebIdentity, the session tags that are passed by the IDP in the web token appear
+as principal tags in the temporary credentials once a user has been authenticated, and these tags can be used as
+aws:PrincipalTag in the role's permission policy.
+
+An example of a role permission policy that uses aws:PrincipalTag is as follows:
+
+.. code-block:: python
+
+ {
+ "Version":"2012-10-17",
+ "Statement":[
+ {
+ "Effect":"Allow",
+ "Action":["s3:*"],
+       "Resource":["arn:aws:s3::t1tenant:my-test-bucket","arn:aws:s3::t1tenant:my-test-bucket/*"],
+ "Condition":{"StringEquals":{"aws:PrincipalTag/Department":"Engineering"}}
+ }]
+ }
+
+3. iam:ResourceTag: This key is used to compare the key-value pair attached to the resource with the key-value pair
+in the policy. In the case of AssumeRoleWithWebIdentity, tags attached to the role can be compared with those in
+the trust policy to allow a user to assume a role.
+RGW now supports REST APIs for tagging, listing tags, and untagging actions on a role. More information related to
+role tagging can be found here: :doc:`role`.
+
+An example of a role's trust policy that uses iam:ResourceTag is as follows:
+
+.. code-block:: python
+
+ {
+ "Version":"2012-10-17",
+ "Statement":[
+ {
+ "Effect":"Allow",
+ "Action":["sts:AssumeRoleWithWebIdentity","sts:TagSession"],
+ "Principal":{"Federated":["arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/quickstart"]},
+ "Condition":{"StringEquals":{"iam:ResourceTag/Department":"Engineering"}}
+ }]
+ }
+
+For the above to work, you need to attach the tag 'Department=Engineering' to the role.
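The role tag can be attached with a short boto3 sketch like the following (the endpoint, credentials and role name are placeholders, not values mandated by RGW; ``boto3`` is imported lazily so the first helper needs no dependencies):

```python
def department_tags(department):
    # The Tags parameter expected by the IAM TagRole action.
    return [{'Key': 'Department', 'Value': department}]

def tag_role_department(role_name, department, endpoint,
                        access_key, secret_key):
    # Requires a reachable RGW endpoint; all arguments are placeholders.
    import boto3
    iam = boto3.client('iam',
                       aws_access_key_id=access_key,
                       aws_secret_access_key=secret_key,
                       endpoint_url=endpoint,
                       region_name='')
    return iam.tag_role(RoleName=role_name,
                        Tags=department_tags(department))
```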
+
+4. aws:TagKeys: This key is used to compare the tag keys in the request with the tag keys in the policy. In the case of
+AssumeRoleWithWebIdentity this can be used to check the tag keys in a role's trust policy before a user
+is allowed to assume a role.
+This key can also be used in the role's permission policy.
+
+An example of a role's trust policy that uses aws:TagKeys is as follows:
+
+.. code-block:: python
+
+ {
+ "Version":"2012-10-17",
+ "Statement":[
+ {
+ "Effect":"Allow",
+ "Action":["sts:AssumeRoleWithWebIdentity","sts:TagSession"],
+ "Principal":{"Federated":["arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/quickstart"]},
+ "Condition":{"ForAllValues:StringEquals":{"aws:TagKeys":["Department"]}}
+ }]
+ }
+
+'ForAllValues:StringEquals' tests whether the set of tag keys in the request is a subset of the tag keys in the policy. The above
+condition therefore restricts which tag keys may be passed in the request.
+
+5. s3:ResourceTag: This key is used to compare tags present on the s3 resource (bucket or object) with the tags in
+the role's permission policy.
+
+An example of a role's permission policy that uses s3:ResourceTag is as follows:
+
+.. code-block:: python
+
+ {
+ "Version":"2012-10-17",
+ "Statement":[
+ {
+ "Effect":"Allow",
+ "Action":["s3:PutBucketTagging"],
+ "Resource":["arn:aws:s3::t1tenant:my-test-bucket","arn:aws:s3::t1tenant:my-test-bucket/*"]
+ },
+ {
+ "Effect":"Allow",
+ "Action":["s3:*"],
+ "Resource":["*"],
+ "Condition":{"StringEquals":{"s3:ResourceTag/Department":"Engineering"}}
+ }]
+ }
+
+For the above to work, you need to attach the tag 'Department=Engineering' to the bucket (and to the object too) to which you want this policy
+to apply.
+
+More examples of policies using tags
+====================================
+
+1. To assume a role by matching the tags in the incoming request with the tag attached to the role.
+aws:RequestTag is the incoming tag in the JWT (access token) and iam:ResourceTag is the tag attached to the role being assumed:
+
+.. code-block:: python
+
+ {
+ "Version":"2012-10-17",
+ "Statement":[
+ {
+ "Effect":"Allow",
+ "Action":["sts:AssumeRoleWithWebIdentity","sts:TagSession"],
+ "Principal":{"Federated":["arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/quickstart"]},
+ "Condition":{"StringEquals":{"aws:RequestTag/Department":"${iam:ResourceTag/Department}"}}
+ }]
+ }
+
+2. To evaluate a role's permission policy by matching principal tags with s3 resource tags.
+aws:PrincipalTag is the tag passed in along with the temporary credentials and s3:ResourceTag is the tag attached to
+the s3 resource (object/ bucket):
+
+.. code-block:: python
+
+
+ {
+ "Version":"2012-10-17",
+ "Statement":[
+ {
+ "Effect":"Allow",
+ "Action":["s3:PutBucketTagging"],
+ "Resource":["arn:aws:s3::t1tenant:my-test-bucket","arn:aws:s3::t1tenant:my-test-bucket/*"]
+ },
+ {
+ "Effect":"Allow",
+ "Action":["s3:*"],
+ "Resource":["*"],
+ "Condition":{"StringEquals":{"s3:ResourceTag/Department":"${aws:PrincipalTag/Department}"}}
+ }]
+ }
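Hand-escaping these documents as JSON strings (as the sample code further below does) is error-prone; a sketch of building example 1's trust policy from a Python dict instead, where the provider ARN is a placeholder:

```python
import json

def tag_matching_trust_policy(provider_arn):
    # Serializes example 1 above, suitable for
    # create_role(AssumeRolePolicyDocument=...).
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["sts:AssumeRoleWithWebIdentity", "sts:TagSession"],
            "Principal": {"Federated": [provider_arn]},
            "Condition": {"StringEquals": {
                "aws:RequestTag/Department": "${iam:ResourceTag/Department}"
            }},
        }],
    })
```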
+
+Properties of Session Tags
+==========================
+
+1. Session Tags can be multi-valued. (Multi-valued session tags are not supported in AWS)
+2. A maximum of 50 session tags are allowed to be passed in by the IDP.
+3. The maximum size of a key allowed is 128 characters.
+4. The maximum size of a value allowed is 256 characters.
+5. A tag key or its value cannot start with "aws:".
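These limits can be sketched as a client-side check (RGW performs its own validation server-side; the mapping below uses a list of values per key, matching property 1):

```python
def validate_session_tags(tags):
    # tags: mapping of key -> list of values, since session tags
    # may be multi-valued in RGW.
    if len(tags) > 50:
        raise ValueError("at most 50 session tags are allowed")
    for key, values in tags.items():
        if len(key) > 128:
            raise ValueError(f"key {key!r} exceeds 128 characters")
        if key.startswith("aws:"):
            raise ValueError("tag keys must not start with 'aws:'")
        for value in values:
            if len(value) > 256:
                raise ValueError(f"a value of {key!r} exceeds 256 characters")
            if value.startswith("aws:"):
                raise ValueError("tag values must not start with 'aws:'")
    return True
```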
+
+s3 Resource Tags
+================
+
+As stated above, the 's3:ResourceTag' key can be used for authorizing an s3 operation in RGW (this is not allowed in AWS).
+
+s3:ResourceTag is a key used to refer to tags that have been attached to an object or a bucket. Tags can be attached to an object or
+a bucket using the REST APIs available for that purpose.
+
+The following table shows which s3 resource tag type (bucket/object) is used for authorizing a particular operation.
+
++-----------------------------------+-------------------+
+| Operation | Tag type |
++===================================+===================+
+| **GetObject** | Object tags |
+| **GetObjectTags** | |
+| **DeleteObjectTags** | |
+| **DeleteObject** | |
+| **PutACLs** | |
+| **InitMultipart** | |
+| **AbortMultipart** | |
+| **ListMultipart** | |
+| **GetAttrs** | |
+| **PutObjectRetention** | |
+| **GetObjectRetention** | |
+| **PutObjectLegalHold** | |
+| **GetObjectLegalHold** | |
++-----------------------------------+-------------------+
+| **PutObjectTags** | Bucket tags |
+| **GetBucketTags** | |
+| **PutBucketTags** | |
+| **DeleteBucketTags** | |
+| **GetBucketReplication** | |
+| **DeleteBucketReplication** | |
+| **GetBucketVersioning** | |
+| **SetBucketVersioning** | |
+| **GetBucketWebsite** | |
+| **SetBucketWebsite** | |
+| **DeleteBucketWebsite** | |
+| **StatBucket** | |
+| **ListBucket** | |
+| **GetBucketLogging** | |
+| **GetBucketLocation** | |
+| **DeleteBucket** | |
+| **GetLC** | |
+| **PutLC** | |
+| **DeleteLC** | |
+| **GetCORS** | |
+| **PutCORS** | |
+| **GetRequestPayment** | |
+| **SetRequestPayment** | |
+| **PutBucketPolicy** | |
+| **GetBucketPolicy** | |
+| **DeleteBucketPolicy** | |
+| **PutBucketObjectLock** | |
+| **GetBucketObjectLock** | |
+| **GetBucketPolicyStatus** | |
+| **PutBucketPublicAccessBlock** | |
+| **GetBucketPublicAccessBlock** | |
+| **DeleteBucketPublicAccessBlock** | |
++-----------------------------------+-------------------+
+| **GetACLs** | Bucket tags for |
+| **PutACLs** | bucket ACLs |
+| | Object tags for |
+| | object ACLs |
++-----------------------------------+-------------------+
+| **PutObject** | Object tags of |
+| **CopyObject** | source object |
+| | Bucket tags of |
+| | destination bucket|
++-----------------------------------+-------------------+
+
+
+Sample code demonstrating usage of session tags
+===============================================
+
+The following is sample code that tags a role, a bucket and an object in it, and uses tag keys in a role's
+trust policy and its permission policy. It assumes that a tag 'Department=Engineering' is passed in the
+JWT (access token) by the IDP.
+
+.. code-block:: python
+
+ # -*- coding: utf-8 -*-
+
+ import boto3
+ import json
+ from botocore.exceptions import ClientError
+ from nose.tools import eq_ as eq
+
+ access_key = 'TESTER'
+ secret_key = 'test123'
+ endpoint = 'http://s3.us-east.localhost:8000'
+
+ s3client = boto3.client('s3',
+ aws_access_key_id = access_key,
+ aws_secret_access_key = secret_key,
+ endpoint_url = endpoint,
+ region_name='',)
+
+ s3res = boto3.resource('s3',
+ aws_access_key_id = access_key,
+ aws_secret_access_key = secret_key,
+ endpoint_url = endpoint,
+ region_name='',)
+
+ iam_client = boto3.client('iam',
+ aws_access_key_id=access_key,
+ aws_secret_access_key=secret_key,
+ endpoint_url=endpoint,
+ region_name=''
+ )
+
+ bucket_name = 'test-bucket'
+ s3bucket = s3client.create_bucket(Bucket=bucket_name)
+
+ bucket_tagging = s3res.BucketTagging(bucket_name)
+ Set_Tag = bucket_tagging.put(Tagging={'TagSet':[{'Key':'Department', 'Value': 'Engineering'}]})
+ try:
+ response = iam_client.create_open_id_connect_provider(
+ Url='http://localhost:8080/auth/realms/quickstart',
+ ClientIDList=[
+ 'app-profile-jsp',
+ 'app-jee-jsp'
+ ],
+ ThumbprintList=[
+ 'F7D7B3515DD0D319DD219A43A9EA727AD6065287'
+ ]
+ )
+ except ClientError as e:
+ print ("Provider already exists")
+
+ policy_document = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Federated\":[\"arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/quickstart\"]},\"Action\":[\"sts:AssumeRoleWithWebIdentity\",\"sts:TagSession\"],\"Condition\":{\"StringEquals\":{\"aws:RequestTag/Department\":\"${iam:ResourceTag/Department}\"}}}]}"
+ role_response = ""
+
+ print ("\n Getting Role \n")
+
+ try:
+ role_response = iam_client.get_role(
+ RoleName='S3Access'
+ )
+ print (role_response)
+ except ClientError as e:
+ if e.response['Error']['Code'] == 'NoSuchEntity':
+ print ("\n Creating Role \n")
+ tags_list = [
+ {'Key':'Department','Value':'Engineering'},
+ ]
+ role_response = iam_client.create_role(
+ AssumeRolePolicyDocument=policy_document,
+ Path='/',
+ RoleName='S3Access',
+ Tags=tags_list,
+ )
+ print (role_response)
+ else:
+ print("Unexpected error: %s" % e)
+
+ role_policy = "{\"Version\":\"2012-10-17\",\"Statement\":{\"Effect\":\"Allow\",\"Action\":\"s3:*\",\"Resource\":\"arn:aws:s3:::*\",\"Condition\":{\"StringEquals\":{\"s3:ResourceTag/Department\":[\"${aws:PrincipalTag/Department}\"]}}}}"
+
+ response = iam_client.put_role_policy(
+ RoleName='S3Access',
+ PolicyName='Policy1',
+ PolicyDocument=role_policy
+ )
+
+ sts_client = boto3.client('sts',
+ aws_access_key_id='abc',
+ aws_secret_access_key='def',
+ endpoint_url = endpoint,
+ region_name = '',
+ )
+
+
+ print ("\n Assuming Role with Web Identity\n")
+ response = sts_client.assume_role_with_web_identity(
+ RoleArn=role_response['Role']['Arn'],
+ RoleSessionName='Bob',
+ DurationSeconds=900,
+ WebIdentityToken='<web-token>')
+
+ s3client2 = boto3.client('s3',
+ aws_access_key_id = response['Credentials']['AccessKeyId'],
+ aws_secret_access_key = response['Credentials']['SecretAccessKey'],
+ aws_session_token = response['Credentials']['SessionToken'],
+ endpoint_url='http://s3.us-east.localhost:8000',
+ region_name='',)
+
+ bucket_body = 'this is a test file'
+ tags = 'Department=Engineering'
+ key = "test-1.txt"
+ s3_put_obj = s3client2.put_object(Body=bucket_body, Bucket=bucket_name, Key=key, Tagging=tags)
+ eq(s3_put_obj['ResponseMetadata']['HTTPStatusCode'],200)
+
+ s3_get_obj = s3client2.get_object(Bucket=bucket_name, Key=key)
+ eq(s3_get_obj['ResponseMetadata']['HTTPStatusCode'],200)
diff --git a/doc/radosgw/swift.rst b/doc/radosgw/swift.rst
new file mode 100644
index 000000000..24abd7728
--- /dev/null
+++ b/doc/radosgw/swift.rst
@@ -0,0 +1,79 @@
+.. _radosgw swift:
+
+===============================
+ Ceph Object Gateway Swift API
+===============================
+
+Ceph supports a RESTful API that is compatible with the basic data access model of the `Swift API`_.
+
+API
+---
+
+.. toctree::
+ :maxdepth: 1
+
+ Authentication <swift/auth>
+ Service Ops <swift/serviceops>
+ Container Ops <swift/containerops>
+ Object Ops <swift/objectops>
+ Temp URL Ops <swift/tempurl>
+ Tutorial <swift/tutorial>
+ Java <swift/java>
+ Python <swift/python>
+ Ruby <swift/ruby>
+
+
+Features Support
+----------------
+
+The following table describes the support status for current Swift functional features:
+
++---------------------------------+-----------------+----------------------------------------+
+| Feature | Status | Remarks |
++=================================+=================+========================================+
+| **Authentication** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Get Account Metadata** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Swift ACLs** | Supported | Supports a subset of Swift ACLs |
++---------------------------------+-----------------+----------------------------------------+
+| **List Containers** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Delete Container** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Create Container** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Get Container Metadata** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Update Container Metadata** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Delete Container Metadata** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **List Objects** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Static Website** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Create Object** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Create Large Object** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Delete Object** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Get Object** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Copy Object** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Get Object Metadata** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Update Object Metadata** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Expiring Objects** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Temporary URLs** | Partial Support | No support for container-level keys |
++---------------------------------+-----------------+----------------------------------------+
+| **Object Versioning** | Partial Support | No support for ``X-History-Location`` |
++---------------------------------+-----------------+----------------------------------------+
+| **CORS** | Not Supported | |
++---------------------------------+-----------------+----------------------------------------+
+
+.. _Swift API: https://developer.openstack.org/api-ref/object-store/index.html
diff --git a/doc/radosgw/swift/auth.rst b/doc/radosgw/swift/auth.rst
new file mode 100644
index 000000000..12d6b23ff
--- /dev/null
+++ b/doc/radosgw/swift/auth.rst
@@ -0,0 +1,82 @@
+================
+ Authentication
+================
+
+Swift API requests that require authentication must contain an
+``X-Storage-Token`` authentication token in the request header.
+The token may be retrieved from RADOS Gateway, or from another authenticator.
+To obtain a token from RADOS Gateway, you must create a user. For example::
+
+ sudo radosgw-admin user create --subuser="{username}:{subusername}" --uid="{username}" \
+ --display-name="{Display Name}" --key-type=swift --secret="{password}" --access=full
+
+For details on RADOS Gateway administration, see `radosgw-admin`_.
+
+.. _radosgw-admin: ../../../man/8/radosgw-admin/
+
+.. note::
+ For those familiar with the Swift API: this implements the Swift auth v1.0 API. As such,
+ ``{username}`` above is generally equivalent to a Swift account and ``{subusername}``
+ is a user under that account.
+
+Auth Get
+--------
+
+To authenticate a user, make a request containing an ``X-Auth-User`` and an
+``X-Auth-Key`` in the header.
+
+Syntax
+~~~~~~
+
+::
+
+ GET /auth HTTP/1.1
+ Host: swift.radosgwhost.com
+ X-Auth-User: johndoe
+ X-Auth-Key: R7UUOLFDI2ZI9PRCQ53K
+
+Request Headers
+~~~~~~~~~~~~~~~
+
+``X-Auth-User``
+
+:Description: The RADOS GW username to authenticate.
+:Type: String
+:Required: Yes
+
+``X-Auth-Key``
+
+:Description: The key associated with the RADOS GW username.
+:Type: String
+:Required: Yes
+
+
+Response Headers
+~~~~~~~~~~~~~~~~
+
+The response from the server should include an ``X-Auth-Token`` value. The
+response may also contain an ``X-Storage-Url`` that provides the
+``{api version}/{account}`` prefix that is specified in other requests
+throughout the API documentation.
+
+
+``X-Storage-Token``
+
+:Description: The authorization token for the ``X-Auth-User`` specified in the request.
+:Type: String
+
+
+``X-Storage-Url``
+
+:Description: The URL and ``{api version}/{account}`` path for the user.
+:Type: String
+
+A typical response looks like this::
+
+ HTTP/1.1 204 No Content
+ Date: Mon, 16 Jul 2012 11:05:33 GMT
+ Server: swift
+ X-Storage-Url: https://swift.radosgwhost.com/v1/ACCT-12345
+ X-Auth-Token: UOlCCC8TahFKlWuv9DB09TWHF0nDjpPElha0kAa
+ Content-Length: 0
+ Content-Type: text/plain; charset=UTF-8
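As a sketch, the request and response above can be handled with two small Python helpers; they only build the request headers and pull the token and storage URL out of a response-header mapping, leaving the HTTP transport to whatever client library you use:

```python
def auth_request_headers(username, api_key):
    # Headers sent on GET /auth, per the syntax above.
    return {'X-Auth-User': username, 'X-Auth-Key': api_key}

def parse_auth_response(headers):
    # Header names are case-insensitive; later requests pass the token
    # back as X-Auth-Token (or X-Storage-Token).
    lowered = {k.lower(): v for k, v in headers.items()}
    return lowered['x-auth-token'], lowered.get('x-storage-url')
```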
diff --git a/doc/radosgw/swift/containerops.rst b/doc/radosgw/swift/containerops.rst
new file mode 100644
index 000000000..434b90ef5
--- /dev/null
+++ b/doc/radosgw/swift/containerops.rst
@@ -0,0 +1,341 @@
+======================
+ Container Operations
+======================
+
+A container is a mechanism for storing data objects. An account may
+have many containers, but container names must be unique. This API enables a
+client to create a container, set access controls and metadata,
+retrieve a container's contents, and delete a container. Since this API
+makes requests related to information in a particular user's account, all
+requests in this API must be authenticated unless a container's access control
+is deliberately made publicly accessible (i.e., allows anonymous requests).
+
+.. note:: The Amazon S3 API uses the term 'bucket' to describe a data container.
+ When you hear someone refer to a 'bucket' within the Swift API, the term
+ 'bucket' may be construed as the equivalent of the term 'container.'
+
+One facet of object storage is that it does not support hierarchical paths
+or directories. Instead, it supports one level consisting of one or more
+containers, where each container may have objects. The RADOS Gateway's
+Swift-compatible API supports the notion of 'pseudo-hierarchical containers,'
+which is a means of using object naming to emulate a container (or directory)
+hierarchy without actually implementing one in the storage system. You may
+name objects with pseudo-hierarchical names
+(e.g., photos/buildings/empire-state.jpg), but container names cannot
+contain a forward slash (``/``) character.
+
+
+Create a Container
+==================
+
+To create a new container, make a ``PUT`` request with the API version, account,
+and the name of the new container. The container name must be unique, must not
+contain a forward-slash (/) character, and should be less than 256 bytes. You
+may include access control headers and metadata headers in the request. The
+operation is idempotent; that is, if you make a request to create a container
+that already exists, it will return an HTTP 202 status code, but will not
+create another container.
+
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{api version}/{account}/{container} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+ X-Container-Read: {comma-separated-uids}
+ X-Container-Write: {comma-separated-uids}
+ X-Container-Meta-{key}: {value}
+
+
+Headers
+~~~~~~~
+
+``X-Container-Read``
+
+:Description: The user IDs with read permissions for the container.
+:Type: Comma-separated string values of user IDs.
+:Required: No
+
+``X-Container-Write``
+
+:Description: The user IDs with write permissions for the container.
+:Type: Comma-separated string values of user IDs.
+:Required: No
+
+``X-Container-Meta-{key}``
+
+:Description: A user-defined meta data key that takes an arbitrary string value.
+:Type: String
+:Required: No
+
+
+HTTP Response
+~~~~~~~~~~~~~
+
+If a container with the same name already exists, and the user is the
+container's owner, then the operation will succeed. Otherwise the operation
+will fail.
+
+``409``
+
+:Description: The container already exists under a different user's ownership.
+:Status Code: ``BucketAlreadyExists``
+
+
+
+
+List a Container's Objects
+==========================
+
+To list the objects within a container, make a ``GET`` request with the
+API version, account, and the name of the container. You can specify query
+parameters to filter the full list, or leave out the parameters to return a list
+of the first 10,000 object names stored in the container.
+
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{api version}/{account}/{container} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+
+
+Parameters
+~~~~~~~~~~
+
+``format``
+
+:Description: Defines the format of the result.
+:Type: String
+:Valid Values: ``json`` | ``xml``
+:Required: No
+
+``prefix``
+
+:Description: Limits the result set to objects beginning with the specified prefix.
+:Type: String
+:Required: No
+
+``marker``
+
+:Description: Returns a list of results greater than the marker value.
+:Type: String
+:Required: No
+
+``limit``
+
+:Description: Limits the number of results to the specified value.
+:Type: Integer
+:Valid Range: 0 - 10,000
+:Required: No
+
+``delimiter``
+
+:Description: The delimiter between the prefix and the rest of the object name.
+:Type: String
+:Required: No
+
+``path``
+
+:Description: The pseudo-hierarchical path of the objects.
+:Type: String
+:Required: No
+
+``allow_unordered``
+
+:Description: Allows the results to be returned unordered to reduce computation overhead. Cannot be used with ``delimiter``.
+:Type: Boolean
+:Required: No
+:Non-Standard Extension: Yes
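A sketch of composing the listing URL from these parameters (names mirror the parameters above; nothing here is RGW-specific beyond the path layout):

```python
from urllib.parse import urlencode

def list_objects_path(api_version, account, container, **params):
    # Accept only the query parameters documented above.
    allowed = {'format', 'prefix', 'marker', 'limit', 'delimiter',
               'path', 'allow_unordered'}
    unknown = set(params) - allowed
    if unknown:
        raise ValueError(f"unsupported parameters: {sorted(unknown)}")
    query = urlencode(params)
    base = f"/{api_version}/{account}/{container}"
    return f"{base}?{query}" if query else base
```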
+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+``container``
+
+:Description: The container.
+:Type: Container
+
+``object``
+
+:Description: An object within the container.
+:Type: Object
+
+``name``
+
+:Description: The name of an object within the container.
+:Type: String
+
+``hash``
+
+:Description: A hash code of the object's contents.
+:Type: String
+
+``last_modified``
+
+:Description: The last time the object's contents were modified.
+:Type: Date
+
+``content_type``
+
+:Description: The type of content within the object.
+:Type: String
+
+
+
+Update a Container's ACLs
+=========================
+
+When a user creates a container, the user has read and write access to the
+container by default. To allow other users to read a container's contents or
+write to a container, you must specifically enable the user.
+You may also specify ``*`` in the ``X-Container-Read`` or ``X-Container-Write``
+settings, which effectively enables all users to either read from or write
+to the container. Setting ``*`` makes the container public. That is, it
+enables anonymous users to either read from or write to the container.
+
+.. note:: If you are planning to expose public read ACL functionality
+ for the Swift API, it is strongly recommended to include the
+ Swift account name in the endpoint definition, so as to most
+ closely emulate the behavior of native OpenStack Swift. To
+ do so, set the ``ceph.conf`` configuration option ``rgw
+ swift account in url = true``, and update your Keystone
+ endpoint to the URL suffix ``/v1/AUTH_%(tenant_id)s``
+ (instead of just ``/v1``).
+
+
+Syntax
+~~~~~~
+
+::
+
+ POST /{api version}/{account}/{container} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+ X-Container-Read: *
+ X-Container-Write: {uid1}, {uid2}, {uid3}
+
+Request Headers
+~~~~~~~~~~~~~~~
+
+``X-Container-Read``
+
+:Description: The user IDs with read permissions for the container.
+:Type: Comma-separated string values of user IDs.
+:Required: No
+
+``X-Container-Write``
+
+:Description: The user IDs with write permissions for the container.
+:Type: Comma-separated string values of user IDs.
+:Required: No
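A small sketch of composing these headers, where ``'*'`` in the read list makes the container publicly readable:

```python
def container_acl_headers(read_users=None, write_users=None):
    # Each header takes a comma-separated list of user IDs; omit an
    # argument entirely to leave that ACL unchanged.
    headers = {}
    if read_users is not None:
        headers['X-Container-Read'] = ', '.join(read_users)
    if write_users is not None:
        headers['X-Container-Write'] = ', '.join(write_users)
    return headers
```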
+
+
+Add/Update Container Metadata
+=============================
+
+To add metadata to a container, make a ``POST`` request with the API version,
+account, and container name. You must have write permissions on the
+container to add or update metadata.
+
+Syntax
+~~~~~~
+
+::
+
+ POST /{api version}/{account}/{container} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+ X-Container-Meta-Color: red
+ X-Container-Meta-Taste: salty
+
+Request Headers
+~~~~~~~~~~~~~~~
+
+``X-Container-Meta-{key}``
+
+:Description: A user-defined meta data key that takes an arbitrary string value.
+:Type: String
+:Required: No
+
+
+Enable Object Versioning for a Container
+========================================
+
+To enable object versioning for a container, make a ``POST`` request with
+the API version, account, and container name. You must have write
+permissions on the container to add or update metadata.
+
+.. note:: Object versioning support is not enabled in radosgw by
+ default; you must set ``rgw swift versioning enabled =
+ true`` in ``ceph.conf`` to enable this feature.
+
+Syntax
+~~~~~~
+
+::
+
+ POST /{api version}/{account}/{container} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+ X-Versions-Location: {archive-container}
+
+Request Headers
+~~~~~~~~~~~~~~~
+
+``X-Versions-Location``
+
+:Description: The name of a container (the "archive container") that
+ will be used to store versions of the objects in the
+ container that the ``POST`` request is made on (the
+ "current container"). The archive container need not
+ exist at the time it is being referenced, but once
+ ``X-Versions-Location`` is set on the current container,
+ and object versioning is thus enabled, the archive
+ container must exist before any further objects are
+ updated or deleted in the current container.
+
+ .. note:: ``X-Versions-Location`` is the only
+ versioning-related header that radosgw
+ interprets. ``X-History-Location``, supported
+ by native OpenStack Swift, is currently not
+ supported by radosgw.
+:Type: String
+:Required: No (if this header is passed with an empty value, object
+ versioning on the current container is disabled, but the
+ archive container continues to exist.)
+
+
+Delete a Container
+==================
+
+To delete a container, make a ``DELETE`` request with the API version, account,
+and the name of the container. The container must be empty. If you'd like to check
+if the container is empty, execute a ``HEAD`` request against the container. Once
+you have successfully removed the container, you will be able to reuse the container name.
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{api version}/{account}/{container} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+
+
+HTTP Response
+~~~~~~~~~~~~~
+
+``204``
+
+:Description: The container was removed.
+:Status Code: ``NoContent``
+
diff --git a/doc/radosgw/swift/java.rst b/doc/radosgw/swift/java.rst
new file mode 100644
index 000000000..8977a3b16
--- /dev/null
+++ b/doc/radosgw/swift/java.rst
@@ -0,0 +1,175 @@
+.. _java_swift:
+
+=====================
+ Java Swift Examples
+=====================
+
+Setup
+=====
+
+The following examples may require some or all of the following Java
+classes to be imported:
+
+.. code-block:: java
+
+ import org.javaswift.joss.client.factory.AccountConfig;
+ import org.javaswift.joss.client.factory.AccountFactory;
+ import org.javaswift.joss.client.factory.AuthenticationMethod;
+ import org.javaswift.joss.model.Account;
+ import org.javaswift.joss.model.Container;
+ import org.javaswift.joss.model.StoredObject;
+ import java.io.File;
+ import java.io.IOException;
+ import java.util.*;
+
+
+Create a Connection
+===================
+
+This creates a connection so that you can interact with the server:
+
+.. code-block:: java
+
+ String username = "USERNAME";
+ String password = "PASSWORD";
+ String authUrl = "https://radosgw.endpoint/auth/1.0";
+
+ AccountConfig config = new AccountConfig();
+ config.setUsername(username);
+ config.setPassword(password);
+ config.setAuthUrl(authUrl);
+ config.setAuthenticationMethod(AuthenticationMethod.BASIC);
+ Account account = new AccountFactory(config).createAccount();
+
+
+Create a Container
+==================
+
+This creates a new container called ``my-new-container``:
+
+.. code-block:: java
+
+ Container container = account.getContainer("my-new-container");
+ container.create();
+
+
+Create an Object
+================
+
+This creates an object ``foo.txt`` from the file named ``foo.txt`` in
+the container ``my-new-container``:
+
+.. code-block:: java
+
+ Container container = account.getContainer("my-new-container");
+ StoredObject object = container.getObject("foo.txt");
+ object.uploadObject(new File("foo.txt"));
+
+
+Add/Update Object Metadata
+==========================
+
+This adds the metadata key-value pair ``key``:``value`` to the object named
+``foo.txt`` in the container ``my-new-container``:
+
+.. code-block:: java
+
+ Container container = account.getContainer("my-new-container");
+ StoredObject object = container.getObject("foo.txt");
+ Map<String, Object> metadata = new TreeMap<String, Object>();
+ metadata.put("key", "value");
+ object.setMetadata(metadata);
+
+
+List Owned Containers
+=====================
+
+This gets a list of Containers that you own.
+This also prints out the container name.
+
+.. code-block:: java
+
+ Collection<Container> containers = account.list();
+ for (Container currentContainer : containers) {
+ System.out.println(currentContainer.getName());
+ }
+
+The output will look something like this::
+
+ mahbuckat1
+ mahbuckat2
+ mahbuckat3
+
+
+List a Container's Content
+==========================
+
+This gets a list of objects in the container ``my-new-container``; and, it also
+prints out each object's name, the file size, and last modified date:
+
+.. code-block:: java
+
+ Container container = account.getContainer("my-new-container");
+ Collection<StoredObject> objects = container.list();
+ for (StoredObject currentObject : objects) {
+ System.out.println(currentObject.getName());
+ }
+
+The output will look something like this::
+
+ myphoto1.jpg
+ myphoto2.jpg
+
+
+Retrieve an Object's Metadata
+=============================
+
+This retrieves metadata and gets the MIME type for an object named ``foo.txt``
+in a container named ``my-new-container``:
+
+.. code-block:: java
+
+ Container container = account.getContainer("my-new-container");
+ StoredObject object = container.getObject("foo.txt");
+ Map<String, Object> returnedMetadata = object.getMetadata();
+ for (String name : returnedMetadata.keySet()) {
+ System.out.println("META / "+name+": "+returnedMetadata.get(name));
+ }
+
+
+Retrieve an Object
+==================
+
+This downloads the object ``foo.txt`` in the container ``my-new-container``
+and saves it in ``./outfile.txt``:
+
+.. code-block:: java
+
+ Container container = account.getContainer("my-new-container");
+ StoredObject object = container.getObject("foo.txt");
+ object.downloadObject(new File("outfile.txt"));
+
+
+Delete an Object
+================
+
+This deletes the object ``goodbye.txt`` in the container "my-new-container":
+
+.. code-block:: java
+
+ Container container = account.getContainer("my-new-container");
+ StoredObject object = container.getObject("goodbye.txt");
+ object.delete();
+
+
+Delete a Container
+==================
+
+This deletes a container named "my-new-container":
+
+.. code-block:: java
+
+ Container container = account.getContainer("my-new-container");
+ container.delete();
+
+.. note:: The container must be empty, or the delete request will fail.
diff --git a/doc/radosgw/swift/objectops.rst b/doc/radosgw/swift/objectops.rst
new file mode 100644
index 000000000..fc8d21967
--- /dev/null
+++ b/doc/radosgw/swift/objectops.rst
@@ -0,0 +1,271 @@
+===================
+ Object Operations
+===================
+
+An object is a container for storing data and metadata. A container may
+have many objects, but the object names must be unique. This API enables a
+client to create an object, set access controls and metadata, retrieve an
+object's data and metadata, and delete an object. Since this API makes requests
+related to information in a particular user's account, all requests in this API
+must be authenticated unless the container or object's access control is
+deliberately made publicly accessible (i.e., allows anonymous requests).
+
+
+Create/Update an Object
+=======================
+
+To create a new object, make a ``PUT`` request with the API version, account,
+container name and the name of the new object. You must have write permission
+on the container to create or update an object. The object name must be
+unique within the container. If you reuse an existing object name, the
+``PUT`` request overwrites that object's contents. However, you may
+use pseudo-hierarchical syntax in your object name to distinguish it from
+another object of the same name if it is under a different pseudo-hierarchical
+directory. You may include access control headers and metadata headers in the
+request.
+
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{api version}/{account}/{container}/{object} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+
+
+Request Headers
+~~~~~~~~~~~~~~~
+
+``ETag``
+
+:Description: An MD5 hash of the object's contents. Recommended.
+:Type: String
+:Required: No
+
+
+``Content-Type``
+
+:Description: The type of content the object contains.
+:Type: String
+:Required: No
+
+
+``Transfer-Encoding``
+
+:Description: Indicates whether the object is part of a larger aggregate object.
+:Type: String
+:Valid Values: ``chunked``
+:Required: No
+
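The request above can also be assembled programmatically. The following Python sketch (a hypothetical helper, not part of any SDK; ``AUTH_test`` and the token are placeholders) builds the request path and headers, including the recommended ``ETag`` computed as the MD5 of the body:

```python
import hashlib


def build_put_request(api_version, account, container, obj, data,
                      auth_token, content_type='application/octet-stream'):
    """Assemble the path and headers for a Swift object PUT request."""
    path = '/{0}/{1}/{2}/{3}'.format(api_version, account, container, obj)
    headers = {
        'X-Auth-Token': auth_token,
        'Content-Type': content_type,
        # Recommended: MD5 of the body, so the server can detect corruption
        'ETag': hashlib.md5(data).hexdigest(),
    }
    return path, headers


path, headers = build_put_request('v1', 'AUTH_test', 'my-new-container',
                                  'hello.txt', b'hello', 'tok123',
                                  content_type='text/plain')
```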
+
+Copy an Object
+==============
+
+Copying an object allows you to make a server-side copy of an object, so that
+you don't have to download it and upload it under another container/name.
+To copy the contents of one object to another object, you may make either a
+``PUT`` request or a ``COPY`` request with the API version, account, and the
+container name. For a ``PUT`` request, use the destination container and object
+name in the request, and the source container and object in the request header.
+For a ``COPY`` request, use the source container and object in the request, and
+the destination container and object in the request header. You must have write
+permission on the container to copy an object. The destination object name must be
+unique within the container. If you reuse an existing object name, the request
+overwrites that destination object. However, you may
+use pseudo-hierarchical syntax in your object name to distinguish the destination
+object from the source object of the same name if it is under a different
+pseudo-hierarchical directory. You may include access control headers and metadata
+headers in the request.
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{api version}/{account}/{dest-container}/{dest-object} HTTP/1.1
+ X-Copy-From: {source-container}/{source-object}
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+
+
+or alternatively:
+
+::
+
+ COPY /{api version}/{account}/{source-container}/{source-object} HTTP/1.1
+ Destination: {dest-container}/{dest-object}
+
+Request Headers
+~~~~~~~~~~~~~~~
+
+``X-Copy-From``
+
+:Description: Used with a ``PUT`` request to define the source container/object path.
+:Type: String
+:Required: Yes, if using ``PUT``
+
+
+``Destination``
+
+:Description: Used with a ``COPY`` request to define the destination container/object path.
+:Type: String
+:Required: Yes, if using ``COPY``
+
+
+``If-Modified-Since``
+
+:Description: Only copies if modified since the date/time of the source object's ``last_modified`` attribute.
+:Type: Date
+:Required: No
+
+
+``If-Unmodified-Since``
+
+:Description: Only copies if not modified since the date/time of the source object's ``last_modified`` attribute.
+:Type: Date
+:Required: No
+
+``Copy-If-Match``
+
+:Description: Copies only if the ETag in the request matches the source object's ETag.
+:Type: ETag
+:Required: No
+
+
+``Copy-If-None-Match``
+
+:Description: Copies only if the ETag in the request does not match the source object's ETag.
+:Type: ETag
+:Required: No
+
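The two equivalent request shapes can be sketched in Python (a hypothetical helper, not part of any SDK): depending on the method, the source or the destination moves between the request path and a header.

```python
def copy_request(api_version, account, src, dst, auth_token, use_put=True):
    """Build the method, path, and headers for a server-side copy.

    src and dst are 'container/object' strings. With use_put=True the
    destination goes in the path and the source in X-Copy-From; with
    use_put=False (a COPY request) the source goes in the path and the
    destination in the Destination header.
    """
    headers = {'X-Auth-Token': auth_token}
    if use_put:
        method = 'PUT'
        path = '/{0}/{1}/{2}'.format(api_version, account, dst)
        headers['X-Copy-From'] = src
    else:
        method = 'COPY'
        path = '/{0}/{1}/{2}'.format(api_version, account, src)
        headers['Destination'] = dst
    return method, path, headers
```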
+
+Delete an Object
+================
+
+To delete an object, make a ``DELETE`` request with the API version, account,
+container and object name. You must have write permissions on the container to delete
+an object within it. Once you have successfully deleted the object, you will be able to
+reuse the object name.
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{api version}/{account}/{container}/{object} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+
+
+Get an Object
+=============
+
+To retrieve an object, make a ``GET`` request with the API version, account,
+container and object name. You must have read permissions on the container to
+retrieve an object within it.
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{api version}/{account}/{container}/{object} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+
+
+
+Request Headers
+~~~~~~~~~~~~~~~
+
+``Range``
+
+:Description: To retrieve a subset of an object's contents, you may specify a byte range.
+:Type: String
+:Required: No
+
+
+``If-Modified-Since``
+
+:Description: Gets only if the object was modified since the date/time of its ``last_modified`` attribute.
+:Type: Date
+:Required: No
+
+
+``If-Unmodified-Since``
+
+:Description: Gets only if the object was not modified since the date/time of its ``last_modified`` attribute.
+:Type: Date
+:Required: No
+
+``If-Match``
+
+:Description: Gets only if the ETag in the request matches the object's ETag.
+:Type: ETag
+:Required: No
+
+
+``If-None-Match``
+
+:Description: Gets only if the ETag in the request does not match the object's ETag.
+:Type: ETag
+:Required: No
+
+
+
+Response Headers
+~~~~~~~~~~~~~~~~
+
+``Content-Range``
+
+:Description: The range of the subset of object contents. Returned only if the range header field was specified in the request.
+:Type: String
+
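Byte ranges follow standard HTTP syntax. The following sketch (a hypothetical helper) shows the three valid shapes of the range value: a closed range, an open-ended range from an offset, and a suffix range of the final N bytes:

```python
def range_header(first_byte=None, last_byte=None):
    """Build a byte-range value for a partial object GET.

    Both ends given requests a closed range; only first_byte requests
    everything from that offset onwards; only last_byte requests the
    final last_byte bytes of the object.
    """
    if first_byte is not None and last_byte is not None:
        return 'bytes={0}-{1}'.format(first_byte, last_byte)
    if first_byte is not None:
        return 'bytes={0}-'.format(first_byte)
    if last_byte is not None:
        return 'bytes=-{0}'.format(last_byte)
    raise ValueError('at least one end of the range is required')
```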
+
+Get Object Metadata
+===================
+
+To retrieve an object's metadata, make a ``HEAD`` request with the API version,
+account, container and object name. You must have read permissions on the
+container to retrieve metadata from an object within the container. This request
+returns the same header information as the request for the object itself, but
+it does not return the object's data.
+
+Syntax
+~~~~~~
+
+::
+
+ HEAD /{api version}/{account}/{container}/{object} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+
+
+
+Add/Update Object Metadata
+==========================
+
+To add metadata to an object, make a ``POST`` request with the API version,
+account, container and object name. You must have write permissions on the
+parent container to add or update metadata.
+
+
+Syntax
+~~~~~~
+
+::
+
+ POST /{api version}/{account}/{container}/{object} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+
+Request Headers
+~~~~~~~~~~~~~~~
+
+``X-Object-Meta-{key}``
+
+:Description: A user-defined metadata key that takes an arbitrary string value.
+:Type: String
+:Required: No
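A plain dictionary of user metadata maps directly onto these headers. A minimal Python sketch (a hypothetical helper, not part of any SDK):

```python
def metadata_headers(metadata):
    """Turn a dict of user-defined metadata into X-Object-Meta-{key}
    headers for the POST request described above."""
    return {'X-Object-Meta-' + key: str(value)
            for key, value in metadata.items()}
```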
+
diff --git a/doc/radosgw/swift/python.rst b/doc/radosgw/swift/python.rst
new file mode 100644
index 000000000..0b1f8d0da
--- /dev/null
+++ b/doc/radosgw/swift/python.rst
@@ -0,0 +1,114 @@
+.. _python_swift:
+
+=====================
+Python Swift Examples
+=====================
+
+Create a Connection
+===================
+
+This creates a connection so that you can interact with the server:
+
+.. code-block:: python
+
+ import swiftclient
+ user = 'account_name:username'
+ key = 'your_api_key'
+
+ conn = swiftclient.Connection(
+ user=user,
+ key=key,
+ authurl='https://objects.dreamhost.com/auth',
+ )
+
+
+Create a Container
+==================
+
+This creates a new container called ``my-new-container``:
+
+.. code-block:: python
+
+ container_name = 'my-new-container'
+ conn.put_container(container_name)
+
+
+Create an Object
+================
+
+This creates a file ``hello.txt`` from the file named ``my_hello.txt``:
+
+.. code-block:: python
+
+    with open('my_hello.txt', 'rb') as hello_file:
+        conn.put_object(container_name, 'hello.txt',
+                        contents=hello_file.read(),
+                        content_type='text/plain')
+
+
+List Owned Containers
+=====================
+
+This gets a list of containers that you own, and prints out the container name:
+
+.. code-block:: python
+
+ for container in conn.get_account()[1]:
+ print(container['name'])
+
+The output will look something like this::
+
+ mahbuckat1
+ mahbuckat2
+ mahbuckat3
+
+List a Container's Content
+==========================
+
+This gets a list of objects in the container, and prints out each
+object's name, the file size, and last modified date:
+
+.. code-block:: python
+
+ for data in conn.get_container(container_name)[1]:
+ print('{0}\t{1}\t{2}'.format(data['name'], data['bytes'], data['last_modified']))
+
+The output will look something like this::
+
+ myphoto1.jpg 251262 2011-08-08T21:35:48.000Z
+ myphoto2.jpg 262518 2011-08-08T21:38:01.000Z
+
+
+Retrieve an Object
+==================
+
+This downloads the object ``hello.txt`` and saves it in
+``./my_hello.txt``:
+
+.. code-block:: python
+
+ obj_tuple = conn.get_object(container_name, 'hello.txt')
+    with open('my_hello.txt', 'wb') as my_hello:
+ my_hello.write(obj_tuple[1])
+
+
+Delete an Object
+================
+
+This deletes the object ``hello.txt``:
+
+.. code-block:: python
+
+ conn.delete_object(container_name, 'hello.txt')
+
+Delete a Container
+==================
+
+.. note::
+
+   The container must be empty, or the request will fail.
+
+.. code-block:: python
+
+ conn.delete_container(container_name)
+
diff --git a/doc/radosgw/swift/ruby.rst b/doc/radosgw/swift/ruby.rst
new file mode 100644
index 000000000..a20b66d88
--- /dev/null
+++ b/doc/radosgw/swift/ruby.rst
@@ -0,0 +1,119 @@
+.. _ruby_swift:
+
+=====================
+ Ruby Swift Examples
+=====================
+
+Create a Connection
+===================
+
+This creates a connection so that you can interact with the server:
+
+.. code-block:: ruby
+
+ require 'cloudfiles'
+ username = 'account_name:user_name'
+ api_key = 'your_secret_key'
+
+ conn = CloudFiles::Connection.new(
+ :username => username,
+ :api_key => api_key,
+ :auth_url => 'http://objects.dreamhost.com/auth'
+ )
+
+
+Create a Container
+==================
+
+This creates a new container called ``my-new-container``:
+
+.. code-block:: ruby
+
+ container = conn.create_container('my-new-container')
+
+
+Create an Object
+================
+
+This creates a file ``hello.txt`` from the file named ``my_hello.txt``:
+
+.. code-block:: ruby
+
+ obj = container.create_object('hello.txt')
+ obj.load_from_filename('./my_hello.txt')
+ obj.content_type = 'text/plain'
+
+
+
+List Owned Containers
+=====================
+
+This gets a list of Containers that you own, and also prints out
+the container name:
+
+.. code-block:: ruby
+
+ conn.containers.each do |container|
+ puts container
+ end
+
+The output will look something like this::
+
+ mahbuckat1
+ mahbuckat2
+ mahbuckat3
+
+
+List a Container's Contents
+===========================
+
+This gets a list of objects in the container, and prints out each
+object's name, the file size, and last modified date:
+
+.. code-block:: ruby
+
+ require 'date' # not necessary in the next version
+
+ container.objects_detail.each do |name, data|
+ puts "#{name}\t#{data[:bytes]}\t#{data[:last_modified]}"
+ end
+
+The output will look something like this::
+
+ myphoto1.jpg 251262 2011-08-08T21:35:48.000Z
+ myphoto2.jpg 262518 2011-08-08T21:38:01.000Z
+
+
+
+Retrieve an Object
+==================
+
+This downloads the object ``hello.txt`` and saves it in
+``./my_hello.txt``:
+
+.. code-block:: ruby
+
+ obj = container.object('hello.txt')
+ obj.save_to_filename('./my_hello.txt')
+
+
+Delete an Object
+================
+
+This deletes the object ``goodbye.txt``:
+
+.. code-block:: ruby
+
+ container.delete_object('goodbye.txt')
+
+
+Delete a Container
+==================
+
+.. note::
+
+   The container must be empty, or the request will fail.
+
+.. code-block:: ruby
+
+    conn.delete_container('my-new-container')
diff --git a/doc/radosgw/swift/serviceops.rst b/doc/radosgw/swift/serviceops.rst
new file mode 100644
index 000000000..a00f3d807
--- /dev/null
+++ b/doc/radosgw/swift/serviceops.rst
@@ -0,0 +1,76 @@
+====================
+ Service Operations
+====================
+
+To retrieve data about our Swift-compatible service, you may execute ``GET``
+requests using the ``X-Storage-Url`` value retrieved during authentication.
+
+List Containers
+===============
+
+A ``GET`` request that specifies the API version and the account will return
+a list of containers for a particular user account. Since the request returns
+a particular user's containers, the request requires an authentication token.
+The request cannot be made anonymously.
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{api version}/{account} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``limit``
+
+:Description: Limits the number of results to the specified value.
+:Type: Integer
+:Required: No
+
+``format``
+
+:Description: Defines the format of the result.
+:Type: String
+:Valid Values: ``json`` | ``xml``
+:Required: No
+
+
+``marker``
+
+:Description: Returns a list of results greater than the marker value.
+:Type: String
+:Required: No
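The optional parameters above are passed as a query string. A minimal Python sketch of building the listing URL (``base_url`` stands in for the ``X-Storage-Url`` prefix; the helper is hypothetical):

```python
from urllib.parse import urlencode


def list_containers_url(base_url, account, limit=None, fmt=None, marker=None):
    """Build the container-listing URL with the optional parameters
    limit, format, and marker."""
    params = {}
    if limit is not None:
        params['limit'] = limit
    if fmt is not None:
        params['format'] = fmt
    if marker is not None:
        params['marker'] = marker
    url = '{0}/{1}'.format(base_url.rstrip('/'), account)
    if params:
        # Sort for a deterministic query string
        url += '?' + urlencode(sorted(params.items()))
    return url
```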
+
+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+The response contains a list of containers, or returns an HTTP 204 (No
+Content) response code if the account has no containers.
+
+``account``
+
+:Description: A list for account information.
+:Type: Container
+
+``container``
+
+:Description: The list of containers.
+:Type: Container
+
+``name``
+
+:Description: The name of a container.
+:Type: String
+
+``bytes``
+
+:Description: The size of the container.
+:Type: Integer
\ No newline at end of file
diff --git a/doc/radosgw/swift/tempurl.rst b/doc/radosgw/swift/tempurl.rst
new file mode 100644
index 000000000..41dbb0ccb
--- /dev/null
+++ b/doc/radosgw/swift/tempurl.rst
@@ -0,0 +1,102 @@
+====================
+ Temp URL Operations
+====================
+
+The Swift endpoint of radosgw supports Temp URL functionality, which
+allows temporary access to objects (e.g., for ``GET`` requests) without
+the need to share credentials. To use it, first set the value of
+``X-Account-Meta-Temp-URL-Key`` and, optionally,
+``X-Account-Meta-Temp-URL-Key-2``. The Temp URL functionality relies on
+an HMAC-SHA1 signature computed against these secret keys.
+
+.. note:: If you are planning to expose Temp URL functionality for the
+ Swift API, it is strongly recommended to include the Swift
+ account name in the endpoint definition, so as to most
+ closely emulate the behavior of native OpenStack Swift. To
+ do so, set the ``ceph.conf`` configuration option ``rgw
+ swift account in url = true``, and update your Keystone
+ endpoint to the URL suffix ``/v1/AUTH_%(tenant_id)s``
+ (instead of just ``/v1``).
+
+
+POST Temp-URL Keys
+==================
+
+A ``POST`` request to the Swift account with the required key will set
+the secret temp URL key for the account, against which temporary URL
+access can be provided to accounts. Up to two keys are supported, and
+signatures are checked against both the keys, if present, so that keys
+can be rotated without invalidating the temporary URLs.
+
+.. note:: Native OpenStack Swift also supports the option to set
+ temporary URL keys at the container level, issuing a
+ ``POST`` or ``PUT`` request against a container that sets
+ ``X-Container-Meta-Temp-URL-Key`` or
+ ``X-Container-Meta-Temp-URL-Key-2``. This functionality is
+ not supported in radosgw; temporary URL keys can only be set
+ and used at the account level.
+
+Syntax
+~~~~~~
+
+::
+
+ POST /{api version}/{account} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+
+Request Headers
+~~~~~~~~~~~~~~~
+
+``X-Account-Meta-Temp-URL-Key``
+
+:Description: A user-defined key that takes an arbitrary string value.
+:Type: String
+:Required: Yes
+
+``X-Account-Meta-Temp-URL-Key-2``
+
+:Description: A user-defined key that takes an arbitrary string value.
+:Type: String
+:Required: No
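Key rotation works because a presented signature is checked against every configured account key. A minimal Python sketch of that check (a hypothetical helper for illustration, not radosgw's actual implementation):

```python
import hmac
from hashlib import sha1


def temp_url_sig_valid(method, expires, path, sig, keys):
    """Return True if sig matches the HMAC-SHA1 of the normalized
    request under any of the configured account keys."""
    body = '{0}\n{1}\n{2}'.format(method, expires, path).encode('utf-8')
    for key in keys:
        expected = hmac.new(key.encode('utf-8'), body, sha1).hexdigest()
        # Constant-time comparison to avoid timing side channels
        if hmac.compare_digest(expected, sig):
            return True
    return False
```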
+
+
+GET Temp-URL Objects
+====================
+
+Temporary URL uses a cryptographic HMAC-SHA1 signature, which includes
+the following elements:
+
+#. The value of the request method, for instance ``GET``
+#. The expiry time, expressed as seconds since the epoch (i.e., Unix time)
+#. The request path, starting from ``/v1`` onwards
+
+The above items are normalized, with newlines appended between them,
+and an HMAC is generated using the SHA-1 hashing algorithm against one
+of the Temp URL keys posted earlier.
+
+A sample Python script demonstrating the above is given below:
+
+
+.. code-block:: python
+
+    import hmac
+    from hashlib import sha1
+    from time import time
+
+    method = 'GET'
+    host = 'https://objectstore.example.com/swift'
+    duration_in_seconds = 300  # Duration for which the url is valid
+    expires = int(time() + duration_in_seconds)
+    path = '/v1/your-bucket/your-object'
+    key = b'secret'
+    # hmac.new() requires bytes in Python 3, so encode the signature body
+    hmac_body = ('%s\n%s\n%s' % (method, expires, path)).encode('utf-8')
+    sig = hmac.new(key, hmac_body, sha1).hexdigest()
+    rest_uri = "{host}{path}?temp_url_sig={sig}&temp_url_expires={expires}".format(
+        host=host, path=path, sig=sig, expires=expires)
+    print(rest_uri)
+
+ # Example Output
+ # https://objectstore.example.com/swift/v1/your-bucket/your-object?temp_url_sig=ff4657876227fc6025f04fcf1e82818266d022c6&temp_url_expires=1423200992
+
diff --git a/doc/radosgw/swift/tutorial.rst b/doc/radosgw/swift/tutorial.rst
new file mode 100644
index 000000000..5d2889b19
--- /dev/null
+++ b/doc/radosgw/swift/tutorial.rst
@@ -0,0 +1,62 @@
+==========
+ Tutorial
+==========
+
+The Swift-compatible API tutorials follow a simple container-based object
+lifecycle. The first step requires you to set up a connection between your
+client and the RADOS Gateway server. Then, you may follow a natural
+container and object lifecycle, including adding and retrieving object
+metadata. See example code for the following languages:
+
+- `Java`_
+- `Python`_
+- `Ruby`_
+
+
+.. ditaa::
+
+ +----------------------------+ +-----------------------------+
+ | | | |
+ | Create a Connection |------->| Create a Container |
+ | | | |
+ +----------------------------+ +-----------------------------+
+ |
+ +--------------------------------------+
+ |
+ v
+ +----------------------------+ +-----------------------------+
+ | | | |
+ | Create an Object |------->| Add/Update Object Metadata |
+ | | | |
+ +----------------------------+ +-----------------------------+
+ |
+ +--------------------------------------+
+ |
+ v
+ +----------------------------+ +-----------------------------+
+ | | | |
+ | List Owned Containers |------->| List a Container's Contents |
+ | | | |
+ +----------------------------+ +-----------------------------+
+ |
+ +--------------------------------------+
+ |
+ v
+ +----------------------------+ +-----------------------------+
+ | | | |
+ | Get an Object's Metadata |------->| Retrieve an Object |
+ | | | |
+ +----------------------------+ +-----------------------------+
+ |
+ +--------------------------------------+
+ |
+ v
+ +----------------------------+ +-----------------------------+
+ | | | |
+ | Delete an Object |------->| Delete a Container |
+ | | | |
+ +----------------------------+ +-----------------------------+
+
+.. _Java: ../java
+.. _Python: ../python
+.. _Ruby: ../ruby
diff --git a/doc/radosgw/sync-modules.rst b/doc/radosgw/sync-modules.rst
new file mode 100644
index 000000000..61797edc8
--- /dev/null
+++ b/doc/radosgw/sync-modules.rst
@@ -0,0 +1,97 @@
+============
+Sync Modules
+============
+
+.. versionadded:: Kraken
+
+The :ref:`multisite` functionality of RGW, introduced in Jewel, makes it
+possible to create multiple zones and mirror data and metadata between them.
+``Sync Modules`` are built on top of the multisite framework and allow
+forwarding data and metadata to a different external tier. A sync module allows
+a set of actions to be performed whenever a change in data occurs (metadata
+operations, such as bucket or user creation, are also regarded as changes in
+data). Because rgw multisite changes are eventually consistent at remote sites,
+changes are propagated asynchronously. This unlocks use cases such as backing
+up object storage to an external cloud cluster, building a custom backup
+solution using tape drives, or indexing metadata in Elasticsearch.
+
+A sync module configuration is local to a zone. The sync module determines
+whether the zone exports data or can only consume data that was modified in
+another zone. As of Luminous, the supported sync plugins are `elasticsearch`_;
+``rgw``, the default sync plugin, which synchronizes data between the zones;
+and ``log``, a trivial sync plugin that logs the metadata operations that
+happen in the remote zones. The following docs use the example of a zone
+running the `elasticsearch sync module`_; the process is similar for
+configuring any sync plugin.
+
+.. toctree::
+ :maxdepth: 1
+
+ ElasticSearch Sync Module <elastic-sync-module>
+ Cloud Sync Module <cloud-sync-module>
+ Archive Sync Module <archive-sync-module>
+
+.. note:: ``rgw`` is the default sync plugin, and there is no need to
+   explicitly configure it.
+
+Requirements and Assumptions
+----------------------------
+
+Let us assume a simple multisite configuration, as described in the
+:ref:`multisite` docs, with two zones ``us-east`` and ``us-west``. To these we
+add a third zone, ``us-east-es``, which only processes metadata from the other
+sites. This zone can be in the same Ceph cluster as ``us-east`` or in a
+different one. The zone only consumes metadata from the other zones, and RGWs
+in this zone do not serve any end user requests directly.
+
+
+Configuring Sync Modules
+------------------------
+
+Create the third zone similarly to the :ref:`multisite` docs, for example:
+
+::
+
+ # radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-es \
+ --access-key={system-key} --secret={secret} --endpoints=http://rgw-es:80
+
+
+
+A sync module can be configured for this zone with the following:
+
+::
+
+ # radosgw-admin zone modify --rgw-zone={zone-name} --tier-type={tier-type} --tier-config={set of key=value pairs}
+
+
+For example, for the ``elasticsearch`` sync module:
+
+::
+
+ # radosgw-admin zone modify --rgw-zone={zone-name} --tier-type=elasticsearch \
+ --tier-config=endpoint=http://localhost:9200,num_shards=10,num_replicas=1
+
+
+For the various supported tier-config options, refer to the `elasticsearch sync module`_ docs.
+
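The ``--tier-config`` argument is a comma-separated list of ``key=value`` pairs. A simplified Python sketch of how such a string maps to a configuration dict (the real radosgw-admin parser is richer, e.g. it also handles nested keys, which this ignores):

```python
def parse_tier_config(s):
    """Parse a --tier-config string of comma-separated key=value pairs
    into a flat dict of string values."""
    conf = {}
    for pair in s.split(','):
        # partition keeps any '=' inside the value (e.g. not split twice)
        key, _, value = pair.partition('=')
        conf[key.strip()] = value.strip()
    return conf
```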
+Finally, update the period:
+
+
+::
+
+ # radosgw-admin period update --commit
+
+
+Now start the radosgw in the zone:
+
+::
+
+ # systemctl start ceph-radosgw@rgw.`hostname -s`
+ # systemctl enable ceph-radosgw@rgw.`hostname -s`
+
+
+
+.. _`elasticsearch sync module`: ../elastic-sync-module
+.. _`elasticsearch`: ../elastic-sync-module
+.. _`cloud sync module`: ../cloud-sync-module
+.. _`archive sync module`: ../archive-sync-module
diff --git a/doc/radosgw/troubleshooting.rst b/doc/radosgw/troubleshooting.rst
new file mode 100644
index 000000000..4a084e82a
--- /dev/null
+++ b/doc/radosgw/troubleshooting.rst
@@ -0,0 +1,208 @@
+=================
+ Troubleshooting
+=================
+
+
+The Gateway Won't Start
+=======================
+
+If you cannot start the gateway (i.e., there is no existing ``pid``),
+check to see if there is an existing ``.asok`` file from another
+user. If an ``.asok`` file from another user exists and there is no
+running ``pid``, remove the ``.asok`` file and try to start the
+process again. This may occur when you start the process as a ``root`` user and
+the startup script is trying to start the process as a
+``www-data`` or ``apache`` user and an existing ``.asok`` is
+preventing the script from starting the daemon.
+
+The radosgw init script (/etc/init.d/radosgw) also has a verbose argument that
+can provide some insight as to what could be the issue::
+
+ /etc/init.d/radosgw start -v
+
+or ::
+
+    /etc/init.d/radosgw start --verbose
+
+HTTP Request Errors
+===================
+
+Examining the access and error logs for the web server itself is
+probably the first step in identifying what is going on. If there is
+a 500 error, that usually indicates a problem communicating with the
+``radosgw`` daemon. Ensure the daemon is running, its socket path is
+configured, and that the web server is looking for it in the proper
+location.
+
+
+Crashed ``radosgw`` process
+===========================
+
+If the ``radosgw`` process dies, you will normally see a 500 error
+from the web server (apache, nginx, etc.). In that situation, simply
+restarting radosgw will restore service.
+
+To diagnose the cause of the crash, check the log in ``/var/log/ceph``
+and/or the core file (if one was generated).
+
+
+Blocked ``radosgw`` Requests
+============================
+
+If some (or all) radosgw requests appear to be blocked, you can get
+some insight into the internal state of the ``radosgw`` daemon via
+its admin socket. By default, there will be a socket configured to
+reside in ``/var/run/ceph``, and the daemon can be queried with::
+
+ ceph daemon /var/run/ceph/client.rgw help
+
+ help list available commands
+ objecter_requests show in-progress osd requests
+ perfcounters_dump dump perfcounters value
+ perfcounters_schema dump perfcounters schema
+ version get protocol version
+
+Of particular interest::
+
+ ceph daemon /var/run/ceph/client.rgw objecter_requests
+ ...
+
+will dump information about current in-progress requests with the
+RADOS cluster. This allows one to identify if any requests are blocked
+by a non-responsive OSD. For example, one might see::
+
+ { "ops": [
+ { "tid": 1858,
+ "pg": "2.d2041a48",
+ "osd": 1,
+ "last_sent": "2012-03-08 14:56:37.949872",
+ "attempts": 1,
+ "object_id": "fatty_25647_object1857",
+ "object_locator": "@2",
+ "snapid": "head",
+ "snap_context": "0=[]",
+ "mtime": "2012-03-08 14:56:37.949813",
+ "osd_ops": [
+ "write 0~4096"]},
+ { "tid": 1873,
+ "pg": "2.695e9f8e",
+ "osd": 1,
+ "last_sent": "2012-03-08 14:56:37.970615",
+ "attempts": 1,
+ "object_id": "fatty_25647_object1872",
+ "object_locator": "@2",
+ "snapid": "head",
+ "snap_context": "0=[]",
+ "mtime": "2012-03-08 14:56:37.970555",
+ "osd_ops": [
+ "write 0~4096"]}],
+ "linger_ops": [],
+ "pool_ops": [],
+ "pool_stat_ops": [],
+ "statfs_ops": []}
+
+In this dump, two requests are in progress. The ``last_sent`` field is
+the time the RADOS request was sent. If this is a while ago, it suggests
+that the OSD is not responding. For example, for request 1858, you could
+check the OSD status with::
+
+ ceph pg map 2.d2041a48
+
+ osdmap e9 pg 2.d2041a48 (2.0) -> up [1,0] acting [1,0]
+
+This tells us to look at ``osd.1``, the primary copy for this PG::
+
+ ceph daemon osd.1 ops
+ { "num_ops": 651,
+ "ops": [
+ { "description": "osd_op(client.4124.0:1858 fatty_25647_object1857 [write 0~4096] 2.d2041a48)",
+ "received_at": "1331247573.344650",
+ "age": "25.606449",
+ "flag_point": "waiting for sub ops",
+ "client_info": { "client": "client.4124",
+ "tid": 1858}},
+ ...
+
+The ``flag_point`` field indicates that the OSD is currently waiting
+for replicas to respond, in this case ``osd.0``.
+
+
+Java S3 API Troubleshooting
+===========================
+
+
+Peer Not Authenticated
+----------------------
+
+You may receive an error that looks like this::
+
+ [java] INFO: Unable to execute HTTP request: peer not authenticated
+
+The Java SDK for S3 requires a valid certificate from a recognized certificate
+authority, because it uses HTTPS by default. If you are just testing the Ceph
+Object Storage services, you can resolve this problem in a few ways:
+
+#. Prepend the IP address or hostname with ``http://``. For example, change this::
+
+ conn.setEndpoint("myserver");
+
+ To::
+
+ conn.setEndpoint("http://myserver")
+
+#. After setting your credentials, add a client configuration and set the
+ protocol to ``Protocol.HTTP``. ::
+
+ AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
+
+ ClientConfiguration clientConfig = new ClientConfiguration();
+ clientConfig.setProtocol(Protocol.HTTP);
+
+ AmazonS3 conn = new AmazonS3Client(credentials, clientConfig);
+
+
+
+405 MethodNotAllowed
+--------------------
+
+If you receive a 405 error, check to see if you have the S3 subdomain set up
+correctly. You will need a wildcard entry in your DNS record for subdomain
+functionality to work properly.
+
+Also, check to ensure that the default site is disabled. ::
+
+ [java] Exception in thread "main" Status Code: 405, AWS Service: Amazon S3, AWS Request ID: null, AWS Error Code: MethodNotAllowed, AWS Error Message: null, S3 Extended Request ID: null
+
+
+
+Numerous objects in default.rgw.meta pool
+=========================================
+
+Clusters created prior to *jewel* have a metadata archival feature enabled by default, using the ``default.rgw.meta`` pool.
+This archive keeps all old versions of user and bucket metadata, resulting in large numbers of objects in the ``default.rgw.meta`` pool.
+
+Disabling the Metadata Heap
+---------------------------
+
+Users who want to disable this feature going forward should set the ``metadata_heap`` field to an empty string ``""``::
+
+ $ radosgw-admin zone get --rgw-zone=default > zone.json
+ [edit zone.json, setting "metadata_heap": ""]
+ $ radosgw-admin zone set --rgw-zone=default --infile=zone.json
+ $ radosgw-admin period update --commit
+
+This stops new metadata from being written to the ``default.rgw.meta`` pool, but does not remove any existing objects or the pool itself.
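The manual edit of ``zone.json`` amounts to clearing one field and writing the document back. A minimal Python sketch of that step (a hypothetical helper; the real zone document has many more fields, which are passed through unchanged):

```python
import json


def clear_metadata_heap(zone_json_text):
    """Given the output of `radosgw-admin zone get`, return the JSON
    with the metadata_heap field set to an empty string."""
    zone = json.loads(zone_json_text)
    zone['metadata_heap'] = ''
    return json.dumps(zone, indent=4)
```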
+
+Cleaning the Metadata Heap Pool
+-------------------------------
+
+Clusters created prior to *jewel* normally use ``default.rgw.meta`` only for the metadata archival feature.
+
+However, from *luminous* onwards, radosgw uses :ref:`Pool Namespaces <radosgw-pool-namespaces>` within ``default.rgw.meta`` for an entirely different purpose, that is, to store ``user_keys`` and other critical metadata.
+
+Users should check the zone configuration before proceeding with any cleanup procedures::
+
+ $ radosgw-admin zone get --rgw-zone=default | grep default.rgw.meta
+ [should not match any strings]
+
+Having confirmed that the pool is not used for any purpose, users may safely delete all objects in the ``default.rgw.meta`` pool, or optionally, delete the entire pool itself.
diff --git a/doc/radosgw/vault.rst b/doc/radosgw/vault.rst
new file mode 100644
index 000000000..da34a3919
--- /dev/null
+++ b/doc/radosgw/vault.rst
@@ -0,0 +1,442 @@
+===========================
+HashiCorp Vault Integration
+===========================
+
+HashiCorp `Vault`_ can be used as a secure key management service for
+`Server-Side Encryption`_ (SSE-KMS).
+
+.. ditaa::
+
+ +---------+ +---------+ +-------+ +-------+
+ | Client | | RadosGW | | Vault | | OSD |
+ +---------+ +---------+ +-------+ +-------+
+ | create secret | | |
+ | key for key ID | | |
+ |-----------------+---------------->| |
+ | | | |
+ | upload object | | |
+ | with key ID | | |
+ |---------------->| request secret | |
+ | | key for key ID | |
+ | |---------------->| |
+ | |<----------------| |
+ | | return secret | |
+ | | key | |
+ | | | |
+ | | encrypt object | |
+ | | with secret key | |
+ | |--------------+ | |
+ | | | | |
+ | |<-------------+ | |
+ | | | |
+ | | store encrypted | |
+ | | object | |
+ | |------------------------------>|
+
+#. `Vault secrets engines`_
+#. `Vault authentication`_
+#. `Vault namespaces`_
+#. `Create a key in Vault`_
+#. `Configure the Ceph Object Gateway`_
+#. `Upload object`_
+
+Some examples below use the Vault command line utility to interact with
+Vault. You may need to set the following environment variable with the correct
+address of your Vault server to use this utility::
+
+ export VAULT_ADDR='https://vault-server-fqdn:8200'
+
+Vault secrets engines
+=====================
+
+Vault provides several secrets engines, which can store, generate, and encrypt
+data. Currently, the Object Gateway supports:
+
+- `KV secrets engine`_ version 2
+- `Transit engine`_
+
+KV secrets engine
+-----------------
+
+The KV secrets engine is used to store arbitrary key/value secrets in Vault. To
+enable the KV engine version 2 in Vault, use the following command::
+
+ vault secrets enable -path secret kv-v2
+
+The Object Gateway can be configured to use the KV engine version 2 with the
+following setting::
+
+ rgw crypt vault secret engine = kv
+
+Transit secrets engine
+----------------------
+
+The transit engine handles cryptographic functions on data in-transit. To enable
+it in Vault, use the following command::
+
+ vault secrets enable transit
+
+The Object Gateway can be configured to use the transit engine with the
+following setting::
+
+ rgw crypt vault secret engine = transit
+
+Vault authentication
+====================
+
+Vault supports several authentication mechanisms. Currently, the Object
+Gateway can be configured to authenticate to Vault using the
+`Token authentication method`_ or a `Vault agent`_.
+
+Most Vault tokens have limited lifetimes and powers. The only kind of
+Vault token without a lifetime is the root token. All other tokens must
+be refreshed periodically, either by performing the initial
+authentication again or by renewing the token. Ceph has no logic to
+perform either operation. The simplest way to use Vault tokens with
+Ceph is therefore to also run the Vault agent and have it keep the
+token file refreshed. When the Vault agent is used in this mode, file
+system permissions can be used to restrict who is able to use the
+tokens.
+
+Instead of having the Vault agent refresh a token file, it can be
+configured to act as a proxy server. In this mode, the agent attaches
+a token to requests as necessary before forwarding them to the real
+Vault server. The agent still handles token renewal just as it would
+when storing a token in the filesystem. In this mode, it is necessary
+to properly secure the network path RGW uses to reach the Vault
+agent, for example by having the agent listen only on localhost.
+
+Token policies for the object gateway
+-------------------------------------
+
+All Vault tokens have the powers specified by the policies attached
+to them. Multiple policies may be associated with one token. You
+should use only the policies necessary for your configuration.
+
+When using the kv secret engine with the object gateway::
+
+ vault policy write rgw-kv-policy -<<EOF
+ path "secret/data/*" {
+ capabilities = ["read"]
+ }
+ EOF
+
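+For example, a token carrying only this policy can be created and saved
+to the file the gateway will read (the file path here is illustrative and
+matches the token authentication example below)::
+
+   vault token create -policy=rgw-kv-policy -format=json | \
+       jq -r .auth.client_token > /run/.rgw-vault-token
+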
+When using the transit secret engine with the object gateway::
+
+ vault policy write rgw-transit-policy -<<EOF
+ path "transit/keys/*" {
+ capabilities = [ "create", "update" ]
+ denied_parameters = {"exportable" = [], "allow_plaintext_backup" = [] }
+ }
+
+ path "transit/keys/*" {
+ capabilities = ["read", "delete"]
+ }
+
+ path "transit/keys/" {
+ capabilities = ["list"]
+ }
+
+ path "transit/keys/+/rotate" {
+ capabilities = [ "update" ]
+ }
+
+ path "transit/*" {
+ capabilities = [ "update" ]
+ }
+ EOF
+
+If you previously used an older version of Ceph with the
+transit secret engine, you might need the following policy::
+
+ vault policy write old-rgw-transit-policy -<<EOF
+ path "transit/export/encryption-key/*" {
+ capabilities = ["read"]
+ }
+ EOF
+
+If you are using both SSE-KMS and SSE-S3, you should point each at a
+separate container. You could use separate Vault instances, separately
+mounted transit instances, or different branches under a common
+transit mount point. If you are not using separate Vault instances,
+you can use ``rgw_crypt_vault_prefix`` and/or
+``rgw_crypt_sse_s3_vault_prefix`` to point SSE-KMS and SSE-S3 at
+separate containers. When granting Vault permissions to SSE-KMS
+bucket owners, do not give them permission to manipulate SSE-S3 keys;
+only Ceph itself should do that.
+
+Token authentication
+--------------------
+
+.. note:: Never use root tokens with Ceph in production environments.
+
+The token authentication method expects a Vault token to be present in a
+plaintext file. The Object Gateway can be configured to use token authentication
+with the following settings::
+
+ rgw crypt vault auth = token
+ rgw crypt vault token file = /run/.rgw-vault-token
+ rgw crypt vault addr = https://vault-server-fqdn:8200
+
+Adjust these settings to match your configuration.
+For security reasons, the token file must be readable by the Object Gateway
+only.
+
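+For example, assuming the gateway runs as the ``ceph`` user (the user
+name is illustrative), the file could be locked down as follows::
+
+   chown ceph:ceph /run/.rgw-vault-token
+   chmod 600 /run/.rgw-vault-token
+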
+Vault agent
+-----------
+
+The Vault agent is a client daemon that provides authentication to Vault and
+manages token renewal and caching. It typically runs on the same host as the
+Object Gateway. With a Vault agent, it is possible to use other Vault
+authentication mechanisms such as AppRole, AWS, Certs, JWT, and Azure.
+
+The Object Gateway can be configured to use a Vault agent with the following
+settings::
+
+ rgw crypt vault auth = agent
+ rgw crypt vault addr = http://127.0.0.1:8100
+
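+The AppRole commands below assume that the AppRole auth method has
+already been enabled in Vault; if it has not, enable it first::
+
+   vault auth enable approle
+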
+You might set up the AppRole used by the Vault agent as follows::
+
+ vault write auth/approle/role/rgw-ap \
+ token_policies=rgw-transit-policy,default \
+ token_max_ttl=60m
+
+Change the policy here to match your configuration.
+
+Get the role-id::
+
+   vault read -format=json auth/approle/role/rgw-ap/role-id | \
+       jq -r .data.role_id
+
+Store the output in a file, such as ``/usr/local/etc/vault/.rgw-ap-role-id``.
+
+Generate a secret-id::
+
+   vault write -f -format=json auth/approle/role/rgw-ap/secret-id | \
+       jq -r .data.secret_id
+
+Store the output in a file, such as ``/usr/local/etc/vault/.rgw-ap-secret-id``.
+
+Create configuration for the Vault agent, such as::
+
+ pid_file = "/run/rgw-vault-agent-pid"
+ auto_auth {
+ method "AppRole" {
+ mount_path = "auth/approle"
+ config = {
+ role_id_file_path ="/usr/local/etc/vault/.rgw-ap-role-id"
+ secret_id_file_path ="/usr/local/etc/vault/.rgw-ap-secret-id"
+ remove_secret_id_file_after_reading ="false"
+ }
+ }
+ }
+ cache {
+ use_auto_auth_token = true
+ }
+ listener "tcp" {
+ address = "127.0.0.1:8100"
+ tls_disable = true
+ }
+ vault {
+ address = "https://vault-server-fqdn:8200"
+ }
+
+Then use systemctl or another method of your choice to run
+a persistent daemon with the following arguments::
+
+ /usr/local/bin/vault agent -config=/usr/local/etc/vault/rgw-agent.hcl
+
+Once the Vault agent is running, it should be listening on port 8100
+on localhost, and you should be able to interact with it using the
+``vault`` command.
+
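+For example, the following should succeed even though no token is
+supplied on the command line, because the agent injects its auto-auth
+token into proxied requests::
+
+   VAULT_ADDR=http://127.0.0.1:8100 vault token lookup
+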
+Vault namespaces
+================
+
+In the Enterprise version, Vault supports the concept of `namespaces`_, which
+allows centralized management for teams within an organization while ensuring
+that those teams operate within isolated environments known as tenants.
+
+The Object Gateway can be configured to access Vault within a particular
+namespace using the following configuration setting::
+
+ rgw crypt vault namespace = tenant1
+
+Create a key in Vault
+=====================
+
+.. note:: Keys for server-side encryption must be 256-bit long and base-64
+ encoded.
+
+Using the KV engine
+-------------------
+
+A key for server-side encryption can be created in the KV version 2 engine using
+the command line utility, as in the following example::
+
+ vault kv put secret/myproject/mybucketkey key=$(openssl rand -base64 32)
+
+Sample output::
+
+ ====== Metadata ======
+ Key Value
+ --- -----
+ created_time 2019-08-29T17:01:09.095824999Z
+ deletion_time n/a
+ destroyed false
+ version 1
+
+Note that in the KV secrets engine, secrets are stored as key-value pairs, and
+the Gateway expects the key name to be ``key``, i.e. the secret must be in the
+form ``key=<secret key>``.
+
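+To confirm that the secret was stored in the expected form, read it
+back::
+
+   vault kv get secret/myproject/mybucketkey
+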
+Using the Transit engine
+------------------------
+
+Keys created for use with the Transit engine should no longer be marked
+exportable. They can be created with::
+
+ vault write -f transit/keys/mybucketkey
+
+The command above creates a keyring, which contains a key of type
+``aes256-gcm96`` by default. To verify that the key was correctly created, use
+the following command::
+
+ vault read transit/keys/mybucketkey
+
+Sample output::
+
+ Key Value
+ --- -----
+ derived false
+ exportable false
+ name mybucketkey
+ type aes256-gcm96
+
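+The token policy shown earlier grants the ``rotate`` capability, so the
+key can later be rotated in place; Vault retains earlier key versions,
+so previously encrypted objects remain decryptable::
+
+   vault write -f transit/keys/mybucketkey/rotate
+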
+Configure the Ceph Object Gateway
+=================================
+
+Edit the Ceph configuration file to enable Vault as a KMS backend for
+server-side encryption::
+
+ rgw crypt s3 kms backend = vault
+
+Choose the Vault authentication method, e.g.::
+
+ rgw crypt vault auth = token
+ rgw crypt vault token file = /run/.rgw-vault-token
+ rgw crypt vault addr = https://vault-server-fqdn:8200
+
+Or::
+
+ rgw crypt vault auth = agent
+ rgw crypt vault addr = http://localhost:8100
+
+Choose the secrets engine::
+
+ rgw crypt vault secret engine = kv
+
+Or::
+
+ rgw crypt vault secret engine = transit
+
+Optionally, set the Vault namespace where encryption keys will be fetched from::
+
+ rgw crypt vault namespace = tenant1
+
+Finally, the URLs where the Gateway will retrieve encryption keys from Vault can
+be restricted by setting a path prefix. For instance, the Gateway can be
+restricted to fetch KV keys as follows::
+
+ rgw crypt vault prefix = /v1/secret/data
+
+Or, when using the transit secret engine::
+
+ rgw crypt vault prefix = /v1/transit
+
+In the example above, the Gateway would only fetch transit encryption keys under
+``https://vault-server:8200/v1/transit``.
+
+Custom SSL certificates can be used to authenticate with Vault via the
+following options::
+
+   rgw crypt vault verify ssl = true
+   rgw crypt vault ssl cacert = /etc/ceph/vault.ca
+   rgw crypt vault ssl clientcert = /etc/ceph/vault.crt
+   rgw crypt vault ssl clientkey = /etc/ceph/vault.key
+
+Here ``vault.ca`` is the CA certificate, and ``vault.key``/``vault.crt`` are
+the private key and SSL certificate generated for RGW to access the Vault
+server. It is highly recommended to keep ``rgw crypt vault verify ssl`` set to
+``true``; setting it to ``false`` is dangerous and should be avoided except in
+tightly secured test environments.
+
+Transit engine compatibility support
+------------------------------------
+
+The transit engine has compatibility support for previous versions of
+Ceph, which used the transit engine as a simple key store.
+
+There is a "compat" option which can be given to the transit engine to
+configure the compatibility support.
+
+To entirely disable backwards compatibility, use::
+
+   rgw crypt vault secret engine = transit compat=0
+
+This will be the default in a future version, and it is safe to use
+for new installs with the current version.
+
+This is the normal default with the current version::
+
+ rgw crypt vault secret engine = transit compat=1
+
+This enables the new engine for newly created objects,
+but still allows the old engine to be used for old objects.
+In order to access old and new objects, the vault token given
+to ceph must have both the old and new transit policies.
+
+To force use of only the old engine, use::
+
+ rgw crypt vault secret engine = transit compat=2
+
+This mode is automatically selected if the vault prefix ends in
+``export/encryption-key``, which was the previously documented
+setting.
+
+Upload object
+=============
+
+When uploading an object to the Gateway, provide the SSE key ID in the request.
+As an example, for the kv engine, using the AWS command-line client::
+
+ aws --endpoint=http://radosgw:8000 s3 cp plaintext.txt s3://mybucket/encrypted.txt --sse=aws:kms --sse-kms-key-id myproject/mybucketkey
+
+As an example, for the transit engine (new flavor), using the AWS command-line client::
+
+ aws --endpoint=http://radosgw:8000 s3 cp plaintext.txt s3://mybucket/encrypted.txt --sse=aws:kms --sse-kms-key-id mybucketkey
+
+The Object Gateway will fetch the key from Vault, encrypt the object, and store
+it in the bucket. Any request to download the object will cause the Gateway to
+automatically retrieve the corresponding key from Vault and decrypt the object.
+
+Note that the secret will be fetched from Vault using a URL constructed by
+concatenating the base address (``rgw crypt vault addr``), the (optional)
+URL prefix (``rgw crypt vault prefix``), and finally the key ID.
+
+In the kv engine example above, the Gateway would fetch the secret from::
+
+ http://vaultserver:8200/v1/secret/data/myproject/mybucketkey
+
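+That fetch is an ordinary Vault KV read, and can be reproduced with
+``curl`` by supplying a valid token, for example::
+
+   curl -s -H "X-Vault-Token: $(cat /run/.rgw-vault-token)" \
+       http://vaultserver:8200/v1/secret/data/myproject/mybucketkey
+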
+In the transit engine example above, the Gateway would use the key at::
+
+ http://vaultserver:8200/v1/transit/mybucketkey
+
+.. _Server-Side Encryption: ../encryption
+.. _Vault: https://www.vaultproject.io/docs/
+.. _Token authentication method: https://www.vaultproject.io/docs/auth/token.html
+.. _Vault agent: https://www.vaultproject.io/docs/agent/index.html
+.. _KV Secrets engine: https://www.vaultproject.io/docs/secrets/kv/
+.. _Transit engine: https://www.vaultproject.io/docs/secrets/transit
+.. _namespaces: https://www.vaultproject.io/docs/enterprise/namespaces/index.html