author Daniel Baumann <daniel.baumann@progress-linux.org> 2024-04-07 18:45:59 +0000
committer Daniel Baumann <daniel.baumann@progress-linux.org> 2024-04-07 18:45:59 +0000
commit 19fcec84d8d7d21e796c7624e521b60d28ee21ed (patch)
tree 42d26aa27d1e3f7c0b8bd3fd14e7d7082f5008dc /doc/radosgw
parentInitial commit. (diff)
Adding upstream version 16.2.11+ds.
Signed-off-by: Daniel Baumann <daniel.baumann@progress-linux.org>
Diffstat (limited to 'doc/radosgw')
-rw-r--r--  doc/radosgw/STS.rst  298
-rw-r--r--  doc/radosgw/STSLite.rst  196
-rw-r--r--  doc/radosgw/admin.rst  528
-rw-r--r--  doc/radosgw/adminops.rst  1994
-rw-r--r--  doc/radosgw/api.rst  14
-rw-r--r--  doc/radosgw/archive-sync-module.rst  44
-rw-r--r--  doc/radosgw/barbican.rst  123
-rw-r--r--  doc/radosgw/bucketpolicy.rst  216
-rw-r--r--  doc/radosgw/cloud-sync-module.rst  244
-rw-r--r--  doc/radosgw/compression.rst  87
-rw-r--r--  doc/radosgw/config-ref.rst  1161
-rw-r--r--  doc/radosgw/dynamicresharding.rst  235
-rw-r--r--  doc/radosgw/elastic-sync-module.rst  181
-rw-r--r--  doc/radosgw/encryption.rst  68
-rw-r--r--  doc/radosgw/frontends.rst  226
-rw-r--r--  doc/radosgw/index.rst  85
-rw-r--r--  doc/radosgw/keycloak.rst  127
-rw-r--r--  doc/radosgw/keystone.rst  160
-rw-r--r--  doc/radosgw/kmip.rst  219
-rw-r--r--  doc/radosgw/layout.rst  207
-rw-r--r--  doc/radosgw/ldap-auth.rst  167
-rw-r--r--  doc/radosgw/lua-scripting.rst  393
-rw-r--r--  doc/radosgw/mfa.rst  102
-rw-r--r--  doc/radosgw/multisite-sync-policy.rst  714
-rw-r--r--  doc/radosgw/multisite.rst  1537
-rw-r--r--  doc/radosgw/multitenancy.rst  169
-rw-r--r--  doc/radosgw/nfs.rst  371
-rw-r--r--  doc/radosgw/notifications.rst  543
-rw-r--r--  doc/radosgw/oidc.rst  97
-rw-r--r--  doc/radosgw/opa.rst  72
-rw-r--r--  doc/radosgw/orphans.rst  115
-rw-r--r--  doc/radosgw/placement.rst  250
-rw-r--r--  doc/radosgw/pools.rst  57
-rw-r--r--  doc/radosgw/pubsub-module.rst  646
-rw-r--r--  doc/radosgw/qat-accel.rst  94
-rw-r--r--  doc/radosgw/rgw-cache.rst  150
-rw-r--r--  doc/radosgw/role.rst  523
-rw-r--r--  doc/radosgw/s3-notification-compatibility.rst  139
-rw-r--r--  doc/radosgw/s3.rst  102
-rw-r--r--  doc/radosgw/s3/authentication.rst  231
-rw-r--r--  doc/radosgw/s3/bucketops.rst  706
-rw-r--r--  doc/radosgw/s3/commons.rst  111
-rw-r--r--  doc/radosgw/s3/cpp.rst  337
-rw-r--r--  doc/radosgw/s3/csharp.rst  199
-rw-r--r--  doc/radosgw/s3/java.rst  212
-rw-r--r--  doc/radosgw/s3/objectops.rst  558
-rw-r--r--  doc/radosgw/s3/perl.rst  192
-rw-r--r--  doc/radosgw/s3/php.rst  214
-rw-r--r--  doc/radosgw/s3/python.rst  186
-rw-r--r--  doc/radosgw/s3/ruby.rst  364
-rw-r--r--  doc/radosgw/s3/serviceops.rst  69
-rw-r--r--  doc/radosgw/s3select.rst  267
-rw-r--r--  doc/radosgw/session-tags.rst  424
-rw-r--r--  doc/radosgw/swift.rst  77
-rw-r--r--  doc/radosgw/swift/auth.rst  82
-rw-r--r--  doc/radosgw/swift/containerops.rst  341
-rw-r--r--  doc/radosgw/swift/java.rst  175
-rw-r--r--  doc/radosgw/swift/objectops.rst  271
-rw-r--r--  doc/radosgw/swift/python.rst  114
-rw-r--r--  doc/radosgw/swift/ruby.rst  119
-rw-r--r--  doc/radosgw/swift/serviceops.rst  76
-rw-r--r--  doc/radosgw/swift/tempurl.rst  102
-rw-r--r--  doc/radosgw/swift/tutorial.rst  62
-rw-r--r--  doc/radosgw/sync-modules.rst  99
-rw-r--r--  doc/radosgw/troubleshooting.rst  208
-rw-r--r--  doc/radosgw/vault.rst  482
66 files changed, 18932 insertions, 0 deletions
diff --git a/doc/radosgw/STS.rst b/doc/radosgw/STS.rst
new file mode 100644
index 000000000..1678f86d5
--- /dev/null
+++ b/doc/radosgw/STS.rst
@@ -0,0 +1,298 @@
+===========
+STS in Ceph
+===========
+
+Secure Token Service is a web service in AWS that returns a set of temporary security credentials for authenticating federated users.
+A link to the official AWS documentation can be found here: https://docs.aws.amazon.com/STS/latest/APIReference/Welcome.html.
+
+Ceph Object Gateway implements a subset of STS APIs that provide temporary credentials for identity and access management.
+These temporary credentials can be used to make subsequent S3 calls which will be authenticated by the STS engine in Ceph Object Gateway.
+Permissions of the temporary credentials can be further restricted via an IAM policy passed as a parameter to the STS APIs.
+
+STS REST APIs
+=============
+
+The following STS REST APIs have been implemented in Ceph Object Gateway:
+
+1. AssumeRole: Returns a set of temporary credentials that can be used for
+cross-account access. The temporary credentials have only the permissions that are
+allowed by both the permission policies attached to the role and the policy
+passed as a parameter to the AssumeRole call.
+
+Parameters:
+ **RoleArn** (String/ Required): ARN of the Role to Assume.
+
+ **RoleSessionName** (String/ Required): An Identifier for the assumed role
+ session.
+
+ **Policy** (String/ Optional): An IAM Policy in JSON format.
+
+ **DurationSeconds** (Integer/ Optional): The duration in seconds of the session.
+ Its default value is 3600.
+
+ **ExternalId** (String/ Optional): A unique Id that might be used when a role is
+ assumed in another account.
+
+ **SerialNumber** (String/ Optional): The Id number of the MFA device associated
+ with the user making the AssumeRole call.
+
+ **TokenCode** (String/ Optional): The value provided by the MFA device, if the
+ trust policy of the role being assumed requires MFA.
+
+2. AssumeRoleWithWebIdentity: Returns a set of temporary credentials for users that
+have been authenticated in a web or mobile app by an OpenID Connect/OAuth 2.0 Identity Provider.
+Currently, Keycloak has been tested and integrated with RGW.
+
+Parameters:
+ **RoleArn** (String/ Required): ARN of the Role to Assume.
+
+ **RoleSessionName** (String/ Required): An Identifier for the assumed role
+ session.
+
+ **Policy** (String/ Optional): An IAM Policy in JSON format.
+
+ **DurationSeconds** (Integer/ Optional): The duration in seconds of the session.
+ Its default value is 3600.
+
+ **ProviderId** (String/ Optional): Fully qualified host component of the domain name
+ of the IDP. Valid only for OAuth2.0 tokens (not for OpenID Connect tokens).
+
+ **WebIdentityToken** (String/ Required): The OpenID Connect/ OAuth2.0 token, which the
+ application gets in return after authenticating its user with an IDP.
+
+Before invoking AssumeRoleWithWebIdentity, an OpenID Connect Provider entity (which the web application
+authenticates with) needs to be created in RGW.
+
+The trust between the IDP and the role is created by adding a condition to the role's trust policy, which
+allows access only to applications which satisfy the given condition.
+All claims of the JWT are supported in the condition of the role's trust policy.
+An example of a policy that uses the 'aud' claim in the condition is of the form::
+
+ "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Federated\":[\"arn:aws:iam:::oidc-provider/<URL of IDP>\"]},\"Action\":[\"sts:AssumeRoleWithWebIdentity\"],\"Condition\":{\"StringEquals\":{\"<URL of IDP> :app_id\":\"<aud>\"\}\}\}\]\}"
+
+The app_id in the condition above must match the 'aud' claim of the incoming token.
+
+An example of a policy that uses the 'sub' claim in the condition is of the form::
+
+ "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Federated\":[\"arn:aws:iam:::oidc-provider/<URL of IDP>\"]},\"Action\":[\"sts:AssumeRoleWithWebIdentity\"],\"Condition\":{\"StringEquals\":{\"<URL of IDP> :sub\":\"<sub>\"\}\}\}\]\}"
+
+Similarly, an example of a policy that uses 'azp' claim in the condition is of the form::
+
+ "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Federated\":[\"arn:aws:iam:::oidc-provider/<URL of IDP>\"]},\"Action\":[\"sts:AssumeRoleWithWebIdentity\"],\"Condition\":{\"StringEquals\":{\"<URL of IDP> :azp\":\"<azp>\"\}\}\}\]\}"
+
+A shadow user is created corresponding to every federated user. The user id is derived from the 'sub' field of the incoming web token.
+The user is created in a separate namespace ('oidc') so that the user id doesn't clash with any other user ids in RGW. The format of the user id
+is <tenant>$<user-namespace>$<sub>, where user-namespace is 'oidc' for users that authenticate with OIDC providers.
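+
+For example, a federated user whose token carries the (hypothetical) 'sub' value
+``a1b2c3``, under the tenant ``mytenant``, would get the user id::
+
+ mytenant$oidc$a1b2c3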
+
+RGW now supports session tags that can be passed in the web token to the AssumeRoleWithWebIdentity call. More information on session tags can be found at
+:doc:`session-tags`.
+
+STS Configuration
+=================
+
+The following configurable options have to be added for STS integration::
+
+ [client.radosgw.gateway]
+ rgw sts key = {sts key for encrypting the session token}
+ rgw s3 auth use sts = true
+
+Note: By default, STS and S3 APIs co-exist in the same namespace, and both S3
+and STS APIs can be accessed via the same endpoint in Ceph Object Gateway.
+
+Examples
+========
+
+1. The following is an example of the AssumeRole API call, which shows the steps to create a role, assign a policy to it
+(that allows access to S3 resources), assume the role to get temporary credentials, and access S3 resources using
+those credentials. In this example, TESTER1 assumes a role created by TESTER, to access S3 resources owned by TESTER,
+according to the permission policy attached to the role.
+
+The user creating the role (TESTER) first needs the ``roles`` admin capability:
+
+.. code-block:: console
+
+ radosgw-admin caps add --uid="TESTER" --caps="roles=*"
+
+.. code-block:: python
+
+ import boto3
+
+ iam_client = boto3.client('iam',
+ aws_access_key_id=<access_key of TESTER>,
+ aws_secret_access_key=<secret_key of TESTER>,
+ endpoint_url=<IAM URL>,
+ region_name=''
+ )
+
+ policy_document = "{\"Version\":\"2012-10-17\",\"Statement\":{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER1\"]},\"Action\":[\"sts:AssumeRole\"]}]}"
+
+ role_response = iam_client.create_role(
+ AssumeRolePolicyDocument=policy_document,
+ Path='/',
+ RoleName='S3Access',
+ )
+
+ role_policy = "{\"Version\":\"2012-10-17\",\"Statement\":{\"Effect\":\"Allow\",\"Action\":\"s3:*\",\"Resource\":\"arn:aws:s3:::*\"}}"
+
+ response = iam_client.put_role_policy(
+ RoleName='S3Access',
+ PolicyName='Policy1',
+ PolicyDocument=role_policy
+ )
+
+ sts_client = boto3.client('sts',
+ aws_access_key_id=<access_key of TESTER1>,
+ aws_secret_access_key=<secret_key of TESTER1>,
+ endpoint_url=<STS URL>,
+ region_name='',
+ )
+
+ response = sts_client.assume_role(
+ RoleArn=role_response['Role']['Arn'],
+ RoleSessionName='Bob',
+ DurationSeconds=3600
+ )
+
+ s3client = boto3.client('s3',
+ aws_access_key_id = response['Credentials']['AccessKeyId'],
+ aws_secret_access_key = response['Credentials']['SecretAccessKey'],
+ aws_session_token = response['Credentials']['SessionToken'],
+ endpoint_url=<S3 URL>,
+ region_name='',)
+
+ bucket_name = 'my-bucket'
+ s3bucket = s3client.create_bucket(Bucket=bucket_name)
+ resp = s3client.list_buckets()
+
+2. The following is an example of the AssumeRoleWithWebIdentity API call, where an external app whose users are
+authenticated with an OpenID Connect/OAuth 2.0 IDP (Keycloak in this example) assumes a role to get back temporary
+credentials and access S3 resources according to the permission policy of the role.
+
+.. code-block:: python
+
+ import boto3
+
+ iam_client = boto3.client('iam',
+ aws_access_key_id=<access_key of TESTER>,
+ aws_secret_access_key=<secret_key of TESTER>,
+ endpoint_url=<IAM URL>,
+ region_name=''
+ )
+
+ oidc_response = iam_client.create_open_id_connect_provider(
+ Url=<URL of the OpenID Connect Provider>,
+ ClientIDList=[
+ <Client id registered with the IDP>
+ ],
+ ThumbprintList=[
+ <Thumbprint of the IDP>
+ ]
+ )
+
+ policy_document = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Federated\":[\"arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/demo\"]},\"Action\":[\"sts:AssumeRoleWithWebIdentity\"],\"Condition\":{\"StringEquals\":{\"localhost:8080/auth/realms/demo:app_id\":\"customer-portal\"}}}]}"
+ role_response = iam_client.create_role(
+ AssumeRolePolicyDocument=policy_document,
+ Path='/',
+ RoleName='S3Access',
+ )
+
+ role_policy = "{\"Version\":\"2012-10-17\",\"Statement\":{\"Effect\":\"Allow\",\"Action\":\"s3:*\",\"Resource\":\"arn:aws:s3:::*\"}}"
+
+ response = iam_client.put_role_policy(
+ RoleName='S3Access',
+ PolicyName='Policy1',
+ PolicyDocument=role_policy
+ )
+
+ sts_client = boto3.client('sts',
+ aws_access_key_id=<access_key of TESTER1>,
+ aws_secret_access_key=<secret_key of TESTER1>,
+ endpoint_url=<STS URL>,
+ region_name='',
+ )
+
+ response = sts_client.assume_role_with_web_identity(
+ RoleArn=role_response['Role']['Arn'],
+ RoleSessionName='Bob',
+ DurationSeconds=3600,
+ WebIdentityToken=<Web Token>
+ )
+
+ s3client = boto3.client('s3',
+ aws_access_key_id = response['Credentials']['AccessKeyId'],
+ aws_secret_access_key = response['Credentials']['SecretAccessKey'],
+ aws_session_token = response['Credentials']['SessionToken'],
+ endpoint_url=<S3 URL>,
+ region_name='',)
+
+ bucket_name = 'my-bucket'
+ s3bucket = s3client.create_bucket(Bucket=bucket_name)
+ resp = s3client.list_buckets()
+
+How to obtain thumbprint of an OpenID Connect Provider IDP
+==========================================================
+1. Take the OpenID Connect provider's URL and append /.well-known/openid-configuration
+to it to get the URL of the IDP's configuration document. For example, if the URL
+of the IDP is http://localhost:8000/auth/realms/quickstart, then the configuration
+document is available at http://localhost:8000/auth/realms/quickstart/.well-known/openid-configuration
+
+2. Use the following curl command to get the configuration document from the URL described
+in step 1::
+
+ curl -k -v \
+ -X GET \
+ -H "Content-Type: application/x-www-form-urlencoded" \
+ "http://localhost:8000/auth/realms/quickstart/.well-known/openid-configuration" \
+ | jq .
+
+3. From the response of step 2, use the value of "jwks_uri" to get the certificate of the IDP,
+using the following command::
+
+ curl -k -v \
+ -X GET \
+ -H "Content-Type: application/x-www-form-urlencoded" \
+ "http://$KC_SERVER/$KC_CONTEXT/realms/$KC_REALM/protocol/openid-connect/certs" \
+ | jq .
+
+4. Copy the value of "x5c" from the response above into a file certificate.crt, and add
+'-----BEGIN CERTIFICATE-----' at the beginning and '-----END CERTIFICATE-----'
+at the end.
+
+5. Use the following OpenSSL command to get the certificate thumbprint::
+
+ openssl x509 -in certificate.crt -fingerprint -noout
+
+6. The result of the command in step 5 will be a SHA1 fingerprint, like the following::
+
+ SHA1 Fingerprint=F7:D7:B3:51:5D:D0:D3:19:DD:21:9A:43:A9:EA:72:7A:D6:06:52:87
+
+7. Remove the colons from the result above to get the final thumbprint, which can be used as input
+while creating the OpenID Connect Provider entity in IAM::
+
+ F7D7B3515DD0D319DD219A43A9EA727AD6065287
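+
+The steps above can also be scripted. A minimal Python sketch, assuming the
+Keycloak-style URLs from the examples above; the thumbprint is the SHA1 digest
+of the DER-encoded certificate carried in "x5c":
+
+.. code-block:: python
+
+ import base64
+ import hashlib
+ import json
+ import urllib.request
+
+ # Placeholder IDP; same realm as in the examples above.
+ config_url = "http://localhost:8000/auth/realms/quickstart/.well-known/openid-configuration"
+ config = json.load(urllib.request.urlopen(config_url))
+
+ # Fetch the JWKS document and take the first key's certificate chain.
+ jwks = json.load(urllib.request.urlopen(config["jwks_uri"]))
+ der_cert = base64.b64decode(jwks["keys"][0]["x5c"][0])
+
+ # SHA1 over the DER bytes matches `openssl x509 -fingerprint`, minus the colons.
+ print(hashlib.sha1(der_cert).hexdigest().upper())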
+
+Roles in RGW
+============
+
+More information on role manipulation can be found at
+:doc:`role`.
+
+OpenID Connect Provider in RGW
+==============================
+
+More information on OpenID Connect Provider entity manipulation
+can be found at
+:doc:`oidc`.
+
+Keycloak integration with Radosgw
+=================================
+
+Steps for integrating Radosgw with Keycloak can be found at
+:doc:`keycloak`.
+
+STSLite
+=======
+STS Lite has been built on STS, and its documentation can be found at
+:doc:`STSLite`.
diff --git a/doc/radosgw/STSLite.rst b/doc/radosgw/STSLite.rst
new file mode 100644
index 000000000..63164e48d
--- /dev/null
+++ b/doc/radosgw/STSLite.rst
@@ -0,0 +1,196 @@
+=========
+STS Lite
+=========
+
+Ceph Object Gateway provides support for a subset of Amazon Secure Token Service
+(STS) APIs. STS Lite is an extension of STS and builds upon one of its APIs to
+decrease the load on external IDPs like Keystone and LDAP.
+
+A set of temporary security credentials is returned after authenticating
+a set of AWS credentials with the external IDP. These temporary credentials can be used
+to make subsequent S3 calls which will be authenticated by the STS engine in Ceph,
+resulting in less load on the Keystone/ LDAP server.
+
+Temporary and limited-privilege credentials can also be obtained for a local
+user using the STS Lite API.
+
+STS Lite REST APIs
+==================
+
+The following STS Lite REST API is part of STS Lite in Ceph Object Gateway:
+
+1. GetSessionToken: Returns a set of temporary credentials for a set of AWS
+credentials. After initial authentication with Keystone/LDAP, the temporary
+credentials returned can be used to make subsequent S3 calls. The temporary
+credentials will have the same permissions as the AWS credentials.
+
+Parameters:
+ **DurationSeconds** (Integer/ Optional): The duration in seconds for which the
+ credentials should remain valid. Its default value is 3600. Its maximum value
+ defaults to 43200, which can be configured using ``rgw sts max session duration``.
+
+ **SerialNumber** (String/ Optional): The Id number of the MFA device associated
+ with the user making the GetSessionToken call.
+
+ **TokenCode** (String/ Optional): The value provided by the MFA device, if MFA is required.
+
+An administrative user needs to attach a policy to the user that allows invocation of the
+GetSessionToken API using the permanent credentials, and that allows subsequent S3 operations
+to be invoked only with the temporary credentials returned by GetSessionToken.
+
+The user attaching the policy needs to have admin caps. For example::
+
+ radosgw-admin caps add --uid="TESTER" --caps="user-policy=*"
+
+The following is the policy that needs to be attached to a user 'TESTER1'::
+
+ user_policy = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Deny\",\"Action\":\"s3:*\",\"Resource\":[\"*\"],\"Condition\":{\"BoolIfExists\":{\"sts:authentication\":\"false\"}}},{\"Effect\":\"Allow\",\"Action\":\"sts:GetSessionToken\",\"Resource\":\"*\",\"Condition\":{\"BoolIfExists\":{\"sts:authentication\":\"false\"}}}]}"
+
+
+STS Lite Configuration
+======================
+
+The following configurable options are available for STS Lite integration::
+
+ [client.radosgw.gateway]
+ rgw sts key = {sts key for encrypting the session token}
+ rgw s3 auth use sts = true
+
+The above STS configurables can be used with the Keystone configurables if one
+needs to use STS Lite in conjunction with Keystone. The complete set of
+configurable options will be::
+
+ [client.radosgw.gateway]
+ rgw sts key = {sts key for encrypting/ decrypting the session token}
+ rgw s3 auth use sts = true
+
+ rgw keystone url = {keystone server url:keystone server admin port}
+ rgw keystone admin project = {keystone admin project name}
+ rgw keystone admin tenant = {keystone service tenant name}
+ rgw keystone admin domain = {keystone admin domain name}
+ rgw keystone api version = {keystone api version}
+ rgw keystone implicit tenants = {true for private tenant for each new user}
+ rgw keystone admin user = {keystone service tenant user name}
+ rgw keystone admin password = {keystone service tenant user password}
+ rgw keystone accepted roles = {accepted user roles}
+ rgw keystone token cache size = {number of tokens to cache}
+ rgw s3 auth use keystone = true
+
+The details of integrating Keystone with Ceph Object Gateway can be found here:
+:doc:`keystone`
+
+The complete set of configurables to use STS Lite with LDAP are::
+
+ [client.radosgw.gateway]
+ rgw sts key = {sts key for encrypting/ decrypting the session token}
+ rgw s3 auth use sts = true
+
+ rgw_s3_auth_use_ldap = true
+ rgw_ldap_uri = {LDAP server to use}
+ rgw_ldap_binddn = {Distinguished Name (DN) of the service account}
+ rgw_ldap_secret = {password for the service account}
+ rgw_ldap_searchdn = {base in the directory information tree for searching users}
+ rgw_ldap_dnattr = {attribute being used in the constructed search filter to match a username}
+ rgw_ldap_searchfilter = {search filter}
+
+The details of integrating LDAP with Ceph Object Gateway can be found here:
+:doc:`ldap-auth`
+
+Note: By default, STS and S3 APIs co-exist in the same namespace, and both S3
+and STS APIs can be accessed via the same endpoint in Ceph Object Gateway.
+
+Example showing how to Use STS Lite with Keystone
+=================================================
+
+The following are the steps needed to use STS Lite with Keystone. Boto 3.x has
+been used to write example code showing the integration of STS Lite with
+Keystone.
+
+1. Generate EC2 credentials:
+
+.. code-block:: console
+
+ openstack ec2 credentials create
+ +------------+--------------------------------------------------------+
+ | Field | Value |
+ +------------+--------------------------------------------------------+
+ | access | b924dfc87d454d15896691182fdeb0ef |
+ | links | {u'self': u'http://192.168.0.15/identity/v3/users/ |
+ | | 40a7140e424f493d8165abc652dc731c/credentials/ |
+ | | OS-EC2/b924dfc87d454d15896691182fdeb0ef'} |
+ | project_id | c703801dccaf4a0aaa39bec8c481e25a |
+ | secret | 6a2142613c504c42a94ba2b82147dc28 |
+ | trust_id | None |
+ | user_id | 40a7140e424f493d8165abc652dc731c |
+ +------------+--------------------------------------------------------+
+
+2. Use the credentials created in step 1 to get back a set of temporary
+ credentials using the GetSessionToken API.
+
+.. code-block:: python
+
+ import boto3
+
+ access_key = <ec2 access key>
+ secret_key = <ec2 secret key>
+
+ client = boto3.client('sts',
+ aws_access_key_id=access_key,
+ aws_secret_access_key=secret_key,
+ endpoint_url=<STS URL>,
+ region_name='',
+ )
+
+ response = client.get_session_token(
+ DurationSeconds=43200
+ )
+
+3. The temporary credentials obtained in step 2 can be used for making S3 calls:
+
+.. code-block:: python
+
+ s3client = boto3.client('s3',
+ aws_access_key_id = response['Credentials']['AccessKeyId'],
+ aws_secret_access_key = response['Credentials']['SecretAccessKey'],
+ aws_session_token = response['Credentials']['SessionToken'],
+ endpoint_url=<S3 URL>,
+ region_name='')
+
+ bucket = s3client.create_bucket(Bucket='my-new-shiny-bucket')
+ response = s3client.list_buckets()
+ for bucket in response["Buckets"]:
+     print("{name}\t{created}".format(
+         name=bucket['Name'],
+         created=bucket['CreationDate'],
+     ))
+
+Similar steps can be performed for using GetSessionToken with LDAP.
+
+Limitations and Workarounds
+===========================
+
+1. Keystone currently supports only S3 requests. Hence, in order to successfully
+authenticate an STS request, the following workaround needs to be added to boto,
+in the file botocore/auth.py.
+
+Lines 13-16 have been added as a workaround in the code block below:
+
+.. code-block:: python
+
+ class SigV4Auth(BaseSigner):
+     """
+     Sign a request with Signature V4.
+     """
+     REQUIRES_REGION = True
+
+     def __init__(self, credentials, service_name, region_name):
+         self.credentials = credentials
+         # We initialize these value here so the unit tests can have
+         # valid values. But these will get overridden in ``add_auth``
+         # later for real requests.
+         self._region_name = region_name
+         if service_name == 'sts':
+             self._service_name = 's3'
+         else:
+             self._service_name = service_name
+
diff --git a/doc/radosgw/admin.rst b/doc/radosgw/admin.rst
new file mode 100644
index 000000000..5f47471ac
--- /dev/null
+++ b/doc/radosgw/admin.rst
@@ -0,0 +1,528 @@
+=============
+ Admin Guide
+=============
+
+Once you have your Ceph Object Storage service up and running, you may
+administer the service with user management, access controls, quotas
+and usage tracking among other features.
+
+
+User Management
+===============
+
+Ceph Object Storage user management refers to users of the Ceph Object Storage
+service (i.e., not the Ceph Object Gateway as a user of the Ceph Storage
+Cluster). You must create a user, access key and secret to enable end users to
+interact with Ceph Object Gateway services.
+
+There are two user types:
+
+- **User:** The term 'user' reflects a user of the S3 interface.
+
+- **Subuser:** The term 'subuser' reflects a user of the Swift interface. A subuser
+ is associated with a user.
+
+.. ditaa::
+ +---------+
+ | User |
+ +----+----+
+ |
+ | +-----------+
+ +-----+ Subuser |
+ +-----------+
+
+You can create, modify, view, suspend and remove users and subusers. In addition
+to user and subuser IDs, you may add a display name and an email address for a
+user. You can specify a key and secret, or generate a key and secret
+automatically. When generating or specifying keys, note that user IDs correspond
+to an S3 key type and subuser IDs correspond to a Swift key type. Swift keys
+also have access levels of ``read``, ``write``, ``readwrite`` and ``full``.
+
+
+Create a User
+-------------
+
+To create a user (S3 interface), execute the following::
+
+ radosgw-admin user create --uid={username} --display-name="{display-name}" [--email={email}]
+
+For example::
+
+ radosgw-admin user create --uid=johndoe --display-name="John Doe" --email=john@example.com
+
+.. code-block:: javascript
+
+ { "user_id": "johndoe",
+ "display_name": "John Doe",
+ "email": "john@example.com",
+ "suspended": 0,
+ "max_buckets": 1000,
+ "subusers": [],
+ "keys": [
+ { "user": "johndoe",
+ "access_key": "11BS02LGFB6AL6H1ADMW",
+ "secret_key": "vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY"}],
+ "swift_keys": [],
+ "caps": [],
+ "op_mask": "read, write, delete",
+ "default_placement": "",
+ "placement_tags": [],
+ "bucket_quota": { "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1},
+ "user_quota": { "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1},
+ "temp_url_keys": []}
+
+Creating a user also creates an ``access_key`` and ``secret_key`` entry for use
+with any S3 API-compatible client.
+
+.. important:: Check the key output. Sometimes ``radosgw-admin``
+ generates a JSON escape (``\``) character, and some clients
+ do not know how to handle JSON escape characters. Remedies include
+ removing the JSON escape character (``\``), encapsulating the string
+ in quotes, regenerating the key to ensure that it
+ does not have a JSON escape character, or specifying the key and secret
+ manually.
+
+
+Create a Subuser
+----------------
+
+To create a subuser (Swift interface) for the user, you must specify the user ID
+(``--uid={username}``), a subuser ID and the access level for the subuser. ::
+
+ radosgw-admin subuser create --uid={uid} --subuser={uid} --access=[ read | write | readwrite | full ]
+
+For example::
+
+ radosgw-admin subuser create --uid=johndoe --subuser=johndoe:swift --access=full
+
+
+.. note:: ``full`` is not ``readwrite``, as it also includes the access control policy.
+
+.. code-block:: javascript
+
+ { "user_id": "johndoe",
+ "display_name": "John Doe",
+ "email": "john@example.com",
+ "suspended": 0,
+ "max_buckets": 1000,
+ "subusers": [
+ { "id": "johndoe:swift",
+ "permissions": "full-control"}],
+ "keys": [
+ { "user": "johndoe",
+ "access_key": "11BS02LGFB6AL6H1ADMW",
+ "secret_key": "vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY"}],
+ "swift_keys": [],
+ "caps": [],
+ "op_mask": "read, write, delete",
+ "default_placement": "",
+ "placement_tags": [],
+ "bucket_quota": { "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1},
+ "user_quota": { "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1},
+ "temp_url_keys": []}
+
+
+Get User Info
+-------------
+
+To get information about a user, you must specify ``user info`` and the user ID
+(``--uid={username}``). ::
+
+ radosgw-admin user info --uid=johndoe
+
+
+
+Modify User Info
+----------------
+
+To modify information about a user, you must specify the user ID (``--uid={username}``)
+and the attributes you want to modify. Typical modifications are to keys and secrets,
+email addresses, display names and access levels. For example::
+
+ radosgw-admin user modify --uid=johndoe --display-name="John E. Doe"
+
+To modify subuser values, specify ``subuser modify``, the user ID and the subuser ID. For example::
+
+ radosgw-admin subuser modify --uid=johndoe --subuser=johndoe:swift --access=full
+
+
+User Enable/Suspend
+-------------------
+
+When you create a user, the user is enabled by default. However, you may suspend
+user privileges and re-enable them at a later time. To suspend a user, specify
+``user suspend`` and the user ID. ::
+
+ radosgw-admin user suspend --uid=johndoe
+
+To re-enable a suspended user, specify ``user enable`` and the user ID. ::
+
+ radosgw-admin user enable --uid=johndoe
+
+.. note:: Suspending a user also suspends its subusers.
+
+
+Remove a User
+-------------
+
+When you remove a user, the user and subuser are removed from the system.
+However, you may remove just the subuser if you wish. To remove a user (and
+subuser), specify ``user rm`` and the user ID. ::
+
+ radosgw-admin user rm --uid=johndoe
+
+To remove the subuser only, specify ``subuser rm`` and the subuser ID. ::
+
+ radosgw-admin subuser rm --subuser=johndoe:swift
+
+
+Options include:
+
+- **Purge Data:** The ``--purge-data`` option purges all data associated
+ with the UID.
+
+- **Purge Keys:** The ``--purge-keys`` option purges all keys associated
+ with the UID.
+
+
+Remove a Subuser
+----------------
+
+When you remove a subuser, you are removing access to the Swift interface.
+The user will remain in the system. To remove the subuser, specify
+``subuser rm`` and the subuser ID. ::
+
+ radosgw-admin subuser rm --subuser=johndoe:swift
+
+
+
+Options include:
+
+- **Purge Keys:** The ``--purge-keys`` option purges all keys associated
+ with the UID.
+
+
+Add / Remove a Key
+------------------------
+
+Both users and subusers require a key to access the S3 or Swift interface. To
+use S3, the user needs a key pair composed of an access key and a
+secret key. To use Swift, the user typically needs a secret key (password),
+and uses it together with the associated user ID. You may create
+a key and either specify or generate the access key and/or secret key. You may
+also remove a key. Options include:
+
+- ``--key-type=<type>`` specifies the key type. The options are: s3, swift
+- ``--access-key=<key>`` manually specifies an S3 access key.
+- ``--secret-key=<key>`` manually specifies a S3 secret key or a Swift secret key.
+- ``--gen-access-key`` automatically generates a random S3 access key.
+- ``--gen-secret`` automatically generates a random S3 secret key or a random Swift secret key.
+
+An example of how to add a specified S3 key pair for a user. ::
+
+ radosgw-admin key create --uid=foo --key-type=s3 --access-key fooAccessKey --secret-key fooSecretKey
+
+.. code-block:: javascript
+
+ { "user_id": "foo",
+ "rados_uid": 0,
+ "display_name": "foo",
+ "email": "foo@example.com",
+ "suspended": 0,
+ "keys": [
+ { "user": "foo",
+ "access_key": "fooAccessKey",
+ "secret_key": "fooSecretKey"}],
+ }
+
+Note that you may create multiple S3 key pairs for a user.
+
+To attach a specified Swift secret key to a subuser. ::
+
+ radosgw-admin key create --subuser=foo:bar --key-type=swift --secret-key barSecret
+
+.. code-block:: javascript
+
+ { "user_id": "foo",
+ "rados_uid": 0,
+ "display_name": "foo",
+ "email": "foo@example.com",
+ "suspended": 0,
+ "subusers": [
+ { "id": "foo:bar",
+ "permissions": "full-control"}],
+ "swift_keys": [
+ { "user": "foo:bar",
+ "secret_key": "asfghjghghmgm"}]}
+
+Note that a subuser can have only one Swift secret key.
+
+Subusers can also be used with S3 APIs if the subuser is associated with an S3 key pair. ::
+
+ radosgw-admin key create --subuser=foo:bar --key-type=s3 --access-key barAccessKey --secret-key barSecretKey
+
+.. code-block:: javascript
+
+ { "user_id": "foo",
+ "rados_uid": 0,
+ "display_name": "foo",
+ "email": "foo@example.com",
+ "suspended": 0,
+ "subusers": [
+ { "id": "foo:bar",
+ "permissions": "full-control"}],
+ "keys": [
+ { "user": "foo:bar",
+ "access_key": "barAccessKey",
+ "secret_key": "barSecretKey"}],
+ }
+
+
+To remove an S3 key pair, specify the access key. ::
+
+ radosgw-admin key rm --uid=foo --key-type=s3 --access-key=fooAccessKey
+
+To remove the Swift secret key. ::
+
+ radosgw-admin key rm --subuser=foo:bar --key-type=swift
+
+
+Add / Remove Admin Capabilities
+-------------------------------
+
+The Ceph Object Gateway provides an administrative API that enables users to
+execute administrative functions via the REST API. By default, users do NOT have
+access to this API. To enable a user to exercise administrative functionality,
+provide the user with administrative capabilities.
+
+To add administrative capabilities to a user, execute the following::
+
+ radosgw-admin caps add --uid={uid} --caps={caps}
+
+
+You can add read, write or all capabilities to users, buckets, metadata and
+usage (utilization). For example::
+
+ --caps="[users|buckets|metadata|usage|zone]=[*|read|write|read, write]"
+
+For example::
+
+ radosgw-admin caps add --uid=johndoe --caps="users=*;buckets=*"
+
+
+To remove administrative capabilities from a user, execute the following::
+
+ radosgw-admin caps rm --uid=johndoe --caps={caps}
+
+
+Quota Management
+================
+
+The Ceph Object Gateway enables you to set quotas on users and buckets owned by
+users. Quotas include the maximum number of objects in a bucket and the maximum
+storage size a bucket can hold.
+
+- **Bucket:** The ``--bucket`` option allows you to specify a quota for
+ buckets the user owns.
+
+- **Maximum Objects:** The ``--max-objects`` setting allows you to specify
+ the maximum number of objects. A negative value disables this setting.
+
+- **Maximum Size:** The ``--max-size`` option allows you to specify a quota
+ size in B/K/M/G/T, where B is the default. A negative value disables this setting.
+
+- **Quota Scope:** The ``--quota-scope`` option sets the scope for the quota.
+ The options are ``bucket`` and ``user``. Bucket quotas apply to buckets a
+ user owns. User quotas apply to a user.
+
+
+Set User Quota
+--------------
+
+Before you enable a quota, you must first set the quota parameters.
+For example::
+
+ radosgw-admin quota set --quota-scope=user --uid=<uid> [--max-objects=<num objects>] [--max-size=<max size>]
+
+For example::
+
+ radosgw-admin quota set --quota-scope=user --uid=johndoe --max-objects=1024 --max-size=1024B
+
+
+A negative value for ``--max-objects`` and/or ``--max-size`` means that the
+specific quota attribute check is disabled.
+
+
+Enable/Disable User Quota
+-------------------------
+
+Once you set a user quota, you may enable it. For example::
+
+ radosgw-admin quota enable --quota-scope=user --uid=<uid>
+
+You may disable an enabled user quota. For example::
+
+ radosgw-admin quota disable --quota-scope=user --uid=<uid>
+
+
+Set Bucket Quota
+----------------
+
+Bucket quotas apply to the buckets owned by the specified ``uid``. They are
+independent of the user. ::
+
+ radosgw-admin quota set --uid=<uid> --quota-scope=bucket [--max-objects=<num objects>] [--max-size=<max size>]
+
+A negative value for ``--max-objects`` and/or ``--max-size`` means that the
+specific quota attribute check is disabled.
+
+
+Enable/Disable Bucket Quota
+---------------------------
+
+Once you set a bucket quota, you may enable it. For example::
+
+ radosgw-admin quota enable --quota-scope=bucket --uid=<uid>
+
+You may disable an enabled bucket quota. For example::
+
+ radosgw-admin quota disable --quota-scope=bucket --uid=<uid>
+
+
+Get Quota Settings
+------------------
+
+You may access each user's quota settings via the user information
+API. To read user quota setting information with the CLI interface,
+execute the following::
+
+ radosgw-admin user info --uid=<uid>
+
+
+Update Quota Stats
+------------------
+
+Quota stats get updated asynchronously. You can update quota
+statistics for all users and all buckets manually to retrieve
+the latest quota stats. ::
+
+ radosgw-admin user stats --uid=<uid> --sync-stats
+
+.. _rgw_user_usage_stats:
+
+Get User Usage Stats
+--------------------
+
+To see how much of the quota a user has consumed, execute the following::
+
+ radosgw-admin user stats --uid=<uid>
+
+.. note:: You should execute ``radosgw-admin user stats`` with the
+ ``--sync-stats`` option to receive the latest data.
+
+Default Quotas
+--------------
+
+You can set default quotas in the config. These defaults are used when
+creating a new user and have no effect on existing users. If the
+relevant default quota is set in config, then that quota is set on the
+new user, and that quota is enabled. See ``rgw bucket default quota max objects``,
+``rgw bucket default quota max size``, ``rgw user default quota max objects``, and
+``rgw user default quota max size`` in `Ceph Object Gateway Config Reference`_
+
+Quota Cache
+-----------
+
+Quota statistics are cached on each RGW instance. If there are multiple
+instances, then the cache can keep quotas from being perfectly enforced, as
+each instance will have a different view of quotas. The options that control
+this are ``rgw bucket quota ttl``, ``rgw user quota bucket sync interval`` and
+``rgw user quota sync interval``. The higher these values are, the more
+efficient quota operations are, but the more out-of-sync multiple instances
+will be. The lower these values are, the closer to perfect enforcement
+multiple instances will achieve. If all three are 0, then quota caching is
+effectively disabled, and multiple instances will have perfect quota
+enforcement. See `Ceph Object Gateway Config Reference`_
+
+Reading / Writing Global Quotas
+-------------------------------
+
+You can read and write global quota settings in the period configuration. To
+view the global quota settings::
+
+ radosgw-admin global quota get
+
+The global quota settings can be manipulated with the ``global quota``
+counterparts of the ``quota set``, ``quota enable``, and ``quota disable``
+commands. ::
+
+ radosgw-admin global quota set --quota-scope bucket --max-objects 1024
+ radosgw-admin global quota enable --quota-scope bucket
+
+.. note:: In a multisite configuration, where there is a realm and period
+ present, changes to the global quotas must be committed using ``period
+ update --commit``. If there is no period present, the rados gateway(s) must
+ be restarted for the changes to take effect.
+
+
+Usage
+=====
+
+The Ceph Object Gateway logs usage for each user. You can track
+user usage within date ranges too.
+
+- Add ``rgw enable usage log = true`` in the ``[client.rgw]`` section of ``ceph.conf`` and restart the radosgw service.
+
+Options include:
+
+- **Start Date:** The ``--start-date`` option allows you to filter usage
+ stats from a particular start date (**format:** ``yyyy-mm-dd[HH:MM:SS]``).
+
+- **End Date:** The ``--end-date`` option allows you to filter usage up
+ to a particular date (**format:** ``yyyy-mm-dd[HH:MM:SS]``).
+
+- **Log Entries:** The ``--show-log-entries`` option allows you to specify
+ whether or not to include log entries with the usage stats
+ (options: ``true`` | ``false``).
+
+.. note:: You may specify time with minutes and seconds, but it is stored
+ with 1 hour resolution.
+
+
+Show Usage
+----------
+
+To show usage statistics, use the ``usage show`` command. To show usage for a
+particular user, you must specify a user ID. You may also specify a start date,
+end date, and whether or not to show log entries. ::
+
+ radosgw-admin usage show --uid=johndoe --start-date=2012-03-01 --end-date=2012-04-01
+
+You may also show a summary of usage information for all users by omitting a user ID. ::
+
+ radosgw-admin usage show --show-log-entries=false
+
+
+Trim Usage
+----------
+
+With heavy use, usage logs can begin to take up storage space. You can trim
+usage logs for all users and for specific users. You may also specify date
+ranges for trim operations. ::
+
+ radosgw-admin usage trim --start-date=2010-01-01 --end-date=2010-12-31
+ radosgw-admin usage trim --uid=johndoe
+ radosgw-admin usage trim --uid=johndoe --end-date=2013-12-31
+
+
+.. _radosgw-admin: ../../man/8/radosgw-admin/
+.. _Pool Configuration: ../../rados/configuration/pool-pg-config-ref/
+.. _Ceph Object Gateway Config Reference: ../config-ref/
diff --git a/doc/radosgw/adminops.rst b/doc/radosgw/adminops.rst
new file mode 100644
index 000000000..2dfd9f6d9
--- /dev/null
+++ b/doc/radosgw/adminops.rst
@@ -0,0 +1,1994 @@
+==================
+ Admin Operations
+==================
+
+An admin API request is performed on a URI that starts with the configurable 'admin'
+resource entry point. Authorization for the admin API duplicates the S3 authorization
+mechanism. Some operations require that the user hold special administrative capabilities.
+The response entity type (XML or JSON) may be specified as the 'format' option in the
+request and defaults to JSON if not specified.
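+
+As an illustration, an admin API request can be signed like any S3 request. A
+minimal sketch using the third-party ``requests`` and ``requests-aws4auth``
+packages; the endpoint, region, credentials and 'admin' entry point below are
+placeholder assumptions:
+
+.. code-block:: python
+
+ import requests
+ from requests_aws4auth import AWS4Auth
+
+ # Placeholder credentials/endpoint; the caller's admin capabilities
+ # determine which operations are permitted.
+ auth = AWS4Auth('<access_key>', '<secret_key>', 'us-east-1', 's3')
+ endpoint = 'http://rgw.example.com'
+
+ resp = requests.get(endpoint + '/admin/usage',
+                     params={'format': 'json', 'uid': 'foo_user'},
+                     auth=auth)
+ print(resp.json())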
+
+Get Usage
+=========
+
+Request bandwidth usage information.
+
+Note: this feature is disabled by default; it can be enabled by setting ``rgw
+enable usage log = true`` in the appropriate section of ceph.conf. For changes
+in ceph.conf to take effect, a radosgw process restart is needed.
+
+:caps: usage=read
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{admin}/usage?format=json HTTP/1.1
+ Host: {fqdn}
+
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user for which the information is requested. If not specified, the request applies to all users.
+:Type: String
+:Example: ``foo_user``
+:Required: No
+
+``start``
+
+:Description: Date and (optional) time that specifies the start time of the requested data.
+:Type: String
+:Example: ``2012-09-25 16:00:00``
+:Required: No
+
+``end``
+
+:Description: Date and (optional) time that specifies the end time of the requested data (non-inclusive).
+:Type: String
+:Example: ``2012-09-25 16:00:00``
+:Required: No
+
+
+``show-entries``
+
+:Description: Specifies whether data entries should be returned.
+:Type: Boolean
+:Example: True [True]
+:Required: No
+
+
+``show-summary``
+
+:Description: Specifies whether data summary should be returned.
+:Type: Boolean
+:Example: True [True]
+:Required: No
+
+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+If successful, the response contains the requested information.
+
+``usage``
+
+:Description: A container for the usage information.
+:Type: Container
+
+``entries``
+
+:Description: A container for the usage entries information.
+:Type: Container
+
+``user``
+
+:Description: A container for the user data information.
+:Type: Container
+
+``owner``
+
+:Description: The name of the user that owns the buckets.
+:Type: String
+
+``bucket``
+
+:Description: The bucket name.
+:Type: String
+
+``time``
+
+:Description: Time lower bound for which data is being specified (rounded to the beginning of the first relevant hour).
+:Type: String
+
+``epoch``
+
+:Description: The time specified in seconds since 1/1/1970.
+:Type: String
+
+``categories``
+
+:Description: A container for stats categories.
+:Type: Container
+
+``entry``
+
+:Description: A container for stats entry.
+:Type: Container
+
+``category``
+
+:Description: Name of request category for which the stats are provided.
+:Type: String
+
+``bytes_sent``
+
+:Description: Number of bytes sent by the RADOS Gateway.
+:Type: Integer
+
+``bytes_received``
+
+:Description: Number of bytes received by the RADOS Gateway.
+:Type: Integer
+
+``ops``
+
+:Description: Number of operations.
+:Type: Integer
+
+``successful_ops``
+
+:Description: Number of successful operations.
+:Type: Integer
+
+``summary``
+
+:Description: A container for stats summary.
+:Type: Container
+
+``total``
+
+:Description: A container for stats summary aggregated total.
+:Type: Container
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+TBD.
+
+Trim Usage
+==========
+
+Remove usage information. With no dates specified, removes all usage
+information.
+
+Note: this feature is disabled by default; it can be enabled by setting ``rgw
+enable usage log = true`` in the appropriate section of ceph.conf. For changes
+in ceph.conf to take effect, a radosgw process restart is needed.
+
+:caps: usage=write
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{admin}/usage?format=json HTTP/1.1
+ Host: {fqdn}
+
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user for which the information is requested. If not specified, the request applies to all users.
+:Type: String
+:Example: ``foo_user``
+:Required: No
+
+``start``
+
+:Description: Date and (optional) time that specifies the start time of the requested data.
+:Type: String
+:Example: ``2012-09-25 16:00:00``
+:Required: No
+
+``end``
+
+:Description: Date and (optional) time that specifies the end time of the requested data (non-inclusive).
+:Type: String
+:Example: ``2012-09-25 16:00:00``
+:Required: No
+
+
+``remove-all``
+
+:Description: Required when uid is not specified, in order to acknowledge multi-user data removal.
+:Type: Boolean
+:Example: True [False]
+:Required: No
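+
+A sketch of this request, reusing the ``auth`` and ``endpoint`` placeholders
+from the sketch at the top of this document:
+
+.. code-block:: python
+
+ # Acknowledge removal of usage data for all users.
+ resp = requests.delete(endpoint + '/admin/usage',
+                        params={'format': 'json', 'remove-all': 'True'},
+                        auth=auth)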
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+TBD.
+
+Get User Info
+=============
+
+Get user information.
+
+:caps: users=read
+
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{admin}/user?format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user for which the information is requested.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+If successful, the response contains the user information.
+
+``user``
+
+:Description: A container for the user data information.
+:Type: Container
+
+``user_id``
+
+:Description: The user id.
+:Type: String
+:Parent: ``user``
+
+``display_name``
+
+:Description: Display name for the user.
+:Type: String
+:Parent: ``user``
+
+``suspended``
+
+:Description: True if the user is suspended.
+:Type: Boolean
+:Parent: ``user``
+
+``max_buckets``
+
+:Description: The maximum number of buckets to be owned by the user.
+:Type: Integer
+:Parent: ``user``
+
+``subusers``
+
+:Description: Subusers associated with this user account.
+:Type: Container
+:Parent: ``user``
+
+``keys``
+
+:Description: S3 keys associated with this user account.
+:Type: Container
+:Parent: ``user``
+
+``swift_keys``
+
+:Description: Swift keys associated with this user account.
+:Type: Container
+:Parent: ``user``
+
+``caps``
+
+:Description: User capabilities.
+:Type: Container
+:Parent: ``user``
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+None.
+
+Create User
+===========
+
+Create a new user. By default, an S3 key pair will be created automatically
+and returned in the response. If only one of ``access-key`` or ``secret-key``
+is provided, the omitted key will be automatically generated. By default, a
+generated key is added to the keyring without replacing an existing key pair.
+If ``access-key`` is specified and refers to an existing key owned by the user
+then it will be modified.
+
+.. versionadded:: Luminous
+
+A ``tenant`` may either be specified as a part of uid or as an additional
+request param.
+
+:caps: users=write
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{admin}/user?format=json HTTP/1.1
+ Host: {fqdn}
+
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user ID to be created.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+A tenant name may also be specified as a part of ``uid``, by following the syntax
+``tenant$user``; refer to :ref:`Multitenancy <rgw-multitenancy>` for more details.
+
+``display-name``
+
+:Description: The display name of the user to be created.
+:Type: String
+:Example: ``foo user``
+:Required: Yes
+
+
+``email``
+
+:Description: The email address associated with the user.
+:Type: String
+:Example: ``foo@bar.com``
+:Required: No
+
+``key-type``
+
+:Description: Key type to be generated, options are: swift, s3 (default).
+:Type: String
+:Example: ``s3`` [``s3``]
+:Required: No
+
+``access-key``
+
+:Description: Specify access key.
+:Type: String
+:Example: ``ABCD0EF12GHIJ2K34LMN``
+:Required: No
+
+
+``secret-key``
+
+:Description: Specify secret key.
+:Type: String
+:Example: ``0AbCDEFg1h2i34JklM5nop6QrSTUV+WxyzaBC7D8``
+:Required: No
+
+``user-caps``
+
+:Description: User capabilities.
+:Type: String
+:Example: ``usage=read, write; users=read``
+:Required: No
+
+``generate-key``
+
+:Description: Generate a new key pair and add to the existing keyring.
+:Type: Boolean
+:Example: True [True]
+:Required: No
+
+``max-buckets``
+
+:Description: Specify the maximum number of buckets the user can own.
+:Type: Integer
+:Example: 500 [1000]
+:Required: No
+
+``suspended``
+
+:Description: Specify whether the user should be suspended.
+:Type: Boolean
+:Example: False [False]
+:Required: No
+
+.. versionadded:: Jewel
+
+``tenant``
+
+:Description: The tenant the user is a part of.
+:Type: String
+:Example: tenant1
+:Required: No
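+
+A sketch of a create-user request, reusing the ``auth`` and ``endpoint``
+placeholders from the earlier sketch:
+
+.. code-block:: python
+
+ resp = requests.put(endpoint + '/admin/user',
+                     params={'format': 'json',
+                             'uid': 'foo_user',
+                             'display-name': 'foo user',
+                             'email': 'foo@bar.com'},
+                     auth=auth)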
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+If successful, the response contains the user information.
+
+``user``
+
+:Description: A container for the user data information.
+:Type: Container
+
+``tenant``
+
+:Description: The tenant the user is a part of.
+:Type: String
+:Parent: ``user``
+
+``user_id``
+
+:Description: The user id.
+:Type: String
+:Parent: ``user``
+
+``display_name``
+
+:Description: Display name for the user.
+:Type: String
+:Parent: ``user``
+
+``suspended``
+
+:Description: True if the user is suspended.
+:Type: Boolean
+:Parent: ``user``
+
+``max_buckets``
+
+:Description: The maximum number of buckets to be owned by the user.
+:Type: Integer
+:Parent: ``user``
+
+``subusers``
+
+:Description: Subusers associated with this user account.
+:Type: Container
+:Parent: ``user``
+
+``keys``
+
+:Description: S3 keys associated with this user account.
+:Type: Container
+:Parent: ``user``
+
+``swift_keys``
+
+:Description: Swift keys associated with this user account.
+:Type: Container
+:Parent: ``user``
+
+``caps``
+
+:Description: User capabilities.
+:Type: Container
+:Parent: ``user``
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``UserExists``
+
+:Description: Attempt to create existing user.
+:Code: 409 Conflict
+
+``InvalidAccessKey``
+
+:Description: Invalid access key specified.
+:Code: 400 Bad Request
+
+``InvalidKeyType``
+
+:Description: Invalid key type specified.
+:Code: 400 Bad Request
+
+``InvalidSecretKey``
+
+:Description: Invalid secret key specified.
+:Code: 400 Bad Request
+
+``KeyExists``
+
+:Description: Provided access key exists and belongs to another user.
+:Code: 409 Conflict
+
+``EmailExists``
+
+:Description: Provided email address exists.
+:Code: 409 Conflict
+
+``InvalidCapability``
+
+:Description: Attempt to grant invalid admin capability.
+:Code: 400 Bad Request
+
+
+Modify User
+===========
+
+Modify a user.
+
+:caps: users=write
+
+Syntax
+~~~~~~
+
+::
+
+ POST /{admin}/user?format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user ID to be modified.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+``display-name``
+
+:Description: The display name of the user to be modified.
+:Type: String
+:Example: ``foo user``
+:Required: No
+
+``email``
+
+:Description: The email address to be associated with the user.
+:Type: String
+:Example: ``foo@bar.com``
+:Required: No
+
+``generate-key``
+
+:Description: Generate a new key pair and add to the existing keyring.
+:Type: Boolean
+:Example: True [False]
+:Required: No
+
+``access-key``
+
+:Description: Specify access key.
+:Type: String
+:Example: ``ABCD0EF12GHIJ2K34LMN``
+:Required: No
+
+``secret-key``
+
+:Description: Specify secret key.
+:Type: String
+:Example: ``0AbCDEFg1h2i34JklM5nop6QrSTUV+WxyzaBC7D8``
+:Required: No
+
+``key-type``
+
+:Description: Key type to be generated, options are: swift, s3 (default).
+:Type: String
+:Example: ``s3``
+:Required: No
+
+``user-caps``
+
+:Description: User capabilities.
+:Type: String
+:Example: ``usage=read, write; users=read``
+:Required: No
+
+``max-buckets``
+
+:Description: Specify the maximum number of buckets the user can own.
+:Type: Integer
+:Example: 500 [1000]
+:Required: No
+
+``suspended``
+
+:Description: Specify whether the user should be suspended.
+:Type: Boolean
+:Example: False [False]
+:Required: No
+
+``op-mask``
+
+:Description: The op-mask of the user to be modified.
+:Type: String
+:Example: ``read, write, delete, *``
+:Required: No
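+
+A sketch of a modify-user request, reusing the ``auth`` and ``endpoint``
+placeholders from the earlier sketch:
+
+.. code-block:: python
+
+ resp = requests.post(endpoint + '/admin/user',
+                      params={'format': 'json',
+                              'uid': 'foo_user',
+                              'display-name': 'foo user (renamed)'},
+                      auth=auth)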
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+If successful, the response contains the user information.
+
+``user``
+
+:Description: A container for the user data information.
+:Type: Container
+
+``user_id``
+
+:Description: The user id.
+:Type: String
+:Parent: ``user``
+
+``display_name``
+
+:Description: Display name for the user.
+:Type: String
+:Parent: ``user``
+
+
+``suspended``
+
+:Description: True if the user is suspended.
+:Type: Boolean
+:Parent: ``user``
+
+
+``max_buckets``
+
+:Description: The maximum number of buckets to be owned by the user.
+:Type: Integer
+:Parent: ``user``
+
+
+``subusers``
+
+:Description: Subusers associated with this user account.
+:Type: Container
+:Parent: ``user``
+
+
+``keys``
+
+:Description: S3 keys associated with this user account.
+:Type: Container
+:Parent: ``user``
+
+
+``swift_keys``
+
+:Description: Swift keys associated with this user account.
+:Type: Container
+:Parent: ``user``
+
+
+``caps``
+
+:Description: User capabilities.
+:Type: Container
+:Parent: ``user``
+
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``InvalidAccessKey``
+
+:Description: Invalid access key specified.
+:Code: 400 Bad Request
+
+``InvalidKeyType``
+
+:Description: Invalid key type specified.
+:Code: 400 Bad Request
+
+``InvalidSecretKey``
+
+:Description: Invalid secret key specified.
+:Code: 400 Bad Request
+
+``KeyExists``
+
+:Description: Provided access key exists and belongs to another user.
+:Code: 409 Conflict
+
+``EmailExists``
+
+:Description: Provided email address exists.
+:Code: 409 Conflict
+
+``InvalidCapability``
+
+:Description: Attempt to grant invalid admin capability.
+:Code: 400 Bad Request
+
+Remove User
+===========
+
+Remove an existing user.
+
+:caps: users=write
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{admin}/user?format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user ID to be removed.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes.
+
+``purge-data``
+
+:Description: When specified, the buckets and objects belonging
+ to the user will also be removed.
+:Type: Boolean
+:Example: True
+:Required: No
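+
+A sketch of a remove-user request, reusing the ``auth`` and ``endpoint``
+placeholders from the earlier sketch:
+
+.. code-block:: python
+
+ resp = requests.delete(endpoint + '/admin/user',
+                        params={'format': 'json',
+                                'uid': 'foo_user',
+                                'purge-data': 'True'},
+                        auth=auth)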
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+None
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+None.
+
+Create Subuser
+==============
+
+Create a new subuser (primarily useful for clients using the Swift API).
+Note that in general for a subuser to be useful, it must be granted
+permissions by specifying ``access``. As with user creation, if
+``subuser`` is specified without ``secret``, then a secret key will
+be automatically generated.
+
+:caps: users=write
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{admin}/user?subuser&format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user ID under which a subuser is to be created.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+
+``subuser``
+
+:Description: Specify the subuser ID to be created.
+:Type: String
+:Example: ``sub_foo``
+:Required: Yes
+
+``secret-key``
+
+:Description: Specify secret key.
+:Type: String
+:Example: ``0AbCDEFg1h2i34JklM5nop6QrSTUV+WxyzaBC7D8``
+:Required: No
+
+``key-type``
+
+:Description: Key type to be generated, options are: swift (default), s3.
+:Type: String
+:Example: ``swift`` [``swift``]
+:Required: No
+
+``access``
+
+:Description: Set access permissions for sub-user, should be one
+ of ``read, write, readwrite, full``.
+:Type: String
+:Example: ``read``
+:Required: No
+
+``generate-secret``
+
+:Description: Generate the secret key.
+:Type: Boolean
+:Example: True [False]
+:Required: No
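+
+A sketch of a create-subuser request, reusing the ``auth`` and ``endpoint``
+placeholders from the earlier sketch:
+
+.. code-block:: python
+
+ resp = requests.put(endpoint + '/admin/user',
+                     params={'format': 'json',
+                             'uid': 'foo_user',
+                             'subuser': 'sub_foo',
+                             'access': 'full'},
+                     auth=auth)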
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+If successful, the response contains the subuser information.
+
+
+``subusers``
+
+:Description: Subusers associated with the user account.
+:Type: Container
+
+``id``
+
+:Description: Subuser id.
+:Type: String
+:Parent: ``subusers``
+
+``permissions``
+
+:Description: Subuser access to user account.
+:Type: String
+:Parent: ``subusers``
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``SubuserExists``
+
+:Description: Specified subuser exists.
+:Code: 409 Conflict
+
+``InvalidKeyType``
+
+:Description: Invalid key type specified.
+:Code: 400 Bad Request
+
+``InvalidSecretKey``
+
+:Description: Invalid secret key specified.
+:Code: 400 Bad Request
+
+``InvalidAccess``
+
+:Description: Invalid subuser access specified.
+:Code: 400 Bad Request
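+
+A minimal Python sketch of this request, using the same placeholder
+``requests``/``requests-aws4auth`` setup as in the Remove User example above::
+
+    import requests
+    from requests_aws4auth import AWS4Auth
+
+    auth = AWS4Auth('ACCESS_KEY', 'SECRET_KEY', 'default', 's3')
+
+    # Create a Swift subuser with full access; its secret key is generated.
+    resp = requests.put('http://rgw.example.com/admin/user',
+                        params={'uid': 'foo_user', 'subuser': 'sub_foo',
+                                'access': 'full', 'generate-secret': 'True',
+                                'format': 'json'},
+                        auth=auth)
+    print(resp.json())  # the subusers of this user account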
+
+Modify Subuser
+==============
+
+Modify an existing subuser.
+
+:caps: users=write
+
+Syntax
+~~~~~~
+
+::
+
+ POST /{admin}/user?subuser&format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user ID under which the subuser is to be modified.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+``subuser``
+
+:Description: The subuser ID to be modified.
+:Type: String
+:Example: ``sub_foo``
+:Required: Yes
+
+``generate-secret``
+
+:Description: Generate a new secret key for the subuser,
+ replacing the existing key.
+:Type: Boolean
+:Example: True [False]
+:Required: No
+
+``secret``
+
+:Description: Specify secret key.
+:Type: String
+:Example: ``0AbCDEFg1h2i34JklM5nop6QrSTUV+WxyzaBC7D8``
+:Required: No
+
+``key-type``
+
+:Description: Key type to be generated, options are: swift (default), s3.
+:Type: String
+:Example: ``swift`` [``swift``]
+:Required: No
+
+``access``
+
+:Description: Set access permissions for the subuser; must be one
+              of ``read``, ``write``, ``readwrite``, or ``full``.
+:Type: String
+:Example: ``read``
+:Required: No
+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+If successful, the response contains the subuser information.
+
+
+``subusers``
+
+:Description: Subusers associated with the user account.
+:Type: Container
+
+``id``
+
+:Description: Subuser id.
+:Type: String
+:Parent: ``subusers``
+
+``permissions``
+
+:Description: Subuser access to user account.
+:Type: String
+:Parent: ``subusers``
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``InvalidKeyType``
+
+:Description: Invalid key type specified.
+:Code: 400 Bad Request
+
+``InvalidSecretKey``
+
+:Description: Invalid secret key specified.
+:Code: 400 Bad Request
+
+``InvalidAccess``
+
+:Description: Invalid subuser access specified.
+:Code: 400 Bad Request
+
+Remove Subuser
+==============
+
+Remove an existing subuser.
+
+:caps: users=write
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{admin}/user?subuser&format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user ID under which the subuser is to be removed.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+
+``subuser``
+
+:Description: The subuser ID to be removed.
+:Type: String
+:Example: ``sub_foo``
+:Required: Yes
+
+``purge-keys``
+
+:Description: Remove keys belonging to the subuser.
+:Type: Boolean
+:Example: True [True]
+:Required: No
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+None.
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+None.
+
+Create Key
+==========
+
+Create a new key. If a ``subuser`` is specified, then by default the created
+keys will be of type swift. If only one of ``access-key`` or ``secret-key``
+is provided, the omitted key will be generated automatically; that is, if
+only ``secret-key`` is specified, then ``access-key`` will be generated
+automatically. By default, a generated key is added to the keyring without
+replacing an existing key pair. If ``access-key`` is specified and refers to
+an existing key owned by the user, then it will be modified. The response is
+a container listing all keys of the same type as the key created. Note that
+when creating a swift key, specifying the option ``access-key`` will have no
+effect. Additionally, only one swift key may be held by each user or subuser.
+
+:caps: users=write
+
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{admin}/user?key&format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user ID to receive the new key.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+``subuser``
+
+:Description: The subuser ID to receive the new key.
+:Type: String
+:Example: ``sub_foo``
+:Required: No
+
+``key-type``
+
+:Description: Key type to be generated, options are: swift, s3 (default).
+:Type: String
+:Example: ``s3`` [``s3``]
+:Required: No
+
+``access-key``
+
+:Description: Specify the access key.
+:Type: String
+:Example: ``AB01C2D3EF45G6H7IJ8K``
+:Required: No
+
+``secret-key``
+
+:Description: Specify the secret key.
+:Type: String
+:Example: ``0ab/CdeFGhij1klmnopqRSTUv1WxyZabcDEFgHij``
+:Required: No
+
+``generate-key``
+
+:Description: Generate a new key pair and add it to the existing keyring.
+:Type: Boolean
+:Example: True [``True``]
+:Required: No
+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+``keys``
+
+:Description: Keys of the type created, associated with this user account.
+:Type: Container
+
+``user``
+
+:Description: The user account associated with the key.
+:Type: String
+:Parent: ``keys``
+
+``access-key``
+
+:Description: The access key.
+:Type: String
+:Parent: ``keys``
+
+``secret-key``
+
+:Description: The secret key.
+:Type: String
+:Parent: ``keys``
+
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``InvalidAccessKey``
+
+:Description: Invalid access key specified.
+:Code: 400 Bad Request
+
+``InvalidKeyType``
+
+:Description: Invalid key type specified.
+:Code: 400 Bad Request
+
+``InvalidSecretKey``
+
+:Description: Invalid secret key specified.
+:Code: 400 Bad Request
+
+``KeyExists``
+
+:Description: Provided access key exists and belongs to another user.
+:Code: 409 Conflict
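+
+A minimal Python sketch of this request (placeholder endpoint and credentials
+as in the earlier examples)::
+
+    import requests
+    from requests_aws4auth import AWS4Auth
+
+    auth = AWS4Auth('ACCESS_KEY', 'SECRET_KEY', 'default', 's3')
+
+    # Generate a new S3 key pair for the user; the response lists all
+    # of the user's keys of the created type.
+    resp = requests.put('http://rgw.example.com/admin/user'
+                        '?key&uid=foo_user&key-type=s3'
+                        '&generate-key=True&format=json',
+                        auth=auth)
+    print(resp.json())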
+
+Remove Key
+==========
+
+Remove an existing key.
+
+:caps: users=write
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{admin}/user?key&format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``access-key``
+
+:Description: The S3 access key belonging to the S3 key pair to remove.
+:Type: String
+:Example: ``AB01C2D3EF45G6H7IJ8K``
+:Required: Yes
+
+``uid``
+
+:Description: The user to remove the key from.
+:Type: String
+:Example: ``foo_user``
+:Required: No
+
+``subuser``
+
+:Description: The subuser to remove the key from.
+:Type: String
+:Example: ``sub_foo``
+:Required: No
+
+``key-type``
+
+:Description: Key type to be removed, options are: swift, s3.
+ NOTE: Required to remove swift key.
+:Type: String
+:Example: ``swift``
+:Required: No
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+None.
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+None.
+
+Get Bucket Info
+===============
+
+Get information about a subset of the existing buckets. If ``uid`` is specified
+without ``bucket`` then all buckets belonging to the user will be returned. If
+``bucket`` alone is specified, information for that particular bucket will be
+retrieved.
+
+:caps: buckets=read
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{admin}/bucket?format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``bucket``
+
+:Description: The bucket to return info on.
+:Type: String
+:Example: ``foo_bucket``
+:Required: No
+
+``uid``
+
+:Description: The user to retrieve bucket information for.
+:Type: String
+:Example: ``foo_user``
+:Required: No
+
+``stats``
+
+:Description: Return bucket statistics.
+:Type: Boolean
+:Example: True [False]
+:Required: No
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+If successful, the request returns a ``buckets`` container containing
+the desired bucket information.
+
+``stats``
+
+:Description: Per bucket information.
+:Type: Container
+
+``buckets``
+
+:Description: Contains a list of one or more bucket containers.
+:Type: Container
+
+``bucket``
+
+:Description: Container for single bucket information.
+:Type: Container
+:Parent: ``buckets``
+
+``name``
+
+:Description: The name of the bucket.
+:Type: String
+:Parent: ``bucket``
+
+``pool``
+
+:Description: The pool the bucket is stored in.
+:Type: String
+:Parent: ``bucket``
+
+``id``
+
+:Description: The unique bucket id.
+:Type: String
+:Parent: ``bucket``
+
+``marker``
+
+:Description: Internal bucket tag.
+:Type: String
+:Parent: ``bucket``
+
+``owner``
+
+:Description: The user id of the bucket owner.
+:Type: String
+:Parent: ``bucket``
+
+``usage``
+
+:Description: Storage usage information.
+:Type: Container
+:Parent: ``bucket``
+
+``index``
+
+:Description: Status of bucket index.
+:Type: String
+:Parent: ``bucket``
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``IndexRepairFailed``
+
+:Description: Bucket index repair failed.
+:Code: 409 Conflict
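+
+A minimal Python sketch of this request (placeholders as in the earlier
+examples)::
+
+    import requests
+    from requests_aws4auth import AWS4Auth
+
+    auth = AWS4Auth('ACCESS_KEY', 'SECRET_KEY', 'default', 's3')
+
+    # List the buckets owned by a user, including usage statistics.
+    resp = requests.get('http://rgw.example.com/admin/bucket',
+                        params={'uid': 'foo_user', 'stats': 'True',
+                                'format': 'json'},
+                        auth=auth)
+    # With 'stats' the response carries one entry per bucket (sketch).
+    for bucket in resp.json():
+        print(bucket)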
+
+Check Bucket Index
+==================
+
+Check the index of an existing bucket. NOTE: to check multipart object
+accounting with ``check-objects``, ``fix`` must be set to True.
+
+:caps: buckets=write
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{admin}/bucket?index&format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``bucket``
+
+:Description: The bucket to return info on.
+:Type: String
+:Example: ``foo_bucket``
+:Required: Yes
+
+``check-objects``
+
+:Description: Check multipart object accounting.
+:Type: Boolean
+:Example: True [False]
+:Required: No
+
+``fix``
+
+:Description: Also fix the bucket index when checking.
+:Type: Boolean
+:Example: False [False]
+:Required: No
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+``index``
+
+:Description: Status of bucket index.
+:Type: String
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``IndexRepairFailed``
+
+:Description: Bucket index repair failed.
+:Code: 409 Conflict
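+
+A minimal Python sketch of this request (placeholders as in the earlier
+examples)::
+
+    import requests
+    from requests_aws4auth import AWS4Auth
+
+    auth = AWS4Auth('ACCESS_KEY', 'SECRET_KEY', 'default', 's3')
+
+    # Check the bucket index; 'fix' must be True for 'check-objects'.
+    resp = requests.get('http://rgw.example.com/admin/bucket'
+                        '?index&bucket=foo_bucket'
+                        '&check-objects=True&fix=True&format=json',
+                        auth=auth)
+    print(resp.json())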
+
+Remove Bucket
+=============
+
+Delete an existing bucket.
+
+:caps: buckets=write
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{admin}/bucket?format=json HTTP/1.1
+ Host: {fqdn}
+
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``bucket``
+
+:Description: The bucket to remove.
+:Type: String
+:Example: ``foo_bucket``
+:Required: Yes
+
+``purge-objects``
+
+:Description: Remove a bucket's objects before deletion.
+:Type: Boolean
+:Example: True [False]
+:Required: No
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+None.
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``BucketNotEmpty``
+
+:Description: Attempted to delete non-empty bucket.
+:Code: 409 Conflict
+
+``ObjectRemovalFailed``
+
+:Description: Unable to remove objects.
+:Code: 409 Conflict
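+
+A minimal Python sketch of this request (placeholders as in the earlier
+examples)::
+
+    import requests
+    from requests_aws4auth import AWS4Auth
+
+    auth = AWS4Auth('ACCESS_KEY', 'SECRET_KEY', 'default', 's3')
+
+    # Delete the bucket, removing its objects first.
+    resp = requests.delete('http://rgw.example.com/admin/bucket',
+                           params={'bucket': 'foo_bucket',
+                                   'purge-objects': 'True',
+                                   'format': 'json'},
+                           auth=auth)
+    resp.raise_for_status()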
+
+Unlink Bucket
+=============
+
+Unlink a bucket from a specified user. Primarily useful for changing
+bucket ownership.
+
+:caps: buckets=write
+
+Syntax
+~~~~~~
+
+::
+
+ POST /{admin}/bucket?format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``bucket``
+
+:Description: The bucket to unlink.
+:Type: String
+:Example: ``foo_bucket``
+:Required: Yes
+
+``uid``
+
+:Description: The user ID to unlink the bucket from.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+None.
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``BucketUnlinkFailed``
+
+:Description: Unable to unlink bucket from specified user.
+:Code: 409 Conflict
+
+Link Bucket
+===========
+
+Link a bucket to a specified user, unlinking the bucket from
+any previous user.
+
+:caps: buckets=write
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{admin}/bucket?format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``bucket``
+
+:Description: The bucket name to link.
+:Type: String
+:Example: ``foo_bucket``
+:Required: Yes
+
+``bucket-id``
+
+:Description: The bucket id to link.
+:Type: String
+:Example: ``dev.6607669.420``
+:Required: No
+
+``uid``
+
+:Description: The user ID to link the bucket to.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+``bucket``
+
+:Description: Container for single bucket information.
+:Type: Container
+
+``name``
+
+:Description: The name of the bucket.
+:Type: String
+:Parent: ``bucket``
+
+``pool``
+
+:Description: The pool the bucket is stored in.
+:Type: String
+:Parent: ``bucket``
+
+``id``
+
+:Description: The unique bucket id.
+:Type: String
+:Parent: ``bucket``
+
+``marker``
+
+:Description: Internal bucket tag.
+:Type: String
+:Parent: ``bucket``
+
+``owner``
+
+:Description: The user id of the bucket owner.
+:Type: String
+:Parent: ``bucket``
+
+``usage``
+
+:Description: Storage usage information.
+:Type: Container
+:Parent: ``bucket``
+
+``index``
+
+:Description: Status of bucket index.
+:Type: String
+:Parent: ``bucket``
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``BucketUnlinkFailed``
+
+:Description: Unable to unlink bucket from specified user.
+:Code: 409 Conflict
+
+``BucketLinkFailed``
+
+:Description: Unable to link bucket to specified user.
+:Code: 409 Conflict
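+
+A minimal Python sketch of this request (placeholders as in the earlier
+examples)::
+
+    import requests
+    from requests_aws4auth import AWS4Auth
+
+    auth = AWS4Auth('ACCESS_KEY', 'SECRET_KEY', 'default', 's3')
+
+    # Transfer ownership of the bucket to another user.
+    resp = requests.put('http://rgw.example.com/admin/bucket',
+                        params={'bucket': 'foo_bucket', 'uid': 'foo_user',
+                                'format': 'json'},
+                        auth=auth)
+    resp.raise_for_status()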
+
+Remove Object
+=============
+
+Remove an existing object. NOTE: Does not require owner to be non-suspended.
+
+:caps: buckets=write
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{admin}/bucket?object&format=json HTTP/1.1
+ Host: {fqdn}
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``bucket``
+
+:Description: The bucket containing the object to be removed.
+:Type: String
+:Example: ``foo_bucket``
+:Required: Yes
+
+``object``
+
+:Description: The object to remove.
+:Type: String
+:Example: ``foo.txt``
+:Required: Yes
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+None.
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``NoSuchObject``
+
+:Description: Specified object does not exist.
+:Code: 404 Not Found
+
+``ObjectRemovalFailed``
+
+:Description: Unable to remove objects.
+:Code: 409 Conflict
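+
+A minimal Python sketch of this request (placeholders as in the earlier
+examples)::
+
+    import requests
+    from requests_aws4auth import AWS4Auth
+
+    auth = AWS4Auth('ACCESS_KEY', 'SECRET_KEY', 'default', 's3')
+
+    # Remove a single object from the bucket.
+    resp = requests.delete('http://rgw.example.com/admin/bucket',
+                           params={'bucket': 'foo_bucket',
+                                   'object': 'foo.txt',
+                                   'format': 'json'},
+                           auth=auth)
+    resp.raise_for_status()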
+
+
+
+Get Bucket or Object Policy
+===========================
+
+Read the policy of an object or bucket.
+
+:caps: buckets=read
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{admin}/bucket?policy&format=json HTTP/1.1
+ Host: {fqdn}
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``bucket``
+
+:Description: The bucket to read the policy from.
+:Type: String
+:Example: ``foo_bucket``
+:Required: Yes
+
+``object``
+
+:Description: The object to read the policy from.
+:Type: String
+:Example: ``foo.txt``
+:Required: No
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+If successful, returns the object or bucket policy.
+
+``policy``
+
+:Description: Access control policy.
+:Type: Container
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``IncompleteBody``
+
+:Description: Either bucket was not specified for a bucket policy request or bucket
+ and object were not specified for an object policy request.
+:Code: 400 Bad Request
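+
+A minimal Python sketch of this request (placeholders as in the earlier
+examples)::
+
+    import requests
+    from requests_aws4auth import AWS4Auth
+
+    auth = AWS4Auth('ACCESS_KEY', 'SECRET_KEY', 'default', 's3')
+
+    # Read the policy of one object; omit 'object' for the bucket policy.
+    resp = requests.get('http://rgw.example.com/admin/bucket'
+                        '?policy&bucket=foo_bucket&object=foo.txt'
+                        '&format=json',
+                        auth=auth)
+    print(resp.json())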
+
+Add A User Capability
+=====================
+
+Add an administrative capability to a specified user.
+
+:caps: users=write
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{admin}/user?caps&format=json HTTP/1.1
+ Host: {fqdn}
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user ID to add an administrative capability to.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+``user-caps``
+
+:Description: The administrative capability to add to the user.
+:Type: String
+:Example: ``usage=read,write;user=write``
+:Required: Yes
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+If successful, the response contains the user's capabilities.
+
+``user``
+
+:Description: A container for the user data information.
+:Type: Container
+
+``user_id``
+
+:Description: The user id.
+:Type: String
+:Parent: ``user``
+
+``caps``
+
+:Description: User capabilities.
+:Type: Container
+:Parent: ``user``
+
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``InvalidCapability``
+
+:Description: Attempt to grant invalid admin capability.
+:Code: 400 Bad Request
+
+Example Request
+~~~~~~~~~~~~~~~
+
+::
+
+ PUT /{admin}/user?caps&user-caps=usage=read,write;user=write&format=json HTTP/1.1
+ Host: {fqdn}
+ Content-Type: text/plain
+ Authorization: {your-authorization-token}
+
+
+
+Remove A User Capability
+========================
+
+Remove an administrative capability from a specified user.
+
+:caps: users=write
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{admin}/user?caps&format=json HTTP/1.1
+ Host: {fqdn}
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``uid``
+
+:Description: The user ID to remove an administrative capability from.
+:Type: String
+:Example: ``foo_user``
+:Required: Yes
+
+``user-caps``
+
+:Description: The administrative capabilities to remove from the user.
+:Type: String
+:Example: ``usage=read,write``
+:Required: Yes
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+If successful, the response contains the user's capabilities.
+
+``user``
+
+:Description: A container for the user data information.
+:Type: Container
+
+``user_id``
+
+:Description: The user id.
+:Type: String
+:Parent: ``user``
+
+``caps``
+
+:Description: User capabilities.
+:Type: Container
+:Parent: ``user``
+
+
+Special Error Responses
+~~~~~~~~~~~~~~~~~~~~~~~
+
+``InvalidCapability``
+
+:Description: Attempt to remove an invalid admin capability.
+:Code: 400 Bad Request
+
+``NoSuchCap``
+
+:Description: User does not possess specified capability.
+:Code: 404 Not Found
+
+
+Quotas
+======
+
+The Admin Operations API enables you to set quotas on users and on buckets
+owned by users. See `Quota Management`_ for additional details. Quotas include
+the maximum number of objects in a bucket and the maximum storage size in
+megabytes.
+
+To view quotas, the user must have a ``users=read`` capability. To set,
+modify or disable a quota, the user must have ``users=write`` capability.
+See the `Admin Guide`_ for details.
+
+Valid parameters for quotas include:
+
+- **Bucket:** The ``bucket`` option allows you to specify a quota for
+ buckets owned by a user.
+
+- **Maximum Objects:** The ``max-objects`` setting allows you to specify
+ the maximum number of objects. A negative value disables this setting.
+
+- **Maximum Size:** The ``max-size`` option allows you to specify a quota
+ for the maximum number of bytes. The ``max-size-kb`` option allows you
+ to specify it in KiB. A negative value disables this setting.
+
+- **Quota Type:** The ``quota-type`` option sets the scope for the quota.
+ The options are ``bucket`` and ``user``.
+
+- **Enable/Disable Quota:** The ``enabled`` option specifies whether the
+ quota should be enabled. The value should be either 'True' or 'False'.
+
+Get User Quota
+~~~~~~~~~~~~~~
+
+To get a quota, the user must have ``users`` capability set with ``read``
+permission. ::
+
+ GET /admin/user?quota&uid=<uid>&quota-type=user
+
+
+Set User Quota
+~~~~~~~~~~~~~~
+
+To set a quota, the user must have ``users`` capability set with ``write``
+permission. ::
+
+ PUT /admin/user?quota&uid=<uid>&quota-type=user
+
+
+The content must include a JSON representation of the quota settings
+as encoded in the corresponding read operation.
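+
+As a sketch, a quota can be set from Python by PUTting such a JSON document.
+The field names below are assumed to match those returned by the
+corresponding GET; the endpoint and credentials are placeholders, and the
+bucket-scoped variants take the same kind of body::
+
+    import json
+    import requests
+    from requests_aws4auth import AWS4Auth
+
+    auth = AWS4Auth('ACCESS_KEY', 'SECRET_KEY', 'default', 's3')
+
+    # Enable a user quota of at most 1024 objects and 1 GiB of data.
+    quota = {'enabled': True, 'max_objects': 1024, 'max_size_kb': 1048576}
+    resp = requests.put(
+        'http://rgw.example.com/admin/user?quota&uid=foo_user&quota-type=user',
+        data=json.dumps(quota), auth=auth)
+    resp.raise_for_status()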
+
+
+Get Bucket Quota
+~~~~~~~~~~~~~~~~
+
+To get a quota, the user must have ``users`` capability set with ``read``
+permission. ::
+
+ GET /admin/user?quota&uid=<uid>&quota-type=bucket
+
+
+Set Bucket Quota
+~~~~~~~~~~~~~~~~
+
+To set a quota, the user must have ``users`` capability set with ``write``
+permission. ::
+
+ PUT /admin/user?quota&uid=<uid>&quota-type=bucket
+
+The content must include a JSON representation of the quota settings
+as encoded in the corresponding read operation.
+
+
+Set Quota for an Individual Bucket
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To set a quota, the user must have ``buckets`` capability set with ``write``
+permission. ::
+
+ PUT /admin/bucket?quota&uid=<uid>&bucket=<bucket-name>
+
+The content must include a JSON representation of the quota settings
+as mentioned in the Set Bucket Quota section above.
+
+
+
+Standard Error Responses
+========================
+
+``AccessDenied``
+
+:Description: Access denied.
+:Code: 403 Forbidden
+
+``InternalError``
+
+:Description: Internal server error.
+:Code: 500 Internal Server Error
+
+``NoSuchUser``
+
+:Description: User does not exist.
+:Code: 404 Not Found
+
+``NoSuchBucket``
+
+:Description: Bucket does not exist.
+:Code: 404 Not Found
+
+``NoSuchKey``
+
+:Description: No such access key.
+:Code: 404 Not Found
+
+
+
+
+Binding libraries
+========================
+
+``Golang``
+
+ - `IrekFasikhov/go-rgwadmin`_
+ - `QuentinPerez/go-radosgw`_
+
+``Java``
+
+ - `twonote/radosgw-admin4j`_
+
+``Python``
+
+ - `UMIACS/rgwadmin`_
+ - `valerytschopp/python-radosgw-admin`_
+
+
+
+.. _Admin Guide: ../admin
+.. _Quota Management: ../admin#quota-management
+.. _IrekFasikhov/go-rgwadmin: https://github.com/IrekFasikhov/go-rgwadmin
+.. _QuentinPerez/go-radosgw: https://github.com/QuentinPerez/go-radosgw
+.. _twonote/radosgw-admin4j: https://github.com/twonote/radosgw-admin4j
+.. _UMIACS/rgwadmin: https://github.com/UMIACS/rgwadmin
+.. _valerytschopp/python-radosgw-admin: https://github.com/valerytschopp/python-radosgw-admin
+
diff --git a/doc/radosgw/api.rst b/doc/radosgw/api.rst
new file mode 100644
index 000000000..c01a3e579
--- /dev/null
+++ b/doc/radosgw/api.rst
@@ -0,0 +1,14 @@
+===============
+librgw (Python)
+===============
+
+.. highlight:: python
+
+The `rgw` python module provides file-like access to rgw.
+
+API Reference
+=============
+
+.. automodule:: rgw
+ :members: LibRGWFS, FileHandle
+
diff --git a/doc/radosgw/archive-sync-module.rst b/doc/radosgw/archive-sync-module.rst
new file mode 100644
index 000000000..b121ee6b1
--- /dev/null
+++ b/doc/radosgw/archive-sync-module.rst
@@ -0,0 +1,44 @@
+===================
+Archive Sync Module
+===================
+
+.. versionadded:: Nautilus
+
+This sync module leverages the versioning feature of the S3 objects in RGW to
+have an archive zone that captures the different versions of the S3 objects
+as they occur over time in the other zones.
+
+An archive zone allows you to keep a history of versions of S3 objects that
+can only be eliminated through the gateways associated with the archive zone.
+
+This functionality is useful in a configuration where several non-versioned
+zones replicate their data and metadata through their zone gateways (mirror
+configuration), providing high availability to the end users, while the
+archive zone captures all the data and metadata updates and consolidates
+them as versions of S3 objects.
+
+Including an archive zone in a multizone configuration gives you the
+flexibility of an S3 object history in only one zone while saving the space
+that the replicas of the versioned S3 objects would consume in the rest of
+the zones.
+
+
+
+Archive Sync Tier Type Configuration
+------------------------------------
+
+How to Configure
+~~~~~~~~~~~~~~~~
+
+See `Multisite Configuration`_ for multisite configuration instructions. The
+archive sync module requires the creation of a new zone. The zone tier type
+needs to be defined as ``archive``:
+
+::
+
+ # radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
+ --rgw-zone={zone-name} \
+ --endpoints={http://fqdn}[,{http://fqdn}] \
+ --tier-type=archive
+
+.. _Multisite Configuration: ../multisite
diff --git a/doc/radosgw/barbican.rst b/doc/radosgw/barbican.rst
new file mode 100644
index 000000000..a90d063fb
--- /dev/null
+++ b/doc/radosgw/barbican.rst
@@ -0,0 +1,123 @@
+==============================
+OpenStack Barbican Integration
+==============================
+
+OpenStack `Barbican`_ can be used as a secure key management service for
+`Server-Side Encryption`_.
+
+.. image:: ../images/rgw-encryption-barbican.png
+
+#. `Configure Keystone`_
+#. `Create a Keystone user`_
+#. `Configure the Ceph Object Gateway`_
+#. `Create a key in Barbican`_
+
+Configure Keystone
+==================
+
+Barbican depends on Keystone for authorization and access control of its keys.
+
+See `OpenStack Keystone Integration`_.
+
+Create a Keystone user
+======================
+
+Create a new user that will be used by the Ceph Object Gateway to retrieve
+keys.
+
+For example::
+
+ user = rgwcrypt-user
+ pass = rgwcrypt-password
+ tenant = rgwcrypt
+
+See OpenStack documentation for `Manage projects, users, and roles`_.
+
+Create a key in Barbican
+========================
+
+See Barbican documentation for `How to Create a Secret`_. Requests to
+Barbican must include a valid Keystone token in the ``X-Auth-Token`` header.
+
+.. note:: Server-side encryption keys must be 256-bit long and base64 encoded.
+
+Example request::
+
+ POST /v1/secrets HTTP/1.1
+ Host: barbican.example.com:9311
+ Accept: */*
+ Content-Type: application/json
+ X-Auth-Token: 7f7d588dd29b44df983bc961a6b73a10
+ Content-Length: 299
+
+ {
+ "name": "my-key",
+ "expiration": "2016-12-28T19:14:44.180394",
+ "algorithm": "aes",
+ "bit_length": 256,
+ "mode": "cbc",
+ "payload": "6b+WOZ1T3cqZMxgThRcXAQBrS5mXKdDUphvpxptl9/4=",
+ "payload_content_type": "application/octet-stream",
+ "payload_content_encoding": "base64"
+ }
+
+Response::
+
+ {"secret_ref": "http://barbican.example.com:9311/v1/secrets/d1e7ef3b-f841-4b7c-90b2-b7d90ca2d723"}
+
+In the response, ``d1e7ef3b-f841-4b7c-90b2-b7d90ca2d723`` is the key id that
+can be used in any `SSE-KMS`_ request.
+
+This newly created key is not accessible by user ``rgwcrypt-user``. This
+privilege must be added with an ACL. See `How to Set/Replace ACL`_ for more
+details.
+
+Example request (assuming that the Keystone id of ``rgwcrypt-user`` is
+``906aa90bd8a946c89cdff80d0869460f``)::
+
+ PUT /v1/secrets/d1e7ef3b-f841-4b7c-90b2-b7d90ca2d723/acl HTTP/1.1
+ Host: barbican.example.com:9311
+ Accept: */*
+ Content-Type: application/json
+ X-Auth-Token: 7f7d588dd29b44df983bc961a6b73a10
+ Content-Length: 101
+
+ {
+ "read":{
+ "users":[ "906aa90bd8a946c89cdff80d0869460f" ],
+ "project-access": true
+ }
+ }
+
+Response::
+
+ {"acl_ref": "http://barbican.example.com:9311/v1/secrets/d1e7ef3b-f841-4b7c-90b2-b7d90ca2d723/acl"}
+
+Configure the Ceph Object Gateway
+=================================
+
+Edit the Ceph configuration file to enable Barbican as a KMS and add information
+about the Barbican server and Keystone user::
+
+ rgw crypt s3 kms backend = barbican
+ rgw barbican url = http://barbican.example.com:9311
+ rgw keystone barbican user = rgwcrypt-user
+ rgw keystone barbican password = rgwcrypt-password
+
+When using Keystone API version 2::
+
+ rgw keystone barbican tenant = rgwcrypt
+
+When using API version 3::
+
+ rgw keystone barbican project = rgwcrypt
+ rgw keystone barbican domain = default
+
+
+.. _Barbican: https://wiki.openstack.org/wiki/Barbican
+.. _Server-Side Encryption: ../encryption
+.. _OpenStack Keystone Integration: ../keystone
+.. _Manage projects, users, and roles: https://docs.openstack.org/admin-guide/cli-manage-projects-users-and-roles.html#create-a-user
+.. _How to Create a Secret: https://developer.openstack.org/api-guide/key-manager/secrets.html#how-to-create-a-secret
+.. _SSE-KMS: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
+.. _How to Set/Replace ACL: https://developer.openstack.org/api-guide/key-manager/acls.html#how-to-set-replace-acl
diff --git a/doc/radosgw/bucketpolicy.rst b/doc/radosgw/bucketpolicy.rst
new file mode 100644
index 000000000..5d6dddc74
--- /dev/null
+++ b/doc/radosgw/bucketpolicy.rst
@@ -0,0 +1,216 @@
+===============
+Bucket Policies
+===============
+
+.. versionadded:: Luminous
+
+The Ceph Object Gateway supports a subset of the Amazon S3 policy
+language applied to buckets.
+
+
+Creation and Removal
+====================
+
+Bucket policies are managed through standard S3 operations rather than
+radosgw-admin.
+
+For example, one may use s3cmd to set or delete a policy thus::
+
+ $ cat > examplepol
+ {
+ "Version": "2012-10-17",
+ "Statement": [{
+ "Effect": "Allow",
+ "Principal": {"AWS": ["arn:aws:iam::usfolks:user/fred:subuser"]},
+ "Action": "s3:PutObjectAcl",
+ "Resource": [
+ "arn:aws:s3:::happybucket/*"
+ ]
+ }]
+ }
+
+ $ s3cmd setpolicy examplepol s3://happybucket
+ $ s3cmd delpolicy s3://happybucket
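+
+The same policy can also be managed programmatically. As a sketch with the
+``boto3`` package (the endpoint and credentials are placeholders)::
+
+    import json
+    import boto3
+
+    s3 = boto3.client('s3', endpoint_url='http://rgw.example.com',
+                      aws_access_key_id='ACCESS_KEY',
+                      aws_secret_access_key='SECRET_KEY')
+
+    policy = {
+        "Version": "2012-10-17",
+        "Statement": [{
+            "Effect": "Allow",
+            "Principal": {"AWS": ["arn:aws:iam::usfolks:user/fred:subuser"]},
+            "Action": "s3:PutObjectAcl",
+            "Resource": ["arn:aws:s3:::happybucket/*"]
+        }]
+    }
+
+    # Set, then delete, the bucket policy.
+    s3.put_bucket_policy(Bucket='happybucket', Policy=json.dumps(policy))
+    s3.delete_bucket_policy(Bucket='happybucket')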
+
+
+Limitations
+===========
+
+Currently, we support only the following actions:
+
+- s3:AbortMultipartUpload
+- s3:CreateBucket
+- s3:DeleteBucketPolicy
+- s3:DeleteBucket
+- s3:DeleteBucketWebsite
+- s3:DeleteObject
+- s3:DeleteObjectVersion
+- s3:DeleteReplicationConfiguration
+- s3:GetAccelerateConfiguration
+- s3:GetBucketAcl
+- s3:GetBucketCORS
+- s3:GetBucketLocation
+- s3:GetBucketLogging
+- s3:GetBucketNotification
+- s3:GetBucketPolicy
+- s3:GetBucketRequestPayment
+- s3:GetBucketTagging
+- s3:GetBucketVersioning
+- s3:GetBucketWebsite
+- s3:GetLifecycleConfiguration
+- s3:GetObjectAcl
+- s3:GetObject
+- s3:GetObjectTorrent
+- s3:GetObjectVersionAcl
+- s3:GetObjectVersion
+- s3:GetObjectVersionTorrent
+- s3:GetReplicationConfiguration
+- s3:IPAddress
+- s3:NotIpAddress
+- s3:ListAllMyBuckets
+- s3:ListBucketMultipartUploads
+- s3:ListBucket
+- s3:ListBucketVersions
+- s3:ListMultipartUploadParts
+- s3:PutAccelerateConfiguration
+- s3:PutBucketAcl
+- s3:PutBucketCORS
+- s3:PutBucketLogging
+- s3:PutBucketNotification
+- s3:PutBucketPolicy
+- s3:PutBucketRequestPayment
+- s3:PutBucketTagging
+- s3:PutBucketVersioning
+- s3:PutBucketWebsite
+- s3:PutLifecycleConfiguration
+- s3:PutObjectAcl
+- s3:PutObject
+- s3:PutObjectVersionAcl
+- s3:PutReplicationConfiguration
+- s3:RestoreObject
+
+We do not yet support setting policies on users, groups, or roles.
+
+We use the RGW ‘tenant’ identifier in place of the Amazon twelve-digit
+account ID. In the future we may allow you to assign an account ID to
+a tenant, but for now if you want to use policies between AWS S3 and
+RGW S3 you will have to use the Amazon account ID as the tenant ID when
+creating users.
+
+Under AWS, all tenants share a single namespace. RGW gives every
+tenant its own namespace of buckets. There may be an option to enable
+an AWS-like 'flat' bucket namespace in future versions. At present, to
+access a bucket belonging to another tenant, address it as
+"tenant:bucket" in the S3 request.
+
+In AWS, a bucket policy can grant access to another account, and that
+account owner can then grant access to individual users with user
+permissions. Since we do not yet support user, role, and group
+permissions, account owners will currently need to grant access
+directly to individual users, and granting an entire account access to
+a bucket grants access to all users in that account.
+
+Bucket policies do not yet support string interpolation.
+
+For all requests, condition keys we support are:
+
+- aws:CurrentTime
+- aws:EpochTime
+- aws:PrincipalType
+- aws:Referer
+- aws:SecureTransport
+- aws:SourceIp
+- aws:UserAgent
+- aws:username
+
+We support certain S3 condition keys for bucket and object requests.
+
+.. versionadded:: Mimic
+
+Bucket Related Operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
++-----------------------+----------------------+----------------+
+| Permission | Condition Keys | Comments |
++-----------------------+----------------------+----------------+
+| | s3:x-amz-acl | |
+| | s3:x-amz-grant-<perm>| |
+|s3:CreateBucket        | where perm is one of |                |
+| | read/write/read-acp | |
+| | write-acp/ | |
+| | full-control | |
++-----------------------+----------------------+----------------+
+| | s3:prefix | |
+| +----------------------+----------------+
+| s3:ListBucket & | s3:delimiter | |
+| +----------------------+----------------+
+| s3:ListBucketVersions | s3:max-keys | |
++-----------------------+----------------------+----------------+
+| s3:PutBucketAcl | s3:x-amz-acl | |
+| | s3:x-amz-grant-<perm>| |
++-----------------------+----------------------+----------------+
+
+.. _tag_policy:
+
+Object Related Operations
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
++-----------------------------+-----------------------------------------------+-------------------+
+|Permission |Condition Keys | Comments |
+| | | |
++-----------------------------+-----------------------------------------------+-------------------+
+| |s3:x-amz-acl & s3:x-amz-grant-<perm> | |
+| | | |
+| +-----------------------------------------------+-------------------+
+| |s3:x-amz-copy-source | |
+| | | |
+| +-----------------------------------------------+-------------------+
+| |s3:x-amz-server-side-encryption | |
+| | | |
+| +-----------------------------------------------+-------------------+
+|s3:PutObject |s3:x-amz-server-side-encryption-aws-kms-key-id | |
+| | | |
+| +-----------------------------------------------+-------------------+
+| |s3:x-amz-metadata-directive |PUT & COPY to |
+| | |overwrite/preserve |
+| | |metadata in COPY |
+| | |requests |
+| +-----------------------------------------------+-------------------+
+| |s3:RequestObjectTag/<tag-key> | |
+| | | |
++-----------------------------+-----------------------------------------------+-------------------+
+|s3:PutObjectAcl              |s3:x-amz-acl & s3:x-amz-grant-<perm>           |                   |
+|s3:PutObjectVersionAcl | | |
+| +-----------------------------------------------+-------------------+
+| |s3:ExistingObjectTag/<tag-key> | |
+| | | |
++-----------------------------+-----------------------------------------------+-------------------+
+| |s3:RequestObjectTag/<tag-key> | |
+|s3:PutObjectTagging & +-----------------------------------------------+-------------------+
+|s3:PutObjectVersionTagging |s3:ExistingObjectTag/<tag-key> | |
+| | | |
++-----------------------------+-----------------------------------------------+-------------------+
+|s3:GetObject & |s3:ExistingObjectTag/<tag-key> | |
+|s3:GetObjectVersion | | |
++-----------------------------+-----------------------------------------------+-------------------+
+|s3:GetObjectAcl & |s3:ExistingObjectTag/<tag-key> | |
+|s3:GetObjectVersionAcl | | |
++-----------------------------+-----------------------------------------------+-------------------+
+|s3:GetObjectTagging & |s3:ExistingObjectTag/<tag-key> | |
+|s3:GetObjectVersionTagging | | |
++-----------------------------+-----------------------------------------------+-------------------+
+|s3:DeleteObjectTagging &     |s3:ExistingObjectTag/<tag-key>                 |                   |
+|s3:DeleteObjectVersionTagging| | |
++-----------------------------+-----------------------------------------------+-------------------+
+
+
+More may be supported soon as we integrate with the recently rewritten
+Authentication/Authorization subsystem.
+
+Swift
+=====
+
+There is no way to set bucket policies under Swift, but bucket
+policies that have been set govern Swift as well as S3 operations.
+
+Swift credentials are matched against Principals specified in a policy
+in a way specific to whatever backend is being used.
diff --git a/doc/radosgw/cloud-sync-module.rst b/doc/radosgw/cloud-sync-module.rst
new file mode 100644
index 000000000..bea7f441b
--- /dev/null
+++ b/doc/radosgw/cloud-sync-module.rst
@@ -0,0 +1,244 @@
+=========================
+Cloud Sync Module
+=========================
+
+.. versionadded:: Mimic
+
+This module syncs zone data to a remote cloud service. The sync is unidirectional; data is not synced back from the
+remote zone. The goal of this module is to enable syncing data to multiple cloud providers. The currently supported
+cloud providers are those that are compatible with AWS (S3).
+
+User credentials for the remote cloud object store service need to be configured. Since many cloud services impose limits
+on the number of buckets that each user can create, the mapping of source objects and buckets is configurable.
+It is possible to configure different targets to different buckets and bucket prefixes. Note that source ACLs will not
+be preserved. It is possible to map permissions of specific source users to specific destination users.
+
+Due to API limitations there is no way to preserve original object modification time and ETag. The cloud sync module
+stores these as metadata attributes on the destination objects.
+
+
+
+Cloud Sync Tier Type Configuration
+-------------------------------------
+
+Trivial Configuration:
+~~~~~~~~~~~~~~~~~~~~~~
+
+::
+
+ {
+ "connection": {
+ "access_key": <access>,
+ "secret": <secret>,
+ "endpoint": <endpoint>,
+ "host_style": <path | virtual>,
+ },
+ "acls": [ { "type": <id | email | uri>,
+ "source_id": <source_id>,
+ "dest_id": <dest_id> } ... ],
+ "target_path": <target_path>,
+ }
+
+
+Non Trivial Configuration:
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+::
+
+ {
+ "default": {
+ "connection": {
+ "access_key": <access>,
+ "secret": <secret>,
+ "endpoint": <endpoint>,
+ "host_style" <path | virtual>,
+ },
+ "acls": [
+ {
+ "type" : <id | email | uri>, # optional, default is id
+ "source_id": <id>,
+ "dest_id": <id>
+ } ... ],
+ "target_path": <path> # optional
+ },
+ "connections": [
+ {
+ "connection_id": <id>,
+ "access_key": <access>,
+ "secret": <secret>,
+ "endpoint": <endpoint>,
+ "host_style" <path | virtual>, # optional
+ } ... ],
+ "acl_profiles": [
+ {
+ "acls_id": <id>, # acl mappings
+ "acls": [ {
+ "type": <id | email | uri>,
+ "source_id": <id>,
+ "dest_id": <id>
+ } ... ]
+ }
+ ],
+ "profiles": [
+ {
+ "source_bucket": <source>,
+ "connection_id": <connection_id>,
+ "acls_id": <mappings_id>,
+ "target_path": <dest>, # optional
+ } ... ],
+ }
+
+
+.. note:: A trivial configuration can coexist with the non-trivial one.
+
+
+* ``connection`` (container)
+
+Represents a connection to the remote cloud service. Contains ``connection_id``,
+``access_key``, ``secret``, ``endpoint``, and ``host_style``.
+
+* ``access_key`` (string)
+
+The remote cloud access key that will be used for a specific connection.
+
+* ``secret`` (string)
+
+The secret key for the remote cloud service.
+
+* ``endpoint`` (string)
+
+URL of remote cloud service endpoint.
+
+* ``host_style`` (path | virtual)
+
+Type of host style to be used when accessing remote cloud endpoint (default: ``path``).
+
+* ``acls`` (array)
+
+Contains a list of ``acl_mappings``.
+
+* ``acl_mapping`` (container)
+
+Each ``acl_mapping`` structure contains ``type``, ``source_id``, and
+``dest_id``. These define the ACL mutation to be performed on each object.
+An ACL mutation makes it possible to convert a source user ID to a
+destination ID.
+
+* ``type`` (id | email | uri)
+
+ACL type: ``id`` defines user id, ``email`` defines user by email, and ``uri`` defines user by ``uri`` (group).
+
+* ``source_id`` (string)
+
+ID of user in the source zone.
+
+* ``dest_id`` (string)
+
+ID of user in the destination.
+
+* ``target_path`` (string)
+
+A string that defines how the target path is created. The target path specifies a prefix to which
+the source object name is appended. The target path configurable can include any of the following
+variables:
+
+- ``sid``: unique string that represents the sync instance ID
+- ``zonegroup``: the zonegroup name
+- ``zonegroup_id``: the zonegroup ID
+- ``zone``: the zone name
+- ``zone_id``: the zone id
+- ``bucket``: source bucket name
+- ``owner``: source bucket owner ID
+
+For example: ``target_path = rgwx-${zone}-${sid}/${owner}/${bucket}``
+
+
+* ``acl_profiles`` (array)
+
+An array of ``acl_profile``.
+
+* ``acl_profile`` (container)
+
+Each profile contains ``acls_id`` (string) that represents the profile, and ``acls`` array that
+holds a list of ``acl_mappings``.
+
+* ``profiles`` (array)
+
+A list of profiles. Each profile contains the following:
+
+- ``source_bucket``: either a bucket name, or a bucket prefix (if it ends with ``*``) that defines the source bucket(s) for this profile
+- ``target_path``: as defined above
+- ``connection_id``: ID of the connection that will be used for this profile
+- ``acls_id``: ID of ACLs profile that will be used for this profile
+
+
+S3 Specific Configurables:
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Currently cloud sync will only work with backends that are compatible with AWS S3. There are
+a few configurables that can be used to tweak its behavior when accessing these cloud services:
+
+::
+
+ {
+ "multipart_sync_threshold": {object_size},
+ "multipart_min_part_size": {part_size}
+ }
+
+
+* ``multipart_sync_threshold`` (integer)
+
+Objects this size or larger will be synced to the cloud using multipart upload.
+
+* ``multipart_min_part_size`` (integer)
+
+Minimum parts size to use when syncing objects using multipart upload.
+
+
+How to Configure
+~~~~~~~~~~~~~~~~
+
+See :ref:`multisite` for multisite configuration instructions. The cloud sync
+module requires the creation of a new zone. The zone tier type needs to be
+defined as ``cloud``:
+
+::
+
+ # radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
+ --rgw-zone={zone-name} \
+ --endpoints={http://fqdn}[,{http://fqdn}] \
+ --tier-type=cloud
+
+
+The tier configuration can then be done using the following command
+
+::
+
+ # radosgw-admin zone modify --rgw-zonegroup={zone-group-name} \
+ --rgw-zone={zone-name} \
+ --tier-config={key}={val}[,{key}={val}]
+
+The ``key`` in the configuration specifies the config variable that needs to be updated, and
+the ``val`` specifies its new value. Nested values can be accessed using a period. For example:
+
+::
+
+ # radosgw-admin zone modify --rgw-zonegroup={zone-group-name} \
+ --rgw-zone={zone-name} \
+ --tier-config=connection.access_key={key},connection.secret={secret}
+
+
+Configuration array entries can be accessed by enclosing the specific entry to
+be referenced in square brackets, and a new array entry can be added by using
+``[]``. An index value of ``-1`` references the last entry in the array. At the
+moment it is not possible to create a new entry and reference it again in the
+same command.
+For example, creating a new profile for buckets starting with ``{prefix}``:
+
+::
+
+ # radosgw-admin zone modify --rgw-zonegroup={zone-group-name} \
+ --rgw-zone={zone-name} \
+ --tier-config=profiles[].source_bucket={prefix}'*'
+
+ # radosgw-admin zone modify --rgw-zonegroup={zone-group-name} \
+ --rgw-zone={zone-name} \
+ --tier-config=profiles[-1].connection_id={conn_id},profiles[-1].acls_id={acls_id}
+
+
+An entry can be removed by using ``--tier-config-rm={key}``.
diff --git a/doc/radosgw/compression.rst b/doc/radosgw/compression.rst
new file mode 100644
index 000000000..23655f1dc
--- /dev/null
+++ b/doc/radosgw/compression.rst
@@ -0,0 +1,87 @@
+===========
+Compression
+===========
+
+.. versionadded:: Kraken
+
+The Ceph Object Gateway supports server-side compression of uploaded objects,
+using any of Ceph's existing compression plugins.
+
+
+Configuration
+=============
+
+Compression can be enabled on a storage class in the Zone's placement target
+by providing the ``--compression=<type>`` option to the command
+``radosgw-admin zone placement modify``.
+
+The compression ``type`` refers to the name of the compression plugin to use
+when writing new object data. Each compressed object remembers which plugin
+was used, so changing this setting does not hinder the ability to decompress
+existing objects, nor does it force existing objects to be recompressed.
+
+This compression setting applies to all new objects uploaded to buckets using
+this placement target. Compression can be disabled by setting the ``type`` to
+an empty string or ``none``.
+
+For example::
+
+ $ radosgw-admin zone placement modify \
+ --rgw-zone default \
+ --placement-id default-placement \
+ --storage-class STANDARD \
+ --compression zlib
+ {
+ ...
+ "placement_pools": [
+ {
+ "key": "default-placement",
+ "val": {
+ "index_pool": "default.rgw.buckets.index",
+ "storage_classes": {
+ "STANDARD": {
+ "data_pool": "default.rgw.buckets.data",
+ "compression_type": "zlib"
+ }
+ },
+ "data_extra_pool": "default.rgw.buckets.non-ec",
+ "index_type": 0,
+ }
+ }
+ ],
+ ...
+ }
+
+.. note:: A ``default`` zone is created for you if you have not done any
+ previous `Multisite Configuration`_.
+
+
+Statistics
+==========
+
+While all existing commands and APIs continue to report object and bucket
+sizes based on their uncompressed data, compression statistics for a given
+bucket are included in its ``bucket stats``::
+
+ $ radosgw-admin bucket stats --bucket=<name>
+ {
+ ...
+ "usage": {
+ "rgw.main": {
+ "size": 1075028,
+ "size_actual": 1331200,
+ "size_utilized": 592035,
+ "size_kb": 1050,
+ "size_kb_actual": 1300,
+ "size_kb_utilized": 579,
+ "num_objects": 104
+ }
+ },
+ ...
+ }
+
+The ``size_utilized`` and ``size_kb_utilized`` fields represent the total
+size of compressed data, in bytes and kilobytes respectively.
+
+
+.. _`Multisite Configuration`: ../multisite
diff --git a/doc/radosgw/config-ref.rst b/doc/radosgw/config-ref.rst
new file mode 100644
index 000000000..1ed6085a6
--- /dev/null
+++ b/doc/radosgw/config-ref.rst
@@ -0,0 +1,1161 @@
+======================================
+ Ceph Object Gateway Config Reference
+======================================
+
+The following settings may be added to the Ceph configuration file (i.e.,
+usually ``ceph.conf``) under the ``[client.radosgw.{instance-name}]`` section.
+Each setting has a default value; if a setting is not specified in the Ceph
+configuration file, its default value is used.
+
+Configuration variables set under the ``[client.radosgw.{instance-name}]``
+section will not apply to rgw or radosgw-admin commands without an instance-name
+specified in the command. Thus, variables meant to apply to all RGW instances
+or to all radosgw-admin invocations can be put into the ``[global]`` or the
+``[client]`` section to avoid specifying ``instance-name``.
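+
+For example, a minimal gateway section using settings described below might
+look like this (the instance name and values are illustrative)::
+
+    [client.radosgw.gateway-1]
+    rgw_frontends = beast port=8080
+    rgw_dns_name = rgw.example.com
+    rgw_thread_pool_size = 200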
+
+``rgw_frontends``
+
+:Description: Configures the HTTP frontend(s). The configuration for multiple
+ frontends can be provided in a comma-delimited list. Each frontend
+ configuration may include a list of options separated by spaces,
+ where each option is in the form "key=value" or "key". See
+ `HTTP Frontends`_ for more on supported options.
+
+:Type: String
+:Default: ``beast port=7480``
+
+``rgw_data``
+
+:Description: Sets the location of the data files for Ceph RADOS Gateway.
+:Type: String
+:Default: ``/var/lib/ceph/radosgw/$cluster-$id``
+
+
+``rgw_enable_apis``
+
+:Description: Enables the specified APIs.
+
+ .. note:: Enabling the ``s3`` API is a requirement for
+ any ``radosgw`` instance that is meant to
+ participate in a `multi-site <../multisite>`_
+ configuration.
+:Type: String
+:Default: ``s3, s3website, swift, swift_auth, admin, sts, iam, notifications`` (all APIs)
+
+
+``rgw_cache_enabled``
+
+:Description: Whether the Ceph Object Gateway cache is enabled.
+:Type: Boolean
+:Default: ``true``
+
+
+``rgw_cache_lru_size``
+
+:Description: The number of entries in the Ceph Object Gateway cache.
+:Type: Integer
+:Default: ``10000``
+
+
+``rgw_socket_path``
+
+:Description: The socket path for the domain socket. ``FastCgiExternalServer``
+ uses this socket. If you do not specify a socket path, Ceph
+ Object Gateway will not run as an external server. The path you
+ specify here must be the same as the path specified in the
+ ``rgw.conf`` file.
+
+:Type: String
+:Default: N/A
+
+``rgw_fcgi_socket_backlog``
+
+:Description: The socket backlog for fcgi.
+:Type: Integer
+:Default: ``1024``
+
+``rgw_host``
+
+:Description: The host for the Ceph Object Gateway instance. Can be an IP
+ address or a hostname.
+
+:Type: String
+:Default: ``0.0.0.0``
+
+
+``rgw_port``
+
+:Description: The port on which the instance listens for requests. If not
+              specified, Ceph Object Gateway runs external FastCGI.
+
+:Type: String
+:Default: None
+
+
+``rgw_dns_name``
+
+:Description: The DNS name of the served domain. See also the ``hostnames`` setting within regions.
+:Type: String
+:Default: None
+
+
+``rgw_script_uri``
+
+:Description: The alternative value for the ``SCRIPT_URI`` if not set
+ in the request.
+
+:Type: String
+:Default: None
+
+
+``rgw_request_uri``
+
+:Description: The alternative value for the ``REQUEST_URI`` if not set
+ in the request.
+
+:Type: String
+:Default: None
+
+
+``rgw_print_continue``
+
+:Description: Enable ``100-continue`` if it is operational.
+:Type: Boolean
+:Default: ``true``
+
+
+``rgw_remote_addr_param``
+
+:Description: The remote address parameter. For example, the HTTP field
+ containing the remote address, or the ``X-Forwarded-For``
+ address if a reverse proxy is operational.
+
+:Type: String
+:Default: ``REMOTE_ADDR``
+
+
+``rgw_op_thread_timeout``
+
+:Description: The timeout in seconds for open threads.
+:Type: Integer
+:Default: ``600``
+
+
+``rgw_op_thread_suicide_timeout``
+
+:Description: The timeout in seconds before a Ceph Object Gateway
+              process dies. Disabled if set to ``0``.
+
+:Type: Integer
+:Default: ``0``
+
+
+``rgw_thread_pool_size``
+
+:Description: The size of the thread pool.
+:Type: Integer
+:Default: 100 threads.
+
+
+``rgw_num_control_oids``
+
+:Description: The number of notification objects used for cache synchronization
+ between different ``rgw`` instances.
+
+:Type: Integer
+:Default: ``8``
+
+
+``rgw_init_timeout``
+
+:Description: The number of seconds before Ceph Object Gateway gives up on
+ initialization.
+
+:Type: Integer
+:Default: ``30``
+
+
+``rgw_mime_types_file``
+
+:Description: The path and location of the MIME-types file. Used for Swift
+ auto-detection of object types.
+
+:Type: String
+:Default: ``/etc/mime.types``
+
+
+``rgw_s3_success_create_obj_status``
+
+:Description: The alternate success status response for ``create-obj``.
+:Type: Integer
+:Default: ``0``
+
+
+``rgw_resolve_cname``
+
+:Description: Whether ``rgw`` should use the DNS CNAME record of the request
+              hostname field (if hostname is not equal to ``rgw dns name``).
+
+:Type: Boolean
+:Default: ``false``
+
+
+``rgw_obj_stripe_size``
+
+:Description: The size of an object stripe for Ceph Object Gateway objects.
+ See `Architecture`_ for details on striping.
+
+:Type: Integer
+:Default: ``4 << 20``
+
+
+``rgw_extended_http_attrs``
+
+:Description: Add a new set of attributes that can be set on an entity
+              (user, bucket or object). These extra attributes can be set
+              through HTTP header fields when putting the entity or when
+              modifying it using the POST method. If set, these attributes
+              will be returned as HTTP fields when doing GET/HEAD on the
+              entity.
+
+:Type: String
+:Default: None
+:Example: "content_foo, content_bar, x-foo-bar"
+
+
+``rgw_exit_timeout_secs``
+
+:Description: Number of seconds to wait for a process before exiting
+ unconditionally.
+
+:Type: Integer
+:Default: ``120``
+
+
+``rgw_get_obj_window_size``
+
+:Description: The window size in bytes for a single object request.
+:Type: Integer
+:Default: ``16 << 20``
+
+
+``rgw_get_obj_max_req_size``
+
+:Description: The maximum request size of a single get operation sent to the
+ Ceph Storage Cluster.
+
+:Type: Integer
+:Default: ``4 << 20``
+
+
+``rgw_relaxed_s3_bucket_names``
+
+:Description: Enables relaxed S3 bucket names rules for US region buckets.
+:Type: Boolean
+:Default: ``false``
+
+
+``rgw_list_buckets_max_chunk``
+
+:Description: The maximum number of buckets to retrieve in a single operation
+ when listing user buckets.
+
+:Type: Integer
+:Default: ``1000``
+
+
+``rgw_override_bucket_index_max_shards``
+
+:Description: The number of shards for the bucket index object; a value of
+              zero indicates that there is no sharding. Setting this value
+              too large (e.g. in the thousands) is not recommended because
+              it increases the cost of bucket listing.
+              This variable should be set in the client or global sections
+              so that it is automatically applied to radosgw-admin commands.
+
+:Type: Integer
+:Default: ``0``
+
+
+``rgw_curl_wait_timeout_ms``
+
+:Description: The timeout in milliseconds for certain ``curl`` calls.
+:Type: Integer
+:Default: ``1000``
+
+
+``rgw_copy_obj_progress``
+
+:Description: Enables output of object progress during long copy operations.
+:Type: Boolean
+:Default: ``true``
+
+
+``rgw_copy_obj_progress_every_bytes``
+
+:Description: The minimum bytes between copy progress output.
+:Type: Integer
+:Default: ``1024 * 1024``
+
+
+``rgw_admin_entry``
+
+:Description: The entry point for an admin request URL.
+:Type: String
+:Default: ``admin``
+
+
+``rgw_content_length_compat``
+
+:Description: Enable compatibility handling of FCGI requests with both ``CONTENT_LENGTH`` and ``HTTP_CONTENT_LENGTH`` set.
+:Type: Boolean
+:Default: ``false``
+
+
+``rgw_bucket_quota_ttl``
+
+:Description: The amount of time in seconds cached quota information is
+ trusted. After this timeout, the quota information will be
+ re-fetched from the cluster.
+:Type: Integer
+:Default: ``600``
+
+
+``rgw_user_quota_bucket_sync_interval``
+
+:Description: The amount of time in seconds bucket quota information is
+ accumulated before syncing to the cluster. During this time,
+ other RGW instances will not see the changes in bucket quota
+ stats from operations on this instance.
+:Type: Integer
+:Default: ``180``
+
+
+``rgw_user_quota_sync_interval``
+
+:Description: The amount of time in seconds user quota information is
+ accumulated before syncing to the cluster. During this time,
+ other RGW instances will not see the changes in user quota stats
+ from operations on this instance.
+:Type: Integer
+:Default: ``180``
+
+
+``rgw_bucket_default_quota_max_objects``
+
+:Description: Default max number of objects per bucket. Set on new users,
+ if no other quota is specified. Has no effect on existing users.
+ This variable should be set in the client or global sections
+ so that it is automatically applied to radosgw-admin commands.
+:Type: Integer
+:Default: ``-1``
+
+
+``rgw_bucket_default_quota_max_size``
+
+:Description: Default max capacity per bucket, in bytes. Set on new users,
+ if no other quota is specified. Has no effect on existing users.
+:Type: Integer
+:Default: ``-1``
+
+
+``rgw_user_default_quota_max_objects``
+
+:Description: Default max number of objects for a user. This includes all
+ objects in all buckets owned by the user. Set on new users,
+ if no other quota is specified. Has no effect on existing users.
+:Type: Integer
+:Default: ``-1``
+
+
+``rgw_user_default_quota_max_size``
+
+:Description: The value for user max size quota in bytes set on new users,
+ if no other quota is specified. Has no effect on existing users.
+:Type: Integer
+:Default: ``-1``
+
+
+``rgw_verify_ssl``
+
+:Description: Verify SSL certificates while making requests.
+:Type: Boolean
+:Default: ``true``
+
+Lifecycle Settings
+==================
+
+Bucket lifecycle configuration can be used to manage your objects so that they
+are stored effectively throughout their lifetime. In past releases, lifecycle
+processing was rate-limited by single-threaded processing. With the Nautilus
+release this has been addressed: the Ceph Object Gateway now allows for
+parallel thread processing of bucket lifecycles across additional Ceph Object
+Gateway instances, and replaces the in-order index shard enumeration with a
+randomly ordered sequence.
+
+There are two options in particular to consider when looking to increase the
+aggressiveness of lifecycle processing:
+
+``rgw_lc_max_worker``
+
+:Description: This option specifies the number of lifecycle worker threads
+ to run in parallel, thereby processing bucket and index
+ shards simultaneously.
+
+:Type: Integer
+:Default: ``3``
+
+``rgw_lc_max_wp_worker``
+
+:Description: This option specifies the number of threads in each lifecycle
+              worker's work pool. This option can help accelerate processing
+              of each bucket.
+
+:Type: Integer
+:Default: ``3``
+
+These values can be tuned based upon your specific workload to further increase
+the aggressiveness of lifecycle processing. For a workload with a larger number
+of buckets (thousands), you would look at increasing the ``rgw_lc_max_worker``
+value from the default value of 3, whereas for a workload with a smaller number
+of buckets but a higher number of objects (hundreds of thousands) per bucket,
+you would consider increasing ``rgw_lc_max_wp_worker`` from the default value
+of 3.
+
+:NOTE: When looking to tune either of these specific values, please validate
+       the current cluster performance and Ceph Object Gateway utilization
+       before increasing them.
+
+Garbage Collection Settings
+===========================
+
+The Ceph Object Gateway allocates storage for new objects immediately.
+
+The Ceph Object Gateway purges the storage space used for deleted and overwritten
+objects in the Ceph Storage cluster some time after the gateway deletes the
+objects from the bucket index. The process of purging the deleted object data
+from the Ceph Storage cluster is known as Garbage Collection or GC.
+
+To view the queue of objects awaiting garbage collection, execute the following::
+
+ $ radosgw-admin gc list
+
+.. note:: Specify ``--include-all`` to list all entries, including unexpired
+   ones.
+
+Garbage collection is a background activity that may
+execute continuously or during times of low loads, depending upon how the
+administrator configures the Ceph Object Gateway. By default, the Ceph Object
+Gateway conducts GC operations continuously. Since GC operations are a normal
+part of Ceph Object Gateway operations, especially with object delete
+operations, objects eligible for garbage collection exist most of the time.
+
+Some workloads may temporarily or permanently outpace the rate of garbage
+collection activity. This is especially true of delete-heavy workloads, where
+many objects get stored for a short period of time and then deleted. For these
+types of workloads, administrators can increase the priority of garbage
+collection operations relative to other operations with the following
+configuration parameters.
+
+
+``rgw_gc_max_objs``
+
+:Description: The maximum number of objects that may be handled by
+ garbage collection in one garbage collection processing cycle.
+ Please do not change this value after the first deployment.
+
+:Type: Integer
+:Default: ``32``
+
+
+``rgw_gc_obj_min_wait``
+
+:Description: The minimum wait time before a deleted object may be removed
+ and handled by garbage collection processing.
+
+:Type: Integer
+:Default: ``2 * 3600``
+
+
+``rgw_gc_processor_max_time``
+
+:Description: The maximum time between the beginning of two consecutive garbage
+ collection processing cycles.
+
+:Type: Integer
+:Default: ``3600``
+
+
+``rgw_gc_processor_period``
+
+:Description: The cycle time for garbage collection processing.
+:Type: Integer
+:Default: ``3600``
+
+
+``rgw_gc_max_concurrent_io``
+
+:Description: The maximum number of concurrent IO operations that the RGW garbage
+ collection thread will use when purging old data.
+:Type: Integer
+:Default: ``10``
+
+
+Tuning Garbage Collection for Delete-Heavy Workloads
+----------------------------------------------------
+
+As an initial step towards tuning Ceph garbage collection to be more
+aggressive, the following options are suggested to be increased from their
+default configuration values::
+
+  rgw_gc_max_concurrent_io = 20
+  rgw_gc_max_trim_chunk = 64
+
+.. note:: Modifying these values requires a restart of the RGW service.
+
+Once these values have been increased from their defaults, monitor cluster performance during garbage collection to verify that the increased values do not cause adverse performance issues.
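+
+As a sketch, these can be applied at runtime with ``ceph config set``
+(the ``client.rgw`` target here is a generic example; adjust it to match
+your deployment) and then restarting the RGW service::
+
+  ceph config set client.rgw rgw_gc_max_concurrent_io 20
+  ceph config set client.rgw rgw_gc_max_trim_chunk 64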
+
+Multisite Settings
+==================
+
+.. versionadded:: Jewel
+
+You may include the following settings in your Ceph configuration
+file under each ``[client.radosgw.{instance-name}]`` instance.
+
+
+``rgw_zone``
+
+:Description: The name of the zone for the gateway instance. If no zone is
+ set, a cluster-wide default can be configured with the command
+ ``radosgw-admin zone default``.
+:Type: String
+:Default: None
+
+
+``rgw_zonegroup``
+
+:Description: The name of the zonegroup for the gateway instance. If no
+ zonegroup is set, a cluster-wide default can be configured with
+ the command ``radosgw-admin zonegroup default``.
+:Type: String
+:Default: None
+
+
+``rgw_realm``
+
+:Description: The name of the realm for the gateway instance. If no realm is
+ set, a cluster-wide default can be configured with the command
+ ``radosgw-admin realm default``.
+:Type: String
+:Default: None
+
+
+``rgw_run_sync_thread``
+
+:Description: If there are other zones in the realm to sync from, spawn threads
+ to handle the sync of data and metadata.
+:Type: Boolean
+:Default: ``true``
+
+
+``rgw_data_log_window``
+
+:Description: The data log entries window in seconds.
+:Type: Integer
+:Default: ``30``
+
+
+``rgw_data_log_changes_size``
+
+:Description: The number of in-memory entries to hold for the data changes log.
+:Type: Integer
+:Default: ``1000``
+
+
+``rgw_data_log_obj_prefix``
+
+:Description: The object name prefix for the data log.
+:Type: String
+:Default: ``data_log``
+
+
+``rgw_data_log_num_shards``
+
+:Description: The number of shards (objects) on which to keep the
+ data changes log.
+
+:Type: Integer
+:Default: ``128``
+
+
+``rgw_md_log_max_shards``
+
+:Description: The maximum number of shards for the metadata log.
+:Type: Integer
+:Default: ``64``
+
+.. important:: The values of ``rgw_data_log_num_shards`` and
+ ``rgw_md_log_max_shards`` should not be changed after sync has
+ started.
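+
+For illustration, a gateway instance section might look like the following
+sketch (the realm, zonegroup, and zone names are placeholders)::
+
+  [client.radosgw.gateway]
+  rgw_realm = gold
+  rgw_zonegroup = us
+  rgw_zone = us-east
+  rgw_run_sync_thread = true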
+
+S3 Settings
+===========
+
+``rgw_s3_auth_use_ldap``
+
+:Description: Should S3 authentication use LDAP?
+:Type: Boolean
+:Default: ``false``
+
+
+Swift Settings
+==============
+
+``rgw_enforce_swift_acls``
+
+:Description: Enforces the Swift Access Control List (ACL) settings.
+:Type: Boolean
+:Default: ``true``
+
+
+``rgw_swift_token_expiration``
+
+:Description: The time in seconds for expiring a Swift token.
+:Type: Integer
+:Default: ``24 * 3600``
+
+
+``rgw_swift_url``
+
+:Description: The URL for the Ceph Object Gateway Swift API.
+:Type: String
+:Default: None
+
+
+``rgw_swift_url_prefix``
+
+:Description: The URL prefix for the Swift API, to distinguish it from
+ the S3 API endpoint. The default is ``swift``, which
+ makes the Swift API available at the URL
+ ``http://host:port/swift/v1`` (or
+ ``http://host:port/swift/v1/AUTH_%(tenant_id)s`` if
+ ``rgw swift account in url`` is enabled).
+
+ For compatibility, setting this configuration variable
+ to the empty string causes the default ``swift`` to be
+ used; if you do want an empty prefix, set this option to
+ ``/``.
+
+ .. warning:: If you set this option to ``/``, you must
+ disable the S3 API by modifying ``rgw
+ enable apis`` to exclude ``s3``. It is not
+ possible to operate radosgw with ``rgw
+ swift url prefix = /`` and simultaneously
+ support both the S3 and Swift APIs. If you
+ do need to support both APIs without
+ prefixes, deploy multiple radosgw instances
+ to listen on different hosts (or ports)
+ instead, enabling some for S3 and some for
+ Swift.
+:Default: ``swift``
+:Example: "/swift-testing"
+
+
+``rgw_swift_auth_url``
+
+:Description: Default URL for verifying v1 auth tokens (if not using internal
+ Swift auth).
+
+:Type: String
+:Default: None
+
+
+``rgw_swift_auth_entry``
+
+:Description: The entry point for a Swift auth URL.
+:Type: String
+:Default: ``auth``
+
+
+``rgw_swift_account_in_url``
+
+:Description: Whether or not the Swift account name should be included
+ in the Swift API URL.
+
+ If set to ``false`` (the default), then the Swift API
+ will listen on a URL formed like
+ ``http://host:port/<rgw_swift_url_prefix>/v1``, and the
+ account name (commonly a Keystone project UUID if
+ radosgw is configured with `Keystone integration
+ <../keystone>`_) will be inferred from request
+ headers.
+
+ If set to ``true``, the Swift API URL will be
+ ``http://host:port/<rgw_swift_url_prefix>/v1/AUTH_<account_name>``
+ (or
+ ``http://host:port/<rgw_swift_url_prefix>/v1/AUTH_<keystone_project_id>``)
+ instead, and the Keystone ``object-store`` endpoint must
+ accordingly be configured to include the
+ ``AUTH_%(tenant_id)s`` suffix.
+
+ You **must** set this option to ``true`` (and update the
+ Keystone service catalog) if you want radosgw to support
+ publicly-readable containers and `temporary URLs
+ <../swift/tempurl>`_.
+:Type: Boolean
+:Default: ``false``
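+
+As a sketch of how these options combine (host and port are placeholders),
+the following configuration serves the Swift API at
+``http://host:port/swift/v1/AUTH_<account>``::
+
+  [client.radosgw.gateway]
+  rgw_swift_url_prefix = swift
+  rgw_swift_account_in_url = true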
+
+
+``rgw_swift_versioning_enabled``
+
+:Description: Enables Object Versioning of the OpenStack Object Storage API.
+              This allows clients to put the ``X-Versions-Location`` attribute
+              on containers that should be versioned. The attribute specifies
+              the name of the container storing archived versions. It must be
+              owned by the same user as the versioned container due to access
+              control verification; ACLs are NOT taken into consideration.
+              Those containers cannot be versioned by the S3 object versioning
+              mechanism.
+
+ A slightly different attribute, ``X-History-Location``, which is also understood by
+ `OpenStack Swift <https://docs.openstack.org/swift/latest/api/object_versioning.html>`_
+ for handling ``DELETE`` operations, is currently not supported.
+:Type: Boolean
+:Default: ``false``
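+
+For illustration, a client could enable versioning on a container with a
+request like the following sketch (the token, host, and container names
+are placeholders)::
+
+  curl -X PUT -H "X-Auth-Token: $token" \
+       -H "X-Versions-Location: archive" \
+       "http://host:port/swift/v1/my-container"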
+
+
+``rgw_trust_forwarded_https``
+
+:Description: When a proxy in front of radosgw is used for ssl termination, radosgw
+ does not know whether incoming http connections are secure. Enable
+ this option to trust the ``Forwarded`` and ``X-Forwarded-Proto`` headers
+ sent by the proxy when determining whether the connection is secure.
+ This is required for some features, such as server side encryption.
+ (Never enable this setting if you do not have a trusted proxy in front of
+ radosgw, or else malicious users will be able to set these headers in
+ any request.)
+:Type: Boolean
+:Default: ``false``
+
+
+
+Logging Settings
+================
+
+
+``rgw_log_nonexistent_bucket``
+
+:Description: Enables Ceph Object Gateway to log a request for a non-existent
+ bucket.
+
+:Type: Boolean
+:Default: ``false``
+
+
+``rgw_log_object_name``
+
+:Description: The logging format for an object name. See the manpage
+              :manpage:`date` for details about format specifiers.
+
+:Type: Date
+:Default: ``%Y-%m-%d-%H-%i-%n``
+
+
+``rgw_log_object_name_utc``
+
+:Description: Whether a logged object name includes a UTC time.
+ If ``false``, it uses the local time.
+
+:Type: Boolean
+:Default: ``false``
+
+
+``rgw_usage_max_shards``
+
+:Description: The maximum number of shards for usage logging.
+:Type: Integer
+:Default: ``32``
+
+
+``rgw_usage_max_user_shards``
+
+:Description: The maximum number of shards used for a single user's
+ usage logging.
+
+:Type: Integer
+:Default: ``1``
+
+
+``rgw_enable_ops_log``
+
+:Description: Enable logging for each successful Ceph Object Gateway operation.
+:Type: Boolean
+:Default: ``false``
+
+
+``rgw_enable_usage_log``
+
+:Description: Enable the usage log.
+:Type: Boolean
+:Default: ``false``
+
+
+``rgw_ops_log_rados``
+
+:Description: Whether the operations log should be written to the
+ Ceph Storage Cluster backend.
+
+:Type: Boolean
+:Default: ``true``
+
+
+``rgw_ops_log_socket_path``
+
+:Description: The Unix domain socket for writing operations logs.
+:Type: String
+:Default: None
+
+
+``rgw_ops_log_file_path``
+
+:Description: The file for writing operations logs.
+:Type: String
+:Default: None
+
+
+``rgw_ops_log_data_backlog``
+
+:Description: The maximum data backlog data size for operations logs written
+ to a Unix domain socket.
+
+:Type: Integer
+:Default: ``5 << 20``
+
+
+``rgw_usage_log_flush_threshold``
+
+:Description: The number of dirty merged entries in the usage log before
+ flushing synchronously.
+
+:Type: Integer
+:Default: ``1024``
+
+
+``rgw_usage_log_tick_interval``
+
+:Description: Flush pending usage log data every ``n`` seconds.
+:Type: Integer
+:Default: ``30``
+
+
+``rgw_log_http_headers``
+
+:Description: Comma-delimited list of HTTP headers to include with ops
+ log entries. Header names are case insensitive, and use
+ the full header name with words separated by underscores.
+
+:Type: String
+:Default: None
+:Example: "http_x_forwarded_for, http_x_special_k"
+
+
+``rgw_intent_log_object_name``
+
+:Description: The logging format for the intent log object name. See the manpage
+ :manpage:`date` for details about format specifiers.
+
+:Type: Date
+:Default: ``%Y-%m-%d-%i-%n``
+
+
+``rgw_intent_log_object_name_utc``
+
+:Description: Whether the intent log object name uses UTC time.
+ If ``false``, it uses the local time.
+
+:Type: Boolean
+:Default: ``false``
+
+
+
+Keystone Settings
+=================
+
+
+``rgw_keystone_url``
+
+:Description: The URL for the Keystone server.
+:Type: String
+:Default: None
+
+
+``rgw_keystone_api_version``
+
+:Description: The version (2 or 3) of OpenStack Identity API that should be
+ used for communication with the Keystone server.
+:Type: Integer
+:Default: ``2``
+
+
+``rgw_keystone_admin_domain``
+
+:Description: The name of OpenStack domain with admin privilege when using
+ OpenStack Identity API v3.
+:Type: String
+:Default: None
+
+
+``rgw_keystone_admin_project``
+
+:Description: The name of OpenStack project with admin privilege when using
+ OpenStack Identity API v3. If left unspecified, value of
+ ``rgw keystone admin tenant`` will be used instead.
+:Type: String
+:Default: None
+
+
+``rgw_keystone_admin_token``
+
+:Description: The Keystone admin token (shared secret). In Ceph RGW
+ authentication with the admin token has priority over
+ authentication with the admin credentials
+ (``rgw_keystone_admin_user``, ``rgw_keystone_admin_password``,
+ ``rgw_keystone_admin_tenant``, ``rgw_keystone_admin_project``,
+ ``rgw_keystone_admin_domain``). The Keystone admin token
+ has been deprecated, but can be used to integrate with
+ older environments. It is preferred to instead configure
+ ``rgw_keystone_admin_token_path`` to avoid exposing the token.
+:Type: String
+:Default: None
+
+``rgw_keystone_admin_token_path``
+
+:Description: Path to a file containing the Keystone admin token
+ (shared secret). In Ceph RadosGW authentication with
+ the admin token has priority over authentication with
+ the admin credentials
+ (``rgw_keystone_admin_user``, ``rgw_keystone_admin_password``,
+ ``rgw_keystone_admin_tenant``, ``rgw_keystone_admin_project``,
+ ``rgw_keystone_admin_domain``).
+ The Keystone admin token has been deprecated, but can be
+ used to integrate with older environments.
+:Type: String
+:Default: None
+
+``rgw_keystone_admin_tenant``
+
+:Description: The name of OpenStack tenant with admin privilege (Service Tenant) when
+ using OpenStack Identity API v2
+:Type: String
+:Default: None
+
+
+``rgw_keystone_admin_user``
+
+:Description: The name of OpenStack user with admin privilege for Keystone
+ authentication (Service User) when using OpenStack Identity API v2
+:Type: String
+:Default: None
+
+
+``rgw_keystone_admin_password``
+
+:Description: The password for OpenStack admin user when using OpenStack
+ Identity API v2. It is preferred to instead configure
+ ``rgw_keystone_admin_password_path`` to avoid exposing the token.
+:Type: String
+:Default: None
+
+``rgw_keystone_admin_password_path``
+
+:Description: Path to a file containing the password for OpenStack
+ admin user when using OpenStack Identity API v2.
+:Type: String
+:Default: None
+
+
+``rgw_keystone_accepted_roles``
+
+:Description: The roles required to serve requests.
+:Type: String
+:Default: ``Member, admin``
+
+
+``rgw_keystone_token_cache_size``
+
+:Description: The maximum number of entries in each Keystone token cache.
+:Type: Integer
+:Default: ``10000``
+
+
+``rgw_keystone_revocation_interval``
+
+:Description: The number of seconds between token revocation checks.
+:Type: Integer
+:Default: ``15 * 60``
+
+
+``rgw_keystone_verify_ssl``
+
+:Description: Verify SSL certificates while making token requests to keystone.
+:Type: Boolean
+:Default: ``true``
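+
+Tying these options together, a Keystone v3 configuration might look like
+the following sketch (the URL, names, and path are placeholders; see
+`Keystone integration <../keystone>`_ for details)::
+
+  [client.radosgw.gateway]
+  rgw_keystone_url = http://keystone.example.com:5000
+  rgw_keystone_api_version = 3
+  rgw_keystone_admin_domain = Default
+  rgw_keystone_admin_project = service
+  rgw_keystone_admin_user = swift
+  rgw_keystone_admin_password_path = /etc/ceph/keystone-password
+  rgw_keystone_accepted_roles = Member, admin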
+
+
+Server-side encryption Settings
+===============================
+
+``rgw_crypt_s3_kms_backend``
+
+:Description: Where the SSE-KMS encryption keys are stored. Supported KMS
+ systems are OpenStack Barbican (``barbican``, the default) and
+ HashiCorp Vault (``vault``).
+:Type: String
+:Default: ``barbican``
+
+
+Barbican Settings
+=================
+
+``rgw_barbican_url``
+
+:Description: The URL for the Barbican server.
+:Type: String
+:Default: None
+
+``rgw_keystone_barbican_user``
+
+:Description: The name of the OpenStack user with access to the `Barbican`_
+ secrets used for `Encryption`_.
+:Type: String
+:Default: None
+
+``rgw_keystone_barbican_password``
+
+:Description: The password associated with the `Barbican`_ user.
+:Type: String
+:Default: None
+
+``rgw_keystone_barbican_tenant``
+
+:Description: The name of the OpenStack tenant associated with the `Barbican`_
+ user when using OpenStack Identity API v2.
+:Type: String
+:Default: None
+
+``rgw_keystone_barbican_project``
+
+:Description: The name of the OpenStack project associated with the `Barbican`_
+ user when using OpenStack Identity API v3.
+:Type: String
+:Default: None
+
+``rgw_keystone_barbican_domain``
+
+:Description: The name of the OpenStack domain associated with the `Barbican`_
+ user when using OpenStack Identity API v3.
+:Type: String
+:Default: None
+
+
+HashiCorp Vault Settings
+========================
+
+``rgw_crypt_vault_auth``
+
+:Description: Type of authentication method to be used. The only method
+ currently supported is ``token``.
+:Type: String
+:Default: ``token``
+
+``rgw_crypt_vault_token_file``
+
+:Description: If the authentication method is ``token``, provide a path to the
+              token file, which should be readable only by the Rados Gateway.
+:Type: String
+:Default: None
+
+``rgw_crypt_vault_addr``
+
+:Description: Vault server base address, e.g. ``http://vaultserver:8200``.
+:Type: String
+:Default: None
+
+``rgw_crypt_vault_prefix``
+
+:Description: The Vault secret URL prefix, which can be used to restrict access
+ to a particular subset of the secret space, e.g. ``/v1/secret/data``.
+:Type: String
+:Default: None
+
+``rgw_crypt_vault_secret_engine``
+
+:Description: The Vault secret engine used to retrieve encryption keys: choose
+              between ``kv-v2`` and ``transit``.
+:Type: String
+:Default: None
+
+``rgw_crypt_vault_namespace``
+
+:Description: If set, the Vault namespace provides tenant isolation for teams
+              and individuals on the same Vault Enterprise instance, e.g.
+              ``acme/tenant1``.
+:Type: String
+:Default: None
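+
+Putting these together, a token-based Vault configuration might look like
+the following sketch (the server address follows the example above; the
+token file path is a placeholder)::
+
+  [client.radosgw.gateway]
+  rgw_crypt_s3_kms_backend = vault
+  rgw_crypt_vault_auth = token
+  rgw_crypt_vault_token_file = /etc/ceph/vault.token
+  rgw_crypt_vault_addr = http://vaultserver:8200
+  rgw_crypt_vault_secret_engine = transit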
+
+
+QoS settings
+============
+
+.. versionadded:: Nautilus
+
+The ``civetweb`` frontend has a threading model that uses a thread per
+connection, and is therefore automatically throttled by the
+``rgw_thread_pool_size`` configurable when accepting connections. The newer
+``beast`` frontend is not restricted by the thread pool size when accepting
+new connections, so a scheduler abstraction was introduced in the Nautilus
+release to support future methods of scheduling requests.
+
+Currently the scheduler defaults to a throttler which throttles the active
+connections to a configured limit. QoS based on mClock is currently in an
+*experimental* phase and not yet recommended for production. The current
+implementation of the *dmclock_client* op queue divides RGW ops into admin,
+auth (swift auth, sts), metadata, and data requests.
+
+
+``rgw_max_concurrent_requests``
+
+:Description: Maximum number of concurrent HTTP requests that the Beast front end
+ will process. Tuning this can help to limit memory usage under
+ heavy load.
+:Type: Integer
+:Default: ``1024``
+
+
+``rgw_scheduler_type``
+
+:Description: The RGW scheduler to use. Valid values are ``throttler`` and
+              ``dmclock``. Currently defaults to ``throttler``, which
+              throttles Beast frontend requests. ``dmclock`` is
+              *experimental* and requires ``dmclock`` to be included in the
+              ``experimental_feature_enabled`` configuration option.
+:Type: String
+:Default: ``throttler``
+
+
+The options below tune the experimental dmclock scheduler. For
+additional reading on dmclock, see :ref:`dmclock-qos`. `op_class` for the flags below is
+one of ``admin``, ``auth``, ``metadata``, or ``data``.
+
+``rgw_dmclock_<op_class>_res``
+
+:Description: The mclock reservation for `op_class` requests
+:Type: float
+:Default: 100.0
+
+``rgw_dmclock_<op_class>_wgt``
+
+:Description: The mclock weight for `op_class` requests
+:Type: float
+:Default: 1.0
+
+``rgw_dmclock_<op_class>_lim``
+
+:Description: The mclock limit for `op_class` requests
+:Type: float
+:Default: 0.0
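+
+As a purely illustrative sketch (dmclock is experimental, as noted above,
+and the values are arbitrary), raising the reservation and weight for data
+requests might look like::
+
+  rgw_scheduler_type = dmclock
+  rgw_dmclock_data_res = 500.0
+  rgw_dmclock_data_wgt = 2.0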
+
+
+
+.. _Architecture: ../../architecture#data-striping
+.. _Pool Configuration: ../../rados/configuration/pool-pg-config-ref/
+.. _Cluster Pools: ../../rados/operations/pools
+.. _Rados cluster handles: ../../rados/api/librados-intro/#step-2-configuring-a-cluster-handle
+.. _Barbican: ../barbican
+.. _Encryption: ../encryption
+.. _HTTP Frontends: ../frontends
diff --git a/doc/radosgw/dynamicresharding.rst b/doc/radosgw/dynamicresharding.rst
new file mode 100644
index 000000000..a7698e7bb
--- /dev/null
+++ b/doc/radosgw/dynamicresharding.rst
@@ -0,0 +1,235 @@
+.. _rgw_dynamic_bucket_index_resharding:
+
+===================================
+RGW Dynamic Bucket Index Resharding
+===================================
+
+.. versionadded:: Luminous
+
+A large bucket index can lead to performance problems. In order
+to address this problem we introduced bucket index sharding.
+Until Luminous, changing the number of bucket shards (resharding)
+needed to be done offline. Starting with Luminous we support
+online bucket resharding.
+
+Each bucket index shard can handle its entries efficiently up until
+reaching a certain threshold number of entries. If this threshold is
+exceeded the system can suffer from performance issues. The dynamic
+resharding feature detects this situation and automatically increases
+the number of shards used by the bucket index, resulting in a
+reduction of the number of entries in each bucket index shard. This
+process is transparent to the user.
+
+By default dynamic bucket index resharding can only increase the
+number of bucket index shards to 1999, although this upper-bound is a
+configuration parameter (see Configuration below). When
+possible, the process chooses a prime number of bucket index shards to
+spread the number of bucket index entries across the bucket index
+shards more evenly.
+
+The detection process runs in a background process that periodically
+scans all the buckets. A bucket that requires resharding is added to
+the resharding queue and will be scheduled to be resharded later. The
+reshard thread runs in the background and executes the scheduled
+resharding tasks, one at a time.
+
+Multisite
+=========
+
+Dynamic resharding is not supported in a multisite environment.
+
+
+Configuration
+=============
+
+Enable/Disable dynamic bucket index resharding:
+
+- ``rgw_dynamic_resharding``: true/false, default: true
+
+Configuration options that control the resharding process:
+
+- ``rgw_max_objs_per_shard``: maximum number of objects per bucket index shard before resharding is triggered, default: 100000 objects
+
+- ``rgw_max_dynamic_shards``: maximum number of shards that dynamic bucket index resharding can increase to, default: 1999
+
+- ``rgw_reshard_bucket_lock_duration``: duration, in seconds, of lock on bucket obj during resharding, default: 360 seconds (i.e., 6 minutes)
+
+- ``rgw_reshard_thread_interval``: maximum time, in seconds, between rounds of resharding queue processing, default: 600 seconds (i.e., 10 minutes)
+
+- ``rgw_reshard_num_logs``: number of shards for the resharding queue, default: 16
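+
+For illustration, these might be set in ``ceph.conf`` as follows (the
+values shown are simply the defaults listed above)::
+
+  [client.radosgw.gateway]
+  rgw_dynamic_resharding = true
+  rgw_max_objs_per_shard = 100000
+  rgw_max_dynamic_shards = 1999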
+
+Admin commands
+==============
+
+Add a bucket to the resharding queue
+------------------------------------
+
+::
+
+ # radosgw-admin reshard add --bucket <bucket_name> --num-shards <new number of shards>
+
+List resharding queue
+---------------------
+
+::
+
+ # radosgw-admin reshard list
+
+Process tasks on the resharding queue
+-------------------------------------
+
+::
+
+ # radosgw-admin reshard process
+
+Bucket resharding status
+------------------------
+
+::
+
+ # radosgw-admin reshard status --bucket <bucket_name>
+
+The output is a JSON array with one entry per shard, each containing three fields: ``reshard_status``, ``new_bucket_instance_id``, and ``num_shards``.
+
+For example, the output at different Dynamic Resharding stages is shown below:
+
+``1. Before resharding occurred:``
+::
+
+ [
+ {
+ "reshard_status": "not-resharding",
+ "new_bucket_instance_id": "",
+ "num_shards": -1
+ }
+ ]
+
+``2. During resharding:``
+::
+
+ [
+ {
+ "reshard_status": "in-progress",
+ "new_bucket_instance_id": "1179f470-2ebf-4630-8ec3-c9922da887fd.8652.1",
+ "num_shards": 2
+ },
+ {
+ "reshard_status": "in-progress",
+ "new_bucket_instance_id": "1179f470-2ebf-4630-8ec3-c9922da887fd.8652.1",
+ "num_shards": 2
+ }
+ ]
+
+``3. After resharding completed:``
+::
+
+ [
+ {
+ "reshard_status": "not-resharding",
+ "new_bucket_instance_id": "",
+ "num_shards": -1
+ },
+ {
+ "reshard_status": "not-resharding",
+ "new_bucket_instance_id": "",
+ "num_shards": -1
+ }
+ ]
+
+
+Cancel pending bucket resharding
+--------------------------------
+
+Note: Ongoing bucket resharding operations cannot be cancelled. ::
+
+ # radosgw-admin reshard cancel --bucket <bucket_name>
+
+Manual immediate bucket resharding
+----------------------------------
+
+::
+
+ # radosgw-admin bucket reshard --bucket <bucket_name> --num-shards <new number of shards>
+
+When choosing a number of shards, the administrator should keep a
+number of items in mind. Ideally the administrator should aim for no
+more than 100000 entries per shard, both now and at some future point
+in time. For example, a bucket expected to grow to 500000 objects
+would need at least 5 shards to stay under that target.
+
+Additionally, bucket index shard counts that are prime numbers tend to
+work better at evenly distributing bucket index entries across the
+shards. For example, 7001 bucket index shards is better than 7000
+since the former is prime. A variety of web sites have lists of prime
+numbers; search for "list of prime numbers" with your favorite web
+search engine to locate some of them.
+
+Troubleshooting
+===============
+
+Clusters prior to Luminous 12.2.11 and Mimic 13.2.5 left behind stale bucket
+instance entries, which were not automatically cleaned up. The issue also
+affected lifecycle policies, which were no longer applied to resharded buckets.
+Both of these issues can be worked around using a couple of ``radosgw-admin``
+commands.
+
+Stale instance management
+-------------------------
+
+List the stale instances in a cluster that are ready to be cleaned up.
+
+::
+
+ # radosgw-admin reshard stale-instances list
+
+Clean up the stale instances in a cluster. Note: cleanup of these
+instances should only be done on a single site cluster.
+
+::
+
+ # radosgw-admin reshard stale-instances rm
+
+
+Lifecycle fixes
+---------------
+
+For clusters that had resharded instances, it is highly likely that the old
+lifecycle processes would have flagged and deleted lifecycle processing because
+the bucket instance changed during a reshard. While this is fixed for newer
+clusters (from Mimic 13.2.6 and Luminous 12.2.12), older buckets that had
+lifecycle policies and that have undergone resharding must be fixed manually.
+
+The command to do so is:
+
+::
+
+ # radosgw-admin lc reshard fix --bucket {bucketname}
+
+
+As a convenience wrapper, if the ``--bucket`` argument is dropped then this
+command will try to fix lifecycle policies for all the buckets in the cluster.
+
+Object Expirer fixes
+--------------------
+
+Objects subject to Swift object expiration on older clusters may have
+been dropped from the log pool and never deleted after the bucket was
+resharded. This would happen if their expiration time was before the
+cluster was upgraded, but if their expiration was after the upgrade
+the objects would be correctly handled. To manage these expire-stale
+objects, radosgw-admin provides two subcommands.
+
+Listing:
+
+::
+
+ # radosgw-admin objects expire-stale list --bucket {bucketname}
+
+Displays a list of object names and expiration times in JSON format.
+
+Deleting:
+
+::
+
+ # radosgw-admin objects expire-stale rm --bucket {bucketname}
+
+
+Initiates deletion of such objects, displaying a list of object names, expiration times, and deletion status in JSON format.
diff --git a/doc/radosgw/elastic-sync-module.rst b/doc/radosgw/elastic-sync-module.rst
new file mode 100644
index 000000000..60c806e87
--- /dev/null
+++ b/doc/radosgw/elastic-sync-module.rst
@@ -0,0 +1,181 @@
+=========================
+ElasticSearch Sync Module
+=========================
+
+.. versionadded:: Kraken
+
+.. note::
+ As of 31 May 2020, only Elasticsearch 6 and lower are supported. ElasticSearch 7 is not supported.
+
+This sync module writes the metadata from other zones to `ElasticSearch`_. As
+of Luminous, this is a JSON document of the data fields we currently store in
+ElasticSearch.
+
+::
+
+ {
+ "_index" : "rgw-gold-ee5863d6",
+ "_type" : "object",
+ "_id" : "34137443-8592-48d9-8ca7-160255d52ade.34137.1:object1:null",
+ "_score" : 1.0,
+ "_source" : {
+ "bucket" : "testbucket123",
+ "name" : "object1",
+ "instance" : "null",
+ "versioned_epoch" : 0,
+ "owner" : {
+ "id" : "user1",
+ "display_name" : "user1"
+ },
+ "permissions" : [
+ "user1"
+ ],
+ "meta" : {
+ "size" : 712354,
+ "mtime" : "2017-05-04T12:54:16.462Z",
+ "etag" : "7ac66c0f148de9519b8bd264312c4d64"
+ }
+ }
+ }
+
+
+
+ElasticSearch tier type configurables
+-------------------------------------
+
+* ``endpoint``
+
+Specifies the Elasticsearch server endpoint to access
+
+* ``num_shards`` (integer)
+
+The number of shards that Elasticsearch will be configured with on
+data sync initialization. Note that this cannot be changed after
+initialization; any change here requires a rebuild of the Elasticsearch
+index and a reinitialization of the data sync process.
+
+* ``num_replicas`` (integer)
+
+The number of the replicas that Elasticsearch will be configured with
+on data sync initialization.
+
+* ``explicit_custom_meta`` (true | false)
+
+Specifies whether all user custom metadata will be indexed, or whether
+the user will need to configure (at the bucket level) which custom
+metadata entries should be indexed. This is false by default.
+
+* ``index_buckets_list`` (comma separated list of strings)
+
+If empty, all buckets will be indexed. Otherwise, only buckets
+specified here will be indexed. It is possible to provide bucket
+prefixes (e.g., foo\*), or bucket suffixes (e.g., \*bar).
+
+* ``approved_owners_list`` (comma separated list of strings)
+
+If empty, buckets of all owners will be indexed (subject to other
+restrictions); otherwise, only buckets owned by the specified owners will
+be indexed. Suffixes and prefixes can also be provided.
+
+* ``override_index_path`` (string)
+
+If not empty, this string will be used as the Elasticsearch index
+path. Otherwise the index path will be determined and generated on
+sync initialization.
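+
+These configurables are typically set as ``--tier-config`` key=value pairs
+on the zone via ``radosgw-admin zone modify``. As a sketch (the zone name
+and endpoint are placeholders)::
+
+  radosgw-admin zone modify --rgw-zone=us-east-es \
+    --tier-config=endpoint=http://localhost:9200,num_shards=10,num_replicas=1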
+
+
+End user metadata queries
+-------------------------
+
+.. versionadded:: Luminous
+
+Since the ElasticSearch cluster now stores object metadata, it is important
+that the ElasticSearch endpoint is not exposed to the public and is accessible
+only to the cluster administrators. Exposing metadata queries to the end user
+poses a problem, since we'd want users to query only their own metadata and
+not that of any other users; this would require the ElasticSearch cluster to
+authenticate users in a way similar to RGW, which it does not do.
+
+As of Luminous, RGW in the metadata master zone can service these end user
+requests. This avoids exposing the ElasticSearch endpoint publicly and also
+solves the authentication and authorization problem, since RGW itself can
+authenticate the end user requests. For this purpose RGW introduces a new
+query in the bucket APIs that can service ElasticSearch requests. All these
+requests must be sent to the metadata master zone.
+
+Syntax
+~~~~~~
+
+Get an elasticsearch query
+``````````````````````````
+
+::
+
+ GET /{bucket}?query={query-expr}
+
+request params:
+ - max-keys: max number of entries to return
+ - marker: pagination marker
+
+``expression := [(]<arg> <op> <value> [)][<and|or> ...]``
+
+op is one of the following:
+<, <=, ==, >=, >
+
+For example ::
+
+ GET /?query=name==foo
+
+Will return all the indexed keys that the user has read permission to
+and that are named 'foo'.
+
+The output will be a list of keys in XML that is similar to the S3
+list buckets response.
+
+Configure custom metadata fields
+````````````````````````````````
+
+Define which custom metadata entries should be indexed (under the
+specified bucket) and what the types of these keys are. If explicit
+custom metadata indexing is configured, this is needed so that rgw
+will index the specified custom metadata values. Otherwise it is
+needed in cases where the indexed metadata keys are of a type other
+than string.
+
+::
+
+ POST /{bucket}?mdsearch
+ x-amz-meta-search: <key [; type]> [, ...]
+
+Multiple metadata fields must be comma separated; a type can be forced for a
+field with a ``;``. The currently allowed types are string (the default),
+integer, and date.
+
+For example, to index the custom object metadata x-amz-meta-year as an
+integer, x-amz-meta-release-date as a date, and x-amz-meta-title as a string,
+you'd do
+
+::
+
+ POST /mybooks?mdsearch
+ x-amz-meta-search: x-amz-meta-year;int, x-amz-meta-release-date;date, x-amz-meta-title;string
+
+
+Delete custom metadata configuration
+````````````````````````````````````
+
+Delete custom metadata bucket configuration.
+
+::
+
+ DELETE /<bucket>?mdsearch
+
+Get custom metadata configuration
+`````````````````````````````````
+
+Retrieve custom metadata bucket configuration.
+
+::
+
+ GET /<bucket>?mdsearch
+
+
+.. _`Elasticsearch`: https://github.com/elastic/elasticsearch
diff --git a/doc/radosgw/encryption.rst b/doc/radosgw/encryption.rst
new file mode 100644
index 000000000..07fd60ac5
--- /dev/null
+++ b/doc/radosgw/encryption.rst
@@ -0,0 +1,68 @@
+==========
+Encryption
+==========
+
+.. versionadded:: Luminous
+
+The Ceph Object Gateway supports server-side encryption of uploaded objects,
+with 3 options for the management of encryption keys. Server-side encryption
+means that the data is sent over HTTP in its unencrypted form, and the Ceph
+Object Gateway stores that data in the Ceph Storage Cluster in encrypted form.
+
+.. note:: Requests for server-side encryption must be sent over a secure HTTPS
+ connection to avoid sending secrets in plaintext. If a proxy is used
+ for SSL termination, ``rgw trust forwarded https`` must be enabled
+ before forwarded requests will be trusted as secure.
+
+.. note:: Server-side encryption keys must be 256 bits long and base64 encoded.
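+
+For example, a suitable random 256-bit key can be generated and base64
+encoded with OpenSSL::
+
+  openssl rand -base64 32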
+
+Customer-Provided Keys
+======================
+
+In this mode, the client passes an encryption key along with each request to
+read or write encrypted data. It is the client's responsibility to manage those
+keys and remember which key was used to encrypt each object.
+
+This is implemented in S3 according to the `Amazon SSE-C`_ specification.
+
+As all key management is handled by the client, no special configuration is
+needed to support this encryption mode.
+
+Key Management Service
+======================
+
+This mode allows keys to be stored in a secure key management service and
+retrieved on demand by the Ceph Object Gateway to serve requests to encrypt
+or decrypt data.
+
+This is implemented in S3 according to the `Amazon SSE-KMS`_ specification.
+
+In principle, any key management service could be used here. Currently
+integration with `Barbican`_, `Vault`_, and `KMIP`_ are implemented.
+
+See `OpenStack Barbican Integration`_, `HashiCorp Vault Integration`_,
+and `KMIP Integration`_.
+
+Automatic Encryption (for testing only)
+=======================================
+
+A ``rgw crypt default encryption key`` can be set in ceph.conf to force the
+encryption of all objects that do not otherwise specify an encryption mode.
+
+The configuration expects a base64-encoded 256 bit key. For example::
+
+ rgw crypt default encryption key = 4YSmvJtBv0aZ7geVgAsdpRnLBEwWSWlMIGnRS8a9TSA=
+
+.. important:: This mode is for diagnostic purposes only! The ceph configuration
+ file is not a secure method for storing encryption keys. Keys that are
+ accidentally exposed in this way should be considered compromised.
+
+
+.. _Amazon SSE-C: https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
+.. _Amazon SSE-KMS: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
+.. _Barbican: https://wiki.openstack.org/wiki/Barbican
+.. _Vault: https://www.vaultproject.io/docs/
+.. _KMIP: http://www.oasis-open.org/committees/kmip/
+.. _OpenStack Barbican Integration: ../barbican
+.. _HashiCorp Vault Integration: ../vault
+.. _KMIP Integration: ../kmip
diff --git a/doc/radosgw/frontends.rst b/doc/radosgw/frontends.rst
new file mode 100644
index 000000000..274cdce87
--- /dev/null
+++ b/doc/radosgw/frontends.rst
@@ -0,0 +1,226 @@
+.. _rgw_frontends:
+
+==============
+HTTP Frontends
+==============
+
+.. contents::
+
+The Ceph Object Gateway supports two embedded HTTP frontend libraries
+that can be configured with ``rgw_frontends``. See `Config Reference`_
+for details about the syntax.
+
+Beast
+=====
+
+.. versionadded:: Mimic
+
+The ``beast`` frontend uses the Boost.Beast library for HTTP parsing
+and the Boost.Asio library for asynchronous network I/O.
+
+Options
+-------
+
+``port`` and ``ssl_port``
+
+:Description: Sets the IPv4 and IPv6 listening port number. Can be specified
+              multiple times as in ``port=80 port=8000``.
+:Type: Integer
+:Default: ``80``
+
+
+``endpoint`` and ``ssl_endpoint``
+
+:Description: Sets the listening address in the form ``address[:port]``, where
+              the address is an IPv4 address string in dotted decimal form, or
+              an IPv6 address in hexadecimal notation surrounded by square
+              brackets. Specifying an IPv6 endpoint will listen on IPv6 only.
+              The optional port defaults to 80 for ``endpoint`` and 443 for
+              ``ssl_endpoint``. Can be specified multiple times as in
+              ``endpoint=[::1] endpoint=192.168.0.100:8000``.
+
+:Type: String
+:Default: None
+
+
+``ssl_certificate``
+
+:Description: Path to the SSL certificate file used for SSL-enabled endpoints.
+ If path is prefixed with ``config://``, the certificate will be
+ pulled from the ceph monitor ``config-key`` database.
+
+:Type: String
+:Default: None
+
+
+``ssl_private_key``
+
+:Description: Optional path to the private key file used for SSL-enabled
+ endpoints. If one is not given, the ``ssl_certificate`` file
+ is used as the private key.
+ If path is prefixed with ``config://``, the certificate will be
+ pulled from the ceph monitor ``config-key`` database.
+
+:Type: String
+:Default: None
+
+``ssl_options``
+
+:Description: Optional colon separated list of ssl context options:
+
+ ``default_workarounds`` Implement various bug workarounds.
+
+ ``no_compression`` Disable compression.
+
+ ``no_sslv2`` Disable SSL v2.
+
+ ``no_sslv3`` Disable SSL v3.
+
+ ``no_tlsv1`` Disable TLS v1.
+
+ ``no_tlsv1_1`` Disable TLS v1.1.
+
+ ``no_tlsv1_2`` Disable TLS v1.2.
+
+ ``single_dh_use`` Always create a new key when using tmp_dh parameters.
+
+:Type: String
+:Default: ``no_sslv2:no_sslv3:no_tlsv1:no_tlsv1_1``
+
+``ssl_ciphers``
+
+:Description: Optional list of one or more cipher strings separated by colons.
+ The format of the string is described in openssl's ciphers(1)
+ manual.
+
+:Type: String
+:Default: None
+
+``tcp_nodelay``
+
+:Description: If set, the socket option will disable Nagle's algorithm on
+              the connection, which means that packets will be sent as soon
+              as possible instead of waiting for a full buffer or timeout to occur.
+
+              ``1`` Disable Nagle's algorithm for all sockets.
+
+              ``0`` Keep the default: Nagle's algorithm enabled.
+
+:Type: Integer (0 or 1)
+:Default: 0
+
+``max_connection_backlog``
+
+:Description: Optional value to define the maximum size for the queue of
+ connections waiting to be accepted. If not configured, the value
+ from ``boost::asio::socket_base::max_connections`` will be used.
+
+:Type: Integer
+:Default: None
+
+``request_timeout_ms``
+
+:Description: The amount of time in milliseconds that Beast will wait
+ for more incoming data or outgoing data before giving up.
+ Setting this value to 0 will disable timeout.
+
+:Type: Integer
+:Default: ``65000``
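+
+For illustration, a Beast configuration in ``/etc/ceph/ceph.conf`` might
+look like the following sketch (the endpoints and certificate path are
+placeholders)::
+
+  [client.rgw.gateway-node1]
+  rgw_frontends = beast endpoint=0.0.0.0:8080 ssl_endpoint=0.0.0.0:443 ssl_certificate=/etc/ceph/private/rgw.pem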
+
+
+Civetweb
+========
+
+.. versionadded:: Firefly
+.. deprecated:: Pacific
+
+The ``civetweb`` frontend uses the Civetweb HTTP library, which is a
+fork of Mongoose.
+
+
+Options
+-------
+
+``port``
+
+:Description: Sets the listening port number. For SSL-enabled ports, add an
+ ``s`` suffix like ``443s``. To bind a specific IPv4 or IPv6
+ address, use the form ``address:port``. Multiple endpoints
+ can either be separated by ``+`` as in ``127.0.0.1:8000+443s``,
+ or by providing multiple options as in ``port=8000 port=443s``.
+
+:Type: String
+:Default: ``7480``
+
+
+``num_threads``
+
+:Description: Sets the number of threads spawned by Civetweb to handle
+ incoming HTTP connections. This effectively limits the number
+ of concurrent connections that the frontend can service.
+
+:Type: Integer
+:Default: ``rgw_thread_pool_size``
+
+
+``request_timeout_ms``
+
+:Description: The amount of time in milliseconds that Civetweb will wait
+ for more incoming data before giving up.
+
+:Type: Integer
+:Default: ``30000``
+
+
+``ssl_certificate``
+
+:Description: Path to the SSL certificate file used for SSL-enabled ports.
+
+:Type: String
+:Default: None
+
+``access_log_file``
+
+:Description: Path to a file for access logs. Either full path, or relative
+ to the current working directory. If absent (default), then
+ accesses are not logged.
+
+:Type: String
+:Default: ``EMPTY``
+
+
+``error_log_file``
+
+:Description: Path to a file for error logs. Either full path, or relative
+ to the current working directory. If absent (default), then
+ errors are not logged.
+
+:Type: String
+:Default: ``EMPTY``
+
+
+The following is an example of the ``/etc/ceph/ceph.conf`` file with some of these options set: ::
+
+ [client.rgw.gateway-node1]
+ rgw_frontends = civetweb request_timeout_ms=30000 error_log_file=/var/log/radosgw/civetweb.error.log access_log_file=/var/log/radosgw/civetweb.access.log
+
+A complete list of supported options can be found in the `Civetweb User Manual`_.
+
+
+Generic Options
+===============
+
+Some frontend options are generic and supported by all frontends:
+
+``prefix``
+
+:Description: A prefix string that is inserted into the URI of all
+ requests. For example, a swift-only frontend could supply
+ a uri prefix of ``/swift``.
+
+:Type: String
+:Default: None
+
+
+.. _Civetweb User Manual: https://civetweb.github.io/civetweb/UserManual.html
+.. _Config Reference: ../config-ref
diff --git a/doc/radosgw/index.rst b/doc/radosgw/index.rst
new file mode 100644
index 000000000..efdb12d28
--- /dev/null
+++ b/doc/radosgw/index.rst
@@ -0,0 +1,85 @@
+.. _object-gateway:
+
+=====================
+ Ceph Object Gateway
+=====================
+
+:term:`Ceph Object Gateway` is an object storage interface built on top of
+``librados``. It provides a RESTful gateway between applications and Ceph
+Storage Clusters. :term:`Ceph Object Storage` supports two interfaces:
+
+#. **S3-compatible:** Provides object storage functionality with an interface
+ that is compatible with a large subset of the Amazon S3 RESTful API.
+
+#. **Swift-compatible:** Provides object storage functionality with an interface
+ that is compatible with a large subset of the OpenStack Swift API.
+
+Ceph Object Storage uses the Ceph Object Gateway daemon (``radosgw``), an HTTP
+server designed for interacting with a Ceph Storage Cluster. The Ceph Object
+Gateway provides interfaces that are compatible with both Amazon S3 and
+OpenStack Swift, and it has its own user management. Ceph Object Gateway can
+store data in the same Ceph Storage Cluster in which data from Ceph File System
+clients and Ceph Block Device clients is stored. The S3 API and the Swift API
+share a common namespace, which makes it possible to write data to a Ceph
+Storage Cluster with one API and then retrieve that data with the other API.
+
+.. ditaa::
+
+ +------------------------+ +------------------------+
+ | S3 compatible API | | Swift compatible API |
+ +------------------------+-+------------------------+
+ | radosgw |
+ +---------------------------------------------------+
+ | librados |
+ +------------------------+-+------------------------+
+ | OSDs | | Monitors |
+ +------------------------+ +------------------------+
+
+.. note:: Ceph Object Storage does **NOT** use the Ceph Metadata Server.
+
+
+.. toctree::
+ :maxdepth: 1
+
+ HTTP Frontends <frontends>
+ Pool Placement and Storage Classes <placement>
+ Multisite Configuration <multisite>
+ Multisite Sync Policy Configuration <multisite-sync-policy>
+ Configuring Pools <pools>
+ Config Reference <config-ref>
+ Admin Guide <admin>
+ S3 API <s3>
+   Data caching and CDN <rgw-cache>
+ Swift API <swift>
+ Admin Ops API <adminops>
+ Python binding <api>
+ Export over NFS <nfs>
+ OpenStack Keystone Integration <keystone>
+ OpenStack Barbican Integration <barbican>
+ HashiCorp Vault Integration <vault>
+ KMIP Integration <kmip>
+ Open Policy Agent Integration <opa>
+ Multi-tenancy <multitenancy>
+ Compression <compression>
+ LDAP Authentication <ldap-auth>
+ Server-Side Encryption <encryption>
+ Bucket Policy <bucketpolicy>
+ Dynamic bucket index resharding <dynamicresharding>
+ Multi factor authentication <mfa>
+ Sync Modules <sync-modules>
+ Bucket Notifications <notifications>
+ Data Layout in RADOS <layout>
+ STS <STS>
+ STS Lite <STSLite>
+ Keycloak <keycloak>
+ Session Tags <session-tags>
+ Role <role>
+ Orphan List and Associated Tooling <orphans>
+ OpenID Connect Provider <oidc>
+ troubleshooting
+ Manpage radosgw <../../man/8/radosgw>
+ Manpage radosgw-admin <../../man/8/radosgw-admin>
+ QAT Acceleration for Encryption and Compression <qat-accel>
+ S3-select <s3select>
+ Lua Scripting <lua-scripting>
+
diff --git a/doc/radosgw/keycloak.rst b/doc/radosgw/keycloak.rst
new file mode 100644
index 000000000..534c4733a
--- /dev/null
+++ b/doc/radosgw/keycloak.rst
@@ -0,0 +1,127 @@
+=================================
+Keycloak integration with RadosGW
+=================================
+
+Keycloak can be set up as an OpenID Connect Identity Provider, which can be used by mobile/web apps
+to authenticate their users. The web token returned as a result of authentication can be used by the
+mobile/web app to call AssumeRoleWithWebIdentity to get back a set of temporary S3 credentials,
+which the app can then use to make S3 calls.
+
+Setting up Keycloak
+====================
+
+Instructions for installing and bringing up Keycloak can be found here: https://www.keycloak.org/docs/latest/server_installation/.
+
+Configuring Keycloak to talk to RGW
+===================================
+
+The following configurables have to be added for RGW to talk to Keycloak::
+
+ [client.radosgw.gateway]
+ rgw sts key = {sts key for encrypting/ decrypting the session token}
+ rgw s3 auth use sts = true
+
+Example showing how to fetch a web token from Keycloak
+======================================================
+
+Several examples of apps authenticating with Keycloak are given here: https://github.com/keycloak/keycloak-quickstarts/blob/latest/docs/getting-started.md.
+Taking the example of the app-profile-jee-jsp app given in the link above, its client id and client secret can be used to fetch the
+access token (web token) for an application using grant type 'client_credentials', as shown below::
+
+ KC_REALM=demo
+ KC_CLIENT=<client id>
+ KC_CLIENT_SECRET=<client secret>
+ KC_SERVER=<host>:8080
+ KC_CONTEXT=auth
+
+ # Request Tokens for credentials
+ KC_RESPONSE=$( \
+ curl -k -v -X POST \
+ -H "Content-Type: application/x-www-form-urlencoded" \
+ -d "scope=openid" \
+ -d "grant_type=client_credentials" \
+ -d "client_id=$KC_CLIENT" \
+ -d "client_secret=$KC_CLIENT_SECRET" \
+ "http://$KC_SERVER/$KC_CONTEXT/realms/$KC_REALM/protocol/openid-connect/token" \
+ | jq .
+ )
+
+ KC_ACCESS_TOKEN=$(echo $KC_RESPONSE| jq -r .access_token)
+
+An access token can also be fetched for a particular user with grant type 'password', using the client id, client secret, username, and password,
+as shown below::
+
+ KC_REALM=demo
+ KC_USERNAME=<username>
+ KC_PASSWORD=<userpassword>
+ KC_CLIENT=<client id>
+ KC_CLIENT_SECRET=<client secret>
+ KC_SERVER=<host>:8080
+ KC_CONTEXT=auth
+
+ # Request Tokens for credentials
+ KC_RESPONSE=$( \
+ curl -k -v -X POST \
+ -H "Content-Type: application/x-www-form-urlencoded" \
+ -d "scope=openid" \
+ -d "grant_type=password" \
+ -d "client_id=$KC_CLIENT" \
+ -d "client_secret=$KC_CLIENT_SECRET" \
+ -d "username=$KC_USERNAME" \
+ -d "password=$KC_PASSWORD" \
+ "http://$KC_SERVER/$KC_CONTEXT/realms/$KC_REALM/protocol/openid-connect/token" \
+ | jq .
+ )
+
+ KC_ACCESS_TOKEN=$(echo $KC_RESPONSE| jq -r .access_token)
+
+
+KC_ACCESS_TOKEN can be used to invoke AssumeRoleWithWebIdentity as given in
+:doc:`STS`.
+
+Attaching tags to a user in Keycloak
+====================================
+
+We need to create a user in Keycloak and add tags to it as its attributes.
+
+Add a user as shown below:
+
+.. image:: ../images/keycloak-adduser.png
+ :align: center
+
+Add user details as shown below:
+
+.. image:: ../images/keycloak-userdetails.png
+ :align: center
+
+Add user credentials as shown below:
+
+.. image:: ../images/keycloak-usercredentials.png
+ :align: center
+
+Add tags to the 'attributes' tab of the user as shown below:
+
+.. image:: ../images/keycloak-usertags.png
+ :align: center
+
+Add a protocol mapper for the user attribute to a client as shown below:
+
+.. image:: ../images/keycloak-userclientmapper.png
+ :align: center
+
+
+After following the steps shown above, the tag 'Department' will appear in the JWT (web token), under the 'https://aws.amazon.com/tags' namespace.
+The tags can be verified using token introspection of the JWT. The command to introspect a token using the client id and client secret is shown below::
+
+ KC_REALM=demo
+ KC_CLIENT=<client id>
+ KC_CLIENT_SECRET=<client secret>
+ KC_SERVER=<host>:8080
+ KC_CONTEXT=auth
+
+ curl -k -v \
+ -X POST \
+ -u "$KC_CLIENT:$KC_CLIENT_SECRET" \
+ -d "token=$KC_ACCESS_TOKEN" \
+ "http://$KC_SERVER/$KC_CONTEXT/realms/$KC_REALM/protocol/openid-connect/token/introspect" \
+ | jq .
diff --git a/doc/radosgw/keystone.rst b/doc/radosgw/keystone.rst
new file mode 100644
index 000000000..628810ad3
--- /dev/null
+++ b/doc/radosgw/keystone.rst
@@ -0,0 +1,160 @@
+=====================================
+ Integrating with OpenStack Keystone
+=====================================
+
+It is possible to integrate the Ceph Object Gateway with Keystone, the
+OpenStack identity service. This sets up the gateway to accept Keystone as the
+users authority. A user that Keystone authorizes to access the gateway will
+also be automatically created on the Ceph Object Gateway (if it didn't exist
+beforehand). A token that Keystone validates will be considered valid by the
+gateway.
+
+The following configuration options are available for Keystone integration::
+
+ [client.radosgw.gateway]
+ rgw keystone api version = {keystone api version}
+ rgw keystone url = {keystone server url:keystone server admin port}
+ rgw keystone admin token = {keystone admin token}
+ rgw keystone admin token path = {path to keystone admin token} #preferred
+ rgw keystone accepted roles = {accepted user roles}
+ rgw keystone token cache size = {number of tokens to cache}
+ rgw keystone implicit tenants = {true for private tenant for each new user}
+
+It is also possible to configure a Keystone service tenant, user, and password
+for Keystone (for the v2.0 version of the OpenStack Identity API), similar to
+the way OpenStack services tend to be configured. This avoids the need for
+setting the shared secret ``rgw keystone admin token`` in the configuration
+file, which is recommended to be disabled in production environments. The
+service tenant credentials should have admin privileges; for more details,
+refer to the `OpenStack Keystone documentation`_, which explains the process
+in detail. The requisite configuration options are::
+
+ rgw keystone admin user = {keystone service tenant user name}
+ rgw keystone admin password = {keystone service tenant user password}
+ rgw keystone admin password path = {path to keystone service tenant user password} # preferred
+ rgw keystone admin tenant = {keystone service tenant name}
+
+
+A Ceph Object Gateway user is mapped into a Keystone ``tenant``. A Keystone user
+has different roles assigned to it on possibly more than a single tenant. When
+the Ceph Object Gateway gets the ticket, it looks at the tenant, and the user
+roles that are assigned to that ticket, and accepts/rejects the request
+according to the ``rgw keystone accepted roles`` configurable.
+
+For a v3 version of the OpenStack Identity API you should replace
+``rgw keystone admin tenant`` with::
+
+ rgw keystone admin domain = {keystone admin domain name}
+ rgw keystone admin project = {keystone admin project name}
+
+For compatibility with previous versions of ceph, it is also
+possible to set ``rgw keystone implicit tenants`` to either
+``s3`` or ``swift``. This has the effect of splitting
+the identity space such that the indicated protocol will
+only use implicit tenants, and the other protocol will
+never use implicit tenants. Some older versions of ceph
+only supported implicit tenants with swift.
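+
+For illustration, a complete v3 configuration assembled from the fragments
+above might look like the following sketch (the URL, names, and path are
+placeholders)::
+
+ [client.radosgw.gateway]
+ rgw keystone api version = 3
+ rgw keystone url = http://keystone.example.com:5000
+ rgw keystone accepted roles = Member, admin
+ rgw keystone token cache size = 10000
+ rgw keystone admin domain = Default
+ rgw keystone admin project = service
+ rgw keystone admin user = swift
+ rgw keystone admin password path = /etc/ceph/keystone-password
+ rgw keystone implicit tenants = true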
+
+Ocata (and later)
+-----------------
+
+Keystone itself needs to be configured to point to the Ceph Object Gateway as an
+object-storage endpoint::
+
+ openstack service create --name=swift \
+ --description="Swift Service" \
+ object-store
+ +-------------+----------------------------------+
+ | Field | Value |
+ +-------------+----------------------------------+
+ | description | Swift Service |
+ | enabled | True |
+ | id | 37c4c0e79571404cb4644201a4a6e5ee |
+ | name | swift |
+ | type | object-store |
+ +-------------+----------------------------------+
+
+ openstack endpoint create --region RegionOne \
+ --publicurl "http://radosgw.example.com:8080/swift/v1" \
+ --adminurl "http://radosgw.example.com:8080/swift/v1" \
+ --internalurl "http://radosgw.example.com:8080/swift/v1" \
+ swift
+ +--------------+------------------------------------------+
+ | Field | Value |
+ +--------------+------------------------------------------+
+ | adminurl | http://radosgw.example.com:8080/swift/v1 |
+ | id | e4249d2b60e44743a67b5e5b38c18dd3 |
+ | internalurl | http://radosgw.example.com:8080/swift/v1 |
+ | publicurl | http://radosgw.example.com:8080/swift/v1 |
+ | region | RegionOne |
+ | service_id | 37c4c0e79571404cb4644201a4a6e5ee |
+ | service_name | swift |
+ | service_type | object-store |
+ +--------------+------------------------------------------+
+
+ $ openstack endpoint show object-store
+ +--------------+------------------------------------------+
+ | Field | Value |
+ +--------------+------------------------------------------+
+ | adminurl | http://radosgw.example.com:8080/swift/v1 |
+ | enabled | True |
+ | id | e4249d2b60e44743a67b5e5b38c18dd3 |
+ | internalurl | http://radosgw.example.com:8080/swift/v1 |
+ | publicurl | http://radosgw.example.com:8080/swift/v1 |
+ | region | RegionOne |
+ | service_id | 37c4c0e79571404cb4644201a4a6e5ee |
+ | service_name | swift |
+ | service_type | object-store |
+ +--------------+------------------------------------------+
+
+.. note:: If your radosgw ``ceph.conf`` sets the configuration option
+ ``rgw swift account in url = true``, your ``object-store``
+ endpoint URLs must be set to include the suffix
+ ``/v1/AUTH_%(tenant_id)s`` (instead of just ``/v1``).
+
+The Keystone URL is the Keystone admin RESTful API URL. The admin token is the
+token that is configured internally in Keystone for admin requests.
+
+OpenStack Keystone may be terminated with a self-signed SSL certificate. In
+order for radosgw to interact with Keystone in such a case, you can either
+install Keystone's SSL certificate on the node running radosgw, or make
+radosgw skip SSL certificate verification altogether (similar to OpenStack
+clients with an ``--insecure`` switch) by setting the value of the
+configurable ``rgw keystone verify ssl`` to false.
+
+
+.. _OpenStack Keystone documentation: http://docs.openstack.org/developer/keystone/configuringservices.html#setting-up-projects-users-and-roles
+
+Cross Project(Tenant) Access
+----------------------------
+
+In order to let a project (earlier called a 'tenant') access buckets belonging to a different project, the following config option needs to be enabled::
+
+ rgw swift account in url = true
+
+The Keystone object-store endpoint must accordingly be configured to include the AUTH_%(project_id)s suffix::
+
+ openstack endpoint create --region RegionOne \
+   --publicurl "http://radosgw.example.com:8080/swift/v1/AUTH_%(project_id)s" \
+   --adminurl "http://radosgw.example.com:8080/swift/v1/AUTH_%(project_id)s" \
+   --internalurl "http://radosgw.example.com:8080/swift/v1/AUTH_%(project_id)s" \
+   swift
+ +--------------+--------------------------------------------------------------+
+ | Field        | Value                                                        |
+ +--------------+--------------------------------------------------------------+
+ | adminurl     | http://radosgw.example.com:8080/swift/v1/AUTH_%(project_id)s |
+ | id           | e4249d2b60e44743a67b5e5b38c18dd3                             |
+ | internalurl  | http://radosgw.example.com:8080/swift/v1/AUTH_%(project_id)s |
+ | publicurl    | http://radosgw.example.com:8080/swift/v1/AUTH_%(project_id)s |
+ | region       | RegionOne                                                    |
+ | service_id   | 37c4c0e79571404cb4644201a4a6e5ee                             |
+ | service_name | swift                                                        |
+ | service_type | object-store                                                 |
+ +--------------+--------------------------------------------------------------+
+
+Keystone integration with the S3 API
+------------------------------------
+
+It is possible to use Keystone for authentication even when using the
+S3 API (with AWS-like access and secret keys), if the ``rgw s3 auth
+use keystone`` option is set. For details, see
+:doc:`s3/authentication`.
diff --git a/doc/radosgw/kmip.rst b/doc/radosgw/kmip.rst
new file mode 100644
index 000000000..988897121
--- /dev/null
+++ b/doc/radosgw/kmip.rst
@@ -0,0 +1,219 @@
+================
+KMIP Integration
+================
+
+`KMIP`_ can be used as a secure key management service for
+`Server-Side Encryption`_ (SSE-KMS).
+
+.. ditaa::
+
+ +---------+ +---------+ +------+ +-------+
+ | Client | | RadosGW | | KMIP | | OSD |
+ +---------+ +---------+ +------+ +-------+
+ | create secret | | |
+ | key for key ID | | |
+ |-----------------+---------------->| |
+ | | | |
+ | upload object | | |
+ | with key ID | | |
+ |---------------->| request secret | |
+ | | key for key ID | |
+ | |---------------->| |
+ | |<----------------| |
+ | | return secret | |
+ | | key | |
+ | | | |
+ | | encrypt object | |
+ | | with secret key | |
+ | |--------------+ | |
+ | | | | |
+ | |<-------------+ | |
+ | | | |
+ | | store encrypted | |
+ | | object | |
+ | |------------------------------>|
+
+#. `Setting KMIP Access for Ceph`_
+#. `Creating Keys in KMIP`_
+#. `Configure the Ceph Object Gateway`_
+#. `Upload object`_
+
+Before you can use KMIP with Ceph, you will need to do three things:
+associate Ceph with client information in KMIP, configure Ceph to use
+that client information, and create one or more keys in KMIP.
+
+Setting KMIP Access for Ceph
+============================
+
+Setting up Ceph in KMIP is very dependent on the mechanism(s) supported
+by your implementation of KMIP. Two implementations are described
+here:
+
+1. `IBM Security Guardium Key Lifecycle Manager (SKLM)`__. This is a well
+ supported commercial product.
+
+__ SKLM_
+
+2. PyKMIP_. This is a small python project, suitable for experimental
+ and testing use only.
+
+Using IBM SKLM
+--------------
+
+IBM SKLM__ supports client authentication using certificates.
+Certificates may either be self-signed certificates created,
+for instance, using openssl, or certificates may be created
+using SKLM. Ceph should then be configured (see below) to
+use KMIP and an attempt made to use it. This will fail,
+but it will leave an "untrusted client device certificate" in SKLM.
+This can be then upgraded to a registered client using the web
+interface to complete the registration process.
+
+__ SKLM_
+
+Find untrusted clients under ``Advanced Configuration``,
+``Client Device Communication Certificates``. Select
+``Modify SSL/KMIP Certificates for Clients``, then toggle the flag
+``allow the server to trust this certificate and communicate...``.
+
+Using PyKMIP
+------------
+
+PyKMIP_ has no special registration process; it simply
+trusts the certificate. However, the certificate has to
+be issued by a certificate authority that is trusted by
+PyKMIP. PyKMIP also prefers that the certificate contain
+an extension for "extended key usage". That requirement
+can be disabled by specifying ``enable_tls_client_auth=False``
+in the server configuration.
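+
+As a rough sketch, the relevant part of a PyKMIP server configuration might
+look like the following; the hostname, paths, and certificate files are
+placeholders, and the PyKMIP documentation remains the authoritative
+reference for the option list::
+
+    [server]
+    # placeholder hostname and certificate paths
+    hostname=kmip.example.com
+    port=5696
+    certificate_path=/etc/pykmip/server.crt
+    key_path=/etc/pykmip/server.key
+    ca_path=/etc/pykmip/ca.crt
+    auth_suite=TLS1.2
+    enable_tls_client_auth=False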
+
+Creating Keys in KMIP
+=====================
+
+Some KMIP implementations come with a web interface or other
+administrative tools to create and manage keys. Refer to your
+documentation on that if you wish to use it. The KMIP protocol can also
+be used to create and manage keys. PyKMIP comes with a python client
+library that can be used this way.
+
+In preparation for using the PyKMIP client, you'll need to have a valid
+KMIP client key & certificate, such as the one you created for Ceph.
+
+Next, you'll need to download and install it::
+
+ virtualenv $HOME/my-kmip-env
+ source $HOME/my-kmip-env/bin/activate
+ pip install pykmip
+
+Then you'll need to prepare a configuration file
+for the client, something like this::
+
+ cat <<EOF >$HOME/my-kmip-configuration
+ [client]
+ host={hostname}
+ port=5696
+ certfile={clientcert}
+ keyfile={clientkey}
+ ca_certs={clientca}
+ ssl_version=PROTOCOL_TLSv1_2
+ EOF
+
+You will need to replace {hostname} with the name of your KMIP host, and
+replace {clientcert}, {clientkey}, and {clientca} with pathnames to
+suitable PEM-encoded certificates, such as the ones you created for
+Ceph to use.
+
+Now, you can run the following Python script directly from
+the shell::
+
+ python
+ from kmip.pie import client
+ from kmip import enums
+ import ssl
+ import os
+ import sys
+ import json
+ c = client.ProxyKmipClient(config_file=os.environ['HOME']+"/my-kmip-configuration")
+
+ while True:
+ l=sys.stdin.readline()
+ keyname=l.strip()
+ if keyname == "": break
+ with c:
+ key_id = c.create(
+ enums.CryptographicAlgorithm.AES,
+ 256,
+ operation_policy_name='default',
+ name=keyname,
+ cryptographic_usage_mask=[
+ enums.CryptographicUsageMask.ENCRYPT,
+ enums.CryptographicUsageMask.DECRYPT
+ ]
+ )
+ c.activate(key_id)
+ attrs = c.get_attributes(uid=key_id)
+ r = {}
+ for a in attrs[1]:
+ r[str(a.attribute_name)] = str(a.attribute_value)
+ print (json.dumps(r))
+
+If this is all entered at the shell prompt, Python will
+prompt with ">>>" then "..." until the script is read in,
+after which it will read and process names with no prompt
+until a blank line or end of file (^D) is given to it, or
+an error occurs. Of course you can turn this into a regular
+Python script if you prefer.
+
+Configure the Ceph Object Gateway
+=================================
+
+Edit the Ceph configuration file to enable KMIP as a KMS backend for
+server-side encryption::
+
+   rgw crypt s3 kms backend = kmip
+   rgw crypt kmip ca path = /etc/ceph/kmiproot.crt
+   rgw crypt kmip client cert = /etc/ceph/kmip-client.crt
+   rgw crypt kmip client key = /etc/ceph/private/kmip-client.key
+   rgw crypt kmip kms key template = pykmip-$keyid
+
+You may need to change the paths above to match where
+you actually want to store kmip certificate data.
+
+The KMIP key template describes how Ceph will modify
+the name given to it before it looks the key up
+in KMIP. The default is just ``$keyid``.
+If you don't want Ceph to see all your KMIP
+keys, you can use this to limit Ceph to just the
+designated subset of your KMIP key namespace.
+
+Upload object
+=============
+
+When uploading an object to the Gateway, provide the SSE key ID in the request.
+As an example, using the AWS command-line client::
+
+    aws --endpoint=http://radosgw:8000 s3 cp plaintext.txt \
+       s3://mybucket/encrypted.txt --sse=aws:kms --sse-kms-key-id mybucketkey
+
+The Object Gateway will fetch the key from KMIP, encrypt the object and store
+it in the bucket. Any request to download the object will make the Gateway
+automatically retrieve the corresponding key from KMIP and decrypt the object.
+
+Note that the secret will be fetched from KMIP using a name constructed
+from the key template, replacing ``$keyid`` with the key ID provided.
+
+With the Ceph configuration given above,
+radosgw would fetch the secret from::
+
+ pykmip-mybucketkey
+
+.. _Server-Side Encryption: ../encryption
+.. _KMIP: http://www.oasis-open.org/committees/kmip/
+.. _SKLM: https://www.ibm.com/products/ibm-security-key-lifecycle-manager
+.. _PyKMIP: https://pykmip.readthedocs.io/en/latest/
diff --git a/doc/radosgw/layout.rst b/doc/radosgw/layout.rst
new file mode 100644
index 000000000..37c3a72cd
--- /dev/null
+++ b/doc/radosgw/layout.rst
@@ -0,0 +1,207 @@
+===========================
+ Rados Gateway Data Layout
+===========================
+
+Although the source code is the ultimate guide, this document helps
+new developers to get up to speed with the implementation details.
+
+Introduction
+------------
+
+Swift offers something called a *container*, which we use interchangeably with
+the term *bucket*, so we say that RGW's buckets implement Swift containers.
+
+This document does not consider how RGW operates on these structures,
+e.g. the use of encode() and decode() methods for serialization and so on.
+
+Conceptual View
+---------------
+
+Although RADOS only knows about pools and objects with their xattrs and
+omap[1], conceptually RGW organizes its data into three different kinds:
+metadata, bucket index, and data.
+
+Metadata
+^^^^^^^^
+
+We have 3 'sections' of metadata: 'user', 'bucket', and 'bucket.instance'.
+You can use the following commands to introspect metadata entries: ::
+
+ $ radosgw-admin metadata list
+ $ radosgw-admin metadata list bucket
+ $ radosgw-admin metadata list bucket.instance
+ $ radosgw-admin metadata list user
+
+ $ radosgw-admin metadata get bucket:<bucket>
+ $ radosgw-admin metadata get bucket.instance:<bucket>:<bucket_id>
+ $ radosgw-admin metadata get user:<user> # get or set
+
+The metadata sections used in the above commands are:
+
+- user: Holds user information
+- bucket: Holds a mapping between bucket name and bucket instance id
+- bucket.instance: Holds bucket instance information[2]
+
+Every metadata entry is kept on a single RADOS object. See below for implementation details.
+
+Note that the metadata is not indexed. When listing a metadata section we do a
+RADOS ``pgls`` operation on the containing pool.
+
+Bucket Index
+^^^^^^^^^^^^
+
+The bucket index is a different kind of metadata, and kept separately. It
+holds a key-value map in RADOS objects. By default it is a single RADOS
+object per bucket, but since Hammer it is possible to shard that map over
+multiple RADOS objects. The map itself is kept in omap, associated with
+each RADOS object. The key of each omap entry is the name of an object,
+and the value holds some basic metadata of that object -- metadata that
+shows up when listing the bucket. Each omap also holds a header, and we
+keep some bucket accounting metadata in that header (number of objects,
+total size, etc.).
+
+Note that we also hold other information in the bucket index, and it's kept in
+other key namespaces. We can hold the bucket index log there, and for versioned
+objects there is more information that we keep on other keys.
+
+Data
+^^^^
+
+The data of each RGW object is kept in one or more RADOS objects.
+
+Object Lookup Path
+------------------
+
+When accessing objects, REST APIs come to RGW with three parameters:
+account information (access key in S3 or account name in Swift),
+bucket or container name, and object name (or key). At present, RGW only
+uses account information to find out the user ID and for access control.
+Only the bucket name and object key are used to address the object in a pool.
+
+The user ID in RGW is a string, typically the actual user name from the user
+credentials and not a hashed or mapped identifier.
+
+When accessing a user's data, the user record is loaded from an object
+"<user_id>" in pool "default.rgw.meta" with namespace "users.uid".
+
+Bucket names are represented in the pool "default.rgw.meta" with namespace
+"root". Bucket record is
+loaded in order to obtain so-called marker, which serves as a bucket ID.
+
+The object is located in pool "default.rgw.buckets.data".
+The object name is "<marker>_<key>",
+for example "default.7593.4_image.png", where the marker is "default.7593.4"
+and the key is "image.png". Since these concatenated names are not parsed,
+only passed down to RADOS, the choice of the separator is not important and
+causes no ambiguity. For the same reason, slashes are permitted in object
+names (keys).
+
+It is also possible to create multiple data pools and make it so that
+different users' buckets will be created in different RADOS pools by default,
+thus providing the necessary scaling. The layout and naming of these pools
+is controlled by a 'policy' setting.[3]
+
+An RGW object may consist of several RADOS objects, the first of which
+is the head that contains the metadata, such as manifest, ACLs, content type,
+ETag, and user-defined metadata. The metadata is stored in xattrs.
+The head may also contain up to 512 kilobytes of object data, for efficiency
+and atomicity. The manifest describes how each object is laid out in RADOS
+objects.
+
+Bucket and Object Listing
+-------------------------
+
+Buckets that belong to a given user are listed in an omap of an object named
+"<user_id>.buckets" (for example, "foo.buckets") in pool "default.rgw.meta"
+with namespace "users.uid".
+These objects are accessed when listing buckets, when updating bucket
+contents, and when updating and retrieving bucket statistics (e.g. for quota).
+
+See the user-visible, encoded class 'cls_user_bucket_entry' and its
+nested class 'cls_user_bucket' for the values of these omap entries.
+
+These listings are kept consistent with buckets in pool ".rgw".
+
+Objects that belong to a given bucket are listed in a bucket index,
+as discussed in sub-section 'Bucket Index' above. The default naming
+for index objects is ".dir.<marker>" in pool "default.rgw.buckets.index".
+
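+As a rough illustration, these structures can be inspected directly with the
+``rados`` tool (the pool, namespace, and marker names here follow the
+examples above; actual names vary per cluster)::
+
+ $ rados -p default.rgw.meta --namespace=users.uid ls
+ $ rados -p default.rgw.buckets.index ls
+ $ rados -p default.rgw.buckets.index listomapkeys .dir.default.7593.4
+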
+Footnotes
+---------
+
+[1] Omap is a key-value store, associated with an object, in a way similar
+to how Extended Attributes associate with a POSIX file. An object's omap
+is not physically located in the object's storage, but its precise
+implementation is invisible and immaterial to RADOS Gateway.
+In Hammer, one LevelDB is used to store omap in each OSD.
+
+[2] Before the Dumpling release, the 'bucket.instance' metadata did not
+exist and the 'bucket' metadata contained its information. It is possible
+to encounter such buckets in old installations.
+
+[3] The pool names have been changed starting with the Infernalis release.
+If you are looking at an older setup, some details may be different. In
+particular there was a different pool for each of the namespaces that are
+now being used inside the default.rgw.meta pool.
+
+Appendix: Compendium
+--------------------
+
+Known pools:
+
+.rgw.root
+ Unspecified region, zone, and global information records, one per object.
+
+<zone>.rgw.control
+ notify.<N>
+
+<zone>.rgw.meta
+ Multiple namespaces with different kinds of metadata:
+
+ namespace: root
+ <bucket>
+ .bucket.meta.<bucket>:<marker> # see put_bucket_instance_info()
+
+ The tenant is used to disambiguate buckets, but not bucket instances.
+ Example::
+
+ .bucket.meta.prodtx:test%25star:default.84099.6
+ .bucket.meta.testcont:default.4126.1
+ .bucket.meta.prodtx:testcont:default.84099.4
+ prodtx/testcont
+ prodtx/test%25star
+ testcont
+
+ namespace: users.uid
+ Contains _both_ per-user information (RGWUserInfo) in "<user>" objects
+ and per-user lists of buckets in omaps of "<user>.buckets" objects.
+ The "<user>" may contain the tenant if non-empty, for example::
+
+ prodtx$prodt
+ test2.buckets
+ prodtx$prodt.buckets
+ test2
+
+ namespace: users.email
+ Unimportant
+
+ namespace: users.keys
+ 47UA98JSTJZ9YAN3OS3O
+
+ This allows ``radosgw`` to look up users by their access keys during authentication.
+
+ namespace: users.swift
+ test:tester
+
+<zone>.rgw.buckets.index
+ Objects are named ".dir.<marker>", each contains a bucket index.
+ If the index is sharded, each shard appends the shard index after
+ the marker.
+
+<zone>.rgw.buckets.data
+ default.7593.4__shadow_.488urDFerTYXavx4yAd-Op8mxehnvTI_1
+ <marker>_<key>
+
+An example of a marker would be "default.16004.1" or "default.7593.4".
+The current format is "<zone>.<instance_id>.<bucket_id>". But once
+generated, a marker is not parsed again, so its format may change
+freely in the future.
diff --git a/doc/radosgw/ldap-auth.rst b/doc/radosgw/ldap-auth.rst
new file mode 100644
index 000000000..486d0c623
--- /dev/null
+++ b/doc/radosgw/ldap-auth.rst
@@ -0,0 +1,167 @@
+===================
+LDAP Authentication
+===================
+
+.. versionadded:: Jewel
+
+You can delegate the Ceph Object Gateway authentication to an LDAP server.
+
+How it works
+============
+
+The Ceph Object Gateway extracts the user's LDAP credentials from a token. A
+search filter is constructed with the user name. The Ceph Object Gateway uses
+the configured service account to search the directory for a matching entry. If
+an entry is found, the Ceph Object Gateway attempts to bind to the found
+distinguished name with the password from the token. If the credentials are
+valid, the bind will succeed, the Ceph Object Gateway will grant access, and
+a radosgw user will be created with the provided username.
+
+You can limit the allowed users by setting the base for the search to a
+specific organizational unit or by specifying a custom search filter, for
+example requiring specific group membership, custom object classes, or
+attributes.
+
+The LDAP credentials must be available on the server to perform the LDAP
+authentication. Make sure to set the ``rgw`` log level low enough to hide the
+base-64-encoded credentials / access tokens.
+
+Requirements
+============
+
+- **LDAP or Active Directory:** A running LDAP instance accessible by the Ceph
+ Object Gateway
+- **Service account:** LDAP credentials to be used by the Ceph Object Gateway
+ with search permissions
+- **User account:** At least one user account in the LDAP directory
+- **Do not overlap LDAP and local users:** You should not use the same user
+ names for local users and for users being authenticated by using LDAP. The
+ Ceph Object Gateway cannot distinguish them and it treats them as the same
+ user.
+
+Sanity checks
+=============
+
+Use the ``ldapsearch`` utility to verify the service account or the LDAP connection:
+
+::
+
+ # ldapsearch -x -D "uid=ceph,ou=system,dc=example,dc=com" -W \
+ -H ldaps://example.com -b "ou=users,dc=example,dc=com" 'uid=*' dn
+
+.. note:: Make sure to use the same LDAP parameters as in the Ceph configuration file to
+   eliminate possible problems.
+
+Configuring the Ceph Object Gateway to use LDAP authentication
+==============================================================
+
+The following parameters in the Ceph configuration file are related to the LDAP
+authentication:
+
+- ``rgw_s3_auth_use_ldap``: Set this to ``true`` to enable S3 authentication with LDAP
+- ``rgw_ldap_uri``: Specifies the LDAP server to use. Make sure to use the
+ ``ldaps://<fqdn>:<port>`` parameter to not transmit clear text credentials
+ over the wire.
+- ``rgw_ldap_binddn``: The Distinguished Name (DN) of the service account used
+ by the Ceph Object Gateway
+- ``rgw_ldap_secret``: Path to file containing credentials for ``rgw_ldap_binddn``
+- ``rgw_ldap_searchdn``: Specifies the base in the directory information tree
+  for searching users. This might be your users' organizational unit or some
+ more specific Organizational Unit (OU).
+- ``rgw_ldap_dnattr``: The attribute being used in the constructed search
+ filter to match a username. Depending on your Directory Information Tree
+ (DIT) this would probably be ``uid`` or ``cn``. The generated filter string
+ will be, e.g., ``cn=some_username``.
+- ``rgw_ldap_searchfilter``: If not specified, the Ceph Object Gateway
+  automatically constructs the search filter with the ``rgw_ldap_dnattr``
+  setting. Use this parameter to narrow the list of allowed users in very
+  flexible ways. Consult the *Using a custom search filter to limit user
+  access* section for details. A combined configuration example is shown
+  below.
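+
+Putting these parameters together, a minimal ``ceph.conf`` excerpt might look
+like the following; the hostnames, DNs, and paths are placeholders for
+illustration only::
+
+ [client.radosgw.gateway]
+ # illustrative values only; adjust for your directory layout
+ rgw_s3_auth_use_ldap = true
+ rgw_ldap_uri = ldaps://ldap.example.com:636
+ rgw_ldap_binddn = "uid=ceph,ou=system,dc=example,dc=com"
+ rgw_ldap_secret = /etc/ceph/bindpass
+ rgw_ldap_searchdn = "ou=users,dc=example,dc=com"
+ rgw_ldap_dnattr = "uid"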
+
+Using a custom search filter to limit user access
+=================================================
+
+There are two ways to use the ``rgw_ldap_searchfilter`` parameter:
+
+Specifying a partial filter to further limit the constructed search filter
+--------------------------------------------------------------------------
+
+An example for a partial filter:
+
+::
+
+ "objectclass=inetorgperson"
+
+The Ceph Object Gateway will generate the search filter as usual with the
+user name from the token and the value of ``rgw_ldap_dnattr``. The constructed
+filter is then combined with the partial filter from the
+``rgw_ldap_searchfilter`` setting. Depending on the user name and the settings,
+the final search filter might become:
+
+::
+
+ "(&(uid=hari)(objectclass=inetorgperson))"
+
+So user ``hari`` will only be granted access if he is found in the LDAP
+directory, has an object class of ``inetorgperson``, and specified a valid
+password.
+
+Specifying a complete filter
+----------------------------
+
+A complete filter must contain a ``@USERNAME@`` token which will be substituted
+with the user name during the authentication attempt. The ``rgw_ldap_dnattr``
+parameter is no longer used in this case. For example, to limit valid users
+to a specific group, use the following filter:
+
+::
+
+ "(&(uid=@USERNAME@)(memberOf=cn=ceph-users,ou=groups,dc=mycompany,dc=com))"
+
+.. note:: Using the ``memberOf`` attribute in LDAP searches requires server side
+   support from your specific LDAP server implementation.
+
+Generating an access token for LDAP authentication
+==================================================
+
+The ``radosgw-token`` utility generates the access token based on the LDAP
+user name and password. It will output a base-64 encoded string which is the
+access token.
+
+::
+
+ # export RGW_ACCESS_KEY_ID="<username>"
+ # export RGW_SECRET_ACCESS_KEY="<password>"
+ # radosgw-token --encode
+
+.. important:: The access token is a base-64 encoded JSON struct and contains
+   the LDAP credentials as clear text.
+
+Alternatively, users can also generate the token manually by base-64-encoding
+this JSON snippet, if they do not have the ``radosgw-token`` tool installed.
+
+::
+
+ {
+ "RGW_TOKEN": {
+ "version": 1,
+ "type": "ldap",
+ "id": "your_username",
+ "key": "your_clear_text_password_here"
+ }
+ }
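+
+For example, using standard shell tools (the JSON is the same snippet as
+above, with whitespace removed)::
+
+ # echo -n '{"RGW_TOKEN":{"version":1,"type":"ldap","id":"your_username","key":"your_clear_text_password_here"}}' | base64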
+
+Using the access token
+======================
+
+Use your favorite S3 client and specify the token as the access key in your
+client or environment variables.
+
+::
+
+ # export AWS_ACCESS_KEY_ID=<base64-encoded token generated by radosgw-token>
+ # export AWS_SECRET_ACCESS_KEY="" # define this with an empty string, otherwise tools might complain about missing env variables.
+
+.. important:: The access token is a base-64 encoded JSON struct and contains
+   the LDAP credentials as clear text. DO NOT share it unless
+   you want to share your clear text password!
diff --git a/doc/radosgw/lua-scripting.rst b/doc/radosgw/lua-scripting.rst
new file mode 100644
index 000000000..0f385c46f
--- /dev/null
+++ b/doc/radosgw/lua-scripting.rst
@@ -0,0 +1,393 @@
+=============
+Lua Scripting
+=============
+
+.. versionadded:: Pacific
+
+.. contents::
+
+This feature allows users to upload Lua scripts to different contexts in the radosgw. The two supported contexts are "preRequest", which executes a script before each
+operation is performed, and "postRequest", which executes a script after each operation is performed. A script may be uploaded to address requests for users of a specific tenant.
+The script can access fields in the request and modify some fields. All Lua language features can be used in the script.
+
+By default, all Lua standard libraries are available in the script. In order to allow other Lua modules to be used in the script, we support adding packages to an allowlist:
+
+ - All packages in the allowlist are reinstalled using the luarocks package manager on radosgw restart. Therefore a restart is needed for adding or removing of packages to take effect
+ - To add a package that contains C source code that needs to be compiled, use the `--allow-compilation` flag. In this case a C compiler needs to be available on the host
+ - Lua packages are installed in, and used from, a directory local to the radosgw. This means that Lua packages in the allowlist are separate from any Lua packages available on the host.
+   By default, this directory would be `/tmp/luarocks/<entity name>`. Its prefix part (`/tmp/luarocks/`) can be set to a different location via the `rgw_luarocks_location` configuration
+   parameter, as shown in the example below. Note that this parameter should not be set to one of the default locations where luarocks installs packages (e.g. `$HOME/.luarocks`, `/usr/lib64/lua`, `/usr/share/lua`)
+
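+For example, to relocate the package install prefix, set the parameter in the
+radosgw section of ``ceph.conf`` (the path here is a placeholder) and restart
+the radosgw::
+
+ [client.radosgw.gateway]
+ # placeholder path; must not be a default luarocks install location
+ rgw_luarocks_location = /var/lib/rgw/luarocks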
+
+.. toctree::
+ :maxdepth: 1
+
+
+Script Management via CLI
+-------------------------
+
+To upload a script:
+
+::
+
+ # radosgw-admin script put --infile={lua-file} --context={preRequest|postRequest} [--tenant={tenant-name}]
+
+
+To print the content of the script to standard output:
+
+::
+
+ # radosgw-admin script get --context={preRequest|postRequest} [--tenant={tenant-name}]
+
+
+To remove the script:
+
+::
+
+ # radosgw-admin script rm --context={preRequest|postRequest} [--tenant={tenant-name}]
+
+
+Package Management via CLI
+--------------------------
+
+To add a package to the allowlist:
+
+::
+
+ # radosgw-admin script-package add --package={package name} [--allow-compilation]
+
+
+To remove a package from the allowlist:
+
+::
+
+ # radosgw-admin script-package rm --package={package name}
+
+
+To print the list of packages in the allowlist:
+
+::
+
+ # radosgw-admin script-package list
+
+
+Context Free Functions
+----------------------
+Debug Log
+~~~~~~~~~
+The ``RGWDebugLog()`` function accepts a string and prints it to the debug log with priority 20.
+Each log message is prefixed with ``Lua INFO:``. This function has no return value.
+
+Request Fields
+-----------------
+
+.. warning:: This feature is experimental. Fields may be removed or renamed in the future.
+
+.. note::
+
+ - Although Lua is a case-sensitive language, field names provided by the radosgw are case-insensitive. Function names remain case-sensitive.
+ - Fields marked "optional" can have a nil value.
+ - Fields marked as "iterable" can be used by the pairs() function and with the # length operator.
+ - All table fields can be used with the bracket operator ``[]``.
+ - ``time`` fields are strings with the following format: ``%Y-%m-%d %H:%M:%S``.
+
+
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| Field | Type | Description | Iterable | Writeable | Optional |
++====================================================+==========+==============================================================+==========+===========+==========+
+| ``Request.RGWOp`` | string | radosgw operation | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.DecodedURI`` | string | decoded URI | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.ContentLength`` | integer | size of the request | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.GenericAttributes`` | table | string to string generic attributes map | yes | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Response`` | table | response to the request | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Response.HTTPStatusCode`` | integer | HTTP status code | no | yes | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Response.HTTPStatus`` | string | HTTP status text | no | yes | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Response.RGWCode`` | integer | radosgw error code | no | yes | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Response.Message`` | string | response message | no | yes | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.SwiftAccountName`` | string | swift account name | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket`` | table | info on the bucket | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Tenant`` | string | tenant of the bucket | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Name`` | string | bucket name | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Marker`` | string | bucket marker (initial id) | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Id`` | string | bucket id | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Count`` | integer | number of objects in the bucket | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Size`` | integer | total size of objects in the bucket | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.ZoneGroupId`` | string | zone group of the bucket | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.CreationTime`` | time | creation time of the bucket | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.MTime`` | time | modification time of the bucket | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Quota`` | table | bucket quota | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Quota.MaxSize`` | integer | bucket quota max size | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Quota.MaxObjects`` | integer | bucket quota max number of objects | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Quota.Enabled``                   | boolean  | bucket quota is enabled                                      | no       | no        | no       |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.Quota.Rounded`` | boolean | bucket quota is rounded to 4K | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.PlacementRule`` | table | bucket placement rule | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.PlacementRule.Name`` | string | bucket placement rule name | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.PlacementRule.StorageClass`` | string | bucket placement rule storage class | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.User`` | table | bucket owner | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.User.Tenant`` | string | bucket owner tenant | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Bucket.User.Id`` | string | bucket owner id | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Object`` | table | info on the object | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Object.Name`` | string | object name | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Object.Instance`` | string | object version | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Object.Id`` | string | object id | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Object.Size`` | integer | object size | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Object.MTime`` | time | object mtime | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.CopyFrom`` | table | information on copy operation | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.CopyFrom.Tenant`` | string | tenant of the object copied from | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.CopyFrom.Bucket`` | string | bucket of the object copied from | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.CopyFrom.Object`` | table | object copied from. See: ``Request.Object`` | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.ObjectOwner`` | table | object owner | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.ObjectOwner.DisplayName`` | string | object owner display name | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.ObjectOwner.User`` | table | object user. See: ``Request.Bucket.User`` | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.ZoneGroup.Name`` | string | name of zone group | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.ZoneGroup.Endpoint`` | string | endpoint of zone group | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserAcl`` | table | user ACL | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserAcl.Owner`` | table | user ACL owner. See: ``Request.ObjectOwner`` | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserAcl.Grants`` | table | user ACL map of string to grant | yes | no | no |
+| | | note: grants without an Id are not presented when iterated | | | |
+| | | and only one of them can be accessed via brackets | | | |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserAcl.Grants["<name>"]`` | table | user ACL grant | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserAcl.Grants["<name>"].Type`` | integer | user ACL grant type | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserAcl.Grants["<name>"].User`` | table | user ACL grant user | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserAcl.Grants["<name>"].User.Tenant`` | table | user ACL grant user tenant | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserAcl.Grants["<name>"].User.Id`` | table | user ACL grant user id | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserAcl.Grants["<name>"].GroupType`` | integer | user ACL grant group type | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserAcl.Grants["<name>"].Referer`` | string | user ACL grant referer | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.BucketAcl`` | table | bucket ACL. See: ``Request.UserAcl`` | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.ObjectAcl`` | table | object ACL. See: ``Request.UserAcl`` | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Environment`` | table | string to string environment map | yes | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Policy`` | table | policy | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Policy.Text`` | string | policy text | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Policy.Id`` | string | policy Id | no | no | yes |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Policy.Statements`` | table | list of string statements | yes | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserPolicies`` | table | list of user policies | yes | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.UserPolicies[<index>]`` | table | user policy. See: ``Request.Policy`` | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.RGWId`` | string | radosgw host id: ``<host>-<zone>-<zonegroup>`` | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.HTTP`` | table | HTTP header | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.HTTP.Parameters`` | table | string to string parameter map | yes | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.HTTP.Resources`` | table | string to string resource map | yes | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.HTTP.Metadata`` | table | string to string metadata map | yes | yes | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.HTTP.Host`` | string | host name | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.HTTP.Method`` | string | HTTP method | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.HTTP.URI`` | string | URI | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.HTTP.QueryString`` | string | HTTP query string | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.HTTP.Domain`` | string | domain name | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Time`` | time | request time | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Dialect`` | string | "S3" or "Swift" | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Id`` | string | request Id | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.TransactionId`` | string | transaction Id | no | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+| ``Request.Tags`` | table | object tags map | yes | no | no |
++----------------------------------------------------+----------+--------------------------------------------------------------+----------+-----------+----------+
+
+Request Functions
+--------------------
+Operations Log
+~~~~~~~~~~~~~~
+The ``Request.Log()`` function prints the request into the operations log. This function has no parameters. It returns 0 for success and an error code if it fails.
+
+Lua Code Samples
+----------------
+- Print information on source and destination objects in case of copy:
+
+.. code-block:: lua
+
+ function print_object(object)
+ RGWDebugLog(" Name: " .. object.Name)
+ RGWDebugLog(" Instance: " .. object.Instance)
+ RGWDebugLog(" Id: " .. object.Id)
+ RGWDebugLog(" Size: " .. object.Size)
+ RGWDebugLog(" MTime: " .. object.MTime)
+ end
+
+ if Request.CopyFrom and Request.Object and Request.CopyFrom.Object then
+ RGWDebugLog("copy from object:")
+ print_object(Request.CopyFrom.Object)
+ RGWDebugLog("to object:")
+ print_object(Request.Object)
+ end
+
+- Print ACLs via a "generic function":
+
+.. code-block:: lua
+
+ function print_owner(owner)
+ RGWDebugLog("Owner:")
+    RGWDebugLog("  Display Name: " .. owner.DisplayName)
+ RGWDebugLog(" Id: " .. owner.User.Id)
+    RGWDebugLog("  Tenant: " .. owner.User.Tenant)
+ end
+
+ function print_acl(acl_type)
+ index = acl_type .. "ACL"
+ acl = Request[index]
+ if acl then
+ RGWDebugLog(acl_type .. "ACL Owner")
+ print_owner(acl.Owner)
+      RGWDebugLog("  there are " .. #acl.Grants .. " grants for the owner")
+ for k,v in pairs(acl.Grants) do
+ RGWDebugLog(" Grant Key: " .. k)
+ RGWDebugLog(" Grant Type: " .. v.Type)
+ RGWDebugLog(" Grant Group Type: " .. v.GroupType)
+ RGWDebugLog(" Grant Referer: " .. v.Referer)
+ RGWDebugLog(" Grant User Tenant: " .. v.User.Tenant)
+ RGWDebugLog(" Grant User Id: " .. v.User.Id)
+ end
+ else
+ RGWDebugLog("no " .. acl_type .. " ACL in request: " .. Request.Id)
+ end
+ end
+
+ print_acl("User")
+ print_acl("Bucket")
+ print_acl("Object")
+
+- Use of operations log only in case of errors:
+
+.. code-block:: lua
+
+ if Request.Response.HTTPStatusCode ~= 200 then
+ RGWDebugLog("request is bad, use ops log")
+ rc = Request.Log()
+ RGWDebugLog("ops log return code: " .. rc)
+ end
+
+- Set values into the error message:
+
+.. code-block:: lua
+
+ if Request.Response.HTTPStatusCode == 500 then
+ Request.Response.Message = "<Message> something bad happened :-( </Message>"
+ end
+
+- Add metadata that was not originally sent by the client to objects:
+
+In the `preRequest` context we should add:
+
+.. code-block:: lua
+
+ if Request.RGWOp == 'put_obj' then
+ Request.HTTP.Metadata["x-amz-meta-mydata"] = "my value"
+ end
+
+In the `postRequest` context we look at the metadata:
+
+.. code-block:: lua
+
+ RGWDebugLog("number of metadata entries is: " .. #Request.HTTP.Metadata)
+ for k, v in pairs(Request.HTTP.Metadata) do
+ RGWDebugLog("key=" .. k .. ", " .. "value=" .. v)
+ end
+
+- Use modules to create a Unix socket based, JSON encoded "access log":
+
+First we should add the following packages to the allowlist:
+
+::
+
+ # radosgw-admin script-package add --package=luajson
+ # radosgw-admin script-package add --package=luasocket --allow-compilation
+
+
+Then restart the radosgw and upload the following script to the `postRequest` context:
+
+.. code-block:: lua
+
+ if Request.RGWOp == "get_obj" then
+ local json = require("json")
+ local socket = require("socket")
+ local unix = require("socket.unix")
+ local s = assert(unix())
+ E = {}
+
+ msg = {bucket = (Request.Bucket or (Request.CopyFrom or E).Bucket).Name,
+ time = Request.Time,
+ operation = Request.RGWOp,
+ http_status = Request.Response.HTTPStatusCode,
+ error_code = Request.Response.HTTPStatus,
+ object_size = Request.Object.Size,
+ trans_id = Request.TransactionId}
+
+ assert(s:connect("/tmp/socket"))
+ assert(s:send(json.encode(msg).."\n"))
+ assert(s:close())
+ end
+
diff --git a/doc/radosgw/mfa.rst b/doc/radosgw/mfa.rst
new file mode 100644
index 000000000..0cbead85f
--- /dev/null
+++ b/doc/radosgw/mfa.rst
@@ -0,0 +1,102 @@
+.. _rgw_mfa:
+
+==========================================
+RGW Support for Multifactor Authentication
+==========================================
+
+.. versionadded:: Mimic
+
+The S3 multifactor authentication (MFA) feature allows
+users to require the use of a one-time password when removing
+objects from certain buckets. The buckets need to be configured
+with versioning and MFA enabled, which can be done through
+the S3 API.
+
+Time-based one-time password (TOTP) tokens can be assigned to a user
+through radosgw-admin. Each token has a secret seed and a serial
+id that is assigned to it. Tokens are added to the user, and can
+be listed, removed, and re-synchronized.
+
+Multisite
+=========
+
+While the MFA IDs are set on the user's metadata, the
+actual MFA one-time password configuration resides in the local zone's
+OSDs. Therefore, in a multi-site environment it is advisable to use
+different tokens for different zones.
+
+
+Terminology
+=============
+
+- ``TOTP``: Time-based One Time Password
+
+- ``token serial``: a string that represents the ID of a TOTP token
+
+- ``token seed``: the secret seed that is used to calculate the TOTP
+
+- ``totp seconds``: the time resolution that is being used for TOTP generation
+
+- ``totp window``: the number of TOTP tokens that are checked before and after the current token when validating the token
+
+- ``totp pin``: the valid value of a TOTP token at a certain time
+
+
+Admin commands
+==============
+
+Create a new MFA TOTP token
+------------------------------------
+
+::
+
+ # radosgw-admin mfa create --uid=<user-id> \
+ --totp-serial=<serial> \
+ --totp-seed=<seed> \
+ [ --totp-seed-type=<hex|base32> ] \
+ [ --totp-seconds=<num-seconds> ] \
+ [ --totp-window=<twindow> ]
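+
+For example, generating a random hexadecimal seed on the spot and creating a
+token from it (the uid and serial here are placeholders)::
+
+ # SEED=$(head -10 /dev/urandom | sha512sum | cut -b 1-30)
+ # radosgw-admin mfa create --uid=johndoe --totp-serial=MFAtest --totp-seed=$SEED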
+
+List MFA TOTP tokens
+---------------------
+
+::
+
+ # radosgw-admin mfa list --uid=<user-id>
+
+
+Show MFA TOTP token
+------------------------------------
+
+::
+
+ # radosgw-admin mfa get --uid=<user-id> --totp-serial=<serial>
+
+
+Delete MFA TOTP token
+------------------------
+
+::
+
+ # radosgw-admin mfa remove --uid=<user-id> --totp-serial=<serial>
+
+
+Check MFA TOTP token
+--------------------------------
+
+Test a TOTP token pin; this is needed to validate that TOTP functions correctly. ::
+
+ # radosgw-admin mfa check --uid=<user-id> --totp-serial=<serial> \
+ --totp-pin=<pin>
+
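+A current pin for a given hexadecimal seed can be generated with an external
+tool such as ``oathtool`` (option details may vary between versions)::
+
+ # oathtool -d6 --totp $SEED
+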
+
+Re-sync MFA TOTP token
+--------------------------------
+
+To re-sync a TOTP token (in case of time skew), two consecutive
+pins must be provided: the previous pin and the current pin. ::
+
+ # radosgw-admin mfa resync --uid=<user-id> --totp-serial=<serial> \
+     --totp-pin=<prev-pin> --totp-pin=<current-pin>
+
+
diff --git a/doc/radosgw/multisite-sync-policy.rst b/doc/radosgw/multisite-sync-policy.rst
new file mode 100644
index 000000000..56342473c
--- /dev/null
+++ b/doc/radosgw/multisite-sync-policy.rst
@@ -0,0 +1,714 @@
+=====================
+Multisite Sync Policy
+=====================
+
+.. versionadded:: Octopus
+
+Multisite bucket-granularity sync policy provides fine grained control of data movement between buckets in different zones. It extends the zone sync mechanism. Previously buckets were treated symmetrically, that is -- each (data) zone held a mirror of a bucket that should be the same as in all the other zones. With the bucket-granularity sync policy it is possible for buckets to diverge, and a bucket can pull data from other buckets (ones that don't share its name or its ID) in different zones. The sync process therefore no longer assumes that the bucket sync source and the bucket sync destination refer to the same bucket.
+
+The sync policy supersedes the old zonegroup coarse configuration (sync_from*). The sync policy can be configured at the zonegroup level (and if it is configured it replaces the old style config), but it can also be configured at the bucket level.
+
+The sync policy can define multiple groups, each of which can contain lists of data-flow configurations as well as lists of pipe configurations. The data-flow defines the flow of data between the different zones. It can define symmetrical data flow, in which multiple zones sync data from each other, and it can define directional data flow, in which the data moves in one way from one zone to another.
+
+A pipe defines the actual buckets that can use these data flows, and the properties that are associated with them (for example: source object prefix).
+
+A sync policy group can be in one of three states:
+
++----------------------------+----------------------------------------+
+| Value | Description |
++============================+========================================+
+| ``enabled`` | sync is allowed and enabled |
++----------------------------+----------------------------------------+
+| ``allowed`` | sync is allowed |
++----------------------------+----------------------------------------+
+| ``forbidden`` | sync (as defined by this group) is not |
+| | allowed and can override other groups |
++----------------------------+----------------------------------------+
+
+A policy can be defined at the bucket level. A bucket level sync policy inherits the data flow of the zonegroup policy, and can only define a subset of what the zonegroup allows.
+
+A wildcard zone and a wildcard bucket parameter in the policy define all relevant zones or all relevant buckets. In the context of a bucket policy this means the current bucket instance. A disaster recovery configuration where entire zones are mirrored doesn't require configuring anything on the buckets. However, for fine grained bucket sync it is better to configure the pipes to be synced by allowing them (status=allowed) at the zonegroup level (e.g., using wildcards), and only enabling the specific sync at the bucket level (status=enabled). If needed, the policy at the bucket level can limit the data movement to specific relevant zones.
+
+.. important:: Any changes to the zonegroup policy need to be applied on the
+   zonegroup master zone, and require a period update and commit (see the
+   example below). Changes to the bucket policy need to be applied on the
+   zonegroup master zone. The changes are dynamically handled by rgw.
+
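+For example, after modifying the zonegroup policy on the master zone::
+
+  # radosgw-admin period update --commit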
+
+S3 Replication API
+~~~~~~~~~~~~~~~~~~
+
+The S3 bucket replication API has also been implemented, and allows users to create replication rules between different buckets. Note though that while the AWS replication feature allows bucket replication within the same zone, rgw does not allow it at the moment. However, the rgw API also added a new 'Zone' array that allows users to select to which zones the specific bucket will be synced.
+
+
+Sync Policy Control Reference
+=============================
+
+
+Get Sync Policy
+~~~~~~~~~~~~~~~
+
+To retrieve the current zonegroup sync policy, or a specific bucket policy:
+
+::
+
+ # radosgw-admin sync policy get [--bucket=<bucket>]
+
+
+Create Sync Policy Group
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+To create a sync policy group:
+
+::
+
+ # radosgw-admin sync group create [--bucket=<bucket>] \
+ --group-id=<group-id> \
+                                     --status=<enabled | allowed | forbidden>
+
+
+Modify Sync Policy Group
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+To modify a sync policy group:
+
+::
+
+ # radosgw-admin sync group modify [--bucket=<bucket>] \
+ --group-id=<group-id> \
+                                     --status=<enabled | allowed | forbidden>
+
+
+Show Sync Policy Group
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+To show a sync policy group:
+
+::
+
+ # radosgw-admin sync group get [--bucket=<bucket>] \
+ --group-id=<group-id>
+
+
+Remove Sync Policy Group
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+To remove a sync policy group:
+
+::
+
+ # radosgw-admin sync group remove [--bucket=<bucket>] \
+ --group-id=<group-id>
+
+
+
+Create Sync Flow
+~~~~~~~~~~~~~~~~
+
+- To create or update directional sync flow:
+
+::
+
+ # radosgw-admin sync group flow create [--bucket=<bucket>] \
+ --group-id=<group-id> \
+ --flow-id=<flow-id> \
+ --flow-type=directional \
+ --source-zone=<source_zone> \
+ --dest-zone=<dest_zone>
+
+
+- To create or update symmetrical sync flow:
+
+::
+
+ # radosgw-admin sync group flow create [--bucket=<bucket>] \
+ --group-id=<group-id> \
+ --flow-id=<flow-id> \
+ --flow-type=symmetrical \
+ --zones=<zones>
+
+
+Where zones are a comma-separated list of all the zones that need to be added to the flow.
+
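+For example, assuming two zones named ``us-east`` and ``us-west``:
+
+::
+
+  # radosgw-admin sync group flow create --group-id=group1 \
+                                         --flow-id=flow-mirror \
+                                         --flow-type=symmetrical \
+                                         --zones=us-east,us-west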
+
+Remove Sync Flow Zones
+~~~~~~~~~~~~~~~~~~~~~~
+
+- To remove directional sync flow:
+
+::
+
+ # radosgw-admin sync group flow remove [--bucket=<bucket>] \
+ --group-id=<group-id> \
+ --flow-id=<flow-id> \
+ --flow-type=directional \
+ --source-zone=<source_zone> \
+ --dest-zone=<dest_zone>
+
+
+- To remove specific zones from symmetrical sync flow:
+
+::
+
+ # radosgw-admin sync group flow remove [--bucket=<bucket>] \
+ --group-id=<group-id> \
+ --flow-id=<flow-id> \
+ --flow-type=symmetrical \
+ --zones=<zones>
+
+
+Where zones are a comma-separated list of all the zones to remove from the flow.
+
+
+- To remove symmetrical sync flow:
+
+::
+
+ # radosgw-admin sync group flow remove [--bucket=<bucket>] \
+ --group-id=<group-id> \
+ --flow-id=<flow-id> \
+ --flow-type=symmetrical
+
+
+Create Sync Pipe
+~~~~~~~~~~~~~~~~
+
+To create a sync group pipe, or to update its parameters:
+
+
+::
+
+ # radosgw-admin sync group pipe create [--bucket=<bucket>] \
+ --group-id=<group-id> \
+ --pipe-id=<pipe-id> \
+ --source-zones=<source_zones> \
+ [--source-bucket=<source_buckets>] \
+ [--source-bucket-id=<source_bucket_id>] \
+ --dest-zones=<dest_zones> \
+ [--dest-bucket=<dest_buckets>] \
+ [--dest-bucket-id=<dest_bucket_id>] \
+ [--prefix=<source_prefix>] \
+ [--prefix-rm] \
+ [--tags-add=<tags>] \
+ [--tags-rm=<tags>] \
+ [--dest-owner=<owner>] \
+ [--storage-class=<storage_class>] \
+ [--mode=<system | user>] \
+ [--uid=<user_id>]
+
+
+Zones are either a list of zones, or '*' (wildcard). A wildcard zone means any zone that matches the sync flow rules.
+Buckets are either a bucket name, or '*' (wildcard). A wildcard bucket means the current bucket.
+A prefix can be defined to filter source objects.
+Tags are passed as a comma-separated list of 'key=value' pairs.
+The destination owner can be set to force an owner on the destination objects. If user mode is selected, only the destination bucket owner can be set.
+The destination storage class can also be configured.
+A user id can be set for user mode; it is the user under which the sync operation will be executed (for permissions validation).
+
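+For example, a zonegroup-level pipe (with illustrative group and pipe IDs) that syncs only objects under the ``foo/`` prefix between all zones that the flow allows:
+
+::
+
+  # radosgw-admin sync group pipe create --group-id=group1 \
+                                         --pipe-id=pipe-prefix \
+                                         --source-zones='*' --source-bucket='*' \
+                                         --dest-zones='*' --dest-bucket='*' \
+                                         --prefix=foo/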
+
+Remove Sync Pipe
+~~~~~~~~~~~~~~~~
+
+To remove specific sync group pipe params, or the entire pipe:
+
+
+::
+
+ # radosgw-admin sync group pipe remove [--bucket=<bucket>] \
+ --group-id=<group-id> \
+ --pipe-id=<pipe-id> \
+ [--source-zones=<source_zones>] \
+ [--source-bucket=<source_buckets>] \
+ [--source-bucket-id=<source_bucket_id>] \
+ [--dest-zones=<dest_zones>] \
+ [--dest-bucket=<dest_buckets>] \
+ [--dest-bucket-id=<dest_bucket_id>]
+
+
+Sync Info
+~~~~~~~~~
+
+To get information about the expected sync sources and targets (as defined by the sync policy):
+
+::
+
+ # radosgw-admin sync info [--bucket=<bucket>] \
+ [--effective-zone-name=<zone>]
+
+
+Since a bucket can define a policy that moves data from it towards a different bucket in a different zone, creating the policy also generates a list of bucket dependencies. These dependencies are used as hints when a sync of any particular bucket happens. The fact that a bucket references another bucket does not mean it actually syncs to/from it, as the data flow might not permit it.
+
+
+Examples
+========
+
+The system in these examples includes three zones: ``us-east`` (the master zone), ``us-west``, and ``us-west-2``.
+
+Example 1: Two Zones, Complete Mirror
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This is similar to the older (pre-``Octopus``) sync capabilities, but done via the new sync policy engine. Note that changes to the zonegroup sync policy require a period update and commit.
+
+
+::
+
+ [us-east] $ radosgw-admin sync group create --group-id=group1 --status=allowed
+ [us-east] $ radosgw-admin sync group flow create --group-id=group1 \
+ --flow-id=flow-mirror --flow-type=symmetrical \
+ --zones=us-east,us-west
+ [us-east] $ radosgw-admin sync group pipe create --group-id=group1 \
+ --pipe-id=pipe1 --source-zones='*' \
+ --source-bucket='*' --dest-zones='*' \
+ --dest-bucket='*'
+ [us-east] $ radosgw-admin sync group modify --group-id=group1 --status=enabled
+ [us-east] $ radosgw-admin period update --commit
+
+ $ radosgw-admin sync info --bucket=buck
+ {
+ "sources": [
+ {
+ "id": "pipe1",
+ "source": {
+ "zone": "us-west",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ "dest": {
+ "zone": "us-east",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ "params": {
+ ...
+ }
+ }
+ ],
+ "dests": [
+ {
+ "id": "pipe1",
+ "source": {
+ "zone": "us-east",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ "dest": {
+ "zone": "us-west",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ ...
+ }
+ ],
+ ...
+ }
+    }
+
+Note that the ``id`` field in the output above reflects the pipe rule
+that generated that entry; a single rule can generate multiple sync
+entries, as can be seen in the example.
+
+::
+
+ [us-west] $ radosgw-admin sync info --bucket=buck
+ {
+ "sources": [
+ {
+ "id": "pipe1",
+ "source": {
+ "zone": "us-east",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ "dest": {
+ "zone": "us-west",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ ...
+ }
+ ],
+ "dests": [
+ {
+ "id": "pipe1",
+ "source": {
+ "zone": "us-west",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ "dest": {
+ "zone": "us-east",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ ...
+ }
+ ],
+ ...
+ }
+
+
+
+Example 2: Directional, Entire Zone Backup
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Also similar to the older sync capabilities. Here we add a third zone, ``us-west-2``, that will be a replica of ``us-west``; data will not be replicated back from it.
+
+::
+
+ [us-east] $ radosgw-admin sync group flow create --group-id=group1 \
+ --flow-id=us-west-backup --flow-type=directional \
+ --source-zone=us-west --dest-zone=us-west-2
+ [us-east] $ radosgw-admin period update --commit
+
+
+Note that ``us-west`` now has two destinations:
+
+::
+
+ [us-west] $ radosgw-admin sync info --bucket=buck
+ {
+ "sources": [
+ {
+ "id": "pipe1",
+ "source": {
+ "zone": "us-east",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ "dest": {
+ "zone": "us-west",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ ...
+ }
+ ],
+ "dests": [
+ {
+ "id": "pipe1",
+ "source": {
+ "zone": "us-west",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ "dest": {
+ "zone": "us-east",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ ...
+ },
+ {
+ "id": "pipe1",
+ "source": {
+ "zone": "us-west",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ "dest": {
+ "zone": "us-west-2",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ ...
+ }
+ ],
+ ...
+ }
+
+
+Whereas ``us-west-2`` has only a source and no destinations:
+
+::
+
+ [us-west-2] $ radosgw-admin sync info --bucket=buck
+ {
+ "sources": [
+ {
+ "id": "pipe1",
+ "source": {
+ "zone": "us-west",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ "dest": {
+ "zone": "us-west-2",
+ "bucket": "buck:115b12b3-....4409.1"
+ },
+ ...
+ }
+ ],
+ "dests": [],
+ ...
+ }
+
+
+
+Example 3: Mirror a Specific Bucket
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This uses the same group configuration, but switches it to the ``allowed`` state, which means that sync is allowed but not enabled.
+
+::
+
+ [us-east] $ radosgw-admin sync group modify --group-id=group1 --status=allowed
+ [us-east] $ radosgw-admin period update --commit
+
+
+Now we create a bucket-level policy rule for the existing bucket ``buck2``. Note that the bucket needs to exist before this policy can be set, and that admin commands that modify bucket policies need to run on the master zone; however, they do not require a period update. There is no need to change the data flow, as it is inherited from the zonegroup policy. A bucket policy flow can only be a subset of the flow defined in the zonegroup policy. The same goes for pipes, although a bucket policy can enable pipes that are not enabled (albeit not forbidden) at the zonegroup level.
+
+::
+
+ [us-east] $ radosgw-admin sync group create --bucket=buck2 \
+ --group-id=buck2-default --status=enabled
+
+ [us-east] $ radosgw-admin sync group pipe create --bucket=buck2 \
+ --group-id=buck2-default --pipe-id=pipe1 \
+ --source-zones='*' --dest-zones='*'
+
+
+
+Example 4: Limit Bucket Sync To Specific Zones
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This will only sync ``buck3`` to ``us-east`` (from any zone that the flow allows to sync into ``us-east``).
+
+::
+
+ [us-east] $ radosgw-admin sync group create --bucket=buck3 \
+ --group-id=buck3-default --status=enabled
+
+ [us-east] $ radosgw-admin sync group pipe create --bucket=buck3 \
+ --group-id=buck3-default --pipe-id=pipe1 \
+ --source-zones='*' --dest-zones=us-east
+
+
+
+Example 5: Sync From a Different Bucket
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Note that bucket sync only works (currently) across zones and not within the same zone.
+
+Set ``buck4`` to pull data from ``buck5``:
+
+::
+
+  [us-east] $ radosgw-admin sync group create --bucket=buck4 \
+ --group-id=buck4-default --status=enabled
+
+ [us-east] $ radosgw-admin sync group pipe create --bucket=buck4 \
+ --group-id=buck4-default --pipe-id=pipe1 \
+ --source-zones='*' --source-bucket=buck5 \
+ --dest-zones='*'
+
+
+We can also limit the pipe to specific zones. For example, the following
+will only sync data originating in ``us-west``:
+
+::
+
+ [us-east] $ radosgw-admin sync group pipe modify --bucket=buck4 \
+ --group-id=buck4-default --pipe-id=pipe1 \
+ --source-zones=us-west --source-bucket=buck5 \
+ --dest-zones='*'
+
+
+Checking the sync info for ``buck5`` on ``us-west`` is interesting:
+
+::
+
+ [us-west] $ radosgw-admin sync info --bucket=buck5
+ {
+ "sources": [],
+ "dests": [],
+ "hints": {
+ "sources": [],
+ "dests": [
+ "buck4:115b12b3-....14433.2"
+ ]
+ },
+ "resolved-hints-1": {
+ "sources": [],
+ "dests": [
+ {
+ "id": "pipe1",
+ "source": {
+ "zone": "us-west",
+ "bucket": "buck5"
+ },
+ "dest": {
+ "zone": "us-east",
+ "bucket": "buck4:115b12b3-....14433.2"
+ },
+ ...
+ },
+ {
+ "id": "pipe1",
+ "source": {
+ "zone": "us-west",
+ "bucket": "buck5"
+ },
+ "dest": {
+ "zone": "us-west-2",
+ "bucket": "buck4:115b12b3-....14433.2"
+ },
+ ...
+ }
+ ]
+ },
+ "resolved-hints": {
+ "sources": [],
+ "dests": []
+ }
+ }
+
+
+Note that there are resolved hints, which means that the bucket ``buck5`` found out about ``buck4`` syncing from it indirectly, and not from its own policy (the policy for ``buck5`` itself is empty).
+
+
+Example 6: Sync To Different Bucket
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The same mechanism can be used to configure data to be synced to a bucket (vs. synced from one, as in the previous example). Note that internally, data is still pulled from the source at the destination zone.
+
+Set ``buck6`` to "push" data to ``buck5``:
+
+::
+
+ [us-east] $ radosgw-admin sync group create --bucket=buck6 \
+ --group-id=buck6-default --status=enabled
+
+ [us-east] $ radosgw-admin sync group pipe create --bucket=buck6 \
+ --group-id=buck6-default --pipe-id=pipe1 \
+ --source-zones='*' --source-bucket='*' \
+ --dest-zones='*' --dest-bucket=buck5
+
+
+A wildcard bucket name means the current bucket in the context of bucket sync policy.
+
+Combined with the configuration in Example 5, we can now write data to ``buck6`` on ``us-east``; the data will sync to ``buck5`` on ``us-west``, and from there it will be distributed to ``buck4`` on ``us-east`` and on ``us-west-2``.
+
+Example 7: Source Filters
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Sync from ``buck8`` to ``buck9``, but only objects that start with ``foo/``:
+
+::
+
+ [us-east] $ radosgw-admin sync group create --bucket=buck8 \
+ --group-id=buck8-default --status=enabled
+
+ [us-east] $ radosgw-admin sync group pipe create --bucket=buck8 \
+ --group-id=buck8-default --pipe-id=pipe-prefix \
+ --prefix=foo/ --source-zones='*' --dest-zones='*' \
+ --dest-bucket=buck9
+
+
+Also sync from ``buck8`` to ``buck9`` any object that has the tags ``color=blue`` or ``color=red``:
+
+::
+
+ [us-east] $ radosgw-admin sync group pipe create --bucket=buck8 \
+ --group-id=buck8-default --pipe-id=pipe-tags \
+ --tags-add=color=blue,color=red --source-zones='*' \
+ --dest-zones='*' --dest-bucket=buck9
+
+
+We can then check the expected sync in ``us-east`` (for example):
+
+::
+
+ [us-east] $ radosgw-admin sync info --bucket=buck8
+ {
+ "sources": [],
+ "dests": [
+ {
+ "id": "pipe-prefix",
+ "source": {
+ "zone": "us-east",
+ "bucket": "buck8:115b12b3-....14433.5"
+ },
+ "dest": {
+ "zone": "us-west",
+ "bucket": "buck9"
+ },
+ "params": {
+ "source": {
+ "filter": {
+ "prefix": "foo/",
+ "tags": []
+ }
+ },
+ ...
+ }
+ },
+ {
+ "id": "pipe-tags",
+ "source": {
+ "zone": "us-east",
+ "bucket": "buck8:115b12b3-....14433.5"
+ },
+ "dest": {
+ "zone": "us-west",
+ "bucket": "buck9"
+ },
+ "params": {
+ "source": {
+ "filter": {
+ "tags": [
+ {
+ "key": "color",
+ "value": "blue"
+ },
+ {
+ "key": "color",
+ "value": "red"
+ }
+ ]
+ }
+ },
+ ...
+ }
+ }
+ ],
+ ...
+ }
+
+
+Note that there aren't any sources, only two different destinations (one for each configuration). When the sync process happens it will select the relevant rule for each object it syncs.
+
+Prefixes and tags can be combined, in which case an object will need to match both in order to be synced. A priority parameter can also be passed; when multiple rules match (and have the same source and destination), it determines which of the rules is used.
+
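+For example, to require both the ``foo/`` prefix and a ``color=blue`` tag in a single pipe (``pipe-both`` is an illustrative ID, reusing the buckets from the example above):
+
+::
+
+  [us-east] $ radosgw-admin sync group pipe create --bucket=buck8 \
+                --group-id=buck8-default --pipe-id=pipe-both \
+                --prefix=foo/ --tags-add=color=blue \
+                --source-zones='*' --dest-zones='*' \
+                --dest-bucket=buck9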
+
+Example 8: Destination Params: Storage Class
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The storage class of the destination objects can be configured:
+
+::
+
+ [us-east] $ radosgw-admin sync group create --bucket=buck10 \
+ --group-id=buck10-default --status=enabled
+
+ [us-east] $ radosgw-admin sync group pipe create --bucket=buck10 \
+ --group-id=buck10-default \
+ --pipe-id=pipe-storage-class \
+ --source-zones='*' --dest-zones=us-west-2 \
+ --storage-class=CHEAP_AND_SLOW
+
+
+Example 9: Destination Params: Destination Owner Translation
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Set the owner of the destination objects to the destination bucket owner.
+This requires specifying the uid of the destination bucket owner:
+
+::
+
+ [us-east] $ radosgw-admin sync group create --bucket=buck11 \
+ --group-id=buck11-default --status=enabled
+
+ [us-east] $ radosgw-admin sync group pipe create --bucket=buck11 \
+ --group-id=buck11-default --pipe-id=pipe-dest-owner \
+ --source-zones='*' --dest-zones='*' \
+ --dest-bucket=buck12 --dest-owner=joe
+
+Example 10: Destination Params: User Mode
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+User mode ensures that the user has permission both to read the objects and to write to the destination bucket. This requires specifying the uid of the user in whose context the operation executes.
+
+::
+
+ [us-east] $ radosgw-admin sync group pipe modify --bucket=buck11 \
+ --group-id=buck11-default --pipe-id=pipe-dest-owner \
+ --mode=user --uid=jenny
+
+
+
diff --git a/doc/radosgw/multisite.rst b/doc/radosgw/multisite.rst
new file mode 100644
index 000000000..a4a89d268
--- /dev/null
+++ b/doc/radosgw/multisite.rst
@@ -0,0 +1,1537 @@
+.. _multisite:
+
+==========
+Multi-Site
+==========
+
+.. versionadded:: Jewel
+
+A single zone configuration typically consists of one zone group containing one
+zone and one or more `ceph-radosgw` instances where you may load-balance gateway
+client requests between the instances. In a single zone configuration, typically
+multiple gateway instances point to a single Ceph storage cluster. However, Kraken
+supports several multi-site configuration options for the Ceph Object Gateway:
+
+- **Multi-zone:** A more advanced configuration consists of one zone group and
+ multiple zones, each zone with one or more `ceph-radosgw` instances. Each zone
+  provide disaster recovery for the zone group should one of the zones experience
+ provides disaster recovery for the zone group should one of the zones experience
+ a significant failure. In Kraken, each zone is active and may receive write
+ operations. In addition to disaster recovery, multiple active zones may also
+ serve as a foundation for content delivery networks.
+
+- **Multi-zone-group:** Formerly called 'regions', Ceph Object Gateway can also
+ support multiple zone groups, each zone group with one or more zones. Objects
+ stored to zones in one zone group within the same realm as another zone
+ group will share a global object namespace, ensuring unique object IDs across
+ zone groups and zones.
+
+- **Multiple Realms:** In Kraken, the Ceph Object Gateway supports the notion
+ of realms, which can be a single zone group or multiple zone groups and
+ a globally unique namespace for the realm. Multiple realms provide the ability
+ to support numerous configurations and namespaces.
+
+Replicating object data between zones within a zone group looks something
+like this:
+
+.. image:: ../images/zone-sync2.png
+ :align: center
+
+For additional details on setting up a cluster, see `Ceph Object Gateway for
+Production <https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html/ceph_object_gateway_for_production/index/>`__.
+
+Functional Changes from Infernalis
+==================================
+
+In Kraken, you can configure each Ceph Object Gateway to
+work in an active-active zone configuration, allowing for writes to
+non-master zones.
+
+The multi-site configuration is stored within a container called a
+"realm." The realm stores zone groups, zones, and a time "period" with
+multiple epochs for tracking changes to the configuration. In Kraken,
+the ``ceph-radosgw`` daemons handle the synchronization,
+eliminating the need for a separate synchronization agent. Additionally,
+the new approach to synchronization allows the Ceph Object Gateway to
+operate with an "active-active" configuration instead of
+"active-passive".
+
+Requirements and Assumptions
+============================
+
+A multi-site configuration requires at least two Ceph storage clusters,
+preferably given distinct cluster names, and at least two Ceph Object
+Gateway instances, one for each Ceph storage cluster.
+
+This guide assumes at least two Ceph storage clusters are in geographically
+separate locations; however, the configuration can work on the same
+site. This guide also assumes two Ceph object gateway servers named
+``rgw1`` and ``rgw2``.
+
+.. important:: Running a single Ceph storage cluster is NOT recommended unless you have
+ low latency WAN connections.
+
+A multi-site configuration requires a master zone group and a master
+zone. Additionally, each zone group requires a master zone. Zone groups
+may have one or more secondary or non-master zones.
+
+In this guide, the ``rgw1`` host will serve as the master zone of the
+master zone group; and, the ``rgw2`` host will serve as the secondary zone
+of the master zone group.
+
+See `Pools`_ for instructions on creating and tuning pools for Ceph
+Object Storage.
+
+See `Sync Policy Config`_ for instructions on defining fine grained bucket sync
+policy rules.
+
+.. _master-zone-label:
+
+Configuring a Master Zone
+=========================
+
+All gateways in a multi-site configuration will retrieve their
+configuration from a ``ceph-radosgw`` daemon on a host within the master
+zone group and master zone. To configure your gateways in a multi-site
+configuration, choose a ``ceph-radosgw`` instance to configure the
+master zone group and master zone.
+
+Create a Realm
+--------------
+
+A realm contains the multi-site configuration of zone groups and zones
+and also serves to enforce a globally unique namespace within the realm.
+
+Create a new realm for the multi-site configuration by opening a command
+line interface on a host identified to serve in the master zone group
+and zone. Then, execute the following:
+
+.. prompt:: bash #
+
+ radosgw-admin realm create --rgw-realm={realm-name} [--default]
+
+For example:
+
+.. prompt:: bash #
+
+ radosgw-admin realm create --rgw-realm=movies --default
+
+If the cluster will have a single realm, specify the ``--default`` flag.
+If ``--default`` is specified, ``radosgw-admin`` will use this realm by
+default. If ``--default`` is not specified, adding zone-groups and zones
+requires specifying either the ``--rgw-realm`` flag or the
+``--realm-id`` flag to identify the realm when adding zone groups and
+zones.
+
+After creating the realm, ``radosgw-admin`` will echo back the realm
+configuration. For example:
+
+::
+
+ {
+ "id": "0956b174-fe14-4f97-8b50-bb7ec5e1cf62",
+ "name": "movies",
+ "current_period": "1950b710-3e63-4c41-a19e-46a715000980",
+ "epoch": 1
+ }
+
+.. note:: Ceph generates a unique ID for the realm, which allows the renaming
+ of a realm if the need arises.
+
+Create a Master Zone Group
+--------------------------
+
+A realm must have at least one zone group, which will serve as the
+master zone group for the realm.
+
+Create a new master zone group for the multi-site configuration by
+opening a command line interface on a host identified to serve in the
+master zone group and zone. Then, execute the following:
+
+.. prompt:: bash #
+
+ radosgw-admin zonegroup create --rgw-zonegroup={name} --endpoints={url} [--rgw-realm={realm-name}|--realm-id={realm-id}] --master --default
+
+For example:
+
+.. prompt:: bash #
+
+ radosgw-admin zonegroup create --rgw-zonegroup=us --endpoints=http://rgw1:80 --rgw-realm=movies --master --default
+
+If the realm will only have a single zone group, specify the
+``--default`` flag. If ``--default`` is specified, ``radosgw-admin``
+will use this zone group by default when adding new zones. If
+``--default`` is not specified, adding zones will require either the
+``--rgw-zonegroup`` flag or the ``--zonegroup-id`` flag to identify the
+zone group when adding or modifying zones.
+
+After creating the master zone group, ``radosgw-admin`` will echo back
+the zone group configuration. For example:
+
+::
+
+ {
+ "id": "f1a233f5-c354-4107-b36c-df66126475a6",
+ "name": "us",
+ "api_name": "us",
+ "is_master": "true",
+ "endpoints": [
+ "http:\/\/rgw1:80"
+ ],
+ "hostnames": [],
+ "hostnames_s3webzone": [],
+ "master_zone": "",
+ "zones": [],
+ "placement_targets": [],
+ "default_placement": "",
+ "realm_id": "0956b174-fe14-4f97-8b50-bb7ec5e1cf62"
+ }
+
+Create a Master Zone
+--------------------
+
+.. important:: Zones must be created on a Ceph Object Gateway node that will be
+ within the zone.
+
+Create a new master zone for the multi-site configuration by opening a
+command line interface on a host identified to serve in the master zone
+group and zone. Then, execute the following:
+
+.. prompt:: bash #
+
+ radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
+ --rgw-zone={zone-name} \
+ --master --default \
+ --endpoints={http://fqdn}[,{http://fqdn}]
+
+For example:
+
+.. prompt:: bash #
+
+ radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east \
+ --master --default \
+                             --endpoints=http://rgw1:80
+
+
+.. note:: The ``--access-key`` and ``--secret`` aren’t specified. These
+ settings will be added to the zone once the user is created in the
+ next section.
+
+.. important:: The following steps assume a multi-site configuration using newly
+ installed systems that aren’t storing data yet. DO NOT DELETE the
+ ``default`` zone and its pools if you are already using it to store
+ data, or the data will be deleted and unrecoverable.
+
+Delete Default Zone Group and Zone
+----------------------------------
+
+Delete the ``default`` zone if it exists. Make sure to remove it from
+the default zone group first.
+
+.. prompt:: bash #
+
+ radosgw-admin zonegroup delete --rgw-zonegroup=default --rgw-zone=default
+ radosgw-admin period update --commit
+ radosgw-admin zone delete --rgw-zone=default
+ radosgw-admin period update --commit
+ radosgw-admin zonegroup delete --rgw-zonegroup=default
+ radosgw-admin period update --commit
+
+Finally, delete the ``default`` pools in your Ceph storage cluster if
+they exist.
+
+.. important:: The following step assumes a multi-site configuration using newly
+ installed systems that aren’t currently storing data. DO NOT DELETE
+ the ``default`` zone group if you are already using it to store
+ data.
+
+.. prompt:: bash #
+
+ ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it
+ ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it
+ ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it
+ ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it
+ ceph osd pool rm default.rgw.users.uid default.rgw.users.uid --yes-i-really-really-mean-it
+
+Create a System User
+--------------------
+
+The ``ceph-radosgw`` daemons must authenticate before pulling realm and
+period information. In the master zone, create a system user to
+facilitate authentication between daemons.
+
+
+.. prompt:: bash #
+
+ radosgw-admin user create --uid="{user-name}" --display-name="{Display Name}" --system
+
+For example:
+
+.. prompt:: bash #
+
+ radosgw-admin user create --uid="synchronization-user" --display-name="Synchronization User" --system
+
+Make a note of the ``access_key`` and ``secret_key``, as the secondary
+zones will require them to authenticate with the master zone.
+
+Finally, add the system user to the master zone.
+
+.. prompt:: bash #
+
+ radosgw-admin zone modify --rgw-zone={zone-name} --access-key={access-key} --secret={secret}
+ radosgw-admin period update --commit
+
+Update the Period
+-----------------
+
+After updating the master zone configuration, update the period.
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+.. note:: Updating the period changes the epoch, and ensures that other zones
+ will receive the updated configuration.
+
+Update the Ceph Configuration File
+----------------------------------
+
+Update the Ceph configuration file on master zone hosts by adding the
+``rgw_zone`` configuration option and the name of the master zone to the
+instance entry.
+
+::
+
+ [client.rgw.{instance-name}]
+ ...
+ rgw_zone={zone-name}
+
+For example:
+
+::
+
+ [client.rgw.rgw1]
+ host = rgw1
+ rgw frontends = "civetweb port=80"
+ rgw_zone=us-east
+
+Start the Gateway
+-----------------
+
+On the object gateway host, start and enable the Ceph Object Gateway
+service:
+
+.. prompt:: bash #
+
+ systemctl start ceph-radosgw@rgw.`hostname -s`
+ systemctl enable ceph-radosgw@rgw.`hostname -s`
+
+.. _secondary-zone-label:
+
+Configure Secondary Zones
+=========================
+
+Zones within a zone group replicate all data to ensure that each zone
+has the same data. When creating the secondary zone, execute all of the
+following operations on a host identified to serve the secondary zone.
+
+.. note:: To add a third zone, follow the same procedures as for adding the
+   secondary zone. Use a different zone name.
+
+.. important:: You must execute metadata operations, such as user creation, on a
+ host within the master zone. The master zone and the secondary zone
+ can receive bucket operations, but the secondary zone redirects
+ bucket operations to the master zone. If the master zone is down,
+ bucket operations will fail.
+
+Pull the Realm
+--------------
+
+Using the URL path, access key and secret of the master zone in the
+master zone group, pull the realm configuration to the host. To pull a
+non-default realm, specify the realm using the ``--rgw-realm`` or
+``--realm-id`` configuration options.
+
+.. prompt:: bash #
+
+ radosgw-admin realm pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}
+
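+For example, using the ``rgw1`` master zone gateway from this guide (substitute the system user's access key and secret):
+
+.. prompt:: bash #
+
+   radosgw-admin realm pull --url=http://rgw1:80 --access-key={access-key} --secret={secret}
+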
+.. note:: Pulling the realm also retrieves the remote's current period
+ configuration, and makes it the current period on this host as well.
+
+If this realm is the default realm or the only realm, make the realm the
+default realm.
+
+.. prompt:: bash #
+
+ radosgw-admin realm default --rgw-realm={realm-name}
+
+Create a Secondary Zone
+-----------------------
+
+.. important:: Zones must be created on a Ceph Object Gateway node that will be
+ within the zone.
+
+Create a secondary zone for the multi-site configuration by opening a
+command line interface on a host identified to serve the secondary zone.
+Specify the zone group ID, the new zone name and an endpoint for the
+zone. **DO NOT** use the ``--master`` or ``--default`` flags. In Kraken,
+all zones run in an active-active configuration by
+default; that is, a gateway client may write data to any zone and the
+zone will replicate the data to all other zones within the zone group.
+If the secondary zone should not accept write operations, specify the
+``--read-only`` flag to create an active-passive configuration between
+the master zone and the secondary zone. Additionally, provide the
+``access_key`` and ``secret_key`` of the generated system user stored in
+the master zone of the master zone group. Execute the following:
+
+.. prompt:: bash #
+
+ radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
+ --rgw-zone={zone-name} \
+ --access-key={system-key} --secret={secret} \
+ --endpoints=http://{fqdn}:80 \
+ [--read-only]
+
+For example:
+
+.. prompt:: bash #
+
+ radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-west \
+ --access-key={system-key} --secret={secret} \
+ --endpoints=http://rgw2:80
+
+.. important:: The following steps assume a multi-site configuration using newly
+ installed systems that aren’t storing data. **DO NOT DELETE** the
+ ``default`` zone and its pools if you are already using it to store
+ data, or the data will be lost and unrecoverable.
+
+Delete the default zone if needed.
+
+.. prompt:: bash #
+
+ radosgw-admin zone delete --rgw-zone=default
+
+Finally, delete the default pools in your Ceph storage cluster if
+needed.
+
+.. prompt:: bash #
+
+ ceph osd pool rm default.rgw.control default.rgw.control --yes-i-really-really-mean-it
+ ceph osd pool rm default.rgw.data.root default.rgw.data.root --yes-i-really-really-mean-it
+ ceph osd pool rm default.rgw.gc default.rgw.gc --yes-i-really-really-mean-it
+ ceph osd pool rm default.rgw.log default.rgw.log --yes-i-really-really-mean-it
+ ceph osd pool rm default.rgw.users.uid default.rgw.users.uid --yes-i-really-really-mean-it
+
+Update the Ceph Configuration File
+----------------------------------
+
+Update the Ceph configuration file on the secondary zone hosts by adding
+the ``rgw_zone`` configuration option and the name of the secondary zone
+to the instance entry.
+
+::
+
+ [client.rgw.{instance-name}]
+ ...
+ rgw_zone={zone-name}
+
+For example:
+
+::
+
+ [client.rgw.rgw2]
+ host = rgw2
+ rgw frontends = "civetweb port=80"
+ rgw_zone=us-west
+
+Update the Period
+-----------------
+
+After updating the master zone configuration, update the period.
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+.. note:: Updating the period changes the epoch, and ensures that other zones
+ will receive the updated configuration.
+
+Start the Gateway
+-----------------
+
+On the object gateway host, start and enable the Ceph Object Gateway
+service:
+
+.. prompt:: bash #
+
+ systemctl start ceph-radosgw@rgw.`hostname -s`
+ systemctl enable ceph-radosgw@rgw.`hostname -s`
+
+Check Synchronization Status
+----------------------------
+
+Once the secondary zone is up and running, check the synchronization
+status. Synchronization copies users and buckets created in the master
+zone to the secondary zone.
+
+.. prompt:: bash #
+
+ radosgw-admin sync status
+
+The output will provide the status of synchronization operations. For
+example:
+
+::
+
+ realm f3239bc5-e1a8-4206-a81d-e1576480804d (earth)
+ zonegroup c50dbb7e-d9ce-47cc-a8bb-97d9b399d388 (us)
+ zone 4c453b70-4a16-4ce8-8185-1893b05d346e (us-west)
+ metadata sync syncing
+ full sync: 0/64 shards
+ metadata is caught up with master
+ incremental sync: 64/64 shards
+ data sync source: 1ee9da3e-114d-4ae3-a8a4-056e8a17f532 (us-east)
+ syncing
+ full sync: 0/128 shards
+ incremental sync: 128/128 shards
+ data is caught up with source
+
+.. note:: Secondary zones accept bucket operations; however, secondary zones
+ redirect bucket operations to the master zone and then synchronize
+ with the master zone to receive the result of the bucket operations.
+ If the master zone is down, bucket operations executed on the
+ secondary zone will fail, but object operations should succeed.
+
+
+Maintenance
+===========
+
+Checking the Sync Status
+------------------------
+
+Information about the replication status of a zone can be queried with:
+
+.. prompt:: bash $
+
+ radosgw-admin sync status
+
+::
+
+ realm b3bc1c37-9c44-4b89-a03b-04c269bea5da (earth)
+ zonegroup f54f9b22-b4b6-4a0e-9211-fa6ac1693f49 (us)
+ zone adce11c9-b8ed-4a90-8bc5-3fc029ff0816 (us-2)
+ metadata sync syncing
+ full sync: 0/64 shards
+ incremental sync: 64/64 shards
+ metadata is behind on 1 shards
+ oldest incremental change not applied: 2017-03-22 10:20:00.0.881361s
+ data sync source: 341c2d81-4574-4d08-ab0f-5a2a7b168028 (us-1)
+ syncing
+ full sync: 0/128 shards
+ incremental sync: 128/128 shards
+ data is caught up with source
+ source: 3b5d1a3f-3f27-4e4a-8f34-6072d4bb1275 (us-3)
+ syncing
+ full sync: 0/128 shards
+ incremental sync: 128/128 shards
+ data is caught up with source
+
+Changing the Metadata Master Zone
+---------------------------------
+
+.. important:: Care must be taken when changing which zone is the metadata
+ master. If a zone has not finished syncing metadata from the current
+ master zone, it will be unable to serve any remaining entries when
+ promoted to master and those changes will be lost. For this reason,
+ waiting for a zone's ``radosgw-admin sync status`` to catch up on
+ metadata sync before promoting it to master is recommended.
+
+Similarly, if changes to metadata are being processed by the current master
+zone while another zone is being promoted to master, those changes are
+likely to be lost. To avoid this, shutting down any ``radosgw`` instances
+on the previous master zone is recommended. After promoting another zone,
+its new period can be fetched with ``radosgw-admin period pull`` and the
+gateway(s) can be restarted.
+
+To promote a zone (for example, zone ``us-2`` in zonegroup ``us``) to metadata
+master, run the following commands on that zone:
+
+.. prompt:: bash $
+
+ radosgw-admin zone modify --rgw-zone=us-2 --master
+ radosgw-admin zonegroup modify --rgw-zonegroup=us --master
+ radosgw-admin period update --commit
+
+This will generate a new period, and the radosgw instance(s) in zone ``us-2``
+will send this period to other zones.
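+
+On the remaining zones, a sketch of fetching the new period and restarting the gateways (the URL and credentials of the promoted zone's gateway are placeholders):
+
+.. prompt:: bash $
+
+   radosgw-admin period pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}
+   systemctl restart ceph-radosgw@rgw.`hostname -s`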
+
+Failover and Disaster Recovery
+==============================
+
+If the master zone should fail, failover to the secondary zone for
+disaster recovery.
+
+1. Make the secondary zone the master and default zone. For example:
+
+ .. prompt:: bash #
+
+ radosgw-admin zone modify --rgw-zone={zone-name} --master --default
+
+ By default, Ceph Object Gateway will run in an active-active
+ configuration. If the cluster was configured to run in an
+ active-passive configuration, the secondary zone is a read-only zone.
+ Remove the ``--read-only`` status to allow the zone to receive write
+ operations. For example:
+
+ .. prompt:: bash #
+
+ radosgw-admin zone modify --rgw-zone={zone-name} --master --default \
+ --read-only=false
+
+2. Update the period to make the changes take effect.
+
+ .. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+3. Finally, restart the Ceph Object Gateway.
+
+ .. prompt:: bash #
+
+ systemctl restart ceph-radosgw@rgw.`hostname -s`
+
+If the former master zone recovers, revert the operation.
+
+1. From the recovered zone, pull the latest realm configuration
+ from the current master zone:
+
+ .. prompt:: bash #
+
+ radosgw-admin realm pull --url={url-to-master-zone-gateway} \
+ --access-key={access-key} --secret={secret}
+
+2. Make the recovered zone the master and default zone.
+
+ .. prompt:: bash #
+
+ radosgw-admin zone modify --rgw-zone={zone-name} --master --default
+
+3. Update the period to make the changes take effect.
+
+ .. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+4. Then, restart the Ceph Object Gateway in the recovered zone.
+
+ .. prompt:: bash #
+
+ systemctl restart ceph-radosgw@rgw.`hostname -s`
+
+5. If the secondary zone needs to be a read-only configuration, update
+ the secondary zone.
+
+ .. prompt:: bash #
+
+ radosgw-admin zone modify --rgw-zone={zone-name} --read-only
+
+6. Update the period to make the changes take effect.
+
+ .. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+7. Finally, restart the Ceph Object Gateway in the secondary zone.
+
+ .. prompt:: bash #
+
+ systemctl restart ceph-radosgw@rgw.`hostname -s`
+
+.. _rgw-multisite-migrate-from-single-site:
+
+Migrating a Single Site System to Multi-Site
+============================================
+
+To migrate from a single site system with a ``default`` zone group and
+zone to a multi site system, use the following steps:
+
+1. Create a realm. Replace ``<name>`` with the realm name.
+
+ .. prompt:: bash #
+
+ radosgw-admin realm create --rgw-realm=<name> --default
+
+2. Rename the default zone and zonegroup. Replace ``<name>`` with the
+ zonegroup or zone name.
+
+ .. prompt:: bash #
+
+ radosgw-admin zonegroup rename --rgw-zonegroup default --zonegroup-new-name=<name>
+      radosgw-admin zone rename --rgw-zone default --zone-new-name=<name> --rgw-zonegroup=<name>
+
+3. Configure the master zonegroup. Replace ``<name>`` with the realm or
+ zonegroup name. Replace ``<fqdn>`` with the fully qualified domain
+ name(s) in the zonegroup.
+
+ .. prompt:: bash #
+
+ radosgw-admin zonegroup modify --rgw-realm=<name> --rgw-zonegroup=<name> --endpoints http://<fqdn>:80 --master --default
+
+4. Configure the master zone. Replace ``<name>`` with the realm,
+ zonegroup or zone name. Replace ``<fqdn>`` with the fully qualified
+ domain name(s) in the zonegroup.
+
+ .. prompt:: bash #
+
+ radosgw-admin zone modify --rgw-realm=<name> --rgw-zonegroup=<name> \
+ --rgw-zone=<name> --endpoints http://<fqdn>:80 \
+ --access-key=<access-key> --secret=<secret-key> \
+ --master --default
+
+5. Create a system user. Replace ``<user-id>`` with the username.
+ Replace ``<display-name>`` with a display name. It may contain
+ spaces.
+
+ .. prompt:: bash #
+
+ radosgw-admin user create --uid=<user-id> \
+ --display-name="<display-name>" \
+ --access-key=<access-key> \
+ --secret=<secret-key> --system
+
+6. Commit the updated configuration.
+
+ .. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+7. Finally, restart the Ceph Object Gateway.
+
+ .. prompt:: bash #
+
+ systemctl restart ceph-radosgw@rgw.`hostname -s`
+
+After completing this procedure, proceed to `Configure a Secondary
+Zone <#configure-secondary-zones>`__ to create a secondary zone
+in the master zone group.
+
+
+Multi-Site Configuration Reference
+==================================
+
+The following sections provide additional details and command-line
+usage for realms, periods, zone groups and zones.
+
+Realms
+------
+
+A realm represents a globally unique namespace consisting of one or more
+zonegroups containing one or more zones, and zones containing buckets,
+which in turn contain objects. A realm enables the Ceph Object Gateway
+to support multiple namespaces and their configuration on the same
+hardware.
+
+A realm contains the notion of periods. Each period represents the state
+of the zone group and zone configuration in time. Each time you make a
+change to a zonegroup or zone, update the period and commit it.
+
+By default, the Ceph Object Gateway does not create a realm
+for backward compatibility with Infernalis and earlier releases.
+However, as a best practice, we recommend creating realms for new
+clusters.
+
+Create a Realm
+~~~~~~~~~~~~~~
+
+To create a realm, execute ``realm create`` and specify the realm name.
+If the realm is the default, specify ``--default``.
+
+.. prompt:: bash #
+
+ radosgw-admin realm create --rgw-realm={realm-name} [--default]
+
+For example:
+
+.. prompt:: bash #
+
+ radosgw-admin realm create --rgw-realm=movies --default
+
+By specifying ``--default``, the realm is implied with each
+``radosgw-admin`` call unless ``--rgw-realm`` and the realm name are
+explicitly provided.
+
+Make a Realm the Default
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+One realm in the list of realms should be the default realm. There may
+be only one default realm. If there is only one realm and it wasn’t
+specified as the default realm when it was created, make it the default
+realm. Alternatively, to change which realm is the default, execute:
+
+.. prompt:: bash #
+
+ radosgw-admin realm default --rgw-realm=movies
+
+.. note:: When the realm is default, the command line assumes
+ ``--rgw-realm=<realm-name>`` as an argument.
+
+Delete a Realm
+~~~~~~~~~~~~~~
+
+To delete a realm, execute ``realm rm`` and specify the realm name.
+
+.. prompt:: bash #
+
+ radosgw-admin realm rm --rgw-realm={realm-name}
+
+For example:
+
+.. prompt:: bash #
+
+ radosgw-admin realm rm --rgw-realm=movies
+
+Get a Realm
+~~~~~~~~~~~
+
+To get a realm, execute ``realm get`` and specify the realm name.
+
+.. prompt:: bash #
+
+ radosgw-admin realm get --rgw-realm=<name>
+
+For example:
+
+.. prompt:: bash #
+
+ radosgw-admin realm get --rgw-realm=movies [> filename.json]
+
+The CLI will echo a JSON object with the realm properties.
+
+::
+
+ {
+ "id": "0a68d52e-a19c-4e8e-b012-a8f831cb3ebc",
+ "name": "movies",
+ "current_period": "b0c5bbef-4337-4edd-8184-5aeab2ec413b",
+ "epoch": 1
+ }
+
+Use ``>`` and an output file name to output the JSON object to a file.
+
+Set a Realm
+~~~~~~~~~~~
+
+To set a realm, execute ``realm set``, specify the realm name, and
+``--infile=`` with an input file name.
+
+.. prompt:: bash #
+
+ radosgw-admin realm set --rgw-realm=<name> --infile=<infilename>
+
+For example:
+
+.. prompt:: bash #
+
+ radosgw-admin realm set --rgw-realm=movies --infile=filename.json
+
+List Realms
+~~~~~~~~~~~
+
+To list realms, execute ``realm list``.
+
+.. prompt:: bash #
+
+ radosgw-admin realm list
+
+List Realm Periods
+~~~~~~~~~~~~~~~~~~
+
+To list realm periods, execute ``realm list-periods``.
+
+.. prompt:: bash #
+
+ radosgw-admin realm list-periods
+
+Pull a Realm
+~~~~~~~~~~~~
+
+To pull a realm from the node containing the master zone group and
+master zone to a node containing a secondary zone group or zone, execute
+``realm pull`` on the node that will receive the realm configuration.
+
+.. prompt:: bash #
+
+ radosgw-admin realm pull --url={url-to-master-zone-gateway} --access-key={access-key} --secret={secret}
+
+Rename a Realm
+~~~~~~~~~~~~~~
+
+A realm is not part of the period. Consequently, renaming the realm is
+only applied locally, and will not get pulled with ``realm pull``. When
+renaming a realm with multiple zones, run the command on each zone. To
+rename a realm, execute the following:
+
+.. prompt:: bash #
+
+ radosgw-admin realm rename --rgw-realm=<current-name> --realm-new-name=<new-realm-name>
+
+.. note:: DO NOT use ``realm set`` to change the ``name`` parameter. That
+ changes the internal name only. Specifying ``--rgw-realm`` would
+ still use the old realm name.
+
+Zone Groups
+-----------
+
+The Ceph Object Gateway supports multi-site deployments and a global
+namespace by using the notion of zone groups. Formerly called a region
+in Infernalis, a zone group defines the geographic location of one or more Ceph
+Object Gateway instances within one or more zones.
+
+Configuring zone groups differs from typical configuration procedures,
+because not all of the settings end up in a Ceph configuration file. You
+can list zone groups, get a zone group configuration, and set a zone
+group configuration.
+
+Create a Zone Group
+~~~~~~~~~~~~~~~~~~~
+
+Creating a zone group consists of specifying the zone group name.
+Creating a zone group assumes it will live in the default realm unless
+``--rgw-realm=<realm-name>`` is specified. If the zonegroup is the
+default zonegroup, specify the ``--default`` flag. If the zonegroup is
+the master zonegroup, specify the ``--master`` flag. For example:
+
+.. prompt:: bash #
+
+ radosgw-admin zonegroup create --rgw-zonegroup=<name> [--rgw-realm=<name>][--master] [--default]
+
+
+.. note:: Use ``zonegroup modify --rgw-zonegroup=<zonegroup-name>`` to modify
+ an existing zone group’s settings.
+
+Make a Zone Group the Default
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+One zonegroup in the list of zonegroups should be the default zonegroup.
+There may be only one default zonegroup. If there is only one zonegroup
+and it wasn’t specified as the default zonegroup when it was created,
+make it the default zonegroup. Alternatively, to change which zonegroup
+is the default, execute:
+
+.. prompt:: bash #
+
+ radosgw-admin zonegroup default --rgw-zonegroup=comedy
+
+.. note:: When the zonegroup is default, the command line assumes
+ ``--rgw-zonegroup=<zonegroup-name>`` as an argument.
+
+Then, update the period:
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+Add a Zone to a Zone Group
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To add a zone to a zonegroup, execute the following:
+
+.. prompt:: bash #
+
+ radosgw-admin zonegroup add --rgw-zonegroup=<name> --rgw-zone=<name>
+
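+For example, to add a hypothetical third zone ``us-west-2`` to the ``us`` zonegroup:
+
+.. prompt:: bash #
+
+   radosgw-admin zonegroup add --rgw-zonegroup=us --rgw-zone=us-west-2
+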
+Then, update the period:
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+Remove a Zone from a Zone Group
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To remove a zone from a zonegroup, execute the following:
+
+.. prompt:: bash #
+
+ radosgw-admin zonegroup remove --rgw-zonegroup=<name> --rgw-zone=<name>
+
+Then, update the period:
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+Rename a Zone Group
+~~~~~~~~~~~~~~~~~~~
+
+To rename a zonegroup, execute the following:
+
+
+.. prompt:: bash #
+
+ radosgw-admin zonegroup rename --rgw-zonegroup=<name> --zonegroup-new-name=<name>
+
+Then, update the period:
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+Delete a Zone Group
+~~~~~~~~~~~~~~~~~~~
+
+To delete a zonegroup, execute the following:
+
+.. prompt:: bash #
+
+ radosgw-admin zonegroup delete --rgw-zonegroup=<name>
+
+Then, update the period:
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+List Zone Groups
+~~~~~~~~~~~~~~~~
+
+A Ceph cluster contains a list of zone groups. To list the zone groups,
+execute:
+
+.. prompt:: bash #
+
+ radosgw-admin zonegroup list
+
+``radosgw-admin`` returns a JSON-formatted list of zone groups.
+
+::
+
+ {
+ "default_info": "90b28698-e7c3-462c-a42d-4aa780d24eda",
+ "zonegroups": [
+ "us"
+ ]
+ }
+
+Get a Zone Group Map
+~~~~~~~~~~~~~~~~~~~~
+
+To list the details of each zone group, execute:
+
+.. prompt:: bash #
+
+ radosgw-admin zonegroup-map get
+
+.. note:: If you receive a ``failed to read zonegroup map`` error, run
+ ``radosgw-admin zonegroup-map update`` as ``root`` first.
+
+Get a Zone Group
+~~~~~~~~~~~~~~~~
+
+To view the configuration of a zone group, execute:
+
+.. prompt:: bash #
+
+   radosgw-admin zonegroup get [--rgw-zonegroup=<zonegroup>]
+
+The zone group configuration looks like this:
+
+::
+
+ {
+ "id": "90b28698-e7c3-462c-a42d-4aa780d24eda",
+ "name": "us",
+ "api_name": "us",
+ "is_master": "true",
+ "endpoints": [
+ "http:\/\/rgw1:80"
+ ],
+ "hostnames": [],
+ "hostnames_s3website": [],
+ "master_zone": "9248cab2-afe7-43d8-a661-a40bf316665e",
+ "zones": [
+ {
+ "id": "9248cab2-afe7-43d8-a661-a40bf316665e",
+ "name": "us-east",
+ "endpoints": [
+ "http:\/\/rgw1"
+ ],
+ "log_meta": "true",
+ "log_data": "true",
+ "bucket_index_max_shards": 0,
+ "read_only": "false"
+ },
+ {
+ "id": "d1024e59-7d28-49d1-8222-af101965a939",
+ "name": "us-west",
+ "endpoints": [
+ "http:\/\/rgw2:80"
+ ],
+ "log_meta": "false",
+ "log_data": "true",
+ "bucket_index_max_shards": 0,
+ "read_only": "false"
+ }
+ ],
+ "placement_targets": [
+ {
+ "name": "default-placement",
+ "tags": []
+ }
+ ],
+ "default_placement": "default-placement",
+ "realm_id": "ae031368-8715-4e27-9a99-0c9468852cfe"
+ }
+
+Set a Zone Group
+~~~~~~~~~~~~~~~~
+
+Defining a zone group consists of creating a JSON object, specifying at
+least the required settings:
+
+1. ``name``: The name of the zone group. Required.
+
+2. ``api_name``: The API name for the zone group. Optional.
+
+3. ``is_master``: Determines if the zone group is the master zone group.
+ Required. **note:** You can only have one master zone group.
+
+4. ``endpoints``: A list of all the endpoints in the zone group. For
+ example, you may use multiple domain names to refer to the same zone
+ group. Remember to escape the forward slashes (``\/``). You may also
+ specify a port (``fqdn:port``) for each endpoint. Optional.
+
+5. ``hostnames``: A list of all the hostnames in the zone group. For
+ example, you may use multiple domain names to refer to the same zone
+ group. Optional. The ``rgw dns name`` setting will automatically be
+ included in this list. You should restart the gateway daemon(s) after
+ changing this setting.
+
+6. ``master_zone``: The master zone for the zone group. Optional. Uses
+ the default zone if not specified. **note:** You can only have one
+ master zone per zone group.
+
+7. ``zones``: A list of all zones within the zone group. Each zone has a
+ name (required), a list of endpoints (optional), and whether or not
+ the gateway will log metadata and data operations (false by default).
+
+8. ``placement_targets``: A list of placement targets (optional). Each
+ placement target contains a name (required) for the placement target
+ and a list of tags (optional) so that only users with the tag can use
+ the placement target (i.e., the user’s ``placement_tags`` field in
+ the user info).
+
+9. ``default_placement``: The default placement target for the object
+ index and object data. Set to ``default-placement`` by default. You
+ may also set a per-user default placement in the user info for each
+ user.
+
+To set a zone group, create a JSON object consisting of the required
+fields, save the object to a file (e.g., ``zonegroup.json``); then,
+execute the following command:
+
+.. prompt:: bash #
+
+ radosgw-admin zonegroup set --infile zonegroup.json
+
+Where ``zonegroup.json`` is the JSON file you created.
+
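+In practice, it is often easiest to start from the current configuration rather than writing the JSON from scratch. For example (the file name is illustrative):
+
+.. prompt:: bash #
+
+   radosgw-admin zonegroup get --rgw-zonegroup=us > zonegroup.json
+   radosgw-admin zonegroup set --infile zonegroup.json
+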
+.. important:: The ``default`` zone group ``is_master`` setting is ``true`` by
+ default. If you create a new zone group and want to make it the
+ master zone group, you must either set the ``default`` zone group
+ ``is_master`` setting to ``false``, or delete the ``default`` zone
+ group.
+
+Finally, update the period:
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+Set a Zone Group Map
+~~~~~~~~~~~~~~~~~~~~
+
+Setting a zone group map consists of creating a JSON object consisting
+of one or more zone groups, and setting the ``master_zonegroup`` for the
+cluster. Each zone group in the zone group map consists of a key/value
+pair, where the ``key`` setting is equivalent to the ``name`` setting
+for an individual zone group configuration, and the ``val`` is a JSON
+object consisting of an individual zone group configuration.
+
+You may only have one zone group with ``is_master`` equal to ``true``,
+and it must be specified as the ``master_zonegroup`` at the end of the
+zone group map. The following JSON object is an example of a default
+zone group map.
+
+::
+
+ {
+ "zonegroups": [
+ {
+ "key": "90b28698-e7c3-462c-a42d-4aa780d24eda",
+ "val": {
+ "id": "90b28698-e7c3-462c-a42d-4aa780d24eda",
+ "name": "us",
+ "api_name": "us",
+ "is_master": "true",
+ "endpoints": [
+ "http:\/\/rgw1:80"
+ ],
+ "hostnames": [],
+ "hostnames_s3website": [],
+ "master_zone": "9248cab2-afe7-43d8-a661-a40bf316665e",
+ "zones": [
+ {
+ "id": "9248cab2-afe7-43d8-a661-a40bf316665e",
+ "name": "us-east",
+ "endpoints": [
+ "http:\/\/rgw1"
+ ],
+ "log_meta": "true",
+ "log_data": "true",
+ "bucket_index_max_shards": 0,
+ "read_only": "false"
+ },
+ {
+ "id": "d1024e59-7d28-49d1-8222-af101965a939",
+ "name": "us-west",
+ "endpoints": [
+ "http:\/\/rgw2:80"
+ ],
+ "log_meta": "false",
+ "log_data": "true",
+ "bucket_index_max_shards": 0,
+ "read_only": "false"
+ }
+ ],
+ "placement_targets": [
+ {
+ "name": "default-placement",
+ "tags": []
+ }
+ ],
+ "default_placement": "default-placement",
+ "realm_id": "ae031368-8715-4e27-9a99-0c9468852cfe"
+ }
+ }
+ ],
+ "master_zonegroup": "90b28698-e7c3-462c-a42d-4aa780d24eda",
+ "bucket_quota": {
+ "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1
+ },
+ "user_quota": {
+ "enabled": false,
+ "max_size_kb": -1,
+ "max_objects": -1
+ }
+ }
+
+To set a zone group map, execute the following:
+
+.. prompt:: bash #
+
+ radosgw-admin zonegroup-map set --infile zonegroupmap.json
+
+Where ``zonegroupmap.json`` is the JSON file you created. Ensure that
+you have zones created for the ones specified in the zone group map.
+Finally, update the period.
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+Zones
+-----
+
+Ceph Object Gateway supports the notion of zones. A zone defines a
+logical group consisting of one or more Ceph Object Gateway instances.
+
+Configuring zones differs from typical configuration procedures, because
+not all of the settings end up in a Ceph configuration file. You can
+list zones, get a zone configuration and set a zone configuration.
+
+Create a Zone
+~~~~~~~~~~~~~
+
+To create a zone, specify a zone name. If it is a master zone, specify
+the ``--master`` option. Only one zone in a zone group may be a master
+zone. To add the zone to a zonegroup, specify the ``--rgw-zonegroup``
+option with the zonegroup name.
+
+.. prompt:: bash #
+
+    radosgw-admin zone create --rgw-zone=<name> \
+                [--rgw-zonegroup=<zonegroup-name>] \
+                [--endpoints=<endpoint>[,<endpoint>]] \
+                [--master] [--default] \
+                --access-key $SYSTEM_ACCESS_KEY --secret $SYSTEM_SECRET_KEY
+
+Then, update the period:
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+Delete a Zone
+~~~~~~~~~~~~~
+
+To delete zone, first remove it from the zonegroup.
+
+.. prompt:: bash #
+
+    radosgw-admin zonegroup remove --rgw-zonegroup=<name> --rgw-zone=<name>
+
+Then, update the period:
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+Next, delete the zone. Execute the following:
+
+.. prompt:: bash #
+
+    radosgw-admin zone delete --rgw-zone=<name>
+
+Finally, update the period:
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+.. important:: Do not delete a zone without removing it from a zone group first.
+ Otherwise, updating the period will fail.
+
+If the pools for the deleted zone will not be used anywhere else,
+consider deleting the pools. Replace ``<del-zone>`` in the example below
+with the deleted zone’s name.
+
+.. important:: Only delete the pools with prepended zone names. Deleting the root
+ pool, such as, ``.rgw.root`` will remove all of the system’s
+ configuration.
+
+.. important:: Once the pools are deleted, all of the data within them is deleted
+   in an unrecoverable manner. Only delete the pools if the pool
+   contents are no longer needed.
+
+.. prompt:: bash #
+
+ ceph osd pool rm <del-zone>.rgw.control <del-zone>.rgw.control --yes-i-really-really-mean-it
+ ceph osd pool rm <del-zone>.rgw.meta <del-zone>.rgw.meta --yes-i-really-really-mean-it
+ ceph osd pool rm <del-zone>.rgw.log <del-zone>.rgw.log --yes-i-really-really-mean-it
+ ceph osd pool rm <del-zone>.rgw.otp <del-zone>.rgw.otp --yes-i-really-really-mean-it
+ ceph osd pool rm <del-zone>.rgw.buckets.index <del-zone>.rgw.buckets.index --yes-i-really-really-mean-it
+ ceph osd pool rm <del-zone>.rgw.buckets.non-ec <del-zone>.rgw.buckets.non-ec --yes-i-really-really-mean-it
+ ceph osd pool rm <del-zone>.rgw.buckets.data <del-zone>.rgw.buckets.data --yes-i-really-really-mean-it
+
+Modify a Zone
+~~~~~~~~~~~~~
+
+To modify a zone, specify the zone name and the parameters you wish to
+modify.
+
+.. prompt:: bash #
+
+ radosgw-admin zone modify [options]
+
+Where ``[options]``:
+
+- ``--access-key=<key>``
+- ``--secret/--secret-key=<key>``
+- ``--master``
+- ``--default``
+- ``--endpoints=<list>``
+
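+For example, to update the endpoint list of the ``us-west`` zone from this guide:
+
+.. prompt:: bash #
+
+   radosgw-admin zone modify --rgw-zone=us-west --endpoints=http://rgw2:80
+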
+Then, update the period:
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+List Zones
+~~~~~~~~~~
+
+As ``root``, to list the zones in a cluster, execute:
+
+.. prompt:: bash #
+
+ radosgw-admin zone list
+
+Get a Zone
+~~~~~~~~~~
+
+As ``root``, to get the configuration of a zone, execute:
+
+.. prompt:: bash #
+
+ radosgw-admin zone get [--rgw-zone=<zone>]
+
+The ``default`` zone looks like this:
+
+::
+
+ { "domain_root": ".rgw",
+ "control_pool": ".rgw.control",
+ "gc_pool": ".rgw.gc",
+ "log_pool": ".log",
+ "intent_log_pool": ".intent-log",
+ "usage_log_pool": ".usage",
+ "user_keys_pool": ".users",
+ "user_email_pool": ".users.email",
+ "user_swift_pool": ".users.swift",
+ "user_uid_pool": ".users.uid",
+ "system_key": { "access_key": "", "secret_key": ""},
+ "placement_pools": [
+ { "key": "default-placement",
+ "val": { "index_pool": ".rgw.buckets.index",
+ "data_pool": ".rgw.buckets"}
+ }
+ ]
+ }
+
+Set a Zone
+~~~~~~~~~~
+
+Configuring a zone involves specifying a series of Ceph Object Gateway
+pools. For consistency, we recommend using a pool prefix that is the
+same as the zone name. See
+`Pools <http://docs.ceph.com/en/latest/rados/operations/pools/#pools>`__
+for details of configuring pools.
+
+To set a zone, create a JSON object consisting of the pools, save the
+object to a file (e.g., ``zone.json``); then, execute the following
+command, replacing ``{zone-name}`` with the name of the zone:
+
+.. prompt:: bash #
+
+ radosgw-admin zone set --rgw-zone={zone-name} --infile zone.json
+
+Where ``zone.json`` is the JSON file you created.
+
+Then, as ``root``, update the period:
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
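+For example, the following hypothetical session pulls the current
+configuration of a zone ``us-east``, which can then be edited (for
+example, to point a placement target at different pools) and written
+back:
+
+.. prompt:: bash #
+
+   radosgw-admin zone get --rgw-zone=us-east > zone.json
+   radosgw-admin zone set --rgw-zone=us-east --infile zone.json
+   radosgw-admin period update --commit
+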
+Rename a Zone
+~~~~~~~~~~~~~
+
+To rename a zone, specify the zone name and the new zone name.
+
+.. prompt:: bash #
+
+ radosgw-admin zone rename --rgw-zone=<name> --zone-new-name=<name>
+
+Then, update the period:
+
+.. prompt:: bash #
+
+ radosgw-admin period update --commit
+
+Zone Group and Zone Settings
+----------------------------
+
+When configuring a default zone group and zone, the pool name includes
+the zone name. For example:
+
+- ``default.rgw.control``
+
+To change the defaults, include the following settings in your Ceph
+configuration file under each ``[client.radosgw.{instance-name}]``
+instance.
+
++-------------------------------------+-----------------------------------+---------+-----------------------+
+| Name | Description | Type | Default |
++=====================================+===================================+=========+=======================+
+| ``rgw_zone`` | The name of the zone for the | String | None |
+| | gateway instance. | | |
++-------------------------------------+-----------------------------------+---------+-----------------------+
+| ``rgw_zonegroup`` | The name of the zone group for | String | None |
+| | the gateway instance. | | |
++-------------------------------------+-----------------------------------+---------+-----------------------+
+| ``rgw_zonegroup_root_pool`` | The root pool for the zone group. | String | ``.rgw.root`` |
++-------------------------------------+-----------------------------------+---------+-----------------------+
+| ``rgw_zone_root_pool`` | The root pool for the zone. | String | ``.rgw.root`` |
++-------------------------------------+-----------------------------------+---------+-----------------------+
+| ``rgw_default_zone_group_info_oid`` | The OID for storing the default | String | ``default.zonegroup`` |
+| | zone group. We do not recommend | | |
+| | changing this setting. | | |
++-------------------------------------+-----------------------------------+---------+-----------------------+
+
+
+Zone Features
+=============
+
+Some multisite features require support from all zones before they can be enabled. Each zone lists its ``supported_features``, and each zonegroup lists its ``enabled_features``. Before a feature can be enabled in the zonegroup, it must be supported by all of its zones.
+
+On creation of new zones and zonegroups, all known features are supported/enabled. After upgrading an existing multisite configuration, however, new features must be enabled manually.
+
+Supported Features
+------------------
+
++---------------------------+---------+
+| Feature | Release |
++===========================+=========+
+| :ref:`feature_resharding` | Quincy |
++---------------------------+---------+
+
+.. _feature_resharding:
+
+resharding
+~~~~~~~~~~
+
+Allows buckets to be resharded in a multisite configuration without interrupting the replication of their objects. When ``rgw_dynamic_resharding`` is enabled, it runs on each zone independently, and zones may choose different shard counts for the same bucket. When buckets are resharded manually with ``radosgw-admin bucket reshard``, only that zone's bucket is modified. A zone feature should only be marked as supported after all of its radosgws and OSDs have been upgraded.
+
+
+Commands
+-----------------
+
+Add support for a zone feature
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+On the cluster that contains the given zone:
+
+.. prompt:: bash $
+
+ radosgw-admin zone modify --rgw-zone={zone-name} --enable-feature={feature-name}
+ radosgw-admin period update --commit
+
+
+Remove support for a zone feature
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+On the cluster that contains the given zone:
+
+.. prompt:: bash $
+
+ radosgw-admin zone modify --rgw-zone={zone-name} --disable-feature={feature-name}
+ radosgw-admin period update --commit
+
+Enable a zonegroup feature
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+On any cluster in the realm:
+
+.. prompt:: bash $
+
+ radosgw-admin zonegroup modify --rgw-zonegroup={zonegroup-name} --enable-feature={feature-name}
+ radosgw-admin period update --commit
+
+Disable a zonegroup feature
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+On any cluster in the realm:
+
+.. prompt:: bash $
+
+ radosgw-admin zonegroup modify --rgw-zonegroup={zonegroup-name} --disable-feature={feature-name}
+ radosgw-admin period update --commit
+
+
+.. _`Pools`: ../pools
+.. _`Sync Policy Config`: ../multisite-sync-policy
diff --git a/doc/radosgw/multitenancy.rst b/doc/radosgw/multitenancy.rst
new file mode 100644
index 000000000..0cca50d96
--- /dev/null
+++ b/doc/radosgw/multitenancy.rst
@@ -0,0 +1,169 @@
+.. _rgw-multitenancy:
+
+=================
+RGW Multi-tenancy
+=================
+
+.. versionadded:: Jewel
+
+The multi-tenancy feature makes it possible to use buckets and users
+of the same name simultaneously by segregating them under so-called
+``tenants``. This may be useful, for instance, to permit users of the
+Swift API to create buckets with easily conflicting names such as
+"test" or "trove".
+
+From the Jewel release onward, each user and bucket lies under a tenant.
+For compatibility, a "legacy" tenant with an empty name is provided.
+Whenever a bucket is referred to without an explicit tenant, an implicit
+tenant is used, taken from the user performing the operation. Since
+the pre-existing users are under the legacy tenant, they continue
+to create and access buckets as before. The layout of objects in RADOS
+is extended in a compatible way, ensuring a smooth upgrade to Jewel.
+
+Administering Users With Explicit Tenants
+=========================================
+
+Tenants as such do not have any operations on them. They appear and
+disappear as needed, when users are administered. To create,
+modify, and remove users with explicit tenants, either supply the
+additional option ``--tenant``, or use the syntax ``<tenant>$<user>``
+in the parameters of the ``radosgw-admin`` command.
+
+Examples
+--------
+
+Create a user testx$tester to be accessed with S3::
+
+ # radosgw-admin --tenant testx --uid tester --display-name "Test User" --access_key TESTER --secret test123 user create
+
+Create a user testx$tester to be accessed with Swift::
+
+ # radosgw-admin --tenant testx --uid tester --display-name "Test User" --subuser tester:test --key-type swift --access full user create
+  # radosgw-admin key create --subuser 'testx$tester:test' --key-type swift --secret test123
+
+.. note:: The subuser with explicit tenant has to be quoted in the shell.
+
+ Tenant names may contain only alphanumeric characters and underscores.
+
+Accessing Buckets with Explicit Tenants
+=======================================
+
+When a client application accesses buckets, it always operates with
+credentials of a particular user. As mentioned above, every user belongs
+to a tenant. Therefore, every operation has an implicit tenant in its
+context, to be used if no tenant is specified explicitly. Thus complete
+compatibility is maintained with previous releases, as long as the
+referenced buckets and the referring user belong to the same tenant.
+In other words, anything unusual occurs *only* when accessing another
+tenant's buckets.
+
+Extensions employed to specify an explicit tenant differ according
+to the protocol and authentication system used.
+
+S3
+--
+
+In the case of S3, a colon character is used to separate the tenant and the bucket.
+Thus a sample URL would be::
+
+ https://ep.host.dom/tenant:bucket
+
+Here's a simple Python sample:
+
+.. code-block:: python
+ :linenos:
+
+ from boto.s3.connection import S3Connection, OrdinaryCallingFormat
+ c = S3Connection(
+ aws_access_key_id="TESTER",
+ aws_secret_access_key="test123",
+ host="ep.host.dom",
+ calling_format = OrdinaryCallingFormat())
+ bucket = c.get_bucket("test5b:testbucket")
+
+Note that it's not possible to supply an explicit tenant using
+a hostname. Hostnames cannot contain colons, or any other separators
+that are not already valid in bucket names. Using a period creates an
+ambiguous syntax. Therefore, the bucket-in-URL-path format has to be
+used.
+
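+The same access pattern can be sketched with boto3 (the endpoint,
+credentials, and bucket are the same hypothetical values as above).
+Path-style addressing keeps the tenant-qualified name in the URL path,
+and botocore's client-side bucket-name check, which rejects the colon,
+is unregistered first:
+
+.. code-block:: python
+
+   import boto3
+   from botocore.client import Config
+   from botocore.handlers import validate_bucket_name
+
+   s3 = boto3.client(
+       's3',
+       endpoint_url='https://ep.host.dom',
+       aws_access_key_id='TESTER',
+       aws_secret_access_key='test123',
+       config=Config(s3={'addressing_style': 'path'}))
+   # drop the client-side name check so "tenant:bucket" is accepted
+   s3.meta.events.unregister('before-parameter-build.s3',
+                             validate_bucket_name)
+
+   resp = s3.list_objects_v2(Bucket='test5b:testbucket')
+   for obj in resp.get('Contents', []):
+       print(obj['Key'])
+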
+Because the native S3 API does not deal with multi-tenancy while
+radosgw's implementation does, things get a bit involved when dealing
+with signed URLs and public read ACLs.
+
+* A **signed URL** does contain the ``AWSAccessKeyId`` query
+  parameter, from which radosgw is able to discern the correct user
+  and tenant owning the bucket. In other words, an application
+  generating signed URLs should be able to take just the un-prefixed
+  bucket name, and produce a signed URL that itself contains the
+  bucket name without the tenant prefix. However, it is *possible* to
+  include the prefix if you so choose (see the sketch after this list).
+
+ Thus, accessing a signed URL of an object ``bar`` in a container
+ ``foo`` belonging to the tenant ``7188e165c0ae4424ac68ae2e89a05c50``
+ would be possible either via
+ ``http://<host>:<port>/foo/bar?AWSAccessKeyId=b200fb6634c547199e436a0f93c0c46e&Expires=1542890806&Signature=eok6CYQC%2FDwmQQmqvY5jTg6ehXU%3D``,
+ or via
+ ``http://<host>:<port>/7188e165c0ae4424ac68ae2e89a05c50:foo/bar?AWSAccessKeyId=b200fb6634c547199e436a0f93c0c46e&Expires=1542890806&Signature=eok6CYQC%2FDwmQQmqvY5jTg6ehXU%3D``,
+ depending on whether or not the tenant prefix was passed in on
+ signature generation.
+
+* A bucket with a **public read ACL** is meant to be read by an HTTP
+ client *without* including any query parameters that would allow
+ radosgw to discern tenants. Thus, publicly readable objects must
+ always be accessed using the bucket name with the tenant prefix.
+
+ Thus, if you set a public read ACL on an object ``bar`` in a
+ container ``foo`` belonging to the tenant
+ ``7188e165c0ae4424ac68ae2e89a05c50``, you would need to access that
+ object via the public URL
+ ``http://<host>:<port>/7188e165c0ae4424ac68ae2e89a05c50:foo/bar``.
+
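+Continuing the boto3 sketch above, a presigned URL can be generated
+with or without the tenant prefix; which form appears in the resulting
+URL simply follows from the bucket name passed in (the tenant ID and
+object names reuse the examples above):
+
+.. code-block:: python
+
+   # un-prefixed: radosgw resolves the tenant from the access key
+   url = s3.generate_presigned_url(
+       'get_object', Params={'Bucket': 'foo', 'Key': 'bar'},
+       ExpiresIn=3600)
+
+   # tenant-prefixed: also valid
+   url = s3.generate_presigned_url(
+       'get_object',
+       Params={'Bucket': '7188e165c0ae4424ac68ae2e89a05c50:foo',
+               'Key': 'bar'},
+       ExpiresIn=3600)
+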
+Swift with built-in authenticator
+---------------------------------
+
+TBD -- not in test_multen.py yet
+
+Swift with Keystone
+-------------------
+
+In the default configuration, although native Swift has inherent
+multi-tenancy, radosgw does not enable multi-tenancy for the Swift
+API. This ensures that buckets created before radosgw supported
+multitenancy (so-called legacy buckets) retain their dual-API
+capability to be queried and modified using either S3 or Swift.
+
+If you want to enable multitenancy for Swift, particularly if your
+users only ever authenticate against OpenStack Keystone, you should
+enable Keystone-based multitenancy with the following ``ceph.conf``
+configuration option::
+
+ rgw keystone implicit tenants = true
+
+Once you enable this option, any newly connecting user (whether they
+are using the Swift API, or Keystone-authenticated S3) will prompt
+radosgw to create a user named ``<tenant_id>$<tenant_id>``, where
+``<tenant_id>`` is a Keystone tenant (project) UUID --- for example,
+``7188e165c0ae4424ac68ae2e89a05c50$7188e165c0ae4424ac68ae2e89a05c50``.
+
+Whenever that user then creates a Swift container, radosgw internally
+translates the given container name into
+``<tenant_id>/<container_name>``, such as
+``7188e165c0ae4424ac68ae2e89a05c50/foo``. This ensures that if there
+are two or more different tenants all creating a container named
+``foo``, radosgw is able to transparently discern them by their tenant
+prefix.
+
+It is also possible to limit the effects of implicit tenants
+to only apply to swift or s3, by setting ``rgw keystone implicit tenants``
+to either ``s3`` or ``swift``. This is likely to be of use primarily
+to users who had previously used implicit tenants with older versions
+of Ceph, where implicit tenants only applied to the Swift protocol.
+
+Notes and known issues
+----------------------
+
+Just to be clear, it is not possible to create buckets in other
+tenants at present. The owner of a newly created bucket is extracted
+from the authentication information.
diff --git a/doc/radosgw/nfs.rst b/doc/radosgw/nfs.rst
new file mode 100644
index 000000000..0a506599a
--- /dev/null
+++ b/doc/radosgw/nfs.rst
@@ -0,0 +1,371 @@
+===
+NFS
+===
+
+.. versionadded:: Jewel
+
+.. note:: Only the NFSv4 protocol is supported when using a cephadm or rook based deployment.
+
+Ceph Object Gateway namespaces can be exported over the file-based
+NFSv4 protocols, alongside traditional HTTP access
+protocols (S3 and Swift).
+
+In particular, the Ceph Object Gateway can now be configured to
+provide file-based access when embedded in the NFS-Ganesha NFS server.
+
+The simplest and preferred way of managing nfs-ganesha clusters and rgw exports
+is using ``ceph nfs ...`` commands. See :doc:`/mgr/nfs` for more details.
+
+librgw
+======
+
+The librgw.so shared library (Unix) provides a loadable interface to
+Ceph Object Gateway services, and instantiates a full Ceph Object Gateway
+instance on initialization.
+
+In turn, librgw.so exports rgw_file, a stateful API for file-oriented
+access to RGW buckets and objects. The API is general, but its design
+is strongly influenced by the File System Abstraction Layer (FSAL) API
+of NFS-Ganesha, for which it has been primarily designed.
+
+A set of Python bindings is also provided.
+
+Namespace Conventions
+=====================
+
+The implementation conforms to Amazon Web Services (AWS) hierarchical
+namespace conventions which map UNIX-style path names onto S3 buckets
+and objects.
+
+The top level of the attached namespace consists of S3 buckets,
+represented as NFS directories. Files and directories subordinate to
+buckets are each represented as objects, following S3 prefix and
+delimiter conventions, with '/' being the only supported path
+delimiter [#]_.
+
+For example, if an NFS client has mounted an RGW namespace at "/nfs",
+then a file "/nfs/mybucket/www/index.html" in the NFS namespace
+corresponds to an RGW object "www/index.html" in a bucket/container
+"mybucket."
+
+Although it is generally invisible to clients, the NFS namespace is
+assembled through concatenation of the corresponding paths implied by
+the objects in the namespace. Leaf objects, whether files or
+directories, will always be materialized in an RGW object of the
+corresponding key name, "<name>" if a file, "<name>/" if a directory.
+Non-leaf directories (e.g., "www" above) might only be implied by
+their appearance in the names of one or more leaf objects. Directories
+created within NFS or directly operated on by an NFS client (e.g., via
+an attribute-setting operation such as chown or chmod) always have a
+leaf object representation used to store materialized attributes such
+as Unix ownership and permissions.
+
+Supported Operations
+====================
+
+The RGW NFS interface supports most operations on files and
+directories, with the following restrictions:
+
+- Links, including symlinks, are not supported.
+- NFS ACLs are not supported.
+
+ + Unix user and group ownership and permissions *are* supported.
+
+- Directories may not be moved/renamed.
+
+ + Files may be moved between directories.
+
+- Only full, sequential *write* I/O is supported
+
+ + i.e., write operations are constrained to be **uploads**.
+ + Many typical I/O operations such as editing files in place will necessarily fail as they perform non-sequential stores.
+ + Some file utilities *apparently* writing sequentially (e.g., some versions of GNU tar) may fail due to infrequent non-sequential stores.
+ + When mounting via NFS, sequential application I/O can generally be constrained to be written sequentially to the NFS server via a synchronous mount option (e.g. -osync in Linux).
+ + NFS clients which cannot mount synchronously (e.g., MS Windows) will not be able to upload files.
+
+Security
+========
+
+The RGW NFS interface provides a hybrid security model with the
+following characteristics:
+
+- NFS protocol security is provided by the NFS-Ganesha server, as negotiated by the NFS server and clients
+
+  + e.g., clients can be trusted (AUTH_SYS), or required to present Kerberos user credentials (RPCSEC_GSS)
+ + RPCSEC_GSS wire security can be integrity only (krb5i) or integrity and privacy (encryption, krb5p)
+ + various NFS-specific security and permission rules are available
+
+ * e.g., root-squashing
+
+- a set of RGW/S3 security credentials (unknown to NFS) is associated with each RGW NFS mount (i.e., NFS-Ganesha EXPORT)
+
+ + all RGW object operations performed via the NFS server will be performed by the RGW user associated with the credentials stored in the export being accessed (currently only RGW and RGW LDAP credentials are supported)
+
+ * additional RGW authentication types such as Keystone are not currently supported
+
+Manually configuring an NFS-Ganesha Instance
+============================================
+
+Each NFS RGW instance is an NFS-Ganesha server instance *embedding*
+a full Ceph RGW instance.
+
+Therefore, the RGW NFS configuration includes Ceph and Ceph Object
+Gateway-specific configuration in a local ceph.conf, as well as
+NFS-Ganesha-specific configuration in the NFS-Ganesha config file,
+ganesha.conf.
+
+ceph.conf
+---------
+
+Required ceph.conf configuration for RGW NFS includes:
+
+* valid [client.radosgw.{instance-name}] section
+* valid values for minimal instance configuration, in particular, an installed and correct ``keyring``
+
+Other config variables are optional. Front-end-specific and front-end
+selection variables (e.g., ``rgw data`` and ``rgw frontends``) are
+optional and in some cases ignored.
+
+A small number of config variables (e.g., ``rgw_nfs_namespace_expire_secs``)
+are unique to RGW NFS.
+
+ganesha.conf
+------------
+
+A strictly minimal ganesha.conf for use with RGW NFS includes one
+EXPORT block with an embedded FSAL block of type RGW::
+
+ EXPORT
+ {
+ Export_ID={numeric-id};
+ Path = "/";
+ Pseudo = "/";
+ Access_Type = RW;
+ SecType = "sys";
+ NFS_Protocols = 4;
+ Transport_Protocols = TCP;
+
+ # optional, permit unsquashed access by client "root" user
+ #Squash = No_Root_Squash;
+
+ FSAL {
+ Name = RGW;
+ User_Id = {s3-user-id};
+ Access_Key_Id ="{s3-access-key}";
+ Secret_Access_Key = "{s3-secret}";
+ }
+ }
+
+``Export_ID`` must have an integer value, e.g., "77"
+
+``Path`` (for RGW) should be "/"
+
+``Pseudo`` defines an NFSv4 pseudo root name (NFSv4 only)
+
+``SecType = sys;`` allows clients to attach without Kerberos
+authentication
+
+``Squash = No_Root_Squash;`` enables the client root user to override
+permissions (Unix convention). When root-squashing is enabled,
+operations attempted by the root user are performed as if by the local
+"nobody" (and "nogroup") user on the NFS-Ganesha server
+
+The RGW FSAL additionally supports RGW-specific configuration
+variables in the RGW config section::
+
+ RGW {
+ cluster = "{cluster name, default 'ceph'}";
+ name = "client.rgw.{instance-name}";
+ ceph_conf = "/opt/ceph-rgw/etc/ceph/ceph.conf";
+ init_args = "-d --debug-rgw=16";
+ }
+
+``cluster`` sets a Ceph cluster name (must match the cluster being exported)
+
+``name`` sets an RGW instance name (must match the instance being exported)
+
+``ceph_conf`` gives a path to a non-default ceph.conf file to use
+
+
+Other useful NFS-Ganesha configuration:
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Any EXPORT block which should support NFSv3 should include version 3
+in the NFS_Protocols setting. Additionally, NFSv3 is the last major
+version to support the UDP transport. To enable UDP, include it in the
+Transport_Protocols setting. For example::
+
+ EXPORT {
+ ...
+ NFS_Protocols = 3,4;
+ Transport_Protocols = UDP,TCP;
+ ...
+ }
+
+One important family of options pertains to interaction with the Linux
+idmapping service, which is used to normalize user and group names
+across systems. Details of idmapper integration are not provided here.
+
+With Linux NFS clients, NFS-Ganesha can be configured
+to accept client-supplied numeric user and group identifiers with
+NFSv4 (which by default stringifies these). This may be useful in small
+setups and for experimentation::
+
+ NFSV4 {
+ Allow_Numeric_Owners = true;
+ Only_Numeric_Owners = true;
+ }
+
+Troubleshooting
+~~~~~~~~~~~~~~~
+
+NFS-Ganesha configuration problems are usually debugged by running the
+server with debugging options, controlled by the LOG config section.
+
+NFS-Ganesha log messages are grouped into various components, and logging
+can be enabled separately for each component. Valid values for
+component logging include::
+
+ *FATAL* critical errors only
+ *WARN* unusual condition
+ *DEBUG* mildly verbose trace output
+ *FULL_DEBUG* verbose trace output
+
+Example::
+
+ LOG {
+
+ Components {
+ MEMLEAKS = FATAL;
+ FSAL = FATAL;
+ NFSPROTO = FATAL;
+ NFS_V4 = FATAL;
+ EXPORT = FATAL;
+ FILEHANDLE = FATAL;
+ DISPATCH = FATAL;
+ CACHE_INODE = FATAL;
+ CACHE_INODE_LRU = FATAL;
+ HASHTABLE = FATAL;
+ HASHTABLE_CACHE = FATAL;
+ DUPREQ = FATAL;
+ INIT = DEBUG;
+ MAIN = DEBUG;
+ IDMAPPER = FATAL;
+ NFS_READDIR = FATAL;
+ NFS_V4_LOCK = FATAL;
+ CONFIG = FATAL;
+ CLIENTID = FATAL;
+ SESSIONS = FATAL;
+ PNFS = FATAL;
+ RW_LOCK = FATAL;
+ NLM = FATAL;
+ RPC = FATAL;
+ NFS_CB = FATAL;
+ THREAD = FATAL;
+ NFS_V4_ACL = FATAL;
+ STATE = FATAL;
+ FSAL_UP = FATAL;
+ DBUS = FATAL;
+ }
+ # optional: redirect log output
+ # Facility {
+ # name = FILE;
+ # destination = "/tmp/ganesha-rgw.log";
+ # enable = active;
+    # }
+ }
+
+Running Multiple NFS Gateways
+=============================
+
+Each NFS-Ganesha instance acts as a full gateway endpoint, with the
+limitation that currently an NFS-Ganesha instance cannot be configured
+to export HTTP services. As with ordinary gateway instances, any
+number of NFS-Ganesha instances can be started, exporting the same or
+different resources from the cluster. This enables the clustering of
+NFS-Ganesha instances. However, this does not imply high availability.
+
+When regular gateway instances and NFS-Ganesha instances overlap the
+same data resources, they will be accessible from both the standard S3
+API and through the NFS-Ganesha instance as exported. You can
+co-locate the NFS-Ganesha instance with a Ceph Object Gateway instance
+on the same host.
+
+RGW vs RGW NFS
+==============
+
+Exporting an NFS namespace and other RGW namespaces (e.g., S3 or Swift
+via the Civetweb HTTP front-end) from the same program instance is
+currently not supported.
+
+When objects and buckets are added outside of NFS, those objects will
+appear in the NFS namespace within the time set by
+``rgw_nfs_namespace_expire_secs``, which defaults to 300 seconds (5 minutes).
+Override the default value for ``rgw_nfs_namespace_expire_secs`` in the
+Ceph configuration file to change the refresh rate.
+
+If exporting Swift containers that do not conform to valid S3 bucket
+naming requirements, set ``rgw_relaxed_s3_bucket_names`` to true in the
+[client.radosgw] section of the Ceph configuration file. For example,
+if a Swift container name contains underscores, it is not a valid S3
+bucket name and will be rejected unless ``rgw_relaxed_s3_bucket_names``
+is set to true.
+
+Configuring NFSv4 clients
+=========================
+
+To access the namespace, mount the configured NFS-Ganesha export(s)
+into desired locations in the local POSIX namespace. As noted, this
+implementation has a few unique restrictions:
+
+- NFS 4.1 and higher protocol flavors are preferred
+
+ + NFSv4 OPEN and CLOSE operations are used to track upload transactions
+
+- To upload data successfully, clients must preserve write ordering
+
+ + on Linux and many Unix NFS clients, use the -osync mount option
+
+Conventions for mounting NFS resources are platform-specific. The
+following conventions work on Linux and some Unix platforms:
+
+From the command line::
+
+ mount -t nfs -o nfsvers=4.1,noauto,soft,sync,proto=tcp <ganesha-host-name>:/ <mount-point>
+
+In /etc/fstab::
+
+   <ganesha-host-name>:/ <mount-point> nfs noauto,soft,nfsvers=4.1,sync,proto=tcp 0 0
+
+Specify the NFS-Ganesha host name and the path to the mount point on
+the client.
+
+Configuring NFSv3 Clients
+=========================
+
+Linux clients can be configured to mount with NFSv3 by supplying
+``nfsvers=3`` and ``noacl`` as mount options. To use UDP as the
+transport, add ``proto=udp`` to the mount options. However, TCP is the
+preferred transport::
+
+ <ganesha-host-name>:/ <mount-point> nfs noauto,noacl,soft,nfsvers=3,sync,proto=tcp 0 0
+
+If the mount will use version 3 with UDP, configure the NFS-Ganesha
+EXPORT block's ``NFS_Protocols`` setting with version 3 and its
+``Transport_Protocols`` setting with UDP.
+
+NFSv3 Semantics
+---------------
+
+Since NFSv3 does not communicate client OPEN and CLOSE operations to
+file servers, RGW NFS cannot use these operations to mark the
+beginning and ending of file upload transactions. Instead, RGW NFS
+starts a new upload when the first write is sent to a file at offset
+0, and finalizes the upload when no new writes to the file have been
+seen for a period of time, by default, 10 seconds. To change this
+timeout, set an alternate value for ``rgw_nfs_write_completion_interval_s``
+in the RGW section(s) of the Ceph configuration file.
+
+References
+==========
+
+.. [#] http://docs.aws.amazon.com/AmazonS3/latest/dev/ListingKeysHierarchy.html
diff --git a/doc/radosgw/notifications.rst b/doc/radosgw/notifications.rst
new file mode 100644
index 000000000..a294e6c15
--- /dev/null
+++ b/doc/radosgw/notifications.rst
@@ -0,0 +1,543 @@
+====================
+Bucket Notifications
+====================
+
+.. versionadded:: Nautilus
+
+.. contents::
+
+Bucket notifications provide a mechanism for sending information out of radosgw
+when certain events happen on the bucket. Notifications can be sent to HTTP
+endpoints, AMQP0.9.1 endpoints, and Kafka endpoints.
+
+The `PubSub Module`_ (and *not* the bucket-notification mechanism) should be
+used for events stored in Ceph.
+
+A user can create topics. A topic entity is defined by its name and is "per
+tenant". A user can associate its topics (via notification configuration) only
+with buckets it owns.
+
+A notification entity must be created in order to send event notifications for
+a specific bucket. A notification entity can be created either for a subset
+of event types or for all event types (which is the default). The
+notification may also filter out events based on matches of the prefixes and
+suffixes of (1) the keys, (2) the metadata attributes attached to the object,
+or (3) the object tags. Regular-expression matching can also be used on these
+to create filters. There can be multiple notifications for any specific topic,
+and the same topic can be used for multiple notifications.
+
+A REST API has been defined to provide configuration and control interfaces
+for the bucket notification mechanism. This API is similar to the one defined
+as the S3-compatible API of the `PubSub Module`_.
+
+.. toctree::
+ :maxdepth: 1
+
+ S3 Bucket Notification Compatibility <s3-notification-compatibility>
+
+.. note:: To enable the bucket notifications API, the ``rgw_enable_apis`` configuration parameter should contain "notifications".
+
+Notification Reliability
+------------------------
+
+Notifications can be sent synchronously or asynchronously. This section
+describes the latency and reliability that you should expect for synchronous
+and asynchronous notifications.
+
+Synchronous Notifications
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Notifications can be sent synchronously, as part of the operation that
+triggered them. In this mode, the operation is acknowledged (acked) only after
+the notification is sent to the topic's configured endpoint. This means that
+the round trip time of the notification (the time it takes to send the
+notification to the topic's endpoint plus the time it takes to receive the
+acknowledgement) is added to the latency of the operation itself.
+
+.. note:: The original triggering operation is considered successful even if
+ the notification fails with an error, cannot be delivered, or times out.
+
+Asynchronous Notifications
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Notifications can be sent asynchronously. They are committed into persistent
+storage and then asynchronously sent to the topic's configured endpoint. In
+this case, the only latency added to the original operation is the latency
+added when the notification is committed to persistent storage.
+
+.. note:: If the notification fails with an error, cannot be delivered, or
+ times out, it is retried until it is successfully acknowledged.
+
+.. tip:: To minimize the latency added by asynchronous notification, we
+   recommend placing the "log" pool on fast media.
+
+
+Topic Management via CLI
+------------------------
+
+The configuration of all topics associated with a tenant can be fetched using the following command:
+
+.. prompt:: bash #
+
+ radosgw-admin topic list [--tenant={tenant}]
+
+
+The configuration of a specific topic can be fetched using:
+
+.. prompt:: bash #
+
+ radosgw-admin topic get --topic={topic-name} [--tenant={tenant}]
+
+
+And a specific topic can be removed using:
+
+.. prompt:: bash #
+
+ radosgw-admin topic rm --topic={topic-name} [--tenant={tenant}]
+
+
+Notification Performance Stats
+------------------------------
+The same counters are shared between the pubsub sync module and the bucket notification mechanism.
+
+- ``pubsub_event_triggered``: running counter of events with at least one topic associated with them
+- ``pubsub_event_lost``: running counter of events that had topics associated with them but that were not pushed to any of the endpoints
+- ``pubsub_push_ok``: running counter, for all notifications, of events successfully pushed to their endpoint
+- ``pubsub_push_fail``: running counter, for all notifications, of events failed to be pushed to their endpoint
+- ``pubsub_push_pending``: gauge value of events pushed to an endpoint but not acked or nacked yet
+
+.. note::
+
+   ``pubsub_event_triggered`` and ``pubsub_event_lost`` are incremented per event, while
+   ``pubsub_push_ok`` and ``pubsub_push_fail`` are incremented per push action on each notification.
+
+Bucket Notification REST API
+----------------------------
+
+Topics
+~~~~~~
+
+.. note::
+
+ In all topic actions, the parameters are URL-encoded and sent in the
+ message body using this content type:
+ ``application/x-www-form-urlencoded``.
+
+
+.. _radosgw-create-a-topic:
+
+Create a Topic
+``````````````
+
+This creates a new topic. Provide the topic with push endpoint parameters,
+which will be used later when a notification is created. A response is
+generated. A successful response includes the topic's `ARN
+<https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html>`_
+(the "Amazon Resource Name", a unique identifier used to reference the topic).
+To update a topic, use the same command that you used to create it (but when
+updating, use the name of an existing topic and different endpoint values).
+
+.. tip:: Any notification already associated with the topic must be re-created
+ in order for the topic to update.
+
+.. note:: For rabbitmq, ``push-endpoint`` (with a hyphen in the middle) must be
+ changed to ``push_endpoint`` (with an underscore in the middle).
+
+::
+
+ POST
+
+ Action=CreateTopic
+ &Name=<topic-name>
+ [&Attributes.entry.1.key=amqp-exchange&Attributes.entry.1.value=<exchange>]
+ [&Attributes.entry.2.key=amqp-ack-level&Attributes.entry.2.value=none|broker|routable]
+ [&Attributes.entry.3.key=verify-ssl&Attributes.entry.3.value=true|false]
+ [&Attributes.entry.4.key=kafka-ack-level&Attributes.entry.4.value=none|broker]
+ [&Attributes.entry.5.key=use-ssl&Attributes.entry.5.value=true|false]
+ [&Attributes.entry.6.key=ca-location&Attributes.entry.6.value=<file path>]
+ [&Attributes.entry.7.key=OpaqueData&Attributes.entry.7.value=<opaque data>]
+ [&Attributes.entry.8.key=push-endpoint&Attributes.entry.8.value=<endpoint>]
+ [&Attributes.entry.9.key=persistent&Attributes.entry.9.value=true|false]
+
+Request parameters:
+
+- push-endpoint: This is the URI of an endpoint to send push notifications to.
+- OpaqueData: Opaque data is set in the topic configuration and added to all
+ notifications that are triggered by the topic.
+- persistent: This indicates whether notifications to this endpoint are
+ persistent (=asynchronous) or not persistent. (This is "false" by default.)
+
+- HTTP endpoint
+
+  - URI: ``http[s]://<fqdn>[:<port>]``
+ - port: This defaults to 80 for HTTP and 443 for HTTPS.
+ - verify-ssl: This indicates whether the server certificate is validated by
+ the client. (This is "true" by default.)
+ - cloudevents: This indicates whether the HTTP header should contain
+ attributes according to the `S3 CloudEvents Spec`_. (This is "false" by
+ default.)
+
+- AMQP0.9.1 endpoint
+
+ - URI: ``amqp[s]://[<user>:<password>@]<fqdn>[:<port>][/<vhost>]``
+ - user/password: This defaults to "guest/guest".
+ - user/password: This must be provided only over HTTPS. Topic creation
+ requests will otherwise be rejected.
+ - port: This defaults to 5672 for unencrypted connections and 5671 for
+ SSL-encrypted connections.
+ - vhost: This defaults to "/".
+ - verify-ssl: This indicates whether the server certificate is validated by
+ the client. (This is "true" by default.)
+ - If ``ca-location`` is provided and a secure connection is used, the
+ specified CA will be used to authenticate the broker. The default CA will
+ not be used.
+ - amqp-exchange: The exchanges must exist and must be able to route messages
+ based on topics. This parameter is mandatory. Different topics that point
+ to the same endpoint must use the same exchange.
+ - amqp-ack-level: No end2end acking is required. Messages may persist in the
+ broker before being delivered to their final destinations. Three ack methods
+ exist:
+
+ - "none": The message is considered "delivered" if it is sent to the broker.
+ - "broker": The message is considered "delivered" if it is acked by the broker (default).
+ - "routable": The message is considered "delivered" if the broker can route to a consumer.
+
+.. tip:: The topic-name (see :ref:`radosgw-create-a-topic`) is used for the
+ AMQP topic ("routing key" for a topic exchange).
+
+- Kafka endpoint
+
+  - URI: ``kafka://[<user>:<password>@]<fqdn>[:<port>]``
+ - ``use-ssl``: If this is set to "true", a secure connection is used to
+ connect to the broker. (This is "false" by default.)
+ - ``ca-location``: If this is provided and a secure connection is used, the
+    specified CA will be used instead of the default CA to authenticate the
+ broker.
+ - user/password: This must be provided only over HTTPS. Topic creation
+ requests will otherwise be rejected.
+ - user/password: This must be provided along with ``use-ssl``. Connections to
+ the broker will otherwise fail.
+ - port: This defaults to 9092.
+ - kafka-ack-level: No end2end acking is required. Messages may persist in the
+ broker before being delivered to their final destinations. Two ack methods
+ exist:
+
+ - "none": Messages are considered "delivered" if sent to the broker.
+ - "broker": Messages are considered "delivered" if acked by the broker. (This
+ is the default.)
+
+.. note::
+
+ - The key-value pair of a specific parameter need not reside in the same
+ line as the parameter, and need not appear in any specific order, but it
+ must use the same index.
+ - Attribute indexing need not be sequential and need not start from any
+ specific value.
+ - `AWS Create Topic`_ provides a detailed explanation of the endpoint
+ attributes format. In our case, however, different keys and values are
+ used.
+
+The response has the following format:
+
+::
+
+ <CreateTopicResponse xmlns="https://sns.amazonaws.com/doc/2010-03-31/">
+ <CreateTopicResult>
+ <TopicArn></TopicArn>
+ </CreateTopicResult>
+ <ResponseMetadata>
+ <RequestId></RequestId>
+ </ResponseMetadata>
+ </CreateTopicResponse>
+
+The topic `ARN
+<https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html>`_
+in the response has the following format:
+
+::
+
+ arn:aws:sns:<zone-group>:<tenant>:<topic>
+
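+For illustration, the same request can be driven from Python with
+boto3's generic SNS client pointed at the RGW endpoint (the endpoint,
+region name, and credentials below are hypothetical; the attribute
+keys are the ones listed above):
+
+.. code-block:: python
+
+   import boto3
+
+   sns = boto3.client(
+       'sns',
+       endpoint_url='http://rgw.example.com:8000',
+       aws_access_key_id='TESTER',
+       aws_secret_access_key='test123',
+       region_name='default')
+
+   resp = sns.create_topic(
+       Name='mytopic',
+       Attributes={'push-endpoint': 'http://endpoint.example.com:9000',
+                   'persistent': 'true'})
+   print(resp['TopicArn'])
+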
+Get Topic Attributes
+````````````````````
+
+This returns information about a specific topic. This includes push-endpoint
+information, if provided.
+
+::
+
+ POST
+
+ Action=GetTopicAttributes
+ &TopicArn=<topic-arn>
+
+The response has the following format:
+
+::
+
+ <GetTopicAttributesResponse>
+   <GetTopicAttributesResult>
+ <Attributes>
+ <entry>
+ <key>User</key>
+ <value></value>
+ </entry>
+ <entry>
+ <key>Name</key>
+ <value></value>
+ </entry>
+ <entry>
+ <key>EndPoint</key>
+ <value></value>
+ </entry>
+ <entry>
+ <key>TopicArn</key>
+ <value></value>
+ </entry>
+ <entry>
+ <key>OpaqueData</key>
+ <value></value>
+ </entry>
+ </Attributes>
+ </GetTopicAttributesResult>
+ <ResponseMetadata>
+ <RequestId></RequestId>
+ </ResponseMetadata>
+ </GetTopicAttributesResponse>
+
+- User: the name of the user that created the topic.
+- Name: the name of the topic.
+- EndPoint: The JSON-formatted endpoint parameters, including:
+
+  - EndpointAddress: The push-endpoint URL.
+  - EndpointArgs: The push-endpoint args.
+  - EndpointTopic: The topic name to be sent to the endpoint (can be different
+    than the above topic name).
+  - HasStoredSecret: This is "true" if the endpoint URL contains user/password
+    information. In this case, the request must be made over HTTPS. The "topic
+    get" request will otherwise be rejected.
+  - Persistent: This is "true" if the topic is persistent.
+
+- TopicArn: topic `ARN
+ <https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html>`_.
+- OpaqueData: The opaque data set on the topic.
+
+Get Topic Information
+`````````````````````
+
+This returns information about a specific topic. This includes push-endpoint
+information, if provided. Note that this API is now deprecated in favor of the
+AWS compliant `GetTopicAttributes` API.
+
+::
+
+ POST
+
+ Action=GetTopic
+ &TopicArn=<topic-arn>
+
+The response has the following format:
+
+::
+
+ <GetTopicResponse>
+   <GetTopicResult>
+ <Topic>
+ <User></User>
+ <Name></Name>
+ <EndPoint>
+ <EndpointAddress></EndpointAddress>
+ <EndpointArgs></EndpointArgs>
+ <EndpointTopic></EndpointTopic>
+ <HasStoredSecret></HasStoredSecret>
+ <Persistent></Persistent>
+ </EndPoint>
+ <TopicArn></TopicArn>
+ <OpaqueData></OpaqueData>
+ </Topic>
+ </GetTopicResult>
+ <ResponseMetadata>
+ <RequestId></RequestId>
+ </ResponseMetadata>
+ </GetTopicResponse>
+
+- User: The name of the user that created the topic.
+- Name: The name of the topic.
+- EndpointAddress: The push-endpoint URL.
+- EndpointArgs: The push-endpoint args.
+- EndpointTopic: The topic name to be sent to the endpoint (which can be
+ different than the above topic name).
+- HasStoredSecret: This is "true" if the endpoint URL contains user/password
+ information. In this case, the request must be made over HTTPS. The "topic
+ get" request will otherwise be rejected.
+- Persistent: "true" if topic is persistent.
+- TopicArn: topic `ARN
+ <https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html>`_.
+- OpaqueData: the opaque data set on the topic.
+
+Delete Topic
+````````````
+
+::
+
+ POST
+
+ Action=DeleteTopic
+ &TopicArn=<topic-arn>
+
+This deletes the specified topic.
+
+.. note::
+
+   - Deleting an unknown topic (for example, a double delete) is not
+     considered an error.
+ - Deleting a topic does not automatically delete all notifications associated
+ with it.
+
+The response has the following format:
+
+::
+
+ <DeleteTopicResponse xmlns="https://sns.amazonaws.com/doc/2010-03-31/">
+ <ResponseMetadata>
+ <RequestId></RequestId>
+ </ResponseMetadata>
+ </DeleteTopicResponse>
+
+List Topics
+```````````
+
+List all topics associated with a tenant.
+
+::
+
+ POST
+
+ Action=ListTopics
+
+The response has the following format:
+
+::
+
+   <ListTopicsResponse xmlns="https://sns.amazonaws.com/doc/2010-03-31/">
+   <ListTopicsResult>
+ <Topics>
+ <member>
+ <User></User>
+ <Name></Name>
+ <EndPoint>
+ <EndpointAddress></EndpointAddress>
+ <EndpointArgs></EndpointArgs>
+ <EndpointTopic></EndpointTopic>
+ </EndPoint>
+ <TopicArn></TopicArn>
+ <OpaqueData></OpaqueData>
+ </member>
+ </Topics>
+ </ListTopicsResult>
+ <ResponseMetadata>
+ <RequestId></RequestId>
+ </ResponseMetadata>
+ </ListTopicsResponse>
+
+- If the endpoint URL contains user/password information in any part of the
+ topic, the request must be made over HTTPS. The "topic list" request will
+ otherwise be rejected.
+
+Notifications
+~~~~~~~~~~~~~
+
+Detailed under: `Bucket Operations`_.
+
+.. note::
+
+   - An "Abort Multipart Upload" request does not emit a notification.
+   - Both "Initiate Multipart Upload" and "POST Object" requests emit an ``s3:ObjectCreated:Post`` notification.
+
+Events
+~~~~~~
+
+Events are in JSON format (regardless of the actual endpoint), and share the
+same structure as S3-compatible events that are pushed or pulled using the
+pubsub sync module. For example:
+
+::
+
+ {"Records":[
+ {
+ "eventVersion":"2.1",
+ "eventSource":"ceph:s3",
+ "awsRegion":"us-east-1",
+ "eventTime":"2019-11-22T13:47:35.124724Z",
+ "eventName":"ObjectCreated:Put",
+ "userIdentity":{
+ "principalId":"tester"
+ },
+ "requestParameters":{
+ "sourceIPAddress":""
+ },
+ "responseElements":{
+ "x-amz-request-id":"503a4c37-85eb-47cd-8681-2817e80b4281.5330.903595",
+ "x-amz-id-2":"14d2-zone1-zonegroup1"
+ },
+ "s3":{
+ "s3SchemaVersion":"1.0",
+ "configurationId":"mynotif1",
+ "bucket":{
+ "name":"mybucket1",
+ "ownerIdentity":{
+ "principalId":"tester"
+ },
+ "arn":"arn:aws:s3:us-east-1::mybucket1",
+ "id":"503a4c37-85eb-47cd-8681-2817e80b4281.5332.38"
+ },
+ "object":{
+ "key":"myimage1.jpg",
+ "size":"1024",
+ "eTag":"37b51d194a7513e45b56f6524f2d51f2",
+ "versionId":"",
+ "sequencer": "F7E6D75DC742D108",
+ "metadata":[],
+ "tags":[]
+ }
+ },
+ "eventId":"",
+ "opaqueData":"me@example.com"
+ }
+ ]}
+
+- awsRegion: The zonegroup.
+- eventTime: The timestamp, indicating when the event was triggered.
+- eventName: For the list of supported events see: `S3 Notification
+ Compatibility`_. Note that eventName values do not start with the `s3:`
+ prefix.
+- userIdentity.principalId: The user that triggered the change.
+- requestParameters.sourceIPAddress: not supported
+- responseElements.x-amz-request-id: The request ID of the original change.
+- responseElements.x_amz_id_2: The RGW on which the change was made.
+- s3.configurationId: The notification ID that created the event.
+- s3.bucket.name: The name of the bucket.
+- s3.bucket.ownerIdentity.principalId: The owner of the bucket.
+- s3.bucket.arn: The ARN of the bucket.
+- s3.bucket.id: The ID of the bucket. (This is an extension to the S3
+ notification API.)
+- s3.object.key: The object key.
+- s3.object.size: The object size.
+- s3.object.eTag: The object etag.
+- s3.object.versionId: The object version, if the bucket is versioned. When a
+ copy is made, it includes the version of the target object. When a delete
+ marker is created, it includes the version of the delete marker.
+- s3.object.sequencer: The monotonically-increasing identifier of the "change
+ per object" (hexadecimal format).
+- s3.object.metadata: Any metadata set on the object that is sent as
+ ``x-amz-meta-`` (that is, any metadata set on the object that is sent as an
+ extension to the S3 notification API).
+- s3.object.tags: Any tags set on the object. (This is an extension to the S3
+ notification API.)
+- s3.eventId: The unique ID of the event, which could be used for acking. (This
+ is an extension to the S3 notification API.)
+- s3.opaqueData: This means that "opaque data" is set in the topic configuration
+ and is added to all notifications triggered by the topic. (This is an
+ extension to the S3 notification API.)
+
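+As an illustration, a minimal HTTP push endpoint for inspecting these
+events can be written with nothing but the Python standard library
+(the port is arbitrary; a real deployment would use a proper web
+server):
+
+.. code-block:: python
+
+   from http.server import BaseHTTPRequestHandler, HTTPServer
+   import json
+
+   class Handler(BaseHTTPRequestHandler):
+       def do_POST(self):
+           body = self.rfile.read(int(self.headers['Content-Length']))
+           for record in json.loads(body).get('Records', []):
+               print(record['eventName'], record['s3']['object']['key'])
+           # a 200 response acks the delivery; a persistent topic
+           # would otherwise keep retrying the push
+           self.send_response(200)
+           self.end_headers()
+
+   HTTPServer(('', 8080), Handler).serve_forever()
+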
+.. _PubSub Module : ../pubsub-module
+.. _S3 Notification Compatibility: ../s3-notification-compatibility
+.. _AWS Create Topic: https://docs.aws.amazon.com/sns/latest/api/API_CreateTopic.html
+.. _Bucket Operations: ../s3/bucketops
+.. _S3 CloudEvents Spec: https://github.com/cloudevents/spec/blob/main/cloudevents/adapters/aws-s3.md
diff --git a/doc/radosgw/oidc.rst b/doc/radosgw/oidc.rst
new file mode 100644
index 000000000..46593f1d8
--- /dev/null
+++ b/doc/radosgw/oidc.rst
@@ -0,0 +1,97 @@
+===============================
+ OpenID Connect Provider in RGW
+===============================
+
+An entity describing the OpenID Connect Provider needs to be created in RGW, in order to establish trust between the two.
+
+REST APIs for Manipulating an OpenID Connect Provider
+=====================================================
+
+The following REST APIs can be used for creating and managing an OpenID Connect Provider entity in RGW.
+
+In order to invoke the REST admin APIs, a user with admin caps needs to be created.
+
+.. code-block:: bash
+
+ radosgw-admin --uid TESTER --display-name "TestUser" --access_key TESTER --secret test123 user create
+ radosgw-admin caps add --uid="TESTER" --caps="oidc-provider=*"
+
+
+CreateOpenIDConnectProvider
+---------------------------------
+
+Create an OpenID Connect Provider entity in RGW
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``ClientIDList.member.N``
+
+:Description: List of client IDs that need access to S3 resources.
+:Type: Array of Strings
+
+``ThumbprintList.member.N``
+
+:Description: List of OpenID Connect IDP's server certificates' thumbprints. A maximum of 5 thumbprints is allowed.
+:Type: Array of Strings
+
+``Url``
+
+:Description: URL of the IDP.
+:Type: String
+
+
+Example::
+
+    POST "<hostname>?Action=CreateOpenIDConnectProvider
+    &ThumbprintList.list.1=F7D7B3515DD0D319DD219A43A9EA727AD6065287
+    &ClientIDList.list.1=app-profile-jsp
+    &Url=http://localhost:8080/auth/realms/quickstart
+
+
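+A sketch of the same call made from Python with boto3's IAM client
+pointed at RGW, using the TESTER credentials created above (the
+endpoint and region name are hypothetical):
+
+.. code-block:: python
+
+   import boto3
+
+   iam = boto3.client(
+       'iam',
+       endpoint_url='http://rgw.example.com:8000',
+       aws_access_key_id='TESTER',
+       aws_secret_access_key='test123',
+       region_name='default')
+
+   resp = iam.create_open_id_connect_provider(
+       Url='http://localhost:8080/auth/realms/quickstart',
+       ClientIDList=['app-profile-jsp'],
+       ThumbprintList=['F7D7B3515DD0D319DD219A43A9EA727AD6065287'])
+   print(resp['OpenIDConnectProviderArn'])
+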
+DeleteOpenIDConnectProvider
+---------------------------
+
+Deletes an OpenID Connect Provider entity in RGW
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``OpenIDConnectProviderArn``
+
+:Description: ARN of the IDP which is returned by the Create API.
+:Type: String
+
+Example::
+
+    POST "<hostname>?Action=DeleteOpenIDConnectProvider
+    &OpenIDConnectProviderArn=arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/quickstart
+
+
+GetOpenIDConnectProvider
+---------------------------
+
+Gets information about an IDP.
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``OpenIDConnectProviderArn``
+
+:Description: ARN of the IDP which is returned by the Create API.
+:Type: String
+
+Example::
+
+    POST "<hostname>?Action=GetOpenIDConnectProvider
+    &OpenIDConnectProviderArn=arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/quickstart
+
+ListOpenIDConnectProviders
+--------------------------
+
+Lists information about all IDPs
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+None
+
+Example::
+
+    POST "<hostname>?Action=ListOpenIDConnectProviders
diff --git a/doc/radosgw/opa.rst b/doc/radosgw/opa.rst
new file mode 100644
index 000000000..f1b76b5ef
--- /dev/null
+++ b/doc/radosgw/opa.rst
@@ -0,0 +1,72 @@
+==============================
+Open Policy Agent Integration
+==============================
+
+Open Policy Agent (OPA) is a lightweight general-purpose policy engine
+that can be co-located with a service. OPA can be integrated as a
+sidecar, host-level daemon, or library.
+
+Services can offload policy decisions to OPA by executing queries. Hence,
+policy enforcement can be decoupled from policy decisions.
+
+Configure OPA
+=============
+
+To configure OPA, load custom policies into OPA that control the resources users
+are allowed to access. Relevant data or context can also be loaded into OPA to make decisions.
+
+Policies and data can be loaded into OPA in the following ways:
+
+* OPA's RESTful APIs
+* OPA's *bundle* feature that downloads policies and data from remote HTTP servers
+* Filesystem
+
+Configure the Ceph Object Gateway
+=================================
+
+The following configuration options are available for OPA integration::
+
+ rgw use opa authz = {use opa server to authorize client requests}
+ rgw opa url = {opa server url:opa server port}
+ rgw opa token = {opa bearer token}
+ rgw opa verify ssl = {verify opa server ssl certificate}
+
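+For example, a hypothetical gateway section enabling OPA authorization
+might look like the following (the host, port, and token are
+placeholders, and the URL is assumed to include the policy path used
+in the request example below)::
+
+    [client.radosgw.gateway]
+    rgw use opa authz = true
+    rgw opa url = opa.example.com:8181/v1/data/ceph/authz
+    rgw opa token = mysecrettoken
+    rgw opa verify ssl = false
+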
+How the RGW-OPA integration works
+=====================================
+
+After a user is authenticated, OPA can be used to check if the user is authorized
+to perform the given action on the resource. OPA responds with an allow or deny
+decision, which is sent back to RGW; RGW then enforces the decision.
+
+Example request::
+
+ POST /v1/data/ceph/authz HTTP/1.1
+ Host: opa.example.com:8181
+ Content-Type: application/json
+
+ {
+ "input": {
+ "method": "GET",
+ "subuser": "subuser",
+ "user_info": {
+ "user_id": "john",
+ "display_name": "John"
+ },
+ "bucket_info": {
+ "bucket": {
+ "name": "Testbucket",
+ "bucket_id": "testbucket"
+ },
+ "owner": "john"
+ }
+ }
+ }
+
+Response::
+
+ {"result": true}
+
+The above is a sample request sent to OPA, containing information about the
+user, the resource, and the action to be performed on the resource. Based on the policies
+and data loaded into OPA, it will decide whether the request should be allowed or denied.
+In the sample request, RGW makes a POST request to the endpoint */v1/data/ceph/authz*,
+where *ceph* is the package name and *authz* is the rule name.
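+
+For reference, a minimal policy matching the request above could look
+like the following Rego sketch (package ``ceph``, rule ``authz``); it
+allows a request only when the requesting user owns the bucket, and is
+an illustration rather than a recommended policy::
+
+    package ceph
+
+    default authz = false
+
+    authz {
+        input.user_info.user_id == input.bucket_info.owner
+    }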
diff --git a/doc/radosgw/orphans.rst b/doc/radosgw/orphans.rst
new file mode 100644
index 000000000..9a77d60de
--- /dev/null
+++ b/doc/radosgw/orphans.rst
@@ -0,0 +1,115 @@
+==================================
+Orphan List and Associated Tooling
+==================================
+
+.. versionadded:: Luminous
+
+.. contents::
+
+Orphans are RADOS objects that are left behind after their associated
+RGW objects are removed. Normally these RADOS objects are removed
+automatically, either immediately or through a process known as
+"garbage collection". Over the history of RGW, however, there may have
+been bugs that prevented these RADOS objects from being deleted, and
+these RADOS objects may be consuming space on the Ceph cluster without
+being of any use. From the perspective of RGW, we call such RADOS
+objects "orphans".
+
+Orphans Find -- DEPRECATED
+--------------------------
+
+The ``radosgw-admin`` tool has three subcommands to help manage
+orphans; however, these subcommands are (or will soon be)
+deprecated. These subcommands are:
+
+::
+
+   # radosgw-admin orphans find ...
+   # radosgw-admin orphans finish ...
+   # radosgw-admin orphans list-jobs ...
+
+There are two key problems with these subcommands, however. First,
+these subcommands have not been actively maintained and therefore have
+not tracked RGW as it has evolved in terms of features and updates. As
+a result the confidence that these subcommands can accurately identify
+true orphans is presently low.
+
+Second, these subcommands store intermediate results on the cluster
+itself. This can be problematic when cluster administrators are
+confronting insufficient storage space and want to remove orphans as a
+means of addressing the issue. The intermediate results could strain
+the existing cluster storage capacity even further.
+
+For these reasons "orphans find" has been deprecated.
+
+Orphan List
+-----------
+
+Because "orphans find" has been deprecated, RGW now includes an
+additional tool -- 'rgw-orphan-list'. When run, it will list the
+available pools and prompt the user to enter the name of the data
+pool. At that point the tool will, perhaps after an extended period of
+time, produce a local file containing the RADOS objects from the
+designated pool that appear to be orphans. The administrator is free
+to examine this file and then decide on a course of action, perhaps
+removing those RADOS objects from the designated pool.
+
+All intermediate results are stored on the local file system rather
+than the Ceph cluster. So running the 'rgw-orphan-list' tool should
+have no appreciable impact on the amount of cluster storage consumed.
+
+WARNING: Experimental Status
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The 'rgw-orphan-list' tool is new and therefore currently considered
+experimental. The list of orphans produced should be "sanity checked"
+before being used for a large delete operation.
+
+WARNING: Specifying a Data Pool
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If a pool other than an RGW data pool is specified, the results of the
+tool will be erroneous. All RADOS objects found on such a pool will
+falsely be designated as orphans.
+
+WARNING: Unindexed Buckets
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+RGW allows for unindexed buckets, that is, buckets that do not maintain
+an index of their contents. This is not a typical configuration, but
+it is supported. Because the 'rgw-orphan-list' tool uses the bucket
+indices to determine what RADOS objects should exist, objects in the
+unindexed buckets will falsely be listed as orphans.
+
+
+RADOS List
+----------
+
+One of the sub-steps in computing a list of orphans is to map each RGW
+object into its corresponding set of RADOS objects. This is done using
+a subcommand of 'radosgw-admin'.
+
+::
+
+   # radosgw-admin bucket radoslist [--bucket={bucket-name}]
+
+The subcommand will produce a list of RADOS objects that support all
+of the RGW objects. If a bucket is specified, then the subcommand will
+only produce a list of RADOS objects that correspond to the RGW
+objects in the specified bucket.
+
+Note: Shared Bucket Markers
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Some administrators will be aware of the coding schemes used to name
+the RADOS objects that correspond to RGW objects, which include a
+"marker" unique to a given bucket.
+
+RADOS objects that correspond with the contents of one RGW bucket,
+however, may contain a marker that specifies a different bucket. This
+behavior is a consequence of the "shallow copy" optimization used by
+RGW. When larger objects are copied from bucket to bucket, only the
+"head" objects are actually copied, and the tail objects are
+shared. Those shared objects will contain the marker of the original
+bucket.
+
diff --git a/doc/radosgw/placement.rst b/doc/radosgw/placement.rst
new file mode 100644
index 000000000..4931cc87d
--- /dev/null
+++ b/doc/radosgw/placement.rst
@@ -0,0 +1,250 @@
+==================================
+Pool Placement and Storage Classes
+==================================
+
+.. contents::
+
+Placement Targets
+=================
+
+.. versionadded:: Jewel
+
+Placement targets control which `Pools`_ are associated with a particular
+bucket. A bucket's placement target is selected on creation, and cannot be
+modified. The ``radosgw-admin bucket stats`` command will display its
+``placement_rule``.
+
+The zonegroup configuration contains a list of placement targets with an
+initial target named ``default-placement``. The zone configuration then maps
+each zonegroup placement target name onto its local storage. This zone
+placement information includes the ``index_pool`` name for the bucket index,
+the ``data_extra_pool`` name for metadata about incomplete multipart uploads,
+and a ``data_pool`` name for each storage class.
+
+.. _storage_classes:
+
+Storage Classes
+===============
+
+.. versionadded:: Nautilus
+
+Storage classes are used to customize the placement of object data. S3 Bucket
+Lifecycle rules can automate the transition of objects between storage classes.
+
+Storage classes are defined in terms of placement targets. Each zonegroup
+placement target lists its available storage classes with an initial class
+named ``STANDARD``. The zone configuration is responsible for providing a
+``data_pool`` pool name for each of the zonegroup's storage classes.
+
+Zonegroup/Zone Configuration
+============================
+
+Placement configuration is performed with ``radosgw-admin`` commands on
+the zonegroups and zones.
+
+The zonegroup placement configuration can be queried with:
+
+::
+
+ $ radosgw-admin zonegroup get
+ {
+ "id": "ab01123f-e0df-4f29-9d71-b44888d67cd5",
+ "name": "default",
+ "api_name": "default",
+ ...
+ "placement_targets": [
+ {
+ "name": "default-placement",
+ "tags": [],
+ "storage_classes": [
+ "STANDARD"
+ ]
+ }
+ ],
+ "default_placement": "default-placement",
+ ...
+ }
+
+The zone placement configuration can be queried with:
+
+::
+
+ $ radosgw-admin zone get
+ {
+ "id": "557cdcee-3aae-4e9e-85c7-2f86f5eddb1f",
+ "name": "default",
+ "domain_root": "default.rgw.meta:root",
+ ...
+ "placement_pools": [
+ {
+ "key": "default-placement",
+ "val": {
+ "index_pool": "default.rgw.buckets.index",
+ "storage_classes": {
+ "STANDARD": {
+ "data_pool": "default.rgw.buckets.data"
+ }
+ },
+ "data_extra_pool": "default.rgw.buckets.non-ec",
+ "index_type": 0
+ }
+ }
+ ],
+ ...
+ }
+
+.. note:: If you have not done any previous `Multisite Configuration`_,
+ a ``default`` zone and zonegroup are created for you, and changes
+ to the zone/zonegroup will not take effect until the Ceph Object
+ Gateways are restarted. If you have created a realm for multisite,
+ the zone/zonegroup changes will take effect once the changes are
+ committed with ``radosgw-admin period update --commit``.
+
+Adding a Placement Target
+-------------------------
+
+To create a new placement target named ``temporary``, start by adding it to
+the zonegroup:
+
+::
+
+ $ radosgw-admin zonegroup placement add \
+ --rgw-zonegroup default \
+ --placement-id temporary
+
+Then provide the zone placement info for that target:
+
+::
+
+ $ radosgw-admin zone placement add \
+ --rgw-zone default \
+ --placement-id temporary \
+ --data-pool default.rgw.temporary.data \
+ --index-pool default.rgw.temporary.index \
+ --data-extra-pool default.rgw.temporary.non-ec
+
+Adding a Storage Class
+----------------------
+
+To add a new storage class named ``GLACIER`` to the ``default-placement`` target,
+start by adding it to the zonegroup:
+
+::
+
+ $ radosgw-admin zonegroup placement add \
+ --rgw-zonegroup default \
+ --placement-id default-placement \
+ --storage-class GLACIER
+
+Then provide the zone placement info for that storage class:
+
+::
+
+ $ radosgw-admin zone placement add \
+ --rgw-zone default \
+ --placement-id default-placement \
+ --storage-class GLACIER \
+ --data-pool default.rgw.glacier.data \
+ --compression lz4
+
+Customizing Placement
+=====================
+
+Default Placement
+-----------------
+
+By default, new buckets will use the zonegroup's ``default_placement`` target.
+This zonegroup setting can be changed with:
+
+::
+
+ $ radosgw-admin zonegroup placement default \
+ --rgw-zonegroup default \
+ --placement-id new-placement
+
+User Placement
+--------------
+
+A Ceph Object Gateway user can override the zonegroup's default placement
+target by setting a non-empty ``default_placement`` field in the user info.
+Similarly, the ``default_storage_class`` can override the ``STANDARD``
+storage class applied to objects by default.
+
+::
+
+ $ radosgw-admin user info --uid testid
+ {
+ ...
+ "default_placement": "",
+ "default_storage_class": "",
+ "placement_tags": [],
+ ...
+ }
+
+If a zonegroup's placement target contains any ``tags``, users will be unable
+to create buckets with that placement target unless their user info contains
+at least one matching tag in its ``placement_tags`` field. This can be useful
+to restrict access to certain types of storage.
+
+The ``radosgw-admin`` command can modify these fields directly with:
+
+::
+
+ $ radosgw-admin user modify \
+ --uid <user-id> \
+ --placement-id <default-placement-id> \
+ --storage-class <default-storage-class> \
+ --tags <tag1,tag2>
+
+.. _s3_bucket_placement:
+
+S3 Bucket Placement
+-------------------
+
+When creating a bucket with the S3 protocol, a placement target can be
+provided as part of the LocationConstraint to override the default placement
+targets from the user and zonegroup.
+
+Normally, the LocationConstraint must match the zonegroup's ``api_name``:
+
+::
+
+ <LocationConstraint>default</LocationConstraint>
+
+A custom placement target can be added to the ``api_name`` following a colon:
+
+::
+
+ <LocationConstraint>default:new-placement</LocationConstraint>
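+
+For example, here is a minimal boto3 sketch of creating a bucket with such a
+LocationConstraint. The endpoint and credentials are placeholders for your own
+environment:
+
+.. code-block:: python
+
+    import boto3
+
+    # Placeholder endpoint and credentials for a local RGW instance.
+    s3 = boto3.client('s3',
+                      endpoint_url='http://localhost:8000',
+                      aws_access_key_id='ACCESS_KEY',
+                      aws_secret_access_key='SECRET_KEY')
+
+    # 'default' is the zonegroup api_name; 'new-placement' is the placement target.
+    s3.create_bucket(Bucket='my-bucket',
+                     CreateBucketConfiguration={
+                         'LocationConstraint': 'default:new-placement'})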
+
+Swift Bucket Placement
+----------------------
+
+When creating a bucket with the Swift protocol, a placement target can be
+provided in the HTTP header ``X-Storage-Policy``:
+
+::
+
+ X-Storage-Policy: new-placement
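+
+A similar sketch with python-swiftclient, where the auth URL, user and key are
+placeholders:
+
+.. code-block:: python
+
+    import swiftclient
+
+    # Placeholder v1 auth credentials for the RGW Swift API.
+    conn = swiftclient.Connection(authurl='http://localhost:8000/auth/v1.0',
+                                  user='testuser:swift',
+                                  key='SWIFT_SECRET_KEY')
+
+    # The X-Storage-Policy header selects the placement target.
+    conn.put_container('my-container',
+                       headers={'X-Storage-Policy': 'new-placement'})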
+
+Using Storage Classes
+=====================
+
+All placement targets have a ``STANDARD`` storage class which is applied to
+new objects by default. The user can override this default with its
+``default_storage_class``.
+
+To create an object in a non-default storage class, provide that storage class
+name in an HTTP header with the request. The S3 protocol uses the
+``X-Amz-Storage-Class`` header, while the Swift protocol uses the
+``X-Object-Storage-Class`` header.
+
+When using AWS S3 SDKs such as ``boto3``, the name of a non-default
+storage class must be one of the storage class names allowed by AWS S3;
+otherwise the SDK will drop the request and raise an exception.
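+
+For example, here is a boto3 sketch of uploading an object into the ``GLACIER``
+storage class defined earlier (endpoint and credentials are placeholders):
+
+.. code-block:: python
+
+    import boto3
+
+    s3 = boto3.client('s3',
+                      endpoint_url='http://localhost:8000',
+                      aws_access_key_id='ACCESS_KEY',
+                      aws_secret_access_key='SECRET_KEY')
+
+    # GLACIER is one of the storage class names accepted by the AWS SDKs, so
+    # boto3 passes it through in the X-Amz-Storage-Class header.
+    s3.put_object(Bucket='my-bucket',
+                  Key='archived-object',
+                  Body=b'payload',
+                  StorageClass='GLACIER')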
+
+S3 Object Lifecycle Management can then be used to move object data between
+storage classes using ``Transition`` actions.
+
+.. _`Pools`: ../pools
+.. _`Multisite Configuration`: ../multisite
diff --git a/doc/radosgw/pools.rst b/doc/radosgw/pools.rst
new file mode 100644
index 000000000..bb1246c1f
--- /dev/null
+++ b/doc/radosgw/pools.rst
@@ -0,0 +1,57 @@
+=====
+Pools
+=====
+
+The Ceph Object Gateway uses several pools for its various storage needs,
+which are listed in the Zone object (see ``radosgw-admin zone get``). A
+single zone named ``default`` is created automatically with pool names
+starting with ``default.rgw.``, but a `Multisite Configuration`_ will have
+multiple zones.
+
+Tuning
+======
+
+When ``radosgw`` first tries to operate on a zone pool that does not
+exist, it will create that pool with the default values from
+``osd pool default pg num`` and ``osd pool default pgp num``. These defaults
+are sufficient for some pools, but others (especially those listed in
+``placement_pools`` for the bucket index and data) will require additional
+tuning. We recommend using the `Ceph Placement Group’s per Pool
+Calculator <https://old.ceph.com/pgcalc/>`__ to calculate a suitable number of
+placement groups for these pools. See
+`Pools <http://docs.ceph.com/en/latest/rados/operations/pools/#pools>`__
+for details on pool creation.
+
+.. _radosgw-pool-namespaces:
+
+Pool Namespaces
+===============
+
+.. versionadded:: Luminous
+
+Pool names particular to a zone follow the naming convention
+``{zone-name}.pool-name``. For example, a zone named ``us-east`` will
+have the following pools:
+
+- ``.rgw.root``
+
+- ``us-east.rgw.control``
+
+- ``us-east.rgw.meta``
+
+- ``us-east.rgw.log``
+
+- ``us-east.rgw.buckets.index``
+
+- ``us-east.rgw.buckets.data``
+
+The zone definitions list several more pools than that, but many of those
+are consolidated through the use of rados namespaces. For example, all of
+the following pool entries use namespaces of the ``us-east.rgw.meta`` pool::
+
+ "user_keys_pool": "us-east.rgw.meta:users.keys",
+ "user_email_pool": "us-east.rgw.meta:users.email",
+ "user_swift_pool": "us-east.rgw.meta:users.swift",
+ "user_uid_pool": "us-east.rgw.meta:users.uid",
+
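+Here is a short sketch using the ``rados`` Python bindings to list the objects
+of one such namespace; the pool and namespace names follow the ``us-east``
+example above:
+
+.. code-block:: python
+
+    import rados
+
+    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
+    cluster.connect()
+
+    ioctx = cluster.open_ioctx('us-east.rgw.meta')
+    ioctx.set_namespace('users.uid')   # one of the namespaces listed above
+
+    for obj in ioctx.list_objects():
+        print(obj.key)
+
+    ioctx.close()
+    cluster.shutdown()
+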
+.. _`Multisite Configuration`: ../multisite
diff --git a/doc/radosgw/pubsub-module.rst b/doc/radosgw/pubsub-module.rst
new file mode 100644
index 000000000..8a45f4105
--- /dev/null
+++ b/doc/radosgw/pubsub-module.rst
@@ -0,0 +1,646 @@
+==================
+PubSub Sync Module
+==================
+
+.. versionadded:: Nautilus
+
+.. contents::
+
+This sync module provides a publish and subscribe mechanism for the object store modification
+events. Events are published into predefined topics. Topics can be subscribed to, and events
+can be pulled from them. Events need to be acked. Also, events will expire and disappear
+after a period of time.
+
+A push notification mechanism exists too, currently supporting HTTP,
+AMQP0.9.1 and Kafka endpoints. In this case, the events are pushed to an endpoint on top of storing them in Ceph. If events should only be pushed to an endpoint
+and do not need to be stored in Ceph, the `Bucket Notification`_ mechanism should be used instead of the pubsub sync module.
+
+A user can create different topics. A topic entity is defined by its name and is per tenant. A
+user can only associate its topics (via notification configuration) with buckets it owns.
+
+In order to publish events for a specific bucket, a notification entity needs to be created. A
+notification can be created on a subset of event types, or for all event types (default).
+There can be multiple notifications for any specific topic, and the same topic could be used for multiple notifications.
+
+A subscription to a topic can also be defined. There can be multiple subscriptions for any
+specific topic.
+
+A REST API has been defined to provide configuration and control interfaces for the pubsub
+mechanisms. This API has two flavors, one S3-compatible and one not. The two flavors can be used
+together, although it is recommended to use the S3-compatible one.
+The S3-compatible API is similar to the one used in the bucket notification mechanism.
+
+Events are stored as RGW objects in a special bucket, under a special user (pubsub control user). Events cannot
+be accessed directly, but need to be pulled and acked using the new REST API.
+
+.. toctree::
+ :maxdepth: 1
+
+ S3 Bucket Notification Compatibility <s3-notification-compatibility>
+
+.. note:: To enable the bucket notifications API, the `rgw_enable_apis` configuration parameter should contain: "notifications".
+
+PubSub Zone Configuration
+-------------------------
+
+The pubsub sync module requires the creation of a new zone in a :ref:`multisite` environment.
+First, a master zone must exist (see: :ref:`master-zone-label`),
+then a secondary zone should be created (see :ref:`secondary-zone-label`).
+In the creation of the secondary zone, its tier type must be set to ``pubsub``:
+
+::
+
+ # radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
+ --rgw-zone={zone-name} \
+ --endpoints={http://fqdn}[,{http://fqdn}] \
+ --sync-from-all=0 \
+ --sync-from={master-zone-name} \
+ --tier-type=pubsub
+
+
+PubSub Zone Configuration Parameters
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+::
+
+   {
+        "tenant": <tenant>,               # default: <empty>
+        "uid": <uid>,                     # default: "pubsub"
+        "data_bucket_prefix": <prefix>,   # default: "pubsub-"
+        "data_oid_prefix": <prefix>,
+        "events_retention_days": <days>   # default: 7
+   }
+
+* ``tenant`` (string)
+
+The tenant of the pubsub control user.
+
+* ``uid`` (string)
+
+The uid of the pubsub control user.
+
+* ``data_bucket_prefix`` (string)
+
+The prefix of the name of the bucket that will be created to store the events of a specific topic.
+
+* ``data_oid_prefix`` (string)
+
+The oid prefix for the stored events.
+
+* ``events_retention_days`` (integer)
+
+How many days to keep events that weren't acked.
+
+Configuring Parameters via CLI
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The tier configuration could be set using the following command:
+
+::
+
+ # radosgw-admin zone modify --rgw-zonegroup={zone-group-name} \
+ --rgw-zone={zone-name} \
+ --tier-config={key}={val}[,{key}={val}]
+
+Where the ``key`` in the configuration specifies the configuration variable that needs to be updated (from the list above), and
+the ``val`` specifies its new value. For example, setting the pubsub control user ``uid`` to ``user_ps``:
+
+::
+
+ # radosgw-admin zone modify --rgw-zonegroup={zone-group-name} \
+ --rgw-zone={zone-name} \
+                              --tier-config=uid=user_ps
+
+A configuration field can be removed by using ``--tier-config-rm={key}``.
+
+
+Topic and Subscription Management via CLI
+-----------------------------------------
+
+The configuration of all topics associated with a tenant can be fetched using the following command:
+
+::
+
+ # radosgw-admin topic list [--tenant={tenant}]
+
+
+The configuration of a specific topic can be fetched using:
+
+::
+
+ # radosgw-admin topic get --topic={topic-name} [--tenant={tenant}]
+
+
+And removed using:
+
+::
+
+ # radosgw-admin topic rm --topic={topic-name} [--tenant={tenant}]
+
+
+The configuration of a subscription can be fetched using:
+
+::
+
+ # radosgw-admin subscription get --subscription={topic-name} [--tenant={tenant}]
+
+And removed using:
+
+::
+
+ # radosgw-admin subscription rm --subscription={topic-name} [--tenant={tenant}]
+
+
+To fetch all of the events stored in a subscription, use:
+
+::
+
+ # radosgw-admin subscription pull --subscription={topic-name} [--marker={last-marker}] [--tenant={tenant}]
+
+
+To ack (and remove) an event from a subscription, use:
+
+::
+
+ # radosgw-admin subscription ack --subscription={topic-name} --event-id={event-id} [--tenant={tenant}]
+
+
+PubSub Performance Stats
+-------------------------
+The same counters are shared between the pubsub sync module and the notification mechanism.
+
+- ``pubsub_event_triggered``: running counter of events with at least one topic associated with them
+- ``pubsub_event_lost``: running counter of events that had topics and subscriptions associated with them but that were not stored or pushed to any of the subscriptions
+- ``pubsub_store_ok``: running counter, for all subscriptions, of stored events
+- ``pubsub_store_fail``: running counter, for all subscriptions, of events failed to be stored
+- ``pubsub_push_ok``: running counter, for all subscriptions, of events successfully pushed to their endpoint
+- ``pubsub_push_fail``: running counter, for all subscriptions, of events failed to be pushed to their endpoint
+- ``pubsub_push_pending``: gauge value of events pushed to an endpoint but not acked or nacked yet
+
+.. note::
+
+   ``pubsub_event_triggered`` and ``pubsub_event_lost`` are incremented per event, while
+   ``pubsub_store_ok``, ``pubsub_store_fail``, ``pubsub_push_ok`` and ``pubsub_push_fail`` are incremented per store/push action on each subscription.
+
+PubSub REST API
+---------------
+
+.. tip:: PubSub REST calls, and only these calls, should be sent to an RGW that belongs to a PubSub zone
+
+Topics
+~~~~~~
+
+.. _radosgw-create-a-topic:
+
+Create a Topic
+``````````````
+
+This will create a new topic. Topic creation is needed for both flavors of the API.
+Optionally, the topic can be provided with push endpoint parameters that will be used later,
+when an S3-compatible notification is created.
+Upon a successful request, the response will include the topic ARN, which can later be used to reference this topic in an S3-compatible notification request.
+To update a topic, use the same command used for topic creation, with the name of an existing topic and different endpoint values.
+
+.. tip:: Any S3-compatible notification already associated with the topic needs to be re-created for the topic update to take effect
+
+::
+
+ PUT /topics/<topic-name>[?OpaqueData=<opaque data>][&push-endpoint=<endpoint>[&amqp-exchange=<exchange>][&amqp-ack-level=none|broker|routable][&verify-ssl=true|false][&kafka-ack-level=none|broker][&use-ssl=true|false][&ca-location=<file path>]]
+
+Request parameters:
+
+- push-endpoint: URI of an endpoint to send push notification to
+- OpaqueData: opaque data is set in the topic configuration and added to all notifications triggered by the topic
+
+The endpoint URI may include parameters depending on the type of endpoint:
+
+- HTTP endpoint
+
+  - URI: ``http[s]://<fqdn>[:<port>]``
+ - port defaults to: 80/443 for HTTP/S accordingly
+ - verify-ssl: indicate whether the server certificate is validated by the client or not ("true" by default)
+
+- AMQP0.9.1 endpoint
+
+ - URI: ``amqp[s]://[<user>:<password>@]<fqdn>[:<port>][/<vhost>]``
+ - user/password defaults to: guest/guest
+ - user/password may only be provided over HTTPS. Topic creation request will be rejected if not
+ - port defaults to: 5672/5671 for unencrypted/SSL-encrypted connections
+ - vhost defaults to: "/"
+ - verify-ssl: indicate whether the server certificate is validated by the client or not ("true" by default)
+ - if ``ca-location`` is provided, and secure connection is used, the specified CA will be used, instead of the default one, to authenticate the broker
+ - amqp-exchange: the exchanges must exist and be able to route messages based on topics (mandatory parameter for AMQP0.9.1). Different topics pointing to the same endpoint must use the same exchange
+  - amqp-ack-level: no end2end acking is required, as messages may persist in the broker before being delivered to their final destination. Three ack methods exist:
+
+ - "none": message is considered "delivered" if sent to broker
+ - "broker": message is considered "delivered" if acked by broker (default)
+ - "routable": message is considered "delivered" if broker can route to a consumer
+
+.. tip:: The topic-name (see :ref:`radosgw-create-a-topic`) is used for the AMQP topic ("routing key" for a topic exchange)
+
+- Kafka endpoint
+
+   - URI: ``kafka://[<user>:<password>@]<fqdn>[:<port>]``
+ - if ``use-ssl`` is set to "true", secure connection will be used for connecting with the broker ("false" by default)
+ - if ``ca-location`` is provided, and secure connection is used, the specified CA will be used, instead of the default one, to authenticate the broker
+ - user/password may only be provided over HTTPS. Topic creation request will be rejected if not
+ - user/password may only be provided together with ``use-ssl``, connection to the broker would fail if not
+ - port defaults to: 9092
+   - kafka-ack-level: no end2end acking is required, as messages may persist in the broker before being delivered to their final destination. Two ack methods exist:
+
+ - "none": message is considered "delivered" if sent to broker
+ - "broker": message is considered "delivered" if acked by broker (default)
+
+The topic ARN in the response will have the following format:
+
+::
+
+ arn:aws:sns:<zone-group>:<tenant>:<topic>
+
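+As an illustration, the following sketch creates a topic with this API using
+Python ``requests`` and the third-party ``requests-aws4auth`` package. It
+assumes that the pubsub zone authenticates requests with AWS SigV4, as the rest
+of RGW does; the endpoint, region and credentials are placeholders:
+
+.. code-block:: python
+
+    import requests
+    from requests_aws4auth import AWS4Auth
+
+    # Placeholder credentials; the endpoint must be an RGW in the pubsub zone.
+    auth = AWS4Auth('ACCESS_KEY', 'SECRET_KEY', 'default', 's3')
+
+    resp = requests.put('http://localhost:8000/topics/mytopic',
+                        params={'push-endpoint': 'http://consumer:8080'},
+                        auth=auth)
+    resp.raise_for_status()
+    print(resp.text)   # the response includes the topic ARN
+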
+Get Topic Information
+`````````````````````
+
+Returns information about a specific topic. This includes subscriptions to that topic, and push-endpoint information, if provided.
+
+::
+
+ GET /topics/<topic-name>
+
+Response will have the following format (JSON):
+
+::
+
+ {
+ "topic":{
+ "user":"",
+ "name":"",
+ "dest":{
+ "bucket_name":"",
+ "oid_prefix":"",
+ "push_endpoint":"",
+ "push_endpoint_args":"",
+ "push_endpoint_topic":"",
+ "stored_secret":"",
+ "persistent":""
+ },
+       "arn":"",
+ "opaqueData":""
+ },
+ "subs":[]
+ }
+
+- topic.user: name of the user that created the topic
+- name: name of the topic
+- dest.bucket_name: not used
+- dest.oid_prefix: not used
+- dest.push_endpoint: in case of S3-compliant notifications, this value will be used as the push-endpoint URL
+- if the push-endpoint URL contains user/password information, the request must be made over HTTPS, otherwise the topic get request will be rejected
+- dest.push_endpoint_args: in case of S3-compliant notifications, this value will be used as the push-endpoint args
+- dest.push_endpoint_topic: in case of S3-compliant notifications, this value will hold the topic name as sent to the endpoint (may be different than the internal topic name)
+- topic.arn: topic ARN
+- subs: list of subscriptions associated with this topic
+
+Delete Topic
+````````````
+
+::
+
+ DELETE /topics/<topic-name>
+
+Delete the specified topic.
+
+List Topics
+```````````
+
+List all topics associated with a tenant.
+
+::
+
+ GET /topics
+
+- if the push-endpoint URL of any of the topics contains user/password information, the request must be made over HTTPS, otherwise the topic list request will be rejected
+
+S3-Compliant Notifications
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Detailed under: `Bucket Operations`_.
+
+.. note::
+
+ - Notification creation will also create a subscription for pushing/pulling events
+   - The generated subscription's name will be the same as the notification Id, and can be used later to fetch and ack events with the subscription API.
+   - Notification deletion will delete all generated subscriptions
+   - In case bucket deletion implicitly deletes the notification,
+     the associated subscription will not be deleted automatically (any events of the deleted bucket can still be accessed),
+     and will have to be deleted explicitly with the subscription deletion API
+ - Filtering based on metadata (which is an extension to S3) is not supported, and such rules will be ignored
+ - Filtering based on tags (which is an extension to S3) is not supported, and such rules will be ignored
+
+
+Non S3-Compliant Notifications
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Create a Notification
+`````````````````````
+
+This will create a publisher for a specific bucket into a topic.
+
+::
+
+ PUT /notifications/bucket/<bucket>?topic=<topic-name>[&events=<event>[,<event>]]
+
+Request parameters:
+
+- topic-name: name of topic
+- event: event type (string), one of: ``OBJECT_CREATE``, ``OBJECT_DELETE``, ``DELETE_MARKER_CREATE``
+
+Delete Notification Information
+```````````````````````````````
+
+Delete publisher from a specific bucket into a specific topic.
+
+::
+
+ DELETE /notifications/bucket/<bucket>?topic=<topic-name>
+
+Request parameters:
+
+- topic-name: name of topic
+
+.. note:: When the bucket is deleted, any notification defined on it is also deleted
+
+List Notifications
+``````````````````
+
+List all topics with associated events defined on a bucket.
+
+::
+
+ GET /notifications/bucket/<bucket>
+
+Response will have the following format (JSON):
+
+::
+
+ {"topics":[
+ {
+ "topic":{
+ "user":"",
+ "name":"",
+ "dest":{
+ "bucket_name":"",
+ "oid_prefix":"",
+ "push_endpoint":"",
+ "push_endpoint_args":"",
+ "push_endpoint_topic":""
+           },
+ "arn":""
+ },
+ "events":[]
+ }
+ ]}
+
+Subscriptions
+~~~~~~~~~~~~~
+
+Create a Subscription
+`````````````````````
+
+Creates a new subscription.
+
+::
+
+ PUT /subscriptions/<sub-name>?topic=<topic-name>[&push-endpoint=<endpoint>[&amqp-exchange=<exchange>][&amqp-ack-level=none|broker|routable][&verify-ssl=true|false][&kafka-ack-level=none|broker][&ca-location=<file path>]]
+
+Request parameters:
+
+- topic-name: name of topic
+- push-endpoint: URI of endpoint to send push notification to
+
+The endpoint URI may include parameters depending on the type of endpoint:
+
+- HTTP endpoint
+
+  - URI: ``http[s]://<fqdn>[:<port>]``
+ - port defaults to: 80/443 for HTTP/S accordingly
+ - verify-ssl: indicate whether the server certificate is validated by the client or not ("true" by default)
+
+- AMQP0.9.1 endpoint
+
+ - URI: ``amqp://[<user>:<password>@]<fqdn>[:<port>][/<vhost>]``
+ - user/password defaults to : guest/guest
+ - port defaults to: 5672
+ - vhost defaults to: "/"
+ - amqp-exchange: the exchanges must exist and be able to route messages based on topics (mandatory parameter for AMQP0.9.1)
+   - amqp-ack-level: no end2end acking is required, as messages may persist in the broker before being delivered to their final destination. Three ack methods exist:
+
+ - "none": message is considered "delivered" if sent to broker
+ - "broker": message is considered "delivered" if acked by broker (default)
+ - "routable": message is considered "delivered" if broker can route to a consumer
+
+- Kafka endpoint
+
+   - URI: ``kafka://[<user>:<password>@]<fqdn>[:<port>]``
+ - if ``ca-location`` is provided, secure connection will be used for connection with the broker
+ - user/password may only be provided over HTTPS. Topic creation request will be rejected if not
+ - user/password may only be provided together with ``ca-location``. Topic creation request will be rejected if not
+ - port defaults to: 9092
+   - kafka-ack-level: no end2end acking is required, as messages may persist in the broker before being delivered to their final destination. Two ack methods exist:
+
+ - "none": message is considered "delivered" if sent to broker
+ - "broker": message is considered "delivered" if acked by broker (default)
+
+
+Get Subscription Information
+````````````````````````````
+
+Returns information about a specific subscription.
+
+::
+
+ GET /subscriptions/<sub-name>
+
+Response will have the following format (JSON):
+
+::
+
+ {
+ "user":"",
+ "name":"",
+ "topic":"",
+ "dest":{
+ "bucket_name":"",
+ "oid_prefix":"",
+ "push_endpoint":"",
+ "push_endpoint_args":"",
+ "push_endpoint_topic":""
+   },
+ "s3_id":""
+ }
+
+- user: name of the user that created the subscription
+- name: name of the subscription
+- topic: name of the topic the subscription is associated with
+- dest.bucket_name: name of the bucket storing the events
+- dest.oid_prefix: oid prefix for the events stored in the bucket
+- dest.push_endpoint: in case of S3-compliant notifications, this value will be used as the push-endpoint URL
+- if the push-endpoint URL contains user/password information, the request must be made over HTTPS, otherwise the subscription get request will be rejected
+- dest.push_endpoint_args: in case of S3-compliant notifications, this value will be used as the push-endpoint args
+- dest.push_endpoint_topic: in case of S3-compliant notifications, this value will hold the topic name as sent to the endpoint (may be different than the internal topic name)
+- s3_id: in case of S3-compliant notifications, this will hold the notification name that created the subscription
+
+Delete Subscription
+```````````````````
+
+Removes a subscription.
+
+::
+
+ DELETE /subscriptions/<sub-name>
+
+Events
+~~~~~~
+
+Pull Events
+```````````
+
+Pull events sent to a specific subscription.
+
+::
+
+ GET /subscriptions/<sub-name>?events[&max-entries=<max-entries>][&marker=<marker>]
+
+Request parameters:
+
+- marker: pagination marker for list of events, if not specified will start from the oldest
+- max-entries: max number of events to return
+
+The response will hold information on the current marker and whether there are more events not fetched:
+
+::
+
+ {"next_marker":"","is_truncated":"",...}
+
+
+The actual content of the response depends on how the subscription was created.
+If the subscription was created via an S3-compatible notification,
+the events will have an S3-compatible record format (JSON):
+
+::
+
+ {"Records":[
+ {
+        "eventVersion":"2.1",
+ "eventSource":"aws:s3",
+ "awsRegion":"",
+ "eventTime":"",
+ "eventName":"",
+ "userIdentity":{
+ "principalId":""
+ },
+ "requestParameters":{
+ "sourceIPAddress":""
+ },
+ "responseElements":{
+ "x-amz-request-id":"",
+ "x-amz-id-2":""
+ },
+ "s3":{
+ "s3SchemaVersion":"1.0",
+ "configurationId":"",
+ "bucket":{
+ "name":"",
+ "ownerIdentity":{
+ "principalId":""
+ },
+ "arn":"",
+ "id":""
+ },
+ "object":{
+ "key":"",
+ "size":"0",
+ "eTag":"",
+ "versionId":"",
+ "sequencer":"",
+ "metadata":[],
+ "tags":[]
+ }
+ },
+ "eventId":"",
+        "opaqueData":""
+ }
+ ]}
+
+- awsRegion: zonegroup
+- eventTime: timestamp indicating when the event was triggered
+- eventName: either ``s3:ObjectCreated:``, or ``s3:ObjectRemoved:``
+- userIdentity: not supported
+- requestParameters: not supported
+- responseElements: not supported
+- s3.configurationId: notification ID that created the subscription for the event
+- s3.bucket.name: name of the bucket
+- s3.bucket.ownerIdentity.principalId: owner of the bucket
+- s3.bucket.arn: ARN of the bucket
+- s3.bucket.id: Id of the bucket (an extension to the S3 notification API)
+- s3.object.key: object key
+- s3.object.size: not supported
+- s3.object.eTag: object etag
+- s3.object.versionId: object version in case of versioned bucket
+- s3.object.sequencer: monotonically increasing identifier of the change per object (hexadecimal format)
+- s3.object.metadata: not supported (an extension to the S3 notification API)
+- s3.object.tags: not supported (an extension to the S3 notification API)
+- s3.eventId: unique ID of the event, that could be used for acking (an extension to the S3 notification API)
+- s3.opaqueData: opaque data is set in the topic configuration and added to all notifications triggered by the topic (an extension to the S3 notification API)
+
+If the subscription was not created via an S3-compatible notification,
+the events will have the following event format (JSON):
+
+::
+
+ {"events":[
+ {
+ "id":"",
+ "event":"",
+ "timestamp":"",
+ "info":{
+ "attrs":{
+ "mtime":""
+ },
+ "bucket":{
+ "bucket_id":"",
+ "name":"",
+ "tenant":""
+ },
+ "key":{
+ "instance":"",
+ "name":""
+ }
+ }
+ }
+ ]}
+
+- id: unique ID of the event, that could be used for acking
+- event: one of: ``OBJECT_CREATE``, ``OBJECT_DELETE``, ``DELETE_MARKER_CREATE``
+- timestamp: timestamp indicating when the event was sent
+- info.attrs.mtime: timestamp indicating when the event was triggered
+- info.bucket.bucket_id: id of the bucket
+- info.bucket.name: name of the bucket
+- info.bucket.tenant: tenant the bucket belongs to
+- info.key.instance: object version in case of versioned bucket
+- info.key.name: object key
+
+Ack Event
+`````````
+
+Ack event so that it can be removed from the subscription history.
+
+::
+
+ POST /subscriptions/<sub-name>?ack&event-id=<event-id>
+
+Request parameters:
+
+- event-id: id of event to be acked
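+
+Putting pull and ack together, here is a sketch that drains a subscription and
+acks each event. It makes the same SigV4 assumptions as the topic-creation
+example above, and assumes the subscription was created via an S3-compatible
+notification, so that the pulled page carries the pagination fields alongside
+the ``Records`` array:
+
+.. code-block:: python
+
+    import requests
+    from requests_aws4auth import AWS4Auth
+    from urllib.parse import quote
+
+    auth = AWS4Auth('ACCESS_KEY', 'SECRET_KEY', 'default', 's3')
+    base = 'http://localhost:8000/subscriptions/mysub'
+
+    marker = ''
+    while True:
+        # Pull a page of events, continuing from the last marker.
+        resp = requests.get(f'{base}?events&max-entries=100&marker={marker}',
+                            auth=auth)
+        resp.raise_for_status()
+        page = resp.json()
+        for record in page.get('Records', []):
+            print(record['eventName'], record['s3']['object']['key'])
+            # Ack the event so it is removed from the subscription history.
+            requests.post(f"{base}?ack&event-id={quote(record['eventId'])}",
+                          auth=auth).raise_for_status()
+        if str(page.get('is_truncated', '')).lower() != 'true':
+            break
+        marker = page['next_marker']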
+
+.. _Bucket Notification : ../notifications
+.. _Bucket Operations: ../s3/bucketops
diff --git a/doc/radosgw/qat-accel.rst b/doc/radosgw/qat-accel.rst
new file mode 100644
index 000000000..4e661c63d
--- /dev/null
+++ b/doc/radosgw/qat-accel.rst
@@ -0,0 +1,94 @@
+===============================================
+QAT Acceleration for Encryption and Compression
+===============================================
+
+Intel QAT (QuickAssist Technology) can provide extended accelerated encryption
+and compression services by offloading the actual encryption and compression
+request(s) to the hardware QuickAssist accelerators, which are more efficient
+in terms of cost and power than general purpose CPUs for those specific
+compute-intensive workloads.
+
+See `QAT Support for Compression`_ and `QAT based Encryption for RGW`_.
+
+
+QAT in the Software Stack
+=========================
+
+Application developers can access QuickAssist features through the QAT API.
+The QAT API is the top-level API for QuickAssist technology, and enables easy
+interfacing between the customer application and the QuickAssist acceleration
+driver.
+
+The QAT API accesses the QuickAssist driver, which in turn drives the
+QuickAssist Accelerator hardware. The QuickAssist driver is responsible for
+exposing the acceleration services to the application software.
+
+A user can write directly to the QAT API, or QAT can be used via
+frameworks that have been enabled by others, including Intel (for example, zlib*,
+OpenSSL* libcrypto*, and the Linux* Kernel Crypto Framework).
+
+QAT Environment Setup
+=====================
+1. QuickAssist Accelerator hardware is necessary to make use of accelerated
+   encryption and compression services, and the QAT driver has to be loaded in
+   kernel space to drive the hardware.
+
+The driver package can be downloaded from `Intel Quickassist Technology`_.
+
+2. The implementation of QAT based encryption is based directly on the QAT API,
+   which is included in the driver package. QAT support for compression, however,
+   depends on the QATzip project, a user space library built on top of the QAT
+   API. At the time of writing, QATzip accelerates gzip compression and
+   decompression.
+
+See `QATzip`_.
+
+Implementation
+==============
+1. QAT based Encryption for RGW
+
+`OpenSSL support for RGW encryption`_ has been merged into Ceph, and Intel also
+provides a `QAT Engine`_ for OpenSSL. So, theoretically speaking, QAT based
+encryption in Ceph can be supported directly through OpenSSL + QAT Engine.
+
+However, the QAT Engine for OpenSSL currently supports chained operations only,
+so Ceph cannot utilize the QAT hardware features for crypto operations through
+the OpenSSL crypto plugin. As a result, a QAT plugin based on the native
+QAT API has been added to the crypto framework.
+
+2. QAT Support for Compression
+
+As mentioned above, QAT support for compression is based on the QATzip library
+in user space, which is designed to take full advantage of the performance
+provided by QuickAssist Technology. Unlike QAT based encryption, QAT based
+compression is supported through a tool class for QAT acceleration rather than
+a compressor plugin. The common tool class can transparently accelerate the
+existing compression types, but at the time of writing only the zlib compressor
+is supported. Users can therefore use it to speed up the zlib compressor, as
+long as the QAT hardware is available and QAT is capable of handling it.
+
+Configuration
+=============
+1. QAT based Encryption for RGW
+
+Edit the Ceph configuration file to make use of QAT based crypto plugin::
+
+ plugin crypto accelerator = crypto_qat
+
+2. QAT Support for Compression
+
+One CMake option has to be used to trigger QAT based compression::
+
+ -DWITH_QATZIP=ON
+
+Edit the Ceph configuration file to enable QAT support for compression::
+
+ qat compressor enabled=true
+
+
+.. _QAT Support for Compression: https://github.com/ceph/ceph/pull/19714
+.. _QAT based Encryption for RGW: https://github.com/ceph/ceph/pull/19386
+.. _Intel Quickassist Technology: https://01.org/intel-quickassist-technology
+.. _QATzip: https://github.com/intel/QATzip
+.. _OpenSSL support for RGW encryption: https://github.com/ceph/ceph/pull/15168
+.. _QAT Engine: https://github.com/intel/QAT_Engine
diff --git a/doc/radosgw/rgw-cache.rst b/doc/radosgw/rgw-cache.rst
new file mode 100644
index 000000000..9b4c96f1e
--- /dev/null
+++ b/doc/radosgw/rgw-cache.rst
@@ -0,0 +1,150 @@
+==========================
+RGW Data caching and CDN
+==========================
+
+.. versionadded:: Octopus
+
+.. contents::
+
+This feature adds to RGW the ability to securely cache objects and offload the workload from the cluster, using Nginx.
+After an object is accessed for the first time, it will be stored in the Nginx cache directory.
+When data is already cached, it need not be fetched from RGW. A permission check will still be made against RGW to ensure that the requesting user has access.
+This feature is based on several Nginx components: ngx_http_auth_request_module, the nginx-aws-auth-module (https://github.com/kaltura/nginx-aws-auth-module), and OpenResty for its Lua capabilities.
+
+Currently, this feature caches only AWSv4 requests (S3 requests only): the output of the first GET request is cached,
+subsequent GET requests are served from the cache, and PUT, POST, HEAD, DELETE and COPY requests are passed through transparently.
+
+
+The feature introduces 2 new APIs: Auth and Cache.
+
+New APIs
+-------------------------
+
+There are 2 new APIs for this feature:
+
+Auth API - The cache uses this to validate that a user can access the cached data
+
+Cache API - Adds the ability to securely override the Range header, so that Nginx can use its own smart cache on top of S3:
+https://www.nginx.com/blog/smart-efficient-byte-range-caching-nginx/
+Using this API gives the ability to read ahead in objects when clients ask for a specific range of an object.
+On subsequent accesses to the cached object, Nginx will satisfy requests for already-cached ranges from the cache. Uncached ranges will be read from RGW (and cached).
+
+Auth API
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This API validates a specific authenticated access being made to the cache, using RGW's knowledge of the client credentials and stored access policy.
+Returns success if the encapsulated request would be granted.
+
+Cache API
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This API is meant to allow securely changing signed Range headers, using a privileged user: the cache user.
+
+Creating the cache user:
+
+::
+
+$ radosgw-admin user create --uid=<uid for cache user> --display-name="cache user" --caps="amz-cache=read"
+
+This user can send the RGW the Cache API header ``X-Amz-Cache``. This header contains the headers of the original request (before the Range header was changed),
+which means that ``X-Amz-Cache`` is built from several headers.
+The headers composing ``X-Amz-Cache`` are separated by the character with ASCII code 177, and each header name is separated from its value by the character with ASCII code 178.
+The RGW will check that the requesting user is an authorized cache user;
+if so, it will use the headers encapsulated in ``X-Amz-Cache`` to revalidate that the original user has the required permissions.
+During this flow, the RGW will override the Range header.
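+
+To illustrate the encoding, here is a minimal Python sketch (the helper name is
+ours) that builds such a header value from the original request headers:
+
+.. code-block:: python
+
+    # Separator characters used by the X-Amz-Cache encoding.
+    HEADER_SEP = chr(177)   # between one "name: value" pair and the next
+    VALUE_SEP = chr(178)    # between a header name and its value
+
+    def build_x_amz_cache(original_headers):
+        """Encode the original request headers into an X-Amz-Cache value."""
+        return HEADER_SEP.join(f'{name}{VALUE_SEP}{value}'
+                               for name, value in original_headers.items())
+
+    # Example: encapsulate the headers of the original ranged GET request.
+    print(build_x_amz_cache({'Host': 'rgw1:8000', 'Range': 'bytes=0-4095'}))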
+
+
+Using Nginx with RGW
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Download the source of Openresty:
+
+::
+
+$ wget https://openresty.org/download/openresty-1.15.8.3.tar.gz
+
+Clone the AWS auth Nginx module:
+
+::
+
+$ git clone https://github.com/kaltura/nginx-aws-auth-module
+
+Untar the OpenResty package:
+
+::
+
+$ tar xvzf openresty-1.15.8.3.tar.gz
+$ cd openresty-1.15.8.3
+
+Compile OpenResty. Make sure that you have the PCRE and OpenSSL libraries installed:
+
+::
+
+$ sudo yum install pcre-devel openssl-devel gcc curl zlib-devel nginx
+$ ./configure --add-module=<the nginx-aws-auth-module dir> --with-http_auth_request_module --with-http_slice_module --conf-path=/etc/nginx/nginx.conf
+$ gmake -j $(nproc)
+$ sudo gmake install
+$ sudo ln -sf /usr/local/openresty/bin/openresty /usr/bin/nginx
+
+Put in-place your Nginx configuration files and edit them according to your environment:
+
+All Nginx conf files are under: https://github.com/ceph/ceph/tree/master/examples/rgw-cache
+
+`nginx.conf` should go to `/etc/nginx/nginx.conf`
+
+`nginx-lua-file.lua` should go to `/etc/nginx/nginx-lua-file.lua`
+
+`nginx-default.conf` should go to `/etc/nginx/conf.d/nginx-default.conf`
+
+The parameters that are most likely to require adjustment according to the environment are located in the file `nginx-default.conf`
+
+Modify the example values of *proxy_cache_path* and *max_size* at:
+
+::
+
+ proxy_cache_path /data/cache levels=2:2:2 keys_zone=mycache:999m max_size=20G inactive=1d use_temp_path=off;
+
+
+And modify the example *server* values to point to the RGWs URIs:
+
+::
+
+ server rgw1:8000 max_fails=2 fail_timeout=5s;
+ server rgw2:8000 max_fails=2 fail_timeout=5s;
+ server rgw3:8000 max_fails=2 fail_timeout=5s;
+
+| It is important to substitute the *access key* and *secret key* located in `nginx.conf` with those belonging to a user with the `amz-cache` caps.
+| For example, create the `cache` user as follows:
+
+::
+
+ radosgw-admin user create --uid=cacheuser --display-name="cache user" --caps="amz-cache=read" --access-key <access> --secret <secret>
+
+It is possible to use Nginx slicing, which is a better method for streaming purposes.
+
+To use slicing, use `nginx-slicing.conf` instead of `nginx-default.conf`.
+
+Further information about Nginx slicing:
+
+https://docs.nginx.com/nginx/admin-guide/content-cache/content-caching/#byte-range-caching
+
+
+If you do not want to use prefetch caching, it is possible to replace `nginx-default.conf` with `nginx-noprefetch.conf`.
+Using `noprefetch` means that if a client sends a range request for 0-4095 and then one for 0-4096, Nginx will cache those requests separately, so it will need to fetch the data twice.
+
+
+Run Nginx(openresty):
+
+::
+
+$ sudo systemctl restart nginx
+
+Appendix
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+**A note about performance:** In certain instances, such as a development environment, disabling authentication by commenting out the following line in `nginx-default.conf`:
+
+::
+
+ #auth_request /authentication;
+
+may (depending on the hardware) increase performance significantly, as it forgoes the auth API calls to radosgw.
diff --git a/doc/radosgw/role.rst b/doc/radosgw/role.rst
new file mode 100644
index 000000000..2e2697f57
--- /dev/null
+++ b/doc/radosgw/role.rst
@@ -0,0 +1,523 @@
+======
+ Role
+======
+
+A role is similar to a user and has permission policies attached to it that determine what it can or cannot do. A role can be assumed by any identity that needs it. When a user assumes a role, a set of dynamically created temporary credentials is returned to the user. A role can be used to delegate access to users, applications and services that do not have permissions to access certain S3 resources.
+
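+For example, here is a boto3 sketch of assuming a role through RGW's STS API
+and receiving temporary credentials. The endpoint and credentials are
+placeholders, and the role ARN is that of the ``S3Access1`` role created below:
+
+.. code-block:: python
+
+    import boto3
+
+    # Placeholder credentials of the user allowed to assume the role.
+    sts = boto3.client('sts',
+                       endpoint_url='http://localhost:8000',
+                       aws_access_key_id='TESTER_KEY',
+                       aws_secret_access_key='TESTER_SECRET',
+                       region_name='')
+
+    resp = sts.assume_role(
+        RoleArn='arn:aws:iam:::role/application_abc/component_xyz/S3Access1',
+        RoleSessionName='demo-session')
+
+    # Dynamically created temporary credentials for the session.
+    creds = resp['Credentials']
+    print(creds['AccessKeyId'], creds['SessionToken'])
+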
+The following radosgw-admin commands can be used to create/delete/update a role and the permissions associated with it.
+
+Create a Role
+-------------
+
+To create a role, execute the following::
+
+    radosgw-admin role create --role-name={role-name} [--path="{path to the role}"] [--assume-role-policy-doc={trust-policy-document}]
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``role-name``
+
+:Description: Name of the role.
+:Type: String
+
+``path``
+
+:Description: Path to the role. The default value is a slash (/).
+:Type: String
+
+``assume-role-policy-doc``
+
+:Description: The trust relationship policy document that grants an entity permission to assume the role.
+:Type: String
+
+For example::
+
+ radosgw-admin role create --role-name=S3Access1 --path=/application_abc/component_xyz/ --assume-role-policy-doc=\{\"Version\":\"2012-10-17\",\"Statement\":\[\{\"Effect\":\"Allow\",\"Principal\":\{\"AWS\":\[\"arn:aws:iam:::user/TESTER\"\]\},\"Action\":\[\"sts:AssumeRole\"\]\}\]\}
+
+.. code-block:: javascript
+
+ {
+ "id": "ca43045c-082c-491a-8af1-2eebca13deec",
+ "name": "S3Access1",
+ "path": "/application_abc/component_xyz/",
+ "arn": "arn:aws:iam:::role/application_abc/component_xyz/S3Access1",
+ "create_date": "2018-10-17T10:18:29.116Z",
+ "max_session_duration": 3600,
+ "assume_role_policy_document": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER\"]},\"Action\":[\"sts:AssumeRole\"]}]}"
+ }
+
+
+Delete a Role
+-------------
+
+To delete a role, execute the following::
+
+ radosgw-admin role rm --role-name={role-name}
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``role-name``
+
+:Description: Name of the role.
+:Type: String
+
+For example::
+
+ radosgw-admin role rm --role-name=S3Access1
+
+Note: A role can be deleted only when it doesn't have any permission policy attached to it.
+
+Get a Role
+----------
+
+To get information about a role, execute the following::
+
+ radosgw-admin role get --role-name={role-name}
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``role-name``
+
+:Description: Name of the role.
+:Type: String
+
+For example::
+
+ radosgw-admin role get --role-name=S3Access1
+
+.. code-block:: javascript
+
+ {
+ "id": "ca43045c-082c-491a-8af1-2eebca13deec",
+ "name": "S3Access1",
+ "path": "/application_abc/component_xyz/",
+ "arn": "arn:aws:iam:::role/application_abc/component_xyz/S3Access1",
+ "create_date": "2018-10-17T10:18:29.116Z",
+ "max_session_duration": 3600,
+ "assume_role_policy_document": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER\"]},\"Action\":[\"sts:AssumeRole\"]}]}"
+ }
+
+
+List Roles
+----------
+
+To list roles with a specified path prefix, execute the following::
+
+    radosgw-admin role list [--path-prefix={path prefix}]
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``path-prefix``
+
+:Description: Path prefix for filtering roles. If this is not specified, all roles are listed.
+:Type: String
+
+For example::
+
+ radosgw-admin role list --path-prefix="/application"
+
+.. code-block:: javascript
+
+ [
+ {
+ "id": "3e1c0ff7-8f2b-456c-8fdf-20f428ba6a7f",
+ "name": "S3Access1",
+ "path": "/application_abc/component_xyz/",
+ "arn": "arn:aws:iam:::role/application_abc/component_xyz/S3Access1",
+ "create_date": "2018-10-17T10:32:01.881Z",
+ "max_session_duration": 3600,
+ "assume_role_policy_document": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER\"]},\"Action\":[\"sts:AssumeRole\"]}]}"
+ }
+ ]
+
+
+Update Assume Role Policy Document of a role
+--------------------------------------------
+
+To modify a role's assume role policy document, execute the following::
+
+ radosgw-admin role modify --role-name={role-name} --assume-role-policy-doc={trust-policy-document}
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``role-name``
+
+:Description: Name of the role.
+:Type: String
+
+``assume-role-policy-doc``
+
+:Description: The trust relationship policy document that grants an entity permission to assume the role.
+:Type: String
+
+For example::
+
+ radosgw-admin role modify --role-name=S3Access1 --assume-role-policy-doc=\{\"Version\":\"2012-10-17\",\"Statement\":\[\{\"Effect\":\"Allow\",\"Principal\":\{\"AWS\":\[\"arn:aws:iam:::user/TESTER2\"\]\},\"Action\":\[\"sts:AssumeRole\"\]\}\]\}
+
+.. code-block:: javascript
+
+ {
+ "id": "ca43045c-082c-491a-8af1-2eebca13deec",
+ "name": "S3Access1",
+ "path": "/application_abc/component_xyz/",
+ "arn": "arn:aws:iam:::role/application_abc/component_xyz/S3Access1",
+ "create_date": "2018-10-17T10:18:29.116Z",
+ "max_session_duration": 3600,
+ "assume_role_policy_document": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"AWS\":[\"arn:aws:iam:::user/TESTER2\"]},\"Action\":[\"sts:AssumeRole\"]}]}"
+ }
+
+
+In the above example, we are modifying the Principal from TESTER to TESTER2 in its assume role policy document.
+
+Add/ Update a Policy attached to a Role
+---------------------------------------
+
+To add or update the inline policy attached to a role, execute the following::
+
+    radosgw-admin role-policy put --role-name={role-name} --policy-name={policy-name} --policy-doc={permission-policy-doc}
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``role-name``
+
+:Description: Name of the role.
+:Type: String
+
+``policy-name``
+
+:Description: Name of the policy.
+:Type: String
+
+``policy-doc``
+
+:Description: The Permission policy document.
+:Type: String
+
+For example::
+
+ radosgw-admin role-policy put --role-name=S3Access1 --policy-name=Policy1 --policy-doc=\{\"Version\":\"2012-10-17\",\"Statement\":\[\{\"Effect\":\"Allow\",\"Action\":\[\"s3:*\"\],\"Resource\":\"arn:aws:s3:::example_bucket\"\}\]\}
+
+In the above example, we are attaching a policy 'Policy1' to role 'S3Access1', which allows all s3 actions on 'example_bucket'.
+
+List Permission Policy Names attached to a Role
+-----------------------------------------------
+
+To list the names of permission policies attached to a role, execute the following::
+
+    radosgw-admin role-policy list --role-name={role-name}
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``role-name``
+
+:Description: Name of the role.
+:Type: String
+
+For example::
+
+ radosgw-admin role-policy list --role-name=S3Access1
+
+.. code-block:: javascript
+
+ [
+ "Policy1"
+ ]
+
+
+Get Permission Policy attached to a Role
+----------------------------------------
+
+To get a specific permission policy attached to a role, execute the following::
+
+    radosgw-admin role-policy get --role-name={role-name} --policy-name={policy-name}
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``role-name``
+
+:Description: Name of the role.
+:Type: String
+
+``policy-name``
+
+:Description: Name of the policy.
+:Type: String
+
+For example::
+
+ radosgw-admin role-policy get --role-name=S3Access1 --policy-name=Policy1
+
+.. code-block:: javascript
+
+ {
+ "Permission policy": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Action\":[\"s3:*\"],\"Resource\":\"arn:aws:s3:::example_bucket\"}]}"
+ }
+
+
+Delete Policy attached to a Role
+--------------------------------
+
+To delete permission policy attached to a role, execute the following::
+
+    radosgw-admin role-policy rm --role-name={role-name} --policy-name={policy-name}
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``role-name``
+
+:Description: Name of the role.
+:Type: String
+
+``policy-name``
+
+:Description: Name of the policy.
+:Type: String
+
+For example::
+
+    radosgw-admin role-policy rm --role-name=S3Access1 --policy-name=Policy1
+
+
+REST APIs for Manipulating a Role
+=================================
+
+In addition to the above radosgw-admin commands, the following REST APIs can be used for manipulating a role. For the request parameters and their explanations, refer to the sections above.
+
+In order to invoke the REST admin APIs, a user with admin caps needs to be created.
+
+::
+
+    radosgw-admin user create --uid=TESTER --display-name="TestUser" --access-key=TESTER --secret=test123
+    radosgw-admin caps add --uid="TESTER" --caps="roles=*"
+
+
+Create a Role
+-------------
+
+Example::
+
+ POST "<hostname>?Action=CreateRole&RoleName=S3Access&Path=/application_abc/component_xyz/&AssumeRolePolicyDocument=\{\"Version\":\"2012-10-17\",\"Statement\":\[\{\"Effect\":\"Allow\",\"Principal\":\{\"AWS\":\[\"arn:aws:iam:::user/TESTER\"\]\},\"Action\":\[\"sts:AssumeRole\"\]\}\]\}"
+
+.. code-block:: XML
+
+ <role>
+ <id>8f41f4e0-7094-4dc0-ac20-074a881ccbc5</id>
+ <name>S3Access</name>
+ <path>/application_abc/component_xyz/</path>
+ <arn>arn:aws:iam:::role/application_abc/component_xyz/S3Access</arn>
+ <create_date>2018-10-23T07:43:42.811Z</create_date>
+ <max_session_duration>3600</max_session_duration>
+ <assume_role_policy_document>{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":["arn:aws:iam:::user/TESTER"]},"Action":["sts:AssumeRole"]}]}</assume_role_policy_document>
+ </role>
+
+
+Delete a Role
+-------------
+
+Example::
+
+ POST "<hostname>?Action=DeleteRole&RoleName=S3Access"
+
+Note: A role can be deleted only when it doesn't have any permission policy attached to it.
+
+Get a Role
+----------
+
+Example::
+
+ POST "<hostname>?Action=GetRole&RoleName=S3Access"
+
+.. code-block:: XML
+
+ <role>
+ <id>8f41f4e0-7094-4dc0-ac20-074a881ccbc5</id>
+ <name>S3Access</name>
+ <path>/application_abc/component_xyz/</path>
+ <arn>arn:aws:iam:::role/application_abc/component_xyz/S3Access</arn>
+ <create_date>2018-10-23T07:43:42.811Z</create_date>
+ <max_session_duration>3600</max_session_duration>
+ <assume_role_policy_document>{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":["arn:aws:iam:::user/TESTER"]},"Action":["sts:AssumeRole"]}]}</assume_role_policy_document>
+ </role>
+
+
+List Roles
+----------
+
+Example::
+
+ POST "<hostname>?Action=ListRoles&PathPrefix=/application"
+
+.. code-block:: XML
+
+ <role>
+ <id>8f41f4e0-7094-4dc0-ac20-074a881ccbc5</id>
+ <name>S3Access</name>
+ <path>/application_abc/component_xyz/</path>
+ <arn>arn:aws:iam:::role/application_abc/component_xyz/S3Access</arn>
+ <create_date>2018-10-23T07:43:42.811Z</create_date>
+ <max_session_duration>3600</max_session_duration>
+ <assume_role_policy_document>{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"AWS":["arn:aws:iam:::user/TESTER"]},"Action":["sts:AssumeRole"]}]}</assume_role_policy_document>
+ </role>
+
+
+Update Assume Role Policy Document
+----------------------------------
+
+Example::
+
+ POST "<hostname>?Action=UpdateAssumeRolePolicy&RoleName=S3Access&PolicyDocument=\{\"Version\":\"2012-10-17\",\"Statement\":\[\{\"Effect\":\"Allow\",\"Principal\":\{\"AWS\":\[\"arn:aws:iam:::user/TESTER2\"\]\},\"Action\":\[\"sts:AssumeRole\"\]\}\]\}"
+
+Add/ Update a Policy attached to a Role
+---------------------------------------
+
+Example::
+
+ POST "<hostname>?Action=PutRolePolicy&RoleName=S3Access&PolicyName=Policy1&PolicyDocument=\{\"Version\":\"2012-10-17\",\"Statement\":\[\{\"Effect\":\"Allow\",\"Action\":\[\"s3:CreateBucket\"\],\"Resource\":\"arn:aws:s3:::example_bucket\"\}\]\}"
+
+List Permission Policy Names attached to a Role
+-----------------------------------------------
+
+Example::
+
+ POST "<hostname>?Action=ListRolePolicies&RoleName=S3Access"
+
+.. code-block:: XML
+
+ <PolicyNames>
+ <member>Policy1</member>
+ </PolicyNames>
+
+
+Get Permission Policy attached to a Role
+----------------------------------------
+
+Example::
+
+ POST "<hostname>?Action=GetRolePolicy&RoleName=S3Access&PolicyName=Policy1"
+
+.. code-block:: XML
+
+ <GetRolePolicyResult>
+ <PolicyName>Policy1</PolicyName>
+ <RoleName>S3Access</RoleName>
+ <Permission_policy>{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":["s3:CreateBucket"],"Resource":"arn:aws:s3:::example_bucket"}]}</Permission_policy>
+ </GetRolePolicyResult>
+
+
+Delete Policy attached to a Role
+--------------------------------
+
+Example::
+
+ POST "<hostname>?Action=DeleteRolePolicy&RoleName=S3Access&PolicyName=Policy1"
+
+Tag a role
+----------
+A role can have multivalued tags attached to it. These tags can also be passed in as part of the CreateRole REST API.
+Note that AWS does not support multi-valued role tags.
+
+Example::
+
+ POST "<hostname>?Action=TagRole&RoleName=S3Access&Tags.member.1.Key=Department&Tags.member.1.Value=Engineering"
+
+.. code-block:: XML
+
+ <TagRoleResponse>
+ <ResponseMetadata>
+ <RequestId>tx000000000000000000004-00611f337e-1027-default</RequestId>
+ </ResponseMetadata>
+ </TagRoleResponse>
+
+
+List role tags
+--------------
+Lists the tags attached to a role.
+
+Example::
+
+ POST "<hostname>?Action=ListRoleTags&RoleName=S3Access"
+
+.. code-block:: XML
+
+ <ListRoleTagsResponse>
+ <ListRoleTagsResult>
+ <Tags>
+ <member>
+ <Key>Department</Key>
+ <Value>Engineering</Value>
+ </member>
+ </Tags>
+ </ListRoleTagsResult>
+ <ResponseMetadata>
+ <RequestId>tx000000000000000000005-00611f337e-1027-default</RequestId>
+ </ResponseMetadata>
+ </ListRoleTagsResponse>
+
+Delete role tags
+----------------
+Delete a tag or tags attached to a role.
+
+Example::
+
+ POST "<hostname>?Action=UntagRole&RoleName=S3Access&TagKeys.member.1=Department"
+
+.. code-block:: XML
+
+ <UntagRoleResponse>
+ <ResponseMetadata>
+ <RequestId>tx000000000000000000007-00611f337e-1027-default</RequestId>
+ </ResponseMetadata>
+ </UntagRoleResponse>
+
+
+Sample code for tagging, listing tags and untagging a role
+----------------------------------------------------------
+
+The following is sample code for adding tags to a role, listing tags and untagging a role using boto3.
+
+.. code-block:: python
+
+ import boto3
+
+ access_key = 'TESTER'
+ secret_key = 'test123'
+
+ iam_client = boto3.client('iam',
+ aws_access_key_id=access_key,
+ aws_secret_access_key=secret_key,
+ endpoint_url='http://s3.us-east.localhost:8000',
+ region_name=''
+ )
+
+ policy_document = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Federated\":[\"arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/quickstart\"]},\"Action\":[\"sts:AssumeRoleWithWebIdentity\"],\"Condition\":{\"StringEquals\":{\"localhost:8080/auth/realms/quickstart:sub\":\"user1\"}}}]}"
+
+ print ("\n Creating Role with tags\n")
+ tags_list = [
+ {'Key':'Department','Value':'Engineering'}
+ ]
+ role_response = iam_client.create_role(
+ AssumeRolePolicyDocument=policy_document,
+ Path='/',
+ RoleName='S3Access',
+ Tags=tags_list,
+ )
+
+ print ("Adding tags to role\n")
+ response = iam_client.tag_role(
+ RoleName='S3Access',
+ Tags= [
+ {'Key':'CostCenter','Value':'123456'}
+ ]
+ )
+ print ("Listing role tags\n")
+ response = iam_client.list_role_tags(
+ RoleName='S3Access'
+ )
+ print (response)
+ print ("Untagging role\n")
+ response = iam_client.untag_role(
+ RoleName='S3Access',
+ TagKeys=[
+ 'Department',
+ ]
+ )
+
+
+
diff --git a/doc/radosgw/s3-notification-compatibility.rst b/doc/radosgw/s3-notification-compatibility.rst
new file mode 100644
index 000000000..09054bed3
--- /dev/null
+++ b/doc/radosgw/s3-notification-compatibility.rst
@@ -0,0 +1,139 @@
+=====================================
+S3 Bucket Notifications Compatibility
+=====================================
+
+Ceph's `Bucket Notifications`_ and `PubSub Module`_ APIs follow the `AWS S3 Bucket Notifications API`_. However, some differences exist, as listed below.
+
+
+.. note::
+
+   Compatibility differs depending on which of the above mechanisms is used
+
+Supported Destination
+---------------------
+
+AWS supports: **SNS**, **SQS** and **Lambda** as possible destinations (AWS internal destinations).
+Currently, we support: **HTTP/S**, **Kafka** and **AMQP**. We also support pulling and acking of events stored in Ceph (as an internal destination).
+
+We use **SNS** ARNs to represent the **HTTP/S**, **Kafka** and **AMQP** destinations.
+
+Notification Configuration XML
+------------------------------
+
+The following tags (and the tags inside them) are not supported:
+
++-----------------------------------+----------------------------------------------+
+| Tag                               | Remarks                                      |
++===================================+==============================================+
+| ``<QueueConfiguration>`` | not needed, we treat all destinations as SNS |
++-----------------------------------+----------------------------------------------+
+| ``<CloudFunctionConfiguration>`` | not needed, we treat all destinations as SNS |
++-----------------------------------+----------------------------------------------+
+
+REST API Extension
+------------------
+
+Ceph's bucket notification API has the following extensions:
+
+- Deletion of a specific notification, or all notifications on a bucket, using the ``DELETE`` verb
+
+ - In S3, all notifications are deleted when the bucket is deleted, or when an empty notification is set on the bucket
+
+- Getting the information on a specific notification (when more than one exists on a bucket)
+
+ - In S3, it is only possible to fetch all notifications on a bucket
+
+- In addition to filtering based on prefix/suffix of object keys we support:
+
+ - Filtering based on regular expression matching
+
+ - Filtering based on metadata attributes attached to the object
+
+ - Filtering based on object tags
+
+- Each of the additional filters extends the S3 API, so using one of them requires an extended client SDK (unless you are using plain HTTP). For Python, see `boto3 SDK filter extensions`_ and the sketch below.
+
+- Overlapping filters are allowed, so the same event may be sent as multiple different notifications
+
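+For illustration, here is a minimal boto3 sketch that configures a
+notification using the extended filters. It assumes the extended boto3
+service model from the Ceph source tree (see `boto3 SDK filter
+extensions`_) is installed, so that the non-standard ``Metadata`` and
+``Tags`` filter keys pass client-side validation; the endpoint,
+credentials, bucket name and topic ARN are placeholders.
+
+.. code-block:: python
+
+    import boto3
+
+    # Placeholders: endpoint, credentials, bucket name and topic ARN.
+    client = boto3.client('s3',
+                          endpoint_url='http://localhost:8000',
+                          aws_access_key_id='TESTER',
+                          aws_secret_access_key='test123')
+
+    client.put_bucket_notification_configuration(
+        Bucket='mybucket',
+        NotificationConfiguration={
+            'TopicConfigurations': [{
+                'Id': 'extended-filter-example',
+                'TopicArn': 'arn:aws:sns:default::mytopic',
+                'Events': ['s3:ObjectCreated:*'],
+                'Filter': {
+                    # standard S3 key filter, using the 'regex' extension
+                    'Key': {'FilterRules': [
+                        {'Name': 'regex', 'Value': '.*\\.jpg'}]},
+                    # Ceph extensions: metadata and tag based filtering
+                    'Metadata': {'FilterRules': [
+                        {'Name': 'x-amz-meta-owner', 'Value': 'alice'}]},
+                    'Tags': {'FilterRules': [
+                        {'Name': 'project', 'Value': 'ceph'}]},
+                },
+            }]
+        })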
+
+Unsupported Fields in the Event Record
+--------------------------------------
+
+The records sent for bucket notifications follow the format described in `Event Message Structure`_.
+However, the following fields may be sent empty, depending on the deployment option (Notification/PubSub):
+
++----------------------------------------+--------------+---------------+------------------------------------------------------------+
+| Field | Notification | PubSub | Description |
++========================================+==============+===============+============================================================+
+| ``userIdentity.principalId`` | Supported | Not Supported | The identity of the user that triggered the event |
++----------------------------------------+--------------+---------------+------------------------------------------------------------+
+| ``requestParameters.sourceIPAddress``  | Not Supported| Not Supported | The IP address of the client that triggered the event      |
++----------------------------------------+--------------+---------------+------------------------------------------------------------+
+| ``requestParameters.x-amz-request-id`` | Supported | Not Supported | The request id that triggered the event |
++----------------------------------------+--------------+---------------+------------------------------------------------------------+
+| ``requestParameters.x-amz-id-2`` | Supported | Not Supported | The IP address of the RGW on which the event was triggered |
++----------------------------------------+--------------+---------------+------------------------------------------------------------+
+| ``s3.object.size`` | Supported | Not Supported | The size of the object |
++----------------------------------------+--------------+---------------+------------------------------------------------------------+
+
+Event Types
+-----------
+
++----------------------------------------------+-----------------+-------------------------------------------+
+| Event                                        | Notification    | PubSub                                    |
++==============================================+=================+===========================================+
+| ``s3:ObjectCreated:*``                       | Supported       | Supported                                 |
++----------------------------------------------+-----------------+-------------------------------------------+
+| ``s3:ObjectCreated:Put``                     | Supported       | Supported at ``s3:ObjectCreated:*`` level |
++----------------------------------------------+-----------------+-------------------------------------------+
+| ``s3:ObjectCreated:Post``                    | Supported       | Not Supported                             |
++----------------------------------------------+-----------------+-------------------------------------------+
+| ``s3:ObjectCreated:Copy``                    | Supported       | Supported at ``s3:ObjectCreated:*`` level |
++----------------------------------------------+-----------------+-------------------------------------------+
+| ``s3:ObjectCreated:CompleteMultipartUpload`` | Supported       | Supported at ``s3:ObjectCreated:*`` level |
++----------------------------------------------+-----------------+-------------------------------------------+
+| ``s3:ObjectRemoved:*``                       | Supported       | Supported only for specific events below  |
++----------------------------------------------+-----------------+-------------------------------------------+
+| ``s3:ObjectRemoved:Delete``                  | Supported       | Supported                                 |
++----------------------------------------------+-----------------+-------------------------------------------+
+| ``s3:ObjectRemoved:DeleteMarkerCreated``     | Supported       | Supported                                 |
++----------------------------------------------+-----------------+-------------------------------------------+
+| ``s3:ObjectRestore:Post``                    | Not applicable  | Not applicable to Ceph                    |
++----------------------------------------------+-----------------+-------------------------------------------+
+| ``s3:ObjectRestore:Complete``                | Not applicable  | Not applicable to Ceph                    |
++----------------------------------------------+-----------------+-------------------------------------------+
+| ``s3:ReducedRedundancyLostObject``           | Not applicable  | Not applicable to Ceph                    |
++----------------------------------------------+-----------------+-------------------------------------------+
+
+.. note::
+
+ The ``s3:ObjectRemoved:DeleteMarkerCreated`` event presents information on the latest version of the object
+
+.. note::
+
+   In the case of a multipart upload, an ``ObjectCreated:CompleteMultipartUpload`` notification is sent at the end of the process.
+
+Topic Configuration
+-------------------
+In the case of bucket notifications, the topic management API is derived from the `AWS Simple Notification Service API`_.
+Note that most of that API is not applicable to Ceph, and only the following actions are implemented:
+
+ - ``CreateTopic``
+ - ``DeleteTopic``
+ - ``ListTopics``
+
+We also have the following extensions to topic configuration, illustrated in the sketch below:
+
+ - In ``GetTopic`` we allow fetching a specific topic, instead of all user topics
+ - In ``CreateTopic``
+
+   - we allow setting endpoint attributes
+   - we allow setting opaque data that will be sent to the endpoint in the notification
+
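+As an illustration, the following boto3 sketch creates a topic through
+the SNS-compatible API, setting a push endpoint attribute and opaque
+data. The endpoint URL, credentials and attribute values are
+placeholders for this example; see `Bucket Notifications`_ for the full
+list of endpoint attributes.
+
+.. code-block:: python
+
+    import boto3
+
+    # RGW endpoint and credentials are placeholders.
+    sns = boto3.client('sns',
+                       endpoint_url='http://localhost:8000',
+                       aws_access_key_id='TESTER',
+                       aws_secret_access_key='test123',
+                       region_name='')
+
+    # Endpoint attributes and opaque data are passed at creation time.
+    resp = sns.create_topic(
+        Name='mytopic',
+        Attributes={
+            'push-endpoint': 'amqp://localhost:5672',
+            'OpaqueData': 'me@example.com',
+        })
+    print(resp['TopicArn'])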
+
+.. _AWS Simple Notification Service API: https://docs.aws.amazon.com/sns/latest/api/API_Operations.html
+.. _AWS S3 Bucket Notifications API: https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
+.. _Event Message Structure: https://docs.aws.amazon.com/AmazonS3/latest/dev/notification-content-structure.html
+.. _`PubSub Module`: ../pubsub-module
+.. _`Bucket Notifications`: ../notifications
+.. _`boto3 SDK filter extensions`: https://github.com/ceph/ceph/tree/master/examples/boto3
diff --git a/doc/radosgw/s3.rst b/doc/radosgw/s3.rst
new file mode 100644
index 000000000..d68863d6b
--- /dev/null
+++ b/doc/radosgw/s3.rst
@@ -0,0 +1,102 @@
+============================
+ Ceph Object Gateway S3 API
+============================
+
+Ceph supports a RESTful API that is compatible with the basic data access model of the `Amazon S3 API`_.
+
+API
+---
+
+.. toctree::
+ :maxdepth: 1
+
+ Common <s3/commons>
+ Authentication <s3/authentication>
+ Service Ops <s3/serviceops>
+ Bucket Ops <s3/bucketops>
+ Object Ops <s3/objectops>
+ C++ <s3/cpp>
+ C# <s3/csharp>
+ Java <s3/java>
+ Perl <s3/perl>
+ PHP <s3/php>
+ Python <s3/python>
+ Ruby <s3/ruby>
+
+
+Features Support
+----------------
+
+The following table describes the support status for current Amazon S3 functional features:
+
++---------------------------------+-----------------+----------------------------------------+
+| Feature | Status | Remarks |
++=================================+=================+========================================+
+| **List Buckets** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Delete Bucket** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Create Bucket** | Supported | Different set of canned ACLs |
++---------------------------------+-----------------+----------------------------------------+
+| **Bucket Lifecycle** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Bucket Replication** | Partial | Only permitted across zones |
++---------------------------------+-----------------+----------------------------------------+
+| **Policy (Buckets, Objects)** | Supported | ACLs & bucket policies are supported |
++---------------------------------+-----------------+----------------------------------------+
+| **Bucket Website** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Bucket ACLs (Get, Put)** | Supported | Different set of canned ACLs |
++---------------------------------+-----------------+----------------------------------------+
+| **Bucket Location** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Bucket Notification** | Supported | See `S3 Notification Compatibility`_ |
++---------------------------------+-----------------+----------------------------------------+
+| **Bucket Object Versions** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Get Bucket Info (HEAD)** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Bucket Request Payment** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Put Object** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Delete Object** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Get Object** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Object ACLs (Get, Put)** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Get Object Info (HEAD)** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **POST Object** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Copy Object** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Multipart Uploads** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Object Tagging** | Supported | See :ref:`tag_policy` for Policy verbs |
++---------------------------------+-----------------+----------------------------------------+
+| **Bucket Tagging** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Storage Class** | Supported | See :ref:`storage_classes` |
++---------------------------------+-----------------+----------------------------------------+
+
+Unsupported Header Fields
+-------------------------
+
+The following common header fields are not supported:
+
++----------------------------+------------+
+| Name | Type |
++============================+============+
+| **Server** | Response |
++----------------------------+------------+
+| **x-amz-delete-marker** | Response |
++----------------------------+------------+
+| **x-amz-id-2** | Response |
++----------------------------+------------+
+| **x-amz-version-id** | Response |
++----------------------------+------------+
+
+.. _Amazon S3 API: http://docs.aws.amazon.com/AmazonS3/latest/API/APIRest.html
+.. _S3 Notification Compatibility: ../s3-notification-compatibility
diff --git a/doc/radosgw/s3/authentication.rst b/doc/radosgw/s3/authentication.rst
new file mode 100644
index 000000000..10143290d
--- /dev/null
+++ b/doc/radosgw/s3/authentication.rst
@@ -0,0 +1,231 @@
+=========================
+ Authentication and ACLs
+=========================
+
+Requests to the RADOS Gateway (RGW) can be either authenticated or
+unauthenticated. RGW assumes unauthenticated requests are sent by an anonymous
+user. RGW supports canned ACLs.
+
+Authentication
+--------------
+Authenticating a request requires including an access key and a Hash-based
+Message Authentication Code (HMAC) in the request before it is sent to the
+RGW server. RGW uses an S3-compatible authentication approach.
+
+::
+
+ HTTP/1.1
+ PUT /buckets/bucket/object.mpeg
+ Host: cname.domain.com
+ Date: Mon, 2 Jan 2012 00:01:01 +0000
+ Content-Encoding: mpeg
+ Content-Length: 9999999
+
+ Authorization: AWS {access-key}:{hash-of-header-and-secret}
+
+In the foregoing example, replace ``{access-key}`` with the value for your access
+key ID followed by a colon (``:``). Replace ``{hash-of-header-and-secret}`` with
+a hash of the header string and the secret corresponding to the access key ID.
+
+To generate the hash of the header string and secret, you must:
+
+#. Get the value of the header string.
+#. Normalize the request header string into canonical form.
+#. Generate an HMAC using a SHA-1 hashing algorithm.
+ See `RFC 2104`_ and `HMAC`_ for details.
+#. Encode the ``hmac`` result as base-64.
+
+To normalize the header into canonical form:
+
+#. Get all fields beginning with ``x-amz-``.
+#. Ensure that the fields are all lowercase.
+#. Sort the fields lexicographically.
+#. Combine multiple instances of the same field name into a
+ single field and separate the field values with a comma.
+#. Replace white space and line breaks in field values with a single space.
+#. Remove white space before and after colons.
+#. Append a new line after each field.
+#. Merge the fields back into the header.
+
+Replace the ``{hash-of-header-and-secret}`` with the base-64 encoded HMAC string.
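+
+The following Python sketch illustrates steps 3 and 4 above; the secret
+key, date and resource path are placeholder values, and the canonical
+string is simplified to the common fields.
+
+.. code-block:: python
+
+    import base64
+    import hashlib
+    import hmac
+
+    # Placeholder secret and a simplified canonical string.
+    secret_key = b'test123'
+    canonical_string = ('PUT\n'
+                        '\n'                                # Content-MD5 (empty)
+                        '\n'                                # Content-Type (empty)
+                        'Mon, 2 Jan 2012 00:01:01 +0000\n'  # Date
+                        '/bucket/object.mpeg')              # canonicalized resource
+
+    digest = hmac.new(secret_key, canonical_string.encode('utf-8'),
+                      hashlib.sha1).digest()
+    signature = base64.b64encode(digest).decode('utf-8')
+    print('Authorization: AWS {access-key}:' + signature)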
+
+Authentication against OpenStack Keystone
+-----------------------------------------
+
+In a radosgw instance that is configured with authentication against
+OpenStack Keystone, it is possible to use Keystone as an authoritative
+source for S3 API authentication. To do so, you must set:
+
+* the ``rgw keystone`` configuration options explained in :doc:`../keystone`,
+* ``rgw s3 auth use keystone = true``.
+
+In addition, a user wishing to use the S3 API must obtain an AWS-style
+access key and secret key. They can do so with the ``openstack ec2
+credentials create`` command::
+
+ $ openstack --os-interface public ec2 credentials create
+ +------------+---------------------------------------------------------------------------------------------------------------------------------------------+
+ | Field | Value |
+ +------------+---------------------------------------------------------------------------------------------------------------------------------------------+
+ | access | c921676aaabbccdeadbeef7e8b0eeb2c |
+ | links | {u'self': u'https://auth.example.com:5000/v3/users/7ecbebaffeabbddeadbeefa23267ccbb24/credentials/OS-EC2/c921676aaabbccdeadbeef7e8b0eeb2c'} |
+ | project_id | 5ed51981aab4679851adeadbeef6ebf7 |
+ | secret | ******************************** |
+ | trust_id | None |
+ | user_id | 7ecbebaffeabbddeadbeefa23267cc24 |
+ +------------+---------------------------------------------------------------------------------------------------------------------------------------------+
+
+The access and secret key generated in this way can then be used for S3
+API access to radosgw.
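+
+For example, a boto3 sketch using credentials generated as above (the
+endpoint URL and the secret are placeholders):
+
+.. code-block:: python
+
+    import boto3
+
+    s3 = boto3.client('s3',
+                      endpoint_url='http://radosgw.example.com:8000',
+                      aws_access_key_id='c921676aaabbccdeadbeef7e8b0eeb2c',
+                      aws_secret_access_key='<secret from the command above>')
+    print([b['Name'] for b in s3.list_buckets()['Buckets']])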
+
+.. note:: Consider that most production radosgw deployments
+ authenticating against OpenStack Keystone are also set up
+ for :doc:`../multitenancy`, for which special
+ considerations apply with respect to S3 signed URLs and
+ public read ACLs.
+
+Access Control Lists (ACLs)
+---------------------------
+
+RGW supports S3-compatible ACL functionality. An ACL is a list of access grants
+that specify which operations a user can perform on a bucket or on an object.
+Each grant has a different meaning when applied to a bucket versus applied to
+an object:
+
++------------------+--------------------------------------------------------+----------------------------------------------+
+| Permission | Bucket | Object |
++==================+========================================================+==============================================+
+| ``READ`` | Grantee can list the objects in the bucket. | Grantee can read the object. |
++------------------+--------------------------------------------------------+----------------------------------------------+
+| ``WRITE`` | Grantee can write or delete objects in the bucket. | N/A |
++------------------+--------------------------------------------------------+----------------------------------------------+
+| ``READ_ACP`` | Grantee can read bucket ACL. | Grantee can read the object ACL. |
++------------------+--------------------------------------------------------+----------------------------------------------+
+| ``WRITE_ACP`` | Grantee can write bucket ACL. | Grantee can write to the object ACL. |
++------------------+--------------------------------------------------------+----------------------------------------------+
+| ``FULL_CONTROL`` | Grantee has full permissions for object in the bucket. | Grantee can read or write to the object ACL. |
++------------------+--------------------------------------------------------+----------------------------------------------+
+
+Internally, S3 operations are mapped to ACL permissions as follows:
+
++---------------------------------------+---------------+
+| Operation | Permission |
++=======================================+===============+
+| ``s3:GetObject`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:GetObjectTorrent`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:GetObjectVersion`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:GetObjectVersionTorrent`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:GetObjectTagging`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:GetObjectVersionTagging`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:ListAllMyBuckets`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:ListBucket`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:ListBucketMultipartUploads`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:ListBucketVersions`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:ListMultipartUploadParts`` | ``READ`` |
++---------------------------------------+---------------+
+| ``s3:AbortMultipartUpload`` | ``WRITE`` |
++---------------------------------------+---------------+
+| ``s3:CreateBucket`` | ``WRITE`` |
++---------------------------------------+---------------+
+| ``s3:DeleteBucket`` | ``WRITE`` |
++---------------------------------------+---------------+
+| ``s3:DeleteObject`` | ``WRITE`` |
++---------------------------------------+---------------+
+| ``s3:DeleteObjectVersion``            | ``WRITE``     |
++---------------------------------------+---------------+
+| ``s3:PutObject`` | ``WRITE`` |
++---------------------------------------+---------------+
+| ``s3:PutObjectTagging`` | ``WRITE`` |
++---------------------------------------+---------------+
+| ``s3:PutObjectVersionTagging`` | ``WRITE`` |
++---------------------------------------+---------------+
+| ``s3:DeleteObjectTagging`` | ``WRITE`` |
++---------------------------------------+---------------+
+| ``s3:DeleteObjectVersionTagging`` | ``WRITE`` |
++---------------------------------------+---------------+
+| ``s3:RestoreObject`` | ``WRITE`` |
++---------------------------------------+---------------+
+| ``s3:GetAccelerateConfiguration`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetBucketAcl`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetBucketCORS`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetBucketLocation`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetBucketLogging`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetBucketNotification`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetBucketPolicy`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetBucketRequestPayment`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetBucketTagging`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetBucketVersioning`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetBucketWebsite`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetLifecycleConfiguration`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetObjectAcl`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetObjectVersionAcl`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:GetReplicationConfiguration`` | ``READ_ACP`` |
++---------------------------------------+---------------+
+| ``s3:DeleteBucketPolicy`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:DeleteBucketWebsite`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:DeleteReplicationConfiguration`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutAccelerateConfiguration`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutBucketAcl`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutBucketCORS`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutBucketLogging`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutBucketNotification`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutBucketPolicy`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutBucketRequestPayment`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutBucketTagging`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutBucketVersioning``            | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutBucketWebsite`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutLifecycleConfiguration`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutObjectAcl`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutObjectVersionAcl`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+| ``s3:PutReplicationConfiguration`` | ``WRITE_ACP`` |
++---------------------------------------+---------------+
+
+Some mappings (e.g. ``s3:CreateBucket`` to ``WRITE``) are not
+applicable to S3 operations, but are required to allow Swift and S3 to
+access the same resources when things like Swift user ACLs are in
+play. This is one of the many reasons to use S3 bucket
+policies rather than S3 ACLs when possible.
+
+
+.. _RFC 2104: http://www.ietf.org/rfc/rfc2104.txt
+.. _HMAC: https://en.wikipedia.org/wiki/HMAC
diff --git a/doc/radosgw/s3/bucketops.rst b/doc/radosgw/s3/bucketops.rst
new file mode 100644
index 000000000..378eb5f04
--- /dev/null
+++ b/doc/radosgw/s3/bucketops.rst
@@ -0,0 +1,706 @@
+===================
+ Bucket Operations
+===================
+
+PUT Bucket
+----------
+Creates a new bucket. To create a bucket, you must have a user ID and a valid AWS Access Key ID to authenticate requests. You may not
+create buckets as an anonymous user.
+
+Constraints
+~~~~~~~~~~~
+In general, bucket names should follow domain name constraints.
+
+- Bucket names must be unique.
+- Bucket names cannot be formatted as an IP address.
+- Bucket names can be between 3 and 63 characters long.
+- Bucket names must not contain uppercase characters or underscores.
+- Bucket names must start with a lowercase letter or number.
+- Bucket names must be a series of one or more labels. Adjacent labels are separated by a single period (.). Bucket names can contain lowercase letters, numbers, and hyphens. Each label must start and end with a lowercase letter or a number.
+
+.. note:: The above constraints are relaxed if the option ``rgw_relaxed_s3_bucket_names`` is set to true. Bucket names must then still be unique and cannot be formatted as an IP address, but they may contain letters, numbers, periods, dashes and underscores, and may be up to 255 characters long.
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{bucket} HTTP/1.1
+ Host: cname.domain.com
+ x-amz-acl: public-read-write
+
+ Authorization: AWS {access-key}:{hash-of-header-and-secret}
+
+Parameters
+~~~~~~~~~~
+
+
++--------------------------------------+-------------------------------+------------------------------------------------------------------------------+------------+
+| Name                                 | Description                   | Valid Values                                                                 | Required   |
++======================================+===============================+==============================================================================+============+
+| ``x-amz-acl``                        | Canned ACLs.                  | ``private``, ``public-read``, ``public-read-write``, ``authenticated-read`` | No         |
++--------------------------------------+-------------------------------+------------------------------------------------------------------------------+------------+
+| ``x-amz-bucket-object-lock-enabled`` | Enable object lock on bucket. | ``true``, ``false``                                                          | No         |
++--------------------------------------+-------------------------------+------------------------------------------------------------------------------+------------+
+
+Request Entities
+~~~~~~~~~~~~~~~~
+
++-------------------------------+-----------+----------------------------------------------------------------+
+| Name | Type | Description |
++===============================+===========+================================================================+
+| ``CreateBucketConfiguration`` | Container | A container for the bucket configuration. |
++-------------------------------+-----------+----------------------------------------------------------------+
+| ``LocationConstraint`` | String | A zonegroup api name, with optional :ref:`s3_bucket_placement` |
++-------------------------------+-----------+----------------------------------------------------------------+
+
+
+HTTP Response
+~~~~~~~~~~~~~
+
+If the bucket name is unique, within constraints, and unused, the operation will succeed.
+If a bucket with the same name already exists and the user is the bucket owner, the operation will also succeed.
+If the bucket name is already in use by another user, the operation will fail.
+
++---------------+-----------------------+----------------------------------------------------------+
+| HTTP Status | Status Code | Description |
++===============+=======================+==========================================================+
+| ``409`` | BucketAlreadyExists | Bucket already exists under different user's ownership. |
++---------------+-----------------------+----------------------------------------------------------+
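+
+As an illustration, a boto3 sketch of this operation (the endpoint,
+credentials and the zonegroup API name ``default`` are placeholders):
+
+.. code-block:: python
+
+    import boto3
+
+    s3 = boto3.client('s3',
+                      endpoint_url='http://localhost:8000',
+                      aws_access_key_id='TESTER',
+                      aws_secret_access_key='test123')
+
+    # Create a bucket with a canned ACL and an explicit location constraint.
+    s3.create_bucket(
+        Bucket='mybucket',
+        ACL='public-read-write',
+        CreateBucketConfiguration={'LocationConstraint': 'default'})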
+
+DELETE Bucket
+-------------
+
+Deletes a bucket. You can reuse bucket names following a successful bucket removal.
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{bucket} HTTP/1.1
+ Host: cname.domain.com
+
+ Authorization: AWS {access-key}:{hash-of-header-and-secret}
+
+HTTP Response
+~~~~~~~~~~~~~
+
++---------------+---------------+------------------+
+| HTTP Status | Status Code | Description |
++===============+===============+==================+
+| ``204`` | No Content | Bucket removed. |
++---------------+---------------+------------------+
+
+GET Bucket
+----------
+Returns a list of bucket objects.
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{bucket}?max-keys=25 HTTP/1.1
+ Host: cname.domain.com
+
+Parameters
+~~~~~~~~~~
+
++---------------------+-----------+-------------------------------------------------------------------------------------------------+
+| Name | Type | Description |
++=====================+===========+=================================================================================================+
+| ``prefix`` | String | Only returns objects that contain the specified prefix. |
++---------------------+-----------+-------------------------------------------------------------------------------------------------+
+| ``delimiter`` | String | The delimiter between the prefix and the rest of the object name. |
++---------------------+-----------+-------------------------------------------------------------------------------------------------+
+| ``marker`` | String | A beginning index for the list of objects returned. |
++---------------------+-----------+-------------------------------------------------------------------------------------------------+
+| ``max-keys`` | Integer | The maximum number of keys to return. Default is 1000. |
++---------------------+-----------+-------------------------------------------------------------------------------------------------+
+| ``allow-unordered`` | Boolean | Non-standard extension. Allows results to be returned unordered. Cannot be used with delimiter. |
++---------------------+-----------+-------------------------------------------------------------------------------------------------+
+
+HTTP Response
+~~~~~~~~~~~~~
+
++---------------+---------------+--------------------+
+| HTTP Status | Status Code | Description |
++===============+===============+====================+
+| ``200`` | OK | Buckets retrieved |
++---------------+---------------+--------------------+
+
+Bucket Response Entities
+~~~~~~~~~~~~~~~~~~~~~~~~
+``GET /{bucket}`` returns a container for the objects in the bucket, with the following fields.
+
++------------------------+-----------+----------------------------------------------------------------------------------+
+| Name | Type | Description |
++========================+===========+==================================================================================+
+| ``ListBucketResult`` | Entity | The container for the list of objects. |
++------------------------+-----------+----------------------------------------------------------------------------------+
+| ``Name`` | String | The name of the bucket whose contents will be returned. |
++------------------------+-----------+----------------------------------------------------------------------------------+
+| ``Prefix`` | String | A prefix for the object keys. |
++------------------------+-----------+----------------------------------------------------------------------------------+
+| ``Marker`` | String | A beginning index for the list of objects returned. |
++------------------------+-----------+----------------------------------------------------------------------------------+
+| ``MaxKeys`` | Integer | The maximum number of keys returned. |
++------------------------+-----------+----------------------------------------------------------------------------------+
+| ``Delimiter`` | String | If set, objects with the same prefix will appear in the ``CommonPrefixes`` list. |
++------------------------+-----------+----------------------------------------------------------------------------------+
+| ``IsTruncated`` | Boolean | If ``true``, only a subset of the bucket's contents were returned. |
++------------------------+-----------+----------------------------------------------------------------------------------+
+| ``CommonPrefixes`` | Container | If multiple objects contain the same prefix, they will appear in this list. |
++------------------------+-----------+----------------------------------------------------------------------------------+
+
+Object Response Entities
+~~~~~~~~~~~~~~~~~~~~~~~~
+The ``ListBucketResult`` contains objects, where each object is within a ``Contents`` container.
+
++------------------------+-----------+------------------------------------------+
+| Name | Type | Description |
++========================+===========+==========================================+
+| ``Contents`` | Object | A container for the object. |
++------------------------+-----------+------------------------------------------+
+| ``Key`` | String | The object's key. |
++------------------------+-----------+------------------------------------------+
+| ``LastModified`` | Date | The object's last-modified date/time. |
++------------------------+-----------+------------------------------------------+
+| ``ETag``               | String    | An MD5 hash of the object (entity tag).  |
++------------------------+-----------+------------------------------------------+
+| ``Size`` | Integer | The object's size. |
++------------------------+-----------+------------------------------------------+
+| ``StorageClass`` | String | Should always return ``STANDARD``. |
++------------------------+-----------+------------------------------------------+
+| ``Type`` | String | ``Appendable`` or ``Normal``. |
++------------------------+-----------+------------------------------------------+
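+
+A boto3 sketch of a listing request using the standard parameters (the
+endpoint and credentials are placeholders; the non-standard
+``allow-unordered`` extension is not expressible through stock boto3):
+
+.. code-block:: python
+
+    import boto3
+
+    s3 = boto3.client('s3',
+                      endpoint_url='http://localhost:8000',
+                      aws_access_key_id='TESTER',
+                      aws_secret_access_key='test123')
+
+    resp = s3.list_objects(Bucket='mybucket', MaxKeys=25,
+                           Prefix='photos/', Delimiter='/')
+    for entry in resp.get('Contents', []):
+        print(entry['Key'], entry['Size'], entry['ETag'])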
+
+Get Bucket Location
+-------------------
+Retrieves the bucket's region. The user needs to be the bucket owner
+to call this. A bucket can be constrained to a region by providing
+``LocationConstraint`` during a PUT request.
+
+Syntax
+~~~~~~
+Add the ``location`` subresource to the bucket resource as shown below.
+
+::
+
+ GET /{bucket}?location HTTP/1.1
+ Host: cname.domain.com
+
+ Authorization: AWS {access-key}:{hash-of-header-and-secret}
+
+Response Entities
+~~~~~~~~~~~~~~~~~~~~~~~~
+
++------------------------+-----------+------------------------------------------+
+| Name | Type | Description |
++========================+===========+==========================================+
+| ``LocationConstraint`` | String    | The region where the bucket resides; an  |
+|                        |           | empty string for the default region      |
++------------------------+-----------+------------------------------------------+
+
+
+
+Get Bucket ACL
+--------------
+Retrieves the bucket access control list. The user needs to be the bucket
+owner or to have been granted ``READ_ACP`` permission on the bucket.
+
+Syntax
+~~~~~~
+Add the ``acl`` subresource to the bucket request as shown below.
+
+::
+
+ GET /{bucket}?acl HTTP/1.1
+ Host: cname.domain.com
+
+ Authorization: AWS {access-key}:{hash-of-header-and-secret}
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| Name | Type | Description |
++===========================+=============+==============================================================================================+
+| ``AccessControlPolicy`` | Container | A container for the response. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``AccessControlList`` | Container | A container for the ACL information. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Owner`` | Container | A container for the bucket owner's ``ID`` and ``DisplayName``. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``ID`` | String | The bucket owner's ID. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``DisplayName`` | String | The bucket owner's display name. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Grant`` | Container | A container for ``Grantee`` and ``Permission``. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Grantee`` | Container | A container for the ``DisplayName`` and ``ID`` of the user receiving a grant of permission. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Permission``            | String      | The permission given to the ``Grantee``.                                                     |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
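+
+For illustration, a boto3 sketch (the endpoint and credentials are
+placeholders):
+
+.. code-block:: python
+
+    import boto3
+
+    s3 = boto3.client('s3',
+                      endpoint_url='http://localhost:8000',
+                      aws_access_key_id='TESTER',
+                      aws_secret_access_key='test123')
+
+    acl = s3.get_bucket_acl(Bucket='mybucket')
+    print(acl['Owner'])
+    for grant in acl['Grants']:
+        print(grant['Grantee'], grant['Permission'])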
+
+PUT Bucket ACL
+--------------
+Sets an access control to an existing bucket. The user needs to be the bucket
+owner or to have been granted ``WRITE_ACP`` permission on the bucket.
+
+Syntax
+~~~~~~
+Add the ``acl`` subresource to the bucket request as shown below.
+
+::
+
+ PUT /{bucket}?acl HTTP/1.1
+
+Request Entities
+~~~~~~~~~~~~~~~~
+
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| Name | Type | Description |
++===========================+=============+==============================================================================================+
+| ``AccessControlPolicy`` | Container | A container for the request. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``AccessControlList`` | Container | A container for the ACL information. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Owner`` | Container | A container for the bucket owner's ``ID`` and ``DisplayName``. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``ID`` | String | The bucket owner's ID. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``DisplayName`` | String | The bucket owner's display name. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Grant`` | Container | A container for ``Grantee`` and ``Permission``. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Grantee`` | Container | A container for the ``DisplayName`` and ``ID`` of the user receiving a grant of permission. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Permission``            | String      | The permission given to the ``Grantee``.                                                     |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
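+
+A boto3 sketch granting full control to a placeholder user ID (all
+names and the endpoint are placeholders):
+
+.. code-block:: python
+
+    import boto3
+
+    s3 = boto3.client('s3',
+                      endpoint_url='http://localhost:8000',
+                      aws_access_key_id='TESTER',
+                      aws_secret_access_key='test123')
+
+    s3.put_bucket_acl(
+        Bucket='mybucket',
+        AccessControlPolicy={
+            'Owner': {'ID': 'tester', 'DisplayName': 'Tester'},
+            'Grants': [{
+                'Grantee': {'Type': 'CanonicalUser', 'ID': 'tester',
+                            'DisplayName': 'Tester'},
+                'Permission': 'FULL_CONTROL',
+            }],
+        })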
+
+List Bucket Multipart Uploads
+-----------------------------
+
+``GET /?uploads`` returns a list of the current in-progress multipart uploads,
+i.e. uploads that have been initiated but not yet completed or aborted.
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{bucket}?uploads HTTP/1.1
+
+Parameters
+~~~~~~~~~~
+
+You may specify parameters for ``GET /{bucket}?uploads``, but none of them are required.
+
++------------------------+-----------+--------------------------------------------------------------------------------------+
+| Name | Type | Description |
++========================+===========+======================================================================================+
+| ``prefix``             | String    | Returns in-progress uploads whose keys contain the specified prefix.                |
++------------------------+-----------+--------------------------------------------------------------------------------------+
+| ``delimiter`` | String | The delimiter between the prefix and the rest of the object name. |
++------------------------+-----------+--------------------------------------------------------------------------------------+
+| ``key-marker`` | String | The beginning marker for the list of uploads. |
++------------------------+-----------+--------------------------------------------------------------------------------------+
+| ``max-keys`` | Integer | The maximum number of in-progress uploads. The default is 1000. |
++------------------------+-----------+--------------------------------------------------------------------------------------+
+| ``max-uploads``        | Integer   | The maximum number of multipart uploads. The range is 1-1000. The default is 1000.  |
++------------------------+-----------+--------------------------------------------------------------------------------------+
+| ``upload-id-marker``   | String    | Ignored if ``key-marker`` is not specified. Specifies the ``ID`` of the first       |
+| | | upload to list in lexicographical order at or following the ``ID``. |
++------------------------+-----------+--------------------------------------------------------------------------------------+
+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| Name | Type | Description |
++=========================================+=============+==========================================================================================================+
+| ``ListMultipartUploadsResult`` | Container | A container for the results. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``ListMultipartUploadsResult.Prefix`` | String | The prefix specified by the ``prefix`` request parameter (if any). |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Bucket``                              | String      | The bucket that contains the in-progress multipart uploads.                                             |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``KeyMarker`` | String | The key marker specified by the ``key-marker`` request parameter (if any). |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``UploadIdMarker`` | String | The marker specified by the ``upload-id-marker`` request parameter (if any). |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``NextKeyMarker`` | String | The key marker to use in a subsequent request if ``IsTruncated`` is ``true``. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``NextUploadIdMarker`` | String | The upload ID marker to use in a subsequent request if ``IsTruncated`` is ``true``. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``MaxUploads`` | Integer | The max uploads specified by the ``max-uploads`` request parameter. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Delimiter`` | String | If set, objects with the same prefix will appear in the ``CommonPrefixes`` list. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``IsTruncated`` | Boolean | If ``true``, only a subset of the bucket's upload contents were returned. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Upload`` | Container | A container for ``Key``, ``UploadId``, ``InitiatorOwner``, ``StorageClass``, and ``Initiated`` elements. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Key`` | String | The key of the object once the multipart upload is complete. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``UploadId`` | String | The ``ID`` that identifies the multipart upload. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Initiator`` | Container | Contains the ``ID`` and ``DisplayName`` of the user who initiated the upload. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``DisplayName`` | String | The initiator's display name. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``ID`` | String | The initiator's ID. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Owner`` | Container | A container for the ``ID`` and ``DisplayName`` of the user who owns the uploaded object. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``StorageClass`` | String | The method used to store the resulting object. ``STANDARD`` or ``REDUCED_REDUNDANCY`` |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Initiated`` | Date | The date and time the user initiated the upload. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``CommonPrefixes`` | Container | If multiple objects contain the same prefix, they will appear in this list. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``CommonPrefixes.Prefix`` | String | The substring of the key after the prefix as defined by the ``prefix`` request parameter. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
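+
+A boto3 sketch of listing in-progress uploads (the endpoint and
+credentials are placeholders):
+
+.. code-block:: python
+
+    import boto3
+
+    s3 = boto3.client('s3',
+                      endpoint_url='http://localhost:8000',
+                      aws_access_key_id='TESTER',
+                      aws_secret_access_key='test123')
+
+    resp = s3.list_multipart_uploads(Bucket='mybucket', MaxUploads=100)
+    for upload in resp.get('Uploads', []):
+        print(upload['Key'], upload['UploadId'], upload['Initiated'])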
+
+Enable/Suspend Bucket Versioning
+--------------------------------
+
+``PUT /?versioning`` sets the versioning state of an existing bucket. To set the versioning state, you must be the bucket owner.
+
+You can set the versioning state to one of the following values:
+
+- ``Enabled``: Enables versioning for the objects in the bucket. All objects added to the bucket receive a unique version ID.
+- ``Suspended``: Disables versioning for the objects in the bucket. All objects added to the bucket receive the version ID ``null``.
+
+If the versioning state has never been set on a bucket, it has no versioning state; a GET versioning request does not return a versioning state value.
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{bucket}?versioning HTTP/1.1
+
+Request Entities
+~~~~~~~~~~~~~~~~
+
++-----------------------------+-----------+---------------------------------------------------------------------------+
+| Name | Type | Description |
++=============================+===========+===========================================================================+
+| ``VersioningConfiguration`` | Container | A container for the request. |
++-----------------------------+-----------+---------------------------------------------------------------------------+
+| ``Status`` | String | Sets the versioning state of the bucket. Valid Values: Suspended/Enabled |
++-----------------------------+-----------+---------------------------------------------------------------------------+
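+
+A boto3 sketch of enabling versioning (the endpoint and credentials are
+placeholders):
+
+.. code-block:: python
+
+    import boto3
+
+    s3 = boto3.client('s3',
+                      endpoint_url='http://localhost:8000',
+                      aws_access_key_id='TESTER',
+                      aws_secret_access_key='test123')
+
+    s3.put_bucket_versioning(
+        Bucket='mybucket',
+        VersioningConfiguration={'Status': 'Enabled'})  # or 'Suspended'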
+
+PUT Bucket Object Lock
+--------------------------------
+
+Places an Object Lock configuration on the specified bucket. The rule specified in the Object Lock configuration will be
+applied by default to every new object placed in the specified bucket.
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{bucket}?object-lock HTTP/1.1
+
+Request Entities
+~~~~~~~~~~~~~~~~
+
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| Name | Type | Description | Required |
++=============================+=============+========================================================================================+==========+
+| ``ObjectLockConfiguration`` | Container | A container for the request. | Yes |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``ObjectLockEnabled`` | String | Indicates whether this bucket has an Object Lock configuration enabled. | Yes |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``Rule`` | Container | The Object Lock rule in place for the specified bucket. | No |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``DefaultRetention`` | Container | The default retention period applied to new objects placed in the specified bucket. | No |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``Mode`` | String | The default Object Lock retention mode. Valid Values: GOVERNANCE/COMPLIANCE | Yes |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``Days`` | Integer | The number of days specified for the default retention period. | No |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``Years`` | Integer | The number of years specified for the default retention period. | No |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+
+HTTP Response
+~~~~~~~~~~~~~
+
+If object lock was not enabled when the bucket was created, the operation will fail.
+
++---------------+-----------------------+----------------------------------------------------------+
+| HTTP Status | Status Code | Description |
++===============+=======================+==========================================================+
+| ``400`` | MalformedXML | The XML is not well-formed |
++---------------+-----------------------+----------------------------------------------------------+
+| ``409`` | InvalidBucketState | The bucket object lock is not enabled |
++---------------+-----------------------+----------------------------------------------------------+
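+
+A boto3 sketch setting a default ``GOVERNANCE`` retention of 30 days
+(the endpoint and credentials are placeholders; the bucket must have
+been created with object lock enabled):
+
+.. code-block:: python
+
+    import boto3
+
+    s3 = boto3.client('s3',
+                      endpoint_url='http://localhost:8000',
+                      aws_access_key_id='TESTER',
+                      aws_secret_access_key='test123')
+
+    s3.put_object_lock_configuration(
+        Bucket='mybucket',
+        ObjectLockConfiguration={
+            'ObjectLockEnabled': 'Enabled',
+            'Rule': {'DefaultRetention': {'Mode': 'GOVERNANCE', 'Days': 30}},
+        })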
+
+GET Bucket Object Lock
+--------------------------------
+
+Gets the Object Lock configuration for a bucket. The rule specified in the Object Lock configuration will be applied by
+default to every new object placed in the specified bucket.
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{bucket}?object-lock HTTP/1.1
+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| Name | Type | Description | Required |
++=============================+=============+========================================================================================+==========+
+| ``ObjectLockConfiguration`` | Container   | A container for the response.                                                          | Yes      |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``ObjectLockEnabled`` | String | Indicates whether this bucket has an Object Lock configuration enabled. | Yes |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``Rule`` | Container | The Object Lock rule in place for the specified bucket. | No |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``DefaultRetention`` | Container | The default retention period applied to new objects placed in the specified bucket. | No |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``Mode`` | String | The default Object Lock retention mode. Valid Values: GOVERNANCE/COMPLIANCE | Yes |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``Days`` | Integer | The number of days specified for the default retention period. | No |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+| ``Years`` | Integer | The number of years specified for the default retention period. | No |
++-----------------------------+-------------+----------------------------------------------------------------------------------------+----------+
+
+Create Notification
+-------------------
+
+Create a notification that publishes events from a specific bucket to a topic.
+
+Syntax
+~~~~~~
+
+::
+
+   PUT /{bucket}?notification HTTP/1.1
+
+
+Request Entities
+~~~~~~~~~~~~~~~~
+
+Parameters are XML encoded in the body of the request, in the following format:
+
+::
+
+ <NotificationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
+ <TopicConfiguration>
+ <Id></Id>
+ <Topic></Topic>
+ <Event></Event>
+ <Filter>
+ <S3Key>
+ <FilterRule>
+ <Name></Name>
+ <Value></Value>
+ </FilterRule>
+ </S3Key>
+ <S3Metadata>
+ <FilterRule>
+ <Name></Name>
+ <Value></Value>
+ </FilterRule>
+ </S3Metadata>
+ <S3Tags>
+ <FilterRule>
+ <Name></Name>
+ <Value></Value>
+ </FilterRule>
+ </S3Tags>
+ </Filter>
+ </TopicConfiguration>
+ </NotificationConfiguration>
+
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| Name | Type | Description | Required |
++===============================+===========+======================================================================================+==========+
+| ``NotificationConfiguration`` | Container | Holding a list of ``TopicConfiguration`` entities                                   | Yes      |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``TopicConfiguration``        | Container | Holding ``Id``, ``Topic`` and a list of ``Event`` entities                          | Yes      |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Id`` | String | Name of the notification | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Topic`` | String | Topic ARN. Topic must be created beforehand | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Event``                     | String    | List of supported events; see `S3 Notification Compatibility`_. Multiple ``Event``  | No       |
+| | | entities can be used. If omitted, all events are handled | |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Filter`` | Container | Holding ``S3Key``, ``S3Metadata`` and ``S3Tags`` entities | No |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``S3Key`` | Container | Holding a list of ``FilterRule`` entities, for filtering based on object key. | No |
+|                               |           | At most, 3 entities may be in the list, with ``Name`` being ``prefix``, ``suffix`` or|          |
+| | | ``regex``. All filter rules in the list must match for the filter to match. | |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``S3Metadata`` | Container | Holding a list of ``FilterRule`` entities, for filtering based on object metadata. | No |
+| | | All filter rules in the list must match the metadata defined on the object. However, | |
+|                               |           | the object still matches if it has other metadata entries not listed in the filter.  |          |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``S3Tags`` | Container | Holding a list of ``FilterRule`` entities, for filtering based on object tags. | No |
+| | | All filter rules in the list must match the tags defined on the object. However, | |
+|                               |           | the object still matches if it has other tags not listed in the filter.              |          |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``S3Key.FilterRule`` | Container | Holding ``Name`` and ``Value`` entities. ``Name`` would be: ``prefix``, ``suffix`` | Yes |
+| | | or ``regex``. The ``Value`` would hold the key prefix, key suffix or a regular | |
+| | | expression for matching the key, accordingly. | |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``S3Metadata.FilterRule`` | Container | Holding ``Name`` and ``Value`` entities. ``Name`` would be the name of the metadata | Yes |
+| | | attribute (e.g. ``x-amz-meta-xxx``). The ``Value`` would be the expected value for | |
+| | | this attribute. | |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``S3Tags.FilterRule`` | Container | Holding ``Name`` and ``Value`` entities. ``Name`` would be the tag key, | Yes |
+| | | and ``Value`` would be the tag value. | |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+
+
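+For example, an illustrative configuration (the topic ARN, notification name,
+event, and filter value below are placeholders) that publishes object-creation
+events for keys under a given prefix::
+
+    <NotificationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
+        <TopicConfiguration>
+            <Id>mynotif1</Id>
+            <Topic>arn:aws:sns:default::mytopic</Topic>
+            <Event>s3:ObjectCreated:*</Event>
+            <Filter>
+                <S3Key>
+                    <FilterRule>
+                        <Name>prefix</Name>
+                        <Value>images/</Value>
+                    </FilterRule>
+                </S3Key>
+            </Filter>
+        </TopicConfiguration>
+    </NotificationConfiguration>
+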
+HTTP Response
+~~~~~~~~~~~~~
+
++---------------+-----------------------+----------------------------------------------------------+
+| HTTP Status | Status Code | Description |
++===============+=======================+==========================================================+
+| ``400`` | MalformedXML | The XML is not well-formed |
++---------------+-----------------------+----------------------------------------------------------+
+| ``400`` | InvalidArgument | Missing Id; Missing/Invalid Topic ARN; Invalid Event |
++---------------+-----------------------+----------------------------------------------------------+
+| ``404`` | NoSuchBucket | The bucket does not exist |
++---------------+-----------------------+----------------------------------------------------------+
+| ``404`` | NoSuchKey | The topic does not exist |
++---------------+-----------------------+----------------------------------------------------------+
+
+
+Delete Notification
+-------------------
+
+Delete a specific notification, or all notifications, from a bucket.
+
+.. note::
+
+ - Notification deletion is an extension to the S3 notification API
+ - When the bucket is deleted, any notification defined on it is also deleted
+ - Deleting an unknown notification (e.g. double delete) is not considered an error
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /bucket?notification[=<notification-id>] HTTP/1.1
+
+
+Parameters
+~~~~~~~~~~
+
++------------------------+-----------+----------------------------------------------------------------------------------------+
+| Name | Type | Description |
++========================+===========+========================================================================================+
+| ``notification-id`` | String | Name of the notification. If not provided, all notifications on the bucket are deleted |
++------------------------+-----------+----------------------------------------------------------------------------------------+
+
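+For example, an illustrative request deleting the single notification
+``mynotif1`` from a bucket (names are placeholders)::
+
+    DELETE /mybucket?notification=mynotif1 HTTP/1.1
+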
+HTTP Response
+~~~~~~~~~~~~~
+
++---------------+-----------------------+----------------------------------------------------------+
+| HTTP Status | Status Code | Description |
++===============+=======================+==========================================================+
+| ``404`` | NoSuchBucket | The bucket does not exist |
++---------------+-----------------------+----------------------------------------------------------+
+
+Get/List Notification
+---------------------
+
+Get a specific notification, or list all notifications configured on a bucket.
+
+Syntax
+~~~~~~
+
+::
+
+ GET /bucket?notification[=<notification-id>] HTTP/1.1
+
+
+Parameters
+~~~~~~~~~~
+
++------------------------+-----------+----------------------------------------------------------------------------------------+
+| Name | Type | Description |
++========================+===========+========================================================================================+
+| ``notification-id`` | String | Name of the notification. If not provided, all notifications on the bucket are listed |
++------------------------+-----------+----------------------------------------------------------------------------------------+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+The response is XML encoded in the message body, in the following format:
+
+::
+
+ <NotificationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
+ <TopicConfiguration>
+ <Id></Id>
+ <Topic></Topic>
+ <Event></Event>
+ <Filter>
+ <S3Key>
+ <FilterRule>
+ <Name></Name>
+ <Value></Value>
+ </FilterRule>
+ </S3Key>
+ <S3Metadata>
+ <FilterRule>
+ <Name></Name>
+ <Value></Value>
+ </FilterRule>
+ </S3Metadata>
+ <S3Tags>
+ <FilterRule>
+ <Name></Name>
+ <Value></Value>
+ </FilterRule>
+ </S3Tags>
+ </Filter>
+ </TopicConfiguration>
+ </NotificationConfiguration>
+
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| Name | Type | Description | Required |
++===============================+===========+======================================================================================+==========+
+| ``NotificationConfiguration`` | Container | Holding list of ``TopicConfiguration`` entities | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``TopicConfiguration`` | Container | Holding ``Id``, ``Topic`` and list of ``Event`` entities | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Id`` | String | Name of the notification | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Topic`` | String | Topic ARN | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Event`` | String | Handled event. Multiple ``Event`` entities may exist | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Filter`` | Container | Holding the filters configured for this notification | No |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+
+HTTP Response
+~~~~~~~~~~~~~
+
++---------------+-----------------------+----------------------------------------------------------+
+| HTTP Status | Status Code | Description |
++===============+=======================+==========================================================+
+| ``404`` | NoSuchBucket | The bucket does not exist |
++---------------+-----------------------+----------------------------------------------------------+
+| ``404`` | NoSuchKey | The notification does not exist (if provided) |
++---------------+-----------------------+----------------------------------------------------------+
+
+.. _S3 Notification Compatibility: ../../s3-notification-compatibility
diff --git a/doc/radosgw/s3/commons.rst b/doc/radosgw/s3/commons.rst
new file mode 100644
index 000000000..ca848bc27
--- /dev/null
+++ b/doc/radosgw/s3/commons.rst
@@ -0,0 +1,111 @@
+=================
+ Common Entities
+=================
+
+.. toctree::
+ :maxdepth: -1
+
+Bucket and Host Name
+--------------------
+There are two different modes of accessing the buckets. The first (preferred) method
+identifies the bucket as the top-level directory in the URI. ::
+
+ GET /mybucket HTTP/1.1
+ Host: cname.domain.com
+
+The second method identifies the bucket via a virtual bucket host name. For example::
+
+ GET / HTTP/1.1
+ Host: mybucket.cname.domain.com
+
+To configure virtual hosted buckets, you can either set ``rgw_dns_name = cname.domain.com`` in ceph.conf, or add ``cname.domain.com`` to the list of ``hostnames`` in your zonegroup configuration. See `Ceph Object Gateway - Multisite Configuration`_ for more on zonegroups.
+
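+For example, a minimal sketch of the first option (the ``[client.rgw.gateway-node1]``
+section name is a placeholder; use the section for your own gateway instance)::
+
+    [client.rgw.gateway-node1]
+    rgw_dns_name = cname.domain.com
+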
+.. tip:: We prefer the first method, because the second method requires expensive domain certification and DNS wildcards.
+
+Common Request Headers
+----------------------
+
++--------------------+------------------------------------------+
+| Request Header | Description |
++====================+==========================================+
+| ``CONTENT_LENGTH`` | Length of the request body. |
++--------------------+------------------------------------------+
+| ``DATE`` | Request time and date (in UTC). |
++--------------------+------------------------------------------+
+| ``HOST`` | The name of the host server. |
++--------------------+------------------------------------------+
+| ``AUTHORIZATION`` | Authorization token. |
++--------------------+------------------------------------------+
+
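+An illustrative request carrying these headers (host, date, and credentials
+are placeholders)::
+
+    GET /mybucket HTTP/1.1
+    Host: cname.domain.com
+    Date: Fri, 01 Dec 2023 12:00:00 GMT
+    Authorization: AWS ACCESS_KEY:SIGNATURE
+    Content-Length: 0
+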
+Common Response Status
+----------------------
+
++---------------+-----------------------------------+
+| HTTP Status | Response Code |
++===============+===================================+
+| ``100`` | Continue |
++---------------+-----------------------------------+
+| ``200`` | Success |
++---------------+-----------------------------------+
+| ``201`` | Created |
++---------------+-----------------------------------+
+| ``202`` | Accepted |
++---------------+-----------------------------------+
+| ``204`` | NoContent |
++---------------+-----------------------------------+
+| ``206`` | Partial content |
++---------------+-----------------------------------+
+| ``304`` | NotModified |
++---------------+-----------------------------------+
+| ``400`` | InvalidArgument |
++---------------+-----------------------------------+
+| ``400`` | InvalidDigest |
++---------------+-----------------------------------+
+| ``400`` | BadDigest |
++---------------+-----------------------------------+
+| ``400`` | InvalidBucketName |
++---------------+-----------------------------------+
+| ``400`` | InvalidObjectName |
++---------------+-----------------------------------+
+| ``400`` | UnresolvableGrantByEmailAddress |
++---------------+-----------------------------------+
+| ``400`` | InvalidPart |
++---------------+-----------------------------------+
+| ``400`` | InvalidPartOrder |
++---------------+-----------------------------------+
+| ``400`` | RequestTimeout |
++---------------+-----------------------------------+
+| ``400`` | EntityTooLarge |
++---------------+-----------------------------------+
+| ``403`` | AccessDenied |
++---------------+-----------------------------------+
+| ``403`` | UserSuspended |
++---------------+-----------------------------------+
+| ``403`` | RequestTimeTooSkewed |
++---------------+-----------------------------------+
+| ``404`` | NoSuchKey |
++---------------+-----------------------------------+
+| ``404`` | NoSuchBucket |
++---------------+-----------------------------------+
+| ``404`` | NoSuchUpload |
++---------------+-----------------------------------+
+| ``405`` | MethodNotAllowed |
++---------------+-----------------------------------+
+| ``408`` | RequestTimeout |
++---------------+-----------------------------------+
+| ``409`` | BucketAlreadyExists |
++---------------+-----------------------------------+
+| ``409`` | BucketNotEmpty |
++---------------+-----------------------------------+
+| ``411`` | MissingContentLength |
++---------------+-----------------------------------+
+| ``412`` | PreconditionFailed |
++---------------+-----------------------------------+
+| ``416`` | InvalidRange |
++---------------+-----------------------------------+
+| ``422`` | UnprocessableEntity |
++---------------+-----------------------------------+
+| ``500`` | InternalError |
++---------------+-----------------------------------+
+
+.. _`Ceph Object Gateway - Multisite Configuration`: ../../multisite
diff --git a/doc/radosgw/s3/cpp.rst b/doc/radosgw/s3/cpp.rst
new file mode 100644
index 000000000..089c9c53a
--- /dev/null
+++ b/doc/radosgw/s3/cpp.rst
@@ -0,0 +1,337 @@
+.. _cpp:
+
+C++ S3 Examples
+===============
+
+Setup
+-----
+
+The following includes and globals are used in the later examples:
+
+.. code-block:: cpp
+
+ #include "libs3.h"
+ #include <stdlib.h>
+ #include <iostream>
+ #include <fstream>
+
+ const char access_key[] = "ACCESS_KEY";
+ const char secret_key[] = "SECRET_KEY";
+ const char host[] = "HOST";
+ const char sample_bucket[] = "sample_bucket";
+ const char sample_key[] = "hello.txt";
+ const char sample_file[] = "resource/hello.txt";
+ const char *security_token = NULL;
+ const char *auth_region = NULL;
+
+ S3BucketContext bucketContext =
+ {
+ host,
+ sample_bucket,
+ S3ProtocolHTTP,
+ S3UriStylePath,
+ access_key,
+ secret_key,
+ security_token,
+ auth_region
+ };
+
+ S3Status responsePropertiesCallback(
+ const S3ResponseProperties *properties,
+ void *callbackData)
+ {
+ return S3StatusOK;
+ }
+
+ static void responseCompleteCallback(
+ S3Status status,
+ const S3ErrorDetails *error,
+ void *callbackData)
+ {
+ return;
+ }
+
+ S3ResponseHandler responseHandler =
+ {
+ &responsePropertiesCallback,
+ &responseCompleteCallback
+ };
+
+
+Creating (and Closing) a Connection
+-----------------------------------
+
+This creates a connection so that you can interact with the server.
+
+.. code-block:: cpp
+
+ S3_initialize("s3", S3_INIT_ALL, host);
+ // Do stuff...
+ S3_deinitialize();
+
+
+Listing Owned Buckets
+---------------------
+
+This gets a list of Buckets that you own.
+This also prints out the bucket name, owner ID, and display name
+for each bucket.
+
+.. code-block:: cpp
+
+ static S3Status listServiceCallback(
+ const char *ownerId,
+ const char *ownerDisplayName,
+ const char *bucketName,
+ int64_t creationDate, void *callbackData)
+ {
+ bool *header_printed = (bool*) callbackData;
+ if (!*header_printed) {
+ *header_printed = true;
+ printf("%-22s", " Bucket");
+ printf(" %-20s %-12s", " Owner ID", "Display Name");
+ printf("\n");
+ printf("----------------------");
+ printf(" --------------------" " ------------");
+ printf("\n");
+ }
+
+ printf("%-22s", bucketName);
+ printf(" %-20s %-12s", ownerId ? ownerId : "", ownerDisplayName ? ownerDisplayName : "");
+ printf("\n");
+
+ return S3StatusOK;
+ }
+
+ S3ListServiceHandler listServiceHandler =
+ {
+ responseHandler,
+ &listServiceCallback
+ };
+ bool header_printed = false;
+ S3_list_service(S3ProtocolHTTP, access_key, secret_key, security_token, host,
+ auth_region, NULL, 0, &listServiceHandler, &header_printed);
+
+
+Creating a Bucket
+-----------------
+
+This creates a new bucket.
+
+.. code-block:: cpp
+
+ S3_create_bucket(S3ProtocolHTTP, access_key, secret_key, NULL, host, sample_bucket, S3CannedAclPrivate, NULL, NULL, &responseHandler, NULL);
+
+
+Listing a Bucket's Content
+--------------------------
+
+This gets a list of objects in the bucket.
+This also prints out each object's name, the file size, and
+last modified date.
+
+.. code-block:: cpp
+
+ static S3Status listBucketCallback(
+ int isTruncated,
+ const char *nextMarker,
+ int contentsCount,
+ const S3ListBucketContent *contents,
+ int commonPrefixesCount,
+ const char **commonPrefixes,
+ void *callbackData)
+ {
+ printf("%-22s", " Object Name");
+ printf(" %-5s %-20s", "Size", " Last Modified");
+ printf("\n");
+ printf("----------------------");
+ printf(" -----" " --------------------");
+ printf("\n");
+
+ for (int i = 0; i < contentsCount; i++) {
+ char timebuf[256];
+ char sizebuf[16];
+ const S3ListBucketContent *content = &(contents[i]);
+ time_t t = (time_t) content->lastModified;
+
+ strftime(timebuf, sizeof(timebuf), "%Y-%m-%dT%H:%M:%SZ", gmtime(&t));
+ sprintf(sizebuf, "%5llu", (unsigned long long) content->size);
+ printf("%-22s %s %s\n", content->key, sizebuf, timebuf);
+ }
+
+ return S3StatusOK;
+ }
+
+ S3ListBucketHandler listBucketHandler =
+ {
+ responseHandler,
+ &listBucketCallback
+ };
+ S3_list_bucket(&bucketContext, NULL, NULL, NULL, 0, NULL, 0, &listBucketHandler, NULL);
+
+The output will look something like this::
+
+ myphoto1.jpg 251262 2011-08-08T21:35:48.000Z
+ myphoto2.jpg 262518 2011-08-08T21:38:01.000Z
+
+
+Deleting a Bucket
+-----------------
+
+.. note::
+
+ The Bucket must be empty! Otherwise it won't work!
+
+.. code-block:: cpp
+
+ S3_delete_bucket(S3ProtocolHTTP, S3UriStylePath, access_key, secret_key, 0, host, sample_bucket, NULL, NULL, 0, &responseHandler, NULL);
+
+
+Creating an Object (from a file)
+--------------------------------
+
+This uploads the local file ``resource/hello.txt`` as the object ``hello.txt``.
+
+.. code-block:: cpp
+
+ #include <sys/stat.h>
+ typedef struct put_object_callback_data
+ {
+ FILE *infile;
+ uint64_t contentLength;
+ } put_object_callback_data;
+
+
+ static int putObjectDataCallback(int bufferSize, char *buffer, void *callbackData)
+ {
+ put_object_callback_data *data = (put_object_callback_data *) callbackData;
+
+ int ret = 0;
+
+ if (data->contentLength) {
+ int toRead = ((data->contentLength > (unsigned) bufferSize) ? (unsigned) bufferSize : data->contentLength);
+ ret = fread(buffer, 1, toRead, data->infile);
+ }
+ data->contentLength -= ret;
+ return ret;
+ }
+
+ put_object_callback_data data;
+ struct stat statbuf;
+ if (stat(sample_file, &statbuf) == -1) {
+ fprintf(stderr, "\nERROR: Failed to stat file %s: ", sample_file);
+ perror(0);
+ exit(-1);
+ }
+
+ int contentLength = statbuf.st_size;
+ data.contentLength = contentLength;
+
+ if (!(data.infile = fopen(sample_file, "r"))) {
+ fprintf(stderr, "\nERROR: Failed to open input file %s: ", sample_file);
+ perror(0);
+ exit(-1);
+ }
+
+ S3PutObjectHandler putObjectHandler =
+ {
+ responseHandler,
+ &putObjectDataCallback
+ };
+
+ S3_put_object(&bucketContext, sample_key, contentLength, NULL, NULL, 0, &putObjectHandler, &data);
+ fclose(data.infile);
+
+
+Download an Object (to a file)
+------------------------------
+
+This downloads a file and prints the contents.
+
+.. code-block:: cpp
+
+ static S3Status getObjectDataCallback(int bufferSize, const char *buffer, void *callbackData)
+ {
+ FILE *outfile = (FILE *) callbackData;
+ size_t wrote = fwrite(buffer, 1, bufferSize, outfile);
+ return ((wrote < (size_t) bufferSize) ? S3StatusAbortedByCallback : S3StatusOK);
+ }
+
+ S3GetObjectHandler getObjectHandler =
+ {
+ responseHandler,
+ &getObjectDataCallback
+ };
+ FILE *outfile = stdout;
+ S3_get_object(&bucketContext, sample_key, NULL, 0, 0, NULL, 0, &getObjectHandler, outfile);
+
+
+Delete an Object
+----------------
+
+This deletes an object.
+
+.. code-block:: cpp
+
+ S3ResponseHandler deleteResponseHandler =
+ {
+ 0,
+ &responseCompleteCallback
+ };
+ S3_delete_object(&bucketContext, sample_key, 0, 0, &deleteResponseHandler, 0);
+
+
+Change an Object's ACL
+----------------------
+
+This replaces the object's ACL, granting full control to the owner,
+ACL-read permission to another user, and read access to all users.
+
+
+.. code-block:: cpp
+
+ #include <string.h>
+ char ownerId[] = "owner";
+ char ownerDisplayName[] = "owner";
+ char granteeId[] = "grantee";
+ char granteeDisplayName[] = "grantee";
+
+ S3AclGrant grants[] = {
+ {
+ S3GranteeTypeCanonicalUser,
+ {{}},
+ S3PermissionFullControl
+ },
+ {
+ S3GranteeTypeCanonicalUser,
+ {{}},
+ S3PermissionReadACP
+ },
+ {
+ S3GranteeTypeAllUsers,
+ {{}},
+ S3PermissionRead
+ }
+ };
+
+ strncpy(grants[0].grantee.canonicalUser.id, ownerId, S3_MAX_GRANTEE_USER_ID_SIZE);
+ strncpy(grants[0].grantee.canonicalUser.displayName, ownerDisplayName, S3_MAX_GRANTEE_DISPLAY_NAME_SIZE);
+
+ strncpy(grants[1].grantee.canonicalUser.id, granteeId, S3_MAX_GRANTEE_USER_ID_SIZE);
+ strncpy(grants[1].grantee.canonicalUser.displayName, granteeDisplayName, S3_MAX_GRANTEE_DISPLAY_NAME_SIZE);
+
+ S3_set_acl(&bucketContext, sample_key, ownerId, ownerDisplayName, 3, grants, 0, &responseHandler, 0);
+
+
+Generate Object Download URL (signed)
+-------------------------------------
+
+This generates a signed download URL that will be valid for 5 minutes.
+
+.. code-block:: cpp
+
+ #include <time.h>
+ char buffer[S3_MAX_AUTHENTICATED_QUERY_STRING_SIZE];
+ int64_t expires = time(NULL) + 60 * 5; // Current time + 5 minutes
+
+ S3_generate_authenticated_query_string(buffer, &bucketContext, sample_key, expires, NULL, "GET");
+
diff --git a/doc/radosgw/s3/csharp.rst b/doc/radosgw/s3/csharp.rst
new file mode 100644
index 000000000..af1c6e4b5
--- /dev/null
+++ b/doc/radosgw/s3/csharp.rst
@@ -0,0 +1,199 @@
+.. _csharp:
+
+C# S3 Examples
+==============
+
+Creating a Connection
+---------------------
+
+This creates a connection so that you can interact with the server.
+
+.. code-block:: csharp
+
+ using System;
+ using Amazon;
+ using Amazon.S3;
+ using Amazon.S3.Model;
+
+ string accessKey = "put your access key here!";
+ string secretKey = "put your secret key here!";
+
+ AmazonS3Config config = new AmazonS3Config();
+ config.ServiceURL = "objects.dreamhost.com";
+
+    AmazonS3Client client = new AmazonS3Client(
+ accessKey,
+ secretKey,
+ config
+ );
+
+
+Listing Owned Buckets
+---------------------
+
+This gets a list of Buckets that you own.
+This also prints out the bucket name and creation date of each bucket.
+
+.. code-block:: csharp
+
+ ListBucketsResponse response = client.ListBuckets();
+ foreach (S3Bucket b in response.Buckets)
+ {
+ Console.WriteLine("{0}\t{1}", b.BucketName, b.CreationDate);
+ }
+
+The output will look something like this::
+
+ mahbuckat1 2011-04-21T18:05:39.000Z
+ mahbuckat2 2011-04-21T18:05:48.000Z
+ mahbuckat3 2011-04-21T18:07:18.000Z
+
+
+Creating a Bucket
+-----------------
+This creates a new bucket called ``my-new-bucket``
+
+.. code-block:: csharp
+
+ PutBucketRequest request = new PutBucketRequest();
+ request.BucketName = "my-new-bucket";
+ client.PutBucket(request);
+
+Listing a Bucket's Content
+--------------------------
+
+This gets a list of objects in the bucket.
+This also prints out each object's name, the file size, and last
+modified date.
+
+.. code-block:: csharp
+
+ ListObjectsRequest request = new ListObjectsRequest();
+ request.BucketName = "my-new-bucket";
+ ListObjectsResponse response = client.ListObjects(request);
+ foreach (S3Object o in response.S3Objects)
+ {
+ Console.WriteLine("{0}\t{1}\t{2}", o.Key, o.Size, o.LastModified);
+ }
+
+The output will look something like this::
+
+ myphoto1.jpg 251262 2011-08-08T21:35:48.000Z
+ myphoto2.jpg 262518 2011-08-08T21:38:01.000Z
+
+
+Deleting a Bucket
+-----------------
+
+.. note::
+
+ The Bucket must be empty! Otherwise it won't work!
+
+.. code-block:: csharp
+
+ DeleteBucketRequest request = new DeleteBucketRequest();
+ request.BucketName = "my-new-bucket";
+ client.DeleteBucket(request);
+
+
+Forced Delete for Non-empty Buckets
+-----------------------------------
+
+.. attention::
+
+ not available
+
+
+Creating an Object
+------------------
+
+This creates a file ``hello.txt`` with the string ``"Hello World!"``
+
+.. code-block:: csharp
+
+ PutObjectRequest request = new PutObjectRequest();
+ request.BucketName = "my-new-bucket";
+ request.Key = "hello.txt";
+ request.ContentType = "text/plain";
+ request.ContentBody = "Hello World!";
+ client.PutObject(request);
+
+
+Change an Object's ACL
+----------------------
+
+This makes the object ``hello.txt`` publicly readable, and
+``secret_plans.txt`` private.
+
+.. code-block:: csharp
+
+ PutACLRequest request = new PutACLRequest();
+ request.BucketName = "my-new-bucket";
+ request.Key = "hello.txt";
+ request.CannedACL = S3CannedACL.PublicRead;
+ client.PutACL(request);
+
+ PutACLRequest request2 = new PutACLRequest();
+ request2.BucketName = "my-new-bucket";
+ request2.Key = "secret_plans.txt";
+ request2.CannedACL = S3CannedACL.Private;
+ client.PutACL(request2);
+
+
+Download an Object (to a file)
+------------------------------
+
+This downloads the object ``perl_poetry.pdf`` and saves it in
+``C:\Users\larry\Documents``
+
+.. code-block:: csharp
+
+ GetObjectRequest request = new GetObjectRequest();
+ request.BucketName = "my-new-bucket";
+ request.Key = "perl_poetry.pdf";
+ GetObjectResponse response = client.GetObject(request);
+ response.WriteResponseStreamToFile("C:\\Users\\larry\\Documents\\perl_poetry.pdf");
+
+
+Delete an Object
+----------------
+
+This deletes the object ``goodbye.txt``
+
+.. code-block:: csharp
+
+ DeleteObjectRequest request = new DeleteObjectRequest();
+ request.BucketName = "my-new-bucket";
+ request.Key = "goodbye.txt";
+ client.DeleteObject(request);
+
+
+Generate Object Download URLs (signed and unsigned)
+---------------------------------------------------
+
+This generates an unsigned download URL for ``hello.txt``. This works
+because we made ``hello.txt`` public by setting the ACL above.
+This then generates a signed download URL for ``secret_plans.txt`` that
+will work for 1 hour. Signed download URLs will work for the time
+period even if the object is private (when the time period is up, the
+URL will stop working).
+
+.. note::
+
+ The C# S3 Library does not have a method for generating unsigned
+ URLs, so the following example only shows generating signed URLs.
+
+.. code-block:: csharp
+
+ GetPreSignedUrlRequest request = new GetPreSignedUrlRequest();
+ request.BucketName = "my-bucket-name";
+ request.Key = "secret_plans.txt";
+ request.Expires = DateTime.Now.AddHours(1);
+ request.Protocol = Protocol.HTTP;
+ string url = client.GetPreSignedURL(request);
+ Console.WriteLine(url);
+
+The output of this will look something like::
+
+ http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&Expires=1316027075&AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX
+
diff --git a/doc/radosgw/s3/java.rst b/doc/radosgw/s3/java.rst
new file mode 100644
index 000000000..057c09c2c
--- /dev/null
+++ b/doc/radosgw/s3/java.rst
@@ -0,0 +1,212 @@
+.. _java:
+
+Java S3 Examples
+================
+
+Setup
+-----
+
+The following examples may require some or all of the following Java
+classes to be imported:
+
+.. code-block:: java
+
+ import java.io.ByteArrayInputStream;
+ import java.io.File;
+ import java.util.List;
+ import com.amazonaws.auth.AWSCredentials;
+ import com.amazonaws.auth.BasicAWSCredentials;
+ import com.amazonaws.util.StringUtils;
+ import com.amazonaws.services.s3.AmazonS3;
+ import com.amazonaws.services.s3.AmazonS3Client;
+ import com.amazonaws.services.s3.model.Bucket;
+ import com.amazonaws.services.s3.model.CannedAccessControlList;
+ import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;
+ import com.amazonaws.services.s3.model.GetObjectRequest;
+ import com.amazonaws.services.s3.model.ObjectListing;
+ import com.amazonaws.services.s3.model.ObjectMetadata;
+ import com.amazonaws.services.s3.model.S3ObjectSummary;
+
+
+If you are just testing the Ceph Object Storage services, consider
+using the HTTP protocol instead of the HTTPS protocol.
+
+First, import the ``ClientConfiguration`` and ``Protocol`` classes.
+
+.. code-block:: java
+
+ import com.amazonaws.ClientConfiguration;
+ import com.amazonaws.Protocol;
+
+
+Then, define the client configuration, and add the client configuration
+as an argument for the S3 client.
+
+.. code-block:: java
+
+ AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
+
+ ClientConfiguration clientConfig = new ClientConfiguration();
+ clientConfig.setProtocol(Protocol.HTTP);
+
+ AmazonS3 conn = new AmazonS3Client(credentials, clientConfig);
+ conn.setEndpoint("endpoint.com");
+
+
+Creating a Connection
+---------------------
+
+This creates a connection so that you can interact with the server.
+
+.. code-block:: java
+
+ String accessKey = "insert your access key here!";
+ String secretKey = "insert your secret key here!";
+
+ AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
+ AmazonS3 conn = new AmazonS3Client(credentials);
+ conn.setEndpoint("objects.dreamhost.com");
+
+
+Listing Owned Buckets
+---------------------
+
+This gets a list of Buckets that you own.
+This also prints out the bucket name and creation date of each bucket.
+
+.. code-block:: java
+
+ List<Bucket> buckets = conn.listBuckets();
+ for (Bucket bucket : buckets) {
+ System.out.println(bucket.getName() + "\t" +
+ StringUtils.fromDate(bucket.getCreationDate()));
+ }
+
+The output will look something like this::
+
+ mahbuckat1 2011-04-21T18:05:39.000Z
+ mahbuckat2 2011-04-21T18:05:48.000Z
+ mahbuckat3 2011-04-21T18:07:18.000Z
+
+
+Creating a Bucket
+-----------------
+
+This creates a new bucket called ``my-new-bucket``
+
+.. code-block:: java
+
+ Bucket bucket = conn.createBucket("my-new-bucket");
+
+
+Listing a Bucket's Content
+--------------------------
+This gets a list of objects in the bucket.
+This also prints out each object's name, the file size, and last
+modified date.
+
+.. code-block:: java
+
+ ObjectListing objects = conn.listObjects(bucket.getName());
+ do {
+ for (S3ObjectSummary objectSummary : objects.getObjectSummaries()) {
+ System.out.println(objectSummary.getKey() + "\t" +
+ objectSummary.getSize() + "\t" +
+ StringUtils.fromDate(objectSummary.getLastModified()));
+ }
+ objects = conn.listNextBatchOfObjects(objects);
+ } while (objects.isTruncated());
+
+The output will look something like this::
+
+ myphoto1.jpg 251262 2011-08-08T21:35:48.000Z
+ myphoto2.jpg 262518 2011-08-08T21:38:01.000Z
+
+
+Deleting a Bucket
+-----------------
+
+.. note::
+ The Bucket must be empty! Otherwise it won't work!
+
+.. code-block:: java
+
+ conn.deleteBucket(bucket.getName());
+
+
+Forced Delete for Non-empty Buckets
+-----------------------------------
+.. attention::
+ not available
+
+
+Creating an Object
+------------------
+
+This creates a file ``hello.txt`` with the string ``"Hello World!"``
+
+.. code-block:: java
+
+ ByteArrayInputStream input = new ByteArrayInputStream("Hello World!".getBytes());
+ conn.putObject(bucket.getName(), "hello.txt", input, new ObjectMetadata());
+
+
+Change an Object's ACL
+----------------------
+
+This makes the object ``hello.txt`` publicly readable, and
+``secret_plans.txt`` private.
+
+.. code-block:: java
+
+ conn.setObjectAcl(bucket.getName(), "hello.txt", CannedAccessControlList.PublicRead);
+ conn.setObjectAcl(bucket.getName(), "secret_plans.txt", CannedAccessControlList.Private);
+
+
+Download an Object (to a file)
+------------------------------
+
+This downloads the object ``perl_poetry.pdf`` and saves it in
+``/home/larry/documents``
+
+.. code-block:: java
+
+ conn.getObject(
+ new GetObjectRequest(bucket.getName(), "perl_poetry.pdf"),
+ new File("/home/larry/documents/perl_poetry.pdf")
+ );
+
+
+Delete an Object
+----------------
+
+This deletes the object ``goodbye.txt``
+
+.. code-block:: java
+
+ conn.deleteObject(bucket.getName(), "goodbye.txt");
+
+
+Generate Object Download URLs (signed and unsigned)
+---------------------------------------------------
+
+This generates an unsigned download URL for ``hello.txt``. This works
+because we made ``hello.txt`` public by setting the ACL above.
+This then generates a signed download URL for ``secret_plans.txt`` that
+will work for 1 hour. Signed download URLs will work for the time
+period even if the object is private (when the time period is up, the
+URL will stop working).
+
+.. note::
+ The java library does not have a method for generating unsigned
+ URLs, so the example below just generates a signed URL.
+
+.. code-block:: java
+
+ GeneratePresignedUrlRequest request = new GeneratePresignedUrlRequest(bucket.getName(), "secret_plans.txt");
+ System.out.println(conn.generatePresignedUrl(request));
+
+The output will look something like this::
+
+ https://my-bucket-name.objects.dreamhost.com/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&Expires=1316027075&AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX
+
diff --git a/doc/radosgw/s3/objectops.rst b/doc/radosgw/s3/objectops.rst
new file mode 100644
index 000000000..2ac52607f
--- /dev/null
+++ b/doc/radosgw/s3/objectops.rst
@@ -0,0 +1,558 @@
+Object Operations
+=================
+
+Put Object
+----------
+Adds an object to a bucket. You must have write permissions on the bucket to perform this operation.
+
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{bucket}/{object} HTTP/1.1
+
+Request Headers
+~~~~~~~~~~~~~~~
+
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| Name | Description | Valid Values | Required |
++======================+============================================+===============================================================================+============+
+| **content-md5**      | A base64 encoded MD5 hash of the message.  | A string. No defaults or constraints.                                         | No         |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| **content-type** | A standard MIME type. | Any MIME type. Default: ``binary/octet-stream`` | No |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| **x-amz-meta-<...>** | User metadata. Stored with the object. | A string up to 8kb. No defaults. | No |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| **x-amz-acl** | A canned ACL. | ``private``, ``public-read``, ``public-read-write``, ``authenticated-read`` | No |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+
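+An illustrative request (bucket, key, metadata, and body are placeholders)::
+
+    PUT /mybucket/hello.txt HTTP/1.1
+    Content-Type: text/plain
+    Content-Length: 12
+    x-amz-meta-author: janedoe
+    x-amz-acl: public-read
+
+    Hello World!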
+
+Copy Object
+-----------
+To copy an object, use ``PUT`` and specify a destination bucket and the object name.
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{dest-bucket}/{dest-object} HTTP/1.1
+ x-amz-copy-source: {source-bucket}/{source-object}
+
+Request Headers
+~~~~~~~~~~~~~~~
+
++--------------------------------------+-------------------------------------------------+------------------------+------------+
+| Name | Description | Valid Values | Required |
++======================================+=================================================+========================+============+
+| **x-amz-copy-source** | The source bucket name + object name. | {bucket}/{obj} | Yes |
++--------------------------------------+-------------------------------------------------+------------------------+------------+
+| **x-amz-acl** | A canned ACL. | ``private``, | No |
+| | | ``public-read``, | |
+| | | ``public-read-write``, | |
+| | | ``authenticated-read`` | |
++--------------------------------------+-------------------------------------------------+------------------------+------------+
+| **x-amz-copy-if-modified-since** | Copies only if modified since the timestamp. | Timestamp | No |
++--------------------------------------+-------------------------------------------------+------------------------+------------+
+| **x-amz-copy-if-unmodified-since** | Copies only if unmodified since the timestamp. | Timestamp | No |
++--------------------------------------+-------------------------------------------------+------------------------+------------+
+| **x-amz-copy-if-match** | Copies only if object ETag matches ETag. | Entity Tag | No |
++--------------------------------------+-------------------------------------------------+------------------------+------------+
+| **x-amz-copy-if-none-match** | Copies only if object ETag doesn't match. | Entity Tag | No |
++--------------------------------------+-------------------------------------------------+------------------------+------------+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++------------------------+-------------+-----------------------------------------------+
+| Name | Type | Description |
++========================+=============+===============================================+
+| **CopyObjectResult** | Container | A container for the response elements. |
++------------------------+-------------+-----------------------------------------------+
+| **LastModified** | Date | The last modified date of the source object. |
++------------------------+-------------+-----------------------------------------------+
+| **Etag** | String | The ETag of the new object. |
++------------------------+-------------+-----------------------------------------------+
+
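+An illustrative response body, following the entities above (the timestamp
+and ETag values are placeholders)::
+
+    <CopyObjectResult>
+        <LastModified>2023-12-01T12:00:00.000Z</LastModified>
+        <Etag>"b10a8db164e0754105b7a99be72e3fe5"</Etag>
+    </CopyObjectResult>
+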
+Remove Object
+-------------
+
+Removes an object. Requires WRITE permission set on the containing bucket.
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{bucket}/{object} HTTP/1.1
+
+
+
+Get Object
+----------
+Retrieves an object from a bucket within RADOS.
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{bucket}/{object} HTTP/1.1
+
+Request Headers
+~~~~~~~~~~~~~~~
+
++---------------------------+------------------------------------------------+--------------------------------+------------+
+| Name | Description | Valid Values | Required |
++===========================+================================================+================================+============+
+| **range** | The range of the object to retrieve. | Range: bytes=beginbyte-endbyte | No |
++---------------------------+------------------------------------------------+--------------------------------+------------+
+| **if-modified-since** | Gets only if modified since the timestamp. | Timestamp | No |
++---------------------------+------------------------------------------------+--------------------------------+------------+
+| **if-unmodified-since** | Gets only if not modified since the timestamp. | Timestamp | No |
++---------------------------+------------------------------------------------+--------------------------------+------------+
+| **if-match** | Gets only if object ETag matches ETag. | Entity Tag | No |
++---------------------------+------------------------------------------------+--------------------------------+------------+
+| **if-none-match**         | Gets only if object ETag doesn't match.        | Entity Tag                     | No         |
++---------------------------+------------------------------------------------+--------------------------------+------------+
+
+Response Headers
+~~~~~~~~~~~~~~~~
+
++-------------------+--------------------------------------------------------------------------------------------+
+| Name | Description |
++===================+============================================================================================+
+| **Content-Range** | The data range; returned only if the range header field was specified in the request      |
++-------------------+--------------------------------------------------------------------------------------------+
+
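+For example, an illustrative ranged read of the first five bytes of an
+object (names and sizes are placeholders)::
+
+    GET /mybucket/hello.txt HTTP/1.1
+    range: bytes=0-4
+
+A successful ranged read returns ``206`` (Partial content) with a
+``Content-Range`` header such as ``bytes 0-4/12``.
+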
+Get Object Info
+---------------
+
+Returns information about an object. This request returns the same
+header information as the Get Object request, but includes only the
+object metadata, not the object data payload.
+
+Syntax
+~~~~~~
+
+::
+
+ HEAD /{bucket}/{object} HTTP/1.1
+
+Request Headers
+~~~~~~~~~~~~~~~
+
++---------------------------+------------------------------------------------+--------------------------------+------------+
+| Name | Description | Valid Values | Required |
++===========================+================================================+================================+============+
+| **range** | The range of the object to retrieve. | Range: bytes=beginbyte-endbyte | No |
++---------------------------+------------------------------------------------+--------------------------------+------------+
+| **if-modified-since** | Gets only if modified since the timestamp. | Timestamp | No |
++---------------------------+------------------------------------------------+--------------------------------+------------+
+| **if-unmodified-since** | Gets only if not modified since the timestamp. | Timestamp | No |
++---------------------------+------------------------------------------------+--------------------------------+------------+
+| **if-match** | Gets only if object ETag matches ETag. | Entity Tag | No |
++---------------------------+------------------------------------------------+--------------------------------+------------+
+| **if-none-match**         | Gets only if object ETag doesn't match.        | Entity Tag                     | No         |
++---------------------------+------------------------------------------------+--------------------------------+------------+
+
+Get Object ACL
+--------------
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{bucket}/{object}?acl HTTP/1.1
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| Name | Type | Description |
++===========================+=============+==============================================================================================+
+| ``AccessControlPolicy`` | Container | A container for the response. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``AccessControlList`` | Container | A container for the ACL information. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Owner`` | Container | A container for the object owner's ``ID`` and ``DisplayName``. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``ID`` | String | The object owner's ID. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``DisplayName`` | String | The object owner's display name. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Grant`` | Container | A container for ``Grantee`` and ``Permission``. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Grantee`` | Container | A container for the ``DisplayName`` and ``ID`` of the user receiving a grant of permission. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Permission`` | String | The permission given to the ``Grantee`` object. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+
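+An illustrative response body, following the entities above (IDs and display
+names are placeholders)::
+
+    <AccessControlPolicy xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
+        <Owner>
+            <ID>user_id</ID>
+            <DisplayName>User Name</DisplayName>
+        </Owner>
+        <AccessControlList>
+            <Grant>
+                <Grantee xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="CanonicalUser">
+                    <ID>user_id</ID>
+                    <DisplayName>User Name</DisplayName>
+                </Grantee>
+                <Permission>FULL_CONTROL</Permission>
+            </Grant>
+        </AccessControlList>
+    </AccessControlPolicy>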
+
+
+Set Object ACL
+--------------
+
+Syntax
+~~~~~~
+
+::
+
+    PUT /{bucket}/{object}?acl HTTP/1.1
+
+Request Entities
+~~~~~~~~~~~~~~~~
+
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| Name | Type | Description |
++===========================+=============+==============================================================================================+
+| ``AccessControlPolicy`` | Container | A container for the response. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``AccessControlList`` | Container | A container for the ACL information. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Owner`` | Container | A container for the object owner's ``ID`` and ``DisplayName``. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``ID`` | String | The object owner's ID. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``DisplayName`` | String | The object owner's display name. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Grant`` | Container | A container for ``Grantee`` and ``Permission``. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Grantee`` | Container | A container for the ``DisplayName`` and ``ID`` of the user receiving a grant of permission. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+| ``Permission`` | String | The permission given to the ``Grantee`` object. |
++---------------------------+-------------+----------------------------------------------------------------------------------------------+
+
+
+
+Initiate Multi-part Upload
+--------------------------
+
+Initiate a multi-part upload process.
+
+Syntax
+~~~~~~
+
+::
+
+    POST /{bucket}/{object}?uploads HTTP/1.1
+
+Request Headers
+~~~~~~~~~~~~~~~
+
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| Name | Description | Valid Values | Required |
++======================+============================================+===============================================================================+============+
+| **content-md5**      | A base64 encoded MD5 hash of the message.  | A string. No defaults or constraints.                                         | No         |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| **content-type** | A standard MIME type. | Any MIME type. Default: ``binary/octet-stream`` | No |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| **x-amz-meta-<...>** | User metadata. Stored with the object. | A string up to 8kb. No defaults. | No |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| **x-amz-acl** | A canned ACL. | ``private``, ``public-read``, ``public-read-write``, ``authenticated-read`` | No |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| Name | Type | Description |
++=========================================+=============+==========================================================================================================+
+| ``InitiatedMultipartUploadsResult`` | Container | A container for the results. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Bucket`` | String | The bucket that will receive the object contents. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Key`` | String | The key specified by the ``key`` request parameter (if any). |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``UploadId`` | String | The ID specified by the ``upload-id`` request parameter identifying the multipart upload (if any). |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+
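+An illustrative response body, following the entities above (the upload ID
+value is a placeholder)::
+
+    <InitiatedMultipartUploadsResult>
+        <Bucket>mybucket</Bucket>
+        <Key>myobject</Key>
+        <UploadId>EXAMPLE-UPLOAD-ID</UploadId>
+    </InitiatedMultipartUploadsResult>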
+
+Multipart Upload Part
+---------------------
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{bucket}/{object}?partNumber=&uploadId= HTTP/1.1
+
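+For example, an illustrative upload of the first part (the upload ID is a
+placeholder taken from the initiate response)::
+
+    PUT /mybucket/myobject?partNumber=1&uploadId=EXAMPLE-UPLOAD-ID HTTP/1.1
+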
+HTTP Response
+~~~~~~~~~~~~~
+
+The following HTTP response may be returned:
+
++---------------+----------------+--------------------------------------------------------------------------+
+| HTTP Status | Status Code | Description |
++===============+================+==========================================================================+
+| **404** | NoSuchUpload | Specified upload-id does not match any initiated upload on this object |
++---------------+----------------+--------------------------------------------------------------------------+
+
+List Multipart Upload Parts
+---------------------------
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{bucket}/{object}?uploadId=123 HTTP/1.1
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| Name | Type | Description |
++=========================================+=============+==========================================================================================================+
+| ``ListPartsResult`` | Container | A container for the results. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Bucket`` | String | The bucket that will receive the object contents. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Key`` | String | The key specified by the ``key`` request parameter (if any). |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``UploadId`` | String | The ID specified by the ``upload-id`` request parameter identifying the multipart upload (if any). |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Initiator`` | Container | Contains the ``ID`` and ``DisplayName`` of the user who initiated the upload. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``ID`` | String | The initiator's ID. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``DisplayName`` | String | The initiator's display name. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Owner`` | Container | A container for the ``ID`` and ``DisplayName`` of the user who owns the uploaded object. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``StorageClass`` | String | The method used to store the resulting object. ``STANDARD`` or ``REDUCED_REDUNDANCY`` |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``PartNumberMarker`` | String | The part marker to use in a subsequent request if ``IsTruncated`` is ``true``. Precedes the list. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``NextPartNumberMarker`` | String | The next part marker to use in a subsequent request if ``IsTruncated`` is ``true``. The end of the list. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``MaxParts`` | Integer | The max parts allowed in the response as specified by the ``max-parts`` request parameter. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``IsTruncated`` | Boolean | If ``true``, only a subset of the object's upload contents were returned. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Part`` | Container | A container for ``LastModified``, ``PartNumber``, ``ETag`` and ``Size`` elements. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``LastModified`` | Date | Date and time at which the part was uploaded. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``PartNumber`` | Integer | The identification number of the part. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``ETag`` | String | The part's entity tag. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+| ``Size`` | Integer | The size of the uploaded part. |
++-----------------------------------------+-------------+----------------------------------------------------------------------------------------------------------+
+
+
+
+Complete Multipart Upload
+-------------------------
+Assembles uploaded parts and creates a new object, thereby completing a multipart upload.
+
+Syntax
+~~~~~~
+
+::
+
+ POST /{bucket}/{object}?uploadId= HTTP/1.1
+
+Request Entities
+~~~~~~~~~~~~~~~~
+
++----------------------------------+-------------+-----------------------------------------------------+----------+
+| Name | Type | Description | Required |
++==================================+=============+=====================================================+==========+
+| ``CompleteMultipartUpload`` | Container | A container consisting of one or more parts. | Yes |
++----------------------------------+-------------+-----------------------------------------------------+----------+
+| ``Part`` | Container | A container for the ``PartNumber`` and ``ETag``. | Yes |
++----------------------------------+-------------+-----------------------------------------------------+----------+
+| ``PartNumber`` | Integer | The identifier of the part. | Yes |
++----------------------------------+-------------+-----------------------------------------------------+----------+
+| ``ETag`` | String | The part's entity tag. | Yes |
++----------------------------------+-------------+-----------------------------------------------------+----------+
+
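+An illustrative request body listing two uploaded parts (the ETag values are
+placeholders returned by the corresponding part uploads)::
+
+    <CompleteMultipartUpload>
+        <Part>
+            <PartNumber>1</PartNumber>
+            <ETag>"etag-of-part-1"</ETag>
+        </Part>
+        <Part>
+            <PartNumber>2</PartNumber>
+            <ETag>"etag-of-part-2"</ETag>
+        </Part>
+    </CompleteMultipartUpload>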
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++-------------------------------------+-------------+-------------------------------------------------------+
+| Name | Type | Description |
++=====================================+=============+=======================================================+
+| **CompleteMultipartUploadResult** | Container | A container for the response. |
++-------------------------------------+-------------+-------------------------------------------------------+
+| **Location** | URI | The resource identifier (path) of the new object. |
++-------------------------------------+-------------+-------------------------------------------------------+
+| **Bucket** | String | The name of the bucket that contains the new object. |
++-------------------------------------+-------------+-------------------------------------------------------+
+| **Key** | String | The object's key. |
++-------------------------------------+-------------+-------------------------------------------------------+
+| **ETag** | String | The entity tag of the new object. |
++-------------------------------------+-------------+-------------------------------------------------------+
+
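+A minimal boto3 sketch of the full multipart flow against RGW; the endpoint,
+credentials, bucket, and object names here are assumptions:
+
+.. code-block:: python
+
+    import boto3
+
+    # hypothetical RGW endpoint and credentials
+    s3 = boto3.client('s3',
+                      endpoint_url='http://localhost:8000',
+                      aws_access_key_id='access',
+                      aws_secret_access_key='secret')
+
+    # initiate the upload and send a single 5 MiB part
+    mpu = s3.create_multipart_upload(Bucket='my-bucket', Key='big-object')
+    part = s3.upload_part(Bucket='my-bucket', Key='big-object',
+                          PartNumber=1, UploadId=mpu['UploadId'],
+                          Body=b'x' * (5 * 1024 * 1024))
+
+    # complete the upload by listing each part's number and ETag
+    s3.complete_multipart_upload(
+        Bucket='my-bucket', Key='big-object', UploadId=mpu['UploadId'],
+        MultipartUpload={'Parts': [{'PartNumber': 1, 'ETag': part['ETag']}]})
+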
+Abort Multipart Upload
+----------------------
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{bucket}/{object}?uploadId= HTTP/1.1
+
+
+
+Append Object
+-------------
+Appends data to an object. You must have write permissions on the bucket to perform this operation.
+It is used to upload files in appending mode. Objects created by the Append Object operation are of
+type Appendable Object, while objects uploaded with the Put Object operation are of type Normal Object.
+**Append Object can't be used if bucket versioning is enabled or suspended.**
+**In multisite, a synced object becomes a normal object, but you can still append to the original object.**
+**Compression and encryption features are disabled for appendable objects.**
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{bucket}/{object}?append&position= HTTP/1.1
+
+Request Headers
+~~~~~~~~~~~~~~~
+
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| Name | Description | Valid Values | Required |
++======================+============================================+===============================================================================+============+
+| **content-md5**      | A base64-encoded MD5 hash of the message.  | A string. No defaults or constraints.                                         | No         |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| **content-type** | A standard MIME type. | Any MIME type. Default: ``binary/octet-stream`` | No |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| **x-amz-meta-<...>** | User metadata. Stored with the object. | A string up to 8kb. No defaults. | No |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+| **x-amz-acl** | A canned ACL. | ``private``, ``public-read``, ``public-read-write``, ``authenticated-read`` | No |
++----------------------+--------------------------------------------+-------------------------------------------------------------------------------+------------+
+
+Response Headers
+~~~~~~~~~~~~~~~~
+
++--------------------------------+------------------------------------------------------------------+
+| Name | Description |
++================================+==================================================================+
+| **x-rgw-next-append-position** | Next position at which to append to the object                   |
++--------------------------------+------------------------------------------------------------------+
+
+HTTP Response
+~~~~~~~~~~~~~
+
+The following HTTP responses may be returned:
+
++---------------+----------------------------+---------------------------------------------------+
+| HTTP Status | Status Code | Description |
++===============+============================+===================================================+
+| **409** | PositionNotEqualToLength | Specified position does not match object length |
++---------------+----------------------------+---------------------------------------------------+
+| **409**       | ObjectNotAppendable        | Specified object cannot be appended               |
++---------------+----------------------------+---------------------------------------------------+
+| **409**       | InvalidBucketState         | Bucket versioning is enabled or suspended         |
++---------------+----------------------------+---------------------------------------------------+
+
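+The following sketch illustrates the append flow (the bucket, object, and
+returned position values are illustrative): the first request appends at
+position 0, and each response's ``x-rgw-next-append-position`` header gives
+the position to use for the next append.
+
+::
+
+    PUT /mybucket/myobject?append&position=0 HTTP/1.1
+
+    HTTP/1.1 200 OK
+    x-rgw-next-append-position: 1024
+
+    PUT /mybucket/myobject?append&position=1024 HTTP/1.1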
+
+Put Object Retention
+--------------------
+Places an Object Retention configuration on an object.
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{bucket}/{object}?retention&versionId= HTTP/1.1
+
+Request Entities
+~~~~~~~~~~~~~~~~
+
++---------------------+-------------+-------------------------------------------------------------------------------+------------+
+| Name                | Type        | Description                                                                   | Required   |
++=====================+=============+===============================================================================+============+
+| ``Retention``       | Container   | A container for the request.                                                  | Yes        |
++---------------------+-------------+-------------------------------------------------------------------------------+------------+
+| ``Mode``            | String      | Retention mode for the specified object. Valid Values: GOVERNANCE/COMPLIANCE  | Yes        |
++---------------------+-------------+-------------------------------------------------------------------------------+------------+
+| ``RetainUntilDate`` | Timestamp   | Retention date. Format: 2020-01-05T00:00:00.000Z                              | Yes        |
++---------------------+-------------+-------------------------------------------------------------------------------+------------+
+
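+A minimal sketch of setting retention with the boto3 client; the endpoint,
+credentials, and names are assumptions, and the bucket is assumed to have been
+created with object lock enabled:
+
+.. code-block:: python
+
+    from datetime import datetime, timezone
+
+    import boto3
+
+    s3 = boto3.client('s3',
+                      endpoint_url='http://localhost:8000',
+                      aws_access_key_id='access',
+                      aws_secret_access_key='secret')
+
+    # place a GOVERNANCE-mode retention on the object until the given date
+    s3.put_object_retention(
+        Bucket='my-bucket', Key='my-object',
+        Retention={'Mode': 'GOVERNANCE',
+                   'RetainUntilDate': datetime(2030, 1, 5, tzinfo=timezone.utc)})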
+
+Get Object Retention
+--------------------
+Gets an Object Retention configuration on an object.
+
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{bucket}/{object}?retention&versionId= HTTP/1.1
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++---------------------+-------------+-------------------------------------------------------------------------------+------------+
+| Name                | Type        | Description                                                                   | Required   |
++=====================+=============+===============================================================================+============+
+| ``Retention``       | Container   | A container for the response.                                                 | Yes        |
++---------------------+-------------+-------------------------------------------------------------------------------+------------+
+| ``Mode``            | String      | Retention mode for the specified object. Valid Values: GOVERNANCE/COMPLIANCE  | Yes        |
++---------------------+-------------+-------------------------------------------------------------------------------+------------+
+| ``RetainUntilDate`` | Timestamp   | Retention date. Format: 2020-01-05T00:00:00.000Z                              | Yes        |
++---------------------+-------------+-------------------------------------------------------------------------------+------------+
+
+
+Put Object Legal Hold
+---------------------
+Applies a Legal Hold configuration to the specified object.
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{bucket}/{object}?legal-hold&versionId= HTTP/1.1
+
+Request Entities
+~~~~~~~~~~~~~~~~
+
++----------------+-------------+----------------------------------------------------------------------------------------+------------+
+| Name | Type | Description | Required |
++================+=============+========================================================================================+============+
+| ``LegalHold`` | Container | A container for the request. | Yes |
++----------------+-------------+----------------------------------------------------------------------------------------+------------+
+| ``Status`` | String | Indicates whether the specified object has a Legal Hold in place. Valid Values: ON/OFF | Yes |
++----------------+-------------+----------------------------------------------------------------------------------------+------------+
+
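+A minimal boto3 sketch (the endpoint, credentials, and names are assumptions;
+the bucket is assumed to have object lock enabled):
+
+.. code-block:: python
+
+    import boto3
+
+    s3 = boto3.client('s3',
+                      endpoint_url='http://localhost:8000',
+                      aws_access_key_id='access',
+                      aws_secret_access_key='secret')
+
+    # place a legal hold on the object; 'OFF' removes it again
+    s3.put_object_legal_hold(Bucket='my-bucket', Key='my-object',
+                             LegalHold={'Status': 'ON'})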
+
+Get Object Legal Hold
+---------------------
+Gets an object's current Legal Hold status.
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{bucket}/{object}?legal-hold&versionId= HTTP/1.1
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++----------------+-------------+----------------------------------------------------------------------------------------+------------+
+| Name | Type | Description | Required |
++================+=============+========================================================================================+============+
+| ``LegalHold``  | Container   | A container for the response.                                                          | Yes        |
++----------------+-------------+----------------------------------------------------------------------------------------+------------+
+| ``Status`` | String | Indicates whether the specified object has a Legal Hold in place. Valid Values: ON/OFF | Yes |
++----------------+-------------+----------------------------------------------------------------------------------------+------------+
+
diff --git a/doc/radosgw/s3/perl.rst b/doc/radosgw/s3/perl.rst
new file mode 100644
index 000000000..f12e5c698
--- /dev/null
+++ b/doc/radosgw/s3/perl.rst
@@ -0,0 +1,192 @@
+.. _perl:
+
+Perl S3 Examples
+================
+
+Creating a Connection
+---------------------
+
+This creates a connection so that you can interact with the server.
+
+.. code-block:: perl
+
+ use Amazon::S3;
+ my $access_key = 'put your access key here!';
+ my $secret_key = 'put your secret key here!';
+
+ my $conn = Amazon::S3->new({
+ aws_access_key_id => $access_key,
+ aws_secret_access_key => $secret_key,
+ host => 'objects.dreamhost.com',
+ secure => 1,
+ retry => 1,
+ });
+
+
+Listing Owned Buckets
+---------------------
+
+This gets a list of `Amazon::S3::Bucket`_ objects that you own.
+We'll also print out the bucket name and creation date of each bucket.
+
+.. code-block:: perl
+
+ my @buckets = @{$conn->buckets->{buckets} || []};
+ foreach my $bucket (@buckets) {
+ print $bucket->bucket . "\t" . $bucket->creation_date . "\n";
+ }
+
+The output will look something like this::
+
+ mahbuckat1 2011-04-21T18:05:39.000Z
+ mahbuckat2 2011-04-21T18:05:48.000Z
+ mahbuckat3 2011-04-21T18:07:18.000Z
+
+
+Creating a Bucket
+-----------------
+
+This creates a new bucket called ``my-new-bucket``
+
+.. code-block:: perl
+
+ my $bucket = $conn->add_bucket({ bucket => 'my-new-bucket' });
+
+
+Listing a Bucket's Content
+--------------------------
+
+This gets a list of hashes with info about each object in the bucket.
+We'll also print out each object's name, the file size, and last
+modified date.
+
+.. code-block:: perl
+
+ my @keys = @{$bucket->list_all->{keys} || []};
+ foreach my $key (@keys) {
+ print "$key->{key}\t$key->{size}\t$key->{last_modified}\n";
+ }
+
+The output will look something like this::
+
+ myphoto1.jpg 251262 2011-08-08T21:35:48.000Z
+ myphoto2.jpg 262518 2011-08-08T21:38:01.000Z
+
+
+Deleting a Bucket
+-----------------
+
+.. note::
+ The Bucket must be empty! Otherwise it won't work!
+
+.. code-block:: perl
+
+ $conn->delete_bucket($bucket);
+
+
+Forced Delete for Non-empty Buckets
+-----------------------------------
+
+.. attention::
+
+   Not available in the `Amazon::S3`_ Perl module.
+
+
+Creating an Object
+------------------
+
+This creates a file ``hello.txt`` with the string ``"Hello World!"``
+
+.. code-block:: perl
+
+ $bucket->add_key(
+ 'hello.txt', 'Hello World!',
+ { content_type => 'text/plain' },
+ );
+
+
+Change an Object's ACL
+----------------------
+
+This makes the object ``hello.txt`` publicly readable and
+``secret_plans.txt`` private.
+
+.. code-block:: perl
+
+ $bucket->set_acl({
+ key => 'hello.txt',
+ acl_short => 'public-read',
+ });
+ $bucket->set_acl({
+ key => 'secret_plans.txt',
+ acl_short => 'private',
+ });
+
+
+Download an Object (to a file)
+------------------------------
+
+This downloads the object ``perl_poetry.pdf`` and saves it in
+``/home/larry/documents/``
+
+.. code-block:: perl
+
+ $bucket->get_key_filename('perl_poetry.pdf', undef,
+ '/home/larry/documents/perl_poetry.pdf');
+
+
+Delete an Object
+----------------
+
+This deletes the object ``goodbye.txt``
+
+.. code-block:: perl
+
+ $bucket->delete_key('goodbye.txt');
+
+Generate Object Download URLs (signed and unsigned)
+---------------------------------------------------
+This generates an unsigned download URL for ``hello.txt``. This works
+because we made ``hello.txt`` public by setting the ACL above.
+Then this generates a signed download URL for ``secret_plans.txt`` that
+will work for 1 hour. Signed download URLs will work for the time
+period even if the object is private (when the time period is up, the
+URL will stop working).
+
+.. note::
+   The `Amazon::S3`_ module does not have a way to generate download
+   URLs, so we use another module instead. Unfortunately, most modules
+   for generating these URLs assume that you are using Amazon, so we
+   use a more obscure module, `Muck::FS::S3`_. This should be the same
+   as Amazon's sample S3 Perl module, but Amazon's sample module is not
+   in CPAN. So, you can either use CPAN to install `Muck::FS::S3`_, or
+   install Amazon's sample S3 module manually. If you go the manual
+   route, you can remove ``Muck::FS::`` from the example below.
+
+.. code-block:: perl
+
+ use Muck::FS::S3::QueryStringAuthGenerator;
+ my $generator = Muck::FS::S3::QueryStringAuthGenerator->new(
+ $access_key,
+ $secret_key,
+ 0, # 0 means use 'http'. set this to 1 for 'https'
+ 'objects.dreamhost.com',
+ );
+
+ my $hello_url = $generator->make_bare_url($bucket->bucket, 'hello.txt');
+ print $hello_url . "\n";
+
+ $generator->expires_in(3600); # 1 hour = 3600 seconds
+ my $plans_url = $generator->get($bucket->bucket, 'secret_plans.txt');
+ print $plans_url . "\n";
+
+The output will look something like this::
+
+ http://objects.dreamhost.com:80/my-bucket-name/hello.txt
+ http://objects.dreamhost.com:80/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&Expires=1316027075&AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX
+
+
+.. _`Amazon::S3`: http://search.cpan.org/~tima/Amazon-S3-0.441/lib/Amazon/S3.pm
+.. _`Amazon::S3::Bucket`: http://search.cpan.org/~tima/Amazon-S3-0.441/lib/Amazon/S3/Bucket.pm
+.. _`Muck::FS::S3`: http://search.cpan.org/~mike/Muck-0.02/
+
diff --git a/doc/radosgw/s3/php.rst b/doc/radosgw/s3/php.rst
new file mode 100644
index 000000000..4878a3489
--- /dev/null
+++ b/doc/radosgw/s3/php.rst
@@ -0,0 +1,214 @@
+.. _php:
+
+PHP S3 Examples
+===============
+
+Installing AWS PHP SDK
+----------------------
+
+This installs the AWS PHP SDK using composer (see here_ for how to install composer).
+
+.. _here: https://getcomposer.org/download/
+
+.. code-block:: bash
+
+   $ composer require aws/aws-sdk-php
+
+Creating a Connection
+---------------------
+
+This creates a connection so that you can interact with the server.
+
+.. note::
+
+   The client initialization requires a region, so we use the empty string ``''``.
+
+.. code-block:: php
+
+ <?php
+
+ use Aws\S3\S3Client;
+
+ define('AWS_KEY', 'place access key here');
+ define('AWS_SECRET_KEY', 'place secret key here');
+ $ENDPOINT = 'http://objects.dreamhost.com';
+
+ // require the amazon sdk from your composer vendor dir
+ require __DIR__.'/vendor/autoload.php';
+
+ // Instantiate the S3 class and point it at the desired host
+ $client = new S3Client([
+ 'region' => '',
+ 'version' => '2006-03-01',
+ 'endpoint' => $ENDPOINT,
+ 'credentials' => [
+ 'key' => AWS_KEY,
+ 'secret' => AWS_SECRET_KEY
+ ],
+ // Set the S3 class to use objects.dreamhost.com/bucket
+ // instead of bucket.objects.dreamhost.com
+ 'use_path_style_endpoint' => true
+ ]);
+
+Listing Owned Buckets
+---------------------
+This returns an ``Aws\Result`` instance, which can conveniently be accessed like an array.
+This also prints out the bucket name and creation date of each bucket.
+
+.. code-block:: php
+
+ <?php
+ $listResponse = $client->listBuckets();
+ $buckets = $listResponse['Buckets'];
+ foreach ($buckets as $bucket) {
+ echo $bucket['Name'] . "\t" . $bucket['CreationDate'] . "\n";
+ }
+
+The output will look something like this::
+
+ mahbuckat1 2011-04-21T18:05:39.000Z
+ mahbuckat2 2011-04-21T18:05:48.000Z
+ mahbuckat3 2011-04-21T18:07:18.000Z
+
+
+Creating a Bucket
+-----------------
+
+This creates a new bucket called ``my-new-bucket`` and returns an
+``Aws\Result`` object.
+
+.. code-block:: php
+
+ <?php
+ $client->createBucket(['Bucket' => 'my-new-bucket']);
+
+
+List a Bucket's Content
+-----------------------
+
+This returns an ``Aws\Result`` instance, which can conveniently be accessed like an array.
+This then prints out each object's name, the file size, and last modified date.
+
+.. code-block:: php
+
+ <?php
+ $objectsListResponse = $client->listObjects(['Bucket' => $bucketname]);
+ $objects = $objectsListResponse['Contents'] ?? [];
+ foreach ($objects as $object) {
+ echo $object['Key'] . "\t" . $object['Size'] . "\t" . $object['LastModified'] . "\n";
+ }
+
+.. note::
+
+   If there are more than 1000 objects in this bucket,
+   you need to check ``$objectsListResponse['IsTruncated']``
+   and run the request again with ``Marker`` set to the last key listed.
+   Keep doing this until ``IsTruncated`` is no longer true.
+
+The output will look something like this if the bucket has some files::
+
+ myphoto1.jpg 251262 2011-08-08T21:35:48.000Z
+ myphoto2.jpg 262518 2011-08-08T21:38:01.000Z
+
+
+Deleting a Bucket
+-----------------
+
+This deletes the bucket called ``my-old-bucket`` and returns an
+``Aws\Result`` object
+
+.. note::
+
+ The Bucket must be empty! Otherwise it won't work!
+
+.. code-block:: php
+
+ <?php
+ $client->deleteBucket(['Bucket' => 'my-old-bucket']);
+
+
+Creating an Object
+------------------
+
+This creates an object ``hello.txt`` with the string ``"Hello World!"``
+
+.. code-block:: php
+
+ <?php
+ $client->putObject([
+ 'Bucket' => 'my-bucket-name',
+ 'Key' => 'hello.txt',
+ 'Body' => "Hello World!"
+ ]);
+
+
+Change an Object's ACL
+----------------------
+
+This makes the object ``hello.txt`` publicly readable and
+``secret_plans.txt`` private.
+
+.. code-block:: php
+
+ <?php
+ $client->putObjectAcl([
+ 'Bucket' => 'my-bucket-name',
+ 'Key' => 'hello.txt',
+ 'ACL' => 'public-read'
+ ]);
+ $client->putObjectAcl([
+ 'Bucket' => 'my-bucket-name',
+ 'Key' => 'secret_plans.txt',
+ 'ACL' => 'private'
+ ]);
+
+
+Delete an Object
+----------------
+
+This deletes the object ``goodbye.txt``
+
+.. code-block:: php
+
+ <?php
+ $client->deleteObject(['Bucket' => 'my-bucket-name', 'Key' => 'goodbye.txt']);
+
+
+Download an Object (to a file)
+------------------------------
+
+This downloads the object ``poetry.pdf`` and saves it in
+``/home/larry/documents/``
+
+.. code-block:: php
+
+ <?php
+ $object = $client->getObject(['Bucket' => 'my-bucket-name', 'Key' => 'poetry.pdf']);
+ file_put_contents('/home/larry/documents/poetry.pdf', $object['Body']->getContents());
+
+Generate Object Download URLs (signed and unsigned)
+---------------------------------------------------
+
+This generates an unsigned download URL for ``hello.txt``.
+This works because we made ``hello.txt`` public by setting
+the ACL above. This then generates a signed download URL
+for ``secret_plans.txt`` that will work for 1 hour.
+Signed download URLs will work for the time period even
+if the object is private (when the time period is up,
+the URL will stop working).
+
+.. code-block:: php
+
+ <?php
+ $hello_url = $client->getObjectUrl('my-bucket-name', 'hello.txt');
+ echo $hello_url."\n";
+
+ $secret_plans_cmd = $client->getCommand('GetObject', ['Bucket' => 'my-bucket-name', 'Key' => 'secret_plans.txt']);
+ $request = $client->createPresignedRequest($secret_plans_cmd, '+1 hour');
+ echo $request->getUri()."\n";
+
+The output of this will look something like::
+
+ http://objects.dreamhost.com/my-bucket-name/hello.txt
+ http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=sandboxAccessKey%2F20190116%2F%2Fs3%2Faws4_request&X-Amz-Date=20190116T125520Z&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Signature=61921f07c73d7695e47a2192cf55ae030f34c44c512b2160bb5a936b2b48d923
+
diff --git a/doc/radosgw/s3/python.rst b/doc/radosgw/s3/python.rst
new file mode 100644
index 000000000..4b9faef11
--- /dev/null
+++ b/doc/radosgw/s3/python.rst
@@ -0,0 +1,186 @@
+.. _python:
+
+Python S3 Examples
+==================
+
+Creating a Connection
+---------------------
+
+This creates a connection so that you can interact with the server.
+
+.. code-block:: python
+
+ import boto
+ import boto.s3.connection
+ access_key = 'put your access key here!'
+ secret_key = 'put your secret key here!'
+
+ conn = boto.connect_s3(
+ aws_access_key_id = access_key,
+ aws_secret_access_key = secret_key,
+ host = 'objects.dreamhost.com',
+ #is_secure=False, # uncomment if you are not using ssl
+ calling_format = boto.s3.connection.OrdinaryCallingFormat(),
+ )
+
+
+Listing Owned Buckets
+---------------------
+
+This gets a list of Buckets that you own.
+This also prints out the bucket name and creation date of each bucket.
+
+.. code-block:: python
+
+    for bucket in conn.get_all_buckets():
+        print("{name}\t{created}".format(
+            name=bucket.name,
+            created=bucket.creation_date,
+        ))
+
+The output will look something like this::
+
+ mahbuckat1 2011-04-21T18:05:39.000Z
+ mahbuckat2 2011-04-21T18:05:48.000Z
+ mahbuckat3 2011-04-21T18:07:18.000Z
+
+
+Creating a Bucket
+-----------------
+
+This creates a new bucket called ``my-new-bucket``
+
+.. code-block:: python
+
+ bucket = conn.create_bucket('my-new-bucket')
+
+
+Listing a Bucket's Content
+--------------------------
+
+This gets a list of objects in the bucket.
+This also prints out each object's name, the file size, and last
+modified date.
+
+.. code-block:: python
+
+    for key in bucket.list():
+        print("{name}\t{size}\t{modified}".format(
+            name=key.name,
+            size=key.size,
+            modified=key.last_modified,
+        ))
+
+The output will look something like this::
+
+ myphoto1.jpg 251262 2011-08-08T21:35:48.000Z
+ myphoto2.jpg 262518 2011-08-08T21:38:01.000Z
+
+
+Deleting a Bucket
+-----------------
+
+.. note::
+
+ The Bucket must be empty! Otherwise it won't work!
+
+.. code-block:: python
+
+ conn.delete_bucket(bucket.name)
+
+
+Forced Delete for Non-empty Buckets
+-----------------------------------
+
+.. attention::
+
+ not available in python
+
+
+Creating an Object
+------------------
+
+This creates a file ``hello.txt`` with the string ``"Hello World!"``
+
+.. code-block:: python
+
+ key = bucket.new_key('hello.txt')
+ key.set_contents_from_string('Hello World!')
+
+
+Change an Object's ACL
+----------------------
+
+This makes the object ``hello.txt`` publicly readable, and
+``secret_plans.txt`` private.
+
+.. code-block:: python
+
+ hello_key = bucket.get_key('hello.txt')
+ hello_key.set_canned_acl('public-read')
+ plans_key = bucket.get_key('secret_plans.txt')
+ plans_key.set_canned_acl('private')
+
+
+Download an Object (to a file)
+------------------------------
+
+This downloads the object ``perl_poetry.pdf`` and saves it in
+``/home/larry/documents/``
+
+.. code-block:: python
+
+ key = bucket.get_key('perl_poetry.pdf')
+ key.get_contents_to_filename('/home/larry/documents/perl_poetry.pdf')
+
+
+Delete an Object
+----------------
+
+This deletes the object ``goodbye.txt``
+
+.. code-block:: python
+
+ bucket.delete_key('goodbye.txt')
+
+
+Generate Object Download URLs (signed and unsigned)
+---------------------------------------------------
+
+This generates an unsigned download URL for ``hello.txt``. This works
+because we made ``hello.txt`` public by setting the ACL above.
+This then generates a signed download URL for ``secret_plans.txt`` that
+will work for 1 hour. Signed download URLs will work for the time
+period even if the object is private (when the time period is up, the
+URL will stop working).
+
+.. code-block:: python
+
+    hello_key = bucket.get_key('hello.txt')
+    hello_url = hello_key.generate_url(0, query_auth=False, force_http=True)
+    print(hello_url)
+
+    plans_key = bucket.get_key('secret_plans.txt')
+    plans_url = plans_key.generate_url(3600, query_auth=True, force_http=True)
+    print(plans_url)
+
+The output of this will look something like::
+
+ http://objects.dreamhost.com/my-bucket-name/hello.txt
+ http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&Expires=1316027075&AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX
+
+Using S3 API Extensions
+-----------------------
+
+To use the boto3 client to test the RadosGW extensions to the S3 API, the `extensions file`_ should be placed under the ``~/.aws/models/s3/2006-03-01/`` directory.
+For example, an unordered list of objects can be fetched using:
+
+.. code-block:: python
+
+    print(conn.list_objects(Bucket='my-new-bucket', AllowUnordered=True))
+
+
+Without the extensions file, in the above example, boto3 would complain that the ``AllowUnordered`` argument is invalid.
+
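+A minimal sketch of how the ``conn`` client in the example above might be
+created; the endpoint and credentials are assumptions:
+
+.. code-block:: python
+
+    import boto3
+
+    # boto3 client pointed at an RGW endpoint
+    conn = boto3.client('s3',
+                        endpoint_url='http://localhost:8000',
+                        aws_access_key_id='access',
+                        aws_secret_access_key='secret')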
+
+.. _extensions file: https://github.com/ceph/ceph/blob/master/examples/boto3/service-2.sdk-extras.json
diff --git a/doc/radosgw/s3/ruby.rst b/doc/radosgw/s3/ruby.rst
new file mode 100644
index 000000000..435b3c630
--- /dev/null
+++ b/doc/radosgw/s3/ruby.rst
@@ -0,0 +1,364 @@
+.. _ruby:
+
+Ruby `AWS::SDK`_ Examples (aws-sdk gem ~>2)
+===========================================
+
+Settings
+---------------------
+
+You can set up the connection globally:
+
+.. code-block:: ruby
+
+ Aws.config.update(
+     endpoint: 'https://objects.dreamhost.com',
+ access_key_id: 'my-access-key',
+ secret_access_key: 'my-secret-key',
+ force_path_style: true,
+ region: 'us-east-1'
+ )
+
+
+and instantiate a client object:
+
+.. code-block:: ruby
+
+ s3_client = Aws::S3::Client.new
+
+Listing Owned Buckets
+---------------------
+
+This gets a list of buckets that you own.
+This also prints out the bucket name and creation date of each bucket.
+
+.. code-block:: ruby
+
+ s3_client.list_buckets.buckets.each do |bucket|
+ puts "#{bucket.name}\t#{bucket.creation_date}"
+ end
+
+The output will look something like this::
+
+ mahbuckat1 2011-04-21T18:05:39.000Z
+ mahbuckat2 2011-04-21T18:05:48.000Z
+ mahbuckat3 2011-04-21T18:07:18.000Z
+
+
+Creating a Bucket
+-----------------
+
+This creates a new bucket called ``my-new-bucket``
+
+.. code-block:: ruby
+
+ s3_client.create_bucket(bucket: 'my-new-bucket')
+
+If you want a private bucket, the ``acl`` option accepts ``private``,
+``public-read``, ``public-read-write``, and ``authenticated-read``:
+
+.. code-block:: ruby
+
+ s3_client.create_bucket(bucket: 'my-new-bucket', acl: 'private')
+
+
+Listing a Bucket's Content
+--------------------------
+
+This gets a list of hashes with the contents of each object.
+This also prints out each object's name, the file size, and last
+modified date.
+
+.. code-block:: ruby
+
+   s3_client.list_objects(bucket: 'my-new-bucket').contents.each do |object|
+     puts "#{object.key}\t#{object.size}\t#{object.last_modified}"
+ end
+
+The output will look something like this if the bucket has some files::
+
+ myphoto1.jpg 251262 2011-08-08T21:35:48.000Z
+ myphoto2.jpg 262518 2011-08-08T21:38:01.000Z
+
+
+Deleting a Bucket
+-----------------
+.. note::
+ The Bucket must be empty! Otherwise it won't work!
+
+.. code-block:: ruby
+
+ s3_client.delete_bucket(bucket: 'my-new-bucket')
+
+
+Forced Delete for Non-empty Buckets
+-----------------------------------
+First, you need to clear the bucket:
+
+.. code-block:: ruby
+
+ Aws::S3::Bucket.new('my-new-bucket', client: s3_client).clear!
+
+Afterwards, you can delete the bucket:
+
+.. code-block:: ruby
+
+ s3_client.delete_bucket(bucket: 'my-new-bucket')
+
+
+Creating an Object
+------------------
+
+This creates a file ``hello.txt`` with the string ``"Hello World!"``
+
+.. code-block:: ruby
+
+ s3_client.put_object(
+ key: 'hello.txt',
+ body: 'Hello World!',
+ bucket: 'my-new-bucket',
+ content_type: 'text/plain'
+ )
+
+
+Change an Object's ACL
+----------------------
+
+This makes the object ``hello.txt`` publicly readable, and ``secret_plans.txt``
+private.
+
+.. code-block:: ruby
+
+ s3_client.put_object_acl(bucket: 'my-new-bucket', key: 'hello.txt', acl: 'public-read')
+
+   s3_client.put_object_acl(bucket: 'my-new-bucket', key: 'secret_plans.txt', acl: 'private')
+
+
+Download an Object (to a file)
+------------------------------
+
+This downloads the object ``poetry.pdf`` and saves it in
+``/home/larry/documents/``
+
+.. code-block:: ruby
+
+ s3_client.get_object(bucket: 'my-new-bucket', key: 'poetry.pdf', response_target: '/home/larry/documents/poetry.pdf')
+
+
+Delete an Object
+----------------
+
+This deletes the object ``goodbye.txt``
+
+.. code-block:: ruby
+
+ s3_client.delete_object(key: 'goodbye.txt', bucket: 'my-new-bucket')
+
+
+Generate Object Download URLs (signed and unsigned)
+---------------------------------------------------
+
+This generates an unsigned download URL for ``hello.txt``. This works
+because we made ``hello.txt`` public by setting the ACL above.
+This then generates a signed download URL for ``secret_plans.txt`` that
+will work for 1 hour. Signed download URLs will work for the time
+period even if the object is private (when the time period is up, the
+URL will stop working).
+
+.. code-block:: ruby
+
+ puts Aws::S3::Object.new(
+ key: 'hello.txt',
+ bucket_name: 'my-new-bucket',
+ client: s3_client
+ ).public_url
+
+ puts Aws::S3::Object.new(
+ key: 'secret_plans.txt',
+     bucket_name: 'my-new-bucket',
+ client: s3_client
+ ).presigned_url(:get, expires_in: 60 * 60)
+
+The output of this will look something like::
+
+ http://objects.dreamhost.com/my-bucket-name/hello.txt
+ http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&Expires=1316027075&AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX
+
+.. _`AWS::SDK`: http://docs.aws.amazon.com/sdkforruby/api/Aws/S3/Client.html
+
+
+
+Ruby `AWS::S3`_ Examples (aws-s3 gem)
+=====================================
+
+Creating a Connection
+---------------------
+
+This creates a connection so that you can interact with the server.
+
+.. code-block:: ruby
+
+ AWS::S3::Base.establish_connection!(
+ :server => 'objects.dreamhost.com',
+ :use_ssl => true,
+ :access_key_id => 'my-access-key',
+ :secret_access_key => 'my-secret-key'
+ )
+
+
+Listing Owned Buckets
+---------------------
+
+This gets a list of `AWS::S3::Bucket`_ objects that you own.
+This also prints out the bucket name and creation date of each bucket.
+
+.. code-block:: ruby
+
+ AWS::S3::Service.buckets.each do |bucket|
+ puts "#{bucket.name}\t#{bucket.creation_date}"
+ end
+
+The output will look something like this::
+
+ mahbuckat1 2011-04-21T18:05:39.000Z
+ mahbuckat2 2011-04-21T18:05:48.000Z
+ mahbuckat3 2011-04-21T18:07:18.000Z
+
+
+Creating a Bucket
+-----------------
+
+This creates a new bucket called ``my-new-bucket``
+
+.. code-block:: ruby
+
+ AWS::S3::Bucket.create('my-new-bucket')
+
+
+Listing a Bucket's Content
+--------------------------
+
+This gets a list of hashes with the contents of each object.
+This also prints out each object's name, the file size, and last
+modified date.
+
+.. code-block:: ruby
+
+ new_bucket = AWS::S3::Bucket.find('my-new-bucket')
+ new_bucket.each do |object|
+ puts "#{object.key}\t#{object.about['content-length']}\t#{object.about['last-modified']}"
+ end
+
+The output will look something like this if the bucket has some files::
+
+ myphoto1.jpg 251262 2011-08-08T21:35:48.000Z
+ myphoto2.jpg 262518 2011-08-08T21:38:01.000Z
+
+
+Deleting a Bucket
+-----------------
+.. note::
+ The Bucket must be empty! Otherwise it won't work!
+
+.. code-block:: ruby
+
+ AWS::S3::Bucket.delete('my-new-bucket')
+
+
+Forced Delete for Non-empty Buckets
+-----------------------------------
+
+.. code-block:: ruby
+
+ AWS::S3::Bucket.delete('my-new-bucket', :force => true)
+
+
+Creating an Object
+------------------
+
+This creates a file ``hello.txt`` with the string ``"Hello World!"``
+
+.. code-block:: ruby
+
+ AWS::S3::S3Object.store(
+ 'hello.txt',
+ 'Hello World!',
+ 'my-new-bucket',
+ :content_type => 'text/plain'
+ )
+
+
+Change an Object's ACL
+----------------------
+
+This makes the object ``hello.txt`` publicly readable, and ``secret_plans.txt``
+private.
+
+.. code-block:: ruby
+
+ policy = AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket')
+ policy.grants = [ AWS::S3::ACL::Grant.grant(:public_read) ]
+ AWS::S3::S3Object.acl('hello.txt', 'my-new-bucket', policy)
+
+ policy = AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket')
+ policy.grants = []
+ AWS::S3::S3Object.acl('secret_plans.txt', 'my-new-bucket', policy)
+
+
+Download an Object (to a file)
+------------------------------
+
+This downloads the object ``poetry.pdf`` and saves it in
+``/home/larry/documents/``
+
+.. code-block:: ruby
+
+   open('/home/larry/documents/poetry.pdf', 'wb') do |file|
+ AWS::S3::S3Object.stream('poetry.pdf', 'my-new-bucket') do |chunk|
+ file.write(chunk)
+ end
+ end
+
+
+Delete an Object
+----------------
+
+This deletes the object ``goodbye.txt``
+
+.. code-block:: ruby
+
+ AWS::S3::S3Object.delete('goodbye.txt', 'my-new-bucket')
+
+
+Generate Object Download URLs (signed and unsigned)
+---------------------------------------------------
+
+This generates an unsigned download URL for ``hello.txt``. This works
+because we made ``hello.txt`` public by setting the ACL above.
+This then generates a signed download URL for ``secret_plans.txt`` that
+will work for 1 hour. Signed download URLs will work for the time
+period even if the object is private (when the time period is up, the
+URL will stop working).
+
+.. code-block:: ruby
+
+ puts AWS::S3::S3Object.url_for(
+ 'hello.txt',
+ 'my-new-bucket',
+ :authenticated => false
+ )
+
+ puts AWS::S3::S3Object.url_for(
+ 'secret_plans.txt',
+ 'my-new-bucket',
+ :expires_in => 60 * 60
+ )
+
+The output of this will look something like::
+
+ http://objects.dreamhost.com/my-bucket-name/hello.txt
+ http://objects.dreamhost.com/my-bucket-name/secret_plans.txt?Signature=XXXXXXXXXXXXXXXXXXXXXXXXXXX&Expires=1316027075&AWSAccessKeyId=XXXXXXXXXXXXXXXXXXX
+
+.. _`AWS::S3`: http://amazon.rubyforge.org/
+.. _`AWS::S3::Bucket`: http://amazon.rubyforge.org/doc/
+
diff --git a/doc/radosgw/s3/serviceops.rst b/doc/radosgw/s3/serviceops.rst
new file mode 100644
index 000000000..54b6ca375
--- /dev/null
+++ b/doc/radosgw/s3/serviceops.rst
@@ -0,0 +1,69 @@
+Service Operations
+==================
+
+List Buckets
+------------
+``GET /`` returns a list of buckets created by the user making the request. ``GET /`` only
+returns buckets created by an authenticated user. You cannot make an anonymous request.
+
+Syntax
+~~~~~~
+::
+
+ GET / HTTP/1.1
+ Host: cname.domain.com
+
+ Authorization: AWS {access-key}:{hash-of-header-and-secret}
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++----------------------------+-------------+-----------------------------------------------------------------+
+| Name | Type | Description |
++============================+=============+=================================================================+
+| ``Buckets`` | Container | Container for list of buckets. |
++----------------------------+-------------+-----------------------------------------------------------------+
+| ``Bucket`` | Container | Container for bucket information. |
++----------------------------+-------------+-----------------------------------------------------------------+
+| ``Name`` | String | Bucket name. |
++----------------------------+-------------+-----------------------------------------------------------------+
+| ``CreationDate`` | Date | UTC time when the bucket was created. |
++----------------------------+-------------+-----------------------------------------------------------------+
+| ``ListAllMyBucketsResult`` | Container | A container for the result. |
++----------------------------+-------------+-----------------------------------------------------------------+
+| ``Owner`` | Container | A container for the bucket owner's ``ID`` and ``DisplayName``. |
++----------------------------+-------------+-----------------------------------------------------------------+
+| ``ID`` | String | The bucket owner's ID. |
++----------------------------+-------------+-----------------------------------------------------------------+
+| ``DisplayName`` | String | The bucket owner's display name. |
++----------------------------+-------------+-----------------------------------------------------------------+
+
+
+Get Usage Stats
+---------------
+
+Gets usage stats per user, similar to the admin command :ref:`rgw_user_usage_stats`.
+
+Syntax
+~~~~~~
+::
+
+ GET /?usage HTTP/1.1
+ Host: cname.domain.com
+
+ Authorization: AWS {access-key}:{hash-of-header-and-secret}
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
++----------------------------+-------------+-----------------------------------------------------------------+
+| Name | Type | Description |
++============================+=============+=================================================================+
+| ``Summary`` | Container | Summary of total stats by user. |
++----------------------------+-------------+-----------------------------------------------------------------+
+| ``TotalBytes``             | Integer     | Bytes used by the user.                                         |
++----------------------------+-------------+-----------------------------------------------------------------+
+| ``TotalBytesRounded``      | Integer     | Bytes used, rounded to the nearest 4k boundary.                 |
++----------------------------+-------------+-----------------------------------------------------------------+
+| ``TotalEntries``           | Integer     | Total object entries.                                           |
++----------------------------+-------------+-----------------------------------------------------------------+
diff --git a/doc/radosgw/s3select.rst b/doc/radosgw/s3select.rst
new file mode 100644
index 000000000..fc7ae339c
--- /dev/null
+++ b/doc/radosgw/s3select.rst
@@ -0,0 +1,267 @@
+===============
+ Ceph s3 select
+===============
+
+.. contents::
+
+Overview
+--------
+
+ | The purpose of the **s3 select** engine is to create an efficient pipe between the user client and the storage nodes (the engine should be as close as possible to the storage).
+ | It enables selection of a restricted subset of (structured) data stored in an S3 object using an SQL-like syntax.
+ | It also enables higher-level analytic applications (such as Spark-SQL) to use this feature to improve their latency and throughput.
+
+ | For example, consider a CSV s3-object of several GB from which a user needs to extract a single column, filtered by another column,
+ | as in the following query:
+ | ``select customer-id from s3Object where age>30 and age<65;``
+
+ | Currently the whole s3-object must be retrieved from the OSD via RGW before the data is filtered and extracted.
+ | By "pushing down" the query into the OSD, it is possible to save a lot of network traffic and CPU time (serialization / deserialization).
+
+ | **The bigger the object, and the more selective the query, the better the performance**.
+
+Basic workflow
+--------------
+
+ | An s3-select query is sent to RGW via the `AWS-CLI <https://docs.aws.amazon.com/cli/latest/reference/s3api/select-object-content.html>`_
+
+ | It goes through the authentication and permission process as an incoming message (POST).
+ | **RGWSelectObj_ObjStore_S3::send_response_data** is the “entry point”; it handles each fetched chunk according to the input object-key.
+ | **send_response_data** first handles the input query: it extracts the query and the other CLI parameters.
+
+ | For each newly fetched chunk (~4MB), RGW executes the s3-select query on it.
+ | The current implementation supports CSV objects, and since chunk boundaries randomly “cut” CSV rows in the middle, those broken lines (the first or last per chunk) are skipped while processing the query.
+ | Those “broken” lines are stored and later merged with the next broken line (belonging to the next chunk), and finally processed.
+
+ | For each processed chunk an output message is formatted according to the `AWS specification <https://docs.aws.amazon.com/AmazonS3/latest/API/archive-RESTObjectSELECTContent.html#archive-RESTObjectSELECTContent-responses>`_ and sent back to the client.
+ | RGW supports the following response: ``{:event-type,records} {:content-type,application/octet-stream} {:message-type,event}``.
+ | For aggregation queries, the last chunk must be identified as the end of input, at which point the s3-select engine initiates end-of-processing and produces the aggregated result.
+
+
+Basic functionalities
+~~~~~~~~~~~~~~~~~~~~~
+
+ | **S3select** has a definite set of functionalities that must be implemented (if we wish to stay compliant with AWS); currently only a portion of it is implemented.
+
+ | The implemented software architecture supports basic arithmetic expressions, logical and compare expressions, including nested function calls and casting operators, which alone gives the user reasonable flexibility.
+ | Review the s3-select-feature-table_ below.
+
+
+Error Handling
+~~~~~~~~~~~~~~
+
+ | Any error that occurs while processing the input query, i.e. in the parsing phase or the execution phase, is returned to the client as a response error message.
+
+ | An error of fatal severity (attached to the exception) ends query execution immediately; errors of other severities are counted, and upon reaching 100, query execution ends with an error message.
+
+
+
+
+.. _s3-select-feature-table:
+
+Features Support
+----------------
+
+ | Currently only part of the `AWS select command <https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-glacier-select-sql-reference-select.html>`_ is implemented.
+ | The following table describes the currently supported s3-select functionalities:
+
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Feature | Detailed | Example |
++=================================+=================+=======================================================================+
+| Arithmetic operators | ^ * / + - ( ) | select (int(_1)+int(_2))*int(_9) from stdin; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| | | select ((1+2)*3.14) ^ 2 from stdin; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Compare operators | > < >= <= == != | select _1,_2 from stdin where (int(1)+int(_3))>int(_5); |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| logical operator | AND OR | select count(*) from stdin where int(1)>123 and int(_5)<200; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| casting operator | int(expression) | select int(_1),int( 1.2 + 3.4) from stdin; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| |float(expression)| select float(1.2) from stdin; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| | timestamp(...) | select timestamp("1999:10:10-12:23:44") from stdin; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Aggregation Function | sum | select sum(int(_1)) from stdin; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Aggregation Function | min | select min( int(_1) * int(_5) ) from stdin; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Aggregation Function | max | select max(float(_1)),min(int(_5)) from stdin; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Aggregation Function | count | select count(*) from stdin where (int(1)+int(_3))>int(_5); |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Timestamp Functions | extract | select count(*) from stdin where |
+| | | extract("year",timestamp(_2)) > 1950 |
+| | | and extract("year",timestamp(_1)) < 1960; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Timestamp Functions | dateadd | select count(0) from stdin where |
+| | | datediff("year",timestamp(_1),dateadd("day",366,timestamp(_1))) == 1; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Timestamp Functions | datediff | select count(0) from stdin where |
+| | | datediff("month",timestamp(_1),timestamp(_2))) == 2; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Timestamp Functions | utcnow | select count(0) from stdin where |
+| | | datediff("hours",utcnow(),dateadd("day",1,utcnow())) == 24 ; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| String Functions | substr | select count(0) from stdin where |
+| | | int(substr(_1,1,4))>1950 and int(substr(_1,1,4))<1960; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| alias support | | select int(_1) as a1, int(_2) as a2 , (a1+a2) as a3 |
+| | | from stdin where a3>100 and a3<300; |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+
+s3-select function interfaces
+-----------------------------
+
+Timestamp functions
+~~~~~~~~~~~~~~~~~~~
+ | The `timestamp functionalities <https://docs.aws.amazon.com/AmazonS3/latest/dev/s3-glacier-select-sql-reference-date.html>`_ are partially implemented.
+ | The casting operator ( ``timestamp( string )`` ) converts a string to the timestamp basic type.
+ | Currently it can convert the following pattern: ``yyyy:mm:dd hh:mi:dd``
+
+ | ``extract( date-part , timestamp)`` : returns an integer, the date-part extracted from the input timestamp.
+ | Supported date-parts: year, month, week, day.
+
+ | ``dateadd(date-part , integer, timestamp)`` : returns a timestamp, the result of the calculation on the input timestamp and date-part.
+ | Supported date-parts: year, month, day.
+
+ | ``datediff(date-part, timestamp, timestamp)`` : returns an integer, the calculated difference between the two timestamps according to the date-part.
+ | Supported date-parts: year, month, day, hours.
+
+
+ | ``utcnow()`` : returns the timestamp of the current time.
+
+Aggregation functions
+~~~~~~~~~~~~~~~~~~~~~
+
+ | ``count()`` : returns an integer, the number of rows matching the condition (if one exists).
+
+ | ``sum(expression)`` : returns the sum of the expression over all rows matching the condition (if one exists).
+
+ | ``max(expression)`` : returns the maximal result of the expression over all rows matching the condition (if one exists).
+
+ | ``min(expression)`` : returns the minimal result of the expression over all rows matching the condition (if one exists).
+
+String functions
+~~~~~~~~~~~~~~~~
+
+ | ``substr(string,from,to)`` : returns the substring of the input string defined by the ``from`` and ``to`` positions.
+
+
+Alias
+~~~~~
+ | The **alias** programming construct is an essential part of the s3-select language; it enables much better programming, especially with objects containing many columns or in the case of complex queries.
+
+ | Upon parsing a statement containing an alias construct, the engine replaces the alias with a reference to the correct projection column; at query execution time the reference is evaluated like any other expression.
+
+ | There is a risk that a self (or cyclic) reference may occur, causing a stack overflow (endless loop); to guard against this, upon evaluating an alias, it is validated for cyclic references.
+
+ | An alias also maintains a result cache, meaning that when the same alias is used more than once, the same expression is not evaluated again; instead the result is taken from the cache.
+
+ | Of course, the cache is invalidated for each new row.
+
+Sending Query to RGW
+--------------------
+
+ | Any HTTP client can send an s3-select request to RGW, as long as it is compliant with the `AWS Request syntax <https://docs.aws.amazon.com/AmazonS3/latest/API/API_SelectObjectContent.html#API_SelectObjectContent_RequestSyntax>`_.
+
+
+
+ | Sending an s3-select request to RGW using the AWS CLI should follow the `AWS command reference <https://docs.aws.amazon.com/cli/latest/reference/s3api/select-object-content.html>`_.
+ | Below is an example:
+
+::
+
+ aws --endpoint-url http://localhost:8000 s3api select-object-content \
+  --bucket {BUCKET-NAME} \
+  --expression-type 'SQL' \
+  --input-serialization \
+  '{"CSV": {"FieldDelimiter": "," , "QuoteCharacter": "\"" , "RecordDelimiter" : "\n" , "QuoteEscapeCharacter" : "\\" , "FileHeaderInfo": "USE" }, "CompressionType": "NONE"}' \
+  --output-serialization '{"CSV": {}}' \
+  --key {OBJECT-NAME} \
+  --expression "select count(0) from stdin where int(_1)<10;" output.csv
+
+Syntax
+~~~~~~
+
+ | **Input serialization** (implemented) lets the user define the CSV format; the default values are {\\n} for the row delimiter, {,} for the field delimiter, {"} for the quote character, and {\\} for the escape character.
+ | It handles the **csv-header-info**, the first row in the input object containing the schema.
+ | **Output serialization** is currently not implemented, and the same goes for **compression-type**.
+
+ | The s3-select engine contains a CSV parser, which parses s3-objects as follows.
+ | - Each row ends with the row delimiter.
+ | - The field separator separates adjacent columns; successive field separators define a NULL column.
+ | - The quote character overrides the field separator, meaning the field separator is treated as any other character between quotes.
+ | - The escape character disables the special meaning of any character, except for the row delimiter.
+
+ | Below are examples of the CSV parsing rules.
+
+
+CSV parsing behavior
+--------------------
+
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Feature | Description | input ==> tokens |
++=================================+=================+=======================================================================+
+| NULL | successive | ,,1,,2, ==> {null}{null}{1}{null}{2}{null} |
+| | field delimiter | |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| QUOTE | quote character | 11,22,"a,b,c,d",last ==> {11}{22}{"a,b,c,d"}{last} |
+| | overrides | |
+| | field delimiter | |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| Escape | escape char | 11,22,str=\\"abcd\\"\\,str2=\\"123\\",last |
+| | overrides | ==> {11}{22}{str="abcd",str2="123"}{last} |
+| | meta-character. | |
+| | escape removed | |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| row delimiter | no close quote, | 11,22,a="str,44,55,66 |
+| | row delimiter is| ==> {11}{22}{a="str,44,55,66} |
+| | closing line | |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+| csv header info | FileHeaderInfo | "**USE**" value means each token on first line is column-name, |
+| | tag | "**IGNORE**" value means to skip the first line |
++---------------------------------+-----------------+-----------------------------------------------------------------------+
+
+
+BOTO3
+-----
+
+ | Using boto3 is "natural" and easy thanks to the AWS-CLI support.
+
+::
+
+
+ import boto3
+
+ # endpoint, access_key, secret_key and region_name must be set to point
+ # at the RGW endpoint and its credentials before calling this function.
+ def run_s3select(bucket,key,query,column_delim=",",row_delim="\n",quot_char='"',esc_char='\\',csv_header_info="NONE"):
+     s3 = boto3.client('s3',
+         endpoint_url=endpoint,
+         aws_access_key_id=access_key,
+         region_name=region_name,
+         aws_secret_access_key=secret_key)
+
+
+
+ r = s3.select_object_content(
+ Bucket=bucket,
+ Key=key,
+ ExpressionType='SQL',
+ InputSerialization = {"CSV": {"RecordDelimiter" : row_delim, "FieldDelimiter" : column_delim,"QuoteEscapeCharacter": esc_char, "QuoteCharacter": quot_char, "FileHeaderInfo": csv_header_info}, "CompressionType": "NONE"},
+ OutputSerialization = {"CSV": {}},
+ Expression=query,)
+
+ result = ""
+ for event in r['Payload']:
+ if 'Records' in event:
+ records = event['Records']['Payload'].decode('utf-8')
+ result += records
+
+ return result
+
+
+
+
+ run_s3select(
+ "my_bucket",
+ "my_csv_object",
+ "select int(_1) as a1, int(_2) as a2 , (a1+a2) as a3 from stdin where a3>100 and a3<300;")
+
diff --git a/doc/radosgw/session-tags.rst b/doc/radosgw/session-tags.rst
new file mode 100644
index 000000000..abfb89e87
--- /dev/null
+++ b/doc/radosgw/session-tags.rst
@@ -0,0 +1,424 @@
+=======================================================
+Session tags for Attribute Based Access Control in STS
+=======================================================
+
+Session tags are key-value pairs that can be passed while federating a user (currently this
+is only supported as part of the web token passed to AssumeRoleWithWebIdentity). The session
+tags are passed along as aws:PrincipalTag in the session credentials (temporary credentials)
+that are returned by STS. These principal tags consist of the session tags that come in
+as part of the web token and the tags that are attached to the role being assumed. Please note
+that the tags must always be specified in the following namespace: https://aws.amazon.com/tags.
+
+An example of the session tags that are passed in by the IDP in the web token is as follows:
+
+.. code-block:: python
+
+ {
+ "jti": "947960a3-7e91-4027-99f6-da719b0d4059",
+ "exp": 1627438044,
+ "nbf": 0,
+ "iat": 1627402044,
+ "iss": "http://localhost:8080/auth/realms/quickstart",
+ "aud": "app-profile-jsp",
+ "sub": "test",
+ "typ": "ID",
+ "azp": "app-profile-jsp",
+ "auth_time": 0,
+ "session_state": "3a46e3e7-d198-4a64-8b51-69682bcfc670",
+ "preferred_username": "test",
+ "email_verified": false,
+ "acr": "1",
+ "https://aws.amazon.com/tags": [
+ {
+ "principal_tags": {
+ "Department": [
+ "Engineering",
+ "Marketing"
+ ]
+ }
+ }
+ ],
+ "client_id": "app-profile-jsp",
+ "username": "test",
+ "active": true
+ }
+
+Steps to configure Keycloak to pass tags in the web token are described in :doc:`keycloak`.
+
+The trust policy must have 'sts:TagSession' permission if the web token passed in by the federated user contains session tags, otherwise
+the AssumeRoleWithWebIdentity action will fail. An example of the trust policy with sts:TagSession is as follows:
+
+.. code-block:: python
+
+ {
+ "Version":"2012-10-17",
+ "Statement":[
+ {
+ "Effect":"Allow",
+ "Action":["sts:AssumeRoleWithWebIdentity","sts:TagSession"],
+ "Principal":{"Federated":["arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/quickstart"]},
+ "Condition":{"StringEquals":{"localhost:8080/auth/realms/quickstart:sub":"test"}}
+ }]
+ }
+
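+With such a trust policy in place, a federated user can assume the role by
+passing the web token (which carries the session tags) to
+AssumeRoleWithWebIdentity. A minimal boto3 sketch, assuming an RGW endpoint at
+``http://localhost:8000`` and a web token already obtained from the IDP:
+
+.. code-block:: python
+
+    import boto3
+
+    sts = boto3.client('sts',
+                       endpoint_url='http://localhost:8000',
+                       aws_access_key_id='access',
+                       aws_secret_access_key='secret')
+
+    # the web token from the IDP; the session tags it carries come back
+    # as principal tags in the temporary credentials
+    token = '<web token obtained from the IDP>'
+
+    response = sts.assume_role_with_web_identity(
+        RoleArn='arn:aws:iam:::role/S3Access',
+        RoleSessionName='tagged-session',
+        WebIdentityToken=token)
+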
+Tag Keys
+========
+
+The following are the tag keys that can be used in the role's trust policy or the role's permission policy:
+
+1. aws:RequestTag: This key is used to compare the key-value pair passed in the request with the key-value pair
+in the role's trust policy. In case of AssumeRoleWithWebIdentity, the session tags that are passed by the IDP
+in the web token can be used as aws:RequestTag in the role's trust policy based on which a federated user can be
+allowed to assume a role.
+
+An example of a role trust policy that uses aws:RequestTag is as follows:
+
+.. code-block:: python
+
+ {
+ "Version":"2012-10-17",
+ "Statement":[
+ {
+ "Effect":"Allow",
+ "Action":["sts:AssumeRoleWithWebIdentity","sts:TagSession"],
+ "Principal":{"Federated":["arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/quickstart"]},
+ "Condition":{"StringEquals":{"aws:RequestTag/Department":"Engineering"}}
+ }]
+ }
+
+2. aws:PrincipalTag: This key is used to compare the key-value pair attached to the principal with the key-value pair
+in the policy. In case of AssumeRoleWithWebIdentity, the session tags that are passed by the IDP in the web token appear
+as Principal tags in the temporary credentials once a user has been authenticated, and these tags can be used as
+aws:PrincipalTag in the role's permission policy.
+
+An example of a role permission policy that uses aws:PrincipalTag is as follows:
+
+.. code-block:: python
+
+ {
+ "Version":"2012-10-17",
+ "Statement":[
+ {
+ "Effect":"Allow",
+ "Action":["s3:*"],
+       "Resource":["arn:aws:s3::t1tenant:my-test-bucket","arn:aws:s3::t1tenant:my-test-bucket/*"],
+ "Condition":{"StringEquals":{"aws:PrincipalTag/Department":"Engineering"}}
+ }]
+ }
+
+3. iam:ResourceTag: This key is used to compare the key-value pair attached to the resource with the key-value pair
+in the policy. In case of AssumeRoleWithWebIdentity, tags attached to the role can be used to compare with that in
+the trust policy to allow a user to assume a role.
+RGW now supports REST APIs for tagging, listing tags, and untagging actions on a role. More information
+related to role tagging can be found in :doc:`role`.
+
+An example of a role's trust policy that uses iam:ResourceTag is as follows:
+
+.. code-block:: python
+
+ {
+ "Version":"2012-10-17",
+ "Statement":[
+ {
+ "Effect":"Allow",
+ "Action":["sts:AssumeRoleWithWebIdentity","sts:TagSession"],
+ "Principal":{"Federated":["arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/quickstart"]},
+ "Condition":{"StringEquals":{"iam:ResourceTag/Department":"Engineering"}}
+ }]
+ }
+
+For the above to work, you need to attach the 'Department=Engineering' tag to the role, as shown in the sketch below.
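+
+The following is a minimal, illustrative sketch of attaching that tag with boto3's ``tag_role``
+call; the endpoint, credentials and role name are placeholders and assume a setup like the one
+in the sample code at the end of this page:
+
+.. code-block:: python
+
+    import boto3
+
+    # Placeholder RGW endpoint and credentials; substitute your own.
+    iam_client = boto3.client('iam',
+                              aws_access_key_id='TESTER',
+                              aws_secret_access_key='test123',
+                              endpoint_url='http://s3.us-east.localhost:8000',
+                              region_name='')
+
+    # Attach Department=Engineering to the role so that the
+    # iam:ResourceTag/Department condition above can match.
+    iam_client.tag_role(RoleName='S3Access',
+                        Tags=[{'Key': 'Department', 'Value': 'Engineering'}])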
+
+4. aws:TagKeys: This key is used to compare tags in the request with the tags in the policy. In case of
+AssumeRoleWithWebIdentity this can be used to check the tag keys in a role's trust policy before a user
+is allowed to assume a role.
+This can also be used in the role's permission policy.
+
+An example of a role's trust policy that uses aws:TagKeys is as follows:
+
+.. code-block:: python
+
+ {
+ "Version":"2012-10-17",
+ "Statement":[
+ {
+ "Effect":"Allow",
+ "Action":["sts:AssumeRoleWithWebIdentity","sts:TagSession"],
+ "Principal":{"Federated":["arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/quickstart"]},
+ "Condition":{"ForAllValues:StringEquals":{"aws:TagKeys":["Marketing,Engineering"]}}
+ }]
+ }
+
+'ForAllValues:StringEquals' tests whether the set of tag keys in the request is a subset of the tag keys in the
+policy, so the above condition restricts which tag keys may be passed in the request.
+
+5. s3:ResourceTag: This key is used to compare tags present on the s3 resource (bucket or object) with the tags in
+the role's permission policy.
+
+An example of a role's permission policy that uses s3:ResourceTag is as follows:
+
+.. code-block:: python
+
+ {
+ "Version":"2012-10-17",
+ "Statement":[
+ {
+ "Effect":"Allow",
+ "Action":["s3:PutBucketTagging"],
+ "Resource":["arn:aws:s3::t1tenant:my-test-bucket\","arn:aws:s3::t1tenant:my-test-bucket/*"]
+ },
+ {
+ "Effect":"Allow",
+ "Action":["s3:*"],
+ "Resource":["*"],
+ "Condition":{"StringEquals":{"s3:ResourceTag/Department":\"Engineering"}}
+ }
+ }
+
+For the above to work, you need to attach the 'Department=Engineering' tag to the bucket (and to the object too)
+on which you want this policy to be applied.
+
+More examples of policies using tags
+====================================
+
+1. To assume a role by matching the tags in the incoming request with the tag attached to the role.
+aws:RequestTag is the incoming tag in the JWT (access token) and iam:ResourceTag is the tag attached to the role being assumed:
+
+.. code-block:: python
+
+ {
+ "Version":"2012-10-17",
+ "Statement":[
+ {
+ "Effect":"Allow",
+ "Action":["sts:AssumeRoleWithWebIdentity","sts:TagSession"],
+ "Principal":{"Federated":["arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/quickstart"]},
+ "Condition":{"StringEquals":{"aws:RequestTag/Department":"${iam:ResourceTag/Department}"}}
+ }]
+ }
+
+2. To evaluate a role's permission policy by matching principal tags with s3 resource tags.
+aws:PrincipalTag is the tag passed in along with the temporary credentials and s3:ResourceTag is the tag attached to
+the s3 resource (object/bucket):
+
+.. code-block:: python
+
+
+ {
+ "Version":"2012-10-17",
+ "Statement":[
+ {
+ "Effect":"Allow",
+ "Action":["s3:PutBucketTagging"],
+ "Resource":["arn:aws:s3::t1tenant:my-test-bucket\","arn:aws:s3::t1tenant:my-test-bucket/*"]
+ },
+ {
+ "Effect":"Allow",
+ "Action":["s3:*"],
+ "Resource":["*"],
+ "Condition":{"StringEquals":{"s3:ResourceTag/Department":"${aws:PrincipalTag/Department}"}}
+ }]
+ }
+
+Properties of Session Tags
+==========================
+
+1. Session tags can be multi-valued. (Multi-valued session tags are not supported in AWS.)
+2. A maximum of 50 session tags are allowed to be passed in by the IDP.
+3. The maximum size of a tag key allowed is 128 characters.
+4. The maximum size of a tag value allowed is 256 characters.
+5. Neither the tag key nor the value can start with "aws:".
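+
+A small, illustrative sketch (not part of RGW) that checks a set of session tags against these
+limits before handing them to the IDP might look like this:
+
+.. code-block:: python
+
+    def validate_session_tags(tags):
+        """Check session tags against the limits listed above.
+
+        ``tags`` maps tag keys to lists of values, e.g.
+        {'Department': ['Engineering', 'Marketing']}.
+        """
+        if len(tags) > 50:
+            raise ValueError('at most 50 session tags may be passed in')
+        for key, values in tags.items():
+            if len(key) > 128:
+                raise ValueError('tag key %r exceeds 128 characters' % key)
+            if key.startswith('aws:'):
+                raise ValueError('tag key %r must not start with "aws:"' % key)
+            for value in values:  # multi-valued tags are allowed in RGW
+                if len(value) > 256:
+                    raise ValueError('tag value %r exceeds 256 characters' % value)
+                if value.startswith('aws:'):
+                    raise ValueError('tag value %r must not start with "aws:"' % value)
+
+    validate_session_tags({'Department': ['Engineering', 'Marketing']})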
+
+s3 Resource Tags
+================
+
+As stated above, the 's3:ResourceTag' key can be used to authorize an s3 operation in RGW (this is not allowed in AWS).
+
+s3:ResourceTag is a key that refers to tags attached to an object or a bucket. Tags can be attached to an object or
+a bucket using the corresponding REST APIs.
+
+The following table shows which s3 resource tag type (bucket/object) is used for authorizing a particular operation.
+
++-----------------------------------+-------------------+
+| Operation | Tag type |
++===================================+===================+
+| **GetObject** | Object tags |
+| **GetObjectTags** | |
+| **DeleteObjectTags** | |
+| **DeleteObject** | |
+| **PutACLs** | |
+| **InitMultipart** | |
+| **AbortMultipart** | |
+| **ListMultipart** | |
+| **GetAttrs** | |
+| **PutObjectRetention** | |
+| **GetObjectRetention** | |
+| **PutObjectLegalHold** | |
+| **GetObjectLegalHold** | |
++-----------------------------------+-------------------+
+| **PutObjectTags** | Bucket tags |
+| **GetBucketTags** | |
+| **PutBucketTags** | |
+| **DeleteBucketTags** | |
+| **GetBucketReplication** | |
+| **DeleteBucketReplication** | |
+| **GetBucketVersioning** | |
+| **SetBucketVersioning** | |
+| **GetBucketWebsite** | |
+| **SetBucketWebsite** | |
+| **DeleteBucketWebsite** | |
+| **StatBucket** | |
+| **ListBucket** | |
+| **GetBucketLogging** | |
+| **GetBucketLocation** | |
+| **DeleteBucket** | |
+| **GetLC** | |
+| **PutLC** | |
+| **DeleteLC** | |
+| **GetCORS** | |
+| **PutCORS** | |
+| **GetRequestPayment** | |
+| **SetRequestPayment** | |
+| **PutBucketPolicy** | |
+| **GetBucketPolicy** | |
+| **DeleteBucketPolicy** | |
+| **PutBucketObjectLock** | |
+| **GetBucketObjectLock** | |
+| **GetBucketPolicyStatus** | |
+| **PutBucketPublicAccessBlock** | |
+| **GetBucketPublicAccessBlock** | |
+| **DeleteBucketPublicAccessBlock** | |
++-----------------------------------+-------------------+
+| **GetACLs** | Bucket tags for |
+| **PutACLs** | bucket ACLs |
+| | Object tags for |
+| | object ACLs |
++-----------------------------------+-------------------+
+| **PutObject** | Object tags of |
+| **CopyObject** | source object |
+| | Bucket tags of |
+| | destination bucket|
++-----------------------------------+-------------------+
+
+
+Sample code demonstrating usage of session tags
+===============================================
+
+The following is sample code for tagging a role, a bucket and an object in it, and for using tag keys in a role's
+trust policy and its permission policy, assuming that a tag 'Department=Engineering' is passed in the
+JWT (access token) by the IDP:
+
+.. code-block:: python
+
+ # -*- coding: utf-8 -*-
+
+ import boto3
+ import json
+ from botocore.exceptions import ClientError
+ from nose.tools import eq_ as eq
+
+ access_key = 'TESTER'
+ secret_key = 'test123'
+ endpoint = 'http://s3.us-east.localhost:8000'
+
+ s3client = boto3.client('s3',
+ aws_access_key_id = access_key,
+ aws_secret_access_key = secret_key,
+ endpoint_url = endpoint,
+ region_name='',)
+
+ s3res = boto3.resource('s3',
+ aws_access_key_id = access_key,
+ aws_secret_access_key = secret_key,
+ endpoint_url = endpoint,
+ region_name='',)
+
+ iam_client = boto3.client('iam',
+ aws_access_key_id=access_key,
+ aws_secret_access_key=secret_key,
+ endpoint_url=endpoint,
+ region_name=''
+ )
+
+ bucket_name = 'test-bucket'
+ s3bucket = s3client.create_bucket(Bucket=bucket_name)
+
+ bucket_tagging = s3res.BucketTagging(bucket_name)
+ Set_Tag = bucket_tagging.put(Tagging={'TagSet':[{'Key':'Department', 'Value': 'Engineering'}]})
+ try:
+ response = iam_client.create_open_id_connect_provider(
+ Url='http://localhost:8080/auth/realms/quickstart',
+ ClientIDList=[
+ 'app-profile-jsp',
+ 'app-jee-jsp'
+ ],
+ ThumbprintList=[
+ 'F7D7B3515DD0D319DD219A43A9EA727AD6065287'
+ ]
+ )
+ except ClientError as e:
+ print ("Provider already exists")
+
+ policy_document = "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Principal\":{\"Federated\":[\"arn:aws:iam:::oidc-provider/localhost:8080/auth/realms/quickstart\"]},\"Action\":[\"sts:AssumeRoleWithWebIdentity\",\"sts:TagSession\"],\"Condition\":{\"StringEquals\":{\"aws:RequestTag/Department\":\"${iam:ResourceTag/Department}\"}}}]}"
+ role_response = ""
+
+ print ("\n Getting Role \n")
+
+ try:
+ role_response = iam_client.get_role(
+ RoleName='S3Access'
+ )
+ print (role_response)
+ except ClientError as e:
+ if e.response['Error']['Code'] == 'NoSuchEntity':
+ print ("\n Creating Role \n")
+ tags_list = [
+ {'Key':'Department','Value':'Engineering'},
+ ]
+ role_response = iam_client.create_role(
+ AssumeRolePolicyDocument=policy_document,
+ Path='/',
+ RoleName='S3Access',
+ Tags=tags_list,
+ )
+ print (role_response)
+ else:
+ print("Unexpected error: %s" % e)
+
+ role_policy = "{\"Version\":\"2012-10-17\",\"Statement\":{\"Effect\":\"Allow\",\"Action\":\"s3:*\",\"Resource\":\"arn:aws:s3:::*\",\"Condition\":{\"StringEquals\":{\"s3:ResourceTag/Department\":[\"${aws:PrincipalTag/Department}\"]}}}}"
+
+ response = iam_client.put_role_policy(
+ RoleName='S3Access',
+ PolicyName='Policy1',
+ PolicyDocument=role_policy
+ )
+
+ sts_client = boto3.client('sts',
+ aws_access_key_id='abc',
+ aws_secret_access_key='def',
+ endpoint_url = endpoint,
+ region_name = '',
+ )
+
+
+ print ("\n Assuming Role with Web Identity\n")
+ response = sts_client.assume_role_with_web_identity(
+ RoleArn=role_response['Role']['Arn'],
+ RoleSessionName='Bob',
+ DurationSeconds=900,
+ WebIdentityToken='<web-token>')
+
+ s3client2 = boto3.client('s3',
+ aws_access_key_id = response['Credentials']['AccessKeyId'],
+ aws_secret_access_key = response['Credentials']['SecretAccessKey'],
+ aws_session_token = response['Credentials']['SessionToken'],
+ endpoint_url='http://s3.us-east.localhost:8000',
+ region_name='',)
+
+ bucket_body = 'this is a test file'
+ tags = 'Department=Engineering'
+ key = "test-1.txt"
+ s3_put_obj = s3client2.put_object(Body=bucket_body, Bucket=bucket_name, Key=key, Tagging=tags)
+ eq(s3_put_obj['ResponseMetadata']['HTTPStatusCode'],200)
+
+ s3_get_obj = s3client2.get_object(Bucket=bucket_name, Key=key)
+ eq(s3_get_obj['ResponseMetadata']['HTTPStatusCode'],200)
\ No newline at end of file
diff --git a/doc/radosgw/swift.rst b/doc/radosgw/swift.rst
new file mode 100644
index 000000000..2cb2dde67
--- /dev/null
+++ b/doc/radosgw/swift.rst
@@ -0,0 +1,77 @@
+===============================
+ Ceph Object Gateway Swift API
+===============================
+
+Ceph supports a RESTful API that is compatible with the basic data access model of the `Swift API`_.
+
+API
+---
+
+.. toctree::
+ :maxdepth: 1
+
+ Authentication <swift/auth>
+ Service Ops <swift/serviceops>
+ Container Ops <swift/containerops>
+ Object Ops <swift/objectops>
+ Temp URL Ops <swift/tempurl>
+ Tutorial <swift/tutorial>
+ Java <swift/java>
+ Python <swift/python>
+ Ruby <swift/ruby>
+
+
+Features Support
+----------------
+
+The following table describes the support status for current Swift functional features:
+
++---------------------------------+-----------------+----------------------------------------+
+| Feature | Status | Remarks |
++=================================+=================+========================================+
+| **Authentication** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Get Account Metadata** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Swift ACLs** | Supported | Supports a subset of Swift ACLs |
++---------------------------------+-----------------+----------------------------------------+
+| **List Containers** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Delete Container** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Create Container** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Get Container Metadata** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Update Container Metadata** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Delete Container Metadata** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **List Objects** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Static Website** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Create Object** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Create Large Object** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Delete Object** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Get Object** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Copy Object** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Get Object Metadata** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Update Object Metadata** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Expiring Objects** | Supported | |
++---------------------------------+-----------------+----------------------------------------+
+| **Temporary URLs** | Partial Support | No support for container-level keys |
++---------------------------------+-----------------+----------------------------------------+
+| **Object Versioning** | Partial Support | No support for ``X-History-Location`` |
++---------------------------------+-----------------+----------------------------------------+
+| **CORS** | Not Supported | |
++---------------------------------+-----------------+----------------------------------------+
+
+.. _Swift API: https://developer.openstack.org/api-ref/object-store/index.html
diff --git a/doc/radosgw/swift/auth.rst b/doc/radosgw/swift/auth.rst
new file mode 100644
index 000000000..12d6b23ff
--- /dev/null
+++ b/doc/radosgw/swift/auth.rst
@@ -0,0 +1,82 @@
+================
+ Authentication
+================
+
+Swift API requests that require authentication must contain an
+``X-Storage-Token`` authentication token in the request header.
+The token may be retrieved from RADOS Gateway, or from another authenticator.
+To obtain a token from RADOS Gateway, you must create a user. For example::
+
+ sudo radosgw-admin user create --subuser="{username}:{subusername}" --uid="{username}"
+ --display-name="{Display Name}" --key-type=swift --secret="{password}" --access=full
+
+For details on RADOS Gateway administration, see `radosgw-admin`_.
+
+.. _radosgw-admin: ../../../man/8/radosgw-admin/
+
+.. note::
+    For those familiar with the Swift API, this implements the Swift auth v1.0 API; as such,
+    `{username}` above is generally equivalent to a Swift `account` and `{subusername}`
+    is a user under that account.
+
+Auth Get
+--------
+
+To authenticate a user, make a request containing an ``X-Auth-User`` and a
+``X-Auth-Key`` in the header.
+
+Syntax
+~~~~~~
+
+::
+
+ GET /auth HTTP/1.1
+ Host: swift.radosgwhost.com
+ X-Auth-User: johndoe
+ X-Auth-Key: R7UUOLFDI2ZI9PRCQ53K
+
+Request Headers
+~~~~~~~~~~~~~~~
+
+``X-Auth-User``
+
+:Description: The RADOS GW username to authenticate.
+:Type: String
+:Required: Yes
+
+``X-Auth-Key``
+
+:Description: The key associated with a RADOS GW username.
+:Type: String
+:Required: Yes
+
+
+Response Headers
+~~~~~~~~~~~~~~~~
+
+The response from the server should include an ``X-Auth-Token`` value. The
+response may also contain an ``X-Storage-Url`` that provides the
+``{api version}/{account}`` prefix that is specified in other requests
+throughout the API documentation.
+
+
+``X-Storage-Token``
+
+:Description: The authorization token for the ``X-Auth-User`` specified in the request.
+:Type: String
+
+
+``X-Storage-Url``
+
+:Description: The URL and ``{api version}/{account}`` path for the user.
+:Type: String
+
+A typical response looks like this::
+
+ HTTP/1.1 204 No Content
+ Date: Mon, 16 Jul 2012 11:05:33 GMT
+ Server: swift
+ X-Storage-Url: https://swift.radosgwhost.com/v1/ACCT-12345
+ X-Auth-Token: UOlCCC8TahFKlWuv9DB09TWHF0nDjpPElha0kAa
+ Content-Length: 0
+ Content-Type: text/plain; charset=UTF-8
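+
+As a minimal sketch of this exchange using the Python ``requests`` library (the host and
+credentials are the placeholder values from the example above):
+
+.. code-block:: python
+
+    import requests
+
+    # Placeholder gateway and subuser credentials; substitute your own.
+    response = requests.get('https://swift.radosgwhost.com/auth',
+                            headers={'X-Auth-User': 'johndoe',
+                                     'X-Auth-Key': 'R7UUOLFDI2ZI9PRCQ53K'})
+    response.raise_for_status()
+    token = response.headers['X-Auth-Token']
+    storage_url = response.headers['X-Storage-Url']
+    # Subsequent API requests go to storage_url with X-Auth-Token set.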
diff --git a/doc/radosgw/swift/containerops.rst b/doc/radosgw/swift/containerops.rst
new file mode 100644
index 000000000..17ef4658b
--- /dev/null
+++ b/doc/radosgw/swift/containerops.rst
@@ -0,0 +1,341 @@
+======================
+ Container Operations
+======================
+
+A container is a mechanism for storing data objects. An account may
+have many containers, but container names must be unique. This API enables a
+client to create a container, set access controls and metadata,
+retrieve a container's contents, and delete a container. Since this API
+makes requests related to information in a particular user's account, all
+requests in this API must be authenticated unless a container's access control
+is deliberately made publicly accessible (i.e., allows anonymous requests).
+
+.. note:: The Amazon S3 API uses the term 'bucket' to describe a data container.
+ When you hear someone refer to a 'bucket' within the Swift API, the term
+ 'bucket' may be construed as the equivalent of the term 'container.'
+
+One facet of object storage is that it does not support hierarchical paths
+or directories. Instead, it supports one level consisting of one or more
+containers, where each container may have objects. The RADOS Gateway's
+Swift-compatible API supports the notion of 'pseudo-hierarchical containers,'
+which is a means of using object naming to emulate a container (or directory)
+hierarchy without actually implementing one in the storage system. You may
+name objects with pseudo-hierarchical names
+(e.g., photos/buildings/empire-state.jpg), but container names cannot
+contain a forward slash (``/``) character.
+
+
+Create a Container
+==================
+
+To create a new container, make a ``PUT`` request with the API version, account,
+and the name of the new container. The container name must be unique, must not
+contain a forward-slash (/) character, and should be less than 256 bytes. You
+may include access control headers and metadata headers in the request. The
+operation is idempotent; that is, if you make a request to create a container
+that already exists, it will return an HTTP 202 status code, but will not
+create another container.
+
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{api version}/{account}/{container} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+ X-Container-Read: {comma-separated-uids}
+ X-Container-Write: {comma-separated-uids}
+ X-Container-Meta-{key}: {value}
+
+
+Headers
+~~~~~~~
+
+``X-Container-Read``
+
+:Description: The user IDs with read permissions for the container.
+:Type: Comma-separated string values of user IDs.
+:Required: No
+
+``X-Container-Write``
+
+:Description: The user IDs with write permissions for the container.
+:Type: Comma-separated string values of user IDs.
+:Required: No
+
+``X-Container-Meta-{key}``
+
+:Description: A user-defined meta data key that takes an arbitrary string value.
+:Type: String
+:Required: No
+
+
+HTTP Response
+~~~~~~~~~~~~~
+
+If a container with the same name already exists, and the user is the
+container owner then the operation will succeed. Otherwise the operation
+will fail.
+
+``409``
+
+:Description: The container already exists under a different user's ownership.
+:Status Code: ``BucketAlreadyExists``
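+
+As an illustrative python-swiftclient sketch of this request (the connection details and
+container/user names are placeholders; see the :ref:`python_swift` examples):
+
+.. code-block:: python
+
+    import swiftclient
+
+    # Placeholder credentials and auth endpoint.
+    conn = swiftclient.Connection(user='account_name:username',
+                                  key='your_api_key',
+                                  authurl='https://objects.example.com/auth')
+
+    # Create the container with ACL grants and one metadata key.
+    conn.put_container('my-new-container',
+                       headers={'X-Container-Read': 'uid1, uid2',
+                                'X-Container-Write': 'uid1',
+                                'X-Container-Meta-Color': 'red'})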
+
+
+
+
+List a Container's Objects
+==========================
+
+To list the objects within a container, make a ``GET`` request with the
+API version, account, and the name of the container. You can specify query
+parameters to filter the full list, or leave out the parameters to return a list
+of the first 10,000 object names stored in the container.
+
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{api version}/{account}/{container} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+
+
+Parameters
+~~~~~~~~~~
+
+``format``
+
+:Description: Defines the format of the result.
+:Type: String
+:Valid Values: ``json`` | ``xml``
+:Required: No
+
+``prefix``
+
+:Description: Limits the result set to objects beginning with the specified prefix.
+:Type: String
+:Required: No
+
+``marker``
+
+:Description: Returns a list of results greater than the marker value.
+:Type: String
+:Required: No
+
+``limit``
+
+:Description: Limits the number of results to the specified value.
+:Type: Integer
+:Valid Range: 0 - 10,000
+:Required: No
+
+``delimiter``
+
+:Description: The delimiter between the prefix and the rest of the object name.
+:Type: String
+:Required: No
+
+``path``
+
+:Description: The pseudo-hierarchical path of the objects.
+:Type: String
+:Required: No
+
+``allow_unordered``
+
+:Description: Allows the results to be returned unordered to reduce computation overhead. Cannot be used with ``delimiter``.
+:Type: Boolean
+:Required: No
+:Non-Standard Extension: Yes
+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+``container``
+
+:Description: The container.
+:Type: Container
+
+``object``
+
+:Description: An object within the container.
+:Type: Container
+
+``name``
+
+:Description: The name of an object within the container.
+:Type: String
+
+``hash``
+
+:Description: A hash code of the object's contents.
+:Type: String
+
+``last_modified``
+
+:Description: The last time the object's contents were modified.
+:Type: Date
+
+``content_type``
+
+:Description: The type of content within the object.
+:Type: String
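+
+A short python-swiftclient sketch of a filtered listing (reusing a ``conn`` object like the
+one shown earlier; the container name and parameters are illustrative):
+
+.. code-block:: python
+
+    # List up to 100 entries under the photos/ pseudo-directory.
+    headers, entries = conn.get_container('my-new-container',
+                                          prefix='photos/',
+                                          delimiter='/',
+                                          limit=100)
+    for entry in entries:
+        # Pseudo-directories come back as 'subdir' entries, objects as 'name'.
+        print(entry.get('name') or entry.get('subdir'))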
+
+
+
+Update a Container's ACLs
+=========================
+
+When a user creates a container, the user has read and write access to the
+container by default. To allow other users to read a container's contents or
+write to a container, you must specifically enable the user.
+You may also specify ``*`` in the ``X-Container-Read`` or ``X-Container-Write``
+settings, which effectively enables all users to either read from or write
+to the container. Setting ``*`` makes the container public. That is, it
+enables anonymous users to either read from or write to the container.
+
+.. note:: If you are planning to expose public read ACL functionality
+ for the Swift API, it is strongly recommended to include the
+ Swift account name in the endpoint definition, so as to most
+ closely emulate the behavior of native OpenStack Swift. To
+ do so, set the ``ceph.conf`` configuration option ``rgw
+ swift account in url = true``, and update your Keystone
+ endpoint to the URL suffix ``/v1/AUTH_%(tenant_id)s``
+ (instead of just ``/v1``).
+
+
+Syntax
+~~~~~~
+
+::
+
+ POST /{api version}/{account}/{container} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+ X-Container-Read: *
+ X-Container-Write: {uid1}, {uid2}, {uid3}
+
+Request Headers
+~~~~~~~~~~~~~~~
+
+``X-Container-Read``
+
+:Description: The user IDs with read permissions for the container.
+:Type: Comma-separated string values of user IDs.
+:Required: No
+
+``X-Container-Write``
+
+:Description: The user IDs with write permissions for the container.
+:Type: Comma-separated string values of user IDs.
+:Required: No
+
+
+Add/Update Container Metadata
+=============================
+
+To add metadata to a container, make a ``POST`` request with the API version,
+account, and container name. You must have write permissions on the
+container to add or update metadata.
+
+Syntax
+~~~~~~
+
+::
+
+ POST /{api version}/{account}/{container} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+ X-Container-Meta-Color: red
+ X-Container-Meta-Taste: salty
+
+Request Headers
+~~~~~~~~~~~~~~~
+
+``X-Container-Meta-{key}``
+
+:Description: A user-defined meta data key that takes an arbitrary string value.
+:Type: String
+:Required: No
+
+
+Enable Object Versioning for a Container
+========================================
+
+To enable object versioning for a container, make a ``POST`` request with
+the API version, account, and container name. You must have write
+permissions on the container to add or update metadata.
+
+.. note:: Object versioning support is not enabled in radosgw by
+ default; you must set ``rgw swift versioning enabled =
+ true`` in ``ceph.conf`` to enable this feature.
+
+Syntax
+~~~~~~
+
+::
+
+ POST /{api version}/{account}/{container} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+ X-Versions-Location: {archive-container}
+
+Request Headers
+~~~~~~~~~~~~~~~
+
+``X-Versions-Location``
+
+:Description: The name of a container (the "archive container") that
+ will be used to store versions of the objects in the
+ container that the ``POST`` request is made on (the
+ "current container"). The archive container need not
+ exist at the time it is being referenced, but once
+ ``X-Versions-Location`` is set on the current container,
+ and object versioning is thus enabled, the archive
+ container must exist before any further objects are
+ updated or deleted in the current container.
+
+ .. note:: ``X-Versions-Location`` is the only
+ versioning-related header that radosgw
+ interprets. ``X-History-Location``, supported
+ by native OpenStack Swift, is currently not
+ supported by radosgw.
+:Type: String
+:Required: No (if this header is passed with an empty value, object
+ versioning on the current container is disabled, but the
+ archive container continues to exist.)
+
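+As an illustrative python-swiftclient sketch (reusing a ``conn`` object like the one shown
+earlier; the archive container name ``my-archive`` is a placeholder), versioning can be
+enabled like this:
+
+.. code-block:: python
+
+    # Create the archive container, then point the current container's
+    # X-Versions-Location at it to enable versioning.
+    conn.put_container('my-archive')
+    conn.post_container('my-new-container',
+                        headers={'X-Versions-Location': 'my-archive'})
+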
+
+Delete a Container
+==================
+
+To delete a container, make a ``DELETE`` request with the API version, account,
+and the name of the container. The container must be empty. If you'd like to check
+if the container is empty, execute a ``HEAD`` request against the container. Once
+you have successfully removed the container, you will be able to reuse the container name.
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{api version}/{account}/{container} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+
+
+HTTP Response
+~~~~~~~~~~~~~
+
+``204``
+
+:Description: The container was removed.
+:Status Code: ``NoContent``
+
diff --git a/doc/radosgw/swift/java.rst b/doc/radosgw/swift/java.rst
new file mode 100644
index 000000000..8977a3b16
--- /dev/null
+++ b/doc/radosgw/swift/java.rst
@@ -0,0 +1,175 @@
+.. _java_swift:
+
+=====================
+ Java Swift Examples
+=====================
+
+Setup
+=====
+
+The following examples may require some or all of the following Java
+classes to be imported:
+
+.. code-block:: java
+
+ import org.javaswift.joss.client.factory.AccountConfig;
+ import org.javaswift.joss.client.factory.AccountFactory;
+ import org.javaswift.joss.client.factory.AuthenticationMethod;
+ import org.javaswift.joss.model.Account;
+ import org.javaswift.joss.model.Container;
+ import org.javaswift.joss.model.StoredObject;
+ import java.io.File;
+ import java.io.IOException;
+ import java.util.*;
+
+
+Create a Connection
+===================
+
+This creates a connection so that you can interact with the server:
+
+.. code-block:: java
+
+ String username = "USERNAME";
+ String password = "PASSWORD";
+ String authUrl = "https://radosgw.endpoint/auth/1.0";
+
+ AccountConfig config = new AccountConfig();
+ config.setUsername(username);
+ config.setPassword(password);
+ config.setAuthUrl(authUrl);
+ config.setAuthenticationMethod(AuthenticationMethod.BASIC);
+ Account account = new AccountFactory(config).createAccount();
+
+
+Create a Container
+==================
+
+This creates a new container called ``my-new-container``:
+
+.. code-block:: java
+
+ Container container = account.getContainer("my-new-container");
+ container.create();
+
+
+Create an Object
+================
+
+This creates an object ``foo.txt`` from the file named ``foo.txt`` in
+the container ``my-new-container``:
+
+.. code-block:: java
+
+ Container container = account.getContainer("my-new-container");
+ StoredObject object = container.getObject("foo.txt");
+ object.uploadObject(new File("foo.txt"));
+
+
+Add/Update Object Metadata
+==========================
+
+This adds the metadata key-value pair ``key``:``value`` to the object named
+``foo.txt`` in the container ``my-new-container``:
+
+.. code-block:: java
+
+ Container container = account.getContainer("my-new-container");
+ StoredObject object = container.getObject("foo.txt");
+ Map<String, Object> metadata = new TreeMap<String, Object>();
+ metadata.put("key", "value");
+ object.setMetadata(metadata);
+
+
+List Owned Containers
+=====================
+
+This gets a list of Containers that you own.
+This also prints out the container name.
+
+.. code-block:: java
+
+ Collection<Container> containers = account.list();
+ for (Container currentContainer : containers) {
+ System.out.println(currentContainer.getName());
+ }
+
+The output will look something like this::
+
+ mahbuckat1
+ mahbuckat2
+ mahbuckat3
+
+
+List a Container's Content
+==========================
+
+This gets a list of objects in the container ``my-new-container``; and, it also
+prints out each object's name, the file size, and last modified date:
+
+.. code-block:: java
+
+ Container container = account.getContainer("my-new-container");
+ Collection<StoredObject> objects = container.list();
+ for (StoredObject currentObject : objects) {
+ System.out.println(currentObject.getName());
+ }
+
+The output will look something like this::
+
+ myphoto1.jpg
+ myphoto2.jpg
+
+
+Retrieve an Object's Metadata
+=============================
+
+This retrieves metadata and gets the MIME type for an object named ``foo.txt``
+in a container named ``my-new-container``:
+
+.. code-block:: java
+
+ Container container = account.getContainer("my-new-container");
+ StoredObject object = container.getObject("foo.txt");
+ Map<String, Object> returnedMetadata = object.getMetadata();
+ for (String name : returnedMetadata.keySet()) {
+ System.out.println("META / "+name+": "+returnedMetadata.get(name));
+ }
+
+
+Retrieve an Object
+==================
+
+This downloads the object ``foo.txt`` in the container ``my-new-container``
+and saves it in ``./outfile.txt``:
+
+.. code-block:: java
+
+ Container container = account.getContainer("my-new-container");
+ StoredObject object = container.getObject("foo.txt");
+ object.downloadObject(new File("outfile.txt"));
+
+
+Delete an Object
+================
+
+This deletes the object ``goodbye.txt`` in the container "my-new-container":
+
+.. code-block:: java
+
+ Container container = account.getContainer("my-new-container");
+ StoredObject object = container.getObject("goodbye.txt");
+ object.delete();
+
+
+Delete a Container
+==================
+
+This deletes a container named "my-new-container":
+
+.. code-block:: java
+
+ Container container = account.getContainer("my-new-container");
+ container.delete();
+
+.. note:: The container must be empty! Otherwise it won't work!
diff --git a/doc/radosgw/swift/objectops.rst b/doc/radosgw/swift/objectops.rst
new file mode 100644
index 000000000..fc8d21967
--- /dev/null
+++ b/doc/radosgw/swift/objectops.rst
@@ -0,0 +1,271 @@
+===================
+ Object Operations
+===================
+
+An object is a container for storing data and metadata. A container may
+have many objects, but the object names must be unique. This API enables a
+client to create an object, set access controls and metadata, retrieve an
+object's data and metadata, and delete an object. Since this API makes requests
+related to information in a particular user's account, all requests in this API
+must be authenticated unless the container or object's access control is
+deliberately made publicly accessible (i.e., allows anonymous requests).
+
+
+Create/Update an Object
+=======================
+
+To create a new object, make a ``PUT`` request with the API version, account,
+container name and the name of the new object. You must have write permission
+on the container to create or update an object. The object name must be
+unique within the container. The ``PUT`` request is not idempotent, so if you
+do not use a unique name, the request will update the object. However, you may
+use pseudo-hierarchical syntax in your object name to distinguish it from
+another object of the same name if it is under a different pseudo-hierarchical
+directory. You may include access control headers and metadata headers in the
+request.
+
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{api version}/{account}/{container}/{object} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+
+
+Request Headers
+~~~~~~~~~~~~~~~
+
+``ETag``
+
+:Description: An MD5 hash of the object's contents. Recommended.
+:Type: String
+:Required: No
+
+
+``Content-Type``
+
+:Description: The type of content the object contains.
+:Type: String
+:Required: No
+
+
+``Transfer-Encoding``
+
+:Description: Indicates whether the object is part of a larger aggregate object.
+:Type: String
+:Valid Values: ``chunked``
+:Required: No
+
+
+Copy an Object
+==============
+
+Copying an object allows you to make a server-side copy of an object, so that
+you don't have to download it and upload it under another container/name.
+To copy the contents of one object to another object, you may make either a
+``PUT`` request or a ``COPY`` request with the API version, account, and the
+container name. For a ``PUT`` request, use the destination container and object
+name in the request, and the source container and object in the request header.
+For a ``COPY`` request, use the source container and object in the request, and
+the destination container and object in the request header. You must have write
+permission on the container to copy an object. The destination object name must be
+unique within the container. The request is not idempotent, so if you do not use
+a unique name, the request will update the destination object. However, you may
+use pseudo-hierarchical syntax in your object name to distinguish the destination
+object from the source object of the same name if it is under a different
+pseudo-hierarchical directory. You may include access control headers and metadata
+headers in the request.
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /{api version}/{account}/{dest-container}/{dest-object} HTTP/1.1
+ X-Copy-From: {source-container}/{source-object}
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+
+
+or alternatively:
+
+::
+
+ COPY /{api version}/{account}/{source-container}/{source-object} HTTP/1.1
+ Destination: {dest-container}/{dest-object}
+
+Request Headers
+~~~~~~~~~~~~~~~
+
+``X-Copy-From``
+
+:Description: Used with a ``PUT`` request to define the source container/object path.
+:Type: String
+:Required: Yes, if using ``PUT``
+
+
+``Destination``
+
+:Description: Used with a ``COPY`` request to define the destination container/object path.
+:Type: String
+:Required: Yes, if using ``COPY``
+
+
+``If-Modified-Since``
+
+:Description: Only copies if modified since the date/time of the source object's ``last_modified`` attribute.
+:Type: Date
+:Required: No
+
+
+``If-Unmodified-Since``
+
+:Description: Only copies if not modified since the date/time of the source object's ``last_modified`` attribute.
+:Type: Date
+:Required: No
+
+``Copy-If-Match``
+
+:Description: Copies only if the ETag in the request matches the source object's ETag.
+:Type: ETag.
+:Required: No
+
+
+``Copy-If-None-Match``
+
+:Description: Copies only if the ETag in the request does not match the source object's ETag.
+:Type: ETag.
+:Required: No
+
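+A hedged python-swiftclient sketch of the ``PUT``-based copy (container and object names are
+placeholders; ``conn`` is a connection like the one in the :ref:`python_swift` examples):
+
+.. code-block:: python
+
+    # Server-side copy via PUT with X-Copy-From; the object body is not
+    # downloaded or re-uploaded by the client.
+    conn.put_object('dest-container', 'dest-object',
+                    contents='',
+                    headers={'X-Copy-From': 'source-container/source-object'})
+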
+
+Delete an Object
+================
+
+To delete an object, make a ``DELETE`` request with the API version, account,
+container and object name. You must have write permissions on the container to delete
+an object within it. Once you have successfully deleted the object, you will be able to
+reuse the object name.
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /{api version}/{account}/{container}/{object} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+
+
+Get an Object
+=============
+
+To retrieve an object, make a ``GET`` request with the API version, account,
+container and object name. You must have read permissions on the container to
+retrieve an object within it.
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{api version}/{account}/{container}/{object} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+
+
+
+Request Headers
+~~~~~~~~~~~~~~~
+
+``range``
+
+:Description: To retrieve a subset of an object's contents, you may specify a byte range.
+:Type: String
+:Required: No
+
+
+``If-Modified-Since``
+
+:Description: Gets only if modified since the date/time of the object's ``last_modified`` attribute.
+:Type: Date
+:Required: No
+
+
+``If-Unmodified-Since``
+
+:Description: Gets only if not modified since the date/time of the object's ``last_modified`` attribute.
+:Type: Date
+:Required: No
+
+``Copy-If-Match``
+
+:Description: Gets only if the ETag in the request matches the object's ETag.
+:Type: ETag.
+:Required: No
+
+
+``Copy-If-None-Match``
+
+:Description: Gets only if the ETag in the request does not match the object's ETag.
+:Type: ETag.
+:Required: No
+
+
+
+Response Headers
+~~~~~~~~~~~~~~~~
+
+``Content-Range``
+
+:Description: The range of the subset of object contents. Returned only if the range header field was specified in the request.
+
+
+Get Object Metadata
+===================
+
+To retrieve an object's metadata, make a ``HEAD`` request with the API version,
+account, container and object name. You must have read permissions on the
+container to retrieve metadata from an object within the container. This request
+returns the same header information as the request for the object itself, but
+it does not return the object's data.
+
+Syntax
+~~~~~~
+
+::
+
+ HEAD /{api version}/{account}/{container}/{object} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+
+
+
+Add/Update Object Metadata
+==========================
+
+To add metadata to an object, make a ``POST`` request with the API version,
+account, container and object name. You must have write permissions on the
+parent container to add or update metadata.
+
+
+Syntax
+~~~~~~
+
+::
+
+ POST /{api version}/{account}/{container}/{object} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+
+Request Headers
+~~~~~~~~~~~~~~~
+
+``X-Object-Meta-{key}``
+
+:Description: A user-defined meta data key that takes an arbitrary string value.
+:Type: String
+:Required: No
+
diff --git a/doc/radosgw/swift/python.rst b/doc/radosgw/swift/python.rst
new file mode 100644
index 000000000..28d92d7cc
--- /dev/null
+++ b/doc/radosgw/swift/python.rst
@@ -0,0 +1,114 @@
+.. _python_swift:
+
+=====================
+Python Swift Examples
+=====================
+
+Create a Connection
+===================
+
+This creates a connection so that you can interact with the server:
+
+.. code-block:: python
+
+ import swiftclient
+ user = 'account_name:username'
+ key = 'your_api_key'
+
+ conn = swiftclient.Connection(
+ user=user,
+ key=key,
+ authurl='https://objects.dreamhost.com/auth',
+ )
+
+
+Create a Container
+==================
+
+This creates a new container called ``my-new-container``:
+
+.. code-block:: python
+
+ container_name = 'my-new-container'
+ conn.put_container(container_name)
+
+
+Create an Object
+================
+
+This creates a file ``hello.txt`` from the file named ``my_hello.txt``:
+
+.. code-block:: python
+
+ with open('my_hello.txt', 'r') as hello_file:
+     conn.put_object(container_name, 'hello.txt',
+                     contents=hello_file.read(),
+                     content_type='text/plain')
+
+
+List Owned Containers
+=====================
+
+This gets a list of containers that you own, and prints out the container name:
+
+.. code-block:: python
+
+ for container in conn.get_account()[1]:
+     print(container['name'])
+
+The output will look something like this::
+
+ mahbuckat1
+ mahbuckat2
+ mahbuckat3
+
+List a Container's Content
+==========================
+
+This gets a list of objects in the container, and prints out each
+object's name, the file size, and last modified date:
+
+.. code-block:: python
+
+ for data in conn.get_container(container_name)[1]:
+     print('{0}\t{1}\t{2}'.format(data['name'], data['bytes'], data['last_modified']))
+
+The output will look something like this::
+
+ myphoto1.jpg 251262 2011-08-08T21:35:48.000Z
+ myphoto2.jpg 262518 2011-08-08T21:38:01.000Z
+
+
+Retrieve an Object
+==================
+
+This downloads the object ``hello.txt`` and saves it in
+``./my_hello.txt``:
+
+.. code-block:: python
+
+ obj_tuple = conn.get_object(container_name, 'hello.txt')
+ with open('my_hello.txt', 'wb') as my_hello:
+     my_hello.write(obj_tuple[1])
+
+
+Delete an Object
+================
+
+This deletes the object ``hello.txt``:
+
+.. code-block:: python
+
+ conn.delete_object(container_name, 'hello.txt')
+
+Delete a Container
+==================
+
+.. note::
+
+ The container must be empty! Otherwise the request won't work!
+
+.. code-block:: python
+
+ conn.delete_container(container_name)
+
diff --git a/doc/radosgw/swift/ruby.rst b/doc/radosgw/swift/ruby.rst
new file mode 100644
index 000000000..a20b66d88
--- /dev/null
+++ b/doc/radosgw/swift/ruby.rst
@@ -0,0 +1,119 @@
+.. _ruby_swift:
+
+=====================
+ Ruby Swift Examples
+=====================
+
+Create a Connection
+===================
+
+This creates a connection so that you can interact with the server:
+
+.. code-block:: ruby
+
+ require 'cloudfiles'
+ username = 'account_name:user_name'
+ api_key = 'your_secret_key'
+
+ conn = CloudFiles::Connection.new(
+ :username => username,
+ :api_key => api_key,
+ :auth_url => 'http://objects.dreamhost.com/auth'
+ )
+
+
+Create a Container
+==================
+
+This creates a new container called ``my-new-container``
+
+.. code-block:: ruby
+
+ container = conn.create_container('my-new-container')
+
+
+Create an Object
+================
+
+This creates a file ``hello.txt`` from the file named ``my_hello.txt``
+
+.. code-block:: ruby
+
+ obj = container.create_object('hello.txt')
+ obj.load_from_filename('./my_hello.txt')
+ obj.content_type = 'text/plain'
+
+
+
+List Owned Containers
+=====================
+
+This gets a list of Containers that you own, and also prints out
+the container name:
+
+.. code-block:: ruby
+
+ conn.containers.each do |container|
+ puts container
+ end
+
+The output will look something like this::
+
+ mahbuckat1
+ mahbuckat2
+ mahbuckat3
+
+
+List a Container's Contents
+===========================
+
+This gets a list of objects in the container, and prints out each
+object's name, the file size, and last modified date:
+
+.. code-block:: ruby
+
+ require 'date' # not necessary in the next version
+
+ container.objects_detail.each do |name, data|
+ puts "#{name}\t#{data[:bytes]}\t#{data[:last_modified]}"
+ end
+
+The output will look something like this::
+
+ myphoto1.jpg 251262 2011-08-08T21:35:48.000Z
+ myphoto2.jpg 262518 2011-08-08T21:38:01.000Z
+
+
+
+Retrieve an Object
+==================
+
+This downloads the object ``hello.txt`` and saves it in
+``./my_hello.txt``:
+
+.. code-block:: ruby
+
+ obj = container.object('hello.txt')
+ obj.save_to_filename('./my_hello.txt')
+
+
+Delete an Object
+================
+
+This deletes the object ``goodbye.txt``:
+
+.. code-block:: ruby
+
+ container.delete_object('goodbye.txt')
+
+
+Delete a Container
+==================
+
+.. note::
+
+ The container must be empty! Otherwise the request won't work!
+
+.. code-block:: ruby
+
+ conn.delete_container('my-new-container')
diff --git a/doc/radosgw/swift/serviceops.rst b/doc/radosgw/swift/serviceops.rst
new file mode 100644
index 000000000..a00f3d807
--- /dev/null
+++ b/doc/radosgw/swift/serviceops.rst
@@ -0,0 +1,76 @@
+====================
+ Service Operations
+====================
+
+To retrieve data about our Swift-compatible service, you may execute ``GET``
+requests using the ``X-Storage-Url`` value retrieved during authentication.
+
+List Containers
+===============
+
+A ``GET`` request that specifies the API version and the account will return
+a list of containers for a particular user account. Since the request returns
+a particular user's containers, the request requires an authentication token.
+The request cannot be made anonymously.
+
+Syntax
+~~~~~~
+
+::
+
+ GET /{api version}/{account} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+
+
+
+Request Parameters
+~~~~~~~~~~~~~~~~~~
+
+``limit``
+
+:Description: Limits the number of results to the specified value.
+:Type: Integer
+:Required: No
+
+``format``
+
+:Description: Defines the format of the result.
+:Type: String
+:Valid Values: ``json`` | ``xml``
+:Required: No
+
+
+``marker``
+
+:Description: Returns a list of results greater than the marker value.
+:Type: String
+:Required: No
+
+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+The response contains a list of containers, or returns with an HTTP
+204 response code.
+
+``account``
+
+:Description: A list for account information.
+:Type: Container
+
+``container``
+
+:Description: The list of containers.
+:Type: Container
+
+``name``
+
+:Description: The name of a container.
+:Type: String
+
+``bytes``
+
+:Description: The size of the container.
+:Type: Integer
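+
+A brief python-swiftclient sketch of this listing (connection details are placeholders, as in
+the Python examples page; the marker value is illustrative):
+
+.. code-block:: python
+
+    import swiftclient
+
+    conn = swiftclient.Connection(user='account_name:username',
+                                  key='your_api_key',
+                                  authurl='https://objects.example.com/auth')
+
+    # Fetch at most 10 containers, starting after the given marker.
+    headers, containers = conn.get_account(marker='mahbuckat1', limit=10)
+    for container in containers:
+        print(container['name'], container['bytes'])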
diff --git a/doc/radosgw/swift/tempurl.rst b/doc/radosgw/swift/tempurl.rst
new file mode 100644
index 000000000..79b392de6
--- /dev/null
+++ b/doc/radosgw/swift/tempurl.rst
@@ -0,0 +1,102 @@
+====================
+ Temp URL Operations
+====================
+
+To allow temporary access to objects (e.g., for `GET` requests) without the
+need to share credentials, temp url functionality is supported by the swift
+endpoint of radosgw. To use it, the value of `X-Account-Meta-Temp-URL-Key`
+and optionally `X-Account-Meta-Temp-URL-Key-2` must first be set. The Temp URL
+functionality relies on an HMAC-SHA1 signature against these secret
+keys.
+
+.. note:: If you are planning to expose Temp URL functionality for the
+ Swift API, it is strongly recommended to include the Swift
+ account name in the endpoint definition, so as to most
+ closely emulate the behavior of native OpenStack Swift. To
+ do so, set the ``ceph.conf`` configuration option ``rgw
+ swift account in url = true``, and update your Keystone
+ endpoint to the URL suffix ``/v1/AUTH_%(tenant_id)s``
+ (instead of just ``/v1``).
+
+
+POST Temp-URL Keys
+==================
+
+A ``POST`` request to the Swift account with the required key will set
+the secret temp URL key for the account, with which temporary URL
+access to the account's objects can be granted. Up to two keys are supported, and
+signatures are checked against both keys, if present, so that keys
+can be rotated without invalidating the temporary URLs.
+
+.. note:: Native OpenStack Swift also supports the option to set
+ temporary URL keys at the container level, issuing a
+ ``POST`` or ``PUT`` request against a container that sets
+ ``X-Container-Meta-Temp-URL-Key`` or
+ ``X-Container-Meta-Temp-URL-Key-2``. This functionality is
+ not supported in radosgw; temporary URL keys can only be set
+ and used at the account level.
+
+Syntax
+~~~~~~
+
+::
+
+ POST /{api version}/{account} HTTP/1.1
+ Host: {fqdn}
+ X-Auth-Token: {auth-token}
+
+Request Headers
+~~~~~~~~~~~~~~~
+
+``X-Account-Meta-Temp-URL-Key``
+
+:Description: A user-defined key that takes an arbitrary string value.
+:Type: String
+:Required: Yes
+
+``X-Account-Meta-Temp-URL-Key-2``
+
+:Description: A user-defined key that takes an arbitrary string value.
+:Type: String
+:Required: No
+
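+An illustrative python-swiftclient sketch of this request (the connection details and the
+key value ``secret`` are placeholders; ``secret`` matches the key used in the sample script
+in the next section):
+
+.. code-block:: python
+
+    import swiftclient
+
+    conn = swiftclient.Connection(user='account_name:username',
+                                  key='your_api_key',
+                                  authurl='https://objectstore.example.com/auth')
+
+    # Set the account-level secret used to sign temporary URLs.
+    conn.post_account(headers={'X-Account-Meta-Temp-URL-Key': 'secret'})
+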
+
+GET Temp-URL Objects
+====================
+
+Temporary URL uses a cryptographic HMAC-SHA1 signature, which includes
+the following elements:
+
+#. The value of the request method, "GET" for instance
+#. The expiry time, in the format of seconds since the epoch, i.e. Unix time
+#. The request path, starting from "v1" onwards
+
+The above items are normalized with newlines appended between them,
+and an HMAC is generated using the SHA-1 hashing algorithm against one
+of the Temp URL Keys posted earlier.
+
+A sample python script to demonstrate the above is given below:
+
+
+.. code-block:: python
+
+ import hmac
+ from hashlib import sha1
+ from time import time
+
+ method = 'GET'
+ host = 'https://objectstore.example.com/swift'
+ duration_in_seconds = 300 # Duration for which the url is valid
+ expires = int(time() + duration_in_seconds)
+ path = '/v1/your-bucket/your-object'
+ key = b'secret'  # one of the Temp URL keys set on the account
+ hmac_body = ('%s\n%s\n%s' % (method, expires, path)).encode('utf-8')
+ sig = hmac.new(key, hmac_body, sha1).hexdigest()
+ rest_uri = "{host}{path}?temp_url_sig={sig}&temp_url_expires={expires}".format(
+     host=host, path=path, sig=sig, expires=expires)
+ print(rest_uri)
+
+ # Example Output
+ # https://objectstore.example.com/swift/v1/your-bucket/your-object?temp_url_sig=ff4657876227fc6025f04fcf1e82818266d022c6&temp_url_expires=1423200992
+
diff --git a/doc/radosgw/swift/tutorial.rst b/doc/radosgw/swift/tutorial.rst
new file mode 100644
index 000000000..5d2889b19
--- /dev/null
+++ b/doc/radosgw/swift/tutorial.rst
@@ -0,0 +1,62 @@
+==========
+ Tutorial
+==========
+
+The Swift-compatible API tutorials follow a simple container-based object
+lifecycle. The first step requires you to set up a connection between your
+client and the RADOS Gateway server. Then, you may follow a natural
+container and object lifecycle, including adding and retrieving object
+metadata. See example code for the following languages:
+
+- `Java`_
+- `Python`_
+- `Ruby`_
+
+
+.. ditaa::
+
+ +----------------------------+ +-----------------------------+
+ | | | |
+ | Create a Connection |------->| Create a Container |
+ | | | |
+ +----------------------------+ +-----------------------------+
+ |
+ +--------------------------------------+
+ |
+ v
+ +----------------------------+ +-----------------------------+
+ | | | |
+ | Create an Object |------->| Add/Update Object Metadata |
+ | | | |
+ +----------------------------+ +-----------------------------+
+ |
+ +--------------------------------------+
+ |
+ v
+ +----------------------------+ +-----------------------------+
+ | | | |
+ | List Owned Containers |------->| List a Container's Contents |
+ | | | |
+ +----------------------------+ +-----------------------------+
+ |
+ +--------------------------------------+
+ |
+ v
+ +----------------------------+ +-----------------------------+
+ | | | |
+ | Get an Object's Metadata |------->| Retrieve an Object |
+ | | | |
+ +----------------------------+ +-----------------------------+
+ |
+ +--------------------------------------+
+ |
+ v
+ +----------------------------+ +-----------------------------+
+ | | | |
+ | Delete an Object |------->| Delete a Container |
+ | | | |
+ +----------------------------+ +-----------------------------+
+
+.. _Java: ../java
+.. _Python: ../python
+.. _Ruby: ../ruby
diff --git a/doc/radosgw/sync-modules.rst b/doc/radosgw/sync-modules.rst
new file mode 100644
index 000000000..ca201e90a
--- /dev/null
+++ b/doc/radosgw/sync-modules.rst
@@ -0,0 +1,99 @@
+============
+Sync Modules
+============
+
+.. versionadded:: Kraken
+
+The :ref:`multisite` functionality of RGW introduced in Jewel made it possible to
+create multiple zones and mirror data and metadata between them. ``Sync Modules``
+are built atop the multisite framework and allow data and
+metadata to be forwarded to a different external tier. A sync module allows a set of actions
+to be performed whenever a change in data occurs (metadata ops like bucket or
+user creation etc. are also regarded as changes in data). Because rgw multisite
+changes are eventually consistent at remote sites, changes are propagated
+asynchronously. This unlocks use cases such as backing up the
+object storage to an external cloud cluster, a custom backup solution using
+tape drives, or indexing metadata in ElasticSearch.
+
+A sync module configuration is local to a zone. The sync module determines
+whether the zone exports data or can only consume data that was modified in
+another zone. As of Luminous the supported sync plugins are `elasticsearch`_;
+``rgw``, which is the default sync plugin that synchronizes data between the
+zones; and ``log``, which is a trivial sync plugin that logs the metadata
+operations that happen in the remote zones. The following docs are written with
+the example of a zone using the `elasticsearch sync module`_; the process is similar
+for configuring any sync plugin.
+
+.. toctree::
+ :maxdepth: 1
+
+ ElasticSearch Sync Module <elastic-sync-module>
+ Cloud Sync Module <cloud-sync-module>
+ PubSub Module <pubsub-module>
+ Archive Sync Module <archive-sync-module>
+
+.. note:: ``rgw`` is the default sync plugin and there is no need to explicitly
+   configure it.
+
+Requirements and Assumptions
+----------------------------
+
+Let us assume a simple multisite configuration as described in the :ref:`multisite`
+docs, with two zones ``us-east`` and ``us-west``. Let's add a third zone
+``us-east-es``, a zone that only processes metadata from the other
+sites. This zone can be in the same ceph cluster as ``us-east`` or in a different one.
+The zone would only consume metadata from the other zones, and RGWs in this zone
+will not serve any end-user requests directly.
+
+
+Configuring Sync Modules
+------------------------
+
+Create the third zone in the same way as in the :ref:`multisite` docs, for example
+
+::
+
+ # radosgw-admin zone create --rgw-zonegroup=us --rgw-zone=us-east-es \
+ --access-key={system-key} --secret={secret} --endpoints=http://rgw-es:80
+
+
+
+A sync module can be configured for this zone via the following
+
+::
+
+ # radosgw-admin zone modify --rgw-zone={zone-name} --tier-type={tier-type} --tier-config={set of key=value pairs}
+
+
+For example in the ``elasticsearch`` sync module
+
+::
+
+ # radosgw-admin zone modify --rgw-zone={zone-name} --tier-type=elasticsearch \
+ --tier-config=endpoint=http://localhost:9200,num_shards=10,num_replicas=1
+
+
+For the various supported tier-config options refer to the `elasticsearch sync module`_ docs
+
+Finally update the period
+
+
+::
+
+ # radosgw-admin period update --commit
+
+
+Now start the radosgw in the zone
+
+::
+
+ # systemctl start ceph-radosgw@rgw.`hostname -s`
+ # systemctl enable ceph-radosgw@rgw.`hostname -s`
+
+
+
+.. _`elasticsearch sync module`: ../elastic-sync-module
+.. _`elasticsearch`: ../elastic-sync-module
+.. _`cloud sync module`: ../cloud-sync-module
+.. _`pubsub module`: ../pubsub-module
+.. _`archive sync module`: ../archive-sync-module
diff --git a/doc/radosgw/troubleshooting.rst b/doc/radosgw/troubleshooting.rst
new file mode 100644
index 000000000..4a084e82a
--- /dev/null
+++ b/doc/radosgw/troubleshooting.rst
@@ -0,0 +1,208 @@
+=================
+ Troubleshooting
+=================
+
+
+The Gateway Won't Start
+=======================
+
+If you cannot start the gateway (i.e., there is no existing ``pid``),
+check to see if there is an existing ``.asok`` file from another
+user. If an ``.asok`` file from another user exists and there is no
+running ``pid``, remove the ``.asok`` file and try to start the
+process again. This may occur when you start the process as a ``root`` user and
+the startup script is trying to start the process as a
+``www-data`` or ``apache`` user and an existing ``.asok`` is
+preventing the script from starting the daemon.
+
+The radosgw init script (/etc/init.d/radosgw) also has a verbose argument that
+can provide some insight as to what could be the issue::
+
+ /etc/init.d/radosgw start -v
+
+or ::
+
+ /etc/init.d/radosgw start --verbose
+
+HTTP Request Errors
+===================
+
+Examining the access and error logs for the web server itself is
+probably the first step in identifying what is going on. If there is
+a 500 error, that usually indicates a problem communicating with the
+``radosgw`` daemon. Ensure the daemon is running, its socket path is
+configured, and that the web server is looking for it in the proper
+location.
+
+
+Crashed ``radosgw`` process
+===========================
+
+If the ``radosgw`` process dies, you will normally see a 500 error
+from the web server (apache, nginx, etc.). In that situation, simply
+restarting radosgw will restore service.
+
+To diagnose the cause of the crash, check the log in ``/var/log/ceph``
+and/or the core file (if one was generated).
+
+
+Blocked ``radosgw`` Requests
+============================
+
+If some (or all) radosgw requests appear to be blocked, you can get
+some insight into the internal state of the ``radosgw`` daemon via
+its admin socket. By default, there will be a socket configured to
+reside in ``/var/run/ceph``, and the daemon can be queried with::
+
+ ceph daemon /var/run/ceph/client.rgw help
+
+ help list available commands
+ objecter_requests show in-progress osd requests
+ perfcounters_dump dump perfcounters value
+ perfcounters_schema dump perfcounters schema
+ version get protocol version
+
+Of particular interest::
+
+ ceph daemon /var/run/ceph/client.rgw objecter_requests
+ ...
+
+will dump information about current in-progress requests with the
+RADOS cluster. This allows one to identify if any requests are blocked
+by a non-responsive OSD. For example, one might see::
+
+ { "ops": [
+ { "tid": 1858,
+ "pg": "2.d2041a48",
+ "osd": 1,
+ "last_sent": "2012-03-08 14:56:37.949872",
+ "attempts": 1,
+ "object_id": "fatty_25647_object1857",
+ "object_locator": "@2",
+ "snapid": "head",
+ "snap_context": "0=[]",
+ "mtime": "2012-03-08 14:56:37.949813",
+ "osd_ops": [
+ "write 0~4096"]},
+ { "tid": 1873,
+ "pg": "2.695e9f8e",
+ "osd": 1,
+ "last_sent": "2012-03-08 14:56:37.970615",
+ "attempts": 1,
+ "object_id": "fatty_25647_object1872",
+ "object_locator": "@2",
+ "snapid": "head",
+ "snap_context": "0=[]",
+ "mtime": "2012-03-08 14:56:37.970555",
+ "osd_ops": [
+ "write 0~4096"]}],
+ "linger_ops": [],
+ "pool_ops": [],
+ "pool_stat_ops": [],
+ "statfs_ops": []}
+
+In this dump, two requests are in progress. The ``last_sent`` field is
+the time the RADOS request was sent. If this is a while ago, it suggests
+that the OSD is not responding. For example, for request 1858, you could
+check the OSD status with::
+
+ ceph pg map 2.d2041a48
+
+ osdmap e9 pg 2.d2041a48 (2.0) -> up [1,0] acting [1,0]
+
+This tells us to look at ``osd.1``, the primary OSD for this PG::
+
+ ceph daemon osd.1 ops
+ { "num_ops": 651,
+ "ops": [
+ { "description": "osd_op(client.4124.0:1858 fatty_25647_object1857 [write 0~4096] 2.d2041a48)",
+ "received_at": "1331247573.344650",
+ "age": "25.606449",
+ "flag_point": "waiting for sub ops",
+ "client_info": { "client": "client.4124",
+ "tid": 1858}},
+ ...
+
+The ``flag_point`` field indicates that the OSD is currently waiting
+for replicas to respond, in this case ``osd.0``.
+
+
+Java S3 API Troubleshooting
+===========================
+
+
+Peer Not Authenticated
+----------------------
+
+You may receive an error that looks like this::
+
+ [java] INFO: Unable to execute HTTP request: peer not authenticated
+
+The Java SDK for S3 requires a valid certificate from a recognized certificate
+authority, because it uses HTTPS by default. If you are just testing the Ceph
+Object Storage services, you can resolve this problem in a few ways:
+
+#. Prepend the IP address or hostname with ``http://``. For example, change this::
+
+ conn.setEndpoint("myserver");
+
+ To::
+
+      conn.setEndpoint("http://myserver");
+
+#. After setting your credentials, add a client configuration and set the
+ protocol to ``Protocol.HTTP``. ::
+
+ AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
+
+ ClientConfiguration clientConfig = new ClientConfiguration();
+ clientConfig.setProtocol(Protocol.HTTP);
+
+ AmazonS3 conn = new AmazonS3Client(credentials, clientConfig);
+
+
+
+405 MethodNotAllowed
+--------------------
+
+If you receive a 405 error such as the one below, check to see if you have the
+S3 subdomain set up correctly. You will need a wildcard entry in your DNS
+record for subdomain functionality to work properly. Also, check to ensure
+that the default site is disabled. ::
+
+ [java] Exception in thread "main" Status Code: 405, AWS Service: Amazon S3, AWS Request ID: null, AWS Error Code: MethodNotAllowed, AWS Error Message: null, S3 Extended Request ID: null
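+
+As a sketch of the wildcard DNS setup (all names and addresses here are
+hypothetical), the DNS zone needs a wildcard record pointing at the gateway::
+
+    objects.example.com.    IN  A      192.0.2.10
+    *.objects.example.com.  IN  CNAME  objects.example.com.
+
+and the gateway needs to know its own DNS name, via ``rgw dns name`` in
+``ceph.conf``::
+
+    rgw dns name = objects.example.com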
+
+
+
+Numerous objects in default.rgw.meta pool
+=========================================
+
+Clusters created prior to *jewel* have a metadata archival feature enabled by default, using the ``default.rgw.meta`` pool.
+This archive keeps all old versions of user and bucket metadata, resulting in large numbers of objects in the ``default.rgw.meta`` pool.
+
+Disabling the Metadata Heap
+---------------------------
+
+Users who want to disable this feature going forward should set the ``metadata_heap`` field to an empty string ``""``::
+
+ $ radosgw-admin zone get --rgw-zone=default > zone.json
+ [edit zone.json, setting "metadata_heap": ""]
+ $ radosgw-admin zone set --rgw-zone=default --infile=zone.json
+ $ radosgw-admin period update --commit
+
+This will stop new metadata from being written to the ``default.rgw.meta`` pool, but does not remove any existing objects or pool.
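+
+If you prefer to script the manual edit step above, a ``jq`` one-liner
+(assuming ``jq`` is installed) can clear the field::
+
+    $ radosgw-admin zone get --rgw-zone=default | jq '.metadata_heap = ""' > zone.json
+    $ radosgw-admin zone set --rgw-zone=default --infile=zone.json
+    $ radosgw-admin period update --commit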
+
+Cleaning the Metadata Heap Pool
+-------------------------------
+
+Clusters created prior to *jewel* normally use ``default.rgw.meta`` only for the metadata archival feature.
+
+However, from *luminous* onwards, radosgw uses :ref:`Pool Namespaces <radosgw-pool-namespaces>` within ``default.rgw.meta`` for an entirely different purpose, that is, to store ``user_keys`` and other critical metadata.
+
+Users should check the zone configuration before proceeding with any cleanup procedures::
+
+ $ radosgw-admin zone get --rgw-zone=default | grep default.rgw.meta
+ [should not match any strings]
+
+Having confirmed that the pool is not used for any purpose, users may safely delete all objects in the ``default.rgw.meta`` pool, or optionally, delete the entire pool itself.
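+
+As a sketch (these commands are destructive; double-check the zone
+configuration as above before running them), the objects can be inspected and
+then purged with::
+
+    $ rados -p default.rgw.meta ls --all | head   # inspect before deleting
+    $ rados purge default.rgw.meta --yes-i-really-really-mean-it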
diff --git a/doc/radosgw/vault.rst b/doc/radosgw/vault.rst
new file mode 100644
index 000000000..0f3cb8fd1
--- /dev/null
+++ b/doc/radosgw/vault.rst
@@ -0,0 +1,482 @@
+===========================
+HashiCorp Vault Integration
+===========================
+
+HashiCorp `Vault`_ can be used as a secure key management service for
+`Server-Side Encryption`_ (SSE-KMS).
+
+.. ditaa::
+
+ +---------+ +---------+ +-------+ +-------+
+ | Client | | RadosGW | | Vault | | OSD |
+ +---------+ +---------+ +-------+ +-------+
+ | create secret | | |
+ | key for key ID | | |
+ |-----------------+---------------->| |
+ | | | |
+ | upload object | | |
+ | with key ID | | |
+ |---------------->| request secret | |
+ | | key for key ID | |
+ | |---------------->| |
+ | |<----------------| |
+ | | return secret | |
+ | | key | |
+ | | | |
+ | | encrypt object | |
+ | | with secret key | |
+ | |--------------+ | |
+ | | | | |
+ | |<-------------+ | |
+ | | | |
+ | | store encrypted | |
+ | | object | |
+ | |------------------------------>|
+
+#. `Vault secrets engines`_
+#. `Vault authentication`_
+#. `Vault namespaces`_
+#. `Create a key in Vault`_
+#. `Configure the Ceph Object Gateway`_
+#. `Upload object`_
+
+Some examples below use the Vault command line utility to interact with
+Vault. You may need to set the following environment variable with the correct
+address of your Vault server to use this utility::
+
+ export VAULT_ADDR='https://vault-server-fqdn:8200'
+
+Vault secrets engines
+=====================
+
+Vault provides several secrets engines, which can store, generate, and encrypt
+data. Currently, the Object Gateway supports:
+
+- `KV secrets engine`_ version 2
+- `Transit engine`_
+
+KV secrets engine
+-----------------
+
+The KV secrets engine is used to store arbitrary key/value secrets in Vault. To
+enable the KV engine version 2 in Vault, use the following command::
+
+ vault secrets enable -path secret kv-v2
+
+The Object Gateway can be configured to use the KV engine version 2 with the
+following setting::
+
+ rgw crypt vault secret engine = kv
+
+Transit secrets engine
+----------------------
+
+The transit engine handles cryptographic functions on data in-transit. To enable
+it in Vault, use the following command::
+
+ vault secrets enable transit
+
+The Object Gateway can be configured to use the transit engine with the
+following setting::
+
+ rgw crypt vault secret engine = transit
+
+Vault authentication
+====================
+
+Vault supports several authentication mechanisms. Currently, the Object
+Gateway can be configured to authenticate to Vault using the
+`Token authentication method`_ or a `Vault agent`_.
+
+Most tokens in Vault have limited lifetimes and powers. The only
+sort of Vault token that does not have a lifetime is the root token.
+For all other tokens, it is necessary to periodically refresh them,
+either by performing the initial authentication again, or by renewing
+the token. Ceph does not have any logic to perform either operation.
+The simplest way to use Vault tokens with Ceph is to also run the
+Vault agent and have it refresh the token file. When the Vault agent
+is used in this mode, file system permissions can be used to restrict
+who has the use of tokens.
+
+Instead of having the Vault agent refresh a token file, it can be told
+to act as a proxy server. In this mode, the Vault agent will add a token
+to requests when necessary, before forwarding them on to the real
+server. The Vault agent will still handle token renewal just as it
+would when storing a token in the filesystem. In this mode, it is
+necessary to properly secure the network path rgw uses to reach the
+Vault agent, such as having the Vault agent listen only on localhost.
+
+Token policies for the object gateway
+-------------------------------------
+
+All Vault tokens have powers as specified by the policies attached
+to that token. Multiple policies may be associated with one
+token. You should only use the policy necessary for your
+configuration.
+
+When using the kv secret engine with the object gateway::
+
+ vault policy write rgw-kv-policy -<<EOF
+ path "secret/data/*" {
+ capabilities = ["read"]
+ }
+ EOF
+
+When using the transit secret engine with the object gateway::
+
+ vault policy write rgw-transit-policy -<<EOF
+ path "transit/keys/*" {
+ capabilities = [ "create", "update" ]
+ denied_parameters = {"exportable" = [], "allow_plaintext_backup" = [] }
+ }
+
+ path "transit/keys/*" {
+ capabilities = ["read", "delete"]
+ }
+
+ path "transit/keys/" {
+ capabilities = ["list"]
+ }
+
+ path "transit/keys/+/rotate" {
+ capabilities = [ "update" ]
+ }
+
+ path "transit/*" {
+ capabilities = [ "update" ]
+ }
+ EOF
+
+If you had previously used an older version of ceph with the
+transit secret engine, you might need the following policy::
+
+ vault policy write old-rgw-transit-policy -<<EOF
+ path "transit/export/encryption-key/*" {
+ capabilities = ["read"]
+ }
+ EOF
+
+
+Token authentication
+--------------------
+
+.. note:: Never use root tokens with Ceph in production environments.
+
+The token authentication method expects a Vault token to be present in a
+plaintext file. The Object Gateway can be configured to use token authentication
+with the following settings::
+
+ rgw crypt vault auth = token
+ rgw crypt vault token file = /run/.rgw-vault-token
+ rgw crypt vault addr = https://vault-server-fqdn:8200
+
+Adjust these settings to match your configuration.
+For security reasons, the token file must be readable by the Object Gateway
+only.
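+
+For testing, the token file can be populated manually with a token carrying
+one of the policies above (a sketch only; for production, prefer the Vault
+agent approach described below)::
+
+    vault token create -policy=rgw-kv-policy -format=json | \
+    jq -r .auth.client_token > /run/.rgw-vault-token
+    chmod 600 /run/.rgw-vault-token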
+
+You might set up the Vault agent's AppRole (the AppRole auth method must
+already be enabled in Vault) as follows::
+
+ vault write auth/approle/role/rgw-ap \
+ token_policies=rgw-transit-policy,default \
+ token_max_ttl=60m
+
+Change the policy here to match your configuration.
+
+Get the role-id::
+
+ vault read auth/approle/role/rgw-ap/role-id -format=json | \
+ jq -r .data.role_id
+
+Store the output in a file, such as ``/usr/local/etc/vault/.rgw-ap-role-id``.
+
+Get the secret-id::
+
+    vault write -f auth/approle/role/rgw-ap/secret-id -format=json | \
+    jq -r .data.secret_id
+
+Store the output in a file, such as ``/usr/local/etc/vault/.rgw-ap-secret-id``.
+
+Create configuration for the Vault agent, such as::
+
+ pid_file = "/run/rgw-vault-agent-pid"
+ auto_auth {
+ method "AppRole" {
+ mount_path = "auth/approle"
+ config = {
+ role_id_file_path ="/usr/local/etc/vault/.rgw-ap-role-id"
+ secret_id_file_path ="/usr/local/etc/vault/.rgw-ap-secret-id"
+ remove_secret_id_file_after_reading ="false"
+ }
+ }
+ sink "file" {
+ config = {
+ path = "/run/.rgw-vault-token"
+ }
+ }
+ }
+ vault {
+ address = "https://vault-server-fqdn:8200"
+ }
+
+Then use systemctl or another method of your choice to run
+a persistent daemon with the following arguments::
+
+ /usr/local/bin/vault agent -config=/usr/local/etc/vault/rgw-agent.hcl
+
+Once the vault agent is running, the token file should be populated
+with a valid token.
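+
+You can check that the file contains a valid token (a sketch, assuming the
+vault CLI is installed on the gateway host)::
+
+    VAULT_TOKEN=$(cat /run/.rgw-vault-token) vault token lookup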
+
+Vault agent
+-----------
+
+The Vault agent is a client daemon that provides authentication to Vault and
+manages token renewal and caching. It typically runs on the same host as the
+Object Gateway. With a Vault agent, it is possible to use other Vault
+authentication mechanisms, such as AppRole, AWS, Certs, JWT, and Azure.
+
+The Object Gateway can be configured to use a Vault agent with the following
+settings::
+
+ rgw crypt vault auth = agent
+ rgw crypt vault addr = http://127.0.0.1:8100
+
+You might set up the Vault agent's AppRole (the AppRole auth method must
+already be enabled in Vault) as follows::
+
+ vault write auth/approle/role/rgw-ap \
+ token_policies=rgw-transit-policy,default \
+ token_max_ttl=60m
+
+Change the policy here to match your configuration.
+
+Get the role-id::
+
+ vault read auth/approle/role/rgw-ap/role-id -format=json | \
+ jq -r .data.role_id
+
+Store the output in a file, such as ``/usr/local/etc/vault/.rgw-ap-role-id``.
+
+Get the secret-id::
+
+    vault write -f auth/approle/role/rgw-ap/secret-id -format=json | \
+    jq -r .data.secret_id
+
+Store the output in a file, such as ``/usr/local/etc/vault/.rgw-ap-secret-id``.
+
+Create configuration for the Vault agent, such as::
+
+ pid_file = "/run/rgw-vault-agent-pid"
+ auto_auth {
+ method "AppRole" {
+ mount_path = "auth/approle"
+ config = {
+ role_id_file_path ="/usr/local/etc/vault/.rgw-ap-role-id"
+ secret_id_file_path ="/usr/local/etc/vault/.rgw-ap-secret-id"
+ remove_secret_id_file_after_reading ="false"
+ }
+ }
+ }
+ cache {
+ use_auto_auth_token = true
+ }
+ listener "tcp" {
+ address = "127.0.0.1:8100"
+ tls_disable = true
+ }
+ vault {
+ address = "https://vault-server-fqdn:8200"
+ }
+
+Then use systemctl or another method of your choice to run
+a persistent daemon with the following arguments::
+
+ /usr/local/bin/vault agent -config=/usr/local/etc/vault/rgw-agent.hcl
+
+Once the vault agent is running, you should find it listening
+to port 8100 on localhost, and you should be able to interact
+with it using the vault command.
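+
+For example, as a quick sanity check against the agent's listener configured
+above::
+
+    VAULT_ADDR=http://127.0.0.1:8100 vault status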
+
+Vault namespaces
+================
+
+In the Enterprise version, Vault supports the concept of `namespaces`_, which
+allows centralized management for teams within an organization while ensuring
+that those teams operate within isolated environments known as tenants.
+
+The Object Gateway can be configured to access Vault within a particular
+namespace using the following configuration setting::
+
+ rgw crypt vault namespace = tenant1
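+
+Note that the vault command-line utility honors the ``VAULT_NAMESPACE``
+environment variable, so when verifying keys in a namespaced setup you would
+set it to match, for example::
+
+    export VAULT_NAMESPACE=tenant1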
+
+Create a key in Vault
+=====================
+
+.. note:: Keys for server-side encryption must be 256 bits long and base64
+   encoded.
+
+Using the KV engine
+-------------------
+
+A key for server-side encryption can be created in the KV version 2 engine using
+the command line utility, as in the following example::
+
+ vault kv put secret/myproject/mybucketkey key=$(openssl rand -base64 32)
+
+Sample output::
+
+ ====== Metadata ======
+ Key Value
+ --- -----
+ created_time 2019-08-29T17:01:09.095824999Z
+ deletion_time n/a
+ destroyed false
+ version 1
+
+Note that in the KV secrets engine, secrets are stored as key-value pairs, and
+the Gateway expects the key name to be ``key``, i.e. the secret must be in the
+form ``key=<secret key>``.
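+
+To verify that the secret has the expected form, it can be read back with the
+same path used above::
+
+    vault kv get secret/myproject/mybucketkey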
+
+Using the Transit engine
+------------------------
+
+Keys created for use with the Transit engine should no longer be marked
+exportable. They can be created with::
+
+ vault write -f transit/keys/mybucketkey
+
+The command above creates a keyring, which contains a key of type
+``aes256-gcm96`` by default. To verify that the key was correctly created, use
+the following command::
+
+    vault read transit/keys/mybucketkey
+
+Sample output::
+
+ Key Value
+ --- -----
+ derived false
+ exportable false
+ name mybucketkey
+ type aes256-gcm96
+
+Configure the Ceph Object Gateway
+=================================
+
+Edit the Ceph configuration file to enable Vault as a KMS backend for
+server-side encryption::
+
+ rgw crypt s3 kms backend = vault
+
+Choose the Vault authentication method, e.g.::
+
+ rgw crypt vault auth = token
+ rgw crypt vault token file = /run/.rgw-vault-token
+ rgw crypt vault addr = https://vault-server-fqdn:8200
+
+Or::
+
+ rgw crypt vault auth = agent
+ rgw crypt vault addr = http://localhost:8100
+
+Choose the secrets engine::
+
+ rgw crypt vault secret engine = kv
+
+Or::
+
+ rgw crypt vault secret engine = transit
+
+Optionally, set the Vault namespace where encryption keys will be fetched from::
+
+ rgw crypt vault namespace = tenant1
+
+Finally, the URLs where the Gateway will retrieve encryption keys from Vault can
+be restricted by setting a path prefix. For instance, the Gateway can be
+restricted to fetch KV keys as follows::
+
+ rgw crypt vault prefix = /v1/secret/data
+
+Or, when using the transit secret engine::
+
+ rgw crypt vault prefix = /v1/transit
+
+In the example above, the Gateway would only fetch transit encryption keys under
+``https://vault-server:8200/v1/transit``.
+
+You can use custom SSL certificates to authenticate to Vault with the help of
+the following options::
+
+ rgw crypt vault verify ssl = true
+ rgw crypt vault ssl cacert = /etc/ceph/vault.ca
+ rgw crypt vault ssl clientcert = /etc/ceph/vault.crt
+ rgw crypt vault ssl clientkey = /etc/ceph/vault.key
+
+where ``vault.ca`` is the CA certificate, and ``vault.key``/``vault.crt`` are
+the private key and SSL certificate generated for RGW to access the Vault
+server. It is highly recommended to keep ``rgw crypt vault verify ssl`` set to
+``true``; setting it to ``false`` is dangerous and should be avoided, except
+in tightly controlled test environments.
+
+Transit engine compatibility support
+------------------------------------
+The transit engine has compatibility support for previous
+versions of Ceph, which used the transit engine as a simple key store.
+
+There is a ``compat`` option which can be given to the transit
+engine to configure the compatibility support.
+
+To entirely disable backwards support, use::
+
+ rgw crypt vault secret engine = transit compat=0
+
+This will be the default in future versions, and is safe to use
+for new installs using the current version.
+
+This is the normal default with the current version::
+
+ rgw crypt vault secret engine = transit compat=1
+
+This enables the new engine for newly created objects,
+but still allows the old engine to be used for old objects.
+In order to access old and new objects, the vault token given
+to ceph must have both the old and new transit policies.
+
+To force use of only the old engine, use::
+
+ rgw crypt vault secret engine = transit compat=2
+
+This mode is automatically selected if the vault prefix
+ends in ``export/encryption-key``, which was the previously
+documented setting.
+
+Upload object
+=============
+
+When uploading an object to the Gateway, provide the SSE key ID in the request.
+As an example, for the kv engine, using the AWS command-line client::
+
+ aws --endpoint=http://radosgw:8000 s3 cp plaintext.txt s3://mybucket/encrypted.txt --sse=aws:kms --sse-kms-key-id myproject/mybucketkey
+
+As an example, for the transit engine (new flavor), using the AWS command-line client::
+
+ aws --endpoint=http://radosgw:8000 s3 cp plaintext.txt s3://mybucket/encrypted.txt --sse=aws:kms --sse-kms-key-id mybucketkey
+
+The Object Gateway will fetch the key from Vault, encrypt the object and store
+it in the bucket. Any request to download the object will make the Gateway
+automatically retrieve the corresponding key from Vault and decrypt the object.
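+
+For example, downloading the object again requires no encryption parameters;
+decryption happens transparently (file names here are illustrative)::
+
+    aws --endpoint=http://radosgw:8000 s3 cp s3://mybucket/encrypted.txt plaintext-copy.txt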
+
+Note that the secret will be fetched from Vault using a URL constructed by
+concatenating the base address (``rgw crypt vault addr``), the (optional)
+URL prefix (``rgw crypt vault prefix``), and finally the key ID.
+
+In the kv engine example above, the Gateway would fetch the secret from::
+
+ http://vaultserver:8200/v1/secret/data/myproject/mybucketkey
+
+In the transit engine example above, the Gateway would use the key at::
+
+ http://vaultserver:8200/v1/transit/mybucketkey
+
+.. _Server-Side Encryption: ../encryption
+.. _Vault: https://www.vaultproject.io/docs/
+.. _Token authentication method: https://www.vaultproject.io/docs/auth/token.html
+.. _Vault agent: https://www.vaultproject.io/docs/agent/index.html
+.. _KV Secrets engine: https://www.vaultproject.io/docs/secrets/kv/
+.. _Transit engine: https://www.vaultproject.io/docs/secrets/transit
+.. _namespaces: https://www.vaultproject.io/docs/enterprise/namespaces/index.html